problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64 271-2.05k) | num_tokens_diff (int64 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_17764 | rasdani/github-patches | git_diff | joke2k__faker-2038 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BE Vat Numbers should have a modulo 97 check and start wit 1 or 0
* Faker version: 25
* OS: OS. X Monterery
VAT Numbers generated in Belgium (currently set-up in the provider ssn for locale nl_BE should start with 1 or 0 and have a controle number with a module 97 check.
Refer to https://en.wikipedia.org/wiki/VAT_identification_number
### Steps to reproduce
1. Generate vat_id wit locale nl_BE
### Expected behavior
Vat numbers should be starting with BE0 or BE1 and have a module 97 check as the last two numbers
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/ssn/nl_BE/__init__.py`
Content:
```
1 from .. import Provider as SsnProvider
2
3 """
4 For more info on rijksregisternummer, see https://nl.wikipedia.org/wiki/Rijksregisternummer
5 Dutch/French only for now ...
6 """
7
8
9 class Provider(SsnProvider):
10 def ssn(self) -> str:
11 """
12 Returns a 11 digits Belgian SSN called "rijksregisternummer" as a string
13
14 The first 6 digits represent the birthdate with (in order) year, month and day.
15 The second group of 3 digits is represents a sequence number (order of birth).
16 It is even for women and odd for men.
17 For men the range starts at 1 and ends 997, for women 2 until 998.
18 The third group of 2 digits is a checksum based on the previous 9 digits (modulo 97).
19 Divide those 9 digits by 97, subtract the remainder from 97 and that's the result.
20 For persons born in or after 2000, the 9 digit number needs to be proceeded by a 2
21 (add 2000000000) before the division by 97.
22
23 """
24
25 # see http://nl.wikipedia.org/wiki/Burgerservicenummer (in Dutch)
26 def _checksum(digits):
27 res = 97 - (digits % 97)
28 return res
29
30 # Generate a date (random)
31 mydate = self.generator.date()
32 # Convert it to an int
33 elms = mydate.split("-")
34 # Adjust for year 2000 if necessary
35 if elms[0][0] == "2":
36 above = True
37 else:
38 above = False
39 # Only keep the last 2 digits of the year
40 elms[0] = elms[0][2:4]
41 # Simulate the gender/sequence - should be 3 digits
42 seq = self.generator.random_int(1, 998)
43 # Right justify sequence and append to list
44 seq_str = f"{seq:0>3}"
45 elms.append(seq_str)
46 # Now convert list to an integer so the checksum can be calculated
47 date_as_int = int("".join(elms))
48 if above:
49 date_as_int += 2000000000
50 # Generate checksum
51 s = _checksum(date_as_int)
52 s_rjust = f"{s:0>2}"
53 # return result as a string
54 elms.append(s_rjust)
55 return "".join(elms)
56
57 vat_id_formats = ("BE##########",)
58
59 def vat_id(self) -> str:
60 """
61 http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
62 :return: A random Belgian VAT ID
63 """
64 return self.bothify(self.random_element(self.vat_id_formats))
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/ssn/nl_BE/__init__.py b/faker/providers/ssn/nl_BE/__init__.py
--- a/faker/providers/ssn/nl_BE/__init__.py
+++ b/faker/providers/ssn/nl_BE/__init__.py
@@ -57,8 +57,24 @@
vat_id_formats = ("BE##########",)
def vat_id(self) -> str:
+
+ vat_id_random_section = (
+ '#######'
+ )
+
+ vat_id_possible_initial_numbers = (
+ '0',
+ '1'
+ )
"""
http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
- :return: A random Belgian VAT ID
+ https://en.wikipedia.org/wiki/VAT_identification_number
+ :return: A random Belgian VAT ID starting with 0 or 1 and has a correct checksum with a modulo 97 check
"""
- return self.bothify(self.random_element(self.vat_id_formats))
+ generated_initial_number = self.random_element(vat_id_possible_initial_numbers)
+ vat_without_check = self.bothify(generated_initial_number + vat_id_random_section)
+ vat_as_int = int(vat_without_check)
+ vat_check = 97 - (vat_as_int % 97)
+ vat_check_str = f"{vat_check:0>2}"
+
+ return "BE" + vat_without_check + vat_check_str
| {"golden_diff": "diff --git a/faker/providers/ssn/nl_BE/__init__.py b/faker/providers/ssn/nl_BE/__init__.py\n--- a/faker/providers/ssn/nl_BE/__init__.py\n+++ b/faker/providers/ssn/nl_BE/__init__.py\n@@ -57,8 +57,24 @@\n vat_id_formats = (\"BE##########\",)\n \n def vat_id(self) -> str:\n+\n+ vat_id_random_section = (\n+ '#######'\n+ )\n+\n+ vat_id_possible_initial_numbers = (\n+ '0',\n+ '1'\n+ )\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n- :return: A random Belgian VAT ID\n+ https://en.wikipedia.org/wiki/VAT_identification_number\n+ :return: A random Belgian VAT ID starting with 0 or 1 and has a correct checksum with a modulo 97 check\n \"\"\"\n- return self.bothify(self.random_element(self.vat_id_formats))\n+ generated_initial_number = self.random_element(vat_id_possible_initial_numbers)\n+ vat_without_check = self.bothify(generated_initial_number + vat_id_random_section)\n+ vat_as_int = int(vat_without_check)\n+ vat_check = 97 - (vat_as_int % 97)\n+ vat_check_str = f\"{vat_check:0>2}\"\n+\n+ return \"BE\" + vat_without_check + vat_check_str\n", "issue": "BE Vat Numbers should have a modulo 97 check and start wit 1 or 0\n* Faker version: 25\r\n* OS: OS. X Monterery\r\n\r\nVAT Numbers generated in Belgium (currently set-up in the provider ssn for locale nl_BE should start with 1 or 0 and have a controle number with a module 97 check.\r\n\r\nRefer to https://en.wikipedia.org/wiki/VAT_identification_number\r\n\r\n### Steps to reproduce\r\n\r\n1. Generate vat_id wit locale nl_BE\r\n\r\n\r\n### Expected behavior\r\n\r\nVat numbers should be starting with BE0 or BE1 and have a module 97 check as the last two numbers\r\n\r\n\r\n\n", "before_files": [{"content": "from .. import Provider as SsnProvider\n\n\"\"\"\nFor more info on rijksregisternummer, see https://nl.wikipedia.org/wiki/Rijksregisternummer\nDutch/French only for now ...\n\"\"\"\n\n\nclass Provider(SsnProvider):\n def ssn(self) -> str:\n \"\"\"\n Returns a 11 digits Belgian SSN called \"rijksregisternummer\" as a string\n\n The first 6 digits represent the birthdate with (in order) year, month and day.\n The second group of 3 digits is represents a sequence number (order of birth).\n It is even for women and odd for men.\n For men the range starts at 1 and ends 997, for women 2 until 998.\n The third group of 2 digits is a checksum based on the previous 9 digits (modulo 97).\n Divide those 9 digits by 97, subtract the remainder from 97 and that's the result.\n For persons born in or after 2000, the 9 digit number needs to be proceeded by a 2\n (add 2000000000) before the division by 97.\n\n \"\"\"\n\n # see http://nl.wikipedia.org/wiki/Burgerservicenummer (in Dutch)\n def _checksum(digits):\n res = 97 - (digits % 97)\n return res\n\n # Generate a date (random)\n mydate = self.generator.date()\n # Convert it to an int\n elms = mydate.split(\"-\")\n # Adjust for year 2000 if necessary\n if elms[0][0] == \"2\":\n above = True\n else:\n above = False\n # Only keep the last 2 digits of the year\n elms[0] = elms[0][2:4]\n # Simulate the gender/sequence - should be 3 digits\n seq = self.generator.random_int(1, 998)\n # Right justify sequence and append to list\n seq_str = f\"{seq:0>3}\"\n elms.append(seq_str)\n # Now convert list to an integer so the checksum can be calculated\n date_as_int = int(\"\".join(elms))\n if above:\n date_as_int += 2000000000\n # Generate checksum\n s = _checksum(date_as_int)\n s_rjust = f\"{s:0>2}\"\n # return result as a string\n elms.append(s_rjust)\n return \"\".join(elms)\n\n 
vat_id_formats = (\"BE##########\",)\n\n def vat_id(self) -> str:\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n :return: A random Belgian VAT ID\n \"\"\"\n return self.bothify(self.random_element(self.vat_id_formats))\n", "path": "faker/providers/ssn/nl_BE/__init__.py"}], "after_files": [{"content": "from .. import Provider as SsnProvider\n\n\"\"\"\nFor more info on rijksregisternummer, see https://nl.wikipedia.org/wiki/Rijksregisternummer\nDutch/French only for now ...\n\"\"\"\n\n\nclass Provider(SsnProvider):\n def ssn(self) -> str:\n \"\"\"\n Returns a 11 digits Belgian SSN called \"rijksregisternummer\" as a string\n\n The first 6 digits represent the birthdate with (in order) year, month and day.\n The second group of 3 digits is represents a sequence number (order of birth).\n It is even for women and odd for men.\n For men the range starts at 1 and ends 997, for women 2 until 998.\n The third group of 2 digits is a checksum based on the previous 9 digits (modulo 97).\n Divide those 9 digits by 97, subtract the remainder from 97 and that's the result.\n For persons born in or after 2000, the 9 digit number needs to be proceeded by a 2\n (add 2000000000) before the division by 97.\n\n \"\"\"\n\n # see http://nl.wikipedia.org/wiki/Burgerservicenummer (in Dutch)\n def _checksum(digits):\n res = 97 - (digits % 97)\n return res\n\n # Generate a date (random)\n mydate = self.generator.date()\n # Convert it to an int\n elms = mydate.split(\"-\")\n # Adjust for year 2000 if necessary\n if elms[0][0] == \"2\":\n above = True\n else:\n above = False\n # Only keep the last 2 digits of the year\n elms[0] = elms[0][2:4]\n # Simulate the gender/sequence - should be 3 digits\n seq = self.generator.random_int(1, 998)\n # Right justify sequence and append to list\n seq_str = f\"{seq:0>3}\"\n elms.append(seq_str)\n # Now convert list to an integer so the checksum can be calculated\n date_as_int = int(\"\".join(elms))\n if above:\n date_as_int += 2000000000\n # Generate checksum\n s = _checksum(date_as_int)\n s_rjust = f\"{s:0>2}\"\n # return result as a string\n elms.append(s_rjust)\n return \"\".join(elms)\n\n vat_id_formats = (\"BE##########\",)\n\n def vat_id(self) -> str:\n\n vat_id_random_section = (\n '#######'\n )\n\n vat_id_possible_initial_numbers = (\n '0',\n '1'\n )\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n https://en.wikipedia.org/wiki/VAT_identification_number\n :return: A random Belgian VAT ID starting with 0 or 1 and has a correct checksum with a modulo 97 check\n \"\"\"\n generated_initial_number = self.random_element(vat_id_possible_initial_numbers)\n vat_without_check = self.bothify(generated_initial_number + vat_id_random_section)\n vat_as_int = int(vat_without_check)\n vat_check = 97 - (vat_as_int % 97)\n vat_check_str = f\"{vat_check:0>2}\"\n\n return \"BE\" + vat_without_check + vat_check_str\n", "path": "faker/providers/ssn/nl_BE/__init__.py"}]} | 1,165 | 329 |
gh_patches_debug_21801 | rasdani/github-patches | git_diff | SciTools__cartopy-2287 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove "cfeature.LAND" from "Feature Creation" example ?
In the nice example provided on the [Feature Creation page](https://scitools.org.uk/cartopy/docs/latest/gallery/feature_creation.html), you could probably remove the useless line:
B) `ax.add_feature(cfeature.LAND)`
because you have already called
A) `ax.stock_img()`
As a matter of fact, B) being called after A), it should theoretically be plotted above A). I see you are using a default `zorder=-1` for LAND, so it is probably below everything else (did not know you could use negative zorder values)
```
>>> cfeature.LAND.kwargs
{'edgecolor': 'face', 'zorder': -1, 'facecolor': array([0.9375 , 0.9375 , 0.859375])}
```
If I use `ax.add_feature(cfeature.LAND, zorder=10)`, I do get land with a uniform color above the stock image on land (and the stock ocean elsewhere)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/lines_and_polygons/feature_creation.py`
Content:
```
1 """
2 Feature Creation
3 ----------------
4
5 This example manually instantiates a
6 :class:`cartopy.feature.NaturalEarthFeature` to access administrative
7 boundaries (states and provinces).
8
9 Note that this example is intended to illustrate the ability to construct
10 Natural Earth features that cartopy does not necessarily know about
11 *a priori*.
12 In this instance however, it would be possible to make use of the
13 pre-defined :data:`cartopy.feature.STATES` constant.
14
15 """
16 from matplotlib.offsetbox import AnchoredText
17 import matplotlib.pyplot as plt
18
19 import cartopy.crs as ccrs
20 import cartopy.feature as cfeature
21
22
23 def main():
24 fig = plt.figure()
25 ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
26 ax.set_extent([80, 170, -45, 30], crs=ccrs.PlateCarree())
27
28 # Put a background image on for nice sea rendering.
29 ax.stock_img()
30
31 # Create a feature for States/Admin 1 regions at 1:50m from Natural Earth
32 states_provinces = cfeature.NaturalEarthFeature(
33 category='cultural',
34 name='admin_1_states_provinces_lines',
35 scale='50m',
36 facecolor='none')
37
38 SOURCE = 'Natural Earth'
39 LICENSE = 'public domain'
40
41 ax.add_feature(cfeature.LAND)
42 ax.add_feature(cfeature.COASTLINE)
43 ax.add_feature(states_provinces, edgecolor='gray')
44
45 # Add a text annotation for the license information to the
46 # the bottom right corner.
47 text = AnchoredText('\u00A9 {}; license: {}'
48 ''.format(SOURCE, LICENSE),
49 loc=4, prop={'size': 12}, frameon=True)
50 ax.add_artist(text)
51
52 plt.show()
53
54
55 if __name__ == '__main__':
56 main()
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/lines_and_polygons/feature_creation.py b/examples/lines_and_polygons/feature_creation.py
--- a/examples/lines_and_polygons/feature_creation.py
+++ b/examples/lines_and_polygons/feature_creation.py
@@ -28,7 +28,7 @@
# Put a background image on for nice sea rendering.
ax.stock_img()
- # Create a feature for States/Admin 1 regions at 1:50m from Natural Earth
+ # Create a feature for States/Admin 1 regions at 1:50m from Natural Earth.
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
@@ -38,9 +38,11 @@
SOURCE = 'Natural Earth'
LICENSE = 'public domain'
- ax.add_feature(cfeature.LAND)
- ax.add_feature(cfeature.COASTLINE)
+ # Add our states feature.
ax.add_feature(states_provinces, edgecolor='gray')
+ # Add land feature, overriding the default negative zorder so it shows
+ # above the background image.
+ ax.add_feature(cfeature.LAND, zorder=1, edgecolor='k')
# Add a text annotation for the license information to the
# the bottom right corner.
| {"golden_diff": "diff --git a/examples/lines_and_polygons/feature_creation.py b/examples/lines_and_polygons/feature_creation.py\n--- a/examples/lines_and_polygons/feature_creation.py\n+++ b/examples/lines_and_polygons/feature_creation.py\n@@ -28,7 +28,7 @@\n # Put a background image on for nice sea rendering.\n ax.stock_img()\n \n- # Create a feature for States/Admin 1 regions at 1:50m from Natural Earth\n+ # Create a feature for States/Admin 1 regions at 1:50m from Natural Earth.\n states_provinces = cfeature.NaturalEarthFeature(\n category='cultural',\n name='admin_1_states_provinces_lines',\n@@ -38,9 +38,11 @@\n SOURCE = 'Natural Earth'\n LICENSE = 'public domain'\n \n- ax.add_feature(cfeature.LAND)\n- ax.add_feature(cfeature.COASTLINE)\n+ # Add our states feature.\n ax.add_feature(states_provinces, edgecolor='gray')\n+ # Add land feature, overriding the default negative zorder so it shows\n+ # above the background image.\n+ ax.add_feature(cfeature.LAND, zorder=1, edgecolor='k')\n \n # Add a text annotation for the license information to the\n # the bottom right corner.\n", "issue": "Remove \"cfeature.LAND\" from \"Feature Creation\" example ?\nIn the nice example provided on the [Feature Creation page](https://scitools.org.uk/cartopy/docs/latest/gallery/feature_creation.html), you could probably remove the useless line:\r\nB) `ax.add_feature(cfeature.LAND)`\r\nbecause you have already called\r\nA) `ax.stock_img()`\r\n\r\nAs a matter of fact, B) being called after A), it should theoretically be plotted above A). I see you are using a default `zorder=-1` for LAND, so it is probably below everything else (did not know you could use negative zorder values)\r\n```\r\n>>> cfeature.LAND.kwargs\r\n{'edgecolor': 'face', 'zorder': -1, 'facecolor': array([0.9375 , 0.9375 , 0.859375])}\r\n```\r\nIf I use `ax.add_feature(cfeature.LAND, zorder=10)`, I do get land with a uniform color above the stock image on land (and the stock ocean elsewhere)\n", "before_files": [{"content": "\"\"\"\nFeature Creation\n----------------\n\nThis example manually instantiates a\n:class:`cartopy.feature.NaturalEarthFeature` to access administrative\nboundaries (states and provinces).\n\nNote that this example is intended to illustrate the ability to construct\nNatural Earth features that cartopy does not necessarily know about\n*a priori*.\nIn this instance however, it would be possible to make use of the\npre-defined :data:`cartopy.feature.STATES` constant.\n\n\"\"\"\nfrom matplotlib.offsetbox import AnchoredText\nimport matplotlib.pyplot as plt\n\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\n\n\ndef main():\n fig = plt.figure()\n ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())\n ax.set_extent([80, 170, -45, 30], crs=ccrs.PlateCarree())\n\n # Put a background image on for nice sea rendering.\n ax.stock_img()\n\n # Create a feature for States/Admin 1 regions at 1:50m from Natural Earth\n states_provinces = cfeature.NaturalEarthFeature(\n category='cultural',\n name='admin_1_states_provinces_lines',\n scale='50m',\n facecolor='none')\n\n SOURCE = 'Natural Earth'\n LICENSE = 'public domain'\n\n ax.add_feature(cfeature.LAND)\n ax.add_feature(cfeature.COASTLINE)\n ax.add_feature(states_provinces, edgecolor='gray')\n\n # Add a text annotation for the license information to the\n # the bottom right corner.\n text = AnchoredText('\\u00A9 {}; license: {}'\n ''.format(SOURCE, LICENSE),\n loc=4, prop={'size': 12}, frameon=True)\n ax.add_artist(text)\n\n plt.show()\n\n\nif __name__ == 
'__main__':\n main()\n", "path": "examples/lines_and_polygons/feature_creation.py"}], "after_files": [{"content": "\"\"\"\nFeature Creation\n----------------\n\nThis example manually instantiates a\n:class:`cartopy.feature.NaturalEarthFeature` to access administrative\nboundaries (states and provinces).\n\nNote that this example is intended to illustrate the ability to construct\nNatural Earth features that cartopy does not necessarily know about\n*a priori*.\nIn this instance however, it would be possible to make use of the\npre-defined :data:`cartopy.feature.STATES` constant.\n\n\"\"\"\nfrom matplotlib.offsetbox import AnchoredText\nimport matplotlib.pyplot as plt\n\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\n\n\ndef main():\n fig = plt.figure()\n ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())\n ax.set_extent([80, 170, -45, 30], crs=ccrs.PlateCarree())\n\n # Put a background image on for nice sea rendering.\n ax.stock_img()\n\n # Create a feature for States/Admin 1 regions at 1:50m from Natural Earth.\n states_provinces = cfeature.NaturalEarthFeature(\n category='cultural',\n name='admin_1_states_provinces_lines',\n scale='50m',\n facecolor='none')\n\n SOURCE = 'Natural Earth'\n LICENSE = 'public domain'\n\n # Add our states feature.\n ax.add_feature(states_provinces, edgecolor='gray')\n # Add land feature, overriding the default negative zorder so it shows\n # above the background image.\n ax.add_feature(cfeature.LAND, zorder=1, edgecolor='k')\n\n # Add a text annotation for the license information to the\n # the bottom right corner.\n text = AnchoredText('\\u00A9 {}; license: {}'\n ''.format(SOURCE, LICENSE),\n loc=4, prop={'size': 12}, frameon=True)\n ax.add_artist(text)\n\n plt.show()\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/lines_and_polygons/feature_creation.py"}]} | 1,016 | 293 |
gh_patches_debug_26290 | rasdani/github-patches | git_diff | scrapy__scrapy-4052 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception when using DummyStatsCollector
### Description
Using the DummyStatsCollector results in an exception:
```
2019-09-09 13:51:23 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method CoreStats.spider_closed of <scrapy.extensions.corestats.CoreStats object at 0x7f86269cac18>>
Traceback (most recent call last):
File ".../lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File ".../lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File ".../lib/python3.6/site-packages/scrapy/extensions/corestats.py", line 28, in spider_closed
elapsed_time = finish_time - self.stats.get_value('start_time')
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'
```
This problem has been introduced in aa46e1995cd5cb1099aba17535372b538bd656b3.
### Steps to Reproduce
Set `STATS_CLASS = "scrapy.statscollectors.DummyStatsCollector"` in the settings module as described in the documentation (https://docs.scrapy.org/en/latest/topics/stats.html#dummystatscollector).
**Expected behavior:** no exception
**Actual behavior:** exception thrown
**Reproduces how often:** always
### Versions
At least master as of 534de7395da3a53b5a2c89960db9ec5d8fdab60c
### Fix
A possible fix is to use the elapsed time as a default argument so that `get_value()` does not return None. I can prepare a PR if needed.
```diff
--- a/scrapy/extensions/corestats.py
+++ b/scrapy/extensions/corestats.py
@@ -25,7 +25,7 @@ class CoreStats(object):
def spider_closed(self, spider, reason):
finish_time = datetime.datetime.utcnow()
- elapsed_time = finish_time - self.stats.get_value('start_time')
+ elapsed_time = finish_time - self.stats.get_value('start_time', finish_time)
elapsed_time_seconds = elapsed_time.total_seconds()
self.stats.set_value('elapsed_time_seconds', elapsed_time_seconds, spider=spider)
self.stats.set_value('finish_time', finish_time, spider=spider)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/extensions/corestats.py`
Content:
```
1 """
2 Extension for collecting core stats like items scraped and start/finish times
3 """
4 import datetime
5
6 from scrapy import signals
7
8 class CoreStats(object):
9
10 def __init__(self, stats):
11 self.stats = stats
12
13 @classmethod
14 def from_crawler(cls, crawler):
15 o = cls(crawler.stats)
16 crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
17 crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
18 crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)
19 crawler.signals.connect(o.item_dropped, signal=signals.item_dropped)
20 crawler.signals.connect(o.response_received, signal=signals.response_received)
21 return o
22
23 def spider_opened(self, spider):
24 self.stats.set_value('start_time', datetime.datetime.utcnow(), spider=spider)
25
26 def spider_closed(self, spider, reason):
27 finish_time = datetime.datetime.utcnow()
28 elapsed_time = finish_time - self.stats.get_value('start_time')
29 elapsed_time_seconds = elapsed_time.total_seconds()
30 self.stats.set_value('elapsed_time_seconds', elapsed_time_seconds, spider=spider)
31 self.stats.set_value('finish_time', finish_time, spider=spider)
32 self.stats.set_value('finish_reason', reason, spider=spider)
33
34 def item_scraped(self, item, spider):
35 self.stats.inc_value('item_scraped_count', spider=spider)
36
37 def response_received(self, spider):
38 self.stats.inc_value('response_received_count', spider=spider)
39
40 def item_dropped(self, item, spider, exception):
41 reason = exception.__class__.__name__
42 self.stats.inc_value('item_dropped_count', spider=spider)
43 self.stats.inc_value('item_dropped_reasons_count/%s' % reason, spider=spider)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/extensions/corestats.py b/scrapy/extensions/corestats.py
--- a/scrapy/extensions/corestats.py
+++ b/scrapy/extensions/corestats.py
@@ -1,14 +1,16 @@
"""
Extension for collecting core stats like items scraped and start/finish times
"""
-import datetime
+from datetime import datetime
from scrapy import signals
+
class CoreStats(object):
def __init__(self, stats):
self.stats = stats
+ self.start_time = None
@classmethod
def from_crawler(cls, crawler):
@@ -21,11 +23,12 @@
return o
def spider_opened(self, spider):
- self.stats.set_value('start_time', datetime.datetime.utcnow(), spider=spider)
+ self.start_time = datetime.utcnow()
+ self.stats.set_value('start_time', self.start_time, spider=spider)
def spider_closed(self, spider, reason):
- finish_time = datetime.datetime.utcnow()
- elapsed_time = finish_time - self.stats.get_value('start_time')
+ finish_time = datetime.utcnow()
+ elapsed_time = finish_time - self.start_time
elapsed_time_seconds = elapsed_time.total_seconds()
self.stats.set_value('elapsed_time_seconds', elapsed_time_seconds, spider=spider)
self.stats.set_value('finish_time', finish_time, spider=spider)
| {"golden_diff": "diff --git a/scrapy/extensions/corestats.py b/scrapy/extensions/corestats.py\n--- a/scrapy/extensions/corestats.py\n+++ b/scrapy/extensions/corestats.py\n@@ -1,14 +1,16 @@\n \"\"\"\n Extension for collecting core stats like items scraped and start/finish times\n \"\"\"\n-import datetime\n+from datetime import datetime\n \n from scrapy import signals\n \n+\n class CoreStats(object):\n \n def __init__(self, stats):\n self.stats = stats\n+ self.start_time = None\n \n @classmethod\n def from_crawler(cls, crawler):\n@@ -21,11 +23,12 @@\n return o\n \n def spider_opened(self, spider):\n- self.stats.set_value('start_time', datetime.datetime.utcnow(), spider=spider)\n+ self.start_time = datetime.utcnow()\n+ self.stats.set_value('start_time', self.start_time, spider=spider)\n \n def spider_closed(self, spider, reason):\n- finish_time = datetime.datetime.utcnow()\n- elapsed_time = finish_time - self.stats.get_value('start_time')\n+ finish_time = datetime.utcnow()\n+ elapsed_time = finish_time - self.start_time\n elapsed_time_seconds = elapsed_time.total_seconds()\n self.stats.set_value('elapsed_time_seconds', elapsed_time_seconds, spider=spider)\n self.stats.set_value('finish_time', finish_time, spider=spider)\n", "issue": "Exception when using DummyStatsCollector\n### Description\r\n\r\nUsing the DummyStatsCollector results in an exception:\r\n\r\n```\r\n2019-09-09 13:51:23 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method CoreStats.spider_closed of <scrapy.extensions.corestats.CoreStats object at 0x7f86269cac18>>\r\nTraceback (most recent call last):\r\n File \".../lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \".../lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \".../lib/python3.6/site-packages/scrapy/extensions/corestats.py\", line 28, in spider_closed\r\n elapsed_time = finish_time - self.stats.get_value('start_time')\r\nTypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'\r\n```\r\n\r\nThis problem has been introduced in aa46e1995cd5cb1099aba17535372b538bd656b3.\r\n\r\n### Steps to Reproduce\r\n\r\nSet `STATS_CLASS = \"scrapy.statscollectors.DummyStatsCollector\"` in the settings module as described in the documentation (https://docs.scrapy.org/en/latest/topics/stats.html#dummystatscollector).\r\n\r\n**Expected behavior:** no exception\r\n**Actual behavior:** exception thrown\r\n**Reproduces how often:** always\r\n\r\n### Versions\r\n\r\nAt least master as of 534de7395da3a53b5a2c89960db9ec5d8fdab60c\r\n\r\n### Fix\r\n\r\nA possible fix is to use the elapsed time as a default argument so that `get_value()` does not return None. 
I can prepare a PR if needed.\r\n\r\n```diff\r\n--- a/scrapy/extensions/corestats.py\r\n+++ b/scrapy/extensions/corestats.py\r\n@@ -25,7 +25,7 @@ class CoreStats(object):\r\n \r\n def spider_closed(self, spider, reason):\r\n finish_time = datetime.datetime.utcnow()\r\n- elapsed_time = finish_time - self.stats.get_value('start_time')\r\n+ elapsed_time = finish_time - self.stats.get_value('start_time', finish_time)\r\n elapsed_time_seconds = elapsed_time.total_seconds()\r\n self.stats.set_value('elapsed_time_seconds', elapsed_time_seconds, spider=spider)\r\n self.stats.set_value('finish_time', finish_time, spider=spider)\r\n```\n", "before_files": [{"content": "\"\"\"\nExtension for collecting core stats like items scraped and start/finish times\n\"\"\"\nimport datetime\n\nfrom scrapy import signals\n\nclass CoreStats(object):\n\n def __init__(self, stats):\n self.stats = stats\n\n @classmethod\n def from_crawler(cls, crawler):\n o = cls(crawler.stats)\n crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)\n crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)\n crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)\n crawler.signals.connect(o.item_dropped, signal=signals.item_dropped)\n crawler.signals.connect(o.response_received, signal=signals.response_received)\n return o\n\n def spider_opened(self, spider):\n self.stats.set_value('start_time', datetime.datetime.utcnow(), spider=spider)\n\n def spider_closed(self, spider, reason):\n finish_time = datetime.datetime.utcnow()\n elapsed_time = finish_time - self.stats.get_value('start_time')\n elapsed_time_seconds = elapsed_time.total_seconds()\n self.stats.set_value('elapsed_time_seconds', elapsed_time_seconds, spider=spider)\n self.stats.set_value('finish_time', finish_time, spider=spider)\n self.stats.set_value('finish_reason', reason, spider=spider)\n\n def item_scraped(self, item, spider):\n self.stats.inc_value('item_scraped_count', spider=spider)\n\n def response_received(self, spider):\n self.stats.inc_value('response_received_count', spider=spider)\n\n def item_dropped(self, item, spider, exception):\n reason = exception.__class__.__name__\n self.stats.inc_value('item_dropped_count', spider=spider)\n self.stats.inc_value('item_dropped_reasons_count/%s' % reason, spider=spider)\n", "path": "scrapy/extensions/corestats.py"}], "after_files": [{"content": "\"\"\"\nExtension for collecting core stats like items scraped and start/finish times\n\"\"\"\nfrom datetime import datetime\n\nfrom scrapy import signals\n\n\nclass CoreStats(object):\n\n def __init__(self, stats):\n self.stats = stats\n self.start_time = None\n\n @classmethod\n def from_crawler(cls, crawler):\n o = cls(crawler.stats)\n crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)\n crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)\n crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)\n crawler.signals.connect(o.item_dropped, signal=signals.item_dropped)\n crawler.signals.connect(o.response_received, signal=signals.response_received)\n return o\n\n def spider_opened(self, spider):\n self.start_time = datetime.utcnow()\n self.stats.set_value('start_time', self.start_time, spider=spider)\n\n def spider_closed(self, spider, reason):\n finish_time = datetime.utcnow()\n elapsed_time = finish_time - self.start_time\n elapsed_time_seconds = elapsed_time.total_seconds()\n self.stats.set_value('elapsed_time_seconds', elapsed_time_seconds, spider=spider)\n 
self.stats.set_value('finish_time', finish_time, spider=spider)\n self.stats.set_value('finish_reason', reason, spider=spider)\n\n def item_scraped(self, item, spider):\n self.stats.inc_value('item_scraped_count', spider=spider)\n\n def response_received(self, spider):\n self.stats.inc_value('response_received_count', spider=spider)\n\n def item_dropped(self, item, spider, exception):\n reason = exception.__class__.__name__\n self.stats.inc_value('item_dropped_count', spider=spider)\n self.stats.inc_value('item_dropped_reasons_count/%s' % reason, spider=spider)\n", "path": "scrapy/extensions/corestats.py"}]} | 1,305 | 297 |
gh_patches_debug_12903 | rasdani/github-patches | git_diff | optuna__optuna-1814 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improvement proposal for TensorBoard integration
## Motivation
- Improve the display of TensorBoard
## Description
### 1. Trial ID
I guess using directory name as trial ID is helpful to find a good trial.
Current integration uses default hash.
**Current display**

**Proposal**

### 2. Use trial number as horizontal axis
**Current display**

**Proposal**

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/integration/tensorboard.py`
Content:
```
1 import os
2 from typing import Dict
3
4 import optuna
5 from optuna._experimental import experimental
6 from optuna._imports import try_import
7
8 with try_import() as _imports:
9 from tensorboard.plugins.hparams import api as hp
10 import tensorflow as tf
11
12
13 @experimental("2.0.0")
14 class TensorBoardCallback(object):
15 """Callback to track Optuna trials with TensorBoard.
16
17 This callback adds relevant information that is tracked by Optuna to TensorBoard.
18
19 See `the example <https://github.com/optuna/optuna/blob/master/
20 examples/tensorboard_simple.py>`_.
21
22 Args:
23 dirname:
24 Directory to store TensorBoard logs.
25 metric_name:
26 Name of the metric. Since the metric itself is just a number,
27 `metric_name` can be used to give it a name. So you know later
28 if it was roc-auc or accuracy.
29
30 """
31
32 def __init__(self, dirname: str, metric_name: str) -> None:
33 _imports.check()
34 self._dirname = dirname
35 self._metric_name = metric_name
36 self._hp_params = dict() # type: Dict[str, hp.HParam]
37
38 def __call__(self, study: optuna.study.Study, trial: optuna.trial.FrozenTrial) -> None:
39 if len(self._hp_params) == 0:
40 self._initialization(study)
41 if trial.state != optuna.trial.TrialState.COMPLETE:
42 return
43 trial_value = trial.value if trial.value is not None else float("nan")
44 hparams = dict()
45 for param_name, param_value in trial.params.items():
46 if param_name not in self._hp_params:
47 self._add_distributions(trial.distributions)
48 hparams[self._hp_params[param_name]] = param_value
49 run_name = "trial-%d" % trial.number
50 run_dir = os.path.join(self._dirname, run_name)
51 with tf.summary.create_file_writer(run_dir).as_default():
52 hp.hparams(hparams) # record the values used in this trial
53 tf.summary.scalar(self._metric_name, trial_value, step=1)
54
55 def _add_distributions(
56 self, distributions: Dict[str, optuna.distributions.BaseDistribution]
57 ) -> None:
58 for param_name, param_distribution in distributions.items():
59 if isinstance(param_distribution, optuna.distributions.UniformDistribution):
60 self._hp_params[param_name] = hp.HParam(
61 param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
62 )
63 elif isinstance(param_distribution, optuna.distributions.LogUniformDistribution):
64 self._hp_params[param_name] = hp.HParam(
65 param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
66 )
67 elif isinstance(param_distribution, optuna.distributions.DiscreteUniformDistribution):
68 self._hp_params[param_name] = hp.HParam(
69 param_name, hp.Discrete(param_distribution.low, param_distribution.high)
70 )
71 elif isinstance(param_distribution, optuna.distributions.IntUniformDistribution):
72 self._hp_params[param_name] = hp.HParam(
73 param_name, hp.IntInterval(param_distribution.low, param_distribution.high)
74 )
75 elif isinstance(param_distribution, optuna.distributions.CategoricalDistribution):
76 self._hp_params[param_name] = hp.HParam(
77 param_name, hp.Discrete(param_distribution.choices)
78 )
79 else:
80 distribution_list = [
81 optuna.distributions.UniformDistribution.__name__,
82 optuna.distributions.LogUniformDistribution.__name__,
83 optuna.distributions.DiscreteUniformDistribution.__name__,
84 optuna.distributions.IntUniformDistribution.__name__,
85 optuna.distributions.CategoricalDistribution.__name__,
86 ]
87 raise NotImplementedError(
88 "The distribution {} is not implemented. "
89 "The parameter distribution should be one of the {}".format(
90 param_distribution, distribution_list
91 )
92 )
93
94 def _initialization(self, study: optuna.Study) -> None:
95 completed_trials = [
96 trial
97 for trial in study.get_trials(deepcopy=False)
98 if trial.state == optuna.trial.TrialState.COMPLETE
99 ]
100 for trial in completed_trials:
101 self._add_distributions(trial.distributions)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optuna/integration/tensorboard.py b/optuna/integration/tensorboard.py
--- a/optuna/integration/tensorboard.py
+++ b/optuna/integration/tensorboard.py
@@ -49,8 +49,8 @@
run_name = "trial-%d" % trial.number
run_dir = os.path.join(self._dirname, run_name)
with tf.summary.create_file_writer(run_dir).as_default():
- hp.hparams(hparams) # record the values used in this trial
- tf.summary.scalar(self._metric_name, trial_value, step=1)
+ hp.hparams(hparams, trial_id=run_name) # record the values used in this trial
+ tf.summary.scalar(self._metric_name, trial_value, step=trial.number)
def _add_distributions(
self, distributions: Dict[str, optuna.distributions.BaseDistribution]
| {"golden_diff": "diff --git a/optuna/integration/tensorboard.py b/optuna/integration/tensorboard.py\n--- a/optuna/integration/tensorboard.py\n+++ b/optuna/integration/tensorboard.py\n@@ -49,8 +49,8 @@\n run_name = \"trial-%d\" % trial.number\n run_dir = os.path.join(self._dirname, run_name)\n with tf.summary.create_file_writer(run_dir).as_default():\n- hp.hparams(hparams) # record the values used in this trial\n- tf.summary.scalar(self._metric_name, trial_value, step=1)\n+ hp.hparams(hparams, trial_id=run_name) # record the values used in this trial\n+ tf.summary.scalar(self._metric_name, trial_value, step=trial.number)\n \n def _add_distributions(\n self, distributions: Dict[str, optuna.distributions.BaseDistribution]\n", "issue": "Improvement proposal for TensorBoard integration\n## Motivation\r\n\r\n- Improve the display of TensorBoard\r\n\r\n## Description\r\n\r\n### 1. Trial ID\r\n\r\nI guess using directory name as trial ID is helpful to find a good trial.\r\nCurrent integration uses default hash.\r\n\r\n**Current display**\r\n\r\n\r\n**Proposal**\r\n\r\n\r\n\r\n### 2. Use trial number as horizontal axis\r\n\r\n**Current display**\r\n\r\n\r\n**Proposal**\r\n\r\n\n", "before_files": [{"content": "import os\nfrom typing import Dict\n\nimport optuna\nfrom optuna._experimental import experimental\nfrom optuna._imports import try_import\n\nwith try_import() as _imports:\n from tensorboard.plugins.hparams import api as hp\n import tensorflow as tf\n\n\n@experimental(\"2.0.0\")\nclass TensorBoardCallback(object):\n \"\"\"Callback to track Optuna trials with TensorBoard.\n\n This callback adds relevant information that is tracked by Optuna to TensorBoard.\n\n See `the example <https://github.com/optuna/optuna/blob/master/\n examples/tensorboard_simple.py>`_.\n\n Args:\n dirname:\n Directory to store TensorBoard logs.\n metric_name:\n Name of the metric. Since the metric itself is just a number,\n `metric_name` can be used to give it a name. 
So you know later\n if it was roc-auc or accuracy.\n\n \"\"\"\n\n def __init__(self, dirname: str, metric_name: str) -> None:\n _imports.check()\n self._dirname = dirname\n self._metric_name = metric_name\n self._hp_params = dict() # type: Dict[str, hp.HParam]\n\n def __call__(self, study: optuna.study.Study, trial: optuna.trial.FrozenTrial) -> None:\n if len(self._hp_params) == 0:\n self._initialization(study)\n if trial.state != optuna.trial.TrialState.COMPLETE:\n return\n trial_value = trial.value if trial.value is not None else float(\"nan\")\n hparams = dict()\n for param_name, param_value in trial.params.items():\n if param_name not in self._hp_params:\n self._add_distributions(trial.distributions)\n hparams[self._hp_params[param_name]] = param_value\n run_name = \"trial-%d\" % trial.number\n run_dir = os.path.join(self._dirname, run_name)\n with tf.summary.create_file_writer(run_dir).as_default():\n hp.hparams(hparams) # record the values used in this trial\n tf.summary.scalar(self._metric_name, trial_value, step=1)\n\n def _add_distributions(\n self, distributions: Dict[str, optuna.distributions.BaseDistribution]\n ) -> None:\n for param_name, param_distribution in distributions.items():\n if isinstance(param_distribution, optuna.distributions.UniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.LogUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.DiscreteUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.Discrete(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.IntUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.IntInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.CategoricalDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.Discrete(param_distribution.choices)\n )\n else:\n distribution_list = [\n optuna.distributions.UniformDistribution.__name__,\n optuna.distributions.LogUniformDistribution.__name__,\n optuna.distributions.DiscreteUniformDistribution.__name__,\n optuna.distributions.IntUniformDistribution.__name__,\n optuna.distributions.CategoricalDistribution.__name__,\n ]\n raise NotImplementedError(\n \"The distribution {} is not implemented. 
\"\n \"The parameter distribution should be one of the {}\".format(\n param_distribution, distribution_list\n )\n )\n\n def _initialization(self, study: optuna.Study) -> None:\n completed_trials = [\n trial\n for trial in study.get_trials(deepcopy=False)\n if trial.state == optuna.trial.TrialState.COMPLETE\n ]\n for trial in completed_trials:\n self._add_distributions(trial.distributions)\n", "path": "optuna/integration/tensorboard.py"}], "after_files": [{"content": "import os\nfrom typing import Dict\n\nimport optuna\nfrom optuna._experimental import experimental\nfrom optuna._imports import try_import\n\nwith try_import() as _imports:\n from tensorboard.plugins.hparams import api as hp\n import tensorflow as tf\n\n\n@experimental(\"2.0.0\")\nclass TensorBoardCallback(object):\n \"\"\"Callback to track Optuna trials with TensorBoard.\n\n This callback adds relevant information that is tracked by Optuna to TensorBoard.\n\n See `the example <https://github.com/optuna/optuna/blob/master/\n examples/tensorboard_simple.py>`_.\n\n Args:\n dirname:\n Directory to store TensorBoard logs.\n metric_name:\n Name of the metric. Since the metric itself is just a number,\n `metric_name` can be used to give it a name. So you know later\n if it was roc-auc or accuracy.\n\n \"\"\"\n\n def __init__(self, dirname: str, metric_name: str) -> None:\n _imports.check()\n self._dirname = dirname\n self._metric_name = metric_name\n self._hp_params = dict() # type: Dict[str, hp.HParam]\n\n def __call__(self, study: optuna.study.Study, trial: optuna.trial.FrozenTrial) -> None:\n if len(self._hp_params) == 0:\n self._initialization(study)\n if trial.state != optuna.trial.TrialState.COMPLETE:\n return\n trial_value = trial.value if trial.value is not None else float(\"nan\")\n hparams = dict()\n for param_name, param_value in trial.params.items():\n if param_name not in self._hp_params:\n self._add_distributions(trial.distributions)\n hparams[self._hp_params[param_name]] = param_value\n run_name = \"trial-%d\" % trial.number\n run_dir = os.path.join(self._dirname, run_name)\n with tf.summary.create_file_writer(run_dir).as_default():\n hp.hparams(hparams, trial_id=run_name) # record the values used in this trial\n tf.summary.scalar(self._metric_name, trial_value, step=trial.number)\n\n def _add_distributions(\n self, distributions: Dict[str, optuna.distributions.BaseDistribution]\n ) -> None:\n for param_name, param_distribution in distributions.items():\n if isinstance(param_distribution, optuna.distributions.UniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.LogUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.DiscreteUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.Discrete(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.IntUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.IntInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.CategoricalDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.Discrete(param_distribution.choices)\n )\n else:\n distribution_list = [\n 
optuna.distributions.UniformDistribution.__name__,\n optuna.distributions.LogUniformDistribution.__name__,\n optuna.distributions.DiscreteUniformDistribution.__name__,\n optuna.distributions.IntUniformDistribution.__name__,\n optuna.distributions.CategoricalDistribution.__name__,\n ]\n raise NotImplementedError(\n \"The distribution {} is not implemented. \"\n \"The parameter distribution should be one of the {}\".format(\n param_distribution, distribution_list\n )\n )\n\n def _initialization(self, study: optuna.Study) -> None:\n completed_trials = [\n trial\n for trial in study.get_trials(deepcopy=False)\n if trial.state == optuna.trial.TrialState.COMPLETE\n ]\n for trial in completed_trials:\n self._add_distributions(trial.distributions)\n", "path": "optuna/integration/tensorboard.py"}]} | 1,777 | 196 |
gh_patches_debug_29 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-2424 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When will be new release?
When are you planning a new release? Some features has been added since last one in May. Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mongoengine/__init__.py`
Content:
```
1 # Import submodules so that we can expose their __all__
2 from mongoengine import connection
3 from mongoengine import document
4 from mongoengine import errors
5 from mongoengine import fields
6 from mongoengine import queryset
7 from mongoengine import signals
8
9 # Import everything from each submodule so that it can be accessed via
10 # mongoengine, e.g. instead of `from mongoengine.connection import connect`,
11 # users can simply use `from mongoengine import connect`, or even
12 # `from mongoengine import *` and then `connect('testdb')`.
13 from mongoengine.connection import *
14 from mongoengine.document import *
15 from mongoengine.errors import *
16 from mongoengine.fields import *
17 from mongoengine.queryset import *
18 from mongoengine.signals import *
19
20
21 __all__ = (
22 list(document.__all__)
23 + list(fields.__all__)
24 + list(connection.__all__)
25 + list(queryset.__all__)
26 + list(signals.__all__)
27 + list(errors.__all__)
28 )
29
30
31 VERSION = (0, 20, 0)
32
33
34 def get_version():
35 """Return the VERSION as a string.
36
37 For example, if `VERSION == (0, 10, 7)`, return '0.10.7'.
38 """
39 return ".".join(map(str, VERSION))
40
41
42 __version__ = get_version()
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mongoengine/__init__.py b/mongoengine/__init__.py
--- a/mongoengine/__init__.py
+++ b/mongoengine/__init__.py
@@ -28,7 +28,7 @@
)
-VERSION = (0, 20, 0)
+VERSION = (0, 21, 0)
def get_version():
| {"golden_diff": "diff --git a/mongoengine/__init__.py b/mongoengine/__init__.py\n--- a/mongoengine/__init__.py\n+++ b/mongoengine/__init__.py\n@@ -28,7 +28,7 @@\n )\n \n \n-VERSION = (0, 20, 0)\n+VERSION = (0, 21, 0)\n \n \n def get_version():\n", "issue": "When will be new release?\nWhen are you planning a new release? Some features has been added since last one in May. Thanks!\n", "before_files": [{"content": "# Import submodules so that we can expose their __all__\nfrom mongoengine import connection\nfrom mongoengine import document\nfrom mongoengine import errors\nfrom mongoengine import fields\nfrom mongoengine import queryset\nfrom mongoengine import signals\n\n# Import everything from each submodule so that it can be accessed via\n# mongoengine, e.g. instead of `from mongoengine.connection import connect`,\n# users can simply use `from mongoengine import connect`, or even\n# `from mongoengine import *` and then `connect('testdb')`.\nfrom mongoengine.connection import *\nfrom mongoengine.document import *\nfrom mongoengine.errors import *\nfrom mongoengine.fields import *\nfrom mongoengine.queryset import *\nfrom mongoengine.signals import *\n\n\n__all__ = (\n list(document.__all__)\n + list(fields.__all__)\n + list(connection.__all__)\n + list(queryset.__all__)\n + list(signals.__all__)\n + list(errors.__all__)\n)\n\n\nVERSION = (0, 20, 0)\n\n\ndef get_version():\n \"\"\"Return the VERSION as a string.\n\n For example, if `VERSION == (0, 10, 7)`, return '0.10.7'.\n \"\"\"\n return \".\".join(map(str, VERSION))\n\n\n__version__ = get_version()\n", "path": "mongoengine/__init__.py"}], "after_files": [{"content": "# Import submodules so that we can expose their __all__\nfrom mongoengine import connection\nfrom mongoengine import document\nfrom mongoengine import errors\nfrom mongoengine import fields\nfrom mongoengine import queryset\nfrom mongoengine import signals\n\n# Import everything from each submodule so that it can be accessed via\n# mongoengine, e.g. instead of `from mongoengine.connection import connect`,\n# users can simply use `from mongoengine import connect`, or even\n# `from mongoengine import *` and then `connect('testdb')`.\nfrom mongoengine.connection import *\nfrom mongoengine.document import *\nfrom mongoengine.errors import *\nfrom mongoengine.fields import *\nfrom mongoengine.queryset import *\nfrom mongoengine.signals import *\n\n\n__all__ = (\n list(document.__all__)\n + list(fields.__all__)\n + list(connection.__all__)\n + list(queryset.__all__)\n + list(signals.__all__)\n + list(errors.__all__)\n)\n\n\nVERSION = (0, 21, 0)\n\n\ndef get_version():\n \"\"\"Return the VERSION as a string.\n\n For example, if `VERSION == (0, 10, 7)`, return '0.10.7'.\n \"\"\"\n return \".\".join(map(str, VERSION))\n\n\n__version__ = get_version()\n", "path": "mongoengine/__init__.py"}]} | 644 | 85 |
gh_patches_debug_37916 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1122 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add redirect to newly created item after creating it in dashboard inventory
As title states.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/inventory/dashboard/views.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2
3 from datetime import datetime
4
5 from django.contrib import messages
6 from django.contrib.auth.decorators import login_required
7 from django.core.exceptions import PermissionDenied
8 from django.shortcuts import render, get_object_or_404, redirect
9 from django.http import HttpResponseBadRequest, HttpResponse, JsonResponse
10
11 from guardian.decorators import permission_required
12
13 from apps.dashboard.tools import has_access, get_base_context
14 from apps.inventory.dashboard.forms import ItemForm, BatchForm
15 from apps.inventory.models import Item, Batch
16
17
18 @login_required
19 @permission_required('inventory.view_item', return_403=True)
20 def index(request):
21
22 # Generic check to see if user has access to dashboard. (In Komiteer or superuser)
23 if not has_access(request):
24 raise PermissionDenied
25
26 # Create the base context needed for the sidebar
27 context = get_base_context(request)
28
29 context['items'] = Item.objects.all().order_by('name')
30
31 return render(request, 'inventory/dashboard/index.html', context)
32
33
34 @login_required
35 @permission_required('inventory.add_item', return_403=True)
36 def new(request):
37
38 if not has_access(request):
39 raise PermissionDenied
40
41 # Get base context
42 context = get_base_context(request)
43
44 if request.method == 'POST':
45 inventory_form = ItemForm(request.POST)
46
47 if not inventory_form.is_valid():
48 messages.error(request, u'Noen av de påkrevde feltene inneholder feil.')
49 else:
50 inventory_form.save()
51 messages.success(request, u'Varen ble opprettet')
52 return redirect(index)
53
54 context['form'] = inventory_form
55
56 else:
57 context['form'] = ItemForm()
58
59 return render(request, 'inventory/dashboard/new.html', context)
60
61
62 @login_required
63 @permission_required('inventory.view_item', return_403=True)
64 def details(request, item_pk):
65 # Generic check to see if user has access to dashboard. (In Komiteer or superuser)
66 if not has_access(request):
67 raise PermissionDenied
68
69 # Create the base context needed for the sidebar
70 context = get_base_context(request)
71
72 context['item'] = get_object_or_404(Item, pk=item_pk)
73
74 if request.method == 'POST':
75 if 'inventory.change_item' not in context['user_permissions']:
76 raise PermissionDenied
77
78 item_form = ItemForm(request.POST, instance=context['item'])
79 if not item_form.is_valid():
80 messages.error(request, u'Noen av de påkrevde feltene inneholder feil.')
81 else:
82 item_form.save()
83 messages.success(request, u'Varen ble oppdatert')
84 context['item_form'] = item_form
85 else:
86 context['item_form'] = ItemForm(instance=context['item'])
87
88 context['new_batch_form'] = BatchForm()
89
90 context['batch_forms'] = [(batch.id, BatchForm(instance=batch)) for batch in Batch.objects.filter(item=context['item'])]
91
92 return render(request, 'inventory/dashboard/details.html', context)
93
94 @login_required
95 @permission_required('inventory.delete_item', return_403=True)
96 def item_delete(request, item_pk):
97 if not has_access(request):
98 raise PermissionDenied
99
100 item = get_object_or_404(Item, pk=item_pk)
101
102 item.delete()
103
104 messages.success(request, u'Varen %s ble slettet.' % item.name)
105
106 return redirect(index)
107
108 @login_required
109 @permission_required('inventory.add_batch', return_403=True)
110 def batch_new(request, item_pk):
111 if not has_access(request):
112 raise PermissionDenied
113
114 # Get base context
115
116 item = get_object_or_404(Item, pk=item_pk)
117
118 if request.method == 'POST':
119 batch_form = BatchForm(request.POST)
120 batch = batch_form.save(commit=False)
121 batch.item = item
122
123 if not batch_form.is_valid():
124 messages.error(request, u'Noen av de påkrevde feltene inneholder feil.')
125 else:
126 batch.save()
127 messages.success(request, u'Batchen ble lagt til.')
128
129 return redirect(details, item_pk=item_pk)
130
131 raise PermissionDenied
132
133 @login_required
134 @permission_required('inventory.change_batch', return_403=True)
135 def batch(request, item_pk, batch_pk):
136 if not has_access(request):
137 raise PermissionDenied
138
139 # Get base context
140
141 item = get_object_or_404(Item, pk=item_pk)
142 batch = get_object_or_404(Batch, pk=batch_pk)
143
144 if request.method == 'POST':
145 batch_form = BatchForm(request.POST, instance=batch)
146
147 if not batch_form.is_valid():
148 messages.error(request, u'Noen av de påkrevde feltene inneholder feil.')
149 else:
150 batch_form.save()
151 messages.success(request, u'Batchen ble oppdatert.')
152
153 return redirect(details, item_pk=item_pk)
154
155 raise PermissionDenied
156
157
158 @login_required
159 @permission_required('inventory.delete_batch', return_403=True)
160 def batch_delete(request, item_pk, batch_pk):
161 if not has_access(request):
162 raise PermissionDenied
163
164 batch = get_object_or_404(Batch, pk=batch_pk)
165
166 batch.delete()
167
168 messages.success(request, u'Batchen ble slettet.')
169
170 return redirect(details, item_pk=item_pk)
171
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/inventory/dashboard/views.py b/apps/inventory/dashboard/views.py
--- a/apps/inventory/dashboard/views.py
+++ b/apps/inventory/dashboard/views.py
@@ -47,9 +47,9 @@
if not inventory_form.is_valid():
messages.error(request, u'Noen av de påkrevde feltene inneholder feil.')
else:
- inventory_form.save()
+ item = inventory_form.save()
messages.success(request, u'Varen ble opprettet')
- return redirect(index)
+ return redirect(details, item.id)
context['form'] = inventory_form
@@ -99,11 +99,15 @@
item = get_object_or_404(Item, pk=item_pk)
- item.delete()
+ if request.method == 'POST':
+
+ item.delete()
+
+ messages.success(request, u'Varen %s ble slettet.' % item.name)
- messages.success(request, u'Varen %s ble slettet.' % item.name)
+ return redirect(index)
- return redirect(index)
+ raise PermissionDenied
@login_required
@permission_required('inventory.add_batch', return_403=True)
@@ -111,18 +115,27 @@
if not has_access(request):
raise PermissionDenied
- # Get base context
+ # Field mapper
+ fieldmap = {
+ 'amount': u'Mengde',
+ 'expiration_date': u'Utløpsdato',
+ }
item = get_object_or_404(Item, pk=item_pk)
if request.method == 'POST':
batch_form = BatchForm(request.POST)
- batch = batch_form.save(commit=False)
- batch.item = item
if not batch_form.is_valid():
- messages.error(request, u'Noen av de påkrevde feltene inneholder feil.')
+ # Dirty hack to display errors since the form is not passed in redirect context
+ error_reply = u"Feil i felt:"
+ for field, error in batch_form.errors.items():
+ error_reply += ' ' + fieldmap[field] + ' (' + batch_form.error_class.as_text(error) + '),'
+
+ messages.error(request, error_reply.rstrip(','))
else:
+ batch = batch_form.save(commit=False)
+ batch.item = item
batch.save()
messages.success(request, u'Batchen ble lagt til.')
@@ -163,9 +176,12 @@
batch = get_object_or_404(Batch, pk=batch_pk)
- batch.delete()
+ if request.method == 'POST':
- messages.success(request, u'Batchen ble slettet.')
+ batch.delete()
+ messages.success(request, u'Batchen ble slettet.')
- return redirect(details, item_pk=item_pk)
+ return redirect(details, item_pk=item_pk)
+
+ raise PermissionDenied
| {"golden_diff": "diff --git a/apps/inventory/dashboard/views.py b/apps/inventory/dashboard/views.py\n--- a/apps/inventory/dashboard/views.py\n+++ b/apps/inventory/dashboard/views.py\n@@ -47,9 +47,9 @@\n if not inventory_form.is_valid():\n messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n else:\n- inventory_form.save()\n+ item = inventory_form.save()\n messages.success(request, u'Varen ble opprettet')\n- return redirect(index)\n+ return redirect(details, item.id)\n \n context['form'] = inventory_form\n \n@@ -99,11 +99,15 @@\n \n item = get_object_or_404(Item, pk=item_pk)\n \n- item.delete()\n+ if request.method == 'POST':\n+\n+ item.delete()\n+\n+ messages.success(request, u'Varen %s ble slettet.' % item.name)\n \n- messages.success(request, u'Varen %s ble slettet.' % item.name)\n+ return redirect(index)\n \n- return redirect(index)\n+ raise PermissionDenied\n \n @login_required\n @permission_required('inventory.add_batch', return_403=True)\n@@ -111,18 +115,27 @@\n if not has_access(request):\n raise PermissionDenied\n \n- # Get base context\n+ # Field mapper\n+ fieldmap = {\n+ 'amount': u'Mengde',\n+ 'expiration_date': u'Utl\u00f8psdato',\n+ }\n \n item = get_object_or_404(Item, pk=item_pk)\n \n if request.method == 'POST':\n batch_form = BatchForm(request.POST)\n- batch = batch_form.save(commit=False)\n- batch.item = item\n \n if not batch_form.is_valid():\n- messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n+ # Dirty hack to display errors since the form is not passed in redirect context\n+ error_reply = u\"Feil i felt:\"\n+ for field, error in batch_form.errors.items():\n+ error_reply += ' ' + fieldmap[field] + ' (' + batch_form.error_class.as_text(error) + '),'\n+\n+ messages.error(request, error_reply.rstrip(','))\n else:\n+ batch = batch_form.save(commit=False)\n+ batch.item = item\n batch.save()\n messages.success(request, u'Batchen ble lagt til.')\n \n@@ -163,9 +176,12 @@\n \n batch = get_object_or_404(Batch, pk=batch_pk)\n \n- batch.delete()\n+ if request.method == 'POST':\n \n- messages.success(request, u'Batchen ble slettet.')\n+ batch.delete()\n+ messages.success(request, u'Batchen ble slettet.')\n \n- return redirect(details, item_pk=item_pk)\n+ return redirect(details, item_pk=item_pk)\n+\n+ raise PermissionDenied\n", "issue": "Add redirect to newly created item after creating it in dashboard inventory\nAs title states.\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nfrom datetime import datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import render, get_object_or_404, redirect\nfrom django.http import HttpResponseBadRequest, HttpResponse, JsonResponse\n\nfrom guardian.decorators import permission_required\n\nfrom apps.dashboard.tools import has_access, get_base_context\nfrom apps.inventory.dashboard.forms import ItemForm, BatchForm\nfrom apps.inventory.models import Item, Batch\n\n\n@login_required\n@permission_required('inventory.view_item', return_403=True)\ndef index(request):\n\n # Generic check to see if user has access to dashboard. 
(In Komiteer or superuser)\n if not has_access(request):\n raise PermissionDenied\n\n # Create the base context needed for the sidebar\n context = get_base_context(request)\n\n context['items'] = Item.objects.all().order_by('name')\n\n return render(request, 'inventory/dashboard/index.html', context)\n\n\n@login_required\n@permission_required('inventory.add_item', return_403=True)\ndef new(request):\n\n if not has_access(request):\n raise PermissionDenied\n\n # Get base context\n context = get_base_context(request)\n\n if request.method == 'POST':\n inventory_form = ItemForm(request.POST)\n\n if not inventory_form.is_valid():\n messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n else:\n inventory_form.save()\n messages.success(request, u'Varen ble opprettet')\n return redirect(index)\n\n context['form'] = inventory_form\n\n else:\n context['form'] = ItemForm()\n\n return render(request, 'inventory/dashboard/new.html', context)\n\n\n@login_required\n@permission_required('inventory.view_item', return_403=True)\ndef details(request, item_pk):\n # Generic check to see if user has access to dashboard. (In Komiteer or superuser)\n if not has_access(request):\n raise PermissionDenied\n\n # Create the base context needed for the sidebar\n context = get_base_context(request)\n\n context['item'] = get_object_or_404(Item, pk=item_pk)\n\n if request.method == 'POST':\n if 'inventory.change_item' not in context['user_permissions']:\n raise PermissionDenied\n\n item_form = ItemForm(request.POST, instance=context['item'])\n if not item_form.is_valid():\n messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n else:\n item_form.save()\n messages.success(request, u'Varen ble oppdatert')\n context['item_form'] = item_form\n else:\n context['item_form'] = ItemForm(instance=context['item'])\n\n context['new_batch_form'] = BatchForm()\n\n context['batch_forms'] = [(batch.id, BatchForm(instance=batch)) for batch in Batch.objects.filter(item=context['item'])]\n\n return render(request, 'inventory/dashboard/details.html', context)\n\n@login_required\n@permission_required('inventory.delete_item', return_403=True)\ndef item_delete(request, item_pk):\n if not has_access(request):\n raise PermissionDenied\n\n item = get_object_or_404(Item, pk=item_pk)\n\n item.delete()\n\n messages.success(request, u'Varen %s ble slettet.' 
% item.name)\n\n return redirect(index)\n\n@login_required\n@permission_required('inventory.add_batch', return_403=True)\ndef batch_new(request, item_pk):\n if not has_access(request):\n raise PermissionDenied\n\n # Get base context\n\n item = get_object_or_404(Item, pk=item_pk)\n\n if request.method == 'POST':\n batch_form = BatchForm(request.POST)\n batch = batch_form.save(commit=False)\n batch.item = item\n\n if not batch_form.is_valid():\n messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n else:\n batch.save()\n messages.success(request, u'Batchen ble lagt til.')\n\n return redirect(details, item_pk=item_pk)\n\n raise PermissionDenied\n\n@login_required\n@permission_required('inventory.change_batch', return_403=True)\ndef batch(request, item_pk, batch_pk):\n if not has_access(request):\n raise PermissionDenied\n\n # Get base context\n\n item = get_object_or_404(Item, pk=item_pk)\n batch = get_object_or_404(Batch, pk=batch_pk)\n\n if request.method == 'POST':\n batch_form = BatchForm(request.POST, instance=batch)\n\n if not batch_form.is_valid():\n messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n else:\n batch_form.save()\n messages.success(request, u'Batchen ble oppdatert.')\n\n return redirect(details, item_pk=item_pk)\n\n raise PermissionDenied\n\n\n@login_required\n@permission_required('inventory.delete_batch', return_403=True)\ndef batch_delete(request, item_pk, batch_pk):\n if not has_access(request):\n raise PermissionDenied\n\n batch = get_object_or_404(Batch, pk=batch_pk)\n\n batch.delete()\n\n messages.success(request, u'Batchen ble slettet.')\n\n return redirect(details, item_pk=item_pk)\n\n", "path": "apps/inventory/dashboard/views.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\n\nfrom datetime import datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import render, get_object_or_404, redirect\nfrom django.http import HttpResponseBadRequest, HttpResponse, JsonResponse\n\nfrom guardian.decorators import permission_required\n\nfrom apps.dashboard.tools import has_access, get_base_context\nfrom apps.inventory.dashboard.forms import ItemForm, BatchForm\nfrom apps.inventory.models import Item, Batch\n\n\n@login_required\n@permission_required('inventory.view_item', return_403=True)\ndef index(request):\n\n # Generic check to see if user has access to dashboard. 
(In Komiteer or superuser)\n if not has_access(request):\n raise PermissionDenied\n\n # Create the base context needed for the sidebar\n context = get_base_context(request)\n\n context['items'] = Item.objects.all().order_by('name')\n\n return render(request, 'inventory/dashboard/index.html', context)\n\n\n@login_required\n@permission_required('inventory.add_item', return_403=True)\ndef new(request):\n\n if not has_access(request):\n raise PermissionDenied\n\n # Get base context\n context = get_base_context(request)\n\n if request.method == 'POST':\n inventory_form = ItemForm(request.POST)\n\n if not inventory_form.is_valid():\n messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n else:\n item = inventory_form.save()\n messages.success(request, u'Varen ble opprettet')\n return redirect(details, item.id)\n\n context['form'] = inventory_form\n\n else:\n context['form'] = ItemForm()\n\n return render(request, 'inventory/dashboard/new.html', context)\n\n\n@login_required\n@permission_required('inventory.view_item', return_403=True)\ndef details(request, item_pk):\n # Generic check to see if user has access to dashboard. (In Komiteer or superuser)\n if not has_access(request):\n raise PermissionDenied\n\n # Create the base context needed for the sidebar\n context = get_base_context(request)\n\n context['item'] = get_object_or_404(Item, pk=item_pk)\n\n if request.method == 'POST':\n if 'inventory.change_item' not in context['user_permissions']:\n raise PermissionDenied\n\n item_form = ItemForm(request.POST, instance=context['item'])\n if not item_form.is_valid():\n messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n else:\n item_form.save()\n messages.success(request, u'Varen ble oppdatert')\n context['item_form'] = item_form\n else:\n context['item_form'] = ItemForm(instance=context['item'])\n\n context['new_batch_form'] = BatchForm()\n\n context['batch_forms'] = [(batch.id, BatchForm(instance=batch)) for batch in Batch.objects.filter(item=context['item'])]\n\n return render(request, 'inventory/dashboard/details.html', context)\n\n@login_required\n@permission_required('inventory.delete_item', return_403=True)\ndef item_delete(request, item_pk):\n if not has_access(request):\n raise PermissionDenied\n\n item = get_object_or_404(Item, pk=item_pk)\n\n if request.method == 'POST':\n\n item.delete()\n\n messages.success(request, u'Varen %s ble slettet.' 
% item.name)\n\n return redirect(index)\n\n raise PermissionDenied\n\n@login_required\n@permission_required('inventory.add_batch', return_403=True)\ndef batch_new(request, item_pk):\n if not has_access(request):\n raise PermissionDenied\n\n # Field mapper\n fieldmap = {\n 'amount': u'Mengde',\n 'expiration_date': u'Utl\u00f8psdato',\n }\n\n item = get_object_or_404(Item, pk=item_pk)\n\n if request.method == 'POST':\n batch_form = BatchForm(request.POST)\n\n if not batch_form.is_valid():\n # Dirty hack to display errors since the form is not passed in redirect context\n error_reply = u\"Feil i felt:\"\n for field, error in batch_form.errors.items():\n error_reply += ' ' + fieldmap[field] + ' (' + batch_form.error_class.as_text(error) + '),'\n\n messages.error(request, error_reply.rstrip(','))\n else:\n batch = batch_form.save(commit=False)\n batch.item = item\n batch.save()\n messages.success(request, u'Batchen ble lagt til.')\n\n return redirect(details, item_pk=item_pk)\n\n raise PermissionDenied\n\n@login_required\n@permission_required('inventory.change_batch', return_403=True)\ndef batch(request, item_pk, batch_pk):\n if not has_access(request):\n raise PermissionDenied\n\n # Get base context\n\n item = get_object_or_404(Item, pk=item_pk)\n batch = get_object_or_404(Batch, pk=batch_pk)\n\n if request.method == 'POST':\n batch_form = BatchForm(request.POST, instance=batch)\n\n if not batch_form.is_valid():\n messages.error(request, u'Noen av de p\u00e5krevde feltene inneholder feil.')\n else:\n batch_form.save()\n messages.success(request, u'Batchen ble oppdatert.')\n\n return redirect(details, item_pk=item_pk)\n\n raise PermissionDenied\n\n\n@login_required\n@permission_required('inventory.delete_batch', return_403=True)\ndef batch_delete(request, item_pk, batch_pk):\n if not has_access(request):\n raise PermissionDenied\n\n batch = get_object_or_404(Batch, pk=batch_pk)\n\n if request.method == 'POST':\n\n batch.delete()\n messages.success(request, u'Batchen ble slettet.')\n\n return redirect(details, item_pk=item_pk)\n\n raise PermissionDenied\n\n", "path": "apps/inventory/dashboard/views.py"}]} | 1,870 | 659 |
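The fix in the row above boils down to the classic post/redirect/get pattern: save the form, then redirect to the detail view of the object that was just created. Below is a minimal, hedged Django sketch of that pattern — it is not the project's actual code; `ItemForm`, the template path, and the `item-detail` URL name are placeholders.

```python
# Sketch of redirect-after-create (PRG) under assumed names; assumes a Django
# project with an ItemForm ModelForm and a URL pattern named "item-detail".
from django.shortcuts import redirect, render

from .forms import ItemForm  # placeholder import for the assumed ModelForm


def create_item(request):
    form = ItemForm(request.POST or None)
    if request.method == "POST" and form.is_valid():
        item = form.save()
        # Redirect to the new object's detail page instead of the index,
        # so a browser refresh cannot re-submit the creation form.
        return redirect("item-detail", pk=item.pk)
    return render(request, "items/new.html", {"form": form})
```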
gh_patches_debug_1519 | rasdani/github-patches | git_diff | databricks__koalas-1959 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plot does not work in Ubuntu
Plotting with `matplotlib` 3.3.0+ causes an exception because of Pillow dependency added at https://github.com/matplotlib/matplotlib/commit/370e9a2d5d9e637abc90b3270d368642c69f66c6#diff-60f61ab7a8d1910d86d9fda2261620314edcae5894d5aaa236b821c7256badd7 on Ubuntu:
```python
import databricks.koalas as ks
df = ks.DataFrame({'x': [1, 2], 'y': [3, 4], 'z': [5, 6]})
df.plot...
```
```
ImportError: /.../lib/python3.8/site-packages/PIL/_imaging.cpython-38-x86_64-linux-gnu.so: ELF load command address/offset not properly aligned
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 #
4 # Copyright (C) 2019 Databricks, Inc.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18 from __future__ import print_function
19
20 from io import open
21 import sys
22 from setuptools import setup
23 from os import path
24
25 DESCRIPTION = "Koalas: pandas API on Apache Spark"
26
27 this_directory = path.abspath(path.dirname(__file__))
28 with open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:
29 LONG_DESCRIPTION = f.read()
30
31 try:
32 exec(open('databricks/koalas/version.py').read())
33 except IOError:
34 print("Failed to load Koalas version file for packaging. You must be in Koalas root dir.",
35 file=sys.stderr)
36 sys.exit(-1)
37 VERSION = __version__ # noqa
38
39 setup(
40 name='koalas',
41 version=VERSION,
42 packages=[
43 'databricks',
44 'databricks.koalas',
45 'databricks.koalas.missing',
46 'databricks.koalas.plot',
47 'databricks.koalas.spark',
48 'databricks.koalas.typedef',
49 'databricks.koalas.usage_logging'],
50 extras_require={
51 'spark': ['pyspark>=2.4.0'],
52 'mlflow': ['mlflow>=1.0'],
53 'plotly': ['plotly>=4.8'],
54 },
55 python_requires='>=3.5,<3.9',
56 install_requires=[
57 'pandas>=0.23.2',
58 'pyarrow>=0.10',
59 'numpy>=1.14',
60 'matplotlib>=3.0.0',
61 ],
62 author="Databricks",
63 author_email="[email protected]",
64 license='http://www.apache.org/licenses/LICENSE-2.0',
65 url="https://github.com/databricks/koalas",
66 project_urls={
67 'Bug Tracker': 'https://github.com/databricks/koalas/issues',
68 'Documentation': 'https://koalas.readthedocs.io/',
69 'Source Code': 'https://github.com/databricks/koalas'
70 },
71 description=DESCRIPTION,
72 long_description=LONG_DESCRIPTION,
73 long_description_content_type='text/markdown',
74 classifiers=[
75 'Programming Language :: Python :: 3.5',
76 'Programming Language :: Python :: 3.6',
77 'Programming Language :: Python :: 3.7',
78 'Programming Language :: Python :: 3.8',
79 ],
80 )
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -57,7 +57,7 @@
'pandas>=0.23.2',
'pyarrow>=0.10',
'numpy>=1.14',
- 'matplotlib>=3.0.0',
+ 'matplotlib>=3.0.0,<3.3.0',
],
author="Databricks",
author_email="[email protected]",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -57,7 +57,7 @@\n 'pandas>=0.23.2',\n 'pyarrow>=0.10',\n 'numpy>=1.14',\n- 'matplotlib>=3.0.0',\n+ 'matplotlib>=3.0.0,<3.3.0',\n ],\n author=\"Databricks\",\n author_email=\"[email protected]\",\n", "issue": "plot does not work in Ubuntu\nPlotting with `matplotlib` 3.3.0+ causes an exception because of Pillow dependency added at https://github.com/matplotlib/matplotlib/commit/370e9a2d5d9e637abc90b3270d368642c69f66c6#diff-60f61ab7a8d1910d86d9fda2261620314edcae5894d5aaa236b821c7256badd7 on Ubuntu:\r\n\r\n```python\r\nimport databricks.koalas as ks\r\ndf = ks.DataFrame({'x': [1, 2], 'y': [3, 4], 'z': [5, 6]})\r\ndf.plot...\r\n```\r\n\r\n```\r\nImportError: /.../lib/python3.8/site-packages/PIL/_imaging.cpython-38-x86_64-linux-gnu.so: ELF load command address/offset not properly aligned\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\n#\n# Copyright (C) 2019 Databricks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import print_function\n\nfrom io import open\nimport sys\nfrom setuptools import setup\nfrom os import path\n\nDESCRIPTION = \"Koalas: pandas API on Apache Spark\"\n\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:\n LONG_DESCRIPTION = f.read()\n\ntry:\n exec(open('databricks/koalas/version.py').read())\nexcept IOError:\n print(\"Failed to load Koalas version file for packaging. 
You must be in Koalas root dir.\",\n file=sys.stderr)\n sys.exit(-1)\nVERSION = __version__ # noqa\n\nsetup(\n name='koalas',\n version=VERSION,\n packages=[\n 'databricks',\n 'databricks.koalas',\n 'databricks.koalas.missing',\n 'databricks.koalas.plot',\n 'databricks.koalas.spark',\n 'databricks.koalas.typedef',\n 'databricks.koalas.usage_logging'],\n extras_require={\n 'spark': ['pyspark>=2.4.0'],\n 'mlflow': ['mlflow>=1.0'],\n 'plotly': ['plotly>=4.8'],\n },\n python_requires='>=3.5,<3.9',\n install_requires=[\n 'pandas>=0.23.2',\n 'pyarrow>=0.10',\n 'numpy>=1.14',\n 'matplotlib>=3.0.0',\n ],\n author=\"Databricks\",\n author_email=\"[email protected]\",\n license='http://www.apache.org/licenses/LICENSE-2.0',\n url=\"https://github.com/databricks/koalas\",\n project_urls={\n 'Bug Tracker': 'https://github.com/databricks/koalas/issues',\n 'Documentation': 'https://koalas.readthedocs.io/',\n 'Source Code': 'https://github.com/databricks/koalas'\n },\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n#\n# Copyright (C) 2019 Databricks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import print_function\n\nfrom io import open\nimport sys\nfrom setuptools import setup\nfrom os import path\n\nDESCRIPTION = \"Koalas: pandas API on Apache Spark\"\n\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:\n LONG_DESCRIPTION = f.read()\n\ntry:\n exec(open('databricks/koalas/version.py').read())\nexcept IOError:\n print(\"Failed to load Koalas version file for packaging. 
You must be in Koalas root dir.\",\n file=sys.stderr)\n sys.exit(-1)\nVERSION = __version__ # noqa\n\nsetup(\n name='koalas',\n version=VERSION,\n packages=[\n 'databricks',\n 'databricks.koalas',\n 'databricks.koalas.missing',\n 'databricks.koalas.plot',\n 'databricks.koalas.spark',\n 'databricks.koalas.typedef',\n 'databricks.koalas.usage_logging'],\n extras_require={\n 'spark': ['pyspark>=2.4.0'],\n 'mlflow': ['mlflow>=1.0'],\n 'plotly': ['plotly>=4.8'],\n },\n python_requires='>=3.5,<3.9',\n install_requires=[\n 'pandas>=0.23.2',\n 'pyarrow>=0.10',\n 'numpy>=1.14',\n 'matplotlib>=3.0.0,<3.3.0',\n ],\n author=\"Databricks\",\n author_email=\"[email protected]\",\n license='http://www.apache.org/licenses/LICENSE-2.0',\n url=\"https://github.com/databricks/koalas\",\n project_urls={\n 'Bug Tracker': 'https://github.com/databricks/koalas/issues',\n 'Documentation': 'https://koalas.readthedocs.io/',\n 'Source Code': 'https://github.com/databricks/koalas'\n },\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n)\n", "path": "setup.py"}]} | 1,319 | 112 |
gh_patches_debug_30358 | rasdani/github-patches | git_diff | nvaccess__nvda-10338 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
winVersion.isWin10: modify it to return early if major is not 10
Hi,
This feature request should not be done for 2019.3 in order to avoid introducing possible regressions, but perhaps defer it to 2020.1:
### Is your feature request related to a problem? Please describe.
In NVDA 2019.2, as part of UIA console support, the winVersion module introduced an isWin10 function, which returns whether a system is running a specific version of Windows 10 or later. At the moment it checks the major version in two places:
* If "at least" flag is specified: return False if version is less than 10.
* If "at least" flag is not specified: return False if major version is not 10.
At least it is understandable. But I would argue that for sake of performance and keeping the subject line and body in sync, I propose the below modification.
### Describe the solution you'd like
Instead of checking flags after defining the version information map, return early if the major version isn't even 10. Not only can the flag checks be avoided on Windows 8.1 and earlier, it also brings the function body in line with what the function name says.
Thus I propose:
1. First, check winVersion.major and return False if it isn't 10.
2. Define version to build map.
3. Same as current function.
### Describe alternatives you've considered
Keep the function as is.
### Additional context
I'm sure there might be other opportunities to optimize this function, but that's in the future.
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/winVersion.py`
Content:
```
1 # A part of NonVisual Desktop Access (NVDA)
2 # Copyright (C) 2006-2019 NV Access Limited
3 # This file is covered by the GNU General Public License.
4 # See the file COPYING for more details.
5
6 import sys
7 import os
8 import winUser
9
10 winVersion=sys.getwindowsversion()
11 winVersionText="{v.major}.{v.minor}.{v.build}".format(v=winVersion)
12 if winVersion.service_pack_major!=0:
13 winVersionText+=" service pack %d"%winVersion.service_pack_major
14 if winVersion.service_pack_minor!=0:
15 winVersionText+=".%d"%winVersion.service_pack_minor
16 winVersionText+=" %s" % ("workstation","domain controller","server")[winVersion.product_type-1]
17
18 def isSupportedOS():
19 # NVDA can only run on Windows 7 Service pack 1 and above
20 return (winVersion.major,winVersion.minor,winVersion.service_pack_major) >= (6,1,1)
21
22 def canRunVc2010Builds():
23 return isSupportedOS()
24
25 UWP_OCR_DATA_PATH = os.path.expandvars(r"$windir\OCR")
26 def isUwpOcrAvailable():
27 return os.path.isdir(UWP_OCR_DATA_PATH)
28
29 def isWin10(version=1507, atLeast=True):
30 """
31 Returns True if NVDA is running on the supplied release version of Windows 10. If no argument is supplied, returns True for all public Windows 10 releases.
32 @param version: a release version of Windows 10 (such as 1903).
33 @param atLeast: return True if NVDA is running on at least this Windows 10 build (i.e. this version or higher).
34 """
35 from logHandler import log
36 win10VersionsToBuilds={
37 1507: 10240,
38 1511: 10586,
39 1607: 14393,
40 1703: 15063,
41 1709: 16299,
42 1803: 17134,
43 1809: 17763,
44 1903: 18362,
45 1909: 18363,
46 }
47 if atLeast and winVersion.major < 10:
48 return False
49 elif not atLeast and winVersion.major != 10:
50 return False
51 try:
52 if atLeast:
53 return winVersion.build >= win10VersionsToBuilds[version]
54 else:
55 return winVersion.build == win10VersionsToBuilds[version]
56 except KeyError:
57 log.error("Unknown Windows 10 version {}".format(version))
58 return False
59
60
61 def isFullScreenMagnificationAvailable():
62 return (winVersion.major, winVersion.minor) >= (6, 2)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/source/winVersion.py b/source/winVersion.py
--- a/source/winVersion.py
+++ b/source/winVersion.py
@@ -26,34 +26,35 @@
def isUwpOcrAvailable():
return os.path.isdir(UWP_OCR_DATA_PATH)
-def isWin10(version=1507, atLeast=True):
+
+WIN10_VERSIONS_TO_BUILDS = {
+ 1507: 10240,
+ 1511: 10586,
+ 1607: 14393,
+ 1703: 15063,
+ 1709: 16299,
+ 1803: 17134,
+ 1809: 17763,
+ 1903: 18362,
+ 1909: 18363,
+}
+
+
+def isWin10(version: int = 1507, atLeast: bool = True):
"""
Returns True if NVDA is running on the supplied release version of Windows 10. If no argument is supplied, returns True for all public Windows 10 releases.
@param version: a release version of Windows 10 (such as 1903).
@param atLeast: return True if NVDA is running on at least this Windows 10 build (i.e. this version or higher).
"""
- from logHandler import log
- win10VersionsToBuilds={
- 1507: 10240,
- 1511: 10586,
- 1607: 14393,
- 1703: 15063,
- 1709: 16299,
- 1803: 17134,
- 1809: 17763,
- 1903: 18362,
- 1909: 18363,
- }
- if atLeast and winVersion.major < 10:
- return False
- elif not atLeast and winVersion.major != 10:
+ if winVersion.major != 10:
return False
try:
if atLeast:
- return winVersion.build >= win10VersionsToBuilds[version]
+ return winVersion.build >= WIN10_VERSIONS_TO_BUILDS[version]
else:
- return winVersion.build == win10VersionsToBuilds[version]
+ return winVersion.build == WIN10_VERSIONS_TO_BUILDS[version]
except KeyError:
+ from logHandler import log
log.error("Unknown Windows 10 version {}".format(version))
return False
| {"golden_diff": "diff --git a/source/winVersion.py b/source/winVersion.py\n--- a/source/winVersion.py\n+++ b/source/winVersion.py\n@@ -26,34 +26,35 @@\n def isUwpOcrAvailable():\r\n \treturn os.path.isdir(UWP_OCR_DATA_PATH)\r\n \r\n-def isWin10(version=1507, atLeast=True):\r\n+\r\n+WIN10_VERSIONS_TO_BUILDS = {\r\n+\t1507: 10240,\r\n+\t1511: 10586,\r\n+\t1607: 14393,\r\n+\t1703: 15063,\r\n+\t1709: 16299,\r\n+\t1803: 17134,\r\n+\t1809: 17763,\r\n+\t1903: 18362,\r\n+\t1909: 18363,\r\n+}\r\n+\r\n+\r\n+def isWin10(version: int = 1507, atLeast: bool = True):\r\n \t\"\"\"\r\n \tReturns True if NVDA is running on the supplied release version of Windows 10. If no argument is supplied, returns True for all public Windows 10 releases.\r\n \t@param version: a release version of Windows 10 (such as 1903).\r\n \t@param atLeast: return True if NVDA is running on at least this Windows 10 build (i.e. this version or higher).\r\n \t\"\"\"\r\n-\tfrom logHandler import log\r\n-\twin10VersionsToBuilds={\r\n-\t\t1507: 10240,\r\n-\t\t1511: 10586,\r\n-\t\t1607: 14393,\r\n-\t\t1703: 15063,\r\n-\t\t1709: 16299,\r\n-\t\t1803: 17134,\r\n-\t\t1809: 17763,\r\n-\t\t1903: 18362,\r\n-\t\t1909: 18363,\r\n-\t}\r\n-\tif atLeast and winVersion.major < 10:\r\n-\t\treturn False\r\n-\telif not atLeast and winVersion.major != 10:\r\n+\tif winVersion.major != 10:\r\n \t\treturn False\r\n \ttry:\r\n \t\tif atLeast:\r\n-\t\t\treturn winVersion.build >= win10VersionsToBuilds[version]\r\n+\t\t\treturn winVersion.build >= WIN10_VERSIONS_TO_BUILDS[version]\r\n \t\telse:\r\n-\t\t\treturn winVersion.build == win10VersionsToBuilds[version]\r\n+\t\t\treturn winVersion.build == WIN10_VERSIONS_TO_BUILDS[version]\r\n \texcept KeyError:\r\n+\t\tfrom logHandler import log\r\n \t\tlog.error(\"Unknown Windows 10 version {}\".format(version))\r\n \t\treturn False\n", "issue": "winVersion.isWin10: modify it to return early if major is not 10\nHi,\r\n\r\nThis feature request should not be done for 2019.3 in order to avoid introducing possible regressions, but perhaps defer it to 2020.1:\r\n\r\n### Is your feature request related to a problem? Please describe.\r\nIn NVDA 2019.2, as part of UIA console support, winVersion module introduced isWin10 function which returns whether a system is running a specific version of Windows 10 or later. At the moment it checks major version in two places:\r\n\r\n* If \"at least\" flag is specified: return False if version is less than 10.\r\n* If \"at least\" flag is not specified: return False if major version is not 10.\r\n\r\nAt least it is understandable. But I would argue that for sake of performance and keeping the subject line and body in sync, I propose the below modification.\r\n\r\n### Describe the solution you'd like\r\nInstead of checking flags after defining version information map, return early if major version isn't even 10. Not only flag checks can be avoided on Windows 8.1 and earlier, it can make the function body in line with what the function title says.\r\n\r\nThus I propose:\r\n\r\n1. First, check winVersion.major and return False if it isn't 10.\r\n2. Define version to build map.\r\n3. 
Same as current function.\r\n\r\n### Describe alternatives you've considered\r\nKeep the function as is.\r\n\r\n### Additional context\r\nI'm sure there might be other opportunities to optimize this function, but that's in the future.\r\n\r\nThanks.\n", "before_files": [{"content": "# A part of NonVisual Desktop Access (NVDA)\r\n# Copyright (C) 2006-2019 NV Access Limited\r\n# This file is covered by the GNU General Public License.\r\n# See the file COPYING for more details.\r\n\r\nimport sys\r\nimport os\r\nimport winUser\r\n\r\nwinVersion=sys.getwindowsversion()\r\nwinVersionText=\"{v.major}.{v.minor}.{v.build}\".format(v=winVersion)\r\nif winVersion.service_pack_major!=0:\r\n\twinVersionText+=\" service pack %d\"%winVersion.service_pack_major\r\n\tif winVersion.service_pack_minor!=0:\r\n\t\twinVersionText+=\".%d\"%winVersion.service_pack_minor\r\nwinVersionText+=\" %s\" % (\"workstation\",\"domain controller\",\"server\")[winVersion.product_type-1]\r\n\r\ndef isSupportedOS():\r\n\t# NVDA can only run on Windows 7 Service pack 1 and above\r\n\treturn (winVersion.major,winVersion.minor,winVersion.service_pack_major) >= (6,1,1)\r\n\r\ndef canRunVc2010Builds():\r\n\treturn isSupportedOS()\r\n\r\nUWP_OCR_DATA_PATH = os.path.expandvars(r\"$windir\\OCR\")\r\ndef isUwpOcrAvailable():\r\n\treturn os.path.isdir(UWP_OCR_DATA_PATH)\r\n\r\ndef isWin10(version=1507, atLeast=True):\r\n\t\"\"\"\r\n\tReturns True if NVDA is running on the supplied release version of Windows 10. If no argument is supplied, returns True for all public Windows 10 releases.\r\n\t@param version: a release version of Windows 10 (such as 1903).\r\n\t@param atLeast: return True if NVDA is running on at least this Windows 10 build (i.e. this version or higher).\r\n\t\"\"\"\r\n\tfrom logHandler import log\r\n\twin10VersionsToBuilds={\r\n\t\t1507: 10240,\r\n\t\t1511: 10586,\r\n\t\t1607: 14393,\r\n\t\t1703: 15063,\r\n\t\t1709: 16299,\r\n\t\t1803: 17134,\r\n\t\t1809: 17763,\r\n\t\t1903: 18362,\r\n\t\t1909: 18363,\r\n\t}\r\n\tif atLeast and winVersion.major < 10:\r\n\t\treturn False\r\n\telif not atLeast and winVersion.major != 10:\r\n\t\treturn False\r\n\ttry:\r\n\t\tif atLeast:\r\n\t\t\treturn winVersion.build >= win10VersionsToBuilds[version]\r\n\t\telse:\r\n\t\t\treturn winVersion.build == win10VersionsToBuilds[version]\r\n\texcept KeyError:\r\n\t\tlog.error(\"Unknown Windows 10 version {}\".format(version))\r\n\t\treturn False\r\n\r\n\r\ndef isFullScreenMagnificationAvailable():\r\n\treturn (winVersion.major, winVersion.minor) >= (6, 2)\r\n", "path": "source/winVersion.py"}], "after_files": [{"content": "# A part of NonVisual Desktop Access (NVDA)\r\n# Copyright (C) 2006-2019 NV Access Limited\r\n# This file is covered by the GNU General Public License.\r\n# See the file COPYING for more details.\r\n\r\nimport sys\r\nimport os\r\nimport winUser\r\n\r\nwinVersion=sys.getwindowsversion()\r\nwinVersionText=\"{v.major}.{v.minor}.{v.build}\".format(v=winVersion)\r\nif winVersion.service_pack_major!=0:\r\n\twinVersionText+=\" service pack %d\"%winVersion.service_pack_major\r\n\tif winVersion.service_pack_minor!=0:\r\n\t\twinVersionText+=\".%d\"%winVersion.service_pack_minor\r\nwinVersionText+=\" %s\" % (\"workstation\",\"domain controller\",\"server\")[winVersion.product_type-1]\r\n\r\ndef isSupportedOS():\r\n\t# NVDA can only run on Windows 7 Service pack 1 and above\r\n\treturn (winVersion.major,winVersion.minor,winVersion.service_pack_major) >= (6,1,1)\r\n\r\ndef canRunVc2010Builds():\r\n\treturn 
isSupportedOS()\r\n\r\nUWP_OCR_DATA_PATH = os.path.expandvars(r\"$windir\\OCR\")\r\ndef isUwpOcrAvailable():\r\n\treturn os.path.isdir(UWP_OCR_DATA_PATH)\r\n\r\n\r\nWIN10_VERSIONS_TO_BUILDS = {\r\n\t1507: 10240,\r\n\t1511: 10586,\r\n\t1607: 14393,\r\n\t1703: 15063,\r\n\t1709: 16299,\r\n\t1803: 17134,\r\n\t1809: 17763,\r\n\t1903: 18362,\r\n\t1909: 18363,\r\n}\r\n\r\n\r\ndef isWin10(version: int = 1507, atLeast: bool = True):\r\n\t\"\"\"\r\n\tReturns True if NVDA is running on the supplied release version of Windows 10. If no argument is supplied, returns True for all public Windows 10 releases.\r\n\t@param version: a release version of Windows 10 (such as 1903).\r\n\t@param atLeast: return True if NVDA is running on at least this Windows 10 build (i.e. this version or higher).\r\n\t\"\"\"\r\n\tif winVersion.major != 10:\r\n\t\treturn False\r\n\ttry:\r\n\t\tif atLeast:\r\n\t\t\treturn winVersion.build >= WIN10_VERSIONS_TO_BUILDS[version]\r\n\t\telse:\r\n\t\t\treturn winVersion.build == WIN10_VERSIONS_TO_BUILDS[version]\r\n\texcept KeyError:\r\n\t\tfrom logHandler import log\r\n\t\tlog.error(\"Unknown Windows 10 version {}\".format(version))\r\n\t\treturn False\r\n\r\n\r\ndef isFullScreenMagnificationAvailable():\r\n\treturn (winVersion.major, winVersion.minor) >= (6, 2)\r\n", "path": "source/winVersion.py"}]} | 1,375 | 657 |
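For readers who want to experiment with the shape of the proposal in the issue above (return early unless the major version is 10, then consult a module-level version-to-build map), here is a small standalone illustration. It is deliberately not NVDA's winVersion module: the function takes the version tuple as an argument so it runs on any OS, and the build numbers are copied from the source file shown in the row.

```python
from collections import namedtuple

# Release alias -> first build number, as listed in the source file above.
WIN10_VERSIONS_TO_BUILDS = {
    1507: 10240, 1511: 10586, 1607: 14393, 1703: 15063, 1709: 16299,
    1803: 17134, 1809: 17763, 1903: 18362, 1909: 18363,
}


def is_win10(version_info, version=1507, at_least=True):
    """version_info is anything with .major and .build, e.g. sys.getwindowsversion()."""
    if version_info.major != 10:  # early return proposed in the issue
        return False
    build = WIN10_VERSIONS_TO_BUILDS.get(version)
    if build is None:  # unknown release alias
        return False
    return version_info.build >= build if at_least else version_info.build == build


FakeVersion = namedtuple("FakeVersion", "major build")
assert is_win10(FakeVersion(10, 18362), 1903)                      # 1903 or later
assert not is_win10(FakeVersion(6, 9600), 1507)                    # Windows 8.1 bails out early
assert is_win10(FakeVersion(10, 17134), 1803, at_least=False)      # exact release match
```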
gh_patches_debug_15883 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1311 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
South Glos UK doesn't return all Recycling / Food Bin dates
The South Gloucestershire (UK) source only returns Food Bin and Recycling collection dates every other week, when they should appear every week.
Calendar in HA shows like this for me:

The South Glos webpage shows:

And the service endpoint that you are querying replies with:

I could put a PR in to treat the Rx dates as Recycling and Food as well?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py`
Content:
```
1 import datetime
2 import json
3
4 import requests
5 from waste_collection_schedule import Collection
6
7 TITLE = "South Gloucestershire Council" # Title will show up in README.md and info.md
8 DESCRIPTION = "Source script for southglos.gov.uk" # Describe your source
9 URL = "https://southglos.gov.uk" # Insert url to service homepage. URL will show up in README.md and info.md
10 TEST_CASES = { # Insert arguments for test cases to be used by test_sources.py script
11 "Test_001": {"uprn": "643346"},
12 "Test_002": {"uprn": "641084"}
13 }
14
15 ICON_MAP = { # Optional: Dict of waste types and suitable mdi icons
16 "BLACK BIN": "mdi:trash-can",
17 "RECYCLING": "mdi:recycle",
18 "GARDEN WASTE": "mdi:leaf",
19 "FOOD BIN": "mdi:food"
20 }
21
22
23 class Source:
24 def __init__(self, uprn: str): # argX correspond to the args dict in the source configuration
25 self._uprn = uprn
26
27 def fetch(self):
28 session = requests.Session()
29 r = session.get(
30 f"https://webapps.southglos.gov.uk/Webservices/SGC.RefuseCollectionService/RefuseCollectionService.svc"
31 f"/getCollections/{self._uprn}")
32 r.raise_for_status()
33 output = r.text.strip('[]')
34 output = json.loads(output)
35 recycling_and_food_bin_dates = [output['C1'], output['C2'], output['C3']]
36 black_bin_dates = [output['R1'], output['R2'], output['R3']]
37 garden_bin_dates = [output['G1'], output['G2'], output['G3']]
38 entries = [] # List that holds collection schedule
39
40 for collection in recycling_and_food_bin_dates:
41 entries.append(
42 Collection(
43 date=datetime.datetime.strptime(collection, "%d/%m/%Y").date(),
44 t="RECYCLING",
45 icon=ICON_MAP.get("RECYCLING"),
46 )
47 )
48 entries.append(
49 Collection(
50 date=datetime.datetime.strptime(collection, "%d/%m/%Y").date(),
51 t="FOOD BIN",
52 icon=ICON_MAP.get("FOOD BIN"),
53 )
54 )
55
56 for collection in black_bin_dates:
57 entries.append(
58 Collection(
59 date=datetime.datetime.strptime(collection, "%d/%m/%Y").date(),
60 t="BLACK BIN",
61 icon=ICON_MAP.get("BLACK BIN"),
62 )
63 )
64
65 if garden_bin_dates[1] != '': #
66 for collection in garden_bin_dates:
67 entries.append(
68 Collection(
69 date=datetime.datetime.strptime(collection, "%d/%m/%Y").date(),
70 t="GARDEN WASTE",
71 icon=ICON_MAP.get("GARDEN WASTE"),
72 )
73 )
74
75 return entries
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py
@@ -32,8 +32,11 @@
r.raise_for_status()
output = r.text.strip('[]')
output = json.loads(output)
- recycling_and_food_bin_dates = [output['C1'], output['C2'], output['C3']]
+ # Recycling and food are fields starting with C and R
+ recycling_and_food_bin_dates = [output['C1'], output['C2'], output['C3'], output['R1'], output['R2'], output['R3']]
+ # Black bin dates are fields starting R
black_bin_dates = [output['R1'], output['R2'], output['R3']]
+ # Garden bin dates are fields starting G
garden_bin_dates = [output['G1'], output['G2'], output['G3']]
entries = [] # List that holds collection schedule
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py\n@@ -32,8 +32,11 @@\n r.raise_for_status()\n output = r.text.strip('[]')\n output = json.loads(output)\n- recycling_and_food_bin_dates = [output['C1'], output['C2'], output['C3']]\n+ # Recycling and food are fields starting with C and R\n+ recycling_and_food_bin_dates = [output['C1'], output['C2'], output['C3'], output['R1'], output['R2'], output['R3']]\n+ # Black bin dates are fields starting R\n black_bin_dates = [output['R1'], output['R2'], output['R3']]\n+ # Garden bin dates are fields starting G\n garden_bin_dates = [output['G1'], output['G2'], output['G3']]\n entries = [] # List that holds collection schedule\n", "issue": "South Glos UK doesn't return all Recycling / Food Bin dates\nThe South Gloucester UK source is only returning the Food Bin and Recycling bins every other week, when they should be every week.\r\n\r\nCalendar in HA shows like this for me:\r\n\r\n\r\nThe South Glos webpage shows:\r\n\r\n\r\nAnd the service endpoint that you are querying replies with:\r\n\r\n\r\nI could put a PR in to treat the Rx dates as Recycling and Food as well?\n", "before_files": [{"content": "import datetime\nimport json\n\nimport requests\nfrom waste_collection_schedule import Collection\n\nTITLE = \"South Gloucestershire Council\" # Title will show up in README.md and info.md\nDESCRIPTION = \"Source script for southglos.gov.uk\" # Describe your source\nURL = \"https://southglos.gov.uk\" # Insert url to service homepage. 
URL will show up in README.md and info.md\nTEST_CASES = { # Insert arguments for test cases to be used by test_sources.py script\n \"Test_001\": {\"uprn\": \"643346\"},\n \"Test_002\": {\"uprn\": \"641084\"}\n}\n\nICON_MAP = { # Optional: Dict of waste types and suitable mdi icons\n \"BLACK BIN\": \"mdi:trash-can\",\n \"RECYCLING\": \"mdi:recycle\",\n \"GARDEN WASTE\": \"mdi:leaf\",\n \"FOOD BIN\": \"mdi:food\"\n}\n\n\nclass Source:\n def __init__(self, uprn: str): # argX correspond to the args dict in the source configuration\n self._uprn = uprn\n\n def fetch(self):\n session = requests.Session()\n r = session.get(\n f\"https://webapps.southglos.gov.uk/Webservices/SGC.RefuseCollectionService/RefuseCollectionService.svc\"\n f\"/getCollections/{self._uprn}\")\n r.raise_for_status()\n output = r.text.strip('[]')\n output = json.loads(output)\n recycling_and_food_bin_dates = [output['C1'], output['C2'], output['C3']]\n black_bin_dates = [output['R1'], output['R2'], output['R3']]\n garden_bin_dates = [output['G1'], output['G2'], output['G3']]\n entries = [] # List that holds collection schedule\n\n for collection in recycling_and_food_bin_dates:\n entries.append(\n Collection(\n date=datetime.datetime.strptime(collection, \"%d/%m/%Y\").date(),\n t=\"RECYCLING\",\n icon=ICON_MAP.get(\"RECYCLING\"),\n )\n )\n entries.append(\n Collection(\n date=datetime.datetime.strptime(collection, \"%d/%m/%Y\").date(),\n t=\"FOOD BIN\",\n icon=ICON_MAP.get(\"FOOD BIN\"),\n )\n )\n\n for collection in black_bin_dates:\n entries.append(\n Collection(\n date=datetime.datetime.strptime(collection, \"%d/%m/%Y\").date(),\n t=\"BLACK BIN\",\n icon=ICON_MAP.get(\"BLACK BIN\"),\n )\n )\n\n if garden_bin_dates[1] != '': #\n for collection in garden_bin_dates:\n entries.append(\n Collection(\n date=datetime.datetime.strptime(collection, \"%d/%m/%Y\").date(),\n t=\"GARDEN WASTE\",\n icon=ICON_MAP.get(\"GARDEN WASTE\"),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py"}], "after_files": [{"content": "import datetime\nimport json\n\nimport requests\nfrom waste_collection_schedule import Collection\n\nTITLE = \"South Gloucestershire Council\" # Title will show up in README.md and info.md\nDESCRIPTION = \"Source script for southglos.gov.uk\" # Describe your source\nURL = \"https://southglos.gov.uk\" # Insert url to service homepage. 
URL will show up in README.md and info.md\nTEST_CASES = { # Insert arguments for test cases to be used by test_sources.py script\n \"Test_001\": {\"uprn\": \"643346\"},\n \"Test_002\": {\"uprn\": \"641084\"}\n}\n\nICON_MAP = { # Optional: Dict of waste types and suitable mdi icons\n \"BLACK BIN\": \"mdi:trash-can\",\n \"RECYCLING\": \"mdi:recycle\",\n \"GARDEN WASTE\": \"mdi:leaf\",\n \"FOOD BIN\": \"mdi:food\"\n}\n\n\nclass Source:\n def __init__(self, uprn: str): # argX correspond to the args dict in the source configuration\n self._uprn = uprn\n\n def fetch(self):\n session = requests.Session()\n r = session.get(\n f\"https://webapps.southglos.gov.uk/Webservices/SGC.RefuseCollectionService/RefuseCollectionService.svc\"\n f\"/getCollections/{self._uprn}\")\n r.raise_for_status()\n output = r.text.strip('[]')\n output = json.loads(output)\n # Recycling and food are fields starting with C and R\n recycling_and_food_bin_dates = [output['C1'], output['C2'], output['C3'], output['R1'], output['R2'], output['R3']]\n # Black bin dates are fields starting R\n black_bin_dates = [output['R1'], output['R2'], output['R3']]\n # Garden bin dates are fields starting G\n garden_bin_dates = [output['G1'], output['G2'], output['G3']]\n entries = [] # List that holds collection schedule\n\n for collection in recycling_and_food_bin_dates:\n entries.append(\n Collection(\n date=datetime.datetime.strptime(collection, \"%d/%m/%Y\").date(),\n t=\"RECYCLING\",\n icon=ICON_MAP.get(\"RECYCLING\"),\n )\n )\n entries.append(\n Collection(\n date=datetime.datetime.strptime(collection, \"%d/%m/%Y\").date(),\n t=\"FOOD BIN\",\n icon=ICON_MAP.get(\"FOOD BIN\"),\n )\n )\n\n for collection in black_bin_dates:\n entries.append(\n Collection(\n date=datetime.datetime.strptime(collection, \"%d/%m/%Y\").date(),\n t=\"BLACK BIN\",\n icon=ICON_MAP.get(\"BLACK BIN\"),\n )\n )\n\n if garden_bin_dates[1] != '': #\n for collection in garden_bin_dates:\n entries.append(\n Collection(\n date=datetime.datetime.strptime(collection, \"%d/%m/%Y\").date(),\n t=\"GARDEN WASTE\",\n icon=ICON_MAP.get(\"GARDEN WASTE\"),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/southglos_gov_uk.py"}]} | 1,329 | 280 |
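The essence of the patch above is which response fields feed which bin type: C1–C3 plus R1–R3 drive the weekly recycling/food entries, R1–R3 alone drive the black bin, and G1–G3 the garden bin. A rough standalone sketch of that grouping (field names taken from the snippet; the sample payload is invented) could look like this:

```python
from datetime import datetime


def group_collections(payload: dict) -> dict:
    """Group dd/mm/YYYY date strings from the service response by bin type."""
    def dates(keys):
        return [
            datetime.strptime(payload[k], "%d/%m/%Y").date()
            for k in keys
            if payload.get(k)  # skip missing or empty fields
        ]

    return {
        "RECYCLING/FOOD BIN": dates(["C1", "C2", "C3", "R1", "R2", "R3"]),
        "BLACK BIN": dates(["R1", "R2", "R3"]),
        "GARDEN WASTE": dates(["G1", "G2", "G3"]),
    }


sample = {"C1": "06/01/2023", "R1": "13/01/2023", "G1": ""}
print(group_collections(sample))
```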
gh_patches_debug_64458 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1834 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E3021 after upgrading to cfn_lint-0.44.0
cfn-lint version: cfn_lint-0.44.0
Hi!
After upgrading to version cfn_lint 0.44.0 don't validate AWS::Events::Rule
This was reported valid since yesterday:
```yaml
TaskSchedule:
Type: AWS::Events::Rule
Properties:
Description: xxxxxxxxx
Name: !Join ["-", [!Ref EnvironmentName, scheduled, !Ref App]]
ScheduleExpression: "cron(15 9 * * ? *)"
State: "ENABLED"
Targets:
- Arn:
Fn::ImportValue: !Sub ${EnvironmentName}-microservice-cluster-arn
RoleArn: !GetAtt ServiceRole.Arn
Id: my-task
EcsParameters:
TaskCount: 1
TaskDefinitionArn: !Ref TaskDefinition
```
The error reported is: `E3021 An Events Rule can have up to 5 Targets`
There is only one Target, so the template should still be valid.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/events/RuleTargetsLimit.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.rules import CloudFormationLintRule
6 from cfnlint.rules import RuleMatch
7
8
9 class RuleTargetsLimit(CloudFormationLintRule):
10 """Check State Machine Definition"""
11 id = 'E3021'
12 shortdesc = 'Check Events Rule Targets are less than or equal to 5'
13 description = 'CloudWatch Events Rule can only support up to 5 targets'
14 source_url = 'https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/cloudwatch_limits_cwe.html'
15 tags = ['resources', 'events']
16 max_count = 5
17
18 def __init__(self):
19 """Init"""
20 super(RuleTargetsLimit, self).__init__()
21 self.resource_property_types = ['AWS::Events::Rule']
22 self.limits = {}
23
24 # pylint: disable=W0613
25 def check_value(self, value, path):
26 """Count them up """
27
28 resource_name = path[1]
29 if len(path) > 4:
30 if path[4] == 'Fn::If':
31 resource_name = '%s.%s' % (path[1], path[5])
32
33 if resource_name not in self.limits:
34 self.limits[resource_name] = {
35 'count': 0,
36 'path': path[:-1]
37 }
38
39 self.limits[resource_name]['count'] += 1
40 return []
41
42 def match_resource_properties(self, properties, _, path, cfn):
43 """Check CloudFormation Properties"""
44 matches = []
45 matches.extend(
46 cfn.check_value(
47 obj=properties, key='Targets',
48 path=path[:],
49 check_value=self.check_value
50 ))
51
52 for _, limit in self.limits.items():
53 if limit['count'] > self.max_count:
54 message = 'An Events Rule can have up to {0} Targets'
55 matches.append(RuleMatch(limit['path'], message.format(self.max_count)))
56
57 return matches
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/events/RuleTargetsLimit.py b/src/cfnlint/rules/resources/events/RuleTargetsLimit.py
--- a/src/cfnlint/rules/resources/events/RuleTargetsLimit.py
+++ b/src/cfnlint/rules/resources/events/RuleTargetsLimit.py
@@ -21,6 +21,9 @@
self.resource_property_types = ['AWS::Events::Rule']
self.limits = {}
+ def initialize(self, cfn):
+ self.limits = {}
+
# pylint: disable=W0613
def check_value(self, value, path):
"""Count them up """
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/events/RuleTargetsLimit.py b/src/cfnlint/rules/resources/events/RuleTargetsLimit.py\n--- a/src/cfnlint/rules/resources/events/RuleTargetsLimit.py\n+++ b/src/cfnlint/rules/resources/events/RuleTargetsLimit.py\n@@ -21,6 +21,9 @@\n self.resource_property_types = ['AWS::Events::Rule']\n self.limits = {}\n \n+ def initialize(self, cfn):\n+ self.limits = {}\n+\n # pylint: disable=W0613\n def check_value(self, value, path):\n \"\"\"Count them up \"\"\"\n", "issue": "E3021 after upgrading to cfn_lint-0.44.0\ncfn-lint version: cfn_lint-0.44.0\r\n\r\nHi! \r\n\r\nAfter upgrading to version cfn_lint 0.44.0 don't validate AWS::Events::Rule\r\n\r\nThis was reported valid since yesterday:\r\n\r\n```yaml\r\n TaskSchedule:\r\n Type: AWS::Events::Rule\r\n Properties:\r\n Description: xxxxxxxxx\r\n Name: !Join [\"-\", [!Ref EnvironmentName, scheduled, !Ref App]]\r\n ScheduleExpression: \"cron(15 9 * * ? *)\"\r\n State: \"ENABLED\"\r\n Targets:\r\n - Arn:\r\n Fn::ImportValue: !Sub ${EnvironmentName}-microservice-cluster-arn\r\n RoleArn: !GetAtt ServiceRole.Arn\r\n Id: my-task\r\n EcsParameters:\r\n TaskCount: 1\r\n TaskDefinitionArn: !Ref TaskDefinition\r\n```\r\n\r\nThe error reported is: `E3021 An Events Rule can have up to 5 Targets`\r\n\r\nThere is one Target so should still be valid \r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass RuleTargetsLimit(CloudFormationLintRule):\n \"\"\"Check State Machine Definition\"\"\"\n id = 'E3021'\n shortdesc = 'Check Events Rule Targets are less than or equal to 5'\n description = 'CloudWatch Events Rule can only support up to 5 targets'\n source_url = 'https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/cloudwatch_limits_cwe.html'\n tags = ['resources', 'events']\n max_count = 5\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(RuleTargetsLimit, self).__init__()\n self.resource_property_types = ['AWS::Events::Rule']\n self.limits = {}\n\n # pylint: disable=W0613\n def check_value(self, value, path):\n \"\"\"Count them up \"\"\"\n\n resource_name = path[1]\n if len(path) > 4:\n if path[4] == 'Fn::If':\n resource_name = '%s.%s' % (path[1], path[5])\n\n if resource_name not in self.limits:\n self.limits[resource_name] = {\n 'count': 0,\n 'path': path[:-1]\n }\n\n self.limits[resource_name]['count'] += 1\n return []\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n matches.extend(\n cfn.check_value(\n obj=properties, key='Targets',\n path=path[:],\n check_value=self.check_value\n ))\n\n for _, limit in self.limits.items():\n if limit['count'] > self.max_count:\n message = 'An Events Rule can have up to {0} Targets'\n matches.append(RuleMatch(limit['path'], message.format(self.max_count)))\n\n return matches\n", "path": "src/cfnlint/rules/resources/events/RuleTargetsLimit.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass RuleTargetsLimit(CloudFormationLintRule):\n \"\"\"Check State Machine Definition\"\"\"\n id = 'E3021'\n shortdesc = 'Check Events Rule Targets are less than or equal to 5'\n description = 'CloudWatch Events Rule can only support up to 5 targets'\n source_url = 'https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/cloudwatch_limits_cwe.html'\n tags = ['resources', 'events']\n max_count = 5\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(RuleTargetsLimit, self).__init__()\n self.resource_property_types = ['AWS::Events::Rule']\n self.limits = {}\n\n def initialize(self, cfn):\n self.limits = {}\n\n # pylint: disable=W0613\n def check_value(self, value, path):\n \"\"\"Count them up \"\"\"\n\n resource_name = path[1]\n if len(path) > 4:\n if path[4] == 'Fn::If':\n resource_name = '%s.%s' % (path[1], path[5])\n\n if resource_name not in self.limits:\n self.limits[resource_name] = {\n 'count': 0,\n 'path': path[:-1]\n }\n\n self.limits[resource_name]['count'] += 1\n return []\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n matches.extend(\n cfn.check_value(\n obj=properties, key='Targets',\n path=path[:],\n check_value=self.check_value\n ))\n\n for _, limit in self.limits.items():\n if limit['count'] > self.max_count:\n message = 'An Events Rule can have up to {0} Targets'\n matches.append(RuleMatch(limit['path'], message.format(self.max_count)))\n\n return matches\n", "path": "src/cfnlint/rules/resources/events/RuleTargetsLimit.py"}]} | 1,058 | 136 |
gh_patches_debug_32851 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-447 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Initiator dashboard: too many notifications
I get notifications for all projects of my organization, even those that are not my own projects. I'm not sure I want to see all changes in all projects of my organization; I think it will be too much.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/notifications/signals.py`
Content:
```
1 from django.db.models import signals
2 from django.dispatch import receiver
3
4 from adhocracy4.actions.models import Action
5 from adhocracy4.actions.verbs import Verbs
6 from adhocracy4.follows.models import Follow
7 from adhocracy4.phases.models import Phase
8 from adhocracy4.projects.models import Project
9 from apps.organisations.models import Organisation
10 from . import emails
11
12
13 @receiver(signals.post_save, sender=Action)
14 def send_notifications(instance, created, **kwargs):
15 action = instance
16 verb = Verbs(action.verb)
17
18 if verb == Verbs.CREATE or verb == Verbs.ADD:
19 emails.NotifyCreatorEmail.send(action)
20
21 if action.project:
22 emails.NotifyModeratorsEmail.send(action)
23 emails.NotifyFollowersOnNewItemCreated.send(action)
24
25 elif verb == Verbs.SCHEDULE:
26 if isinstance(action.obj, Phase):
27 emails.NotifyFollowersOnPhaseIsOverSoonEmail.send(action)
28
29
30 @receiver(signals.m2m_changed, sender=Project.moderators.through)
31 def autofollow_project_moderators(instance, action, pk_set, reverse, **kwargs):
32 if action == 'post_add':
33 if not reverse:
34 project = instance
35 users_pks = pk_set
36
37 for user_pk in users_pks:
38 Follow.objects.update_or_create(
39 project=project,
40 creator_id=user_pk,
41 defaults={
42 'enabled': True
43 }
44 )
45 else:
46 user = instance
47 project_pks = pk_set
48
49 for project_pk in project_pks:
50 Follow.objects.update_or_create(
51 project_id=project_pk,
52 creator_id=user,
53 defaults={
54 'enabled': True
55 }
56 )
57
58
59 @receiver(signals.m2m_changed, sender=Organisation.initiators.through)
60 def autofollow_organisation_initiators(instance, action, pk_set, reverse,
61 **kwargs):
62 if action == 'post_add':
63 if not reverse:
64 organisation = instance
65 users_pks = pk_set
66
67 for project in Project.objects.filter(organisation=organisation):
68 for user_pk in users_pks:
69 Follow.objects.update_or_create(
70 project=project,
71 creator_id=user_pk,
72 defaults={
73 'enabled': True
74 }
75 )
76 else:
77 user = instance
78 organisation_pk_set = pk_set
79
80 for project in Project.objects.filter(
81 organisation_id__in=organisation_pk_set):
82 Follow.objects.update_or_create(
83 project=project,
84 creator=user,
85 defaults={
86 'enabled': True
87 }
88 )
89
90
91 @receiver(signals.post_save)
92 def autofollow_organisation_initiators_new_projects(sender, instance, created,
93 **kwargs):
94 if issubclass(sender, Project):
95 # we have to check if the senders inherits from Project to catch
96 # signals from external projects and bplans
97 project = instance
98 if created:
99 for user in project.organisation.initiators.all():
100 Follow.objects.update_or_create(
101 project=project,
102 creator=user,
103 defaults={
104 'enabled': True
105 }
106 )
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/notifications/signals.py b/apps/notifications/signals.py
--- a/apps/notifications/signals.py
+++ b/apps/notifications/signals.py
@@ -6,7 +6,6 @@
from adhocracy4.follows.models import Follow
from adhocracy4.phases.models import Phase
from adhocracy4.projects.models import Project
-from apps.organisations.models import Organisation
from . import emails
@@ -54,53 +53,3 @@
'enabled': True
}
)
-
-
-@receiver(signals.m2m_changed, sender=Organisation.initiators.through)
-def autofollow_organisation_initiators(instance, action, pk_set, reverse,
- **kwargs):
- if action == 'post_add':
- if not reverse:
- organisation = instance
- users_pks = pk_set
-
- for project in Project.objects.filter(organisation=organisation):
- for user_pk in users_pks:
- Follow.objects.update_or_create(
- project=project,
- creator_id=user_pk,
- defaults={
- 'enabled': True
- }
- )
- else:
- user = instance
- organisation_pk_set = pk_set
-
- for project in Project.objects.filter(
- organisation_id__in=organisation_pk_set):
- Follow.objects.update_or_create(
- project=project,
- creator=user,
- defaults={
- 'enabled': True
- }
- )
-
-
-@receiver(signals.post_save)
-def autofollow_organisation_initiators_new_projects(sender, instance, created,
- **kwargs):
- if issubclass(sender, Project):
- # we have to check if the senders inherits from Project to catch
- # signals from external projects and bplans
- project = instance
- if created:
- for user in project.organisation.initiators.all():
- Follow.objects.update_or_create(
- project=project,
- creator=user,
- defaults={
- 'enabled': True
- }
- )
| {"golden_diff": "diff --git a/apps/notifications/signals.py b/apps/notifications/signals.py\n--- a/apps/notifications/signals.py\n+++ b/apps/notifications/signals.py\n@@ -6,7 +6,6 @@\n from adhocracy4.follows.models import Follow\n from adhocracy4.phases.models import Phase\n from adhocracy4.projects.models import Project\n-from apps.organisations.models import Organisation\n from . import emails\n \n \n@@ -54,53 +53,3 @@\n 'enabled': True\n }\n )\n-\n-\n-@receiver(signals.m2m_changed, sender=Organisation.initiators.through)\n-def autofollow_organisation_initiators(instance, action, pk_set, reverse,\n- **kwargs):\n- if action == 'post_add':\n- if not reverse:\n- organisation = instance\n- users_pks = pk_set\n-\n- for project in Project.objects.filter(organisation=organisation):\n- for user_pk in users_pks:\n- Follow.objects.update_or_create(\n- project=project,\n- creator_id=user_pk,\n- defaults={\n- 'enabled': True\n- }\n- )\n- else:\n- user = instance\n- organisation_pk_set = pk_set\n-\n- for project in Project.objects.filter(\n- organisation_id__in=organisation_pk_set):\n- Follow.objects.update_or_create(\n- project=project,\n- creator=user,\n- defaults={\n- 'enabled': True\n- }\n- )\n-\n-\n-@receiver(signals.post_save)\n-def autofollow_organisation_initiators_new_projects(sender, instance, created,\n- **kwargs):\n- if issubclass(sender, Project):\n- # we have to check if the senders inherits from Project to catch\n- # signals from external projects and bplans\n- project = instance\n- if created:\n- for user in project.organisation.initiators.all():\n- Follow.objects.update_or_create(\n- project=project,\n- creator=user,\n- defaults={\n- 'enabled': True\n- }\n- )\n", "issue": "Initiator dashboard: too many notification\nI get notifications of any projects of my organization, even it's not my project. I'm not sure if I want to see all changes in all projects of my organization. I think it will be too much.\n", "before_files": [{"content": "from django.db.models import signals\nfrom django.dispatch import receiver\n\nfrom adhocracy4.actions.models import Action\nfrom adhocracy4.actions.verbs import Verbs\nfrom adhocracy4.follows.models import Follow\nfrom adhocracy4.phases.models import Phase\nfrom adhocracy4.projects.models import Project\nfrom apps.organisations.models import Organisation\nfrom . 
import emails\n\n\n@receiver(signals.post_save, sender=Action)\ndef send_notifications(instance, created, **kwargs):\n action = instance\n verb = Verbs(action.verb)\n\n if verb == Verbs.CREATE or verb == Verbs.ADD:\n emails.NotifyCreatorEmail.send(action)\n\n if action.project:\n emails.NotifyModeratorsEmail.send(action)\n emails.NotifyFollowersOnNewItemCreated.send(action)\n\n elif verb == Verbs.SCHEDULE:\n if isinstance(action.obj, Phase):\n emails.NotifyFollowersOnPhaseIsOverSoonEmail.send(action)\n\n\n@receiver(signals.m2m_changed, sender=Project.moderators.through)\ndef autofollow_project_moderators(instance, action, pk_set, reverse, **kwargs):\n if action == 'post_add':\n if not reverse:\n project = instance\n users_pks = pk_set\n\n for user_pk in users_pks:\n Follow.objects.update_or_create(\n project=project,\n creator_id=user_pk,\n defaults={\n 'enabled': True\n }\n )\n else:\n user = instance\n project_pks = pk_set\n\n for project_pk in project_pks:\n Follow.objects.update_or_create(\n project_id=project_pk,\n creator_id=user,\n defaults={\n 'enabled': True\n }\n )\n\n\n@receiver(signals.m2m_changed, sender=Organisation.initiators.through)\ndef autofollow_organisation_initiators(instance, action, pk_set, reverse,\n **kwargs):\n if action == 'post_add':\n if not reverse:\n organisation = instance\n users_pks = pk_set\n\n for project in Project.objects.filter(organisation=organisation):\n for user_pk in users_pks:\n Follow.objects.update_or_create(\n project=project,\n creator_id=user_pk,\n defaults={\n 'enabled': True\n }\n )\n else:\n user = instance\n organisation_pk_set = pk_set\n\n for project in Project.objects.filter(\n organisation_id__in=organisation_pk_set):\n Follow.objects.update_or_create(\n project=project,\n creator=user,\n defaults={\n 'enabled': True\n }\n )\n\n\n@receiver(signals.post_save)\ndef autofollow_organisation_initiators_new_projects(sender, instance, created,\n **kwargs):\n if issubclass(sender, Project):\n # we have to check if the senders inherits from Project to catch\n # signals from external projects and bplans\n project = instance\n if created:\n for user in project.organisation.initiators.all():\n Follow.objects.update_or_create(\n project=project,\n creator=user,\n defaults={\n 'enabled': True\n }\n )\n", "path": "apps/notifications/signals.py"}], "after_files": [{"content": "from django.db.models import signals\nfrom django.dispatch import receiver\n\nfrom adhocracy4.actions.models import Action\nfrom adhocracy4.actions.verbs import Verbs\nfrom adhocracy4.follows.models import Follow\nfrom adhocracy4.phases.models import Phase\nfrom adhocracy4.projects.models import Project\nfrom . 
import emails\n\n\n@receiver(signals.post_save, sender=Action)\ndef send_notifications(instance, created, **kwargs):\n action = instance\n verb = Verbs(action.verb)\n\n if verb == Verbs.CREATE or verb == Verbs.ADD:\n emails.NotifyCreatorEmail.send(action)\n\n if action.project:\n emails.NotifyModeratorsEmail.send(action)\n emails.NotifyFollowersOnNewItemCreated.send(action)\n\n elif verb == Verbs.SCHEDULE:\n if isinstance(action.obj, Phase):\n emails.NotifyFollowersOnPhaseIsOverSoonEmail.send(action)\n\n\n@receiver(signals.m2m_changed, sender=Project.moderators.through)\ndef autofollow_project_moderators(instance, action, pk_set, reverse, **kwargs):\n if action == 'post_add':\n if not reverse:\n project = instance\n users_pks = pk_set\n\n for user_pk in users_pks:\n Follow.objects.update_or_create(\n project=project,\n creator_id=user_pk,\n defaults={\n 'enabled': True\n }\n )\n else:\n user = instance\n project_pks = pk_set\n\n for project_pk in project_pks:\n Follow.objects.update_or_create(\n project_id=project_pk,\n creator_id=user,\n defaults={\n 'enabled': True\n }\n )\n", "path": "apps/notifications/signals.py"}]} | 1,181 | 451 |
gh_patches_debug_1020 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-1970 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
district, topic and localisation for external projects and bplans
External projects also need district, topic and localisation, just as the other projects do.
The same is true for b-plans, as not all of them come in via Imperia.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/bplan/tasks.py`
Content:
```
1 import json
2 import logging
3 import urllib
4
5 from background_task import background
6
7 from adhocracy4.administrative_districts.models import AdministrativeDistrict
8 from meinberlin.apps.bplan.models import Bplan
9
10 logger = logging.getLogger(__name__)
11
12
13 def get_features_from_bplan_api(endpoint):
14 url = 'https://bplan-prod.liqd.net/api/' + endpoint
15 req = urllib.request.Request(url)
16 res = urllib.request.urlopen(req)
17 res_body = res.read()
18 res_json = json.loads(res_body.decode("utf-8"))
19
20 return res_json.get('features')
21
22
23 def get_bplan_point_and_district_pk(bplan_identifier):
24 url_poi = 'bplan/points/' + \
25 '?bplan={}'.format(bplan_identifier.replace(' ', '%20'))
26
27 try:
28 features = get_features_from_bplan_api(url_poi)
29 if features:
30 district_pk = features[0]['properties']['bezirk']
31 point = features[0]
32
33 return point, district_pk
34
35 return None, None
36
37 except UnicodeEncodeError:
38 # catches bplan-identifiers with problematic chars
39 pass
40
41
42 def get_bplan_api_pk_to_a4_admin_district_dict():
43 url_dis = 'bezirke/'
44 features = get_features_from_bplan_api(url_dis)
45 dis_dict = {}
46 if features:
47 for district in features:
48
49 dis_model = AdministrativeDistrict.objects.filter(
50 name=district['properties']['name']
51 )
52 if dis_model:
53 dis_dict[district['properties']['pk']] = \
54 dis_model[0]
55 else:
56 dis_dict[district['properties']['pk']] = None
57
58 return dis_dict
59
60
61 @background(schedule=0)
62 def get_location_information(bplan_id):
63 bplan = Bplan.objects.get(pk=bplan_id)
64 point, district_pk = get_bplan_point_and_district_pk(bplan.identifier)
65 dis_dict = get_bplan_api_pk_to_a4_admin_district_dict()
66
67 if district_pk:
68 bplan.administrative_district = \
69 dis_dict[district_pk]
70 else:
71 logger.error(
72 "The identifier '{}' for bplan '{}' seems to be wrong. "
73 "It doesn't exist on https://bplan-prod.liqd.net/api/"
74 .format(bplan.identifier, bplan)
75 )
76 bplan.point = point
77 bplan.save(update_fields=['point', 'administrative_district'])
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/bplan/tasks.py b/meinberlin/apps/bplan/tasks.py
--- a/meinberlin/apps/bplan/tasks.py
+++ b/meinberlin/apps/bplan/tasks.py
@@ -74,4 +74,5 @@
.format(bplan.identifier, bplan)
)
bplan.point = point
- bplan.save(update_fields=['point', 'administrative_district'])
+ bplan.topics = ['URB']
+ bplan.save(update_fields=['point', 'administrative_district', 'topics'])
| {"golden_diff": "diff --git a/meinberlin/apps/bplan/tasks.py b/meinberlin/apps/bplan/tasks.py\n--- a/meinberlin/apps/bplan/tasks.py\n+++ b/meinberlin/apps/bplan/tasks.py\n@@ -74,4 +74,5 @@\n .format(bplan.identifier, bplan)\n )\n bplan.point = point\n- bplan.save(update_fields=['point', 'administrative_district'])\n+ bplan.topics = ['URB']\n+ bplan.save(update_fields=['point', 'administrative_district', 'topics'])\n", "issue": "district, topic and localisation for external projects and bplans\nexternal projects also need district, topic and localisation as the other projects do. \r\n\r\nsame is true for b-plans as not all of them come via imperia\n", "before_files": [{"content": "import json\nimport logging\nimport urllib\n\nfrom background_task import background\n\nfrom adhocracy4.administrative_districts.models import AdministrativeDistrict\nfrom meinberlin.apps.bplan.models import Bplan\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_features_from_bplan_api(endpoint):\n url = 'https://bplan-prod.liqd.net/api/' + endpoint\n req = urllib.request.Request(url)\n res = urllib.request.urlopen(req)\n res_body = res.read()\n res_json = json.loads(res_body.decode(\"utf-8\"))\n\n return res_json.get('features')\n\n\ndef get_bplan_point_and_district_pk(bplan_identifier):\n url_poi = 'bplan/points/' + \\\n '?bplan={}'.format(bplan_identifier.replace(' ', '%20'))\n\n try:\n features = get_features_from_bplan_api(url_poi)\n if features:\n district_pk = features[0]['properties']['bezirk']\n point = features[0]\n\n return point, district_pk\n\n return None, None\n\n except UnicodeEncodeError:\n # catches bplan-identifiers with problematic chars\n pass\n\n\ndef get_bplan_api_pk_to_a4_admin_district_dict():\n url_dis = 'bezirke/'\n features = get_features_from_bplan_api(url_dis)\n dis_dict = {}\n if features:\n for district in features:\n\n dis_model = AdministrativeDistrict.objects.filter(\n name=district['properties']['name']\n )\n if dis_model:\n dis_dict[district['properties']['pk']] = \\\n dis_model[0]\n else:\n dis_dict[district['properties']['pk']] = None\n\n return dis_dict\n\n\n@background(schedule=0)\ndef get_location_information(bplan_id):\n bplan = Bplan.objects.get(pk=bplan_id)\n point, district_pk = get_bplan_point_and_district_pk(bplan.identifier)\n dis_dict = get_bplan_api_pk_to_a4_admin_district_dict()\n\n if district_pk:\n bplan.administrative_district = \\\n dis_dict[district_pk]\n else:\n logger.error(\n \"The identifier '{}' for bplan '{}' seems to be wrong. 
\"\n \"It doesn't exist on https://bplan-prod.liqd.net/api/\"\n .format(bplan.identifier, bplan)\n )\n bplan.point = point\n bplan.save(update_fields=['point', 'administrative_district'])\n", "path": "meinberlin/apps/bplan/tasks.py"}], "after_files": [{"content": "import json\nimport logging\nimport urllib\n\nfrom background_task import background\n\nfrom adhocracy4.administrative_districts.models import AdministrativeDistrict\nfrom meinberlin.apps.bplan.models import Bplan\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_features_from_bplan_api(endpoint):\n url = 'https://bplan-prod.liqd.net/api/' + endpoint\n req = urllib.request.Request(url)\n res = urllib.request.urlopen(req)\n res_body = res.read()\n res_json = json.loads(res_body.decode(\"utf-8\"))\n\n return res_json.get('features')\n\n\ndef get_bplan_point_and_district_pk(bplan_identifier):\n url_poi = 'bplan/points/' + \\\n '?bplan={}'.format(bplan_identifier.replace(' ', '%20'))\n\n try:\n features = get_features_from_bplan_api(url_poi)\n if features:\n district_pk = features[0]['properties']['bezirk']\n point = features[0]\n\n return point, district_pk\n\n return None, None\n\n except UnicodeEncodeError:\n # catches bplan-identifiers with problematic chars\n pass\n\n\ndef get_bplan_api_pk_to_a4_admin_district_dict():\n url_dis = 'bezirke/'\n features = get_features_from_bplan_api(url_dis)\n dis_dict = {}\n if features:\n for district in features:\n\n dis_model = AdministrativeDistrict.objects.filter(\n name=district['properties']['name']\n )\n if dis_model:\n dis_dict[district['properties']['pk']] = \\\n dis_model[0]\n else:\n dis_dict[district['properties']['pk']] = None\n\n return dis_dict\n\n\n@background(schedule=0)\ndef get_location_information(bplan_id):\n bplan = Bplan.objects.get(pk=bplan_id)\n point, district_pk = get_bplan_point_and_district_pk(bplan.identifier)\n dis_dict = get_bplan_api_pk_to_a4_admin_district_dict()\n\n if district_pk:\n bplan.administrative_district = \\\n dis_dict[district_pk]\n else:\n logger.error(\n \"The identifier '{}' for bplan '{}' seems to be wrong. \"\n \"It doesn't exist on https://bplan-prod.liqd.net/api/\"\n .format(bplan.identifier, bplan)\n )\n bplan.point = point\n bplan.topics = ['URB']\n bplan.save(update_fields=['point', 'administrative_district', 'topics'])\n", "path": "meinberlin/apps/bplan/tasks.py"}]} | 980 | 123 |
gh_patches_debug_3356 | rasdani/github-patches | git_diff | pypa__setuptools-3106 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Entry points are missing when supplied as a string.
### setuptools version
setuptools==60.9.0
### Python version
3.8.12
### OS
Debian
### Additional environment information
_No response_
### Description
When installing a Python package which uses `setuptools` and has at least one `entry_point` directly from Git the `entry_point` script is not created with `setuptools==60.9.0`.
I have verified that this does work with `setuptools=60.8.2` but not `60.9.0`.
### Expected behavior
The expected `entry_point` script `tap-postgres` is present in `venv/bin/`.
### How to Reproduce
```
python -m venv venv
source venv/bin/activate
pip install git+https://github.com/thread/pipelinewise-tap-postgres.git@ee9bedb4711bb27c9660892f56c8fb27f3770d2a
```
### Output
```console
```
[BUG] Entry points are missing when supplied as a string.
### setuptools version
setuptools==60.9.0
### Python version
3.8.12
### OS
Debian
### Additional environment information
_No response_
### Description
When installing a Python package which uses `setuptools` and has at least one `entry_point` directly from Git the `entry_point` script is not created with `setuptools==60.9.0`.
I have verified that this does work with `setuptools=60.8.2` but not `60.9.0`.
### Expected behavior
The expected `entry_point` script `tap-postgres` is present in `venv/bin/`.
### How to Reproduce
```
python -m venv venv
source venv/bin/activate
pip install git+https://github.com/thread/pipelinewise-tap-postgres.git@ee9bedb4711bb27c9660892f56c8fb27f3770d2a
```
### Output
```console
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/_entry_points.py`
Content:
```
1 import functools
2 import operator
3 import itertools
4
5 from .extern.jaraco.text import yield_lines
6 from .extern.jaraco.functools import pass_none
7 from ._importlib import metadata
8 from ._itertools import ensure_unique
9 from .extern.more_itertools import consume
10
11
12 def ensure_valid(ep):
13 """
14 Exercise one of the dynamic properties to trigger
15 the pattern match.
16 """
17 ep.extras
18
19
20 def load_group(value, group):
21 """
22 Given a value of an entry point or series of entry points,
23 return each as an EntryPoint.
24 """
25 # normalize to a single sequence of lines
26 lines = yield_lines(value)
27 text = f'[{group}]\n' + '\n'.join(lines)
28 return metadata.EntryPoints._from_text(text)
29
30
31 def by_group_and_name(ep):
32 return ep.group, ep.name
33
34
35 def validate(eps: metadata.EntryPoints):
36 """
37 Ensure entry points are unique by group and name and validate each.
38 """
39 consume(map(ensure_valid, ensure_unique(eps, key=by_group_and_name)))
40 return eps
41
42
43 @functools.singledispatch
44 def load(eps):
45 """
46 Given a Distribution.entry_points, produce EntryPoints.
47 """
48 groups = itertools.chain.from_iterable(
49 load_group(value, group)
50 for group, value in eps.items())
51 return validate(metadata.EntryPoints(groups))
52
53
54 @load.register(str)
55 def _(eps):
56 return validate(metadata.EntryPoints._from_text(eps))
57
58
59 load.register(type(None), lambda x: x)
60
61
62 @pass_none
63 def render(eps: metadata.EntryPoints):
64 by_group = operator.attrgetter('group')
65 groups = itertools.groupby(sorted(eps, key=by_group), by_group)
66
67 return '\n'.join(
68 f'[{group}]\n{render_items(items)}\n'
69 for group, items in groups
70 )
71
72
73 def render_items(eps):
74 return '\n'.join(
75 f'{ep.name} = {ep.value}'
76 for ep in sorted(eps)
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setuptools/_entry_points.py b/setuptools/_entry_points.py
--- a/setuptools/_entry_points.py
+++ b/setuptools/_entry_points.py
@@ -53,7 +53,16 @@
@load.register(str)
def _(eps):
- return validate(metadata.EntryPoints._from_text(eps))
+ r"""
+ >>> ep, = load('[console_scripts]\nfoo=bar')
+ >>> ep.group
+ 'console_scripts'
+ >>> ep.name
+ 'foo'
+ >>> ep.value
+ 'bar'
+ """
+ return validate(metadata.EntryPoints(metadata.EntryPoints._from_text(eps)))
load.register(type(None), lambda x: x)
| {"golden_diff": "diff --git a/setuptools/_entry_points.py b/setuptools/_entry_points.py\n--- a/setuptools/_entry_points.py\n+++ b/setuptools/_entry_points.py\n@@ -53,7 +53,16 @@\n \n @load.register(str)\n def _(eps):\n- return validate(metadata.EntryPoints._from_text(eps))\n+ r\"\"\"\n+ >>> ep, = load('[console_scripts]\\nfoo=bar')\n+ >>> ep.group\n+ 'console_scripts'\n+ >>> ep.name\n+ 'foo'\n+ >>> ep.value\n+ 'bar'\n+ \"\"\"\n+ return validate(metadata.EntryPoints(metadata.EntryPoints._from_text(eps)))\n \n \n load.register(type(None), lambda x: x)\n", "issue": "[BUG] Entry points are missing when supplied as a string.\n### setuptools version\r\n\r\nsetuptools==60.9.0\r\n\r\n### Python version\r\n\r\n3.8.12\r\n\r\n### OS\r\n\r\nDebian\r\n\r\n### Additional environment information\r\n\r\n_No response_\r\n\r\n### Description\r\n\r\nWhen installing a Python package which uses `setuptools` and has at least one `entry_point` directly from Git the `entry_point` script is not created with `setuptools==60.9.0`.\r\n\r\nI have verified that this does work with `setuptools=60.8.2` but not `60.9.0`.\r\n\r\n### Expected behavior\r\n\r\nThe expected `entry_point` script `tap-postgres` is present in `venv/bin/`.\r\n\r\n### How to Reproduce\r\n\r\n```\r\npython -m venv venv\r\nsource venv/bin/activate\r\npip install git+https://github.com/thread/pipelinewise-tap-postgres.git@ee9bedb4711bb27c9660892f56c8fb27f3770d2a\r\n```\r\n\r\n### Output\r\n\r\n```console\r\n\r\n```\r\n\n[BUG] Entry points are missing when supplied as a string.\n### setuptools version\r\n\r\nsetuptools==60.9.0\r\n\r\n### Python version\r\n\r\n3.8.12\r\n\r\n### OS\r\n\r\nDebian\r\n\r\n### Additional environment information\r\n\r\n_No response_\r\n\r\n### Description\r\n\r\nWhen installing a Python package which uses `setuptools` and has at least one `entry_point` directly from Git the `entry_point` script is not created with `setuptools==60.9.0`.\r\n\r\nI have verified that this does work with `setuptools=60.8.2` but not `60.9.0`.\r\n\r\n### Expected behavior\r\n\r\nThe expected `entry_point` script `tap-postgres` is present in `venv/bin/`.\r\n\r\n### How to Reproduce\r\n\r\n```\r\npython -m venv venv\r\nsource venv/bin/activate\r\npip install git+https://github.com/thread/pipelinewise-tap-postgres.git@ee9bedb4711bb27c9660892f56c8fb27f3770d2a\r\n```\r\n\r\n### Output\r\n\r\n```console\r\n\r\n```\r\n\n", "before_files": [{"content": "import functools\nimport operator\nimport itertools\n\nfrom .extern.jaraco.text import yield_lines\nfrom .extern.jaraco.functools import pass_none\nfrom ._importlib import metadata\nfrom ._itertools import ensure_unique\nfrom .extern.more_itertools import consume\n\n\ndef ensure_valid(ep):\n \"\"\"\n Exercise one of the dynamic properties to trigger\n the pattern match.\n \"\"\"\n ep.extras\n\n\ndef load_group(value, group):\n \"\"\"\n Given a value of an entry point or series of entry points,\n return each as an EntryPoint.\n \"\"\"\n # normalize to a single sequence of lines\n lines = yield_lines(value)\n text = f'[{group}]\\n' + '\\n'.join(lines)\n return metadata.EntryPoints._from_text(text)\n\n\ndef by_group_and_name(ep):\n return ep.group, ep.name\n\n\ndef validate(eps: metadata.EntryPoints):\n \"\"\"\n Ensure entry points are unique by group and name and validate each.\n \"\"\"\n consume(map(ensure_valid, ensure_unique(eps, key=by_group_and_name)))\n return eps\n\n\[email protected]\ndef load(eps):\n \"\"\"\n Given a Distribution.entry_points, produce EntryPoints.\n \"\"\"\n groups = 
itertools.chain.from_iterable(\n load_group(value, group)\n for group, value in eps.items())\n return validate(metadata.EntryPoints(groups))\n\n\[email protected](str)\ndef _(eps):\n return validate(metadata.EntryPoints._from_text(eps))\n\n\nload.register(type(None), lambda x: x)\n\n\n@pass_none\ndef render(eps: metadata.EntryPoints):\n by_group = operator.attrgetter('group')\n groups = itertools.groupby(sorted(eps, key=by_group), by_group)\n\n return '\\n'.join(\n f'[{group}]\\n{render_items(items)}\\n'\n for group, items in groups\n )\n\n\ndef render_items(eps):\n return '\\n'.join(\n f'{ep.name} = {ep.value}'\n for ep in sorted(eps)\n )\n", "path": "setuptools/_entry_points.py"}], "after_files": [{"content": "import functools\nimport operator\nimport itertools\n\nfrom .extern.jaraco.text import yield_lines\nfrom .extern.jaraco.functools import pass_none\nfrom ._importlib import metadata\nfrom ._itertools import ensure_unique\nfrom .extern.more_itertools import consume\n\n\ndef ensure_valid(ep):\n \"\"\"\n Exercise one of the dynamic properties to trigger\n the pattern match.\n \"\"\"\n ep.extras\n\n\ndef load_group(value, group):\n \"\"\"\n Given a value of an entry point or series of entry points,\n return each as an EntryPoint.\n \"\"\"\n # normalize to a single sequence of lines\n lines = yield_lines(value)\n text = f'[{group}]\\n' + '\\n'.join(lines)\n return metadata.EntryPoints._from_text(text)\n\n\ndef by_group_and_name(ep):\n return ep.group, ep.name\n\n\ndef validate(eps: metadata.EntryPoints):\n \"\"\"\n Ensure entry points are unique by group and name and validate each.\n \"\"\"\n consume(map(ensure_valid, ensure_unique(eps, key=by_group_and_name)))\n return eps\n\n\[email protected]\ndef load(eps):\n \"\"\"\n Given a Distribution.entry_points, produce EntryPoints.\n \"\"\"\n groups = itertools.chain.from_iterable(\n load_group(value, group)\n for group, value in eps.items())\n return validate(metadata.EntryPoints(groups))\n\n\[email protected](str)\ndef _(eps):\n r\"\"\"\n >>> ep, = load('[console_scripts]\\nfoo=bar')\n >>> ep.group\n 'console_scripts'\n >>> ep.name\n 'foo'\n >>> ep.value\n 'bar'\n \"\"\"\n return validate(metadata.EntryPoints(metadata.EntryPoints._from_text(eps)))\n\n\nload.register(type(None), lambda x: x)\n\n\n@pass_none\ndef render(eps: metadata.EntryPoints):\n by_group = operator.attrgetter('group')\n groups = itertools.groupby(sorted(eps, key=by_group), by_group)\n\n return '\\n'.join(\n f'[{group}]\\n{render_items(items)}\\n'\n for group, items in groups\n )\n\n\ndef render_items(eps):\n return '\\n'.join(\n f'{ep.name} = {ep.value}'\n for ep in sorted(eps)\n )\n", "path": "setuptools/_entry_points.py"}]} | 1,343 | 156 |
gh_patches_debug_14850 | rasdani/github-patches | git_diff | liqd__adhocracy4-893 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
django-admin: place comment's is_blocked next to is_censored
**URL:**
**user:**
**expected behaviour:** As is_removed, is_censored and is_blocked have a very similar behaviour, the is_blocked field should be next to the first two fields. In the detail as well as in the list view
**behaviour:**
**important screensize:**
**device & browser:**
**Comment/Question:**
Screenshot?


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `adhocracy4/comments/admin.py`
Content:
```
1 from django.contrib import admin
2
3 from .models import Comment
4
5
6 @admin.register(Comment)
7 class CommentAdmin(admin.ModelAdmin):
8 fields = (
9 'content_type', 'content_object', 'comment', 'is_removed',
10 'is_censored', 'is_moderator_marked', 'creator', 'comment_categories',
11 'is_blocked'
12 )
13 readonly_fields = ('creator', 'content_type', 'content_object')
14 list_display = (
15 '__str__', 'creator', 'is_removed', 'is_censored', 'created',
16 'is_blocked'
17 )
18 search_fields = ('comment',)
19 date_hierarchy = 'created'
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/adhocracy4/comments/admin.py b/adhocracy4/comments/admin.py
--- a/adhocracy4/comments/admin.py
+++ b/adhocracy4/comments/admin.py
@@ -7,13 +7,13 @@
class CommentAdmin(admin.ModelAdmin):
fields = (
'content_type', 'content_object', 'comment', 'is_removed',
- 'is_censored', 'is_moderator_marked', 'creator', 'comment_categories',
- 'is_blocked'
+ 'is_censored', 'is_blocked', 'is_moderator_marked',
+ 'creator', 'comment_categories'
)
readonly_fields = ('creator', 'content_type', 'content_object')
list_display = (
- '__str__', 'creator', 'is_removed', 'is_censored', 'created',
- 'is_blocked'
+ '__str__', 'creator', 'is_removed', 'is_censored',
+ 'is_blocked', 'created'
)
search_fields = ('comment',)
date_hierarchy = 'created'
| {"golden_diff": "diff --git a/adhocracy4/comments/admin.py b/adhocracy4/comments/admin.py\n--- a/adhocracy4/comments/admin.py\n+++ b/adhocracy4/comments/admin.py\n@@ -7,13 +7,13 @@\n class CommentAdmin(admin.ModelAdmin):\n fields = (\n 'content_type', 'content_object', 'comment', 'is_removed',\n- 'is_censored', 'is_moderator_marked', 'creator', 'comment_categories',\n- 'is_blocked'\n+ 'is_censored', 'is_blocked', 'is_moderator_marked',\n+ 'creator', 'comment_categories'\n )\n readonly_fields = ('creator', 'content_type', 'content_object')\n list_display = (\n- '__str__', 'creator', 'is_removed', 'is_censored', 'created',\n- 'is_blocked'\n+ '__str__', 'creator', 'is_removed', 'is_censored',\n+ 'is_blocked', 'created'\n )\n search_fields = ('comment',)\n date_hierarchy = 'created'\n", "issue": "django-admin: place comment's is_blocked next to is_censored\n**URL:** \r\n**user:** \r\n**expected behaviour:** As is_removed, is_censored and is_blocked have a very similar behaviour, the is_blocked field should be next to the first two fields. In the detail as well as in the list view\r\n**behaviour:** \r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom .models import Comment\n\n\[email protected](Comment)\nclass CommentAdmin(admin.ModelAdmin):\n fields = (\n 'content_type', 'content_object', 'comment', 'is_removed',\n 'is_censored', 'is_moderator_marked', 'creator', 'comment_categories',\n 'is_blocked'\n )\n readonly_fields = ('creator', 'content_type', 'content_object')\n list_display = (\n '__str__', 'creator', 'is_removed', 'is_censored', 'created',\n 'is_blocked'\n )\n search_fields = ('comment',)\n date_hierarchy = 'created'\n", "path": "adhocracy4/comments/admin.py"}], "after_files": [{"content": "from django.contrib import admin\n\nfrom .models import Comment\n\n\[email protected](Comment)\nclass CommentAdmin(admin.ModelAdmin):\n fields = (\n 'content_type', 'content_object', 'comment', 'is_removed',\n 'is_censored', 'is_blocked', 'is_moderator_marked',\n 'creator', 'comment_categories'\n )\n readonly_fields = ('creator', 'content_type', 'content_object')\n list_display = (\n '__str__', 'creator', 'is_removed', 'is_censored',\n 'is_blocked', 'created'\n )\n search_fields = ('comment',)\n date_hierarchy = 'created'\n", "path": "adhocracy4/comments/admin.py"}]} | 679 | 224 |
gh_patches_debug_3853 | rasdani/github-patches | git_diff | spack__spack-29252 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix neovim install on M1
With #29228 and #29229 in, the `spack install neovim` fails with:
```
1022 undef: _tigetstr
1023 undef: _cur_term
1024 undef: _setupterm
>> 1025 Undefined symbols for architecture arm64:
1026 "_tigetstr", referenced from:
1027 _try_load_terminfo_key in libtermkey.a(driver-ti.o)
1028 "_cur_term", referenced from:
1029 _load_terminfo in libtermkey.a(driver-ti.o)
1030 "_setupterm", referenced from:
1031 _new_driver in libtermkey.a(driver-ti.o)
1032 _load_terminfo in libtermkey.a(driver-ti.o)
1033 ld: symbol(s) not found for architecture arm64
```
This happens while linking the `nvim` executable. The symbols come from `ncurses`, but linking `ncurses` explicitly didn't seem to fix it. However, the current PR fixes it: one must turn off `termlib` in `ncurses` and then link it explicitly, after which `nvim` builds just fine. I am opening this PR as a draft, because `+termlib` seems hardwired in `lua`, so I don't know how to fix this properly. Also, just adding `ncurses` in the cmake for `neovim` doesn't feel right; one should explicitly depend on `ncurses` and then find it using cmake. I don't have time to work on that, but this PR might be helpful to others who want to finish this work. Either way, neovim seems to work fine now.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/libtermkey/package.py`
Content:
```
1 # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class Libtermkey(Package):
10 """Easy keyboard entry processing for terminal programs"""
11 homepage = "http://www.leonerd.org.uk/code/libtermkey/"
12 url = "http://www.leonerd.org.uk/code/libtermkey/libtermkey-0.18.tar.gz"
13
14 version('0.22', sha256='6945bd3c4aaa83da83d80a045c5563da4edd7d0374c62c0d35aec09eb3014600')
15 version('0.18', sha256='239746de41c845af52bb3c14055558f743292dd6c24ac26c2d6567a5a6093926')
16 version('0.17', sha256='68949364ed5eaad857b3dea10071cde74b00b9f236dfbb702169f246c3cef389')
17 version('0.16', sha256='6c8136efa5d0b3277014a5d4519ea81190079c82656b7db1655a1bd147326a70')
18 version('0.15b', sha256='6825422c6297e4f81b2c48962b4512585ca8a50bf31f24b3234a1be71a9d7a6e')
19 version('0.14', sha256='3d114d4509499b80a583ea39cd35f18268aacf4a7bbf56c142cd032632005c79')
20
21 depends_on('libtool', type='build')
22 depends_on('ncurses')
23 depends_on('pkgconfig')
24
25 def install(self, spec, prefix):
26 make()
27 make("install", "PREFIX=" + prefix)
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/var/spack/repos/builtin/packages/libtermkey/package.py b/var/spack/repos/builtin/packages/libtermkey/package.py
--- a/var/spack/repos/builtin/packages/libtermkey/package.py
+++ b/var/spack/repos/builtin/packages/libtermkey/package.py
@@ -19,7 +19,7 @@
version('0.14', sha256='3d114d4509499b80a583ea39cd35f18268aacf4a7bbf56c142cd032632005c79')
depends_on('libtool', type='build')
- depends_on('ncurses')
+ depends_on('unibilium')
depends_on('pkgconfig')
def install(self, spec, prefix):
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/libtermkey/package.py b/var/spack/repos/builtin/packages/libtermkey/package.py\n--- a/var/spack/repos/builtin/packages/libtermkey/package.py\n+++ b/var/spack/repos/builtin/packages/libtermkey/package.py\n@@ -19,7 +19,7 @@\n version('0.14', sha256='3d114d4509499b80a583ea39cd35f18268aacf4a7bbf56c142cd032632005c79')\n \n depends_on('libtool', type='build')\n- depends_on('ncurses')\n+ depends_on('unibilium')\n depends_on('pkgconfig')\n \n def install(self, spec, prefix):\n", "issue": "Fix neovim install on M1\nWith #29228 and #29229 in, the `spack install neovim` fails with:\r\n```\r\n 1022 undef: _tigetstr\r\n 1023 undef: _cur_term\r\n 1024 undef: _setupterm\r\n >> 1025 Undefined symbols for architecture arm64:\r\n 1026 \"_tigetstr\", referenced from:\r\n 1027 _try_load_terminfo_key in libtermkey.a(driver-ti.o)\r\n 1028 \"_cur_term\", referenced from:\r\n 1029 _load_terminfo in libtermkey.a(driver-ti.o)\r\n 1030 \"_setupterm\", referenced from:\r\n 1031 _new_driver in libtermkey.a(driver-ti.o)\r\n 1032 _load_terminfo in libtermkey.a(driver-ti.o)\r\n 1033 ld: symbol(s) not found for architecture arm64\r\n```\r\nWhile linking the `nvim` executable. These symbols seem to be coming from `ncurses`, but linking `ncurses` explicitly didn't seem to fix it. However, the current PR fixes it. One must turn off `termlib` in `ncurses` and then one must explicitly link it. Then `nvim` builds just fine. I am opening this PR as a Draft, because the `+termlib` seems hardwired in `lua`, so I don't know how to fix this properly. Also just adding `ncurses` in the cmake for `neovim` doesn't feel right, one should explicitly depend on `ncurses` and then find it using cmake. I don't have time to work on that. But this PR might be helpful to others to finish this work. Either way, neovim seems to work fine now.\n", "before_files": [{"content": "# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Libtermkey(Package):\n \"\"\"Easy keyboard entry processing for terminal programs\"\"\"\n homepage = \"http://www.leonerd.org.uk/code/libtermkey/\"\n url = \"http://www.leonerd.org.uk/code/libtermkey/libtermkey-0.18.tar.gz\"\n\n version('0.22', sha256='6945bd3c4aaa83da83d80a045c5563da4edd7d0374c62c0d35aec09eb3014600')\n version('0.18', sha256='239746de41c845af52bb3c14055558f743292dd6c24ac26c2d6567a5a6093926')\n version('0.17', sha256='68949364ed5eaad857b3dea10071cde74b00b9f236dfbb702169f246c3cef389')\n version('0.16', sha256='6c8136efa5d0b3277014a5d4519ea81190079c82656b7db1655a1bd147326a70')\n version('0.15b', sha256='6825422c6297e4f81b2c48962b4512585ca8a50bf31f24b3234a1be71a9d7a6e')\n version('0.14', sha256='3d114d4509499b80a583ea39cd35f18268aacf4a7bbf56c142cd032632005c79')\n\n depends_on('libtool', type='build')\n depends_on('ncurses')\n depends_on('pkgconfig')\n\n def install(self, spec, prefix):\n make()\n make(\"install\", \"PREFIX=\" + prefix)\n", "path": "var/spack/repos/builtin/packages/libtermkey/package.py"}], "after_files": [{"content": "# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Libtermkey(Package):\n \"\"\"Easy keyboard entry processing for terminal programs\"\"\"\n homepage = \"http://www.leonerd.org.uk/code/libtermkey/\"\n url = \"http://www.leonerd.org.uk/code/libtermkey/libtermkey-0.18.tar.gz\"\n\n version('0.22', sha256='6945bd3c4aaa83da83d80a045c5563da4edd7d0374c62c0d35aec09eb3014600')\n version('0.18', sha256='239746de41c845af52bb3c14055558f743292dd6c24ac26c2d6567a5a6093926')\n version('0.17', sha256='68949364ed5eaad857b3dea10071cde74b00b9f236dfbb702169f246c3cef389')\n version('0.16', sha256='6c8136efa5d0b3277014a5d4519ea81190079c82656b7db1655a1bd147326a70')\n version('0.15b', sha256='6825422c6297e4f81b2c48962b4512585ca8a50bf31f24b3234a1be71a9d7a6e')\n version('0.14', sha256='3d114d4509499b80a583ea39cd35f18268aacf4a7bbf56c142cd032632005c79')\n\n depends_on('libtool', type='build')\n depends_on('unibilium')\n depends_on('pkgconfig')\n\n def install(self, spec, prefix):\n make()\n make(\"install\", \"PREFIX=\" + prefix)\n", "path": "var/spack/repos/builtin/packages/libtermkey/package.py"}]} | 1,349 | 190 |
gh_patches_debug_4404 | rasdani/github-patches | git_diff | pallets__werkzeug-930 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add an 'extras_require' entry to setup.py for watchdog
The reloader feature has an optional dependency on the watchdog package:
http://werkzeug.pocoo.org/docs/0.11/serving/#reloader
Whilst people can just add 'watchdog' to their requirements files, it's clearer to specifically add an `extras_require` entry for it to setup.py, which allows people to instead add eg `Werkzeug[watchdog]` to their requirements file, which more clearly shows why the dependency is required.
In addition, should Werkzeug ever need a specific version of watchdog in the future (eg due to backwards incompatibilities), then the `extras_require` entry could be updated with a version range, causing zero breakage for people who'd used the `Werkzeug[watchdog]` form in their requirements file.
I'll open a PR for this shortly :-)
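Concretely, the proposed entry would look something like this in `setup.py` (the extras name simply mirrors the package it pulls in; a version range could be added later if ever needed):
```python
from setuptools import setup

setup(
    name='Werkzeug',
    # ... the existing metadata and packages stay unchanged ...
    extras_require={
        'watchdog': ['watchdog'],
    },
)
```
A consumer then opts in with `pip install "Werkzeug[watchdog]"` or by listing `Werkzeug[watchdog]` in a requirements file.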
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Werkzeug
4 ========
5
6 Werkzeug started as simple collection of various utilities for WSGI
7 applications and has become one of the most advanced WSGI utility
8 modules. It includes a powerful debugger, full featured request and
9 response objects, HTTP utilities to handle entity tags, cache control
10 headers, HTTP dates, cookie handling, file uploads, a powerful URL
11 routing system and a bunch of community contributed addon modules.
12
13 Werkzeug is unicode aware and doesn't enforce a specific template
14 engine, database adapter or anything else. It doesn't even enforce
15 a specific way of handling requests and leaves all that up to the
16 developer. It's most useful for end user applications which should work
17 on as many server environments as possible (such as blogs, wikis,
18 bulletin boards, etc.).
19
20 Details and example applications are available on the
21 `Werkzeug website <http://werkzeug.pocoo.org/>`_.
22
23
24 Features
25 --------
26
27 - unicode awareness
28
29 - request and response objects
30
31 - various utility functions for dealing with HTTP headers such as
32 `Accept` and `Cache-Control` headers.
33
34 - thread local objects with proper cleanup at request end
35
36 - an interactive debugger
37
38 - A simple WSGI server with support for threading and forking
39 with an automatic reloader.
40
41 - a flexible URL routing system with REST support.
42
43 - fully WSGI compatible
44
45
46 Development Version
47 -------------------
48
49 The Werkzeug development version can be installed by cloning the git
50 repository from `github`_::
51
52 git clone [email protected]:pallets/werkzeug.git
53
54 .. _github: http://github.com/pallets/werkzeug
55 """
56 import ast
57 import re
58 try:
59 from setuptools import setup, Command
60 except ImportError:
61 from distutils.core import setup, Command
62
63
64 _version_re = re.compile(r'__version__\s+=\s+(.*)')
65
66 with open('werkzeug/__init__.py', 'rb') as f:
67 version = str(ast.literal_eval(_version_re.search(
68 f.read().decode('utf-8')).group(1)))
69
70
71 class TestCommand(Command):
72 user_options = []
73
74 def initialize_options(self):
75 pass
76
77 def finalize_options(self):
78 pass
79
80 def run(self):
81 import pytest
82 pytest.cmdline.main(args=[])
83
84
85 setup(
86 name='Werkzeug',
87 version=version,
88 url='http://werkzeug.pocoo.org/',
89 license='BSD',
90 author='Armin Ronacher',
91 author_email='[email protected]',
92 description='The Swiss Army knife of Python web development',
93 long_description=__doc__,
94 classifiers=[
95 'Development Status :: 5 - Production/Stable',
96 'Environment :: Web Environment',
97 'Intended Audience :: Developers',
98 'License :: OSI Approved :: BSD License',
99 'Operating System :: OS Independent',
100 'Programming Language :: Python',
101 'Programming Language :: Python :: 2',
102 'Programming Language :: Python :: 2.6',
103 'Programming Language :: Python :: 2.7',
104 'Programming Language :: Python :: 3',
105 'Programming Language :: Python :: 3.3',
106 'Programming Language :: Python :: 3.4',
107 'Programming Language :: Python :: 3.5',
108 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
109 'Topic :: Software Development :: Libraries :: Python Modules'
110 ],
111 packages=['werkzeug', 'werkzeug.debug', 'werkzeug.contrib'],
112 cmdclass=dict(test=TestCommand),
113 include_package_data=True,
114 zip_safe=False,
115 platforms='any'
116 )
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -109,6 +109,9 @@
'Topic :: Software Development :: Libraries :: Python Modules'
],
packages=['werkzeug', 'werkzeug.debug', 'werkzeug.contrib'],
+ extras_require={
+ 'watchdog': ['watchdog'],
+ },
cmdclass=dict(test=TestCommand),
include_package_data=True,
zip_safe=False,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -109,6 +109,9 @@\n 'Topic :: Software Development :: Libraries :: Python Modules'\n ],\n packages=['werkzeug', 'werkzeug.debug', 'werkzeug.contrib'],\n+ extras_require={\n+ 'watchdog': ['watchdog'],\n+ },\n cmdclass=dict(test=TestCommand),\n include_package_data=True,\n zip_safe=False,\n", "issue": "Add an 'extras_require' entry to setup.py for watchdog\nThe reloader feature has an optional dependency on the watchdog package:\nhttp://werkzeug.pocoo.org/docs/0.11/serving/#reloader\n\nWhilst people can just add 'watchdog' to their requirements files, it's clearer to specifically add an `extras_require` entry for it to setup.py, which allows people to instead add eg `Werkzeug[watchdog]` to their requirements file, which more clearly shows why the dependency is required.\n\nIn addition, should Werkzeug ever need a specific version of watchdog in the future (eg due to backwards incompatibilities), then the `extras_require` entry could be updated with a version range, causing zero breakage for people who'd used the `Werkzeug[watchdog]` form in their requirements file.\n\nI'll open a PR for this shortly :-)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nWerkzeug\n========\n\nWerkzeug started as simple collection of various utilities for WSGI\napplications and has become one of the most advanced WSGI utility\nmodules. It includes a powerful debugger, full featured request and\nresponse objects, HTTP utilities to handle entity tags, cache control\nheaders, HTTP dates, cookie handling, file uploads, a powerful URL\nrouting system and a bunch of community contributed addon modules.\n\nWerkzeug is unicode aware and doesn't enforce a specific template\nengine, database adapter or anything else. It doesn't even enforce\na specific way of handling requests and leaves all that up to the\ndeveloper. It's most useful for end user applications which should work\non as many server environments as possible (such as blogs, wikis,\nbulletin boards, etc.).\n\nDetails and example applications are available on the\n`Werkzeug website <http://werkzeug.pocoo.org/>`_.\n\n\nFeatures\n--------\n\n- unicode awareness\n\n- request and response objects\n\n- various utility functions for dealing with HTTP headers such as\n `Accept` and `Cache-Control` headers.\n\n- thread local objects with proper cleanup at request end\n\n- an interactive debugger\n\n- A simple WSGI server with support for threading and forking\n with an automatic reloader.\n\n- a flexible URL routing system with REST support.\n\n- fully WSGI compatible\n\n\nDevelopment Version\n-------------------\n\nThe Werkzeug development version can be installed by cloning the git\nrepository from `github`_::\n\n git clone [email protected]:pallets/werkzeug.git\n\n.. 
_github: http://github.com/pallets/werkzeug\n\"\"\"\nimport ast\nimport re\ntry:\n from setuptools import setup, Command\nexcept ImportError:\n from distutils.core import setup, Command\n\n\n_version_re = re.compile(r'__version__\\s+=\\s+(.*)')\n\nwith open('werkzeug/__init__.py', 'rb') as f:\n version = str(ast.literal_eval(_version_re.search(\n f.read().decode('utf-8')).group(1)))\n\n\nclass TestCommand(Command):\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n import pytest\n pytest.cmdline.main(args=[])\n\n\nsetup(\n name='Werkzeug',\n version=version,\n url='http://werkzeug.pocoo.org/',\n license='BSD',\n author='Armin Ronacher',\n author_email='[email protected]',\n description='The Swiss Army knife of Python web development',\n long_description=__doc__,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',\n 'Topic :: Software Development :: Libraries :: Python Modules'\n ],\n packages=['werkzeug', 'werkzeug.debug', 'werkzeug.contrib'],\n cmdclass=dict(test=TestCommand),\n include_package_data=True,\n zip_safe=False,\n platforms='any'\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nWerkzeug\n========\n\nWerkzeug started as simple collection of various utilities for WSGI\napplications and has become one of the most advanced WSGI utility\nmodules. It includes a powerful debugger, full featured request and\nresponse objects, HTTP utilities to handle entity tags, cache control\nheaders, HTTP dates, cookie handling, file uploads, a powerful URL\nrouting system and a bunch of community contributed addon modules.\n\nWerkzeug is unicode aware and doesn't enforce a specific template\nengine, database adapter or anything else. It doesn't even enforce\na specific way of handling requests and leaves all that up to the\ndeveloper. It's most useful for end user applications which should work\non as many server environments as possible (such as blogs, wikis,\nbulletin boards, etc.).\n\nDetails and example applications are available on the\n`Werkzeug website <http://werkzeug.pocoo.org/>`_.\n\n\nFeatures\n--------\n\n- unicode awareness\n\n- request and response objects\n\n- various utility functions for dealing with HTTP headers such as\n `Accept` and `Cache-Control` headers.\n\n- thread local objects with proper cleanup at request end\n\n- an interactive debugger\n\n- A simple WSGI server with support for threading and forking\n with an automatic reloader.\n\n- a flexible URL routing system with REST support.\n\n- fully WSGI compatible\n\n\nDevelopment Version\n-------------------\n\nThe Werkzeug development version can be installed by cloning the git\nrepository from `github`_::\n\n git clone [email protected]:pallets/werkzeug.git\n\n.. 
_github: http://github.com/pallets/werkzeug\n\"\"\"\nimport ast\nimport re\ntry:\n from setuptools import setup, Command\nexcept ImportError:\n from distutils.core import setup, Command\n\n\n_version_re = re.compile(r'__version__\\s+=\\s+(.*)')\n\nwith open('werkzeug/__init__.py', 'rb') as f:\n version = str(ast.literal_eval(_version_re.search(\n f.read().decode('utf-8')).group(1)))\n\n\nclass TestCommand(Command):\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n import pytest\n pytest.cmdline.main(args=[])\n\n\nsetup(\n name='Werkzeug',\n version=version,\n url='http://werkzeug.pocoo.org/',\n license='BSD',\n author='Armin Ronacher',\n author_email='[email protected]',\n description='The Swiss Army knife of Python web development',\n long_description=__doc__,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',\n 'Topic :: Software Development :: Libraries :: Python Modules'\n ],\n packages=['werkzeug', 'werkzeug.debug', 'werkzeug.contrib'],\n extras_require={\n 'watchdog': ['watchdog'],\n },\n cmdclass=dict(test=TestCommand),\n include_package_data=True,\n zip_safe=False,\n platforms='any'\n)\n", "path": "setup.py"}]} | 1,476 | 102 |
gh_patches_debug_1694 | rasdani/github-patches | git_diff | iterative__dvc-4826 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unexpected error on `dvc diff`
## Bug Report
When running `dvc diff staging`, I got a KeyError, here is the traceback:
```
Traceback (most recent call last):
File "/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/main.py", line 76, in main
ret = cmd.run()
File "/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/command/diff.py", line 130, in run
diff = self.repo.diff(self.args.a_rev, self.args.b_rev)
File "/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/repo/__init__.py", line 54, in wrapper
return f(repo, *args, **kwargs)
File "/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/repo/diff.py", line 43, in diff
missing = sorted(_filter_missing(self, deleted_or_missing))
File "/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/repo/diff.py", line 125, in _filter_missing
if out.status()[str(out)] == "not in cache":
KeyError: 'data/KPI/KPI_from_dvc/en/post_probs'
```
It only happens when I give a specific revision.
Any ideas? Could it be my data?
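For reference, the failing check in `_filter_missing` boils down to something like the sketch below (a hedged reconstruction, not a confirmed diagnosis; `out` stands for a DVC output object whose `status()` may return an empty dict for this path):

```python
# Hedged sketch: out.status()[str(out)] raises KeyError when the key is absent,
# while a .get() lookup would just make the comparison evaluate to False.
def is_missing_from_cache(out):
    return out.status().get(str(out)) == "not in cache"
```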
### Please provide information about your setup
**Output of `dvc version`:**
```console
$ dvc version
DVC version: 1.9.1 (pip)
---------------------------------
Platform: Python 3.7.3 on Linux-5.4.0-1029-aws-x86_64-with-debian-buster-sid
Supports: http, https, s3
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme0n1p1
Workspace directory: ext4 on /dev/nvme0n1p1
Repo: dvc, git
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/repo/diff.py`
Content:
```
1 import logging
2 import os
3
4 from dvc.repo import locked
5 from dvc.tree.local import LocalTree
6 from dvc.tree.repo import RepoTree
7
8 logger = logging.getLogger(__name__)
9
10
11 @locked
12 def diff(self, a_rev="HEAD", b_rev=None):
13 """
14 By default, it compares the workspace with the last commit's tree.
15
16 This implementation differs from `git diff` since DVC doesn't have
17 the concept of `index`, but it keeps the same interface, thus,
18 `dvc diff` would be the same as `dvc diff HEAD`.
19 """
20
21 if self.scm.no_commits:
22 return {}
23
24 b_rev = b_rev if b_rev else "workspace"
25 results = {}
26 for rev in self.brancher(revs=[a_rev, b_rev]):
27 if rev == "workspace" and rev != b_rev:
28 # brancher always returns workspace, but we only need to compute
29 # workspace paths/checksums if b_rev was None
30 continue
31 results[rev] = _paths_checksums(self)
32
33 old = results[a_rev]
34 new = results[b_rev]
35
36 # Compare paths between the old and new tree.
37 # set() efficiently converts dict keys to a set
38 added = sorted(set(new) - set(old))
39 deleted_or_missing = set(old) - set(new)
40 if b_rev == "workspace":
41 # missing status is only applicable when diffing local workspace
42 # against a commit
43 missing = sorted(_filter_missing(self, deleted_or_missing))
44 else:
45 missing = []
46 deleted = sorted(deleted_or_missing - set(missing))
47 modified = sorted(set(old) & set(new))
48
49 ret = {
50 "added": [{"path": path, "hash": new[path]} for path in added],
51 "deleted": [{"path": path, "hash": old[path]} for path in deleted],
52 "modified": [
53 {"path": path, "hash": {"old": old[path], "new": new[path]}}
54 for path in modified
55 if old[path] != new[path]
56 ],
57 "not in cache": [
58 {"path": path, "hash": old[path]} for path in missing
59 ],
60 }
61
62 return ret if any(ret.values()) else {}
63
64
65 def _paths_checksums(repo):
66 """
67 A dictionary of checksums addressed by relpaths collected from
68 the current tree outputs.
69
70 To help distinguish between a directory and a file output,
71 the former one will come with a trailing slash in the path:
72
73 directory: "data/"
74 file: "data"
75 """
76
77 return dict(_output_paths(repo))
78
79
80 def _output_paths(repo):
81 repo_tree = RepoTree(repo, stream=True)
82 on_working_tree = isinstance(repo.tree, LocalTree)
83
84 def _exists(output):
85 if on_working_tree:
86 return output.exists
87 return True
88
89 def _to_path(output):
90 return (
91 str(output)
92 if not output.is_dir_checksum
93 else os.path.join(str(output), "")
94 )
95
96 def _to_checksum(output):
97 if on_working_tree:
98 return repo.cache.local.tree.get_hash(output.path_info).value
99 return output.hash_info.value
100
101 for stage in repo.stages:
102 for output in stage.outs:
103 if _exists(output):
104 yield _to_path(output), _to_checksum(output)
105 if output.is_dir_checksum:
106 yield from _dir_output_paths(repo_tree, output)
107
108
109 def _dir_output_paths(repo_tree, output):
110 from dvc.config import NoRemoteError
111
112 try:
113 for fname in repo_tree.walk_files(output.path_info):
114 yield str(fname), repo_tree.get_file_hash(fname).value
115 except NoRemoteError:
116 logger.warning("dir cache entry for '%s' is missing", output)
117
118
119 def _filter_missing(repo, paths):
120 repo_tree = RepoTree(repo, stream=True)
121 for path in paths:
122 metadata = repo_tree.metadata(path)
123 if metadata.is_dvc:
124 out = metadata.outs[0]
125 if out.status()[str(out)] == "not in cache":
126 yield path
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/repo/diff.py b/dvc/repo/diff.py
--- a/dvc/repo/diff.py
+++ b/dvc/repo/diff.py
@@ -122,5 +122,5 @@
metadata = repo_tree.metadata(path)
if metadata.is_dvc:
out = metadata.outs[0]
- if out.status()[str(out)] == "not in cache":
+ if out.status().get(str(out)) == "not in cache":
yield path
| {"golden_diff": "diff --git a/dvc/repo/diff.py b/dvc/repo/diff.py\n--- a/dvc/repo/diff.py\n+++ b/dvc/repo/diff.py\n@@ -122,5 +122,5 @@\n metadata = repo_tree.metadata(path)\n if metadata.is_dvc:\n out = metadata.outs[0]\n- if out.status()[str(out)] == \"not in cache\":\n+ if out.status().get(str(out)) == \"not in cache\":\n yield path\n", "issue": "Unexpected error on `dvc diff`\n## Bug Report\r\nWhen running `dvc diff staging`, I got a KeyError, here is the traceback:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/main.py\", line 76, in main\r\n ret = cmd.run()\r\n File \"/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/command/diff.py\", line 130, in run\r\n diff = self.repo.diff(self.args.a_rev, self.args.b_rev)\r\n File \"/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/repo/__init__.py\", line 54, in wrapper\r\n return f(repo, *args, **kwargs)\r\n File \"/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/repo/diff.py\", line 43, in diff\r\n missing = sorted(_filter_missing(self, deleted_or_missing))\r\n File \"/home/ubuntu/.local/share/virtualenvs/speech-api-EI_ft4iY/lib/python3.7/site-packages/dvc/repo/diff.py\", line 125, in _filter_missing\r\n if out.status()[str(out)] == \"not in cache\":\r\nKeyError: 'data/KPI/KPI_from_dvc/en/post_probs'\r\n```\r\n\r\nIt only happens when I give a specific revision.\r\n\r\nAny ideas? Could it be my data?\r\n\r\n### Please provide information about your setup\r\n\r\n**Output of `dvc version`:**\r\n\r\n```console\r\n$ dvc version\r\nDVC version: 1.9.1 (pip)\r\n---------------------------------\r\nPlatform: Python 3.7.3 on Linux-5.4.0-1029-aws-x86_64-with-debian-buster-sid\r\nSupports: http, https, s3\r\nCache types: hardlink, symlink\r\nCache directory: ext4 on /dev/nvme0n1p1\r\nWorkspace directory: ext4 on /dev/nvme0n1p1\r\nRepo: dvc, git\r\n```\n", "before_files": [{"content": "import logging\nimport os\n\nfrom dvc.repo import locked\nfrom dvc.tree.local import LocalTree\nfrom dvc.tree.repo import RepoTree\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\ndef diff(self, a_rev=\"HEAD\", b_rev=None):\n \"\"\"\n By default, it compares the workspace with the last commit's tree.\n\n This implementation differs from `git diff` since DVC doesn't have\n the concept of `index`, but it keeps the same interface, thus,\n `dvc diff` would be the same as `dvc diff HEAD`.\n \"\"\"\n\n if self.scm.no_commits:\n return {}\n\n b_rev = b_rev if b_rev else \"workspace\"\n results = {}\n for rev in self.brancher(revs=[a_rev, b_rev]):\n if rev == \"workspace\" and rev != b_rev:\n # brancher always returns workspace, but we only need to compute\n # workspace paths/checksums if b_rev was None\n continue\n results[rev] = _paths_checksums(self)\n\n old = results[a_rev]\n new = results[b_rev]\n\n # Compare paths between the old and new tree.\n # set() efficiently converts dict keys to a set\n added = sorted(set(new) - set(old))\n deleted_or_missing = set(old) - set(new)\n if b_rev == \"workspace\":\n # missing status is only applicable when diffing local workspace\n # against a commit\n missing = sorted(_filter_missing(self, deleted_or_missing))\n else:\n missing = []\n deleted = sorted(deleted_or_missing - set(missing))\n modified = sorted(set(old) & set(new))\n\n ret = {\n \"added\": [{\"path\": path, \"hash\": new[path]} for path in added],\n 
\"deleted\": [{\"path\": path, \"hash\": old[path]} for path in deleted],\n \"modified\": [\n {\"path\": path, \"hash\": {\"old\": old[path], \"new\": new[path]}}\n for path in modified\n if old[path] != new[path]\n ],\n \"not in cache\": [\n {\"path\": path, \"hash\": old[path]} for path in missing\n ],\n }\n\n return ret if any(ret.values()) else {}\n\n\ndef _paths_checksums(repo):\n \"\"\"\n A dictionary of checksums addressed by relpaths collected from\n the current tree outputs.\n\n To help distinguish between a directory and a file output,\n the former one will come with a trailing slash in the path:\n\n directory: \"data/\"\n file: \"data\"\n \"\"\"\n\n return dict(_output_paths(repo))\n\n\ndef _output_paths(repo):\n repo_tree = RepoTree(repo, stream=True)\n on_working_tree = isinstance(repo.tree, LocalTree)\n\n def _exists(output):\n if on_working_tree:\n return output.exists\n return True\n\n def _to_path(output):\n return (\n str(output)\n if not output.is_dir_checksum\n else os.path.join(str(output), \"\")\n )\n\n def _to_checksum(output):\n if on_working_tree:\n return repo.cache.local.tree.get_hash(output.path_info).value\n return output.hash_info.value\n\n for stage in repo.stages:\n for output in stage.outs:\n if _exists(output):\n yield _to_path(output), _to_checksum(output)\n if output.is_dir_checksum:\n yield from _dir_output_paths(repo_tree, output)\n\n\ndef _dir_output_paths(repo_tree, output):\n from dvc.config import NoRemoteError\n\n try:\n for fname in repo_tree.walk_files(output.path_info):\n yield str(fname), repo_tree.get_file_hash(fname).value\n except NoRemoteError:\n logger.warning(\"dir cache entry for '%s' is missing\", output)\n\n\ndef _filter_missing(repo, paths):\n repo_tree = RepoTree(repo, stream=True)\n for path in paths:\n metadata = repo_tree.metadata(path)\n if metadata.is_dvc:\n out = metadata.outs[0]\n if out.status()[str(out)] == \"not in cache\":\n yield path\n", "path": "dvc/repo/diff.py"}], "after_files": [{"content": "import logging\nimport os\n\nfrom dvc.repo import locked\nfrom dvc.tree.local import LocalTree\nfrom dvc.tree.repo import RepoTree\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\ndef diff(self, a_rev=\"HEAD\", b_rev=None):\n \"\"\"\n By default, it compares the workspace with the last commit's tree.\n\n This implementation differs from `git diff` since DVC doesn't have\n the concept of `index`, but it keeps the same interface, thus,\n `dvc diff` would be the same as `dvc diff HEAD`.\n \"\"\"\n\n if self.scm.no_commits:\n return {}\n\n b_rev = b_rev if b_rev else \"workspace\"\n results = {}\n for rev in self.brancher(revs=[a_rev, b_rev]):\n if rev == \"workspace\" and rev != b_rev:\n # brancher always returns workspace, but we only need to compute\n # workspace paths/checksums if b_rev was None\n continue\n results[rev] = _paths_checksums(self)\n\n old = results[a_rev]\n new = results[b_rev]\n\n # Compare paths between the old and new tree.\n # set() efficiently converts dict keys to a set\n added = sorted(set(new) - set(old))\n deleted_or_missing = set(old) - set(new)\n if b_rev == \"workspace\":\n # missing status is only applicable when diffing local workspace\n # against a commit\n missing = sorted(_filter_missing(self, deleted_or_missing))\n else:\n missing = []\n deleted = sorted(deleted_or_missing - set(missing))\n modified = sorted(set(old) & set(new))\n\n ret = {\n \"added\": [{\"path\": path, \"hash\": new[path]} for path in added],\n \"deleted\": [{\"path\": path, \"hash\": old[path]} for path in deleted],\n 
\"modified\": [\n {\"path\": path, \"hash\": {\"old\": old[path], \"new\": new[path]}}\n for path in modified\n if old[path] != new[path]\n ],\n \"not in cache\": [\n {\"path\": path, \"hash\": old[path]} for path in missing\n ],\n }\n\n return ret if any(ret.values()) else {}\n\n\ndef _paths_checksums(repo):\n \"\"\"\n A dictionary of checksums addressed by relpaths collected from\n the current tree outputs.\n\n To help distinguish between a directory and a file output,\n the former one will come with a trailing slash in the path:\n\n directory: \"data/\"\n file: \"data\"\n \"\"\"\n\n return dict(_output_paths(repo))\n\n\ndef _output_paths(repo):\n repo_tree = RepoTree(repo, stream=True)\n on_working_tree = isinstance(repo.tree, LocalTree)\n\n def _exists(output):\n if on_working_tree:\n return output.exists\n return True\n\n def _to_path(output):\n return (\n str(output)\n if not output.is_dir_checksum\n else os.path.join(str(output), \"\")\n )\n\n def _to_checksum(output):\n if on_working_tree:\n return repo.cache.local.tree.get_hash(output.path_info).value\n return output.hash_info.value\n\n for stage in repo.stages:\n for output in stage.outs:\n if _exists(output):\n yield _to_path(output), _to_checksum(output)\n if output.is_dir_checksum:\n yield from _dir_output_paths(repo_tree, output)\n\n\ndef _dir_output_paths(repo_tree, output):\n from dvc.config import NoRemoteError\n\n try:\n for fname in repo_tree.walk_files(output.path_info):\n yield str(fname), repo_tree.get_file_hash(fname).value\n except NoRemoteError:\n logger.warning(\"dir cache entry for '%s' is missing\", output)\n\n\ndef _filter_missing(repo, paths):\n repo_tree = RepoTree(repo, stream=True)\n for path in paths:\n metadata = repo_tree.metadata(path)\n if metadata.is_dvc:\n out = metadata.outs[0]\n if out.status().get(str(out)) == \"not in cache\":\n yield path\n", "path": "dvc/repo/diff.py"}]} | 1,925 | 113 |
gh_patches_debug_39452 | rasdani/github-patches | git_diff | wagtail__wagtail-1375 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SnippetChooserPanel cannot take a model string in the model argument
We should update `SnippetChooserPanel` to use `resolve_model_string` (https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailcore/utils.py#L13-L37). This will make it behave more like `PageChooserPanel`.
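A rough sketch of the intended usage, assuming `resolve_model_string` keeps its documented contract of accepting either a model class or an 'app_label.ModelName' string (the wrapper function below is purely illustrative):

```python
# Illustrative only: resolve whatever the panel was given to a model class.
from wagtail.wagtailcore.utils import resolve_model_string

def resolve_snippet_type(snippet_type):
    # Accepts either a model class or a string such as 'demo.Advert';
    # raises LookupError/ValueError for unknown or malformed references.
    return resolve_model_string(snippet_type)
```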
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailsnippets/edit_handlers.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2
3 from django.template.loader import render_to_string
4 from django.contrib.contenttypes.models import ContentType
5 from django.utils.safestring import mark_safe
6 from django.utils.encoding import force_text
7
8 from wagtail.wagtailadmin.edit_handlers import BaseChooserPanel
9 from .widgets import AdminSnippetChooser
10
11
12 class BaseSnippetChooserPanel(BaseChooserPanel):
13 object_type_name = 'item'
14
15 _content_type = None
16
17 @classmethod
18 def widget_overrides(cls):
19 return {cls.field_name: AdminSnippetChooser(
20 content_type=cls.content_type(), snippet_type_name=cls.snippet_type_name)}
21
22 @classmethod
23 def content_type(cls):
24 if cls._content_type is None:
25 # TODO: infer the content type by introspection on the foreign key rather than having to pass it explicitly
26 cls._content_type = ContentType.objects.get_for_model(cls.snippet_type)
27
28 return cls._content_type
29
30 def render_as_field(self):
31 instance_obj = self.get_chosen_item()
32 return mark_safe(render_to_string(self.field_template, {
33 'field': self.bound_field,
34 self.object_type_name: instance_obj,
35 'snippet_type_name': self.snippet_type_name,
36 }))
37
38
39 class SnippetChooserPanel(object):
40 def __init__(self, field_name, snippet_type):
41 self.field_name = field_name
42 self.snippet_type = snippet_type
43
44 def bind_to_model(self, model):
45 return type(str('_SnippetChooserPanel'), (BaseSnippetChooserPanel,), {
46 'model': model,
47 'field_name': self.field_name,
48 'snippet_type_name': force_text(self.snippet_type._meta.verbose_name),
49 'snippet_type': self.snippet_type,
50 })
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/wagtailsnippets/edit_handlers.py b/wagtail/wagtailsnippets/edit_handlers.py
--- a/wagtail/wagtailsnippets/edit_handlers.py
+++ b/wagtail/wagtailsnippets/edit_handlers.py
@@ -4,28 +4,42 @@
from django.contrib.contenttypes.models import ContentType
from django.utils.safestring import mark_safe
from django.utils.encoding import force_text
+from django.core.exceptions import ImproperlyConfigured
from wagtail.wagtailadmin.edit_handlers import BaseChooserPanel
+from wagtail.wagtailcore.utils import resolve_model_string
from .widgets import AdminSnippetChooser
class BaseSnippetChooserPanel(BaseChooserPanel):
object_type_name = 'item'
- _content_type = None
+ _target_content_type = None
@classmethod
def widget_overrides(cls):
return {cls.field_name: AdminSnippetChooser(
- content_type=cls.content_type(), snippet_type_name=cls.snippet_type_name)}
+ content_type=cls.target_content_type(), snippet_type_name=cls.snippet_type_name)}
@classmethod
- def content_type(cls):
- if cls._content_type is None:
- # TODO: infer the content type by introspection on the foreign key rather than having to pass it explicitly
- cls._content_type = ContentType.objects.get_for_model(cls.snippet_type)
+ def target_content_type(cls):
+ if cls._target_content_type is None:
+ if cls.snippet_type:
+ try:
+ model = resolve_model_string(cls.snippet_type)
+ except LookupError:
+ raise ImproperlyConfigured("{0}.snippet_type must be of the form 'app_label.model_name', given {1!r}".format(
+ cls.__name__, cls.snippet_type))
+ except ValueError:
+ raise ImproperlyConfigured("{0}.snippet_type refers to model {1!r} that has not been installed".format(
+ cls.__name__, cls.snippet_type))
- return cls._content_type
+ cls._target_content_type = ContentType.objects.get_for_model(model)
+ else:
+ target_model = cls.model._meta.get_field(cls.field_name).rel.to
+ cls._target_content_type = ContentType.objects.get_for_model(target_model)
+
+ return cls._target_content_type
def render_as_field(self):
instance_obj = self.get_chosen_item()
@@ -35,6 +49,10 @@
'snippet_type_name': self.snippet_type_name,
}))
+ @property
+ def snippet_type_name(self):
+ return force_text(self.target_content_type()._meta.verbose_name)
+
class SnippetChooserPanel(object):
def __init__(self, field_name, snippet_type):
@@ -45,6 +63,5 @@
return type(str('_SnippetChooserPanel'), (BaseSnippetChooserPanel,), {
'model': model,
'field_name': self.field_name,
- 'snippet_type_name': force_text(self.snippet_type._meta.verbose_name),
'snippet_type': self.snippet_type,
})
| {"golden_diff": "diff --git a/wagtail/wagtailsnippets/edit_handlers.py b/wagtail/wagtailsnippets/edit_handlers.py\n--- a/wagtail/wagtailsnippets/edit_handlers.py\n+++ b/wagtail/wagtailsnippets/edit_handlers.py\n@@ -4,28 +4,42 @@\n from django.contrib.contenttypes.models import ContentType\n from django.utils.safestring import mark_safe\n from django.utils.encoding import force_text\n+from django.core.exceptions import ImproperlyConfigured\n \n from wagtail.wagtailadmin.edit_handlers import BaseChooserPanel\n+from wagtail.wagtailcore.utils import resolve_model_string\n from .widgets import AdminSnippetChooser\n \n \n class BaseSnippetChooserPanel(BaseChooserPanel):\n object_type_name = 'item'\n \n- _content_type = None\n+ _target_content_type = None\n \n @classmethod\n def widget_overrides(cls):\n return {cls.field_name: AdminSnippetChooser(\n- content_type=cls.content_type(), snippet_type_name=cls.snippet_type_name)}\n+ content_type=cls.target_content_type(), snippet_type_name=cls.snippet_type_name)}\n \n @classmethod\n- def content_type(cls):\n- if cls._content_type is None:\n- # TODO: infer the content type by introspection on the foreign key rather than having to pass it explicitly\n- cls._content_type = ContentType.objects.get_for_model(cls.snippet_type)\n+ def target_content_type(cls):\n+ if cls._target_content_type is None:\n+ if cls.snippet_type:\n+ try:\n+ model = resolve_model_string(cls.snippet_type)\n+ except LookupError:\n+ raise ImproperlyConfigured(\"{0}.snippet_type must be of the form 'app_label.model_name', given {1!r}\".format(\n+ cls.__name__, cls.snippet_type))\n+ except ValueError:\n+ raise ImproperlyConfigured(\"{0}.snippet_type refers to model {1!r} that has not been installed\".format(\n+ cls.__name__, cls.snippet_type))\n \n- return cls._content_type\n+ cls._target_content_type = ContentType.objects.get_for_model(model)\n+ else:\n+ target_model = cls.model._meta.get_field(cls.field_name).rel.to\n+ cls._target_content_type = ContentType.objects.get_for_model(target_model)\n+\n+ return cls._target_content_type\n \n def render_as_field(self):\n instance_obj = self.get_chosen_item()\n@@ -35,6 +49,10 @@\n 'snippet_type_name': self.snippet_type_name,\n }))\n \n+ @property\n+ def snippet_type_name(self):\n+ return force_text(self.target_content_type()._meta.verbose_name)\n+\n \n class SnippetChooserPanel(object):\n def __init__(self, field_name, snippet_type):\n@@ -45,6 +63,5 @@\n return type(str('_SnippetChooserPanel'), (BaseSnippetChooserPanel,), {\n 'model': model,\n 'field_name': self.field_name,\n- 'snippet_type_name': force_text(self.snippet_type._meta.verbose_name),\n 'snippet_type': self.snippet_type,\n })\n", "issue": "SnippetChooserPanel cannot take model string in model argument\nWe should update `SnippetChooserPanel` to use `resolve_model_string` (https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailcore/utils.py#L13-L37). 
This will make it behave more like `PageChooserPanel`.\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django.template.loader import render_to_string\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.safestring import mark_safe\nfrom django.utils.encoding import force_text\n\nfrom wagtail.wagtailadmin.edit_handlers import BaseChooserPanel\nfrom .widgets import AdminSnippetChooser\n\n\nclass BaseSnippetChooserPanel(BaseChooserPanel):\n object_type_name = 'item'\n\n _content_type = None\n\n @classmethod\n def widget_overrides(cls):\n return {cls.field_name: AdminSnippetChooser(\n content_type=cls.content_type(), snippet_type_name=cls.snippet_type_name)}\n\n @classmethod\n def content_type(cls):\n if cls._content_type is None:\n # TODO: infer the content type by introspection on the foreign key rather than having to pass it explicitly\n cls._content_type = ContentType.objects.get_for_model(cls.snippet_type)\n\n return cls._content_type\n\n def render_as_field(self):\n instance_obj = self.get_chosen_item()\n return mark_safe(render_to_string(self.field_template, {\n 'field': self.bound_field,\n self.object_type_name: instance_obj,\n 'snippet_type_name': self.snippet_type_name,\n }))\n\n\nclass SnippetChooserPanel(object):\n def __init__(self, field_name, snippet_type):\n self.field_name = field_name\n self.snippet_type = snippet_type\n\n def bind_to_model(self, model):\n return type(str('_SnippetChooserPanel'), (BaseSnippetChooserPanel,), {\n 'model': model,\n 'field_name': self.field_name,\n 'snippet_type_name': force_text(self.snippet_type._meta.verbose_name),\n 'snippet_type': self.snippet_type,\n })\n", "path": "wagtail/wagtailsnippets/edit_handlers.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django.template.loader import render_to_string\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.safestring import mark_safe\nfrom django.utils.encoding import force_text\nfrom django.core.exceptions import ImproperlyConfigured\n\nfrom wagtail.wagtailadmin.edit_handlers import BaseChooserPanel\nfrom wagtail.wagtailcore.utils import resolve_model_string\nfrom .widgets import AdminSnippetChooser\n\n\nclass BaseSnippetChooserPanel(BaseChooserPanel):\n object_type_name = 'item'\n\n _target_content_type = None\n\n @classmethod\n def widget_overrides(cls):\n return {cls.field_name: AdminSnippetChooser(\n content_type=cls.target_content_type(), snippet_type_name=cls.snippet_type_name)}\n\n @classmethod\n def target_content_type(cls):\n if cls._target_content_type is None:\n if cls.snippet_type:\n try:\n model = resolve_model_string(cls.snippet_type)\n except LookupError:\n raise ImproperlyConfigured(\"{0}.snippet_type must be of the form 'app_label.model_name', given {1!r}\".format(\n cls.__name__, cls.snippet_type))\n except ValueError:\n raise ImproperlyConfigured(\"{0}.snippet_type refers to model {1!r} that has not been installed\".format(\n cls.__name__, cls.snippet_type))\n\n cls._target_content_type = ContentType.objects.get_for_model(model)\n else:\n target_model = cls.model._meta.get_field(cls.field_name).rel.to\n cls._target_content_type = ContentType.objects.get_for_model(target_model)\n\n return cls._target_content_type\n\n def render_as_field(self):\n instance_obj = self.get_chosen_item()\n return mark_safe(render_to_string(self.field_template, {\n 'field': self.bound_field,\n self.object_type_name: instance_obj,\n 
'snippet_type_name': self.snippet_type_name,\n }))\n\n @property\n def snippet_type_name(self):\n return force_text(self.target_content_type()._meta.verbose_name)\n\n\nclass SnippetChooserPanel(object):\n def __init__(self, field_name, snippet_type):\n self.field_name = field_name\n self.snippet_type = snippet_type\n\n def bind_to_model(self, model):\n return type(str('_SnippetChooserPanel'), (BaseSnippetChooserPanel,), {\n 'model': model,\n 'field_name': self.field_name,\n 'snippet_type': self.snippet_type,\n })\n", "path": "wagtail/wagtailsnippets/edit_handlers.py"}]} | 804 | 679 |
gh_patches_debug_27915 | rasdani/github-patches | git_diff | pulp__pulpcore-193 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Verify if domain name has more than 50 chars
fixes: #4976
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/viewsets/upload.py`
Content:
```
1 import re
2 from datetime import datetime
3
4 from gettext import gettext as _
5 from drf_yasg.utils import swagger_auto_schema
6 from drf_yasg.openapi import Parameter
7 from rest_framework import mixins, serializers
8 from rest_framework.decorators import detail_route
9 from rest_framework.response import Response
10
11 from pulpcore.app.models import Upload
12 from pulpcore.app.serializers import UploadChunkSerializer, UploadCommitSerializer, UploadSerializer
13 from pulpcore.app.viewsets.base import NamedModelViewSet
14
15
16 class UploadViewSet(NamedModelViewSet,
17 mixins.CreateModelMixin,
18 mixins.RetrieveModelMixin,
19 mixins.UpdateModelMixin,
20 mixins.ListModelMixin):
21 """View for chunked uploads."""
22 endpoint_name = 'uploads'
23 queryset = Upload.objects.all()
24 serializer_class = UploadSerializer
25 http_method_names = ['get', 'post', 'head', 'put']
26
27 content_range_pattern = r'^bytes (\d+)-(\d+)/(\d+|[*])$'
28 content_range_parameter = \
29 Parameter(name='Content-Range', in_='header', required=True, type='string',
30 pattern=content_range_pattern,
31 description='The Content-Range header specifies the location of the file chunk '
32 'within the file.')
33
34 @swagger_auto_schema(operation_summary="Upload a file chunk",
35 request_body=UploadChunkSerializer,
36 manual_parameters=[content_range_parameter],
37 responses={200: UploadSerializer})
38 def update(self, request, pk=None):
39 """
40 Upload a chunk for an upload.
41 """
42 upload = self.get_object()
43
44 if upload.completed is not None:
45 raise serializers.ValidationError(_("Cannot upload chunk for a completed upload."))
46
47 try:
48 chunk = request.data['file']
49 except KeyError:
50 raise serializers.ValidationError(_("Missing 'file' parameter."))
51
52 content_range = request.META.get('HTTP_CONTENT_RANGE', '')
53 match = re.compile(self.content_range_pattern).match(content_range)
54 if not match:
55 raise serializers.ValidationError(_("Invalid or missing content range header."))
56 start = int(match[1])
57 end = int(match[2])
58
59 if (end - start + 1) != len(chunk):
60 raise serializers.ValidationError(_("Chunk size does not match content range."))
61
62 if end > upload.size - 1:
63 raise serializers.ValidationError(_("End byte is greater than upload size."))
64
65 upload.append(chunk, start)
66
67 serializer = UploadSerializer(upload, context={'request': request})
68 return Response(serializer.data)
69
70 @swagger_auto_schema(operation_summary="Finish an Upload",
71 request_body=UploadCommitSerializer,
72 responses={200: UploadSerializer})
73 @detail_route(methods=('put',))
74 def commit(self, request, pk):
75 """
76 Commit the upload and mark it as completed.
77 """
78 upload = self.get_object()
79
80 try:
81 sha256 = request.data['sha256']
82 except KeyError:
83 raise serializers.ValidationError(_("Checksum not supplied."))
84
85 if sha256 != upload.sha256:
86 raise serializers.ValidationError(_("Checksum does not match upload."))
87
88 if upload.completed is not None:
89 raise serializers.ValidationError(_("Upload is already complete."))
90
91 upload.completed = datetime.now()
92 upload.save()
93
94 serializer = UploadSerializer(upload, context={'request': request})
95 return Response(serializer.data)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/viewsets/upload.py b/pulpcore/app/viewsets/upload.py
--- a/pulpcore/app/viewsets/upload.py
+++ b/pulpcore/app/viewsets/upload.py
@@ -10,19 +10,33 @@
from pulpcore.app.models import Upload
from pulpcore.app.serializers import UploadChunkSerializer, UploadCommitSerializer, UploadSerializer
-from pulpcore.app.viewsets.base import NamedModelViewSet
+from pulpcore.app.viewsets import BaseFilterSet
+from pulpcore.app.viewsets.base import DATETIME_FILTER_OPTIONS, NamedModelViewSet
+from pulpcore.app.viewsets.custom_filters import IsoDateTimeFilter
+
+
+class UploadFilter(BaseFilterSet):
+ completed = IsoDateTimeFilter(field_name='completed')
+
+ class Meta:
+ model = Upload
+ fields = {
+ 'completed': DATETIME_FILTER_OPTIONS + ['isnull']
+ }
class UploadViewSet(NamedModelViewSet,
mixins.CreateModelMixin,
mixins.RetrieveModelMixin,
mixins.UpdateModelMixin,
+ mixins.DestroyModelMixin,
mixins.ListModelMixin):
"""View for chunked uploads."""
endpoint_name = 'uploads'
queryset = Upload.objects.all()
serializer_class = UploadSerializer
- http_method_names = ['get', 'post', 'head', 'put']
+ filterset_class = UploadFilter
+ http_method_names = ['get', 'post', 'head', 'put', 'delete'] # remove PATCH
content_range_pattern = r'^bytes (\d+)-(\d+)/(\d+|[*])$'
content_range_parameter = \
| {"golden_diff": "diff --git a/pulpcore/app/viewsets/upload.py b/pulpcore/app/viewsets/upload.py\n--- a/pulpcore/app/viewsets/upload.py\n+++ b/pulpcore/app/viewsets/upload.py\n@@ -10,19 +10,33 @@\n \n from pulpcore.app.models import Upload\n from pulpcore.app.serializers import UploadChunkSerializer, UploadCommitSerializer, UploadSerializer\n-from pulpcore.app.viewsets.base import NamedModelViewSet\n+from pulpcore.app.viewsets import BaseFilterSet\n+from pulpcore.app.viewsets.base import DATETIME_FILTER_OPTIONS, NamedModelViewSet\n+from pulpcore.app.viewsets.custom_filters import IsoDateTimeFilter\n+\n+\n+class UploadFilter(BaseFilterSet):\n+ completed = IsoDateTimeFilter(field_name='completed')\n+\n+ class Meta:\n+ model = Upload\n+ fields = {\n+ 'completed': DATETIME_FILTER_OPTIONS + ['isnull']\n+ }\n \n \n class UploadViewSet(NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.UpdateModelMixin,\n+ mixins.DestroyModelMixin,\n mixins.ListModelMixin):\n \"\"\"View for chunked uploads.\"\"\"\n endpoint_name = 'uploads'\n queryset = Upload.objects.all()\n serializer_class = UploadSerializer\n- http_method_names = ['get', 'post', 'head', 'put']\n+ filterset_class = UploadFilter\n+ http_method_names = ['get', 'post', 'head', 'put', 'delete'] # remove PATCH\n \n content_range_pattern = r'^bytes (\\d+)-(\\d+)/(\\d+|[*])$'\n content_range_parameter = \\\n", "issue": "Verify if domain name has more than 50 chars\nfixes: #4976\n", "before_files": [{"content": "import re\nfrom datetime import datetime\n\nfrom gettext import gettext as _\nfrom drf_yasg.utils import swagger_auto_schema\nfrom drf_yasg.openapi import Parameter\nfrom rest_framework import mixins, serializers\nfrom rest_framework.decorators import detail_route\nfrom rest_framework.response import Response\n\nfrom pulpcore.app.models import Upload\nfrom pulpcore.app.serializers import UploadChunkSerializer, UploadCommitSerializer, UploadSerializer\nfrom pulpcore.app.viewsets.base import NamedModelViewSet\n\n\nclass UploadViewSet(NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.UpdateModelMixin,\n mixins.ListModelMixin):\n \"\"\"View for chunked uploads.\"\"\"\n endpoint_name = 'uploads'\n queryset = Upload.objects.all()\n serializer_class = UploadSerializer\n http_method_names = ['get', 'post', 'head', 'put']\n\n content_range_pattern = r'^bytes (\\d+)-(\\d+)/(\\d+|[*])$'\n content_range_parameter = \\\n Parameter(name='Content-Range', in_='header', required=True, type='string',\n pattern=content_range_pattern,\n description='The Content-Range header specifies the location of the file chunk '\n 'within the file.')\n\n @swagger_auto_schema(operation_summary=\"Upload a file chunk\",\n request_body=UploadChunkSerializer,\n manual_parameters=[content_range_parameter],\n responses={200: UploadSerializer})\n def update(self, request, pk=None):\n \"\"\"\n Upload a chunk for an upload.\n \"\"\"\n upload = self.get_object()\n\n if upload.completed is not None:\n raise serializers.ValidationError(_(\"Cannot upload chunk for a completed upload.\"))\n\n try:\n chunk = request.data['file']\n except KeyError:\n raise serializers.ValidationError(_(\"Missing 'file' parameter.\"))\n\n content_range = request.META.get('HTTP_CONTENT_RANGE', '')\n match = re.compile(self.content_range_pattern).match(content_range)\n if not match:\n raise serializers.ValidationError(_(\"Invalid or missing content range header.\"))\n start = int(match[1])\n end = int(match[2])\n\n if (end - start + 1) != 
len(chunk):\n raise serializers.ValidationError(_(\"Chunk size does not match content range.\"))\n\n if end > upload.size - 1:\n raise serializers.ValidationError(_(\"End byte is greater than upload size.\"))\n\n upload.append(chunk, start)\n\n serializer = UploadSerializer(upload, context={'request': request})\n return Response(serializer.data)\n\n @swagger_auto_schema(operation_summary=\"Finish an Upload\",\n request_body=UploadCommitSerializer,\n responses={200: UploadSerializer})\n @detail_route(methods=('put',))\n def commit(self, request, pk):\n \"\"\"\n Commit the upload and mark it as completed.\n \"\"\"\n upload = self.get_object()\n\n try:\n sha256 = request.data['sha256']\n except KeyError:\n raise serializers.ValidationError(_(\"Checksum not supplied.\"))\n\n if sha256 != upload.sha256:\n raise serializers.ValidationError(_(\"Checksum does not match upload.\"))\n\n if upload.completed is not None:\n raise serializers.ValidationError(_(\"Upload is already complete.\"))\n\n upload.completed = datetime.now()\n upload.save()\n\n serializer = UploadSerializer(upload, context={'request': request})\n return Response(serializer.data)\n", "path": "pulpcore/app/viewsets/upload.py"}], "after_files": [{"content": "import re\nfrom datetime import datetime\n\nfrom gettext import gettext as _\nfrom drf_yasg.utils import swagger_auto_schema\nfrom drf_yasg.openapi import Parameter\nfrom rest_framework import mixins, serializers\nfrom rest_framework.decorators import detail_route\nfrom rest_framework.response import Response\n\nfrom pulpcore.app.models import Upload\nfrom pulpcore.app.serializers import UploadChunkSerializer, UploadCommitSerializer, UploadSerializer\nfrom pulpcore.app.viewsets import BaseFilterSet\nfrom pulpcore.app.viewsets.base import DATETIME_FILTER_OPTIONS, NamedModelViewSet\nfrom pulpcore.app.viewsets.custom_filters import IsoDateTimeFilter\n\n\nclass UploadFilter(BaseFilterSet):\n completed = IsoDateTimeFilter(field_name='completed')\n\n class Meta:\n model = Upload\n fields = {\n 'completed': DATETIME_FILTER_OPTIONS + ['isnull']\n }\n\n\nclass UploadViewSet(NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.UpdateModelMixin,\n mixins.DestroyModelMixin,\n mixins.ListModelMixin):\n \"\"\"View for chunked uploads.\"\"\"\n endpoint_name = 'uploads'\n queryset = Upload.objects.all()\n serializer_class = UploadSerializer\n filterset_class = UploadFilter\n http_method_names = ['get', 'post', 'head', 'put', 'delete'] # remove PATCH\n\n content_range_pattern = r'^bytes (\\d+)-(\\d+)/(\\d+|[*])$'\n content_range_parameter = \\\n Parameter(name='Content-Range', in_='header', required=True, type='string',\n pattern=content_range_pattern,\n description='The Content-Range header specifies the location of the file chunk '\n 'within the file.')\n\n @swagger_auto_schema(operation_summary=\"Upload a file chunk\",\n request_body=UploadChunkSerializer,\n manual_parameters=[content_range_parameter],\n responses={200: UploadSerializer})\n def update(self, request, pk=None):\n \"\"\"\n Upload a chunk for an upload.\n \"\"\"\n upload = self.get_object()\n\n if upload.completed is not None:\n raise serializers.ValidationError(_(\"Cannot upload chunk for a completed upload.\"))\n\n try:\n chunk = request.data['file']\n except KeyError:\n raise serializers.ValidationError(_(\"Missing 'file' parameter.\"))\n\n content_range = request.META.get('HTTP_CONTENT_RANGE', '')\n match = re.compile(self.content_range_pattern).match(content_range)\n if not match:\n raise 
serializers.ValidationError(_(\"Invalid or missing content range header.\"))\n start = int(match[1])\n end = int(match[2])\n\n if (end - start + 1) != len(chunk):\n raise serializers.ValidationError(_(\"Chunk size does not match content range.\"))\n\n if end > upload.size - 1:\n raise serializers.ValidationError(_(\"End byte is greater than upload size.\"))\n\n upload.append(chunk, start)\n\n serializer = UploadSerializer(upload, context={'request': request})\n return Response(serializer.data)\n\n @swagger_auto_schema(operation_summary=\"Finish an Upload\",\n request_body=UploadCommitSerializer,\n responses={200: UploadSerializer})\n @detail_route(methods=('put',))\n def commit(self, request, pk):\n \"\"\"\n Commit the upload and mark it as completed.\n \"\"\"\n upload = self.get_object()\n\n try:\n sha256 = request.data['sha256']\n except KeyError:\n raise serializers.ValidationError(_(\"Checksum not supplied.\"))\n\n if sha256 != upload.sha256:\n raise serializers.ValidationError(_(\"Checksum does not match upload.\"))\n\n if upload.completed is not None:\n raise serializers.ValidationError(_(\"Upload is already complete.\"))\n\n upload.completed = datetime.now()\n upload.save()\n\n serializer = UploadSerializer(upload, context={'request': request})\n return Response(serializer.data)\n", "path": "pulpcore/app/viewsets/upload.py"}]} | 1,170 | 349 |
gh_patches_debug_10530 | rasdani/github-patches | git_diff | pytorch__examples-1084 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The GPU load is unbalanced
https://github.com/pytorch/examples/blob/2ee8d43dbe420be152fd5ce0d80b43b419a0e352/distributed/ddp-tutorial-series/multigpu_torchrun.py#L39
When I run the code and resume from an existing .pt file, the memory usage of GPU0 is significantly higher than that of the other GPUs.
It can be solved by adding the `map_location` parameter:
`snapshot = torch.load(snapshot_path, map_location=torch.device('cuda', int(os.environ["LOCAL_RANK"])))`
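A minimal sketch of that change applied to the snapshot-loading step (assuming `LOCAL_RANK` is set by torchrun, as elsewhere in the example):

```python
# Hedged sketch: map the checkpoint tensors onto this process's own GPU
# instead of letting them all land on cuda:0 by default.
import os
import torch

def load_snapshot(snapshot_path: str):
    local_rank = int(os.environ["LOCAL_RANK"])
    loc = torch.device("cuda", local_rank)
    return torch.load(snapshot_path, map_location=loc)
```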
## My Environment
cudatoolkit 10.2
pytorch 12.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/ddp-tutorial-series/multinode.py`
Content:
```
1 import torch
2 import torch.nn.functional as F
3 from torch.utils.data import Dataset, DataLoader
4 from datautils import MyTrainDataset
5
6 import torch.multiprocessing as mp
7 from torch.utils.data.distributed import DistributedSampler
8 from torch.nn.parallel import DistributedDataParallel as DDP
9 from torch.distributed import init_process_group, destroy_process_group
10 import os
11
12
13 def ddp_setup():
14 init_process_group(backend="nccl")
15
16 class Trainer:
17 def __init__(
18 self,
19 model: torch.nn.Module,
20 train_data: DataLoader,
21 optimizer: torch.optim.Optimizer,
22 save_every: int,
23 snapshot_path: str,
24 ) -> None:
25 self.local_rank = int(os.environ["LOCAL_RANK"])
26 self.global_rank = int(os.environ["RANK"])
27 self.model = model.to(self.local_rank)
28 self.train_data = train_data
29 self.optimizer = optimizer
30 self.save_every = save_every
31 self.epochs_run = 0
32 self.snapshot_path = snapshot_path
33 if os.path.exists(snapshot_path):
34 print("Loading snapshot")
35 self._load_snapshot(snapshot_path)
36
37 self.model = DDP(self.model, device_ids=[self.local_rank])
38
39 def _load_snapshot(self, snapshot_path):
40 snapshot = torch.load(snapshot_path)
41 self.model.load_state_dict(snapshot["MODEL_STATE"])
42 self.epochs_run = snapshot["EPOCHS_RUN"]
43 print(f"Resuming training from snapshot at Epoch {self.epochs_run}")
44
45 def _run_batch(self, source, targets):
46 self.optimizer.zero_grad()
47 output = self.model(source)
48 loss = F.cross_entropy(output, targets)
49 loss.backward()
50 self.optimizer.step()
51
52 def _run_epoch(self, epoch):
53 b_sz = len(next(iter(self.train_data))[0])
54 print(f"[GPU{self.global_rank}] Epoch {epoch} | Batchsize: {b_sz} | Steps: {len(self.train_data)}")
55 self.train_data.sampler.set_epoch(epoch)
56 for source, targets in self.train_data:
57 source = source.to(self.local_rank)
58 targets = targets.to(self.local_rank)
59 self._run_batch(source, targets)
60
61 def _save_snapshot(self, epoch):
62 snapshot = {
63 "MODEL_STATE": self.model.module.state_dict(),
64 "EPOCHS_RUN": epoch,
65 }
66 torch.save(snapshot, self.snapshot_path)
67 print(f"Epoch {epoch} | Training snapshot saved at {self.snapshot_path}")
68
69 def train(self, max_epochs: int):
70 for epoch in range(self.epochs_run, max_epochs):
71 self._run_epoch(epoch)
72 if self.local_rank == 0 and epoch % self.save_every == 0:
73 self._save_snapshot(epoch)
74
75
76 def load_train_objs():
77 train_set = MyTrainDataset(2048) # load your dataset
78 model = torch.nn.Linear(20, 1) # load your model
79 optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
80 return train_set, model, optimizer
81
82
83 def prepare_dataloader(dataset: Dataset, batch_size: int):
84 return DataLoader(
85 dataset,
86 batch_size=batch_size,
87 pin_memory=True,
88 shuffle=False,
89 sampler=DistributedSampler(dataset)
90 )
91
92
93 def main(save_every: int, total_epochs: int, batch_size: int, snapshot_path: str = "snapshot.pt"):
94 ddp_setup()
95 dataset, model, optimizer = load_train_objs()
96 train_data = prepare_dataloader(dataset, batch_size)
97 trainer = Trainer(model, train_data, optimizer, save_every, snapshot_path)
98 trainer.train(total_epochs)
99 destroy_process_group()
100
101
102 if __name__ == "__main__":
103 import argparse
104 parser = argparse.ArgumentParser(description='simple distributed training job')
105 parser.add_argument('total_epochs', type=int, help='Total epochs to train the model')
106 parser.add_argument('save_every', type=int, help='How often to save a snapshot')
107 parser.add_argument('--batch_size', default=32, help='Input batch size on each device (default: 32)')
108 args = parser.parse_args()
109
110 main(args.save_every, args.total_epochs, args.batch_size)
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/distributed/ddp-tutorial-series/multinode.py b/distributed/ddp-tutorial-series/multinode.py
--- a/distributed/ddp-tutorial-series/multinode.py
+++ b/distributed/ddp-tutorial-series/multinode.py
@@ -37,7 +37,8 @@
self.model = DDP(self.model, device_ids=[self.local_rank])
def _load_snapshot(self, snapshot_path):
- snapshot = torch.load(snapshot_path)
+ loc = f"cuda:{self.gpu_id}"
+ snapshot = torch.load(snapshot_path, map_location=loc)
self.model.load_state_dict(snapshot["MODEL_STATE"])
self.epochs_run = snapshot["EPOCHS_RUN"]
print(f"Resuming training from snapshot at Epoch {self.epochs_run}")
| {"golden_diff": "diff --git a/distributed/ddp-tutorial-series/multinode.py b/distributed/ddp-tutorial-series/multinode.py\n--- a/distributed/ddp-tutorial-series/multinode.py\n+++ b/distributed/ddp-tutorial-series/multinode.py\n@@ -37,7 +37,8 @@\n self.model = DDP(self.model, device_ids=[self.local_rank])\n \n def _load_snapshot(self, snapshot_path):\n- snapshot = torch.load(snapshot_path)\n+ loc = f\"cuda:{self.gpu_id}\"\n+ snapshot = torch.load(snapshot_path, map_location=loc)\n self.model.load_state_dict(snapshot[\"MODEL_STATE\"])\n self.epochs_run = snapshot[\"EPOCHS_RUN\"]\n print(f\"Resuming training from snapshot at Epoch {self.epochs_run}\")\n", "issue": "The GPU load is unbalanced\nhttps://github.com/pytorch/examples/blob/2ee8d43dbe420be152fd5ce0d80b43b419a0e352/distributed/ddp-tutorial-series/multigpu_torchrun.py#L39\r\nWhen I run the code and resume from a existed .pt file. The memory usage of GPU0 is significantly higher than other GPUs. \r\nIt can be solved by adding a parameter \"map_location\".\r\n`snapshot = torch.load(snapshot_path, map_location=torch.device('cuda', int(os.environ[\"LOCAL_RANK\"])))`\r\n## My Environment\r\ncudatoolkit 10.2\r\npytorch 12.1\r\n\n", "before_files": [{"content": "import torch\nimport torch.nn.functional as F\nfrom torch.utils.data import Dataset, DataLoader\nfrom datautils import MyTrainDataset\n\nimport torch.multiprocessing as mp\nfrom torch.utils.data.distributed import DistributedSampler\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nfrom torch.distributed import init_process_group, destroy_process_group\nimport os\n\n\ndef ddp_setup():\n init_process_group(backend=\"nccl\")\n\nclass Trainer:\n def __init__(\n self,\n model: torch.nn.Module,\n train_data: DataLoader,\n optimizer: torch.optim.Optimizer,\n save_every: int,\n snapshot_path: str,\n ) -> None:\n self.local_rank = int(os.environ[\"LOCAL_RANK\"])\n self.global_rank = int(os.environ[\"RANK\"])\n self.model = model.to(self.local_rank)\n self.train_data = train_data\n self.optimizer = optimizer\n self.save_every = save_every\n self.epochs_run = 0\n self.snapshot_path = snapshot_path\n if os.path.exists(snapshot_path):\n print(\"Loading snapshot\")\n self._load_snapshot(snapshot_path)\n\n self.model = DDP(self.model, device_ids=[self.local_rank])\n\n def _load_snapshot(self, snapshot_path):\n snapshot = torch.load(snapshot_path)\n self.model.load_state_dict(snapshot[\"MODEL_STATE\"])\n self.epochs_run = snapshot[\"EPOCHS_RUN\"]\n print(f\"Resuming training from snapshot at Epoch {self.epochs_run}\")\n\n def _run_batch(self, source, targets):\n self.optimizer.zero_grad()\n output = self.model(source)\n loss = F.cross_entropy(output, targets)\n loss.backward()\n self.optimizer.step()\n\n def _run_epoch(self, epoch):\n b_sz = len(next(iter(self.train_data))[0])\n print(f\"[GPU{self.global_rank}] Epoch {epoch} | Batchsize: {b_sz} | Steps: {len(self.train_data)}\")\n self.train_data.sampler.set_epoch(epoch)\n for source, targets in self.train_data:\n source = source.to(self.local_rank)\n targets = targets.to(self.local_rank)\n self._run_batch(source, targets)\n\n def _save_snapshot(self, epoch):\n snapshot = {\n \"MODEL_STATE\": self.model.module.state_dict(),\n \"EPOCHS_RUN\": epoch,\n }\n torch.save(snapshot, self.snapshot_path)\n print(f\"Epoch {epoch} | Training snapshot saved at {self.snapshot_path}\")\n\n def train(self, max_epochs: int):\n for epoch in range(self.epochs_run, max_epochs):\n self._run_epoch(epoch)\n if self.local_rank == 0 and epoch % 
self.save_every == 0:\n self._save_snapshot(epoch)\n\n\ndef load_train_objs():\n train_set = MyTrainDataset(2048) # load your dataset\n model = torch.nn.Linear(20, 1) # load your model\n optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)\n return train_set, model, optimizer\n\n\ndef prepare_dataloader(dataset: Dataset, batch_size: int):\n return DataLoader(\n dataset,\n batch_size=batch_size,\n pin_memory=True,\n shuffle=False,\n sampler=DistributedSampler(dataset)\n )\n\n\ndef main(save_every: int, total_epochs: int, batch_size: int, snapshot_path: str = \"snapshot.pt\"):\n ddp_setup()\n dataset, model, optimizer = load_train_objs()\n train_data = prepare_dataloader(dataset, batch_size)\n trainer = Trainer(model, train_data, optimizer, save_every, snapshot_path)\n trainer.train(total_epochs)\n destroy_process_group()\n\n\nif __name__ == \"__main__\":\n import argparse\n parser = argparse.ArgumentParser(description='simple distributed training job')\n parser.add_argument('total_epochs', type=int, help='Total epochs to train the model')\n parser.add_argument('save_every', type=int, help='How often to save a snapshot')\n parser.add_argument('--batch_size', default=32, help='Input batch size on each device (default: 32)')\n args = parser.parse_args()\n \n main(args.save_every, args.total_epochs, args.batch_size)\n", "path": "distributed/ddp-tutorial-series/multinode.py"}], "after_files": [{"content": "import torch\nimport torch.nn.functional as F\nfrom torch.utils.data import Dataset, DataLoader\nfrom datautils import MyTrainDataset\n\nimport torch.multiprocessing as mp\nfrom torch.utils.data.distributed import DistributedSampler\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nfrom torch.distributed import init_process_group, destroy_process_group\nimport os\n\n\ndef ddp_setup():\n init_process_group(backend=\"nccl\")\n\nclass Trainer:\n def __init__(\n self,\n model: torch.nn.Module,\n train_data: DataLoader,\n optimizer: torch.optim.Optimizer,\n save_every: int,\n snapshot_path: str,\n ) -> None:\n self.local_rank = int(os.environ[\"LOCAL_RANK\"])\n self.global_rank = int(os.environ[\"RANK\"])\n self.model = model.to(self.local_rank)\n self.train_data = train_data\n self.optimizer = optimizer\n self.save_every = save_every\n self.epochs_run = 0\n self.snapshot_path = snapshot_path\n if os.path.exists(snapshot_path):\n print(\"Loading snapshot\")\n self._load_snapshot(snapshot_path)\n\n self.model = DDP(self.model, device_ids=[self.local_rank])\n\n def _load_snapshot(self, snapshot_path):\n loc = f\"cuda:{self.gpu_id}\"\n snapshot = torch.load(snapshot_path, map_location=loc)\n self.model.load_state_dict(snapshot[\"MODEL_STATE\"])\n self.epochs_run = snapshot[\"EPOCHS_RUN\"]\n print(f\"Resuming training from snapshot at Epoch {self.epochs_run}\")\n\n def _run_batch(self, source, targets):\n self.optimizer.zero_grad()\n output = self.model(source)\n loss = F.cross_entropy(output, targets)\n loss.backward()\n self.optimizer.step()\n\n def _run_epoch(self, epoch):\n b_sz = len(next(iter(self.train_data))[0])\n print(f\"[GPU{self.global_rank}] Epoch {epoch} | Batchsize: {b_sz} | Steps: {len(self.train_data)}\")\n self.train_data.sampler.set_epoch(epoch)\n for source, targets in self.train_data:\n source = source.to(self.local_rank)\n targets = targets.to(self.local_rank)\n self._run_batch(source, targets)\n\n def _save_snapshot(self, epoch):\n snapshot = {\n \"MODEL_STATE\": self.model.module.state_dict(),\n \"EPOCHS_RUN\": epoch,\n }\n torch.save(snapshot, 
self.snapshot_path)\n print(f\"Epoch {epoch} | Training snapshot saved at {self.snapshot_path}\")\n\n def train(self, max_epochs: int):\n for epoch in range(self.epochs_run, max_epochs):\n self._run_epoch(epoch)\n if self.local_rank == 0 and epoch % self.save_every == 0:\n self._save_snapshot(epoch)\n\n\ndef load_train_objs():\n train_set = MyTrainDataset(2048) # load your dataset\n model = torch.nn.Linear(20, 1) # load your model\n optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)\n return train_set, model, optimizer\n\n\ndef prepare_dataloader(dataset: Dataset, batch_size: int):\n return DataLoader(\n dataset,\n batch_size=batch_size,\n pin_memory=True,\n shuffle=False,\n sampler=DistributedSampler(dataset)\n )\n\n\ndef main(save_every: int, total_epochs: int, batch_size: int, snapshot_path: str = \"snapshot.pt\"):\n ddp_setup()\n dataset, model, optimizer = load_train_objs()\n train_data = prepare_dataloader(dataset, batch_size)\n trainer = Trainer(model, train_data, optimizer, save_every, snapshot_path)\n trainer.train(total_epochs)\n destroy_process_group()\n\n\nif __name__ == \"__main__\":\n import argparse\n parser = argparse.ArgumentParser(description='simple distributed training job')\n parser.add_argument('total_epochs', type=int, help='Total epochs to train the model')\n parser.add_argument('save_every', type=int, help='How often to save a snapshot')\n parser.add_argument('--batch_size', default=32, help='Input batch size on each device (default: 32)')\n args = parser.parse_args()\n \n main(args.save_every, args.total_epochs, args.batch_size)\n", "path": "distributed/ddp-tutorial-series/multinode.py"}]} | 1,544 | 171 |
gh_patches_debug_22395 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1858 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update CONTRIBUTING, README and package metadata
### Description:
- [x] Update `CONTRIBUTING.md` to mention the usage of `nox` instead of `tox`
- [x] Reorganize `README.md`
- [x] Update `setup.py` to add links to the project
CI/CD: Tox -> Nox
We are a Python project, and working with Python files should be easier for any Python developer than working with text configuration in tox.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 """Nox tool configuration file.
2
3 Nox is Tox tool replacement.
4 """
5 import shutil
6 from pathlib import Path
7
8 import nox
9
10 nox.options.keywords = "not docs"
11
12
13 def base_install(session):
14 """Create basic environment setup for tests and linting."""
15 session.install("-r", "test_requirements.txt")
16 session.install("-e", ".")
17 return session
18
19
20 @nox.session(python="3.10")
21 def lint(session):
22 """Run linting check locally."""
23 session.install("pre-commit")
24 session.run("pre-commit", "run", "-a")
25
26
27 @nox.session(python=["3.7", "3.8", "3.9", "3.10", "3.11"])
28 def tests(session):
29 """Run test suite with pytest."""
30 session = base_install(session)
31 session.run(
32 "pytest",
33 "--cov-report=html",
34 "--cov-report=xml",
35 "--cov-branch",
36 "--cov-fail-under=100",
37 )
38
39
40 @nox.session(python=["3.7", "3.8", "3.9", "3.10", "3.11"])
41 def safety_tests(session):
42 """Run safety tests."""
43 session = base_install(session)
44 session.run("safety", "check", "--full-report")
45
46
47 @nox.session(python="3.10")
48 def documentation_tests(session):
49 """Run documentation tests."""
50 return docs(session, batch_run=True)
51
52
53 @nox.session(python="3.10")
54 def docs(session, batch_run: bool = False):
55 """Build the documentation or serve documentation interactively."""
56 shutil.rmtree(Path("docs").joinpath("_build"), ignore_errors=True)
57 session.install("-r", "docs/requirements.txt")
58 session.install("-e", ".")
59 session.cd("docs")
60 sphinx_args = ["-b", "html", "-W", ".", "_build/html"]
61
62 if not session.interactive or batch_run:
63 sphinx_cmd = "sphinx-build"
64 else:
65 sphinx_cmd = "sphinx-autobuild"
66 sphinx_args.extend(
67 [
68 "--open-browser",
69 "--port",
70 "9812",
71 "--watch",
72 "../*.md",
73 "--watch",
74 "../*.rst",
75 "--watch",
76 "../*.py",
77 "--watch",
78 "../cookiecutter",
79 ]
80 )
81
82 session.run(sphinx_cmd, *sphinx_args)
83
```
Path: `setup.py`
Content:
```
1 """cookiecutter distutils configuration."""
2 from setuptools import setup
3
4 version = "2.1.2.dev0"
5
6 with open('README.md', encoding='utf-8') as readme_file:
7 readme = readme_file.read()
8
9 requirements = [
10 'binaryornot>=0.4.4',
11 'Jinja2>=2.7,<4.0.0',
12 'click>=7.0,<9.0.0',
13 'pyyaml>=5.3.1',
14 'jinja2-time>=0.2.0',
15 'python-slugify>=4.0.0',
16 'requests>=2.23.0',
17 ]
18
19 setup(
20 name='cookiecutter',
21 version=version,
22 description=(
23 'A command-line utility that creates projects from project '
24 'templates, e.g. creating a Python package project from a '
25 'Python package project template.'
26 ),
27 long_description=readme,
28 long_description_content_type='text/markdown',
29 author='Audrey Feldroy',
30 author_email='[email protected]',
31 url='https://github.com/cookiecutter/cookiecutter',
32 packages=['cookiecutter'],
33 package_dir={'cookiecutter': 'cookiecutter'},
34 entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},
35 include_package_data=True,
36 python_requires='>=3.7',
37 install_requires=requirements,
38 license='BSD',
39 zip_safe=False,
40 classifiers=[
41 "Development Status :: 5 - Production/Stable",
42 "Environment :: Console",
43 "Intended Audience :: Developers",
44 "Natural Language :: English",
45 "License :: OSI Approved :: BSD License",
46 "Programming Language :: Python :: 3 :: Only",
47 "Programming Language :: Python :: 3",
48 "Programming Language :: Python :: 3.7",
49 "Programming Language :: Python :: 3.8",
50 "Programming Language :: Python :: 3.9",
51 "Programming Language :: Python :: 3.10",
52 "Programming Language :: Python :: 3.11",
53 "Programming Language :: Python :: Implementation :: CPython",
54 "Programming Language :: Python :: Implementation :: PyPy",
55 "Programming Language :: Python",
56 "Topic :: Software Development",
57 ],
58 keywords=[
59 "cookiecutter",
60 "Python",
61 "projects",
62 "project templates",
63 "Jinja2",
64 "skeleton",
65 "scaffolding",
66 "project directory",
67 "package",
68 "packaging",
69 ],
70 )
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -28,12 +28,14 @@
def tests(session):
"""Run test suite with pytest."""
session = base_install(session)
+ posargs = session.posargs or ""
session.run(
"pytest",
"--cov-report=html",
"--cov-report=xml",
"--cov-branch",
"--cov-fail-under=100",
+ *posargs,
)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,6 +29,11 @@
author='Audrey Feldroy',
author_email='[email protected]',
url='https://github.com/cookiecutter/cookiecutter',
+ project_urls={
+ "Documentation": "https://cookiecutter.readthedocs.io",
+ "Issues": "https://github.com/cookiecutter/cookiecutter/issues",
+ "Discord": "https://discord.gg/9BrxzPKuEW",
+ },
packages=['cookiecutter'],
package_dir={'cookiecutter': 'cookiecutter'},
entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -28,12 +28,14 @@\n def tests(session):\n \"\"\"Run test suite with pytest.\"\"\"\n session = base_install(session)\n+ posargs = session.posargs or \"\"\n session.run(\n \"pytest\",\n \"--cov-report=html\",\n \"--cov-report=xml\",\n \"--cov-branch\",\n \"--cov-fail-under=100\",\n+ *posargs,\n )\n \n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,6 +29,11 @@\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n+ project_urls={\n+ \"Documentation\": \"https://cookiecutter.readthedocs.io\",\n+ \"Issues\": \"https://github.com/cookiecutter/cookiecutter/issues\",\n+ \"Discord\": \"https://discord.gg/9BrxzPKuEW\",\n+ },\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n", "issue": "Update CONTRIBUTING, README and package metadata\n### Description:\r\n\r\n- [x] Update `CONTRIBUTING.md` to mention the usage of `nox` instead of `tox`\r\n- [x] Reorganize `README.md`\r\n- [x] Update `setup.py` to add links to the project\nCI/CD: Tox -> Nox\nWe are python project, and working with python files should be easier for any python developer, over working with text configuration in tox.\n", "before_files": [{"content": "\"\"\"Nox tool configuration file.\n\nNox is Tox tool replacement.\n\"\"\"\nimport shutil\nfrom pathlib import Path\n\nimport nox\n\nnox.options.keywords = \"not docs\"\n\n\ndef base_install(session):\n \"\"\"Create basic environment setup for tests and linting.\"\"\"\n session.install(\"-r\", \"test_requirements.txt\")\n session.install(\"-e\", \".\")\n return session\n\n\[email protected](python=\"3.10\")\ndef lint(session):\n \"\"\"Run linting check locally.\"\"\"\n session.install(\"pre-commit\")\n session.run(\"pre-commit\", \"run\", \"-a\")\n\n\[email protected](python=[\"3.7\", \"3.8\", \"3.9\", \"3.10\", \"3.11\"])\ndef tests(session):\n \"\"\"Run test suite with pytest.\"\"\"\n session = base_install(session)\n session.run(\n \"pytest\",\n \"--cov-report=html\",\n \"--cov-report=xml\",\n \"--cov-branch\",\n \"--cov-fail-under=100\",\n )\n\n\[email protected](python=[\"3.7\", \"3.8\", \"3.9\", \"3.10\", \"3.11\"])\ndef safety_tests(session):\n \"\"\"Run safety tests.\"\"\"\n session = base_install(session)\n session.run(\"safety\", \"check\", \"--full-report\")\n\n\[email protected](python=\"3.10\")\ndef documentation_tests(session):\n \"\"\"Run documentation tests.\"\"\"\n return docs(session, batch_run=True)\n\n\[email protected](python=\"3.10\")\ndef docs(session, batch_run: bool = False):\n \"\"\"Build the documentation or serve documentation interactively.\"\"\"\n shutil.rmtree(Path(\"docs\").joinpath(\"_build\"), ignore_errors=True)\n session.install(\"-r\", \"docs/requirements.txt\")\n session.install(\"-e\", \".\")\n session.cd(\"docs\")\n sphinx_args = [\"-b\", \"html\", \"-W\", \".\", \"_build/html\"]\n\n if not session.interactive or batch_run:\n sphinx_cmd = \"sphinx-build\"\n else:\n sphinx_cmd = \"sphinx-autobuild\"\n sphinx_args.extend(\n [\n \"--open-browser\",\n \"--port\",\n \"9812\",\n \"--watch\",\n \"../*.md\",\n \"--watch\",\n \"../*.rst\",\n \"--watch\",\n \"../*.py\",\n \"--watch\",\n \"../cookiecutter\",\n ]\n )\n\n session.run(sphinx_cmd, *sphinx_args)\n", "path": "noxfile.py"}, {"content": "\"\"\"cookiecutter distutils configuration.\"\"\"\nfrom setuptools import 
setup\n\nversion = \"2.1.2.dev0\"\n\nwith open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n 'click>=7.0,<9.0.0',\n 'pyyaml>=5.3.1',\n 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n]\n\nsetup(\n name='cookiecutter',\n version=version,\n description=(\n 'A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'\n ),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n include_package_data=True,\n python_requires='>=3.7',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n ],\n keywords=[\n \"cookiecutter\",\n \"Python\",\n \"projects\",\n \"project templates\",\n \"Jinja2\",\n \"skeleton\",\n \"scaffolding\",\n \"project directory\",\n \"package\",\n \"packaging\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"Nox tool configuration file.\n\nNox is Tox tool replacement.\n\"\"\"\nimport shutil\nfrom pathlib import Path\n\nimport nox\n\nnox.options.keywords = \"not docs\"\n\n\ndef base_install(session):\n \"\"\"Create basic environment setup for tests and linting.\"\"\"\n session.install(\"-r\", \"test_requirements.txt\")\n session.install(\"-e\", \".\")\n return session\n\n\[email protected](python=\"3.10\")\ndef lint(session):\n \"\"\"Run linting check locally.\"\"\"\n session.install(\"pre-commit\")\n session.run(\"pre-commit\", \"run\", \"-a\")\n\n\[email protected](python=[\"3.7\", \"3.8\", \"3.9\", \"3.10\", \"3.11\"])\ndef tests(session):\n \"\"\"Run test suite with pytest.\"\"\"\n session = base_install(session)\n posargs = session.posargs or \"\"\n session.run(\n \"pytest\",\n \"--cov-report=html\",\n \"--cov-report=xml\",\n \"--cov-branch\",\n \"--cov-fail-under=100\",\n *posargs,\n )\n\n\[email protected](python=[\"3.7\", \"3.8\", \"3.9\", \"3.10\", \"3.11\"])\ndef safety_tests(session):\n \"\"\"Run safety tests.\"\"\"\n session = base_install(session)\n session.run(\"safety\", \"check\", \"--full-report\")\n\n\[email protected](python=\"3.10\")\ndef documentation_tests(session):\n \"\"\"Run documentation tests.\"\"\"\n return docs(session, batch_run=True)\n\n\[email protected](python=\"3.10\")\ndef docs(session, batch_run: bool = False):\n \"\"\"Build the documentation or serve documentation interactively.\"\"\"\n shutil.rmtree(Path(\"docs\").joinpath(\"_build\"), ignore_errors=True)\n 
session.install(\"-r\", \"docs/requirements.txt\")\n session.install(\"-e\", \".\")\n session.cd(\"docs\")\n sphinx_args = [\"-b\", \"html\", \"-W\", \".\", \"_build/html\"]\n\n if not session.interactive or batch_run:\n sphinx_cmd = \"sphinx-build\"\n else:\n sphinx_cmd = \"sphinx-autobuild\"\n sphinx_args.extend(\n [\n \"--open-browser\",\n \"--port\",\n \"9812\",\n \"--watch\",\n \"../*.md\",\n \"--watch\",\n \"../*.rst\",\n \"--watch\",\n \"../*.py\",\n \"--watch\",\n \"../cookiecutter\",\n ]\n )\n\n session.run(sphinx_cmd, *sphinx_args)\n", "path": "noxfile.py"}, {"content": "\"\"\"cookiecutter distutils configuration.\"\"\"\nfrom setuptools import setup\n\nversion = \"2.1.2.dev0\"\n\nwith open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n 'click>=7.0,<9.0.0',\n 'pyyaml>=5.3.1',\n 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n]\n\nsetup(\n name='cookiecutter',\n version=version,\n description=(\n 'A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'\n ),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n project_urls={\n \"Documentation\": \"https://cookiecutter.readthedocs.io\",\n \"Issues\": \"https://github.com/cookiecutter/cookiecutter/issues\",\n \"Discord\": \"https://discord.gg/9BrxzPKuEW\",\n },\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n include_package_data=True,\n python_requires='>=3.7',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n ],\n keywords=[\n \"cookiecutter\",\n \"Python\",\n \"projects\",\n \"project templates\",\n \"Jinja2\",\n \"skeleton\",\n \"scaffolding\",\n \"project directory\",\n \"package\",\n \"packaging\",\n ],\n)\n", "path": "setup.py"}]} | 1,737 | 289 |
gh_patches_debug_40014 | rasdani/github-patches | git_diff | DDMAL__CantusDB-1352 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
we should clean up CantusDB/django/cantusdb_project
```shell
root@7aa2f88fb303:/code/django/cantusdb_project# ls
align_text_mel.py create_fixtures.sh error_log.txt manage.py requirements.txt
articles differentia_data.txt latin_syllabification.py next_chants.py static
break_json.py editors_chant.csv load_fixtures.sh old_users_list.py templates
cantusdb editors_source.csv main_app oldcantususer_uid_role.csv users
```
The following files have already served their purpose:
- `differentia_data.txt` (used in #1137)
- `editors_chant.csv` (used in the data sync from OldCantus to New)
- `editors_source.csv` (used in the data sync from OldCantus to New)
- `old_users_list.py` (used in the data sync from OldCantus to New)
- `oldcantususer_uid_role.csv` (used in the data sync from OldCantus to New, created by `old_users_list.py`)
- `create_fixtures.sh`, (we no longer migrate data using fixtures)
- `error_log.txt` (This is a log generated during the syncing process from OldCantus to NewCantus. It's not clear why it was committed to the repo in the first place.)
- `break_json.py` (we no longer migrate data using fixtures)
- `load_fixtures.sh` (we no longer migrate data using fixtures)
I propose deleting these files, so that future developers don't need to spend time figuring out what they are.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/old_users_list.py`
Content:
```
1 import csv
2 import lxml.html as lh
3 import requests
4
5 with open("oldcantususer_uid_role.csv", "r") as csvinput:
6 with open("oldcantususer_uid_role_detailed.csv", "w") as csvoutput:
7 with open("id_username_email.csv", "r") as csvinput_username_email:
8 writer = csv.writer(csvoutput, lineterminator="\n")
9 reader = csv.reader(csvinput)
10 reader_username_email = csv.reader(csvinput_username_email)
11
12 # header
13 writer.writerow(
14 [
15 "uid",
16 "old role",
17 "new role",
18 "name",
19 "surname",
20 "institution",
21 "town",
22 "country",
23 "username",
24 "email",
25 ]
26 )
27
28 for row, row_username_email in zip(reader, reader_username_email):
29 old_role = row[1]
30 if old_role == "administrator":
31 row.append("project manager")
32 elif old_role == "anonymous user":
33 row.append("")
34 elif old_role == "authenticated user":
35 row.append("")
36 elif old_role == "contributor":
37 row.append("contributor")
38 elif old_role == "Debra":
39 row.append("project manager")
40 elif old_role == "editor":
41 row.append("editor")
42 elif old_role == "power":
43 row.append("editor")
44 elif old_role == "proofreader":
45 row.append("editor")
46 elif old_role == "SIMSSA contributor":
47 row.append("contributor")
48
49 id = row[0]
50 url = f"https://cantus.uwaterloo.ca/user/{id}"
51 response = requests.get(url)
52 doc = lh.fromstring(response.content)
53
54 try:
55 name = (
56 doc.find_class("field-name-field-name")[0]
57 .find_class("field-item")[0]
58 .text_content()
59 )
60 except:
61 name = ""
62 try:
63 surname = (
64 doc.find_class("field-name-field-surname")[0]
65 .find_class("field-item")[0]
66 .text_content()
67 )
68 except:
69 surname = ""
70 try:
71 institution = (
72 doc.find_class("field-name-field-institution")[0]
73 .find_class("field-item")[0]
74 .text_content()
75 )
76 except:
77 institution = ""
78 try:
79 town = (
80 doc.find_class("field-name-field-town")[0]
81 .find_class("field-item")[0]
82 .text_content()
83 )
84 except:
85 town = ""
86 try:
87 country = (
88 doc.find_class("field-name-field-country")[0]
89 .find_class("field-item")[0]
90 .text_content()
91 )
92 except:
93 country = ""
94
95 username = row_username_email[1]
96 email = row_username_email[2]
97
98 row.append(name)
99 row.append(surname)
100 row.append(institution)
101 row.append(town)
102 row.append(country)
103 row.append(username)
104 row.append(email)
105
106 writer.writerow(row)
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/old_users_list.py b/django/cantusdb_project/old_users_list.py
deleted file mode 100755
--- a/django/cantusdb_project/old_users_list.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import csv
-import lxml.html as lh
-import requests
-
-with open("oldcantususer_uid_role.csv", "r") as csvinput:
- with open("oldcantususer_uid_role_detailed.csv", "w") as csvoutput:
- with open("id_username_email.csv", "r") as csvinput_username_email:
- writer = csv.writer(csvoutput, lineterminator="\n")
- reader = csv.reader(csvinput)
- reader_username_email = csv.reader(csvinput_username_email)
-
- # header
- writer.writerow(
- [
- "uid",
- "old role",
- "new role",
- "name",
- "surname",
- "institution",
- "town",
- "country",
- "username",
- "email",
- ]
- )
-
- for row, row_username_email in zip(reader, reader_username_email):
- old_role = row[1]
- if old_role == "administrator":
- row.append("project manager")
- elif old_role == "anonymous user":
- row.append("")
- elif old_role == "authenticated user":
- row.append("")
- elif old_role == "contributor":
- row.append("contributor")
- elif old_role == "Debra":
- row.append("project manager")
- elif old_role == "editor":
- row.append("editor")
- elif old_role == "power":
- row.append("editor")
- elif old_role == "proofreader":
- row.append("editor")
- elif old_role == "SIMSSA contributor":
- row.append("contributor")
-
- id = row[0]
- url = f"https://cantus.uwaterloo.ca/user/{id}"
- response = requests.get(url)
- doc = lh.fromstring(response.content)
-
- try:
- name = (
- doc.find_class("field-name-field-name")[0]
- .find_class("field-item")[0]
- .text_content()
- )
- except:
- name = ""
- try:
- surname = (
- doc.find_class("field-name-field-surname")[0]
- .find_class("field-item")[0]
- .text_content()
- )
- except:
- surname = ""
- try:
- institution = (
- doc.find_class("field-name-field-institution")[0]
- .find_class("field-item")[0]
- .text_content()
- )
- except:
- institution = ""
- try:
- town = (
- doc.find_class("field-name-field-town")[0]
- .find_class("field-item")[0]
- .text_content()
- )
- except:
- town = ""
- try:
- country = (
- doc.find_class("field-name-field-country")[0]
- .find_class("field-item")[0]
- .text_content()
- )
- except:
- country = ""
-
- username = row_username_email[1]
- email = row_username_email[2]
-
- row.append(name)
- row.append(surname)
- row.append(institution)
- row.append(town)
- row.append(country)
- row.append(username)
- row.append(email)
-
- writer.writerow(row)
| {"golden_diff": "diff --git a/django/cantusdb_project/old_users_list.py b/django/cantusdb_project/old_users_list.py\ndeleted file mode 100755\n--- a/django/cantusdb_project/old_users_list.py\n+++ /dev/null\n@@ -1,106 +0,0 @@\n-import csv\n-import lxml.html as lh\n-import requests\n-\n-with open(\"oldcantususer_uid_role.csv\", \"r\") as csvinput:\n- with open(\"oldcantususer_uid_role_detailed.csv\", \"w\") as csvoutput:\n- with open(\"id_username_email.csv\", \"r\") as csvinput_username_email:\n- writer = csv.writer(csvoutput, lineterminator=\"\\n\")\n- reader = csv.reader(csvinput)\n- reader_username_email = csv.reader(csvinput_username_email)\n-\n- # header\n- writer.writerow(\n- [\n- \"uid\",\n- \"old role\",\n- \"new role\",\n- \"name\",\n- \"surname\",\n- \"institution\",\n- \"town\",\n- \"country\",\n- \"username\",\n- \"email\",\n- ]\n- )\n-\n- for row, row_username_email in zip(reader, reader_username_email):\n- old_role = row[1]\n- if old_role == \"administrator\":\n- row.append(\"project manager\")\n- elif old_role == \"anonymous user\":\n- row.append(\"\")\n- elif old_role == \"authenticated user\":\n- row.append(\"\")\n- elif old_role == \"contributor\":\n- row.append(\"contributor\")\n- elif old_role == \"Debra\":\n- row.append(\"project manager\")\n- elif old_role == \"editor\":\n- row.append(\"editor\")\n- elif old_role == \"power\":\n- row.append(\"editor\")\n- elif old_role == \"proofreader\":\n- row.append(\"editor\")\n- elif old_role == \"SIMSSA contributor\":\n- row.append(\"contributor\")\n-\n- id = row[0]\n- url = f\"https://cantus.uwaterloo.ca/user/{id}\"\n- response = requests.get(url)\n- doc = lh.fromstring(response.content)\n-\n- try:\n- name = (\n- doc.find_class(\"field-name-field-name\")[0]\n- .find_class(\"field-item\")[0]\n- .text_content()\n- )\n- except:\n- name = \"\"\n- try:\n- surname = (\n- doc.find_class(\"field-name-field-surname\")[0]\n- .find_class(\"field-item\")[0]\n- .text_content()\n- )\n- except:\n- surname = \"\"\n- try:\n- institution = (\n- doc.find_class(\"field-name-field-institution\")[0]\n- .find_class(\"field-item\")[0]\n- .text_content()\n- )\n- except:\n- institution = \"\"\n- try:\n- town = (\n- doc.find_class(\"field-name-field-town\")[0]\n- .find_class(\"field-item\")[0]\n- .text_content()\n- )\n- except:\n- town = \"\"\n- try:\n- country = (\n- doc.find_class(\"field-name-field-country\")[0]\n- .find_class(\"field-item\")[0]\n- .text_content()\n- )\n- except:\n- country = \"\"\n-\n- username = row_username_email[1]\n- email = row_username_email[2]\n-\n- row.append(name)\n- row.append(surname)\n- row.append(institution)\n- row.append(town)\n- row.append(country)\n- row.append(username)\n- row.append(email)\n-\n- writer.writerow(row)\n", "issue": "we should clean up CantusDB/django/cantusdb_project\n```shell\r\nroot@7aa2f88fb303:/code/django/cantusdb_project# ls\r\nalign_text_mel.py create_fixtures.sh\t error_log.txt\t\t manage.py\t\t requirements.txt\r\narticles\t differentia_data.txt latin_syllabification.py next_chants.py\t static\r\nbreak_json.py\t editors_chant.csv\t load_fixtures.sh\t old_users_list.py\t templates\r\ncantusdb\t editors_source.csv\t main_app\t\t oldcantususer_uid_role.csv users\r\n```\r\nThe following files have already served their purpose:\r\n- `differentia_data.txt` (used in #1137)\r\n- `editors_chant.csv` (used in the data sync from OldCantus to New)\r\n- `editors_source.csv` (used in the data sync from OldCantus to New)\r\n- `old_users_list.py` (used in the data sync from OldCantus to New)\r\n- 
`oldcantususer_uid_role.csv` (used in the data sync from OldCantus to New, created by `old_users_list.py`)\r\n- `create_fixtures.sh`, (we no longer migrate data using fixtures)\r\n- `error_log.txt` (This is a log generated during the syncing process from OldCantus to NewCantus. It's not clear why it was committed to the repo in the first place.)\r\n- `break_json.py` (we no longer migrate data using fixtures)\r\n- `load_fixtures.sh` (we no longer migrate data using fixtures)\r\n\r\nI propose deleting these files, so that future developers don't need to spend time figuring out what they are.\n", "before_files": [{"content": "import csv\nimport lxml.html as lh\nimport requests\n\nwith open(\"oldcantususer_uid_role.csv\", \"r\") as csvinput:\n with open(\"oldcantususer_uid_role_detailed.csv\", \"w\") as csvoutput:\n with open(\"id_username_email.csv\", \"r\") as csvinput_username_email:\n writer = csv.writer(csvoutput, lineterminator=\"\\n\")\n reader = csv.reader(csvinput)\n reader_username_email = csv.reader(csvinput_username_email)\n\n # header\n writer.writerow(\n [\n \"uid\",\n \"old role\",\n \"new role\",\n \"name\",\n \"surname\",\n \"institution\",\n \"town\",\n \"country\",\n \"username\",\n \"email\",\n ]\n )\n\n for row, row_username_email in zip(reader, reader_username_email):\n old_role = row[1]\n if old_role == \"administrator\":\n row.append(\"project manager\")\n elif old_role == \"anonymous user\":\n row.append(\"\")\n elif old_role == \"authenticated user\":\n row.append(\"\")\n elif old_role == \"contributor\":\n row.append(\"contributor\")\n elif old_role == \"Debra\":\n row.append(\"project manager\")\n elif old_role == \"editor\":\n row.append(\"editor\")\n elif old_role == \"power\":\n row.append(\"editor\")\n elif old_role == \"proofreader\":\n row.append(\"editor\")\n elif old_role == \"SIMSSA contributor\":\n row.append(\"contributor\")\n\n id = row[0]\n url = f\"https://cantus.uwaterloo.ca/user/{id}\"\n response = requests.get(url)\n doc = lh.fromstring(response.content)\n\n try:\n name = (\n doc.find_class(\"field-name-field-name\")[0]\n .find_class(\"field-item\")[0]\n .text_content()\n )\n except:\n name = \"\"\n try:\n surname = (\n doc.find_class(\"field-name-field-surname\")[0]\n .find_class(\"field-item\")[0]\n .text_content()\n )\n except:\n surname = \"\"\n try:\n institution = (\n doc.find_class(\"field-name-field-institution\")[0]\n .find_class(\"field-item\")[0]\n .text_content()\n )\n except:\n institution = \"\"\n try:\n town = (\n doc.find_class(\"field-name-field-town\")[0]\n .find_class(\"field-item\")[0]\n .text_content()\n )\n except:\n town = \"\"\n try:\n country = (\n doc.find_class(\"field-name-field-country\")[0]\n .find_class(\"field-item\")[0]\n .text_content()\n )\n except:\n country = \"\"\n\n username = row_username_email[1]\n email = row_username_email[2]\n\n row.append(name)\n row.append(surname)\n row.append(institution)\n row.append(town)\n row.append(country)\n row.append(username)\n row.append(email)\n\n writer.writerow(row)\n", "path": "django/cantusdb_project/old_users_list.py"}], "after_files": [{"content": null, "path": "django/cantusdb_project/old_users_list.py"}]} | 1,482 | 808 |
gh_patches_debug_19105 | rasdani/github-patches | git_diff | dotkom__onlineweb4-321 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Registering with an existing e-mail creates the user
When registering:
Choose a username
Enter an email address already in use.
The user will be created, and your chosen username will be taken!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/authentication/forms.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import datetime
4 import re
5
6 from django import forms
7 from django.contrib import auth
8 from django.utils.translation import ugettext as _
9
10 from apps.authentication.models import OnlineUser as User
11
12 class LoginForm(forms.Form):
13 username = forms.CharField(widget=forms.TextInput(), label=_("Brukernavn"), max_length=50)
14 password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"Passord"))
15 user = None
16
17 def clean(self):
18 if self._errors:
19 return
20
21 user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])
22
23 if user:
24 if user.is_active:
25 self.user = user
26 else:
27 self._errors['username'] = self.error_class([_(u"Din konto er ikke aktiv. Forsøk gjenoppretning av passord.")])
28 else:
29 self._errors['username'] = self.error_class([_(u"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.")])
30 return self.cleaned_data
31
32 def login(self, request):
33 try:
34 User.objects.get(username=request.POST['username'])
35 except:
36 return False
37 if self.is_valid():
38 auth.login(request, self.user)
39 request.session.set_expiry(0)
40 return True
41 return False
42
43 class RegisterForm(forms.Form):
44 username = forms.CharField(label=_("brukernavn"), max_length=20)
45 first_name = forms.CharField(label=_("fornavn"), max_length=50)
46 last_name = forms.CharField(label=_("etternavn"), max_length=50)
47 email = forms.EmailField(label=_("epost"), max_length=50)
48 password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("passord"))
49 repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("gjenta passord"))
50 address = forms.CharField(label=_("adresse"), max_length=50)
51 zip_code = forms.CharField(label=_("postnummer"), max_length=4)
52 phone = forms.CharField(label=_("telefon"), max_length=20)
53
54 def clean(self):
55 super(RegisterForm, self).clean()
56 if self.is_valid():
57 cleaned_data = self.cleaned_data
58
59 # Check passwords
60 if cleaned_data['password'] != cleaned_data['repeat_password']:
61 self._errors['repeat_password'] = self.error_class([_(u"Passordene er ikke like.")])
62
63 # Check username
64 username = cleaned_data['username']
65 if User.objects.filter(username=username).count() > 0:
66 self._errors['username'] = self.error_class([_(u"Brukernavnet er allerede registrert.")])
67 if not re.match("^[a-zA-Z0-9_-]+$", username):
68 self._errors['username'] = self.error_class([_(u"Ditt brukernavn inneholdt ulovlige tegn. Lovlige tegn: a-Z 0-9 - _")])
69
70 # Check email
71 email = cleaned_data['email']
72 if User.objects.filter(email=email).count() > 0:
73 self._errors['email'] = self.error_class([_(u"Det fins allerede en bruker med denne epostadressen.")])
74
75 # ZIP code digits only
76 zip_code = cleaned_data['zip_code']
77 if len(zip_code) != 4 or not zip_code.isdigit():
78 self._errors['zip_code'] = self.error_class([_(u"Postnummer må bestå av fire siffer.")])
79
80 return cleaned_data
81
82 class RecoveryForm(forms.Form):
83 email = forms.EmailField(label="Email", max_length=50)
84
85 class ChangePasswordForm(forms.Form):
86 new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"nytt passord"))
87 repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"gjenta passord"))
88
89 def clean(self):
90 super(ChangePasswordForm, self).clean()
91 if self.is_valid():
92 cleaned_data = self.cleaned_data
93
94 # Check passwords
95 if cleaned_data['new_password'] != cleaned_data['repeat_password']:
96 self._errors['repeat_password'] = self.error_class([_(u"Passordene er ikke like.")])
97
98 return cleaned_data
99
100
101 class NewEmailForm(forms.Form):
102 new_email = forms.EmailField(_(u"ny epostadresse"))
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/authentication/forms.py b/apps/authentication/forms.py
--- a/apps/authentication/forms.py
+++ b/apps/authentication/forms.py
@@ -7,7 +7,7 @@
from django.contrib import auth
from django.utils.translation import ugettext as _
-from apps.authentication.models import OnlineUser as User
+from apps.authentication.models import OnlineUser as User, Email
class LoginForm(forms.Form):
username = forms.CharField(widget=forms.TextInput(), label=_("Brukernavn"), max_length=50)
@@ -69,7 +69,7 @@
# Check email
email = cleaned_data['email']
- if User.objects.filter(email=email).count() > 0:
+ if Email.objects.filter(email=email).count() > 0:
self._errors['email'] = self.error_class([_(u"Det fins allerede en bruker med denne epostadressen.")])
# ZIP code digits only
| {"golden_diff": "diff --git a/apps/authentication/forms.py b/apps/authentication/forms.py\n--- a/apps/authentication/forms.py\n+++ b/apps/authentication/forms.py\n@@ -7,7 +7,7 @@\n from django.contrib import auth\n from django.utils.translation import ugettext as _\n \n-from apps.authentication.models import OnlineUser as User\n+from apps.authentication.models import OnlineUser as User, Email\n \n class LoginForm(forms.Form):\n username = forms.CharField(widget=forms.TextInput(), label=_(\"Brukernavn\"), max_length=50)\n@@ -69,7 +69,7 @@\n \n # Check email\n email = cleaned_data['email']\n- if User.objects.filter(email=email).count() > 0:\n+ if Email.objects.filter(email=email).count() > 0:\n self._errors['email'] = self.error_class([_(u\"Det fins allerede en bruker med denne epostadressen.\")])\n \n # ZIP code digits only\n", "issue": "Registering with an existing e-mail creates the user\nWhen registering:\n\nChoose a username\nEnter an email-adresse already in use.\nThe user will be created, and your chosen username will be taken!\n\nRegistering with an existing e-mail creates the user\nWhen registering:\n\nChoose a username\nEnter an email-adresse already in use.\nThe user will be created, and your chosen username will be taken!\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport datetime\nimport re\n\nfrom django import forms\nfrom django.contrib import auth\nfrom django.utils.translation import ugettext as _\n\nfrom apps.authentication.models import OnlineUser as User\n\nclass LoginForm(forms.Form):\n username = forms.CharField(widget=forms.TextInput(), label=_(\"Brukernavn\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Passord\"))\n user = None\n\n def clean(self):\n if self._errors:\n return\n \n user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])\n\n if user:\n if user.is_active:\n self.user = user\n else:\n self._errors['username'] = self.error_class([_(u\"Din konto er ikke aktiv. 
Fors\u00f8k gjenoppretning av passord.\")])\n else:\n self._errors['username'] = self.error_class([_(u\"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.\")])\n return self.cleaned_data\n\n def login(self, request):\n try:\n User.objects.get(username=request.POST['username'])\n except:\n return False\n if self.is_valid():\n auth.login(request, self.user)\n request.session.set_expiry(0)\n return True\n return False\n\nclass RegisterForm(forms.Form):\n username = forms.CharField(label=_(\"brukernavn\"), max_length=20)\n first_name = forms.CharField(label=_(\"fornavn\"), max_length=50)\n last_name = forms.CharField(label=_(\"etternavn\"), max_length=50)\n email = forms.EmailField(label=_(\"epost\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"gjenta passord\"))\n address = forms.CharField(label=_(\"adresse\"), max_length=50)\n zip_code = forms.CharField(label=_(\"postnummer\"), max_length=4)\n phone = forms.CharField(label=_(\"telefon\"), max_length=20)\n \n def clean(self):\n super(RegisterForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n # Check username\n username = cleaned_data['username']\n if User.objects.filter(username=username).count() > 0:\n self._errors['username'] = self.error_class([_(u\"Brukernavnet er allerede registrert.\")])\n if not re.match(\"^[a-zA-Z0-9_-]+$\", username):\n self._errors['username'] = self.error_class([_(u\"Ditt brukernavn inneholdt ulovlige tegn. 
Lovlige tegn: a-Z 0-9 - _\")])\n\n # Check email\n email = cleaned_data['email']\n if User.objects.filter(email=email).count() > 0:\n self._errors['email'] = self.error_class([_(u\"Det fins allerede en bruker med denne epostadressen.\")])\n\n # ZIP code digits only\n zip_code = cleaned_data['zip_code']\n if len(zip_code) != 4 or not zip_code.isdigit():\n self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n\n return cleaned_data \n\nclass RecoveryForm(forms.Form):\n email = forms.EmailField(label=\"Email\", max_length=50)\n\nclass ChangePasswordForm(forms.Form):\n new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"nytt passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"gjenta passord\"))\n\n def clean(self):\n super(ChangePasswordForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['new_password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n return cleaned_data\n\n\nclass NewEmailForm(forms.Form):\n new_email = forms.EmailField(_(u\"ny epostadresse\"))\n", "path": "apps/authentication/forms.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport datetime\nimport re\n\nfrom django import forms\nfrom django.contrib import auth\nfrom django.utils.translation import ugettext as _\n\nfrom apps.authentication.models import OnlineUser as User, Email\n\nclass LoginForm(forms.Form):\n username = forms.CharField(widget=forms.TextInput(), label=_(\"Brukernavn\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Passord\"))\n user = None\n\n def clean(self):\n if self._errors:\n return\n \n user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])\n\n if user:\n if user.is_active:\n self.user = user\n else:\n self._errors['username'] = self.error_class([_(u\"Din konto er ikke aktiv. 
Fors\u00f8k gjenoppretning av passord.\")])\n else:\n self._errors['username'] = self.error_class([_(u\"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.\")])\n return self.cleaned_data\n\n def login(self, request):\n try:\n User.objects.get(username=request.POST['username'])\n except:\n return False\n if self.is_valid():\n auth.login(request, self.user)\n request.session.set_expiry(0)\n return True\n return False\n\nclass RegisterForm(forms.Form):\n username = forms.CharField(label=_(\"brukernavn\"), max_length=20)\n first_name = forms.CharField(label=_(\"fornavn\"), max_length=50)\n last_name = forms.CharField(label=_(\"etternavn\"), max_length=50)\n email = forms.EmailField(label=_(\"epost\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"gjenta passord\"))\n address = forms.CharField(label=_(\"adresse\"), max_length=50)\n zip_code = forms.CharField(label=_(\"postnummer\"), max_length=4)\n phone = forms.CharField(label=_(\"telefon\"), max_length=20)\n \n def clean(self):\n super(RegisterForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n # Check username\n username = cleaned_data['username']\n if User.objects.filter(username=username).count() > 0:\n self._errors['username'] = self.error_class([_(u\"Brukernavnet er allerede registrert.\")])\n if not re.match(\"^[a-zA-Z0-9_-]+$\", username):\n self._errors['username'] = self.error_class([_(u\"Ditt brukernavn inneholdt ulovlige tegn. Lovlige tegn: a-Z 0-9 - _\")])\n\n # Check email\n email = cleaned_data['email']\n if Email.objects.filter(email=email).count() > 0:\n self._errors['email'] = self.error_class([_(u\"Det fins allerede en bruker med denne epostadressen.\")])\n\n # ZIP code digits only\n zip_code = cleaned_data['zip_code']\n if len(zip_code) != 4 or not zip_code.isdigit():\n self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n\n return cleaned_data \n\nclass RecoveryForm(forms.Form):\n email = forms.EmailField(label=\"Email\", max_length=50)\n\nclass ChangePasswordForm(forms.Form):\n new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"nytt passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"gjenta passord\"))\n\n def clean(self):\n super(ChangePasswordForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['new_password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n return cleaned_data\n\n\nclass NewEmailForm(forms.Form):\n new_email = forms.EmailField(_(u\"ny epostadresse\"))\n", "path": "apps/authentication/forms.py"}]} | 1,504 | 199 |
gh_patches_debug_8087 | rasdani/github-patches | git_diff | ansible__ansible-11609 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
set_fact skipping
The following code works under ansible 1.9 but fails on the latest devel checkout. It runs the same rules, but skips the set_fact command.
```
- name: Set zookeeper ID facts
gather_facts: True
hosts: zookeeper
user: username
sudo: True
tasks:
- set_fact: zkid={{ item.0 | int + 1 }}
when: hostvars[item.1]['ansible_hostname'] == ansible_hostname
with_indexed_items: groups['zookeeper']
```
This assigns a unique id to each zookeeper instance dynamically.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/executor/task_result.py`
Content:
```
1 # (c) 2012-2014, Michael DeHaan <[email protected]>
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17
18 # Make coding more python3-ish
19 from __future__ import (absolute_import, division, print_function)
20 __metaclass__ = type
21
22 from ansible.parsing import DataLoader
23
24 class TaskResult:
25     '''
26     This class is responsible for interpretting the resulting data
27     from an executed task, and provides helper methods for determining
28     the result of a given task.
29     '''
30 
31     def __init__(self, host, task, return_data):
32         self._host = host
33         self._task = task
34         if isinstance(return_data, dict):
35             self._result = return_data.copy()
36         else:
37             self._result = DataLoader().load(return_data)
38 
39     def is_changed(self):
40         return self._check_key('changed')
41 
42     def is_skipped(self):
43         return self._check_key('skipped')
44 
45     def is_failed(self):
46         if 'failed_when_result' in self._result or \
47            'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:
48             return self._check_key('failed_when_result')
49         else:
50             return self._check_key('failed') or self._result.get('rc', 0) != 0
51 
52     def is_unreachable(self):
53         return self._check_key('unreachable')
54 
55     def _check_key(self, key):
56         if 'results' in self._result:
57             flag = False
58             for res in self._result.get('results', []):
59                 if isinstance(res, dict):
60                     flag |= res.get(key, False)
61             return flag
62         else:
63             return self._result.get(key, False)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/lib/ansible/executor/task_result.py b/lib/ansible/executor/task_result.py
--- a/lib/ansible/executor/task_result.py
+++ b/lib/ansible/executor/task_result.py
@@ -40,7 +40,14 @@
         return self._check_key('changed')
 
     def is_skipped(self):
-        return self._check_key('skipped')
+        if 'results' in self._result:
+            flag = True
+            for res in self._result.get('results', []):
+                if isinstance(res, dict):
+                    flag &= res.get('skipped', False)
+            return flag
+        else:
+            return self._result.get('skipped', False)
 
     def is_failed(self):
         if 'failed_when_result' in self._result or \
| {"golden_diff": "diff --git a/lib/ansible/executor/task_result.py b/lib/ansible/executor/task_result.py\n--- a/lib/ansible/executor/task_result.py\n+++ b/lib/ansible/executor/task_result.py\n@@ -40,7 +40,14 @@\n return self._check_key('changed')\n \n def is_skipped(self):\n- return self._check_key('skipped')\n+ if 'results' in self._result:\n+ flag = True\n+ for res in self._result.get('results', []):\n+ if isinstance(res, dict):\n+ flag &= res.get('skipped', False)\n+ return flag\n+ else:\n+ return self._result.get('skipped', False)\n \n def is_failed(self):\n if 'failed_when_result' in self._result or \\\n", "issue": "set_fact skipping\nThe following code works under ansible 1.9 but fails on the latest devel checkout. It runs the same rules, but skips the set_fact command.\n\n```\n- name: Set zookeeper ID facts\n gather_facts: True\n hosts: zookeeper\n user: username\n sudo: True\n tasks:\n - set_fact: zkid={{ item.0 | int + 1 }}\n when: hostvars[item.1]['ansible_hostname'] == ansible_hostname\n with_indexed_items: groups['zookeeper']\n```\n\nThis assigns a unique id to each zookeeper instance dynamically.\n\n", "before_files": [{"content": "# (c) 2012-2014, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nfrom ansible.parsing import DataLoader\n\nclass TaskResult:\n '''\n This class is responsible for interpretting the resulting data\n from an executed task, and provides helper methods for determining\n the result of a given task.\n '''\n\n def __init__(self, host, task, return_data):\n self._host = host\n self._task = task\n if isinstance(return_data, dict):\n self._result = return_data.copy()\n else:\n self._result = DataLoader().load(return_data)\n\n def is_changed(self):\n return self._check_key('changed')\n\n def is_skipped(self):\n return self._check_key('skipped')\n\n def is_failed(self):\n if 'failed_when_result' in self._result or \\\n 'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:\n return self._check_key('failed_when_result')\n else:\n return self._check_key('failed') or self._result.get('rc', 0) != 0\n\n def is_unreachable(self):\n return self._check_key('unreachable')\n\n def _check_key(self, key):\n if 'results' in self._result:\n flag = False\n for res in self._result.get('results', []):\n if isinstance(res, dict):\n flag |= res.get(key, False)\n return flag\n else:\n return self._result.get(key, False)\n", "path": "lib/ansible/executor/task_result.py"}], "after_files": [{"content": "# (c) 2012-2014, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nfrom ansible.parsing import DataLoader\n\nclass TaskResult:\n '''\n This class is responsible for interpretting the resulting data\n from an executed task, and provides helper methods for determining\n the result of a given task.\n '''\n\n def __init__(self, host, task, return_data):\n self._host = host\n self._task = task\n if isinstance(return_data, dict):\n self._result = return_data.copy()\n else:\n self._result = DataLoader().load(return_data)\n\n def is_changed(self):\n return self._check_key('changed')\n\n def is_skipped(self):\n if 'results' in self._result:\n flag = True\n for res in self._result.get('results', []):\n if isinstance(res, dict):\n flag &= res.get('skipped', False)\n return flag\n else:\n return self._result.get('skipped', False)\n\n def is_failed(self):\n if 'failed_when_result' in self._result or \\\n 'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:\n return self._check_key('failed_when_result')\n else:\n return self._check_key('failed') or self._result.get('rc', 0) != 0\n\n def is_unreachable(self):\n return self._check_key('unreachable')\n\n def _check_key(self, key):\n if 'results' in self._result:\n flag = False\n for res in self._result.get('results', []):\n if isinstance(res, dict):\n flag |= res.get(key, False)\n return flag\n else:\n return self._result.get(key, False)\n", "path": "lib/ansible/executor/task_result.py"}]} | 1,042 | 178 |
gh_patches_debug_28365 | rasdani/github-patches | git_diff | learningequality__kolibri-8691 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Learner - Cannot change username and full name
## Observed behavior
The fields for editing a learner's username and full name are disabled and I cannot change them.
## Expected behavior
It should be possible for a learner to edit their username and full name if the options are enabled in Facility settings
## Steps to reproduce the issue
1. Install the following [build.](https://github.com/learningequality/kolibri/releases/tag/v0.15.0-beta2)
2. Create a facility with enabled options for 'Allow learners to edit their username' and 'Allow learners to edit their full name' in Facility settings
3. Create a Learner user
4. Sign in with the Learner, go to Profile and select the Edit button.
5. Attempt to edit the Full name and Username fields.
## Additional information

Logs:
[logs.zip](https://github.com/learningequality/kolibri/files/7540298/logs.zip)
## Usage Details
- OS: Windows 10
- Browser: Chrome
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/core/auth/middleware.py`
Content:
```
1 from django.apps import apps
2 from django.conf import settings
3 from django.contrib.auth import _get_user_session_key
4 from django.contrib.auth import get_user
5 from django.contrib.auth.middleware import AuthenticationMiddleware
6 from django.contrib.sessions.middleware import SessionMiddleware
7 from django.core.cache import cache
8 from django.core.exceptions import ImproperlyConfigured
9 from django.utils.functional import SimpleLazyObject
10
11
12 def get_anonymous_user_model():
13 """
14 Return the Anonymous User model that is active in this project.
15 """
16 try:
17 app_name = settings.AUTH_ANONYMOUS_USER_MODEL.split(".")[0]
18 except AttributeError:
19 raise ImproperlyConfigured("AUTH_ANONYMOUS_USER_MODEL is not a string")
20 try:
21 model_name = settings.AUTH_ANONYMOUS_USER_MODEL.split(".")[1]
22 app = apps.get_app_config(app_name)
23 models_module = app.models_module
24 except IndexError:
25 raise ImproperlyConfigured(
26 "AUTH_ANONYMOUS_USER_MODEL must be of the form 'app_label.model_name'"
27 )
28 except LookupError:
29 raise ImproperlyConfigured(
30 "AUTH_ANONYMOUS_USER_MODEL refers to an app '{}' that has not been installed".format(
31 app_name
32 )
33 )
34 try:
35 return getattr(models_module, model_name)
36 except AttributeError:
37 raise ImproperlyConfigured(
38 "AUTH_ANONYMOUS_USER_MODEL refers to a model '{}' that does not exist in the app '{}'".format(
39 model_name, app_name
40 )
41 )
42
43
44 def _get_user(request):
45 if not hasattr(request, "_cached_user"):
46 try:
47 user_id = _get_user_session_key(request)
48 USER_CACHE_KEY = "USER_BY_SESSION_CACHE_{}".format(user_id)
49 user = cache.get(USER_CACHE_KEY)
50 if not user:
51 user = get_user(request)
52 cache.set(USER_CACHE_KEY, user)
53 except KeyError:
54 user = get_user(request)
55 if user.is_anonymous():
56 AnonymousUser = get_anonymous_user_model()
57 user = AnonymousUser()
58 request._cached_user = user
59
60 return request._cached_user
61
62
63 class CustomAuthenticationMiddleware(AuthenticationMiddleware):
64 """
65 Adaptation of Django's ``account.middleware.AuthenticationMiddleware``
66 to replace the default AnonymousUser with a custom implementation.
67 """
68
69 def process_request(self, request):
70 if not hasattr(request, "session"):
71 raise AssertionError(
72 "The authentication middleware requires session middleware "
73 "to be installed. Edit your MIDDLEWARE_CLASSES setting to insert "
74 "'django.contrib.sessions.middleware.SessionMiddleware' before "
75 "'kolibri.core.auth.middleware.CustomAuthenticationMiddleware'."
76 )
77 request.user = SimpleLazyObject(lambda: _get_user(request))
78
79
80 class XhrPreventLoginPromptMiddleware(object):
81 """
82 By default, HTTP 401 responses are sent with a ``WWW-Authenticate``
83 header. Web browsers react to this header by displaying a login prompt
84 dialog. By removing the header, the login prompt can be avoided. While
85 this isn't recommended in general, there's a convention of removing it
86 for XHR requests, so that unauthenticated XHR requests don't trigger a
87 popup.
88
89 See `here <https://stackoverflow.com/a/20221330>`_ for reference.
90 """
91
92 def __init__(self, get_response):
93 self.get_response = get_response
94
95 def __call__(self, request):
96 response = self.get_response(request)
97 if response and response.status_code == 401 and request.is_ajax():
98 del response["WWW-Authenticate"]
99 return response
100
101
102 SESSION_EXEMPT = "_session_exempt"
103
104
105 def session_exempt(view):
106 def wrapper_func(*args, **kwargs):
107 return view(*args, **kwargs)
108
109 setattr(wrapper_func, SESSION_EXEMPT, True)
110 return wrapper_func
111
112
113 class KolibriSessionMiddleware(SessionMiddleware):
114 def _is_exempt(self, obj):
115 return hasattr(obj, SESSION_EXEMPT)
116
117 def process_view(self, request, callback, callback_args, callback_kwargs):
118 if self._is_exempt(callback):
119 setattr(request, SESSION_EXEMPT, True)
120 return None
121
122 def process_response(self, request, response):
123 if self._is_exempt(request):
124 return response
125 return super(KolibriSessionMiddleware, self).process_response(request, response)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/kolibri/core/auth/middleware.py b/kolibri/core/auth/middleware.py
--- a/kolibri/core/auth/middleware.py
+++ b/kolibri/core/auth/middleware.py
@@ -6,6 +6,7 @@
 from django.contrib.sessions.middleware import SessionMiddleware
 from django.core.cache import cache
 from django.core.exceptions import ImproperlyConfigured
+from django.db.models.signals import post_save
 from django.utils.functional import SimpleLazyObject
 
 
@@ -41,11 +42,14 @@
         )
 
 
+USER_SESSION_CACHE_KEY = "USER_BY_SESSION_CACHE_{}"
+
+
 def _get_user(request):
     if not hasattr(request, "_cached_user"):
         try:
             user_id = _get_user_session_key(request)
-            USER_CACHE_KEY = "USER_BY_SESSION_CACHE_{}".format(user_id)
+            USER_CACHE_KEY = USER_SESSION_CACHE_KEY.format(user_id)
             user = cache.get(USER_CACHE_KEY)
             if not user:
                 user = get_user(request)
@@ -60,6 +64,14 @@
     return request._cached_user
 
 
+def clear_user_cache(sender, instance, created, **kwargs):
+    if not created:
+        cache.delete(USER_SESSION_CACHE_KEY.format(instance.id))
+
+
+post_save.connect(clear_user_cache, sender=settings.AUTH_USER_MODEL)
+
+
 class CustomAuthenticationMiddleware(AuthenticationMiddleware):
     """
     Adaptation of Django's ``account.middleware.AuthenticationMiddleware``
| {"golden_diff": "diff --git a/kolibri/core/auth/middleware.py b/kolibri/core/auth/middleware.py\n--- a/kolibri/core/auth/middleware.py\n+++ b/kolibri/core/auth/middleware.py\n@@ -6,6 +6,7 @@\n from django.contrib.sessions.middleware import SessionMiddleware\n from django.core.cache import cache\n from django.core.exceptions import ImproperlyConfigured\n+from django.db.models.signals import post_save\n from django.utils.functional import SimpleLazyObject\n \n \n@@ -41,11 +42,14 @@\n )\n \n \n+USER_SESSION_CACHE_KEY = \"USER_BY_SESSION_CACHE_{}\"\n+\n+\n def _get_user(request):\n if not hasattr(request, \"_cached_user\"):\n try:\n user_id = _get_user_session_key(request)\n- USER_CACHE_KEY = \"USER_BY_SESSION_CACHE_{}\".format(user_id)\n+ USER_CACHE_KEY = USER_SESSION_CACHE_KEY.format(user_id)\n user = cache.get(USER_CACHE_KEY)\n if not user:\n user = get_user(request)\n@@ -60,6 +64,14 @@\n return request._cached_user\n \n \n+def clear_user_cache(sender, instance, created, **kwargs):\n+ if not created:\n+ cache.delete(USER_SESSION_CACHE_KEY.format(instance.id))\n+\n+\n+post_save.connect(clear_user_cache, sender=settings.AUTH_USER_MODEL)\n+\n+\n class CustomAuthenticationMiddleware(AuthenticationMiddleware):\n \"\"\"\n Adaptation of Django's ``account.middleware.AuthenticationMiddleware``\n", "issue": "Learner - Cannot change username and full name\n## Observed behavior\r\nThe fields for editing a learner's username and full name are disabled and I cannot change them.\r\n\r\n## Expected behavior\r\nIt should be possible for a learner to edit their username and full name if the options are enabled in Facility settings\r\n\r\n## Steps to reproduce the issue\r\n1. Install the following [build.](https://github.com/learningequality/kolibri/releases/tag/v0.15.0-beta2)\r\n2. Create a facility with enabled options for 'Allow learners to edit their username' and 'Allow learners to edit their full name' in Facility settings\r\n3. Create a Learner user\r\n4. Sign in with the Learner, go to Profile and select the Edit button.\r\n5. 
Attempt to edit the Full name and Username fields.\r\n\r\n## Additional information\r\n\r\n\r\nLogs: \r\n[logs.zip](https://github.com/learningequality/kolibri/files/7540298/logs.zip)\r\n\r\n## Usage Details\r\n - OS: Windows 10\r\n - Browser: Chrome\n", "before_files": [{"content": "from django.apps import apps\nfrom django.conf import settings\nfrom django.contrib.auth import _get_user_session_key\nfrom django.contrib.auth import get_user\nfrom django.contrib.auth.middleware import AuthenticationMiddleware\nfrom django.contrib.sessions.middleware import SessionMiddleware\nfrom django.core.cache import cache\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.utils.functional import SimpleLazyObject\n\n\ndef get_anonymous_user_model():\n \"\"\"\n Return the Anonymous User model that is active in this project.\n \"\"\"\n try:\n app_name = settings.AUTH_ANONYMOUS_USER_MODEL.split(\".\")[0]\n except AttributeError:\n raise ImproperlyConfigured(\"AUTH_ANONYMOUS_USER_MODEL is not a string\")\n try:\n model_name = settings.AUTH_ANONYMOUS_USER_MODEL.split(\".\")[1]\n app = apps.get_app_config(app_name)\n models_module = app.models_module\n except IndexError:\n raise ImproperlyConfigured(\n \"AUTH_ANONYMOUS_USER_MODEL must be of the form 'app_label.model_name'\"\n )\n except LookupError:\n raise ImproperlyConfigured(\n \"AUTH_ANONYMOUS_USER_MODEL refers to an app '{}' that has not been installed\".format(\n app_name\n )\n )\n try:\n return getattr(models_module, model_name)\n except AttributeError:\n raise ImproperlyConfigured(\n \"AUTH_ANONYMOUS_USER_MODEL refers to a model '{}' that does not exist in the app '{}'\".format(\n model_name, app_name\n )\n )\n\n\ndef _get_user(request):\n if not hasattr(request, \"_cached_user\"):\n try:\n user_id = _get_user_session_key(request)\n USER_CACHE_KEY = \"USER_BY_SESSION_CACHE_{}\".format(user_id)\n user = cache.get(USER_CACHE_KEY)\n if not user:\n user = get_user(request)\n cache.set(USER_CACHE_KEY, user)\n except KeyError:\n user = get_user(request)\n if user.is_anonymous():\n AnonymousUser = get_anonymous_user_model()\n user = AnonymousUser()\n request._cached_user = user\n\n return request._cached_user\n\n\nclass CustomAuthenticationMiddleware(AuthenticationMiddleware):\n \"\"\"\n Adaptation of Django's ``account.middleware.AuthenticationMiddleware``\n to replace the default AnonymousUser with a custom implementation.\n \"\"\"\n\n def process_request(self, request):\n if not hasattr(request, \"session\"):\n raise AssertionError(\n \"The authentication middleware requires session middleware \"\n \"to be installed. Edit your MIDDLEWARE_CLASSES setting to insert \"\n \"'django.contrib.sessions.middleware.SessionMiddleware' before \"\n \"'kolibri.core.auth.middleware.CustomAuthenticationMiddleware'.\"\n )\n request.user = SimpleLazyObject(lambda: _get_user(request))\n\n\nclass XhrPreventLoginPromptMiddleware(object):\n \"\"\"\n By default, HTTP 401 responses are sent with a ``WWW-Authenticate``\n header. Web browsers react to this header by displaying a login prompt\n dialog. By removing the header, the login prompt can be avoided. 
While\n this isn't recommended in general, there's a convention of removing it\n for XHR requests, so that unauthenticated XHR requests don't trigger a\n popup.\n\n See `here <https://stackoverflow.com/a/20221330>`_ for reference.\n \"\"\"\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def __call__(self, request):\n response = self.get_response(request)\n if response and response.status_code == 401 and request.is_ajax():\n del response[\"WWW-Authenticate\"]\n return response\n\n\nSESSION_EXEMPT = \"_session_exempt\"\n\n\ndef session_exempt(view):\n def wrapper_func(*args, **kwargs):\n return view(*args, **kwargs)\n\n setattr(wrapper_func, SESSION_EXEMPT, True)\n return wrapper_func\n\n\nclass KolibriSessionMiddleware(SessionMiddleware):\n def _is_exempt(self, obj):\n return hasattr(obj, SESSION_EXEMPT)\n\n def process_view(self, request, callback, callback_args, callback_kwargs):\n if self._is_exempt(callback):\n setattr(request, SESSION_EXEMPT, True)\n return None\n\n def process_response(self, request, response):\n if self._is_exempt(request):\n return response\n return super(KolibriSessionMiddleware, self).process_response(request, response)\n", "path": "kolibri/core/auth/middleware.py"}], "after_files": [{"content": "from django.apps import apps\nfrom django.conf import settings\nfrom django.contrib.auth import _get_user_session_key\nfrom django.contrib.auth import get_user\nfrom django.contrib.auth.middleware import AuthenticationMiddleware\nfrom django.contrib.sessions.middleware import SessionMiddleware\nfrom django.core.cache import cache\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db.models.signals import post_save\nfrom django.utils.functional import SimpleLazyObject\n\n\ndef get_anonymous_user_model():\n \"\"\"\n Return the Anonymous User model that is active in this project.\n \"\"\"\n try:\n app_name = settings.AUTH_ANONYMOUS_USER_MODEL.split(\".\")[0]\n except AttributeError:\n raise ImproperlyConfigured(\"AUTH_ANONYMOUS_USER_MODEL is not a string\")\n try:\n model_name = settings.AUTH_ANONYMOUS_USER_MODEL.split(\".\")[1]\n app = apps.get_app_config(app_name)\n models_module = app.models_module\n except IndexError:\n raise ImproperlyConfigured(\n \"AUTH_ANONYMOUS_USER_MODEL must be of the form 'app_label.model_name'\"\n )\n except LookupError:\n raise ImproperlyConfigured(\n \"AUTH_ANONYMOUS_USER_MODEL refers to an app '{}' that has not been installed\".format(\n app_name\n )\n )\n try:\n return getattr(models_module, model_name)\n except AttributeError:\n raise ImproperlyConfigured(\n \"AUTH_ANONYMOUS_USER_MODEL refers to a model '{}' that does not exist in the app '{}'\".format(\n model_name, app_name\n )\n )\n\n\nUSER_SESSION_CACHE_KEY = \"USER_BY_SESSION_CACHE_{}\"\n\n\ndef _get_user(request):\n if not hasattr(request, \"_cached_user\"):\n try:\n user_id = _get_user_session_key(request)\n USER_CACHE_KEY = USER_SESSION_CACHE_KEY.format(user_id)\n user = cache.get(USER_CACHE_KEY)\n if not user:\n user = get_user(request)\n cache.set(USER_CACHE_KEY, user)\n except KeyError:\n user = get_user(request)\n if user.is_anonymous():\n AnonymousUser = get_anonymous_user_model()\n user = AnonymousUser()\n request._cached_user = user\n\n return request._cached_user\n\n\ndef clear_user_cache(sender, instance, created, **kwargs):\n if not created:\n cache.delete(USER_SESSION_CACHE_KEY.format(instance.id))\n\n\npost_save.connect(clear_user_cache, sender=settings.AUTH_USER_MODEL)\n\n\nclass 
CustomAuthenticationMiddleware(AuthenticationMiddleware):\n \"\"\"\n Adaptation of Django's ``account.middleware.AuthenticationMiddleware``\n to replace the default AnonymousUser with a custom implementation.\n \"\"\"\n\n def process_request(self, request):\n if not hasattr(request, \"session\"):\n raise AssertionError(\n \"The authentication middleware requires session middleware \"\n \"to be installed. Edit your MIDDLEWARE_CLASSES setting to insert \"\n \"'django.contrib.sessions.middleware.SessionMiddleware' before \"\n \"'kolibri.core.auth.middleware.CustomAuthenticationMiddleware'.\"\n )\n request.user = SimpleLazyObject(lambda: _get_user(request))\n\n\nclass XhrPreventLoginPromptMiddleware(object):\n \"\"\"\n By default, HTTP 401 responses are sent with a ``WWW-Authenticate``\n header. Web browsers react to this header by displaying a login prompt\n dialog. By removing the header, the login prompt can be avoided. While\n this isn't recommended in general, there's a convention of removing it\n for XHR requests, so that unauthenticated XHR requests don't trigger a\n popup.\n\n See `here <https://stackoverflow.com/a/20221330>`_ for reference.\n \"\"\"\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def __call__(self, request):\n response = self.get_response(request)\n if response and response.status_code == 401 and request.is_ajax():\n del response[\"WWW-Authenticate\"]\n return response\n\n\nSESSION_EXEMPT = \"_session_exempt\"\n\n\ndef session_exempt(view):\n def wrapper_func(*args, **kwargs):\n return view(*args, **kwargs)\n\n setattr(wrapper_func, SESSION_EXEMPT, True)\n return wrapper_func\n\n\nclass KolibriSessionMiddleware(SessionMiddleware):\n def _is_exempt(self, obj):\n return hasattr(obj, SESSION_EXEMPT)\n\n def process_view(self, request, callback, callback_args, callback_kwargs):\n if self._is_exempt(callback):\n setattr(request, SESSION_EXEMPT, True)\n return None\n\n def process_response(self, request, response):\n if self._is_exempt(request):\n return response\n return super(KolibriSessionMiddleware, self).process_response(request, response)\n", "path": "kolibri/core/auth/middleware.py"}]} | 1,762 | 310 |
gh_patches_debug_24177 | rasdani/github-patches | git_diff | pre-commit__pre-commit-756 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
venv tests break virtualenv's `pip` when run from a `-mvirtualenv` virtualenv
Here's a reproduction, not exactly sure what's happening here:
```
$ tox -e py36 -r --notest
GLOB sdist-make: /home/asottile/workspace/pre-commit/setup.py
py36 create: /home/asottile/workspace/pre-commit/.tox/py36
py36 installdeps: -rrequirements-dev.txt
py36 inst: /home/asottile/workspace/pre-commit/.tox/dist/pre_commit-1.10.0.zip
py36 installed: You are using pip version 9.0.1, however version 10.0.1 is available.,You should consider upgrading via the 'pip install --upgrade pip' command.,aspy.yaml==1.1.1,atomicwrites==1.1.5,attrs==18.1.0,cached-property==1.4.2,cfgv==1.0.0,coverage==4.5.1,flake8==3.5.0,identify==1.0.18,mccabe==0.6.1,mock==2.0.0,more-itertools==4.2.0,nodeenv==1.3.0,pbr==4.0.3,pluggy==0.6.0,-e [email protected]:pre-commit/pre-commit@97fb49a533de9a378d20f0a41e79df118362e534#egg=pre_commit,py==1.5.3,pycodestyle==2.3.1,pyflakes==1.6.0,pytest==3.6.0,pytest-env==0.6.2,PyYAML==3.12,six==1.11.0,toml==0.9.4,virtualenv==16.0.0
___________________________________ summary ____________________________________
py36: skipped tests
congratulations :)
$ head -1 .tox/py36/bin/pip
#!/home/asottile/workspace/pre-commit/.tox/py36/bin/python3.6
$ .tox/py36/bin/pytest tests -k venv
============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/asottile/workspace/pre-commit, inifile: tox.ini
plugins: env-0.6.2
collected 500 items / 492 deselected
tests/repository_test.py .. [ 25%]
tests/commands/install_uninstall_test.py . [ 37%]
tests/languages/all_test.py ..... [100%]
=================== 8 passed, 492 deselected in 4.12 seconds ===================
$ head -1 .tox/py36/bin/pip
#!/home/asottile/workspace/pre-commit/.tox/py36/bin/python3.6
$ tox -e py36 -- tests -k venv
GLOB sdist-make: /home/asottile/workspace/pre-commit/setup.py
py36 inst-nodeps: /home/asottile/workspace/pre-commit/.tox/dist/pre_commit-1.10.0.zip
py36 installed: You are using pip version 9.0.1, however version 10.0.1 is available.,You should consider upgrading via the 'pip install --upgrade pip' command.,aspy.yaml==1.1.1,atomicwrites==1.1.5,attrs==18.1.0,cached-property==1.4.2,cfgv==1.0.0,coverage==4.5.1,flake8==3.5.0,identify==1.0.18,mccabe==0.6.1,mock==2.0.0,more-itertools==4.2.0,nodeenv==1.3.0,pbr==4.0.3,pluggy==0.6.0,pre-commit==1.10.0,py==1.5.3,pycodestyle==2.3.1,pyflakes==1.6.0,pytest==3.6.0,pytest-env==0.6.2,PyYAML==3.12,six==1.11.0,toml==0.9.4,virtualenv==16.0.0
py36 runtests: PYTHONHASHSEED='93802395'
py36 runtests: commands[0] | coverage erase
py36 runtests: commands[1] | coverage run -m pytest tests -k venv
============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/asottile/workspace/pre-commit, inifile: tox.ini
plugins: env-0.6.2
collected 500 items / 492 deselected
tests/repository_test.py .. [ 25%]
tests/commands/install_uninstall_test.py . [ 37%]
tests/languages/all_test.py ..... [100%]
=================== 8 passed, 492 deselected in 4.32 seconds ===================
py36 runtests: commands[2] | coverage report --fail-under 99
Name Stmts Miss Branch BrPart Cover Missing
---------------------------------------------------------------------------------------------
...
17 files skipped due to complete coverage.
ERROR: InvocationError: '/home/asottile/workspace/pre-commit/.tox/py36/bin/coverage report --fail-under 99'
___________________________________ summary ____________________________________
ERROR: py36: commands failed
$ head -1 .tox/py36/bin/pip
#!/tmp/pytest-of-asottile/pytest-3/test_python_venv0/0/.pre-commit/repo5xcuq11q/py_venv-python3.6/bin/python3.6
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/python_venv.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from pre_commit.languages import python
4 from pre_commit.util import cmd_output
5
6
7 ENVIRONMENT_DIR = 'py_venv'
8
9
10 def make_venv(envdir, python):
11     cmd_output(python, '-mvenv', envdir, cwd='/')
12
13
14 get_default_version = python.get_default_version
15 _interface = python.py_interface(ENVIRONMENT_DIR, make_venv)
16 in_env, healthy, run_hook, install_environment = _interface
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/pre_commit/languages/python_venv.py b/pre_commit/languages/python_venv.py
--- a/pre_commit/languages/python_venv.py
+++ b/pre_commit/languages/python_venv.py
@@ -1,14 +1,46 @@
 from __future__ import unicode_literals
 
+import os.path
+
 from pre_commit.languages import python
+from pre_commit.util import CalledProcessError
 from pre_commit.util import cmd_output
 
 
 ENVIRONMENT_DIR = 'py_venv'
 
 
+def orig_py_exe(exe):  # pragma: no cover (platform specific)
+    """A -mvenv virtualenv made from a -mvirtualenv virtualenv installs
+    packages to the incorrect location.  Attempt to find the _original_ exe
+    and invoke `-mvenv` from there.
+
+    See:
+    - https://github.com/pre-commit/pre-commit/issues/755
+    - https://github.com/pypa/virtualenv/issues/1095
+    - https://bugs.python.org/issue30811
+    """
+    try:
+        prefix_script = 'import sys; print(sys.real_prefix)'
+        _, prefix, _ = cmd_output(exe, '-c', prefix_script)
+        prefix = prefix.strip()
+    except CalledProcessError:
+        # not created from -mvirtualenv
+        return exe
+
+    if os.name == 'nt':
+        expected = os.path.join(prefix, 'python.exe')
+    else:
+        expected = os.path.join(prefix, 'bin', os.path.basename(exe))
+
+    if os.path.exists(expected):
+        return expected
+    else:
+        return exe
+
+
 def make_venv(envdir, python):
-    cmd_output(python, '-mvenv', envdir, cwd='/')
+    cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')
 
 
 get_default_version = python.get_default_version
| {"golden_diff": "diff --git a/pre_commit/languages/python_venv.py b/pre_commit/languages/python_venv.py\n--- a/pre_commit/languages/python_venv.py\n+++ b/pre_commit/languages/python_venv.py\n@@ -1,14 +1,46 @@\n from __future__ import unicode_literals\n \n+import os.path\n+\n from pre_commit.languages import python\n+from pre_commit.util import CalledProcessError\n from pre_commit.util import cmd_output\n \n \n ENVIRONMENT_DIR = 'py_venv'\n \n \n+def orig_py_exe(exe): # pragma: no cover (platform specific)\n+ \"\"\"A -mvenv virtualenv made from a -mvirtualenv virtualenv installs\n+ packages to the incorrect location. Attempt to find the _original_ exe\n+ and invoke `-mvenv` from there.\n+\n+ See:\n+ - https://github.com/pre-commit/pre-commit/issues/755\n+ - https://github.com/pypa/virtualenv/issues/1095\n+ - https://bugs.python.org/issue30811\n+ \"\"\"\n+ try:\n+ prefix_script = 'import sys; print(sys.real_prefix)'\n+ _, prefix, _ = cmd_output(exe, '-c', prefix_script)\n+ prefix = prefix.strip()\n+ except CalledProcessError:\n+ # not created from -mvirtualenv\n+ return exe\n+\n+ if os.name == 'nt':\n+ expected = os.path.join(prefix, 'python.exe')\n+ else:\n+ expected = os.path.join(prefix, 'bin', os.path.basename(exe))\n+\n+ if os.path.exists(expected):\n+ return expected\n+ else:\n+ return exe\n+\n+\n def make_venv(envdir, python):\n- cmd_output(python, '-mvenv', envdir, cwd='/')\n+ cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')\n \n \n get_default_version = python.get_default_version\n", "issue": "venv tests break virtualenv's `pip` when run from a `-mvirtualenv` virtualenv\nHere's a reproduction, not exactly sure what's happening here:\r\n\r\n```\r\n$ tox -e py36 -r --notest\r\nGLOB sdist-make: /home/asottile/workspace/pre-commit/setup.py\r\npy36 create: /home/asottile/workspace/pre-commit/.tox/py36\r\npy36 installdeps: -rrequirements-dev.txt\r\npy36 inst: /home/asottile/workspace/pre-commit/.tox/dist/pre_commit-1.10.0.zip\r\npy36 installed: You are using pip version 9.0.1, however version 10.0.1 is available.,You should consider upgrading via the 'pip install --upgrade pip' command.,aspy.yaml==1.1.1,atomicwrites==1.1.5,attrs==18.1.0,cached-property==1.4.2,cfgv==1.0.0,coverage==4.5.1,flake8==3.5.0,identify==1.0.18,mccabe==0.6.1,mock==2.0.0,more-itertools==4.2.0,nodeenv==1.3.0,pbr==4.0.3,pluggy==0.6.0,-e [email protected]:pre-commit/pre-commit@97fb49a533de9a378d20f0a41e79df118362e534#egg=pre_commit,py==1.5.3,pycodestyle==2.3.1,pyflakes==1.6.0,pytest==3.6.0,pytest-env==0.6.2,PyYAML==3.12,six==1.11.0,toml==0.9.4,virtualenv==16.0.0\r\n___________________________________ summary ____________________________________\r\n py36: skipped tests\r\n congratulations :)\r\n\r\n$ head -1 .tox/py36/bin/pip\r\n#!/home/asottile/workspace/pre-commit/.tox/py36/bin/python3.6\r\n$ .tox/py36/bin/pytest tests -k venv\r\n============================= test session starts ==============================\r\nplatform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0\r\nrootdir: /home/asottile/workspace/pre-commit, inifile: tox.ini\r\nplugins: env-0.6.2\r\ncollected 500 items / 492 deselected \r\n\r\ntests/repository_test.py .. [ 25%]\r\ntests/commands/install_uninstall_test.py . [ 37%]\r\ntests/languages/all_test.py ..... 
[100%]\r\n\r\n=================== 8 passed, 492 deselected in 4.12 seconds ===================\r\n$ head -1 .tox/py36/bin/pip\r\n#!/home/asottile/workspace/pre-commit/.tox/py36/bin/python3.6\r\n$ tox -e py36 -- tests -k venv\r\nGLOB sdist-make: /home/asottile/workspace/pre-commit/setup.py\r\npy36 inst-nodeps: /home/asottile/workspace/pre-commit/.tox/dist/pre_commit-1.10.0.zip\r\npy36 installed: You are using pip version 9.0.1, however version 10.0.1 is available.,You should consider upgrading via the 'pip install --upgrade pip' command.,aspy.yaml==1.1.1,atomicwrites==1.1.5,attrs==18.1.0,cached-property==1.4.2,cfgv==1.0.0,coverage==4.5.1,flake8==3.5.0,identify==1.0.18,mccabe==0.6.1,mock==2.0.0,more-itertools==4.2.0,nodeenv==1.3.0,pbr==4.0.3,pluggy==0.6.0,pre-commit==1.10.0,py==1.5.3,pycodestyle==2.3.1,pyflakes==1.6.0,pytest==3.6.0,pytest-env==0.6.2,PyYAML==3.12,six==1.11.0,toml==0.9.4,virtualenv==16.0.0\r\npy36 runtests: PYTHONHASHSEED='93802395'\r\npy36 runtests: commands[0] | coverage erase\r\npy36 runtests: commands[1] | coverage run -m pytest tests -k venv\r\n============================= test session starts ==============================\r\nplatform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0\r\nrootdir: /home/asottile/workspace/pre-commit, inifile: tox.ini\r\nplugins: env-0.6.2\r\ncollected 500 items / 492 deselected \r\n\r\ntests/repository_test.py .. [ 25%]\r\ntests/commands/install_uninstall_test.py . [ 37%]\r\ntests/languages/all_test.py ..... [100%]\r\n\r\n=================== 8 passed, 492 deselected in 4.32 seconds ===================\r\npy36 runtests: commands[2] | coverage report --fail-under 99\r\nName Stmts Miss Branch BrPart Cover Missing\r\n---------------------------------------------------------------------------------------------\r\n...\r\n17 files skipped due to complete coverage.\r\nERROR: InvocationError: '/home/asottile/workspace/pre-commit/.tox/py36/bin/coverage report --fail-under 99'\r\n___________________________________ summary ____________________________________\r\nERROR: py36: commands failed\r\n\r\n$ head -1 .tox/py36/bin/pip\r\n#!/tmp/pytest-of-asottile/pytest-3/test_python_venv0/0/.pre-commit/repo5xcuq11q/py_venv-python3.6/bin/python3.6\r\n```\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom pre_commit.languages import python\nfrom pre_commit.util import cmd_output\n\n\nENVIRONMENT_DIR = 'py_venv'\n\n\ndef make_venv(envdir, python):\n cmd_output(python, '-mvenv', envdir, cwd='/')\n\n\nget_default_version = python.get_default_version\n_interface = python.py_interface(ENVIRONMENT_DIR, make_venv)\nin_env, healthy, run_hook, install_environment = _interface\n", "path": "pre_commit/languages/python_venv.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport os.path\n\nfrom pre_commit.languages import python\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import cmd_output\n\n\nENVIRONMENT_DIR = 'py_venv'\n\n\ndef orig_py_exe(exe): # pragma: no cover (platform specific)\n \"\"\"A -mvenv virtualenv made from a -mvirtualenv virtualenv installs\n packages to the incorrect location. 
Attempt to find the _original_ exe\n and invoke `-mvenv` from there.\n\n See:\n - https://github.com/pre-commit/pre-commit/issues/755\n - https://github.com/pypa/virtualenv/issues/1095\n - https://bugs.python.org/issue30811\n \"\"\"\n try:\n prefix_script = 'import sys; print(sys.real_prefix)'\n _, prefix, _ = cmd_output(exe, '-c', prefix_script)\n prefix = prefix.strip()\n except CalledProcessError:\n # not created from -mvirtualenv\n return exe\n\n if os.name == 'nt':\n expected = os.path.join(prefix, 'python.exe')\n else:\n expected = os.path.join(prefix, 'bin', os.path.basename(exe))\n\n if os.path.exists(expected):\n return expected\n else:\n return exe\n\n\ndef make_venv(envdir, python):\n cmd_output(orig_py_exe(python), '-mvenv', envdir, cwd='/')\n\n\nget_default_version = python.get_default_version\n_interface = python.py_interface(ENVIRONMENT_DIR, make_venv)\nin_env, healthy, run_hook, install_environment = _interface\n", "path": "pre_commit/languages/python_venv.py"}]} | 1,765 | 436 |
gh_patches_debug_35079 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-962 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PORT] [Certificate Authentication] Expose sendX5c parameter
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/3741
This parameter enables application developers to achieve easy certificates roll-over in Azure AD: setting this parameter to true will send the public certificate to Azure AD along with the token request, so that Azure AD can use it to validate the subject name based on a trusted issuer policy. This saves the application admin from the need to explicitly manage the certificate rollover (either via portal or powershell/CLI operation)
# Changed projects
* Microsoft.Bot.Connector
[R9,authentication]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from abc import ABC
5
6 from msal import ConfidentialClientApplication
7
8 from .app_credentials import AppCredentials
9
10
11 class CertificateAppCredentials(AppCredentials, ABC):
12 """
13 AppCredentials implementation using a certificate.
14
15 See:
16 https://github.com/AzureAD/microsoft-authentication-library-for-python/wiki/Client-Credentials#client-credentials-with-certificate
17 """
18
19 def __init__(
20 self,
21 app_id: str,
22 certificate_thumbprint: str,
23 certificate_private_key: str,
24 channel_auth_tenant: str = None,
25 oauth_scope: str = None,
26 ):
27 # super will set proper scope and endpoint.
28 super().__init__(
29 app_id=app_id,
30 channel_auth_tenant=channel_auth_tenant,
31 oauth_scope=oauth_scope,
32 )
33
34 self.scopes = [self.oauth_scope]
35 self.app = None
36 self.certificate_thumbprint = certificate_thumbprint
37 self.certificate_private_key = certificate_private_key
38
39 def get_access_token(self, force_refresh: bool = False) -> str:
40 """
41 Implementation of AppCredentials.get_token.
42 :return: The access token for the given certificate.
43 """
44
45 # Firstly, looks up a token from cache
46 # Since we are looking for token for the current app, NOT for an end user,
47 # notice we give account parameter as None.
48 auth_token = self.__get_msal_app().acquire_token_silent(
49 self.scopes, account=None
50 )
51 if not auth_token:
52 # No suitable token exists in cache. Let's get a new one from AAD.
53 auth_token = self.__get_msal_app().acquire_token_for_client(
54 scopes=self.scopes
55 )
56 return auth_token["access_token"]
57
58 def __get_msal_app(self):
59 if not self.app:
60 self.app = ConfidentialClientApplication(
61 client_id=self.microsoft_app_id,
62 authority=self.oauth_endpoint,
63 client_credential={
64 "thumbprint": self.certificate_thumbprint,
65 "private_key": self.certificate_private_key,
66 },
67 )
68
69 return self.app
70
```
Path: `libraries/botframework-connector/setup.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 import os
4 from setuptools import setup
5
6 NAME = "botframework-connector"
7 VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.7.1"
8 REQUIRES = [
9 "msrest==0.6.10",
10 "requests==2.22.0",
11 "cryptography==2.8.0",
12 "PyJWT==1.5.3",
13 "botbuilder-schema>=4.7.1",
14 "adal==1.2.1",
15 "msal==1.1.0",
16 ]
17
18 root = os.path.abspath(os.path.dirname(__file__))
19
20 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
21 long_description = f.read()
22
23 setup(
24 name=NAME,
25 version=VERSION,
26 description="Microsoft Bot Framework Bot Builder SDK for Python.",
27 author="Microsoft",
28 url="https://www.github.com/Microsoft/botbuilder-python",
29 keywords=["BotFrameworkConnector", "bots", "ai", "botframework", "botbuilder"],
30 install_requires=REQUIRES,
31 packages=[
32 "botframework.connector",
33 "botframework.connector.auth",
34 "botframework.connector.async_mixin",
35 "botframework.connector.operations",
36 "botframework.connector.models",
37 "botframework.connector.aio",
38 "botframework.connector.aio.operations_async",
39 "botframework.connector.teams",
40 "botframework.connector.teams.operations",
41 "botframework.connector.token_api",
42 "botframework.connector.token_api.aio",
43 "botframework.connector.token_api.models",
44 "botframework.connector.token_api.operations",
45 ],
46 include_package_data=True,
47 long_description=long_description,
48 long_description_content_type="text/x-rst",
49 license="MIT",
50 classifiers=[
51 "Programming Language :: Python :: 3.7",
52 "Intended Audience :: Developers",
53 "License :: OSI Approved :: MIT License",
54 "Operating System :: OS Independent",
55 "Development Status :: 5 - Production/Stable",
56 "Topic :: Scientific/Engineering :: Artificial Intelligence",
57 ],
58 )
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
| diff --git a/libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py b/libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py
--- a/libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py
+++ b/libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py
@@ -23,7 +23,20 @@
         certificate_private_key: str,
         channel_auth_tenant: str = None,
         oauth_scope: str = None,
+        certificate_public: str = None,
     ):
+        """
+        AppCredentials implementation using a certificate.
+
+        :param app_id:
+        :param certificate_thumbprint:
+        :param certificate_private_key:
+        :param channel_auth_tenant:
+        :param oauth_scope:
+        :param certificate_public: public_certificate (optional) is public key certificate which will be sent
+        through ‘x5c’ JWT header only for subject name and issuer authentication to support cert auto rolls.
+        """
+
         # super will set proper scope and endpoint.
         super().__init__(
             app_id=app_id,
@@ -35,6 +48,7 @@
         self.app = None
         self.certificate_thumbprint = certificate_thumbprint
         self.certificate_private_key = certificate_private_key
+        self.certificate_public = certificate_public
 
     def get_access_token(self, force_refresh: bool = False) -> str:
         """
@@ -63,6 +77,9 @@
             client_credential={
                 "thumbprint": self.certificate_thumbprint,
                 "private_key": self.certificate_private_key,
+                "public_certificate": self.certificate_public
+                if self.certificate_public
+                else None,
             },
         )
 
diff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py
--- a/libraries/botframework-connector/setup.py
+++ b/libraries/botframework-connector/setup.py
@@ -12,7 +12,7 @@
     "PyJWT==1.5.3",
     "botbuilder-schema>=4.7.1",
     "adal==1.2.1",
-    "msal==1.1.0",
+    "msal==1.2.0",
 ]
 
 root = os.path.abspath(os.path.dirname(__file__))
| {"golden_diff": "diff --git a/libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py b/libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py\n--- a/libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py\n+++ b/libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py\n@@ -23,7 +23,20 @@\n certificate_private_key: str,\r\n channel_auth_tenant: str = None,\r\n oauth_scope: str = None,\r\n+ certificate_public: str = None,\r\n ):\r\n+ \"\"\"\r\n+ AppCredentials implementation using a certificate.\r\n+\r\n+ :param app_id:\r\n+ :param certificate_thumbprint:\r\n+ :param certificate_private_key:\r\n+ :param channel_auth_tenant:\r\n+ :param oauth_scope:\r\n+ :param certificate_public: public_certificate (optional) is public key certificate which will be sent\r\n+ through \u2018x5c\u2019 JWT header only for subject name and issuer authentication to support cert auto rolls.\r\n+ \"\"\"\r\n+\r\n # super will set proper scope and endpoint.\r\n super().__init__(\r\n app_id=app_id,\r\n@@ -35,6 +48,7 @@\n self.app = None\r\n self.certificate_thumbprint = certificate_thumbprint\r\n self.certificate_private_key = certificate_private_key\r\n+ self.certificate_public = certificate_public\r\n \r\n def get_access_token(self, force_refresh: bool = False) -> str:\r\n \"\"\"\r\n@@ -63,6 +77,9 @@\n client_credential={\r\n \"thumbprint\": self.certificate_thumbprint,\r\n \"private_key\": self.certificate_private_key,\r\n+ \"public_certificate\": self.certificate_public\r\n+ if self.certificate_public\r\n+ else None,\r\n },\r\n )\r\n \r\ndiff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py\n--- a/libraries/botframework-connector/setup.py\n+++ b/libraries/botframework-connector/setup.py\n@@ -12,7 +12,7 @@\n \"PyJWT==1.5.3\",\n \"botbuilder-schema>=4.7.1\",\n \"adal==1.2.1\",\n- \"msal==1.1.0\",\n+ \"msal==1.2.0\",\n ]\n \n root = os.path.abspath(os.path.dirname(__file__))\n", "issue": "[PORT] [Certificate Authentication] Expose sendX5c parameter\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/3741\n\nThis parameter enables application developers to achieve easy certificates roll-over in Azure AD: setting this parameter to true will send the public certificate to Azure AD along with the token request, so that Azure AD can use it to validate the subject name based on a trusted issuer policy. This saves the application admin from the need to explicitly manage the certificate rollover (either via portal or powershell/CLI operation)\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Connector\r\n\r\n[R9,authentication]\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nfrom abc import ABC\r\n\r\nfrom msal import ConfidentialClientApplication\r\n\r\nfrom .app_credentials import AppCredentials\r\n\r\n\r\nclass CertificateAppCredentials(AppCredentials, ABC):\r\n \"\"\"\r\n AppCredentials implementation using a certificate.\r\n\r\n See:\r\n https://github.com/AzureAD/microsoft-authentication-library-for-python/wiki/Client-Credentials#client-credentials-with-certificate\r\n \"\"\"\r\n\r\n def __init__(\r\n self,\r\n app_id: str,\r\n certificate_thumbprint: str,\r\n certificate_private_key: str,\r\n channel_auth_tenant: str = None,\r\n oauth_scope: str = None,\r\n ):\r\n # super will set proper scope and endpoint.\r\n super().__init__(\r\n app_id=app_id,\r\n channel_auth_tenant=channel_auth_tenant,\r\n oauth_scope=oauth_scope,\r\n )\r\n\r\n self.scopes = [self.oauth_scope]\r\n self.app = None\r\n self.certificate_thumbprint = certificate_thumbprint\r\n self.certificate_private_key = certificate_private_key\r\n\r\n def get_access_token(self, force_refresh: bool = False) -> str:\r\n \"\"\"\r\n Implementation of AppCredentials.get_token.\r\n :return: The access token for the given certificate.\r\n \"\"\"\r\n\r\n # Firstly, looks up a token from cache\r\n # Since we are looking for token for the current app, NOT for an end user,\r\n # notice we give account parameter as None.\r\n auth_token = self.__get_msal_app().acquire_token_silent(\r\n self.scopes, account=None\r\n )\r\n if not auth_token:\r\n # No suitable token exists in cache. Let's get a new one from AAD.\r\n auth_token = self.__get_msal_app().acquire_token_for_client(\r\n scopes=self.scopes\r\n )\r\n return auth_token[\"access_token\"]\r\n\r\n def __get_msal_app(self):\r\n if not self.app:\r\n self.app = ConfidentialClientApplication(\r\n client_id=self.microsoft_app_id,\r\n authority=self.oauth_endpoint,\r\n client_credential={\r\n \"thumbprint\": self.certificate_thumbprint,\r\n \"private_key\": self.certificate_private_key,\r\n },\r\n )\r\n\r\n return self.app\r\n", "path": "libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py"}, {"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\nimport os\nfrom setuptools import setup\n\nNAME = \"botframework-connector\"\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.7.1\"\nREQUIRES = [\n \"msrest==0.6.10\",\n \"requests==2.22.0\",\n \"cryptography==2.8.0\",\n \"PyJWT==1.5.3\",\n \"botbuilder-schema>=4.7.1\",\n \"adal==1.2.1\",\n \"msal==1.1.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=NAME,\n version=VERSION,\n description=\"Microsoft Bot Framework Bot Builder SDK for Python.\",\n author=\"Microsoft\",\n url=\"https://www.github.com/Microsoft/botbuilder-python\",\n keywords=[\"BotFrameworkConnector\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n install_requires=REQUIRES,\n packages=[\n \"botframework.connector\",\n \"botframework.connector.auth\",\n \"botframework.connector.async_mixin\",\n \"botframework.connector.operations\",\n \"botframework.connector.models\",\n \"botframework.connector.aio\",\n \"botframework.connector.aio.operations_async\",\n \"botframework.connector.teams\",\n \"botframework.connector.teams.operations\",\n \"botframework.connector.token_api\",\n \"botframework.connector.token_api.aio\",\n \"botframework.connector.token_api.models\",\n \"botframework.connector.token_api.operations\",\n ],\n include_package_data=True,\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=\"MIT\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botframework-connector/setup.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nfrom abc import ABC\r\n\r\nfrom msal import ConfidentialClientApplication\r\n\r\nfrom .app_credentials import AppCredentials\r\n\r\n\r\nclass CertificateAppCredentials(AppCredentials, ABC):\r\n \"\"\"\r\n AppCredentials implementation using a certificate.\r\n\r\n See:\r\n https://github.com/AzureAD/microsoft-authentication-library-for-python/wiki/Client-Credentials#client-credentials-with-certificate\r\n \"\"\"\r\n\r\n def __init__(\r\n self,\r\n app_id: str,\r\n certificate_thumbprint: str,\r\n certificate_private_key: str,\r\n channel_auth_tenant: str = None,\r\n oauth_scope: str = None,\r\n certificate_public: str = None,\r\n ):\r\n \"\"\"\r\n AppCredentials implementation using a certificate.\r\n\r\n :param app_id:\r\n :param certificate_thumbprint:\r\n :param certificate_private_key:\r\n :param channel_auth_tenant:\r\n :param oauth_scope:\r\n :param certificate_public: public_certificate (optional) is public key certificate which will be sent\r\n through \u2018x5c\u2019 JWT header only for subject name and issuer authentication to support cert auto rolls.\r\n \"\"\"\r\n\r\n # super will set proper scope and endpoint.\r\n super().__init__(\r\n app_id=app_id,\r\n channel_auth_tenant=channel_auth_tenant,\r\n oauth_scope=oauth_scope,\r\n )\r\n\r\n self.scopes = [self.oauth_scope]\r\n self.app = None\r\n self.certificate_thumbprint = certificate_thumbprint\r\n self.certificate_private_key = certificate_private_key\r\n self.certificate_public = certificate_public\r\n\r\n def get_access_token(self, force_refresh: bool = False) -> str:\r\n \"\"\"\r\n Implementation of AppCredentials.get_token.\r\n :return: The access token for the given certificate.\r\n \"\"\"\r\n\r\n # Firstly, looks up a token from cache\r\n # Since we are looking for token for the current app, NOT for an end user,\r\n # notice we give account parameter as None.\r\n auth_token = self.__get_msal_app().acquire_token_silent(\r\n self.scopes, account=None\r\n )\r\n if not auth_token:\r\n # No suitable token exists in cache. Let's get a new one from AAD.\r\n auth_token = self.__get_msal_app().acquire_token_for_client(\r\n scopes=self.scopes\r\n )\r\n return auth_token[\"access_token\"]\r\n\r\n def __get_msal_app(self):\r\n if not self.app:\r\n self.app = ConfidentialClientApplication(\r\n client_id=self.microsoft_app_id,\r\n authority=self.oauth_endpoint,\r\n client_credential={\r\n \"thumbprint\": self.certificate_thumbprint,\r\n \"private_key\": self.certificate_private_key,\r\n \"public_certificate\": self.certificate_public\r\n if self.certificate_public\r\n else None,\r\n },\r\n )\r\n\r\n return self.app\r\n", "path": "libraries/botframework-connector/botframework/connector/auth/certificate_app_credentials.py"}, {"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\nimport os\nfrom setuptools import setup\n\nNAME = \"botframework-connector\"\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.7.1\"\nREQUIRES = [\n \"msrest==0.6.10\",\n \"requests==2.22.0\",\n \"cryptography==2.8.0\",\n \"PyJWT==1.5.3\",\n \"botbuilder-schema>=4.7.1\",\n \"adal==1.2.1\",\n \"msal==1.2.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=NAME,\n version=VERSION,\n description=\"Microsoft Bot Framework Bot Builder SDK for Python.\",\n author=\"Microsoft\",\n url=\"https://www.github.com/Microsoft/botbuilder-python\",\n keywords=[\"BotFrameworkConnector\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n install_requires=REQUIRES,\n packages=[\n \"botframework.connector\",\n \"botframework.connector.auth\",\n \"botframework.connector.async_mixin\",\n \"botframework.connector.operations\",\n \"botframework.connector.models\",\n \"botframework.connector.aio\",\n \"botframework.connector.aio.operations_async\",\n \"botframework.connector.teams\",\n \"botframework.connector.teams.operations\",\n \"botframework.connector.token_api\",\n \"botframework.connector.token_api.aio\",\n \"botframework.connector.token_api.models\",\n \"botframework.connector.token_api.operations\",\n ],\n include_package_data=True,\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=\"MIT\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botframework-connector/setup.py"}]} | 1,613 | 528 |
gh_patches_debug_19257 | rasdani/github-patches | git_diff | ESMCI__cime-1240 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PR #1230 appears to break batch systems
After merging #1230 I get an error from Z_FullSystemTest:
Traceback (most recent call last):
File "./scripts_regression_tests.py", line 1175, in test_full_system
self.assertTrue(test_time > 0, msg="test time was zero for %s" % test_status)
AssertionError: test time was zero for /scratch/cluster/jedwards/scripts_regression_test.20170313_145646/ERR.f45_g37_rx1.A.hobart_intel.fake_testing_only_20170313_151740/TestStatus
All of the tests actually passed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/case_submit.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 case.submit - Submit a cesm workflow to the queueing system or run it
5 if there is no queueing system. A cesm workflow may include multiple
6 jobs.
7 """
8 import socket
9 from CIME.XML.standard_module_setup import *
10 from CIME.utils import expect, run_and_log_case_status
11 from CIME.preview_namelists import create_namelists
12 from CIME.check_lockedfiles import check_lockedfiles
13 from CIME.check_input_data import check_all_input_data
14 from CIME.test_status import *
15
16 logger = logging.getLogger(__name__)
17
18 def _submit(case, job=None, resubmit=False, no_batch=False, batch_args=None):
19 caseroot = case.get_value("CASEROOT")
20
21 if job is None:
22 if case.get_value("TEST"):
23 job = "case.test"
24 else:
25 job = "case.run"
26
27 if resubmit:
28 resub = case.get_value("RESUBMIT")
29 logger.info("Submitting job '%s', resubmit=%d" % (job, resub))
30 case.set_value("RESUBMIT",resub-1)
31 if case.get_value("RESUBMIT_SETS_CONTINUE_RUN"):
32 case.set_value("CONTINUE_RUN", True)
33 else:
34 if job in ("case.test","case.run"):
35 check_case(case, caseroot)
36 check_DA_settings(case)
37 if case.get_value("MACH") == "mira":
38 with open(".original_host","w") as fd:
39 fd.write( socket.gethostname())
40
41 # if case.submit is called with the no_batch flag then we assume that this
42 # flag will stay in effect for the duration of the RESUBMITs
43 env_batch = case.get_env("batch")
44 if not resubmit:
45 case.set_value("IS_FIRST_RUN", True)
46 if no_batch:
47 batch_system = "none"
48 else:
49 batch_system = env_batch.get_batch_system_type()
50 case.set_value("BATCH_SYSTEM", batch_system)
51 else:
52 if env_batch.get_batch_system_type() == "none":
53 no_batch = True
54
55 # This is a resubmission, do not reinitialize test values
56 case.set_value("IS_FIRST_RUN", False)
57
58 #Load Modules
59 case.load_env()
60
61 case.set_value("RUN_WITH_SUBMIT",True)
62 case.flush()
63
64 logger.warn("submit_jobs %s" % job)
65 job_ids = case.submit_jobs(no_batch=no_batch, job=job, batch_args=batch_args)
66 logger.info("Submitted job ids %s" % job_ids)
67
68 def submit(case, job=None, resubmit=False, no_batch=False, batch_args=None):
69 if case.get_value("TEST"):
70 caseroot = case.get_value("CASEROOT")
71 casebaseid = case.get_value("CASEBASEID")
72 # This should take care of the race condition where the submitted job
73 # begins immediately and tries to set RUN phase. We proactively assume
74 # a passed SUBMIT phase.
75 with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
76 ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)
77
78 try:
79 functor = lambda: _submit(case, job, resubmit, no_batch, batch_args)
80 run_and_log_case_status(functor, "case.submit", caseroot=case.get_value("CASEROOT"))
81 except:
82 # If something failed in the batch system, make sure to mark
83 # the test as failed if we are running a test.
84 if case.get_value("TEST"):
85 with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
86 ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)
87
88 raise
89
90 def check_case(case, caseroot):
91 check_lockedfiles(caseroot)
92 create_namelists(case) # Must be called before check_all_input_data
93 logger.info("Checking that inputdata is available as part of case submission")
94 check_all_input_data(case)
95
96 expect(case.get_value("BUILD_COMPLETE"), "Build complete is "
97 "not True please rebuild the model by calling case.build")
98 logger.info("Check case OK")
99
100 def check_DA_settings(case):
101 if case.get_value("DATA_ASSIMILATION"):
102 script = case.get_value("DATA_ASSIMILATION_SCRIPT")
103 cycles = case.get_value("DATA_ASSIMILATION_CYCLES")
104 logger.info("Data Assimilation enabled using script %s with %d cycles"%(script,cycles))
105
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/lib/CIME/case_submit.py b/scripts/lib/CIME/case_submit.py
--- a/scripts/lib/CIME/case_submit.py
+++ b/scripts/lib/CIME/case_submit.py
@@ -71,9 +71,13 @@
casebaseid = case.get_value("CASEBASEID")
# This should take care of the race condition where the submitted job
# begins immediately and tries to set RUN phase. We proactively assume
- # a passed SUBMIT phase.
+ # a passed SUBMIT phase. If this state is already PASS, don't set it again
+ # because then we'll lose RUN phase info if it's there. This info is important
+ # for system_tests_common to know if it needs to reinitialize the test or not.
with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
- ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)
+ phase_status = ts.get_status(SUBMIT_PHASE)
+ if phase_status != TEST_PASS_STATUS:
+ ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)
try:
functor = lambda: _submit(case, job, resubmit, no_batch, batch_args)
| {"golden_diff": "diff --git a/scripts/lib/CIME/case_submit.py b/scripts/lib/CIME/case_submit.py\n--- a/scripts/lib/CIME/case_submit.py\n+++ b/scripts/lib/CIME/case_submit.py\n@@ -71,9 +71,13 @@\n casebaseid = case.get_value(\"CASEBASEID\")\n # This should take care of the race condition where the submitted job\n # begins immediately and tries to set RUN phase. We proactively assume\n- # a passed SUBMIT phase.\n+ # a passed SUBMIT phase. If this state is already PASS, don't set it again\n+ # because then we'll lose RUN phase info if it's there. This info is important\n+ # for system_tests_common to know if it needs to reinitialize the test or not.\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n- ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)\n+ phase_status = ts.get_status(SUBMIT_PHASE)\n+ if phase_status != TEST_PASS_STATUS:\n+ ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)\n \n try:\n functor = lambda: _submit(case, job, resubmit, no_batch, batch_args)\n", "issue": "PR #1230 appears to break batch systems \nAfter merging #1230 I get an error from Z_FullSystemTest:\r\n\r\nTraceback (most recent call last):\r\n File \"./scripts_regression_tests.py\", line 1175, in test_full_system\r\n self.assertTrue(test_time > 0, msg=\"test time was zero for %s\" % test_status)\r\nAssertionError: test time was zero for /scratch/cluster/jedwards/scripts_regression_test.20170313_145646/ERR.f45_g37_rx1.A.hobart_intel.fake_testing_only_20170313_151740/TestStatus\r\n\r\n\r\nAll of the tests actually passed. \n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\ncase.submit - Submit a cesm workflow to the queueing system or run it\nif there is no queueing system. A cesm workflow may include multiple\njobs.\n\"\"\"\nimport socket\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_and_log_case_status\nfrom CIME.preview_namelists import create_namelists\nfrom CIME.check_lockedfiles import check_lockedfiles\nfrom CIME.check_input_data import check_all_input_data\nfrom CIME.test_status import *\n\nlogger = logging.getLogger(__name__)\n\ndef _submit(case, job=None, resubmit=False, no_batch=False, batch_args=None):\n caseroot = case.get_value(\"CASEROOT\")\n\n if job is None:\n if case.get_value(\"TEST\"):\n job = \"case.test\"\n else:\n job = \"case.run\"\n\n if resubmit:\n resub = case.get_value(\"RESUBMIT\")\n logger.info(\"Submitting job '%s', resubmit=%d\" % (job, resub))\n case.set_value(\"RESUBMIT\",resub-1)\n if case.get_value(\"RESUBMIT_SETS_CONTINUE_RUN\"):\n case.set_value(\"CONTINUE_RUN\", True)\n else:\n if job in (\"case.test\",\"case.run\"):\n check_case(case, caseroot)\n check_DA_settings(case)\n if case.get_value(\"MACH\") == \"mira\":\n with open(\".original_host\",\"w\") as fd:\n fd.write( socket.gethostname())\n\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n if not resubmit:\n case.set_value(\"IS_FIRST_RUN\", True)\n if no_batch:\n batch_system = \"none\"\n else:\n batch_system = env_batch.get_batch_system_type()\n case.set_value(\"BATCH_SYSTEM\", batch_system)\n else:\n if env_batch.get_batch_system_type() == \"none\":\n no_batch = True\n\n # This is a resubmission, do not reinitialize test values\n case.set_value(\"IS_FIRST_RUN\", False)\n\n #Load Modules\n case.load_env()\n\n case.set_value(\"RUN_WITH_SUBMIT\",True)\n case.flush()\n\n logger.warn(\"submit_jobs %s\" % job)\n job_ids = 
case.submit_jobs(no_batch=no_batch, job=job, batch_args=batch_args)\n logger.info(\"Submitted job ids %s\" % job_ids)\n\ndef submit(case, job=None, resubmit=False, no_batch=False, batch_args=None):\n if case.get_value(\"TEST\"):\n caseroot = case.get_value(\"CASEROOT\")\n casebaseid = case.get_value(\"CASEBASEID\")\n # This should take care of the race condition where the submitted job\n # begins immediately and tries to set RUN phase. We proactively assume\n # a passed SUBMIT phase.\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)\n\n try:\n functor = lambda: _submit(case, job, resubmit, no_batch, batch_args)\n run_and_log_case_status(functor, \"case.submit\", caseroot=case.get_value(\"CASEROOT\"))\n except:\n # If something failed in the batch system, make sure to mark\n # the test as failed if we are running a test.\n if case.get_value(\"TEST\"):\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)\n\n raise\n\ndef check_case(case, caseroot):\n check_lockedfiles(caseroot)\n create_namelists(case) # Must be called before check_all_input_data\n logger.info(\"Checking that inputdata is available as part of case submission\")\n check_all_input_data(case)\n\n expect(case.get_value(\"BUILD_COMPLETE\"), \"Build complete is \"\n \"not True please rebuild the model by calling case.build\")\n logger.info(\"Check case OK\")\n\ndef check_DA_settings(case):\n if case.get_value(\"DATA_ASSIMILATION\"):\n script = case.get_value(\"DATA_ASSIMILATION_SCRIPT\")\n cycles = case.get_value(\"DATA_ASSIMILATION_CYCLES\")\n logger.info(\"Data Assimilation enabled using script %s with %d cycles\"%(script,cycles))\n\n", "path": "scripts/lib/CIME/case_submit.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\ncase.submit - Submit a cesm workflow to the queueing system or run it\nif there is no queueing system. 
A cesm workflow may include multiple\njobs.\n\"\"\"\nimport socket\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_and_log_case_status\nfrom CIME.preview_namelists import create_namelists\nfrom CIME.check_lockedfiles import check_lockedfiles\nfrom CIME.check_input_data import check_all_input_data\nfrom CIME.test_status import *\n\nlogger = logging.getLogger(__name__)\n\ndef _submit(case, job=None, resubmit=False, no_batch=False, batch_args=None):\n caseroot = case.get_value(\"CASEROOT\")\n\n if job is None:\n if case.get_value(\"TEST\"):\n job = \"case.test\"\n else:\n job = \"case.run\"\n\n if resubmit:\n resub = case.get_value(\"RESUBMIT\")\n logger.info(\"Submitting job '%s', resubmit=%d\" % (job, resub))\n case.set_value(\"RESUBMIT\",resub-1)\n if case.get_value(\"RESUBMIT_SETS_CONTINUE_RUN\"):\n case.set_value(\"CONTINUE_RUN\", True)\n else:\n if job in (\"case.test\",\"case.run\"):\n check_case(case, caseroot)\n check_DA_settings(case)\n if case.get_value(\"MACH\") == \"mira\":\n with open(\".original_host\",\"w\") as fd:\n fd.write( socket.gethostname())\n\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n if not resubmit:\n case.set_value(\"IS_FIRST_RUN\", True)\n if no_batch:\n batch_system = \"none\"\n else:\n batch_system = env_batch.get_batch_system_type()\n case.set_value(\"BATCH_SYSTEM\", batch_system)\n else:\n if env_batch.get_batch_system_type() == \"none\":\n no_batch = True\n\n # This is a resubmission, do not reinitialize test values\n case.set_value(\"IS_FIRST_RUN\", False)\n\n #Load Modules\n case.load_env()\n\n case.set_value(\"RUN_WITH_SUBMIT\",True)\n case.flush()\n\n logger.warn(\"submit_jobs %s\" % job)\n job_ids = case.submit_jobs(no_batch=no_batch, job=job, batch_args=batch_args)\n logger.info(\"Submitted job ids %s\" % job_ids)\n\ndef submit(case, job=None, resubmit=False, no_batch=False, batch_args=None):\n if case.get_value(\"TEST\"):\n caseroot = case.get_value(\"CASEROOT\")\n casebaseid = case.get_value(\"CASEBASEID\")\n # This should take care of the race condition where the submitted job\n # begins immediately and tries to set RUN phase. We proactively assume\n # a passed SUBMIT phase. If this state is already PASS, don't set it again\n # because then we'll lose RUN phase info if it's there. 
This info is important\n # for system_tests_common to know if it needs to reinitialize the test or not.\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n phase_status = ts.get_status(SUBMIT_PHASE)\n if phase_status != TEST_PASS_STATUS:\n ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)\n\n try:\n functor = lambda: _submit(case, job, resubmit, no_batch, batch_args)\n run_and_log_case_status(functor, \"case.submit\", caseroot=case.get_value(\"CASEROOT\"))\n except:\n # If something failed in the batch system, make sure to mark\n # the test as failed if we are running a test.\n if case.get_value(\"TEST\"):\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)\n\n raise\n\ndef check_case(case, caseroot):\n check_lockedfiles(caseroot)\n create_namelists(case) # Must be called before check_all_input_data\n logger.info(\"Checking that inputdata is available as part of case submission\")\n check_all_input_data(case)\n\n expect(case.get_value(\"BUILD_COMPLETE\"), \"Build complete is \"\n \"not True please rebuild the model by calling case.build\")\n logger.info(\"Check case OK\")\n\ndef check_DA_settings(case):\n if case.get_value(\"DATA_ASSIMILATION\"):\n script = case.get_value(\"DATA_ASSIMILATION_SCRIPT\")\n cycles = case.get_value(\"DATA_ASSIMILATION_CYCLES\")\n logger.info(\"Data Assimilation enabled using script %s with %d cycles\"%(script,cycles))\n\n", "path": "scripts/lib/CIME/case_submit.py"}]} | 1,617 | 269 |
gh_patches_debug_18728 | rasdani/github-patches | git_diff | TheAlgorithms__Python-6190 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[suggestion] use relative path in DIRECTORY.md
when opening DIRECTORY.md on a local machine, the links in it refer to https://github.com/TheAlgorithms/Python/blob/master/xxx.
it's not convenient for reading locally.
I suggest making a new file "TOC.md", which removes "https://github.com/TheAlgorithms/Python/blob/master/" from every link.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/build_directory_md.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import os
4 from typing import Iterator
5
6 URL_BASE = "https://github.com/TheAlgorithms/Python/blob/master"
7
8
9 def good_file_paths(top_dir: str = ".") -> Iterator[str]:
10 for dir_path, dir_names, filenames in os.walk(top_dir):
11 dir_names[:] = [d for d in dir_names if d != "scripts" and d[0] not in "._"]
12 for filename in filenames:
13 if filename == "__init__.py":
14 continue
15 if os.path.splitext(filename)[1] in (".py", ".ipynb"):
16 yield os.path.join(dir_path, filename).lstrip("./")
17
18
19 def md_prefix(i):
20 return f"{i * ' '}*" if i else "\n##"
21
22
23 def print_path(old_path: str, new_path: str) -> str:
24 old_parts = old_path.split(os.sep)
25 for i, new_part in enumerate(new_path.split(os.sep)):
26 if i + 1 > len(old_parts) or old_parts[i] != new_part:
27 if new_part:
28 print(f"{md_prefix(i)} {new_part.replace('_', ' ').title()}")
29 return new_path
30
31
32 def print_directory_md(top_dir: str = ".") -> None:
33 old_path = ""
34 for filepath in sorted(good_file_paths(top_dir)):
35 filepath, filename = os.path.split(filepath)
36 if filepath != old_path:
37 old_path = print_path(old_path, filepath)
38 indent = (filepath.count(os.sep) + 1) if filepath else 0
39 url = "/".join((URL_BASE, filepath, filename)).replace(" ", "%20")
40 filename = os.path.splitext(filename.replace("_", " ").title())[0]
41 print(f"{md_prefix(indent)} [{filename}]({url})")
42
43
44 if __name__ == "__main__":
45 print_directory_md(".")
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/build_directory_md.py b/scripts/build_directory_md.py
--- a/scripts/build_directory_md.py
+++ b/scripts/build_directory_md.py
@@ -3,8 +3,6 @@
import os
from typing import Iterator
-URL_BASE = "https://github.com/TheAlgorithms/Python/blob/master"
-
def good_file_paths(top_dir: str = ".") -> Iterator[str]:
for dir_path, dir_names, filenames in os.walk(top_dir):
@@ -36,7 +34,7 @@
if filepath != old_path:
old_path = print_path(old_path, filepath)
indent = (filepath.count(os.sep) + 1) if filepath else 0
- url = "/".join((URL_BASE, filepath, filename)).replace(" ", "%20")
+ url = "/".join((filepath, filename)).replace(" ", "%20")
filename = os.path.splitext(filename.replace("_", " ").title())[0]
print(f"{md_prefix(indent)} [{filename}]({url})")
| {"golden_diff": "diff --git a/scripts/build_directory_md.py b/scripts/build_directory_md.py\n--- a/scripts/build_directory_md.py\n+++ b/scripts/build_directory_md.py\n@@ -3,8 +3,6 @@\n import os\n from typing import Iterator\n \n-URL_BASE = \"https://github.com/TheAlgorithms/Python/blob/master\"\n-\n \n def good_file_paths(top_dir: str = \".\") -> Iterator[str]:\n for dir_path, dir_names, filenames in os.walk(top_dir):\n@@ -36,7 +34,7 @@\n if filepath != old_path:\n old_path = print_path(old_path, filepath)\n indent = (filepath.count(os.sep) + 1) if filepath else 0\n- url = \"/\".join((URL_BASE, filepath, filename)).replace(\" \", \"%20\")\n+ url = \"/\".join((filepath, filename)).replace(\" \", \"%20\")\n filename = os.path.splitext(filename.replace(\"_\", \" \").title())[0]\n print(f\"{md_prefix(indent)} [{filename}]({url})\")\n", "issue": "[suggestion] use relative path in DIRECTORY.md\nwhen openning DIRECTORY.md in local machine, the links in it refers to https://github.com/TheAlgorithms/Python/blob/master/xxx.\r\n\r\nit's not convinient for reading locally.\r\n\r\nI suggest to make a new file \"TOC.md\", which removes \"https://github.com/TheAlgorithms/Python/blob/master/\" in every link.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nfrom typing import Iterator\n\nURL_BASE = \"https://github.com/TheAlgorithms/Python/blob/master\"\n\n\ndef good_file_paths(top_dir: str = \".\") -> Iterator[str]:\n for dir_path, dir_names, filenames in os.walk(top_dir):\n dir_names[:] = [d for d in dir_names if d != \"scripts\" and d[0] not in \"._\"]\n for filename in filenames:\n if filename == \"__init__.py\":\n continue\n if os.path.splitext(filename)[1] in (\".py\", \".ipynb\"):\n yield os.path.join(dir_path, filename).lstrip(\"./\")\n\n\ndef md_prefix(i):\n return f\"{i * ' '}*\" if i else \"\\n##\"\n\n\ndef print_path(old_path: str, new_path: str) -> str:\n old_parts = old_path.split(os.sep)\n for i, new_part in enumerate(new_path.split(os.sep)):\n if i + 1 > len(old_parts) or old_parts[i] != new_part:\n if new_part:\n print(f\"{md_prefix(i)} {new_part.replace('_', ' ').title()}\")\n return new_path\n\n\ndef print_directory_md(top_dir: str = \".\") -> None:\n old_path = \"\"\n for filepath in sorted(good_file_paths(top_dir)):\n filepath, filename = os.path.split(filepath)\n if filepath != old_path:\n old_path = print_path(old_path, filepath)\n indent = (filepath.count(os.sep) + 1) if filepath else 0\n url = \"/\".join((URL_BASE, filepath, filename)).replace(\" \", \"%20\")\n filename = os.path.splitext(filename.replace(\"_\", \" \").title())[0]\n print(f\"{md_prefix(indent)} [{filename}]({url})\")\n\n\nif __name__ == \"__main__\":\n print_directory_md(\".\")\n", "path": "scripts/build_directory_md.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nfrom typing import Iterator\n\n\ndef good_file_paths(top_dir: str = \".\") -> Iterator[str]:\n for dir_path, dir_names, filenames in os.walk(top_dir):\n dir_names[:] = [d for d in dir_names if d != \"scripts\" and d[0] not in \"._\"]\n for filename in filenames:\n if filename == \"__init__.py\":\n continue\n if os.path.splitext(filename)[1] in (\".py\", \".ipynb\"):\n yield os.path.join(dir_path, filename).lstrip(\"./\")\n\n\ndef md_prefix(i):\n return f\"{i * ' '}*\" if i else \"\\n##\"\n\n\ndef print_path(old_path: str, new_path: str) -> str:\n old_parts = old_path.split(os.sep)\n for i, new_part in enumerate(new_path.split(os.sep)):\n if i + 1 > len(old_parts) or old_parts[i] != new_part:\n if 
new_part:\n print(f\"{md_prefix(i)} {new_part.replace('_', ' ').title()}\")\n return new_path\n\n\ndef print_directory_md(top_dir: str = \".\") -> None:\n old_path = \"\"\n for filepath in sorted(good_file_paths(top_dir)):\n filepath, filename = os.path.split(filepath)\n if filepath != old_path:\n old_path = print_path(old_path, filepath)\n indent = (filepath.count(os.sep) + 1) if filepath else 0\n url = \"/\".join((filepath, filename)).replace(\" \", \"%20\")\n filename = os.path.splitext(filename.replace(\"_\", \" \").title())[0]\n print(f\"{md_prefix(indent)} [{filename}]({url})\")\n\n\nif __name__ == \"__main__\":\n print_directory_md(\".\")\n", "path": "scripts/build_directory_md.py"}]} | 828 | 220 |
gh_patches_debug_14360 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-2811 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
opus decoding error
see https://huggingface.co/datasets/stable-speech/mls_eng_10k/discussions/1#65ef6e9d440a5fc3d94a40ad
To fix this, maybe we should pin the `soundfile` library to `>=1.0.31` (the first version that supported opus), like [we do in the `datasets` library](https://github.com/huggingface/datasets/blob/main/src/datasets/config.py#L144).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libs/libcommon/src/libcommon/viewer_utils/asset.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 from io import BytesIO
5 from pathlib import Path
6 from tempfile import NamedTemporaryFile
7 from typing import Optional, TypedDict
8 from urllib import parse
9
10 from PIL import Image, ImageOps
11 from pydub import AudioSegment # type:ignore
12
13 from libcommon.constants import DATASET_SEPARATOR
14 from libcommon.storage import StrPath, remove_dir
15 from libcommon.storage_client import StorageClient
16
17 SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE = {".wav": "audio/wav", ".mp3": "audio/mpeg"}
18
19
20 def delete_asset_dir(dataset: str, directory: StrPath) -> None:
21 dir_path = Path(directory).resolve() / dataset
22 remove_dir(dir_path)
23
24
25 class ImageSource(TypedDict):
26 src: str
27 height: int
28 width: int
29
30
31 class AudioSource(TypedDict):
32 src: str
33 type: str
34
35
36 def generate_object_key(
37 dataset: str, revision: str, config: str, split: str, row_idx: int, column: str, filename: str
38 ) -> str:
39 return f"{parse.quote(dataset)}/{DATASET_SEPARATOR}/{revision}/{DATASET_SEPARATOR}/{parse.quote(config)}/{parse.quote(split)}/{str(row_idx)}/{parse.quote(column)}/{filename}"
40
41
42 def create_image_file(
43 dataset: str,
44 revision: str,
45 config: str,
46 split: str,
47 row_idx: int,
48 column: str,
49 filename: str,
50 image: Image.Image,
51 format: str,
52 storage_client: StorageClient,
53 ) -> ImageSource:
54 object_key = generate_object_key(
55 dataset=dataset,
56 revision=revision,
57 config=config,
58 split=split,
59 row_idx=row_idx,
60 column=column,
61 filename=filename,
62 )
63 if storage_client.overwrite or not storage_client.exists(object_key):
64 image = ImageOps.exif_transpose(image) # type: ignore[assignment]
65 buffer = BytesIO()
66 image.save(fp=buffer, format=format)
67 buffer.seek(0)
68 with storage_client._fs.open(storage_client.get_full_path(object_key), "wb") as f:
69 f.write(buffer.read())
70 return ImageSource(src=storage_client.get_url(object_key), height=image.height, width=image.width)
71
72
73 def create_audio_file(
74 dataset: str,
75 revision: str,
76 config: str,
77 split: str,
78 row_idx: int,
79 column: str,
80 audio_file_bytes: bytes,
81 audio_file_extension: Optional[str],
82 filename: str,
83 storage_client: StorageClient,
84 ) -> list[AudioSource]:
85 object_key = generate_object_key(
86 dataset=dataset,
87 revision=revision,
88 config=config,
89 split=split,
90 row_idx=row_idx,
91 column=column,
92 filename=filename,
93 )
94 suffix = f".{filename.split('.')[-1]}"
95 if suffix not in SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE:
96 raise ValueError(
97 f"Audio format {suffix} is not supported. Supported formats are"
98 f" {','.join(SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE)}."
99 )
100 media_type = SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE[suffix]
101
102 if storage_client.overwrite or not storage_client.exists(object_key):
103 audio_path = storage_client.get_full_path(object_key)
104 if audio_file_extension == suffix:
105 with storage_client._fs.open(audio_path, "wb") as f:
106 f.write(audio_file_bytes)
107 else: # we need to convert
108 # might spawn a process to convert the audio file using ffmpeg
109 with NamedTemporaryFile("wb", suffix=audio_file_extension) as tmpfile:
110 tmpfile.write(audio_file_bytes)
111 segment: AudioSegment = AudioSegment.from_file(
112 tmpfile.name, audio_file_extension[1:] if audio_file_extension else None
113 )
114 buffer = BytesIO()
115 segment.export(buffer, format=suffix[1:])
116 buffer.seek(0)
117 with storage_client._fs.open(audio_path, "wb") as f:
118 f.write(buffer.read())
119 return [AudioSource(src=storage_client.get_url(object_key), type=media_type)]
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libs/libcommon/src/libcommon/viewer_utils/asset.py b/libs/libcommon/src/libcommon/viewer_utils/asset.py
--- a/libs/libcommon/src/libcommon/viewer_utils/asset.py
+++ b/libs/libcommon/src/libcommon/viewer_utils/asset.py
@@ -108,9 +108,7 @@
# might spawn a process to convert the audio file using ffmpeg
with NamedTemporaryFile("wb", suffix=audio_file_extension) as tmpfile:
tmpfile.write(audio_file_bytes)
- segment: AudioSegment = AudioSegment.from_file(
- tmpfile.name, audio_file_extension[1:] if audio_file_extension else None
- )
+ segment: AudioSegment = AudioSegment.from_file(tmpfile.name)
buffer = BytesIO()
segment.export(buffer, format=suffix[1:])
buffer.seek(0)
| {"golden_diff": "diff --git a/libs/libcommon/src/libcommon/viewer_utils/asset.py b/libs/libcommon/src/libcommon/viewer_utils/asset.py\n--- a/libs/libcommon/src/libcommon/viewer_utils/asset.py\n+++ b/libs/libcommon/src/libcommon/viewer_utils/asset.py\n@@ -108,9 +108,7 @@\n # might spawn a process to convert the audio file using ffmpeg\n with NamedTemporaryFile(\"wb\", suffix=audio_file_extension) as tmpfile:\n tmpfile.write(audio_file_bytes)\n- segment: AudioSegment = AudioSegment.from_file(\n- tmpfile.name, audio_file_extension[1:] if audio_file_extension else None\n- )\n+ segment: AudioSegment = AudioSegment.from_file(tmpfile.name)\n buffer = BytesIO()\n segment.export(buffer, format=suffix[1:])\n buffer.seek(0)\n", "issue": "opus decoding error\nsee https://huggingface.co/datasets/stable-speech/mls_eng_10k/discussions/1#65ef6e9d440a5fc3d94a40ad\r\n\r\nTo fix this maybe we should pin `soundfile` library to `>=1.0.31` (first version that supported opus) like [we do in `datasets` library](https://github.com/huggingface/datasets/blob/main/src/datasets/config.py#L144). \r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nfrom io import BytesIO\nfrom pathlib import Path\nfrom tempfile import NamedTemporaryFile\nfrom typing import Optional, TypedDict\nfrom urllib import parse\n\nfrom PIL import Image, ImageOps\nfrom pydub import AudioSegment # type:ignore\n\nfrom libcommon.constants import DATASET_SEPARATOR\nfrom libcommon.storage import StrPath, remove_dir\nfrom libcommon.storage_client import StorageClient\n\nSUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE = {\".wav\": \"audio/wav\", \".mp3\": \"audio/mpeg\"}\n\n\ndef delete_asset_dir(dataset: str, directory: StrPath) -> None:\n dir_path = Path(directory).resolve() / dataset\n remove_dir(dir_path)\n\n\nclass ImageSource(TypedDict):\n src: str\n height: int\n width: int\n\n\nclass AudioSource(TypedDict):\n src: str\n type: str\n\n\ndef generate_object_key(\n dataset: str, revision: str, config: str, split: str, row_idx: int, column: str, filename: str\n) -> str:\n return f\"{parse.quote(dataset)}/{DATASET_SEPARATOR}/{revision}/{DATASET_SEPARATOR}/{parse.quote(config)}/{parse.quote(split)}/{str(row_idx)}/{parse.quote(column)}/{filename}\"\n\n\ndef create_image_file(\n dataset: str,\n revision: str,\n config: str,\n split: str,\n row_idx: int,\n column: str,\n filename: str,\n image: Image.Image,\n format: str,\n storage_client: StorageClient,\n) -> ImageSource:\n object_key = generate_object_key(\n dataset=dataset,\n revision=revision,\n config=config,\n split=split,\n row_idx=row_idx,\n column=column,\n filename=filename,\n )\n if storage_client.overwrite or not storage_client.exists(object_key):\n image = ImageOps.exif_transpose(image) # type: ignore[assignment]\n buffer = BytesIO()\n image.save(fp=buffer, format=format)\n buffer.seek(0)\n with storage_client._fs.open(storage_client.get_full_path(object_key), \"wb\") as f:\n f.write(buffer.read())\n return ImageSource(src=storage_client.get_url(object_key), height=image.height, width=image.width)\n\n\ndef create_audio_file(\n dataset: str,\n revision: str,\n config: str,\n split: str,\n row_idx: int,\n column: str,\n audio_file_bytes: bytes,\n audio_file_extension: Optional[str],\n filename: str,\n storage_client: StorageClient,\n) -> list[AudioSource]:\n object_key = generate_object_key(\n dataset=dataset,\n revision=revision,\n config=config,\n split=split,\n row_idx=row_idx,\n column=column,\n filename=filename,\n )\n suffix = 
f\".{filename.split('.')[-1]}\"\n if suffix not in SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE:\n raise ValueError(\n f\"Audio format {suffix} is not supported. Supported formats are\"\n f\" {','.join(SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE)}.\"\n )\n media_type = SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE[suffix]\n\n if storage_client.overwrite or not storage_client.exists(object_key):\n audio_path = storage_client.get_full_path(object_key)\n if audio_file_extension == suffix:\n with storage_client._fs.open(audio_path, \"wb\") as f:\n f.write(audio_file_bytes)\n else: # we need to convert\n # might spawn a process to convert the audio file using ffmpeg\n with NamedTemporaryFile(\"wb\", suffix=audio_file_extension) as tmpfile:\n tmpfile.write(audio_file_bytes)\n segment: AudioSegment = AudioSegment.from_file(\n tmpfile.name, audio_file_extension[1:] if audio_file_extension else None\n )\n buffer = BytesIO()\n segment.export(buffer, format=suffix[1:])\n buffer.seek(0)\n with storage_client._fs.open(audio_path, \"wb\") as f:\n f.write(buffer.read())\n return [AudioSource(src=storage_client.get_url(object_key), type=media_type)]\n", "path": "libs/libcommon/src/libcommon/viewer_utils/asset.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nfrom io import BytesIO\nfrom pathlib import Path\nfrom tempfile import NamedTemporaryFile\nfrom typing import Optional, TypedDict\nfrom urllib import parse\n\nfrom PIL import Image, ImageOps\nfrom pydub import AudioSegment # type:ignore\n\nfrom libcommon.constants import DATASET_SEPARATOR\nfrom libcommon.storage import StrPath, remove_dir\nfrom libcommon.storage_client import StorageClient\n\nSUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE = {\".wav\": \"audio/wav\", \".mp3\": \"audio/mpeg\"}\n\n\ndef delete_asset_dir(dataset: str, directory: StrPath) -> None:\n dir_path = Path(directory).resolve() / dataset\n remove_dir(dir_path)\n\n\nclass ImageSource(TypedDict):\n src: str\n height: int\n width: int\n\n\nclass AudioSource(TypedDict):\n src: str\n type: str\n\n\ndef generate_object_key(\n dataset: str, revision: str, config: str, split: str, row_idx: int, column: str, filename: str\n) -> str:\n return f\"{parse.quote(dataset)}/{DATASET_SEPARATOR}/{revision}/{DATASET_SEPARATOR}/{parse.quote(config)}/{parse.quote(split)}/{str(row_idx)}/{parse.quote(column)}/{filename}\"\n\n\ndef create_image_file(\n dataset: str,\n revision: str,\n config: str,\n split: str,\n row_idx: int,\n column: str,\n filename: str,\n image: Image.Image,\n format: str,\n storage_client: StorageClient,\n) -> ImageSource:\n object_key = generate_object_key(\n dataset=dataset,\n revision=revision,\n config=config,\n split=split,\n row_idx=row_idx,\n column=column,\n filename=filename,\n )\n if storage_client.overwrite or not storage_client.exists(object_key):\n image = ImageOps.exif_transpose(image) # type: ignore[assignment]\n buffer = BytesIO()\n image.save(fp=buffer, format=format)\n buffer.seek(0)\n with storage_client._fs.open(storage_client.get_full_path(object_key), \"wb\") as f:\n f.write(buffer.read())\n return ImageSource(src=storage_client.get_url(object_key), height=image.height, width=image.width)\n\n\ndef create_audio_file(\n dataset: str,\n revision: str,\n config: str,\n split: str,\n row_idx: int,\n column: str,\n audio_file_bytes: bytes,\n audio_file_extension: Optional[str],\n filename: str,\n storage_client: StorageClient,\n) -> list[AudioSource]:\n object_key = generate_object_key(\n dataset=dataset,\n 
revision=revision,\n config=config,\n split=split,\n row_idx=row_idx,\n column=column,\n filename=filename,\n )\n suffix = f\".{filename.split('.')[-1]}\"\n if suffix not in SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE:\n raise ValueError(\n f\"Audio format {suffix} is not supported. Supported formats are\"\n f\" {','.join(SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE)}.\"\n )\n media_type = SUPPORTED_AUDIO_EXTENSION_TO_MEDIA_TYPE[suffix]\n\n if storage_client.overwrite or not storage_client.exists(object_key):\n audio_path = storage_client.get_full_path(object_key)\n if audio_file_extension == suffix:\n with storage_client._fs.open(audio_path, \"wb\") as f:\n f.write(audio_file_bytes)\n else: # we need to convert\n # might spawn a process to convert the audio file using ffmpeg\n with NamedTemporaryFile(\"wb\", suffix=audio_file_extension) as tmpfile:\n tmpfile.write(audio_file_bytes)\n segment: AudioSegment = AudioSegment.from_file(tmpfile.name)\n buffer = BytesIO()\n segment.export(buffer, format=suffix[1:])\n buffer.seek(0)\n with storage_client._fs.open(audio_path, \"wb\") as f:\n f.write(buffer.read())\n return [AudioSource(src=storage_client.get_url(object_key), type=media_type)]\n", "path": "libs/libcommon/src/libcommon/viewer_utils/asset.py"}]} | 1,539 | 186 |
gh_patches_debug_538 | rasdani/github-patches | git_diff | bokeh__bokeh-5378 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Correct comment in Dimension example plot
The comment in [this example](http://bokeh.pydata.org/en/latest/docs/user_guide/styling.html#dimensions) says `# create a new plot with a title`. I expect this example was copied from the one below it, which demonstrates how to add a title. This comment should be changed to `# create a new plot with specific dimensions`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sphinx/source/docs/user_guide/source_examples/styling_dimensions.py`
Content:
```
1 from bokeh.plotting import figure, output_file, show
2
3 output_file("dimensions.html")
4
5 # create a new plot with a title
6 p = figure(plot_width=700)
7 p.plot_height = 300
8
9 p.circle([1, 2, 3, 4, 5], [2, 5, 8, 2, 7], size=10)
10
11 show(p)
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sphinx/source/docs/user_guide/source_examples/styling_dimensions.py b/sphinx/source/docs/user_guide/source_examples/styling_dimensions.py
--- a/sphinx/source/docs/user_guide/source_examples/styling_dimensions.py
+++ b/sphinx/source/docs/user_guide/source_examples/styling_dimensions.py
@@ -2,7 +2,7 @@
output_file("dimensions.html")
-# create a new plot with a title
+# create a new plot with specific dimensions
p = figure(plot_width=700)
p.plot_height = 300
| {"golden_diff": "diff --git a/sphinx/source/docs/user_guide/source_examples/styling_dimensions.py b/sphinx/source/docs/user_guide/source_examples/styling_dimensions.py\n--- a/sphinx/source/docs/user_guide/source_examples/styling_dimensions.py\n+++ b/sphinx/source/docs/user_guide/source_examples/styling_dimensions.py\n@@ -2,7 +2,7 @@\n \n output_file(\"dimensions.html\")\n \n-# create a new plot with a title\n+# create a new plot with specific dimensions\n p = figure(plot_width=700)\n p.plot_height = 300\n", "issue": "Correct comment in Dimension example plot\nThe comment in [this example](http://bokeh.pydata.org/en/latest/docs/user_guide/styling.html#dimensions) says `# create a new plot with a title`. I expect this example was copied from the one below is demonstrating how to add a title. This comment should be changed to `# create a new plot with specific dimensions`.\n\n", "before_files": [{"content": "from bokeh.plotting import figure, output_file, show\n\noutput_file(\"dimensions.html\")\n\n# create a new plot with a title\np = figure(plot_width=700)\np.plot_height = 300\n\np.circle([1, 2, 3, 4, 5], [2, 5, 8, 2, 7], size=10)\n\nshow(p)\n", "path": "sphinx/source/docs/user_guide/source_examples/styling_dimensions.py"}], "after_files": [{"content": "from bokeh.plotting import figure, output_file, show\n\noutput_file(\"dimensions.html\")\n\n# create a new plot with specific dimensions\np = figure(plot_width=700)\np.plot_height = 300\n\np.circle([1, 2, 3, 4, 5], [2, 5, 8, 2, 7], size=10)\n\nshow(p)\n", "path": "sphinx/source/docs/user_guide/source_examples/styling_dimensions.py"}]} | 452 | 116 |
gh_patches_debug_3795 | rasdani/github-patches | git_diff | magenta__magenta-541 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
documentation: missing dependencies
I installed magenta from scratch on a clean Mac using the README, and some tests from `bazel test //magenta/...` were failing because of these missing dependencies:
```
pip install IPython
pip install intervaltree
```
The other failures are related to https://github.com/tensorflow/magenta/issues/529 and apparently should be fixed with the upcoming version of Bazel.
This issue might be related to the fact that I wasn't using conda, which could totally be a user error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `magenta/tools/pip/setup.py`
Content:
```
1 # Copyright 2016 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """A setuptools based setup module for magenta."""
15
16 from setuptools import find_packages
17 from setuptools import setup
18
19 # Bit of a hack to parse the version string stored in version.py without
20 # executing __init__.py, which will end up requiring a bunch of dependencies to
21 # execute (e.g., tensorflow, pretty_midi, etc.).
22 # Makes the __version__ variable available.
23 execfile('magenta/version.py')
24
25
26 REQUIRED_PACKAGES = [
27 'intervaltree >= 2.1.0',
28 'mido >= 1.1.17',
29 'Pillow >= 3.4.2',
30 'pretty_midi >= 0.2.6',
31 'scipy >= 0.18.1',
32 'tensorflow >= 1.0.0',
33 'matplotlib >= 1.5.3',
34 'wheel',
35 ]
36
37 CONSOLE_SCRIPTS = [
38 'magenta.interfaces.midi.magenta_midi',
39 'magenta.models.drums_rnn.drums_rnn_create_dataset',
40 'magenta.models.drums_rnn.drums_rnn_generate',
41 'magenta.models.drums_rnn.drums_rnn_train',
42 'magenta.models.image_stylization.image_stylization_create_dataset',
43 'magenta.models.image_stylization.image_stylization_evaluate',
44 'magenta.models.image_stylization.image_stylization_finetune',
45 'magenta.models.image_stylization.image_stylization_train',
46 'magenta.models.image_stylization.image_stylization_transform',
47 'magenta.models.improv_rnn.improv_rnn_create_dataset',
48 'magenta.models.improv_rnn.improv_rnn_generate',
49 'magenta.models.improv_rnn.improv_rnn_train',
50 'magenta.models.melody_rnn.melody_rnn_create_dataset',
51 'magenta.models.melody_rnn.melody_rnn_generate',
52 'magenta.models.melody_rnn.melody_rnn_train',
53 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',
54 'magenta.models.polyphony_rnn.polyphony_rnn_generate',
55 'magenta.models.polyphony_rnn.polyphony_rnn_train',
56 'magenta.models.rl_tuner.rl_tuner_train',
57 'magenta.scripts.convert_dir_to_note_sequences',
58 ]
59
60 setup(
61 name='magenta',
62 version=__version__, # pylint: disable=undefined-variable
63 description='Use machine learning to create art and music',
64 long_description='',
65 url='https://magenta.tensorflow.org/',
66 author='Google Inc.',
67 author_email='[email protected]',
68 license='Apache 2',
69 # PyPI package information.
70 classifiers=[
71 'Development Status :: 4 - Beta',
72 'Intended Audience :: Developers',
73 'Intended Audience :: Education',
74 'Intended Audience :: Science/Research',
75 'License :: OSI Approved :: Apache Software License',
76 'Programming Language :: Python :: 2.7',
77 'Topic :: Scientific/Engineering :: Mathematics',
78 'Topic :: Software Development :: Libraries :: Python Modules',
79 'Topic :: Software Development :: Libraries',
80 ],
81 keywords='tensorflow machine learning magenta music art',
82
83 packages=find_packages(),
84 install_requires=REQUIRED_PACKAGES,
85 entry_points={
86 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in
87 ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],
88 },
89
90 include_package_data=True,
91 package_data={
92 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],
93 },
94 )
95
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/magenta/tools/pip/setup.py b/magenta/tools/pip/setup.py
--- a/magenta/tools/pip/setup.py
+++ b/magenta/tools/pip/setup.py
@@ -24,13 +24,14 @@
REQUIRED_PACKAGES = [
+ 'IPython',
+ 'Pillow >= 3.4.2',
'intervaltree >= 2.1.0',
+ 'matplotlib >= 1.5.3',
'mido >= 1.1.17',
- 'Pillow >= 3.4.2',
'pretty_midi >= 0.2.6',
'scipy >= 0.18.1',
'tensorflow >= 1.0.0',
- 'matplotlib >= 1.5.3',
'wheel',
]
| {"golden_diff": "diff --git a/magenta/tools/pip/setup.py b/magenta/tools/pip/setup.py\n--- a/magenta/tools/pip/setup.py\n+++ b/magenta/tools/pip/setup.py\n@@ -24,13 +24,14 @@\n \n \n REQUIRED_PACKAGES = [\n+ 'IPython',\n+ 'Pillow >= 3.4.2',\n 'intervaltree >= 2.1.0',\n+ 'matplotlib >= 1.5.3',\n 'mido >= 1.1.17',\n- 'Pillow >= 3.4.2',\n 'pretty_midi >= 0.2.6',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.0.0',\n- 'matplotlib >= 1.5.3',\n 'wheel',\n ]\n", "issue": "documentation: missing dependencies\nI installed magenta from scratch on a clean Mac using the README and some tests from`bazel test //magenta/...` were failing because of the missing dependencies:\r\n\r\n```\r\npip install IPython\r\npip install intervaltree\r\n```\r\n\r\nThe other failures are related to https://github.com/tensorflow/magenta/issues/529 and apparently should be fixed with the upcoming version of Bazel\r\n\r\nThis issue might be related to the fact that I wasn't using conda, which could totally be a user error.\n", "before_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A setuptools based setup module for magenta.\"\"\"\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Bit of a hack to parse the version string stored in version.py without\n# executing __init__.py, which will end up requiring a bunch of dependencies to\n# execute (e.g., tensorflow, pretty_midi, etc.).\n# Makes the __version__ variable available.\nexecfile('magenta/version.py')\n\n\nREQUIRED_PACKAGES = [\n 'intervaltree >= 2.1.0',\n 'mido >= 1.1.17',\n 'Pillow >= 3.4.2',\n 'pretty_midi >= 0.2.6',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.0.0',\n 'matplotlib >= 1.5.3',\n 'wheel',\n]\n\nCONSOLE_SCRIPTS = [\n 'magenta.interfaces.midi.magenta_midi',\n 'magenta.models.drums_rnn.drums_rnn_create_dataset',\n 'magenta.models.drums_rnn.drums_rnn_generate',\n 'magenta.models.drums_rnn.drums_rnn_train',\n 'magenta.models.image_stylization.image_stylization_create_dataset',\n 'magenta.models.image_stylization.image_stylization_evaluate',\n 'magenta.models.image_stylization.image_stylization_finetune',\n 'magenta.models.image_stylization.image_stylization_train',\n 'magenta.models.image_stylization.image_stylization_transform',\n 'magenta.models.improv_rnn.improv_rnn_create_dataset',\n 'magenta.models.improv_rnn.improv_rnn_generate',\n 'magenta.models.improv_rnn.improv_rnn_train',\n 'magenta.models.melody_rnn.melody_rnn_create_dataset',\n 'magenta.models.melody_rnn.melody_rnn_generate',\n 'magenta.models.melody_rnn.melody_rnn_train',\n 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',\n 'magenta.models.polyphony_rnn.polyphony_rnn_generate',\n 'magenta.models.polyphony_rnn.polyphony_rnn_train',\n 'magenta.models.rl_tuner.rl_tuner_train',\n 'magenta.scripts.convert_dir_to_note_sequences',\n]\n\nsetup(\n name='magenta',\n version=__version__, # pylint: disable=undefined-variable\n description='Use machine learning to create art and music',\n 
long_description='',\n url='https://magenta.tensorflow.org/',\n author='Google Inc.',\n author_email='[email protected]',\n license='Apache 2',\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='tensorflow machine learning magenta music art',\n\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n entry_points={\n 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in\n ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],\n },\n\n include_package_data=True,\n package_data={\n 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],\n },\n)\n\n", "path": "magenta/tools/pip/setup.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A setuptools based setup module for magenta.\"\"\"\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Bit of a hack to parse the version string stored in version.py without\n# executing __init__.py, which will end up requiring a bunch of dependencies to\n# execute (e.g., tensorflow, pretty_midi, etc.).\n# Makes the __version__ variable available.\nexecfile('magenta/version.py')\n\n\nREQUIRED_PACKAGES = [\n 'IPython',\n 'Pillow >= 3.4.2',\n 'intervaltree >= 2.1.0',\n 'matplotlib >= 1.5.3',\n 'mido >= 1.1.17',\n 'pretty_midi >= 0.2.6',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.0.0',\n 'wheel',\n]\n\nCONSOLE_SCRIPTS = [\n 'magenta.interfaces.midi.magenta_midi',\n 'magenta.models.drums_rnn.drums_rnn_create_dataset',\n 'magenta.models.drums_rnn.drums_rnn_generate',\n 'magenta.models.drums_rnn.drums_rnn_train',\n 'magenta.models.image_stylization.image_stylization_create_dataset',\n 'magenta.models.image_stylization.image_stylization_evaluate',\n 'magenta.models.image_stylization.image_stylization_finetune',\n 'magenta.models.image_stylization.image_stylization_train',\n 'magenta.models.image_stylization.image_stylization_transform',\n 'magenta.models.improv_rnn.improv_rnn_create_dataset',\n 'magenta.models.improv_rnn.improv_rnn_generate',\n 'magenta.models.improv_rnn.improv_rnn_train',\n 'magenta.models.melody_rnn.melody_rnn_create_dataset',\n 'magenta.models.melody_rnn.melody_rnn_generate',\n 'magenta.models.melody_rnn.melody_rnn_train',\n 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',\n 'magenta.models.polyphony_rnn.polyphony_rnn_generate',\n 'magenta.models.polyphony_rnn.polyphony_rnn_train',\n 'magenta.models.rl_tuner.rl_tuner_train',\n 'magenta.scripts.convert_dir_to_note_sequences',\n]\n\nsetup(\n name='magenta',\n version=__version__, # pylint: disable=undefined-variable\n 
description='Use machine learning to create art and music',\n long_description='',\n url='https://magenta.tensorflow.org/',\n author='Google Inc.',\n author_email='[email protected]',\n license='Apache 2',\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='tensorflow machine learning magenta music art',\n\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n entry_points={\n 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in\n ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],\n },\n\n include_package_data=True,\n package_data={\n 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],\n },\n)\n\n", "path": "magenta/tools/pip/setup.py"}]} | 1,450 | 185 |
gh_patches_debug_19493 | rasdani/github-patches | git_diff | xorbitsai__inference-566 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ENH: configurable XINFERENCE_HOME
### Is your feature request related to a problem? Please describe
XINFERENCE_HOME should be configurable.
### Describe the solution you'd like
We could start with supporting the XINFERENCE_HOME env variable. This could be done by adding a util `get_xinference_home` and making sure it is the only way to get the xinference home.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xinference/constants.py`
Content:
```
1 # Copyright 2022-2023 XProbe Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 from pathlib import Path
17
18 XINFERENCE_HOME = str(Path.home() / ".xinference")
19 XINFERENCE_CACHE_DIR = os.path.join(XINFERENCE_HOME, "cache")
20 XINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, "model")
21 XINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, "logs")
22 XINFERENCE_IMAGE_DIR = os.path.join(XINFERENCE_HOME, "image")
23
24 XINFERENCE_DEFAULT_LOCAL_HOST = "127.0.0.1"
25 XINFERENCE_DEFAULT_DISTRIBUTED_HOST = "0.0.0.0"
26 XINFERENCE_DEFAULT_ENDPOINT_PORT = 9997
27
28 XINFERENCE_ENV_ENDPOINT = "XINFERENCE_ENDPOINT"
29 XINFERENCE_ENV_MODEL_SRC = "XINFERENCE_MODEL_SRC"
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xinference/constants.py b/xinference/constants.py
--- a/xinference/constants.py
+++ b/xinference/constants.py
@@ -15,7 +15,16 @@
import os
from pathlib import Path
-XINFERENCE_HOME = str(Path.home() / ".xinference")
+XINFERENCE_ENV_ENDPOINT = "XINFERENCE_ENDPOINT"
+XINFERENCE_ENV_MODEL_SRC = "XINFERENCE_MODEL_SRC"
+XINFERENCE_ENV_HOME_PATH = "XINFERENCE_HOME"
+
+
+def get_xinference_home():
+ return os.environ.get(XINFERENCE_ENV_HOME_PATH, str(Path.home() / ".xinference"))
+
+
+XINFERENCE_HOME = get_xinference_home()
XINFERENCE_CACHE_DIR = os.path.join(XINFERENCE_HOME, "cache")
XINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, "model")
XINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, "logs")
@@ -24,6 +33,3 @@
XINFERENCE_DEFAULT_LOCAL_HOST = "127.0.0.1"
XINFERENCE_DEFAULT_DISTRIBUTED_HOST = "0.0.0.0"
XINFERENCE_DEFAULT_ENDPOINT_PORT = 9997
-
-XINFERENCE_ENV_ENDPOINT = "XINFERENCE_ENDPOINT"
-XINFERENCE_ENV_MODEL_SRC = "XINFERENCE_MODEL_SRC"
| {"golden_diff": "diff --git a/xinference/constants.py b/xinference/constants.py\n--- a/xinference/constants.py\n+++ b/xinference/constants.py\n@@ -15,7 +15,16 @@\n import os\n from pathlib import Path\n \n-XINFERENCE_HOME = str(Path.home() / \".xinference\")\n+XINFERENCE_ENV_ENDPOINT = \"XINFERENCE_ENDPOINT\"\n+XINFERENCE_ENV_MODEL_SRC = \"XINFERENCE_MODEL_SRC\"\n+XINFERENCE_ENV_HOME_PATH = \"XINFERENCE_HOME\"\n+\n+\n+def get_xinference_home():\n+ return os.environ.get(XINFERENCE_ENV_HOME_PATH, str(Path.home() / \".xinference\"))\n+\n+\n+XINFERENCE_HOME = get_xinference_home()\n XINFERENCE_CACHE_DIR = os.path.join(XINFERENCE_HOME, \"cache\")\n XINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, \"model\")\n XINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, \"logs\")\n@@ -24,6 +33,3 @@\n XINFERENCE_DEFAULT_LOCAL_HOST = \"127.0.0.1\"\n XINFERENCE_DEFAULT_DISTRIBUTED_HOST = \"0.0.0.0\"\n XINFERENCE_DEFAULT_ENDPOINT_PORT = 9997\n-\n-XINFERENCE_ENV_ENDPOINT = \"XINFERENCE_ENDPOINT\"\n-XINFERENCE_ENV_MODEL_SRC = \"XINFERENCE_MODEL_SRC\"\n", "issue": "ENH: configurable XINFERENCE_HOME\n### Is your feature request related to a problem? Please describe\r\nXINFERENCE_HOME should be configurable.\r\n\r\n### Describe the solution you'd like\r\nWe could starts with supporting the XINFERENCE_HOME env variable. This could be done by adding an util `get_xinference_home` and make sure this is the only way to get xinference home.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nfrom pathlib import Path\n\nXINFERENCE_HOME = str(Path.home() / \".xinference\")\nXINFERENCE_CACHE_DIR = os.path.join(XINFERENCE_HOME, \"cache\")\nXINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, \"model\")\nXINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, \"logs\")\nXINFERENCE_IMAGE_DIR = os.path.join(XINFERENCE_HOME, \"image\")\n\nXINFERENCE_DEFAULT_LOCAL_HOST = \"127.0.0.1\"\nXINFERENCE_DEFAULT_DISTRIBUTED_HOST = \"0.0.0.0\"\nXINFERENCE_DEFAULT_ENDPOINT_PORT = 9997\n\nXINFERENCE_ENV_ENDPOINT = \"XINFERENCE_ENDPOINT\"\nXINFERENCE_ENV_MODEL_SRC = \"XINFERENCE_MODEL_SRC\"\n", "path": "xinference/constants.py"}], "after_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nfrom pathlib import Path\n\nXINFERENCE_ENV_ENDPOINT = \"XINFERENCE_ENDPOINT\"\nXINFERENCE_ENV_MODEL_SRC = \"XINFERENCE_MODEL_SRC\"\nXINFERENCE_ENV_HOME_PATH = \"XINFERENCE_HOME\"\n\n\ndef 
get_xinference_home():\n return os.environ.get(XINFERENCE_ENV_HOME_PATH, str(Path.home() / \".xinference\"))\n\n\nXINFERENCE_HOME = get_xinference_home()\nXINFERENCE_CACHE_DIR = os.path.join(XINFERENCE_HOME, \"cache\")\nXINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, \"model\")\nXINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, \"logs\")\nXINFERENCE_IMAGE_DIR = os.path.join(XINFERENCE_HOME, \"image\")\n\nXINFERENCE_DEFAULT_LOCAL_HOST = \"127.0.0.1\"\nXINFERENCE_DEFAULT_DISTRIBUTED_HOST = \"0.0.0.0\"\nXINFERENCE_DEFAULT_ENDPOINT_PORT = 9997\n", "path": "xinference/constants.py"}]} | 697 | 294 |
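The fix recorded above reduces to one small pattern: resolve a home directory from an environment variable, fall back to a per-user default, and derive every other path from the result. A minimal standalone sketch of that pattern, using only the names that appear in the row (nothing else about the real package is assumed), might look like:

```python
import os
from pathlib import Path

XINFERENCE_ENV_HOME_PATH = "XINFERENCE_HOME"


def get_xinference_home() -> str:
    """Return the xinference home dir, honouring the XINFERENCE_HOME override."""
    return os.environ.get(XINFERENCE_ENV_HOME_PATH, str(Path.home() / ".xinference"))


# Derived locations follow the resolved home, so exporting
# XINFERENCE_HOME=/srv/xinference relocates the cache, model and log dirs too.
XINFERENCE_HOME = get_xinference_home()
XINFERENCE_CACHE_DIR = os.path.join(XINFERENCE_HOME, "cache")
XINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, "model")
XINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, "logs")
```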
gh_patches_debug_60837 | rasdani/github-patches | git_diff | openedx__ecommerce-348 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Running migrations for Travis builds
We run migrations to ensure no migrations are missing and that they work on fresh installs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from django.db import models, migrations
5
6
7 def create_shipping_event(apps, schema_editor):
8 """
9
10 Create a single new shipping event type that can be applied to an order. This will allow us to initiate order
11 shipment.
12
13 """
14 # Create all our Product Types.
15 ShippingEventType = apps.get_model("order", "ShippingEventType")
16 shipped_event = ShippingEventType(code="shipped", name="Shipped")
17 shipped_event.save()
18
19
20 class Migration(migrations.Migration):
21
22 dependencies = [
23 ('order', '0002_auto_20141007_2032'),
24 ]
25
26 operations = [
27 migrations.RunPython(create_shipping_event),
28 ]
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py b/ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py
--- a/ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py
+++ b/ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py
@@ -13,8 +13,7 @@
"""
# Create all our Product Types.
ShippingEventType = apps.get_model("order", "ShippingEventType")
- shipped_event = ShippingEventType(code="shipped", name="Shipped")
- shipped_event.save()
+ ShippingEventType.objects.create(code="shipped", name="Shipped")
class Migration(migrations.Migration):
| {"golden_diff": "diff --git a/ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py b/ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py\n--- a/ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py\n+++ b/ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py\n@@ -13,8 +13,7 @@\n \"\"\"\n # Create all our Product Types.\n ShippingEventType = apps.get_model(\"order\", \"ShippingEventType\")\n- shipped_event = ShippingEventType(code=\"shipped\", name=\"Shipped\")\n- shipped_event.save()\n+ ShippingEventType.objects.create(code=\"shipped\", name=\"Shipped\")\n \n \n class Migration(migrations.Migration):\n", "issue": "Running migrations for Travis builds\nWe run migrations to ensure no migrations are missing, and they work on fresh installs.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\n\n\ndef create_shipping_event(apps, schema_editor):\n \"\"\"\n\n Create a single new shipping event type that can be applied to an order. This will allow us to initiate order\n shipment.\n\n \"\"\"\n # Create all our Product Types.\n ShippingEventType = apps.get_model(\"order\", \"ShippingEventType\")\n shipped_event = ShippingEventType(code=\"shipped\", name=\"Shipped\")\n shipped_event.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('order', '0002_auto_20141007_2032'),\n ]\n\n operations = [\n migrations.RunPython(create_shipping_event),\n ]\n", "path": "ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\n\n\ndef create_shipping_event(apps, schema_editor):\n \"\"\"\n\n Create a single new shipping event type that can be applied to an order. This will allow us to initiate order\n shipment.\n\n \"\"\"\n # Create all our Product Types.\n ShippingEventType = apps.get_model(\"order\", \"ShippingEventType\")\n ShippingEventType.objects.create(code=\"shipped\", name=\"Shipped\")\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('order', '0002_auto_20141007_2032'),\n ]\n\n operations = [\n migrations.RunPython(create_shipping_event),\n ]\n", "path": "ecommerce/extensions/order/migrations/0003_auto_20150224_1520.py"}]} | 522 | 207 |
gh_patches_debug_6915 | rasdani/github-patches | git_diff | bridgecrewio__checkov-3596 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_DOCKER_1 rule not failed when using EXPOSE 22/TCP
**Issue**
If it is related to an existing check, CKV_DOCKER_1.
Dockerfile EXPOSE accepts a bare port, which defaults to the TCP protocol if none is specified, or a port with an explicit /tcp or /udp suffix.
When using "EXPOSE 22/tcp" the rule does not detect the SSH port. Something like '22/tcp'.split('/')[0] would return the port without the protocol, or the rule could test for both '22' and '22/tcp'.
**Examples**
Issue can be replicated with a Dockerfile which uses "EXPOSE 22/tcp".
**Version**
- Checkov Version 2.1.87
**Specification**
EXPOSE specification https://docs.docker.com/engine/reference/builder/#expose
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/dockerfile/checks/ExposePort22.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from checkov.common.models.enums import CheckCategories, CheckResult
6 from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
7
8 if TYPE_CHECKING:
9 from dockerfile_parse.parser import _Instruction
10
11
12 class ExposePort22(BaseDockerfileCheck):
13 def __init__(self) -> None:
14 name = "Ensure port 22 is not exposed"
15 id = "CKV_DOCKER_1"
16 supported_instructions = ("EXPOSE",)
17 categories = (CheckCategories.NETWORKING,)
18 super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
19
20 def scan_resource_conf(self, conf: list[_Instruction]) -> tuple[CheckResult, list[_Instruction] | None]:
21 for expose in conf:
22 if "22" in expose["value"].split(" "):
23 return CheckResult.FAILED, [expose]
24
25 return CheckResult.PASSED, None
26
27
28 check = ExposePort22()
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/dockerfile/checks/ExposePort22.py b/checkov/dockerfile/checks/ExposePort22.py
--- a/checkov/dockerfile/checks/ExposePort22.py
+++ b/checkov/dockerfile/checks/ExposePort22.py
@@ -19,7 +19,7 @@
def scan_resource_conf(self, conf: list[_Instruction]) -> tuple[CheckResult, list[_Instruction] | None]:
for expose in conf:
- if "22" in expose["value"].split(" "):
+ if any(port in expose["value"].split(" ") for port in ("22", "22/tcp")):
return CheckResult.FAILED, [expose]
return CheckResult.PASSED, None
| {"golden_diff": "diff --git a/checkov/dockerfile/checks/ExposePort22.py b/checkov/dockerfile/checks/ExposePort22.py\n--- a/checkov/dockerfile/checks/ExposePort22.py\n+++ b/checkov/dockerfile/checks/ExposePort22.py\n@@ -19,7 +19,7 @@\n \n def scan_resource_conf(self, conf: list[_Instruction]) -> tuple[CheckResult, list[_Instruction] | None]:\n for expose in conf:\n- if \"22\" in expose[\"value\"].split(\" \"):\n+ if any(port in expose[\"value\"].split(\" \") for port in (\"22\", \"22/tcp\")):\n return CheckResult.FAILED, [expose]\n \n return CheckResult.PASSED, None\n", "issue": "CKV_DOCKER_1 rule not failed when using EXPOSE 22/TCP\n**Issue**\r\nIf it is related to an existing check, CKV_DOCKER_1.\r\nDockerfile EXPOSE can accept port using tcp proto by defaut if not specified or you can also specify the protocol with /tcp or /udp.\r\n\r\nWhen using \"EXPOSE 22/tcp\" the rule does not detect the SSH port. something like '22/tcp'.split('/')[0] will return port without protocol or we can test both '22' or '22/tcp' in the rule.\r\n\r\n**Examples**\r\nIssue can be replicated with a Dockerfile which uses \"EXPOSE 22/tcp\".\r\n\r\n**Version**\r\n - Checkov Version 2.1.87\r\n\r\n**Specification**\r\nEXPOSE specification https://docs.docker.com/engine/reference/builder/#expose\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\nif TYPE_CHECKING:\n from dockerfile_parse.parser import _Instruction\n\n\nclass ExposePort22(BaseDockerfileCheck):\n def __init__(self) -> None:\n name = \"Ensure port 22 is not exposed\"\n id = \"CKV_DOCKER_1\"\n supported_instructions = (\"EXPOSE\",)\n categories = (CheckCategories.NETWORKING,)\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_resource_conf(self, conf: list[_Instruction]) -> tuple[CheckResult, list[_Instruction] | None]:\n for expose in conf:\n if \"22\" in expose[\"value\"].split(\" \"):\n return CheckResult.FAILED, [expose]\n\n return CheckResult.PASSED, None\n\n\ncheck = ExposePort22()\n", "path": "checkov/dockerfile/checks/ExposePort22.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\nif TYPE_CHECKING:\n from dockerfile_parse.parser import _Instruction\n\n\nclass ExposePort22(BaseDockerfileCheck):\n def __init__(self) -> None:\n name = \"Ensure port 22 is not exposed\"\n id = \"CKV_DOCKER_1\"\n supported_instructions = (\"EXPOSE\",)\n categories = (CheckCategories.NETWORKING,)\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_resource_conf(self, conf: list[_Instruction]) -> tuple[CheckResult, list[_Instruction] | None]:\n for expose in conf:\n if any(port in expose[\"value\"].split(\" \") for port in (\"22\", \"22/tcp\")):\n return CheckResult.FAILED, [expose]\n\n return CheckResult.PASSED, None\n\n\ncheck = ExposePort22()\n", "path": "checkov/dockerfile/checks/ExposePort22.py"}]} | 729 | 170 |
gh_patches_debug_51284 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2093 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RuntimeError: Requested component 'jaeger' not found in entry points for 'opentelemetry_exporter'
From the exporters selection of [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#exporter-selection)
>Known values for OTEL_TRACES_EXPORTER are:
>
>- "otlp": OTLP
>- "jaeger": Jaeger gRPC
>- "zipkin": Zipkin (Defaults to protobuf format)
>- "none": No automatically configured exporter for traces.
We have split up the exporters based on protocol + serialisation to avoid taking on unnecessary dependencies, so there is no entry point for `jaeger`. If someone reads the spec and follows that recommendation, they will run into this error. We should either add an entry point `jaeger` or an alias that solves this problem.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 OpenTelemetry SDK Configurator for Easy Instrumentation with Distros
18 """
19
20 from os import environ
21 from typing import Sequence, Tuple
22
23 from pkg_resources import iter_entry_points
24
25 from opentelemetry import trace
26 from opentelemetry.environment_variables import (
27 OTEL_PYTHON_ID_GENERATOR,
28 OTEL_TRACES_EXPORTER,
29 )
30 from opentelemetry.instrumentation.configurator import BaseConfigurator
31 from opentelemetry.sdk.trace import TracerProvider
32 from opentelemetry.sdk.trace.export import BatchSpanProcessor, SpanExporter
33 from opentelemetry.sdk.trace.id_generator import IdGenerator
34
35 _EXPORTER_OTLP = "otlp"
36 _EXPORTER_OTLP_SPAN = "otlp_proto_grpc_span"
37
38 _RANDOM_ID_GENERATOR = "random"
39 _DEFAULT_ID_GENERATOR = _RANDOM_ID_GENERATOR
40
41
42 def _get_id_generator() -> str:
43 return environ.get(OTEL_PYTHON_ID_GENERATOR, _DEFAULT_ID_GENERATOR)
44
45
46 def _get_exporter_names() -> Sequence[str]:
47 trace_exporters = environ.get(OTEL_TRACES_EXPORTER)
48
49 exporters = set()
50
51 if trace_exporters and trace_exporters.lower().strip() != "none":
52 exporters.update(
53 {
54 trace_exporter.strip()
55 for trace_exporter in trace_exporters.split(",")
56 }
57 )
58
59 if _EXPORTER_OTLP in exporters:
60 exporters.remove(_EXPORTER_OTLP)
61 exporters.add(_EXPORTER_OTLP_SPAN)
62
63 return list(exporters)
64
65
66 def _init_tracing(
67 exporters: Sequence[SpanExporter], id_generator: IdGenerator
68 ):
69 # if env var OTEL_RESOURCE_ATTRIBUTES is given, it will read the service_name
70 # from the env variable else defaults to "unknown_service"
71 provider = TracerProvider(
72 id_generator=id_generator(),
73 )
74 trace.set_tracer_provider(provider)
75
76 for _, exporter_class in exporters.items():
77 exporter_args = {}
78 provider.add_span_processor(
79 BatchSpanProcessor(exporter_class(**exporter_args))
80 )
81
82
83 def _import_tracer_provider_config_components(
84 selected_components, entry_point_name
85 ) -> Sequence[Tuple[str, object]]:
86 component_entry_points = {
87 ep.name: ep for ep in iter_entry_points(entry_point_name)
88 }
89 component_impls = []
90 for selected_component in selected_components:
91 entry_point = component_entry_points.get(selected_component, None)
92 if not entry_point:
93 raise RuntimeError(
94 f"Requested component '{selected_component}' not found in entry points for '{entry_point_name}'"
95 )
96
97 component_impl = entry_point.load()
98 component_impls.append((selected_component, component_impl))
99
100 return component_impls
101
102
103 def _import_exporters(
104 exporter_names: Sequence[str],
105 ) -> Sequence[SpanExporter]:
106 trace_exporters = {}
107
108 for (
109 exporter_name,
110 exporter_impl,
111 ) in _import_tracer_provider_config_components(
112 exporter_names, "opentelemetry_exporter"
113 ):
114 if issubclass(exporter_impl, SpanExporter):
115 trace_exporters[exporter_name] = exporter_impl
116 else:
117 raise RuntimeError(f"{exporter_name} is not a trace exporter")
118 return trace_exporters
119
120
121 def _import_id_generator(id_generator_name: str) -> IdGenerator:
122 # pylint: disable=unbalanced-tuple-unpacking
123 [
124 (id_generator_name, id_generator_impl)
125 ] = _import_tracer_provider_config_components(
126 [id_generator_name.strip()], "opentelemetry_id_generator"
127 )
128
129 if issubclass(id_generator_impl, IdGenerator):
130 return id_generator_impl
131
132 raise RuntimeError(f"{id_generator_name} is not an IdGenerator")
133
134
135 def _initialize_components():
136 exporter_names = _get_exporter_names()
137 trace_exporters = _import_exporters(exporter_names)
138 id_generator_name = _get_id_generator()
139 id_generator = _import_id_generator(id_generator_name)
140 _init_tracing(trace_exporters, id_generator)
141
142
143 class _OTelSDKConfigurator(BaseConfigurator):
144 """A basic Configurator by OTel Python for initalizing OTel SDK components
145
146 Initializes several crucial OTel SDK components (i.e. TracerProvider,
147 MeterProvider, Processors...) according to a default implementation. Other
148 Configurators can subclass and slightly alter this initialization.
149
150 NOTE: This class should not be instantiated nor should it become an entry
151 point on the `opentelemetry-sdk` package. Instead, distros should subclass
152 this Configurator and enchance it as needed.
153 """
154
155 def _configure(self, **kwargs):
156 _initialize_components()
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py
@@ -109,7 +109,7 @@
exporter_name,
exporter_impl,
) in _import_tracer_provider_config_components(
- exporter_names, "opentelemetry_exporter"
+ exporter_names, "opentelemetry_traces_exporter"
):
if issubclass(exporter_impl, SpanExporter):
trace_exporters[exporter_name] = exporter_impl
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py\n@@ -109,7 +109,7 @@\n exporter_name,\n exporter_impl,\n ) in _import_tracer_provider_config_components(\n- exporter_names, \"opentelemetry_exporter\"\n+ exporter_names, \"opentelemetry_traces_exporter\"\n ):\n if issubclass(exporter_impl, SpanExporter):\n trace_exporters[exporter_name] = exporter_impl\n", "issue": "RuntimeError: Requested component 'jaeger' not found in entry points for 'opentelemetry_exporter'\nFrom the exporters selection of [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#exporter-selection)\r\n\r\n>Known values for OTEL_TRACES_EXPORTER are:\r\n>\r\n>- \"otlp\": OTLP\r\n>- \"jaeger\": Jaeger gRPC\r\n>- \"zipkin\": Zipkin (Defaults to protobuf format)\r\n>- \"none\": No automatically configured exporter for traces.\r\n\r\nWe have split up the exporters based on protocol + serialisation to avoid taking unnecessary dependency so there is no entry point for `jaeger`. If someone reads the spec and follows that recommendation they will run into this error. We should either add an entry point `jaeger` or an alias that solves this problem.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nOpenTelemetry SDK Configurator for Easy Instrumentation with Distros\n\"\"\"\n\nfrom os import environ\nfrom typing import Sequence, Tuple\n\nfrom pkg_resources import iter_entry_points\n\nfrom opentelemetry import trace\nfrom opentelemetry.environment_variables import (\n OTEL_PYTHON_ID_GENERATOR,\n OTEL_TRACES_EXPORTER,\n)\nfrom opentelemetry.instrumentation.configurator import BaseConfigurator\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor, SpanExporter\nfrom opentelemetry.sdk.trace.id_generator import IdGenerator\n\n_EXPORTER_OTLP = \"otlp\"\n_EXPORTER_OTLP_SPAN = \"otlp_proto_grpc_span\"\n\n_RANDOM_ID_GENERATOR = \"random\"\n_DEFAULT_ID_GENERATOR = _RANDOM_ID_GENERATOR\n\n\ndef _get_id_generator() -> str:\n return environ.get(OTEL_PYTHON_ID_GENERATOR, _DEFAULT_ID_GENERATOR)\n\n\ndef _get_exporter_names() -> Sequence[str]:\n trace_exporters = environ.get(OTEL_TRACES_EXPORTER)\n\n exporters = set()\n\n if trace_exporters and trace_exporters.lower().strip() != \"none\":\n exporters.update(\n {\n trace_exporter.strip()\n for trace_exporter in trace_exporters.split(\",\")\n }\n )\n\n if _EXPORTER_OTLP in exporters:\n exporters.remove(_EXPORTER_OTLP)\n exporters.add(_EXPORTER_OTLP_SPAN)\n\n return list(exporters)\n\n\ndef _init_tracing(\n exporters: Sequence[SpanExporter], id_generator: IdGenerator\n):\n # if env var OTEL_RESOURCE_ATTRIBUTES is given, it will 
read the service_name\n # from the env variable else defaults to \"unknown_service\"\n provider = TracerProvider(\n id_generator=id_generator(),\n )\n trace.set_tracer_provider(provider)\n\n for _, exporter_class in exporters.items():\n exporter_args = {}\n provider.add_span_processor(\n BatchSpanProcessor(exporter_class(**exporter_args))\n )\n\n\ndef _import_tracer_provider_config_components(\n selected_components, entry_point_name\n) -> Sequence[Tuple[str, object]]:\n component_entry_points = {\n ep.name: ep for ep in iter_entry_points(entry_point_name)\n }\n component_impls = []\n for selected_component in selected_components:\n entry_point = component_entry_points.get(selected_component, None)\n if not entry_point:\n raise RuntimeError(\n f\"Requested component '{selected_component}' not found in entry points for '{entry_point_name}'\"\n )\n\n component_impl = entry_point.load()\n component_impls.append((selected_component, component_impl))\n\n return component_impls\n\n\ndef _import_exporters(\n exporter_names: Sequence[str],\n) -> Sequence[SpanExporter]:\n trace_exporters = {}\n\n for (\n exporter_name,\n exporter_impl,\n ) in _import_tracer_provider_config_components(\n exporter_names, \"opentelemetry_exporter\"\n ):\n if issubclass(exporter_impl, SpanExporter):\n trace_exporters[exporter_name] = exporter_impl\n else:\n raise RuntimeError(f\"{exporter_name} is not a trace exporter\")\n return trace_exporters\n\n\ndef _import_id_generator(id_generator_name: str) -> IdGenerator:\n # pylint: disable=unbalanced-tuple-unpacking\n [\n (id_generator_name, id_generator_impl)\n ] = _import_tracer_provider_config_components(\n [id_generator_name.strip()], \"opentelemetry_id_generator\"\n )\n\n if issubclass(id_generator_impl, IdGenerator):\n return id_generator_impl\n\n raise RuntimeError(f\"{id_generator_name} is not an IdGenerator\")\n\n\ndef _initialize_components():\n exporter_names = _get_exporter_names()\n trace_exporters = _import_exporters(exporter_names)\n id_generator_name = _get_id_generator()\n id_generator = _import_id_generator(id_generator_name)\n _init_tracing(trace_exporters, id_generator)\n\n\nclass _OTelSDKConfigurator(BaseConfigurator):\n \"\"\"A basic Configurator by OTel Python for initalizing OTel SDK components\n\n Initializes several crucial OTel SDK components (i.e. TracerProvider,\n MeterProvider, Processors...) according to a default implementation. Other\n Configurators can subclass and slightly alter this initialization.\n\n NOTE: This class should not be instantiated nor should it become an entry\n point on the `opentelemetry-sdk` package. 
Instead, distros should subclass\n this Configurator and enchance it as needed.\n \"\"\"\n\n def _configure(self, **kwargs):\n _initialize_components()\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nOpenTelemetry SDK Configurator for Easy Instrumentation with Distros\n\"\"\"\n\nfrom os import environ\nfrom typing import Sequence, Tuple\n\nfrom pkg_resources import iter_entry_points\n\nfrom opentelemetry import trace\nfrom opentelemetry.environment_variables import (\n OTEL_PYTHON_ID_GENERATOR,\n OTEL_TRACES_EXPORTER,\n)\nfrom opentelemetry.instrumentation.configurator import BaseConfigurator\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor, SpanExporter\nfrom opentelemetry.sdk.trace.id_generator import IdGenerator\n\n_EXPORTER_OTLP = \"otlp\"\n_EXPORTER_OTLP_SPAN = \"otlp_proto_grpc_span\"\n\n_RANDOM_ID_GENERATOR = \"random\"\n_DEFAULT_ID_GENERATOR = _RANDOM_ID_GENERATOR\n\n\ndef _get_id_generator() -> str:\n return environ.get(OTEL_PYTHON_ID_GENERATOR, _DEFAULT_ID_GENERATOR)\n\n\ndef _get_exporter_names() -> Sequence[str]:\n trace_exporters = environ.get(OTEL_TRACES_EXPORTER)\n\n exporters = set()\n\n if trace_exporters and trace_exporters.lower().strip() != \"none\":\n exporters.update(\n {\n trace_exporter.strip()\n for trace_exporter in trace_exporters.split(\",\")\n }\n )\n\n if _EXPORTER_OTLP in exporters:\n exporters.remove(_EXPORTER_OTLP)\n exporters.add(_EXPORTER_OTLP_SPAN)\n\n return list(exporters)\n\n\ndef _init_tracing(\n exporters: Sequence[SpanExporter], id_generator: IdGenerator\n):\n # if env var OTEL_RESOURCE_ATTRIBUTES is given, it will read the service_name\n # from the env variable else defaults to \"unknown_service\"\n provider = TracerProvider(\n id_generator=id_generator(),\n )\n trace.set_tracer_provider(provider)\n\n for _, exporter_class in exporters.items():\n exporter_args = {}\n provider.add_span_processor(\n BatchSpanProcessor(exporter_class(**exporter_args))\n )\n\n\ndef _import_tracer_provider_config_components(\n selected_components, entry_point_name\n) -> Sequence[Tuple[str, object]]:\n component_entry_points = {\n ep.name: ep for ep in iter_entry_points(entry_point_name)\n }\n component_impls = []\n for selected_component in selected_components:\n entry_point = component_entry_points.get(selected_component, None)\n if not entry_point:\n raise RuntimeError(\n f\"Requested component '{selected_component}' not found in entry points for '{entry_point_name}'\"\n )\n\n component_impl = entry_point.load()\n component_impls.append((selected_component, component_impl))\n\n return component_impls\n\n\ndef _import_exporters(\n exporter_names: Sequence[str],\n) -> Sequence[SpanExporter]:\n trace_exporters = {}\n\n for (\n exporter_name,\n exporter_impl,\n ) in _import_tracer_provider_config_components(\n exporter_names, 
\"opentelemetry_traces_exporter\"\n ):\n if issubclass(exporter_impl, SpanExporter):\n trace_exporters[exporter_name] = exporter_impl\n else:\n raise RuntimeError(f\"{exporter_name} is not a trace exporter\")\n return trace_exporters\n\n\ndef _import_id_generator(id_generator_name: str) -> IdGenerator:\n # pylint: disable=unbalanced-tuple-unpacking\n [\n (id_generator_name, id_generator_impl)\n ] = _import_tracer_provider_config_components(\n [id_generator_name.strip()], \"opentelemetry_id_generator\"\n )\n\n if issubclass(id_generator_impl, IdGenerator):\n return id_generator_impl\n\n raise RuntimeError(f\"{id_generator_name} is not an IdGenerator\")\n\n\ndef _initialize_components():\n exporter_names = _get_exporter_names()\n trace_exporters = _import_exporters(exporter_names)\n id_generator_name = _get_id_generator()\n id_generator = _import_id_generator(id_generator_name)\n _init_tracing(trace_exporters, id_generator)\n\n\nclass _OTelSDKConfigurator(BaseConfigurator):\n \"\"\"A basic Configurator by OTel Python for initalizing OTel SDK components\n\n Initializes several crucial OTel SDK components (i.e. TracerProvider,\n MeterProvider, Processors...) according to a default implementation. Other\n Configurators can subclass and slightly alter this initialization.\n\n NOTE: This class should not be instantiated nor should it become an entry\n point on the `opentelemetry-sdk` package. Instead, distros should subclass\n this Configurator and enchance it as needed.\n \"\"\"\n\n def _configure(self, **kwargs):\n _initialize_components()\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py"}]} | 1,932 | 158 |
gh_patches_debug_22205 | rasdani/github-patches | git_diff | freedomofpress__securedrop-4523 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The Source Interface's /journalist-key endpoint returns a 404 instead of the Submission Key
## Description
The submission key should be available via the Source Interface at the URL http://[Onion address]/journalist-key, but said URL returns a 404 instead.
## Steps to Reproduce
- set up a production SecureDrop instance on HW or VMs
- visit the source interface, click through to the submission page, and click the ``public key`` link.
## Expected Behavior
The Source Interface responds with the Submission Key
## Actual Behavior
The Source Interface responds with a 404, and the following errors are recorded if logging is enabled:
```
[Thu Jun 13 17:22:12.285962 2019] [authz_core:debug] [pid 7030:tid 3881911650048] mod_authz_core.c(835): [client 127.0.0.1:45900] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/lookup
[Thu Jun 13 17:22:12.286129 2019] [authz_core:debug] [pid 7030:tid 3881911650048] mod_authz_core.c(835): [client 127.0.0.1:45900] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/lookup
[Thu Jun 13 17:22:12.415949 2019] [wsgi:error] [pid 7028:tid 3882000488192] [remote 127.0.0.1:20954] mod_wsgi (pid=7028): Exception occurred processing WSGI script '/var/www/source.wsgi'.
[Thu Jun 13 17:22:12.416036 2019] [wsgi:error] [pid 7028:tid 3882000488192] [remote 127.0.0.1:20954] TypeError: file like object yielded non string type
[Thu Jun 13 17:22:12.416281 2019] [authz_core:debug] [pid 7030:tid 3881911650048] mod_authz_core.c(835): [client 127.0.0.1:45900] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/lookup
[Thu Jun 13 17:22:12.416360 2019] [authz_core:debug] [pid 7030:tid 3881911650048] mod_authz_core.c(835): [client 127.0.0.1:45900] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/lookup
[Thu Jun 13 17:22:13.812032 2019] [authz_core:debug] [pid 7031:tid 3882000144128] mod_authz_core.c(835): [client 127.0.0.1:45902] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/journalist-key
[Thu Jun 13 17:22:14.587228 2019] [authz_core:debug] [pid 7031:tid 3881991599872] mod_authz_core.c(835): [client 127.0.0.1:45902] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/journalist-key
[Thu Jun 13 17:22:14.587409 2019] [authz_core:debug] [pid 7031:tid 3881991599872] mod_authz_core.c(835): [client 127.0.0.1:45902] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/journalist-key
[Thu Jun 13 17:22:15.296269 2019] [authz_core:debug] [pid 7031:tid 3881911961344] mod_authz_core.c(835): [client 127.0.0.1:45902] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/journalist-key
```
Please provide screenshots where appropriate.
## Comments
Suggestions to fix, any other relevant information.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/source_app/info.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from io import StringIO
4 from flask import Blueprint, render_template, send_file, current_app
5
6
7 def make_blueprint(config):
8 view = Blueprint('info', __name__)
9
10 @view.route('/tor2web-warning')
11 def tor2web_warning():
12 return render_template("tor2web-warning.html")
13
14 @view.route('/use-tor')
15 def recommend_tor_browser():
16 return render_template("use-tor-browser.html")
17
18 @view.route('/journalist-key')
19 def download_journalist_pubkey():
20 journalist_pubkey = current_app.crypto_util.gpg.export_keys(
21 config.JOURNALIST_KEY)
22 return send_file(StringIO(journalist_pubkey),
23 mimetype="application/pgp-keys",
24 attachment_filename=config.JOURNALIST_KEY + ".asc",
25 as_attachment=True)
26
27 @view.route('/why-journalist-key')
28 def why_download_journalist_pubkey():
29 return render_template("why-journalist-key.html")
30
31 return view
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/source_app/info.py b/securedrop/source_app/info.py
--- a/securedrop/source_app/info.py
+++ b/securedrop/source_app/info.py
@@ -1,8 +1,12 @@
# -*- coding: utf-8 -*-
-
-from io import StringIO
+import six
from flask import Blueprint, render_template, send_file, current_app
+if six.PY2:
+ from cStringIO import StringIO # noqa
+else:
+ from io import BytesIO # noqa
+
def make_blueprint(config):
view = Blueprint('info', __name__)
@@ -19,7 +23,11 @@
def download_journalist_pubkey():
journalist_pubkey = current_app.crypto_util.gpg.export_keys(
config.JOURNALIST_KEY)
- return send_file(StringIO(journalist_pubkey),
+ if six.PY2:
+ data = StringIO(journalist_pubkey)
+ else:
+ data = BytesIO(journalist_pubkey.encode('utf-8'))
+ return send_file(data,
mimetype="application/pgp-keys",
attachment_filename=config.JOURNALIST_KEY + ".asc",
as_attachment=True)
| {"golden_diff": "diff --git a/securedrop/source_app/info.py b/securedrop/source_app/info.py\n--- a/securedrop/source_app/info.py\n+++ b/securedrop/source_app/info.py\n@@ -1,8 +1,12 @@\n # -*- coding: utf-8 -*-\n-\n-from io import StringIO\n+import six\n from flask import Blueprint, render_template, send_file, current_app\n \n+if six.PY2:\n+ from cStringIO import StringIO # noqa\n+else:\n+ from io import BytesIO # noqa\n+\n \n def make_blueprint(config):\n view = Blueprint('info', __name__)\n@@ -19,7 +23,11 @@\n def download_journalist_pubkey():\n journalist_pubkey = current_app.crypto_util.gpg.export_keys(\n config.JOURNALIST_KEY)\n- return send_file(StringIO(journalist_pubkey),\n+ if six.PY2:\n+ data = StringIO(journalist_pubkey)\n+ else:\n+ data = BytesIO(journalist_pubkey.encode('utf-8'))\n+ return send_file(data,\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n", "issue": "The Source Interface's /journalist-key endpoint returns a 404 instead of the Submission Key\n## Description\r\n\r\nThe submission key should be available via the Source Interface at the URL http://[Onion address]/journalist-key, but said URL returns a 404 instead.\r\n\r\n## Steps to Reproduce\r\n\r\n- set up a production SecureDrop instance on HW or VMs\r\n- visit the source interface, click through to the submission page, and click the ``public key`` link.\r\n\r\n## Expected Behavior\r\nThe Source Interface responds with the Submission Key\r\n\r\n## Actual Behavior\r\nThe Source Interface responds with a 404, and the following errors are recorded if logging is enabled:\r\n```\r\n[Thu Jun 13 17:22:12.285962 2019] [authz_core:debug] [pid 7030:tid 3881911650048] mod_authz_core.c(835): [client 127.0.0.1:45900] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/lookup\r\n[Thu Jun 13 17:22:12.286129 2019] [authz_core:debug] [pid 7030:tid 3881911650048] mod_authz_core.c(835): [client 127.0.0.1:45900] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/lookup\r\n[Thu Jun 13 17:22:12.415949 2019] [wsgi:error] [pid 7028:tid 3882000488192] [remote 127.0.0.1:20954] mod_wsgi (pid=7028): Exception occurred processing WSGI script '/var/www/source.wsgi'.\r\n[Thu Jun 13 17:22:12.416036 2019] [wsgi:error] [pid 7028:tid 3882000488192] [remote 127.0.0.1:20954] TypeError: file like object yielded non string type\r\n[Thu Jun 13 17:22:12.416281 2019] [authz_core:debug] [pid 7030:tid 3881911650048] mod_authz_core.c(835): [client 127.0.0.1:45900] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/lookup\r\n[Thu Jun 13 17:22:12.416360 2019] [authz_core:debug] [pid 7030:tid 3881911650048] mod_authz_core.c(835): [client 127.0.0.1:45900] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/lookup\r\n[Thu Jun 13 17:22:13.812032 2019] [authz_core:debug] [pid 7031:tid 3882000144128] mod_authz_core.c(835): [client 127.0.0.1:45902] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/journalist-key\r\n[Thu Jun 13 17:22:14.587228 2019] [authz_core:debug] [pid 7031:tid 3881991599872] mod_authz_core.c(835): [client 127.0.0.1:45902] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/journalist-key\r\n[Thu Jun 13 17:22:14.587409 2019] [authz_core:debug] [pid 7031:tid 3881991599872] mod_authz_core.c(835): [client 127.0.0.1:45902] 
AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/journalist-key\r\n[Thu Jun 13 17:22:15.296269 2019] [authz_core:debug] [pid 7031:tid 3881911961344] mod_authz_core.c(835): [client 127.0.0.1:45902] AH01628: authorization result: granted (no directives), referer: http://fehpa6pmg6ke4i3u.onion/journalist-key\r\n```\r\n\r\n\r\nPlease provide screenshots where appropriate.\r\n\r\n## Comments\r\n\r\nSuggestions to fix, any other relevant information.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom io import StringIO\nfrom flask import Blueprint, render_template, send_file, current_app\n\n\ndef make_blueprint(config):\n view = Blueprint('info', __name__)\n\n @view.route('/tor2web-warning')\n def tor2web_warning():\n return render_template(\"tor2web-warning.html\")\n\n @view.route('/use-tor')\n def recommend_tor_browser():\n return render_template(\"use-tor-browser.html\")\n\n @view.route('/journalist-key')\n def download_journalist_pubkey():\n journalist_pubkey = current_app.crypto_util.gpg.export_keys(\n config.JOURNALIST_KEY)\n return send_file(StringIO(journalist_pubkey),\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n @view.route('/why-journalist-key')\n def why_download_journalist_pubkey():\n return render_template(\"why-journalist-key.html\")\n\n return view\n", "path": "securedrop/source_app/info.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport six\nfrom flask import Blueprint, render_template, send_file, current_app\n\nif six.PY2:\n from cStringIO import StringIO # noqa\nelse:\n from io import BytesIO # noqa\n\n\ndef make_blueprint(config):\n view = Blueprint('info', __name__)\n\n @view.route('/tor2web-warning')\n def tor2web_warning():\n return render_template(\"tor2web-warning.html\")\n\n @view.route('/use-tor')\n def recommend_tor_browser():\n return render_template(\"use-tor-browser.html\")\n\n @view.route('/journalist-key')\n def download_journalist_pubkey():\n journalist_pubkey = current_app.crypto_util.gpg.export_keys(\n config.JOURNALIST_KEY)\n if six.PY2:\n data = StringIO(journalist_pubkey)\n else:\n data = BytesIO(journalist_pubkey.encode('utf-8'))\n return send_file(data,\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n @view.route('/why-journalist-key')\n def why_download_journalist_pubkey():\n return render_template(\"why-journalist-key.html\")\n\n return view\n", "path": "securedrop/source_app/info.py"}]} | 1,891 | 260 |
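The record above hinges on the WSGI requirement that response bodies yield byte strings. The applied fix keeps a `cStringIO` branch for Python 2; what follows is only the Python 3 shape of the endpoint, with the key material assumed to be an ASCII-armoured `str`. Note that `attachment_filename` matches the Flask API used in the row; newer Flask versions call the parameter `download_name`.

```python
from io import BytesIO

from flask import send_file


def serve_journalist_pubkey(armored_key: str, filename: str):
    # Encode the exported key to bytes before handing it to send_file, so the
    # WSGI layer is given byte strings rather than text.
    data = BytesIO(armored_key.encode("utf-8"))
    return send_file(
        data,
        mimetype="application/pgp-keys",
        attachment_filename=filename,
        as_attachment=True,
    )
```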
gh_patches_debug_32547 | rasdani/github-patches | git_diff | buildbot__buildbot-6996 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
There's a small window where events will be missed by reporters during reconfig
The code in `ReporterBase.reconfigService()` stops consuming events for all event keys and then starts consuming events for all event keys again. This is not necessary for any wanted event keys which are wanted in both old and new configuration. The code should stop consuming events for no longer wanted event keys, start consuming events for newly wanted event keys and leave the rest untouched.
--- END ISSUE ---
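A minimal sketch of the consumer bookkeeping this asks for, keyed by event key and touching only the difference between the old and new configuration (the message-queue object is assumed to expose `startConsuming` and per-consumer `stopConsuming`, as in the file shown below):

```python
from twisted.internet import defer


@defer.inlineCallbacks
def sync_consumers(mq, consumers, wanted_event_keys, callback):
    """Reconcile a {key: consumer} dict against the newly wanted event keys."""
    # Stop consuming only for keys that are no longer wanted ...
    for key in list(consumers):
        if key not in wanted_event_keys:
            yield consumers.pop(key).stopConsuming()
    # ... and start consuming only for keys that are genuinely new.
    for key in sorted(wanted_event_keys):
        if key not in consumers:
            consumers[key] = yield mq.startConsuming(callback, key)
```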
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/reporters/base.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 import abc
17
18 from twisted.internet import defer
19 from twisted.python import log
20
21 from buildbot import config
22 from buildbot.reporters import utils
23 from buildbot.util import service
24 from buildbot.util import tuplematch
25
26 ENCODING = 'utf-8'
27
28
29 class ReporterBase(service.BuildbotService):
30 name = None
31 __meta__ = abc.ABCMeta
32
33 compare_attrs = ['generators']
34
35 def __init__(self, *args, **kwargs):
36 super().__init__(*args, **kwargs)
37 self.generators = None
38 self._event_consumers = []
39 self._pending_got_event_calls = {}
40
41 def checkConfig(self, generators):
42 if not isinstance(generators, list):
43 config.error('{}: generators argument must be a list')
44
45 for g in generators:
46 g.check()
47
48 if self.name is None:
49 self.name = self.__class__.__name__
50 for g in generators:
51 self.name += "_" + g.generate_name()
52
53 @defer.inlineCallbacks
54 def reconfigService(self, generators):
55
56 for consumer in self._event_consumers:
57 yield consumer.stopConsuming()
58 self._event_consumers = []
59
60 self.generators = generators
61
62 wanted_event_keys = set()
63 for g in self.generators:
64 wanted_event_keys.update(g.wanted_event_keys)
65
66 for key in sorted(list(wanted_event_keys)):
67 consumer = yield self.master.mq.startConsuming(self._got_event, key)
68 self._event_consumers.append(consumer)
69
70 @defer.inlineCallbacks
71 def stopService(self):
72 for consumer in self._event_consumers:
73 yield consumer.stopConsuming()
74 self._event_consumers = []
75
76 for pending_call in list(self._pending_got_event_calls.values()):
77 yield pending_call
78 self._pending_got_event_calls = {}
79
80 yield super().stopService()
81
82 def _does_generator_want_key(self, generator, key):
83 for filter in generator.wanted_event_keys:
84 if tuplematch.matchTuple(key, filter):
85 return True
86 return False
87
88 def _get_chain_key_for_event(self, key, msg):
89 if key[0] in ["builds", "buildrequests"]:
90 return ("buildrequestid", msg["buildrequestid"])
91 return None
92
93 @defer.inlineCallbacks
94 def _got_event(self, key, msg):
95 chain_key = self._get_chain_key_for_event(key, msg)
96 if chain_key is not None:
97 d = defer.Deferred()
98 pending_call = self._pending_got_event_calls.get(chain_key)
99 self._pending_got_event_calls[chain_key] = d
100 # Wait for previously pending call, if any, to ensure
101 # reports are sent out in the order events were queued.
102 if pending_call is not None:
103 yield pending_call
104
105 try:
106 reports = []
107 for g in self.generators:
108 if self._does_generator_want_key(g, key):
109 report = yield g.generate(self.master, self, key, msg)
110 if report is not None:
111 reports.append(report)
112
113 if reports:
114 yield self.sendMessage(reports)
115 except Exception as e:
116 log.err(e, 'Got exception when handling reporter events')
117
118 if chain_key is not None:
119 if self._pending_got_event_calls.get(chain_key) == d:
120 del self._pending_got_event_calls[chain_key]
121 d.callback(None) # This event is now fully handled
122
123 def getResponsibleUsersForBuild(self, master, buildid):
124 # Use library method but subclassers may want to override that
125 return utils.getResponsibleUsersForBuild(master, buildid)
126
127 @abc.abstractmethod
128 def sendMessage(self, reports):
129 pass
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/master/buildbot/reporters/base.py b/master/buildbot/reporters/base.py
--- a/master/buildbot/reporters/base.py
+++ b/master/buildbot/reporters/base.py
@@ -35,7 +35,7 @@
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.generators = None
- self._event_consumers = []
+ self._event_consumers = {}
self._pending_got_event_calls = {}
def checkConfig(self, generators):
@@ -52,26 +52,29 @@
@defer.inlineCallbacks
def reconfigService(self, generators):
-
- for consumer in self._event_consumers:
- yield consumer.stopConsuming()
- self._event_consumers = []
-
self.generators = generators
wanted_event_keys = set()
for g in self.generators:
wanted_event_keys.update(g.wanted_event_keys)
+ # Remove consumers for keys that are no longer wanted
+ for key in list(self._event_consumers.keys()):
+ if key not in wanted_event_keys:
+ yield self._event_consumers[key].stopConsuming()
+ del self._event_consumers[key]
+
+ # Add consumers for new keys
for key in sorted(list(wanted_event_keys)):
- consumer = yield self.master.mq.startConsuming(self._got_event, key)
- self._event_consumers.append(consumer)
+ if key not in self._event_consumers:
+ self._event_consumers[key] = \
+ yield self.master.mq.startConsuming(self._got_event, key)
@defer.inlineCallbacks
def stopService(self):
- for consumer in self._event_consumers:
+ for consumer in self._event_consumers.values():
yield consumer.stopConsuming()
- self._event_consumers = []
+ self._event_consumers = {}
for pending_call in list(self._pending_got_event_calls.values()):
yield pending_call
| {"golden_diff": "diff --git a/master/buildbot/reporters/base.py b/master/buildbot/reporters/base.py\n--- a/master/buildbot/reporters/base.py\n+++ b/master/buildbot/reporters/base.py\n@@ -35,7 +35,7 @@\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.generators = None\n- self._event_consumers = []\n+ self._event_consumers = {}\n self._pending_got_event_calls = {}\n \n def checkConfig(self, generators):\n@@ -52,26 +52,29 @@\n \n @defer.inlineCallbacks\n def reconfigService(self, generators):\n-\n- for consumer in self._event_consumers:\n- yield consumer.stopConsuming()\n- self._event_consumers = []\n-\n self.generators = generators\n \n wanted_event_keys = set()\n for g in self.generators:\n wanted_event_keys.update(g.wanted_event_keys)\n \n+ # Remove consumers for keys that are no longer wanted\n+ for key in list(self._event_consumers.keys()):\n+ if key not in wanted_event_keys:\n+ yield self._event_consumers[key].stopConsuming()\n+ del self._event_consumers[key]\n+\n+ # Add consumers for new keys\n for key in sorted(list(wanted_event_keys)):\n- consumer = yield self.master.mq.startConsuming(self._got_event, key)\n- self._event_consumers.append(consumer)\n+ if key not in self._event_consumers:\n+ self._event_consumers[key] = \\\n+ yield self.master.mq.startConsuming(self._got_event, key)\n \n @defer.inlineCallbacks\n def stopService(self):\n- for consumer in self._event_consumers:\n+ for consumer in self._event_consumers.values():\n yield consumer.stopConsuming()\n- self._event_consumers = []\n+ self._event_consumers = {}\n \n for pending_call in list(self._pending_got_event_calls.values()):\n yield pending_call\n", "issue": "There's a small window where events will be missed by reporters during reconfig\nThe code in `ReporterBase.reconfigService()` stops consuming events for all event keys and then starts consuming events for all event keys again. This is not necessary for any wanted event keys which are wanted in both old and new configuration. The code should stop consuming events for no longer wanted event keys, start consuming events for newly wanted event keys and leave the rest untouched.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport abc\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nfrom buildbot import config\nfrom buildbot.reporters import utils\nfrom buildbot.util import service\nfrom buildbot.util import tuplematch\n\nENCODING = 'utf-8'\n\n\nclass ReporterBase(service.BuildbotService):\n name = None\n __meta__ = abc.ABCMeta\n\n compare_attrs = ['generators']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.generators = None\n self._event_consumers = []\n self._pending_got_event_calls = {}\n\n def checkConfig(self, generators):\n if not isinstance(generators, list):\n config.error('{}: generators argument must be a list')\n\n for g in generators:\n g.check()\n\n if self.name is None:\n self.name = self.__class__.__name__\n for g in generators:\n self.name += \"_\" + g.generate_name()\n\n @defer.inlineCallbacks\n def reconfigService(self, generators):\n\n for consumer in self._event_consumers:\n yield consumer.stopConsuming()\n self._event_consumers = []\n\n self.generators = generators\n\n wanted_event_keys = set()\n for g in self.generators:\n wanted_event_keys.update(g.wanted_event_keys)\n\n for key in sorted(list(wanted_event_keys)):\n consumer = yield self.master.mq.startConsuming(self._got_event, key)\n self._event_consumers.append(consumer)\n\n @defer.inlineCallbacks\n def stopService(self):\n for consumer in self._event_consumers:\n yield consumer.stopConsuming()\n self._event_consumers = []\n\n for pending_call in list(self._pending_got_event_calls.values()):\n yield pending_call\n self._pending_got_event_calls = {}\n\n yield super().stopService()\n\n def _does_generator_want_key(self, generator, key):\n for filter in generator.wanted_event_keys:\n if tuplematch.matchTuple(key, filter):\n return True\n return False\n\n def _get_chain_key_for_event(self, key, msg):\n if key[0] in [\"builds\", \"buildrequests\"]:\n return (\"buildrequestid\", msg[\"buildrequestid\"])\n return None\n\n @defer.inlineCallbacks\n def _got_event(self, key, msg):\n chain_key = self._get_chain_key_for_event(key, msg)\n if chain_key is not None:\n d = defer.Deferred()\n pending_call = self._pending_got_event_calls.get(chain_key)\n self._pending_got_event_calls[chain_key] = d\n # Wait for previously pending call, if any, to ensure\n # reports are sent out in the order events were queued.\n if pending_call is not None:\n yield pending_call\n\n try:\n reports = []\n for g in self.generators:\n if self._does_generator_want_key(g, key):\n report = yield g.generate(self.master, self, key, msg)\n if report is not None:\n reports.append(report)\n\n if reports:\n yield self.sendMessage(reports)\n except Exception as e:\n log.err(e, 'Got exception when handling reporter events')\n\n if chain_key is not None:\n if self._pending_got_event_calls.get(chain_key) == d:\n del self._pending_got_event_calls[chain_key]\n d.callback(None) # This event is now fully handled\n\n def getResponsibleUsersForBuild(self, master, buildid):\n # Use library method but subclassers may want to override that\n return utils.getResponsibleUsersForBuild(master, buildid)\n\n @abc.abstractmethod\n def sendMessage(self, reports):\n pass\n", "path": "master/buildbot/reporters/base.py"}], 
"after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport abc\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nfrom buildbot import config\nfrom buildbot.reporters import utils\nfrom buildbot.util import service\nfrom buildbot.util import tuplematch\n\nENCODING = 'utf-8'\n\n\nclass ReporterBase(service.BuildbotService):\n name = None\n __meta__ = abc.ABCMeta\n\n compare_attrs = ['generators']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.generators = None\n self._event_consumers = {}\n self._pending_got_event_calls = {}\n\n def checkConfig(self, generators):\n if not isinstance(generators, list):\n config.error('{}: generators argument must be a list')\n\n for g in generators:\n g.check()\n\n if self.name is None:\n self.name = self.__class__.__name__\n for g in generators:\n self.name += \"_\" + g.generate_name()\n\n @defer.inlineCallbacks\n def reconfigService(self, generators):\n self.generators = generators\n\n wanted_event_keys = set()\n for g in self.generators:\n wanted_event_keys.update(g.wanted_event_keys)\n\n # Remove consumers for keys that are no longer wanted\n for key in list(self._event_consumers.keys()):\n if key not in wanted_event_keys:\n yield self._event_consumers[key].stopConsuming()\n del self._event_consumers[key]\n\n # Add consumers for new keys\n for key in sorted(list(wanted_event_keys)):\n if key not in self._event_consumers:\n self._event_consumers[key] = \\\n yield self.master.mq.startConsuming(self._got_event, key)\n\n @defer.inlineCallbacks\n def stopService(self):\n for consumer in self._event_consumers.values():\n yield consumer.stopConsuming()\n self._event_consumers = {}\n\n for pending_call in list(self._pending_got_event_calls.values()):\n yield pending_call\n self._pending_got_event_calls = {}\n\n yield super().stopService()\n\n def _does_generator_want_key(self, generator, key):\n for filter in generator.wanted_event_keys:\n if tuplematch.matchTuple(key, filter):\n return True\n return False\n\n def _get_chain_key_for_event(self, key, msg):\n if key[0] in [\"builds\", \"buildrequests\"]:\n return (\"buildrequestid\", msg[\"buildrequestid\"])\n return None\n\n @defer.inlineCallbacks\n def _got_event(self, key, msg):\n chain_key = self._get_chain_key_for_event(key, msg)\n if chain_key is not None:\n d = defer.Deferred()\n pending_call = self._pending_got_event_calls.get(chain_key)\n self._pending_got_event_calls[chain_key] = d\n # Wait for previously pending call, if any, to ensure\n # reports are sent out in the order events were queued.\n if pending_call is not None:\n yield pending_call\n\n try:\n reports = []\n for g in self.generators:\n if self._does_generator_want_key(g, key):\n report = yield g.generate(self.master, self, key, msg)\n if report is not None:\n reports.append(report)\n\n if reports:\n 
yield self.sendMessage(reports)\n except Exception as e:\n log.err(e, 'Got exception when handling reporter events')\n\n if chain_key is not None:\n if self._pending_got_event_calls.get(chain_key) == d:\n del self._pending_got_event_calls[chain_key]\n d.callback(None) # This event is now fully handled\n\n def getResponsibleUsersForBuild(self, master, buildid):\n # Use library method but subclassers may want to override that\n return utils.getResponsibleUsersForBuild(master, buildid)\n\n @abc.abstractmethod\n def sendMessage(self, reports):\n pass\n", "path": "master/buildbot/reporters/base.py"}]} | 1,621 | 446 |
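
The fix above amounts to reconciling the set of wanted event keys against the consumers that already exist, instead of stopping everything and resubscribing from scratch. The sketch below illustrates that reconciliation pattern on its own; `start_consumer` and `stop_consumer` are placeholder callables standing in for the message-queue subscribe/unsubscribe calls, not real Buildbot APIs.

```python
# Key-diffing pattern: only touch consumers whose wanted status actually changed.
def start_consumer(key):
    print(f"start consuming {key}")
    return object()  # opaque handle standing in for a real consumer object

def stop_consumer(consumer):
    print("stop consuming")

def reconcile(consumers, wanted_keys):
    """Mutate `consumers` (key -> handle) so that exactly `wanted_keys` are active."""
    for key in list(consumers):
        if key not in wanted_keys:      # no longer wanted -> stop it
            stop_consumer(consumers.pop(key))
    for key in sorted(wanted_keys):
        if key not in consumers:        # newly wanted -> start it
            consumers[key] = start_consumer(key)

consumers = {}
reconcile(consumers, {("builds", None, "new")})
reconcile(consumers, {("builds", None, "new"), ("buildrequests", None, "new")})
# The ("builds", None, "new") consumer survives the second call untouched, so no
# events on that key can slip through during the reconfiguration window.
```
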
gh_patches_debug_17794 | rasdani/github-patches | git_diff | wagtail__wagtail-7701 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stray checkbox on locked pages report
### Issue Summary
The Locked Pages report (and possibly others inheriting from PageReportView) has a stray additional column with a checkbox, presumably from the bulk actions feature, which causes the column headings to get out of step:

Thanks to @Tijani-Dia for reporting.
### Steps to Reproduce
1. Start a new project with `wagtail start myproject` / `./manage.py migrate` / `./manage.py createsuperuser`
2. Log in, edit the homepage, select 'Lock' from the action menu
3. Go to Reports -> Locked pages
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
* Python version: 3.8.0
* Django version: 3.2.8
* Wagtail version: 2.15
* Browser version: Chrome 95.0.4638.54
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/views/pages/listing.py`
Content:
```
1 from django.conf import settings
2 from django.core.paginator import Paginator
3 from django.db.models import Count
4 from django.shortcuts import get_object_or_404, redirect
5 from django.template.response import TemplateResponse
6 from django.urls import reverse
7
8 from wagtail.admin.auth import user_has_any_page_permission, user_passes_test
9 from wagtail.admin.navigation import get_explorable_root_page
10 from wagtail.core import hooks
11 from wagtail.core.models import Page, UserPagePermissionsProxy
12
13
14 @user_passes_test(user_has_any_page_permission)
15 def index(request, parent_page_id=None):
16 if parent_page_id:
17 parent_page = get_object_or_404(Page, id=parent_page_id)
18 else:
19 parent_page = Page.get_first_root_node()
20
21 # This will always succeed because of the @user_passes_test above.
22 root_page = get_explorable_root_page(request.user)
23
24 # If this page isn't a descendant of the user's explorable root page,
25 # then redirect to that explorable root page instead.
26 if not (
27 parent_page.pk == root_page.pk
28 or parent_page.is_descendant_of(root_page)
29 ):
30 return redirect('wagtailadmin_explore', root_page.pk)
31
32 parent_page = parent_page.specific
33
34 user_perms = UserPagePermissionsProxy(request.user)
35 pages = (
36 parent_page.get_children().prefetch_related(
37 "content_type", "sites_rooted_here"
38 )
39 & user_perms.explorable_pages()
40 )
41
42 # Get page ordering
43 ordering = request.GET.get('ordering', '-latest_revision_created_at')
44 if ordering not in [
45 'title',
46 '-title',
47 'content_type',
48 '-content_type',
49 'live', '-live',
50 'latest_revision_created_at',
51 '-latest_revision_created_at',
52 'ord'
53 ]:
54 ordering = '-latest_revision_created_at'
55
56 if ordering == 'ord':
57 # preserve the native ordering from get_children()
58 pass
59 elif ordering == 'latest_revision_created_at':
60 # order by oldest revision first.
61 # Special case NULL entries - these should go at the top of the list.
62 # Do this by annotating with Count('latest_revision_created_at'),
63 # which returns 0 for these
64 pages = pages.annotate(
65 null_position=Count('latest_revision_created_at')
66 ).order_by('null_position', 'latest_revision_created_at')
67 elif ordering == '-latest_revision_created_at':
68 # order by oldest revision first.
69 # Special case NULL entries - these should go at the end of the list.
70 pages = pages.annotate(
71 null_position=Count('latest_revision_created_at')
72 ).order_by('-null_position', '-latest_revision_created_at')
73 else:
74 pages = pages.order_by(ordering)
75
76 # Don't paginate if sorting by page order - all pages must be shown to
77 # allow drag-and-drop reordering
78 do_paginate = ordering != 'ord'
79
80 # We want specific page instances, but do not need streamfield values here
81 pages = pages.defer_streamfields().specific()
82
83 # allow hooks defer_streamfieldsyset
84 for hook in hooks.get_hooks('construct_explorer_page_queryset'):
85 pages = hook(parent_page, pages, request)
86
87 # Annotate queryset with various states to be used later for performance optimisations
88 if getattr(settings, 'WAGTAIL_WORKFLOW_ENABLED', True):
89 pages = pages.prefetch_workflow_states()
90
91 pages = pages.annotate_site_root_state().annotate_approved_schedule()
92
93 # Pagination
94 if do_paginate:
95 paginator = Paginator(pages, per_page=50)
96 pages = paginator.get_page(request.GET.get('p'))
97
98 context = {
99 'parent_page': parent_page.specific,
100 'ordering': ordering,
101 'pagination_query_params': "ordering=%s" % ordering,
102 'pages': pages,
103 'do_paginate': do_paginate,
104 'locale': None,
105 'translations': [],
106 'show_ordering_column': request.GET.get('ordering') == 'ord'
107 }
108
109 if getattr(settings, 'WAGTAIL_I18N_ENABLED', False) and not parent_page.is_root():
110 context.update({
111 'locale': parent_page.locale,
112 'translations': [
113 {
114 'locale': translation.locale,
115 'url': reverse('wagtailadmin_explore', args=[translation.id]),
116 }
117 for translation in parent_page.get_translations().only('id', 'locale').select_related('locale')
118 ],
119 })
120
121 return TemplateResponse(request, 'wagtailadmin/pages/index.html', context)
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/admin/views/pages/listing.py b/wagtail/admin/views/pages/listing.py
--- a/wagtail/admin/views/pages/listing.py
+++ b/wagtail/admin/views/pages/listing.py
@@ -95,6 +95,8 @@
paginator = Paginator(pages, per_page=50)
pages = paginator.get_page(request.GET.get('p'))
+ show_ordering_column = request.GET.get('ordering') == 'ord'
+
context = {
'parent_page': parent_page.specific,
'ordering': ordering,
@@ -103,7 +105,8 @@
'do_paginate': do_paginate,
'locale': None,
'translations': [],
- 'show_ordering_column': request.GET.get('ordering') == 'ord'
+ 'show_ordering_column': show_ordering_column,
+ 'show_bulk_actions': not show_ordering_column,
}
if getattr(settings, 'WAGTAIL_I18N_ENABLED', False) and not parent_page.is_root():
| {"golden_diff": "diff --git a/wagtail/admin/views/pages/listing.py b/wagtail/admin/views/pages/listing.py\n--- a/wagtail/admin/views/pages/listing.py\n+++ b/wagtail/admin/views/pages/listing.py\n@@ -95,6 +95,8 @@\n paginator = Paginator(pages, per_page=50)\n pages = paginator.get_page(request.GET.get('p'))\n \n+ show_ordering_column = request.GET.get('ordering') == 'ord'\n+\n context = {\n 'parent_page': parent_page.specific,\n 'ordering': ordering,\n@@ -103,7 +105,8 @@\n 'do_paginate': do_paginate,\n 'locale': None,\n 'translations': [],\n- 'show_ordering_column': request.GET.get('ordering') == 'ord'\n+ 'show_ordering_column': show_ordering_column,\n+ 'show_bulk_actions': not show_ordering_column,\n }\n \n if getattr(settings, 'WAGTAIL_I18N_ENABLED', False) and not parent_page.is_root():\n", "issue": "Stray checkbox on locked pages report\n### Issue Summary\r\n\r\nThe Locked Pages report (and possibly others inheriting from PageReportView) has a stray additional column with a checkbox, presumably from the bulk actions feature, which causes the column headings to get out of step:\r\n\r\n\r\n\r\nThanks to @Tijani-Dia for reporting.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Start a new project with `wagtail start myproject` / `./manage.py migrate` / `./manage.py createsuperuser`\r\n2. Log in, edit the homepage, select 'Lock' from the action menu\r\n3. Go to Reports -> Locked pages\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\r\n\r\n\r\n### Technical details\r\n\r\n* Python version: 3.8.0\r\n* Django version: 3.2.8\r\n* Wagtail version: 2.15\r\n* Browser version: Chrome 95.0.4638.54\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.core.paginator import Paginator\nfrom django.db.models import Count\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\n\nfrom wagtail.admin.auth import user_has_any_page_permission, user_passes_test\nfrom wagtail.admin.navigation import get_explorable_root_page\nfrom wagtail.core import hooks\nfrom wagtail.core.models import Page, UserPagePermissionsProxy\n\n\n@user_passes_test(user_has_any_page_permission)\ndef index(request, parent_page_id=None):\n if parent_page_id:\n parent_page = get_object_or_404(Page, id=parent_page_id)\n else:\n parent_page = Page.get_first_root_node()\n\n # This will always succeed because of the @user_passes_test above.\n root_page = get_explorable_root_page(request.user)\n\n # If this page isn't a descendant of the user's explorable root page,\n # then redirect to that explorable root page instead.\n if not (\n parent_page.pk == root_page.pk\n or parent_page.is_descendant_of(root_page)\n ):\n return redirect('wagtailadmin_explore', root_page.pk)\n\n parent_page = parent_page.specific\n\n user_perms = UserPagePermissionsProxy(request.user)\n pages = (\n parent_page.get_children().prefetch_related(\n \"content_type\", \"sites_rooted_here\"\n )\n & user_perms.explorable_pages()\n )\n\n # Get page ordering\n ordering = request.GET.get('ordering', '-latest_revision_created_at')\n if ordering not in [\n 'title',\n '-title',\n 'content_type',\n '-content_type',\n 'live', '-live',\n 'latest_revision_created_at',\n '-latest_revision_created_at',\n 'ord'\n ]:\n ordering = '-latest_revision_created_at'\n\n if ordering == 'ord':\n # preserve the native ordering from get_children()\n pass\n elif ordering == 'latest_revision_created_at':\n # order by oldest 
revision first.\n # Special case NULL entries - these should go at the top of the list.\n # Do this by annotating with Count('latest_revision_created_at'),\n # which returns 0 for these\n pages = pages.annotate(\n null_position=Count('latest_revision_created_at')\n ).order_by('null_position', 'latest_revision_created_at')\n elif ordering == '-latest_revision_created_at':\n # order by oldest revision first.\n # Special case NULL entries - these should go at the end of the list.\n pages = pages.annotate(\n null_position=Count('latest_revision_created_at')\n ).order_by('-null_position', '-latest_revision_created_at')\n else:\n pages = pages.order_by(ordering)\n\n # Don't paginate if sorting by page order - all pages must be shown to\n # allow drag-and-drop reordering\n do_paginate = ordering != 'ord'\n\n # We want specific page instances, but do not need streamfield values here\n pages = pages.defer_streamfields().specific()\n\n # allow hooks defer_streamfieldsyset\n for hook in hooks.get_hooks('construct_explorer_page_queryset'):\n pages = hook(parent_page, pages, request)\n\n # Annotate queryset with various states to be used later for performance optimisations\n if getattr(settings, 'WAGTAIL_WORKFLOW_ENABLED', True):\n pages = pages.prefetch_workflow_states()\n\n pages = pages.annotate_site_root_state().annotate_approved_schedule()\n\n # Pagination\n if do_paginate:\n paginator = Paginator(pages, per_page=50)\n pages = paginator.get_page(request.GET.get('p'))\n\n context = {\n 'parent_page': parent_page.specific,\n 'ordering': ordering,\n 'pagination_query_params': \"ordering=%s\" % ordering,\n 'pages': pages,\n 'do_paginate': do_paginate,\n 'locale': None,\n 'translations': [],\n 'show_ordering_column': request.GET.get('ordering') == 'ord'\n }\n\n if getattr(settings, 'WAGTAIL_I18N_ENABLED', False) and not parent_page.is_root():\n context.update({\n 'locale': parent_page.locale,\n 'translations': [\n {\n 'locale': translation.locale,\n 'url': reverse('wagtailadmin_explore', args=[translation.id]),\n }\n for translation in parent_page.get_translations().only('id', 'locale').select_related('locale')\n ],\n })\n\n return TemplateResponse(request, 'wagtailadmin/pages/index.html', context)\n", "path": "wagtail/admin/views/pages/listing.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.core.paginator import Paginator\nfrom django.db.models import Count\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\n\nfrom wagtail.admin.auth import user_has_any_page_permission, user_passes_test\nfrom wagtail.admin.navigation import get_explorable_root_page\nfrom wagtail.core import hooks\nfrom wagtail.core.models import Page, UserPagePermissionsProxy\n\n\n@user_passes_test(user_has_any_page_permission)\ndef index(request, parent_page_id=None):\n if parent_page_id:\n parent_page = get_object_or_404(Page, id=parent_page_id)\n else:\n parent_page = Page.get_first_root_node()\n\n # This will always succeed because of the @user_passes_test above.\n root_page = get_explorable_root_page(request.user)\n\n # If this page isn't a descendant of the user's explorable root page,\n # then redirect to that explorable root page instead.\n if not (\n parent_page.pk == root_page.pk\n or parent_page.is_descendant_of(root_page)\n ):\n return redirect('wagtailadmin_explore', root_page.pk)\n\n parent_page = parent_page.specific\n\n user_perms = UserPagePermissionsProxy(request.user)\n pages = (\n 
parent_page.get_children().prefetch_related(\n \"content_type\", \"sites_rooted_here\"\n )\n & user_perms.explorable_pages()\n )\n\n # Get page ordering\n ordering = request.GET.get('ordering', '-latest_revision_created_at')\n if ordering not in [\n 'title',\n '-title',\n 'content_type',\n '-content_type',\n 'live', '-live',\n 'latest_revision_created_at',\n '-latest_revision_created_at',\n 'ord'\n ]:\n ordering = '-latest_revision_created_at'\n\n if ordering == 'ord':\n # preserve the native ordering from get_children()\n pass\n elif ordering == 'latest_revision_created_at':\n # order by oldest revision first.\n # Special case NULL entries - these should go at the top of the list.\n # Do this by annotating with Count('latest_revision_created_at'),\n # which returns 0 for these\n pages = pages.annotate(\n null_position=Count('latest_revision_created_at')\n ).order_by('null_position', 'latest_revision_created_at')\n elif ordering == '-latest_revision_created_at':\n # order by oldest revision first.\n # Special case NULL entries - these should go at the end of the list.\n pages = pages.annotate(\n null_position=Count('latest_revision_created_at')\n ).order_by('-null_position', '-latest_revision_created_at')\n else:\n pages = pages.order_by(ordering)\n\n # Don't paginate if sorting by page order - all pages must be shown to\n # allow drag-and-drop reordering\n do_paginate = ordering != 'ord'\n\n # We want specific page instances, but do not need streamfield values here\n pages = pages.defer_streamfields().specific()\n\n # allow hooks defer_streamfieldsyset\n for hook in hooks.get_hooks('construct_explorer_page_queryset'):\n pages = hook(parent_page, pages, request)\n\n # Annotate queryset with various states to be used later for performance optimisations\n if getattr(settings, 'WAGTAIL_WORKFLOW_ENABLED', True):\n pages = pages.prefetch_workflow_states()\n\n pages = pages.annotate_site_root_state().annotate_approved_schedule()\n\n # Pagination\n if do_paginate:\n paginator = Paginator(pages, per_page=50)\n pages = paginator.get_page(request.GET.get('p'))\n\n show_ordering_column = request.GET.get('ordering') == 'ord'\n\n context = {\n 'parent_page': parent_page.specific,\n 'ordering': ordering,\n 'pagination_query_params': \"ordering=%s\" % ordering,\n 'pages': pages,\n 'do_paginate': do_paginate,\n 'locale': None,\n 'translations': [],\n 'show_ordering_column': show_ordering_column,\n 'show_bulk_actions': not show_ordering_column,\n }\n\n if getattr(settings, 'WAGTAIL_I18N_ENABLED', False) and not parent_page.is_root():\n context.update({\n 'locale': parent_page.locale,\n 'translations': [\n {\n 'locale': translation.locale,\n 'url': reverse('wagtailadmin_explore', args=[translation.id]),\n }\n for translation in parent_page.get_translations().only('id', 'locale').select_related('locale')\n ],\n })\n\n return TemplateResponse(request, 'wagtailadmin/pages/index.html', context)\n", "path": "wagtail/admin/views/pages/listing.py"}]} | 1,793 | 229 |
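
After the patch, the explorer listing context carries two mutually exclusive flags: manual-ordering mode (`ordering == 'ord'`) shows the ordering column and turns bulk actions off, and every other ordering does the opposite, so templates can render the checkbox column only when it is actually wanted. A reduced sketch of just that flag relationship (the real context also carries pagination, locale, and translation data):

```python
# Reduced sketch of the context flags introduced by the patch above.
def listing_flags(ordering: str) -> dict:
    show_ordering_column = ordering == "ord"
    return {
        "show_ordering_column": show_ordering_column,
        # Checkboxes and drag-to-reorder handles never appear together.
        "show_bulk_actions": not show_ordering_column,
    }

assert listing_flags("ord") == {"show_ordering_column": True, "show_bulk_actions": False}
assert listing_flags("-latest_revision_created_at") == {
    "show_ordering_column": False,
    "show_bulk_actions": True,
}
```
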
gh_patches_debug_13512 | rasdani/github-patches | git_diff | larq__larq-596 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
__version__
### Feature motivation
Is there a way to dynamically poll the version of larq (or lce or larq-zoo for that matter)?
If not, could it be done using `__version__` as usual for standard library modules?
### Feature description
```
import larq
print(larq.__version__)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import find_packages, setup
2
3
4 def readme():
5 with open("README.md", "r") as f:
6 return f.read()
7
8
9 setup(
10 name="larq",
11 version="0.10.1",
12 python_requires=">=3.6",
13 author="Plumerai",
14 author_email="[email protected]",
15 description="An Open Source Machine Learning Library for Training Binarized Neural Networks",
16 long_description=readme(),
17 long_description_content_type="text/markdown",
18 url="https://larq.dev/",
19 packages=find_packages(exclude=["larq.snapshots"]),
20 license="Apache 2.0",
21 install_requires=[
22 "numpy >= 1.15.4, < 2.0",
23 "terminaltables>=3.1.0",
24 "dataclasses ; python_version<'3.7'",
25 ],
26 extras_require={
27 "tensorflow": ["tensorflow>=1.14.0"],
28 "tensorflow_gpu": ["tensorflow-gpu>=1.14.0"],
29 "test": [
30 "black==20.8b1",
31 "flake8>=3.7.9,<3.9.0",
32 "isort==5.6.4",
33 "packaging>=19.2,<21.0",
34 "pytest>=5.2.4,<6.2.0",
35 "pytest-cov>=2.8.1,<2.11.0",
36 "pytest-xdist>=1.30,<2.2",
37 "pytest-mock>=2.0,<3.4",
38 "pytype==2020.10.8",
39 "snapshottest>=0.5.1,<0.7.0",
40 ],
41 },
42 classifiers=[
43 "Development Status :: 4 - Beta",
44 "Intended Audience :: Developers",
45 "Intended Audience :: Education",
46 "Intended Audience :: Science/Research",
47 "License :: OSI Approved :: Apache Software License",
48 "Programming Language :: Python :: 3",
49 "Programming Language :: Python :: 3 :: Only",
50 "Programming Language :: Python :: 3.6",
51 "Programming Language :: Python :: 3.7",
52 "Programming Language :: Python :: 3.8",
53 "Topic :: Scientific/Engineering",
54 "Topic :: Scientific/Engineering :: Mathematics",
55 "Topic :: Scientific/Engineering :: Artificial Intelligence",
56 "Topic :: Software Development",
57 "Topic :: Software Development :: Libraries",
58 "Topic :: Software Development :: Libraries :: Python Modules",
59 ],
60 )
61
```
Path: `larq/__init__.py`
Content:
```
1 from larq import (
2 activations,
3 callbacks,
4 constraints,
5 context,
6 layers,
7 math,
8 metrics,
9 models,
10 optimizers,
11 quantizers,
12 utils,
13 )
14
15 __all__ = [
16 "layers",
17 "activations",
18 "callbacks",
19 "constraints",
20 "context",
21 "math",
22 "metrics",
23 "models",
24 "quantizers",
25 "optimizers",
26 "utils",
27 ]
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/larq/__init__.py b/larq/__init__.py
--- a/larq/__init__.py
+++ b/larq/__init__.py
@@ -12,6 +12,14 @@
utils,
)
+try:
+ from importlib import metadata # type: ignore
+except ImportError:
+ # Running on pre-3.8 Python; use importlib-metadata package
+ import importlib_metadata as metadata # type: ignore
+
+__version__ = metadata.version("larq")
+
__all__ = [
"layers",
"activations",
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,6 +22,7 @@
"numpy >= 1.15.4, < 2.0",
"terminaltables>=3.1.0",
"dataclasses ; python_version<'3.7'",
+ "importlib-metadata ~= 2.0 ; python_version<'3.8'",
],
extras_require={
"tensorflow": ["tensorflow>=1.14.0"],
| {"golden_diff": "diff --git a/larq/__init__.py b/larq/__init__.py\n--- a/larq/__init__.py\n+++ b/larq/__init__.py\n@@ -12,6 +12,14 @@\n utils,\n )\n \n+try:\n+ from importlib import metadata # type: ignore\n+except ImportError:\n+ # Running on pre-3.8 Python; use importlib-metadata package\n+ import importlib_metadata as metadata # type: ignore\n+\n+__version__ = metadata.version(\"larq\")\n+\n __all__ = [\n \"layers\",\n \"activations\",\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,6 +22,7 @@\n \"numpy >= 1.15.4, < 2.0\",\n \"terminaltables>=3.1.0\",\n \"dataclasses ; python_version<'3.7'\",\n+ \"importlib-metadata ~= 2.0 ; python_version<'3.8'\",\n ],\n extras_require={\n \"tensorflow\": [\"tensorflow>=1.14.0\"],\n", "issue": "__version__\n### Feature motivation\r\nIs there a way to dynamically poll the version of larq (or lce or larq-zoo for that matter)?\r\nIf not, could it be done using `__version__` as usual for standard library modules?\r\n\r\n### Feature description\r\n```\r\nimport larq\r\nprint(larq.__version__)\r\n```\n", "before_files": [{"content": "from setuptools import find_packages, setup\n\n\ndef readme():\n with open(\"README.md\", \"r\") as f:\n return f.read()\n\n\nsetup(\n name=\"larq\",\n version=\"0.10.1\",\n python_requires=\">=3.6\",\n author=\"Plumerai\",\n author_email=\"[email protected]\",\n description=\"An Open Source Machine Learning Library for Training Binarized Neural Networks\",\n long_description=readme(),\n long_description_content_type=\"text/markdown\",\n url=\"https://larq.dev/\",\n packages=find_packages(exclude=[\"larq.snapshots\"]),\n license=\"Apache 2.0\",\n install_requires=[\n \"numpy >= 1.15.4, < 2.0\",\n \"terminaltables>=3.1.0\",\n \"dataclasses ; python_version<'3.7'\",\n ],\n extras_require={\n \"tensorflow\": [\"tensorflow>=1.14.0\"],\n \"tensorflow_gpu\": [\"tensorflow-gpu>=1.14.0\"],\n \"test\": [\n \"black==20.8b1\",\n \"flake8>=3.7.9,<3.9.0\",\n \"isort==5.6.4\",\n \"packaging>=19.2,<21.0\",\n \"pytest>=5.2.4,<6.2.0\",\n \"pytest-cov>=2.8.1,<2.11.0\",\n \"pytest-xdist>=1.30,<2.2\",\n \"pytest-mock>=2.0,<3.4\",\n \"pytype==2020.10.8\",\n \"snapshottest>=0.5.1,<0.7.0\",\n ],\n },\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n)\n", "path": "setup.py"}, {"content": "from larq import (\n activations,\n callbacks,\n constraints,\n context,\n layers,\n math,\n metrics,\n models,\n optimizers,\n quantizers,\n utils,\n)\n\n__all__ = [\n \"layers\",\n \"activations\",\n \"callbacks\",\n \"constraints\",\n \"context\",\n \"math\",\n \"metrics\",\n \"models\",\n \"quantizers\",\n \"optimizers\",\n \"utils\",\n]\n", "path": "larq/__init__.py"}], "after_files": [{"content": "from setuptools import find_packages, setup\n\n\ndef readme():\n with open(\"README.md\", \"r\") as f:\n return 
f.read()\n\n\nsetup(\n name=\"larq\",\n version=\"0.10.1\",\n python_requires=\">=3.6\",\n author=\"Plumerai\",\n author_email=\"[email protected]\",\n description=\"An Open Source Machine Learning Library for Training Binarized Neural Networks\",\n long_description=readme(),\n long_description_content_type=\"text/markdown\",\n url=\"https://larq.dev/\",\n packages=find_packages(exclude=[\"larq.snapshots\"]),\n license=\"Apache 2.0\",\n install_requires=[\n \"numpy >= 1.15.4, < 2.0\",\n \"terminaltables>=3.1.0\",\n \"dataclasses ; python_version<'3.7'\",\n \"importlib-metadata ~= 2.0 ; python_version<'3.8'\",\n ],\n extras_require={\n \"tensorflow\": [\"tensorflow>=1.14.0\"],\n \"tensorflow_gpu\": [\"tensorflow-gpu>=1.14.0\"],\n \"test\": [\n \"black==20.8b1\",\n \"flake8>=3.7.9,<3.9.0\",\n \"isort==5.6.4\",\n \"packaging>=19.2,<21.0\",\n \"pytest>=5.2.4,<6.2.0\",\n \"pytest-cov>=2.8.1,<2.11.0\",\n \"pytest-xdist>=1.30,<2.2\",\n \"pytest-mock>=2.0,<3.4\",\n \"pytype==2020.10.8\",\n \"snapshottest>=0.5.1,<0.7.0\",\n ],\n },\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n)\n", "path": "setup.py"}, {"content": "from larq import (\n activations,\n callbacks,\n constraints,\n context,\n layers,\n math,\n metrics,\n models,\n optimizers,\n quantizers,\n utils,\n)\n\ntry:\n from importlib import metadata # type: ignore\nexcept ImportError:\n # Running on pre-3.8 Python; use importlib-metadata package\n import importlib_metadata as metadata # type: ignore\n\n__version__ = metadata.version(\"larq\")\n\n__all__ = [\n \"layers\",\n \"activations\",\n \"callbacks\",\n \"constraints\",\n \"context\",\n \"math\",\n \"metrics\",\n \"models\",\n \"quantizers\",\n \"optimizers\",\n \"utils\",\n]\n", "path": "larq/__init__.py"}]} | 1,154 | 249 |
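
The same version-exposure pattern works for any installed distribution: read the version out of the package metadata at import time and fall back to the `importlib_metadata` backport on Python older than 3.8 (which is why the setup.py change adds it as a conditional dependency). The sketch below uses a placeholder distribution name and adds a guard for running from an uninstalled source tree, which the patch itself does not need:

```python
# Generic __init__.py snippet for exposing __version__ from installed metadata.
# "example-package" is a placeholder distribution name, not a real dependency.
try:
    from importlib import metadata  # Python 3.8+
except ImportError:  # pragma: no cover - Python 3.6/3.7
    import importlib_metadata as metadata  # backport package

try:
    __version__ = metadata.version("example-package")
except metadata.PackageNotFoundError:
    # Source checkout that was never pip-installed; fall back to a sentinel.
    __version__ = "0.0.0.dev0"
```
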
gh_patches_debug_36493 | rasdani/github-patches | git_diff | pwndbg__pwndbg-2087 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Port checksec command to native command and add colors
This is annoying IMHO. I want colors and no pwntools update checks here!

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/wrappers/checksec.py`
Content:
```
1 from __future__ import annotations
2
3 from subprocess import CalledProcessError
4
5 import pwndbg.commands
6 import pwndbg.lib.cache
7 import pwndbg.wrappers
8
9 cmd_name = "checksec"
10 cmd_pwntools = ["pwn", "checksec"]
11
12
13 @pwndbg.wrappers.OnlyWithCommand(cmd_name, cmd_pwntools)
14 @pwndbg.lib.cache.cache_until("objfile")
15 def get_raw_out(local_path: str) -> str:
16 try:
17 return pwndbg.wrappers.call_cmd(get_raw_out.cmd + ["--file=" + local_path])
18 except CalledProcessError:
19 pass
20 try:
21 return pwndbg.wrappers.call_cmd(get_raw_out.cmd + ["--file", local_path])
22 except CalledProcessError:
23 pass
24 return pwndbg.wrappers.call_cmd(get_raw_out.cmd + [local_path])
25
26
27 @pwndbg.wrappers.OnlyWithCommand(cmd_name, cmd_pwntools)
28 def relro_status(local_path: str) -> str:
29 relro = "No RELRO"
30 out = get_raw_out(local_path)
31
32 if "Full RELRO" in out:
33 relro = "Full RELRO"
34 elif "Partial RELRO" in out:
35 relro = "Partial RELRO"
36
37 return relro
38
39
40 @pwndbg.wrappers.OnlyWithCommand(cmd_name, cmd_pwntools)
41 def pie_status(local_path: str) -> str:
42 pie = "No PIE"
43 out = get_raw_out(local_path)
44
45 if "PIE enabled" in out:
46 pie = "PIE enabled"
47
48 return pie
49
```
Path: `pwndbg/commands/checksec.py`
Content:
```
1 from __future__ import annotations
2
3 import pwndbg.color
4 import pwndbg.commands
5 import pwndbg.gdblib.file
6 import pwndbg.wrappers.checksec
7
8
9 def color_line(line: str) -> str:
10 return pwndbg.color.normal(
11 line.replace("*", pwndbg.color.green("*"))
12 .replace(":", f":{pwndbg.color.GREEN}")
13 .replace("No", f"{pwndbg.color.RED}No")
14 )
15
16
17 def color_lines(output: str) -> str:
18 return "\n".join(map(color_line, output.split("\n")))
19
20
21 @pwndbg.commands.ArgparsedCommand("Prints out the binary security settings using `checksec`.")
22 @pwndbg.commands.OnlyWithFile
23 def checksec() -> None:
24 output = pwndbg.wrappers.checksec.get_raw_out(pwndbg.gdblib.file.get_proc_exe_file())
25 print(color_lines(output))
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/commands/checksec.py b/pwndbg/commands/checksec.py
--- a/pwndbg/commands/checksec.py
+++ b/pwndbg/commands/checksec.py
@@ -1,10 +1,18 @@
from __future__ import annotations
+import argparse
+
import pwndbg.color
import pwndbg.commands
import pwndbg.gdblib.file
import pwndbg.wrappers.checksec
+parser = argparse.ArgumentParser(
+ description="Prints out the binary security settings using `checksec`."
+)
+
+parser.add_argument("-f", "--file", type=str, help="Specify the file to run `checksec` on.")
+
def color_line(line: str) -> str:
return pwndbg.color.normal(
@@ -18,8 +26,9 @@
return "\n".join(map(color_line, output.split("\n")))
[email protected]("Prints out the binary security settings using `checksec`.")
[email protected](parser, command_name="checksec")
@pwndbg.commands.OnlyWithFile
-def checksec() -> None:
- output = pwndbg.wrappers.checksec.get_raw_out(pwndbg.gdblib.file.get_proc_exe_file())
+def checksec(file: str) -> None:
+ local_path = file or pwndbg.gdblib.file.get_proc_exe_file()
+ output = pwndbg.wrappers.checksec.get_raw_out(local_path)
print(color_lines(output))
diff --git a/pwndbg/wrappers/checksec.py b/pwndbg/wrappers/checksec.py
--- a/pwndbg/wrappers/checksec.py
+++ b/pwndbg/wrappers/checksec.py
@@ -1,30 +1,16 @@
from __future__ import annotations
-from subprocess import CalledProcessError
+from pwnlib.elf import ELF
-import pwndbg.commands
-import pwndbg.lib.cache
-import pwndbg.wrappers
-cmd_name = "checksec"
-cmd_pwntools = ["pwn", "checksec"]
+def get_raw_out(local_path: str) -> str:
+ elf = ELF(local_path)
+ output = f"File: {elf.path}\n"
+ output += f"Arch: {elf.arch}\n"
+ output += elf.checksec()
+ return output
[email protected](cmd_name, cmd_pwntools)
[email protected]_until("objfile")
-def get_raw_out(local_path: str) -> str:
- try:
- return pwndbg.wrappers.call_cmd(get_raw_out.cmd + ["--file=" + local_path])
- except CalledProcessError:
- pass
- try:
- return pwndbg.wrappers.call_cmd(get_raw_out.cmd + ["--file", local_path])
- except CalledProcessError:
- pass
- return pwndbg.wrappers.call_cmd(get_raw_out.cmd + [local_path])
-
-
[email protected](cmd_name, cmd_pwntools)
def relro_status(local_path: str) -> str:
relro = "No RELRO"
out = get_raw_out(local_path)
@@ -37,7 +23,6 @@
return relro
[email protected](cmd_name, cmd_pwntools)
def pie_status(local_path: str) -> str:
pie = "No PIE"
out = get_raw_out(local_path)
| {"golden_diff": "diff --git a/pwndbg/commands/checksec.py b/pwndbg/commands/checksec.py\n--- a/pwndbg/commands/checksec.py\n+++ b/pwndbg/commands/checksec.py\n@@ -1,10 +1,18 @@\n from __future__ import annotations\n \n+import argparse\n+\n import pwndbg.color\n import pwndbg.commands\n import pwndbg.gdblib.file\n import pwndbg.wrappers.checksec\n \n+parser = argparse.ArgumentParser(\n+ description=\"Prints out the binary security settings using `checksec`.\"\n+)\n+\n+parser.add_argument(\"-f\", \"--file\", type=str, help=\"Specify the file to run `checksec` on.\")\n+\n \n def color_line(line: str) -> str:\n return pwndbg.color.normal(\n@@ -18,8 +26,9 @@\n return \"\\n\".join(map(color_line, output.split(\"\\n\")))\n \n \[email protected](\"Prints out the binary security settings using `checksec`.\")\[email protected](parser, command_name=\"checksec\")\n @pwndbg.commands.OnlyWithFile\n-def checksec() -> None:\n- output = pwndbg.wrappers.checksec.get_raw_out(pwndbg.gdblib.file.get_proc_exe_file())\n+def checksec(file: str) -> None:\n+ local_path = file or pwndbg.gdblib.file.get_proc_exe_file()\n+ output = pwndbg.wrappers.checksec.get_raw_out(local_path)\n print(color_lines(output))\ndiff --git a/pwndbg/wrappers/checksec.py b/pwndbg/wrappers/checksec.py\n--- a/pwndbg/wrappers/checksec.py\n+++ b/pwndbg/wrappers/checksec.py\n@@ -1,30 +1,16 @@\n from __future__ import annotations\n \n-from subprocess import CalledProcessError\n+from pwnlib.elf import ELF\n \n-import pwndbg.commands\n-import pwndbg.lib.cache\n-import pwndbg.wrappers\n \n-cmd_name = \"checksec\"\n-cmd_pwntools = [\"pwn\", \"checksec\"]\n+def get_raw_out(local_path: str) -> str:\n+ elf = ELF(local_path)\n+ output = f\"File: {elf.path}\\n\"\n+ output += f\"Arch: {elf.arch}\\n\"\n+ output += elf.checksec()\n+ return output\n \n \[email protected](cmd_name, cmd_pwntools)\[email protected]_until(\"objfile\")\n-def get_raw_out(local_path: str) -> str:\n- try:\n- return pwndbg.wrappers.call_cmd(get_raw_out.cmd + [\"--file=\" + local_path])\n- except CalledProcessError:\n- pass\n- try:\n- return pwndbg.wrappers.call_cmd(get_raw_out.cmd + [\"--file\", local_path])\n- except CalledProcessError:\n- pass\n- return pwndbg.wrappers.call_cmd(get_raw_out.cmd + [local_path])\n-\n-\[email protected](cmd_name, cmd_pwntools)\n def relro_status(local_path: str) -> str:\n relro = \"No RELRO\"\n out = get_raw_out(local_path)\n@@ -37,7 +23,6 @@\n return relro\n \n \[email protected](cmd_name, cmd_pwntools)\n def pie_status(local_path: str) -> str:\n pie = \"No PIE\"\n out = get_raw_out(local_path)\n", "issue": "Port checksec command to native command and add colors\nThis is annoying IMHO. 
I want colors and no pwntools update checks here!\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom subprocess import CalledProcessError\n\nimport pwndbg.commands\nimport pwndbg.lib.cache\nimport pwndbg.wrappers\n\ncmd_name = \"checksec\"\ncmd_pwntools = [\"pwn\", \"checksec\"]\n\n\[email protected](cmd_name, cmd_pwntools)\[email protected]_until(\"objfile\")\ndef get_raw_out(local_path: str) -> str:\n try:\n return pwndbg.wrappers.call_cmd(get_raw_out.cmd + [\"--file=\" + local_path])\n except CalledProcessError:\n pass\n try:\n return pwndbg.wrappers.call_cmd(get_raw_out.cmd + [\"--file\", local_path])\n except CalledProcessError:\n pass\n return pwndbg.wrappers.call_cmd(get_raw_out.cmd + [local_path])\n\n\[email protected](cmd_name, cmd_pwntools)\ndef relro_status(local_path: str) -> str:\n relro = \"No RELRO\"\n out = get_raw_out(local_path)\n\n if \"Full RELRO\" in out:\n relro = \"Full RELRO\"\n elif \"Partial RELRO\" in out:\n relro = \"Partial RELRO\"\n\n return relro\n\n\[email protected](cmd_name, cmd_pwntools)\ndef pie_status(local_path: str) -> str:\n pie = \"No PIE\"\n out = get_raw_out(local_path)\n\n if \"PIE enabled\" in out:\n pie = \"PIE enabled\"\n\n return pie\n", "path": "pwndbg/wrappers/checksec.py"}, {"content": "from __future__ import annotations\n\nimport pwndbg.color\nimport pwndbg.commands\nimport pwndbg.gdblib.file\nimport pwndbg.wrappers.checksec\n\n\ndef color_line(line: str) -> str:\n return pwndbg.color.normal(\n line.replace(\"*\", pwndbg.color.green(\"*\"))\n .replace(\":\", f\":{pwndbg.color.GREEN}\")\n .replace(\"No\", f\"{pwndbg.color.RED}No\")\n )\n\n\ndef color_lines(output: str) -> str:\n return \"\\n\".join(map(color_line, output.split(\"\\n\")))\n\n\[email protected](\"Prints out the binary security settings using `checksec`.\")\[email protected]\ndef checksec() -> None:\n output = pwndbg.wrappers.checksec.get_raw_out(pwndbg.gdblib.file.get_proc_exe_file())\n print(color_lines(output))\n", "path": "pwndbg/commands/checksec.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom pwnlib.elf import ELF\n\n\ndef get_raw_out(local_path: str) -> str:\n elf = ELF(local_path)\n output = f\"File: {elf.path}\\n\"\n output += f\"Arch: {elf.arch}\\n\"\n output += elf.checksec()\n return output\n\n\ndef relro_status(local_path: str) -> str:\n relro = \"No RELRO\"\n out = get_raw_out(local_path)\n\n if \"Full RELRO\" in out:\n relro = \"Full RELRO\"\n elif \"Partial RELRO\" in out:\n relro = \"Partial RELRO\"\n\n return relro\n\n\ndef pie_status(local_path: str) -> str:\n pie = \"No PIE\"\n out = get_raw_out(local_path)\n\n if \"PIE enabled\" in out:\n pie = \"PIE enabled\"\n\n return pie\n", "path": "pwndbg/wrappers/checksec.py"}, {"content": "from __future__ import annotations\n\nimport argparse\n\nimport pwndbg.color\nimport pwndbg.commands\nimport pwndbg.gdblib.file\nimport pwndbg.wrappers.checksec\n\nparser = argparse.ArgumentParser(\n description=\"Prints out the binary security settings using `checksec`.\"\n)\n\nparser.add_argument(\"-f\", \"--file\", type=str, help=\"Specify the file to run `checksec` on.\")\n\n\ndef color_line(line: str) -> str:\n return pwndbg.color.normal(\n line.replace(\"*\", pwndbg.color.green(\"*\"))\n .replace(\":\", f\":{pwndbg.color.GREEN}\")\n .replace(\"No\", f\"{pwndbg.color.RED}No\")\n )\n\n\ndef color_lines(output: str) -> str:\n return \"\\n\".join(map(color_line, output.split(\"\\n\")))\n\n\[email protected](parser, command_name=\"checksec\")\[email 
protected]\ndef checksec(file: str) -> None:\n local_path = file or pwndbg.gdblib.file.get_proc_exe_file()\n output = pwndbg.wrappers.checksec.get_raw_out(local_path)\n print(color_lines(output))\n", "path": "pwndbg/commands/checksec.py"}]} | 1,054 | 776 |
gh_patches_debug_38547 | rasdani/github-patches | git_diff | cornellius-gp__gpytorch-584 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Linear kernel's variance prior is broken
Repro:
[linear_kernel_prior_issue.ipynb.txt](https://github.com/cornellius-gp/gpytorch/files/2979717/linear_kernel_prior_issue.ipynb.txt)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/kernels/linear_kernel.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import torch
4 from .kernel import Kernel
5 from ..lazy import MatmulLazyTensor, RootLazyTensor
6
7
8 class LinearKernel(Kernel):
9 r"""
10 Computes a covariance matrix based on the Linear kernel
11 between inputs :math:`\mathbf{x_1}` and :math:`\mathbf{x_2}`:
12
13 .. math::
14 \begin{equation*}
15 k_\text{Linear}(\mathbf{x_1}, \mathbf{x_2}) = (\mathbf{x_1} - \mathbf{o})^\top
16 (\mathbf{x_2} - \mathbf{o}) + v.
17 \end{equation*}
18
19 where
20
21 * :math:`\mathbf o` is an :attr:`offset` parameter.
22 * :math:`v` is a :attr:`variance` parameter.
23
24
25 .. note::
26
27 To implement this efficiently, we use a :obj:`gpytorch.lazy.RootLazyTensor` during training and a
28 :class:`gpytorch.lazy.MatmulLazyTensor` during test. These lazy tensors represent matrices of the form
29 :math:`K = XX^{\top}` and :math:`K = XZ^{\top}`. This makes inference
30 efficient because a matrix-vector product :math:`Kv` can be computed as
31 :math:`Kv=X(X^{\top}v)`, where the base multiply :math:`Xv` takes only
32 :math:`O(nd)` time and space.
33
34 Args:
35 :attr:`num_dimensions` (int):
36 Number of data dimensions to expect. This
37 is necessary to create the offset parameter.
38 :attr:`variance_prior` (:class:`gpytorch.priors.Prior`):
39 Prior over the variance parameter (default `None`).
40 :attr:`offset_prior` (:class:`gpytorch.priors.Prior`):
41 Prior over the offset parameter (default `None`).
42 :attr:`active_dims` (list):
43 List of data dimensions to operate on.
44 `len(active_dims)` should equal `num_dimensions`.
45 """
46
47 def __init__(self, num_dimensions, variance_prior=None, active_dims=None):
48 super(LinearKernel, self).__init__(active_dims=active_dims)
49 self.register_parameter(name="raw_variance", parameter=torch.nn.Parameter(torch.zeros(1)))
50 self.register_parameter(name="offset", parameter=torch.nn.Parameter(torch.zeros(1, 1, num_dimensions)))
51 if variance_prior is not None:
52 self.register_prior("variance_prior", variance_prior, "variance")
53
54 @property
55 def variance(self):
56 return self._param_transform(self.raw_variance)
57
58 @variance.setter
59 def variance(self, value):
60 self._set_variance(value)
61
62 def _set_variance(self, value):
63 if not torch.is_tensor(value):
64 value = torch.tensor(value)
65 self.initialize(raw_variance=self._inv_param_transform(value))
66
67 def forward(self, x1, x2, diag=False, batch_dims=None, **params):
68 x1_ = x1 * self.variance.sqrt()
69 if batch_dims == (0, 2):
70 x1_ = x1_.view(x1_.size(0), x1_.size(1), -1, 1)
71 x1_ = x1_.permute(0, 2, 1, 3).contiguous()
72 x1_ = x1_.view(-1, x1_.size(-2), x1_.size(-1))
73
74 if x1.size() == x2.size() and torch.equal(x1, x2):
75 # Use RootLazyTensor when x1 == x2 for efficiency when composing
76 # with other kernels
77 prod = RootLazyTensor(x1_)
78
79 else:
80 x2_ = x2 * self.variance.sqrt()
81 if batch_dims == (0, 2):
82 x2_ = x2_.view(x2_.size(0), x2_.size(1), -1, 1)
83 x2_ = x2_.permute(0, 2, 1, 3).contiguous()
84 x2_ = x2_.view(-1, x2_.size(-2), x2_.size(-1))
85
86 prod = MatmulLazyTensor(x1_, x2_.transpose(2, 1))
87
88 return prod
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gpytorch/kernels/linear_kernel.py b/gpytorch/kernels/linear_kernel.py
--- a/gpytorch/kernels/linear_kernel.py
+++ b/gpytorch/kernels/linear_kernel.py
@@ -1,6 +1,7 @@
#!/usr/bin/env python3
import torch
+import warnings
from .kernel import Kernel
from ..lazy import MatmulLazyTensor, RootLazyTensor
@@ -12,13 +13,12 @@
.. math::
\begin{equation*}
- k_\text{Linear}(\mathbf{x_1}, \mathbf{x_2}) = (\mathbf{x_1} - \mathbf{o})^\top
- (\mathbf{x_2} - \mathbf{o}) + v.
+ k_\text{Linear}(\mathbf{x_1}, \mathbf{x_2}) = v\mathbf{x_1}^\top
+ \mathbf{x_2}.
\end{equation*}
where
- * :math:`\mathbf o` is an :attr:`offset` parameter.
* :math:`v` is a :attr:`variance` parameter.
@@ -32,24 +32,37 @@
:math:`O(nd)` time and space.
Args:
- :attr:`num_dimensions` (int):
- Number of data dimensions to expect. This
- is necessary to create the offset parameter.
:attr:`variance_prior` (:class:`gpytorch.priors.Prior`):
Prior over the variance parameter (default `None`).
- :attr:`offset_prior` (:class:`gpytorch.priors.Prior`):
- Prior over the offset parameter (default `None`).
:attr:`active_dims` (list):
List of data dimensions to operate on.
`len(active_dims)` should equal `num_dimensions`.
"""
- def __init__(self, num_dimensions, variance_prior=None, active_dims=None):
+ def __init__(self, num_dimensions=None, offset_prior=None, variance_prior=None, active_dims=None):
super(LinearKernel, self).__init__(active_dims=active_dims)
+ if num_dimensions is not None:
+ warnings.warn(
+ "The `num_dimensions` argument is deprecated and no longer used.",
+ DeprecationWarning
+ )
+ self.register_parameter(
+ name="offset",
+ parameter=torch.nn.Parameter(torch.zeros(1, 1, num_dimensions))
+ )
+ if offset_prior is not None:
+ warnings.warn(
+ "The `offset_prior` argument is deprecated and no longer used.",
+ DeprecationWarning
+ )
self.register_parameter(name="raw_variance", parameter=torch.nn.Parameter(torch.zeros(1)))
- self.register_parameter(name="offset", parameter=torch.nn.Parameter(torch.zeros(1, 1, num_dimensions)))
if variance_prior is not None:
- self.register_prior("variance_prior", variance_prior, "variance")
+ self.register_prior(
+ "variance_prior",
+ variance_prior,
+ lambda: self.variance,
+ lambda v: self._set_variance(v)
+ )
@property
def variance(self):
| {"golden_diff": "diff --git a/gpytorch/kernels/linear_kernel.py b/gpytorch/kernels/linear_kernel.py\n--- a/gpytorch/kernels/linear_kernel.py\n+++ b/gpytorch/kernels/linear_kernel.py\n@@ -1,6 +1,7 @@\n #!/usr/bin/env python3\n \n import torch\n+import warnings\n from .kernel import Kernel\n from ..lazy import MatmulLazyTensor, RootLazyTensor\n \n@@ -12,13 +13,12 @@\n \n .. math::\n \\begin{equation*}\n- k_\\text{Linear}(\\mathbf{x_1}, \\mathbf{x_2}) = (\\mathbf{x_1} - \\mathbf{o})^\\top\n- (\\mathbf{x_2} - \\mathbf{o}) + v.\n+ k_\\text{Linear}(\\mathbf{x_1}, \\mathbf{x_2}) = v\\mathbf{x_1}^\\top\n+ \\mathbf{x_2}.\n \\end{equation*}\n \n where\n \n- * :math:`\\mathbf o` is an :attr:`offset` parameter.\n * :math:`v` is a :attr:`variance` parameter.\n \n \n@@ -32,24 +32,37 @@\n :math:`O(nd)` time and space.\n \n Args:\n- :attr:`num_dimensions` (int):\n- Number of data dimensions to expect. This\n- is necessary to create the offset parameter.\n :attr:`variance_prior` (:class:`gpytorch.priors.Prior`):\n Prior over the variance parameter (default `None`).\n- :attr:`offset_prior` (:class:`gpytorch.priors.Prior`):\n- Prior over the offset parameter (default `None`).\n :attr:`active_dims` (list):\n List of data dimensions to operate on.\n `len(active_dims)` should equal `num_dimensions`.\n \"\"\"\n \n- def __init__(self, num_dimensions, variance_prior=None, active_dims=None):\n+ def __init__(self, num_dimensions=None, offset_prior=None, variance_prior=None, active_dims=None):\n super(LinearKernel, self).__init__(active_dims=active_dims)\n+ if num_dimensions is not None:\n+ warnings.warn(\n+ \"The `num_dimensions` argument is deprecated and no longer used.\",\n+ DeprecationWarning\n+ )\n+ self.register_parameter(\n+ name=\"offset\",\n+ parameter=torch.nn.Parameter(torch.zeros(1, 1, num_dimensions))\n+ )\n+ if offset_prior is not None:\n+ warnings.warn(\n+ \"The `offset_prior` argument is deprecated and no longer used.\",\n+ DeprecationWarning\n+ )\n self.register_parameter(name=\"raw_variance\", parameter=torch.nn.Parameter(torch.zeros(1)))\n- self.register_parameter(name=\"offset\", parameter=torch.nn.Parameter(torch.zeros(1, 1, num_dimensions)))\n if variance_prior is not None:\n- self.register_prior(\"variance_prior\", variance_prior, \"variance\")\n+ self.register_prior(\n+ \"variance_prior\",\n+ variance_prior,\n+ lambda: self.variance,\n+ lambda v: self._set_variance(v)\n+ )\n \n @property\n def variance(self):\n", "issue": "Linear kernel's variance prior is broken\nRepro: \r\n[linear_kernel_prior_issue.ipynb.txt](https://github.com/cornellius-gp/gpytorch/files/2979717/linear_kernel_prior_issue.ipynb.txt)\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport torch\nfrom .kernel import Kernel\nfrom ..lazy import MatmulLazyTensor, RootLazyTensor\n\n\nclass LinearKernel(Kernel):\n r\"\"\"\n Computes a covariance matrix based on the Linear kernel\n between inputs :math:`\\mathbf{x_1}` and :math:`\\mathbf{x_2}`:\n\n .. math::\n \\begin{equation*}\n k_\\text{Linear}(\\mathbf{x_1}, \\mathbf{x_2}) = (\\mathbf{x_1} - \\mathbf{o})^\\top\n (\\mathbf{x_2} - \\mathbf{o}) + v.\n \\end{equation*}\n\n where\n\n * :math:`\\mathbf o` is an :attr:`offset` parameter.\n * :math:`v` is a :attr:`variance` parameter.\n\n\n .. note::\n\n To implement this efficiently, we use a :obj:`gpytorch.lazy.RootLazyTensor` during training and a\n :class:`gpytorch.lazy.MatmulLazyTensor` during test. These lazy tensors represent matrices of the form\n :math:`K = XX^{\\top}` and :math:`K = XZ^{\\top}`. 
This makes inference\n efficient because a matrix-vector product :math:`Kv` can be computed as\n :math:`Kv=X(X^{\\top}v)`, where the base multiply :math:`Xv` takes only\n :math:`O(nd)` time and space.\n\n Args:\n :attr:`num_dimensions` (int):\n Number of data dimensions to expect. This\n is necessary to create the offset parameter.\n :attr:`variance_prior` (:class:`gpytorch.priors.Prior`):\n Prior over the variance parameter (default `None`).\n :attr:`offset_prior` (:class:`gpytorch.priors.Prior`):\n Prior over the offset parameter (default `None`).\n :attr:`active_dims` (list):\n List of data dimensions to operate on.\n `len(active_dims)` should equal `num_dimensions`.\n \"\"\"\n\n def __init__(self, num_dimensions, variance_prior=None, active_dims=None):\n super(LinearKernel, self).__init__(active_dims=active_dims)\n self.register_parameter(name=\"raw_variance\", parameter=torch.nn.Parameter(torch.zeros(1)))\n self.register_parameter(name=\"offset\", parameter=torch.nn.Parameter(torch.zeros(1, 1, num_dimensions)))\n if variance_prior is not None:\n self.register_prior(\"variance_prior\", variance_prior, \"variance\")\n\n @property\n def variance(self):\n return self._param_transform(self.raw_variance)\n\n @variance.setter\n def variance(self, value):\n self._set_variance(value)\n\n def _set_variance(self, value):\n if not torch.is_tensor(value):\n value = torch.tensor(value)\n self.initialize(raw_variance=self._inv_param_transform(value))\n\n def forward(self, x1, x2, diag=False, batch_dims=None, **params):\n x1_ = x1 * self.variance.sqrt()\n if batch_dims == (0, 2):\n x1_ = x1_.view(x1_.size(0), x1_.size(1), -1, 1)\n x1_ = x1_.permute(0, 2, 1, 3).contiguous()\n x1_ = x1_.view(-1, x1_.size(-2), x1_.size(-1))\n\n if x1.size() == x2.size() and torch.equal(x1, x2):\n # Use RootLazyTensor when x1 == x2 for efficiency when composing\n # with other kernels\n prod = RootLazyTensor(x1_)\n\n else:\n x2_ = x2 * self.variance.sqrt()\n if batch_dims == (0, 2):\n x2_ = x2_.view(x2_.size(0), x2_.size(1), -1, 1)\n x2_ = x2_.permute(0, 2, 1, 3).contiguous()\n x2_ = x2_.view(-1, x2_.size(-2), x2_.size(-1))\n\n prod = MatmulLazyTensor(x1_, x2_.transpose(2, 1))\n\n return prod\n", "path": "gpytorch/kernels/linear_kernel.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport torch\nimport warnings\nfrom .kernel import Kernel\nfrom ..lazy import MatmulLazyTensor, RootLazyTensor\n\n\nclass LinearKernel(Kernel):\n r\"\"\"\n Computes a covariance matrix based on the Linear kernel\n between inputs :math:`\\mathbf{x_1}` and :math:`\\mathbf{x_2}`:\n\n .. math::\n \\begin{equation*}\n k_\\text{Linear}(\\mathbf{x_1}, \\mathbf{x_2}) = v\\mathbf{x_1}^\\top\n \\mathbf{x_2}.\n \\end{equation*}\n\n where\n\n * :math:`v` is a :attr:`variance` parameter.\n\n\n .. note::\n\n To implement this efficiently, we use a :obj:`gpytorch.lazy.RootLazyTensor` during training and a\n :class:`gpytorch.lazy.MatmulLazyTensor` during test. These lazy tensors represent matrices of the form\n :math:`K = XX^{\\top}` and :math:`K = XZ^{\\top}`. 
This makes inference\n efficient because a matrix-vector product :math:`Kv` can be computed as\n :math:`Kv=X(X^{\\top}v)`, where the base multiply :math:`Xv` takes only\n :math:`O(nd)` time and space.\n\n Args:\n :attr:`variance_prior` (:class:`gpytorch.priors.Prior`):\n Prior over the variance parameter (default `None`).\n :attr:`active_dims` (list):\n List of data dimensions to operate on.\n `len(active_dims)` should equal `num_dimensions`.\n \"\"\"\n\n def __init__(self, num_dimensions=None, offset_prior=None, variance_prior=None, active_dims=None):\n super(LinearKernel, self).__init__(active_dims=active_dims)\n if num_dimensions is not None:\n warnings.warn(\n \"The `num_dimensions` argument is deprecated and no longer used.\",\n DeprecationWarning\n )\n self.register_parameter(\n name=\"offset\",\n parameter=torch.nn.Parameter(torch.zeros(1, 1, num_dimensions))\n )\n if offset_prior is not None:\n warnings.warn(\n \"The `offset_prior` argument is deprecated and no longer used.\",\n DeprecationWarning\n )\n self.register_parameter(name=\"raw_variance\", parameter=torch.nn.Parameter(torch.zeros(1)))\n if variance_prior is not None:\n self.register_prior(\n \"variance_prior\",\n variance_prior,\n lambda: self.variance,\n lambda v: self._set_variance(v)\n )\n\n @property\n def variance(self):\n return self._param_transform(self.raw_variance)\n\n @variance.setter\n def variance(self, value):\n self._set_variance(value)\n\n def _set_variance(self, value):\n if not torch.is_tensor(value):\n value = torch.tensor(value)\n self.initialize(raw_variance=self._inv_param_transform(value))\n\n def forward(self, x1, x2, diag=False, batch_dims=None, **params):\n x1_ = x1 * self.variance.sqrt()\n if batch_dims == (0, 2):\n x1_ = x1_.view(x1_.size(0), x1_.size(1), -1, 1)\n x1_ = x1_.permute(0, 2, 1, 3).contiguous()\n x1_ = x1_.view(-1, x1_.size(-2), x1_.size(-1))\n\n if x1.size() == x2.size() and torch.equal(x1, x2):\n # Use RootLazyTensor when x1 == x2 for efficiency when composing\n # with other kernels\n prod = RootLazyTensor(x1_)\n\n else:\n x2_ = x2 * self.variance.sqrt()\n if batch_dims == (0, 2):\n x2_ = x2_.view(x2_.size(0), x2_.size(1), -1, 1)\n x2_ = x2_.permute(0, 2, 1, 3).contiguous()\n x2_ = x2_.view(-1, x2_.size(-2), x2_.size(-1))\n\n prod = MatmulLazyTensor(x1_, x2_.transpose(2, 1))\n\n return prod\n", "path": "gpytorch/kernels/linear_kernel.py"}]} | 1,446 | 717 |
gh_patches_debug_16953 | rasdani/github-patches | git_diff | saleor__saleor-1503 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hide 'Clear filters' when list view has page in GET
### What I'm trying to achieve
Clear filters button should appear only when some filters are applied.
### Steps to reproduce the problem
1. Go to a dashboard or storefront page with filters and pagination and move to the next page (/dashboard/products/?page=2)
2. Clear filters button is present in filters card
### What I expected to happen
Clear filters button should only be present if some filters are applied.
### What happened instead/how it failed
The Clear filters button is still present.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/core/filters.py`
Content:
```
1 from django_filters import FilterSet
2
3
4 class SortedFilterSet(FilterSet):
5 '''
6 Base class for filtersets used in dashboard views. Adds flag
7 is_bound_unsorted to indicate if FilterSet has data from filters other
8 than sort_by.
9 '''
10 def __init__(self, data, *args, **kwargs):
11 data_copy = data.copy() if data else None
12 self.is_bound_unsorted = self.set_is_bound_unsorted(data_copy)
13 super().__init__(data, *args, **kwargs)
14
15 def set_is_bound_unsorted(self, data_copy):
16 if data_copy and data_copy.get('sort_by', None):
17 del data_copy['sort_by']
18 if data_copy:
19 return True
20 return False
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
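The patch shown next reduces the filter check to a single membership test over the querystring keys. A standalone illustration of that check with hypothetical querystring data (not Saleor code):

```python
def is_bound_unsorted(data):
    # True only when the querystring carries something besides sorting/paging
    return any(key not in {"sort_by", "page"} for key in data)

print(is_bound_unsorted({"page": "2"}))                       # False -> hide "Clear filters"
print(is_bound_unsorted({"page": "2", "sort_by": "-price"}))  # False
print(is_bound_unsorted({"page": "2", "category": "7"}))      # True  -> show it
```

With `page` treated the same way as `sort_by`, paginated but otherwise unfiltered views no longer show the button.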
| diff --git a/saleor/core/filters.py b/saleor/core/filters.py
--- a/saleor/core/filters.py
+++ b/saleor/core/filters.py
@@ -5,16 +5,11 @@
'''
Base class for filtersets used in dashboard views. Adds flag
is_bound_unsorted to indicate if FilterSet has data from filters other
- than sort_by.
+ than sort_by or page.
'''
def __init__(self, data, *args, **kwargs):
- data_copy = data.copy() if data else None
- self.is_bound_unsorted = self.set_is_bound_unsorted(data_copy)
- super().__init__(data, *args, **kwargs)
+ self.is_bound_unsorted = self.set_is_bound_unsorted(data)
+ super(SortedFilterSet, self).__init__(data, *args, **kwargs)
- def set_is_bound_unsorted(self, data_copy):
- if data_copy and data_copy.get('sort_by', None):
- del data_copy['sort_by']
- if data_copy:
- return True
- return False
+ def set_is_bound_unsorted(self, data):
+ return any([key not in {'sort_by', 'page'} for key in data.keys()])
| {"golden_diff": "diff --git a/saleor/core/filters.py b/saleor/core/filters.py\n--- a/saleor/core/filters.py\n+++ b/saleor/core/filters.py\n@@ -5,16 +5,11 @@\n '''\n Base class for filtersets used in dashboard views. Adds flag\n is_bound_unsorted to indicate if FilterSet has data from filters other\n- than sort_by.\n+ than sort_by or page.\n '''\n def __init__(self, data, *args, **kwargs):\n- data_copy = data.copy() if data else None\n- self.is_bound_unsorted = self.set_is_bound_unsorted(data_copy)\n- super().__init__(data, *args, **kwargs)\n+ self.is_bound_unsorted = self.set_is_bound_unsorted(data)\n+ super(SortedFilterSet, self).__init__(data, *args, **kwargs)\n \n- def set_is_bound_unsorted(self, data_copy):\n- if data_copy and data_copy.get('sort_by', None):\n- del data_copy['sort_by']\n- if data_copy:\n- return True\n- return False\n+ def set_is_bound_unsorted(self, data):\n+ return any([key not in {'sort_by', 'page'} for key in data.keys()])\n", "issue": "Hide 'Clear filters' when list view has page in GET\n### What I'm trying to achieve\r\n\r\nClear filters button should appear only when some filters are applied.\r\n\r\n### Steps to reproduce the problem\r\n\r\n1. Go to dashboard or store front page with filters and pagination and move to next page (/dashboard/products/?page=2)\r\n2. Clear filters button is present in filters card\r\n\r\n### What I expected to happen\r\n\r\nClear filters button should be only present if some filters are applied.\r\n\r\n### What happened instead/how it failed\r\n\r\nClear filter button is present.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django_filters import FilterSet\n\n\nclass SortedFilterSet(FilterSet):\n '''\n Base class for filtersets used in dashboard views. Adds flag\n is_bound_unsorted to indicate if FilterSet has data from filters other\n than sort_by.\n '''\n def __init__(self, data, *args, **kwargs):\n data_copy = data.copy() if data else None\n self.is_bound_unsorted = self.set_is_bound_unsorted(data_copy)\n super().__init__(data, *args, **kwargs)\n\n def set_is_bound_unsorted(self, data_copy):\n if data_copy and data_copy.get('sort_by', None):\n del data_copy['sort_by']\n if data_copy:\n return True\n return False\n", "path": "saleor/core/filters.py"}], "after_files": [{"content": "from django_filters import FilterSet\n\n\nclass SortedFilterSet(FilterSet):\n '''\n Base class for filtersets used in dashboard views. Adds flag\n is_bound_unsorted to indicate if FilterSet has data from filters other\n than sort_by or page.\n '''\n def __init__(self, data, *args, **kwargs):\n self.is_bound_unsorted = self.set_is_bound_unsorted(data)\n super(SortedFilterSet, self).__init__(data, *args, **kwargs)\n\n def set_is_bound_unsorted(self, data):\n return any([key not in {'sort_by', 'page'} for key in data.keys()])\n", "path": "saleor/core/filters.py"}]} | 649 | 282 |
gh_patches_debug_54080 | rasdani/github-patches | git_diff | e-valuation__EvaP-728 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Warning in courses with small number of participants
In courses with 5 or fewer participants, a warning should be shown above the course's questionnaire:
_This course has only a small number of participants. Please remember that your comments will be visible for the responsible person and the contributors you're evaluating. If two or more people evaluate the course, the results of all voting questions will also be published._
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/student/views.py`
Content:
```
1 from django.contrib import messages
2 from django.core.exceptions import PermissionDenied
3 from django.db import transaction
4 from django.shortcuts import get_object_or_404, redirect, render
5 from django.utils.translation import ugettext as _
6
7 from evap.evaluation.auth import participant_required
8 from evap.evaluation.models import Course, Semester
9 from evap.evaluation.tools import STUDENT_STATES_ORDERED
10
11 from evap.student.forms import QuestionsForm
12 from evap.student.tools import make_form_identifier
13
14 from collections import OrderedDict
15
16 @participant_required
17 def index(request):
18 # retrieve all courses, where the user is a participant and that are not new
19 courses = list(set(Course.objects.filter(participants=request.user).exclude(state="new")))
20 voted_courses = list(set(Course.objects.filter(voters=request.user)))
21 due_courses = list(set(Course.objects.filter(participants=request.user, state='inEvaluation').exclude(voters=request.user)))
22
23 sorter = lambda course: (list(STUDENT_STATES_ORDERED.keys()).index(course.student_state), course.vote_end_date, course.name)
24 courses.sort(key=sorter)
25
26 semesters = Semester.objects.all()
27 semester_list = [dict(semester_name=semester.name, id=semester.id, courses=[course for course in courses if course.semester_id == semester.id]) for semester in semesters]
28
29 template_data = dict(
30 semester_list=semester_list,
31 voted_courses=voted_courses,
32 due_courses=due_courses,
33 can_download_grades=request.user.can_download_grades,
34 )
35 return render(request, "student_index.html", template_data)
36
37
38 def vote_preview(request, course):
39 """
40 Renders a preview of the voting page for the given course.
41 Not used by the student app itself, but by staff and contributor.
42 """
43 form_groups = helper_create_voting_form_groups(request, course.contributions.all())
44 course_form_group = form_groups.pop(course.general_contribution)
45 contributor_form_groups = list((contribution.contributor, contribution.label, form_group, False) for contribution, form_group in form_groups.items())
46
47 template_data = dict(
48 errors_exist=False,
49 course_form_group=course_form_group,
50 contributor_form_groups=contributor_form_groups,
51 course=course,
52 preview=True)
53 return render(request, "student_vote.html", template_data)
54
55
56 @participant_required
57 def vote(request, course_id):
58 # retrieve course and make sure that the user is allowed to vote
59 course = get_object_or_404(Course, id=course_id)
60 if not course.can_user_vote(request.user):
61 raise PermissionDenied
62
63 # prevent a user from voting on themselves.
64 contributions_to_vote_on = course.contributions.exclude(contributor=request.user).all()
65 form_groups = helper_create_voting_form_groups(request, contributions_to_vote_on)
66
67 if not all(all(form.is_valid() for form in form_group) for form_group in form_groups.values()):
68 errors_exist = any(helper_has_errors(form_group) for form_group in form_groups.values())
69
70 course_form_group = form_groups.pop(course.general_contribution)
71
72 contributor_form_groups = list((contribution.contributor, contribution.label, form_group, helper_has_errors(form_group)) for contribution, form_group in form_groups.items())
73
74 template_data = dict(
75 errors_exist=errors_exist,
76 course_form_group=course_form_group,
77 contributor_form_groups=contributor_form_groups,
78 course=course,
79 preview=False)
80 return render(request, "student_vote.html", template_data)
81
82 # all forms are valid, begin vote operation
83 with transaction.atomic():
84 for contribution, form_group in form_groups.items():
85 for questionnaire_form in form_group:
86 questionnaire = questionnaire_form.questionnaire
87 for question in questionnaire.question_set.all():
88 identifier = make_form_identifier(contribution, questionnaire, question)
89 value = questionnaire_form.cleaned_data.get(identifier)
90
91 if question.is_text_question:
92 if value:
93 question.answer_class.objects.create(
94 contribution=contribution,
95 question=question,
96 answer=value)
97 else:
98 if value != 6:
99 answer_counter, created = question.answer_class.objects.get_or_create(contribution=contribution, question=question, answer=value)
100 answer_counter.add_vote()
101 answer_counter.save()
102
103 # remember that the user voted already
104 course.voters.add(request.user)
105
106 course.was_evaluated(request)
107
108 messages.success(request, _("Your vote was recorded."))
109 return redirect('student:index')
110
111
112 def helper_create_form_group(request, contribution):
113 return list(QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire) for questionnaire in contribution.questionnaires.all())
114
115 def helper_create_voting_form_groups(request, contributions):
116 form_groups = OrderedDict()
117 for contribution in contributions:
118 form_groups[contribution] = helper_create_form_group(request, contribution)
119 return form_groups
120
121 def helper_has_errors(form_group):
122 return any(form.errors for form in form_group)
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/student/views.py b/evap/student/views.py
--- a/evap/student/views.py
+++ b/evap/student/views.py
@@ -76,6 +76,7 @@
course_form_group=course_form_group,
contributor_form_groups=contributor_form_groups,
course=course,
+ participants_warning=course.num_participants <= 5,
preview=False)
return render(request, "student_vote.html", template_data)
| {"golden_diff": "diff --git a/evap/student/views.py b/evap/student/views.py\n--- a/evap/student/views.py\n+++ b/evap/student/views.py\n@@ -76,6 +76,7 @@\n course_form_group=course_form_group,\n contributor_form_groups=contributor_form_groups,\n course=course,\n+ participants_warning=course.num_participants <= 5,\n preview=False)\n return render(request, \"student_vote.html\", template_data)\n", "issue": "Warning in courses with small number of participants\nIn courses with 5 or less participants a warning should be shown above the course's questionnaire:\n\n_This course has only a small number of participants. Please remember that your comments will be visible for the responsible person and the contributors you're evaluating. If two or more people evaluate the course, the results of all voting questions will also be published._\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.core.exceptions import PermissionDenied\nfrom django.db import transaction\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import ugettext as _\n\nfrom evap.evaluation.auth import participant_required\nfrom evap.evaluation.models import Course, Semester\nfrom evap.evaluation.tools import STUDENT_STATES_ORDERED\n\nfrom evap.student.forms import QuestionsForm\nfrom evap.student.tools import make_form_identifier\n\nfrom collections import OrderedDict\n\n@participant_required\ndef index(request):\n # retrieve all courses, where the user is a participant and that are not new\n courses = list(set(Course.objects.filter(participants=request.user).exclude(state=\"new\")))\n voted_courses = list(set(Course.objects.filter(voters=request.user)))\n due_courses = list(set(Course.objects.filter(participants=request.user, state='inEvaluation').exclude(voters=request.user)))\n\n sorter = lambda course: (list(STUDENT_STATES_ORDERED.keys()).index(course.student_state), course.vote_end_date, course.name)\n courses.sort(key=sorter)\n\n semesters = Semester.objects.all()\n semester_list = [dict(semester_name=semester.name, id=semester.id, courses=[course for course in courses if course.semester_id == semester.id]) for semester in semesters]\n\n template_data = dict(\n semester_list=semester_list,\n voted_courses=voted_courses,\n due_courses=due_courses,\n can_download_grades=request.user.can_download_grades,\n )\n return render(request, \"student_index.html\", template_data)\n\n\ndef vote_preview(request, course):\n \"\"\"\n Renders a preview of the voting page for the given course.\n Not used by the student app itself, but by staff and contributor.\n \"\"\"\n form_groups = helper_create_voting_form_groups(request, course.contributions.all())\n course_form_group = form_groups.pop(course.general_contribution)\n contributor_form_groups = list((contribution.contributor, contribution.label, form_group, False) for contribution, form_group in form_groups.items())\n\n template_data = dict(\n errors_exist=False,\n course_form_group=course_form_group,\n contributor_form_groups=contributor_form_groups,\n course=course,\n preview=True)\n return render(request, \"student_vote.html\", template_data)\n\n\n@participant_required\ndef vote(request, course_id):\n # retrieve course and make sure that the user is allowed to vote\n course = get_object_or_404(Course, id=course_id)\n if not course.can_user_vote(request.user):\n raise PermissionDenied\n\n # prevent a user from voting on themselves.\n contributions_to_vote_on = 
course.contributions.exclude(contributor=request.user).all()\n form_groups = helper_create_voting_form_groups(request, contributions_to_vote_on)\n\n if not all(all(form.is_valid() for form in form_group) for form_group in form_groups.values()):\n errors_exist = any(helper_has_errors(form_group) for form_group in form_groups.values())\n\n course_form_group = form_groups.pop(course.general_contribution)\n\n contributor_form_groups = list((contribution.contributor, contribution.label, form_group, helper_has_errors(form_group)) for contribution, form_group in form_groups.items())\n\n template_data = dict(\n errors_exist=errors_exist,\n course_form_group=course_form_group,\n contributor_form_groups=contributor_form_groups,\n course=course,\n preview=False)\n return render(request, \"student_vote.html\", template_data)\n\n # all forms are valid, begin vote operation\n with transaction.atomic():\n for contribution, form_group in form_groups.items():\n for questionnaire_form in form_group:\n questionnaire = questionnaire_form.questionnaire\n for question in questionnaire.question_set.all():\n identifier = make_form_identifier(contribution, questionnaire, question)\n value = questionnaire_form.cleaned_data.get(identifier)\n\n if question.is_text_question:\n if value:\n question.answer_class.objects.create(\n contribution=contribution,\n question=question,\n answer=value)\n else:\n if value != 6:\n answer_counter, created = question.answer_class.objects.get_or_create(contribution=contribution, question=question, answer=value)\n answer_counter.add_vote()\n answer_counter.save()\n\n # remember that the user voted already\n course.voters.add(request.user)\n\n course.was_evaluated(request)\n\n messages.success(request, _(\"Your vote was recorded.\"))\n return redirect('student:index')\n\n\ndef helper_create_form_group(request, contribution):\n return list(QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire) for questionnaire in contribution.questionnaires.all())\n\ndef helper_create_voting_form_groups(request, contributions):\n form_groups = OrderedDict()\n for contribution in contributions:\n form_groups[contribution] = helper_create_form_group(request, contribution)\n return form_groups\n\ndef helper_has_errors(form_group):\n return any(form.errors for form in form_group)\n", "path": "evap/student/views.py"}], "after_files": [{"content": "from django.contrib import messages\nfrom django.core.exceptions import PermissionDenied\nfrom django.db import transaction\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import ugettext as _\n\nfrom evap.evaluation.auth import participant_required\nfrom evap.evaluation.models import Course, Semester\nfrom evap.evaluation.tools import STUDENT_STATES_ORDERED\n\nfrom evap.student.forms import QuestionsForm\nfrom evap.student.tools import make_form_identifier\n\nfrom collections import OrderedDict\n\n@participant_required\ndef index(request):\n # retrieve all courses, where the user is a participant and that are not new\n courses = list(set(Course.objects.filter(participants=request.user).exclude(state=\"new\")))\n voted_courses = list(set(Course.objects.filter(voters=request.user)))\n due_courses = list(set(Course.objects.filter(participants=request.user, state='inEvaluation').exclude(voters=request.user)))\n\n sorter = lambda course: (list(STUDENT_STATES_ORDERED.keys()).index(course.student_state), course.vote_end_date, course.name)\n courses.sort(key=sorter)\n\n semesters = 
Semester.objects.all()\n semester_list = [dict(semester_name=semester.name, id=semester.id, courses=[course for course in courses if course.semester_id == semester.id]) for semester in semesters]\n\n template_data = dict(\n semester_list=semester_list,\n voted_courses=voted_courses,\n due_courses=due_courses,\n can_download_grades=request.user.can_download_grades,\n )\n return render(request, \"student_index.html\", template_data)\n\n\ndef vote_preview(request, course):\n \"\"\"\n Renders a preview of the voting page for the given course.\n Not used by the student app itself, but by staff and contributor.\n \"\"\"\n form_groups = helper_create_voting_form_groups(request, course.contributions.all())\n course_form_group = form_groups.pop(course.general_contribution)\n contributor_form_groups = list((contribution.contributor, contribution.label, form_group, False) for contribution, form_group in form_groups.items())\n\n template_data = dict(\n errors_exist=False,\n course_form_group=course_form_group,\n contributor_form_groups=contributor_form_groups,\n course=course,\n preview=True)\n return render(request, \"student_vote.html\", template_data)\n\n\n@participant_required\ndef vote(request, course_id):\n # retrieve course and make sure that the user is allowed to vote\n course = get_object_or_404(Course, id=course_id)\n if not course.can_user_vote(request.user):\n raise PermissionDenied\n\n # prevent a user from voting on themselves.\n contributions_to_vote_on = course.contributions.exclude(contributor=request.user).all()\n form_groups = helper_create_voting_form_groups(request, contributions_to_vote_on)\n\n if not all(all(form.is_valid() for form in form_group) for form_group in form_groups.values()):\n errors_exist = any(helper_has_errors(form_group) for form_group in form_groups.values())\n\n course_form_group = form_groups.pop(course.general_contribution)\n\n contributor_form_groups = list((contribution.contributor, contribution.label, form_group, helper_has_errors(form_group)) for contribution, form_group in form_groups.items())\n\n template_data = dict(\n errors_exist=errors_exist,\n course_form_group=course_form_group,\n contributor_form_groups=contributor_form_groups,\n course=course,\n participants_warning=course.num_participants <= 5,\n preview=False)\n return render(request, \"student_vote.html\", template_data)\n\n # all forms are valid, begin vote operation\n with transaction.atomic():\n for contribution, form_group in form_groups.items():\n for questionnaire_form in form_group:\n questionnaire = questionnaire_form.questionnaire\n for question in questionnaire.question_set.all():\n identifier = make_form_identifier(contribution, questionnaire, question)\n value = questionnaire_form.cleaned_data.get(identifier)\n\n if question.is_text_question:\n if value:\n question.answer_class.objects.create(\n contribution=contribution,\n question=question,\n answer=value)\n else:\n if value != 6:\n answer_counter, created = question.answer_class.objects.get_or_create(contribution=contribution, question=question, answer=value)\n answer_counter.add_vote()\n answer_counter.save()\n\n # remember that the user voted already\n course.voters.add(request.user)\n\n course.was_evaluated(request)\n\n messages.success(request, _(\"Your vote was recorded.\"))\n return redirect('student:index')\n\n\ndef helper_create_form_group(request, contribution):\n return list(QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire) for questionnaire in 
contribution.questionnaires.all())\n\ndef helper_create_voting_form_groups(request, contributions):\n form_groups = OrderedDict()\n for contribution in contributions:\n form_groups[contribution] = helper_create_form_group(request, contribution)\n return form_groups\n\ndef helper_has_errors(form_group):\n return any(form.errors for form in form_group)\n", "path": "evap/student/views.py"}]} | 1,650 | 102 |
gh_patches_debug_23212 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-3966 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
support for validate with batch size > 1
Hi, glad to know we can run inference with a batch size greater than 1. Is there any plan to support validation during training with a batch size greater than 1 as well? According to
https://github.com/open-mmlab/mmdetection/blob/c8511649550834ea168f610411a47a39cf194767/mmdet/apis/train.py#L113-L120
the batch size is set to 1 for validation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet/apis/train.py`
Content:
```
1 import random
2
3 import numpy as np
4 import torch
5 from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
6 from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner,
7 Fp16OptimizerHook, OptimizerHook, build_optimizer)
8 from mmcv.utils import build_from_cfg
9
10 from mmdet.core import DistEvalHook, EvalHook
11 from mmdet.datasets import build_dataloader, build_dataset
12 from mmdet.utils import get_root_logger
13
14
15 def set_random_seed(seed, deterministic=False):
16 """Set random seed.
17
18 Args:
19 seed (int): Seed to be used.
20 deterministic (bool): Whether to set the deterministic option for
21 CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`
22 to True and `torch.backends.cudnn.benchmark` to False.
23 Default: False.
24 """
25 random.seed(seed)
26 np.random.seed(seed)
27 torch.manual_seed(seed)
28 torch.cuda.manual_seed_all(seed)
29 if deterministic:
30 torch.backends.cudnn.deterministic = True
31 torch.backends.cudnn.benchmark = False
32
33
34 def train_detector(model,
35 dataset,
36 cfg,
37 distributed=False,
38 validate=False,
39 timestamp=None,
40 meta=None):
41 logger = get_root_logger(cfg.log_level)
42
43 # prepare data loaders
44 dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
45 if 'imgs_per_gpu' in cfg.data:
46 logger.warning('"imgs_per_gpu" is deprecated in MMDet V2.0. '
47 'Please use "samples_per_gpu" instead')
48 if 'samples_per_gpu' in cfg.data:
49 logger.warning(
50 f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and '
51 f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"'
52 f'={cfg.data.imgs_per_gpu} is used in this experiments')
53 else:
54 logger.warning(
55 'Automatically set "samples_per_gpu"="imgs_per_gpu"='
56 f'{cfg.data.imgs_per_gpu} in this experiments')
57 cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu
58
59 data_loaders = [
60 build_dataloader(
61 ds,
62 cfg.data.samples_per_gpu,
63 cfg.data.workers_per_gpu,
64 # cfg.gpus will be ignored if distributed
65 len(cfg.gpu_ids),
66 dist=distributed,
67 seed=cfg.seed) for ds in dataset
68 ]
69
70 # put model on gpus
71 if distributed:
72 find_unused_parameters = cfg.get('find_unused_parameters', False)
73 # Sets the `find_unused_parameters` parameter in
74 # torch.nn.parallel.DistributedDataParallel
75 model = MMDistributedDataParallel(
76 model.cuda(),
77 device_ids=[torch.cuda.current_device()],
78 broadcast_buffers=False,
79 find_unused_parameters=find_unused_parameters)
80 else:
81 model = MMDataParallel(
82 model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
83
84 # build runner
85 optimizer = build_optimizer(model, cfg.optimizer)
86 runner = EpochBasedRunner(
87 model,
88 optimizer=optimizer,
89 work_dir=cfg.work_dir,
90 logger=logger,
91 meta=meta)
92 # an ugly workaround to make .log and .log.json filenames the same
93 runner.timestamp = timestamp
94
95 # fp16 setting
96 fp16_cfg = cfg.get('fp16', None)
97 if fp16_cfg is not None:
98 optimizer_config = Fp16OptimizerHook(
99 **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
100 elif distributed and 'type' not in cfg.optimizer_config:
101 optimizer_config = OptimizerHook(**cfg.optimizer_config)
102 else:
103 optimizer_config = cfg.optimizer_config
104
105 # register hooks
106 runner.register_training_hooks(cfg.lr_config, optimizer_config,
107 cfg.checkpoint_config, cfg.log_config,
108 cfg.get('momentum_config', None))
109 if distributed:
110 runner.register_hook(DistSamplerSeedHook())
111
112 # register eval hooks
113 if validate:
114 val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
115 val_dataloader = build_dataloader(
116 val_dataset,
117 samples_per_gpu=1,
118 workers_per_gpu=cfg.data.workers_per_gpu,
119 dist=distributed,
120 shuffle=False)
121 eval_cfg = cfg.get('evaluation', {})
122 eval_hook = DistEvalHook if distributed else EvalHook
123 runner.register_hook(eval_hook(val_dataloader, **eval_cfg))
124
125 # user-defined hooks
126 if cfg.get('custom_hooks', None):
127 custom_hooks = cfg.custom_hooks
128 assert isinstance(custom_hooks, list), \
129 f'custom_hooks expect list type, but got {type(custom_hooks)}'
130 for hook_cfg in cfg.custom_hooks:
131 assert isinstance(hook_cfg, dict), \
132 'Each item in custom_hooks expects dict type, but got ' \
133 f'{type(hook_cfg)}'
134 hook_cfg = hook_cfg.copy()
135 priority = hook_cfg.pop('priority', 'NORMAL')
136 hook = build_from_cfg(hook_cfg, HOOKS)
137 runner.register_hook(hook, priority=priority)
138
139 if cfg.resume_from:
140 runner.resume(cfg.resume_from)
141 elif cfg.load_from:
142 runner.load_checkpoint(cfg.load_from)
143 runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
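The patch shown next pops `samples_per_gpu` from the validation config (defaulting to 1) and swaps the pipeline formatting step when it is greater than 1. A plain-Python sketch of that config handling with made-up values (not MMDetection code):

```python
# Hypothetical validation config; keys mimic the ones the patch touches.
val_cfg = {"samples_per_gpu": 4, "pipeline": ["LoadImage", "Resize", "ImageToTensor"]}

val_samples_per_gpu = val_cfg.pop("samples_per_gpu", 1)  # old behaviour is the default
if val_samples_per_gpu > 1:
    # batched validation needs the training-style formatting step
    val_cfg["pipeline"] = ["DefaultFormatBundle" if step == "ImageToTensor" else step
                           for step in val_cfg["pipeline"]]

print(val_samples_per_gpu, val_cfg["pipeline"])
```

Configs that omit the new key keep the previous single-sample behaviour because of the `pop` default.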
| diff --git a/mmdet/apis/train.py b/mmdet/apis/train.py
--- a/mmdet/apis/train.py
+++ b/mmdet/apis/train.py
@@ -8,7 +8,8 @@
from mmcv.utils import build_from_cfg
from mmdet.core import DistEvalHook, EvalHook
-from mmdet.datasets import build_dataloader, build_dataset
+from mmdet.datasets import (build_dataloader, build_dataset,
+ replace_ImageToTensor)
from mmdet.utils import get_root_logger
@@ -111,10 +112,16 @@
# register eval hooks
if validate:
+ # Support batch_size > 1 in validation
+ val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1)
+ if val_samples_per_gpu > 1:
+ # Replace 'ImageToTensor' to 'DefaultFormatBundle'
+ cfg.data.val.pipeline = replace_ImageToTensor(
+ cfg.data.val.pipeline)
val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
val_dataloader = build_dataloader(
val_dataset,
- samples_per_gpu=1,
+ samples_per_gpu=val_samples_per_gpu,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=distributed,
shuffle=False)
| {"golden_diff": "diff --git a/mmdet/apis/train.py b/mmdet/apis/train.py\n--- a/mmdet/apis/train.py\n+++ b/mmdet/apis/train.py\n@@ -8,7 +8,8 @@\n from mmcv.utils import build_from_cfg\n \n from mmdet.core import DistEvalHook, EvalHook\n-from mmdet.datasets import build_dataloader, build_dataset\n+from mmdet.datasets import (build_dataloader, build_dataset,\n+ replace_ImageToTensor)\n from mmdet.utils import get_root_logger\n \n \n@@ -111,10 +112,16 @@\n \n # register eval hooks\n if validate:\n+ # Support batch_size > 1 in validation\n+ val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1)\n+ if val_samples_per_gpu > 1:\n+ # Replace 'ImageToTensor' to 'DefaultFormatBundle'\n+ cfg.data.val.pipeline = replace_ImageToTensor(\n+ cfg.data.val.pipeline)\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n val_dataloader = build_dataloader(\n val_dataset,\n- samples_per_gpu=1,\n+ samples_per_gpu=val_samples_per_gpu,\n workers_per_gpu=cfg.data.workers_per_gpu,\n dist=distributed,\n shuffle=False)\n", "issue": "support for validate with batch size > 1\nHi, glad to know we could run the inference with batch size > 1, any plan to support validation during training with batch size > 1? According to \r\nhttps://github.com/open-mmlab/mmdetection/blob/c8511649550834ea168f610411a47a39cf194767/mmdet/apis/train.py#L113-L120\r\nbatch size is set to 1 for validation.\n", "before_files": [{"content": "import random\n\nimport numpy as np\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner,\n Fp16OptimizerHook, OptimizerHook, build_optimizer)\nfrom mmcv.utils import build_from_cfg\n\nfrom mmdet.core import DistEvalHook, EvalHook\nfrom mmdet.datasets import build_dataloader, build_dataset\nfrom mmdet.utils import get_root_logger\n\n\ndef set_random_seed(seed, deterministic=False):\n \"\"\"Set random seed.\n\n Args:\n seed (int): Seed to be used.\n deterministic (bool): Whether to set the deterministic option for\n CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`\n to True and `torch.backends.cudnn.benchmark` to False.\n Default: False.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n if deterministic:\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n\n\ndef train_detector(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n meta=None):\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n if 'imgs_per_gpu' in cfg.data:\n logger.warning('\"imgs_per_gpu\" is deprecated in MMDet V2.0. 
'\n 'Please use \"samples_per_gpu\" instead')\n if 'samples_per_gpu' in cfg.data:\n logger.warning(\n f'Got \"imgs_per_gpu\"={cfg.data.imgs_per_gpu} and '\n f'\"samples_per_gpu\"={cfg.data.samples_per_gpu}, \"imgs_per_gpu\"'\n f'={cfg.data.imgs_per_gpu} is used in this experiments')\n else:\n logger.warning(\n 'Automatically set \"samples_per_gpu\"=\"imgs_per_gpu\"='\n f'{cfg.data.imgs_per_gpu} in this experiments')\n cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu\n\n data_loaders = [\n build_dataloader(\n ds,\n cfg.data.samples_per_gpu,\n cfg.data.workers_per_gpu,\n # cfg.gpus will be ignored if distributed\n len(cfg.gpu_ids),\n dist=distributed,\n seed=cfg.seed) for ds in dataset\n ]\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', False)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n\n # build runner\n optimizer = build_optimizer(model, cfg.optimizer)\n runner = EpochBasedRunner(\n model,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta)\n # an ugly workaround to make .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = OptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n val_dataloader = build_dataloader(\n val_dataset,\n samples_per_gpu=1,\n workers_per_gpu=cfg.data.workers_per_gpu,\n dist=distributed,\n shuffle=False)\n eval_cfg = cfg.get('evaluation', {})\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n # user-defined hooks\n if cfg.get('custom_hooks', None):\n custom_hooks = cfg.custom_hooks\n assert isinstance(custom_hooks, list), \\\n f'custom_hooks expect list type, but got {type(custom_hooks)}'\n for hook_cfg in cfg.custom_hooks:\n assert isinstance(hook_cfg, dict), \\\n 'Each item in custom_hooks expects dict type, but got ' \\\n f'{type(hook_cfg)}'\n hook_cfg = hook_cfg.copy()\n priority = hook_cfg.pop('priority', 'NORMAL')\n hook = build_from_cfg(hook_cfg, HOOKS)\n runner.register_hook(hook, priority=priority)\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow, cfg.total_epochs)\n", "path": "mmdet/apis/train.py"}], "after_files": [{"content": "import random\n\nimport numpy as np\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner,\n Fp16OptimizerHook, OptimizerHook, build_optimizer)\nfrom mmcv.utils import build_from_cfg\n\nfrom mmdet.core import DistEvalHook, 
EvalHook\nfrom mmdet.datasets import (build_dataloader, build_dataset,\n replace_ImageToTensor)\nfrom mmdet.utils import get_root_logger\n\n\ndef set_random_seed(seed, deterministic=False):\n \"\"\"Set random seed.\n\n Args:\n seed (int): Seed to be used.\n deterministic (bool): Whether to set the deterministic option for\n CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`\n to True and `torch.backends.cudnn.benchmark` to False.\n Default: False.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n if deterministic:\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n\n\ndef train_detector(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n meta=None):\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n if 'imgs_per_gpu' in cfg.data:\n logger.warning('\"imgs_per_gpu\" is deprecated in MMDet V2.0. '\n 'Please use \"samples_per_gpu\" instead')\n if 'samples_per_gpu' in cfg.data:\n logger.warning(\n f'Got \"imgs_per_gpu\"={cfg.data.imgs_per_gpu} and '\n f'\"samples_per_gpu\"={cfg.data.samples_per_gpu}, \"imgs_per_gpu\"'\n f'={cfg.data.imgs_per_gpu} is used in this experiments')\n else:\n logger.warning(\n 'Automatically set \"samples_per_gpu\"=\"imgs_per_gpu\"='\n f'{cfg.data.imgs_per_gpu} in this experiments')\n cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu\n\n data_loaders = [\n build_dataloader(\n ds,\n cfg.data.samples_per_gpu,\n cfg.data.workers_per_gpu,\n # cfg.gpus will be ignored if distributed\n len(cfg.gpu_ids),\n dist=distributed,\n seed=cfg.seed) for ds in dataset\n ]\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', False)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n\n # build runner\n optimizer = build_optimizer(model, cfg.optimizer)\n runner = EpochBasedRunner(\n model,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta)\n # an ugly workaround to make .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = OptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n # Support batch_size > 1 in validation\n val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1)\n if val_samples_per_gpu > 1:\n # Replace 'ImageToTensor' to 'DefaultFormatBundle'\n cfg.data.val.pipeline = replace_ImageToTensor(\n cfg.data.val.pipeline)\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n val_dataloader = build_dataloader(\n val_dataset,\n samples_per_gpu=val_samples_per_gpu,\n 
workers_per_gpu=cfg.data.workers_per_gpu,\n dist=distributed,\n shuffle=False)\n eval_cfg = cfg.get('evaluation', {})\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n # user-defined hooks\n if cfg.get('custom_hooks', None):\n custom_hooks = cfg.custom_hooks\n assert isinstance(custom_hooks, list), \\\n f'custom_hooks expect list type, but got {type(custom_hooks)}'\n for hook_cfg in cfg.custom_hooks:\n assert isinstance(hook_cfg, dict), \\\n 'Each item in custom_hooks expects dict type, but got ' \\\n f'{type(hook_cfg)}'\n hook_cfg = hook_cfg.copy()\n priority = hook_cfg.pop('priority', 'NORMAL')\n hook = build_from_cfg(hook_cfg, HOOKS)\n runner.register_hook(hook, priority=priority)\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow, cfg.total_epochs)\n", "path": "mmdet/apis/train.py"}]} | 1,856 | 284 |
gh_patches_debug_16108 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1620 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E3503 does not match CloudFormation - requires ValidationDomain when CF does not want it
*cfn-lint version: (`cfn-lint --version`)*
cfn-lint 0.33.2
*Description of issue.*
I created an ACM certificate resource, and there were problems configuring the `DomainValidationOptions` block. If using DNS validation, the only properties needed are `DomainName` and `HostedZoneId`. However, cfn-lint was demanding a third property named `ValidationDomain`. When submitting the stack for deployment to CF, it triggered an immediate rollback because CF views `HostedZoneId` and `ValidationDomain` as mutually exclusive.
Adding an ignore rule to skip the E3503 error allowed me to proceed without issues. This rule should be adjusted to match what CF enforces.
**Sample:**
```yaml
Resources:
Certificate:
Type: AWS::CertificateManager::Certificate
Metadata:
cfn-lint:
config:
ignore_checks:
- E3503
Properties:
DomainName: "*.aws.domain.com"
ValidationMethod: DNS
DomainValidationOptions:
- DomainName: aws.domain.com
HostedZoneId: !ImportValue SubdomainHostedZoneId
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import six
6 from cfnlint.rules import CloudFormationLintRule
7 from cfnlint.rules import RuleMatch
8
9
10 class DomainValidationOptions(CloudFormationLintRule):
11 """Check if a certificate's domain validation options are set up correctly"""
12 id = 'E3503'
13 shortdesc = 'ValidationDomain is superdomain of DomainName'
14 description = 'In ValidationDomainOptions, the ValidationDomain must be a superdomain of the DomainName being validated'
15 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-certificatemanager-certificate-domainvalidationoption.html#cfn-certificatemanager-certificate-domainvalidationoption-validationdomain'
16 tags = ['certificate', 'certificatemanager', 'domainvalidationoptions', 'validationdomain']
17
18 def __init__(self):
19 """ Init """
20 super(DomainValidationOptions, self).__init__()
21 self.resource_property_types = ['AWS::CertificateManager::Certificate']
22
23 def check_value(self, value, path, **kwargs):
24 """ Check value inside the list of DomainValidationOptions"""
25 matches = []
26 cfn = kwargs.get('cfn')
27 if isinstance(value, dict):
28 property_sets = cfn.get_object_without_conditions(value)
29 for property_set in property_sets:
30 properties = property_set.get('Object')
31 scenario = property_set.get('Scenario')
32 domain_name = properties.get('DomainName', '')
33 validation_domain = properties.get('ValidationDomain', '')
34 if isinstance(domain_name, six.string_types) and isinstance(validation_domain, six.string_types):
35 if domain_name == validation_domain:
36 continue
37
38 if not domain_name.endswith('.' + validation_domain):
39 message = 'ValidationDomain must be a superdomain of DomainName at {}'
40 if scenario is None:
41 matches.append(
42 RuleMatch(path[:] + ['DomainName'], message.format('/'.join(map(str, path)))))
43 else:
44 scenario_text = ' and '.join(
45 ['when condition "%s" is %s' % (k, v) for (k, v) in scenario.items()])
46 matches.append(
47 RuleMatch(path[:] + ['DomainName'], message.format('/'.join(map(str, path)) + ' ' + scenario_text)))
48 return matches
49
50 def match_resource_properties(self, properties, _, path, cfn):
51 matches = []
52 matches.extend(cfn.check_value(
53 properties, 'DomainValidationOptions', path[:],
54 check_value=self.check_value,
55 cfn=cfn,
56 ))
57
58 return matches
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
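The patch shown next changes the missing-property defaults from empty strings to `None`, so the superdomain comparison is skipped when `ValidationDomain` is absent. A hypothetical illustration of why the default matters (not cfn-lint's actual rule code):

```python
def superdomain_violation(domain_name, validation_domain):
    # only compare when both values are real strings, as the patched rule does
    if isinstance(domain_name, str) and isinstance(validation_domain, str):
        if domain_name == validation_domain:
            return False
        return not domain_name.endswith("." + validation_domain)
    return False

print(superdomain_violation("aws.domain.com", ""))    # True: the '' default caused a false positive
print(superdomain_violation("aws.domain.com", None))  # False: the None default skips the check
```

A DNS-validated certificate that supplies only `DomainName` and `HostedZoneId` therefore no longer trips E3503.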
| diff --git a/src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py b/src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py
--- a/src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py
+++ b/src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py
@@ -29,8 +29,8 @@
for property_set in property_sets:
properties = property_set.get('Object')
scenario = property_set.get('Scenario')
- domain_name = properties.get('DomainName', '')
- validation_domain = properties.get('ValidationDomain', '')
+ domain_name = properties.get('DomainName', None)
+ validation_domain = properties.get('ValidationDomain', None)
if isinstance(domain_name, six.string_types) and isinstance(validation_domain, six.string_types):
if domain_name == validation_domain:
continue
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py b/src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py\n--- a/src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py\n+++ b/src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py\n@@ -29,8 +29,8 @@\n for property_set in property_sets:\n properties = property_set.get('Object')\n scenario = property_set.get('Scenario')\n- domain_name = properties.get('DomainName', '')\n- validation_domain = properties.get('ValidationDomain', '')\n+ domain_name = properties.get('DomainName', None)\n+ validation_domain = properties.get('ValidationDomain', None)\n if isinstance(domain_name, six.string_types) and isinstance(validation_domain, six.string_types):\n if domain_name == validation_domain:\n continue\n", "issue": "E3503 does not match CloudFormation - requires ValidationDomain when CF does not want it\n*cfn-lint version: (`cfn-lint --version`)*\r\n\r\ncfn-lint 0.33.2\r\n\r\n*Description of issue.*\r\n\r\nI created an ACM certificate resource, and there were problems configuring the `DomainValidationOptions` block. If using DNS validation, the only properties needed are `DomainName` and `HostedZoneId`. However, cfn-lint was demanding a third property named `ValidationDomain`. When submitting the stack for deployment to CF, it triggered an immediate rollback because CF views `HostedZoneId` and `ValidationDomain` as mutually exclusive.\r\n\r\nAdding an ignore rule to skip the E3503 error allowed me to proceed without issues. This rule should be adjusted to match what CF enforces.\r\n\r\n**Sample:**\r\n\r\n```yaml\r\nResources:\r\n Certificate:\r\n Type: AWS::CertificateManager::Certificate\r\n Metadata:\r\n cfn-lint:\r\n config:\r\n ignore_checks:\r\n - E3503\r\n Properties:\r\n DomainName: \"*.aws.domain.com\"\r\n ValidationMethod: DNS\r\n DomainValidationOptions:\r\n - DomainName: aws.domain.com\r\n HostedZoneId: !ImportValue SubdomainHostedZoneId\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass DomainValidationOptions(CloudFormationLintRule):\n \"\"\"Check if a certificate's domain validation options are set up correctly\"\"\"\n id = 'E3503'\n shortdesc = 'ValidationDomain is superdomain of DomainName'\n description = 'In ValidationDomainOptions, the ValidationDomain must be a superdomain of the DomainName being validated'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-certificatemanager-certificate-domainvalidationoption.html#cfn-certificatemanager-certificate-domainvalidationoption-validationdomain'\n tags = ['certificate', 'certificatemanager', 'domainvalidationoptions', 'validationdomain']\n\n def __init__(self):\n \"\"\" Init \"\"\"\n super(DomainValidationOptions, self).__init__()\n self.resource_property_types = ['AWS::CertificateManager::Certificate']\n\n def check_value(self, value, path, **kwargs):\n \"\"\" Check value inside the list of DomainValidationOptions\"\"\"\n matches = []\n cfn = kwargs.get('cfn')\n if isinstance(value, dict):\n property_sets = cfn.get_object_without_conditions(value)\n for property_set in property_sets:\n properties = property_set.get('Object')\n scenario = property_set.get('Scenario')\n domain_name = properties.get('DomainName', '')\n validation_domain = properties.get('ValidationDomain', '')\n if isinstance(domain_name, six.string_types) and isinstance(validation_domain, six.string_types):\n if domain_name == validation_domain:\n continue\n\n if not domain_name.endswith('.' + validation_domain):\n message = 'ValidationDomain must be a superdomain of DomainName at {}'\n if scenario is None:\n matches.append(\n RuleMatch(path[:] + ['DomainName'], message.format('/'.join(map(str, path)))))\n else:\n scenario_text = ' and '.join(\n ['when condition \"%s\" is %s' % (k, v) for (k, v) in scenario.items()])\n matches.append(\n RuleMatch(path[:] + ['DomainName'], message.format('/'.join(map(str, path)) + ' ' + scenario_text)))\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n matches = []\n matches.extend(cfn.check_value(\n properties, 'DomainValidationOptions', path[:],\n check_value=self.check_value,\n cfn=cfn,\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass DomainValidationOptions(CloudFormationLintRule):\n \"\"\"Check if a certificate's domain validation options are set up correctly\"\"\"\n id = 'E3503'\n shortdesc = 'ValidationDomain is superdomain of DomainName'\n description = 'In ValidationDomainOptions, the ValidationDomain must be a superdomain of the DomainName being validated'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-certificatemanager-certificate-domainvalidationoption.html#cfn-certificatemanager-certificate-domainvalidationoption-validationdomain'\n tags = ['certificate', 'certificatemanager', 'domainvalidationoptions', 'validationdomain']\n\n def __init__(self):\n \"\"\" Init \"\"\"\n super(DomainValidationOptions, self).__init__()\n self.resource_property_types = ['AWS::CertificateManager::Certificate']\n\n def check_value(self, value, path, **kwargs):\n \"\"\" Check value inside the list of DomainValidationOptions\"\"\"\n matches = []\n cfn = kwargs.get('cfn')\n if isinstance(value, dict):\n property_sets = cfn.get_object_without_conditions(value)\n for property_set in property_sets:\n properties = property_set.get('Object')\n scenario = property_set.get('Scenario')\n domain_name = properties.get('DomainName', None)\n validation_domain = properties.get('ValidationDomain', None)\n if isinstance(domain_name, six.string_types) and isinstance(validation_domain, six.string_types):\n if domain_name == validation_domain:\n continue\n\n if not domain_name.endswith('.' + validation_domain):\n message = 'ValidationDomain must be a superdomain of DomainName at {}'\n if scenario is None:\n matches.append(\n RuleMatch(path[:] + ['DomainName'], message.format('/'.join(map(str, path)))))\n else:\n scenario_text = ' and '.join(\n ['when condition \"%s\" is %s' % (k, v) for (k, v) in scenario.items()])\n matches.append(\n RuleMatch(path[:] + ['DomainName'], message.format('/'.join(map(str, path)) + ' ' + scenario_text)))\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n matches = []\n matches.extend(cfn.check_value(\n properties, 'DomainValidationOptions', path[:],\n check_value=self.check_value,\n cfn=cfn,\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/certificatemanager/DomainValidationOptions.py"}]} | 1,205 | 194 |
gh_patches_debug_18653 | rasdani/github-patches | git_diff | frappe__frappe-26417 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
(List View): In sidebar filter link field show Title of Field if set
**Is your feature request related to a problem? Please describe.**
In the sidebar filter, a link field shows field.name rather than field.title.
**Describe the solution you'd like**
If the link field has a title field, show the title instead.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/desk/listview.py`
Content:
```
1 # Copyright (c) 2022, Frappe Technologies Pvt. Ltd. and Contributors
2 # License: MIT. See LICENSE
3
4 import frappe
5 from frappe.model import is_default_field
6 from frappe.query_builder import Order
7 from frappe.query_builder.functions import Count
8 from frappe.query_builder.terms import SubQuery
9 from frappe.query_builder.utils import DocType
10
11
12 @frappe.whitelist()
13 def get_list_settings(doctype):
14 try:
15 return frappe.get_cached_doc("List View Settings", doctype)
16 except frappe.DoesNotExistError:
17 frappe.clear_messages()
18
19
20 @frappe.whitelist()
21 def set_list_settings(doctype, values):
22 try:
23 doc = frappe.get_doc("List View Settings", doctype)
24 except frappe.DoesNotExistError:
25 doc = frappe.new_doc("List View Settings")
26 doc.name = doctype
27 frappe.clear_messages()
28 doc.update(frappe.parse_json(values))
29 doc.save()
30
31
32 @frappe.whitelist()
33 def get_group_by_count(doctype: str, current_filters: str, field: str) -> list[dict]:
34 current_filters = frappe.parse_json(current_filters)
35
36 if field == "assigned_to":
37 ToDo = DocType("ToDo")
38 User = DocType("User")
39 count = Count("*").as_("count")
40 filtered_records = frappe.qb.get_query(
41 doctype,
42 filters=current_filters,
43 fields=["name"],
44 validate_filters=True,
45 )
46
47 return (
48 frappe.qb.from_(ToDo)
49 .from_(User)
50 .select(ToDo.allocated_to.as_("name"), count)
51 .where(
52 (ToDo.status != "Cancelled")
53 & (ToDo.allocated_to == User.name)
54 & (User.user_type == "System User")
55 & (ToDo.reference_name.isin(SubQuery(filtered_records)))
56 )
57 .groupby(ToDo.allocated_to)
58 .orderby(count, order=Order.desc)
59 .limit(50)
60 .run(as_dict=True)
61 )
62
63 if not frappe.get_meta(doctype).has_field(field) and not is_default_field(field):
64 raise ValueError("Field does not belong to doctype")
65
66 return frappe.get_list(
67 doctype,
68 filters=current_filters,
69 group_by=f"`tab{doctype}`.{field}",
70 fields=["count(*) as count", f"`{field}` as name"],
71 order_by="count desc",
72 limit=50,
73 )
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/desk/listview.py b/frappe/desk/listview.py
--- a/frappe/desk/listview.py
+++ b/frappe/desk/listview.py
@@ -60,10 +60,12 @@
.run(as_dict=True)
)
- if not frappe.get_meta(doctype).has_field(field) and not is_default_field(field):
+ meta = frappe.get_meta(doctype)
+
+ if not meta.has_field(field) and not is_default_field(field):
raise ValueError("Field does not belong to doctype")
- return frappe.get_list(
+ data = frappe.get_list(
doctype,
filters=current_filters,
group_by=f"`tab{doctype}`.{field}",
@@ -71,3 +73,13 @@
order_by="count desc",
limit=50,
)
+
+ # Add in title if it's a link field and `show_title_field_in_link` is set
+ if (field_meta := meta.get_field(field)) and field_meta.fieldtype == "Link":
+ link_meta = frappe.get_meta(field_meta.options)
+ if link_meta.show_title_field_in_link:
+ title_field = link_meta.get_title_field()
+ for item in data:
+ item.title = frappe.get_value(field_meta.options, item.name, title_field)
+
+ return data
| {"golden_diff": "diff --git a/frappe/desk/listview.py b/frappe/desk/listview.py\n--- a/frappe/desk/listview.py\n+++ b/frappe/desk/listview.py\n@@ -60,10 +60,12 @@\n \t\t\t.run(as_dict=True)\n \t\t)\n \n-\tif not frappe.get_meta(doctype).has_field(field) and not is_default_field(field):\n+\tmeta = frappe.get_meta(doctype)\n+\n+\tif not meta.has_field(field) and not is_default_field(field):\n \t\traise ValueError(\"Field does not belong to doctype\")\n \n-\treturn frappe.get_list(\n+\tdata = frappe.get_list(\n \t\tdoctype,\n \t\tfilters=current_filters,\n \t\tgroup_by=f\"`tab{doctype}`.{field}\",\n@@ -71,3 +73,13 @@\n \t\torder_by=\"count desc\",\n \t\tlimit=50,\n \t)\n+\n+\t# Add in title if it's a link field and `show_title_field_in_link` is set\n+\tif (field_meta := meta.get_field(field)) and field_meta.fieldtype == \"Link\":\n+\t\tlink_meta = frappe.get_meta(field_meta.options)\n+\t\tif link_meta.show_title_field_in_link:\n+\t\t\ttitle_field = link_meta.get_title_field()\n+\t\t\tfor item in data:\n+\t\t\t\titem.title = frappe.get_value(field_meta.options, item.name, title_field)\n+\n+\treturn data\n", "issue": "(List View): In sidebar filter link field show Title of Field if set\n**Is your feature request related to a problem? Please describe.**\r\nIn sidebar filter link field show field.name but not field.title\r\n\r\n**Describe the solution you'd like**\r\nif link field has title field - show title.\r\n\n", "before_files": [{"content": "# Copyright (c) 2022, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\nimport frappe\nfrom frappe.model import is_default_field\nfrom frappe.query_builder import Order\nfrom frappe.query_builder.functions import Count\nfrom frappe.query_builder.terms import SubQuery\nfrom frappe.query_builder.utils import DocType\n\n\[email protected]()\ndef get_list_settings(doctype):\n\ttry:\n\t\treturn frappe.get_cached_doc(\"List View Settings\", doctype)\n\texcept frappe.DoesNotExistError:\n\t\tfrappe.clear_messages()\n\n\[email protected]()\ndef set_list_settings(doctype, values):\n\ttry:\n\t\tdoc = frappe.get_doc(\"List View Settings\", doctype)\n\texcept frappe.DoesNotExistError:\n\t\tdoc = frappe.new_doc(\"List View Settings\")\n\t\tdoc.name = doctype\n\t\tfrappe.clear_messages()\n\tdoc.update(frappe.parse_json(values))\n\tdoc.save()\n\n\[email protected]()\ndef get_group_by_count(doctype: str, current_filters: str, field: str) -> list[dict]:\n\tcurrent_filters = frappe.parse_json(current_filters)\n\n\tif field == \"assigned_to\":\n\t\tToDo = DocType(\"ToDo\")\n\t\tUser = DocType(\"User\")\n\t\tcount = Count(\"*\").as_(\"count\")\n\t\tfiltered_records = frappe.qb.get_query(\n\t\t\tdoctype,\n\t\t\tfilters=current_filters,\n\t\t\tfields=[\"name\"],\n\t\t\tvalidate_filters=True,\n\t\t)\n\n\t\treturn (\n\t\t\tfrappe.qb.from_(ToDo)\n\t\t\t.from_(User)\n\t\t\t.select(ToDo.allocated_to.as_(\"name\"), count)\n\t\t\t.where(\n\t\t\t\t(ToDo.status != \"Cancelled\")\n\t\t\t\t& (ToDo.allocated_to == User.name)\n\t\t\t\t& (User.user_type == \"System User\")\n\t\t\t\t& (ToDo.reference_name.isin(SubQuery(filtered_records)))\n\t\t\t)\n\t\t\t.groupby(ToDo.allocated_to)\n\t\t\t.orderby(count, order=Order.desc)\n\t\t\t.limit(50)\n\t\t\t.run(as_dict=True)\n\t\t)\n\n\tif not frappe.get_meta(doctype).has_field(field) and not is_default_field(field):\n\t\traise ValueError(\"Field does not belong to doctype\")\n\n\treturn frappe.get_list(\n\t\tdoctype,\n\t\tfilters=current_filters,\n\t\tgroup_by=f\"`tab{doctype}`.{field}\",\n\t\tfields=[\"count(*) as 
count\", f\"`{field}` as name\"],\n\t\torder_by=\"count desc\",\n\t\tlimit=50,\n\t)\n", "path": "frappe/desk/listview.py"}], "after_files": [{"content": "# Copyright (c) 2022, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\nimport frappe\nfrom frappe.model import is_default_field\nfrom frappe.query_builder import Order\nfrom frappe.query_builder.functions import Count\nfrom frappe.query_builder.terms import SubQuery\nfrom frappe.query_builder.utils import DocType\n\n\[email protected]()\ndef get_list_settings(doctype):\n\ttry:\n\t\treturn frappe.get_cached_doc(\"List View Settings\", doctype)\n\texcept frappe.DoesNotExistError:\n\t\tfrappe.clear_messages()\n\n\[email protected]()\ndef set_list_settings(doctype, values):\n\ttry:\n\t\tdoc = frappe.get_doc(\"List View Settings\", doctype)\n\texcept frappe.DoesNotExistError:\n\t\tdoc = frappe.new_doc(\"List View Settings\")\n\t\tdoc.name = doctype\n\t\tfrappe.clear_messages()\n\tdoc.update(frappe.parse_json(values))\n\tdoc.save()\n\n\[email protected]()\ndef get_group_by_count(doctype: str, current_filters: str, field: str) -> list[dict]:\n\tcurrent_filters = frappe.parse_json(current_filters)\n\n\tif field == \"assigned_to\":\n\t\tToDo = DocType(\"ToDo\")\n\t\tUser = DocType(\"User\")\n\t\tcount = Count(\"*\").as_(\"count\")\n\t\tfiltered_records = frappe.qb.get_query(\n\t\t\tdoctype,\n\t\t\tfilters=current_filters,\n\t\t\tfields=[\"name\"],\n\t\t\tvalidate_filters=True,\n\t\t)\n\n\t\treturn (\n\t\t\tfrappe.qb.from_(ToDo)\n\t\t\t.from_(User)\n\t\t\t.select(ToDo.allocated_to.as_(\"name\"), count)\n\t\t\t.where(\n\t\t\t\t(ToDo.status != \"Cancelled\")\n\t\t\t\t& (ToDo.allocated_to == User.name)\n\t\t\t\t& (User.user_type == \"System User\")\n\t\t\t\t& (ToDo.reference_name.isin(SubQuery(filtered_records)))\n\t\t\t)\n\t\t\t.groupby(ToDo.allocated_to)\n\t\t\t.orderby(count, order=Order.desc)\n\t\t\t.limit(50)\n\t\t\t.run(as_dict=True)\n\t\t)\n\n\tmeta = frappe.get_meta(doctype)\n\n\tif not meta.has_field(field) and not is_default_field(field):\n\t\traise ValueError(\"Field does not belong to doctype\")\n\n\tdata = frappe.get_list(\n\t\tdoctype,\n\t\tfilters=current_filters,\n\t\tgroup_by=f\"`tab{doctype}`.{field}\",\n\t\tfields=[\"count(*) as count\", f\"`{field}` as name\"],\n\t\torder_by=\"count desc\",\n\t\tlimit=50,\n\t)\n\n\t# Add in title if it's a link field and `show_title_field_in_link` is set\n\tif (field_meta := meta.get_field(field)) and field_meta.fieldtype == \"Link\":\n\t\tlink_meta = frappe.get_meta(field_meta.options)\n\t\tif link_meta.show_title_field_in_link:\n\t\t\ttitle_field = link_meta.get_title_field()\n\t\t\tfor item in data:\n\t\t\t\titem.title = frappe.get_value(field_meta.options, item.name, title_field)\n\n\treturn data\n", "path": "frappe/desk/listview.py"}]} | 1,060 | 302 |
gh_patches_debug_9424 | rasdani/github-patches | git_diff | napari__napari-6057 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[ipython] File > Save Screenshot... warns about replacing if extension is provided
## 🐛 Bug
If a napari viewer is launched from ipython and you use File > Save Screenshot... then everything works fine if you just enter a file name, **but if you specify the extension**—out of habit or to change the file format—then napari will warn that the file already exists, even if it does not.
Regardless of choice, the file is created and everything works.
Looking carefully, when one clicks Save the screen flashes, the file appears in the list in the dialog, and the warning pops up--the dialog remains up.
It's like the file is created and then it tries a second time when it goes to close the dialog.
Edit:
On the other hand, *if you pass a filename that exists,* **but without extension**, it will correctly warn that the file exists, but if you say `no` to overwriting, the dialog closes with: `WARNING: QDialog::exec: Recursive call detected`
If you *do pass a full name with extension*, it will correctly warn that the file exists, and saying `no` will return you to the dialog to modify the name—as expected.
## To Reproduce
Steps to reproduce the behavior:
1. launch ipython and use
```
import napari
viewer = napari.Viewer()
```
2. open any sample image or add any layer
3. File > Save Screenshot...
4. enter a file name **with an extension** but make sure it's a unique name.
## Expected behavior
No warning should be raised if the file doesn't exist, the dialog should just disappear and the file should be created.
If the warning is raised and the user clicks `no` to not overwrite, the dialog should remain open so the user can change the name.
## Environment
```
napari: 0.4.18
Platform: macOS-13.4.1-arm64-arm-64bit
System: MacOS 13.4.1
Python: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:41:52) [Clang 15.0.7 ]
Qt: 5.15.6
PyQt5: 5.15.7
NumPy: 1.25.1
SciPy: 1.11.1
Dask: 2023.7.0
VisPy: 0.12.2
magicgui: 0.7.2
superqt: unknown
in-n-out: 0.1.8
app-model: 0.2.0
npe2: 0.7.0
OpenGL:
- GL version: 2.1 Metal - 83.1
- MAX_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 1800x1169, scale 2.0
Settings path:
- /Users/sobolp/Library/Application Support/napari/napari-418_d279d6cf5d4193876ed97858e757fe322037331b/settings.yaml
```
## Additional context
Other dialogs, like Save Selected Layer work fine.
Using the native dialog (when running napari from the command line) does not have this issue.
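To make the expected flow concrete, here is a minimal sketch of an `accept()` override that neither re-runs the dialog nor saves when the user declines the overwrite. It is illustrative only, not napari's actual code, and it assumes `self.save_function` is set elsewhere:
```python
import os

from qtpy.QtWidgets import QFileDialog, QMessageBox


class ScreenshotDialogSketch(QFileDialog):
    def accept(self):
        save_path = self.selectedFiles()[0]
        if os.path.splitext(save_path)[1] == "":
            save_path = save_path + ".png"
        if os.path.exists(save_path):
            res = QMessageBox.warning(
                self,
                "Confirm overwrite",
                f"{save_path} already exists. Do you want to replace it?",
                QMessageBox.Yes | QMessageBox.No,
                QMessageBox.No,
            )
            if res != QMessageBox.Yes:
                # Keep the dialog open so the user can choose another name.
                return
        super().accept()
        if self.result():  # only save when the dialog was actually accepted
            self.save_function(save_path)
```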
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/_qt/dialogs/screenshot_dialog.py`
Content:
```
1 import os
2 from pathlib import Path
3 from typing import Any, Callable
4
5 from qtpy.QtWidgets import QFileDialog, QMessageBox
6
7 from napari.utils.misc import in_ipython
8 from napari.utils.translations import trans
9
10 HOME_DIRECTORY = str(Path.home())
11
12
13 class ScreenshotDialog(QFileDialog):
14 """
15 Dialog to chose save location of screenshot.
16
17 Parameters
18 ----------
19 save_function : Callable[[str], Any],
20 Function to be called on success of selecting save location
21 parent : QWidget, optional
22 Optional parent widget for this widget..
23 directory : str, optional
24 Starting directory to be set to File Dialog
25
26 """
27
28 def __init__(
29 self,
30 save_function: Callable[[str], Any],
31 parent=None,
32 directory=HOME_DIRECTORY,
33 history=None,
34 ) -> None:
35 super().__init__(parent, trans._("Save screenshot"))
36 self.setAcceptMode(QFileDialog.AcceptSave)
37 self.setFileMode(QFileDialog.AnyFile)
38 self.setNameFilter(
39 trans._("Image files (*.png *.bmp *.gif *.tif *.tiff)")
40 )
41 self.setDirectory(directory)
42 self.setHistory(history)
43
44 if in_ipython():
45 self.setOptions(QFileDialog.DontUseNativeDialog)
46
47 self.save_function = save_function
48
49 def accept(self):
50 save_path = self.selectedFiles()[0]
51 if os.path.splitext(save_path)[1] == "":
52 save_path = save_path + ".png"
53 if os.path.exists(save_path):
54 res = QMessageBox().warning(
55 self,
56 trans._("Confirm overwrite"),
57 trans._(
58 "{save_path} already exists. Do you want to replace it?",
59 save_path=save_path,
60 ),
61 QMessageBox.Yes | QMessageBox.No,
62 QMessageBox.No,
63 )
64 if res != QMessageBox.Yes:
65 # standard accept return 1, reject 0. This inform that dialog should be reopened
66 super().accept()
67 self.exec_()
68 self.save_function(save_path)
69 return super().accept()
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/_qt/dialogs/screenshot_dialog.py b/napari/_qt/dialogs/screenshot_dialog.py
--- a/napari/_qt/dialogs/screenshot_dialog.py
+++ b/napari/_qt/dialogs/screenshot_dialog.py
@@ -62,8 +62,9 @@
QMessageBox.No,
)
if res != QMessageBox.Yes:
- # standard accept return 1, reject 0. This inform that dialog should be reopened
- super().accept()
- self.exec_()
- self.save_function(save_path)
- return super().accept()
+ # return in this case since a valid name for the
+ # file is needed so the dialog needs to be visible
+ return
+ super().accept()
+ if self.result():
+ self.save_function(save_path)
| {"golden_diff": "diff --git a/napari/_qt/dialogs/screenshot_dialog.py b/napari/_qt/dialogs/screenshot_dialog.py\n--- a/napari/_qt/dialogs/screenshot_dialog.py\n+++ b/napari/_qt/dialogs/screenshot_dialog.py\n@@ -62,8 +62,9 @@\n QMessageBox.No,\n )\n if res != QMessageBox.Yes:\n- # standard accept return 1, reject 0. This inform that dialog should be reopened\n- super().accept()\n- self.exec_()\n- self.save_function(save_path)\n- return super().accept()\n+ # return in this case since a valid name for the\n+ # file is needed so the dialog needs to be visible\n+ return\n+ super().accept()\n+ if self.result():\n+ self.save_function(save_path)\n", "issue": "[ipython] File > Save Screenshot... warns about replacing if extension is provided\n## \ud83d\udc1b Bug\r\n\r\nIf a napari viewer is launched from ipython and you use File > Save Screenshot... then everything works fine if you just enter a file name, **but if you specify the extension**\u2014out of habit or to change the file format\u2014then napari will warn that the file already exists, even if it does not.\r\nRegardless of choice, the file is created and everything works.\r\n\r\nLooking carefully, when one clicks Save the screen flashes, the file appears in the list in the dialog, and the warning pops up--the dialog remains up.\r\nIt's like the file is created and then it tries a second time when it goes to close the dialog.\r\n\r\nEdit:\r\nOn the other hand, *if you pass a filename that exists,* **but without extension**, it will correctly warn that the file exists, but if say `no` to overwriting, the dialog closes with: `WARNING: QDialog::exec: Recursive call detected`\r\nIf you *do pass a full name with extension*, it will correctly warn that the file exists, and saying `no` will return you to the dialog to modify the name\u2014as expected.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. launch ipython and use \r\n```\r\nimport napari\r\nviewer = napari.Viewer()\r\n```\r\n3. open any sample image or add any layer\r\n4. File > Save Screenshot...\r\n5. 
enter a file name **with an extension** but make sure it's a unique name.\r\n\r\n## Expected behavior\r\n\r\nNo warning should be raised if the file doesn't exist, the dialog should just disappear and the file should be created.\r\nIf the warning is raised and the user clicks `no` to not overwrite, the dialog should remain open so the user can change the name.\r\n\r\n## Environment\r\n\r\n```\r\nnapari: 0.4.18\r\nPlatform: macOS-13.4.1-arm64-arm-64bit\r\nSystem: MacOS 13.4.1\r\nPython: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:41:52) [Clang 15.0.7 ]\r\nQt: 5.15.6\r\nPyQt5: 5.15.7\r\nNumPy: 1.25.1\r\nSciPy: 1.11.1\r\nDask: 2023.7.0\r\nVisPy: 0.12.2\r\nmagicgui: 0.7.2\r\nsuperqt: unknown\r\nin-n-out: 0.1.8\r\napp-model: 0.2.0\r\nnpe2: 0.7.0\r\n\r\nOpenGL:\r\n- GL version: 2.1 Metal - 83.1\r\n- MAX_TEXTURE_SIZE: 16384\r\n\r\nScreens:\r\n- screen 1: resolution 1800x1169, scale 2.0\r\n\r\nSettings path:\r\n- /Users/sobolp/Library/Application Support/napari/napari-418_d279d6cf5d4193876ed97858e757fe322037331b/settings.yaml\r\n\r\n```\r\n\r\n## Additional context\r\n\r\nOther dialogs, like Save Selected Layer work fine.\r\nUsing the native dialog (when running napari from the command line) does not have this issue.\n", "before_files": [{"content": "import os\nfrom pathlib import Path\nfrom typing import Any, Callable\n\nfrom qtpy.QtWidgets import QFileDialog, QMessageBox\n\nfrom napari.utils.misc import in_ipython\nfrom napari.utils.translations import trans\n\nHOME_DIRECTORY = str(Path.home())\n\n\nclass ScreenshotDialog(QFileDialog):\n \"\"\"\n Dialog to chose save location of screenshot.\n\n Parameters\n ----------\n save_function : Callable[[str], Any],\n Function to be called on success of selecting save location\n parent : QWidget, optional\n Optional parent widget for this widget..\n directory : str, optional\n Starting directory to be set to File Dialog\n\n \"\"\"\n\n def __init__(\n self,\n save_function: Callable[[str], Any],\n parent=None,\n directory=HOME_DIRECTORY,\n history=None,\n ) -> None:\n super().__init__(parent, trans._(\"Save screenshot\"))\n self.setAcceptMode(QFileDialog.AcceptSave)\n self.setFileMode(QFileDialog.AnyFile)\n self.setNameFilter(\n trans._(\"Image files (*.png *.bmp *.gif *.tif *.tiff)\")\n )\n self.setDirectory(directory)\n self.setHistory(history)\n\n if in_ipython():\n self.setOptions(QFileDialog.DontUseNativeDialog)\n\n self.save_function = save_function\n\n def accept(self):\n save_path = self.selectedFiles()[0]\n if os.path.splitext(save_path)[1] == \"\":\n save_path = save_path + \".png\"\n if os.path.exists(save_path):\n res = QMessageBox().warning(\n self,\n trans._(\"Confirm overwrite\"),\n trans._(\n \"{save_path} already exists. Do you want to replace it?\",\n save_path=save_path,\n ),\n QMessageBox.Yes | QMessageBox.No,\n QMessageBox.No,\n )\n if res != QMessageBox.Yes:\n # standard accept return 1, reject 0. 
This inform that dialog should be reopened\n super().accept()\n self.exec_()\n self.save_function(save_path)\n return super().accept()\n", "path": "napari/_qt/dialogs/screenshot_dialog.py"}], "after_files": [{"content": "import os\nfrom pathlib import Path\nfrom typing import Any, Callable\n\nfrom qtpy.QtWidgets import QFileDialog, QMessageBox\n\nfrom napari.utils.misc import in_ipython\nfrom napari.utils.translations import trans\n\nHOME_DIRECTORY = str(Path.home())\n\n\nclass ScreenshotDialog(QFileDialog):\n \"\"\"\n Dialog to chose save location of screenshot.\n\n Parameters\n ----------\n save_function : Callable[[str], Any],\n Function to be called on success of selecting save location\n parent : QWidget, optional\n Optional parent widget for this widget..\n directory : str, optional\n Starting directory to be set to File Dialog\n\n \"\"\"\n\n def __init__(\n self,\n save_function: Callable[[str], Any],\n parent=None,\n directory=HOME_DIRECTORY,\n history=None,\n ) -> None:\n super().__init__(parent, trans._(\"Save screenshot\"))\n self.setAcceptMode(QFileDialog.AcceptSave)\n self.setFileMode(QFileDialog.AnyFile)\n self.setNameFilter(\n trans._(\"Image files (*.png *.bmp *.gif *.tif *.tiff)\")\n )\n self.setDirectory(directory)\n self.setHistory(history)\n\n if in_ipython():\n self.setOptions(QFileDialog.DontUseNativeDialog)\n\n self.save_function = save_function\n\n def accept(self):\n save_path = self.selectedFiles()[0]\n if os.path.splitext(save_path)[1] == \"\":\n save_path = save_path + \".png\"\n if os.path.exists(save_path):\n res = QMessageBox().warning(\n self,\n trans._(\"Confirm overwrite\"),\n trans._(\n \"{save_path} already exists. Do you want to replace it?\",\n save_path=save_path,\n ),\n QMessageBox.Yes | QMessageBox.No,\n QMessageBox.No,\n )\n if res != QMessageBox.Yes:\n # return in this case since a valid name for the\n # file is needed so the dialog needs to be visible\n return\n super().accept()\n if self.result():\n self.save_function(save_path)\n", "path": "napari/_qt/dialogs/screenshot_dialog.py"}]} | 1,571 | 180 |
gh_patches_debug_14823 | rasdani/github-patches | git_diff | kornia__kornia-2620 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
expose `average_endpoint_error` on `kornia.metrics.__init__.py`
can you expose it on kornia.metrics.__init__.py ? - to use as `kornia.metrics.average_endpoint_error` too
_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/2615#discussion_r1351007042_
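Presumably the change amounts to re-exporting the existing function from the package `__init__`, roughly as sketched below (assuming `average_endpoint_error` is defined in `kornia/metrics/endpoint_error.py`):
```python
# kornia/metrics/__init__.py (sketch)
from .endpoint_error import AEPE, aepe, average_endpoint_error

__all__ = [
    # ... existing exports ...
    "aepe",
    "AEPE",
    "average_endpoint_error",
]
```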
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/metrics/__init__.py`
Content:
```
1 from .accuracy import accuracy
2 from .average_meter import AverageMeter
3 from .confusion_matrix import confusion_matrix
4 from .endpoint_error import AEPE, aepe
5 from .mean_average_precision import mean_average_precision
6 from .mean_iou import mean_iou, mean_iou_bbox
7 from .psnr import psnr
8 from .ssim import SSIM, ssim
9 from .ssim3d import SSIM3D, ssim3d
10
11 __all__ = [
12 "accuracy",
13 "AverageMeter",
14 "confusion_matrix",
15 "aepe",
16 "AEPE",
17 "mean_iou",
18 "mean_iou_bbox",
19 "mean_average_precision",
20 "psnr",
21 "ssim",
22 "ssim3d",
23 "SSIM",
24 "SSIM3D",
25 ]
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/metrics/__init__.py b/kornia/metrics/__init__.py
--- a/kornia/metrics/__init__.py
+++ b/kornia/metrics/__init__.py
@@ -1,7 +1,7 @@
from .accuracy import accuracy
from .average_meter import AverageMeter
from .confusion_matrix import confusion_matrix
-from .endpoint_error import AEPE, aepe
+from .endpoint_error import AEPE, aepe, average_endpoint_error
from .mean_average_precision import mean_average_precision
from .mean_iou import mean_iou, mean_iou_bbox
from .psnr import psnr
@@ -14,6 +14,7 @@
"confusion_matrix",
"aepe",
"AEPE",
+ "average_endpoint_error",
"mean_iou",
"mean_iou_bbox",
"mean_average_precision",
| {"golden_diff": "diff --git a/kornia/metrics/__init__.py b/kornia/metrics/__init__.py\n--- a/kornia/metrics/__init__.py\n+++ b/kornia/metrics/__init__.py\n@@ -1,7 +1,7 @@\n from .accuracy import accuracy\n from .average_meter import AverageMeter\n from .confusion_matrix import confusion_matrix\n-from .endpoint_error import AEPE, aepe\n+from .endpoint_error import AEPE, aepe, average_endpoint_error\n from .mean_average_precision import mean_average_precision\n from .mean_iou import mean_iou, mean_iou_bbox\n from .psnr import psnr\n@@ -14,6 +14,7 @@\n \"confusion_matrix\",\n \"aepe\",\n \"AEPE\",\n+ \"average_endpoint_error\",\n \"mean_iou\",\n \"mean_iou_bbox\",\n \"mean_average_precision\",\n", "issue": "expose `average_endpoint_error` on `kornia.metrics.__init__.py`\n can you expose it on kornia.metrics.__init__.py ? - to use as `kornia.metrics.average_endpoint_error` too\r\n\r\n_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/2615#discussion_r1351007042_\r\n \n", "before_files": [{"content": "from .accuracy import accuracy\nfrom .average_meter import AverageMeter\nfrom .confusion_matrix import confusion_matrix\nfrom .endpoint_error import AEPE, aepe\nfrom .mean_average_precision import mean_average_precision\nfrom .mean_iou import mean_iou, mean_iou_bbox\nfrom .psnr import psnr\nfrom .ssim import SSIM, ssim\nfrom .ssim3d import SSIM3D, ssim3d\n\n__all__ = [\n \"accuracy\",\n \"AverageMeter\",\n \"confusion_matrix\",\n \"aepe\",\n \"AEPE\",\n \"mean_iou\",\n \"mean_iou_bbox\",\n \"mean_average_precision\",\n \"psnr\",\n \"ssim\",\n \"ssim3d\",\n \"SSIM\",\n \"SSIM3D\",\n]\n", "path": "kornia/metrics/__init__.py"}], "after_files": [{"content": "from .accuracy import accuracy\nfrom .average_meter import AverageMeter\nfrom .confusion_matrix import confusion_matrix\nfrom .endpoint_error import AEPE, aepe, average_endpoint_error\nfrom .mean_average_precision import mean_average_precision\nfrom .mean_iou import mean_iou, mean_iou_bbox\nfrom .psnr import psnr\nfrom .ssim import SSIM, ssim\nfrom .ssim3d import SSIM3D, ssim3d\n\n__all__ = [\n \"accuracy\",\n \"AverageMeter\",\n \"confusion_matrix\",\n \"aepe\",\n \"AEPE\",\n \"average_endpoint_error\",\n \"mean_iou\",\n \"mean_iou_bbox\",\n \"mean_average_precision\",\n \"psnr\",\n \"ssim\",\n \"ssim3d\",\n \"SSIM\",\n \"SSIM3D\",\n]\n", "path": "kornia/metrics/__init__.py"}]} | 570 | 195 |
gh_patches_debug_56929 | rasdani/github-patches | git_diff | kivy__python-for-android-575 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HTTP 302 recipe download file
Downloading from SourceForge with mirror autoselection results in an HTTP 302 redirect instead of a direct download.
Trying to build the clean recipe below results in:
```
RAN: '/bin/tar xzf ~/.local/share/python-for-android/packages/boost/boost_1_58_0.tar.gz'
STDOUT:
STDERR:
gzip: stdin: not in gzip format
/bin/tar: Child returned status 1
/bin/tar: Error is not recoverable: exiting now
```
Because this .tar.gz file is in reality the 302 HTML (ASCII) page.
```
from pythonforandroid.toolchain import Recipe, shprint, shutil, current_directory
from os.path import exists, join
class BoostRecipe(Recipe):
version = '1.58.0'
url = 'http://downloads.sourceforge.net/project/boost/boost/{version}/boost_1_58_0.tar.gz'
recipe = BoostRecipe()
```
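A quick way to confirm this failure mode, independent of any particular fix, is to check the gzip magic bytes of the downloaded file before handing it to tar. This is a diagnostic sketch only, not part of python-for-android:
```python
def looks_like_gzip(path):
    # A gzip stream starts with the magic bytes 0x1f 0x8b; an HTML redirect
    # page saved under a .tar.gz name fails this check.
    with open(path, 'rb') as fd:
        return fd.read(2) == b'\x1f\x8b'

assert looks_like_gzip('boost_1_58_0.tar.gz'), 'download is not a gzip archive'
```
Whether the download helper should then follow the redirect itself or present a different User-Agent so the mirror serves the archive directly is a separate choice.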
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/util.py`
Content:
```
1 import contextlib
2 from os.path import exists
3 from os import getcwd, chdir, makedirs
4 import io
5 import json
6 import shutil
7 import sys
8 from tempfile import mkdtemp
9 try:
10 from urllib.request import FancyURLopener
11 except ImportError:
12 from urllib import FancyURLopener
13
14 from pythonforandroid.logger import (logger, Err_Fore)
15
16 IS_PY3 = sys.version_info[0] >= 3
17
18 if IS_PY3:
19 unistr = str
20 else:
21 unistr = unicode
22
23
24 class ChromeDownloader(FancyURLopener):
25 version = (
26 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
27 '(KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36')
28
29 urlretrieve = ChromeDownloader().retrieve
30
31
32 @contextlib.contextmanager
33 def current_directory(new_dir):
34 cur_dir = getcwd()
35 logger.info(''.join((Err_Fore.CYAN, '-> directory context ', new_dir,
36 Err_Fore.RESET)))
37 chdir(new_dir)
38 yield
39 logger.info(''.join((Err_Fore.CYAN, '<- directory context ', cur_dir,
40 Err_Fore.RESET)))
41 chdir(cur_dir)
42
43
44 @contextlib.contextmanager
45 def temp_directory():
46 temp_dir = mkdtemp()
47 try:
48 logger.debug(''.join((Err_Fore.CYAN, ' + temp directory used ',
49 temp_dir, Err_Fore.RESET)))
50 yield temp_dir
51 finally:
52 shutil.rmtree(temp_dir)
53 logger.debug(''.join((Err_Fore.CYAN, ' - temp directory deleted ',
54 temp_dir, Err_Fore.RESET)))
55
56
57 def ensure_dir(filename):
58 if not exists(filename):
59 makedirs(filename)
60
61
62 class JsonStore(object):
63 """Replacement of shelve using json, needed for support python 2 and 3.
64 """
65
66 def __init__(self, filename):
67 super(JsonStore, self).__init__()
68 self.filename = filename
69 self.data = {}
70 if exists(filename):
71 try:
72 with io.open(filename, encoding='utf-8') as fd:
73 self.data = json.load(fd)
74 except ValueError:
75 print("Unable to read the state.db, content will be replaced.")
76
77 def __getitem__(self, key):
78 return self.data[key]
79
80 def __setitem__(self, key, value):
81 self.data[key] = value
82 self.sync()
83
84 def __delitem__(self, key):
85 del self.data[key]
86 self.sync()
87
88 def __contains__(self, item):
89 return item in self.data
90
91 def get(self, item, default=None):
92 return self.data.get(item, default)
93
94 def keys(self):
95 return self.data.keys()
96
97 def remove_all(self, prefix):
98 for key in self.data.keys()[:]:
99 if not key.startswith(prefix):
100 continue
101 del self.data[key]
102 self.sync()
103
104 def sync(self):
105 # http://stackoverflow.com/questions/12309269/write-json-data-to-file-in-python/14870531#14870531
106 if IS_PY3:
107 with open(self.filename, 'w') as fd:
108 json.dump(self.data, fd, ensure_ascii=False)
109 else:
110 with io.open(self.filename, 'w', encoding='utf-8') as fd:
111 fd.write(unicode(json.dumps(self.data, ensure_ascii=False)))
112
113
114 def which(program, path_env):
115 '''Locate an executable in the system.'''
116 import os
117
118 def is_exe(fpath):
119 return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
120
121 fpath, fname = os.path.split(program)
122 if fpath:
123 if is_exe(program):
124 return program
125 else:
126 for path in path_env.split(os.pathsep):
127 path = path.strip('"')
128 exe_file = os.path.join(path, program)
129 if is_exe(exe_file):
130 return exe_file
131
132 return None
133
134
135 def get_directory(filename):
136 '''If the filename ends with a recognised file extension, return the
137 filename without this extension.'''
138 if filename.endswith('.tar.gz'):
139 return basename(filename[:-7])
140 elif filename.endswith('.tgz'):
141 return basename(filename[:-4])
142 elif filename.endswith('.tar.bz2'):
143 return basename(filename[:-8])
144 elif filename.endswith('.tbz2'):
145 return basename(filename[:-5])
146 elif filename.endswith('.zip'):
147 return basename(filename[:-4])
148 info('Unknown file extension for {}'.format(filename))
149 exit(1)
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/util.py b/pythonforandroid/util.py
--- a/pythonforandroid/util.py
+++ b/pythonforandroid/util.py
@@ -21,12 +21,10 @@
unistr = unicode
-class ChromeDownloader(FancyURLopener):
- version = (
- 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
- '(KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36')
+class WgetDownloader(FancyURLopener):
+ version = ('Wget/1.17.1')
-urlretrieve = ChromeDownloader().retrieve
+urlretrieve = WgetDownloader().retrieve
@contextlib.contextmanager
| {"golden_diff": "diff --git a/pythonforandroid/util.py b/pythonforandroid/util.py\n--- a/pythonforandroid/util.py\n+++ b/pythonforandroid/util.py\n@@ -21,12 +21,10 @@\n unistr = unicode\n \n \n-class ChromeDownloader(FancyURLopener):\n- version = (\n- 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '\n- '(KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36')\n+class WgetDownloader(FancyURLopener):\n+ version = ('Wget/1.17.1')\n \n-urlretrieve = ChromeDownloader().retrieve\n+urlretrieve = WgetDownloader().retrieve\n \n \n @contextlib.contextmanager\n", "issue": "HTTP 302 recipe download file\nDownloading from sourceforge autoselecting an mirror results in a HTTP 302 instead of a direct download.\nTrying to build this clean recipe below results in:\n\n```\nRAN: '/bin/tar xzf ~/.local/share/python-for-android/packages/boost/boost_1_58_0.tar.gz'\n STDOUT:\n STDERR:\ngzip: stdin: not in gzip format\n/bin/tar: Child returned status 1\n/bin/tar: Error is not recoverable: exiting now\n```\n\nBecause this .tar.gz file is in reality the 302 HTML (ASCII) page.\n\n```\nfrom pythonforandroid.toolchain import Recipe, shprint, shutil, current_directory\nfrom os.path import exists, join\n\nclass BoostRecipe(Recipe):\n version = '1.58.0'\n url = 'http://downloads.sourceforge.net/project/boost/boost/{version}/boost_1_58_0.tar.gz'\n\nrecipe = BoostRecipe()\n```\n\n", "before_files": [{"content": "import contextlib\nfrom os.path import exists\nfrom os import getcwd, chdir, makedirs\nimport io\nimport json\nimport shutil\nimport sys\nfrom tempfile import mkdtemp\ntry:\n from urllib.request import FancyURLopener\nexcept ImportError:\n from urllib import FancyURLopener\n\nfrom pythonforandroid.logger import (logger, Err_Fore)\n\nIS_PY3 = sys.version_info[0] >= 3\n\nif IS_PY3:\n unistr = str\nelse:\n unistr = unicode\n\n\nclass ChromeDownloader(FancyURLopener):\n version = (\n 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '\n '(KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36')\n\nurlretrieve = ChromeDownloader().retrieve\n\n\[email protected]\ndef current_directory(new_dir):\n cur_dir = getcwd()\n logger.info(''.join((Err_Fore.CYAN, '-> directory context ', new_dir,\n Err_Fore.RESET)))\n chdir(new_dir)\n yield\n logger.info(''.join((Err_Fore.CYAN, '<- directory context ', cur_dir,\n Err_Fore.RESET)))\n chdir(cur_dir)\n\n\[email protected]\ndef temp_directory():\n temp_dir = mkdtemp()\n try:\n logger.debug(''.join((Err_Fore.CYAN, ' + temp directory used ',\n temp_dir, Err_Fore.RESET)))\n yield temp_dir\n finally:\n shutil.rmtree(temp_dir)\n logger.debug(''.join((Err_Fore.CYAN, ' - temp directory deleted ',\n temp_dir, Err_Fore.RESET)))\n\n\ndef ensure_dir(filename):\n if not exists(filename):\n makedirs(filename)\n\n\nclass JsonStore(object):\n \"\"\"Replacement of shelve using json, needed for support python 2 and 3.\n \"\"\"\n\n def __init__(self, filename):\n super(JsonStore, self).__init__()\n self.filename = filename\n self.data = {}\n if exists(filename):\n try:\n with io.open(filename, encoding='utf-8') as fd:\n self.data = json.load(fd)\n except ValueError:\n print(\"Unable to read the state.db, content will be replaced.\")\n\n def __getitem__(self, key):\n return self.data[key]\n\n def __setitem__(self, key, value):\n self.data[key] = value\n self.sync()\n\n def __delitem__(self, key):\n del self.data[key]\n self.sync()\n\n def __contains__(self, item):\n return item in self.data\n\n def get(self, item, default=None):\n return self.data.get(item, default)\n\n def keys(self):\n return 
self.data.keys()\n\n def remove_all(self, prefix):\n for key in self.data.keys()[:]:\n if not key.startswith(prefix):\n continue\n del self.data[key]\n self.sync()\n\n def sync(self):\n # http://stackoverflow.com/questions/12309269/write-json-data-to-file-in-python/14870531#14870531\n if IS_PY3:\n with open(self.filename, 'w') as fd:\n json.dump(self.data, fd, ensure_ascii=False)\n else:\n with io.open(self.filename, 'w', encoding='utf-8') as fd:\n fd.write(unicode(json.dumps(self.data, ensure_ascii=False)))\n\n\ndef which(program, path_env):\n '''Locate an executable in the system.'''\n import os\n\n def is_exe(fpath):\n return os.path.isfile(fpath) and os.access(fpath, os.X_OK)\n\n fpath, fname = os.path.split(program)\n if fpath:\n if is_exe(program):\n return program\n else:\n for path in path_env.split(os.pathsep):\n path = path.strip('\"')\n exe_file = os.path.join(path, program)\n if is_exe(exe_file):\n return exe_file\n\n return None\n\n\ndef get_directory(filename):\n '''If the filename ends with a recognised file extension, return the\n filename without this extension.'''\n if filename.endswith('.tar.gz'):\n return basename(filename[:-7])\n elif filename.endswith('.tgz'):\n return basename(filename[:-4])\n elif filename.endswith('.tar.bz2'):\n return basename(filename[:-8])\n elif filename.endswith('.tbz2'):\n return basename(filename[:-5])\n elif filename.endswith('.zip'):\n return basename(filename[:-4])\n info('Unknown file extension for {}'.format(filename))\n exit(1)\n", "path": "pythonforandroid/util.py"}], "after_files": [{"content": "import contextlib\nfrom os.path import exists\nfrom os import getcwd, chdir, makedirs\nimport io\nimport json\nimport shutil\nimport sys\nfrom tempfile import mkdtemp\ntry:\n from urllib.request import FancyURLopener\nexcept ImportError:\n from urllib import FancyURLopener\n\nfrom pythonforandroid.logger import (logger, Err_Fore)\n\nIS_PY3 = sys.version_info[0] >= 3\n\nif IS_PY3:\n unistr = str\nelse:\n unistr = unicode\n\n\nclass WgetDownloader(FancyURLopener):\n version = ('Wget/1.17.1')\n\nurlretrieve = WgetDownloader().retrieve\n\n\[email protected]\ndef current_directory(new_dir):\n cur_dir = getcwd()\n logger.info(''.join((Err_Fore.CYAN, '-> directory context ', new_dir,\n Err_Fore.RESET)))\n chdir(new_dir)\n yield\n logger.info(''.join((Err_Fore.CYAN, '<- directory context ', cur_dir,\n Err_Fore.RESET)))\n chdir(cur_dir)\n\n\[email protected]\ndef temp_directory():\n temp_dir = mkdtemp()\n try:\n logger.debug(''.join((Err_Fore.CYAN, ' + temp directory used ',\n temp_dir, Err_Fore.RESET)))\n yield temp_dir\n finally:\n shutil.rmtree(temp_dir)\n logger.debug(''.join((Err_Fore.CYAN, ' - temp directory deleted ',\n temp_dir, Err_Fore.RESET)))\n\n\ndef ensure_dir(filename):\n if not exists(filename):\n makedirs(filename)\n\n\nclass JsonStore(object):\n \"\"\"Replacement of shelve using json, needed for support python 2 and 3.\n \"\"\"\n\n def __init__(self, filename):\n super(JsonStore, self).__init__()\n self.filename = filename\n self.data = {}\n if exists(filename):\n try:\n with io.open(filename, encoding='utf-8') as fd:\n self.data = json.load(fd)\n except ValueError:\n print(\"Unable to read the state.db, content will be replaced.\")\n\n def __getitem__(self, key):\n return self.data[key]\n\n def __setitem__(self, key, value):\n self.data[key] = value\n self.sync()\n\n def __delitem__(self, key):\n del self.data[key]\n self.sync()\n\n def __contains__(self, item):\n return item in self.data\n\n def get(self, item, default=None):\n 
return self.data.get(item, default)\n\n def keys(self):\n return self.data.keys()\n\n def remove_all(self, prefix):\n for key in self.data.keys()[:]:\n if not key.startswith(prefix):\n continue\n del self.data[key]\n self.sync()\n\n def sync(self):\n # http://stackoverflow.com/questions/12309269/write-json-data-to-file-in-python/14870531#14870531\n if IS_PY3:\n with open(self.filename, 'w') as fd:\n json.dump(self.data, fd, ensure_ascii=False)\n else:\n with io.open(self.filename, 'w', encoding='utf-8') as fd:\n fd.write(unicode(json.dumps(self.data, ensure_ascii=False)))\n\n\ndef which(program, path_env):\n '''Locate an executable in the system.'''\n import os\n\n def is_exe(fpath):\n return os.path.isfile(fpath) and os.access(fpath, os.X_OK)\n\n fpath, fname = os.path.split(program)\n if fpath:\n if is_exe(program):\n return program\n else:\n for path in path_env.split(os.pathsep):\n path = path.strip('\"')\n exe_file = os.path.join(path, program)\n if is_exe(exe_file):\n return exe_file\n\n return None\n\n\ndef get_directory(filename):\n '''If the filename ends with a recognised file extension, return the\n filename without this extension.'''\n if filename.endswith('.tar.gz'):\n return basename(filename[:-7])\n elif filename.endswith('.tgz'):\n return basename(filename[:-4])\n elif filename.endswith('.tar.bz2'):\n return basename(filename[:-8])\n elif filename.endswith('.tbz2'):\n return basename(filename[:-5])\n elif filename.endswith('.zip'):\n return basename(filename[:-4])\n info('Unknown file extension for {}'.format(filename))\n exit(1)\n", "path": "pythonforandroid/util.py"}]} | 1,835 | 180 |
gh_patches_debug_26242 | rasdani/github-patches | git_diff | bokeh__bokeh-4929 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Checkbox example is not working as expected
The new checkbox example in master (examples/plotting/file/line_on_off.py) is not working as expected: the plotted lines depend on how many checkboxes are ticked, not on which ones. The reason is that the JS code is not checking for the values but for the existence of the index.
I have a fix for this with a PR coming, and opening this only as an associated issue.
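For context: in JavaScript, `0 in checkbox.active` asks whether the array has an element at index 0, not whether the value 0 is among the active entries, so any N ticked boxes make the first N lines visible. A value-based membership test fixes that; the snippet below (reusing the glyph and widget names from the example file) is only a sketch of the idea, not necessarily the final wording of the example:
```python
from bokeh.models import CustomJS

callback = CustomJS(args=dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox), code="""
    // indexOf tests membership by value, unlike the JavaScript `in` operator
    l0.visible = checkbox.active.indexOf(0) >= 0;
    l1.visible = checkbox.active.indexOf(1) >= 0;
    l2.visible = checkbox.active.indexOf(2) >= 0;
""")
```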
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/plotting/file/line_on_off.py`
Content:
```
1 """ Example demonstrating turning lines on and off - with JS only
2
3 """
4
5 import numpy as np
6
7 from bokeh.io import output_file, show
8 from bokeh.layouts import row
9 from bokeh.palettes import Viridis3
10 from bokeh.plotting import figure
11 from bokeh.models import CheckboxGroup, CustomJS
12
13 output_file("line_on_off.html", title="line_on_off.py example")
14
15 code = """
16 if (0 in checkbox.active) {
17 l0.visible = true
18 } else {
19 l0.visible = false
20 }
21 if (1 in checkbox.active) {
22 l1.visible = true
23 } else {
24 l1.visible = false
25 }
26 if (2 in checkbox.active) {
27 l2.visible = true
28 } else {
29 l2.visible = false
30 }
31 """
32
33 p = figure()
34 props = dict(line_width=4, line_alpha=0.7)
35 x = np.linspace(0, 4 * np.pi, 100)
36 l0 = p.line(x, np.sin(x), color=Viridis3[0], legend="Line 0", **props)
37 l1 = p.line(x, 4 * np.cos(x), color=Viridis3[1], legend="Line 1", **props)
38 l2 = p.line(x, np.tan(x), color=Viridis3[2], legend="Line 2", **props)
39
40 callback = CustomJS(code=code, args={})
41 checkbox = CheckboxGroup(labels=["Line 0", "Line 1", "Line 2"], active=[0, 1, 2], callback=callback, width=100)
42 callback.args = dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox)
43
44 layout = row(checkbox, p)
45 show(layout)
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/plotting/file/line_on_off.py b/examples/plotting/file/line_on_off.py
--- a/examples/plotting/file/line_on_off.py
+++ b/examples/plotting/file/line_on_off.py
@@ -12,24 +12,6 @@
output_file("line_on_off.html", title="line_on_off.py example")
-code = """
- if (0 in checkbox.active) {
- l0.visible = true
- } else {
- l0.visible = false
- }
- if (1 in checkbox.active) {
- l1.visible = true
- } else {
- l1.visible = false
- }
- if (2 in checkbox.active) {
- l2.visible = true
- } else {
- l2.visible = false
- }
-"""
-
p = figure()
props = dict(line_width=4, line_alpha=0.7)
x = np.linspace(0, 4 * np.pi, 100)
@@ -37,9 +19,14 @@
l1 = p.line(x, 4 * np.cos(x), color=Viridis3[1], legend="Line 1", **props)
l2 = p.line(x, np.tan(x), color=Viridis3[2], legend="Line 2", **props)
-callback = CustomJS(code=code, args={})
-checkbox = CheckboxGroup(labels=["Line 0", "Line 1", "Line 2"], active=[0, 1, 2], callback=callback, width=100)
-callback.args = dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox)
+checkbox = CheckboxGroup(labels=["Line 0", "Line 1", "Line 2"],
+ active=[0, 1, 2], width=100)
+checkbox.callback = CustomJS(args=dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox),
+ lang="coffeescript", code="""
+l0.visible = 0 in checkbox.active;
+l1.visible = 1 in checkbox.active;
+l2.visible = 2 in checkbox.active;
+""")
layout = row(checkbox, p)
show(layout)
| {"golden_diff": "diff --git a/examples/plotting/file/line_on_off.py b/examples/plotting/file/line_on_off.py\n--- a/examples/plotting/file/line_on_off.py\n+++ b/examples/plotting/file/line_on_off.py\n@@ -12,24 +12,6 @@\n \n output_file(\"line_on_off.html\", title=\"line_on_off.py example\")\n \n-code = \"\"\"\n- if (0 in checkbox.active) {\n- l0.visible = true\n- } else {\n- l0.visible = false\n- }\n- if (1 in checkbox.active) {\n- l1.visible = true\n- } else {\n- l1.visible = false\n- }\n- if (2 in checkbox.active) {\n- l2.visible = true\n- } else {\n- l2.visible = false\n- }\n-\"\"\"\n-\n p = figure()\n props = dict(line_width=4, line_alpha=0.7)\n x = np.linspace(0, 4 * np.pi, 100)\n@@ -37,9 +19,14 @@\n l1 = p.line(x, 4 * np.cos(x), color=Viridis3[1], legend=\"Line 1\", **props)\n l2 = p.line(x, np.tan(x), color=Viridis3[2], legend=\"Line 2\", **props)\n \n-callback = CustomJS(code=code, args={})\n-checkbox = CheckboxGroup(labels=[\"Line 0\", \"Line 1\", \"Line 2\"], active=[0, 1, 2], callback=callback, width=100)\n-callback.args = dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox)\n+checkbox = CheckboxGroup(labels=[\"Line 0\", \"Line 1\", \"Line 2\"],\n+ active=[0, 1, 2], width=100)\n+checkbox.callback = CustomJS(args=dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox),\n+ lang=\"coffeescript\", code=\"\"\"\n+l0.visible = 0 in checkbox.active;\n+l1.visible = 1 in checkbox.active;\n+l2.visible = 2 in checkbox.active;\n+\"\"\")\n \n layout = row(checkbox, p)\n show(layout)\n", "issue": "Checkbox example is not working as expected\nThe new checkbox example in master (examples/plotting/file/line_on_off.py) is not working as expected, the plotted lines are depend on how many checkbox are ticked and not on which one. The reason is that the js code is not checking for the values but for the existence of the index.\n\nI have a fix for this with a PR coming, and opening this only as an associated issue.\n\n", "before_files": [{"content": "\"\"\" Example demonstrating turning lines on and off - with JS only\n\n\"\"\"\n\nimport numpy as np\n\nfrom bokeh.io import output_file, show\nfrom bokeh.layouts import row\nfrom bokeh.palettes import Viridis3\nfrom bokeh.plotting import figure\nfrom bokeh.models import CheckboxGroup, CustomJS\n\noutput_file(\"line_on_off.html\", title=\"line_on_off.py example\")\n\ncode = \"\"\"\n if (0 in checkbox.active) {\n l0.visible = true\n } else {\n l0.visible = false\n }\n if (1 in checkbox.active) {\n l1.visible = true\n } else {\n l1.visible = false\n }\n if (2 in checkbox.active) {\n l2.visible = true\n } else {\n l2.visible = false\n }\n\"\"\"\n\np = figure()\nprops = dict(line_width=4, line_alpha=0.7)\nx = np.linspace(0, 4 * np.pi, 100)\nl0 = p.line(x, np.sin(x), color=Viridis3[0], legend=\"Line 0\", **props)\nl1 = p.line(x, 4 * np.cos(x), color=Viridis3[1], legend=\"Line 1\", **props)\nl2 = p.line(x, np.tan(x), color=Viridis3[2], legend=\"Line 2\", **props)\n\ncallback = CustomJS(code=code, args={})\ncheckbox = CheckboxGroup(labels=[\"Line 0\", \"Line 1\", \"Line 2\"], active=[0, 1, 2], callback=callback, width=100)\ncallback.args = dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox)\n\nlayout = row(checkbox, p)\nshow(layout)\n", "path": "examples/plotting/file/line_on_off.py"}], "after_files": [{"content": "\"\"\" Example demonstrating turning lines on and off - with JS only\n\n\"\"\"\n\nimport numpy as np\n\nfrom bokeh.io import output_file, show\nfrom bokeh.layouts import row\nfrom bokeh.palettes import Viridis3\nfrom bokeh.plotting import figure\nfrom bokeh.models import 
CheckboxGroup, CustomJS\n\noutput_file(\"line_on_off.html\", title=\"line_on_off.py example\")\n\np = figure()\nprops = dict(line_width=4, line_alpha=0.7)\nx = np.linspace(0, 4 * np.pi, 100)\nl0 = p.line(x, np.sin(x), color=Viridis3[0], legend=\"Line 0\", **props)\nl1 = p.line(x, 4 * np.cos(x), color=Viridis3[1], legend=\"Line 1\", **props)\nl2 = p.line(x, np.tan(x), color=Viridis3[2], legend=\"Line 2\", **props)\n\ncheckbox = CheckboxGroup(labels=[\"Line 0\", \"Line 1\", \"Line 2\"],\n active=[0, 1, 2], width=100)\ncheckbox.callback = CustomJS(args=dict(l0=l0, l1=l1, l2=l2, checkbox=checkbox),\n lang=\"coffeescript\", code=\"\"\"\nl0.visible = 0 in checkbox.active;\nl1.visible = 1 in checkbox.active;\nl2.visible = 2 in checkbox.active;\n\"\"\")\n\nlayout = row(checkbox, p)\nshow(layout)\n", "path": "examples/plotting/file/line_on_off.py"}]} | 828 | 500 |
gh_patches_debug_33573 | rasdani/github-patches | git_diff | pre-commit__pre-commit-966 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve platform command line length limits
I have a question related to #510
When I run `pre-commit run` all hooks work as expected (processing just changed files), but `pre-commit run -a` divides the list of files into 5 batches and runs the same hook in parallel executions. This creates unnecessary checks of files which belong to the same directory.
Is it possible to control that certain hooks should not be executed in parallel but rather pass all filenames to it?
Alternatively, is there an environment variable or something like that which says that pre-commit is running a hook for all files?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/xargs.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import unicode_literals
4
5 import concurrent.futures
6 import contextlib
7 import math
8 import sys
9
10 import six
11
12 from pre_commit import parse_shebang
13 from pre_commit.util import cmd_output
14
15
16 # TODO: properly compute max_length value
17 def _get_platform_max_length():
18 # posix minimum
19 return 4 * 1024
20
21
22 def _command_length(*cmd):
23 full_cmd = ' '.join(cmd)
24
25 # win32 uses the amount of characters, more details at:
26 # https://github.com/pre-commit/pre-commit/pull/839
27 if sys.platform == 'win32':
28 # the python2.x apis require bytes, we encode as UTF-8
29 if six.PY2:
30 return len(full_cmd.encode('utf-8'))
31 else:
32 return len(full_cmd.encode('utf-16le')) // 2
33 else:
34 return len(full_cmd.encode(sys.getfilesystemencoding()))
35
36
37 class ArgumentTooLongError(RuntimeError):
38 pass
39
40
41 def partition(cmd, varargs, target_concurrency, _max_length=None):
42 _max_length = _max_length or _get_platform_max_length()
43
44 # Generally, we try to partition evenly into at least `target_concurrency`
45 # partitions, but we don't want a bunch of tiny partitions.
46 max_args = max(4, math.ceil(len(varargs) / target_concurrency))
47
48 cmd = tuple(cmd)
49 ret = []
50
51 ret_cmd = []
52 # Reversed so arguments are in order
53 varargs = list(reversed(varargs))
54
55 total_length = _command_length(*cmd)
56 while varargs:
57 arg = varargs.pop()
58
59 arg_length = _command_length(arg) + 1
60 if (
61 total_length + arg_length <= _max_length and
62 len(ret_cmd) < max_args
63 ):
64 ret_cmd.append(arg)
65 total_length += arg_length
66 elif not ret_cmd:
67 raise ArgumentTooLongError(arg)
68 else:
69 # We've exceeded the length, yield a command
70 ret.append(cmd + tuple(ret_cmd))
71 ret_cmd = []
72 total_length = _command_length(*cmd)
73 varargs.append(arg)
74
75 ret.append(cmd + tuple(ret_cmd))
76
77 return tuple(ret)
78
79
80 @contextlib.contextmanager
81 def _thread_mapper(maxsize):
82 if maxsize == 1:
83 yield map
84 else:
85 with concurrent.futures.ThreadPoolExecutor(maxsize) as ex:
86 yield ex.map
87
88
89 def xargs(cmd, varargs, **kwargs):
90 """A simplified implementation of xargs.
91
92 negate: Make nonzero successful and zero a failure
93 target_concurrency: Target number of partitions to run concurrently
94 """
95 negate = kwargs.pop('negate', False)
96 target_concurrency = kwargs.pop('target_concurrency', 1)
97 retcode = 0
98 stdout = b''
99 stderr = b''
100
101 try:
102 parse_shebang.normexe(cmd[0])
103 except parse_shebang.ExecutableNotFoundError as e:
104 return e.to_output()
105
106 partitions = partition(cmd, varargs, target_concurrency, **kwargs)
107
108 def run_cmd_partition(run_cmd):
109 return cmd_output(*run_cmd, encoding=None, retcode=None)
110
111 threads = min(len(partitions), target_concurrency)
112 with _thread_mapper(threads) as thread_map:
113 results = thread_map(run_cmd_partition, partitions)
114
115 for proc_retcode, proc_out, proc_err in results:
116 # This is *slightly* too clever so I'll explain it.
117 # First the xor boolean table:
118 # T | F |
119 # +-------+
120 # T | F | T |
121 # --+-------+
122 # F | T | F |
123 # --+-------+
124 # When negate is True, it has the effect of flipping the return
125 # code. Otherwise, the returncode is unchanged.
126 retcode |= bool(proc_retcode) ^ negate
127 stdout += proc_out
128 stderr += proc_err
129
130 return retcode, stdout, stderr
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/xargs.py b/pre_commit/xargs.py
--- a/pre_commit/xargs.py
+++ b/pre_commit/xargs.py
@@ -5,6 +5,7 @@
import concurrent.futures
import contextlib
import math
+import os
import sys
import six
@@ -13,10 +14,24 @@
from pre_commit.util import cmd_output
-# TODO: properly compute max_length value
-def _get_platform_max_length():
- # posix minimum
- return 4 * 1024
+def _environ_size(_env=None):
+ environ = _env if _env is not None else getattr(os, 'environb', os.environ)
+ size = 8 * len(environ) # number of pointers in `envp`
+ for k, v in environ.items():
+ size += len(k) + len(v) + 2 # c strings in `envp`
+ return size
+
+
+def _get_platform_max_length(): # pragma: no cover (platform specific)
+ if os.name == 'posix':
+ maximum = os.sysconf(str('SC_ARG_MAX')) - 2048 - _environ_size()
+ maximum = min(maximum, 2 ** 17)
+ return maximum
+ elif os.name == 'nt':
+ return 2 ** 15 - 2048 # UNICODE_STRING max - headroom
+ else:
+ # posix minimum
+ return 2 ** 12
def _command_length(*cmd):
@@ -52,7 +67,7 @@
# Reversed so arguments are in order
varargs = list(reversed(varargs))
- total_length = _command_length(*cmd)
+ total_length = _command_length(*cmd) + 1
while varargs:
arg = varargs.pop()
@@ -69,7 +84,7 @@
# We've exceeded the length, yield a command
ret.append(cmd + tuple(ret_cmd))
ret_cmd = []
- total_length = _command_length(*cmd)
+ total_length = _command_length(*cmd) + 1
varargs.append(arg)
ret.append(cmd + tuple(ret_cmd))
@@ -99,7 +114,7 @@
stderr = b''
try:
- parse_shebang.normexe(cmd[0])
+ cmd = parse_shebang.normalize_cmd(cmd)
except parse_shebang.ExecutableNotFoundError as e:
return e.to_output()
| {"golden_diff": "diff --git a/pre_commit/xargs.py b/pre_commit/xargs.py\n--- a/pre_commit/xargs.py\n+++ b/pre_commit/xargs.py\n@@ -5,6 +5,7 @@\n import concurrent.futures\n import contextlib\n import math\n+import os\n import sys\n \n import six\n@@ -13,10 +14,24 @@\n from pre_commit.util import cmd_output\n \n \n-# TODO: properly compute max_length value\n-def _get_platform_max_length():\n- # posix minimum\n- return 4 * 1024\n+def _environ_size(_env=None):\n+ environ = _env if _env is not None else getattr(os, 'environb', os.environ)\n+ size = 8 * len(environ) # number of pointers in `envp`\n+ for k, v in environ.items():\n+ size += len(k) + len(v) + 2 # c strings in `envp`\n+ return size\n+\n+\n+def _get_platform_max_length(): # pragma: no cover (platform specific)\n+ if os.name == 'posix':\n+ maximum = os.sysconf(str('SC_ARG_MAX')) - 2048 - _environ_size()\n+ maximum = min(maximum, 2 ** 17)\n+ return maximum\n+ elif os.name == 'nt':\n+ return 2 ** 15 - 2048 # UNICODE_STRING max - headroom\n+ else:\n+ # posix minimum\n+ return 2 ** 12\n \n \n def _command_length(*cmd):\n@@ -52,7 +67,7 @@\n # Reversed so arguments are in order\n varargs = list(reversed(varargs))\n \n- total_length = _command_length(*cmd)\n+ total_length = _command_length(*cmd) + 1\n while varargs:\n arg = varargs.pop()\n \n@@ -69,7 +84,7 @@\n # We've exceeded the length, yield a command\n ret.append(cmd + tuple(ret_cmd))\n ret_cmd = []\n- total_length = _command_length(*cmd)\n+ total_length = _command_length(*cmd) + 1\n varargs.append(arg)\n \n ret.append(cmd + tuple(ret_cmd))\n@@ -99,7 +114,7 @@\n stderr = b''\n \n try:\n- parse_shebang.normexe(cmd[0])\n+ cmd = parse_shebang.normalize_cmd(cmd)\n except parse_shebang.ExecutableNotFoundError as e:\n return e.to_output()\n", "issue": "Improve platform command line length limits\nI have a question related to #510 \r\n\r\nWhen I run `pre-commit run` all hooks work as expected (processing just changed files), but `pre-commit run -a` divides a list of files into 5 bulks and run the same hook in parallel executions. 
This creates unnecessary checks of files which belong to the same directory.\r\n\r\nIs it possible to control that certain hooks should not be executed in parallel but rather pass all filenames to it?\r\n\r\nAlternatively, is there an environment variable or something like that which says that pre-commit is running a hook for all files?\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import unicode_literals\n\nimport concurrent.futures\nimport contextlib\nimport math\nimport sys\n\nimport six\n\nfrom pre_commit import parse_shebang\nfrom pre_commit.util import cmd_output\n\n\n# TODO: properly compute max_length value\ndef _get_platform_max_length():\n # posix minimum\n return 4 * 1024\n\n\ndef _command_length(*cmd):\n full_cmd = ' '.join(cmd)\n\n # win32 uses the amount of characters, more details at:\n # https://github.com/pre-commit/pre-commit/pull/839\n if sys.platform == 'win32':\n # the python2.x apis require bytes, we encode as UTF-8\n if six.PY2:\n return len(full_cmd.encode('utf-8'))\n else:\n return len(full_cmd.encode('utf-16le')) // 2\n else:\n return len(full_cmd.encode(sys.getfilesystemencoding()))\n\n\nclass ArgumentTooLongError(RuntimeError):\n pass\n\n\ndef partition(cmd, varargs, target_concurrency, _max_length=None):\n _max_length = _max_length or _get_platform_max_length()\n\n # Generally, we try to partition evenly into at least `target_concurrency`\n # partitions, but we don't want a bunch of tiny partitions.\n max_args = max(4, math.ceil(len(varargs) / target_concurrency))\n\n cmd = tuple(cmd)\n ret = []\n\n ret_cmd = []\n # Reversed so arguments are in order\n varargs = list(reversed(varargs))\n\n total_length = _command_length(*cmd)\n while varargs:\n arg = varargs.pop()\n\n arg_length = _command_length(arg) + 1\n if (\n total_length + arg_length <= _max_length and\n len(ret_cmd) < max_args\n ):\n ret_cmd.append(arg)\n total_length += arg_length\n elif not ret_cmd:\n raise ArgumentTooLongError(arg)\n else:\n # We've exceeded the length, yield a command\n ret.append(cmd + tuple(ret_cmd))\n ret_cmd = []\n total_length = _command_length(*cmd)\n varargs.append(arg)\n\n ret.append(cmd + tuple(ret_cmd))\n\n return tuple(ret)\n\n\[email protected]\ndef _thread_mapper(maxsize):\n if maxsize == 1:\n yield map\n else:\n with concurrent.futures.ThreadPoolExecutor(maxsize) as ex:\n yield ex.map\n\n\ndef xargs(cmd, varargs, **kwargs):\n \"\"\"A simplified implementation of xargs.\n\n negate: Make nonzero successful and zero a failure\n target_concurrency: Target number of partitions to run concurrently\n \"\"\"\n negate = kwargs.pop('negate', False)\n target_concurrency = kwargs.pop('target_concurrency', 1)\n retcode = 0\n stdout = b''\n stderr = b''\n\n try:\n parse_shebang.normexe(cmd[0])\n except parse_shebang.ExecutableNotFoundError as e:\n return e.to_output()\n\n partitions = partition(cmd, varargs, target_concurrency, **kwargs)\n\n def run_cmd_partition(run_cmd):\n return cmd_output(*run_cmd, encoding=None, retcode=None)\n\n threads = min(len(partitions), target_concurrency)\n with _thread_mapper(threads) as thread_map:\n results = thread_map(run_cmd_partition, partitions)\n\n for proc_retcode, proc_out, proc_err in results:\n # This is *slightly* too clever so I'll explain it.\n # First the xor boolean table:\n # T | F |\n # +-------+\n # T | F | T |\n # --+-------+\n # F | T | F |\n # --+-------+\n # When negate is True, it has the effect of flipping the return\n # code. 
Otherwise, the returncode is unchanged.\n retcode |= bool(proc_retcode) ^ negate\n stdout += proc_out\n stderr += proc_err\n\n return retcode, stdout, stderr\n", "path": "pre_commit/xargs.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import unicode_literals\n\nimport concurrent.futures\nimport contextlib\nimport math\nimport os\nimport sys\n\nimport six\n\nfrom pre_commit import parse_shebang\nfrom pre_commit.util import cmd_output\n\n\ndef _environ_size(_env=None):\n environ = _env if _env is not None else getattr(os, 'environb', os.environ)\n size = 8 * len(environ) # number of pointers in `envp`\n for k, v in environ.items():\n size += len(k) + len(v) + 2 # c strings in `envp`\n return size\n\n\ndef _get_platform_max_length(): # pragma: no cover (platform specific)\n if os.name == 'posix':\n maximum = os.sysconf(str('SC_ARG_MAX')) - 2048 - _environ_size()\n maximum = min(maximum, 2 ** 17)\n return maximum\n elif os.name == 'nt':\n return 2 ** 15 - 2048 # UNICODE_STRING max - headroom\n else:\n # posix minimum\n return 2 ** 12\n\n\ndef _command_length(*cmd):\n full_cmd = ' '.join(cmd)\n\n # win32 uses the amount of characters, more details at:\n # https://github.com/pre-commit/pre-commit/pull/839\n if sys.platform == 'win32':\n # the python2.x apis require bytes, we encode as UTF-8\n if six.PY2:\n return len(full_cmd.encode('utf-8'))\n else:\n return len(full_cmd.encode('utf-16le')) // 2\n else:\n return len(full_cmd.encode(sys.getfilesystemencoding()))\n\n\nclass ArgumentTooLongError(RuntimeError):\n pass\n\n\ndef partition(cmd, varargs, target_concurrency, _max_length=None):\n _max_length = _max_length or _get_platform_max_length()\n\n # Generally, we try to partition evenly into at least `target_concurrency`\n # partitions, but we don't want a bunch of tiny partitions.\n max_args = max(4, math.ceil(len(varargs) / target_concurrency))\n\n cmd = tuple(cmd)\n ret = []\n\n ret_cmd = []\n # Reversed so arguments are in order\n varargs = list(reversed(varargs))\n\n total_length = _command_length(*cmd) + 1\n while varargs:\n arg = varargs.pop()\n\n arg_length = _command_length(arg) + 1\n if (\n total_length + arg_length <= _max_length and\n len(ret_cmd) < max_args\n ):\n ret_cmd.append(arg)\n total_length += arg_length\n elif not ret_cmd:\n raise ArgumentTooLongError(arg)\n else:\n # We've exceeded the length, yield a command\n ret.append(cmd + tuple(ret_cmd))\n ret_cmd = []\n total_length = _command_length(*cmd) + 1\n varargs.append(arg)\n\n ret.append(cmd + tuple(ret_cmd))\n\n return tuple(ret)\n\n\[email protected]\ndef _thread_mapper(maxsize):\n if maxsize == 1:\n yield map\n else:\n with concurrent.futures.ThreadPoolExecutor(maxsize) as ex:\n yield ex.map\n\n\ndef xargs(cmd, varargs, **kwargs):\n \"\"\"A simplified implementation of xargs.\n\n negate: Make nonzero successful and zero a failure\n target_concurrency: Target number of partitions to run concurrently\n \"\"\"\n negate = kwargs.pop('negate', False)\n target_concurrency = kwargs.pop('target_concurrency', 1)\n retcode = 0\n stdout = b''\n stderr = b''\n\n try:\n cmd = parse_shebang.normalize_cmd(cmd)\n except parse_shebang.ExecutableNotFoundError as e:\n return e.to_output()\n\n partitions = partition(cmd, varargs, target_concurrency, **kwargs)\n\n def run_cmd_partition(run_cmd):\n return cmd_output(*run_cmd, encoding=None, retcode=None)\n\n threads = min(len(partitions), target_concurrency)\n with _thread_mapper(threads) as thread_map:\n results = 
thread_map(run_cmd_partition, partitions)\n\n for proc_retcode, proc_out, proc_err in results:\n # This is *slightly* too clever so I'll explain it.\n # First the xor boolean table:\n # T | F |\n # +-------+\n # T | F | T |\n # --+-------+\n # F | T | F |\n # --+-------+\n # When negate is True, it has the effect of flipping the return\n # code. Otherwise, the returncode is unchanged.\n retcode |= bool(proc_retcode) ^ negate\n stdout += proc_out\n stderr += proc_err\n\n return retcode, stdout, stderr\n", "path": "pre_commit/xargs.py"}]} | 1,573 | 566 |
gh_patches_debug_97 | rasdani/github-patches | git_diff | uccser__cs-unplugged-434 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check desired orientation of binary to alphabet resource
Currently the resource is displayed in portrait, but half the page is unused. It may be better to switch to landscape, which would increase the size of the table cells.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/resources/views/binary_to_alphabet.py`
Content:
```
1 """Module for generating Binary to Alphabet resource."""
2
3 from PIL import Image, ImageDraw, ImageFont
4 from utils.retrieve_query_parameter import retrieve_query_parameter
5
6
7 def resource_image(request, resource):
8 """Create a image for Binary to Alphabet resource.
9
10 Args:
11 request: HTTP request object
12 resource: Object of resource data.
13
14 Returns:
15 A Pillow image object.
16 """
17 # Retrieve relevant image
18 parameter_options = valid_options()
19 worksheet_version = retrieve_query_parameter(request, "worksheet_version", parameter_options["worksheet_version"])
20 if worksheet_version == "student":
21 image_path = "static/img/resources/binary-to-alphabet/table.png"
22 else:
23 image_path = "static/img/resources/binary-to-alphabet/table-teacher.png"
24 image = Image.open(image_path)
25 draw = ImageDraw.Draw(image)
26
27 font_size = 30
28 font_path = "static/fonts/PatrickHand-Regular.ttf"
29 font = ImageFont.truetype(font_path, font_size)
30
31 # Draw headings
32 column_headings = ["Base 10", "Binary", "Letter"]
33 heading_coord_x = 18
34 heading_coord_y = 6
35
36 i = 0
37 while i < 9: # 9 = number of columns
38
39 if i % 3 == 0:
40 text = str(column_headings[0])
41 elif i % 3 == 1:
42 text = str(column_headings[1])
43 else:
44 text = str(column_headings[2])
45
46 draw.text(
47 (heading_coord_x, heading_coord_y),
48 text,
49 font=font,
50 fill="#000"
51 )
52
53 heading_coord_x += 113
54
55 i += 1
56
57 # Draw numbers
58 # Column data: (min number, max number), x coord
59 columns_data = [((0, 9), 58), ((9, 18), 397), ((18, 27), 736)]
60
61 for column_set in columns_data:
62 start, end = column_set[0]
63 base_coord_x = column_set[1]
64 base_coord_y = 75
65
66 for number in range(start, end):
67 text = str(number)
68 text_width, text_height = draw.textsize(text, font=font)
69 coord_x = base_coord_x - (text_width / 2)
70 coord_y = base_coord_y - (text_height / 2)
71
72 draw.text(
73 (coord_x, coord_y),
74 text,
75 font=font,
76 fill="#000"
77 )
78
79 base_coord_y += 54
80
81 return image
82
83
84 def subtitle(request, resource):
85 """Return the subtitle string of the resource.
86
87 Used after the resource name in the filename, and
88 also on the resource image.
89
90 Args:
91 request: HTTP request object
92 resource: Object of resource data.
93
94 Returns:
95 text for subtitle (string)
96 """
97 text = "{} - {}".format(
98 retrieve_query_parameter(request, "worksheet_version"),
99 retrieve_query_parameter(request, "paper_size")
100 )
101 return text
102
103
104 def valid_options():
105 """Provide dictionary of all valid parameters.
106
107 This excludes the header text parameter.
108
109 Returns:
110 All valid options (dict).
111 """
112 return {
113 "worksheet_version": ["student", "teacher"],
114 "paper_size": ["a4", "letter"]
115 }
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/resources/views/binary_to_alphabet.py b/csunplugged/resources/views/binary_to_alphabet.py
--- a/csunplugged/resources/views/binary_to_alphabet.py
+++ b/csunplugged/resources/views/binary_to_alphabet.py
@@ -78,6 +78,7 @@
base_coord_y += 54
+ image = image.rotate(90, expand=True)
return image
| {"golden_diff": "diff --git a/csunplugged/resources/views/binary_to_alphabet.py b/csunplugged/resources/views/binary_to_alphabet.py\n--- a/csunplugged/resources/views/binary_to_alphabet.py\n+++ b/csunplugged/resources/views/binary_to_alphabet.py\n@@ -78,6 +78,7 @@\n \n base_coord_y += 54\n \n+ image = image.rotate(90, expand=True)\n return image\n", "issue": "Check desired orientation of binary to alphabet resource\nCurrently is displayed in portrait but half the page is unused. May be better to switch to landscape which will increase the size of table cells.\n", "before_files": [{"content": "\"\"\"Module for generating Binary to Alphabet resource.\"\"\"\n\nfrom PIL import Image, ImageDraw, ImageFont\nfrom utils.retrieve_query_parameter import retrieve_query_parameter\n\n\ndef resource_image(request, resource):\n \"\"\"Create a image for Binary to Alphabet resource.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n\n Returns:\n A Pillow image object.\n \"\"\"\n # Retrieve relevant image\n parameter_options = valid_options()\n worksheet_version = retrieve_query_parameter(request, \"worksheet_version\", parameter_options[\"worksheet_version\"])\n if worksheet_version == \"student\":\n image_path = \"static/img/resources/binary-to-alphabet/table.png\"\n else:\n image_path = \"static/img/resources/binary-to-alphabet/table-teacher.png\"\n image = Image.open(image_path)\n draw = ImageDraw.Draw(image)\n\n font_size = 30\n font_path = \"static/fonts/PatrickHand-Regular.ttf\"\n font = ImageFont.truetype(font_path, font_size)\n\n # Draw headings\n column_headings = [\"Base 10\", \"Binary\", \"Letter\"]\n heading_coord_x = 18\n heading_coord_y = 6\n\n i = 0\n while i < 9: # 9 = number of columns\n\n if i % 3 == 0:\n text = str(column_headings[0])\n elif i % 3 == 1:\n text = str(column_headings[1])\n else:\n text = str(column_headings[2])\n\n draw.text(\n (heading_coord_x, heading_coord_y),\n text,\n font=font,\n fill=\"#000\"\n )\n\n heading_coord_x += 113\n\n i += 1\n\n # Draw numbers\n # Column data: (min number, max number), x coord\n columns_data = [((0, 9), 58), ((9, 18), 397), ((18, 27), 736)]\n\n for column_set in columns_data:\n start, end = column_set[0]\n base_coord_x = column_set[1]\n base_coord_y = 75\n\n for number in range(start, end):\n text = str(number)\n text_width, text_height = draw.textsize(text, font=font)\n coord_x = base_coord_x - (text_width / 2)\n coord_y = base_coord_y - (text_height / 2)\n\n draw.text(\n (coord_x, coord_y),\n text,\n font=font,\n fill=\"#000\"\n )\n\n base_coord_y += 54\n\n return image\n\n\ndef subtitle(request, resource):\n \"\"\"Return the subtitle string of the resource.\n\n Used after the resource name in the filename, and\n also on the resource image.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n\n Returns:\n text for subtitle (string)\n \"\"\"\n text = \"{} - {}\".format(\n retrieve_query_parameter(request, \"worksheet_version\"),\n retrieve_query_parameter(request, \"paper_size\")\n )\n return text\n\n\ndef valid_options():\n \"\"\"Provide dictionary of all valid parameters.\n\n This excludes the header text parameter.\n\n Returns:\n All valid options (dict).\n \"\"\"\n return {\n \"worksheet_version\": [\"student\", \"teacher\"],\n \"paper_size\": [\"a4\", \"letter\"]\n }\n", "path": "csunplugged/resources/views/binary_to_alphabet.py"}], "after_files": [{"content": "\"\"\"Module for generating Binary to Alphabet resource.\"\"\"\n\nfrom PIL import Image, ImageDraw, ImageFont\nfrom 
utils.retrieve_query_parameter import retrieve_query_parameter\n\n\ndef resource_image(request, resource):\n \"\"\"Create a image for Binary to Alphabet resource.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n\n Returns:\n A Pillow image object.\n \"\"\"\n # Retrieve relevant image\n parameter_options = valid_options()\n worksheet_version = retrieve_query_parameter(request, \"worksheet_version\", parameter_options[\"worksheet_version\"])\n if worksheet_version == \"student\":\n image_path = \"static/img/resources/binary-to-alphabet/table.png\"\n else:\n image_path = \"static/img/resources/binary-to-alphabet/table-teacher.png\"\n image = Image.open(image_path)\n draw = ImageDraw.Draw(image)\n\n font_size = 30\n font_path = \"static/fonts/PatrickHand-Regular.ttf\"\n font = ImageFont.truetype(font_path, font_size)\n\n # Draw headings\n column_headings = [\"Base 10\", \"Binary\", \"Letter\"]\n heading_coord_x = 18\n heading_coord_y = 6\n\n i = 0\n while i < 9: # 9 = number of columns\n\n if i % 3 == 0:\n text = str(column_headings[0])\n elif i % 3 == 1:\n text = str(column_headings[1])\n else:\n text = str(column_headings[2])\n\n draw.text(\n (heading_coord_x, heading_coord_y),\n text,\n font=font,\n fill=\"#000\"\n )\n\n heading_coord_x += 113\n\n i += 1\n\n # Draw numbers\n # Column data: (min number, max number), x coord\n columns_data = [((0, 9), 58), ((9, 18), 397), ((18, 27), 736)]\n\n for column_set in columns_data:\n start, end = column_set[0]\n base_coord_x = column_set[1]\n base_coord_y = 75\n\n for number in range(start, end):\n text = str(number)\n text_width, text_height = draw.textsize(text, font=font)\n coord_x = base_coord_x - (text_width / 2)\n coord_y = base_coord_y - (text_height / 2)\n\n draw.text(\n (coord_x, coord_y),\n text,\n font=font,\n fill=\"#000\"\n )\n\n base_coord_y += 54\n\n image = image.rotate(90, expand=True)\n return image\n\n\ndef subtitle(request, resource):\n \"\"\"Return the subtitle string of the resource.\n\n Used after the resource name in the filename, and\n also on the resource image.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n\n Returns:\n text for subtitle (string)\n \"\"\"\n text = \"{} - {}\".format(\n retrieve_query_parameter(request, \"worksheet_version\"),\n retrieve_query_parameter(request, \"paper_size\")\n )\n return text\n\n\ndef valid_options():\n \"\"\"Provide dictionary of all valid parameters.\n\n This excludes the header text parameter.\n\n Returns:\n All valid options (dict).\n \"\"\"\n return {\n \"worksheet_version\": [\"student\", \"teacher\"],\n \"paper_size\": [\"a4\", \"letter\"]\n }\n", "path": "csunplugged/resources/views/binary_to_alphabet.py"}]} | 1,296 | 99 |
gh_patches_debug_12983 | rasdani/github-patches | git_diff | scikit-image__scikit-image-7211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consistently use lazy loading for all `skimage.*` submodules
### Description:
With `lazy_loader` successfully being used for `skimage`, `skimage.data` and `skimage.filters`, why not use it for all of our public submodules? I see no significant disadvantage here (when using the approach with PYI files), and it is what is proposed in [SPEC 1](https://scientific-python.org/specs/spec-0001/).
Feel free to remove the good first issue label if there are concerns. Otherwise I would suggest tackling this with separate PRs for each module that copy the examples mentioned above.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/metrics/__init__.py`
Content:
```
1 from ._adapted_rand_error import adapted_rand_error
2 from ._contingency_table import contingency_table
3 from ._structural_similarity import structural_similarity
4 from ._variation_of_information import variation_of_information
5 from .set_metrics import hausdorff_distance, hausdorff_pair
6 from .simple_metrics import (
7 mean_squared_error,
8 normalized_mutual_information,
9 normalized_root_mse,
10 peak_signal_noise_ratio,
11 )
12
13 __all__ = [
14 "adapted_rand_error",
15 "variation_of_information",
16 "contingency_table",
17 "mean_squared_error",
18 "normalized_mutual_information",
19 "normalized_root_mse",
20 "peak_signal_noise_ratio",
21 "structural_similarity",
22 "hausdorff_distance",
23 "hausdorff_pair",
24 ]
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/metrics/__init__.py b/skimage/metrics/__init__.py
--- a/skimage/metrics/__init__.py
+++ b/skimage/metrics/__init__.py
@@ -1,24 +1,3 @@
-from ._adapted_rand_error import adapted_rand_error
-from ._contingency_table import contingency_table
-from ._structural_similarity import structural_similarity
-from ._variation_of_information import variation_of_information
-from .set_metrics import hausdorff_distance, hausdorff_pair
-from .simple_metrics import (
- mean_squared_error,
- normalized_mutual_information,
- normalized_root_mse,
- peak_signal_noise_ratio,
-)
+import lazy_loader as lazy
-__all__ = [
- "adapted_rand_error",
- "variation_of_information",
- "contingency_table",
- "mean_squared_error",
- "normalized_mutual_information",
- "normalized_root_mse",
- "peak_signal_noise_ratio",
- "structural_similarity",
- "hausdorff_distance",
- "hausdorff_pair",
-]
+__getattr__, __dir__, __all__ = lazy.attach_stub(__name__, __file__)
| {"golden_diff": "diff --git a/skimage/metrics/__init__.py b/skimage/metrics/__init__.py\n--- a/skimage/metrics/__init__.py\n+++ b/skimage/metrics/__init__.py\n@@ -1,24 +1,3 @@\n-from ._adapted_rand_error import adapted_rand_error\n-from ._contingency_table import contingency_table\n-from ._structural_similarity import structural_similarity\n-from ._variation_of_information import variation_of_information\n-from .set_metrics import hausdorff_distance, hausdorff_pair\n-from .simple_metrics import (\n- mean_squared_error,\n- normalized_mutual_information,\n- normalized_root_mse,\n- peak_signal_noise_ratio,\n-)\n+import lazy_loader as lazy\n \n-__all__ = [\n- \"adapted_rand_error\",\n- \"variation_of_information\",\n- \"contingency_table\",\n- \"mean_squared_error\",\n- \"normalized_mutual_information\",\n- \"normalized_root_mse\",\n- \"peak_signal_noise_ratio\",\n- \"structural_similarity\",\n- \"hausdorff_distance\",\n- \"hausdorff_pair\",\n-]\n+__getattr__, __dir__, __all__ = lazy.attach_stub(__name__, __file__)\n", "issue": "Consistently use lazy loading for all `skimage.*` submodules\n### Description:\r\n\r\nWith `lazy_loader` successfully being used for `skimage`, `skimage.data` and `skimage.filters` why not use it for every of our public submodules? I see no significant disadvantage here (when using the approach with PYI files) and it is what is proposed in [SPEC 1](https://scientific-python.org/specs/spec-0001/).\r\n\r\nFeel free to remove the good first issue label if there are concerns. Otherwise I would suggest to tackle this with separate PRs for each module that copy the examples mentioned above.\n", "before_files": [{"content": "from ._adapted_rand_error import adapted_rand_error\nfrom ._contingency_table import contingency_table\nfrom ._structural_similarity import structural_similarity\nfrom ._variation_of_information import variation_of_information\nfrom .set_metrics import hausdorff_distance, hausdorff_pair\nfrom .simple_metrics import (\n mean_squared_error,\n normalized_mutual_information,\n normalized_root_mse,\n peak_signal_noise_ratio,\n)\n\n__all__ = [\n \"adapted_rand_error\",\n \"variation_of_information\",\n \"contingency_table\",\n \"mean_squared_error\",\n \"normalized_mutual_information\",\n \"normalized_root_mse\",\n \"peak_signal_noise_ratio\",\n \"structural_similarity\",\n \"hausdorff_distance\",\n \"hausdorff_pair\",\n]\n", "path": "skimage/metrics/__init__.py"}], "after_files": [{"content": "import lazy_loader as lazy\n\n__getattr__, __dir__, __all__ = lazy.attach_stub(__name__, __file__)\n", "path": "skimage/metrics/__init__.py"}]} | 598 | 263 |
gh_patches_debug_4682 | rasdani/github-patches | git_diff | ethereum__web3.py-709 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use EthereumTesterProvider backed by eth-tester in default import
`from web3 import EthereumTesterProvider` should use the eth-tester one, not the testrpc one.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `web3/__init__.py`
Content:
```
1 import pkg_resources
2 import sys
3
4 if sys.version_info < (3, 5):
5 raise EnvironmentError("Python 3.5 or above is required")
6
7 from eth_account import Account # noqa: E402
8 from web3.main import Web3 # noqa: E402
9 from web3.providers.rpc import ( # noqa: E402
10 HTTPProvider,
11 )
12 from web3.providers.tester import ( # noqa: E402
13 TestRPCProvider,
14 EthereumTesterProvider,
15 )
16 from web3.providers.ipc import ( # noqa: E402
17 IPCProvider,
18 )
19
20 __version__ = pkg_resources.get_distribution("web3").version
21
22 __all__ = [
23 "__version__",
24 "Web3",
25 "HTTPProvider",
26 "IPCProvider",
27 "TestRPCProvider",
28 "EthereumTesterProvider",
29 "Account",
30 ]
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/web3/__init__.py b/web3/__init__.py
--- a/web3/__init__.py
+++ b/web3/__init__.py
@@ -9,9 +9,11 @@
from web3.providers.rpc import ( # noqa: E402
HTTPProvider,
)
+from web3.providers.eth_tester import ( # noqa: E402
+ EthereumTesterProvider,
+)
from web3.providers.tester import ( # noqa: E402
TestRPCProvider,
- EthereumTesterProvider,
)
from web3.providers.ipc import ( # noqa: E402
IPCProvider,
| {"golden_diff": "diff --git a/web3/__init__.py b/web3/__init__.py\n--- a/web3/__init__.py\n+++ b/web3/__init__.py\n@@ -9,9 +9,11 @@\n from web3.providers.rpc import ( # noqa: E402\n HTTPProvider,\n )\n+from web3.providers.eth_tester import ( # noqa: E402\n+ EthereumTesterProvider,\n+)\n from web3.providers.tester import ( # noqa: E402\n TestRPCProvider,\n- EthereumTesterProvider,\n )\n from web3.providers.ipc import ( # noqa: E402\n IPCProvider,\n", "issue": "Use EthereumTesterProvider backed by eth-tester in default import\n`from web3 import EthereumTesterProvider` should use the eth-tester one, not the testrpc one.\n", "before_files": [{"content": "import pkg_resources\nimport sys\n\nif sys.version_info < (3, 5):\n raise EnvironmentError(\"Python 3.5 or above is required\")\n\nfrom eth_account import Account # noqa: E402\nfrom web3.main import Web3 # noqa: E402\nfrom web3.providers.rpc import ( # noqa: E402\n HTTPProvider,\n)\nfrom web3.providers.tester import ( # noqa: E402\n TestRPCProvider,\n EthereumTesterProvider,\n)\nfrom web3.providers.ipc import ( # noqa: E402\n IPCProvider,\n)\n\n__version__ = pkg_resources.get_distribution(\"web3\").version\n\n__all__ = [\n \"__version__\",\n \"Web3\",\n \"HTTPProvider\",\n \"IPCProvider\",\n \"TestRPCProvider\",\n \"EthereumTesterProvider\",\n \"Account\",\n]\n", "path": "web3/__init__.py"}], "after_files": [{"content": "import pkg_resources\nimport sys\n\nif sys.version_info < (3, 5):\n raise EnvironmentError(\"Python 3.5 or above is required\")\n\nfrom eth_account import Account # noqa: E402\nfrom web3.main import Web3 # noqa: E402\nfrom web3.providers.rpc import ( # noqa: E402\n HTTPProvider,\n)\nfrom web3.providers.eth_tester import ( # noqa: E402\n EthereumTesterProvider,\n)\nfrom web3.providers.tester import ( # noqa: E402\n TestRPCProvider,\n)\nfrom web3.providers.ipc import ( # noqa: E402\n IPCProvider,\n)\n\n__version__ = pkg_resources.get_distribution(\"web3\").version\n\n__all__ = [\n \"__version__\",\n \"Web3\",\n \"HTTPProvider\",\n \"IPCProvider\",\n \"TestRPCProvider\",\n \"EthereumTesterProvider\",\n \"Account\",\n]\n", "path": "web3/__init__.py"}]} | 545 | 145 |
gh_patches_debug_19778 | rasdani/github-patches | git_diff | Mailu__Mailu-1198 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make access logging of admin server dependent on log level
Currently, admin logs all access logs to the console, regardless of which log level is set.
This is caused by starting gunicorn with `--access-logfile -`. This should be toggled depending on the configured log level, e.g. on for levels >= INFO and off for <= WARNING.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4
5 os.system("flask mailu advertise")
6 os.system("flask db upgrade")
7
8 account = os.environ.get("INITIAL_ADMIN_ACCOUNT")
9 domain = os.environ.get("INITIAL_ADMIN_DOMAIN")
10 password = os.environ.get("INITIAL_ADMIN_PW")
11
12 if account is not None and domain is not None and password is not None:
13 mode = os.environ.get("INITIAL_ADMIN_MODE", default="ifmissing")
14 os.system("flask mailu admin %s %s '%s' --mode %s" % (account, domain, password, mode))
15
16 os.system("gunicorn -w 4 -b :80 --access-logfile - --error-logfile - --preload 'mailu:create_app()'")
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/start.py b/core/admin/start.py
--- a/core/admin/start.py
+++ b/core/admin/start.py
@@ -1,6 +1,10 @@
#!/usr/bin/python3
import os
+import logging as log
+import sys
+
+log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "INFO"))
os.system("flask mailu advertise")
os.system("flask db upgrade")
@@ -11,6 +15,14 @@
if account is not None and domain is not None and password is not None:
mode = os.environ.get("INITIAL_ADMIN_MODE", default="ifmissing")
+ log.info("Creating initial admin accout %s@%s with mode %s",account,domain,mode)
os.system("flask mailu admin %s %s '%s' --mode %s" % (account, domain, password, mode))
-os.system("gunicorn -w 4 -b :80 --access-logfile - --error-logfile - --preload 'mailu:create_app()'")
+start_command="".join([
+ "gunicorn -w 4 -b :80 ",
+ "--access-logfile - " if (log.root.level<=log.INFO) else "",
+ "--error-logfile - ",
+ "--preload ",
+ "'mailu:create_app()'"])
+
+os.system(start_command)
| {"golden_diff": "diff --git a/core/admin/start.py b/core/admin/start.py\n--- a/core/admin/start.py\n+++ b/core/admin/start.py\n@@ -1,6 +1,10 @@\n #!/usr/bin/python3\n \n import os\n+import logging as log\n+import sys\n+\n+log.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"INFO\"))\n \n os.system(\"flask mailu advertise\")\n os.system(\"flask db upgrade\")\n@@ -11,6 +15,14 @@\n \n if account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n+ log.info(\"Creating initial admin accout %s@%s with mode %s\",account,domain,mode)\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n \n-os.system(\"gunicorn -w 4 -b :80 --access-logfile - --error-logfile - --preload 'mailu:create_app()'\")\n+start_command=\"\".join([\n+ \"gunicorn -w 4 -b :80 \",\n+ \"--access-logfile - \" if (log.root.level<=log.INFO) else \"\",\n+ \"--error-logfile - \",\n+ \"--preload \",\n+ \"'mailu:create_app()'\"])\n+\n+os.system(start_command)\n", "issue": "Make access logging of admin server dependent on log level\nCurrently, admin logs all access logs to the console, regardless of which log level is set.\r\nThis is caused by starting gunicorn with `--access-logfile -`. This should be switched dependent on the log level set, e.g. on for levels >= INFO and off for <=WARNING\r\n \n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\n\nos.system(\"flask mailu advertise\")\nos.system(\"flask db upgrade\")\n\naccount = os.environ.get(\"INITIAL_ADMIN_ACCOUNT\")\ndomain = os.environ.get(\"INITIAL_ADMIN_DOMAIN\")\npassword = os.environ.get(\"INITIAL_ADMIN_PW\")\n\nif account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n\nos.system(\"gunicorn -w 4 -b :80 --access-logfile - --error-logfile - --preload 'mailu:create_app()'\")\n", "path": "core/admin/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport logging as log\nimport sys\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"INFO\"))\n\nos.system(\"flask mailu advertise\")\nos.system(\"flask db upgrade\")\n\naccount = os.environ.get(\"INITIAL_ADMIN_ACCOUNT\")\ndomain = os.environ.get(\"INITIAL_ADMIN_DOMAIN\")\npassword = os.environ.get(\"INITIAL_ADMIN_PW\")\n\nif account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n log.info(\"Creating initial admin accout %s@%s with mode %s\",account,domain,mode)\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n\nstart_command=\"\".join([\n \"gunicorn -w 4 -b :80 \",\n \"--access-logfile - \" if (log.root.level<=log.INFO) else \"\",\n \"--error-logfile - \",\n \"--preload \",\n \"'mailu:create_app()'\"])\n\nos.system(start_command)\n", "path": "core/admin/start.py"}]} | 516 | 304 |
gh_patches_debug_4918 | rasdani/github-patches | git_diff | rlworkgroup__garage-1639 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs page "Ensure your experiments are reproducible"
See #1426
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/garage/envs/point_env.py`
Content:
```
1 """Simple 2D environment containing a point and a goal location."""
2 import gym
3 import numpy as np
4
5 from garage.envs.step import Step
6
7
8 class PointEnv(gym.Env):
9 """A simple 2D point environment.
10
11 Attributes:
12 observation_space (gym.spaces.Box): The observation space
13 action_space (gym.spaces.Box): The action space
14
15 Args:
16 goal (np.ndarray): A 2D array representing the goal position
17 arena_size (float): The size of arena where the point is constrained
18 within (-arena_size, arena_size) in each dimension
19 done_bonus (float): A numerical bonus added to the reward
20 once the point as reached the goal
21 never_done (bool): Never send a `done` signal, even if the
22 agent achieves the goal
23
24 """
25
26 def __init__(
27 self,
28 goal=np.array((1., 1.), dtype=np.float32),
29 arena_size=5.,
30 done_bonus=0.,
31 never_done=False,
32 ):
33 goal = np.array(goal, dtype=np.float32)
34 self._goal = goal
35 self._done_bonus = done_bonus
36 self._never_done = never_done
37 self._arena_size = arena_size
38
39 assert ((goal >= -arena_size) & (goal <= arena_size)).all()
40
41 self._point = np.zeros_like(self._goal)
42 self._task = {'goal': self._goal}
43 self._observation_space = gym.spaces.Box(low=-np.inf,
44 high=np.inf,
45 shape=(3, ),
46 dtype=np.float32)
47 self._action_space = gym.spaces.Box(low=-0.1,
48 high=0.1,
49 shape=(2, ),
50 dtype=np.float32)
51
52 @property
53 def observation_space(self):
54 """gym.spaces.Box: The observation space."""
55 return self._observation_space
56
57 @property
58 def action_space(self):
59 """gym.spaces.Box: The action space."""
60 return self._action_space
61
62 def reset(self):
63 """Reset the environment.
64
65 Returns:
66 np.ndarray: Observation of the environment.
67
68 """
69 self._point = np.zeros_like(self._goal)
70 dist = np.linalg.norm(self._point - self._goal)
71 return np.concatenate([self._point, (dist, )])
72
73 def step(self, action):
74 """Step the environment state.
75
76 Args:
77 action (np.ndarray): The action to take in the environment.
78
79 Returns:
80 np.ndarray: Observation. The observation of the environment.
81 float: Reward. The reward acquired at this time step.
82 boolean: Done. Whether the environment was completed at this
83 time step. Always False for this environment.
84
85 """
86 # enforce action space
87 a = action.copy() # NOTE: we MUST copy the action before modifying it
88 a = np.clip(a, self.action_space.low, self.action_space.high)
89
90 self._point = np.clip(self._point + a, -self._arena_size,
91 self._arena_size)
92 dist = np.linalg.norm(self._point - self._goal)
93 succ = dist < np.linalg.norm(self.action_space.low)
94
95 # dense reward
96 reward = -dist
97 # done bonus
98 if succ:
99 reward += self._done_bonus
100
101 # sometimes we don't want to terminate
102 done = succ and not self._never_done
103
104 obs = np.concatenate([self._point, (dist, )])
105
106 return Step(obs, reward, done, task=self._task, success=succ)
107
108 def render(self, mode='human'):
109 """Draw the environment.
110
111 Not implemented.
112
113 Args:
114 mode (str): Ignored.
115
116 """
117 # pylint: disable=no-self-use
118
119 def sample_tasks(self, num_tasks):
120 """Sample a list of `num_tasks` tasks.
121
122 Args:
123 num_tasks (int): Number of tasks to sample.
124
125 Returns:
126 list[dict[str, np.ndarray]]: A list of "tasks", where each task is
127 a dictionary containing a single key, "goal", mapping to a
128 point in 2D space.
129
130 """
131 goals = np.random.uniform(-2, 2, size=(num_tasks, 2))
132 tasks = [{'goal': goal} for goal in goals]
133 return tasks
134
135 def set_task(self, task):
136 """Reset with a task.
137
138 Args:
139 task (dict[str, np.ndarray]): A task (a dictionary containing a
140 single key, "goal", which should be a point in 2D space).
141
142 """
143 self._task = task
144 self._goal = task['goal']
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/garage/envs/point_env.py b/src/garage/envs/point_env.py
--- a/src/garage/envs/point_env.py
+++ b/src/garage/envs/point_env.py
@@ -24,11 +24,11 @@
"""
def __init__(
- self,
- goal=np.array((1., 1.), dtype=np.float32),
- arena_size=5.,
- done_bonus=0.,
- never_done=False,
+ self,
+ goal=np.array((1., 1.), dtype=np.float32),
+ arena_size=5.,
+ done_bonus=0.,
+ never_done=False,
):
goal = np.array(goal, dtype=np.float32)
self._goal = goal
| {"golden_diff": "diff --git a/src/garage/envs/point_env.py b/src/garage/envs/point_env.py\n--- a/src/garage/envs/point_env.py\n+++ b/src/garage/envs/point_env.py\n@@ -24,11 +24,11 @@\n \"\"\"\n \n def __init__(\n- self,\n- goal=np.array((1., 1.), dtype=np.float32),\n- arena_size=5.,\n- done_bonus=0.,\n- never_done=False,\n+ self,\n+ goal=np.array((1., 1.), dtype=np.float32),\n+ arena_size=5.,\n+ done_bonus=0.,\n+ never_done=False,\n ):\n goal = np.array(goal, dtype=np.float32)\n self._goal = goal\n", "issue": "Docs page \"Ensure your experiments are reproducible\"\nSee #1426 \n", "before_files": [{"content": "\"\"\"Simple 2D environment containing a point and a goal location.\"\"\"\nimport gym\nimport numpy as np\n\nfrom garage.envs.step import Step\n\n\nclass PointEnv(gym.Env):\n \"\"\"A simple 2D point environment.\n\n Attributes:\n observation_space (gym.spaces.Box): The observation space\n action_space (gym.spaces.Box): The action space\n\n Args:\n goal (np.ndarray): A 2D array representing the goal position\n arena_size (float): The size of arena where the point is constrained\n within (-arena_size, arena_size) in each dimension\n done_bonus (float): A numerical bonus added to the reward\n once the point as reached the goal\n never_done (bool): Never send a `done` signal, even if the\n agent achieves the goal\n\n \"\"\"\n\n def __init__(\n self,\n goal=np.array((1., 1.), dtype=np.float32),\n arena_size=5.,\n done_bonus=0.,\n never_done=False,\n ):\n goal = np.array(goal, dtype=np.float32)\n self._goal = goal\n self._done_bonus = done_bonus\n self._never_done = never_done\n self._arena_size = arena_size\n\n assert ((goal >= -arena_size) & (goal <= arena_size)).all()\n\n self._point = np.zeros_like(self._goal)\n self._task = {'goal': self._goal}\n self._observation_space = gym.spaces.Box(low=-np.inf,\n high=np.inf,\n shape=(3, ),\n dtype=np.float32)\n self._action_space = gym.spaces.Box(low=-0.1,\n high=0.1,\n shape=(2, ),\n dtype=np.float32)\n\n @property\n def observation_space(self):\n \"\"\"gym.spaces.Box: The observation space.\"\"\"\n return self._observation_space\n\n @property\n def action_space(self):\n \"\"\"gym.spaces.Box: The action space.\"\"\"\n return self._action_space\n\n def reset(self):\n \"\"\"Reset the environment.\n\n Returns:\n np.ndarray: Observation of the environment.\n\n \"\"\"\n self._point = np.zeros_like(self._goal)\n dist = np.linalg.norm(self._point - self._goal)\n return np.concatenate([self._point, (dist, )])\n\n def step(self, action):\n \"\"\"Step the environment state.\n\n Args:\n action (np.ndarray): The action to take in the environment.\n\n Returns:\n np.ndarray: Observation. The observation of the environment.\n float: Reward. The reward acquired at this time step.\n boolean: Done. Whether the environment was completed at this\n time step. 
Always False for this environment.\n\n \"\"\"\n # enforce action space\n a = action.copy() # NOTE: we MUST copy the action before modifying it\n a = np.clip(a, self.action_space.low, self.action_space.high)\n\n self._point = np.clip(self._point + a, -self._arena_size,\n self._arena_size)\n dist = np.linalg.norm(self._point - self._goal)\n succ = dist < np.linalg.norm(self.action_space.low)\n\n # dense reward\n reward = -dist\n # done bonus\n if succ:\n reward += self._done_bonus\n\n # sometimes we don't want to terminate\n done = succ and not self._never_done\n\n obs = np.concatenate([self._point, (dist, )])\n\n return Step(obs, reward, done, task=self._task, success=succ)\n\n def render(self, mode='human'):\n \"\"\"Draw the environment.\n\n Not implemented.\n\n Args:\n mode (str): Ignored.\n\n \"\"\"\n # pylint: disable=no-self-use\n\n def sample_tasks(self, num_tasks):\n \"\"\"Sample a list of `num_tasks` tasks.\n\n Args:\n num_tasks (int): Number of tasks to sample.\n\n Returns:\n list[dict[str, np.ndarray]]: A list of \"tasks\", where each task is\n a dictionary containing a single key, \"goal\", mapping to a\n point in 2D space.\n\n \"\"\"\n goals = np.random.uniform(-2, 2, size=(num_tasks, 2))\n tasks = [{'goal': goal} for goal in goals]\n return tasks\n\n def set_task(self, task):\n \"\"\"Reset with a task.\n\n Args:\n task (dict[str, np.ndarray]): A task (a dictionary containing a\n single key, \"goal\", which should be a point in 2D space).\n\n \"\"\"\n self._task = task\n self._goal = task['goal']\n", "path": "src/garage/envs/point_env.py"}], "after_files": [{"content": "\"\"\"Simple 2D environment containing a point and a goal location.\"\"\"\nimport gym\nimport numpy as np\n\nfrom garage.envs.step import Step\n\n\nclass PointEnv(gym.Env):\n \"\"\"A simple 2D point environment.\n\n Attributes:\n observation_space (gym.spaces.Box): The observation space\n action_space (gym.spaces.Box): The action space\n\n Args:\n goal (np.ndarray): A 2D array representing the goal position\n arena_size (float): The size of arena where the point is constrained\n within (-arena_size, arena_size) in each dimension\n done_bonus (float): A numerical bonus added to the reward\n once the point as reached the goal\n never_done (bool): Never send a `done` signal, even if the\n agent achieves the goal\n\n \"\"\"\n\n def __init__(\n self,\n goal=np.array((1., 1.), dtype=np.float32),\n arena_size=5.,\n done_bonus=0.,\n never_done=False,\n ):\n goal = np.array(goal, dtype=np.float32)\n self._goal = goal\n self._done_bonus = done_bonus\n self._never_done = never_done\n self._arena_size = arena_size\n\n assert ((goal >= -arena_size) & (goal <= arena_size)).all()\n\n self._point = np.zeros_like(self._goal)\n self._task = {'goal': self._goal}\n self._observation_space = gym.spaces.Box(low=-np.inf,\n high=np.inf,\n shape=(3, ),\n dtype=np.float32)\n self._action_space = gym.spaces.Box(low=-0.1,\n high=0.1,\n shape=(2, ),\n dtype=np.float32)\n\n @property\n def observation_space(self):\n \"\"\"gym.spaces.Box: The observation space.\"\"\"\n return self._observation_space\n\n @property\n def action_space(self):\n \"\"\"gym.spaces.Box: The action space.\"\"\"\n return self._action_space\n\n def reset(self):\n \"\"\"Reset the environment.\n\n Returns:\n np.ndarray: Observation of the environment.\n\n \"\"\"\n self._point = np.zeros_like(self._goal)\n dist = np.linalg.norm(self._point - self._goal)\n return np.concatenate([self._point, (dist, )])\n\n def step(self, action):\n \"\"\"Step the environment state.\n\n 
Args:\n action (np.ndarray): The action to take in the environment.\n\n Returns:\n np.ndarray: Observation. The observation of the environment.\n float: Reward. The reward acquired at this time step.\n boolean: Done. Whether the environment was completed at this\n time step. Always False for this environment.\n\n \"\"\"\n # enforce action space\n a = action.copy() # NOTE: we MUST copy the action before modifying it\n a = np.clip(a, self.action_space.low, self.action_space.high)\n\n self._point = np.clip(self._point + a, -self._arena_size,\n self._arena_size)\n dist = np.linalg.norm(self._point - self._goal)\n succ = dist < np.linalg.norm(self.action_space.low)\n\n # dense reward\n reward = -dist\n # done bonus\n if succ:\n reward += self._done_bonus\n\n # sometimes we don't want to terminate\n done = succ and not self._never_done\n\n obs = np.concatenate([self._point, (dist, )])\n\n return Step(obs, reward, done, task=self._task, success=succ)\n\n def render(self, mode='human'):\n \"\"\"Draw the environment.\n\n Not implemented.\n\n Args:\n mode (str): Ignored.\n\n \"\"\"\n # pylint: disable=no-self-use\n\n def sample_tasks(self, num_tasks):\n \"\"\"Sample a list of `num_tasks` tasks.\n\n Args:\n num_tasks (int): Number of tasks to sample.\n\n Returns:\n list[dict[str, np.ndarray]]: A list of \"tasks\", where each task is\n a dictionary containing a single key, \"goal\", mapping to a\n point in 2D space.\n\n \"\"\"\n goals = np.random.uniform(-2, 2, size=(num_tasks, 2))\n tasks = [{'goal': goal} for goal in goals]\n return tasks\n\n def set_task(self, task):\n \"\"\"Reset with a task.\n\n Args:\n task (dict[str, np.ndarray]): A task (a dictionary containing a\n single key, \"goal\", which should be a point in 2D space).\n\n \"\"\"\n self._task = task\n self._goal = task['goal']\n", "path": "src/garage/envs/point_env.py"}]} | 1,635 | 178 |
gh_patches_debug_25231 | rasdani/github-patches | git_diff | litestar-org__litestar-1286 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: `SessionCookieConfig` import DeprecationWarning - suggested import path does not exist
**Describe the bug**
In Starlite >= 1.47, the statement `from starlite.middleware.session.cookie_backend import CookieBackendConfig` raises a DeprecationWarning:
```
DeprecationWarning: Import of deprecated import 'SessionCookieConfig from starlite.middleware.session'.
Deprecated in starlite 1.47.0. This import will be removed in the next major version.
Use "'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'" instead
```
The suggested import path does not exist and contains a typo.
--- END ISSUE ---
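For quick reference, here is a minimal sketch contrasting the deprecated import with the path the warning text should actually point to. It assumes the released Starlite ~1.47 described above is installed; per the package `__init__` shown below, the deprecation machinery fires when `SessionCookieConfig` is pulled from the package, and the `# noqa` comments are only there because the names are otherwise unused:

```python
# Deprecated import; emits the (mis-worded) DeprecationWarning quoted above.
from starlite.middleware.session import SessionCookieConfig  # noqa: F401

# Working import path that the warning message should suggest instead.
from starlite.middleware.session.cookie_backend import CookieBackendConfig  # noqa: F401
```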
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/middleware/session/__init__.py`
Content:
```
1 from typing import Any
2
3 from starlite.utils import warn_deprecation
4
5 from .base import SessionMiddleware
6
7
8 def __getattr__(name: str) -> Any:
9 """Provide lazy importing as per https://peps.python.org/pep-0562/"""
10
11 if name != "SessionCookieConfig":
12 raise AttributeError(f"Module {__package__} has no attribute {name}")
13
14 from .cookie_backend import CookieBackendConfig
15
16 warn_deprecation(
17 deprecated_name=f"{name} from {__package__}",
18 kind="import",
19 alternative="'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'",
20 version="1.47.0",
21 )
22
23 globals()[name] = CookieBackendConfig
24 return CookieBackendConfig
25
26
27 __all__ = ["SessionMiddleware"]
28
```
Path: `starlite/template/__init__.py`
Content:
```
1 from typing import Any
2
3 from .base import TemplateEngineProtocol, TemplateProtocol
4
5 __all__ = ("TemplateEngineProtocol", "TemplateProtocol")
6
7 from ..utils import warn_deprecation
8
9
10 def __getattr__(name: str) -> Any:
11 """Provide lazy importing as per https://peps.python.org/pep-0562/"""
12
13 if name not in {"JinjaTemplateEngine", "MakoTemplateEngine", "MakoTemplate"}:
14 raise AttributeError(f"Module {__package__} has no attribute {name}")
15
16 if name == "JinjaTemplateEngine":
17 from starlite.contrib.jinja import JinjaTemplateEngine
18
19 export: Any = JinjaTemplateEngine
20 module = "jinja"
21 elif name == "MakoTemplateEngine":
22 from starlite.contrib.mako import MakoTemplateEngine
23
24 export = MakoTemplateEngine
25 module = "mako"
26 else:
27 from starlite.contrib.mako import MakoTemplate
28
29 export = MakoTemplate
30 module = "mako"
31
32 warn_deprecation(
33 deprecated_name=f"{name} from {__package__}",
34 kind="import",
35 alternative=f"'from startlite.contrib.{module} import {name}'",
36 version="1.46.0",
37 )
38
39 globals()[name] = export
40 return export
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/middleware/session/__init__.py b/starlite/middleware/session/__init__.py
--- a/starlite/middleware/session/__init__.py
+++ b/starlite/middleware/session/__init__.py
@@ -8,7 +8,7 @@
def __getattr__(name: str) -> Any:
"""Provide lazy importing as per https://peps.python.org/pep-0562/"""
- if name != "SessionCookieConfig":
+ if name != "CookieBackendConfig":
raise AttributeError(f"Module {__package__} has no attribute {name}")
from .cookie_backend import CookieBackendConfig
@@ -16,7 +16,7 @@
warn_deprecation(
deprecated_name=f"{name} from {__package__}",
kind="import",
- alternative="'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'",
+ alternative="'from starlite.middleware.session.cookie_backend import CookieBackendConfig'",
version="1.47.0",
)
diff --git a/starlite/template/__init__.py b/starlite/template/__init__.py
--- a/starlite/template/__init__.py
+++ b/starlite/template/__init__.py
@@ -32,7 +32,7 @@
warn_deprecation(
deprecated_name=f"{name} from {__package__}",
kind="import",
- alternative=f"'from startlite.contrib.{module} import {name}'",
+ alternative=f"'from starlite.contrib.{module} import {name}'",
version="1.46.0",
)
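As a usage note, one way to confirm the corrected wording is to capture the warning in a fresh interpreter. This is only a sketch and assumes the patched Starlite build is installed (after the patch, the lazily exported name is `CookieBackendConfig` rather than `SessionCookieConfig`):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    from starlite.middleware.session import CookieBackendConfig  # noqa: F401

# The captured message should now point at starlite.middleware.session.cookie_backend.
print(caught[0].message)
```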
| {"golden_diff": "diff --git a/starlite/middleware/session/__init__.py b/starlite/middleware/session/__init__.py\n--- a/starlite/middleware/session/__init__.py\n+++ b/starlite/middleware/session/__init__.py\n@@ -8,7 +8,7 @@\n def __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n \n- if name != \"SessionCookieConfig\":\n+ if name != \"CookieBackendConfig\":\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n \n from .cookie_backend import CookieBackendConfig\n@@ -16,7 +16,7 @@\n warn_deprecation(\n deprecated_name=f\"{name} from {__package__}\",\n kind=\"import\",\n- alternative=\"'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'\",\n+ alternative=\"'from starlite.middleware.session.cookie_backend import CookieBackendConfig'\",\n version=\"1.47.0\",\n )\n \ndiff --git a/starlite/template/__init__.py b/starlite/template/__init__.py\n--- a/starlite/template/__init__.py\n+++ b/starlite/template/__init__.py\n@@ -32,7 +32,7 @@\n warn_deprecation(\n deprecated_name=f\"{name} from {__package__}\",\n kind=\"import\",\n- alternative=f\"'from startlite.contrib.{module} import {name}'\",\n+ alternative=f\"'from starlite.contrib.{module} import {name}'\",\n version=\"1.46.0\",\n )\n", "issue": "Bug: `SessionCookieConfig` import DeprecationWarning - suggested import path does not exist\n**Describe the bug**\r\nIn Starlite >= 1.47, the statement `from starlite.middleware.session.cookie_backend import CookieBackendConfig` raises a DeprecationWarning:\r\n```\r\nDeprecationWarning: Import of deprecated import 'SessionCookieConfig from starlite.middleware.session'.\r\nDeprecated in starlite 1.47.0. This import will be removed in the next major version.\r\nUse \"'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'\" instead\r\n```\r\nThe suggested import path does not exist and contains a typo.\r\n\r\n\n", "before_files": [{"content": "from typing import Any\n\nfrom starlite.utils import warn_deprecation\n\nfrom .base import SessionMiddleware\n\n\ndef __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n\n if name != \"SessionCookieConfig\":\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n\n from .cookie_backend import CookieBackendConfig\n\n warn_deprecation(\n deprecated_name=f\"{name} from {__package__}\",\n kind=\"import\",\n alternative=\"'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'\",\n version=\"1.47.0\",\n )\n\n globals()[name] = CookieBackendConfig\n return CookieBackendConfig\n\n\n__all__ = [\"SessionMiddleware\"]\n", "path": "starlite/middleware/session/__init__.py"}, {"content": "from typing import Any\n\nfrom .base import TemplateEngineProtocol, TemplateProtocol\n\n__all__ = (\"TemplateEngineProtocol\", \"TemplateProtocol\")\n\nfrom ..utils import warn_deprecation\n\n\ndef __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n\n if name not in {\"JinjaTemplateEngine\", \"MakoTemplateEngine\", \"MakoTemplate\"}:\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n\n if name == \"JinjaTemplateEngine\":\n from starlite.contrib.jinja import JinjaTemplateEngine\n\n export: Any = JinjaTemplateEngine\n module = \"jinja\"\n elif name == \"MakoTemplateEngine\":\n from starlite.contrib.mako import MakoTemplateEngine\n\n export = MakoTemplateEngine\n module = \"mako\"\n 
else:\n from starlite.contrib.mako import MakoTemplate\n\n export = MakoTemplate\n module = \"mako\"\n\n warn_deprecation(\n deprecated_name=f\"{name} from {__package__}\",\n kind=\"import\",\n alternative=f\"'from startlite.contrib.{module} import {name}'\",\n version=\"1.46.0\",\n )\n\n globals()[name] = export\n return export\n", "path": "starlite/template/__init__.py"}], "after_files": [{"content": "from typing import Any\n\nfrom starlite.utils import warn_deprecation\n\nfrom .base import SessionMiddleware\n\n\ndef __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n\n if name != \"CookieBackendConfig\":\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n\n from .cookie_backend import CookieBackendConfig\n\n warn_deprecation(\n deprecated_name=f\"{name} from {__package__}\",\n kind=\"import\",\n alternative=\"'from starlite.middleware.session.cookie_backend import CookieBackendConfig'\",\n version=\"1.47.0\",\n )\n\n globals()[name] = CookieBackendConfig\n return CookieBackendConfig\n\n\n__all__ = [\"SessionMiddleware\"]\n", "path": "starlite/middleware/session/__init__.py"}, {"content": "from typing import Any\n\nfrom .base import TemplateEngineProtocol, TemplateProtocol\n\n__all__ = (\"TemplateEngineProtocol\", \"TemplateProtocol\")\n\nfrom ..utils import warn_deprecation\n\n\ndef __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n\n if name not in {\"JinjaTemplateEngine\", \"MakoTemplateEngine\", \"MakoTemplate\"}:\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n\n if name == \"JinjaTemplateEngine\":\n from starlite.contrib.jinja import JinjaTemplateEngine\n\n export: Any = JinjaTemplateEngine\n module = \"jinja\"\n elif name == \"MakoTemplateEngine\":\n from starlite.contrib.mako import MakoTemplateEngine\n\n export = MakoTemplateEngine\n module = \"mako\"\n else:\n from starlite.contrib.mako import MakoTemplate\n\n export = MakoTemplate\n module = \"mako\"\n\n warn_deprecation(\n deprecated_name=f\"{name} from {__package__}\",\n kind=\"import\",\n alternative=f\"'from starlite.contrib.{module} import {name}'\",\n version=\"1.46.0\",\n )\n\n globals()[name] = export\n return export\n", "path": "starlite/template/__init__.py"}]} | 1,003 | 342 |
gh_patches_debug_7119 | rasdani/github-patches | git_diff | Netflix__lemur-148 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error messages not displaying long enough
Currently, error messages are displayed only for a short period of time. They should be displayed until the user acknowledges the error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lemur/common/utils.py`
Content:
```
1 """
2 .. module: lemur.common.utils
3 :platform: Unix
4 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
5 :license: Apache, see LICENSE for more details.
6
7 .. moduleauthor:: Kevin Glisson <[email protected]>
8 """
9 import string
10 import random
11 from functools import wraps
12
13 from flask import current_app
14
15 from flask.ext.restful import marshal
16 from flask.ext.restful.reqparse import RequestParser
17 from flask.ext.sqlalchemy import Pagination
18
19
20 def get_psuedo_random_string():
21 """
22 Create a random and strongish challenge.
23 """
24 challenge = ''.join(random.choice(string.ascii_uppercase) for x in range(6)) # noqa
25 challenge += ''.join(random.choice("~!@#$%^&*()_+") for x in range(6)) # noqa
26 challenge += ''.join(random.choice(string.ascii_lowercase) for x in range(6))
27 challenge += ''.join(random.choice(string.digits) for x in range(6)) # noqa
28 return challenge
29
30
31 class marshal_items(object):
32 def __init__(self, fields, envelope=None):
33 self.fields = fields
34 self.envelop = envelope
35
36 def __call__(self, f):
37 def _filter_items(items):
38 filtered_items = []
39 for item in items:
40 filtered_items.append(marshal(item, self.fields))
41 return filtered_items
42
43 @wraps(f)
44 def wrapper(*args, **kwargs):
45 try:
46 resp = f(*args, **kwargs)
47
48 # this is a bit weird way to handle non standard error codes returned from the marshaled function
49 if isinstance(resp, tuple):
50 return resp[0], resp[1]
51
52 if isinstance(resp, Pagination):
53 return {'items': _filter_items(resp.items), 'total': resp.total}
54
55 if isinstance(resp, list):
56 return {'items': _filter_items(resp), 'total': len(resp)}
57
58 return marshal(resp, self.fields)
59 except Exception as e:
60 current_app.logger.exception(e)
61 # this is a little weird hack to respect flask restful parsing errors on marshaled functions
62 if hasattr(e, 'code'):
63 if hasattr(e, 'data'):
64 return {'message': e.data['message']}, 400
65 else:
66 return {'message': 'unknown'}, 400
67 else:
68 return {'message': str(e)}, 400
69 return wrapper
70
71
72 paginated_parser = RequestParser()
73
74 paginated_parser.add_argument('count', type=int, default=10, location='args')
75 paginated_parser.add_argument('page', type=int, default=1, location='args')
76 paginated_parser.add_argument('sortDir', type=str, dest='sort_dir', location='args')
77 paginated_parser.add_argument('sortBy', type=str, dest='sort_by', location='args')
78 paginated_parser.add_argument('filter', type=str, location='args')
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lemur/common/utils.py b/lemur/common/utils.py
--- a/lemur/common/utils.py
+++ b/lemur/common/utils.py
@@ -63,9 +63,9 @@
if hasattr(e, 'data'):
return {'message': e.data['message']}, 400
else:
- return {'message': 'unknown'}, 400
+ return {'message': {'exception': 'unknown'}}, 400
else:
- return {'message': str(e)}, 400
+ return {'message': {'exception': str(e)}}, 400
return wrapper
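A minimal sketch of the payload shape this diff introduces (the exception text is illustrative, not from Lemur); the connection to the display-duration complaint is presumably that the frontend keys its persistent error rendering off the nested `exception` field:

```python
# Illustrative only: shape of the error body returned by the wrapper after the change.
try:
    raise ValueError("role 'admin' does not exist")
except Exception as e:
    body, status = {"message": {"exception": str(e)}}, 400

assert body == {"message": {"exception": "role 'admin' does not exist"}}
assert status == 400
```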
| {"golden_diff": "diff --git a/lemur/common/utils.py b/lemur/common/utils.py\n--- a/lemur/common/utils.py\n+++ b/lemur/common/utils.py\n@@ -63,9 +63,9 @@\n if hasattr(e, 'data'):\n return {'message': e.data['message']}, 400\n else:\n- return {'message': 'unknown'}, 400\n+ return {'message': {'exception': 'unknown'}}, 400\n else:\n- return {'message': str(e)}, 400\n+ return {'message': {'exception': str(e)}}, 400\n return wrapper\n", "issue": "Error messages not displaying long enough\nCurrently error messages are displayed for only a period of time. They should be displayed until the user acknowledges the error. \n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.common.utils\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nimport string\nimport random\nfrom functools import wraps\n\nfrom flask import current_app\n\nfrom flask.ext.restful import marshal\nfrom flask.ext.restful.reqparse import RequestParser\nfrom flask.ext.sqlalchemy import Pagination\n\n\ndef get_psuedo_random_string():\n \"\"\"\n Create a random and strongish challenge.\n \"\"\"\n challenge = ''.join(random.choice(string.ascii_uppercase) for x in range(6)) # noqa\n challenge += ''.join(random.choice(\"~!@#$%^&*()_+\") for x in range(6)) # noqa\n challenge += ''.join(random.choice(string.ascii_lowercase) for x in range(6))\n challenge += ''.join(random.choice(string.digits) for x in range(6)) # noqa\n return challenge\n\n\nclass marshal_items(object):\n def __init__(self, fields, envelope=None):\n self.fields = fields\n self.envelop = envelope\n\n def __call__(self, f):\n def _filter_items(items):\n filtered_items = []\n for item in items:\n filtered_items.append(marshal(item, self.fields))\n return filtered_items\n\n @wraps(f)\n def wrapper(*args, **kwargs):\n try:\n resp = f(*args, **kwargs)\n\n # this is a bit weird way to handle non standard error codes returned from the marshaled function\n if isinstance(resp, tuple):\n return resp[0], resp[1]\n\n if isinstance(resp, Pagination):\n return {'items': _filter_items(resp.items), 'total': resp.total}\n\n if isinstance(resp, list):\n return {'items': _filter_items(resp), 'total': len(resp)}\n\n return marshal(resp, self.fields)\n except Exception as e:\n current_app.logger.exception(e)\n # this is a little weird hack to respect flask restful parsing errors on marshaled functions\n if hasattr(e, 'code'):\n if hasattr(e, 'data'):\n return {'message': e.data['message']}, 400\n else:\n return {'message': 'unknown'}, 400\n else:\n return {'message': str(e)}, 400\n return wrapper\n\n\npaginated_parser = RequestParser()\n\npaginated_parser.add_argument('count', type=int, default=10, location='args')\npaginated_parser.add_argument('page', type=int, default=1, location='args')\npaginated_parser.add_argument('sortDir', type=str, dest='sort_dir', location='args')\npaginated_parser.add_argument('sortBy', type=str, dest='sort_by', location='args')\npaginated_parser.add_argument('filter', type=str, location='args')\n", "path": "lemur/common/utils.py"}], "after_files": [{"content": "\"\"\"\n.. module: lemur.common.utils\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n\n.. 
moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nimport string\nimport random\nfrom functools import wraps\n\nfrom flask import current_app\n\nfrom flask.ext.restful import marshal\nfrom flask.ext.restful.reqparse import RequestParser\nfrom flask.ext.sqlalchemy import Pagination\n\n\ndef get_psuedo_random_string():\n \"\"\"\n Create a random and strongish challenge.\n \"\"\"\n challenge = ''.join(random.choice(string.ascii_uppercase) for x in range(6)) # noqa\n challenge += ''.join(random.choice(\"~!@#$%^&*()_+\") for x in range(6)) # noqa\n challenge += ''.join(random.choice(string.ascii_lowercase) for x in range(6))\n challenge += ''.join(random.choice(string.digits) for x in range(6)) # noqa\n return challenge\n\n\nclass marshal_items(object):\n def __init__(self, fields, envelope=None):\n self.fields = fields\n self.envelop = envelope\n\n def __call__(self, f):\n def _filter_items(items):\n filtered_items = []\n for item in items:\n filtered_items.append(marshal(item, self.fields))\n return filtered_items\n\n @wraps(f)\n def wrapper(*args, **kwargs):\n try:\n resp = f(*args, **kwargs)\n\n # this is a bit weird way to handle non standard error codes returned from the marshaled function\n if isinstance(resp, tuple):\n return resp[0], resp[1]\n\n if isinstance(resp, Pagination):\n return {'items': _filter_items(resp.items), 'total': resp.total}\n\n if isinstance(resp, list):\n return {'items': _filter_items(resp), 'total': len(resp)}\n\n return marshal(resp, self.fields)\n except Exception as e:\n current_app.logger.exception(e)\n # this is a little weird hack to respect flask restful parsing errors on marshaled functions\n if hasattr(e, 'code'):\n if hasattr(e, 'data'):\n return {'message': e.data['message']}, 400\n else:\n return {'message': {'exception': 'unknown'}}, 400\n else:\n return {'message': {'exception': str(e)}}, 400\n return wrapper\n\n\npaginated_parser = RequestParser()\n\npaginated_parser.add_argument('count', type=int, default=10, location='args')\npaginated_parser.add_argument('page', type=int, default=1, location='args')\npaginated_parser.add_argument('sortDir', type=str, dest='sort_dir', location='args')\npaginated_parser.add_argument('sortBy', type=str, dest='sort_by', location='args')\npaginated_parser.add_argument('filter', type=str, location='args')\n", "path": "lemur/common/utils.py"}]} | 1,077 | 146 |
gh_patches_debug_12176 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1818 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to remove image from event in dashboard
If an image has been selected for an event, there is currently no way to remove the image after saving.
A shitty workaround is to inspect the hidden input field and set the value to an empty string.
--- END ISSUE ---
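To make the workaround concrete, here is a small sketch. It assumes the onlineweb4 Django project is configured and importable (settings and URLconf loaded), and the field name is made up; it shows that the widget below treats an empty value as "no image selected", which is exactly what manually clearing the hidden input does:

```python
from apps.gallery.widgets import SingleImageInput

widget = SingleImageInput()
# An empty value skips the ResponsiveImage lookup and renders the
# "Det er ikke valgt noe bilde." placeholder instead of a thumbnail.
print(widget.render("responsive_image", ""))
```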
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/gallery/widgets.py`
Content:
```
1 # -*- coding: utf8 -*-
2 #
3 # Created by 'myth' on 10/14/15
4
5 from django.conf import settings
6 from django.core.urlresolvers import reverse_lazy
7 from django.forms import HiddenInput, TextInput
8 from django.forms.utils import flatatt, force_text, format_html
9
10 from apps.gallery.models import ResponsiveImage
11
12
13 WIDGET_STRING = """<br /><input{} />\r\n
14 <div id="single-image-field-thumbnail">{}</div>
15 <a href="#" class="btn btn-primary" id="add-responsive-image">\r\n
16 <i class="fa fa-plus fa-lg"></i> Velg</a>\r\n
17 <a href="{}" class="btn btn-primary" target="_blank">\r\n
18 <i class="fa fa-image fa-lg"></i> Last opp</a><br>\r\n
19 <div id="image-selection-wrapper">\r\n
20 <h2 id="image-selection-title">Velg bilde</h2>\r\n
21 <div class="row">\r\n
22 <div class="col-md-12">\r\n
23 <div class="input-group">\r\n
24 <input type="text" id="image-gallery-search" class="form-control" placeholder="Skriv inn søkeord...">\r\n
25 <span class="input-group-btn">\r\n
26 <a class="btn btn-primary" id="image-gallery-search-button" type="button">Søk!</a>\r\n
27 </span>\r\n
28 </div>\r\n
29 </div>\r\n
30 </div>\r\n
31 <hr />\r\n
32 <div class="row" id="image-gallery-search-results"></div>\r\n
33 </div>\r\n"""
34
35
36 class SingleImageInput(HiddenInput):
37 """
38 SingleImageField adds wrapper HTML around the hidden input field containing the ResponsiveImage ID
39 """
40
41 def __init__(self, attrs=None):
42 super(SingleImageInput, self).__init__(attrs)
43 self.input_type = 'hidden'
44
45 def render(self, name, value, attrs=None):
46 """
47 Renders this field widget as HTML
48 :param name: Field input name
49 :param value: Field input value
50 :param attrs: Field input attributes
51 :return: An HTML string representing this widget
52 """
53
54 if value is None:
55 value = ''
56
57 img_thumb = 'Det er ikke valgt noe bilde.'
58 final_attrs = self.build_attrs(attrs, type=self.input_type, name=name)
59 if value != '':
60 # Only add the value attribute if the value is non-empty
61 final_attrs['value'] = force_text(self._format_value(value))
62 img = ResponsiveImage.objects.get(pk=value)
63 img_thumb = format_html(
64 '<img src="{}" alt title="{}"/>',
65 settings.MEDIA_URL + str(img.thumbnail),
66 str(img.name),
67 encoding='utf-8'
68 )
69
70 upload_url = reverse_lazy('gallery_dashboard:upload')
71
72 return format_html(WIDGET_STRING, flatatt(final_attrs), img_thumb, upload_url)
73
74
75 class TagInputField(TextInput):
76 """
77 Adds some extras to a TextInputField to support space or comma separated tagging
78 """
79
80 def __init__(self, attrs=None):
81 super(TagInputField, self).__init__(attrs=attrs)
82
83 def render(self, name, value, attrs=None):
84 """
85 Renders this field widget as HTML
86 :param name: Field input name
87 :param value: Field input value
88 :param attrs: Field input attributes
89 :return: An HTML string representing this widget
90 """
91
92 return super(TagInputField, self).render(name, value, attrs=attrs)
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/gallery/widgets.py b/apps/gallery/widgets.py
--- a/apps/gallery/widgets.py
+++ b/apps/gallery/widgets.py
@@ -15,7 +15,9 @@
<a href="#" class="btn btn-primary" id="add-responsive-image">\r\n
<i class="fa fa-plus fa-lg"></i> Velg</a>\r\n
<a href="{}" class="btn btn-primary" target="_blank">\r\n
-<i class="fa fa-image fa-lg"></i> Last opp</a><br>\r\n
+<i class="fa fa-image fa-lg"></i> Last opp</a>\r\n
+<a href="#" class="btn btn-danger" id="dashboard-gallery-remove-image">\r\n
+<i class="fa fa-times fa-lg"></i> Fjern bilde</a><br>\r\n
<div id="image-selection-wrapper">\r\n
<h2 id="image-selection-title">Velg bilde</h2>\r\n
<div class="row">\r\n
| {"golden_diff": "diff --git a/apps/gallery/widgets.py b/apps/gallery/widgets.py\n--- a/apps/gallery/widgets.py\n+++ b/apps/gallery/widgets.py\n@@ -15,7 +15,9 @@\n <a href=\"#\" class=\"btn btn-primary\" id=\"add-responsive-image\">\\r\\n\n <i class=\"fa fa-plus fa-lg\"></i> Velg</a>\\r\\n\n <a href=\"{}\" class=\"btn btn-primary\" target=\"_blank\">\\r\\n\n-<i class=\"fa fa-image fa-lg\"></i> Last opp</a><br>\\r\\n\n+<i class=\"fa fa-image fa-lg\"></i> Last opp</a>\\r\\n\n+<a href=\"#\" class=\"btn btn-danger\" id=\"dashboard-gallery-remove-image\">\\r\\n\n+<i class=\"fa fa-times fa-lg\"></i> Fjern bilde</a><br>\\r\\n\n <div id=\"image-selection-wrapper\">\\r\\n\n <h2 id=\"image-selection-title\">Velg bilde</h2>\\r\\n\n <div class=\"row\">\\r\\n\n", "issue": "Unable to remove image from event in dashboard\nIf an image has been selected for an event there is currently no way to remove the image after saving. \r\n\r\nA shitty workaround is to inspect the hidden input field and set the value to an empty string.\n", "before_files": [{"content": "# -*- coding: utf8 -*-\n#\n# Created by 'myth' on 10/14/15\n\nfrom django.conf import settings\nfrom django.core.urlresolvers import reverse_lazy\nfrom django.forms import HiddenInput, TextInput\nfrom django.forms.utils import flatatt, force_text, format_html\n\nfrom apps.gallery.models import ResponsiveImage\n\n\nWIDGET_STRING = \"\"\"<br /><input{} />\\r\\n\n<div id=\"single-image-field-thumbnail\">{}</div>\n<a href=\"#\" class=\"btn btn-primary\" id=\"add-responsive-image\">\\r\\n\n<i class=\"fa fa-plus fa-lg\"></i> Velg</a>\\r\\n\n<a href=\"{}\" class=\"btn btn-primary\" target=\"_blank\">\\r\\n\n<i class=\"fa fa-image fa-lg\"></i> Last opp</a><br>\\r\\n\n<div id=\"image-selection-wrapper\">\\r\\n\n<h2 id=\"image-selection-title\">Velg bilde</h2>\\r\\n\n<div class=\"row\">\\r\\n\n<div class=\"col-md-12\">\\r\\n\n<div class=\"input-group\">\\r\\n\n<input type=\"text\" id=\"image-gallery-search\" class=\"form-control\" placeholder=\"Skriv inn s\u00f8keord...\">\\r\\n\n<span class=\"input-group-btn\">\\r\\n\n<a class=\"btn btn-primary\" id=\"image-gallery-search-button\" type=\"button\">S\u00f8k!</a>\\r\\n\n</span>\\r\\n\n</div>\\r\\n\n</div>\\r\\n\n</div>\\r\\n\n<hr />\\r\\n\n<div class=\"row\" id=\"image-gallery-search-results\"></div>\\r\\n\n</div>\\r\\n\"\"\"\n\n\nclass SingleImageInput(HiddenInput):\n \"\"\"\n SingleImageField adds wrapper HTML around the hidden input field containing the ResponsiveImage ID\n \"\"\"\n\n def __init__(self, attrs=None):\n super(SingleImageInput, self).__init__(attrs)\n self.input_type = 'hidden'\n\n def render(self, name, value, attrs=None):\n \"\"\"\n Renders this field widget as HTML\n :param name: Field input name\n :param value: Field input value\n :param attrs: Field input attributes\n :return: An HTML string representing this widget\n \"\"\"\n\n if value is None:\n value = ''\n\n img_thumb = 'Det er ikke valgt noe bilde.'\n final_attrs = self.build_attrs(attrs, type=self.input_type, name=name)\n if value != '':\n # Only add the value attribute if the value is non-empty\n final_attrs['value'] = force_text(self._format_value(value))\n img = ResponsiveImage.objects.get(pk=value)\n img_thumb = format_html(\n '<img src=\"{}\" alt title=\"{}\"/>',\n settings.MEDIA_URL + str(img.thumbnail),\n str(img.name),\n encoding='utf-8'\n )\n\n upload_url = reverse_lazy('gallery_dashboard:upload')\n\n return format_html(WIDGET_STRING, flatatt(final_attrs), img_thumb, upload_url)\n\n\nclass TagInputField(TextInput):\n 
\"\"\"\n Adds some extras to a TextInputField to support space or comma separated tagging\n \"\"\"\n\n def __init__(self, attrs=None):\n super(TagInputField, self).__init__(attrs=attrs)\n\n def render(self, name, value, attrs=None):\n \"\"\"\n Renders this field widget as HTML\n :param name: Field input name\n :param value: Field input value\n :param attrs: Field input attributes\n :return: An HTML string representing this widget\n \"\"\"\n\n return super(TagInputField, self).render(name, value, attrs=attrs)\n", "path": "apps/gallery/widgets.py"}], "after_files": [{"content": "# -*- coding: utf8 -*-\n#\n# Created by 'myth' on 10/14/15\n\nfrom django.conf import settings\nfrom django.core.urlresolvers import reverse_lazy\nfrom django.forms import HiddenInput, TextInput\nfrom django.forms.utils import flatatt, force_text, format_html\n\nfrom apps.gallery.models import ResponsiveImage\n\n\nWIDGET_STRING = \"\"\"<br /><input{} />\\r\\n\n<div id=\"single-image-field-thumbnail\">{}</div>\n<a href=\"#\" class=\"btn btn-primary\" id=\"add-responsive-image\">\\r\\n\n<i class=\"fa fa-plus fa-lg\"></i> Velg</a>\\r\\n\n<a href=\"{}\" class=\"btn btn-primary\" target=\"_blank\">\\r\\n\n<i class=\"fa fa-image fa-lg\"></i> Last opp</a>\\r\\n\n<a href=\"#\" class=\"btn btn-danger\" id=\"dashboard-gallery-remove-image\">\\r\\n\n<i class=\"fa fa-times fa-lg\"></i> Fjern bilde</a><br>\\r\\n\n<div id=\"image-selection-wrapper\">\\r\\n\n<h2 id=\"image-selection-title\">Velg bilde</h2>\\r\\n\n<div class=\"row\">\\r\\n\n<div class=\"col-md-12\">\\r\\n\n<div class=\"input-group\">\\r\\n\n<input type=\"text\" id=\"image-gallery-search\" class=\"form-control\" placeholder=\"Skriv inn s\u00f8keord...\">\\r\\n\n<span class=\"input-group-btn\">\\r\\n\n<a class=\"btn btn-primary\" id=\"image-gallery-search-button\" type=\"button\">S\u00f8k!</a>\\r\\n\n</span>\\r\\n\n</div>\\r\\n\n</div>\\r\\n\n</div>\\r\\n\n<hr />\\r\\n\n<div class=\"row\" id=\"image-gallery-search-results\"></div>\\r\\n\n</div>\\r\\n\"\"\"\n\n\nclass SingleImageInput(HiddenInput):\n \"\"\"\n SingleImageField adds wrapper HTML around the hidden input field containing the ResponsiveImage ID\n \"\"\"\n\n def __init__(self, attrs=None):\n super(SingleImageInput, self).__init__(attrs)\n self.input_type = 'hidden'\n\n def render(self, name, value, attrs=None):\n \"\"\"\n Renders this field widget as HTML\n :param name: Field input name\n :param value: Field input value\n :param attrs: Field input attributes\n :return: An HTML string representing this widget\n \"\"\"\n\n if value is None:\n value = ''\n\n img_thumb = 'Det er ikke valgt noe bilde.'\n final_attrs = self.build_attrs(attrs, type=self.input_type, name=name)\n if value != '':\n # Only add the value attribute if the value is non-empty\n final_attrs['value'] = force_text(self._format_value(value))\n img = ResponsiveImage.objects.get(pk=value)\n img_thumb = format_html(\n '<img src=\"{}\" alt title=\"{}\"/>',\n settings.MEDIA_URL + str(img.thumbnail),\n str(img.name),\n encoding='utf-8'\n )\n\n upload_url = reverse_lazy('gallery_dashboard:upload')\n\n return format_html(WIDGET_STRING, flatatt(final_attrs), img_thumb, upload_url)\n\n\nclass TagInputField(TextInput):\n \"\"\"\n Adds some extras to a TextInputField to support space or comma separated tagging\n \"\"\"\n\n def __init__(self, attrs=None):\n super(TagInputField, self).__init__(attrs=attrs)\n\n def render(self, name, value, attrs=None):\n \"\"\"\n Renders this field widget as HTML\n :param name: Field input name\n :param value: Field input 
value\n :param attrs: Field input attributes\n :return: An HTML string representing this widget\n \"\"\"\n\n return super(TagInputField, self).render(name, value, attrs=attrs)\n", "path": "apps/gallery/widgets.py"}]} | 1,281 | 229 |
gh_patches_debug_28339 | rasdani/github-patches | git_diff | mlflow__mlflow-5914 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace `unittest.TestCase.assertRaises` with `unittest.TestCase.assertRaisesRegex`
Some tests use `unittest.TestCase.assertRaises` to test that an exception is raised for illegal operations, but they need to be replaced with `unittest.TestCase.assertRaisesRegex`.
### Why do we need this change?
Let's say we have a function that raises an exception:
```python
def throw_exception(...):
if condition_1:
raise TypeError("condition_1")
if condition_2:
raise TypeError("condition_2")
...
```
If we test this function using `assertRaises`:
```python
class MyTest(unittest.TestCase):
def test_throw_exception(self):
# Does `throw_exception` really raise the second TypeError?
        # It might raise the first TypeError instead, and the test would still pass.
with self.assertRaises(TypeError):
throw_exception(...) # should raise TypeError("condition_2")
```
If we test this function using `assertRaisesRegex`:
```python
class MyTest(unittest.TestCase):
def test_throw_exception(self):
# This test fails when `throw_exception` raises the first TypeError.
        with self.assertRaisesRegex(TypeError, "condition_2"):
throw_exception(...) # should raise TypeError("condition_2")
```
### Example
https://github.com/mlflow/mlflow/blob/fe6618823a2e6038149ee0da675503d2764552ca/tests/store/tracking/test_sqlalchemy_store.py#L107
The code above needs to be fixed to the following:
```python
# "<string that matches the error message>" must be replaced
with self.assertRaisesRegex(MlflowException, "<string that matches the error message>") as e:
```
### References
- https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaises
- https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaisesRegex
### Instructions
https://github.com/mlflow/mlflow/blob/101ad6e8eb383c769178df0df83d1d2a1cea6b4a/pylint_plugins/assert_raises_without_msg.py#L20-L33
Ping me with the file you want to work on :)
| File | Assignee | PR | Done |
| :---------------------------------------------------- | :---------- | :---- | :--- |
| `tests/entities/test_run_status.py` | @Sumanth077 | | |
| `tests/store/model_registry/test_sqlalchemy_store.py` | @ognis1205 | #5875 | ✅ |
| `tests/store/db/test_utils.py` | @erich-db | | |
| `tests/store/tracking/__init__.py` | @Sumanth077 | | |
| `tests/store/tracking/test_file_store.py` | @andy1122 | | |
| `tests/store/tracking/test_sqlalchemy_store.py` | @ognis1205 | #5875 | ✅ |
--- END ISSUE ---
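To make the rule concrete, here is a small self-contained illustration (not taken from the MLflow test suite) of the pattern the pylint checker below flags versus the one it accepts:

```python
import unittest


class ExampleTest(unittest.TestCase):
    def test_flagged(self):
        # Flagged (W0003): nothing pins down *which* ValueError is expected.
        with self.assertRaises(ValueError):
            int("not a number")

    def test_accepted(self):
        # Accepted: the regex must match the raised exception's message.
        with self.assertRaisesRegex(ValueError, "invalid literal"):
            int("not a number")


if __name__ == "__main__":
    unittest.main()
```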
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pylint_plugins/unittest_assert_raises.py`
Content:
```
1 import os
2
3 import astroid
4 from pylint.interfaces import IAstroidChecker
5 from pylint.checkers import BaseChecker
6
7
8 def _is_unittest_assert_raises(node: astroid.Call):
9 return isinstance(node.func, astroid.Attribute) and node.func.as_string() == "self.assertRaises"
10
11
12 IGNORE_FILES = list(
13 map(
14 os.path.abspath,
15 [
16 # Instructions
17 # ============
18 # 1. Select a file in the list below and remove it.
19 # 2. Run pylint and confirm it fails.
20 # 3. Fix the lines printed out in the previous step.
21 # 4. Run pylint again and confirm it succeeds now.
22 # 5. Run pytest and confirm the changed lines don't fail.
23 # 6. Open a PR.
24 "tests/entities/test_run_status.py",
25 "tests/store/db/test_utils.py",
26 "tests/store/tracking/__init__.py",
27 "tests/store/tracking/test_file_store.py",
28 ],
29 )
30 )
31
32
33 def _should_ignore(path: str):
34 return path in IGNORE_FILES
35
36
37 class UnittestAssertRaises(BaseChecker):
38 __implements__ = IAstroidChecker
39
40 name = "unittest-assert-raises"
41 msgs = {
42 "W0003": (
43 "`assertRaises` must be replaced with `assertRaisesRegex`",
44 name,
45 "Use `assertRaisesRegex` instead",
46 ),
47 }
48 priority = -1
49
50 def visit_call(self, node: astroid.Call):
51 if not _should_ignore(node.root().file) and _is_unittest_assert_raises(node):
52 self.add_message(self.name, node=node)
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pylint_plugins/unittest_assert_raises.py b/pylint_plugins/unittest_assert_raises.py
--- a/pylint_plugins/unittest_assert_raises.py
+++ b/pylint_plugins/unittest_assert_raises.py
@@ -1,5 +1,3 @@
-import os
-
import astroid
from pylint.interfaces import IAstroidChecker
from pylint.checkers import BaseChecker
@@ -9,31 +7,6 @@
return isinstance(node.func, astroid.Attribute) and node.func.as_string() == "self.assertRaises"
-IGNORE_FILES = list(
- map(
- os.path.abspath,
- [
- # Instructions
- # ============
- # 1. Select a file in the list below and remove it.
- # 2. Run pylint and confirm it fails.
- # 3. Fix the lines printed out in the previous step.
- # 4. Run pylint again and confirm it succeeds now.
- # 5. Run pytest and confirm the changed lines don't fail.
- # 6. Open a PR.
- "tests/entities/test_run_status.py",
- "tests/store/db/test_utils.py",
- "tests/store/tracking/__init__.py",
- "tests/store/tracking/test_file_store.py",
- ],
- )
-)
-
-
-def _should_ignore(path: str):
- return path in IGNORE_FILES
-
-
class UnittestAssertRaises(BaseChecker):
__implements__ = IAstroidChecker
@@ -48,5 +21,5 @@
priority = -1
def visit_call(self, node: astroid.Call):
- if not _should_ignore(node.root().file) and _is_unittest_assert_raises(node):
+ if _is_unittest_assert_raises(node):
self.add_message(self.name, node=node)
| {"golden_diff": "diff --git a/pylint_plugins/unittest_assert_raises.py b/pylint_plugins/unittest_assert_raises.py\n--- a/pylint_plugins/unittest_assert_raises.py\n+++ b/pylint_plugins/unittest_assert_raises.py\n@@ -1,5 +1,3 @@\n-import os\n-\n import astroid\n from pylint.interfaces import IAstroidChecker\n from pylint.checkers import BaseChecker\n@@ -9,31 +7,6 @@\n return isinstance(node.func, astroid.Attribute) and node.func.as_string() == \"self.assertRaises\"\n \n \n-IGNORE_FILES = list(\n- map(\n- os.path.abspath,\n- [\n- # Instructions\n- # ============\n- # 1. Select a file in the list below and remove it.\n- # 2. Run pylint and confirm it fails.\n- # 3. Fix the lines printed out in the previous step.\n- # 4. Run pylint again and confirm it succeeds now.\n- # 5. Run pytest and confirm the changed lines don't fail.\n- # 6. Open a PR.\n- \"tests/entities/test_run_status.py\",\n- \"tests/store/db/test_utils.py\",\n- \"tests/store/tracking/__init__.py\",\n- \"tests/store/tracking/test_file_store.py\",\n- ],\n- )\n-)\n-\n-\n-def _should_ignore(path: str):\n- return path in IGNORE_FILES\n-\n-\n class UnittestAssertRaises(BaseChecker):\n __implements__ = IAstroidChecker\n \n@@ -48,5 +21,5 @@\n priority = -1\n \n def visit_call(self, node: astroid.Call):\n- if not _should_ignore(node.root().file) and _is_unittest_assert_raises(node):\n+ if _is_unittest_assert_raises(node):\n self.add_message(self.name, node=node)\n", "issue": "Replace `unittest.TestCase.assertRaises` with `unittest.TestCase.assertRaisesRegex`\nSome tests use `unittest.TestCase.assertRaises` to test an exception is raised for illegal operations, but they need to be replaces with `unittest.TestCase.assertRaisesRegex`.\r\n\r\n### Why do we need this change?\r\n\r\nLet's say we have a function that raises an exception:\r\n\r\n```python\r\ndef throw_exception(...):\r\n if condition_1:\r\n raise TypeError(\"condition_1\")\r\n if condition_2:\r\n raise TypeError(\"condition_2\")\r\n ...\r\n```\r\n\r\nIf we test this function using `assertRaises`:\r\n\r\n```python\r\nclass MyTest(unittest.TestCase):\r\n def test_throw_exception(self):\r\n # Does `throw_exception` really raise the second TypeError?\r\n # It might throw the first TypeError, then the test will pass.\r\n with self.assertRaises(TypeError):\r\n throw_exception(...) # should raise TypeError(\"condition_2\")\r\n```\r\n\r\nIf we test this function using `assertRaisesRegex`:\r\n\r\n```python\r\nclass MyTest(unittest.TestCase):\r\n def test_throw_exception(self):\r\n # This test fails when `throw_exception` raises the first TypeError.\r\n with self. assertRaisesRegex(TypeError, \"condition_b\"):\r\n throw_exception(...) 
# should raise TypeError(\"condition_2\")\r\n```\r\n\r\n### Example\r\n\r\nhttps://github.com/mlflow/mlflow/blob/fe6618823a2e6038149ee0da675503d2764552ca/tests/store/tracking/test_sqlalchemy_store.py#L107\r\n\r\nThe code above needs to be fixed to the following:\r\n\r\n```python\r\n # \"<string that matches the error message>\" must be replaced\r\n with self.assertRaisesRegex(MlflowException, \"<string that matches the error message>\") as e:\r\n```\r\n\r\n### References\r\n\r\n- https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaises\r\n- https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaisesRegex\r\n\r\n### Instructions\r\n\r\nhttps://github.com/mlflow/mlflow/blob/101ad6e8eb383c769178df0df83d1d2a1cea6b4a/pylint_plugins/assert_raises_without_msg.py#L20-L33\r\n\r\nPing me with the file you want to work on :)\r\n\r\n| File | Assignee | PR | Done |\r\n| :---------------------------------------------------- | :---------- | :---- | :--- |\r\n| `tests/entities/test_run_status.py` | @Sumanth077 | | |\r\n| `tests/store/model_registry/test_sqlalchemy_store.py` | @ognis1205 | #5875 | \u2705 |\r\n| `tests/store/db/test_utils.py` | @erich-db | | |\r\n| `tests/store/tracking/__init__.py` | @Sumanth077 | | |\r\n| `tests/store/tracking/test_file_store.py` | @andy1122 | | |\r\n| `tests/store/tracking/test_sqlalchemy_store.py` | @ognis1205 | #5875 | \u2705 |\r\n\n", "before_files": [{"content": "import os\n\nimport astroid\nfrom pylint.interfaces import IAstroidChecker\nfrom pylint.checkers import BaseChecker\n\n\ndef _is_unittest_assert_raises(node: astroid.Call):\n return isinstance(node.func, astroid.Attribute) and node.func.as_string() == \"self.assertRaises\"\n\n\nIGNORE_FILES = list(\n map(\n os.path.abspath,\n [\n # Instructions\n # ============\n # 1. Select a file in the list below and remove it.\n # 2. Run pylint and confirm it fails.\n # 3. Fix the lines printed out in the previous step.\n # 4. Run pylint again and confirm it succeeds now.\n # 5. Run pytest and confirm the changed lines don't fail.\n # 6. 
Open a PR.\n \"tests/entities/test_run_status.py\",\n \"tests/store/db/test_utils.py\",\n \"tests/store/tracking/__init__.py\",\n \"tests/store/tracking/test_file_store.py\",\n ],\n )\n)\n\n\ndef _should_ignore(path: str):\n return path in IGNORE_FILES\n\n\nclass UnittestAssertRaises(BaseChecker):\n __implements__ = IAstroidChecker\n\n name = \"unittest-assert-raises\"\n msgs = {\n \"W0003\": (\n \"`assertRaises` must be replaced with `assertRaisesRegex`\",\n name,\n \"Use `assertRaisesRegex` instead\",\n ),\n }\n priority = -1\n\n def visit_call(self, node: astroid.Call):\n if not _should_ignore(node.root().file) and _is_unittest_assert_raises(node):\n self.add_message(self.name, node=node)\n", "path": "pylint_plugins/unittest_assert_raises.py"}], "after_files": [{"content": "import astroid\nfrom pylint.interfaces import IAstroidChecker\nfrom pylint.checkers import BaseChecker\n\n\ndef _is_unittest_assert_raises(node: astroid.Call):\n return isinstance(node.func, astroid.Attribute) and node.func.as_string() == \"self.assertRaises\"\n\n\nclass UnittestAssertRaises(BaseChecker):\n __implements__ = IAstroidChecker\n\n name = \"unittest-assert-raises\"\n msgs = {\n \"W0003\": (\n \"`assertRaises` must be replaced with `assertRaisesRegex`\",\n name,\n \"Use `assertRaisesRegex` instead\",\n ),\n }\n priority = -1\n\n def visit_call(self, node: astroid.Call):\n if _is_unittest_assert_raises(node):\n self.add_message(self.name, node=node)\n", "path": "pylint_plugins/unittest_assert_raises.py"}]} | 1,383 | 391 |
gh_patches_debug_40242 | rasdani/github-patches | git_diff | SCons__scons-3862 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add timestamp override to Zip builder
Zip should have an option to override the timestamp on files in the archive instead of taking it from the file system. This can be useful for repeatable builds or for anonymizing the archive.
--- END ISSUE ---
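Assuming the patch below is applied, a build script could pin the archive timestamps roughly like this. This is an SConstruct fragment, not a standalone script: `ZIP_OVERRIDE_TIMESTAMP` comes from the patch, while the target and source names are made up:

```python
env = Environment(tools=["zip"])

env.Zip(
    "dist/release.zip",
    ["src", "README.md"],
    # zipfile-style date_time tuple; 1980-01-01 is the ZIP format's minimum.
    ZIP_OVERRIDE_TIMESTAMP=(1980, 1, 1, 0, 0, 0),
)
```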
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `SCons/Tool/zip.py`
Content:
```
1 """SCons.Tool.zip
2
3 Tool-specific initialization for zip.
4
5 There normally shouldn't be any need to import this module directly.
6 It will usually be imported through the generic SCons.Tool.Tool()
7 selection method.
8
9 """
10
11 #
12 # __COPYRIGHT__
13 #
14 # Permission is hereby granted, free of charge, to any person obtaining
15 # a copy of this software and associated documentation files (the
16 # "Software"), to deal in the Software without restriction, including
17 # without limitation the rights to use, copy, modify, merge, publish,
18 # distribute, sublicense, and/or sell copies of the Software, and to
19 # permit persons to whom the Software is furnished to do so, subject to
20 # the following conditions:
21 #
22 # The above copyright notice and this permission notice shall be included
23 # in all copies or substantial portions of the Software.
24 #
25 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
26 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
27 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
28 # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
29 # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
30 # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
31 # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
32 #
33
34 __revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
35
36 import os.path
37
38 import SCons.Builder
39 import SCons.Defaults
40 import SCons.Node.FS
41 import SCons.Util
42
43 import zipfile
44
45 zip_compression = zipfile.ZIP_DEFLATED
46
47
48 def zip(target, source, env):
49 compression = env.get('ZIPCOMPRESSION', 0)
50 zf = zipfile.ZipFile(str(target[0]), 'w', compression)
51 for s in source:
52 if s.isdir():
53 for dirpath, dirnames, filenames in os.walk(str(s)):
54 for fname in filenames:
55 path = os.path.join(dirpath, fname)
56 if os.path.isfile(path):
57 zf.write(path, os.path.relpath(path, str(env.get('ZIPROOT', ''))))
58 else:
59 zf.write(str(s), os.path.relpath(str(s), str(env.get('ZIPROOT', ''))))
60 zf.close()
61
62 # Fix PR #3569 - If you don't specify ZIPCOM and ZIPCOMSTR when creating
63 # env, then it will ignore ZIPCOMSTR set afterwards.
64 zipAction = SCons.Action.Action(zip, "$ZIPCOMSTR", varlist=['ZIPCOMPRESSION'])
65
66 ZipBuilder = SCons.Builder.Builder(action=SCons.Action.Action('$ZIPCOM', '$ZIPCOMSTR'),
67 source_factory=SCons.Node.FS.Entry,
68 source_scanner=SCons.Defaults.DirScanner,
69 suffix='$ZIPSUFFIX',
70 multi=1)
71
72
73 def generate(env):
74 """Add Builders and construction variables for zip to an Environment."""
75 try:
76 bld = env['BUILDERS']['Zip']
77 except KeyError:
78 bld = ZipBuilder
79 env['BUILDERS']['Zip'] = bld
80
81 env['ZIP'] = 'zip'
82 env['ZIPFLAGS'] = SCons.Util.CLVar('')
83 env['ZIPCOM'] = zipAction
84 env['ZIPCOMPRESSION'] = zip_compression
85 env['ZIPSUFFIX'] = '.zip'
86 env['ZIPROOT'] = SCons.Util.CLVar('')
87
88
89 def exists(env):
90 return True
91
92 # Local Variables:
93 # tab-width:4
94 # indent-tabs-mode:nil
95 # End:
96 # vim: set expandtab tabstop=4 shiftwidth=4:
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/SCons/Tool/zip.py b/SCons/Tool/zip.py
--- a/SCons/Tool/zip.py
+++ b/SCons/Tool/zip.py
@@ -8,8 +8,9 @@
"""
+# MIT License
#
-# __COPYRIGHT__
+# Copyright The SCons Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
@@ -29,39 +30,62 @@
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-#
-
-__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
-import os.path
+import os
import SCons.Builder
import SCons.Defaults
import SCons.Node.FS
import SCons.Util
+import time
import zipfile
+
zip_compression = zipfile.ZIP_DEFLATED
-def zip(target, source, env):
- compression = env.get('ZIPCOMPRESSION', 0)
- zf = zipfile.ZipFile(str(target[0]), 'w', compression)
+def _create_zipinfo_for_file(fname, arcname, date_time, compression):
+ st = os.stat(fname)
+ if not date_time:
+ mtime = time.localtime(st.st_mtime)
+ date_time = mtime[0:6]
+ zinfo = zipfile.ZipInfo(filename=arcname, date_time=date_time)
+ zinfo.external_attr = (st.st_mode & 0xFFFF) << 16 # Unix attributes
+ zinfo.compress_type = compression
+ zinfo.file_size = st.st_size
+ return zinfo
+
+
+def zip_builder(target, source, env):
+ compression = env.get('ZIPCOMPRESSION', zipfile.ZIP_STORED)
+ zip_root = str(env.get('ZIPROOT', ''))
+ date_time = env.get('ZIP_OVERRIDE_TIMESTAMP')
+
+ files = []
for s in source:
if s.isdir():
for dirpath, dirnames, filenames in os.walk(str(s)):
for fname in filenames:
path = os.path.join(dirpath, fname)
if os.path.isfile(path):
- zf.write(path, os.path.relpath(path, str(env.get('ZIPROOT', ''))))
+ files.append(path)
else:
- zf.write(str(s), os.path.relpath(str(s), str(env.get('ZIPROOT', ''))))
- zf.close()
+ files.append(str(s))
+
+ with zipfile.ZipFile(str(target[0]), 'w', compression) as zf:
+ for fname in files:
+ arcname = os.path.relpath(fname, zip_root)
+ # TODO: Switch to ZipInfo.from_file when 3.6 becomes the base python version
+ zinfo = _create_zipinfo_for_file(fname, arcname, date_time, compression)
+ with open(fname, "rb") as f:
+ zf.writestr(zinfo, f.read())
+
# Fix PR #3569 - If you don't specify ZIPCOM and ZIPCOMSTR when creating
# env, then it will ignore ZIPCOMSTR set afterwards.
-zipAction = SCons.Action.Action(zip, "$ZIPCOMSTR", varlist=['ZIPCOMPRESSION'])
+zipAction = SCons.Action.Action(zip_builder, "$ZIPCOMSTR",
+ varlist=['ZIPCOMPRESSION', 'ZIPROOT', 'ZIP_OVERRIDE_TIMESTAMP'])
ZipBuilder = SCons.Builder.Builder(action=SCons.Action.Action('$ZIPCOM', '$ZIPCOMSTR'),
source_factory=SCons.Node.FS.Entry,
| {"golden_diff": "diff --git a/SCons/Tool/zip.py b/SCons/Tool/zip.py\n--- a/SCons/Tool/zip.py\n+++ b/SCons/Tool/zip.py\n@@ -8,8 +8,9 @@\n \n \"\"\"\n \n+# MIT License\n #\n-# __COPYRIGHT__\n+# Copyright The SCons Foundation\n #\n # Permission is hereby granted, free of charge, to any person obtaining\n # a copy of this software and associated documentation files (the\n@@ -29,39 +30,62 @@\n # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\n # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n-#\n-\n-__revision__ = \"__FILE__ __REVISION__ __DATE__ __DEVELOPER__\"\n \n-import os.path\n+import os\n \n import SCons.Builder\n import SCons.Defaults\n import SCons.Node.FS\n import SCons.Util\n \n+import time\n import zipfile\n \n+\n zip_compression = zipfile.ZIP_DEFLATED\n \n \n-def zip(target, source, env):\n- compression = env.get('ZIPCOMPRESSION', 0)\n- zf = zipfile.ZipFile(str(target[0]), 'w', compression)\n+def _create_zipinfo_for_file(fname, arcname, date_time, compression):\n+ st = os.stat(fname)\n+ if not date_time:\n+ mtime = time.localtime(st.st_mtime)\n+ date_time = mtime[0:6]\n+ zinfo = zipfile.ZipInfo(filename=arcname, date_time=date_time)\n+ zinfo.external_attr = (st.st_mode & 0xFFFF) << 16 # Unix attributes\n+ zinfo.compress_type = compression\n+ zinfo.file_size = st.st_size\n+ return zinfo\n+\n+\n+def zip_builder(target, source, env):\n+ compression = env.get('ZIPCOMPRESSION', zipfile.ZIP_STORED)\n+ zip_root = str(env.get('ZIPROOT', ''))\n+ date_time = env.get('ZIP_OVERRIDE_TIMESTAMP')\n+\n+ files = []\n for s in source:\n if s.isdir():\n for dirpath, dirnames, filenames in os.walk(str(s)):\n for fname in filenames:\n path = os.path.join(dirpath, fname)\n if os.path.isfile(path):\n- zf.write(path, os.path.relpath(path, str(env.get('ZIPROOT', ''))))\n+ files.append(path)\n else:\n- zf.write(str(s), os.path.relpath(str(s), str(env.get('ZIPROOT', ''))))\n- zf.close()\n+ files.append(str(s))\n+\n+ with zipfile.ZipFile(str(target[0]), 'w', compression) as zf:\n+ for fname in files:\n+ arcname = os.path.relpath(fname, zip_root)\n+ # TODO: Switch to ZipInfo.from_file when 3.6 becomes the base python version\n+ zinfo = _create_zipinfo_for_file(fname, arcname, date_time, compression)\n+ with open(fname, \"rb\") as f:\n+ zf.writestr(zinfo, f.read())\n+\n \n # Fix PR #3569 - If you don't specify ZIPCOM and ZIPCOMSTR when creating\n # env, then it will ignore ZIPCOMSTR set afterwards.\n-zipAction = SCons.Action.Action(zip, \"$ZIPCOMSTR\", varlist=['ZIPCOMPRESSION'])\n+zipAction = SCons.Action.Action(zip_builder, \"$ZIPCOMSTR\",\n+ varlist=['ZIPCOMPRESSION', 'ZIPROOT', 'ZIP_OVERRIDE_TIMESTAMP'])\n \n ZipBuilder = SCons.Builder.Builder(action=SCons.Action.Action('$ZIPCOM', '$ZIPCOMSTR'),\n source_factory=SCons.Node.FS.Entry,\n", "issue": "Add timestamp override to Zip builder\nZip should have an option to override the timestamp on files in the archive instead of taking it from the file system. 
This can be useful for repeatable builds or for anonymizing the archive.\n", "before_files": [{"content": "\"\"\"SCons.Tool.zip\n\nTool-specific initialization for zip.\n\nThere normally shouldn't be any need to import this module directly.\nIt will usually be imported through the generic SCons.Tool.Tool()\nselection method.\n\n\"\"\"\n\n#\n# __COPYRIGHT__\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\n# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\n# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\n# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n#\n\n__revision__ = \"__FILE__ __REVISION__ __DATE__ __DEVELOPER__\"\n\nimport os.path\n\nimport SCons.Builder\nimport SCons.Defaults\nimport SCons.Node.FS\nimport SCons.Util\n\nimport zipfile\n\nzip_compression = zipfile.ZIP_DEFLATED\n\n\ndef zip(target, source, env):\n compression = env.get('ZIPCOMPRESSION', 0)\n zf = zipfile.ZipFile(str(target[0]), 'w', compression)\n for s in source:\n if s.isdir():\n for dirpath, dirnames, filenames in os.walk(str(s)):\n for fname in filenames:\n path = os.path.join(dirpath, fname)\n if os.path.isfile(path):\n zf.write(path, os.path.relpath(path, str(env.get('ZIPROOT', ''))))\n else:\n zf.write(str(s), os.path.relpath(str(s), str(env.get('ZIPROOT', ''))))\n zf.close()\n\n# Fix PR #3569 - If you don't specify ZIPCOM and ZIPCOMSTR when creating\n# env, then it will ignore ZIPCOMSTR set afterwards.\nzipAction = SCons.Action.Action(zip, \"$ZIPCOMSTR\", varlist=['ZIPCOMPRESSION'])\n\nZipBuilder = SCons.Builder.Builder(action=SCons.Action.Action('$ZIPCOM', '$ZIPCOMSTR'),\n source_factory=SCons.Node.FS.Entry,\n source_scanner=SCons.Defaults.DirScanner,\n suffix='$ZIPSUFFIX',\n multi=1)\n\n\ndef generate(env):\n \"\"\"Add Builders and construction variables for zip to an Environment.\"\"\"\n try:\n bld = env['BUILDERS']['Zip']\n except KeyError:\n bld = ZipBuilder\n env['BUILDERS']['Zip'] = bld\n\n env['ZIP'] = 'zip'\n env['ZIPFLAGS'] = SCons.Util.CLVar('')\n env['ZIPCOM'] = zipAction\n env['ZIPCOMPRESSION'] = zip_compression\n env['ZIPSUFFIX'] = '.zip'\n env['ZIPROOT'] = SCons.Util.CLVar('')\n\n\ndef exists(env):\n return True\n\n# Local Variables:\n# tab-width:4\n# indent-tabs-mode:nil\n# End:\n# vim: set expandtab tabstop=4 shiftwidth=4:\n", "path": "SCons/Tool/zip.py"}], "after_files": [{"content": "\"\"\"SCons.Tool.zip\n\nTool-specific initialization for zip.\n\nThere normally shouldn't be any need to import this module directly.\nIt will usually be imported through the generic SCons.Tool.Tool()\nselection method.\n\n\"\"\"\n\n# MIT License\n#\n# Copyright The SCons Foundation\n#\n# Permission is hereby granted, free of charge, to any 
person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\n# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\n# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\n# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nimport os\n\nimport SCons.Builder\nimport SCons.Defaults\nimport SCons.Node.FS\nimport SCons.Util\n\nimport time\nimport zipfile\n\n\nzip_compression = zipfile.ZIP_DEFLATED\n\n\ndef _create_zipinfo_for_file(fname, arcname, date_time, compression):\n st = os.stat(fname)\n if not date_time:\n mtime = time.localtime(st.st_mtime)\n date_time = mtime[0:6]\n zinfo = zipfile.ZipInfo(filename=arcname, date_time=date_time)\n zinfo.external_attr = (st.st_mode & 0xFFFF) << 16 # Unix attributes\n zinfo.compress_type = compression\n zinfo.file_size = st.st_size\n return zinfo\n\n\ndef zip_builder(target, source, env):\n compression = env.get('ZIPCOMPRESSION', zipfile.ZIP_STORED)\n zip_root = str(env.get('ZIPROOT', ''))\n date_time = env.get('ZIP_OVERRIDE_TIMESTAMP')\n\n files = []\n for s in source:\n if s.isdir():\n for dirpath, dirnames, filenames in os.walk(str(s)):\n for fname in filenames:\n path = os.path.join(dirpath, fname)\n if os.path.isfile(path):\n files.append(path)\n else:\n files.append(str(s))\n\n with zipfile.ZipFile(str(target[0]), 'w', compression) as zf:\n for fname in files:\n arcname = os.path.relpath(fname, zip_root)\n # TODO: Switch to ZipInfo.from_file when 3.6 becomes the base python version\n zinfo = _create_zipinfo_for_file(fname, arcname, date_time, compression)\n with open(fname, \"rb\") as f:\n zf.writestr(zinfo, f.read())\n\n\n# Fix PR #3569 - If you don't specify ZIPCOM and ZIPCOMSTR when creating\n# env, then it will ignore ZIPCOMSTR set afterwards.\nzipAction = SCons.Action.Action(zip_builder, \"$ZIPCOMSTR\",\n varlist=['ZIPCOMPRESSION', 'ZIPROOT', 'ZIP_OVERRIDE_TIMESTAMP'])\n\nZipBuilder = SCons.Builder.Builder(action=SCons.Action.Action('$ZIPCOM', '$ZIPCOMSTR'),\n source_factory=SCons.Node.FS.Entry,\n source_scanner=SCons.Defaults.DirScanner,\n suffix='$ZIPSUFFIX',\n multi=1)\n\n\ndef generate(env):\n \"\"\"Add Builders and construction variables for zip to an Environment.\"\"\"\n try:\n bld = env['BUILDERS']['Zip']\n except KeyError:\n bld = ZipBuilder\n env['BUILDERS']['Zip'] = bld\n\n env['ZIP'] = 'zip'\n env['ZIPFLAGS'] = SCons.Util.CLVar('')\n env['ZIPCOM'] = zipAction\n env['ZIPCOMPRESSION'] = zip_compression\n env['ZIPSUFFIX'] = '.zip'\n env['ZIPROOT'] = SCons.Util.CLVar('')\n\n\ndef exists(env):\n return True\n\n# Local Variables:\n# tab-width:4\n# indent-tabs-mode:nil\n# End:\n# vim: set expandtab tabstop=4 shiftwidth=4:\n", "path": "SCons/Tool/zip.py"}]} | 1,254 | 810 |
gh_patches_debug_23861 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-1669 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IL: Capture "Subject Matter" for `event`s
IL events have a Subject Matter field on the legislature's website, which we should capture, probably as the `description` value for that `event` object.
For example, look at one of the hearings linked from [here](http://www.ilga.gov/senate/committees/hearing.asp?CommitteeID=1927), like this:
<img width="864" alt="screen shot 2017-04-23 at 00 19 37" src="https://cloud.githubusercontent.com/assets/4959135/25310740/bb6f24b8-27ba-11e7-96e3-9b102819581c.png">
cc @jonrogoff, @justgosh
--- END ISSUE ---
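As a rough sketch (not the scraper's actual code), assuming the hearing page's labelled rows have already been parsed into a label → value dict — as the existing `metainf` handling in the file below does — the new field could be folded into the event description roughly like this:

```python
def build_description(committee_name, metainf):
    """Combine the committee name with the hearing's Subject Matter, if present.

    `metainf` is assumed to map row labels to values, e.g.
    {"Scheduled Date:": "...", "Location:": "...", "Subject Matter:": "..."}.
    """
    subject_matter = metainf.get("Subject Matter:", "").strip()
    if subject_matter:
        return "{}, {}".format(committee_name, subject_matter)
    return committee_name
```

The scraper would then pass this combined string as the `description` when constructing the `Event`. The helper name is illustrative only.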
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/il/events.py`
Content:
```
1 import datetime as dt
2 import re
3
4 from openstates.utils import LXMLMixin
5 from billy.scrape.events import Event, EventScraper
6
7 import lxml.html
8 import pytz
9
10 urls = {
11 "upper": "http://www.ilga.gov/senate/schedules/weeklyhearings.asp",
12 "lower": "http://www.ilga.gov/house/schedules/weeklyhearings.asp"
13 }
14
15
16 class ILEventScraper(EventScraper, LXMLMixin):
17 jurisdiction = 'il'
18 _tz = pytz.timezone('US/Eastern')
19
20 def scrape_page(self, url, session, chamber):
21 page = self.lxmlize(url)
22
23 ctty_name = page.xpath("//span[@class='heading']")[0].text_content()
24
25 tables = page.xpath("//table[@cellpadding='3']")
26 info = tables[0]
27 rows = info.xpath(".//tr")
28 metainf = {}
29 for row in rows:
30 tds = row.xpath(".//td")
31 key = tds[0].text_content().strip()
32 value = tds[1].text_content().strip()
33 metainf[key] = value
34
35 where = metainf['Location:']
36 description = ctty_name
37
38 datetime = metainf['Scheduled Date:']
39 datetime = re.sub("\s+", " ", datetime)
40 repl = {
41 "AM": " AM",
42 "PM": " PM" # Space shim.
43 }
44 for r in repl:
45 datetime = datetime.replace(r, repl[r])
46 datetime = dt.datetime.strptime(datetime, "%b %d, %Y %I:%M %p")
47
48 event = Event(session, datetime, 'committee:meeting',
49 description, location=where)
50 event.add_source(url)
51
52 if ctty_name.startswith('Hearing Notice For'):
53 ctty_name.replace('Hearing Notice For', '')
54 event.add_participant('host', ctty_name, 'committee', chamber=chamber)
55
56 bills = tables[1]
57 for bill in bills.xpath(".//tr")[1:]:
58 tds = bill.xpath(".//td")
59 if len(tds) < 4:
60 continue
61 # First, let's get the bill ID:
62 bill_id = tds[0].text_content()
63 event.add_related_bill(bill_id,
64 description=description,
65 type='consideration')
66
67 self.save_event(event)
68
69 def scrape(self, chamber, session):
70 try:
71 url = urls[chamber]
72 except KeyError:
73 return # Not for us.
74 page = self.lxmlize(url)
75 tables = page.xpath("//table[@width='550']")
76 for table in tables:
77 meetings = table.xpath(".//a")
78 for meeting in meetings:
79 self.scrape_page(meeting.attrib['href'],
80 session, chamber)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/il/events.py b/openstates/il/events.py
--- a/openstates/il/events.py
+++ b/openstates/il/events.py
@@ -4,7 +4,6 @@
from openstates.utils import LXMLMixin
from billy.scrape.events import Event, EventScraper
-import lxml.html
import pytz
urls = {
@@ -20,8 +19,8 @@
def scrape_page(self, url, session, chamber):
page = self.lxmlize(url)
- ctty_name = page.xpath("//span[@class='heading']")[0].text_content()
-
+ ctty_name = page.xpath("//span[@class='heading']")[0].text_content().replace(
+ "Hearing Notice For ", "")
tables = page.xpath("//table[@cellpadding='3']")
info = tables[0]
rows = info.xpath(".//tr")
@@ -33,7 +32,8 @@
metainf[key] = value
where = metainf['Location:']
- description = ctty_name
+ subject_matter = metainf['Subject Matter:']
+ description = "{}, {}".format(ctty_name, subject_matter)
datetime = metainf['Scheduled Date:']
datetime = re.sub("\s+", " ", datetime)
| {"golden_diff": "diff --git a/openstates/il/events.py b/openstates/il/events.py\n--- a/openstates/il/events.py\n+++ b/openstates/il/events.py\n@@ -4,7 +4,6 @@\n from openstates.utils import LXMLMixin\n from billy.scrape.events import Event, EventScraper\n \n-import lxml.html\n import pytz\n \n urls = {\n@@ -20,8 +19,8 @@\n def scrape_page(self, url, session, chamber):\n page = self.lxmlize(url)\n \n- ctty_name = page.xpath(\"//span[@class='heading']\")[0].text_content()\n-\n+ ctty_name = page.xpath(\"//span[@class='heading']\")[0].text_content().replace(\n+ \"Hearing Notice For \", \"\")\n tables = page.xpath(\"//table[@cellpadding='3']\")\n info = tables[0]\n rows = info.xpath(\".//tr\")\n@@ -33,7 +32,8 @@\n metainf[key] = value\n \n where = metainf['Location:']\n- description = ctty_name\n+ subject_matter = metainf['Subject Matter:']\n+ description = \"{}, {}\".format(ctty_name, subject_matter)\n \n datetime = metainf['Scheduled Date:']\n datetime = re.sub(\"\\s+\", \" \", datetime)\n", "issue": "IL: Capture \"Subject Matter\" for `event`s\nIL events have a Subject Matter field on the legislature's website, which we should capture, probably as the `description` value for that `event` object.\r\n\r\nFor example, look at one of the hearings linked from [here](http://www.ilga.gov/senate/committees/hearing.asp?CommitteeID=1927), like this:\r\n\r\n<img width=\"864\" alt=\"screen shot 2017-04-23 at 00 19 37\" src=\"https://cloud.githubusercontent.com/assets/4959135/25310740/bb6f24b8-27ba-11e7-96e3-9b102819581c.png\">\r\n\r\ncc @jonrogoff, @justgosh\n", "before_files": [{"content": "import datetime as dt\nimport re\n\nfrom openstates.utils import LXMLMixin\nfrom billy.scrape.events import Event, EventScraper\n\nimport lxml.html\nimport pytz\n\nurls = {\n \"upper\": \"http://www.ilga.gov/senate/schedules/weeklyhearings.asp\",\n \"lower\": \"http://www.ilga.gov/house/schedules/weeklyhearings.asp\"\n}\n\n\nclass ILEventScraper(EventScraper, LXMLMixin):\n jurisdiction = 'il'\n _tz = pytz.timezone('US/Eastern')\n\n def scrape_page(self, url, session, chamber):\n page = self.lxmlize(url)\n\n ctty_name = page.xpath(\"//span[@class='heading']\")[0].text_content()\n\n tables = page.xpath(\"//table[@cellpadding='3']\")\n info = tables[0]\n rows = info.xpath(\".//tr\")\n metainf = {}\n for row in rows:\n tds = row.xpath(\".//td\")\n key = tds[0].text_content().strip()\n value = tds[1].text_content().strip()\n metainf[key] = value\n\n where = metainf['Location:']\n description = ctty_name\n\n datetime = metainf['Scheduled Date:']\n datetime = re.sub(\"\\s+\", \" \", datetime)\n repl = {\n \"AM\": \" AM\",\n \"PM\": \" PM\" # Space shim.\n }\n for r in repl:\n datetime = datetime.replace(r, repl[r])\n datetime = dt.datetime.strptime(datetime, \"%b %d, %Y %I:%M %p\")\n\n event = Event(session, datetime, 'committee:meeting',\n description, location=where)\n event.add_source(url)\n\n if ctty_name.startswith('Hearing Notice For'):\n ctty_name.replace('Hearing Notice For', '')\n event.add_participant('host', ctty_name, 'committee', chamber=chamber)\n\n bills = tables[1]\n for bill in bills.xpath(\".//tr\")[1:]:\n tds = bill.xpath(\".//td\")\n if len(tds) < 4:\n continue\n # First, let's get the bill ID:\n bill_id = tds[0].text_content()\n event.add_related_bill(bill_id,\n description=description,\n type='consideration')\n\n self.save_event(event)\n\n def scrape(self, chamber, session):\n try:\n url = urls[chamber]\n except KeyError:\n return # Not for us.\n page = self.lxmlize(url)\n tables = 
page.xpath(\"//table[@width='550']\")\n for table in tables:\n meetings = table.xpath(\".//a\")\n for meeting in meetings:\n self.scrape_page(meeting.attrib['href'],\n session, chamber)\n", "path": "openstates/il/events.py"}], "after_files": [{"content": "import datetime as dt\nimport re\n\nfrom openstates.utils import LXMLMixin\nfrom billy.scrape.events import Event, EventScraper\n\nimport pytz\n\nurls = {\n \"upper\": \"http://www.ilga.gov/senate/schedules/weeklyhearings.asp\",\n \"lower\": \"http://www.ilga.gov/house/schedules/weeklyhearings.asp\"\n}\n\n\nclass ILEventScraper(EventScraper, LXMLMixin):\n jurisdiction = 'il'\n _tz = pytz.timezone('US/Eastern')\n\n def scrape_page(self, url, session, chamber):\n page = self.lxmlize(url)\n\n ctty_name = page.xpath(\"//span[@class='heading']\")[0].text_content().replace(\n \"Hearing Notice For \", \"\")\n tables = page.xpath(\"//table[@cellpadding='3']\")\n info = tables[0]\n rows = info.xpath(\".//tr\")\n metainf = {}\n for row in rows:\n tds = row.xpath(\".//td\")\n key = tds[0].text_content().strip()\n value = tds[1].text_content().strip()\n metainf[key] = value\n\n where = metainf['Location:']\n subject_matter = metainf['Subject Matter:']\n description = \"{}, {}\".format(ctty_name, subject_matter)\n\n datetime = metainf['Scheduled Date:']\n datetime = re.sub(\"\\s+\", \" \", datetime)\n repl = {\n \"AM\": \" AM\",\n \"PM\": \" PM\" # Space shim.\n }\n for r in repl:\n datetime = datetime.replace(r, repl[r])\n datetime = dt.datetime.strptime(datetime, \"%b %d, %Y %I:%M %p\")\n\n event = Event(session, datetime, 'committee:meeting',\n description, location=where)\n event.add_source(url)\n\n if ctty_name.startswith('Hearing Notice For'):\n ctty_name.replace('Hearing Notice For', '')\n event.add_participant('host', ctty_name, 'committee', chamber=chamber)\n\n bills = tables[1]\n for bill in bills.xpath(\".//tr\")[1:]:\n tds = bill.xpath(\".//td\")\n if len(tds) < 4:\n continue\n # First, let's get the bill ID:\n bill_id = tds[0].text_content()\n event.add_related_bill(bill_id,\n description=description,\n type='consideration')\n\n self.save_event(event)\n\n def scrape(self, chamber, session):\n try:\n url = urls[chamber]\n except KeyError:\n return # Not for us.\n page = self.lxmlize(url)\n tables = page.xpath(\"//table[@width='550']\")\n for table in tables:\n meetings = table.xpath(\".//a\")\n for meeting in meetings:\n self.scrape_page(meeting.attrib['href'],\n session, chamber)\n", "path": "openstates/il/events.py"}]} | 1,217 | 285 |
gh_patches_debug_13472 | rasdani/github-patches | git_diff | freqtrade__freqtrade-3798 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Telegram Balance not updating correctly after sell
## Describe your environment
* Operating system: Ubuntu 18.04
* Python Version: 2.7.17
* CCXT version: 1.34.7
* Freqtrade Version: develop-4c349342
## Describe the problem:
Balance is not updated after a sell on Bittrex live. All positions ever bought, even those that were since sold, are shown in Telegram via /balance as if they were actually held in the wallet, and they are summed up in the estimated value too. The stake currency is updated correctly and the bot trades with correct values.
### Steps to reproduce:
1. Live trade on Bittrex and have a successful buy and sell on the same position by the bot.
2. Call /balance in Telegram.
3. In the response all successfully sold positions show up as not pending, held and available.
### Observed Results:
* What happened?
Calling /balance in Telegram shows every asset ever bought (tested over 3 days), although they were successfully sold.
* What did you expect to happen?
/balance shows the actually held positions/wallet. For instance, if no positions are held, only the stake currency would be shown.
### Relevant code exceptions or logs
2020-09-22 11:42:52 freqtrade.worker: INFO - Bot heartbeat. PID=25032, version='develop-4c349342', state='RUNNING'
2020-09-22 11:43:13 freqtrade.rpc.telegram: INFO - Executing handler: _balance for chat_id: XXX
2020-09-22 11:43:15 freqtrade.wallets: INFO - Wallets synced.
2020-09-22 11:43:15 freqtrade.rpc.rpc: WARNING - Could not get rate for pair BTXCRD.
2020-09-22 11:43:52 freqtrade.worker: INFO - Bot heartbeat. PID=25032, version='develop-4c349342', state='RUNNING'
--- END ISSUE ---
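A plausible reading of the symptom (an assumption, to be checked against the code below) is that the live wallet sync only adds or overwrites entries from the exchange response and never drops currencies that are no longer reported, so fully sold assets linger in the cached wallet dict. A minimal sketch of pruning such stale entries:

```python
def prune_stale_wallets(wallets: dict, balances: dict) -> None:
    """Drop cached wallet entries for currencies absent from the latest exchange balances."""
    for currency in list(wallets):  # snapshot the keys so deleting while iterating is safe
        if currency not in balances:
            del wallets[currency]
```

The helper name and signature are illustrative only; the idea is simply to reconcile the cached dict against the freshly fetched balances on every live update.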
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/wallets.py`
Content:
```
1 # pragma pylint: disable=W0603
2 """ Wallet """
3
4 import logging
5 from typing import Any, Dict, NamedTuple
6
7 import arrow
8
9 from freqtrade.exchange import Exchange
10 from freqtrade.persistence import Trade
11
12 logger = logging.getLogger(__name__)
13
14
15 # wallet data structure
16 class Wallet(NamedTuple):
17 currency: str
18 free: float = 0
19 used: float = 0
20 total: float = 0
21
22
23 class Wallets:
24
25 def __init__(self, config: dict, exchange: Exchange) -> None:
26 self._config = config
27 self._exchange = exchange
28 self._wallets: Dict[str, Wallet] = {}
29 self.start_cap = config['dry_run_wallet']
30 self._last_wallet_refresh = 0
31 self.update()
32
33 def get_free(self, currency: str) -> float:
34 balance = self._wallets.get(currency)
35 if balance and balance.free:
36 return balance.free
37 else:
38 return 0
39
40 def get_used(self, currency: str) -> float:
41 balance = self._wallets.get(currency)
42 if balance and balance.used:
43 return balance.used
44 else:
45 return 0
46
47 def get_total(self, currency: str) -> float:
48 balance = self._wallets.get(currency)
49 if balance and balance.total:
50 return balance.total
51 else:
52 return 0
53
54 def _update_dry(self) -> None:
55 """
56 Update from database in dry-run mode
57 - Apply apply profits of closed trades on top of stake amount
58 - Subtract currently tied up stake_amount in open trades
59 - update balances for currencies currently in trades
60 """
61 # Recreate _wallets to reset closed trade balances
62 _wallets = {}
63 closed_trades = Trade.get_trades(Trade.is_open.is_(False)).all()
64 open_trades = Trade.get_trades(Trade.is_open.is_(True)).all()
65 tot_profit = sum([trade.calc_profit() for trade in closed_trades])
66 tot_in_trades = sum([trade.stake_amount for trade in open_trades])
67
68 current_stake = self.start_cap + tot_profit - tot_in_trades
69 _wallets[self._config['stake_currency']] = Wallet(
70 self._config['stake_currency'],
71 current_stake,
72 0,
73 current_stake
74 )
75
76 for trade in open_trades:
77 curr = self._exchange.get_pair_base_currency(trade.pair)
78 _wallets[curr] = Wallet(
79 curr,
80 trade.amount,
81 0,
82 trade.amount
83 )
84 self._wallets = _wallets
85
86 def _update_live(self) -> None:
87 balances = self._exchange.get_balances()
88
89 for currency in balances:
90 self._wallets[currency] = Wallet(
91 currency,
92 balances[currency].get('free', None),
93 balances[currency].get('used', None),
94 balances[currency].get('total', None)
95 )
96
97 def update(self, require_update: bool = True) -> None:
98 """
99 Updates wallets from the configured version.
100 By default, updates from the exchange.
101 Update-skipping should only be used for user-invoked /balance calls, since
102 for trading operations, the latest balance is needed.
103 :param require_update: Allow skipping an update if balances were recently refreshed
104 """
105 if (require_update or (self._last_wallet_refresh + 3600 < arrow.utcnow().timestamp)):
106 if self._config['dry_run']:
107 self._update_dry()
108 else:
109 self._update_live()
110 logger.info('Wallets synced.')
111 self._last_wallet_refresh = arrow.utcnow().timestamp
112
113 def get_all_balances(self) -> Dict[str, Any]:
114 return self._wallets
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/wallets.py b/freqtrade/wallets.py
--- a/freqtrade/wallets.py
+++ b/freqtrade/wallets.py
@@ -2,6 +2,7 @@
""" Wallet """
import logging
+from copy import deepcopy
from typing import Any, Dict, NamedTuple
import arrow
@@ -93,6 +94,10 @@
balances[currency].get('used', None),
balances[currency].get('total', None)
)
+ # Remove currencies no longer in get_balances output
+ for currency in deepcopy(self._wallets):
+ if currency not in balances:
+ del self._wallets[currency]
def update(self, require_update: bool = True) -> None:
"""
| {"golden_diff": "diff --git a/freqtrade/wallets.py b/freqtrade/wallets.py\n--- a/freqtrade/wallets.py\n+++ b/freqtrade/wallets.py\n@@ -2,6 +2,7 @@\n \"\"\" Wallet \"\"\"\n \n import logging\n+from copy import deepcopy\n from typing import Any, Dict, NamedTuple\n \n import arrow\n@@ -93,6 +94,10 @@\n balances[currency].get('used', None),\n balances[currency].get('total', None)\n )\n+ # Remove currencies no longer in get_balances output\n+ for currency in deepcopy(self._wallets):\n+ if currency not in balances:\n+ del self._wallets[currency]\n \n def update(self, require_update: bool = True) -> None:\n \"\"\"\n", "issue": "Telegram Balance not updating correctly after sell\n## Describe your environment\r\n\r\n * Operating system: Ubuntu 18.04\r\n * Python Version: 2.7.17\r\n * CCXT version: 1.34.7\r\n * Freqtrade Version: develop-4c349342\r\n \r\n## Describe the problem:\r\n\r\nBalance is not updated after sell on Bittrex live. All positions ever bought, but where sold, are shown in Telegram via /balance, as if they where actually held and in the wallet and are summed up in the estimated value too. The stake currency is updated correctly and the bot trades with correct values.\r\n\r\n### Steps to reproduce:\r\n\r\n 1. Live trade on Bittrex and have a successful buy and sell on the same position by the bot.\r\n 2. Call /balance in Telegram.\r\n 3. In the response all successfully sold positions show up as not pending, held and available.\r\n \r\n### Observed Results:\r\n\r\n * What happened?\r\n Calling /balance in Telegram shows all ever bought assets (tested for 3 Days) although they where successfully sold.\r\n * What did you expect to happen?\r\n /balance shows actual held positions/wallet. For instance, If no positions are held, the stake currency only would be shown.\r\n\r\n### Relevant code exceptions or logs\r\n\r\n2020-09-22 11:42:52 freqtrade.worker: INFO - Bot heartbeat. PID=25032, version='develop-4c349342', state='RUNNING'\r\n2020-09-22 11:43:13 freqtrade.rpc.telegram: INFO - Executing handler: _balance for chat_id: XXX\r\n2020-09-22 11:43:15 freqtrade.wallets: INFO - Wallets synced.\r\n2020-09-22 11:43:15 freqtrade.rpc.rpc: WARNING - Could not get rate for pair BTXCRD.\r\n2020-09-22 11:43:52 freqtrade.worker: INFO - Bot heartbeat. 
PID=25032, version='develop-4c349342', state='RUNNING'\r\n\n", "before_files": [{"content": "# pragma pylint: disable=W0603\n\"\"\" Wallet \"\"\"\n\nimport logging\nfrom typing import Any, Dict, NamedTuple\n\nimport arrow\n\nfrom freqtrade.exchange import Exchange\nfrom freqtrade.persistence import Trade\n\nlogger = logging.getLogger(__name__)\n\n\n# wallet data structure\nclass Wallet(NamedTuple):\n currency: str\n free: float = 0\n used: float = 0\n total: float = 0\n\n\nclass Wallets:\n\n def __init__(self, config: dict, exchange: Exchange) -> None:\n self._config = config\n self._exchange = exchange\n self._wallets: Dict[str, Wallet] = {}\n self.start_cap = config['dry_run_wallet']\n self._last_wallet_refresh = 0\n self.update()\n\n def get_free(self, currency: str) -> float:\n balance = self._wallets.get(currency)\n if balance and balance.free:\n return balance.free\n else:\n return 0\n\n def get_used(self, currency: str) -> float:\n balance = self._wallets.get(currency)\n if balance and balance.used:\n return balance.used\n else:\n return 0\n\n def get_total(self, currency: str) -> float:\n balance = self._wallets.get(currency)\n if balance and balance.total:\n return balance.total\n else:\n return 0\n\n def _update_dry(self) -> None:\n \"\"\"\n Update from database in dry-run mode\n - Apply apply profits of closed trades on top of stake amount\n - Subtract currently tied up stake_amount in open trades\n - update balances for currencies currently in trades\n \"\"\"\n # Recreate _wallets to reset closed trade balances\n _wallets = {}\n closed_trades = Trade.get_trades(Trade.is_open.is_(False)).all()\n open_trades = Trade.get_trades(Trade.is_open.is_(True)).all()\n tot_profit = sum([trade.calc_profit() for trade in closed_trades])\n tot_in_trades = sum([trade.stake_amount for trade in open_trades])\n\n current_stake = self.start_cap + tot_profit - tot_in_trades\n _wallets[self._config['stake_currency']] = Wallet(\n self._config['stake_currency'],\n current_stake,\n 0,\n current_stake\n )\n\n for trade in open_trades:\n curr = self._exchange.get_pair_base_currency(trade.pair)\n _wallets[curr] = Wallet(\n curr,\n trade.amount,\n 0,\n trade.amount\n )\n self._wallets = _wallets\n\n def _update_live(self) -> None:\n balances = self._exchange.get_balances()\n\n for currency in balances:\n self._wallets[currency] = Wallet(\n currency,\n balances[currency].get('free', None),\n balances[currency].get('used', None),\n balances[currency].get('total', None)\n )\n\n def update(self, require_update: bool = True) -> None:\n \"\"\"\n Updates wallets from the configured version.\n By default, updates from the exchange.\n Update-skipping should only be used for user-invoked /balance calls, since\n for trading operations, the latest balance is needed.\n :param require_update: Allow skipping an update if balances were recently refreshed\n \"\"\"\n if (require_update or (self._last_wallet_refresh + 3600 < arrow.utcnow().timestamp)):\n if self._config['dry_run']:\n self._update_dry()\n else:\n self._update_live()\n logger.info('Wallets synced.')\n self._last_wallet_refresh = arrow.utcnow().timestamp\n\n def get_all_balances(self) -> Dict[str, Any]:\n return self._wallets\n", "path": "freqtrade/wallets.py"}], "after_files": [{"content": "# pragma pylint: disable=W0603\n\"\"\" Wallet \"\"\"\n\nimport logging\nfrom copy import deepcopy\nfrom typing import Any, Dict, NamedTuple\n\nimport arrow\n\nfrom freqtrade.exchange import Exchange\nfrom freqtrade.persistence import Trade\n\nlogger = 
logging.getLogger(__name__)\n\n\n# wallet data structure\nclass Wallet(NamedTuple):\n currency: str\n free: float = 0\n used: float = 0\n total: float = 0\n\n\nclass Wallets:\n\n def __init__(self, config: dict, exchange: Exchange) -> None:\n self._config = config\n self._exchange = exchange\n self._wallets: Dict[str, Wallet] = {}\n self.start_cap = config['dry_run_wallet']\n self._last_wallet_refresh = 0\n self.update()\n\n def get_free(self, currency: str) -> float:\n balance = self._wallets.get(currency)\n if balance and balance.free:\n return balance.free\n else:\n return 0\n\n def get_used(self, currency: str) -> float:\n balance = self._wallets.get(currency)\n if balance and balance.used:\n return balance.used\n else:\n return 0\n\n def get_total(self, currency: str) -> float:\n balance = self._wallets.get(currency)\n if balance and balance.total:\n return balance.total\n else:\n return 0\n\n def _update_dry(self) -> None:\n \"\"\"\n Update from database in dry-run mode\n - Apply apply profits of closed trades on top of stake amount\n - Subtract currently tied up stake_amount in open trades\n - update balances for currencies currently in trades\n \"\"\"\n # Recreate _wallets to reset closed trade balances\n _wallets = {}\n closed_trades = Trade.get_trades(Trade.is_open.is_(False)).all()\n open_trades = Trade.get_trades(Trade.is_open.is_(True)).all()\n tot_profit = sum([trade.calc_profit() for trade in closed_trades])\n tot_in_trades = sum([trade.stake_amount for trade in open_trades])\n\n current_stake = self.start_cap + tot_profit - tot_in_trades\n _wallets[self._config['stake_currency']] = Wallet(\n self._config['stake_currency'],\n current_stake,\n 0,\n current_stake\n )\n\n for trade in open_trades:\n curr = self._exchange.get_pair_base_currency(trade.pair)\n _wallets[curr] = Wallet(\n curr,\n trade.amount,\n 0,\n trade.amount\n )\n self._wallets = _wallets\n\n def _update_live(self) -> None:\n balances = self._exchange.get_balances()\n\n for currency in balances:\n self._wallets[currency] = Wallet(\n currency,\n balances[currency].get('free', None),\n balances[currency].get('used', None),\n balances[currency].get('total', None)\n )\n # Remove currencies no longer in get_balances output\n for currency in deepcopy(self._wallets):\n if currency not in balances:\n del self._wallets[currency]\n\n def update(self, require_update: bool = True) -> None:\n \"\"\"\n Updates wallets from the configured version.\n By default, updates from the exchange.\n Update-skipping should only be used for user-invoked /balance calls, since\n for trading operations, the latest balance is needed.\n :param require_update: Allow skipping an update if balances were recently refreshed\n \"\"\"\n if (require_update or (self._last_wallet_refresh + 3600 < arrow.utcnow().timestamp)):\n if self._config['dry_run']:\n self._update_dry()\n else:\n self._update_live()\n logger.info('Wallets synced.')\n self._last_wallet_refresh = arrow.utcnow().timestamp\n\n def get_all_balances(self) -> Dict[str, Any]:\n return self._wallets\n", "path": "freqtrade/wallets.py"}]} | 1,821 | 173 |
gh_patches_debug_16602 | rasdani/github-patches | git_diff | svthalia__concrexit-2500 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Partner page partner blocks keep showing the loading animation
### Describe the bug
Partner page partner blocks keep showing the loading animation.
### How to reproduce
Steps to reproduce the behaviour:
1. Go to [the Thalia partner page](https://thalia.nu/career/).
2. Scroll down and see the loading animation on the partner blocks.
### Expected behaviour
The loading animation should stop when the partners have been loaded.
--- END ISSUE ---
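One plausible cause (an assumption, not confirmed by the report itself) is that the shared grid-item template always renders its loading spinner, even for cards such as the partner blocks that are fully rendered server-side and never receive the script that would normally clear it. A sketch of threading an opt-out flag through the inclusion tag — the `show_loading_animation` name is illustrative:

```python
from django import template

register = template.Library()


@register.inclusion_tag("includes/grid_item.html")
def grid_item(title=None, url=None, image_url=None, show_loading_animation=True):
    # Cards that arrive fully populated (e.g. partner blocks) can pass False
    # so the template skips the spinner markup entirely.
    return {
        "title": title,
        "url": url,
        "image_url": image_url,
        "show_loading_animation": show_loading_animation,
    }
```

A partner card helper would then call `grid_item(..., show_loading_animation=False)`, and the template would emit the spinner only when the flag is true.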
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/thaliawebsite/templatetags/grid_item.py`
Content:
```
1 from django import template
2
3 register = template.Library()
4
5
6 @register.inclusion_tag("includes/grid_item.html")
7 def grid_item(
8 title=None,
9 meta_text="",
10 url=None,
11 image_url=None,
12 ribbon=None,
13 class_name="",
14 anchor_attrs="",
15 ):
16 return {
17 "title": title,
18 "url": url,
19 "image_url": image_url,
20 "meta_text": meta_text,
21 "ribbon": ribbon,
22 "class_name": class_name,
23 "anchor_attrs": anchor_attrs,
24 }
25
```
Path: `website/partners/templatetags/partner_cards.py`
Content:
```
1 from django import template
2 from django.conf import settings
3 from django.template.defaultfilters import striptags, truncatechars
4
5 from thaliawebsite.templatetags.bleach_tags import bleach
6 from thaliawebsite.templatetags.grid_item import grid_item
7 from utils.media.services import get_thumbnail_url
8 from partners.models import Vacancy
9
10 register = template.Library()
11
12
13 @register.inclusion_tag("includes/grid_item.html")
14 def partner_card(partner):
15 """Return grid item showing partner."""
16 image_url = ""
17 if partner.logo:
18 image_url = get_thumbnail_url(
19 partner.logo, settings.THUMBNAIL_SIZES["medium"], fit=False
20 )
21
22 meta_text = truncatechars(bleach(striptags(partner.company_profile)), 80)
23
24 return grid_item(
25 title=partner.name,
26 meta_text='<p class="px-2 d-none d-md-block">{}</p>'.format(meta_text),
27 url=partner.get_absolute_url,
28 image_url=image_url,
29 class_name="partner-card contain-logo",
30 )
31
32
33 @register.inclusion_tag("includes/grid_item.html")
34 def partner_image_card(image):
35 """Return grid item showing partner image."""
36 class_name = "partner-image-card"
37 image_url = get_thumbnail_url(image, settings.THUMBNAIL_SIZES["medium"])
38
39 return grid_item(
40 title="",
41 url=get_thumbnail_url(image, settings.THUMBNAIL_SIZES["large"], fit=False),
42 image_url=image_url,
43 class_name=class_name,
44 anchor_attrs='data-fancybox="gallery"',
45 )
46
47
48 @register.inclusion_tag("partners/vacancy_card.html")
49 def vacancy_card(vacancy):
50 """Return grid item showing vacancy."""
51 image_url = None
52 if vacancy.get_company_logo():
53 image_url = get_thumbnail_url(
54 vacancy.get_company_logo(), settings.THUMBNAIL_SIZES["medium"], fit=False
55 )
56
57 description = truncatechars(bleach(striptags(vacancy.description)), 300)
58 extra_class = "external-vacancy"
59 url = "#vacancy-{}".format(vacancy.id)
60 keywords = vacancy.keywords.split(",")
61 location = vacancy.location
62 if vacancy.partner and vacancy.partner.is_active:
63 url = "{}#vacancy-{}".format(vacancy.partner.get_absolute_url(), vacancy.id)
64 extra_class = ""
65
66 return {
67 "title": vacancy.title,
68 "company_name": vacancy.get_company_name(),
69 "image_url": image_url,
70 "description": description,
71 "location": location,
72 "keywords": keywords,
73 "url": url,
74 "extra_class": extra_class,
75 }
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/partners/templatetags/partner_cards.py b/website/partners/templatetags/partner_cards.py
--- a/website/partners/templatetags/partner_cards.py
+++ b/website/partners/templatetags/partner_cards.py
@@ -27,6 +27,7 @@
url=partner.get_absolute_url,
image_url=image_url,
class_name="partner-card contain-logo",
+ show_loading_animation=False,
)
diff --git a/website/thaliawebsite/templatetags/grid_item.py b/website/thaliawebsite/templatetags/grid_item.py
--- a/website/thaliawebsite/templatetags/grid_item.py
+++ b/website/thaliawebsite/templatetags/grid_item.py
@@ -12,6 +12,7 @@
ribbon=None,
class_name="",
anchor_attrs="",
+ show_loading_animation=True,
):
return {
"title": title,
@@ -21,4 +22,5 @@
"ribbon": ribbon,
"class_name": class_name,
"anchor_attrs": anchor_attrs,
+ "show_loading_animation": show_loading_animation,
}
| {"golden_diff": "diff --git a/website/partners/templatetags/partner_cards.py b/website/partners/templatetags/partner_cards.py\n--- a/website/partners/templatetags/partner_cards.py\n+++ b/website/partners/templatetags/partner_cards.py\n@@ -27,6 +27,7 @@\n url=partner.get_absolute_url,\n image_url=image_url,\n class_name=\"partner-card contain-logo\",\n+ show_loading_animation=False,\n )\n \n \ndiff --git a/website/thaliawebsite/templatetags/grid_item.py b/website/thaliawebsite/templatetags/grid_item.py\n--- a/website/thaliawebsite/templatetags/grid_item.py\n+++ b/website/thaliawebsite/templatetags/grid_item.py\n@@ -12,6 +12,7 @@\n ribbon=None,\n class_name=\"\",\n anchor_attrs=\"\",\n+ show_loading_animation=True,\n ):\n return {\n \"title\": title,\n@@ -21,4 +22,5 @@\n \"ribbon\": ribbon,\n \"class_name\": class_name,\n \"anchor_attrs\": anchor_attrs,\n+ \"show_loading_animation\": show_loading_animation,\n }\n", "issue": "Partner page partner blocks keep showing the loading animation\n### Describe the bug\r\nPartner page partner blocks keep showing the loading animation.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to [the Thalia partner page](https://thalia.nu/career/).\r\n2. Scroll down and see the loading animation on the partner blocks.\r\n\r\n### Expected behaviour\r\nThe loading animation should stop when the partners have been loaded.\r\n\n", "before_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(\"includes/grid_item.html\")\ndef grid_item(\n title=None,\n meta_text=\"\",\n url=None,\n image_url=None,\n ribbon=None,\n class_name=\"\",\n anchor_attrs=\"\",\n):\n return {\n \"title\": title,\n \"url\": url,\n \"image_url\": image_url,\n \"meta_text\": meta_text,\n \"ribbon\": ribbon,\n \"class_name\": class_name,\n \"anchor_attrs\": anchor_attrs,\n }\n", "path": "website/thaliawebsite/templatetags/grid_item.py"}, {"content": "from django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import striptags, truncatechars\n\nfrom thaliawebsite.templatetags.bleach_tags import bleach\nfrom thaliawebsite.templatetags.grid_item import grid_item\nfrom utils.media.services import get_thumbnail_url\nfrom partners.models import Vacancy\n\nregister = template.Library()\n\n\[email protected]_tag(\"includes/grid_item.html\")\ndef partner_card(partner):\n \"\"\"Return grid item showing partner.\"\"\"\n image_url = \"\"\n if partner.logo:\n image_url = get_thumbnail_url(\n partner.logo, settings.THUMBNAIL_SIZES[\"medium\"], fit=False\n )\n\n meta_text = truncatechars(bleach(striptags(partner.company_profile)), 80)\n\n return grid_item(\n title=partner.name,\n meta_text='<p class=\"px-2 d-none d-md-block\">{}</p>'.format(meta_text),\n url=partner.get_absolute_url,\n image_url=image_url,\n class_name=\"partner-card contain-logo\",\n )\n\n\[email protected]_tag(\"includes/grid_item.html\")\ndef partner_image_card(image):\n \"\"\"Return grid item showing partner image.\"\"\"\n class_name = \"partner-image-card\"\n image_url = get_thumbnail_url(image, settings.THUMBNAIL_SIZES[\"medium\"])\n\n return grid_item(\n title=\"\",\n url=get_thumbnail_url(image, settings.THUMBNAIL_SIZES[\"large\"], fit=False),\n image_url=image_url,\n class_name=class_name,\n anchor_attrs='data-fancybox=\"gallery\"',\n )\n\n\[email protected]_tag(\"partners/vacancy_card.html\")\ndef vacancy_card(vacancy):\n \"\"\"Return grid item showing vacancy.\"\"\"\n image_url = None\n if 
vacancy.get_company_logo():\n image_url = get_thumbnail_url(\n vacancy.get_company_logo(), settings.THUMBNAIL_SIZES[\"medium\"], fit=False\n )\n\n description = truncatechars(bleach(striptags(vacancy.description)), 300)\n extra_class = \"external-vacancy\"\n url = \"#vacancy-{}\".format(vacancy.id)\n keywords = vacancy.keywords.split(\",\")\n location = vacancy.location\n if vacancy.partner and vacancy.partner.is_active:\n url = \"{}#vacancy-{}\".format(vacancy.partner.get_absolute_url(), vacancy.id)\n extra_class = \"\"\n\n return {\n \"title\": vacancy.title,\n \"company_name\": vacancy.get_company_name(),\n \"image_url\": image_url,\n \"description\": description,\n \"location\": location,\n \"keywords\": keywords,\n \"url\": url,\n \"extra_class\": extra_class,\n }\n", "path": "website/partners/templatetags/partner_cards.py"}], "after_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(\"includes/grid_item.html\")\ndef grid_item(\n title=None,\n meta_text=\"\",\n url=None,\n image_url=None,\n ribbon=None,\n class_name=\"\",\n anchor_attrs=\"\",\n show_loading_animation=True,\n):\n return {\n \"title\": title,\n \"url\": url,\n \"image_url\": image_url,\n \"meta_text\": meta_text,\n \"ribbon\": ribbon,\n \"class_name\": class_name,\n \"anchor_attrs\": anchor_attrs,\n \"show_loading_animation\": show_loading_animation,\n }\n", "path": "website/thaliawebsite/templatetags/grid_item.py"}, {"content": "from django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import striptags, truncatechars\n\nfrom thaliawebsite.templatetags.bleach_tags import bleach\nfrom thaliawebsite.templatetags.grid_item import grid_item\nfrom utils.media.services import get_thumbnail_url\nfrom partners.models import Vacancy\n\nregister = template.Library()\n\n\[email protected]_tag(\"includes/grid_item.html\")\ndef partner_card(partner):\n \"\"\"Return grid item showing partner.\"\"\"\n image_url = \"\"\n if partner.logo:\n image_url = get_thumbnail_url(\n partner.logo, settings.THUMBNAIL_SIZES[\"medium\"], fit=False\n )\n\n meta_text = truncatechars(bleach(striptags(partner.company_profile)), 80)\n\n return grid_item(\n title=partner.name,\n meta_text='<p class=\"px-2 d-none d-md-block\">{}</p>'.format(meta_text),\n url=partner.get_absolute_url,\n image_url=image_url,\n class_name=\"partner-card contain-logo\",\n show_loading_animation=False,\n )\n\n\[email protected]_tag(\"includes/grid_item.html\")\ndef partner_image_card(image):\n \"\"\"Return grid item showing partner image.\"\"\"\n class_name = \"partner-image-card\"\n image_url = get_thumbnail_url(image, settings.THUMBNAIL_SIZES[\"medium\"])\n\n return grid_item(\n title=\"\",\n url=get_thumbnail_url(image, settings.THUMBNAIL_SIZES[\"large\"], fit=False),\n image_url=image_url,\n class_name=class_name,\n anchor_attrs='data-fancybox=\"gallery\"',\n )\n\n\[email protected]_tag(\"partners/vacancy_card.html\")\ndef vacancy_card(vacancy):\n \"\"\"Return grid item showing vacancy.\"\"\"\n image_url = None\n if vacancy.get_company_logo():\n image_url = get_thumbnail_url(\n vacancy.get_company_logo(), settings.THUMBNAIL_SIZES[\"medium\"], fit=False\n )\n\n description = truncatechars(bleach(striptags(vacancy.description)), 300)\n extra_class = \"external-vacancy\"\n url = \"#vacancy-{}\".format(vacancy.id)\n keywords = vacancy.keywords.split(\",\")\n location = vacancy.location\n if vacancy.partner and vacancy.partner.is_active:\n url = 
\"{}#vacancy-{}\".format(vacancy.partner.get_absolute_url(), vacancy.id)\n extra_class = \"\"\n\n return {\n \"title\": vacancy.title,\n \"company_name\": vacancy.get_company_name(),\n \"image_url\": image_url,\n \"description\": description,\n \"location\": location,\n \"keywords\": keywords,\n \"url\": url,\n \"extra_class\": extra_class,\n }\n", "path": "website/partners/templatetags/partner_cards.py"}]} | 1,249 | 267 |
gh_patches_debug_19699 | rasdani/github-patches | git_diff | ibis-project__ibis-3990 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug: comparing bool expr to bool literal generates invalid sql
It looks like one of the recent refactorings may have broken comparisons of boolean to boolean:
This test:
```python
def test_bool_bool():
import ibis
from ibis.backends.base.sql.compiler import Compiler
t = ibis.table(
[('dest', 'string'), ('origin', 'string'), ('arrdelay', 'int32')],
'airlines',
)
x = ibis.literal(True)
top = t[(t.dest.cast('int64') == 0) == x]
result = Compiler.to_sql(top)
print(result)
```
produces this SQL:
```sql
SELECT *
FROM airlines
WHERE CAST(`dest` AS bigint) = 0 = TRUE
```
--- END ISSUE ---
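For contrast, once the inner comparison is treated as an expression that needs parentheses, the emitted query would presumably read:

```sql
SELECT *
FROM airlines
WHERE (CAST(`dest` AS bigint) = 0) = TRUE
```

which points at the parenthesization logic for the equality sub-expression (for example when it is wrapped in an alias before being compared again) rather than at the comparison operator itself.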
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ibis/backends/base/sql/registry/helpers.py`
Content:
```
1 import ibis.common.exceptions as com
2 import ibis.expr.datatypes as dt
3 import ibis.expr.operations as ops
4 import ibis.expr.types as ir
5 from ibis.backends.base.sql.registry import identifiers
6
7
8 def format_call(translator, func, *args):
9 formatted_args = []
10 for arg in args:
11 fmt_arg = translator.translate(arg)
12 formatted_args.append(fmt_arg)
13
14 return '{}({})'.format(func, ', '.join(formatted_args))
15
16
17 def quote_identifier(name, quotechar='`', force=False):
18 """Add quotes to the `name` identifier if needed."""
19 if force or name.count(' ') or name in identifiers.base_identifiers:
20 return '{0}{1}{0}'.format(quotechar, name)
21 else:
22 return name
23
24
25 def needs_parens(op):
26 if isinstance(op, ir.Expr):
27 op = op.op()
28 op_klass = type(op)
29 # function calls don't need parens
30 return op_klass in {
31 ops.Negate,
32 ops.IsNull,
33 ops.NotNull,
34 ops.Add,
35 ops.Subtract,
36 ops.Multiply,
37 ops.Divide,
38 ops.Power,
39 ops.Modulus,
40 ops.Equals,
41 ops.NotEquals,
42 ops.GreaterEqual,
43 ops.Greater,
44 ops.LessEqual,
45 ops.Less,
46 ops.IdenticalTo,
47 ops.And,
48 ops.Or,
49 ops.Xor,
50 }
51
52
53 parenthesize = '({})'.format
54
55
56 sql_type_names = {
57 'int8': 'tinyint',
58 'int16': 'smallint',
59 'int32': 'int',
60 'int64': 'bigint',
61 'float': 'float',
62 'float32': 'float',
63 'double': 'double',
64 'float64': 'double',
65 'string': 'string',
66 'boolean': 'boolean',
67 'timestamp': 'timestamp',
68 'decimal': 'decimal',
69 }
70
71
72 def type_to_sql_string(tval):
73 if isinstance(tval, dt.Decimal):
74 return f'decimal({tval.precision}, {tval.scale})'
75 name = tval.name.lower()
76 try:
77 return sql_type_names[name]
78 except KeyError:
79 raise com.UnsupportedBackendType(name)
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ibis/backends/base/sql/registry/helpers.py b/ibis/backends/base/sql/registry/helpers.py
--- a/ibis/backends/base/sql/registry/helpers.py
+++ b/ibis/backends/base/sql/registry/helpers.py
@@ -22,32 +22,34 @@
return name
-def needs_parens(op):
- if isinstance(op, ir.Expr):
- op = op.op()
- op_klass = type(op)
- # function calls don't need parens
- return op_klass in {
- ops.Negate,
- ops.IsNull,
- ops.NotNull,
- ops.Add,
- ops.Subtract,
- ops.Multiply,
- ops.Divide,
- ops.Power,
- ops.Modulus,
- ops.Equals,
- ops.NotEquals,
- ops.GreaterEqual,
- ops.Greater,
- ops.LessEqual,
- ops.Less,
- ops.IdenticalTo,
- ops.And,
- ops.Or,
- ops.Xor,
- }
+_NEEDS_PARENS_OPS = (
+ ops.Negate,
+ ops.IsNull,
+ ops.NotNull,
+ ops.Add,
+ ops.Subtract,
+ ops.Multiply,
+ ops.Divide,
+ ops.Power,
+ ops.Modulus,
+ ops.Equals,
+ ops.NotEquals,
+ ops.GreaterEqual,
+ ops.Greater,
+ ops.LessEqual,
+ ops.Less,
+ ops.IdenticalTo,
+ ops.And,
+ ops.Or,
+ ops.Xor,
+)
+
+
+def needs_parens(expr: ir.Expr):
+ op = expr.op()
+ if isinstance(op, ops.Alias):
+ op = op.arg.op()
+ return isinstance(op, _NEEDS_PARENS_OPS)
parenthesize = '({})'.format
| {"golden_diff": "diff --git a/ibis/backends/base/sql/registry/helpers.py b/ibis/backends/base/sql/registry/helpers.py\n--- a/ibis/backends/base/sql/registry/helpers.py\n+++ b/ibis/backends/base/sql/registry/helpers.py\n@@ -22,32 +22,34 @@\n return name\n \n \n-def needs_parens(op):\n- if isinstance(op, ir.Expr):\n- op = op.op()\n- op_klass = type(op)\n- # function calls don't need parens\n- return op_klass in {\n- ops.Negate,\n- ops.IsNull,\n- ops.NotNull,\n- ops.Add,\n- ops.Subtract,\n- ops.Multiply,\n- ops.Divide,\n- ops.Power,\n- ops.Modulus,\n- ops.Equals,\n- ops.NotEquals,\n- ops.GreaterEqual,\n- ops.Greater,\n- ops.LessEqual,\n- ops.Less,\n- ops.IdenticalTo,\n- ops.And,\n- ops.Or,\n- ops.Xor,\n- }\n+_NEEDS_PARENS_OPS = (\n+ ops.Negate,\n+ ops.IsNull,\n+ ops.NotNull,\n+ ops.Add,\n+ ops.Subtract,\n+ ops.Multiply,\n+ ops.Divide,\n+ ops.Power,\n+ ops.Modulus,\n+ ops.Equals,\n+ ops.NotEquals,\n+ ops.GreaterEqual,\n+ ops.Greater,\n+ ops.LessEqual,\n+ ops.Less,\n+ ops.IdenticalTo,\n+ ops.And,\n+ ops.Or,\n+ ops.Xor,\n+)\n+\n+\n+def needs_parens(expr: ir.Expr):\n+ op = expr.op()\n+ if isinstance(op, ops.Alias):\n+ op = op.arg.op()\n+ return isinstance(op, _NEEDS_PARENS_OPS)\n \n \n parenthesize = '({})'.format\n", "issue": "bug: comparing bool expr to bool literal generates invalid sql\nIt looks like one of the recent refactorings may have broken comparisons of boolean to boolean:\r\n\r\nThis test:\r\n\r\n```python\r\ndef test_bool_bool():\r\n import ibis\r\n from ibis.backends.base.sql.compiler import Compiler\r\n\r\n t = ibis.table(\r\n [('dest', 'string'), ('origin', 'string'), ('arrdelay', 'int32')],\r\n 'airlines',\r\n )\r\n\r\n x = ibis.literal(True)\r\n top = t[(t.dest.cast('int64') == 0) == x]\r\n\r\n result = Compiler.to_sql(top)\r\n print(result)\r\n```\r\n\r\nproduces this SQL:\r\n\r\n```sql\r\nSELECT *\r\nFROM airlines\r\nWHERE CAST(`dest` AS bigint) = 0 = TRUE\r\n```\n", "before_files": [{"content": "import ibis.common.exceptions as com\nimport ibis.expr.datatypes as dt\nimport ibis.expr.operations as ops\nimport ibis.expr.types as ir\nfrom ibis.backends.base.sql.registry import identifiers\n\n\ndef format_call(translator, func, *args):\n formatted_args = []\n for arg in args:\n fmt_arg = translator.translate(arg)\n formatted_args.append(fmt_arg)\n\n return '{}({})'.format(func, ', '.join(formatted_args))\n\n\ndef quote_identifier(name, quotechar='`', force=False):\n \"\"\"Add quotes to the `name` identifier if needed.\"\"\"\n if force or name.count(' ') or name in identifiers.base_identifiers:\n return '{0}{1}{0}'.format(quotechar, name)\n else:\n return name\n\n\ndef needs_parens(op):\n if isinstance(op, ir.Expr):\n op = op.op()\n op_klass = type(op)\n # function calls don't need parens\n return op_klass in {\n ops.Negate,\n ops.IsNull,\n ops.NotNull,\n ops.Add,\n ops.Subtract,\n ops.Multiply,\n ops.Divide,\n ops.Power,\n ops.Modulus,\n ops.Equals,\n ops.NotEquals,\n ops.GreaterEqual,\n ops.Greater,\n ops.LessEqual,\n ops.Less,\n ops.IdenticalTo,\n ops.And,\n ops.Or,\n ops.Xor,\n }\n\n\nparenthesize = '({})'.format\n\n\nsql_type_names = {\n 'int8': 'tinyint',\n 'int16': 'smallint',\n 'int32': 'int',\n 'int64': 'bigint',\n 'float': 'float',\n 'float32': 'float',\n 'double': 'double',\n 'float64': 'double',\n 'string': 'string',\n 'boolean': 'boolean',\n 'timestamp': 'timestamp',\n 'decimal': 'decimal',\n}\n\n\ndef type_to_sql_string(tval):\n if isinstance(tval, dt.Decimal):\n return f'decimal({tval.precision}, {tval.scale})'\n name = tval.name.lower()\n try:\n return 
sql_type_names[name]\n except KeyError:\n raise com.UnsupportedBackendType(name)\n", "path": "ibis/backends/base/sql/registry/helpers.py"}], "after_files": [{"content": "import ibis.common.exceptions as com\nimport ibis.expr.datatypes as dt\nimport ibis.expr.operations as ops\nimport ibis.expr.types as ir\nfrom ibis.backends.base.sql.registry import identifiers\n\n\ndef format_call(translator, func, *args):\n formatted_args = []\n for arg in args:\n fmt_arg = translator.translate(arg)\n formatted_args.append(fmt_arg)\n\n return '{}({})'.format(func, ', '.join(formatted_args))\n\n\ndef quote_identifier(name, quotechar='`', force=False):\n \"\"\"Add quotes to the `name` identifier if needed.\"\"\"\n if force or name.count(' ') or name in identifiers.base_identifiers:\n return '{0}{1}{0}'.format(quotechar, name)\n else:\n return name\n\n\n_NEEDS_PARENS_OPS = (\n ops.Negate,\n ops.IsNull,\n ops.NotNull,\n ops.Add,\n ops.Subtract,\n ops.Multiply,\n ops.Divide,\n ops.Power,\n ops.Modulus,\n ops.Equals,\n ops.NotEquals,\n ops.GreaterEqual,\n ops.Greater,\n ops.LessEqual,\n ops.Less,\n ops.IdenticalTo,\n ops.And,\n ops.Or,\n ops.Xor,\n)\n\n\ndef needs_parens(expr: ir.Expr):\n op = expr.op()\n if isinstance(op, ops.Alias):\n op = op.arg.op()\n return isinstance(op, _NEEDS_PARENS_OPS)\n\n\nparenthesize = '({})'.format\n\n\nsql_type_names = {\n 'int8': 'tinyint',\n 'int16': 'smallint',\n 'int32': 'int',\n 'int64': 'bigint',\n 'float': 'float',\n 'float32': 'float',\n 'double': 'double',\n 'float64': 'double',\n 'string': 'string',\n 'boolean': 'boolean',\n 'timestamp': 'timestamp',\n 'decimal': 'decimal',\n}\n\n\ndef type_to_sql_string(tval):\n if isinstance(tval, dt.Decimal):\n return f'decimal({tval.precision}, {tval.scale})'\n name = tval.name.lower()\n try:\n return sql_type_names[name]\n except KeyError:\n raise com.UnsupportedBackendType(name)\n", "path": "ibis/backends/base/sql/registry/helpers.py"}]} | 1,070 | 430 |
gh_patches_debug_5310 | rasdani/github-patches | git_diff | pantsbuild__pants-6499 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow `compatibility` attribute to be passed through to generated python targets
To do this:
- Override ` _copy_target_attributes` in `ApacheThriftPyGen` to include 'compatibility', so it looks like this:
```
@property
def _copy_target_attributes(self):
"""Propagate these attributes to the synthetic python_library() target."""
return ['provides', 'tags', 'scope', 'compatibility']
```
- See https://github.com/pantsbuild/pants/blob/039051735542d29ae02f4faa09c0c51c47292bf0/contrib/jax_ws/src/python/pants/contrib/jax_ws/tasks/jax_ws_gen.py#L90 as an example
--- END ISSUE ---
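A slightly more maintenance-friendly variant of the same override (a sketch; it assumes the base codegen task already exposes `_copy_target_attributes` as a property) extends the inherited list instead of re-enumerating it:

```python
from pants.backend.codegen.thrift.lib.apache_thrift_gen_base import ApacheThriftGenBase


class ApacheThriftPyGen(ApacheThriftGenBase):
    """Only the override relevant to this issue is shown."""

    @property
    def _copy_target_attributes(self):
        # Keep whatever the base task already propagates (provides, tags, scope, ...)
        # and additionally forward 'compatibility' to the synthetic python_library().
        return super(ApacheThriftPyGen, self)._copy_target_attributes + ['compatibility']
```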
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import absolute_import, division, print_function, unicode_literals
6
7 import os
8 from builtins import open
9
10 from pants.backend.codegen.thrift.lib.apache_thrift_gen_base import ApacheThriftGenBase
11 from pants.backend.codegen.thrift.python.python_thrift_library import PythonThriftLibrary
12 from pants.backend.python.targets.python_library import PythonLibrary
13 from pants.util.dirutil import safe_delete, safe_walk
14
15
16 class ApacheThriftPyGen(ApacheThriftGenBase):
17 """Generate Python source files from thrift IDL files."""
18 gentarget_type = PythonThriftLibrary
19 thrift_generator = 'py'
20 default_gen_options_map = {
21 'new_style': None
22 }
23
24 sources_globs = ('**/*',)
25
26 def synthetic_target_type(self, target):
27 return PythonLibrary
28
29 def execute_codegen(self, target, target_workdir):
30 super(ApacheThriftPyGen, self).execute_codegen(target, target_workdir)
31
32 # Thrift generates code with all parent namespaces with empty __init__.py's. Since pants allows
33 # splitting a thrift namespace hierarchy across multiple packages, we explicitly insert
34 # namespace packages to allow for consumption of 2 or more of these packages in the same
35 # PYTHONPATH.
36 for root, _, files in safe_walk(target_workdir):
37 if '__init__.py' not in files: # skip non-packages
38 continue
39
40 init_py_abspath = os.path.join(root, '__init__.py')
41
42 # Thrift puts an __init__.py file at the root, and we don't want one there (it's not needed,
43 # and it confuses some import mechanisms).
44 if root == target_workdir:
45 safe_delete(init_py_abspath)
46 elif os.path.getsize(init_py_abspath) == 0: # empty __init__, translate to namespace package
47 with open(init_py_abspath, 'wb') as f:
48 f.write(b"__import__('pkg_resources').declare_namespace(__name__)")
49 else:
50 # A non-empty __init__, this is a leaf package, usually with ttypes and constants; so we
51 # leave as-is.
52 pass
53
54 def ignore_dup(self, tgt1, tgt2, rel_src):
55 # Thrift generates all the intermediate __init__.py files, and they shouldn't
56 # count as dups.
57 return os.path.basename(rel_src) == '__init__.py'
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py b/src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py
--- a/src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py
+++ b/src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py
@@ -51,6 +51,10 @@
# leave as-is.
pass
+ @property
+ def _copy_target_attributes(self):
+ return super(ApacheThriftPyGen, self)._copy_target_attributes + ['compatibility']
+
def ignore_dup(self, tgt1, tgt2, rel_src):
# Thrift generates all the intermediate __init__.py files, and they shouldn't
# count as dups.
| {"golden_diff": "diff --git a/src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py b/src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py\n--- a/src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py\n+++ b/src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py\n@@ -51,6 +51,10 @@\n # leave as-is.\n pass\n \n+ @property\n+ def _copy_target_attributes(self):\n+ return super(ApacheThriftPyGen, self)._copy_target_attributes + ['compatibility']\n+\n def ignore_dup(self, tgt1, tgt2, rel_src):\n # Thrift generates all the intermediate __init__.py files, and they shouldn't\n # count as dups.\n", "issue": "Allow `compatibility` attribute to be passed through to generated python targets\nTo do this:\r\n\r\n- Override ` _copy_target_attributes` in `ApacheThriftPyGen` to include 'compatibility', so it looks like this:\r\n\r\n```\r\n @property\r\n def _copy_target_attributes(self):\r\n \"\"\"Propagate these attributes to the synthetic python_library() target.\"\"\"\r\n return ['provides', 'tags', 'scope', 'compatibility']\r\n```\r\n\r\n- See https://github.com/pantsbuild/pants/blob/039051735542d29ae02f4faa09c0c51c47292bf0/contrib/jax_ws/src/python/pants/contrib/jax_ws/tasks/jax_ws_gen.py#L90 as an example\r\n\r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nfrom builtins import open\n\nfrom pants.backend.codegen.thrift.lib.apache_thrift_gen_base import ApacheThriftGenBase\nfrom pants.backend.codegen.thrift.python.python_thrift_library import PythonThriftLibrary\nfrom pants.backend.python.targets.python_library import PythonLibrary\nfrom pants.util.dirutil import safe_delete, safe_walk\n\n\nclass ApacheThriftPyGen(ApacheThriftGenBase):\n \"\"\"Generate Python source files from thrift IDL files.\"\"\"\n gentarget_type = PythonThriftLibrary\n thrift_generator = 'py'\n default_gen_options_map = {\n 'new_style': None\n }\n\n sources_globs = ('**/*',)\n\n def synthetic_target_type(self, target):\n return PythonLibrary\n\n def execute_codegen(self, target, target_workdir):\n super(ApacheThriftPyGen, self).execute_codegen(target, target_workdir)\n\n # Thrift generates code with all parent namespaces with empty __init__.py's. 
Since pants allows\n # splitting a thrift namespace hierarchy across multiple packages, we explicitly insert\n # namespace packages to allow for consumption of 2 or more of these packages in the same\n # PYTHONPATH.\n for root, _, files in safe_walk(target_workdir):\n if '__init__.py' not in files: # skip non-packages\n continue\n\n init_py_abspath = os.path.join(root, '__init__.py')\n\n # Thrift puts an __init__.py file at the root, and we don't want one there (it's not needed,\n # and it confuses some import mechanisms).\n if root == target_workdir:\n safe_delete(init_py_abspath)\n elif os.path.getsize(init_py_abspath) == 0: # empty __init__, translate to namespace package\n with open(init_py_abspath, 'wb') as f:\n f.write(b\"__import__('pkg_resources').declare_namespace(__name__)\")\n else:\n # A non-empty __init__, this is a leaf package, usually with ttypes and constants; so we\n # leave as-is.\n pass\n\n def ignore_dup(self, tgt1, tgt2, rel_src):\n # Thrift generates all the intermediate __init__.py files, and they shouldn't\n # count as dups.\n return os.path.basename(rel_src) == '__init__.py'\n", "path": "src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nfrom builtins import open\n\nfrom pants.backend.codegen.thrift.lib.apache_thrift_gen_base import ApacheThriftGenBase\nfrom pants.backend.codegen.thrift.python.python_thrift_library import PythonThriftLibrary\nfrom pants.backend.python.targets.python_library import PythonLibrary\nfrom pants.util.dirutil import safe_delete, safe_walk\n\n\nclass ApacheThriftPyGen(ApacheThriftGenBase):\n \"\"\"Generate Python source files from thrift IDL files.\"\"\"\n gentarget_type = PythonThriftLibrary\n thrift_generator = 'py'\n default_gen_options_map = {\n 'new_style': None\n }\n\n sources_globs = ('**/*',)\n\n def synthetic_target_type(self, target):\n return PythonLibrary\n\n def execute_codegen(self, target, target_workdir):\n super(ApacheThriftPyGen, self).execute_codegen(target, target_workdir)\n\n # Thrift generates code with all parent namespaces with empty __init__.py's. 
Since pants allows\n # splitting a thrift namespace hierarchy across multiple packages, we explicitly insert\n # namespace packages to allow for consumption of 2 or more of these packages in the same\n # PYTHONPATH.\n for root, _, files in safe_walk(target_workdir):\n if '__init__.py' not in files: # skip non-packages\n continue\n\n init_py_abspath = os.path.join(root, '__init__.py')\n\n # Thrift puts an __init__.py file at the root, and we don't want one there (it's not needed,\n # and it confuses some import mechanisms).\n if root == target_workdir:\n safe_delete(init_py_abspath)\n elif os.path.getsize(init_py_abspath) == 0: # empty __init__, translate to namespace package\n with open(init_py_abspath, 'wb') as f:\n f.write(b\"__import__('pkg_resources').declare_namespace(__name__)\")\n else:\n # A non-empty __init__, this is a leaf package, usually with ttypes and constants; so we\n # leave as-is.\n pass\n\n @property\n def _copy_target_attributes(self):\n return super(ApacheThriftPyGen, self)._copy_target_attributes + ['compatibility']\n\n def ignore_dup(self, tgt1, tgt2, rel_src):\n # Thrift generates all the intermediate __init__.py files, and they shouldn't\n # count as dups.\n return os.path.basename(rel_src) == '__init__.py'\n", "path": "src/python/pants/backend/codegen/thrift/python/apache_thrift_py_gen.py"}]} | 1,114 | 176 |
gh_patches_debug_1357 | rasdani/github-patches | git_diff | aws__aws-cli-4334 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken docutils==0.15
Hi community,
Today docutils was updated to 0.15 (https://pypi.org/project/docutils/#history), and it breaks awscli when running on Python 2.
```
# aws --version
Traceback (most recent call last):
File "/bin/aws", line 19, in <module>
import awscli.clidriver
File "/usr/lib/python2.7/site-packages/awscli/clidriver.py", line 36, in <module>
from awscli.help import ProviderHelpCommand
File "/usr/lib/python2.7/site-packages/awscli/help.py", line 20, in <module>
from docutils.core import publish_string
File "/usr/lib/python2.7/site-packages/docutils/core.py", line 246
print('\n::: Runtime settings:', file=self._stderr)
^
SyntaxError: invalid syntax
```
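For context (an illustrative aside, not part of the original report): the failing line uses the Python 3 print function with a keyword argument, which Python 2 cannot parse unless `from __future__ import print_function` is in effect — a minimal sketch:

```python
from __future__ import print_function  # required for this syntax on Python 2
import sys

# The call style used by docutils 0.15; without the __future__ import,
# Python 2 treats `print` as a statement and fails with SyntaxError at compile time.
print('\n::: Runtime settings:', file=sys.stderr)
```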
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import codecs
3 import os.path
4 import re
5 import sys
6
7 from setuptools import setup, find_packages
8
9
10 here = os.path.abspath(os.path.dirname(__file__))
11
12
13 def read(*parts):
14 return codecs.open(os.path.join(here, *parts), 'r').read()
15
16
17 def find_version(*file_paths):
18 version_file = read(*file_paths)
19 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
20 version_file, re.M)
21 if version_match:
22 return version_match.group(1)
23 raise RuntimeError("Unable to find version string.")
24
25
26 requires = ['botocore==1.12.191',
27 'colorama>=0.2.5,<=0.3.9',
28 'docutils>=0.10',
29 'rsa>=3.1.2,<=3.5.0',
30 's3transfer>=0.2.0,<0.3.0']
31
32
33 if sys.version_info[:2] == (2, 6):
34 # For python2.6 we have to require argparse since it
35 # was not in stdlib until 2.7.
36 requires.append('argparse>=1.1')
37
38 # For Python 2.6, we have to require a different verion of PyYAML since the latest
39 # versions dropped support for Python 2.6.
40 requires.append('PyYAML>=3.10,<=3.13')
41 else:
42 requires.append('PyYAML>=3.10,<=5.1')
43
44
45 setup_options = dict(
46 name='awscli',
47 version=find_version("awscli", "__init__.py"),
48 description='Universal Command Line Environment for AWS.',
49 long_description=read('README.rst'),
50 author='Amazon Web Services',
51 url='http://aws.amazon.com/cli/',
52 scripts=['bin/aws', 'bin/aws.cmd',
53 'bin/aws_completer', 'bin/aws_zsh_completer.sh',
54 'bin/aws_bash_completer'],
55 packages=find_packages(exclude=['tests*']),
56 package_data={'awscli': ['data/*.json', 'examples/*/*.rst',
57 'examples/*/*.txt', 'examples/*/*/*.txt',
58 'examples/*/*/*.rst', 'topics/*.rst',
59 'topics/*.json']},
60 install_requires=requires,
61 extras_require={
62 ':python_version=="2.6"': [
63 'argparse>=1.1',
64 ]
65 },
66 license="Apache License 2.0",
67 classifiers=[
68 'Development Status :: 5 - Production/Stable',
69 'Intended Audience :: Developers',
70 'Intended Audience :: System Administrators',
71 'Natural Language :: English',
72 'License :: OSI Approved :: Apache Software License',
73 'Programming Language :: Python',
74 'Programming Language :: Python :: 2',
75 'Programming Language :: Python :: 2.6',
76 'Programming Language :: Python :: 2.7',
77 'Programming Language :: Python :: 3',
78 'Programming Language :: Python :: 3.3',
79 'Programming Language :: Python :: 3.4',
80 'Programming Language :: Python :: 3.5',
81 'Programming Language :: Python :: 3.6',
82 'Programming Language :: Python :: 3.7',
83 ],
84 )
85
86 if 'py2exe' in sys.argv:
87 # This will actually give us a py2exe command.
88 import py2exe
89 # And we have some py2exe specific options.
90 setup_options['options'] = {
91 'py2exe': {
92 'optimize': 0,
93 'skip_archive': True,
94 'dll_excludes': ['crypt32.dll'],
95 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',
96 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],
97 }
98 }
99 setup_options['console'] = ['bin/aws']
100
101
102 setup(**setup_options)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -25,7 +25,7 @@
requires = ['botocore==1.12.191',
'colorama>=0.2.5,<=0.3.9',
- 'docutils>=0.10',
+ 'docutils>=0.10,<0.15',
'rsa>=3.1.2,<=3.5.0',
's3transfer>=0.2.0,<0.3.0']
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,7 +25,7 @@\n \n requires = ['botocore==1.12.191',\n 'colorama>=0.2.5,<=0.3.9',\n- 'docutils>=0.10',\n+ 'docutils>=0.10,<0.15',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.2.0,<0.3.0']\n", "issue": "Broken docutils==0.15\nHi community,\r\n\r\nToday docutils were updated to 0.15 (https://pypi.org/project/docutils/#history) and it breaks awscli running on Python 2.\r\n\r\n```\r\n# aws --version\r\nTraceback (most recent call last):\r\n File \"/bin/aws\", line 19, in <module>\r\n import awscli.clidriver\r\n File \"/usr/lib/python2.7/site-packages/awscli/clidriver.py\", line 36, in <module>\r\n from awscli.help import ProviderHelpCommand\r\n File \"/usr/lib/python2.7/site-packages/awscli/help.py\", line 20, in <module>\r\n from docutils.core import publish_string\r\n File \"/usr/lib/python2.7/site-packages/docutils/core.py\", line 246\r\n print('\\n::: Runtime settings:', file=self._stderr)\r\n ^\r\nSyntaxError: invalid syntax\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\nimport codecs\nimport os.path\nimport re\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = ['botocore==1.12.191',\n 'colorama>=0.2.5,<=0.3.9',\n 'docutils>=0.10',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.2.0,<0.3.0']\n\n\nif sys.version_info[:2] == (2, 6):\n # For python2.6 we have to require argparse since it\n # was not in stdlib until 2.7.\n requires.append('argparse>=1.1')\n\n # For Python 2.6, we have to require a different verion of PyYAML since the latest\n # versions dropped support for Python 2.6.\n requires.append('PyYAML>=3.10,<=3.13')\nelse:\n requires.append('PyYAML>=3.10,<=5.1')\n\n\nsetup_options = dict(\n name='awscli',\n version=find_version(\"awscli\", \"__init__.py\"),\n description='Universal Command Line Environment for AWS.',\n long_description=read('README.rst'),\n author='Amazon Web Services',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh',\n 'bin/aws_bash_completer'],\n packages=find_packages(exclude=['tests*']),\n package_data={'awscli': ['data/*.json', 'examples/*/*.rst',\n 'examples/*/*.txt', 'examples/*/*/*.txt',\n 'examples/*/*/*.rst', 'topics/*.rst',\n 'topics/*.json']},\n install_requires=requires,\n extras_require={\n ':python_version==\"2.6\"': [\n 'argparse>=1.1',\n ]\n },\n license=\"Apache License 2.0\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n)\n\nif 'py2exe' in 
sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'dll_excludes': ['crypt32.dll'],\n 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',\n 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport codecs\nimport os.path\nimport re\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = ['botocore==1.12.191',\n 'colorama>=0.2.5,<=0.3.9',\n 'docutils>=0.10,<0.15',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.2.0,<0.3.0']\n\n\nif sys.version_info[:2] == (2, 6):\n # For python2.6 we have to require argparse since it\n # was not in stdlib until 2.7.\n requires.append('argparse>=1.1')\n\n # For Python 2.6, we have to require a different verion of PyYAML since the latest\n # versions dropped support for Python 2.6.\n requires.append('PyYAML>=3.10,<=3.13')\nelse:\n requires.append('PyYAML>=3.10,<=5.1')\n\n\nsetup_options = dict(\n name='awscli',\n version=find_version(\"awscli\", \"__init__.py\"),\n description='Universal Command Line Environment for AWS.',\n long_description=read('README.rst'),\n author='Amazon Web Services',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh',\n 'bin/aws_bash_completer'],\n packages=find_packages(exclude=['tests*']),\n package_data={'awscli': ['data/*.json', 'examples/*/*.rst',\n 'examples/*/*.txt', 'examples/*/*/*.txt',\n 'examples/*/*/*.rst', 'topics/*.rst',\n 'topics/*.json']},\n install_requires=requires,\n extras_require={\n ':python_version==\"2.6\"': [\n 'argparse>=1.1',\n ]\n },\n license=\"Apache License 2.0\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n)\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'dll_excludes': ['crypt32.dll'],\n 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',\n 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}]} | 1,521 | 129 |
gh_patches_debug_29112 | rasdani/github-patches | git_diff | crytic__slither-2239 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[False-Positive]: `msg.value` in a loop when `msg.value` is not transferred
### Describe the issue:
In the code snippet below, Slither reports the following:
<img width="1309" alt="image" src="https://github.com/crytic/slither/assets/31145285/eacf2441-3e2d-464a-97c3-635498e43120">
This seems invalid, as `msg.value` here is only checked in a conditional to ensure no value is sent to this function. Moreover, no logic inside this function (including the internal `_transfer` function) forwards `msg.value`.
### Code example to reproduce the issue:
```solidity
modifier noNativeTokens() {
require(msg.value == 0, "Not aimed to receive native tokens");
_;
}
function executeRelayCallBatch(
bytes[] calldata signatures,
uint256[] calldata nonces,
uint256[] calldata validityTimestamps,
uint256[] calldata values,
bytes[] calldata payloads
) public payable noNativeTokens returns (bytes[] memory) {
if (
signatures.length != nonces.length ||
nonces.length != validityTimestamps.length ||
validityTimestamps.length != values.length ||
values.length != payloads.length
) {
revert("Batch ExecuteRelayCall Params Length Mismatch");
}
bytes[] memory castedVotes = new bytes[](payloads.length);
for (uint256 ii; ii < payloads.length; ++ii) {
require(values[ii] == 0, "Batch entry cannot contain value");
// cast each votes one by one
castedVotes[ii] = executeRelayCall(
signatures[ii],
nonces[ii],
validityTimestamps[ii],
payloads[ii]
);
}
return castedVotes;
}
```
### Version:
0.10.0
### Relevant log output:
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/detectors/statements/msg_value_in_loop.py`
Content:
```
1 from typing import List, Optional
2 from slither.core.cfg.node import NodeType, Node
3 from slither.detectors.abstract_detector import (
4 AbstractDetector,
5 DetectorClassification,
6 DETECTOR_INFO,
7 )
8 from slither.slithir.operations import InternalCall
9 from slither.core.declarations import SolidityVariableComposed, Contract
10 from slither.utils.output import Output
11
12
13 def detect_msg_value_in_loop(contract: Contract) -> List[Node]:
14 results: List[Node] = []
15 for f in contract.functions_entry_points:
16 if f.is_implemented and f.payable:
17 msg_value_in_loop(f.entry_point, 0, [], results)
18 return results
19
20
21 def msg_value_in_loop(
22 node: Optional[Node], in_loop_counter: int, visited: List[Node], results: List[Node]
23 ) -> None:
24
25 if node is None:
26 return
27
28 if node in visited:
29 return
30 # shared visited
31 visited.append(node)
32
33 if node.type == NodeType.STARTLOOP:
34 in_loop_counter += 1
35 elif node.type == NodeType.ENDLOOP:
36 in_loop_counter -= 1
37
38 for ir in node.all_slithir_operations():
39 if in_loop_counter > 0 and SolidityVariableComposed("msg.value") in ir.read:
40 results.append(ir.node)
41 if isinstance(ir, (InternalCall)):
42 msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)
43
44 for son in node.sons:
45 msg_value_in_loop(son, in_loop_counter, visited, results)
46
47
48 class MsgValueInLoop(AbstractDetector):
49 """
50 Detect the use of msg.value inside a loop
51 """
52
53 ARGUMENT = "msg-value-loop"
54 HELP = "msg.value inside a loop"
55 IMPACT = DetectorClassification.HIGH
56 CONFIDENCE = DetectorClassification.MEDIUM
57
58 WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation/#msgvalue-inside-a-loop"
59
60 WIKI_TITLE = "`msg.value` inside a loop"
61 WIKI_DESCRIPTION = "Detect the use of `msg.value` inside a loop."
62
63 # region wiki_exploit_scenario
64 WIKI_EXPLOIT_SCENARIO = """
65 ```solidity
66 contract MsgValueInLoop{
67
68 mapping (address => uint256) balances;
69
70 function bad(address[] memory receivers) public payable {
71 for (uint256 i=0; i < receivers.length; i++) {
72 balances[receivers[i]] += msg.value;
73 }
74 }
75
76 }
77 ```
78 """
79 # endregion wiki_exploit_scenario
80
81 WIKI_RECOMMENDATION = """
82 Provide an explicit array of amounts alongside the receivers array, and check that the sum of all amounts matches `msg.value`.
83 """
84
85 def _detect(self) -> List[Output]:
86 """"""
87 results: List[Output] = []
88 for c in self.compilation_unit.contracts_derived:
89 values = detect_msg_value_in_loop(c)
90 for node in values:
91 func = node.function
92
93 info: DETECTOR_INFO = [func, " use msg.value in a loop: ", node, "\n"]
94 res = self.generate_result(info)
95 results.append(res)
96
97 return results
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/detectors/statements/msg_value_in_loop.py b/slither/detectors/statements/msg_value_in_loop.py
--- a/slither/detectors/statements/msg_value_in_loop.py
+++ b/slither/detectors/statements/msg_value_in_loop.py
@@ -8,6 +8,9 @@
from slither.slithir.operations import InternalCall
from slither.core.declarations import SolidityVariableComposed, Contract
from slither.utils.output import Output
+from slither.slithir.variables.constant import Constant
+from slither.core.variables import Variable
+from slither.core.expressions.literal import Literal
def detect_msg_value_in_loop(contract: Contract) -> List[Node]:
@@ -37,6 +40,21 @@
for ir in node.all_slithir_operations():
if in_loop_counter > 0 and SolidityVariableComposed("msg.value") in ir.read:
+ # If we find a conditional expression with msg.value and is compared to 0 we don't report it
+ if ir.node.is_conditional() and SolidityVariableComposed("msg.value") in ir.read:
+ compared_to = (
+ ir.read[1]
+ if ir.read[0] == SolidityVariableComposed("msg.value")
+ else ir.read[0]
+ )
+ if (
+ isinstance(compared_to, Constant)
+ and compared_to.value == 0
+ or isinstance(compared_to, Variable)
+ and isinstance(compared_to.expression, Literal)
+ and str(compared_to.expression.value) == "0"
+ ):
+ continue
results.append(ir.node)
if isinstance(ir, (InternalCall)):
msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)
| {"golden_diff": "diff --git a/slither/detectors/statements/msg_value_in_loop.py b/slither/detectors/statements/msg_value_in_loop.py\n--- a/slither/detectors/statements/msg_value_in_loop.py\n+++ b/slither/detectors/statements/msg_value_in_loop.py\n@@ -8,6 +8,9 @@\n from slither.slithir.operations import InternalCall\n from slither.core.declarations import SolidityVariableComposed, Contract\n from slither.utils.output import Output\n+from slither.slithir.variables.constant import Constant\n+from slither.core.variables import Variable\n+from slither.core.expressions.literal import Literal\n \n \n def detect_msg_value_in_loop(contract: Contract) -> List[Node]:\n@@ -37,6 +40,21 @@\n \n for ir in node.all_slithir_operations():\n if in_loop_counter > 0 and SolidityVariableComposed(\"msg.value\") in ir.read:\n+ # If we find a conditional expression with msg.value and is compared to 0 we don't report it\n+ if ir.node.is_conditional() and SolidityVariableComposed(\"msg.value\") in ir.read:\n+ compared_to = (\n+ ir.read[1]\n+ if ir.read[0] == SolidityVariableComposed(\"msg.value\")\n+ else ir.read[0]\n+ )\n+ if (\n+ isinstance(compared_to, Constant)\n+ and compared_to.value == 0\n+ or isinstance(compared_to, Variable)\n+ and isinstance(compared_to.expression, Literal)\n+ and str(compared_to.expression.value) == \"0\"\n+ ):\n+ continue\n results.append(ir.node)\n if isinstance(ir, (InternalCall)):\n msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)\n", "issue": "[False-Positive]:`msg.value` in a loop when `msg.value` is not transferred\n### Describe the issue:\n\nIn the following code snippet, Slither detects the following:\r\n\r\n<img width=\"1309\" alt=\"image\" src=\"https://github.com/crytic/slither/assets/31145285/eacf2441-3e2d-464a-97c3-635498e43120\">\r\n\r\nThis seems invalid as `msg.value` here is just checked as a conditional to ensure no value is sent to this function. 
In any case, no logic inside this function (including inside the internal `_transfer` function) forwards `msg.value`.\n\n### Code example to reproduce the issue:\n\n```solidity\r\n\r\n modifier noNativeTokens() {\r\n require(msg.value == 0, \"Not aimed to receive native tokens\");\r\n _;\r\n }\r\n \r\n function executeRelayCallBatch(\r\n bytes[] calldata signatures,\r\n uint256[] calldata nonces,\r\n uint256[] calldata validityTimestamps,\r\n uint256[] calldata values,\r\n bytes[] calldata payloads\r\n ) public payable noNativeTokens returns (bytes[] memory) {\r\n if (\r\n signatures.length != nonces.length ||\r\n nonces.length != validityTimestamps.length ||\r\n validityTimestamps.length != values.length ||\r\n values.length != payloads.length\r\n ) {\r\n revert(\"Batch ExecuteRelayCall Params Length Mismatch\");\r\n }\r\n\r\n bytes[] memory castedVotes = new bytes[](payloads.length);\r\n\r\n for (uint256 ii; ii < payloads.length; ++ii) {\r\n require(values[ii] == 0, \"Batch entry cannot contain value\");\r\n\r\n // cast each votes one by one\r\n castedVotes[ii] = executeRelayCall(\r\n signatures[ii],\r\n nonces[ii],\r\n validityTimestamps[ii],\r\n payloads[ii]\r\n );\r\n }\r\n\r\n return castedVotes;\r\n }\r\n```\n\n### Version:\n\n0.10.0\n\n### Relevant log output:\n\n_No response_\n", "before_files": [{"content": "from typing import List, Optional\nfrom slither.core.cfg.node import NodeType, Node\nfrom slither.detectors.abstract_detector import (\n AbstractDetector,\n DetectorClassification,\n DETECTOR_INFO,\n)\nfrom slither.slithir.operations import InternalCall\nfrom slither.core.declarations import SolidityVariableComposed, Contract\nfrom slither.utils.output import Output\n\n\ndef detect_msg_value_in_loop(contract: Contract) -> List[Node]:\n results: List[Node] = []\n for f in contract.functions_entry_points:\n if f.is_implemented and f.payable:\n msg_value_in_loop(f.entry_point, 0, [], results)\n return results\n\n\ndef msg_value_in_loop(\n node: Optional[Node], in_loop_counter: int, visited: List[Node], results: List[Node]\n) -> None:\n\n if node is None:\n return\n\n if node in visited:\n return\n # shared visited\n visited.append(node)\n\n if node.type == NodeType.STARTLOOP:\n in_loop_counter += 1\n elif node.type == NodeType.ENDLOOP:\n in_loop_counter -= 1\n\n for ir in node.all_slithir_operations():\n if in_loop_counter > 0 and SolidityVariableComposed(\"msg.value\") in ir.read:\n results.append(ir.node)\n if isinstance(ir, (InternalCall)):\n msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)\n\n for son in node.sons:\n msg_value_in_loop(son, in_loop_counter, visited, results)\n\n\nclass MsgValueInLoop(AbstractDetector):\n \"\"\"\n Detect the use of msg.value inside a loop\n \"\"\"\n\n ARGUMENT = \"msg-value-loop\"\n HELP = \"msg.value inside a loop\"\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation/#msgvalue-inside-a-loop\"\n\n WIKI_TITLE = \"`msg.value` inside a loop\"\n WIKI_DESCRIPTION = \"Detect the use of `msg.value` inside a loop.\"\n\n # region wiki_exploit_scenario\n WIKI_EXPLOIT_SCENARIO = \"\"\"\n```solidity\ncontract MsgValueInLoop{\n\n mapping (address => uint256) balances;\n\n function bad(address[] memory receivers) public payable {\n for (uint256 i=0; i < receivers.length; i++) {\n balances[receivers[i]] += msg.value;\n }\n }\n\n}\n```\n\"\"\"\n # endregion wiki_exploit_scenario\n\n WIKI_RECOMMENDATION = \"\"\"\nProvide an 
explicit array of amounts alongside the receivers array, and check that the sum of all amounts matches `msg.value`.\n\"\"\"\n\n def _detect(self) -> List[Output]:\n \"\"\"\"\"\"\n results: List[Output] = []\n for c in self.compilation_unit.contracts_derived:\n values = detect_msg_value_in_loop(c)\n for node in values:\n func = node.function\n\n info: DETECTOR_INFO = [func, \" use msg.value in a loop: \", node, \"\\n\"]\n res = self.generate_result(info)\n results.append(res)\n\n return results\n", "path": "slither/detectors/statements/msg_value_in_loop.py"}], "after_files": [{"content": "from typing import List, Optional\nfrom slither.core.cfg.node import NodeType, Node\nfrom slither.detectors.abstract_detector import (\n AbstractDetector,\n DetectorClassification,\n DETECTOR_INFO,\n)\nfrom slither.slithir.operations import InternalCall\nfrom slither.core.declarations import SolidityVariableComposed, Contract\nfrom slither.utils.output import Output\nfrom slither.slithir.variables.constant import Constant\nfrom slither.core.variables import Variable\nfrom slither.core.expressions.literal import Literal\n\n\ndef detect_msg_value_in_loop(contract: Contract) -> List[Node]:\n results: List[Node] = []\n for f in contract.functions_entry_points:\n if f.is_implemented and f.payable:\n msg_value_in_loop(f.entry_point, 0, [], results)\n return results\n\n\ndef msg_value_in_loop(\n node: Optional[Node], in_loop_counter: int, visited: List[Node], results: List[Node]\n) -> None:\n\n if node is None:\n return\n\n if node in visited:\n return\n # shared visited\n visited.append(node)\n\n if node.type == NodeType.STARTLOOP:\n in_loop_counter += 1\n elif node.type == NodeType.ENDLOOP:\n in_loop_counter -= 1\n\n for ir in node.all_slithir_operations():\n if in_loop_counter > 0 and SolidityVariableComposed(\"msg.value\") in ir.read:\n # If we find a conditional expression with msg.value and is compared to 0 we don't report it\n if ir.node.is_conditional() and SolidityVariableComposed(\"msg.value\") in ir.read:\n compared_to = (\n ir.read[1]\n if ir.read[0] == SolidityVariableComposed(\"msg.value\")\n else ir.read[0]\n )\n if (\n isinstance(compared_to, Constant)\n and compared_to.value == 0\n or isinstance(compared_to, Variable)\n and isinstance(compared_to.expression, Literal)\n and str(compared_to.expression.value) == \"0\"\n ):\n continue\n results.append(ir.node)\n if isinstance(ir, (InternalCall)):\n msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)\n\n for son in node.sons:\n msg_value_in_loop(son, in_loop_counter, visited, results)\n\n\nclass MsgValueInLoop(AbstractDetector):\n \"\"\"\n Detect the use of msg.value inside a loop\n \"\"\"\n\n ARGUMENT = \"msg-value-loop\"\n HELP = \"msg.value inside a loop\"\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation/#msgvalue-inside-a-loop\"\n\n WIKI_TITLE = \"`msg.value` inside a loop\"\n WIKI_DESCRIPTION = \"Detect the use of `msg.value` inside a loop.\"\n\n # region wiki_exploit_scenario\n WIKI_EXPLOIT_SCENARIO = \"\"\"\n```solidity\ncontract MsgValueInLoop{\n\n mapping (address => uint256) balances;\n\n function bad(address[] memory receivers) public payable {\n for (uint256 i=0; i < receivers.length; i++) {\n balances[receivers[i]] += msg.value;\n }\n }\n\n}\n```\n\"\"\"\n # endregion wiki_exploit_scenario\n\n WIKI_RECOMMENDATION = \"\"\"\nProvide an explicit array of amounts alongside the receivers array, and check that 
the sum of all amounts matches `msg.value`.\n\"\"\"\n\n def _detect(self) -> List[Output]:\n \"\"\"\"\"\"\n results: List[Output] = []\n for c in self.compilation_unit.contracts_derived:\n values = detect_msg_value_in_loop(c)\n for node in values:\n func = node.function\n\n info: DETECTOR_INFO = [func, \" use msg.value in a loop: \", node, \"\\n\"]\n res = self.generate_result(info)\n results.append(res)\n\n return results\n", "path": "slither/detectors/statements/msg_value_in_loop.py"}]} | 1,596 | 386 |
gh_patches_debug_8866 | rasdani/github-patches | git_diff | tensorflow__addons-618 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use package manager to pin TF dependency
Per https://github.com/tensorflow/community/pull/135, we should be using pip/conda to specify which Addons releases work with which TF versions.
This is blocked until the `tensorflow` and `tensorflow-gpu` consolidation scheduled for the 2.1 release.
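(Illustrative sketch, not part of the issue: once the packages are consolidated, the compatibility could be expressed to pip as a version range rather than an exact pin — the variable name below is the one already used in `setup.py`; the bounds are assumptions.)

```python
# sketch: let pip resolve a compatible TensorFlow instead of hard-pinning '== 2.0.0'
REQUIRED_PACKAGES.append('tensorflow >= 2.0.0')
```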
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """TensorFlow Addons.
16
17 TensorFlow Addons is a repository of contributions that conform to well-
18 established API patterns, but implement new functionality not available
19 in core TensorFlow. TensorFlow natively supports a large number of
20 operators, layers, metrics, losses, and optimizers. However, in a fast
21 moving field like ML, there are many interesting new developments that
22 cannot be integrated into core TensorFlow (because their broad
23 applicability is not yet clear, or it is mostly used by a smaller subset
24 of the community).
25 """
26
27 from __future__ import absolute_import
28 from __future__ import division
29 from __future__ import print_function
30
31 import os
32 import platform
33 import sys
34
35 from datetime import datetime
36 from setuptools import find_packages
37 from setuptools import setup
38 from setuptools.dist import Distribution
39 from setuptools import Extension
40
41 DOCLINES = __doc__.split('\n')
42
43 TFA_NIGHTLY = 'tfa-nightly'
44 TFA_RELEASE = 'tensorflow-addons'
45
46 if '--nightly' in sys.argv:
47 project_name = TFA_NIGHTLY
48 nightly_idx = sys.argv.index('--nightly')
49 sys.argv.pop(nightly_idx)
50 else:
51 project_name = TFA_RELEASE
52
53 # Version
54 version = {}
55 base_dir = os.path.dirname(os.path.abspath(__file__))
56 with open(os.path.join(base_dir, "tensorflow_addons", "version.py")) as fp:
57 # yapf: disable
58 exec(fp.read(), version)
59 # yapf: enable
60
61 if project_name == TFA_NIGHTLY:
62 version['__version__'] += datetime.strftime(datetime.today(), "%Y%m%d")
63
64 # Dependencies
65 REQUIRED_PACKAGES = [
66 'six >= 1.10.0',
67 ]
68
69 if project_name == TFA_RELEASE:
70 # TODO: remove if-else condition when tf supports package consolidation.
71 if platform.system() == 'Linux':
72 REQUIRED_PACKAGES.append('tensorflow-gpu == 2.0.0')
73 else:
74 REQUIRED_PACKAGES.append('tensorflow == 2.0.0')
75 elif project_name == TFA_NIGHTLY:
76 REQUIRED_PACKAGES.append('tf-nightly')
77
78
79 class BinaryDistribution(Distribution):
80 """This class is needed in order to create OS specific wheels."""
81
82 def has_ext_modules(self):
83 return True
84
85
86 setup(
87 name=project_name,
88 version=version['__version__'],
89 description=DOCLINES[0],
90 long_description='\n'.join(DOCLINES[2:]),
91 author='Google Inc.',
92 author_email='[email protected]',
93 packages=find_packages(),
94 ext_modules=[Extension('_foo', ['stub.cc'])],
95 install_requires=REQUIRED_PACKAGES,
96 include_package_data=True,
97 zip_safe=False,
98 distclass=BinaryDistribution,
99 classifiers=[
100 'Development Status :: 4 - Beta',
101 'Intended Audience :: Developers',
102 'Intended Audience :: Education',
103 'Intended Audience :: Science/Research',
104 'License :: OSI Approved :: Apache Software License',
105 'Programming Language :: Python :: 2.7',
106 'Programming Language :: Python :: 3.5',
107 'Programming Language :: Python :: 3.6',
108 'Programming Language :: Python :: 3.7',
109 'Topic :: Scientific/Engineering :: Mathematics',
110 'Topic :: Software Development :: Libraries :: Python Modules',
111 'Topic :: Software Development :: Libraries',
112 ],
113 license='Apache 2.0',
114 keywords='tensorflow addons machine learning',
115 )
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -69,9 +69,9 @@
if project_name == TFA_RELEASE:
# TODO: remove if-else condition when tf supports package consolidation.
if platform.system() == 'Linux':
- REQUIRED_PACKAGES.append('tensorflow-gpu == 2.0.0')
+ REQUIRED_PACKAGES.append('tensorflow-gpu >= 2.0.0')
else:
- REQUIRED_PACKAGES.append('tensorflow == 2.0.0')
+ REQUIRED_PACKAGES.append('tensorflow >= 2.0.0')
elif project_name == TFA_NIGHTLY:
REQUIRED_PACKAGES.append('tf-nightly')
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -69,9 +69,9 @@\n if project_name == TFA_RELEASE:\n # TODO: remove if-else condition when tf supports package consolidation.\n if platform.system() == 'Linux':\n- REQUIRED_PACKAGES.append('tensorflow-gpu == 2.0.0')\n+ REQUIRED_PACKAGES.append('tensorflow-gpu >= 2.0.0')\n else:\n- REQUIRED_PACKAGES.append('tensorflow == 2.0.0')\n+ REQUIRED_PACKAGES.append('tensorflow >= 2.0.0')\n elif project_name == TFA_NIGHTLY:\n REQUIRED_PACKAGES.append('tf-nightly')\n", "issue": "Use package manager to pin TF dependency\nPer https://github.com/tensorflow/community/pull/135 we should be using pip/conda to specify which Addons releases work with which TF versions. \r\n\r\nThis is blocked until the `tensorflow` and `tensorflow-gpu` consolidation scheduled for 2.1 release\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorFlow Addons.\n\nTensorFlow Addons is a repository of contributions that conform to well-\nestablished API patterns, but implement new functionality not available\nin core TensorFlow. TensorFlow natively supports a large number of\noperators, layers, metrics, losses, and optimizers. 
However, in a fast\nmoving field like ML, there are many interesting new developments that\ncannot be integrated into core TensorFlow (because their broad\napplicability is not yet clear, or it is mostly used by a smaller subset\nof the community).\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport platform\nimport sys\n\nfrom datetime import datetime\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.dist import Distribution\nfrom setuptools import Extension\n\nDOCLINES = __doc__.split('\\n')\n\nTFA_NIGHTLY = 'tfa-nightly'\nTFA_RELEASE = 'tensorflow-addons'\n\nif '--nightly' in sys.argv:\n project_name = TFA_NIGHTLY\n nightly_idx = sys.argv.index('--nightly')\n sys.argv.pop(nightly_idx)\nelse:\n project_name = TFA_RELEASE\n\n# Version\nversion = {}\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nwith open(os.path.join(base_dir, \"tensorflow_addons\", \"version.py\")) as fp:\n # yapf: disable\n exec(fp.read(), version)\n # yapf: enable\n\nif project_name == TFA_NIGHTLY:\n version['__version__'] += datetime.strftime(datetime.today(), \"%Y%m%d\")\n\n# Dependencies\nREQUIRED_PACKAGES = [\n 'six >= 1.10.0',\n]\n\nif project_name == TFA_RELEASE:\n # TODO: remove if-else condition when tf supports package consolidation.\n if platform.system() == 'Linux':\n REQUIRED_PACKAGES.append('tensorflow-gpu == 2.0.0')\n else:\n REQUIRED_PACKAGES.append('tensorflow == 2.0.0')\nelif project_name == TFA_NIGHTLY:\n REQUIRED_PACKAGES.append('tf-nightly')\n\n\nclass BinaryDistribution(Distribution):\n \"\"\"This class is needed in order to create OS specific wheels.\"\"\"\n\n def has_ext_modules(self):\n return True\n\n\nsetup(\n name=project_name,\n version=version['__version__'],\n description=DOCLINES[0],\n long_description='\\n'.join(DOCLINES[2:]),\n author='Google Inc.',\n author_email='[email protected]',\n packages=find_packages(),\n ext_modules=[Extension('_foo', ['stub.cc'])],\n install_requires=REQUIRED_PACKAGES,\n include_package_data=True,\n zip_safe=False,\n distclass=BinaryDistribution,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n license='Apache 2.0',\n keywords='tensorflow addons machine learning',\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorFlow Addons.\n\nTensorFlow Addons is a repository of contributions that conform to well-\nestablished API patterns, but implement new functionality not available\nin core TensorFlow. TensorFlow natively supports a large number of\noperators, layers, metrics, losses, and optimizers. However, in a fast\nmoving field like ML, there are many interesting new developments that\ncannot be integrated into core TensorFlow (because their broad\napplicability is not yet clear, or it is mostly used by a smaller subset\nof the community).\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport platform\nimport sys\n\nfrom datetime import datetime\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.dist import Distribution\nfrom setuptools import Extension\n\nDOCLINES = __doc__.split('\\n')\n\nTFA_NIGHTLY = 'tfa-nightly'\nTFA_RELEASE = 'tensorflow-addons'\n\nif '--nightly' in sys.argv:\n project_name = TFA_NIGHTLY\n nightly_idx = sys.argv.index('--nightly')\n sys.argv.pop(nightly_idx)\nelse:\n project_name = TFA_RELEASE\n\n# Version\nversion = {}\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nwith open(os.path.join(base_dir, \"tensorflow_addons\", \"version.py\")) as fp:\n # yapf: disable\n exec(fp.read(), version)\n # yapf: enable\n\nif project_name == TFA_NIGHTLY:\n version['__version__'] += datetime.strftime(datetime.today(), \"%Y%m%d\")\n\n# Dependencies\nREQUIRED_PACKAGES = [\n 'six >= 1.10.0',\n]\n\nif project_name == TFA_RELEASE:\n # TODO: remove if-else condition when tf supports package consolidation.\n if platform.system() == 'Linux':\n REQUIRED_PACKAGES.append('tensorflow-gpu >= 2.0.0')\n else:\n REQUIRED_PACKAGES.append('tensorflow >= 2.0.0')\nelif project_name == TFA_NIGHTLY:\n REQUIRED_PACKAGES.append('tf-nightly')\n\n\nclass BinaryDistribution(Distribution):\n \"\"\"This class is needed in order to create OS specific wheels.\"\"\"\n\n def has_ext_modules(self):\n return True\n\n\nsetup(\n name=project_name,\n version=version['__version__'],\n description=DOCLINES[0],\n long_description='\\n'.join(DOCLINES[2:]),\n author='Google Inc.',\n author_email='[email protected]',\n packages=find_packages(),\n ext_modules=[Extension('_foo', ['stub.cc'])],\n install_requires=REQUIRED_PACKAGES,\n include_package_data=True,\n zip_safe=False,\n distclass=BinaryDistribution,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 
'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n license='Apache 2.0',\n keywords='tensorflow addons machine learning',\n)\n", "path": "setup.py"}]} | 1,434 | 155 |
gh_patches_debug_5541 | rasdani/github-patches | git_diff | conda__conda-build-3118 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't get conda index to work with channel_name
From the updated docs on 3.14.0, `channel_name` is now a positional argument. However, when I do
```bash
$ conda index <build_dir> <channel_name>
```
I get
```
(...)
FileNotFoundError: [Errno 2] No such file or directory: '<path_prefix>/<channel_name>'
```
instead of a custom channel name in the output HTML files.
Am I forgetting some special `argparse` magic or did #3091 introduce a bug?
I'm on conda 4.5.11 and conda-build 3.14.0
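An illustrative reproduction (not from the original report) of the argparse behaviour involved: a greedy `nargs='*'` positional declared before an optional positional consumes every token, so the channel name is parsed as an extra directory and later fails the filesystem lookup.

```python
import argparse

p = argparse.ArgumentParser()
p.add_argument('dir', nargs='*', default=['.'])           # mirrors conda_build/cli/main_index.py (default simplified)
p.add_argument('channel_name', nargs='?', default=None)

print(p.parse_args(['build_dir', 'my-channel']))
# Namespace(channel_name=None, dir=['build_dir', 'my-channel'])
# -> 'my-channel' ends up in `dir` and is indexed as a directory, hence the FileNotFoundError.
```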
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_build/cli/main_index.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import logging
4 import os
5 import sys
6
7 from conda_build.conda_interface import ArgumentParser
8
9 from conda_build import api
10 from conda_build.index import DEFAULT_SUBDIRS, MAX_THREADS_DEFAULT
11
12 logging.basicConfig(level=logging.INFO)
13
14
15 def parse_args(args):
16 p = ArgumentParser(
17 description="Update package index metadata files in given directories.")
18
19 p.add_argument(
20 'dir',
21 help='Directory that contains an index to be updated.',
22 nargs='*',
23 default=[os.getcwd()],
24 )
25
26 p.add_argument(
27 '-c', "--check-md5",
28 action="store_true",
29 help="""Use hash values instead of file modification times for determining if a
30 package's metadata needs to be updated.""",
31 )
32 p.add_argument(
33 'channel_name',
34 help='Adding a channel name will create an index.html file within the subdir.',
35 nargs='?',
36 default=None,
37 )
38 p.add_argument(
39 '-s', '--subdir',
40 action='append',
41 help='Optional. The subdir to index. Can be given multiple times. If not provided, will '
42 'default to all of %s. If provided, will not create channeldata.json for the channel.'
43 '' % ', '.join(DEFAULT_SUBDIRS),
44 )
45 p.add_argument(
46 '-t', '--threads',
47 default=MAX_THREADS_DEFAULT,
48 type=int,
49 )
50 p.add_argument(
51 "-p", "--patch-generator",
52 help="Path to Python file that outputs metadata patch instructions"
53 )
54
55 args = p.parse_args(args)
56 return p, args
57
58
59 def execute(args):
60 _, args = parse_args(args)
61 api.update_index(args.dir, check_md5=args.check_md5, channel_name=args.channel_name,
62 threads=args.threads, subdir=args.subdir, patch_generator=args.patch_generator)
63
64
65 def main():
66 return execute(sys.argv[1:])
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_build/cli/main_index.py b/conda_build/cli/main_index.py
--- a/conda_build/cli/main_index.py
+++ b/conda_build/cli/main_index.py
@@ -30,10 +30,8 @@
package's metadata needs to be updated.""",
)
p.add_argument(
- 'channel_name',
- help='Adding a channel name will create an index.html file within the subdir.',
- nargs='?',
- default=None,
+ "-n", "--channel-name",
+ help="Customize the channel name listed in each channel's index.html.",
)
p.add_argument(
'-s', '--subdir',
| {"golden_diff": "diff --git a/conda_build/cli/main_index.py b/conda_build/cli/main_index.py\n--- a/conda_build/cli/main_index.py\n+++ b/conda_build/cli/main_index.py\n@@ -30,10 +30,8 @@\n package's metadata needs to be updated.\"\"\",\n )\n p.add_argument(\n- 'channel_name',\n- help='Adding a channel name will create an index.html file within the subdir.',\n- nargs='?',\n- default=None,\n+ \"-n\", \"--channel-name\",\n+ help=\"Customize the channel name listed in each channel's index.html.\",\n )\n p.add_argument(\n '-s', '--subdir',\n", "issue": "Can't get conda index to work with channel_name\nFrom the updated docs on 3.14.0, `channel_name` is now a positional argument. However, when I do\r\n```bash\r\n$ conda index <build_dir> <channel_name>\r\n```\r\nI get\r\n```\r\n(...)\r\nFileNotFoundError: [Errno 2] No such file or directory: '<path_prefix>/<channel_name>'\r\n```\r\ninstead of a custom channel name in the outputted html files. \r\n\r\nAm I forgetting some special `argparse` magic or did #3091 introduce a bug?\r\n\r\nI'm on conda 4.5.11 and conda-build 3.14.0\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport logging\nimport os\nimport sys\n\nfrom conda_build.conda_interface import ArgumentParser\n\nfrom conda_build import api\nfrom conda_build.index import DEFAULT_SUBDIRS, MAX_THREADS_DEFAULT\n\nlogging.basicConfig(level=logging.INFO)\n\n\ndef parse_args(args):\n p = ArgumentParser(\n description=\"Update package index metadata files in given directories.\")\n\n p.add_argument(\n 'dir',\n help='Directory that contains an index to be updated.',\n nargs='*',\n default=[os.getcwd()],\n )\n\n p.add_argument(\n '-c', \"--check-md5\",\n action=\"store_true\",\n help=\"\"\"Use hash values instead of file modification times for determining if a\n package's metadata needs to be updated.\"\"\",\n )\n p.add_argument(\n 'channel_name',\n help='Adding a channel name will create an index.html file within the subdir.',\n nargs='?',\n default=None,\n )\n p.add_argument(\n '-s', '--subdir',\n action='append',\n help='Optional. The subdir to index. Can be given multiple times. If not provided, will '\n 'default to all of %s. 
If provided, will not create channeldata.json for the channel.'\n '' % ', '.join(DEFAULT_SUBDIRS),\n )\n p.add_argument(\n '-t', '--threads',\n default=MAX_THREADS_DEFAULT,\n type=int,\n )\n p.add_argument(\n \"-p\", \"--patch-generator\",\n help=\"Path to Python file that outputs metadata patch instructions\"\n )\n\n args = p.parse_args(args)\n return p, args\n\n\ndef execute(args):\n _, args = parse_args(args)\n api.update_index(args.dir, check_md5=args.check_md5, channel_name=args.channel_name,\n threads=args.threads, subdir=args.subdir, patch_generator=args.patch_generator)\n\n\ndef main():\n return execute(sys.argv[1:])\n", "path": "conda_build/cli/main_index.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport logging\nimport os\nimport sys\n\nfrom conda_build.conda_interface import ArgumentParser\n\nfrom conda_build import api\nfrom conda_build.index import DEFAULT_SUBDIRS, MAX_THREADS_DEFAULT\n\nlogging.basicConfig(level=logging.INFO)\n\n\ndef parse_args(args):\n p = ArgumentParser(\n description=\"Update package index metadata files in given directories.\")\n\n p.add_argument(\n 'dir',\n help='Directory that contains an index to be updated.',\n nargs='*',\n default=[os.getcwd()],\n )\n\n p.add_argument(\n '-c', \"--check-md5\",\n action=\"store_true\",\n help=\"\"\"Use hash values instead of file modification times for determining if a\n package's metadata needs to be updated.\"\"\",\n )\n p.add_argument(\n \"-n\", \"--channel-name\",\n help=\"Customize the channel name listed in each channel's index.html.\",\n )\n p.add_argument(\n '-s', '--subdir',\n action='append',\n help='Optional. The subdir to index. Can be given multiple times. If not provided, will '\n 'default to all of %s. If provided, will not create channeldata.json for the channel.'\n '' % ', '.join(DEFAULT_SUBDIRS),\n )\n p.add_argument(\n '-t', '--threads',\n default=MAX_THREADS_DEFAULT,\n type=int,\n )\n p.add_argument(\n \"-p\", \"--patch-generator\",\n help=\"Path to Python file that outputs metadata patch instructions\"\n )\n\n args = p.parse_args(args)\n return p, args\n\n\ndef execute(args):\n _, args = parse_args(args)\n api.update_index(args.dir, check_md5=args.check_md5, channel_name=args.channel_name,\n threads=args.threads, subdir=args.subdir, patch_generator=args.patch_generator)\n\n\ndef main():\n return execute(sys.argv[1:])\n", "path": "conda_build/cli/main_index.py"}]} | 953 | 145 |
gh_patches_debug_14395 | rasdani/github-patches | git_diff | ethereum__web3.py-3027 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix logger for AsyncHTTPProvider
AsyncHTTPProvider is getting the logger for `web3.providers.HTTPProvider` but should instead use `web3.providers.AsyncHTTPProvider`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `web3/providers/__init__.py`
Content:
```
1 from .async_base import ( # noqa: F401
2 AsyncBaseProvider,
3 )
4 from .base import ( # noqa: F401
5 BaseProvider,
6 JSONBaseProvider,
7 )
8 from .ipc import ( # noqa: F401,
9 IPCProvider,
10 )
11 from .rpc import ( # noqa: F401,
12 HTTPProvider,
13 )
14 from .websocket import ( # noqa: F401,
15 WebsocketProvider,
16 )
17 from .auto import ( # noqa: F401,
18 AutoProvider,
19 )
20
```
Path: `web3/providers/async_rpc.py`
Content:
```
1 import logging
2 from typing import (
3 Any,
4 Dict,
5 Iterable,
6 Optional,
7 Tuple,
8 Union,
9 )
10
11 from aiohttp import (
12 ClientSession,
13 )
14 from eth_typing import (
15 URI,
16 )
17 from eth_utils import (
18 to_dict,
19 )
20
21 from web3._utils.http import (
22 construct_user_agent,
23 )
24 from web3._utils.request import (
25 async_cache_and_return_session as _async_cache_and_return_session,
26 async_make_post_request,
27 get_default_http_endpoint,
28 )
29 from web3.types import (
30 AsyncMiddleware,
31 RPCEndpoint,
32 RPCResponse,
33 )
34
35 from ..datastructures import (
36 NamedElementOnion,
37 )
38 from ..middleware.exception_retry_request import (
39 async_http_retry_request_middleware,
40 )
41 from .async_base import (
42 AsyncJSONBaseProvider,
43 )
44
45
46 class AsyncHTTPProvider(AsyncJSONBaseProvider):
47 logger = logging.getLogger("web3.providers.HTTPProvider")
48 endpoint_uri = None
49 _request_kwargs = None
50 # type ignored b/c conflict with _middlewares attr on AsyncBaseProvider
51 _middlewares: Tuple[AsyncMiddleware, ...] = NamedElementOnion([(async_http_retry_request_middleware, "http_retry_request")]) # type: ignore # noqa: E501
52
53 def __init__(
54 self,
55 endpoint_uri: Optional[Union[URI, str]] = None,
56 request_kwargs: Optional[Any] = None,
57 ) -> None:
58 if endpoint_uri is None:
59 self.endpoint_uri = get_default_http_endpoint()
60 else:
61 self.endpoint_uri = URI(endpoint_uri)
62
63 self._request_kwargs = request_kwargs or {}
64
65 super().__init__()
66
67 async def cache_async_session(self, session: ClientSession) -> ClientSession:
68 return await _async_cache_and_return_session(self.endpoint_uri, session)
69
70 def __str__(self) -> str:
71 return f"RPC connection {self.endpoint_uri}"
72
73 @to_dict
74 def get_request_kwargs(self) -> Iterable[Tuple[str, Any]]:
75 if "headers" not in self._request_kwargs:
76 yield "headers", self.get_request_headers()
77 for key, value in self._request_kwargs.items():
78 yield key, value
79
80 def get_request_headers(self) -> Dict[str, str]:
81 return {
82 "Content-Type": "application/json",
83 "User-Agent": construct_user_agent(str(type(self))),
84 }
85
86 async def make_request(self, method: RPCEndpoint, params: Any) -> RPCResponse:
87 self.logger.debug(
88 f"Making request HTTP. URI: {self.endpoint_uri}, Method: {method}"
89 )
90 request_data = self.encode_rpc_request(method, params)
91 raw_response = await async_make_post_request(
92 self.endpoint_uri, request_data, **self.get_request_kwargs()
93 )
94 response = self.decode_rpc_response(raw_response)
95 self.logger.debug(
96 f"Getting response HTTP. URI: {self.endpoint_uri}, "
97 f"Method: {method}, Response: {response}"
98 )
99 return response
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/web3/providers/__init__.py b/web3/providers/__init__.py
--- a/web3/providers/__init__.py
+++ b/web3/providers/__init__.py
@@ -1,6 +1,9 @@
from .async_base import ( # noqa: F401
AsyncBaseProvider,
)
+from .async_rpc import ( # noqa: F401
+ AsyncHTTPProvider,
+)
from .base import ( # noqa: F401
BaseProvider,
JSONBaseProvider,
diff --git a/web3/providers/async_rpc.py b/web3/providers/async_rpc.py
--- a/web3/providers/async_rpc.py
+++ b/web3/providers/async_rpc.py
@@ -44,7 +44,7 @@
class AsyncHTTPProvider(AsyncJSONBaseProvider):
- logger = logging.getLogger("web3.providers.HTTPProvider")
+ logger = logging.getLogger("web3.providers.AsyncHTTPProvider")
endpoint_uri = None
_request_kwargs = None
# type ignored b/c conflict with _middlewares attr on AsyncBaseProvider
| {"golden_diff": "diff --git a/web3/providers/__init__.py b/web3/providers/__init__.py\n--- a/web3/providers/__init__.py\n+++ b/web3/providers/__init__.py\n@@ -1,6 +1,9 @@\n from .async_base import ( # noqa: F401\n AsyncBaseProvider,\n )\n+from .async_rpc import ( # noqa: F401\n+ AsyncHTTPProvider,\n+)\n from .base import ( # noqa: F401\n BaseProvider,\n JSONBaseProvider,\ndiff --git a/web3/providers/async_rpc.py b/web3/providers/async_rpc.py\n--- a/web3/providers/async_rpc.py\n+++ b/web3/providers/async_rpc.py\n@@ -44,7 +44,7 @@\n \n \n class AsyncHTTPProvider(AsyncJSONBaseProvider):\n- logger = logging.getLogger(\"web3.providers.HTTPProvider\")\n+ logger = logging.getLogger(\"web3.providers.AsyncHTTPProvider\")\n endpoint_uri = None\n _request_kwargs = None\n # type ignored b/c conflict with _middlewares attr on AsyncBaseProvider\n", "issue": "Fix logger for AsyncHTTPProvider\nAsyncHTTPProvider is getting the logger for `web3.providers.HTTPProvider` but should instead use `web3.providers.AsyncHTTPProvider`\n", "before_files": [{"content": "from .async_base import ( # noqa: F401\n AsyncBaseProvider,\n)\nfrom .base import ( # noqa: F401\n BaseProvider,\n JSONBaseProvider,\n)\nfrom .ipc import ( # noqa: F401,\n IPCProvider,\n)\nfrom .rpc import ( # noqa: F401,\n HTTPProvider,\n)\nfrom .websocket import ( # noqa: F401,\n WebsocketProvider,\n)\nfrom .auto import ( # noqa: F401,\n AutoProvider,\n)\n", "path": "web3/providers/__init__.py"}, {"content": "import logging\nfrom typing import (\n Any,\n Dict,\n Iterable,\n Optional,\n Tuple,\n Union,\n)\n\nfrom aiohttp import (\n ClientSession,\n)\nfrom eth_typing import (\n URI,\n)\nfrom eth_utils import (\n to_dict,\n)\n\nfrom web3._utils.http import (\n construct_user_agent,\n)\nfrom web3._utils.request import (\n async_cache_and_return_session as _async_cache_and_return_session,\n async_make_post_request,\n get_default_http_endpoint,\n)\nfrom web3.types import (\n AsyncMiddleware,\n RPCEndpoint,\n RPCResponse,\n)\n\nfrom ..datastructures import (\n NamedElementOnion,\n)\nfrom ..middleware.exception_retry_request import (\n async_http_retry_request_middleware,\n)\nfrom .async_base import (\n AsyncJSONBaseProvider,\n)\n\n\nclass AsyncHTTPProvider(AsyncJSONBaseProvider):\n logger = logging.getLogger(\"web3.providers.HTTPProvider\")\n endpoint_uri = None\n _request_kwargs = None\n # type ignored b/c conflict with _middlewares attr on AsyncBaseProvider\n _middlewares: Tuple[AsyncMiddleware, ...] 
= NamedElementOnion([(async_http_retry_request_middleware, \"http_retry_request\")]) # type: ignore # noqa: E501\n\n def __init__(\n self,\n endpoint_uri: Optional[Union[URI, str]] = None,\n request_kwargs: Optional[Any] = None,\n ) -> None:\n if endpoint_uri is None:\n self.endpoint_uri = get_default_http_endpoint()\n else:\n self.endpoint_uri = URI(endpoint_uri)\n\n self._request_kwargs = request_kwargs or {}\n\n super().__init__()\n\n async def cache_async_session(self, session: ClientSession) -> ClientSession:\n return await _async_cache_and_return_session(self.endpoint_uri, session)\n\n def __str__(self) -> str:\n return f\"RPC connection {self.endpoint_uri}\"\n\n @to_dict\n def get_request_kwargs(self) -> Iterable[Tuple[str, Any]]:\n if \"headers\" not in self._request_kwargs:\n yield \"headers\", self.get_request_headers()\n for key, value in self._request_kwargs.items():\n yield key, value\n\n def get_request_headers(self) -> Dict[str, str]:\n return {\n \"Content-Type\": \"application/json\",\n \"User-Agent\": construct_user_agent(str(type(self))),\n }\n\n async def make_request(self, method: RPCEndpoint, params: Any) -> RPCResponse:\n self.logger.debug(\n f\"Making request HTTP. URI: {self.endpoint_uri}, Method: {method}\"\n )\n request_data = self.encode_rpc_request(method, params)\n raw_response = await async_make_post_request(\n self.endpoint_uri, request_data, **self.get_request_kwargs()\n )\n response = self.decode_rpc_response(raw_response)\n self.logger.debug(\n f\"Getting response HTTP. URI: {self.endpoint_uri}, \"\n f\"Method: {method}, Response: {response}\"\n )\n return response\n", "path": "web3/providers/async_rpc.py"}], "after_files": [{"content": "from .async_base import ( # noqa: F401\n AsyncBaseProvider,\n)\nfrom .async_rpc import ( # noqa: F401\n AsyncHTTPProvider,\n)\nfrom .base import ( # noqa: F401\n BaseProvider,\n JSONBaseProvider,\n)\nfrom .ipc import ( # noqa: F401,\n IPCProvider,\n)\nfrom .rpc import ( # noqa: F401,\n HTTPProvider,\n)\nfrom .websocket import ( # noqa: F401,\n WebsocketProvider,\n)\nfrom .auto import ( # noqa: F401,\n AutoProvider,\n)\n", "path": "web3/providers/__init__.py"}, {"content": "import logging\nfrom typing import (\n Any,\n Dict,\n Iterable,\n Optional,\n Tuple,\n Union,\n)\n\nfrom aiohttp import (\n ClientSession,\n)\nfrom eth_typing import (\n URI,\n)\nfrom eth_utils import (\n to_dict,\n)\n\nfrom web3._utils.http import (\n construct_user_agent,\n)\nfrom web3._utils.request import (\n async_cache_and_return_session as _async_cache_and_return_session,\n async_make_post_request,\n get_default_http_endpoint,\n)\nfrom web3.types import (\n AsyncMiddleware,\n RPCEndpoint,\n RPCResponse,\n)\n\nfrom ..datastructures import (\n NamedElementOnion,\n)\nfrom ..middleware.exception_retry_request import (\n async_http_retry_request_middleware,\n)\nfrom .async_base import (\n AsyncJSONBaseProvider,\n)\n\n\nclass AsyncHTTPProvider(AsyncJSONBaseProvider):\n logger = logging.getLogger(\"web3.providers.AsyncHTTPProvider\")\n endpoint_uri = None\n _request_kwargs = None\n # type ignored b/c conflict with _middlewares attr on AsyncBaseProvider\n _middlewares: Tuple[AsyncMiddleware, ...] 
= NamedElementOnion([(async_http_retry_request_middleware, \"http_retry_request\")]) # type: ignore # noqa: E501\n\n def __init__(\n self,\n endpoint_uri: Optional[Union[URI, str]] = None,\n request_kwargs: Optional[Any] = None,\n ) -> None:\n if endpoint_uri is None:\n self.endpoint_uri = get_default_http_endpoint()\n else:\n self.endpoint_uri = URI(endpoint_uri)\n\n self._request_kwargs = request_kwargs or {}\n\n super().__init__()\n\n async def cache_async_session(self, session: ClientSession) -> ClientSession:\n return await _async_cache_and_return_session(self.endpoint_uri, session)\n\n def __str__(self) -> str:\n return f\"RPC connection {self.endpoint_uri}\"\n\n @to_dict\n def get_request_kwargs(self) -> Iterable[Tuple[str, Any]]:\n if \"headers\" not in self._request_kwargs:\n yield \"headers\", self.get_request_headers()\n for key, value in self._request_kwargs.items():\n yield key, value\n\n def get_request_headers(self) -> Dict[str, str]:\n return {\n \"Content-Type\": \"application/json\",\n \"User-Agent\": construct_user_agent(str(type(self))),\n }\n\n async def make_request(self, method: RPCEndpoint, params: Any) -> RPCResponse:\n self.logger.debug(\n f\"Making request HTTP. URI: {self.endpoint_uri}, Method: {method}\"\n )\n request_data = self.encode_rpc_request(method, params)\n raw_response = await async_make_post_request(\n self.endpoint_uri, request_data, **self.get_request_kwargs()\n )\n response = self.decode_rpc_response(raw_response)\n self.logger.debug(\n f\"Getting response HTTP. URI: {self.endpoint_uri}, \"\n f\"Method: {method}, Response: {response}\"\n )\n return response\n", "path": "web3/providers/async_rpc.py"}]} | 1,303 | 237 |
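A quick way to see what the one-line change in this fix buys: once `AsyncHTTPProvider` logs under its own name, its verbosity can be tuned independently of the synchronous `HTTPProvider`. A minimal sketch (logger names are taken from the diff above; the chosen levels are arbitrary):

```
import logging

# After the fix, each provider logs under its own name, so the two can be
# configured separately.
logging.getLogger("web3.providers.AsyncHTTPProvider").setLevel(logging.DEBUG)
logging.getLogger("web3.providers.HTTPProvider").setLevel(logging.WARNING)
```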
gh_patches_debug_700 | rasdani/github-patches | git_diff | saulpw__visidata-1304 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[undo develop] undoing a reload blanks the entire sheet
Since v2.5, undo for reload has been removed and replaced with quitguard+confirm. However, in that case an undo should not be set.
Current behaviour is that it blanks the sheet.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `visidata/undo.py`
Content:
```
1 import itertools
2 from copy import copy
3
4 from visidata import vd, options, VisiData, BaseSheet, UNLOADED
5
6 BaseSheet.init('undone', list) # list of CommandLogRow for redo after undo
7
8 vd.option('undo', True, 'enable undo/redo')
9
10 nonUndo = '''commit open-file'''.split()
11
12 def isUndoableCommand(longname):
13 for n in nonUndo:
14 if longname.startswith(n):
15 return False
16 return True
17
18 @VisiData.api
19 def addUndo(vd, undofunc, *args, **kwargs):
20 'On undo of latest command, call ``undofunc(*args, **kwargs)``.'
21 if options.undo:
22 # occurs when VisiData is just starting up
23 if getattr(vd, 'activeCommand', UNLOADED) is UNLOADED:
24 return
25 r = vd.modifyCommand
26 # some special commands, like open-file, do not have an undofuncs set
27 if not r or not isUndoableCommand(r.longname):
28 return
29 if not r.undofuncs:
30 r.undofuncs = []
31 r.undofuncs.append((undofunc, args, kwargs))
32
33
34 @VisiData.api
35 def undo(vd, sheet):
36 if not options.undo:
37 vd.fail("options.undo not enabled")
38
39 # don't allow undo of first command on a sheet, which is always the command that created the sheet.
40 for cmdlogrow in sheet.cmdlog_sheet.rows[:0:-1]:
41 if cmdlogrow.undofuncs:
42 for undofunc, args, kwargs, in cmdlogrow.undofuncs[::-1]:
43 undofunc(*args, **kwargs)
44 sheet.undone.append(cmdlogrow)
45 sheet.cmdlog_sheet.rows.remove(cmdlogrow)
46
47 vd.clearCaches() # undofunc can invalidate the drawcache
48
49 vd.moveToReplayContext(cmdlogrow, sheet)
50 vd.status("%s undone" % cmdlogrow.longname)
51 return
52
53 vd.fail("nothing to undo on current sheet")
54
55
56 @VisiData.api
57 def redo(vd, sheet):
58 sheet.undone or vd.fail("nothing to redo")
59 cmdlogrow = sheet.undone.pop()
60 vd.replayOne(cmdlogrow)
61 vd.status("%s redone" % cmdlogrow.longname)
62
63 # undoers
64 def undoAttrFunc(objs, attrname):
65 'Return closure that sets attrname on each obj to its former value.'
66 oldvals = [(o, getattr(o, attrname)) for o in objs]
67 def _undofunc():
68 for o, v in oldvals:
69 setattr(o, attrname, v)
70 return _undofunc
71
72
73 class Fanout(list):
74 'Fan out attribute changes to every element in a list.'
75 def __getattr__(self, k):
76 return Fanout([getattr(o, k) for o in self])
77
78 def __setattr__(self, k, v):
79 vd.addUndo(undoAttrFunc(self, k))
80 for o in self:
81 setattr(o, k, v)
82
83 def __call__(self, *args, **kwargs):
84 return Fanout([o(*args, **kwargs) for o in self])
85
86
87 def undoAttrCopyFunc(objs, attrname):
88 'Return closure that sets attrname on each obj to its former value.'
89 oldvals = [(o, copy(getattr(o, attrname))) for o in objs]
90 def _undofunc():
91 for o, v in oldvals:
92 setattr(o, attrname, v)
93 return _undofunc
94
95
96 @VisiData.api
97 def addUndoSetValues(vd, cols, rows):
98 'Add undo function to reset values for *rows* in *cols*.'
99 oldvals = [(c, r, c.getValue(r)) for c,r in itertools.product(cols, vd.Progress(rows, gerund='doing'))]
100 def _undo():
101 for c, r, v in oldvals:
102 c.setValue(r, v)
103 vd.addUndo(_undo)
104
105 @VisiData.api
106 def addUndoColNames(vd, cols):
107 oldnames = [(c, c.name) for c in cols]
108 def _undo():
109 for c, name in oldnames:
110 c.name = name
111 vd.addUndo(_undo)
112
113
114 BaseSheet.addCommand('U', 'undo-last', 'vd.undo(sheet)', 'Undo the most recent change (options.undo must be enabled)')
115 BaseSheet.addCommand('R', 'redo-last', 'vd.redo(sheet)', 'Redo the most recent undo (options.undo must be enabled)')
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/visidata/undo.py b/visidata/undo.py
--- a/visidata/undo.py
+++ b/visidata/undo.py
@@ -7,7 +7,7 @@
vd.option('undo', True, 'enable undo/redo')
-nonUndo = '''commit open-file'''.split()
+nonUndo = '''commit open-file reload-sheet'''.split()
def isUndoableCommand(longname):
for n in nonUndo:
| {"golden_diff": "diff --git a/visidata/undo.py b/visidata/undo.py\n--- a/visidata/undo.py\n+++ b/visidata/undo.py\n@@ -7,7 +7,7 @@\n \n vd.option('undo', True, 'enable undo/redo')\n \n-nonUndo = '''commit open-file'''.split()\n+nonUndo = '''commit open-file reload-sheet'''.split()\n \n def isUndoableCommand(longname):\n for n in nonUndo:\n", "issue": "[undo develop] undoing a reload blanks the entire sheet\nSince v2.5 undo for reload has been removed, and replaced with quitguard+confirm! However, in that case an undo should not be set.\r\n\r\nCurrent behaviour is that it blanks the sheet.\n", "before_files": [{"content": "import itertools\nfrom copy import copy\n\nfrom visidata import vd, options, VisiData, BaseSheet, UNLOADED\n\nBaseSheet.init('undone', list) # list of CommandLogRow for redo after undo\n\nvd.option('undo', True, 'enable undo/redo')\n\nnonUndo = '''commit open-file'''.split()\n\ndef isUndoableCommand(longname):\n for n in nonUndo:\n if longname.startswith(n):\n return False\n return True\n\[email protected]\ndef addUndo(vd, undofunc, *args, **kwargs):\n 'On undo of latest command, call ``undofunc(*args, **kwargs)``.'\n if options.undo:\n # occurs when VisiData is just starting up\n if getattr(vd, 'activeCommand', UNLOADED) is UNLOADED:\n return\n r = vd.modifyCommand\n # some special commands, like open-file, do not have an undofuncs set\n if not r or not isUndoableCommand(r.longname):\n return\n if not r.undofuncs:\n r.undofuncs = []\n r.undofuncs.append((undofunc, args, kwargs))\n\n\[email protected]\ndef undo(vd, sheet):\n if not options.undo:\n vd.fail(\"options.undo not enabled\")\n\n # don't allow undo of first command on a sheet, which is always the command that created the sheet.\n for cmdlogrow in sheet.cmdlog_sheet.rows[:0:-1]:\n if cmdlogrow.undofuncs:\n for undofunc, args, kwargs, in cmdlogrow.undofuncs[::-1]:\n undofunc(*args, **kwargs)\n sheet.undone.append(cmdlogrow)\n sheet.cmdlog_sheet.rows.remove(cmdlogrow)\n\n vd.clearCaches() # undofunc can invalidate the drawcache\n\n vd.moveToReplayContext(cmdlogrow, sheet)\n vd.status(\"%s undone\" % cmdlogrow.longname)\n return\n\n vd.fail(\"nothing to undo on current sheet\")\n\n\[email protected]\ndef redo(vd, sheet):\n sheet.undone or vd.fail(\"nothing to redo\")\n cmdlogrow = sheet.undone.pop()\n vd.replayOne(cmdlogrow)\n vd.status(\"%s redone\" % cmdlogrow.longname)\n\n# undoers\ndef undoAttrFunc(objs, attrname):\n 'Return closure that sets attrname on each obj to its former value.'\n oldvals = [(o, getattr(o, attrname)) for o in objs]\n def _undofunc():\n for o, v in oldvals:\n setattr(o, attrname, v)\n return _undofunc\n\n\nclass Fanout(list):\n 'Fan out attribute changes to every element in a list.'\n def __getattr__(self, k):\n return Fanout([getattr(o, k) for o in self])\n\n def __setattr__(self, k, v):\n vd.addUndo(undoAttrFunc(self, k))\n for o in self:\n setattr(o, k, v)\n\n def __call__(self, *args, **kwargs):\n return Fanout([o(*args, **kwargs) for o in self])\n\n\ndef undoAttrCopyFunc(objs, attrname):\n 'Return closure that sets attrname on each obj to its former value.'\n oldvals = [(o, copy(getattr(o, attrname))) for o in objs]\n def _undofunc():\n for o, v in oldvals:\n setattr(o, attrname, v)\n return _undofunc\n\n\[email protected]\ndef addUndoSetValues(vd, cols, rows):\n 'Add undo function to reset values for *rows* in *cols*.'\n oldvals = [(c, r, c.getValue(r)) for c,r in itertools.product(cols, vd.Progress(rows, gerund='doing'))]\n def _undo():\n for c, r, v in oldvals:\n c.setValue(r, v)\n 
vd.addUndo(_undo)\n\[email protected]\ndef addUndoColNames(vd, cols):\n oldnames = [(c, c.name) for c in cols]\n def _undo():\n for c, name in oldnames:\n c.name = name\n vd.addUndo(_undo)\n\n\nBaseSheet.addCommand('U', 'undo-last', 'vd.undo(sheet)', 'Undo the most recent change (options.undo must be enabled)')\nBaseSheet.addCommand('R', 'redo-last', 'vd.redo(sheet)', 'Redo the most recent undo (options.undo must be enabled)')\n", "path": "visidata/undo.py"}], "after_files": [{"content": "import itertools\nfrom copy import copy\n\nfrom visidata import vd, options, VisiData, BaseSheet, UNLOADED\n\nBaseSheet.init('undone', list) # list of CommandLogRow for redo after undo\n\nvd.option('undo', True, 'enable undo/redo')\n\nnonUndo = '''commit open-file reload-sheet'''.split()\n\ndef isUndoableCommand(longname):\n for n in nonUndo:\n if longname.startswith(n):\n return False\n return True\n\[email protected]\ndef addUndo(vd, undofunc, *args, **kwargs):\n 'On undo of latest command, call ``undofunc(*args, **kwargs)``.'\n if options.undo:\n # occurs when VisiData is just starting up\n if getattr(vd, 'activeCommand', UNLOADED) is UNLOADED:\n return\n r = vd.modifyCommand\n # some special commands, like open-file, do not have an undofuncs set\n if not r or not isUndoableCommand(r.longname):\n return\n if not r.undofuncs:\n r.undofuncs = []\n r.undofuncs.append((undofunc, args, kwargs))\n\n\[email protected]\ndef undo(vd, sheet):\n if not options.undo:\n vd.fail(\"options.undo not enabled\")\n\n # don't allow undo of first command on a sheet, which is always the command that created the sheet.\n for cmdlogrow in sheet.cmdlog_sheet.rows[:0:-1]:\n if cmdlogrow.undofuncs:\n for undofunc, args, kwargs, in cmdlogrow.undofuncs[::-1]:\n undofunc(*args, **kwargs)\n sheet.undone.append(cmdlogrow)\n sheet.cmdlog_sheet.rows.remove(cmdlogrow)\n\n vd.clearCaches() # undofunc can invalidate the drawcache\n\n vd.moveToReplayContext(cmdlogrow, sheet)\n vd.status(\"%s undone\" % cmdlogrow.longname)\n return\n\n vd.fail(\"nothing to undo on current sheet\")\n\n\[email protected]\ndef redo(vd, sheet):\n sheet.undone or vd.fail(\"nothing to redo\")\n cmdlogrow = sheet.undone.pop()\n vd.replayOne(cmdlogrow)\n vd.status(\"%s redone\" % cmdlogrow.longname)\n\n# undoers\ndef undoAttrFunc(objs, attrname):\n 'Return closure that sets attrname on each obj to its former value.'\n oldvals = [(o, getattr(o, attrname)) for o in objs]\n def _undofunc():\n for o, v in oldvals:\n setattr(o, attrname, v)\n return _undofunc\n\n\nclass Fanout(list):\n 'Fan out attribute changes to every element in a list.'\n def __getattr__(self, k):\n return Fanout([getattr(o, k) for o in self])\n\n def __setattr__(self, k, v):\n vd.addUndo(undoAttrFunc(self, k))\n for o in self:\n setattr(o, k, v)\n\n def __call__(self, *args, **kwargs):\n return Fanout([o(*args, **kwargs) for o in self])\n\n\ndef undoAttrCopyFunc(objs, attrname):\n 'Return closure that sets attrname on each obj to its former value.'\n oldvals = [(o, copy(getattr(o, attrname))) for o in objs]\n def _undofunc():\n for o, v in oldvals:\n setattr(o, attrname, v)\n return _undofunc\n\n\[email protected]\ndef addUndoSetValues(vd, cols, rows):\n 'Add undo function to reset values for *rows* in *cols*.'\n oldvals = [(c, r, c.getValue(r)) for c,r in itertools.product(cols, vd.Progress(rows, gerund='doing'))]\n def _undo():\n for c, r, v in oldvals:\n c.setValue(r, v)\n vd.addUndo(_undo)\n\[email protected]\ndef addUndoColNames(vd, cols):\n oldnames = [(c, c.name) for c in cols]\n def 
_undo():\n for c, name in oldnames:\n c.name = name\n vd.addUndo(_undo)\n\n\nBaseSheet.addCommand('U', 'undo-last', 'vd.undo(sheet)', 'Undo the most recent change (options.undo must be enabled)')\nBaseSheet.addCommand('R', 'redo-last', 'vd.redo(sheet)', 'Redo the most recent undo (options.undo must be enabled)')\n", "path": "visidata/undo.py"}]} | 1,574 | 102 |
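The whole visidata fix is the single word added to `nonUndo`; a minimal sketch of why that is enough (the non-reload command name below is illustrative):

```
# With "reload-sheet" listed in nonUndo, isUndoableCommand() returns False for
# it, so addUndo() never registers an undo step during a reload and pressing
# 'U' can no longer blank a freshly reloaded sheet.
nonUndo = '''commit open-file reload-sheet'''.split()

def isUndoableCommand(longname):
    return not any(longname.startswith(n) for n in nonUndo)

assert isUndoableCommand("edit-cell")          # ordinary edits stay undoable
assert not isUndoableCommand("reload-sheet")   # reload no longer records undo
```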
gh_patches_debug_22422 | rasdani/github-patches | git_diff | pypi__warehouse-1335 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handle Translation of the <title> tag
Currently we can't really translate the `<title>` tag because our block doesn't allow a template to add an l20n ID to it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/i18n/l20n.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import jinja2
14
15 from markupsafe import Markup as M # noqa
16
17 from warehouse.filters import tojson
18
19
20 _L20N_TEMPLATE = jinja2.Template(
21 'data-l10n-id="{{ tid }}"'
22 '{% if data %} data-l10n-args="{{ data }}"{% endif %}',
23 autoescape=True,
24 )
25
26
27 def l20n(tid, **kwargs):
28 data = tojson(kwargs) if kwargs else None
29 return M(_L20N_TEMPLATE.render(tid=tid, data=data))
30
```
Path: `warehouse/i18n/__init__.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from babel.core import Locale
14
15
16 def _locale(request):
17 """
18 Computes a babel.core:Locale() object for this request.
19 """
20 return Locale.parse(request.locale_name)
21
22
23 def includeme(config):
24 # Add the request attributes
25 config.add_request_method(_locale, name="locale", reify=True)
26
27 # Register our i18n/l10n filters for Jinja2
28 filters = config.get_settings().setdefault("jinja2.filters", {})
29 filters.setdefault("format_date", "warehouse.i18n.filters:format_date")
30 filters.setdefault(
31 "format_datetime",
32 "warehouse.i18n.filters:format_datetime",
33 )
34 filters.setdefault(
35 "format_rfc822_datetime",
36 "warehouse.i18n.filters:format_rfc822_datetime",
37 )
38
39 # Register our utility functions with Jinja2
40 jglobals = config.get_settings().setdefault("jinja2.globals", {})
41 jglobals.setdefault("l20n", "warehouse.i18n.l20n:l20n")
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/i18n/__init__.py b/warehouse/i18n/__init__.py
--- a/warehouse/i18n/__init__.py
+++ b/warehouse/i18n/__init__.py
@@ -35,7 +35,3 @@
"format_rfc822_datetime",
"warehouse.i18n.filters:format_rfc822_datetime",
)
-
- # Register our utility functions with Jinja2
- jglobals = config.get_settings().setdefault("jinja2.globals", {})
- jglobals.setdefault("l20n", "warehouse.i18n.l20n:l20n")
diff --git a/warehouse/i18n/l20n.py b/warehouse/i18n/l20n.py
deleted file mode 100644
--- a/warehouse/i18n/l20n.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import jinja2
-
-from markupsafe import Markup as M # noqa
-
-from warehouse.filters import tojson
-
-
-_L20N_TEMPLATE = jinja2.Template(
- 'data-l10n-id="{{ tid }}"'
- '{% if data %} data-l10n-args="{{ data }}"{% endif %}',
- autoescape=True,
-)
-
-
-def l20n(tid, **kwargs):
- data = tojson(kwargs) if kwargs else None
- return M(_L20N_TEMPLATE.render(tid=tid, data=data))
| {"golden_diff": "diff --git a/warehouse/i18n/__init__.py b/warehouse/i18n/__init__.py\n--- a/warehouse/i18n/__init__.py\n+++ b/warehouse/i18n/__init__.py\n@@ -35,7 +35,3 @@\n \"format_rfc822_datetime\",\n \"warehouse.i18n.filters:format_rfc822_datetime\",\n )\n-\n- # Register our utility functions with Jinja2\n- jglobals = config.get_settings().setdefault(\"jinja2.globals\", {})\n- jglobals.setdefault(\"l20n\", \"warehouse.i18n.l20n:l20n\")\ndiff --git a/warehouse/i18n/l20n.py b/warehouse/i18n/l20n.py\ndeleted file mode 100644\n--- a/warehouse/i18n/l20n.py\n+++ /dev/null\n@@ -1,29 +0,0 @@\n-# Licensed under the Apache License, Version 2.0 (the \"License\");\n-# you may not use this file except in compliance with the License.\n-# You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software\n-# distributed under the License is distributed on an \"AS IS\" BASIS,\n-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-# See the License for the specific language governing permissions and\n-# limitations under the License.\n-\n-import jinja2\n-\n-from markupsafe import Markup as M # noqa\n-\n-from warehouse.filters import tojson\n-\n-\n-_L20N_TEMPLATE = jinja2.Template(\n- 'data-l10n-id=\"{{ tid }}\"'\n- '{% if data %} data-l10n-args=\"{{ data }}\"{% endif %}',\n- autoescape=True,\n-)\n-\n-\n-def l20n(tid, **kwargs):\n- data = tojson(kwargs) if kwargs else None\n- return M(_L20N_TEMPLATE.render(tid=tid, data=data))\n", "issue": "Handle Translation of the <title> tag\nCurrently we can't really translate the `<title>` tag because our block doesn't allow a template to add a l20n ID to it.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport jinja2\n\nfrom markupsafe import Markup as M # noqa\n\nfrom warehouse.filters import tojson\n\n\n_L20N_TEMPLATE = jinja2.Template(\n 'data-l10n-id=\"{{ tid }}\"'\n '{% if data %} data-l10n-args=\"{{ data }}\"{% endif %}',\n autoescape=True,\n)\n\n\ndef l20n(tid, **kwargs):\n data = tojson(kwargs) if kwargs else None\n return M(_L20N_TEMPLATE.render(tid=tid, data=data))\n", "path": "warehouse/i18n/l20n.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom babel.core import Locale\n\n\ndef _locale(request):\n \"\"\"\n Computes a babel.core:Locale() object for this request.\n \"\"\"\n return Locale.parse(request.locale_name)\n\n\ndef includeme(config):\n # Add the request 
attributes\n config.add_request_method(_locale, name=\"locale\", reify=True)\n\n # Register our i18n/l10n filters for Jinja2\n filters = config.get_settings().setdefault(\"jinja2.filters\", {})\n filters.setdefault(\"format_date\", \"warehouse.i18n.filters:format_date\")\n filters.setdefault(\n \"format_datetime\",\n \"warehouse.i18n.filters:format_datetime\",\n )\n filters.setdefault(\n \"format_rfc822_datetime\",\n \"warehouse.i18n.filters:format_rfc822_datetime\",\n )\n\n # Register our utility functions with Jinja2\n jglobals = config.get_settings().setdefault(\"jinja2.globals\", {})\n jglobals.setdefault(\"l20n\", \"warehouse.i18n.l20n:l20n\")\n", "path": "warehouse/i18n/__init__.py"}], "after_files": [{"content": null, "path": "warehouse/i18n/l20n.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom babel.core import Locale\n\n\ndef _locale(request):\n \"\"\"\n Computes a babel.core:Locale() object for this request.\n \"\"\"\n return Locale.parse(request.locale_name)\n\n\ndef includeme(config):\n # Add the request attributes\n config.add_request_method(_locale, name=\"locale\", reify=True)\n\n # Register our i18n/l10n filters for Jinja2\n filters = config.get_settings().setdefault(\"jinja2.filters\", {})\n filters.setdefault(\"format_date\", \"warehouse.i18n.filters:format_date\")\n filters.setdefault(\n \"format_datetime\",\n \"warehouse.i18n.filters:format_datetime\",\n )\n filters.setdefault(\n \"format_rfc822_datetime\",\n \"warehouse.i18n.filters:format_rfc822_datetime\",\n )\n", "path": "warehouse/i18n/__init__.py"}]} | 1,040 | 477 |
gh_patches_debug_28516 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Avoid sending typing activity when bot is invoked as skill
We should port this once the C# PR is merged.
See [parent](https://github.com/microsoft/botframework-sdk/issues/6049)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 import asyncio
4 from typing import Awaitable, Callable
5
6 from botbuilder.schema import Activity, ActivityTypes
7
8 from .middleware_set import Middleware
9 from .turn_context import TurnContext
10
11
12 class Timer:
13 clear_timer = False
14
15 def set_timeout(self, func, span):
16 async def some_fn(): # pylint: disable=function-redefined
17 await asyncio.sleep(span)
18 if not self.clear_timer:
19 await func()
20
21 asyncio.ensure_future(some_fn())
22
23 def set_clear_timer(self):
24 self.clear_timer = True
25
26
27 class ShowTypingMiddleware(Middleware):
28 """
29 When added, this middleware will send typing activities back to the user when a Message activity
30 is received to let them know that the bot has received the message and is working on the response.
31 You can specify a delay before the first typing activity is sent and then a frequency, which
32 determines how often another typing activity is sent. Typing activities will continue to be sent
33 until your bot sends another message back to the user.
34 """
35
36 def __init__(self, delay: float = 0.5, period: float = 2.0):
37 """
38 Initializes the middleware.
39
40 :param delay: Delay in seconds for the first typing indicator to be sent.
41 :param period: Delay in seconds for subsequent typing indicators.
42 """
43
44 if delay < 0:
45 raise ValueError("Delay must be greater than or equal to zero")
46
47 if period <= 0:
48 raise ValueError("Repeat period must be greater than zero")
49
50 self._delay = delay
51 self._period = period
52
53 async def on_turn(
54 self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]
55 ):
56 timer = Timer()
57
58 def start_interval(context: TurnContext, delay, period):
59 async def aux():
60 typing_activity = Activity(
61 type=ActivityTypes.typing, relates_to=context.activity.relates_to,
62 )
63
64 conversation_reference = TurnContext.get_conversation_reference(
65 context.activity
66 )
67
68 typing_activity = TurnContext.apply_conversation_reference(
69 typing_activity, conversation_reference
70 )
71
72 asyncio.ensure_future(
73 context.adapter.send_activities(context, [typing_activity])
74 )
75
76 # restart the timer, with the 'period' value for the delay
77 timer.set_timeout(aux, period)
78
79 # first time through we use the 'delay' value for the timer.
80 timer.set_timeout(aux, delay)
81
82 def stop_interval():
83 timer.set_clear_timer()
84
85 # if it's a message, start sending typing activities until the
86 # bot logic is done.
87 if context.activity.type == ActivityTypes.message:
88 start_interval(context, self._delay, self._period)
89
90 # call the bot logic
91 result = await logic()
92
93 stop_interval()
94
95 return result
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py b/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py
--- a/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py
+++ b/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py
@@ -4,7 +4,9 @@
from typing import Awaitable, Callable
from botbuilder.schema import Activity, ActivityTypes
+from botframework.connector.auth import ClaimsIdentity, SkillValidation
+from .bot_adapter import BotAdapter
from .middleware_set import Middleware
from .turn_context import TurnContext
@@ -82,9 +84,12 @@
def stop_interval():
timer.set_clear_timer()
- # if it's a message, start sending typing activities until the
- # bot logic is done.
- if context.activity.type == ActivityTypes.message:
+ # Start a timer to periodically send the typing activity
+ # (bots running as skills should not send typing activity)
+ if (
+ context.activity.type == ActivityTypes.message
+ and not ShowTypingMiddleware._is_skill_bot(context)
+ ):
start_interval(context, self._delay, self._period)
# call the bot logic
@@ -93,3 +98,10 @@
stop_interval()
return result
+
+ @staticmethod
+ def _is_skill_bot(context: TurnContext) -> bool:
+ claims_identity = context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
+ return isinstance(
+ claims_identity, ClaimsIdentity
+ ) and SkillValidation.is_skill_claim(claims_identity.claims)
| {"golden_diff": "diff --git a/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py b/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py\n--- a/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py\n+++ b/libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py\n@@ -4,7 +4,9 @@\n from typing import Awaitable, Callable\r\n \r\n from botbuilder.schema import Activity, ActivityTypes\r\n+from botframework.connector.auth import ClaimsIdentity, SkillValidation\r\n \r\n+from .bot_adapter import BotAdapter\r\n from .middleware_set import Middleware\r\n from .turn_context import TurnContext\r\n \r\n@@ -82,9 +84,12 @@\n def stop_interval():\r\n timer.set_clear_timer()\r\n \r\n- # if it's a message, start sending typing activities until the\r\n- # bot logic is done.\r\n- if context.activity.type == ActivityTypes.message:\r\n+ # Start a timer to periodically send the typing activity\r\n+ # (bots running as skills should not send typing activity)\r\n+ if (\r\n+ context.activity.type == ActivityTypes.message\r\n+ and not ShowTypingMiddleware._is_skill_bot(context)\r\n+ ):\r\n start_interval(context, self._delay, self._period)\r\n \r\n # call the bot logic\r\n@@ -93,3 +98,10 @@\n stop_interval()\r\n \r\n return result\r\n+\r\n+ @staticmethod\r\n+ def _is_skill_bot(context: TurnContext) -> bool:\r\n+ claims_identity = context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)\r\n+ return isinstance(\r\n+ claims_identity, ClaimsIdentity\r\n+ ) and SkillValidation.is_skill_claim(claims_identity.claims)\n", "issue": "Avoid sending typing activity when bot is invoked as skill\nWe should port this once the C# PR is merged.\r\nSee [parent](https://github.com/microsoft/botframework-sdk/issues/6049)\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\nimport asyncio\r\nfrom typing import Awaitable, Callable\r\n\r\nfrom botbuilder.schema import Activity, ActivityTypes\r\n\r\nfrom .middleware_set import Middleware\r\nfrom .turn_context import TurnContext\r\n\r\n\r\nclass Timer:\r\n clear_timer = False\r\n\r\n def set_timeout(self, func, span):\r\n async def some_fn(): # pylint: disable=function-redefined\r\n await asyncio.sleep(span)\r\n if not self.clear_timer:\r\n await func()\r\n\r\n asyncio.ensure_future(some_fn())\r\n\r\n def set_clear_timer(self):\r\n self.clear_timer = True\r\n\r\n\r\nclass ShowTypingMiddleware(Middleware):\r\n \"\"\"\r\n When added, this middleware will send typing activities back to the user when a Message activity\r\n is received to let them know that the bot has received the message and is working on the response.\r\n You can specify a delay before the first typing activity is sent and then a frequency, which\r\n determines how often another typing activity is sent. 
Typing activities will continue to be sent\r\n until your bot sends another message back to the user.\r\n \"\"\"\r\n\r\n def __init__(self, delay: float = 0.5, period: float = 2.0):\r\n \"\"\"\r\n Initializes the middleware.\r\n\r\n :param delay: Delay in seconds for the first typing indicator to be sent.\r\n :param period: Delay in seconds for subsequent typing indicators.\r\n \"\"\"\r\n\r\n if delay < 0:\r\n raise ValueError(\"Delay must be greater than or equal to zero\")\r\n\r\n if period <= 0:\r\n raise ValueError(\"Repeat period must be greater than zero\")\r\n\r\n self._delay = delay\r\n self._period = period\r\n\r\n async def on_turn(\r\n self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]\r\n ):\r\n timer = Timer()\r\n\r\n def start_interval(context: TurnContext, delay, period):\r\n async def aux():\r\n typing_activity = Activity(\r\n type=ActivityTypes.typing, relates_to=context.activity.relates_to,\r\n )\r\n\r\n conversation_reference = TurnContext.get_conversation_reference(\r\n context.activity\r\n )\r\n\r\n typing_activity = TurnContext.apply_conversation_reference(\r\n typing_activity, conversation_reference\r\n )\r\n\r\n asyncio.ensure_future(\r\n context.adapter.send_activities(context, [typing_activity])\r\n )\r\n\r\n # restart the timer, with the 'period' value for the delay\r\n timer.set_timeout(aux, period)\r\n\r\n # first time through we use the 'delay' value for the timer.\r\n timer.set_timeout(aux, delay)\r\n\r\n def stop_interval():\r\n timer.set_clear_timer()\r\n\r\n # if it's a message, start sending typing activities until the\r\n # bot logic is done.\r\n if context.activity.type == ActivityTypes.message:\r\n start_interval(context, self._delay, self._period)\r\n\r\n # call the bot logic\r\n result = await logic()\r\n\r\n stop_interval()\r\n\r\n return result\r\n", "path": "libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\nimport asyncio\r\nfrom typing import Awaitable, Callable\r\n\r\nfrom botbuilder.schema import Activity, ActivityTypes\r\nfrom botframework.connector.auth import ClaimsIdentity, SkillValidation\r\n\r\nfrom .bot_adapter import BotAdapter\r\nfrom .middleware_set import Middleware\r\nfrom .turn_context import TurnContext\r\n\r\n\r\nclass Timer:\r\n clear_timer = False\r\n\r\n def set_timeout(self, func, span):\r\n async def some_fn(): # pylint: disable=function-redefined\r\n await asyncio.sleep(span)\r\n if not self.clear_timer:\r\n await func()\r\n\r\n asyncio.ensure_future(some_fn())\r\n\r\n def set_clear_timer(self):\r\n self.clear_timer = True\r\n\r\n\r\nclass ShowTypingMiddleware(Middleware):\r\n \"\"\"\r\n When added, this middleware will send typing activities back to the user when a Message activity\r\n is received to let them know that the bot has received the message and is working on the response.\r\n You can specify a delay before the first typing activity is sent and then a frequency, which\r\n determines how often another typing activity is sent. 
Typing activities will continue to be sent\r\n until your bot sends another message back to the user.\r\n \"\"\"\r\n\r\n def __init__(self, delay: float = 0.5, period: float = 2.0):\r\n \"\"\"\r\n Initializes the middleware.\r\n\r\n :param delay: Delay in seconds for the first typing indicator to be sent.\r\n :param period: Delay in seconds for subsequent typing indicators.\r\n \"\"\"\r\n\r\n if delay < 0:\r\n raise ValueError(\"Delay must be greater than or equal to zero\")\r\n\r\n if period <= 0:\r\n raise ValueError(\"Repeat period must be greater than zero\")\r\n\r\n self._delay = delay\r\n self._period = period\r\n\r\n async def on_turn(\r\n self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]\r\n ):\r\n timer = Timer()\r\n\r\n def start_interval(context: TurnContext, delay, period):\r\n async def aux():\r\n typing_activity = Activity(\r\n type=ActivityTypes.typing, relates_to=context.activity.relates_to,\r\n )\r\n\r\n conversation_reference = TurnContext.get_conversation_reference(\r\n context.activity\r\n )\r\n\r\n typing_activity = TurnContext.apply_conversation_reference(\r\n typing_activity, conversation_reference\r\n )\r\n\r\n asyncio.ensure_future(\r\n context.adapter.send_activities(context, [typing_activity])\r\n )\r\n\r\n # restart the timer, with the 'period' value for the delay\r\n timer.set_timeout(aux, period)\r\n\r\n # first time through we use the 'delay' value for the timer.\r\n timer.set_timeout(aux, delay)\r\n\r\n def stop_interval():\r\n timer.set_clear_timer()\r\n\r\n # Start a timer to periodically send the typing activity\r\n # (bots running as skills should not send typing activity)\r\n if (\r\n context.activity.type == ActivityTypes.message\r\n and not ShowTypingMiddleware._is_skill_bot(context)\r\n ):\r\n start_interval(context, self._delay, self._period)\r\n\r\n # call the bot logic\r\n result = await logic()\r\n\r\n stop_interval()\r\n\r\n return result\r\n\r\n @staticmethod\r\n def _is_skill_bot(context: TurnContext) -> bool:\r\n claims_identity = context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)\r\n return isinstance(\r\n claims_identity, ClaimsIdentity\r\n ) and SkillValidation.is_skill_claim(claims_identity.claims)\r\n", "path": "libraries/botbuilder-core/botbuilder/core/show_typing_middleware.py"}]} | 1,125 | 374 |
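The core of this botbuilder fix is the skill-claim check; pulled out on its own it looks like the sketch below (imports mirror the patch above; the standalone helper name is just for illustration):

```
from botbuilder.core import BotAdapter, TurnContext
from botframework.connector.auth import ClaimsIdentity, SkillValidation

def is_skill_turn(context: TurnContext) -> bool:
    # The adapter stores the caller's claims identity in turn_state; a skill
    # caller is detected from those claims, and typing indicators are skipped.
    claims_identity = context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
    return isinstance(claims_identity, ClaimsIdentity) and SkillValidation.is_skill_claim(
        claims_identity.claims
    )
```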
gh_patches_debug_1724 | rasdani/github-patches | git_diff | translate__pootle-5621 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stats are wrong for children where the child name is repeated in the child's descendants
this is causing stats to be computed incorrectly
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_data/directory_data.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.db.models import Max
10
11 from pootle_translationproject.models import TranslationProject
12
13 from .utils import RelatedStoresDataTool
14
15
16 class DirectoryDataTool(RelatedStoresDataTool):
17 """Retrieves aggregate stats for a Directory"""
18
19 group_by = ("store__parent__tp_path", )
20 cache_key_name = "directory"
21
22 @property
23 def context_name(self):
24 return self.context.pootle_path
25
26 @property
27 def max_unit_revision(self):
28 try:
29 return self.context.translationproject.data_tool.max_unit_revision
30 except TranslationProject.DoesNotExist:
31 return self.all_stat_data.aggregate(rev=Max("max_unit_revision"))["rev"]
32
33 def filter_data(self, qs):
34 return (
35 qs.filter(
36 store__translation_project=self.context.translation_project,
37 store__parent__tp_path__startswith=self.context.tp_path)
38 .exclude(store__parent=self.context))
39
40 def get_children_stats(self, qs):
41 children = {}
42 for child in qs.iterator():
43 self.add_child_stats(children, child)
44 child_stores = self.data_model.filter(store__parent=self.context).values(
45 *("store__name", ) + self.max_fields + self.sum_fields)
46 for child in child_stores:
47 self.add_child_stats(
48 children,
49 child,
50 root=child["store__name"],
51 use_aggregates=False)
52 self.add_submission_info(self.stat_data, children)
53 self.add_last_created_info(child_stores, children)
54 return children
55
56 def get_root_child_path(self, child):
57 return (
58 child["store__parent__tp_path"].replace(
59 self.context.tp_path, "").split("/")[0])
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_data/directory_data.py b/pootle/apps/pootle_data/directory_data.py
--- a/pootle/apps/pootle_data/directory_data.py
+++ b/pootle/apps/pootle_data/directory_data.py
@@ -54,6 +54,5 @@
return children
def get_root_child_path(self, child):
- return (
- child["store__parent__tp_path"].replace(
- self.context.tp_path, "").split("/")[0])
+ return child["store__parent__tp_path"][
+ len(self.context.tp_path):].split("/")[0]
| {"golden_diff": "diff --git a/pootle/apps/pootle_data/directory_data.py b/pootle/apps/pootle_data/directory_data.py\n--- a/pootle/apps/pootle_data/directory_data.py\n+++ b/pootle/apps/pootle_data/directory_data.py\n@@ -54,6 +54,5 @@\n return children\n \n def get_root_child_path(self, child):\n- return (\n- child[\"store__parent__tp_path\"].replace(\n- self.context.tp_path, \"\").split(\"/\")[0])\n+ return child[\"store__parent__tp_path\"][\n+ len(self.context.tp_path):].split(\"/\")[0]\n", "issue": "Stats are wrong for children where the child name is repeated in the childs descendants\nthis is causing stats to foo\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models import Max\n\nfrom pootle_translationproject.models import TranslationProject\n\nfrom .utils import RelatedStoresDataTool\n\n\nclass DirectoryDataTool(RelatedStoresDataTool):\n \"\"\"Retrieves aggregate stats for a Directory\"\"\"\n\n group_by = (\"store__parent__tp_path\", )\n cache_key_name = \"directory\"\n\n @property\n def context_name(self):\n return self.context.pootle_path\n\n @property\n def max_unit_revision(self):\n try:\n return self.context.translationproject.data_tool.max_unit_revision\n except TranslationProject.DoesNotExist:\n return self.all_stat_data.aggregate(rev=Max(\"max_unit_revision\"))[\"rev\"]\n\n def filter_data(self, qs):\n return (\n qs.filter(\n store__translation_project=self.context.translation_project,\n store__parent__tp_path__startswith=self.context.tp_path)\n .exclude(store__parent=self.context))\n\n def get_children_stats(self, qs):\n children = {}\n for child in qs.iterator():\n self.add_child_stats(children, child)\n child_stores = self.data_model.filter(store__parent=self.context).values(\n *(\"store__name\", ) + self.max_fields + self.sum_fields)\n for child in child_stores:\n self.add_child_stats(\n children,\n child,\n root=child[\"store__name\"],\n use_aggregates=False)\n self.add_submission_info(self.stat_data, children)\n self.add_last_created_info(child_stores, children)\n return children\n\n def get_root_child_path(self, child):\n return (\n child[\"store__parent__tp_path\"].replace(\n self.context.tp_path, \"\").split(\"/\")[0])\n", "path": "pootle/apps/pootle_data/directory_data.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models import Max\n\nfrom pootle_translationproject.models import TranslationProject\n\nfrom .utils import RelatedStoresDataTool\n\n\nclass DirectoryDataTool(RelatedStoresDataTool):\n \"\"\"Retrieves aggregate stats for a Directory\"\"\"\n\n group_by = (\"store__parent__tp_path\", )\n cache_key_name = \"directory\"\n\n @property\n def context_name(self):\n return self.context.pootle_path\n\n @property\n def max_unit_revision(self):\n try:\n return self.context.translationproject.data_tool.max_unit_revision\n except TranslationProject.DoesNotExist:\n return self.all_stat_data.aggregate(rev=Max(\"max_unit_revision\"))[\"rev\"]\n\n def filter_data(self, qs):\n return (\n qs.filter(\n store__translation_project=self.context.translation_project,\n store__parent__tp_path__startswith=self.context.tp_path)\n .exclude(store__parent=self.context))\n\n def get_children_stats(self, qs):\n children = {}\n for child in qs.iterator():\n self.add_child_stats(children, child)\n child_stores = self.data_model.filter(store__parent=self.context).values(\n *(\"store__name\", ) + self.max_fields + self.sum_fields)\n for child in child_stores:\n self.add_child_stats(\n children,\n child,\n root=child[\"store__name\"],\n use_aggregates=False)\n self.add_submission_info(self.stat_data, children)\n self.add_last_created_info(child_stores, children)\n return children\n\n def get_root_child_path(self, child):\n return child[\"store__parent__tp_path\"][\n len(self.context.tp_path):].split(\"/\")[0]\n", "path": "pootle/apps/pootle_data/directory_data.py"}]} | 831 | 143 |
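The pootle change is easiest to see with a concrete path. A minimal sketch (paths are made up) of how the old `.replace()` call differs from the new prefix slice when a directory name reappears deeper in the tree:

```
tp_path = "/devices/"
child_parent_path = "/devices/mobile/devices/ios/"

# old: replace() strips every occurrence of the parent path, merging segments
old = child_parent_path.replace(tp_path, "").split("/")[0]
# new: slicing off only the leading prefix keeps the real child name
new = child_parent_path[len(tp_path):].split("/")[0]

assert old == "mobileios"   # wrong bucket, so child stats get mis-aggregated
assert new == "mobile"      # the actual immediate child directory
```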
gh_patches_debug_34545 | rasdani/github-patches | git_diff | feast-dev__feast-1771 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FeatureService should support "description" as a first-class named-value argument, as Entity does
## Expected Behavior
...
```
# Define your feature service and the features it will serve
driver_fs = FeatureService(name="driver_ranking_fv_svc",
                           features=[driver_hourly_stats_view],
                           description="Used for training an ElasticNetCV model")
```
```
feast feature-services describe driver_ranking_fv_svc
spec:
name: driver_ranking_fv_svc
features:
- featureViewName: driver_hourly_stats
featureColumns:
- name: conv_rate
valueType: FLOAT
- name: acc_rate
valueType: FLOAT
- name: avg_daily_trips
valueType: INT64
description: Used for training an ElasticNetCV model
meta: {}
```
Extend the argument list in [FeatureService](https://github.com/feast-dev/feast/blob/7dff49a194a25a62927c1ee7022caf0651f68f38/sdk/python/feast/feature_service.py#L24) to support `description=None` as the default.
## Current Behavior
```
# Define your feature service and the features it will serve
driver_fs = FeatureService(name="driver_ranking_fv_svc",
features=[driver_hourly_stats_view],
tags={"description": "Used for training an ElasticNetCV model"})
```
Then query the feature service from the CLI:
```
feast feature-services describe driver_ranking_fv_svc
spec:
name: driver_ranking_fv_svc
features:
- featureViewName: driver_hourly_stats
featureColumns:
- name: conv_rate
valueType: FLOAT
- name: acc_rate
valueType: FLOAT
- name: avg_daily_trips
valueType: INT64
tags:
description: Used for training an ElasticNetCV model
meta: {}
```
## Steps to reproduce
Follow the steps above in the current behavior
### Specifications
- Version: v0.12v
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/feature_service.py`
Content:
```
1 from datetime import datetime
2 from typing import Dict, List, Optional, Union
3
4 from google.protobuf.json_format import MessageToJson
5
6 from feast.feature_table import FeatureTable
7 from feast.feature_view import FeatureView
8 from feast.feature_view_projection import FeatureViewProjection
9 from feast.protos.feast.core.FeatureService_pb2 import (
10 FeatureService as FeatureServiceProto,
11 )
12 from feast.protos.feast.core.FeatureService_pb2 import (
13 FeatureServiceMeta,
14 FeatureServiceSpec,
15 )
16
17
18 class FeatureService:
19 """
20 A feature service is a logical grouping of features for retrieval (training or serving).
21 The features grouped by a feature service may come from any number of feature views.
22
23 Args:
24 name: Unique name of the feature service.
25 features: A list of Features that are grouped as part of this FeatureService.
26 The list may contain Feature Views, Feature Tables, or a subset of either.
27 tags (optional): A dictionary of key-value pairs used for organizing Feature
28 Services.
29 """
30
31 name: str
32 features: List[FeatureViewProjection]
33 tags: Dict[str, str]
34 created_timestamp: Optional[datetime] = None
35 last_updated_timestamp: Optional[datetime] = None
36
37 def __init__(
38 self,
39 name: str,
40 features: List[Union[FeatureTable, FeatureView, FeatureViewProjection]],
41 tags: Optional[Dict[str, str]] = None,
42 ):
43 """
44 Creates a FeatureService object.
45
46 Raises:
47 ValueError: If one of the specified features is not a valid type.
48 """
49 self.name = name
50 self.features = []
51 for feature in features:
52 if isinstance(feature, FeatureTable) or isinstance(feature, FeatureView):
53 self.features.append(FeatureViewProjection.from_definition(feature))
54 elif isinstance(feature, FeatureViewProjection):
55 self.features.append(feature)
56 else:
57 raise ValueError(f"Unexpected type: {type(feature)}")
58 self.tags = tags or {}
59 self.created_timestamp = None
60 self.last_updated_timestamp = None
61
62 def __repr__(self):
63 items = (f"{k} = {v}" for k, v in self.__dict__.items())
64 return f"<{self.__class__.__name__}({', '.join(items)})>"
65
66 def __str__(self):
67 return str(MessageToJson(self.to_proto()))
68
69 def __hash__(self):
70 return hash(self.name)
71
72 def __eq__(self, other):
73 if not isinstance(other, FeatureService):
74 raise TypeError(
75 "Comparisons should only involve FeatureService class objects."
76 )
77 if self.tags != other.tags or self.name != other.name:
78 return False
79
80 if sorted(self.features) != sorted(other.features):
81 return False
82
83 return True
84
85 @staticmethod
86 def from_proto(feature_service_proto: FeatureServiceProto):
87 """
88 Converts a FeatureServiceProto to a FeatureService object.
89
90 Args:
91 feature_service_proto: A protobuf representation of a FeatureService.
92 """
93 fs = FeatureService(
94 name=feature_service_proto.spec.name,
95 features=[
96 FeatureViewProjection.from_proto(fp)
97 for fp in feature_service_proto.spec.features
98 ],
99 tags=dict(feature_service_proto.spec.tags),
100 )
101
102 if feature_service_proto.meta.HasField("created_timestamp"):
103 fs.created_timestamp = (
104 feature_service_proto.meta.created_timestamp.ToDatetime()
105 )
106 if feature_service_proto.meta.HasField("last_updated_timestamp"):
107 fs.last_updated_timestamp = (
108 feature_service_proto.meta.last_updated_timestamp.ToDatetime()
109 )
110
111 return fs
112
113 def to_proto(self) -> FeatureServiceProto:
114 """
115 Converts a FeatureService to its protobuf representation.
116
117 Returns:
118 A FeatureServiceProto protobuf.
119 """
120 meta = FeatureServiceMeta()
121 if self.created_timestamp:
122 meta.created_timestamp.FromDatetime(self.created_timestamp)
123
124 spec = FeatureServiceSpec()
125 spec.name = self.name
126 for definition in self.features:
127 if isinstance(definition, FeatureTable) or isinstance(
128 definition, FeatureView
129 ):
130 feature_ref = FeatureViewProjection(
131 definition.name, definition.features
132 )
133 else:
134 feature_ref = definition
135
136 spec.features.append(feature_ref.to_proto())
137
138 if self.tags:
139 spec.tags.update(self.tags)
140
141 feature_service_proto = FeatureServiceProto(spec=spec, meta=meta)
142 return feature_service_proto
143
144 def validate(self):
145 pass
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/feast/feature_service.py b/sdk/python/feast/feature_service.py
--- a/sdk/python/feast/feature_service.py
+++ b/sdk/python/feast/feature_service.py
@@ -31,6 +31,7 @@
name: str
features: List[FeatureViewProjection]
tags: Dict[str, str]
+ description: Optional[str] = None
created_timestamp: Optional[datetime] = None
last_updated_timestamp: Optional[datetime] = None
@@ -39,6 +40,7 @@
name: str,
features: List[Union[FeatureTable, FeatureView, FeatureViewProjection]],
tags: Optional[Dict[str, str]] = None,
+ description: Optional[str] = None,
):
"""
Creates a FeatureService object.
@@ -56,6 +58,7 @@
else:
raise ValueError(f"Unexpected type: {type(feature)}")
self.tags = tags or {}
+ self.description = description
self.created_timestamp = None
self.last_updated_timestamp = None
@@ -97,6 +100,11 @@
for fp in feature_service_proto.spec.features
],
tags=dict(feature_service_proto.spec.tags),
+ description=(
+ feature_service_proto.spec.description
+ if feature_service_proto.spec.description != ""
+ else None
+ ),
)
if feature_service_proto.meta.HasField("created_timestamp"):
@@ -137,6 +145,8 @@
if self.tags:
spec.tags.update(self.tags)
+ if self.description:
+ spec.description = self.description
feature_service_proto = FeatureServiceProto(spec=spec, meta=meta)
return feature_service_proto
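
A minimal usage sketch of the patched behaviour follows. It is illustrative only: the import path is assumed, and `driver_hourly_stats_view` stands in for an existing `FeatureView` that is not defined here.

```python
# Illustrative sketch, assuming the patched FeatureService above.
from feast import FeatureService  # import path assumed

driver_fs = FeatureService(
    name="driver_ranking_fv_svc",
    features=[driver_hourly_stats_view],  # an existing FeatureView (not defined here)
    description="Used for training an ElasticNetCV model",
)

proto = driver_fs.to_proto()
print(proto.spec.description)  # Used for training an ElasticNetCV model
```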
| {"golden_diff": "diff --git a/sdk/python/feast/feature_service.py b/sdk/python/feast/feature_service.py\n--- a/sdk/python/feast/feature_service.py\n+++ b/sdk/python/feast/feature_service.py\n@@ -31,6 +31,7 @@\n name: str\n features: List[FeatureViewProjection]\n tags: Dict[str, str]\n+ description: Optional[str] = None\n created_timestamp: Optional[datetime] = None\n last_updated_timestamp: Optional[datetime] = None\n \n@@ -39,6 +40,7 @@\n name: str,\n features: List[Union[FeatureTable, FeatureView, FeatureViewProjection]],\n tags: Optional[Dict[str, str]] = None,\n+ description: Optional[str] = None,\n ):\n \"\"\"\n Creates a FeatureService object.\n@@ -56,6 +58,7 @@\n else:\n raise ValueError(f\"Unexpected type: {type(feature)}\")\n self.tags = tags or {}\n+ self.description = description\n self.created_timestamp = None\n self.last_updated_timestamp = None\n \n@@ -97,6 +100,11 @@\n for fp in feature_service_proto.spec.features\n ],\n tags=dict(feature_service_proto.spec.tags),\n+ description=(\n+ feature_service_proto.spec.description\n+ if feature_service_proto.spec.description != \"\"\n+ else None\n+ ),\n )\n \n if feature_service_proto.meta.HasField(\"created_timestamp\"):\n@@ -137,6 +145,8 @@\n \n if self.tags:\n spec.tags.update(self.tags)\n+ if self.description:\n+ spec.description = self.description\n \n feature_service_proto = FeatureServiceProto(spec=spec, meta=meta)\n return feature_service_proto\n", "issue": "FeatureService should support \"description\" as first class named-value argument as Entity does\n## Expected Behavior \r\n...\r\n```\r\n# Define your feature service and the features it will serve\r\ndriver_fs = FeatureService(name=\"driver_ranking_fv_svc\",\r\n features=[driver_hourly_stats_view],\r\n description=\"Used for training an ElasticNetCV model\"})\r\n```\r\n```\r\nfeast feature-services describe driver_ranking_fv_svc\r\nspec:\r\n name: driver_ranking_fv_svc\r\n features:\r\n - featureViewName: driver_hourly_stats\r\n featureColumns:\r\n - name: conv_rate\r\n valueType: FLOAT\r\n - name: acc_rate\r\n valueType: FLOAT\r\n - name: avg_daily_trips\r\n valueType: INT64\r\ndescription: Used for training an ElasticNetCV model\r\nmeta: {}\r\n```\r\nExtend the argument list in the [FeatureService ](https://github.com/feast-dev/feast/blob/7dff49a194a25a62927c1ee7022caf0651f68f38/sdk/python/feast/feature_service.py#L24)to support `description=None` as default\r\n## Current Behavior\r\n\r\n```\r\n# Define your feature service and the features it will serve\r\ndriver_fs = FeatureService(name=\"driver_ranking_fv_svc\",\r\n features=[driver_hourly_stats_view],\r\n tags={\"description\": \"Used for training an ElasticNetCV model\"})\r\n```\r\nThen query the FS from the cli\r\n\r\n```\r\nfeast feature-services describe driver_ranking_fv_svc\r\nspec:\r\n name: driver_ranking_fv_svc\r\n features:\r\n - featureViewName: driver_hourly_stats\r\n featureColumns:\r\n - name: conv_rate\r\n valueType: FLOAT\r\n - name: acc_rate\r\n valueType: FLOAT\r\n - name: avg_daily_trips\r\n valueType: INT64\r\n tags:\r\n description: Used for training an ElasticNetCV model\r\nmeta: {}\r\n```\r\n## Steps to reproduce\r\n\r\nFollow the steps above in the current behavior\r\n\r\n### Specifications\r\n\r\n- Version: v0.12v\r\n\n", "before_files": [{"content": "from datetime import datetime\nfrom typing import Dict, List, Optional, Union\n\nfrom google.protobuf.json_format import MessageToJson\n\nfrom feast.feature_table import FeatureTable\nfrom feast.feature_view import FeatureView\nfrom 
feast.feature_view_projection import FeatureViewProjection\nfrom feast.protos.feast.core.FeatureService_pb2 import (\n FeatureService as FeatureServiceProto,\n)\nfrom feast.protos.feast.core.FeatureService_pb2 import (\n FeatureServiceMeta,\n FeatureServiceSpec,\n)\n\n\nclass FeatureService:\n \"\"\"\n A feature service is a logical grouping of features for retrieval (training or serving).\n The features grouped by a feature service may come from any number of feature views.\n\n Args:\n name: Unique name of the feature service.\n features: A list of Features that are grouped as part of this FeatureService.\n The list may contain Feature Views, Feature Tables, or a subset of either.\n tags (optional): A dictionary of key-value pairs used for organizing Feature\n Services.\n \"\"\"\n\n name: str\n features: List[FeatureViewProjection]\n tags: Dict[str, str]\n created_timestamp: Optional[datetime] = None\n last_updated_timestamp: Optional[datetime] = None\n\n def __init__(\n self,\n name: str,\n features: List[Union[FeatureTable, FeatureView, FeatureViewProjection]],\n tags: Optional[Dict[str, str]] = None,\n ):\n \"\"\"\n Creates a FeatureService object.\n\n Raises:\n ValueError: If one of the specified features is not a valid type.\n \"\"\"\n self.name = name\n self.features = []\n for feature in features:\n if isinstance(feature, FeatureTable) or isinstance(feature, FeatureView):\n self.features.append(FeatureViewProjection.from_definition(feature))\n elif isinstance(feature, FeatureViewProjection):\n self.features.append(feature)\n else:\n raise ValueError(f\"Unexpected type: {type(feature)}\")\n self.tags = tags or {}\n self.created_timestamp = None\n self.last_updated_timestamp = None\n\n def __repr__(self):\n items = (f\"{k} = {v}\" for k, v in self.__dict__.items())\n return f\"<{self.__class__.__name__}({', '.join(items)})>\"\n\n def __str__(self):\n return str(MessageToJson(self.to_proto()))\n\n def __hash__(self):\n return hash(self.name)\n\n def __eq__(self, other):\n if not isinstance(other, FeatureService):\n raise TypeError(\n \"Comparisons should only involve FeatureService class objects.\"\n )\n if self.tags != other.tags or self.name != other.name:\n return False\n\n if sorted(self.features) != sorted(other.features):\n return False\n\n return True\n\n @staticmethod\n def from_proto(feature_service_proto: FeatureServiceProto):\n \"\"\"\n Converts a FeatureServiceProto to a FeatureService object.\n\n Args:\n feature_service_proto: A protobuf representation of a FeatureService.\n \"\"\"\n fs = FeatureService(\n name=feature_service_proto.spec.name,\n features=[\n FeatureViewProjection.from_proto(fp)\n for fp in feature_service_proto.spec.features\n ],\n tags=dict(feature_service_proto.spec.tags),\n )\n\n if feature_service_proto.meta.HasField(\"created_timestamp\"):\n fs.created_timestamp = (\n feature_service_proto.meta.created_timestamp.ToDatetime()\n )\n if feature_service_proto.meta.HasField(\"last_updated_timestamp\"):\n fs.last_updated_timestamp = (\n feature_service_proto.meta.last_updated_timestamp.ToDatetime()\n )\n\n return fs\n\n def to_proto(self) -> FeatureServiceProto:\n \"\"\"\n Converts a FeatureService to its protobuf representation.\n\n Returns:\n A FeatureServiceProto protobuf.\n \"\"\"\n meta = FeatureServiceMeta()\n if self.created_timestamp:\n meta.created_timestamp.FromDatetime(self.created_timestamp)\n\n spec = FeatureServiceSpec()\n spec.name = self.name\n for definition in self.features:\n if isinstance(definition, FeatureTable) or isinstance(\n 
definition, FeatureView\n ):\n feature_ref = FeatureViewProjection(\n definition.name, definition.features\n )\n else:\n feature_ref = definition\n\n spec.features.append(feature_ref.to_proto())\n\n if self.tags:\n spec.tags.update(self.tags)\n\n feature_service_proto = FeatureServiceProto(spec=spec, meta=meta)\n return feature_service_proto\n\n def validate(self):\n pass\n", "path": "sdk/python/feast/feature_service.py"}], "after_files": [{"content": "from datetime import datetime\nfrom typing import Dict, List, Optional, Union\n\nfrom google.protobuf.json_format import MessageToJson\n\nfrom feast.feature_table import FeatureTable\nfrom feast.feature_view import FeatureView\nfrom feast.feature_view_projection import FeatureViewProjection\nfrom feast.protos.feast.core.FeatureService_pb2 import (\n FeatureService as FeatureServiceProto,\n)\nfrom feast.protos.feast.core.FeatureService_pb2 import (\n FeatureServiceMeta,\n FeatureServiceSpec,\n)\n\n\nclass FeatureService:\n \"\"\"\n A feature service is a logical grouping of features for retrieval (training or serving).\n The features grouped by a feature service may come from any number of feature views.\n\n Args:\n name: Unique name of the feature service.\n features: A list of Features that are grouped as part of this FeatureService.\n The list may contain Feature Views, Feature Tables, or a subset of either.\n tags (optional): A dictionary of key-value pairs used for organizing Feature\n Services.\n \"\"\"\n\n name: str\n features: List[FeatureViewProjection]\n tags: Dict[str, str]\n description: Optional[str] = None\n created_timestamp: Optional[datetime] = None\n last_updated_timestamp: Optional[datetime] = None\n\n def __init__(\n self,\n name: str,\n features: List[Union[FeatureTable, FeatureView, FeatureViewProjection]],\n tags: Optional[Dict[str, str]] = None,\n description: Optional[str] = None,\n ):\n \"\"\"\n Creates a FeatureService object.\n\n Raises:\n ValueError: If one of the specified features is not a valid type.\n \"\"\"\n self.name = name\n self.features = []\n for feature in features:\n if isinstance(feature, FeatureTable) or isinstance(feature, FeatureView):\n self.features.append(FeatureViewProjection.from_definition(feature))\n elif isinstance(feature, FeatureViewProjection):\n self.features.append(feature)\n else:\n raise ValueError(f\"Unexpected type: {type(feature)}\")\n self.tags = tags or {}\n self.description = description\n self.created_timestamp = None\n self.last_updated_timestamp = None\n\n def __repr__(self):\n items = (f\"{k} = {v}\" for k, v in self.__dict__.items())\n return f\"<{self.__class__.__name__}({', '.join(items)})>\"\n\n def __str__(self):\n return str(MessageToJson(self.to_proto()))\n\n def __hash__(self):\n return hash(self.name)\n\n def __eq__(self, other):\n if not isinstance(other, FeatureService):\n raise TypeError(\n \"Comparisons should only involve FeatureService class objects.\"\n )\n if self.tags != other.tags or self.name != other.name:\n return False\n\n if sorted(self.features) != sorted(other.features):\n return False\n\n return True\n\n @staticmethod\n def from_proto(feature_service_proto: FeatureServiceProto):\n \"\"\"\n Converts a FeatureServiceProto to a FeatureService object.\n\n Args:\n feature_service_proto: A protobuf representation of a FeatureService.\n \"\"\"\n fs = FeatureService(\n name=feature_service_proto.spec.name,\n features=[\n FeatureViewProjection.from_proto(fp)\n for fp in feature_service_proto.spec.features\n ],\n 
tags=dict(feature_service_proto.spec.tags),\n description=(\n feature_service_proto.spec.description\n if feature_service_proto.spec.description != \"\"\n else None\n ),\n )\n\n if feature_service_proto.meta.HasField(\"created_timestamp\"):\n fs.created_timestamp = (\n feature_service_proto.meta.created_timestamp.ToDatetime()\n )\n if feature_service_proto.meta.HasField(\"last_updated_timestamp\"):\n fs.last_updated_timestamp = (\n feature_service_proto.meta.last_updated_timestamp.ToDatetime()\n )\n\n return fs\n\n def to_proto(self) -> FeatureServiceProto:\n \"\"\"\n Converts a FeatureService to its protobuf representation.\n\n Returns:\n A FeatureServiceProto protobuf.\n \"\"\"\n meta = FeatureServiceMeta()\n if self.created_timestamp:\n meta.created_timestamp.FromDatetime(self.created_timestamp)\n\n spec = FeatureServiceSpec()\n spec.name = self.name\n for definition in self.features:\n if isinstance(definition, FeatureTable) or isinstance(\n definition, FeatureView\n ):\n feature_ref = FeatureViewProjection(\n definition.name, definition.features\n )\n else:\n feature_ref = definition\n\n spec.features.append(feature_ref.to_proto())\n\n if self.tags:\n spec.tags.update(self.tags)\n if self.description:\n spec.description = self.description\n\n feature_service_proto = FeatureServiceProto(spec=spec, meta=meta)\n return feature_service_proto\n\n def validate(self):\n pass\n", "path": "sdk/python/feast/feature_service.py"}]} | 1,985 | 383 |
gh_patches_debug_15810 | rasdani/github-patches | git_diff | zulip__zulip-9272 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove the "Delete streams" administrative tab
Now that we have a "Delete stream" button in the main streams UI, which has lots more context on description/traffic/subscribers, this page is useless. We should just remove it.
We should make sure to remove the documentation in /help/ linking to this as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zerver/lib/bugdown/help_settings_links.py`
Content:
```
1 import re
2 import markdown
3 from typing import Any, Dict, List, Optional, Union, Text
4 from typing.re import Match
5 from markdown.preprocessors import Preprocessor
6
7 REGEXP = re.compile(r'\{settings_tab\|(?P<setting_identifier>.*?)\}')
8
9 link_mapping = {
10 # a mapping from the setting identifier that is the same as the final URL
11 # breadcrumb to that setting to the name of its setting type, the setting
12 # name as it appears in the user interface, and a relative link that can
13 # be used to get to that setting
14 'your-account': ['Settings', 'Your account', '/#settings/your-account'],
15 'display-settings': ['Settings', 'Display settings', '/#settings/display-settings'],
16 'notifications': ['Settings', 'Notifications', '/#settings/notifications'],
17 'your-bots': ['Settings', 'Your bots', '/#settings/your-bots'],
18 'alert-words': ['Settings', 'Alert words', '/#settings/alert-words'],
19 'uploaded-files': ['Settings', 'Uploaded files', '/#settings/uploaded-files'],
20 'muted-topics': ['Settings', 'Muted topics', '/#settings/muted-topics'],
21
22 'organization-profile': ['Manage organization', 'Organization profile',
23 '/#organization/organization-profile'],
24 'organization-settings': ['Manage organization', 'Organization settings',
25 '/#organization/organization-settings'],
26 'organization-permissions': ['Manage organization', 'Organization permissions',
27 '/#organization/organization-permissions'],
28 'emoji-settings': ['Manage organization', 'Custom emoji',
29 '/#organization/emoji-settings'],
30 'auth-methods': ['Manage organization', 'Authentication methods',
31 '/#organization/auth-methods'],
32 'user-groups-admin': ['Manage organization', 'User groups',
33 '/#organization/user-groups-admin'],
34 'user-list-admin': ['Manage organization', 'Users', '/#organization/user-list-admin'],
35 'deactivated-users-admin': ['Manage organization', 'Deactivated users',
36 '/#organization/deactivated-users-admin'],
37 'bot-list-admin': ['Manage organization', 'Bots', '/#organization/bot-list-admin'],
38 'streams-list-admin': ['Manage organization', 'Delete streams',
39 '/#organization/streams-list-admin'],
40 'default-streams-list': ['Manage organization', 'Default streams',
41 '/#organization/default-streams-list'],
42 'filter-settings': ['Manage organization', 'Filter settings',
43 '/#organization/filter-settings'],
44 'profile-field-settings': ['Manage organization', 'Custom profile fields',
45 '/#organization/profile-field-settings'],
46 'invites-list-admin': ['Manage organization', 'Invitations',
47 '/#organization/invites-list-admin'],
48 }
49
50 settings_markdown = """
51 1. From your desktop, click on the **gear**
52 (<i class="icon-vector-cog"></i>) in the upper right corner.
53
54 1. Select **%(setting_type_name)s**.
55
56 1. On the left, click %(setting_reference)s.
57 """
58
59
60 class SettingHelpExtension(markdown.Extension):
61 def extendMarkdown(self, md: markdown.Markdown, md_globals: Dict[str, Any]) -> None:
62 """ Add SettingHelpExtension to the Markdown instance. """
63 md.registerExtension(self)
64 md.preprocessors.add('setting', Setting(), '_begin')
65
66 relative_settings_links = None # type: Optional[bool]
67
68 def set_relative_settings_links(value: bool) -> None:
69 global relative_settings_links
70 relative_settings_links = value
71
72 class Setting(Preprocessor):
73 def run(self, lines: List[str]) -> List[str]:
74 done = False
75 while not done:
76 for line in lines:
77 loc = lines.index(line)
78 match = REGEXP.search(line)
79
80 if match:
81 text = [self.handleMatch(match)]
82 # The line that contains the directive to include the macro
83 # may be preceded or followed by text or tags, in that case
84 # we need to make sure that any preceding or following text
85 # stays the same.
86 line_split = REGEXP.split(line, maxsplit=0)
87 preceding = line_split[0]
88 following = line_split[-1]
89 text = [preceding] + text + [following]
90 lines = lines[:loc] + text + lines[loc+1:]
91 break
92 else:
93 done = True
94 return lines
95
96 def handleMatch(self, match: Match[Text]) -> Text:
97 setting_identifier = match.group('setting_identifier')
98 setting_type_name = link_mapping[setting_identifier][0]
99 setting_name = link_mapping[setting_identifier][1]
100 setting_link = link_mapping[setting_identifier][2]
101 if relative_settings_links:
102 setting_reference = "[%s](%s)" % (setting_name, setting_link)
103 else:
104 setting_reference = "**%s**" % (setting_name,)
105 instructions = settings_markdown % {'setting_type_name': setting_type_name,
106 'setting_reference': setting_reference}
107 return instructions
108
109
110 def makeExtension(*args: Any, **kwargs: Any) -> SettingHelpExtension:
111 return SettingHelpExtension(*args, **kwargs)
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zerver/lib/bugdown/help_settings_links.py b/zerver/lib/bugdown/help_settings_links.py
--- a/zerver/lib/bugdown/help_settings_links.py
+++ b/zerver/lib/bugdown/help_settings_links.py
@@ -35,8 +35,6 @@
'deactivated-users-admin': ['Manage organization', 'Deactivated users',
'/#organization/deactivated-users-admin'],
'bot-list-admin': ['Manage organization', 'Bots', '/#organization/bot-list-admin'],
- 'streams-list-admin': ['Manage organization', 'Delete streams',
- '/#organization/streams-list-admin'],
'default-streams-list': ['Manage organization', 'Default streams',
'/#organization/default-streams-list'],
'filter-settings': ['Manage organization', 'Filter settings',
| {"golden_diff": "diff --git a/zerver/lib/bugdown/help_settings_links.py b/zerver/lib/bugdown/help_settings_links.py\n--- a/zerver/lib/bugdown/help_settings_links.py\n+++ b/zerver/lib/bugdown/help_settings_links.py\n@@ -35,8 +35,6 @@\n 'deactivated-users-admin': ['Manage organization', 'Deactivated users',\n '/#organization/deactivated-users-admin'],\n 'bot-list-admin': ['Manage organization', 'Bots', '/#organization/bot-list-admin'],\n- 'streams-list-admin': ['Manage organization', 'Delete streams',\n- '/#organization/streams-list-admin'],\n 'default-streams-list': ['Manage organization', 'Default streams',\n '/#organization/default-streams-list'],\n 'filter-settings': ['Manage organization', 'Filter settings',\n", "issue": "Remove the \"Delete streams\" administrative tab\nNow that we have a \"Delete stream\" button in the main streams UI, which has lots more context on description/traffic/subscribers, this page is useless. We should just remove it.\r\n\r\nWe should make sure to remove the documentation in /help/ linking to this as well.\n", "before_files": [{"content": "import re\nimport markdown\nfrom typing import Any, Dict, List, Optional, Union, Text\nfrom typing.re import Match\nfrom markdown.preprocessors import Preprocessor\n\nREGEXP = re.compile(r'\\{settings_tab\\|(?P<setting_identifier>.*?)\\}')\n\nlink_mapping = {\n # a mapping from the setting identifier that is the same as the final URL\n # breadcrumb to that setting to the name of its setting type, the setting\n # name as it appears in the user interface, and a relative link that can\n # be used to get to that setting\n 'your-account': ['Settings', 'Your account', '/#settings/your-account'],\n 'display-settings': ['Settings', 'Display settings', '/#settings/display-settings'],\n 'notifications': ['Settings', 'Notifications', '/#settings/notifications'],\n 'your-bots': ['Settings', 'Your bots', '/#settings/your-bots'],\n 'alert-words': ['Settings', 'Alert words', '/#settings/alert-words'],\n 'uploaded-files': ['Settings', 'Uploaded files', '/#settings/uploaded-files'],\n 'muted-topics': ['Settings', 'Muted topics', '/#settings/muted-topics'],\n\n 'organization-profile': ['Manage organization', 'Organization profile',\n '/#organization/organization-profile'],\n 'organization-settings': ['Manage organization', 'Organization settings',\n '/#organization/organization-settings'],\n 'organization-permissions': ['Manage organization', 'Organization permissions',\n '/#organization/organization-permissions'],\n 'emoji-settings': ['Manage organization', 'Custom emoji',\n '/#organization/emoji-settings'],\n 'auth-methods': ['Manage organization', 'Authentication methods',\n '/#organization/auth-methods'],\n 'user-groups-admin': ['Manage organization', 'User groups',\n '/#organization/user-groups-admin'],\n 'user-list-admin': ['Manage organization', 'Users', '/#organization/user-list-admin'],\n 'deactivated-users-admin': ['Manage organization', 'Deactivated users',\n '/#organization/deactivated-users-admin'],\n 'bot-list-admin': ['Manage organization', 'Bots', '/#organization/bot-list-admin'],\n 'streams-list-admin': ['Manage organization', 'Delete streams',\n '/#organization/streams-list-admin'],\n 'default-streams-list': ['Manage organization', 'Default streams',\n '/#organization/default-streams-list'],\n 'filter-settings': ['Manage organization', 'Filter settings',\n '/#organization/filter-settings'],\n 'profile-field-settings': ['Manage organization', 'Custom profile fields',\n '/#organization/profile-field-settings'],\n 
'invites-list-admin': ['Manage organization', 'Invitations',\n '/#organization/invites-list-admin'],\n}\n\nsettings_markdown = \"\"\"\n1. From your desktop, click on the **gear**\n (<i class=\"icon-vector-cog\"></i>) in the upper right corner.\n\n1. Select **%(setting_type_name)s**.\n\n1. On the left, click %(setting_reference)s.\n\"\"\"\n\n\nclass SettingHelpExtension(markdown.Extension):\n def extendMarkdown(self, md: markdown.Markdown, md_globals: Dict[str, Any]) -> None:\n \"\"\" Add SettingHelpExtension to the Markdown instance. \"\"\"\n md.registerExtension(self)\n md.preprocessors.add('setting', Setting(), '_begin')\n\nrelative_settings_links = None # type: Optional[bool]\n\ndef set_relative_settings_links(value: bool) -> None:\n global relative_settings_links\n relative_settings_links = value\n\nclass Setting(Preprocessor):\n def run(self, lines: List[str]) -> List[str]:\n done = False\n while not done:\n for line in lines:\n loc = lines.index(line)\n match = REGEXP.search(line)\n\n if match:\n text = [self.handleMatch(match)]\n # The line that contains the directive to include the macro\n # may be preceded or followed by text or tags, in that case\n # we need to make sure that any preceding or following text\n # stays the same.\n line_split = REGEXP.split(line, maxsplit=0)\n preceding = line_split[0]\n following = line_split[-1]\n text = [preceding] + text + [following]\n lines = lines[:loc] + text + lines[loc+1:]\n break\n else:\n done = True\n return lines\n\n def handleMatch(self, match: Match[Text]) -> Text:\n setting_identifier = match.group('setting_identifier')\n setting_type_name = link_mapping[setting_identifier][0]\n setting_name = link_mapping[setting_identifier][1]\n setting_link = link_mapping[setting_identifier][2]\n if relative_settings_links:\n setting_reference = \"[%s](%s)\" % (setting_name, setting_link)\n else:\n setting_reference = \"**%s**\" % (setting_name,)\n instructions = settings_markdown % {'setting_type_name': setting_type_name,\n 'setting_reference': setting_reference}\n return instructions\n\n\ndef makeExtension(*args: Any, **kwargs: Any) -> SettingHelpExtension:\n return SettingHelpExtension(*args, **kwargs)\n", "path": "zerver/lib/bugdown/help_settings_links.py"}], "after_files": [{"content": "import re\nimport markdown\nfrom typing import Any, Dict, List, Optional, Union, Text\nfrom typing.re import Match\nfrom markdown.preprocessors import Preprocessor\n\nREGEXP = re.compile(r'\\{settings_tab\\|(?P<setting_identifier>.*?)\\}')\n\nlink_mapping = {\n # a mapping from the setting identifier that is the same as the final URL\n # breadcrumb to that setting to the name of its setting type, the setting\n # name as it appears in the user interface, and a relative link that can\n # be used to get to that setting\n 'your-account': ['Settings', 'Your account', '/#settings/your-account'],\n 'display-settings': ['Settings', 'Display settings', '/#settings/display-settings'],\n 'notifications': ['Settings', 'Notifications', '/#settings/notifications'],\n 'your-bots': ['Settings', 'Your bots', '/#settings/your-bots'],\n 'alert-words': ['Settings', 'Alert words', '/#settings/alert-words'],\n 'uploaded-files': ['Settings', 'Uploaded files', '/#settings/uploaded-files'],\n 'muted-topics': ['Settings', 'Muted topics', '/#settings/muted-topics'],\n\n 'organization-profile': ['Manage organization', 'Organization profile',\n '/#organization/organization-profile'],\n 'organization-settings': ['Manage organization', 'Organization settings',\n 
'/#organization/organization-settings'],\n 'organization-permissions': ['Manage organization', 'Organization permissions',\n '/#organization/organization-permissions'],\n 'emoji-settings': ['Manage organization', 'Custom emoji',\n '/#organization/emoji-settings'],\n 'auth-methods': ['Manage organization', 'Authentication methods',\n '/#organization/auth-methods'],\n 'user-groups-admin': ['Manage organization', 'User groups',\n '/#organization/user-groups-admin'],\n 'user-list-admin': ['Manage organization', 'Users', '/#organization/user-list-admin'],\n 'deactivated-users-admin': ['Manage organization', 'Deactivated users',\n '/#organization/deactivated-users-admin'],\n 'bot-list-admin': ['Manage organization', 'Bots', '/#organization/bot-list-admin'],\n 'default-streams-list': ['Manage organization', 'Default streams',\n '/#organization/default-streams-list'],\n 'filter-settings': ['Manage organization', 'Filter settings',\n '/#organization/filter-settings'],\n 'profile-field-settings': ['Manage organization', 'Custom profile fields',\n '/#organization/profile-field-settings'],\n 'invites-list-admin': ['Manage organization', 'Invitations',\n '/#organization/invites-list-admin'],\n}\n\nsettings_markdown = \"\"\"\n1. From your desktop, click on the **gear**\n (<i class=\"icon-vector-cog\"></i>) in the upper right corner.\n\n1. Select **%(setting_type_name)s**.\n\n1. On the left, click %(setting_reference)s.\n\"\"\"\n\n\nclass SettingHelpExtension(markdown.Extension):\n def extendMarkdown(self, md: markdown.Markdown, md_globals: Dict[str, Any]) -> None:\n \"\"\" Add SettingHelpExtension to the Markdown instance. \"\"\"\n md.registerExtension(self)\n md.preprocessors.add('setting', Setting(), '_begin')\n\nrelative_settings_links = None # type: Optional[bool]\n\ndef set_relative_settings_links(value: bool) -> None:\n global relative_settings_links\n relative_settings_links = value\n\nclass Setting(Preprocessor):\n def run(self, lines: List[str]) -> List[str]:\n done = False\n while not done:\n for line in lines:\n loc = lines.index(line)\n match = REGEXP.search(line)\n\n if match:\n text = [self.handleMatch(match)]\n # The line that contains the directive to include the macro\n # may be preceded or followed by text or tags, in that case\n # we need to make sure that any preceding or following text\n # stays the same.\n line_split = REGEXP.split(line, maxsplit=0)\n preceding = line_split[0]\n following = line_split[-1]\n text = [preceding] + text + [following]\n lines = lines[:loc] + text + lines[loc+1:]\n break\n else:\n done = True\n return lines\n\n def handleMatch(self, match: Match[Text]) -> Text:\n setting_identifier = match.group('setting_identifier')\n setting_type_name = link_mapping[setting_identifier][0]\n setting_name = link_mapping[setting_identifier][1]\n setting_link = link_mapping[setting_identifier][2]\n if relative_settings_links:\n setting_reference = \"[%s](%s)\" % (setting_name, setting_link)\n else:\n setting_reference = \"**%s**\" % (setting_name,)\n instructions = settings_markdown % {'setting_type_name': setting_type_name,\n 'setting_reference': setting_reference}\n return instructions\n\n\ndef makeExtension(*args: Any, **kwargs: Any) -> SettingHelpExtension:\n return SettingHelpExtension(*args, **kwargs)\n", "path": "zerver/lib/bugdown/help_settings_links.py"}]} | 1,653 | 173 |
gh_patches_debug_4644 | rasdani/github-patches | git_diff | opendatacube__datacube-core-1399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Online API documentation has gone missing
### Expected behaviour
I should be able to find API documentation online.
### Actual behaviour
*(screenshot "Screen Shot 2022-11-17 at 10 59 22 am" from the original issue; image link not preserved)*
### More info
The latest release of Datacube both added some new dependencies, and bumped the minimum version of existing ones. This has caused the [Read the Docs builds](https://readthedocs.org/projects/datacube-core/builds/19401442/) to start quietly failing.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from setuptools import setup, find_packages
4
5 tests_require = [
6 'hypothesis',
7 'pycodestyle',
8 'pylint',
9 'pytest',
10 'pytest-cov',
11 'pytest-timeout',
12 'pytest-httpserver',
13 'moto',
14 ]
15 doc_require = [
16 'Sphinx',
17 'sphinx_rtd_theme',
18 'sphinx_autodoc_typehints', # Propagate mypy info into docs
19 'sphinx-click',
20 'recommonmark',
21 'setuptools', # version related dependencies
22 'setuptools_scm[toml]',
23 ]
24
25 extras_require = {
26 'performance': ['ciso8601', 'bottleneck'],
27 'distributed': ['distributed', 'dask[distributed]'],
28 'doc': doc_require,
29 's3': ['boto3', 'botocore'],
30 'test': tests_require,
31 'cf': ['compliance-checker>=4.0.0'],
32 }
33
34 extras_require['dev'] = sorted(set(sum([extras_require[k] for k in [
35 'test',
36 'doc',
37 'performance',
38 's3',
39 'distributed',
40 ]], [])))
41
42 # An 'all' option, following ipython naming conventions.
43 extras_require['all'] = sorted(set(sum(extras_require.values(), [])))
44
45 extra_plugins = dict(read=[], write=[], index=[])
46
47 setup(
48 name='datacube',
49 python_requires='>=3.8.0',
50
51 url='https://github.com/opendatacube/datacube-core',
52 author='Open Data Cube',
53 maintainer='Open Data Cube',
54 maintainer_email='',
55 description='An analysis environment for satellite and other earth observation data',
56 long_description=open('README.rst').read(),
57 long_description_content_type='text/x-rst',
58 license='Apache License 2.0',
59 classifiers=[
60 "Development Status :: 4 - Beta",
61 "Intended Audience :: Developers",
62 "Intended Audience :: Science/Research",
63 "License :: OSI Approved :: Apache Software License",
64 "Natural Language :: English",
65 "Operating System :: MacOS :: MacOS X",
66 "Operating System :: POSIX",
67 "Operating System :: POSIX :: BSD",
68 "Operating System :: POSIX :: Linux",
69 "Operating System :: Microsoft :: Windows",
70 "Programming Language :: Python",
71 "Programming Language :: Python :: 3",
72 "Programming Language :: Python :: 3.8",
73 "Programming Language :: Python :: 3.9",
74 "Programming Language :: Python :: 3.10",
75 "Topic :: Scientific/Engineering :: GIS",
76 "Topic :: Scientific/Engineering :: Information Analysis",
77 ],
78
79 packages=find_packages(
80 exclude=('tests', 'tests.*',
81 'integration_tests', 'integration_tests.*')
82 ),
83 package_data={
84 '': ['*.yaml', '*/*.yaml'],
85 'datacube': ['py.typed'],
86 },
87 scripts=[],
88 install_requires=[
89 'affine',
90 'attrs>=18.1',
91 'pyproj>=2.5',
92 'shapely>=2.0',
93 'cachetools',
94 'click>=5.0',
95 'cloudpickle>=0.4',
96 'dask[array]',
97 'distributed',
98 'jsonschema',
99 'netcdf4',
100 'numpy',
101 'psycopg2',
102 'lark',
103 'pandas',
104 'python-dateutil',
105 'pyyaml',
106 'rasterio>=1.3.2', # Warping broken in 1.3.0 and 1.3.1
107 'ruamel.yaml',
108 'sqlalchemy>=1.4,<2.0', # GeoAlchemy2 requires >=1.4 postgres driver implementation is not 2.0 compatible.
109 'GeoAlchemy2',
110 'toolz',
111 'xarray>=0.9', # >0.9 fixes most problems with `crs` attributes being lost
112 'packaging',
113 ],
114 extras_require=extras_require,
115 tests_require=tests_require,
116
117 entry_points={
118 'console_scripts': [
119 'datacube = datacube.scripts.cli_app:cli',
120 'datacube-search = datacube.scripts.search_tool:cli',
121 'datacube-worker = datacube.execution.worker:main',
122 ],
123 'datacube.plugins.io.read': [
124 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',
125 *extra_plugins['read'],
126 ],
127 'datacube.plugins.io.write': [
128 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',
129 *extra_plugins['write'],
130 ],
131 'datacube.plugins.index': [
132 'default = datacube.index.postgres.index:index_driver_init',
133 'null = datacube.index.null.index:index_driver_init',
134 'memory = datacube.index.memory.index:index_driver_init',
135 'postgis = datacube.index.postgis.index:index_driver_init',
136 *extra_plugins['index'],
137 ],
138 },
139 )
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,12 +14,13 @@
]
doc_require = [
'Sphinx',
- 'sphinx_rtd_theme',
'sphinx_autodoc_typehints', # Propagate mypy info into docs
'sphinx-click',
'recommonmark',
- 'setuptools', # version related dependencies
- 'setuptools_scm[toml]',
+ 'autodocsumm',
+ 'beautifulsoup4',
+ 'nbsphinx',
+ 'pydata-sphinx-theme==0.9.0',
]
extras_require = {
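
The swapped-in packages are Sphinx extensions plus a theme. The hypothetical `docs/conf.py` fragment below shows how such dependencies are typically wired up; the project's real Sphinx configuration may differ.

```python
# Hypothetical docs/conf.py fragment matching the new doc dependencies above;
# the actual datacube-core configuration may differ.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx_autodoc_typehints",  # propagate type hints into the API docs
    "sphinx_click",              # document the click-based CLI
    "recommonmark",              # allow Markdown sources
    "autodocsumm",               # summary tables for autodoc
    "nbsphinx",                  # render Jupyter notebooks
]
html_theme = "pydata_sphinx_theme"  # pinned to 0.9.0 in setup.py
```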
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,12 +14,13 @@\n ]\n doc_require = [\n 'Sphinx',\n- 'sphinx_rtd_theme',\n 'sphinx_autodoc_typehints', # Propagate mypy info into docs\n 'sphinx-click',\n 'recommonmark',\n- 'setuptools', # version related dependencies\n- 'setuptools_scm[toml]',\n+ 'autodocsumm',\n+ 'beautifulsoup4',\n+ 'nbsphinx',\n+ 'pydata-sphinx-theme==0.9.0',\n ]\n \n extras_require = {\n", "issue": "Online API documentation has gone missing\n### Expected behaviour\r\nI should be able to find API documentation online.\r\n\r\n### Actual behaviour\r\n\r\n\r\n### More info\r\n\r\nThe latest release of Datacube both added some new dependencies, and bumped the minimum version of existing ones. This has caused the [Read the Docs builds](https://readthedocs.org/projects/datacube-core/builds/19401442/) to start quietly failing.\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'hypothesis',\n 'pycodestyle',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-httpserver',\n 'moto',\n]\ndoc_require = [\n 'Sphinx',\n 'sphinx_rtd_theme',\n 'sphinx_autodoc_typehints', # Propagate mypy info into docs\n 'sphinx-click',\n 'recommonmark',\n 'setuptools', # version related dependencies\n 'setuptools_scm[toml]',\n]\n\nextras_require = {\n 'performance': ['ciso8601', 'bottleneck'],\n 'distributed': ['distributed', 'dask[distributed]'],\n 'doc': doc_require,\n 's3': ['boto3', 'botocore'],\n 'test': tests_require,\n 'cf': ['compliance-checker>=4.0.0'],\n}\n\nextras_require['dev'] = sorted(set(sum([extras_require[k] for k in [\n 'test',\n 'doc',\n 'performance',\n 's3',\n 'distributed',\n]], [])))\n\n# An 'all' option, following ipython naming conventions.\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\nextra_plugins = dict(read=[], write=[], index=[])\n\nsetup(\n name='datacube',\n python_requires='>=3.8.0',\n\n url='https://github.com/opendatacube/datacube-core',\n author='Open Data Cube',\n maintainer='Open Data Cube',\n maintainer_email='',\n description='An analysis environment for satellite and other earth observation data',\n long_description=open('README.rst').read(),\n long_description_content_type='text/x-rst',\n license='Apache License 2.0',\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: GIS\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n ],\n\n packages=find_packages(\n exclude=('tests', 'tests.*',\n 'integration_tests', 'integration_tests.*')\n ),\n package_data={\n '': ['*.yaml', '*/*.yaml'],\n 'datacube': ['py.typed'],\n },\n scripts=[],\n install_requires=[\n 'affine',\n 'attrs>=18.1',\n 'pyproj>=2.5',\n 'shapely>=2.0',\n 'cachetools',\n 'click>=5.0',\n 'cloudpickle>=0.4',\n 'dask[array]',\n 'distributed',\n 'jsonschema',\n 'netcdf4',\n 'numpy',\n 
'psycopg2',\n 'lark',\n 'pandas',\n 'python-dateutil',\n 'pyyaml',\n 'rasterio>=1.3.2', # Warping broken in 1.3.0 and 1.3.1\n 'ruamel.yaml',\n 'sqlalchemy>=1.4,<2.0', # GeoAlchemy2 requires >=1.4 postgres driver implementation is not 2.0 compatible.\n 'GeoAlchemy2',\n 'toolz',\n 'xarray>=0.9', # >0.9 fixes most problems with `crs` attributes being lost\n 'packaging',\n ],\n extras_require=extras_require,\n tests_require=tests_require,\n\n entry_points={\n 'console_scripts': [\n 'datacube = datacube.scripts.cli_app:cli',\n 'datacube-search = datacube.scripts.search_tool:cli',\n 'datacube-worker = datacube.execution.worker:main',\n ],\n 'datacube.plugins.io.read': [\n 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',\n *extra_plugins['read'],\n ],\n 'datacube.plugins.io.write': [\n 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',\n *extra_plugins['write'],\n ],\n 'datacube.plugins.index': [\n 'default = datacube.index.postgres.index:index_driver_init',\n 'null = datacube.index.null.index:index_driver_init',\n 'memory = datacube.index.memory.index:index_driver_init',\n 'postgis = datacube.index.postgis.index:index_driver_init',\n *extra_plugins['index'],\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'hypothesis',\n 'pycodestyle',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-httpserver',\n 'moto',\n]\ndoc_require = [\n 'Sphinx',\n 'sphinx_autodoc_typehints', # Propagate mypy info into docs\n 'sphinx-click',\n 'recommonmark',\n 'autodocsumm',\n 'beautifulsoup4',\n 'nbsphinx',\n 'pydata-sphinx-theme==0.9.0',\n]\n\nextras_require = {\n 'performance': ['ciso8601', 'bottleneck'],\n 'distributed': ['distributed', 'dask[distributed]'],\n 'doc': doc_require,\n 's3': ['boto3', 'botocore'],\n 'test': tests_require,\n 'cf': ['compliance-checker>=4.0.0'],\n}\n\nextras_require['dev'] = sorted(set(sum([extras_require[k] for k in [\n 'test',\n 'doc',\n 'performance',\n 's3',\n 'distributed',\n]], [])))\n\n# An 'all' option, following ipython naming conventions.\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\nextra_plugins = dict(read=[], write=[], index=[])\n\nsetup(\n name='datacube',\n python_requires='>=3.8.0',\n\n url='https://github.com/opendatacube/datacube-core',\n author='Open Data Cube',\n maintainer='Open Data Cube',\n maintainer_email='',\n description='An analysis environment for satellite and other earth observation data',\n long_description=open('README.rst').read(),\n long_description_content_type='text/x-rst',\n license='Apache License 2.0',\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: GIS\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n ],\n\n packages=find_packages(\n exclude=('tests', 'tests.*',\n 'integration_tests', 'integration_tests.*')\n ),\n 
package_data={\n '': ['*.yaml', '*/*.yaml'],\n 'datacube': ['py.typed'],\n },\n scripts=[],\n install_requires=[\n 'affine',\n 'attrs>=18.1',\n 'pyproj>=2.5',\n 'shapely>=2.0',\n 'cachetools',\n 'click>=5.0',\n 'cloudpickle>=0.4',\n 'dask[array]',\n 'distributed',\n 'jsonschema',\n 'netcdf4',\n 'numpy',\n 'psycopg2',\n 'lark',\n 'pandas',\n 'python-dateutil',\n 'pyyaml',\n 'rasterio>=1.3.2', # Warping broken in 1.3.0 and 1.3.1\n 'ruamel.yaml',\n 'sqlalchemy>=1.4,<2.0', # GeoAlchemy2 requires >=1.4 postgres driver implementation is not 2.0 compatible.\n 'GeoAlchemy2',\n 'toolz',\n 'xarray>=0.9', # >0.9 fixes most problems with `crs` attributes being lost\n 'packaging',\n ],\n extras_require=extras_require,\n tests_require=tests_require,\n\n entry_points={\n 'console_scripts': [\n 'datacube = datacube.scripts.cli_app:cli',\n 'datacube-search = datacube.scripts.search_tool:cli',\n 'datacube-worker = datacube.execution.worker:main',\n ],\n 'datacube.plugins.io.read': [\n 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',\n *extra_plugins['read'],\n ],\n 'datacube.plugins.io.write': [\n 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',\n *extra_plugins['write'],\n ],\n 'datacube.plugins.index': [\n 'default = datacube.index.postgres.index:index_driver_init',\n 'null = datacube.index.null.index:index_driver_init',\n 'memory = datacube.index.memory.index:index_driver_init',\n 'postgis = datacube.index.postgis.index:index_driver_init',\n *extra_plugins['index'],\n ],\n },\n)\n", "path": "setup.py"}]} | 1,857 | 151 |
gh_patches_debug_25523 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2648 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
feat: add support for terraform AKS role_based_access_control_enabled
**Describe the issue**
`CKV_AZURE_5`
Recently, azurerm [2.99.0](https://github.com/hashicorp/terraform-provider-azurerm/releases/tag/v2.99.0) was published, which, in preparation for 3.0, brought some syntax changes, one of which is listed below:
***
"Data Source: `azurerm_kubernetes_cluster` - deprecated the `role_based_access_control` block in favour of `azure_active_directory_role_based_access_control` and `role_based_access_control_enabled` properties (https://github.com/hashicorp/terraform-provider-azurerm/issues/15584)"
***
**Examples**
terraform:
```hcl
role_based_access_control_enabled: true
```
This code should pass `CKV_AZURE_5`.
**Version (please complete the following information):**
- Checkov Version 2.0.970
**Additional context**
PR that adds the required support: https://github.com/bridgecrewio/checkov/pull/2648
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/AKSRbacEnabled.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class AKSRbacEnabled(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure RBAC is enabled on AKS clusters"
8 id = "CKV_AZURE_5"
9 supported_resources = ['azurerm_kubernetes_cluster']
10 categories = [CheckCategories.KUBERNETES]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return 'role_based_access_control/[0]/enabled'
15
16
17 check = AKSRbacEnabled()
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py b/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py
--- a/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py
+++ b/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py
@@ -1,17 +1,32 @@
-from checkov.common.models.enums import CheckCategories
-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
+import dpath.util
+from checkov.common.models.enums import CheckCategories, CheckResult
+from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
-class AKSRbacEnabled(BaseResourceValueCheck):
+class AKSRbacEnabled(BaseResourceCheck):
def __init__(self):
name = "Ensure RBAC is enabled on AKS clusters"
id = "CKV_AZURE_5"
- supported_resources = ['azurerm_kubernetes_cluster']
+ supported_resources = ["azurerm_kubernetes_cluster"]
categories = [CheckCategories.KUBERNETES]
- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
+ super().__init__(
+ name=name,
+ id=id,
+ categories=categories,
+ supported_resources=supported_resources,
+ )
- def get_inspected_key(self):
- return 'role_based_access_control/[0]/enabled'
+ def scan_resource_conf(self, conf):
+ self.evaluated_keys = [
+ "role_based_access_control/[0]/enabled", # azurerm < 2.99.0
+ "role_based_access_control_enabled", # azurerm >= 2.99.0
+ ]
+
+ for key in self.evaluated_keys:
+ if dpath.search(conf, key) and dpath.get(conf, key)[0]:
+ return CheckResult.PASSED
+
+ return CheckResult.FAILED
check = AKSRbacEnabled()
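
A short sketch of the two resource configurations the patched check is meant to accept; the dictionary shapes are assumptions about Checkov's parsed HCL, not values taken from the project's test suite.

```python
# Assumed shapes of the parsed HCL reaching scan_resource_conf (sketch only).
conf_pre_299 = {  # role_based_access_control block, azurerm < 2.99.0
    "role_based_access_control": [{"enabled": [True]}],
}
conf_post_299 = {  # role_based_access_control_enabled attribute, azurerm >= 2.99.0
    "role_based_access_control_enabled": [True],
}
# With the patch, scan_resource_conf should return CheckResult.PASSED for both,
# since one of the two evaluated keys resolves to a truthy value in each case.
```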
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py b/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py\n--- a/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py\n+++ b/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py\n@@ -1,17 +1,32 @@\n-from checkov.common.models.enums import CheckCategories\n-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n+import dpath.util\n+from checkov.common.models.enums import CheckCategories, CheckResult\n+from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n \n-class AKSRbacEnabled(BaseResourceValueCheck):\n+class AKSRbacEnabled(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure RBAC is enabled on AKS clusters\"\n id = \"CKV_AZURE_5\"\n- supported_resources = ['azurerm_kubernetes_cluster']\n+ supported_resources = [\"azurerm_kubernetes_cluster\"]\n categories = [CheckCategories.KUBERNETES]\n- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n+ super().__init__(\n+ name=name,\n+ id=id,\n+ categories=categories,\n+ supported_resources=supported_resources,\n+ )\n \n- def get_inspected_key(self):\n- return 'role_based_access_control/[0]/enabled'\n+ def scan_resource_conf(self, conf):\n+ self.evaluated_keys = [\n+ \"role_based_access_control/[0]/enabled\", # azurerm < 2.99.0\n+ \"role_based_access_control_enabled\", # azurerm >= 2.99.0\n+ ]\n+\n+ for key in self.evaluated_keys:\n+ if dpath.search(conf, key) and dpath.get(conf, key)[0]:\n+ return CheckResult.PASSED\n+\n+ return CheckResult.FAILED\n \n \n check = AKSRbacEnabled()\n", "issue": "feat: add support for terraform AKS role_based_access_control_enabled\n**Describe the issue**\r\n`CKV_AZURE_5`\r\nRecently, azurerm [2.99.0](https://github.com/hashicorp/terraform-provider-azurerm/releases/tag/v2.99.0) was published, which, in preparation for 3.0, brought some syntax changes, one of which is listed below:\r\n***\r\n\"Data Source: `azurerm_kubernetes_cluster` - deprecated the `role_based_access_control` block in favour of `azure_active_directory_role_based_access_control` and `role_based_access_control_enabled` properties (https://github.com/hashicorp/terraform-provider-azurerm/issues/15584)\"\r\n***\r\n\r\n**Examples**\r\n\r\nterraform:\r\n```hcl\r\nrole_based_access_control_enabled: true\r\n```\r\nThis code should pass `CKV_AZURE_5`.\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.0.970\r\n\r\n**Additional context**\r\nPR that adds the required support: https://github.com/bridgecrewio/checkov/pull/2648\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AKSRbacEnabled(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure RBAC is enabled on AKS clusters\"\n id = \"CKV_AZURE_5\"\n supported_resources = ['azurerm_kubernetes_cluster']\n categories = [CheckCategories.KUBERNETES]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return 'role_based_access_control/[0]/enabled'\n\n\ncheck = AKSRbacEnabled()\n", "path": "checkov/terraform/checks/resource/azure/AKSRbacEnabled.py"}], "after_files": [{"content": "import dpath.util\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom 
checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass AKSRbacEnabled(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure RBAC is enabled on AKS clusters\"\n id = \"CKV_AZURE_5\"\n supported_resources = [\"azurerm_kubernetes_cluster\"]\n categories = [CheckCategories.KUBERNETES]\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_resources=supported_resources,\n )\n\n def scan_resource_conf(self, conf):\n self.evaluated_keys = [\n \"role_based_access_control/[0]/enabled\", # azurerm < 2.99.0\n \"role_based_access_control_enabled\", # azurerm >= 2.99.0\n ]\n\n for key in self.evaluated_keys:\n if dpath.search(conf, key) and dpath.get(conf, key)[0]:\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = AKSRbacEnabled()\n", "path": "checkov/terraform/checks/resource/azure/AKSRbacEnabled.py"}]} | 694 | 447 |
gh_patches_debug_57408 | rasdani/github-patches | git_diff | kornia__kornia-1861 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bugs when using RandomRGBShift
### Describe the bug
When running RandomRGBShift augmentation on gpus I came across this error
```
File "/usr/local/lib/python3.8/dist-packages/kornia/augmentation/_2d/intensity/random_rgb_shift.py", line 100, in apply_transform
return shift_rgb(inp, params['r_shift'], params['g_shift'], params['b_shift'])
File "/usr/local/lib/python3.8/dist-packages/kornia/enhance/shift_rgb.py", line 17, in shift_rgb
shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)
ValueError: only one element tensors can be converted to Python scalars
```
after checking the code I think the problem is that
```
shifts = [params['r_shift'], params['g_shift'], params['b_shift']]
shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)
```
and also I am not sure about `.view(1,3,1,1)` shouldn't it be `.view(-1,3,1,1)`
### Reproduction steps
```bash
1. setting up RandomRGBShift augmentation
2. Using RandomRGBShift augmentation with gpus and batchsize > 1
```
### Expected behavior
```
File "/usr/local/lib/python3.8/dist-packages/kornia/augmentation/_2d/intensity/random_rgb_shift.py", line 100, in apply_transform
return shift_rgb(inp, params['r_shift'], params['g_shift'], params['b_shift'])
File "/usr/local/lib/python3.8/dist-packages/kornia/enhance/shift_rgb.py", line 17, in shift_rgb
shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)
ValueError: only one element tensors can be converted to Python scalars
```
### Environment
```shell
- PyTorch Version : 1.10
- Linux
- How you installed PyTorch (`pip`):
- Python version: 3.8
- cuda:11.3.0-cudnn8
```
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/enhance/shift_rgb.py`
Content:
```
1 import torch
2
3 from kornia.testing import KORNIA_CHECK_IS_COLOR, KORNIA_CHECK_IS_TENSOR
4
5
6 def shift_rgb(image: torch.Tensor, r_shift: torch.Tensor, g_shift: torch.Tensor, b_shift: torch.Tensor) -> torch.Tensor:
7 """Shift rgb channels.
8
9 Shift each image's channel by either r_shift for red, g_shift for green and b_shift for blue channels.
10 """
11
12 KORNIA_CHECK_IS_TENSOR(image)
13 KORNIA_CHECK_IS_COLOR(image, f"with shape {image.shape}")
14
15 shifts = [r_shift, g_shift, b_shift]
16
17 shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)
18
19 return shifted
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/enhance/shift_rgb.py b/kornia/enhance/shift_rgb.py
--- a/kornia/enhance/shift_rgb.py
+++ b/kornia/enhance/shift_rgb.py
@@ -14,6 +14,6 @@
shifts = [r_shift, g_shift, b_shift]
- shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)
+ shifted = (image + torch.stack(shifts).view(-1, 3, 1, 1).to(image)).clamp_(min=0, max=1)
return shifted
| {"golden_diff": "diff --git a/kornia/enhance/shift_rgb.py b/kornia/enhance/shift_rgb.py\n--- a/kornia/enhance/shift_rgb.py\n+++ b/kornia/enhance/shift_rgb.py\n@@ -14,6 +14,6 @@\n \n shifts = [r_shift, g_shift, b_shift]\n \n- shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)\n+ shifted = (image + torch.stack(shifts).view(-1, 3, 1, 1).to(image)).clamp_(min=0, max=1)\n \n return shifted\n", "issue": "Bugs when using RandomRGBShift\n### Describe the bug\r\n\r\nWhen running RandomRGBShift augmentation on gpus I came across this error \r\n\r\n```\r\n File \"/usr/local/lib/python3.8/dist-packages/kornia/augmentation/_2d/intensity/random_rgb_shift.py\", line 100, in apply_transform\r\n return shift_rgb(inp, params['r_shift'], params['g_shift'], params['b_shift'])\r\n File \"/usr/local/lib/python3.8/dist-packages/kornia/enhance/shift_rgb.py\", line 17, in shift_rgb\r\n shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)\r\nValueError: only one element tensors can be converted to Python scalars\r\n```\r\nafter checking the code I think the problem is that \r\n```\r\nshifts = [params['r_shift'], params['g_shift'], params['b_shift']]\r\nshifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)\r\n```\r\nand also I am not sure about `.view(1,3,1,1)` shouldn't it be `.view(-1,3,1,1)`\r\n\r\n### Reproduction steps\r\n\r\n```bash\r\n1. setting up RandomRGBShift augmentation\r\n2. Using RandomRGBShift augmentation with gpus and batchsize > 1\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\n```\r\n File \"/usr/local/lib/python3.8/dist-packages/kornia/augmentation/_2d/intensity/random_rgb_shift.py\", line 100, in apply_transform\r\n return shift_rgb(inp, params['r_shift'], params['g_shift'], params['b_shift'])\r\n File \"/usr/local/lib/python3.8/dist-packages/kornia/enhance/shift_rgb.py\", line 17, in shift_rgb\r\n shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)\r\nValueError: only one element tensors can be converted to Python scalars\r\n```\r\n\r\n### Environment\r\n\r\n```shell\r\n- PyTorch Version : 1.10\r\n- Linux\r\n- How you installed PyTorch (`pip`):\r\n- Python version: 3.8\r\n- cuda:11.3.0-cudnn8\r\n```\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "import torch\n\nfrom kornia.testing import KORNIA_CHECK_IS_COLOR, KORNIA_CHECK_IS_TENSOR\n\n\ndef shift_rgb(image: torch.Tensor, r_shift: torch.Tensor, g_shift: torch.Tensor, b_shift: torch.Tensor) -> torch.Tensor:\n \"\"\"Shift rgb channels.\n\n Shift each image's channel by either r_shift for red, g_shift for green and b_shift for blue channels.\n \"\"\"\n\n KORNIA_CHECK_IS_TENSOR(image)\n KORNIA_CHECK_IS_COLOR(image, f\"with shape {image.shape}\")\n\n shifts = [r_shift, g_shift, b_shift]\n\n shifted = (image + torch.Tensor(shifts).view(1, 3, 1, 1).to(image)).clamp_(min=0, max=1)\n\n return shifted\n", "path": "kornia/enhance/shift_rgb.py"}], "after_files": [{"content": "import torch\n\nfrom kornia.testing import KORNIA_CHECK_IS_COLOR, KORNIA_CHECK_IS_TENSOR\n\n\ndef shift_rgb(image: torch.Tensor, r_shift: torch.Tensor, g_shift: torch.Tensor, b_shift: torch.Tensor) -> torch.Tensor:\n \"\"\"Shift rgb channels.\n\n Shift each image's channel by either r_shift for red, g_shift for green and b_shift for blue channels.\n \"\"\"\n\n KORNIA_CHECK_IS_TENSOR(image)\n KORNIA_CHECK_IS_COLOR(image, f\"with shape {image.shape}\")\n\n shifts = [r_shift, g_shift, b_shift]\n\n 
shifted = (image + torch.stack(shifts).view(-1, 3, 1, 1).to(image)).clamp_(min=0, max=1)\n\n return shifted\n", "path": "kornia/enhance/shift_rgb.py"}]} | 1,002 | 160 |
gh_patches_debug_18264 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2912 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E0002: Unknown exception while processing rule E2530: 'dict' object has no attribute 'startswith'
### CloudFormation Lint Version
0.82.0
### What operating system are you using?
windows 11
### Describe the bug
receive the E2530 error even though i dont have SnapStart in my lambda resource
### Expected behavior
no error
### Reproduction template
if i remove this lambda resource, it doesnt throw the error
```
Lambda:
Type: AWS::Lambda::Function
DependsOn:
- LambdaLogGroup
Properties:
Architectures:
- !If
- ConditionLambdaArmSupport
- !Ref OSArchitechture
- x86_64
Code:
S3Bucket:
Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${AppName}-artifact-s3-bucket'
S3Key: !Ref LambdaS3Key
S3ObjectVersion: !Ref LambdaS3ObjectVersion
Description: !Sub 'Lambda Function - ${LambdaPurpose}'
FunctionName: !Sub '${Region}-${BusinessUnit}-${Environment}-${AppName}-lambda-${LambdaPurpose}'
Handler: !Ref LambdaHandler
Layers:
- !Ref LambdaLayer
MemorySize: !Ref LambdaMemory
Role: !GetAtt LambdaRole.Arn
Runtime: !Ref LambdaRuntime
Timeout: !Ref LambdaTimeout
VpcConfig: !If
- ConditionVpcAttachment
- SecurityGroupIds:
- Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${AppName}-sg-${LambdaPurpose}'
SubnetIds:
- Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${VpcAttachment}-subnet-1-id'
- Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${VpcAttachment}-subnet-2-id'
- !If
- ConditionThirdAZ
- Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${VpcAttachment}-subnet-3-id'
- !Ref AWS::NoValue
- !Ref AWS::NoValue
Tags:
- Key: Name
Value: !Sub '${Region}-${BusinessUnit}-${Environment}-${AppName}-lambda-${LambdaPurpose}'
- Key: region-abbreviation
Value: !Ref Region
- Key: business-unit
Value: !Ref BusinessUnit
- Key: environment-type
Value: !Ref Environment
- Key: app-name
Value: !Ref AppName
- Key: service-type
Value: !Ref Service
- Key: cost-center
Value: !Ref CostCenter
- Key: compliance-type
Value: !Ref Compliance
- Key: owner-name
Value: !Ref OwnerName
- Key: owner-email-id
Value: !Ref OwnerEmail
- Key: patch-group
Value: !Ref PatchGroup
- Key: criticality-level
Value: !Ref CriticalityLevel
- Key: map-migrated
Value: !Ref MapMigrated
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5
6 from cfnlint.rules import CloudFormationLintRule, RuleMatch
7
8
9 class SnapStartEnabled(CloudFormationLintRule):
10 """Check if the SnapStart is enabled for certain java runtimes"""
11
12 id = "I2530"
13 shortdesc = "Validate that SnapStart is configured for >= Java11 runtimes"
14 description = (
15 "SnapStart is a no-cost feature that can increase performance up to 10x. "
16 "Enable SnapStart for Java 11 and greater runtimes"
17 )
18 source_url = "https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html"
19 tags = ["resources", "lambda"]
20
21 def __init__(self):
22 super().__init__()
23 self.resource_property_types.append("AWS::Lambda::Function")
24
25 def validate(self, runtime, path):
26 if not runtime:
27 return []
28
29 if not (runtime.startswith("java")) and runtime not in ["java8.al2", "java8"]:
30 return []
31
32 return [
33 RuleMatch(
34 path,
35 f"When using {runtime} configure SnapStart",
36 rule=self,
37 )
38 ]
39
```
Path: `src/cfnlint/rules/resources/lmbd/SnapStartSupported.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5
6 from cfnlint.rules import CloudFormationLintRule, RuleMatch
7
8
9 class SnapStartSupported(CloudFormationLintRule):
10 """Check if Lambda function using SnapStart has the correct runtimes"""
11
12 id = "E2530"
13 shortdesc = "SnapStart supports the configured runtime"
14 description = (
15 "To properly leverage SnapStart, you must have a runtime of Java11 or greater"
16 )
17 source_url = "https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html"
18 tags = ["resources", "lambda"]
19
20 def __init__(self):
21 super().__init__()
22 self.resource_property_types.append("AWS::Lambda::Function")
23 self.child_rules = {"I2530": None}
24
25 def match_resource_properties(self, properties, _, path, cfn):
26 """Check CloudFormation Properties"""
27 matches = []
28
29 for scenario in cfn.get_object_without_nested_conditions(properties, path):
30 props = scenario.get("Object")
31
32 runtime = props.get("Runtime")
33 snap_start = props.get("SnapStart")
34 if not snap_start:
35 if self.child_rules["I2530"]:
36 matches.extend(self.child_rules["I2530"].validate(runtime, path))
37 continue
38
39 if snap_start.get("ApplyOn") != "PublishedVersions":
40 continue
41
42 if (
43 runtime
44 and (not runtime.startswith("java"))
45 and runtime not in ["java8.al2", "java8"]
46 ):
47 matches.append(
48 RuleMatch(
49 path + ["SnapStart", "ApplyOn"],
50 f"{runtime} is not supported for SnapStart enabled functions",
51 )
52 )
53
54 return matches
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py b/src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py
--- a/src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py
+++ b/src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py
@@ -23,7 +23,7 @@
self.resource_property_types.append("AWS::Lambda::Function")
def validate(self, runtime, path):
- if not runtime:
+ if not isinstance(runtime, str):
return []
if not (runtime.startswith("java")) and runtime not in ["java8.al2", "java8"]:
diff --git a/src/cfnlint/rules/resources/lmbd/SnapStartSupported.py b/src/cfnlint/rules/resources/lmbd/SnapStartSupported.py
--- a/src/cfnlint/rules/resources/lmbd/SnapStartSupported.py
+++ b/src/cfnlint/rules/resources/lmbd/SnapStartSupported.py
@@ -39,6 +39,10 @@
if snap_start.get("ApplyOn") != "PublishedVersions":
continue
+ # Validate runtime is a string before using startswith
+ if not isinstance(runtime, str):
+ continue
+
if (
runtime
and (not runtime.startswith("java"))
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py b/src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py\n--- a/src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py\n+++ b/src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py\n@@ -23,7 +23,7 @@\n self.resource_property_types.append(\"AWS::Lambda::Function\")\n \n def validate(self, runtime, path):\n- if not runtime:\n+ if not isinstance(runtime, str):\n return []\n \n if not (runtime.startswith(\"java\")) and runtime not in [\"java8.al2\", \"java8\"]:\ndiff --git a/src/cfnlint/rules/resources/lmbd/SnapStartSupported.py b/src/cfnlint/rules/resources/lmbd/SnapStartSupported.py\n--- a/src/cfnlint/rules/resources/lmbd/SnapStartSupported.py\n+++ b/src/cfnlint/rules/resources/lmbd/SnapStartSupported.py\n@@ -39,6 +39,10 @@\n if snap_start.get(\"ApplyOn\") != \"PublishedVersions\":\n continue\n \n+ # Validate runtime is a string before using startswith\n+ if not isinstance(runtime, str):\n+ continue\n+\n if (\n runtime\n and (not runtime.startswith(\"java\"))\n", "issue": "E0002: Unknown exception while processing rule E2530: 'dict' object has no attribute 'startswith'\n### CloudFormation Lint Version\n\n0.82.0\n\n### What operating system are you using?\n\nwindows 11\n\n### Describe the bug\n\nreceive the E2530 error even though i dont have SnapStart in my lambda resource\r\n\r\n\r\n\n\n### Expected behavior\n\nno error\n\n### Reproduction template\n\nif i remove this lambda resource, it doesnt throw the error\r\n```\r\n Lambda:\r\n Type: AWS::Lambda::Function\r\n DependsOn:\r\n - LambdaLogGroup\r\n Properties:\r\n Architectures:\r\n - !If\r\n - ConditionLambdaArmSupport\r\n - !Ref OSArchitechture\r\n - x86_64\r\n Code:\r\n S3Bucket:\r\n Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${AppName}-artifact-s3-bucket'\r\n S3Key: !Ref LambdaS3Key\r\n S3ObjectVersion: !Ref LambdaS3ObjectVersion\r\n Description: !Sub 'Lambda Function - ${LambdaPurpose}'\r\n FunctionName: !Sub '${Region}-${BusinessUnit}-${Environment}-${AppName}-lambda-${LambdaPurpose}'\r\n Handler: !Ref LambdaHandler\r\n Layers:\r\n - !Ref LambdaLayer\r\n MemorySize: !Ref LambdaMemory\r\n Role: !GetAtt LambdaRole.Arn\r\n Runtime: !Ref LambdaRuntime\r\n Timeout: !Ref LambdaTimeout\r\n VpcConfig: !If\r\n - ConditionVpcAttachment\r\n - SecurityGroupIds:\r\n - Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${AppName}-sg-${LambdaPurpose}'\r\n SubnetIds:\r\n - Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${VpcAttachment}-subnet-1-id'\r\n - Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${VpcAttachment}-subnet-2-id'\r\n - !If\r\n - ConditionThirdAZ\r\n - Fn::ImportValue: !Sub '${Region}-${BusinessUnit}-${Environment}-${VpcAttachment}-subnet-3-id'\r\n - !Ref AWS::NoValue\r\n - !Ref AWS::NoValue\r\n Tags:\r\n - Key: Name\r\n Value: !Sub '${Region}-${BusinessUnit}-${Environment}-${AppName}-lambda-${LambdaPurpose}'\r\n - Key: region-abbreviation\r\n Value: !Ref Region\r\n - Key: business-unit\r\n Value: !Ref BusinessUnit\r\n - Key: environment-type\r\n Value: !Ref Environment\r\n - Key: app-name\r\n Value: !Ref AppName\r\n - Key: service-type\r\n Value: !Ref Service\r\n - Key: cost-center\r\n Value: !Ref CostCenter\r\n - Key: compliance-type\r\n Value: !Ref Compliance\r\n - Key: owner-name\r\n Value: !Ref OwnerName\r\n - Key: owner-email-id\r\n Value: !Ref OwnerEmail\r\n - Key: patch-group\r\n Value: !Ref PatchGroup\r\n - Key: criticality-level\r\n Value: !Ref CriticalityLevel\r\n - Key: 
map-migrated\r\n Value: !Ref MapMigrated\r\n\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass SnapStartEnabled(CloudFormationLintRule):\n \"\"\"Check if the SnapStart is enabled for certain java runtimes\"\"\"\n\n id = \"I2530\"\n shortdesc = \"Validate that SnapStart is configured for >= Java11 runtimes\"\n description = (\n \"SnapStart is a no-cost feature that can increase performance up to 10x. \"\n \"Enable SnapStart for Java 11 and greater runtimes\"\n )\n source_url = \"https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html\"\n tags = [\"resources\", \"lambda\"]\n\n def __init__(self):\n super().__init__()\n self.resource_property_types.append(\"AWS::Lambda::Function\")\n\n def validate(self, runtime, path):\n if not runtime:\n return []\n\n if not (runtime.startswith(\"java\")) and runtime not in [\"java8.al2\", \"java8\"]:\n return []\n\n return [\n RuleMatch(\n path,\n f\"When using {runtime} configure SnapStart\",\n rule=self,\n )\n ]\n", "path": "src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py"}, {"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass SnapStartSupported(CloudFormationLintRule):\n \"\"\"Check if Lambda function using SnapStart has the correct runtimes\"\"\"\n\n id = \"E2530\"\n shortdesc = \"SnapStart supports the configured runtime\"\n description = (\n \"To properly leverage SnapStart, you must have a runtime of Java11 or greater\"\n )\n source_url = \"https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html\"\n tags = [\"resources\", \"lambda\"]\n\n def __init__(self):\n super().__init__()\n self.resource_property_types.append(\"AWS::Lambda::Function\")\n self.child_rules = {\"I2530\": None}\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n for scenario in cfn.get_object_without_nested_conditions(properties, path):\n props = scenario.get(\"Object\")\n\n runtime = props.get(\"Runtime\")\n snap_start = props.get(\"SnapStart\")\n if not snap_start:\n if self.child_rules[\"I2530\"]:\n matches.extend(self.child_rules[\"I2530\"].validate(runtime, path))\n continue\n\n if snap_start.get(\"ApplyOn\") != \"PublishedVersions\":\n continue\n\n if (\n runtime\n and (not runtime.startswith(\"java\"))\n and runtime not in [\"java8.al2\", \"java8\"]\n ):\n matches.append(\n RuleMatch(\n path + [\"SnapStart\", \"ApplyOn\"],\n f\"{runtime} is not supported for SnapStart enabled functions\",\n )\n )\n\n return matches\n", "path": "src/cfnlint/rules/resources/lmbd/SnapStartSupported.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass SnapStartEnabled(CloudFormationLintRule):\n \"\"\"Check if the SnapStart is enabled for certain java runtimes\"\"\"\n\n id = \"I2530\"\n shortdesc = \"Validate that SnapStart is configured for >= Java11 runtimes\"\n description = (\n \"SnapStart is a no-cost feature that can increase performance up to 10x. 
\"\n \"Enable SnapStart for Java 11 and greater runtimes\"\n )\n source_url = \"https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html\"\n tags = [\"resources\", \"lambda\"]\n\n def __init__(self):\n super().__init__()\n self.resource_property_types.append(\"AWS::Lambda::Function\")\n\n def validate(self, runtime, path):\n if not isinstance(runtime, str):\n return []\n\n if not (runtime.startswith(\"java\")) and runtime not in [\"java8.al2\", \"java8\"]:\n return []\n\n return [\n RuleMatch(\n path,\n f\"When using {runtime} configure SnapStart\",\n rule=self,\n )\n ]\n", "path": "src/cfnlint/rules/resources/lmbd/SnapStartEnabled.py"}, {"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass SnapStartSupported(CloudFormationLintRule):\n \"\"\"Check if Lambda function using SnapStart has the correct runtimes\"\"\"\n\n id = \"E2530\"\n shortdesc = \"SnapStart supports the configured runtime\"\n description = (\n \"To properly leverage SnapStart, you must have a runtime of Java11 or greater\"\n )\n source_url = \"https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html\"\n tags = [\"resources\", \"lambda\"]\n\n def __init__(self):\n super().__init__()\n self.resource_property_types.append(\"AWS::Lambda::Function\")\n self.child_rules = {\"I2530\": None}\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n for scenario in cfn.get_object_without_nested_conditions(properties, path):\n props = scenario.get(\"Object\")\n\n runtime = props.get(\"Runtime\")\n snap_start = props.get(\"SnapStart\")\n if not snap_start:\n if self.child_rules[\"I2530\"]:\n matches.extend(self.child_rules[\"I2530\"].validate(runtime, path))\n continue\n\n if snap_start.get(\"ApplyOn\") != \"PublishedVersions\":\n continue\n\n # Validate runtime is a string before using startswith\n if not isinstance(runtime, str):\n continue\n\n if (\n runtime\n and (not runtime.startswith(\"java\"))\n and runtime not in [\"java8.al2\", \"java8\"]\n ):\n matches.append(\n RuleMatch(\n path + [\"SnapStart\", \"ApplyOn\"],\n f\"{runtime} is not supported for SnapStart enabled functions\",\n )\n )\n\n return matches\n", "path": "src/cfnlint/rules/resources/lmbd/SnapStartSupported.py"}]} | 1,806 | 285 |
gh_patches_debug_51453 | rasdani/github-patches | git_diff | lutris__lutris-402 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Running game changes to default X cursor in main window

I think [this is the code](https://github.com/lutris/lutris/blob/cbe6f5d5d982543a66f95aa62f0d8e26dd462470/lutris/util/display.py#L10) that sets the cursor. This was introduced in the commit https://github.com/lutris/lutris/commit/4bb2d78e5c3626057f9dc10dcc27f74ae67853b9
Right now it's using [GdkCursorType](https://developer.gnome.org/gdk3/stable/gdk3-Cursors.html#GdkCursorType) to set cursor, but it seems like we should be using [gdk_cursor_new_from_name()](https://developer.gnome.org/gdk3/stable/gdk3-Cursors.html#gdk-cursor-new-from-name) instead which is the recommended way.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/util/display.py`
Content:
```
1 import subprocess
2 from gi.repository import Gdk
3
4 from lutris.util.log import logger
5
6
7 def set_cursor(name, window, display=None):
8 """Set a named mouse cursor for the given window."""
9 cursors = {
10 'default': Gdk.CursorType.ARROW,
11 'wait': Gdk.CursorType.WATCH,
12 }
13
14 if not display:
15 display = Gdk.Display.get_default()
16 cursor = Gdk.Cursor.new_for_display(display, cursors[name])
17 window.set_cursor(cursor)
18
19
20 def get_vidmodes():
21 xrandr_output = subprocess.Popen(["xrandr"],
22 stdout=subprocess.PIPE).communicate()[0]
23 return list([line for line in xrandr_output.decode().split("\n")])
24
25
26 def get_outputs():
27 """Return list of tuples containing output name and geometry."""
28 outputs = []
29 vid_modes = get_vidmodes()
30 if not vid_modes:
31 logger.error("xrandr didn't return anything")
32 return []
33 for line in vid_modes:
34 parts = line.split()
35 if len(parts) < 2:
36 continue
37 if parts[1] == 'connected':
38 if len(parts) == 2:
39 continue
40 geom = parts[2] if parts[2] != 'primary' else parts[3]
41 if geom.startswith('('): # Screen turned off, no geometry
42 continue
43 outputs.append((parts[0], geom))
44 return outputs
45
46
47 def get_output_names():
48 return [output[0] for output in get_outputs()]
49
50
51 def turn_off_except(display):
52 for output in get_outputs():
53 if output[0] != display:
54 subprocess.Popen(["xrandr", "--output", output[0], "--off"])
55
56
57 def get_resolutions():
58 """Return the list of supported screen resolutions."""
59 resolution_list = []
60 for line in get_vidmodes():
61 if line.startswith(" "):
62 resolution_list.append(line.split()[0])
63 return resolution_list
64
65
66 def get_current_resolution(monitor=0):
67 """Return the current resolution for the desktop."""
68 resolution = list()
69 for line in get_vidmodes():
70 if line.startswith(" ") and "*" in line:
71 resolution.append(line.split()[0])
72 if monitor == 'all':
73 return resolution
74 else:
75 return resolution[monitor]
76
77
78 def change_resolution(resolution):
79 """Change display resolution.
80
81 Takes a string for single monitors or a list of displays as returned
82 by get_outputs().
83 """
84 if not resolution:
85 logger.warning("No resolution provided")
86 return
87 if isinstance(resolution, str):
88 logger.debug("Switching resolution to %s", resolution)
89
90 if resolution not in get_resolutions():
91 logger.warning("Resolution %s doesn't exist." % resolution)
92 else:
93 subprocess.Popen(["xrandr", "-s", resolution])
94 else:
95 for display in resolution:
96 display_name = display[0]
97 logger.debug("Switching to %s on %s", display[1], display[0])
98 display_geom = display[1].split('+')
99 display_resolution = display_geom[0]
100 position = (display_geom[1], display_geom[2])
101
102 subprocess.Popen([
103 "xrandr",
104 "--output", display_name,
105 "--mode", display_resolution,
106 "--pos", "{}x{}".format(position[0], position[1])
107 ]).communicate()
108
109
110 def restore_gamma():
111 """Restores gamma to a normal level."""
112 subprocess.Popen(["xgamma", "-gamma", "1.0"])
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lutris/util/display.py b/lutris/util/display.py
--- a/lutris/util/display.py
+++ b/lutris/util/display.py
@@ -6,14 +6,10 @@
def set_cursor(name, window, display=None):
"""Set a named mouse cursor for the given window."""
- cursors = {
- 'default': Gdk.CursorType.ARROW,
- 'wait': Gdk.CursorType.WATCH,
- }
if not display:
display = Gdk.Display.get_default()
- cursor = Gdk.Cursor.new_for_display(display, cursors[name])
+ cursor = Gdk.Cursor.new_from_name(display, name)
window.set_cursor(cursor)
| {"golden_diff": "diff --git a/lutris/util/display.py b/lutris/util/display.py\n--- a/lutris/util/display.py\n+++ b/lutris/util/display.py\n@@ -6,14 +6,10 @@\n \n def set_cursor(name, window, display=None):\n \"\"\"Set a named mouse cursor for the given window.\"\"\"\n- cursors = {\n- 'default': Gdk.CursorType.ARROW,\n- 'wait': Gdk.CursorType.WATCH,\n- }\n \n if not display:\n display = Gdk.Display.get_default()\n- cursor = Gdk.Cursor.new_for_display(display, cursors[name])\n+ cursor = Gdk.Cursor.new_from_name(display, name)\n window.set_cursor(cursor)\n", "issue": "Running game changes to default X cursor in main window\n\n\nI think [this is the code](https://github.com/lutris/lutris/blob/cbe6f5d5d982543a66f95aa62f0d8e26dd462470/lutris/util/display.py#L10) that sets the cursor. This was introduced in the commit https://github.com/lutris/lutris/commit/4bb2d78e5c3626057f9dc10dcc27f74ae67853b9\n\nRight now it's using [GdkCursorType](https://developer.gnome.org/gdk3/stable/gdk3-Cursors.html#GdkCursorType) to set cursor, but it seems like we should be using [gdk_cursor_new_from_name()](https://developer.gnome.org/gdk3/stable/gdk3-Cursors.html#gdk-cursor-new-from-name) instead which is the recommended way.\n\n", "before_files": [{"content": "import subprocess\nfrom gi.repository import Gdk\n\nfrom lutris.util.log import logger\n\n\ndef set_cursor(name, window, display=None):\n \"\"\"Set a named mouse cursor for the given window.\"\"\"\n cursors = {\n 'default': Gdk.CursorType.ARROW,\n 'wait': Gdk.CursorType.WATCH,\n }\n\n if not display:\n display = Gdk.Display.get_default()\n cursor = Gdk.Cursor.new_for_display(display, cursors[name])\n window.set_cursor(cursor)\n\n\ndef get_vidmodes():\n xrandr_output = subprocess.Popen([\"xrandr\"],\n stdout=subprocess.PIPE).communicate()[0]\n return list([line for line in xrandr_output.decode().split(\"\\n\")])\n\n\ndef get_outputs():\n \"\"\"Return list of tuples containing output name and geometry.\"\"\"\n outputs = []\n vid_modes = get_vidmodes()\n if not vid_modes:\n logger.error(\"xrandr didn't return anything\")\n return []\n for line in vid_modes:\n parts = line.split()\n if len(parts) < 2:\n continue\n if parts[1] == 'connected':\n if len(parts) == 2:\n continue\n geom = parts[2] if parts[2] != 'primary' else parts[3]\n if geom.startswith('('): # Screen turned off, no geometry\n continue\n outputs.append((parts[0], geom))\n return outputs\n\n\ndef get_output_names():\n return [output[0] for output in get_outputs()]\n\n\ndef turn_off_except(display):\n for output in get_outputs():\n if output[0] != display:\n subprocess.Popen([\"xrandr\", \"--output\", output[0], \"--off\"])\n\n\ndef get_resolutions():\n \"\"\"Return the list of supported screen resolutions.\"\"\"\n resolution_list = []\n for line in get_vidmodes():\n if line.startswith(\" \"):\n resolution_list.append(line.split()[0])\n return resolution_list\n\n\ndef get_current_resolution(monitor=0):\n \"\"\"Return the current resolution for the desktop.\"\"\"\n resolution = list()\n for line in get_vidmodes():\n if line.startswith(\" \") and \"*\" in line:\n resolution.append(line.split()[0])\n if monitor == 'all':\n return resolution\n else:\n return resolution[monitor]\n\n\ndef change_resolution(resolution):\n \"\"\"Change display resolution.\n\n Takes a string for single monitors or a list of displays as returned\n by get_outputs().\n \"\"\"\n if not resolution:\n logger.warning(\"No resolution provided\")\n return\n if isinstance(resolution, str):\n logger.debug(\"Switching resolution to 
%s\", resolution)\n\n if resolution not in get_resolutions():\n logger.warning(\"Resolution %s doesn't exist.\" % resolution)\n else:\n subprocess.Popen([\"xrandr\", \"-s\", resolution])\n else:\n for display in resolution:\n display_name = display[0]\n logger.debug(\"Switching to %s on %s\", display[1], display[0])\n display_geom = display[1].split('+')\n display_resolution = display_geom[0]\n position = (display_geom[1], display_geom[2])\n\n subprocess.Popen([\n \"xrandr\",\n \"--output\", display_name,\n \"--mode\", display_resolution,\n \"--pos\", \"{}x{}\".format(position[0], position[1])\n ]).communicate()\n\n\ndef restore_gamma():\n \"\"\"Restores gamma to a normal level.\"\"\"\n subprocess.Popen([\"xgamma\", \"-gamma\", \"1.0\"])\n", "path": "lutris/util/display.py"}], "after_files": [{"content": "import subprocess\nfrom gi.repository import Gdk\n\nfrom lutris.util.log import logger\n\n\ndef set_cursor(name, window, display=None):\n \"\"\"Set a named mouse cursor for the given window.\"\"\"\n\n if not display:\n display = Gdk.Display.get_default()\n cursor = Gdk.Cursor.new_from_name(display, name)\n window.set_cursor(cursor)\n\n\ndef get_vidmodes():\n xrandr_output = subprocess.Popen([\"xrandr\"],\n stdout=subprocess.PIPE).communicate()[0]\n return list([line for line in xrandr_output.decode().split(\"\\n\")])\n\n\ndef get_outputs():\n \"\"\"Return list of tuples containing output name and geometry.\"\"\"\n outputs = []\n vid_modes = get_vidmodes()\n if not vid_modes:\n logger.error(\"xrandr didn't return anything\")\n return []\n for line in vid_modes:\n parts = line.split()\n if len(parts) < 2:\n continue\n if parts[1] == 'connected':\n if len(parts) == 2:\n continue\n geom = parts[2] if parts[2] != 'primary' else parts[3]\n if geom.startswith('('): # Screen turned off, no geometry\n continue\n outputs.append((parts[0], geom))\n return outputs\n\n\ndef get_output_names():\n return [output[0] for output in get_outputs()]\n\n\ndef turn_off_except(display):\n for output in get_outputs():\n if output[0] != display:\n subprocess.Popen([\"xrandr\", \"--output\", output[0], \"--off\"])\n\n\ndef get_resolutions():\n \"\"\"Return the list of supported screen resolutions.\"\"\"\n resolution_list = []\n for line in get_vidmodes():\n if line.startswith(\" \"):\n resolution_list.append(line.split()[0])\n return resolution_list\n\n\ndef get_current_resolution(monitor=0):\n \"\"\"Return the current resolution for the desktop.\"\"\"\n resolution = list()\n for line in get_vidmodes():\n if line.startswith(\" \") and \"*\" in line:\n resolution.append(line.split()[0])\n if monitor == 'all':\n return resolution\n else:\n return resolution[monitor]\n\n\ndef change_resolution(resolution):\n \"\"\"Change display resolution.\n\n Takes a string for single monitors or a list of displays as returned\n by get_outputs().\n \"\"\"\n if not resolution:\n logger.warning(\"No resolution provided\")\n return\n if isinstance(resolution, str):\n logger.debug(\"Switching resolution to %s\", resolution)\n\n if resolution not in get_resolutions():\n logger.warning(\"Resolution %s doesn't exist.\" % resolution)\n else:\n subprocess.Popen([\"xrandr\", \"-s\", resolution])\n else:\n for display in resolution:\n display_name = display[0]\n logger.debug(\"Switching to %s on %s\", display[1], display[0])\n display_geom = display[1].split('+')\n display_resolution = display_geom[0]\n position = (display_geom[1], display_geom[2])\n\n subprocess.Popen([\n \"xrandr\",\n \"--output\", display_name,\n \"--mode\", 
display_resolution,\n \"--pos\", \"{}x{}\".format(position[0], position[1])\n ]).communicate()\n\n\ndef restore_gamma():\n \"\"\"Restores gamma to a normal level.\"\"\"\n subprocess.Popen([\"xgamma\", \"-gamma\", \"1.0\"])\n", "path": "lutris/util/display.py"}]} | 1,488 | 150 |
gh_patches_debug_21203 | rasdani/github-patches | git_diff | akvo__akvo-rsr-4237 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show project's locations in EUTF Syria custom project report.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the
6 Akvo RSR module. For additional details on the GNU license please
7 see < http://www.gnu.org/licenses/agpl.html >.
8 """
9
10 from akvo.rsr.models import Project, Country, Organisation, IndicatorPeriod
11 from akvo.rsr.staticmap import get_staticmap_url, Coordinate, Size
12 from datetime import datetime
13 from django.contrib.auth.decorators import login_required
14 from django.db.models import Q
15 from django.http import HttpResponse, HttpResponseBadRequest
16 from django.shortcuts import get_object_or_404
17 from django.template.loader import render_to_string
18
19 from . import utils
20
21
22 @login_required
23 def render_organisation_projects_results_indicators_map_overview(request, org_id):
24 country = request.GET.get('country', '').strip()
25 if not country:
26 return HttpResponseBadRequest('Please provide the country code!')
27
28 show_comment = True if request.GET.get('comment', '').strip() == 'true' else False
29 start_date = utils.parse_date(request.GET.get('start_date', '').strip(), datetime(1900, 1, 1))
30 end_date = utils.parse_date(request.GET.get('end_date', '').strip(), datetime(2999, 12, 31))
31
32 country = get_object_or_404(Country, iso_code=country)
33 organisation = get_object_or_404(
34 Organisation.objects.prefetch_related(
35 'projects',
36 'projects__results',
37 'projects__results__indicators',
38 'projects__results__indicators__periods'
39 ),
40 pk=org_id
41 )
42 projects = organisation.all_projects().filter(primary_location__country=country)
43 coordinates = [
44 Coordinate(p.primary_location.latitude, p.primary_location.longitude)
45 for p
46 in projects
47 if p.primary_location
48 ]
49
50 now = datetime.today()
51
52 html = render_to_string(
53 'reports/organisation-projects-results-indicators-map-overview.html',
54 context={
55 'title': 'Results and indicators overview for projects in {}'.format(country.name),
56 'staticmap': get_staticmap_url(coordinates, Size(900, 600)),
57 'projects': [build_view_object(p, start_date, end_date) for p in projects],
58 'show_comment': show_comment,
59 'today': now.strftime('%d-%b-%Y'),
60 }
61 )
62
63 if request.GET.get('show-html', ''):
64 return HttpResponse(html)
65
66 filename = '{}-{}-{}-projects-results-indicators-overview.pdf'.format(
67 now.strftime('%Y%b%d'), organisation.id, country.iso_code
68 )
69
70 return utils.make_pdf_response(html, filename)
71
72
73 @login_required
74 def render_project_results_indicators_map_overview(request, project_id):
75 return _render_project_report(request, project_id, with_map=True)
76
77
78 @login_required
79 def render_project_results_indicators_overview(request, project_id):
80 return _render_project_report(request, project_id, with_disaggregation=True)
81
82
83 def build_view_object(project, start_date=None, end_date=None):
84 results = {}
85 periods = IndicatorPeriod.objects\
86 .select_related('indicator', 'indicator__result')\
87 .prefetch_related('disaggregations')\
88 .filter(indicator__result__project=project)
89 if start_date and end_date:
90 periods = periods.filter(
91 Q(period_start__isnull=True) | Q(period_start__gte=start_date),
92 Q(period_end__isnull=True) | Q(period_end__lte=end_date)
93 )
94 for period in periods:
95 indicator = period.indicator
96 result = indicator.result
97 if result.id not in results:
98 results[result.id] = {'item': result, 'indicators': {}}
99 if indicator.id not in results[result.id]['indicators']:
100 results[result.id]['indicators'][indicator.id] = {'item': indicator, 'periods': []}
101 results[result.id]['indicators'][indicator.id]['periods'].append(period)
102 return utils.ProjectProxy(project, results)
103
104
105 def _render_project_report(request, project_id, with_map=False, with_disaggregation=False):
106 show_comment = True if request.GET.get('comment', '').strip() == 'true' else False
107 start_date = utils.parse_date(request.GET.get('start_date', '').strip(), datetime(1900, 1, 1))
108 end_date = utils.parse_date(request.GET.get('end_date', '').strip(), datetime(2999, 12, 31))
109
110 project = get_object_or_404(
111 Project.objects.prefetch_related(
112 'partners',
113 'related_projects',
114 'related_to_projects',
115 'results',
116 'results__indicators',
117 'results__indicators__periods',
118 'results__indicators__periods__disaggregations'
119 ),
120 pk=project_id
121 )
122 project_location = project.primary_location
123 locations = [project_location]
124 if project.parents().count():
125 locations.append(project.parents().first().primary_location)
126 if project.children().count():
127 for child in project.children_all().published():
128 locations.append(child.primary_location)
129
130 if with_map:
131 coordinates = [Coordinate(loc.latitude, loc.longitude) for loc in locations if loc]
132
133 now = datetime.today()
134
135 html = render_to_string(
136 'reports/project-results-indicators-map-overview.html',
137 context={
138 'project': build_view_object(project, start_date, end_date),
139 'location': ", ".join([
140 _f
141 for _f
142 in [project_location.city, getattr(project_location.country, 'name', None)]
143 if _f
144 ]) if project_location else "",
145 'staticmap': get_staticmap_url(coordinates, Size(900, 600)) if with_map else None,
146 'show_comment': show_comment,
147 'show_disaggregations': with_disaggregation,
148 'today': now.strftime('%d-%b-%Y'),
149 }
150 )
151
152 if request.GET.get('show-html', ''):
153 return HttpResponse(html)
154
155 filename = '{}-{}-results-indicators{}-overview.pdf'.format(
156 now.strftime('%Y%b%d'), project.id, '-map' if with_map else '')
157
158 return utils.make_pdf_response(html, filename)
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py b/akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py
--- a/akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py
+++ b/akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py
@@ -109,6 +109,7 @@
project = get_object_or_404(
Project.objects.prefetch_related(
+ 'locations',
'partners',
'related_projects',
'related_to_projects',
@@ -120,14 +121,10 @@
pk=project_id
)
project_location = project.primary_location
- locations = [project_location]
- if project.parents().count():
- locations.append(project.parents().first().primary_location)
- if project.children().count():
- for child in project.children_all().published():
- locations.append(child.primary_location)
+ coordinates = None
if with_map:
+ locations = project.locations.all()
coordinates = [Coordinate(loc.latitude, loc.longitude) for loc in locations if loc]
now = datetime.today()
| {"golden_diff": "diff --git a/akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py b/akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py\n--- a/akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py\n+++ b/akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py\n@@ -109,6 +109,7 @@\n \n project = get_object_or_404(\n Project.objects.prefetch_related(\n+ 'locations',\n 'partners',\n 'related_projects',\n 'related_to_projects',\n@@ -120,14 +121,10 @@\n pk=project_id\n )\n project_location = project.primary_location\n- locations = [project_location]\n- if project.parents().count():\n- locations.append(project.parents().first().primary_location)\n- if project.children().count():\n- for child in project.children_all().published():\n- locations.append(child.primary_location)\n+ coordinates = None\n \n if with_map:\n+ locations = project.locations.all()\n coordinates = [Coordinate(loc.latitude, loc.longitude) for loc in locations if loc]\n \n now = datetime.today()\n", "issue": "Show project's locations in EUTF Syria custom project report.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rsr.models import Project, Country, Organisation, IndicatorPeriod\nfrom akvo.rsr.staticmap import get_staticmap_url, Coordinate, Size\nfrom datetime import datetime\nfrom django.contrib.auth.decorators import login_required\nfrom django.db.models import Q\nfrom django.http import HttpResponse, HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404\nfrom django.template.loader import render_to_string\n\nfrom . 
import utils\n\n\n@login_required\ndef render_organisation_projects_results_indicators_map_overview(request, org_id):\n country = request.GET.get('country', '').strip()\n if not country:\n return HttpResponseBadRequest('Please provide the country code!')\n\n show_comment = True if request.GET.get('comment', '').strip() == 'true' else False\n start_date = utils.parse_date(request.GET.get('start_date', '').strip(), datetime(1900, 1, 1))\n end_date = utils.parse_date(request.GET.get('end_date', '').strip(), datetime(2999, 12, 31))\n\n country = get_object_or_404(Country, iso_code=country)\n organisation = get_object_or_404(\n Organisation.objects.prefetch_related(\n 'projects',\n 'projects__results',\n 'projects__results__indicators',\n 'projects__results__indicators__periods'\n ),\n pk=org_id\n )\n projects = organisation.all_projects().filter(primary_location__country=country)\n coordinates = [\n Coordinate(p.primary_location.latitude, p.primary_location.longitude)\n for p\n in projects\n if p.primary_location\n ]\n\n now = datetime.today()\n\n html = render_to_string(\n 'reports/organisation-projects-results-indicators-map-overview.html',\n context={\n 'title': 'Results and indicators overview for projects in {}'.format(country.name),\n 'staticmap': get_staticmap_url(coordinates, Size(900, 600)),\n 'projects': [build_view_object(p, start_date, end_date) for p in projects],\n 'show_comment': show_comment,\n 'today': now.strftime('%d-%b-%Y'),\n }\n )\n\n if request.GET.get('show-html', ''):\n return HttpResponse(html)\n\n filename = '{}-{}-{}-projects-results-indicators-overview.pdf'.format(\n now.strftime('%Y%b%d'), organisation.id, country.iso_code\n )\n\n return utils.make_pdf_response(html, filename)\n\n\n@login_required\ndef render_project_results_indicators_map_overview(request, project_id):\n return _render_project_report(request, project_id, with_map=True)\n\n\n@login_required\ndef render_project_results_indicators_overview(request, project_id):\n return _render_project_report(request, project_id, with_disaggregation=True)\n\n\ndef build_view_object(project, start_date=None, end_date=None):\n results = {}\n periods = IndicatorPeriod.objects\\\n .select_related('indicator', 'indicator__result')\\\n .prefetch_related('disaggregations')\\\n .filter(indicator__result__project=project)\n if start_date and end_date:\n periods = periods.filter(\n Q(period_start__isnull=True) | Q(period_start__gte=start_date),\n Q(period_end__isnull=True) | Q(period_end__lte=end_date)\n )\n for period in periods:\n indicator = period.indicator\n result = indicator.result\n if result.id not in results:\n results[result.id] = {'item': result, 'indicators': {}}\n if indicator.id not in results[result.id]['indicators']:\n results[result.id]['indicators'][indicator.id] = {'item': indicator, 'periods': []}\n results[result.id]['indicators'][indicator.id]['periods'].append(period)\n return utils.ProjectProxy(project, results)\n\n\ndef _render_project_report(request, project_id, with_map=False, with_disaggregation=False):\n show_comment = True if request.GET.get('comment', '').strip() == 'true' else False\n start_date = utils.parse_date(request.GET.get('start_date', '').strip(), datetime(1900, 1, 1))\n end_date = utils.parse_date(request.GET.get('end_date', '').strip(), datetime(2999, 12, 31))\n\n project = get_object_or_404(\n Project.objects.prefetch_related(\n 'partners',\n 'related_projects',\n 'related_to_projects',\n 'results',\n 'results__indicators',\n 'results__indicators__periods',\n 
'results__indicators__periods__disaggregations'\n ),\n pk=project_id\n )\n project_location = project.primary_location\n locations = [project_location]\n if project.parents().count():\n locations.append(project.parents().first().primary_location)\n if project.children().count():\n for child in project.children_all().published():\n locations.append(child.primary_location)\n\n if with_map:\n coordinates = [Coordinate(loc.latitude, loc.longitude) for loc in locations if loc]\n\n now = datetime.today()\n\n html = render_to_string(\n 'reports/project-results-indicators-map-overview.html',\n context={\n 'project': build_view_object(project, start_date, end_date),\n 'location': \", \".join([\n _f\n for _f\n in [project_location.city, getattr(project_location.country, 'name', None)]\n if _f\n ]) if project_location else \"\",\n 'staticmap': get_staticmap_url(coordinates, Size(900, 600)) if with_map else None,\n 'show_comment': show_comment,\n 'show_disaggregations': with_disaggregation,\n 'today': now.strftime('%d-%b-%Y'),\n }\n )\n\n if request.GET.get('show-html', ''):\n return HttpResponse(html)\n\n filename = '{}-{}-results-indicators{}-overview.pdf'.format(\n now.strftime('%Y%b%d'), project.id, '-map' if with_map else '')\n\n return utils.make_pdf_response(html, filename)\n", "path": "akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rsr.models import Project, Country, Organisation, IndicatorPeriod\nfrom akvo.rsr.staticmap import get_staticmap_url, Coordinate, Size\nfrom datetime import datetime\nfrom django.contrib.auth.decorators import login_required\nfrom django.db.models import Q\nfrom django.http import HttpResponse, HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404\nfrom django.template.loader import render_to_string\n\nfrom . 
import utils\n\n\n@login_required\ndef render_organisation_projects_results_indicators_map_overview(request, org_id):\n country = request.GET.get('country', '').strip()\n if not country:\n return HttpResponseBadRequest('Please provide the country code!')\n\n show_comment = True if request.GET.get('comment', '').strip() == 'true' else False\n start_date = utils.parse_date(request.GET.get('start_date', '').strip(), datetime(1900, 1, 1))\n end_date = utils.parse_date(request.GET.get('end_date', '').strip(), datetime(2999, 12, 31))\n\n country = get_object_or_404(Country, iso_code=country)\n organisation = get_object_or_404(\n Organisation.objects.prefetch_related(\n 'projects',\n 'projects__results',\n 'projects__results__indicators',\n 'projects__results__indicators__periods'\n ),\n pk=org_id\n )\n projects = organisation.all_projects().filter(primary_location__country=country)\n coordinates = [\n Coordinate(p.primary_location.latitude, p.primary_location.longitude)\n for p\n in projects\n if p.primary_location\n ]\n\n now = datetime.today()\n\n html = render_to_string(\n 'reports/organisation-projects-results-indicators-map-overview.html',\n context={\n 'title': 'Results and indicators overview for projects in {}'.format(country.name),\n 'staticmap': get_staticmap_url(coordinates, Size(900, 600)),\n 'projects': [build_view_object(p, start_date, end_date) for p in projects],\n 'show_comment': show_comment,\n 'today': now.strftime('%d-%b-%Y'),\n }\n )\n\n if request.GET.get('show-html', ''):\n return HttpResponse(html)\n\n filename = '{}-{}-{}-projects-results-indicators-overview.pdf'.format(\n now.strftime('%Y%b%d'), organisation.id, country.iso_code\n )\n\n return utils.make_pdf_response(html, filename)\n\n\n@login_required\ndef render_project_results_indicators_map_overview(request, project_id):\n return _render_project_report(request, project_id, with_map=True)\n\n\n@login_required\ndef render_project_results_indicators_overview(request, project_id):\n return _render_project_report(request, project_id, with_disaggregation=True)\n\n\ndef build_view_object(project, start_date=None, end_date=None):\n results = {}\n periods = IndicatorPeriod.objects\\\n .select_related('indicator', 'indicator__result')\\\n .prefetch_related('disaggregations')\\\n .filter(indicator__result__project=project)\n if start_date and end_date:\n periods = periods.filter(\n Q(period_start__isnull=True) | Q(period_start__gte=start_date),\n Q(period_end__isnull=True) | Q(period_end__lte=end_date)\n )\n for period in periods:\n indicator = period.indicator\n result = indicator.result\n if result.id not in results:\n results[result.id] = {'item': result, 'indicators': {}}\n if indicator.id not in results[result.id]['indicators']:\n results[result.id]['indicators'][indicator.id] = {'item': indicator, 'periods': []}\n results[result.id]['indicators'][indicator.id]['periods'].append(period)\n return utils.ProjectProxy(project, results)\n\n\ndef _render_project_report(request, project_id, with_map=False, with_disaggregation=False):\n show_comment = True if request.GET.get('comment', '').strip() == 'true' else False\n start_date = utils.parse_date(request.GET.get('start_date', '').strip(), datetime(1900, 1, 1))\n end_date = utils.parse_date(request.GET.get('end_date', '').strip(), datetime(2999, 12, 31))\n\n project = get_object_or_404(\n Project.objects.prefetch_related(\n 'locations',\n 'partners',\n 'related_projects',\n 'related_to_projects',\n 'results',\n 'results__indicators',\n 'results__indicators__periods',\n 
'results__indicators__periods__disaggregations'\n ),\n pk=project_id\n )\n project_location = project.primary_location\n coordinates = None\n\n if with_map:\n locations = project.locations.all()\n coordinates = [Coordinate(loc.latitude, loc.longitude) for loc in locations if loc]\n\n now = datetime.today()\n\n html = render_to_string(\n 'reports/project-results-indicators-map-overview.html',\n context={\n 'project': build_view_object(project, start_date, end_date),\n 'location': \", \".join([\n _f\n for _f\n in [project_location.city, getattr(project_location.country, 'name', None)]\n if _f\n ]) if project_location else \"\",\n 'staticmap': get_staticmap_url(coordinates, Size(900, 600)) if with_map else None,\n 'show_comment': show_comment,\n 'show_disaggregations': with_disaggregation,\n 'today': now.strftime('%d-%b-%Y'),\n }\n )\n\n if request.GET.get('show-html', ''):\n return HttpResponse(html)\n\n filename = '{}-{}-results-indicators{}-overview.pdf'.format(\n now.strftime('%Y%b%d'), project.id, '-map' if with_map else '')\n\n return utils.make_pdf_response(html, filename)\n", "path": "akvo/rsr/views/py_reports/results_indicators_with_map_pdf_reports.py"}]} | 2,038 | 260 |
gh_patches_debug_35621 | rasdani/github-patches | git_diff | streamlink__streamlink-5443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.rtvs: No playable streams found on
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest build from the master branch
### Description
rtvs plugin - stream does not work
### Debug log
```text
PS C:\Users\My> streamlink https://www.rtvs.sk/televizia/live-24 --loglevel debug
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.11.2
[cli][debug] Streamlink: 5.3.1
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.12.7
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.2
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.17
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.2
[cli][debug] urllib3: 1.26.14
[cli][debug] websocket-client: 1.5.1
[cli][debug] Arguments:
[cli][debug] url=https://www.rtvs.sk/televizia/live-24
[cli][debug] --loglevel=debug
[cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe
[cli][info] Found matching plugin rtvs for URL https://www.rtvs.sk/televizia/live-24
error: No playable streams found on this URL: https://www.rtvs.sk/televizia/live-24
```
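
A note on what the log shows: Streamlink matches the plugin but `_get_streams` produces nothing, and with the plugin code below that is what happens when the page no longer contains the `'stream': 'live-<id>'` marker the channel-id regex looks for. A quick way to test that hypothesis outside Streamlink (a sketch, assuming `requests` is installed; the URL is the one from the log):

```python
# Sketch: does the live page still carry the old channel marker the plugin expects?
import re
import requests

html = requests.get("https://www.rtvs.sk/televizia/live-24").text
match = re.search(r"'stream':\s*'live-(\d+)'", html)
print(match)  # None would explain why _get_streams returns early with no streams
```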
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/rtvs.py`
Content:
```
1 """
2 $description Live TV channels from RTVS, a Slovak public, state-owned broadcaster.
3 $url rtvs.sk
4 $type live
5 $region Slovakia
6 """
7
8 import re
9
10 from streamlink.plugin import Plugin, pluginmatcher
11 from streamlink.plugin.api import validate
12 from streamlink.stream.hls import HLSStream
13 from streamlink.utils.parse import parse_json
14
15
16 @pluginmatcher(re.compile(
17 r"https?://www\.rtvs\.sk/televizia/live-[\w-]+",
18 ))
19 class Rtvs(Plugin):
20 _re_channel_id = re.compile(r"'stream':\s*'live-(\d+)'")
21
22 def _get_streams(self):
23 res = self.session.http.get(self.url)
24 m = self._re_channel_id.search(res.text)
25 if not m:
26 return
27
28 res = self.session.http.get(
29 "https://www.rtvs.sk/json/live5f.json",
30 params={
31 "c": m.group(1),
32 "b": "mozilla",
33 "p": "win",
34 "f": "0",
35 "d": "1",
36 },
37 )
38 videos = parse_json(res.text, schema=validate.Schema({
39 "clip": {
40 "sources": [{
41 "src": validate.url(),
42 "type": str,
43 }],
44 }},
45 validate.get(("clip", "sources")),
46 validate.filter(lambda n: n["type"] == "application/x-mpegurl"),
47 ))
48 for video in videos:
49 yield from HLSStream.parse_variant_playlist(self.session, video["src"]).items()
50
51
52 __plugin__ = Rtvs
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/rtvs.py b/src/streamlink/plugins/rtvs.py
--- a/src/streamlink/plugins/rtvs.py
+++ b/src/streamlink/plugins/rtvs.py
@@ -6,47 +6,52 @@
"""
import re
+from urllib.parse import urlparse
from streamlink.plugin import Plugin, pluginmatcher
from streamlink.plugin.api import validate
from streamlink.stream.hls import HLSStream
-from streamlink.utils.parse import parse_json
@pluginmatcher(re.compile(
- r"https?://www\.rtvs\.sk/televizia/live-[\w-]+",
+ r"https?://www\.rtvs\.sk/televizia/(?:live-|sport)",
))
class Rtvs(Plugin):
- _re_channel_id = re.compile(r"'stream':\s*'live-(\d+)'")
-
def _get_streams(self):
- res = self.session.http.get(self.url)
- m = self._re_channel_id.search(res.text)
- if not m:
+ channel = self.session.http.get(self.url, schema=validate.Schema(
+ validate.parse_html(),
+ validate.xml_xpath_string(".//iframe[@id='player_live']//@src"),
+ validate.url(path=validate.startswith("/embed/live/")),
+ validate.transform(lambda embed: urlparse(embed).path[len("/embed/live/"):]),
+ ))
+ if not channel:
return
- res = self.session.http.get(
+ videos = self.session.http.get(
"https://www.rtvs.sk/json/live5f.json",
params={
- "c": m.group(1),
+ "c": channel,
"b": "mozilla",
"p": "win",
"f": "0",
"d": "1",
},
+ schema=validate.Schema(
+ validate.parse_json(),
+ {
+ "clip": {
+ "sources": [{
+ "src": validate.url(),
+ "type": str,
+ }],
+ },
+ },
+ validate.get(("clip", "sources")),
+ validate.filter(lambda n: n["type"] == "application/x-mpegurl"),
+ ),
)
- videos = parse_json(res.text, schema=validate.Schema({
- "clip": {
- "sources": [{
- "src": validate.url(),
- "type": str,
- }],
- }},
- validate.get(("clip", "sources")),
- validate.filter(lambda n: n["type"] == "application/x-mpegurl"),
- ))
for video in videos:
- yield from HLSStream.parse_variant_playlist(self.session, video["src"]).items()
+ return HLSStream.parse_variant_playlist(self.session, video["src"])
__plugin__ = Rtvs
| {"golden_diff": "diff --git a/src/streamlink/plugins/rtvs.py b/src/streamlink/plugins/rtvs.py\n--- a/src/streamlink/plugins/rtvs.py\n+++ b/src/streamlink/plugins/rtvs.py\n@@ -6,47 +6,52 @@\n \"\"\"\n \n import re\n+from urllib.parse import urlparse\n \n from streamlink.plugin import Plugin, pluginmatcher\n from streamlink.plugin.api import validate\n from streamlink.stream.hls import HLSStream\n-from streamlink.utils.parse import parse_json\n \n \n @pluginmatcher(re.compile(\n- r\"https?://www\\.rtvs\\.sk/televizia/live-[\\w-]+\",\n+ r\"https?://www\\.rtvs\\.sk/televizia/(?:live-|sport)\",\n ))\n class Rtvs(Plugin):\n- _re_channel_id = re.compile(r\"'stream':\\s*'live-(\\d+)'\")\n-\n def _get_streams(self):\n- res = self.session.http.get(self.url)\n- m = self._re_channel_id.search(res.text)\n- if not m:\n+ channel = self.session.http.get(self.url, schema=validate.Schema(\n+ validate.parse_html(),\n+ validate.xml_xpath_string(\".//iframe[@id='player_live']//@src\"),\n+ validate.url(path=validate.startswith(\"/embed/live/\")),\n+ validate.transform(lambda embed: urlparse(embed).path[len(\"/embed/live/\"):]),\n+ ))\n+ if not channel:\n return\n \n- res = self.session.http.get(\n+ videos = self.session.http.get(\n \"https://www.rtvs.sk/json/live5f.json\",\n params={\n- \"c\": m.group(1),\n+ \"c\": channel,\n \"b\": \"mozilla\",\n \"p\": \"win\",\n \"f\": \"0\",\n \"d\": \"1\",\n },\n+ schema=validate.Schema(\n+ validate.parse_json(),\n+ {\n+ \"clip\": {\n+ \"sources\": [{\n+ \"src\": validate.url(),\n+ \"type\": str,\n+ }],\n+ },\n+ },\n+ validate.get((\"clip\", \"sources\")),\n+ validate.filter(lambda n: n[\"type\"] == \"application/x-mpegurl\"),\n+ ),\n )\n- videos = parse_json(res.text, schema=validate.Schema({\n- \"clip\": {\n- \"sources\": [{\n- \"src\": validate.url(),\n- \"type\": str,\n- }],\n- }},\n- validate.get((\"clip\", \"sources\")),\n- validate.filter(lambda n: n[\"type\"] == \"application/x-mpegurl\"),\n- ))\n for video in videos:\n- yield from HLSStream.parse_variant_playlist(self.session, video[\"src\"]).items()\n+ return HLSStream.parse_variant_playlist(self.session, video[\"src\"])\n \n \n __plugin__ = Rtvs\n", "issue": "plugins.rtvs: No playable streams found on\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest build from the master branch\n\n### Description\n\nrtvs plugin - stream not work\n\n### Debug log\n\n```text\nPS C:\\Users\\My> streamlink https://www.rtvs.sk/televizia/live-24 --loglevel debug\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.11.2\r\n[cli][debug] Streamlink: 5.3.1\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.12.7\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.2\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.17\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.2\r\n[cli][debug] urllib3: 1.26.14\r\n[cli][debug] websocket-client: 1.5.1\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.rtvs.sk/televizia/live-24\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --ffmpeg-ffmpeg=C:\\Program 
Files\\Streamlink\\ffmpeg\\ffmpeg.exe\r\n[cli][info] Found matching plugin rtvs for URL https://www.rtvs.sk/televizia/live-24\r\nerror: No playable streams found on this URL: https://www.rtvs.sk/televizia/live-24\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Live TV channels from RTVS, a Slovak public, state-owned broadcaster.\n$url rtvs.sk\n$type live\n$region Slovakia\n\"\"\"\n\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.utils.parse import parse_json\n\n\n@pluginmatcher(re.compile(\n r\"https?://www\\.rtvs\\.sk/televizia/live-[\\w-]+\",\n))\nclass Rtvs(Plugin):\n _re_channel_id = re.compile(r\"'stream':\\s*'live-(\\d+)'\")\n\n def _get_streams(self):\n res = self.session.http.get(self.url)\n m = self._re_channel_id.search(res.text)\n if not m:\n return\n\n res = self.session.http.get(\n \"https://www.rtvs.sk/json/live5f.json\",\n params={\n \"c\": m.group(1),\n \"b\": \"mozilla\",\n \"p\": \"win\",\n \"f\": \"0\",\n \"d\": \"1\",\n },\n )\n videos = parse_json(res.text, schema=validate.Schema({\n \"clip\": {\n \"sources\": [{\n \"src\": validate.url(),\n \"type\": str,\n }],\n }},\n validate.get((\"clip\", \"sources\")),\n validate.filter(lambda n: n[\"type\"] == \"application/x-mpegurl\"),\n ))\n for video in videos:\n yield from HLSStream.parse_variant_playlist(self.session, video[\"src\"]).items()\n\n\n__plugin__ = Rtvs\n", "path": "src/streamlink/plugins/rtvs.py"}], "after_files": [{"content": "\"\"\"\n$description Live TV channels from RTVS, a Slovak public, state-owned broadcaster.\n$url rtvs.sk\n$type live\n$region Slovakia\n\"\"\"\n\nimport re\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\n@pluginmatcher(re.compile(\n r\"https?://www\\.rtvs\\.sk/televizia/(?:live-|sport)\",\n))\nclass Rtvs(Plugin):\n def _get_streams(self):\n channel = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//iframe[@id='player_live']//@src\"),\n validate.url(path=validate.startswith(\"/embed/live/\")),\n validate.transform(lambda embed: urlparse(embed).path[len(\"/embed/live/\"):]),\n ))\n if not channel:\n return\n\n videos = self.session.http.get(\n \"https://www.rtvs.sk/json/live5f.json\",\n params={\n \"c\": channel,\n \"b\": \"mozilla\",\n \"p\": \"win\",\n \"f\": \"0\",\n \"d\": \"1\",\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"clip\": {\n \"sources\": [{\n \"src\": validate.url(),\n \"type\": str,\n }],\n },\n },\n validate.get((\"clip\", \"sources\")),\n validate.filter(lambda n: n[\"type\"] == \"application/x-mpegurl\"),\n ),\n )\n for video in videos:\n return HLSStream.parse_variant_playlist(self.session, video[\"src\"])\n\n\n__plugin__ = Rtvs\n", "path": "src/streamlink/plugins/rtvs.py"}]} | 1,222 | 603 |
gh_patches_debug_7581 | rasdani/github-patches | git_diff | ivy-llc__ivy-18346 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sort
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/search.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_supported_dtypes(
10 {"2.5.0 and below": ("float32", "float64", "int16", "int32", "int64", "uint8")},
11 "paddle",
12 )
13 @to_ivy_arrays_and_back
14 def argmax(x, /, *, axis=None, keepdim=False, dtype="int64", name=None):
15 return ivy.argmax(x, axis=axis, keepdims=keepdim, dtype=dtype)
16
17
18 @with_supported_dtypes(
19 {"2.5.0 and below": ("float32", "float64", "int16", "int32", "int64", "uint8")},
20 "paddle",
21 )
22 @to_ivy_arrays_and_back
23 def argmin(x, /, *, axis=None, keepdim=False, dtype="int64", name=None):
24 return ivy.argmin(x, axis=axis, keepdims=keepdim, dtype=dtype)
25
26
27 @with_supported_dtypes(
28 {"2.4.2 and below": ("float32", "float64", "int16", "int32", "int64", "uint8")},
29 "paddle",
30 )
31 @to_ivy_arrays_and_back
32 def argsort(x, /, *, axis=-1, descending=False, name=None):
33 return ivy.argsort(x, axis=axis, descending=descending)
34
35
36 @with_supported_dtypes(
37 {"2.4.2 and below": ("float32", "float64", "int16", "int32", "int64", "uint8")},
38 "paddle",
39 )
40 @to_ivy_arrays_and_back
41 def nonzero(input, /, *, as_tuple=False):
42 ret = ivy.nonzero(input)
43 if as_tuple is False:
44 ret = ivy.matrix_transpose(ivy.stack(ret))
45 return ret
46
47
48 @with_supported_dtypes(
49 {"2.5.0 and below": ("float32", "float64", "int32", "int64")},
50 "paddle",
51 )
52 @to_ivy_arrays_and_back
53 def searchsorted(sorted_sequence, values, out_int32=False, right=False, name=None):
54 if right:
55 side = "right"
56 else:
57 side = "left"
58 ret = ivy.searchsorted(sorted_sequence, values, side=side)
59 if out_int32:
60 ret = ivy.astype(ret, "int32")
61 return ret
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/search.py b/ivy/functional/frontends/paddle/tensor/search.py
--- a/ivy/functional/frontends/paddle/tensor/search.py
+++ b/ivy/functional/frontends/paddle/tensor/search.py
@@ -33,6 +33,15 @@
return ivy.argsort(x, axis=axis, descending=descending)
+@with_supported_dtypes(
+ {"2.5.0 and below": ("float32", "float64", "int32", "int64")},
+ "paddle",
+)
+@to_ivy_arrays_and_back
+def sort(x, /, *, axis=-1, descending=False, name=None):
+ return ivy.sort(x, axis=axis, descending=descending)
+
+
@with_supported_dtypes(
{"2.4.2 and below": ("float32", "float64", "int16", "int32", "int64", "uint8")},
"paddle",
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/search.py b/ivy/functional/frontends/paddle/tensor/search.py\n--- a/ivy/functional/frontends/paddle/tensor/search.py\n+++ b/ivy/functional/frontends/paddle/tensor/search.py\n@@ -33,6 +33,15 @@\n return ivy.argsort(x, axis=axis, descending=descending)\n \n \n+@with_supported_dtypes(\n+ {\"2.5.0 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")},\n+ \"paddle\",\n+)\n+@to_ivy_arrays_and_back\n+def sort(x, /, *, axis=-1, descending=False, name=None):\n+ return ivy.sort(x, axis=axis, descending=descending)\n+\n+\n @with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int16\", \"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n", "issue": "Sort\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"float32\", \"float64\", \"int16\", \"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef argmax(x, /, *, axis=None, keepdim=False, dtype=\"int64\", name=None):\n return ivy.argmax(x, axis=axis, keepdims=keepdim, dtype=dtype)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"float32\", \"float64\", \"int16\", \"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef argmin(x, /, *, axis=None, keepdim=False, dtype=\"int64\", name=None):\n return ivy.argmin(x, axis=axis, keepdims=keepdim, dtype=dtype)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int16\", \"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef argsort(x, /, *, axis=-1, descending=False, name=None):\n return ivy.argsort(x, axis=axis, descending=descending)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int16\", \"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef nonzero(input, /, *, as_tuple=False):\n ret = ivy.nonzero(input)\n if as_tuple is False:\n ret = ivy.matrix_transpose(ivy.stack(ret))\n return ret\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef searchsorted(sorted_sequence, values, out_int32=False, right=False, name=None):\n if right:\n side = \"right\"\n else:\n side = \"left\"\n ret = ivy.searchsorted(sorted_sequence, values, side=side)\n if out_int32:\n ret = ivy.astype(ret, \"int32\")\n return ret\n", "path": "ivy/functional/frontends/paddle/tensor/search.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"float32\", \"float64\", \"int16\", \"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef argmax(x, /, *, axis=None, keepdim=False, dtype=\"int64\", name=None):\n return ivy.argmax(x, axis=axis, keepdims=keepdim, dtype=dtype)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"float32\", \"float64\", \"int16\", \"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef argmin(x, /, *, axis=None, keepdim=False, dtype=\"int64\", name=None):\n return ivy.argmin(x, axis=axis, keepdims=keepdim, dtype=dtype)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int16\", 
\"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef argsort(x, /, *, axis=-1, descending=False, name=None):\n return ivy.argsort(x, axis=axis, descending=descending)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef sort(x, /, *, axis=-1, descending=False, name=None):\n return ivy.sort(x, axis=axis, descending=descending)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int16\", \"int32\", \"int64\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef nonzero(input, /, *, as_tuple=False):\n ret = ivy.nonzero(input)\n if as_tuple is False:\n ret = ivy.matrix_transpose(ivy.stack(ret))\n return ret\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef searchsorted(sorted_sequence, values, out_int32=False, right=False, name=None):\n if right:\n side = \"right\"\n else:\n side = \"left\"\n ret = ivy.searchsorted(sorted_sequence, values, side=side)\n if out_int32:\n ret = ivy.astype(ret, \"int32\")\n return ret\n", "path": "ivy/functional/frontends/paddle/tensor/search.py"}]} | 964 | 230 |
gh_patches_debug_29121 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-2235 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Perplexity dtype restriction too strict
## 🐛 Bug
The perplexity metric requires the input dtype to be either fp32 or fp64, but this doesn't work with e.g. fp16, and users need to manually recast.
https://github.com/Lightning-AI/torchmetrics/blob/894de4caeeae820f60f1871d75334873241e5633/src/torchmetrics/functional/text/perplexity.py#L21
### Expected behavior
The metric should accept other floating point dtypes.
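
To make the failure concrete, here is a small sketch of how half-precision logits trip the current dtype check, and the relaxed check the issue is asking for (import path assumed from the file layout below):

```python
# Sketch: fp16 logits are rejected by _check_shape_and_type_consistency today.
import torch
from torchmetrics.functional.text import perplexity  # assumed import path

preds = torch.rand(2, 8, 5, dtype=torch.float16)  # half-precision logits
target = torch.randint(5, (2, 8))                 # int64 targets, as required

try:
    perplexity(preds, target)
except TypeError as err:
    print(err)  # preds.dtype is not in (torch.float32, torch.float64)

# A relaxed guard along the lines requested would be:
# if not preds.is_floating_point():
#     raise TypeError(f"Input tensor `preds` is expected to be floating point, got {preds.dtype}.")
```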
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/functional/text/perplexity.py`
Content:
```
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional, Tuple
16
17 import torch
18 from torch import Tensor
19 from torch.nn import functional as F # noqa: N812
20
21 _TORCH_FLOAT_OR_DOUBLE = (torch.float32, torch.float64)
22
23
24 def _check_shape_and_type_consistency(preds: Tensor, target: Tensor) -> None:
25 """Check shape and type consistency of input vectors.
26
27 Args:
28 preds:
29 Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,
30 vocab_size]. Scores will be normalized internally using softmax.
31 target:
32 Ground truth values with a shape [batch_size, seq_len].
33
34 Raises:
35 ValueError:
36 If ``preds`` tensor has no 3 dimensions.
37 ValueError:
38 If ``target`` tensor has no 2 dimensions.
39 ValueError:
40 If the first two dimensions of ``preds`` and ``target`` do not equal.
41 TypeError:
42 If ``preds`` dtype is not one of ``(torch.float16, torch.float32, torch.float64)``
43 TypeError:
44 If ``target`` is not of a type LongTensor (torch.int64)
45
46 """
47 if len(preds.shape) != 3:
48 raise ValueError(
49 "Input tensor `preds` is expected to have 3 dimensions, [batch_size, seq_len, vocab_size],"
50 f" but got {len(preds.shape)}."
51 )
52 if len(target.shape) != 2:
53 raise ValueError(
54 "Input tensor `target` is expected to have 2 dimensions, [batch_size, seq_len],"
55 f" but got {len(target.shape)}."
56 )
57 if preds.shape[:2] != target.shape:
58 raise ValueError(
59 "Input tensors `preds` and `target` are expected to have equaling first two dimensions,"
60 f" [batch_size, seq_len], but got {preds.shape[:2]} and {target.shape}."
61 )
62 if preds.dtype not in _TORCH_FLOAT_OR_DOUBLE:
63 raise TypeError(
64 f"Input tensor `preds` is expected to be of a type one of {_TORCH_FLOAT_OR_DOUBLE} but got {preds.dtype}."
65 )
66 if target.dtype != torch.int64:
67 raise TypeError(f"Input tensor `target` is expected to be of a type {torch.int64} but got {target.dtype}.")
68
69
70 def _perplexity_update(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tuple[Tensor, Tensor]:
71 """Compute intermediate statistics for Perplexity.
72
73 Args:
74 preds:
75 Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,
76 vocab_size]. Scores will be normalized internally using softmax.
77 target:
78 Ground truth values with a shape [batch_size, seq_len].
79 ignore_index:
80 Integer specifying a target class to ignore. If given, this class index does not contribute
81 to the returned score.
82
83 Returns:
84 Log probabilities, summed over all samples
85 Number of samples
86
87 """
88 _check_shape_and_type_consistency(preds, target)
89
90 probs = F.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)
91 target = target.reshape(-1)
92
93 if ignore_index is not None:
94 mask = target.ne(ignore_index)
95 target = target.where(target != ignore_index, torch.tensor(0, device=target.device))
96 else:
97 mask = torch.ones_like(target, dtype=torch.bool)
98
99 probs = probs[:, target].diagonal()[mask]
100 total_log_probs = -probs.log().sum()
101 count = mask.sum()
102
103 return total_log_probs, count
104
105
106 def _perplexity_compute(total: Tensor, count: Tensor) -> Tensor:
107 """Compute the Perplexity.
108
109 Args:
110 total: Log probabilities, summed over all samples
111 count: Number of samples
112 Returns:
113 Perplexity
114
115 """
116 return torch.exp(total / count)
117
118
119 def perplexity(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tensor:
120 """Perplexity measures how well a language model predicts a text sample.
121
122 This metric is calculated as the average number of bits per word a model needs to represent the sample.
123
124 Args:
125 preds:
126 Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,
127 vocab_size], which is the output of a language model. Scores will be normalized internally using softmax.
128 target:
129 Ground truth values with a shape [batch_size, seq_len].
130 ignore_index:
131 Integer specifying a target class to ignore. If given, this class index does not contribute
132 to the returned score.
133
134 Returns:
135 Perplexity value
136
137 Examples:
138 >>> import torch
139 >>> gen = torch.manual_seed(42)
140 >>> preds = torch.rand(2, 8, 5, generator=gen)
141 >>> target = torch.randint(5, (2, 8), generator=gen)
142 >>> target[0, 6:] = -100
143 >>> perplexity(preds, target, ignore_index=-100)
144 tensor(5.8540)
145
146 """
147 total, count = _perplexity_update(preds, target, ignore_index)
148 return _perplexity_compute(total, count)
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/torchmetrics/functional/text/perplexity.py b/src/torchmetrics/functional/text/perplexity.py
--- a/src/torchmetrics/functional/text/perplexity.py
+++ b/src/torchmetrics/functional/text/perplexity.py
@@ -16,9 +16,6 @@
import torch
from torch import Tensor
-from torch.nn import functional as F # noqa: N812
-
-_TORCH_FLOAT_OR_DOUBLE = (torch.float32, torch.float64)
def _check_shape_and_type_consistency(preds: Tensor, target: Tensor) -> None:
@@ -59,10 +56,8 @@
"Input tensors `preds` and `target` are expected to have equaling first two dimensions,"
f" [batch_size, seq_len], but got {preds.shape[:2]} and {target.shape}."
)
- if preds.dtype not in _TORCH_FLOAT_OR_DOUBLE:
- raise TypeError(
- f"Input tensor `preds` is expected to be of a type one of {_TORCH_FLOAT_OR_DOUBLE} but got {preds.dtype}."
- )
+ if not preds.is_floating_point():
+ raise TypeError(f"Input tensor `preds` is expected to be of floating point type but got {preds.dtype}.")
if target.dtype != torch.int64:
raise TypeError(f"Input tensor `target` is expected to be of a type {torch.int64} but got {target.dtype}.")
@@ -87,7 +82,7 @@
"""
_check_shape_and_type_consistency(preds, target)
- probs = F.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)
+ probs = torch.nn.functional.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)
target = target.reshape(-1)
if ignore_index is not None:
| {"golden_diff": "diff --git a/src/torchmetrics/functional/text/perplexity.py b/src/torchmetrics/functional/text/perplexity.py\n--- a/src/torchmetrics/functional/text/perplexity.py\n+++ b/src/torchmetrics/functional/text/perplexity.py\n@@ -16,9 +16,6 @@\n \n import torch\n from torch import Tensor\n-from torch.nn import functional as F # noqa: N812\n-\n-_TORCH_FLOAT_OR_DOUBLE = (torch.float32, torch.float64)\n \n \n def _check_shape_and_type_consistency(preds: Tensor, target: Tensor) -> None:\n@@ -59,10 +56,8 @@\n \"Input tensors `preds` and `target` are expected to have equaling first two dimensions,\"\n f\" [batch_size, seq_len], but got {preds.shape[:2]} and {target.shape}.\"\n )\n- if preds.dtype not in _TORCH_FLOAT_OR_DOUBLE:\n- raise TypeError(\n- f\"Input tensor `preds` is expected to be of a type one of {_TORCH_FLOAT_OR_DOUBLE} but got {preds.dtype}.\"\n- )\n+ if not preds.is_floating_point():\n+ raise TypeError(f\"Input tensor `preds` is expected to be of floating point type but got {preds.dtype}.\")\n if target.dtype != torch.int64:\n raise TypeError(f\"Input tensor `target` is expected to be of a type {torch.int64} but got {target.dtype}.\")\n \n@@ -87,7 +82,7 @@\n \"\"\"\n _check_shape_and_type_consistency(preds, target)\n \n- probs = F.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)\n+ probs = torch.nn.functional.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)\n target = target.reshape(-1)\n \n if ignore_index is not None:\n", "issue": "Perplexity dtype restriction too strict\n## \ud83d\udc1b Bug\r\n\r\nThe perplexity metric requires the input dtype to be either fp32 or fp64, but this doesn't work with e.g. fp16, and users need to manually recast.\r\n\r\nhttps://github.com/Lightning-AI/torchmetrics/blob/894de4caeeae820f60f1871d75334873241e5633/src/torchmetrics/functional/text/perplexity.py#L21\r\n\r\n### Expected behavior\r\n\r\nThe metric should accept other floating point dtypes.\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, Tuple\n\nimport torch\nfrom torch import Tensor\nfrom torch.nn import functional as F # noqa: N812\n\n_TORCH_FLOAT_OR_DOUBLE = (torch.float32, torch.float64)\n\n\ndef _check_shape_and_type_consistency(preds: Tensor, target: Tensor) -> None:\n \"\"\"Check shape and type consistency of input vectors.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size]. 
Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n\n Raises:\n ValueError:\n If ``preds`` tensor has no 3 dimensions.\n ValueError:\n If ``target`` tensor has no 2 dimensions.\n ValueError:\n If the first two dimensions of ``preds`` and ``target`` do not equal.\n TypeError:\n If ``preds`` dtype is not one of ``(torch.float16, torch.float32, torch.float64)``\n TypeError:\n If ``target`` is not of a type LongTensor (torch.int64)\n\n \"\"\"\n if len(preds.shape) != 3:\n raise ValueError(\n \"Input tensor `preds` is expected to have 3 dimensions, [batch_size, seq_len, vocab_size],\"\n f\" but got {len(preds.shape)}.\"\n )\n if len(target.shape) != 2:\n raise ValueError(\n \"Input tensor `target` is expected to have 2 dimensions, [batch_size, seq_len],\"\n f\" but got {len(target.shape)}.\"\n )\n if preds.shape[:2] != target.shape:\n raise ValueError(\n \"Input tensors `preds` and `target` are expected to have equaling first two dimensions,\"\n f\" [batch_size, seq_len], but got {preds.shape[:2]} and {target.shape}.\"\n )\n if preds.dtype not in _TORCH_FLOAT_OR_DOUBLE:\n raise TypeError(\n f\"Input tensor `preds` is expected to be of a type one of {_TORCH_FLOAT_OR_DOUBLE} but got {preds.dtype}.\"\n )\n if target.dtype != torch.int64:\n raise TypeError(f\"Input tensor `target` is expected to be of a type {torch.int64} but got {target.dtype}.\")\n\n\ndef _perplexity_update(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tuple[Tensor, Tensor]:\n \"\"\"Compute intermediate statistics for Perplexity.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size]. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n Integer specifying a target class to ignore. If given, this class index does not contribute\n to the returned score.\n\n Returns:\n Log probabilities, summed over all samples\n Number of samples\n\n \"\"\"\n _check_shape_and_type_consistency(preds, target)\n\n probs = F.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)\n target = target.reshape(-1)\n\n if ignore_index is not None:\n mask = target.ne(ignore_index)\n target = target.where(target != ignore_index, torch.tensor(0, device=target.device))\n else:\n mask = torch.ones_like(target, dtype=torch.bool)\n\n probs = probs[:, target].diagonal()[mask]\n total_log_probs = -probs.log().sum()\n count = mask.sum()\n\n return total_log_probs, count\n\n\ndef _perplexity_compute(total: Tensor, count: Tensor) -> Tensor:\n \"\"\"Compute the Perplexity.\n\n Args:\n total: Log probabilities, summed over all samples\n count: Number of samples\n Returns:\n Perplexity\n\n \"\"\"\n return torch.exp(total / count)\n\n\ndef perplexity(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tensor:\n \"\"\"Perplexity measures how well a language model predicts a text sample.\n\n This metric is calculated as the average number of bits per word a model needs to represent the sample.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size], which is the output of a language model. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n Integer specifying a target class to ignore. 
If given, this class index does not contribute\n to the returned score.\n\n Returns:\n Perplexity value\n\n Examples:\n >>> import torch\n >>> gen = torch.manual_seed(42)\n >>> preds = torch.rand(2, 8, 5, generator=gen)\n >>> target = torch.randint(5, (2, 8), generator=gen)\n >>> target[0, 6:] = -100\n >>> perplexity(preds, target, ignore_index=-100)\n tensor(5.8540)\n\n \"\"\"\n total, count = _perplexity_update(preds, target, ignore_index)\n return _perplexity_compute(total, count)\n", "path": "src/torchmetrics/functional/text/perplexity.py"}], "after_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, Tuple\n\nimport torch\nfrom torch import Tensor\n\n\ndef _check_shape_and_type_consistency(preds: Tensor, target: Tensor) -> None:\n \"\"\"Check shape and type consistency of input vectors.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size]. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n\n Raises:\n ValueError:\n If ``preds`` tensor has no 3 dimensions.\n ValueError:\n If ``target`` tensor has no 2 dimensions.\n ValueError:\n If the first two dimensions of ``preds`` and ``target`` do not equal.\n TypeError:\n If ``preds`` dtype is not one of ``(torch.float16, torch.float32, torch.float64)``\n TypeError:\n If ``target`` is not of a type LongTensor (torch.int64)\n\n \"\"\"\n if len(preds.shape) != 3:\n raise ValueError(\n \"Input tensor `preds` is expected to have 3 dimensions, [batch_size, seq_len, vocab_size],\"\n f\" but got {len(preds.shape)}.\"\n )\n if len(target.shape) != 2:\n raise ValueError(\n \"Input tensor `target` is expected to have 2 dimensions, [batch_size, seq_len],\"\n f\" but got {len(target.shape)}.\"\n )\n if preds.shape[:2] != target.shape:\n raise ValueError(\n \"Input tensors `preds` and `target` are expected to have equaling first two dimensions,\"\n f\" [batch_size, seq_len], but got {preds.shape[:2]} and {target.shape}.\"\n )\n if not preds.is_floating_point():\n raise TypeError(f\"Input tensor `preds` is expected to be of floating point type but got {preds.dtype}.\")\n if target.dtype != torch.int64:\n raise TypeError(f\"Input tensor `target` is expected to be of a type {torch.int64} but got {target.dtype}.\")\n\n\ndef _perplexity_update(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tuple[Tensor, Tensor]:\n \"\"\"Compute intermediate statistics for Perplexity.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size]. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n Integer specifying a target class to ignore. 
If given, this class index does not contribute\n to the returned score.\n\n Returns:\n Log probabilities, summed over all samples\n Number of samples\n\n \"\"\"\n _check_shape_and_type_consistency(preds, target)\n\n probs = torch.nn.functional.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)\n target = target.reshape(-1)\n\n if ignore_index is not None:\n mask = target.ne(ignore_index)\n target = target.where(target != ignore_index, torch.tensor(0, device=target.device))\n else:\n mask = torch.ones_like(target, dtype=torch.bool)\n\n probs = probs[:, target].diagonal()[mask]\n total_log_probs = -probs.log().sum()\n count = mask.sum()\n\n return total_log_probs, count\n\n\ndef _perplexity_compute(total: Tensor, count: Tensor) -> Tensor:\n \"\"\"Compute the Perplexity.\n\n Args:\n total: Log probabilities, summed over all samples\n count: Number of samples\n Returns:\n Perplexity\n\n \"\"\"\n return torch.exp(total / count)\n\n\ndef perplexity(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tensor:\n \"\"\"Perplexity measures how well a language model predicts a text sample.\n\n This metric is calculated as the average number of bits per word a model needs to represent the sample.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size], which is the output of a language model. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n Integer specifying a target class to ignore. If given, this class index does not contribute\n to the returned score.\n\n Returns:\n Perplexity value\n\n Examples:\n >>> import torch\n >>> gen = torch.manual_seed(42)\n >>> preds = torch.rand(2, 8, 5, generator=gen)\n >>> target = torch.randint(5, (2, 8), generator=gen)\n >>> target[0, 6:] = -100\n >>> perplexity(preds, target, ignore_index=-100)\n tensor(5.8540)\n\n \"\"\"\n total, count = _perplexity_update(preds, target, ignore_index)\n return _perplexity_compute(total, count)\n", "path": "src/torchmetrics/functional/text/perplexity.py"}]} | 2,040 | 419 |
gh_patches_debug_6988 | rasdani/github-patches | git_diff | napari__napari-4264 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'napari._vispy.layers.image' is not a package
## Activating experimental rendering features causes persistent ModuleNotFoundError
Recently, I discovered the two "experimental" features designed to enhance viewer performance with asynchronous rendering. After checking both options in Preferences (under Experimental), the GUI prompted a restart. When attempting to relaunch Napari, the following error is thrown: `ModuleNotFoundError: No module named 'napari._vispy.layers.image.experimental'; 'napari._vispy.layers.image' is not a package`. I tried recreating my virtual environment, installing a different version of Napari, using the `--no-cache-dir` flag with pip to ensure that the installation process did not use cached files...but every new installation of Napari now throws this error.
## To Reproduce
Steps to reproduce the behavior:
1. Check both experimental rendering options under File/Preferences/Experimental
2. Restart Napari
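
For reference, the traceback points at the octree import in `napari/_vispy/utils/visual.py` (shown below): from that module the two-dot relative import resolves to `napari._vispy.layers.image.experimental`, but `napari._vispy.layers.image` is a plain module rather than a package, which is exactly the error text; per the accompanying diff, the octree code lives under the top-level `napari.layers` package instead. A small sketch to confirm which candidate path exists (assuming napari is importable in the environment):

```python
# Sketch: check which of the two candidate package paths actually exists.
import importlib.util

for name in (
    "napari._vispy.layers.image.experimental",  # what `from ..layers.image.experimental ...` resolves to
    "napari.layers.image.experimental",         # where the octree modules actually live
):
    try:
        found = importlib.util.find_spec(name) is not None
    except ModuleNotFoundError as err:  # raised when a parent is a module, not a package
        found = False
        print(name, "->", err)
    else:
        print(name, "->", found)
```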
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/_vispy/utils/visual.py`
Content:
```
1 from typing import Tuple
2
3 import numpy as np
4 from vispy.scene.widgets.viewbox import ViewBox
5
6 from ...layers import (
7 Image,
8 Labels,
9 Layer,
10 Points,
11 Shapes,
12 Surface,
13 Tracks,
14 Vectors,
15 )
16 from ...utils.config import async_octree
17 from ...utils.translations import trans
18 from ..layers.base import VispyBaseLayer
19 from ..layers.image import VispyImageLayer
20 from ..layers.labels import VispyLabelsLayer
21 from ..layers.points import VispyPointsLayer
22 from ..layers.shapes import VispyShapesLayer
23 from ..layers.surface import VispySurfaceLayer
24 from ..layers.tracks import VispyTracksLayer
25 from ..layers.vectors import VispyVectorsLayer
26
27 layer_to_visual = {
28 Image: VispyImageLayer,
29 Labels: VispyLabelsLayer,
30 Points: VispyPointsLayer,
31 Shapes: VispyShapesLayer,
32 Surface: VispySurfaceLayer,
33 Vectors: VispyVectorsLayer,
34 Tracks: VispyTracksLayer,
35 }
36
37
38 if async_octree:
39 from ..layers.image.experimental.octree_image import _OctreeImageBase
40 from .experimental.vispy_tiled_image_layer import VispyTiledImageLayer
41
42 # Insert _OctreeImageBase in front so it gets picked over plain Image.
43 new_mapping = {_OctreeImageBase: VispyTiledImageLayer}
44 new_mapping.update(layer_to_visual)
45 layer_to_visual = new_mapping
46
47
48 def create_vispy_visual(layer: Layer) -> VispyBaseLayer:
49 """Create vispy visual for a layer based on its layer type.
50
51 Parameters
52 ----------
53 layer : napari.layers._base_layer.Layer
54 Layer that needs its property widget created.
55
56 Returns
57 -------
58 visual : vispy.scene.visuals.VisualNode
59 Vispy visual node
60 """
61 for layer_type, visual_class in layer_to_visual.items():
62 if isinstance(layer, layer_type):
63 return visual_class(layer)
64
65 raise TypeError(
66 trans._(
67 'Could not find VispyLayer for layer of type {dtype}',
68 deferred=True,
69 dtype=type(layer),
70 )
71 )
72
73
74 def get_view_direction_in_scene_coordinates(
75 view: ViewBox,
76 ndim: int,
77 dims_displayed: Tuple[int],
78 ) -> np.ndarray:
79 """Calculate the unit vector pointing in the direction of the view.
80
81 This is only for 3D viewing, so it returns None when
82 len(dims_displayed) == 2.
83 Adapted From:
84 https://stackoverflow.com/questions/37877592/
85 get-view-direction-relative-to-scene-in-vispy/37882984
86
87 Parameters
88 ----------
89 view : vispy.scene.widgets.viewbox.ViewBox
90 The vispy view box object to get the view direction from.
91 ndim : int
92 The number of dimensions in the full nD dims model.
93 This is typically from viewer.dims.ndim
94 dims_displayed : Tuple[int]
95 The indices of the dims displayed in the viewer.
96 This is typically from viewer.dims.displayed.
97
98 Returns
99 -------
100 view_vector : np.ndarray
101 Unit vector in the direction of the view in scene coordinates.
102 Axes are ordered zyx. If the viewer is in 2D
103 (i.e., len(dims_displayed) == 2), view_vector is None.
104 """
105 # only return a vector when viewing in 3D
106 if len(dims_displayed) == 2:
107 return None
108
109 tform = view.scene.transform
110 w, h = view.canvas.size
111
112 # get a point at the center of the canvas
113 # (homogeneous screen coords)
114 screen_center = np.array([w / 2, h / 2, 0, 1])
115
116 # find a point just in front of the center point
117 # transform both to world coords and find the vector
118 d1 = np.array([0, 0, 1, 0])
119 point_in_front_of_screen_center = screen_center + d1
120 p1 = tform.imap(point_in_front_of_screen_center)
121 p0 = tform.imap(screen_center)
122 d2 = p1 - p0
123
124 # in 3D world coordinates
125 d3 = d2[0:3]
126 d4 = d3 / np.linalg.norm(d3)
127
128 # data are ordered xyz on vispy Volume
129 d4 = d4[[2, 1, 0]]
130 view_dir_world = np.zeros((ndim,))
131 for i, d in enumerate(dims_displayed):
132 view_dir_world[d] = d4[i]
133
134 return view_dir_world
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/_vispy/utils/visual.py b/napari/_vispy/utils/visual.py
--- a/napari/_vispy/utils/visual.py
+++ b/napari/_vispy/utils/visual.py
@@ -36,8 +36,8 @@
if async_octree:
- from ..layers.image.experimental.octree_image import _OctreeImageBase
- from .experimental.vispy_tiled_image_layer import VispyTiledImageLayer
+ from ...layers.image.experimental.octree_image import _OctreeImageBase
+ from ..experimental.vispy_tiled_image_layer import VispyTiledImageLayer
# Insert _OctreeImageBase in front so it gets picked over plain Image.
new_mapping = {_OctreeImageBase: VispyTiledImageLayer}
| {"golden_diff": "diff --git a/napari/_vispy/utils/visual.py b/napari/_vispy/utils/visual.py\n--- a/napari/_vispy/utils/visual.py\n+++ b/napari/_vispy/utils/visual.py\n@@ -36,8 +36,8 @@\n \n \n if async_octree:\n- from ..layers.image.experimental.octree_image import _OctreeImageBase\n- from .experimental.vispy_tiled_image_layer import VispyTiledImageLayer\n+ from ...layers.image.experimental.octree_image import _OctreeImageBase\n+ from ..experimental.vispy_tiled_image_layer import VispyTiledImageLayer\n \n # Insert _OctreeImageBase in front so it gets picked over plain Image.\n new_mapping = {_OctreeImageBase: VispyTiledImageLayer}\n", "issue": "'napari._vispy.layers.image' is not a package\n## Activating experimental rendering features causes persistent ModuleNotFoundError\r\n\r\nRecently, I discovered the two \"experimental\" features designed to enhance viewer performance with asynchronous rendering. After checking both options in Preferences (under Experimental), the GUI prompted a restart. When attempting to relaunch Napari, the following error is thrown: `ModuleNotFoundError: No module named 'napari._vispy.layers.image.experimental'; 'napari._vispy.layers.image' is not a package`. I tried recreating my virtual environment, installing a different version of Napari, using the `--no-cache-dir` flag with pip to ensure that the installation process did not use cached files...but every new installation of Napari now throws this error.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Check both experimental rendering options under File/Preferences/Experimental\r\n2. Restart Napari\r\n\n", "before_files": [{"content": "from typing import Tuple\n\nimport numpy as np\nfrom vispy.scene.widgets.viewbox import ViewBox\n\nfrom ...layers import (\n Image,\n Labels,\n Layer,\n Points,\n Shapes,\n Surface,\n Tracks,\n Vectors,\n)\nfrom ...utils.config import async_octree\nfrom ...utils.translations import trans\nfrom ..layers.base import VispyBaseLayer\nfrom ..layers.image import VispyImageLayer\nfrom ..layers.labels import VispyLabelsLayer\nfrom ..layers.points import VispyPointsLayer\nfrom ..layers.shapes import VispyShapesLayer\nfrom ..layers.surface import VispySurfaceLayer\nfrom ..layers.tracks import VispyTracksLayer\nfrom ..layers.vectors import VispyVectorsLayer\n\nlayer_to_visual = {\n Image: VispyImageLayer,\n Labels: VispyLabelsLayer,\n Points: VispyPointsLayer,\n Shapes: VispyShapesLayer,\n Surface: VispySurfaceLayer,\n Vectors: VispyVectorsLayer,\n Tracks: VispyTracksLayer,\n}\n\n\nif async_octree:\n from ..layers.image.experimental.octree_image import _OctreeImageBase\n from .experimental.vispy_tiled_image_layer import VispyTiledImageLayer\n\n # Insert _OctreeImageBase in front so it gets picked over plain Image.\n new_mapping = {_OctreeImageBase: VispyTiledImageLayer}\n new_mapping.update(layer_to_visual)\n layer_to_visual = new_mapping\n\n\ndef create_vispy_visual(layer: Layer) -> VispyBaseLayer:\n \"\"\"Create vispy visual for a layer based on its layer type.\n\n Parameters\n ----------\n layer : napari.layers._base_layer.Layer\n Layer that needs its property widget created.\n\n Returns\n -------\n visual : vispy.scene.visuals.VisualNode\n Vispy visual node\n \"\"\"\n for layer_type, visual_class in layer_to_visual.items():\n if isinstance(layer, layer_type):\n return visual_class(layer)\n\n raise TypeError(\n trans._(\n 'Could not find VispyLayer for layer of type {dtype}',\n deferred=True,\n dtype=type(layer),\n )\n )\n\n\ndef 
get_view_direction_in_scene_coordinates(\n view: ViewBox,\n ndim: int,\n dims_displayed: Tuple[int],\n) -> np.ndarray:\n \"\"\"Calculate the unit vector pointing in the direction of the view.\n\n This is only for 3D viewing, so it returns None when\n len(dims_displayed) == 2.\n Adapted From:\n https://stackoverflow.com/questions/37877592/\n get-view-direction-relative-to-scene-in-vispy/37882984\n\n Parameters\n ----------\n view : vispy.scene.widgets.viewbox.ViewBox\n The vispy view box object to get the view direction from.\n ndim : int\n The number of dimensions in the full nD dims model.\n This is typically from viewer.dims.ndim\n dims_displayed : Tuple[int]\n The indices of the dims displayed in the viewer.\n This is typically from viewer.dims.displayed.\n\n Returns\n -------\n view_vector : np.ndarray\n Unit vector in the direction of the view in scene coordinates.\n Axes are ordered zyx. If the viewer is in 2D\n (i.e., len(dims_displayed) == 2), view_vector is None.\n \"\"\"\n # only return a vector when viewing in 3D\n if len(dims_displayed) == 2:\n return None\n\n tform = view.scene.transform\n w, h = view.canvas.size\n\n # get a point at the center of the canvas\n # (homogeneous screen coords)\n screen_center = np.array([w / 2, h / 2, 0, 1])\n\n # find a point just in front of the center point\n # transform both to world coords and find the vector\n d1 = np.array([0, 0, 1, 0])\n point_in_front_of_screen_center = screen_center + d1\n p1 = tform.imap(point_in_front_of_screen_center)\n p0 = tform.imap(screen_center)\n d2 = p1 - p0\n\n # in 3D world coordinates\n d3 = d2[0:3]\n d4 = d3 / np.linalg.norm(d3)\n\n # data are ordered xyz on vispy Volume\n d4 = d4[[2, 1, 0]]\n view_dir_world = np.zeros((ndim,))\n for i, d in enumerate(dims_displayed):\n view_dir_world[d] = d4[i]\n\n return view_dir_world\n", "path": "napari/_vispy/utils/visual.py"}], "after_files": [{"content": "from typing import Tuple\n\nimport numpy as np\nfrom vispy.scene.widgets.viewbox import ViewBox\n\nfrom ...layers import (\n Image,\n Labels,\n Layer,\n Points,\n Shapes,\n Surface,\n Tracks,\n Vectors,\n)\nfrom ...utils.config import async_octree\nfrom ...utils.translations import trans\nfrom ..layers.base import VispyBaseLayer\nfrom ..layers.image import VispyImageLayer\nfrom ..layers.labels import VispyLabelsLayer\nfrom ..layers.points import VispyPointsLayer\nfrom ..layers.shapes import VispyShapesLayer\nfrom ..layers.surface import VispySurfaceLayer\nfrom ..layers.tracks import VispyTracksLayer\nfrom ..layers.vectors import VispyVectorsLayer\n\nlayer_to_visual = {\n Image: VispyImageLayer,\n Labels: VispyLabelsLayer,\n Points: VispyPointsLayer,\n Shapes: VispyShapesLayer,\n Surface: VispySurfaceLayer,\n Vectors: VispyVectorsLayer,\n Tracks: VispyTracksLayer,\n}\n\n\nif async_octree:\n from ...layers.image.experimental.octree_image import _OctreeImageBase\n from ..experimental.vispy_tiled_image_layer import VispyTiledImageLayer\n\n # Insert _OctreeImageBase in front so it gets picked over plain Image.\n new_mapping = {_OctreeImageBase: VispyTiledImageLayer}\n new_mapping.update(layer_to_visual)\n layer_to_visual = new_mapping\n\n\ndef create_vispy_visual(layer: Layer) -> VispyBaseLayer:\n \"\"\"Create vispy visual for a layer based on its layer type.\n\n Parameters\n ----------\n layer : napari.layers._base_layer.Layer\n Layer that needs its property widget created.\n\n Returns\n -------\n visual : vispy.scene.visuals.VisualNode\n Vispy visual node\n \"\"\"\n for layer_type, visual_class in 
layer_to_visual.items():\n if isinstance(layer, layer_type):\n return visual_class(layer)\n\n raise TypeError(\n trans._(\n 'Could not find VispyLayer for layer of type {dtype}',\n deferred=True,\n dtype=type(layer),\n )\n )\n\n\ndef get_view_direction_in_scene_coordinates(\n view: ViewBox,\n ndim: int,\n dims_displayed: Tuple[int],\n) -> np.ndarray:\n \"\"\"Calculate the unit vector pointing in the direction of the view.\n\n This is only for 3D viewing, so it returns None when\n len(dims_displayed) == 2.\n Adapted From:\n https://stackoverflow.com/questions/37877592/\n get-view-direction-relative-to-scene-in-vispy/37882984\n\n Parameters\n ----------\n view : vispy.scene.widgets.viewbox.ViewBox\n The vispy view box object to get the view direction from.\n ndim : int\n The number of dimensions in the full nD dims model.\n This is typically from viewer.dims.ndim\n dims_displayed : Tuple[int]\n The indices of the dims displayed in the viewer.\n This is typically from viewer.dims.displayed.\n\n Returns\n -------\n view_vector : np.ndarray\n Unit vector in the direction of the view in scene coordinates.\n Axes are ordered zyx. If the viewer is in 2D\n (i.e., len(dims_displayed) == 2), view_vector is None.\n \"\"\"\n # only return a vector when viewing in 3D\n if len(dims_displayed) == 2:\n return None\n\n tform = view.scene.transform\n w, h = view.canvas.size\n\n # get a point at the center of the canvas\n # (homogeneous screen coords)\n screen_center = np.array([w / 2, h / 2, 0, 1])\n\n # find a point just in front of the center point\n # transform both to world coords and find the vector\n d1 = np.array([0, 0, 1, 0])\n point_in_front_of_screen_center = screen_center + d1\n p1 = tform.imap(point_in_front_of_screen_center)\n p0 = tform.imap(screen_center)\n d2 = p1 - p0\n\n # in 3D world coordinates\n d3 = d2[0:3]\n d4 = d3 / np.linalg.norm(d3)\n\n # data are ordered xyz on vispy Volume\n d4 = d4[[2, 1, 0]]\n view_dir_world = np.zeros((ndim,))\n for i, d in enumerate(dims_displayed):\n view_dir_world[d] = d4[i]\n\n return view_dir_world\n", "path": "napari/_vispy/utils/visual.py"}]} | 1,767 | 179 |
gh_patches_debug_4034 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-1334 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Messenger - qa_data_collection task
Running the messenger qa_data_collection task, it gives me the following error:
Exception in user code:
Traceback (most recent call last):
File "/home/micol/PycharmProjects/parlai_project/ParlAI/parlai/messenger/core/messenger_manager.py", line 552, in _task_function
task_functions[world_type](self, opt, agents, conversation_id)
File "run.py", line 45, in run_conversation
task = task_class(task_opt)
File "/home/micol/PycharmProjects/parlai_project/ParlAI/parlai/tasks/squad/agents.py", line 114, in __init__
build(opt)
File "/home/micol/PycharmProjects/parlai_project/ParlAI/parlai/tasks/squad/build.py", line 35, in build
if 'fulldoc' in opt['task']:
KeyError: 'task'
"Facebook response from message send: {'recipient_id': '2172740222785536', 'message_id': 'm_rDniDdXXKp9KY2DDR7nYxf46FEm_jG446gMYoyCBkXQ1qs8d7VUug8eG31SNJbRdWO5T8BAyL5R4_1-p3tHWVQ'}"
Message data received: {'object': 'page', 'entry': [{'id': '336760520259323', 'time': 1546939725227, 'messaging': [{'sender': {'id': '336760520259323'}, 'recipient': {'id': '2172740222785536'}, 'timestamp': 1546939724784, 'message': {'is_echo': True, 'app_id': 322301978494112, 'mid': 'rDniDdXXKp9KY2DDR7nYxf46FEm_jG446gMYoyCBkXQ1qs8d7VUug8eG31SNJbRdWO5T8BAyL5R4_1-p3tHWVQ', 'seq': 32111, 'text': 'Sorry, this world closed. Returning to overworld.'}}]}]}
Message data received: {'object': 'page', 'entry': [{'id': '336760520259323', 'time': 1546939725478, 'messaging': [{'sender': {'id': '2172740222785536'}, 'recipient': {'id': '336760520259323'}, 'timestamp': 1546939725466, 'delivery': {'mids': ['rDniDdXXKp9KY2DDR7nYxf46FEm_jG446gMYoyCBkXQ1qs8d7VUug8eG31SNJbRdWO5T8BAyL5R4_1-p3tHWVQ'], 'watermark': 1546939724784, 'seq': 0}}]}]}
What is the possible solution?
Thank you
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parlai/messenger/tasks/qa_data_collection/run.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) 2017-present, Facebook, Inc.
4 # All rights reserved.
5 # This source code is licensed under the BSD-style license found in the
6 # LICENSE file in the root directory of this source tree. An additional grant
7 # of patent rights can be found in the PATENTS file in the same directory.
8 from parlai.core.params import ParlaiParser
9 from parlai.messenger.tasks.qa_data_collection.worlds import \
10 QADataCollectionWorld
11 from parlai.messenger.core.messenger_manager import MessengerManager
12 from parlai.messenger.core.worlds import SimpleMessengerOverworld as \
13 MessengerOverworld
14 import os
15 import importlib
16
17
18 def main():
19 argparser = ParlaiParser(False, False)
20 argparser.add_parlai_data_path()
21 argparser.add_messenger_args()
22 opt = argparser.parse_args()
23 opt['task'] = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
24
25 # Initialize a SQuAD teacher agent, which we will get context from
26 module_name = 'parlai.tasks.squad.agents'
27 class_name = 'DefaultTeacher'
28 my_module = importlib.import_module(module_name)
29 task_class = getattr(my_module, class_name)
30 task_opt = {}
31 task_opt['datatype'] = 'train'
32 task_opt['datapath'] = opt['datapath']
33
34 messenger_manager = MessengerManager(opt=opt)
35 messenger_manager.setup_server()
36 messenger_manager.init_new_state()
37
38 def get_overworld(opt, agent):
39 return MessengerOverworld(opt, agent)
40
41 def assign_agent_role(agent):
42 agent[0].disp_id = 'Agent'
43
44 def run_conversation(manager, opt, agents, task_id):
45 task = task_class(task_opt)
46 agent = agents[0]
47 world = QADataCollectionWorld(
48 opt=opt,
49 task=task,
50 agent=agent
51 )
52 while not world.episode_done():
53 world.parley()
54 world.shutdown()
55
56 # World with no onboarding
57 messenger_manager.set_onboard_functions({'default': None})
58 task_functions = {'default': run_conversation}
59 assign_agent_roles = {'default': assign_agent_role}
60 messenger_manager.set_agents_required({'default': 1})
61
62 messenger_manager.set_overworld_func(get_overworld)
63 messenger_manager.setup_socket()
64 try:
65 messenger_manager.start_new_run()
66 messenger_manager.start_task(
67 assign_role_functions=assign_agent_roles,
68 task_functions=task_functions,
69 )
70 except BaseException:
71 raise
72 finally:
73 messenger_manager.shutdown()
74
75
76 if __name__ == '__main__':
77 main()
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parlai/messenger/tasks/qa_data_collection/run.py b/parlai/messenger/tasks/qa_data_collection/run.py
--- a/parlai/messenger/tasks/qa_data_collection/run.py
+++ b/parlai/messenger/tasks/qa_data_collection/run.py
@@ -27,7 +27,7 @@
class_name = 'DefaultTeacher'
my_module = importlib.import_module(module_name)
task_class = getattr(my_module, class_name)
- task_opt = {}
+ task_opt = opt.copy()
task_opt['datatype'] = 'train'
task_opt['datapath'] = opt['datapath']
| {"golden_diff": "diff --git a/parlai/messenger/tasks/qa_data_collection/run.py b/parlai/messenger/tasks/qa_data_collection/run.py\n--- a/parlai/messenger/tasks/qa_data_collection/run.py\n+++ b/parlai/messenger/tasks/qa_data_collection/run.py\n@@ -27,7 +27,7 @@\n class_name = 'DefaultTeacher'\n my_module = importlib.import_module(module_name)\n task_class = getattr(my_module, class_name)\n- task_opt = {}\n+ task_opt = opt.copy()\n task_opt['datatype'] = 'train'\n task_opt['datapath'] = opt['datapath']\n", "issue": "Messenger - qa_data_collection task \nRunning the messenger qa_data_collection task, it gives me te following error:\r\nException in user code:\r\nTraceback (most recent call last):\r\n File \"/home/micol/PycharmProjects/parlai_project/ParlAI/parlai/messenger/core/messenger_manager.py\", line 552, in _task_function\r\n task_functions[world_type](self, opt, agents, conversation_id)\r\n File \"run.py\", line 45, in run_conversation\r\n task = task_class(task_opt)\r\n File \"/home/micol/PycharmProjects/parlai_project/ParlAI/parlai/tasks/squad/agents.py\", line 114, in __init__\r\n build(opt)\r\n File \"/home/micol/PycharmProjects/parlai_project/ParlAI/parlai/tasks/squad/build.py\", line 35, in build\r\n if 'fulldoc' in opt['task']:\r\nKeyError: 'task'\r\n\"Facebook response from message send: {'recipient_id': '2172740222785536', 'message_id': 'm_rDniDdXXKp9KY2DDR7nYxf46FEm_jG446gMYoyCBkXQ1qs8d7VUug8eG31SNJbRdWO5T8BAyL5R4_1-p3tHWVQ'}\"\r\nMessage data received: {'object': 'page', 'entry': [{'id': '336760520259323', 'time': 1546939725227, 'messaging': [{'sender': {'id': '336760520259323'}, 'recipient': {'id': '2172740222785536'}, 'timestamp': 1546939724784, 'message': {'is_echo': True, 'app_id': 322301978494112, 'mid': 'rDniDdXXKp9KY2DDR7nYxf46FEm_jG446gMYoyCBkXQ1qs8d7VUug8eG31SNJbRdWO5T8BAyL5R4_1-p3tHWVQ', 'seq': 32111, 'text': 'Sorry, this world closed. Returning to overworld.'}}]}]}\r\nMessage data received: {'object': 'page', 'entry': [{'id': '336760520259323', 'time': 1546939725478, 'messaging': [{'sender': {'id': '2172740222785536'}, 'recipient': {'id': '336760520259323'}, 'timestamp': 1546939725466, 'delivery': {'mids': ['rDniDdXXKp9KY2DDR7nYxf46FEm_jG446gMYoyCBkXQ1qs8d7VUug8eG31SNJbRdWO5T8BAyL5R4_1-p3tHWVQ'], 'watermark': 1546939724784, 'seq': 0}}]}]}\r\n \r\nWhat is the possible solution?\r\n\r\nThank you\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. 
An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\nfrom parlai.core.params import ParlaiParser\nfrom parlai.messenger.tasks.qa_data_collection.worlds import \\\n QADataCollectionWorld\nfrom parlai.messenger.core.messenger_manager import MessengerManager\nfrom parlai.messenger.core.worlds import SimpleMessengerOverworld as \\\n MessengerOverworld\nimport os\nimport importlib\n\n\ndef main():\n argparser = ParlaiParser(False, False)\n argparser.add_parlai_data_path()\n argparser.add_messenger_args()\n opt = argparser.parse_args()\n opt['task'] = os.path.basename(os.path.dirname(os.path.abspath(__file__)))\n\n # Initialize a SQuAD teacher agent, which we will get context from\n module_name = 'parlai.tasks.squad.agents'\n class_name = 'DefaultTeacher'\n my_module = importlib.import_module(module_name)\n task_class = getattr(my_module, class_name)\n task_opt = {}\n task_opt['datatype'] = 'train'\n task_opt['datapath'] = opt['datapath']\n\n messenger_manager = MessengerManager(opt=opt)\n messenger_manager.setup_server()\n messenger_manager.init_new_state()\n\n def get_overworld(opt, agent):\n return MessengerOverworld(opt, agent)\n\n def assign_agent_role(agent):\n agent[0].disp_id = 'Agent'\n\n def run_conversation(manager, opt, agents, task_id):\n task = task_class(task_opt)\n agent = agents[0]\n world = QADataCollectionWorld(\n opt=opt,\n task=task,\n agent=agent\n )\n while not world.episode_done():\n world.parley()\n world.shutdown()\n\n # World with no onboarding\n messenger_manager.set_onboard_functions({'default': None})\n task_functions = {'default': run_conversation}\n assign_agent_roles = {'default': assign_agent_role}\n messenger_manager.set_agents_required({'default': 1})\n\n messenger_manager.set_overworld_func(get_overworld)\n messenger_manager.setup_socket()\n try:\n messenger_manager.start_new_run()\n messenger_manager.start_task(\n assign_role_functions=assign_agent_roles,\n task_functions=task_functions,\n )\n except BaseException:\n raise\n finally:\n messenger_manager.shutdown()\n\n\nif __name__ == '__main__':\n main()\n", "path": "parlai/messenger/tasks/qa_data_collection/run.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. 
An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\nfrom parlai.core.params import ParlaiParser\nfrom parlai.messenger.tasks.qa_data_collection.worlds import \\\n QADataCollectionWorld\nfrom parlai.messenger.core.messenger_manager import MessengerManager\nfrom parlai.messenger.core.worlds import SimpleMessengerOverworld as \\\n MessengerOverworld\nimport os\nimport importlib\n\n\ndef main():\n argparser = ParlaiParser(False, False)\n argparser.add_parlai_data_path()\n argparser.add_messenger_args()\n opt = argparser.parse_args()\n opt['task'] = os.path.basename(os.path.dirname(os.path.abspath(__file__)))\n\n # Initialize a SQuAD teacher agent, which we will get context from\n module_name = 'parlai.tasks.squad.agents'\n class_name = 'DefaultTeacher'\n my_module = importlib.import_module(module_name)\n task_class = getattr(my_module, class_name)\n task_opt = opt.copy()\n task_opt['datatype'] = 'train'\n task_opt['datapath'] = opt['datapath']\n\n messenger_manager = MessengerManager(opt=opt)\n messenger_manager.setup_server()\n messenger_manager.init_new_state()\n\n def get_overworld(opt, agent):\n return MessengerOverworld(opt, agent)\n\n def assign_agent_role(agent):\n agent[0].disp_id = 'Agent'\n\n def run_conversation(manager, opt, agents, task_id):\n task = task_class(task_opt)\n agent = agents[0]\n world = QADataCollectionWorld(\n opt=opt,\n task=task,\n agent=agent\n )\n while not world.episode_done():\n world.parley()\n world.shutdown()\n\n # World with no onboarding\n messenger_manager.set_onboard_functions({'default': None})\n task_functions = {'default': run_conversation}\n assign_agent_roles = {'default': assign_agent_role}\n messenger_manager.set_agents_required({'default': 1})\n\n messenger_manager.set_overworld_func(get_overworld)\n messenger_manager.setup_socket()\n try:\n messenger_manager.start_new_run()\n messenger_manager.start_task(\n assign_role_functions=assign_agent_roles,\n task_functions=task_functions,\n )\n except BaseException:\n raise\n finally:\n messenger_manager.shutdown()\n\n\nif __name__ == '__main__':\n main()\n", "path": "parlai/messenger/tasks/qa_data_collection/run.py"}]} | 1,793 | 143 |
gh_patches_debug_36930 | rasdani/github-patches | git_diff | kedro-org__kedro-1761 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve Jupyter workflow when configuration is broken
## Description
> Is your feature request related to a problem? A clear and concise description of what the problem is: "I'm always frustrated when ..."
Using the spaceflights demo on 0.18.1 if I open a `kedro jupyter notebook` session and write `catalog.list()` I will get the following:

If I break the `catalog.yml` file so that it is unparsable and run `kedro catalog list` (or any other command like `kedro run`) I will get a helpful parse error in the terminal:
<img width="654" alt="image" src="https://user-images.githubusercontent.com/35801847/172139490-a079095e-52d7-4c80-a8d3-15c5a675239e.png">
However, if I start a `kedro jupyter notebook` session in this state, I get no warning or messaging that Kedro is unable to create the `catalog`, `session` and other global variables. As a user I get this confusing state where the `catalog` is no longer available to me:
<img width="636" alt="image" src="https://user-images.githubusercontent.com/35801847/172139614-8dabbc13-38a4-4785-9afe-bbddf77b974a.png">
## Context
> Why is this change important to you? How would you use it? How can it benefit other users?
I think novice users can get stuck here with no direction as to why the catalog is not available.
## Possible Implementation
Surface the warning in the Jupyter context, perhaps even make dummy `catalog`, `context`, and `session` objects available that expose the log message when called...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/extras/extensions/ipython.py`
Content:
```
1 # pylint: disable=import-outside-toplevel,global-statement,invalid-name
2 """
3 This script creates an IPython extension to load Kedro-related variables in
4 local scope.
5 """
6 import logging
7 import sys
8 from pathlib import Path
9 from typing import Any, Dict
10
11 from IPython import get_ipython
12 from IPython.core.magic import needs_local_scope, register_line_magic
13
14 logger = logging.getLogger(__name__)
15 default_project_path = Path.cwd()
16
17
18 def _remove_cached_modules(package_name):
19 to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]
20 # `del` is used instead of `reload()` because: If the new version of a module does not
21 # define a name that was defined by the old version, the old definition remains.
22 for module in to_remove:
23 del sys.modules[module] # pragma: no cover
24
25
26 def _find_kedro_project(current_dir: Path): # pragma: no cover
27 from kedro.framework.startup import _is_project
28
29 while current_dir != current_dir.parent:
30 if _is_project(current_dir):
31 return current_dir
32 current_dir = current_dir.parent
33
34 return None
35
36
37 def reload_kedro(
38 path: str = None, env: str = None, extra_params: Dict[str, Any] = None
39 ):
40 """Line magic which reloads all Kedro default variables.
41 Setting the path will also make it default for subsequent calls.
42
43
44 """
45 from kedro.framework.cli import load_entry_points
46 from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import
47 from kedro.framework.project import configure_project, pipelines
48 from kedro.framework.session import KedroSession
49 from kedro.framework.startup import bootstrap_project
50
51 # If a path is provided, set it as default for subsequent calls
52 global default_project_path
53 if path:
54 default_project_path = Path(path).expanduser().resolve()
55 logger.info("Updated path to Kedro project: %s", default_project_path)
56 else:
57 logger.info("No path argument was provided. Using: %s", default_project_path)
58
59 metadata = bootstrap_project(default_project_path)
60 _remove_cached_modules(metadata.package_name)
61 configure_project(metadata.package_name)
62
63 session = KedroSession.create(
64 metadata.package_name, default_project_path, env=env, extra_params=extra_params
65 )
66 logger.debug("Loading the context from %s", default_project_path)
67 context = session.load_context()
68 catalog = context.catalog
69
70 get_ipython().push(
71 variables={
72 "context": context,
73 "catalog": catalog,
74 "session": session,
75 "pipelines": pipelines,
76 }
77 )
78
79 logger.info("Kedro project %s", str(metadata.project_name))
80 logger.info(
81 "Defined global variable 'context', 'session', 'catalog' and 'pipelines'"
82 )
83
84 for line_magic in load_entry_points("line_magic"):
85 register_line_magic(needs_local_scope(line_magic))
86 logger.info("Registered line magic '%s'", line_magic.__name__) # type: ignore
87
88
89 def load_ipython_extension(ipython):
90 """Main entry point when %load_ext is executed"""
91
92 global default_project_path
93
94 ipython.register_magic_function(reload_kedro, "line", "reload_kedro")
95
96 default_project_path = _find_kedro_project(Path.cwd())
97
98 try:
99 reload_kedro(default_project_path)
100 except (ImportError, ModuleNotFoundError):
101 logger.error("Kedro appears not to be installed in your current environment.")
102 except Exception: # pylint: disable=broad-except
103 logger.warning(
104 "Kedro extension was registered but couldn't find a Kedro project. "
105 "Make sure you run '%reload_kedro <project_root>'."
106 )
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kedro/extras/extensions/ipython.py b/kedro/extras/extensions/ipython.py
--- a/kedro/extras/extensions/ipython.py
+++ b/kedro/extras/extensions/ipython.py
@@ -1,4 +1,4 @@
-# pylint: disable=import-outside-toplevel,global-statement,invalid-name
+# pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals
"""
This script creates an IPython extension to load Kedro-related variables in
local scope.
@@ -8,9 +8,6 @@
from pathlib import Path
from typing import Any, Dict
-from IPython import get_ipython
-from IPython.core.magic import needs_local_scope, register_line_magic
-
logger = logging.getLogger(__name__)
default_project_path = Path.cwd()
@@ -39,9 +36,10 @@
):
"""Line magic which reloads all Kedro default variables.
Setting the path will also make it default for subsequent calls.
-
-
"""
+ from IPython import get_ipython
+ from IPython.core.magic import needs_local_scope, register_line_magic
+
from kedro.framework.cli import load_entry_points
from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import
from kedro.framework.project import configure_project, pipelines
@@ -63,7 +61,6 @@
session = KedroSession.create(
metadata.package_name, default_project_path, env=env, extra_params=extra_params
)
- logger.debug("Loading the context from %s", default_project_path)
context = session.load_context()
catalog = context.catalog
@@ -95,12 +92,11 @@
default_project_path = _find_kedro_project(Path.cwd())
- try:
- reload_kedro(default_project_path)
- except (ImportError, ModuleNotFoundError):
- logger.error("Kedro appears not to be installed in your current environment.")
- except Exception: # pylint: disable=broad-except
+ if default_project_path is None:
logger.warning(
"Kedro extension was registered but couldn't find a Kedro project. "
"Make sure you run '%reload_kedro <project_root>'."
)
+ return
+
+ reload_kedro(default_project_path)
| {"golden_diff": "diff --git a/kedro/extras/extensions/ipython.py b/kedro/extras/extensions/ipython.py\n--- a/kedro/extras/extensions/ipython.py\n+++ b/kedro/extras/extensions/ipython.py\n@@ -1,4 +1,4 @@\n-# pylint: disable=import-outside-toplevel,global-statement,invalid-name\n+# pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals\n \"\"\"\n This script creates an IPython extension to load Kedro-related variables in\n local scope.\n@@ -8,9 +8,6 @@\n from pathlib import Path\n from typing import Any, Dict\n \n-from IPython import get_ipython\n-from IPython.core.magic import needs_local_scope, register_line_magic\n-\n logger = logging.getLogger(__name__)\n default_project_path = Path.cwd()\n \n@@ -39,9 +36,10 @@\n ):\n \"\"\"Line magic which reloads all Kedro default variables.\n Setting the path will also make it default for subsequent calls.\n-\n-\n \"\"\"\n+ from IPython import get_ipython\n+ from IPython.core.magic import needs_local_scope, register_line_magic\n+\n from kedro.framework.cli import load_entry_points\n from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import\n from kedro.framework.project import configure_project, pipelines\n@@ -63,7 +61,6 @@\n session = KedroSession.create(\n metadata.package_name, default_project_path, env=env, extra_params=extra_params\n )\n- logger.debug(\"Loading the context from %s\", default_project_path)\n context = session.load_context()\n catalog = context.catalog\n \n@@ -95,12 +92,11 @@\n \n default_project_path = _find_kedro_project(Path.cwd())\n \n- try:\n- reload_kedro(default_project_path)\n- except (ImportError, ModuleNotFoundError):\n- logger.error(\"Kedro appears not to be installed in your current environment.\")\n- except Exception: # pylint: disable=broad-except\n+ if default_project_path is None:\n logger.warning(\n \"Kedro extension was registered but couldn't find a Kedro project. \"\n \"Make sure you run '%reload_kedro <project_root>'.\"\n )\n+ return\n+\n+ reload_kedro(default_project_path)\n", "issue": "Improve Jupyter workflow when configuration is broken\n## Description\r\n> Is your feature request related to a problem? A clear and concise description of what the problem is: \"I'm always frustrated when ...\"\r\n\r\nUsing the spaceflights demo on 0.18.1 if I open a `kedro jupyter notebook` session and write `catalog.list()` I will get the following:\r\n\r\n\r\n\r\nIf I break the `catalog.yml` file so that it is unparsable and run `kedro catalog list` (or any other command like `kedro run` I will get a helpful parse error in the terminal: \r\n\r\n<img width=\"654\" alt=\"image\" src=\"https://user-images.githubusercontent.com/35801847/172139490-a079095e-52d7-4c80-a8d3-15c5a675239e.png\">\r\n\r\nHowever, if I start a `kedro jupyter notebook` session in this state, I get no warning or messaging that Kedro is unable to create the `catalog`, `session` and other global variables. As a user I get this confusing state where the `catalog` is no longer available to me:\r\n\r\n<img width=\"636\" alt=\"image\" src=\"https://user-images.githubusercontent.com/35801847/172139614-8dabbc13-38a4-4785-9afe-bbddf77b974a.png\">\r\n\r\n\r\n## Context\r\n> Why is this change important to you? How would you use it? 
How can it benefit other users?\r\n\r\nI think novice users can get stuck here with no direction as to why the catalog is not available.\r\n\r\n## Possible Implementation\r\n\r\nSurface warning to the jupyter context, perhaps even make a dummy `catalog`, `context`, `session` objects available that expose the log message when called...\n", "before_files": [{"content": "# pylint: disable=import-outside-toplevel,global-statement,invalid-name\n\"\"\"\nThis script creates an IPython extension to load Kedro-related variables in\nlocal scope.\n\"\"\"\nimport logging\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Dict\n\nfrom IPython import get_ipython\nfrom IPython.core.magic import needs_local_scope, register_line_magic\n\nlogger = logging.getLogger(__name__)\ndefault_project_path = Path.cwd()\n\n\ndef _remove_cached_modules(package_name):\n to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]\n # `del` is used instead of `reload()` because: If the new version of a module does not\n # define a name that was defined by the old version, the old definition remains.\n for module in to_remove:\n del sys.modules[module] # pragma: no cover\n\n\ndef _find_kedro_project(current_dir: Path): # pragma: no cover\n from kedro.framework.startup import _is_project\n\n while current_dir != current_dir.parent:\n if _is_project(current_dir):\n return current_dir\n current_dir = current_dir.parent\n\n return None\n\n\ndef reload_kedro(\n path: str = None, env: str = None, extra_params: Dict[str, Any] = None\n):\n \"\"\"Line magic which reloads all Kedro default variables.\n Setting the path will also make it default for subsequent calls.\n\n\n \"\"\"\n from kedro.framework.cli import load_entry_points\n from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import\n from kedro.framework.project import configure_project, pipelines\n from kedro.framework.session import KedroSession\n from kedro.framework.startup import bootstrap_project\n\n # If a path is provided, set it as default for subsequent calls\n global default_project_path\n if path:\n default_project_path = Path(path).expanduser().resolve()\n logger.info(\"Updated path to Kedro project: %s\", default_project_path)\n else:\n logger.info(\"No path argument was provided. 
Using: %s\", default_project_path)\n\n metadata = bootstrap_project(default_project_path)\n _remove_cached_modules(metadata.package_name)\n configure_project(metadata.package_name)\n\n session = KedroSession.create(\n metadata.package_name, default_project_path, env=env, extra_params=extra_params\n )\n logger.debug(\"Loading the context from %s\", default_project_path)\n context = session.load_context()\n catalog = context.catalog\n\n get_ipython().push(\n variables={\n \"context\": context,\n \"catalog\": catalog,\n \"session\": session,\n \"pipelines\": pipelines,\n }\n )\n\n logger.info(\"Kedro project %s\", str(metadata.project_name))\n logger.info(\n \"Defined global variable 'context', 'session', 'catalog' and 'pipelines'\"\n )\n\n for line_magic in load_entry_points(\"line_magic\"):\n register_line_magic(needs_local_scope(line_magic))\n logger.info(\"Registered line magic '%s'\", line_magic.__name__) # type: ignore\n\n\ndef load_ipython_extension(ipython):\n \"\"\"Main entry point when %load_ext is executed\"\"\"\n\n global default_project_path\n\n ipython.register_magic_function(reload_kedro, \"line\", \"reload_kedro\")\n\n default_project_path = _find_kedro_project(Path.cwd())\n\n try:\n reload_kedro(default_project_path)\n except (ImportError, ModuleNotFoundError):\n logger.error(\"Kedro appears not to be installed in your current environment.\")\n except Exception: # pylint: disable=broad-except\n logger.warning(\n \"Kedro extension was registered but couldn't find a Kedro project. \"\n \"Make sure you run '%reload_kedro <project_root>'.\"\n )\n", "path": "kedro/extras/extensions/ipython.py"}], "after_files": [{"content": "# pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals\n\"\"\"\nThis script creates an IPython extension to load Kedro-related variables in\nlocal scope.\n\"\"\"\nimport logging\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Dict\n\nlogger = logging.getLogger(__name__)\ndefault_project_path = Path.cwd()\n\n\ndef _remove_cached_modules(package_name):\n to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]\n # `del` is used instead of `reload()` because: If the new version of a module does not\n # define a name that was defined by the old version, the old definition remains.\n for module in to_remove:\n del sys.modules[module] # pragma: no cover\n\n\ndef _find_kedro_project(current_dir: Path): # pragma: no cover\n from kedro.framework.startup import _is_project\n\n while current_dir != current_dir.parent:\n if _is_project(current_dir):\n return current_dir\n current_dir = current_dir.parent\n\n return None\n\n\ndef reload_kedro(\n path: str = None, env: str = None, extra_params: Dict[str, Any] = None\n):\n \"\"\"Line magic which reloads all Kedro default variables.\n Setting the path will also make it default for subsequent calls.\n \"\"\"\n from IPython import get_ipython\n from IPython.core.magic import needs_local_scope, register_line_magic\n\n from kedro.framework.cli import load_entry_points\n from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import\n from kedro.framework.project import configure_project, pipelines\n from kedro.framework.session import KedroSession\n from kedro.framework.startup import bootstrap_project\n\n # If a path is provided, set it as default for subsequent calls\n global default_project_path\n if path:\n default_project_path = Path(path).expanduser().resolve()\n logger.info(\"Updated path to Kedro project: %s\", default_project_path)\n 
else:\n logger.info(\"No path argument was provided. Using: %s\", default_project_path)\n\n metadata = bootstrap_project(default_project_path)\n _remove_cached_modules(metadata.package_name)\n configure_project(metadata.package_name)\n\n session = KedroSession.create(\n metadata.package_name, default_project_path, env=env, extra_params=extra_params\n )\n context = session.load_context()\n catalog = context.catalog\n\n get_ipython().push(\n variables={\n \"context\": context,\n \"catalog\": catalog,\n \"session\": session,\n \"pipelines\": pipelines,\n }\n )\n\n logger.info(\"Kedro project %s\", str(metadata.project_name))\n logger.info(\n \"Defined global variable 'context', 'session', 'catalog' and 'pipelines'\"\n )\n\n for line_magic in load_entry_points(\"line_magic\"):\n register_line_magic(needs_local_scope(line_magic))\n logger.info(\"Registered line magic '%s'\", line_magic.__name__) # type: ignore\n\n\ndef load_ipython_extension(ipython):\n \"\"\"Main entry point when %load_ext is executed\"\"\"\n\n global default_project_path\n\n ipython.register_magic_function(reload_kedro, \"line\", \"reload_kedro\")\n\n default_project_path = _find_kedro_project(Path.cwd())\n\n if default_project_path is None:\n logger.warning(\n \"Kedro extension was registered but couldn't find a Kedro project. \"\n \"Make sure you run '%reload_kedro <project_root>'.\"\n )\n return\n\n reload_kedro(default_project_path)\n", "path": "kedro/extras/extensions/ipython.py"}]} | 1,794 | 521 |
gh_patches_debug_43422 | rasdani/github-patches | git_diff | encode__starlette-92 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `allow_origin_regex` to CORSMiddleware.
It'd be helpful if `CORSMiddleware` supported an `allow_origin_regex`, so that users could do...
```python
# Enforce a subdomain CORS policy
app.add_middleware(CORSMiddleware, allow_origin_regex="(http|https)://*.example.com")
```
Or...
```python
# Enforce an HTTPS-only CORS policy.
app.add_middleware(CORSMiddleware, allow_origin_regex="https://*")
```
The string should be compiled to a regex by the middleware and matches should be anchored to the start/end of the origin string.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/cors.py`
Content:
```
1 from starlette.datastructures import Headers, MutableHeaders, URL
2 from starlette.responses import PlainTextResponse
3 from starlette.types import ASGIApp, ASGIInstance, Scope
4 import functools
5 import typing
6
7
8 ALL_METHODS = ("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT")
9
10
11 class CORSMiddleware:
12 def __init__(
13 self,
14 app: ASGIApp,
15 allow_origins: typing.Sequence[str] = (),
16 allow_methods: typing.Sequence[str] = ("GET",),
17 allow_headers: typing.Sequence[str] = (),
18 allow_credentials: bool = False,
19 expose_headers: typing.Sequence[str] = (),
20 max_age: int = 600,
21 ):
22
23 if "*" in allow_methods:
24 allow_methods = ALL_METHODS
25
26 simple_headers = {}
27 if "*" in allow_origins:
28 simple_headers["Access-Control-Allow-Origin"] = "*"
29 if allow_credentials:
30 simple_headers["Access-Control-Allow-Credentials"] = "true"
31 if expose_headers:
32 simple_headers["Access-Control-Expose-Headers"] = ", ".join(expose_headers)
33
34 preflight_headers = {}
35 if "*" in allow_origins:
36 preflight_headers["Access-Control-Allow-Origin"] = "*"
37 else:
38 preflight_headers["Vary"] = "Origin"
39 preflight_headers.update(
40 {
41 "Access-Control-Allow-Methods": ", ".join(allow_methods),
42 "Access-Control-Max-Age": str(max_age),
43 }
44 )
45 if allow_headers and "*" not in allow_headers:
46 preflight_headers["Access-Control-Allow-Headers"] = ", ".join(allow_headers)
47 if allow_credentials:
48 preflight_headers["Access-Control-Allow-Credentials"] = "true"
49
50 self.app = app
51 self.allow_origins = allow_origins
52 self.allow_methods = allow_methods
53 self.allow_headers = allow_headers
54 self.allow_all_origins = "*" in allow_origins
55 self.allow_all_headers = "*" in allow_headers
56 self.simple_headers = simple_headers
57 self.preflight_headers = preflight_headers
58
59 def __call__(self, scope: Scope):
60 if scope["type"] == "http":
61 method = scope["method"]
62 headers = Headers(scope["headers"])
63 origin = headers.get("origin")
64
65 if origin is not None:
66 if method == "OPTIONS" and "access-control-request-method" in headers:
67 return self.preflight_response(request_headers=headers)
68 else:
69 return functools.partial(
70 self.simple_response, scope=scope, origin=origin
71 )
72
73 return self.app(scope)
74
75 def preflight_response(self, request_headers):
76 requested_origin = request_headers["origin"]
77 requested_method = request_headers["access-control-request-method"]
78 requested_headers = request_headers.get("access-control-request-headers")
79 requested_cookie = "cookie" in request_headers
80
81 headers = dict(self.preflight_headers)
82 failures = []
83
84 # If we only allow specific origins, then we have to mirror back
85 # the Origin header in the response.
86 if not self.allow_all_origins:
87 if requested_origin in self.allow_origins:
88 headers["Access-Control-Allow-Origin"] = requested_origin
89 else:
90 failures.append("origin")
91
92 if requested_method not in self.allow_methods:
93 failures.append("method")
94
95 # If we allow all headers, then we have to mirror back any requested
96 # headers in the response.
97 if self.allow_all_headers and requested_headers is not None:
98 headers["Access-Control-Allow-Headers"] = requested_headers
99 elif requested_headers is not None:
100 for header in requested_headers.split(","):
101 if header.strip() not in self.allow_headers:
102 failures.append("headers")
103
104 # We don't strictly need to use 400 responses here, since its up to
105 # the browser to enforce the CORS policy, but its more informative
106 # if we do.
107 if failures:
108 failure_text = "Disallowed CORS " + ", ".join(failures)
109 return PlainTextResponse(failure_text, status_code=400, headers=headers)
110
111 return PlainTextResponse("OK", status_code=200, headers=headers)
112
113 async def simple_response(self, receive, send, scope=None, origin=None):
114 inner = self.app(scope)
115 send = functools.partial(self.send, send=send, origin=origin)
116 await inner(receive, send)
117
118 async def send(self, message, send=None, origin=None):
119 if message["type"] != "http.response.start":
120 await send(message)
121 return
122
123 message.setdefault("headers", [])
124 headers = MutableHeaders(message["headers"])
125
126 # If we only allow specific origins, then we have to mirror back
127 # the Origin header in the response.
128 if not self.allow_all_origins and origin in self.allow_origins:
129 headers["Access-Control-Allow-Origin"] = origin
130 headers.update(self.simple_headers)
131 await send(message)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/middleware/cors.py b/starlette/middleware/cors.py
--- a/starlette/middleware/cors.py
+++ b/starlette/middleware/cors.py
@@ -3,6 +3,7 @@
from starlette.types import ASGIApp, ASGIInstance, Scope
import functools
import typing
+import re
ALL_METHODS = ("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT")
@@ -16,6 +17,7 @@
allow_methods: typing.Sequence[str] = ("GET",),
allow_headers: typing.Sequence[str] = (),
allow_credentials: bool = False,
+ allow_origin_regex: str = None,
expose_headers: typing.Sequence[str] = (),
max_age: int = 600,
):
@@ -23,6 +25,10 @@
if "*" in allow_methods:
allow_methods = ALL_METHODS
+ if allow_origin_regex is not None:
+ regex = re.compile(allow_origin_regex)
+ allow_origin_regex = regex
+
simple_headers = {}
if "*" in allow_origins:
simple_headers["Access-Control-Allow-Origin"] = "*"
@@ -53,6 +59,7 @@
self.allow_headers = allow_headers
self.allow_all_origins = "*" in allow_origins
self.allow_all_headers = "*" in allow_headers
+ self.allow_origin_regex = allow_origin_regex
self.simple_headers = simple_headers
self.preflight_headers = preflight_headers
@@ -66,12 +73,22 @@
if method == "OPTIONS" and "access-control-request-method" in headers:
return self.preflight_response(request_headers=headers)
else:
- return functools.partial(
- self.simple_response, scope=scope, origin=origin
- )
+ if self.is_allowed_origin(origin=origin):
+ return functools.partial(
+ self.simple_response, scope=scope, origin=origin
+ )
+ return PlainTextResponse("Disallowed CORS origin", status_code=400)
return self.app(scope)
+ def is_allowed_origin(self, origin):
+ if self.allow_origin_regex:
+ return self.allow_origin_regex.match(origin)
+ if self.allow_all_origins:
+ return True
+
+ return origin in self.allow_origins
+
def preflight_response(self, request_headers):
requested_origin = request_headers["origin"]
requested_method = request_headers["access-control-request-method"]
@@ -84,7 +101,7 @@
# If we only allow specific origins, then we have to mirror back
# the Origin header in the response.
if not self.allow_all_origins:
- if requested_origin in self.allow_origins:
+ if self.is_allowed_origin(origin=requested_origin):
headers["Access-Control-Allow-Origin"] = requested_origin
else:
failures.append("origin")
@@ -125,7 +142,7 @@
# If we only allow specific origins, then we have to mirror back
# the Origin header in the response.
- if not self.allow_all_origins and origin in self.allow_origins:
+ if not self.allow_all_origins and self.is_allowed_origin(origin=origin):
headers["Access-Control-Allow-Origin"] = origin
headers.update(self.simple_headers)
await send(message)
| {"golden_diff": "diff --git a/starlette/middleware/cors.py b/starlette/middleware/cors.py\n--- a/starlette/middleware/cors.py\n+++ b/starlette/middleware/cors.py\n@@ -3,6 +3,7 @@\n from starlette.types import ASGIApp, ASGIInstance, Scope\n import functools\n import typing\n+import re\n \n \n ALL_METHODS = (\"DELETE\", \"GET\", \"OPTIONS\", \"PATCH\", \"POST\", \"PUT\")\n@@ -16,6 +17,7 @@\n allow_methods: typing.Sequence[str] = (\"GET\",),\n allow_headers: typing.Sequence[str] = (),\n allow_credentials: bool = False,\n+ allow_origin_regex: str = None,\n expose_headers: typing.Sequence[str] = (),\n max_age: int = 600,\n ):\n@@ -23,6 +25,10 @@\n if \"*\" in allow_methods:\n allow_methods = ALL_METHODS\n \n+ if allow_origin_regex is not None:\n+ regex = re.compile(allow_origin_regex)\n+ allow_origin_regex = regex\n+\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n@@ -53,6 +59,7 @@\n self.allow_headers = allow_headers\n self.allow_all_origins = \"*\" in allow_origins\n self.allow_all_headers = \"*\" in allow_headers\n+ self.allow_origin_regex = allow_origin_regex\n self.simple_headers = simple_headers\n self.preflight_headers = preflight_headers\n \n@@ -66,12 +73,22 @@\n if method == \"OPTIONS\" and \"access-control-request-method\" in headers:\n return self.preflight_response(request_headers=headers)\n else:\n- return functools.partial(\n- self.simple_response, scope=scope, origin=origin\n- )\n+ if self.is_allowed_origin(origin=origin):\n+ return functools.partial(\n+ self.simple_response, scope=scope, origin=origin\n+ )\n+ return PlainTextResponse(\"Disallowed CORS origin\", status_code=400)\n \n return self.app(scope)\n \n+ def is_allowed_origin(self, origin):\n+ if self.allow_origin_regex:\n+ return self.allow_origin_regex.match(origin)\n+ if self.allow_all_origins:\n+ return True\n+\n+ return origin in self.allow_origins\n+\n def preflight_response(self, request_headers):\n requested_origin = request_headers[\"origin\"]\n requested_method = request_headers[\"access-control-request-method\"]\n@@ -84,7 +101,7 @@\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins:\n- if requested_origin in self.allow_origins:\n+ if self.is_allowed_origin(origin=requested_origin):\n headers[\"Access-Control-Allow-Origin\"] = requested_origin\n else:\n failures.append(\"origin\")\n@@ -125,7 +142,7 @@\n \n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n- if not self.allow_all_origins and origin in self.allow_origins:\n+ if not self.allow_all_origins and self.is_allowed_origin(origin=origin):\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers.update(self.simple_headers)\n await send(message)\n", "issue": "Add `allow_origin_regex` to CORSMiddleware.\nIt'd be helpful if `CORSMiddleware` supported an `allow_origin_regex`, so that users could do...\r\n\r\n```python\r\n# Enforce a subdomain CORS policy\r\napp.add_middleware(CORSMiddleware, allow_origin_regex=\"(http|https)://*.example.com\")\r\n```\r\n\r\nOr...\r\n\r\n```python\r\n# Enforce an HTTPS-only CORS policy.\r\napp.add_middleware(CORSMiddleware, allow_origin_regex=\"https://*\")\r\n```\r\n\r\nThe string should be compiled to a regex by the middleware and matches should be anchored to the start/end of the origin string.\n", "before_files": [{"content": "from starlette.datastructures import Headers, MutableHeaders, URL\nfrom starlette.responses 
import PlainTextResponse\nfrom starlette.types import ASGIApp, ASGIInstance, Scope\nimport functools\nimport typing\n\n\nALL_METHODS = (\"DELETE\", \"GET\", \"OPTIONS\", \"PATCH\", \"POST\", \"PUT\")\n\n\nclass CORSMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n allow_origins: typing.Sequence[str] = (),\n allow_methods: typing.Sequence[str] = (\"GET\",),\n allow_headers: typing.Sequence[str] = (),\n allow_credentials: bool = False,\n expose_headers: typing.Sequence[str] = (),\n max_age: int = 600,\n ):\n\n if \"*\" in allow_methods:\n allow_methods = ALL_METHODS\n\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n if allow_credentials:\n simple_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n if expose_headers:\n simple_headers[\"Access-Control-Expose-Headers\"] = \", \".join(expose_headers)\n\n preflight_headers = {}\n if \"*\" in allow_origins:\n preflight_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n else:\n preflight_headers[\"Vary\"] = \"Origin\"\n preflight_headers.update(\n {\n \"Access-Control-Allow-Methods\": \", \".join(allow_methods),\n \"Access-Control-Max-Age\": str(max_age),\n }\n )\n if allow_headers and \"*\" not in allow_headers:\n preflight_headers[\"Access-Control-Allow-Headers\"] = \", \".join(allow_headers)\n if allow_credentials:\n preflight_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n\n self.app = app\n self.allow_origins = allow_origins\n self.allow_methods = allow_methods\n self.allow_headers = allow_headers\n self.allow_all_origins = \"*\" in allow_origins\n self.allow_all_headers = \"*\" in allow_headers\n self.simple_headers = simple_headers\n self.preflight_headers = preflight_headers\n\n def __call__(self, scope: Scope):\n if scope[\"type\"] == \"http\":\n method = scope[\"method\"]\n headers = Headers(scope[\"headers\"])\n origin = headers.get(\"origin\")\n\n if origin is not None:\n if method == \"OPTIONS\" and \"access-control-request-method\" in headers:\n return self.preflight_response(request_headers=headers)\n else:\n return functools.partial(\n self.simple_response, scope=scope, origin=origin\n )\n\n return self.app(scope)\n\n def preflight_response(self, request_headers):\n requested_origin = request_headers[\"origin\"]\n requested_method = request_headers[\"access-control-request-method\"]\n requested_headers = request_headers.get(\"access-control-request-headers\")\n requested_cookie = \"cookie\" in request_headers\n\n headers = dict(self.preflight_headers)\n failures = []\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins:\n if requested_origin in self.allow_origins:\n headers[\"Access-Control-Allow-Origin\"] = requested_origin\n else:\n failures.append(\"origin\")\n\n if requested_method not in self.allow_methods:\n failures.append(\"method\")\n\n # If we allow all headers, then we have to mirror back any requested\n # headers in the response.\n if self.allow_all_headers and requested_headers is not None:\n headers[\"Access-Control-Allow-Headers\"] = requested_headers\n elif requested_headers is not None:\n for header in requested_headers.split(\",\"):\n if header.strip() not in self.allow_headers:\n failures.append(\"headers\")\n\n # We don't strictly need to use 400 responses here, since its up to\n # the browser to enforce the CORS policy, but its more informative\n # if we do.\n if failures:\n failure_text = \"Disallowed CORS \" + \", \".join(failures)\n return 
PlainTextResponse(failure_text, status_code=400, headers=headers)\n\n return PlainTextResponse(\"OK\", status_code=200, headers=headers)\n\n async def simple_response(self, receive, send, scope=None, origin=None):\n inner = self.app(scope)\n send = functools.partial(self.send, send=send, origin=origin)\n await inner(receive, send)\n\n async def send(self, message, send=None, origin=None):\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n return\n\n message.setdefault(\"headers\", [])\n headers = MutableHeaders(message[\"headers\"])\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins and origin in self.allow_origins:\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers.update(self.simple_headers)\n await send(message)\n", "path": "starlette/middleware/cors.py"}], "after_files": [{"content": "from starlette.datastructures import Headers, MutableHeaders, URL\nfrom starlette.responses import PlainTextResponse\nfrom starlette.types import ASGIApp, ASGIInstance, Scope\nimport functools\nimport typing\nimport re\n\n\nALL_METHODS = (\"DELETE\", \"GET\", \"OPTIONS\", \"PATCH\", \"POST\", \"PUT\")\n\n\nclass CORSMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n allow_origins: typing.Sequence[str] = (),\n allow_methods: typing.Sequence[str] = (\"GET\",),\n allow_headers: typing.Sequence[str] = (),\n allow_credentials: bool = False,\n allow_origin_regex: str = None,\n expose_headers: typing.Sequence[str] = (),\n max_age: int = 600,\n ):\n\n if \"*\" in allow_methods:\n allow_methods = ALL_METHODS\n\n if allow_origin_regex is not None:\n regex = re.compile(allow_origin_regex)\n allow_origin_regex = regex\n\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n if allow_credentials:\n simple_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n if expose_headers:\n simple_headers[\"Access-Control-Expose-Headers\"] = \", \".join(expose_headers)\n\n preflight_headers = {}\n if \"*\" in allow_origins:\n preflight_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n else:\n preflight_headers[\"Vary\"] = \"Origin\"\n preflight_headers.update(\n {\n \"Access-Control-Allow-Methods\": \", \".join(allow_methods),\n \"Access-Control-Max-Age\": str(max_age),\n }\n )\n if allow_headers and \"*\" not in allow_headers:\n preflight_headers[\"Access-Control-Allow-Headers\"] = \", \".join(allow_headers)\n if allow_credentials:\n preflight_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n\n self.app = app\n self.allow_origins = allow_origins\n self.allow_methods = allow_methods\n self.allow_headers = allow_headers\n self.allow_all_origins = \"*\" in allow_origins\n self.allow_all_headers = \"*\" in allow_headers\n self.allow_origin_regex = allow_origin_regex\n self.simple_headers = simple_headers\n self.preflight_headers = preflight_headers\n\n def __call__(self, scope: Scope):\n if scope[\"type\"] == \"http\":\n method = scope[\"method\"]\n headers = Headers(scope[\"headers\"])\n origin = headers.get(\"origin\")\n\n if origin is not None:\n if method == \"OPTIONS\" and \"access-control-request-method\" in headers:\n return self.preflight_response(request_headers=headers)\n else:\n if self.is_allowed_origin(origin=origin):\n return functools.partial(\n self.simple_response, scope=scope, origin=origin\n )\n return PlainTextResponse(\"Disallowed CORS origin\", status_code=400)\n\n return self.app(scope)\n\n def 
is_allowed_origin(self, origin):\n if self.allow_origin_regex:\n return self.allow_origin_regex.match(origin)\n if self.allow_all_origins:\n return True\n\n return origin in self.allow_origins\n\n def preflight_response(self, request_headers):\n requested_origin = request_headers[\"origin\"]\n requested_method = request_headers[\"access-control-request-method\"]\n requested_headers = request_headers.get(\"access-control-request-headers\")\n requested_cookie = \"cookie\" in request_headers\n\n headers = dict(self.preflight_headers)\n failures = []\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins:\n if self.is_allowed_origin(origin=requested_origin):\n headers[\"Access-Control-Allow-Origin\"] = requested_origin\n else:\n failures.append(\"origin\")\n\n if requested_method not in self.allow_methods:\n failures.append(\"method\")\n\n # If we allow all headers, then we have to mirror back any requested\n # headers in the response.\n if self.allow_all_headers and requested_headers is not None:\n headers[\"Access-Control-Allow-Headers\"] = requested_headers\n elif requested_headers is not None:\n for header in requested_headers.split(\",\"):\n if header.strip() not in self.allow_headers:\n failures.append(\"headers\")\n\n # We don't strictly need to use 400 responses here, since its up to\n # the browser to enforce the CORS policy, but its more informative\n # if we do.\n if failures:\n failure_text = \"Disallowed CORS \" + \", \".join(failures)\n return PlainTextResponse(failure_text, status_code=400, headers=headers)\n\n return PlainTextResponse(\"OK\", status_code=200, headers=headers)\n\n async def simple_response(self, receive, send, scope=None, origin=None):\n inner = self.app(scope)\n send = functools.partial(self.send, send=send, origin=origin)\n await inner(receive, send)\n\n async def send(self, message, send=None, origin=None):\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n return\n\n message.setdefault(\"headers\", [])\n headers = MutableHeaders(message[\"headers\"])\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins and self.is_allowed_origin(origin=origin):\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers.update(self.simple_headers)\n await send(message)\n", "path": "starlette/middleware/cors.py"}]} | 1,739 | 732 |
gh_patches_debug_12814 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-1178 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Run tests with warnings as errors
This is harder than it sounds, because practically everything in a Python stack emits warnings. See #1124, #1149, and #1169 for previous work on this.
One lead is to check whether we can put e.g. `,default:::setuptools,default:::pip` at the end of the [`PYTHONWARNINGS`](https://docs.python.org/3/using/cmdline.html#cmdoption-w) variable, which would exempt build-time dependencies (that we can't fix) from `=error,` at the front. Unclear from the docs whether we can use one entry per package, or if it's one per file.
Internally, we get a `ResourceWarning` from failing to close a [`branch-check` file](https://github.com/HypothesisWorks/hypothesis-python/blob/06a6ce681b8f13676ae2b674c559c413f0f3dbac/src/hypothesis/internal/coverage.py#L60) in the coverage job (fixable with a minor refactor to use `mode='a'`). There are probably more warnings that we simply don't notice at the moment.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/hypothesis/internal/coverage.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import os
21 import sys
22 import json
23 from contextlib import contextmanager
24
25 from hypothesis.internal.reflection import proxies
26
27 """
28 This module implements a custom coverage system that records conditions and
29 then validates that every condition has been seen to be both True and False
30 during the execution of our tests.
31
32 The only thing we use it for at present is our argument validation functions,
33 where we assert that every validation function has been seen to both pass and
34 fail in the course of testing.
35
36 When not running with a magic environment variable set, this module disables
37 itself and has essentially no overhead.
38 """
39
40 pretty_file_name_cache = {}
41
42
43 def pretty_file_name(f):
44 try:
45 return pretty_file_name_cache[f]
46 except KeyError:
47 pass
48
49 parts = f.split(os.path.sep)
50 parts = parts[parts.index('hypothesis'):]
51 result = os.path.sep.join(parts)
52 pretty_file_name_cache[f] = result
53 return result
54
55
56 IN_COVERAGE_TESTS = os.getenv('HYPOTHESIS_INTERNAL_COVERAGE') == 'true'
57
58
59 if IN_COVERAGE_TESTS:
60 log = open('branch-check', 'w')
61 written = set()
62
63 def record_branch(name, value):
64 key = (name, value)
65 if key in written:
66 return
67 written.add(key)
68 log.write(
69 json.dumps({'name': name, 'value': value})
70 )
71 log.write('\n')
72 log.flush()
73
74 description_stack = []
75
76 @contextmanager
77 def check_block(name, depth):
78 # We add an extra two callers to the stack: One for the contextmanager
79 # function, one for our actual caller, so we want to go two extra
80 # stack frames up.
81 caller = sys._getframe(depth + 2)
82 local_description = '%s at %s:%d' % (
83 name,
84 pretty_file_name(caller.f_code.co_filename),
85 caller.f_lineno,
86 )
87 try:
88 description_stack.append(local_description)
89 description = ' in '.join(reversed(description_stack)) + ' passed'
90 yield
91 record_branch(description, True)
92 except BaseException:
93 record_branch(description, False)
94 raise
95 finally:
96 description_stack.pop()
97
98 @contextmanager
99 def check(name):
100 with check_block(name, 2):
101 yield
102
103 def check_function(f):
104 @proxies(f)
105 def accept(*args, **kwargs):
106 # depth of 2 because of the proxy function calling us.
107 with check_block(f.__name__, 2):
108 return f(*args, **kwargs)
109 return accept
110 else:
111 def check_function(f):
112 return f
113
114 @contextmanager
115 def check(name):
116 yield
117
118
119 class suppress_tracing(object):
120 def __enter__(self):
121 self.__original_trace = sys.gettrace()
122 sys.settrace(None)
123
124 def __exit__(self, exc_type, exc_value, traceback):
125 sys.settrace(self.__original_trace)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/hypothesis/internal/coverage.py b/src/hypothesis/internal/coverage.py
--- a/src/hypothesis/internal/coverage.py
+++ b/src/hypothesis/internal/coverage.py
@@ -57,7 +57,8 @@
if IN_COVERAGE_TESTS:
- log = open('branch-check', 'w')
+ with open('branch-check', 'w'):
+ pass
written = set()
def record_branch(name, value):
@@ -65,11 +66,8 @@
if key in written:
return
written.add(key)
- log.write(
- json.dumps({'name': name, 'value': value})
- )
- log.write('\n')
- log.flush()
+ with open('branch-check', 'a') as log:
+ log.write(json.dumps({'name': name, 'value': value}) + '\n')
description_stack = []
| {"golden_diff": "diff --git a/src/hypothesis/internal/coverage.py b/src/hypothesis/internal/coverage.py\n--- a/src/hypothesis/internal/coverage.py\n+++ b/src/hypothesis/internal/coverage.py\n@@ -57,7 +57,8 @@\n \n \n if IN_COVERAGE_TESTS:\n- log = open('branch-check', 'w')\n+ with open('branch-check', 'w'):\n+ pass\n written = set()\n \n def record_branch(name, value):\n@@ -65,11 +66,8 @@\n if key in written:\n return\n written.add(key)\n- log.write(\n- json.dumps({'name': name, 'value': value})\n- )\n- log.write('\\n')\n- log.flush()\n+ with open('branch-check', 'a') as log:\n+ log.write(json.dumps({'name': name, 'value': value}) + '\\n')\n \n description_stack = []\n", "issue": "Run tests with warnings as errors\nThis is harder than it sounds, because practically everything in a Python stack emits warnings. See #1124, #1149, and #1169 for previous work on this.\r\n\r\nOne lead is to check whether we can put e.g. `,default:::setuptools,default:::pip` at the end of the [`PYTHONWARNINGS`](https://docs.python.org/3/using/cmdline.html#cmdoption-w) variable, which would exempt build-time dependencies (that we can't fix) from `=error,` at the front. Unclear from the docs whether we can use one entry per package, or if it's one per file.\r\n\r\nInternally, we get a `ResourceWarning` from failing to close a [`branch-check` file](https://github.com/HypothesisWorks/hypothesis-python/blob/06a6ce681b8f13676ae2b674c559c413f0f3dbac/src/hypothesis/internal/coverage.py#L60) in the coverage job (fixable with a minor refactor to use `mode='a'`). There are probably more warnings that we simply don't notice at the moment.\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport json\nfrom contextlib import contextmanager\n\nfrom hypothesis.internal.reflection import proxies\n\n\"\"\"\nThis module implements a custom coverage system that records conditions and\nthen validates that every condition has been seen to be both True and False\nduring the execution of our tests.\n\nThe only thing we use it for at present is our argument validation functions,\nwhere we assert that every validation function has been seen to both pass and\nfail in the course of testing.\n\nWhen not running with a magic environment variable set, this module disables\nitself and has essentially no overhead.\n\"\"\"\n\npretty_file_name_cache = {}\n\n\ndef pretty_file_name(f):\n try:\n return pretty_file_name_cache[f]\n except KeyError:\n pass\n\n parts = f.split(os.path.sep)\n parts = parts[parts.index('hypothesis'):]\n result = os.path.sep.join(parts)\n pretty_file_name_cache[f] = result\n return result\n\n\nIN_COVERAGE_TESTS = os.getenv('HYPOTHESIS_INTERNAL_COVERAGE') == 'true'\n\n\nif IN_COVERAGE_TESTS:\n log = open('branch-check', 'w')\n written = set()\n\n def record_branch(name, value):\n key = (name, value)\n if key in written:\n return\n written.add(key)\n log.write(\n json.dumps({'name': name, 'value': value})\n )\n log.write('\\n')\n log.flush()\n\n description_stack = []\n\n @contextmanager\n def check_block(name, depth):\n # We add an extra two callers to the stack: One for the contextmanager\n # function, one for our actual caller, so we want to go two extra\n # stack frames up.\n caller = sys._getframe(depth + 2)\n local_description = '%s at %s:%d' % (\n name,\n pretty_file_name(caller.f_code.co_filename),\n caller.f_lineno,\n )\n try:\n description_stack.append(local_description)\n description = ' in '.join(reversed(description_stack)) + ' passed'\n yield\n record_branch(description, True)\n except BaseException:\n record_branch(description, False)\n raise\n finally:\n description_stack.pop()\n\n @contextmanager\n def check(name):\n with check_block(name, 2):\n yield\n\n def check_function(f):\n @proxies(f)\n def accept(*args, **kwargs):\n # depth of 2 because of the proxy function calling us.\n with check_block(f.__name__, 2):\n return f(*args, **kwargs)\n return accept\nelse:\n def check_function(f):\n return f\n\n @contextmanager\n def check(name):\n yield\n\n\nclass suppress_tracing(object):\n def __enter__(self):\n self.__original_trace = sys.gettrace()\n sys.settrace(None)\n\n def __exit__(self, exc_type, exc_value, traceback):\n sys.settrace(self.__original_trace)\n", "path": "src/hypothesis/internal/coverage.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport json\nfrom contextlib import contextmanager\n\nfrom hypothesis.internal.reflection import proxies\n\n\"\"\"\nThis module implements a custom coverage system that records conditions and\nthen validates that every condition has been seen to be both True and False\nduring the execution of our tests.\n\nThe only thing we use it for at present is our argument validation functions,\nwhere we assert that every validation function has been seen to both pass and\nfail in the course of testing.\n\nWhen not running with a magic environment variable set, this module disables\nitself and has essentially no overhead.\n\"\"\"\n\npretty_file_name_cache = {}\n\n\ndef pretty_file_name(f):\n try:\n return pretty_file_name_cache[f]\n except KeyError:\n pass\n\n parts = f.split(os.path.sep)\n parts = parts[parts.index('hypothesis'):]\n result = os.path.sep.join(parts)\n pretty_file_name_cache[f] = result\n return result\n\n\nIN_COVERAGE_TESTS = os.getenv('HYPOTHESIS_INTERNAL_COVERAGE') == 'true'\n\n\nif IN_COVERAGE_TESTS:\n with open('branch-check', 'w'):\n pass\n written = set()\n\n def record_branch(name, value):\n key = (name, value)\n if key in written:\n return\n written.add(key)\n with open('branch-check', 'a') as log:\n log.write(json.dumps({'name': name, 'value': value}) + '\\n')\n\n description_stack = []\n\n @contextmanager\n def check_block(name, depth):\n # We add an extra two callers to the stack: One for the contextmanager\n # function, one for our actual caller, so we want to go two extra\n # stack frames up.\n caller = sys._getframe(depth + 2)\n local_description = '%s at %s:%d' % (\n name,\n pretty_file_name(caller.f_code.co_filename),\n caller.f_lineno,\n )\n try:\n description_stack.append(local_description)\n description = ' in '.join(reversed(description_stack)) + ' passed'\n yield\n record_branch(description, True)\n except BaseException:\n record_branch(description, False)\n raise\n finally:\n description_stack.pop()\n\n @contextmanager\n def check(name):\n with check_block(name, 2):\n yield\n\n def check_function(f):\n @proxies(f)\n def accept(*args, **kwargs):\n # depth of 2 because of the proxy function calling us.\n with check_block(f.__name__, 2):\n return f(*args, **kwargs)\n return accept\nelse:\n def check_function(f):\n return f\n\n @contextmanager\n def check(name):\n yield\n\n\nclass suppress_tracing(object):\n def __enter__(self):\n self.__original_trace = sys.gettrace()\n sys.settrace(None)\n\n def __exit__(self, exc_type, exc_value, traceback):\n sys.settrace(self.__original_trace)\n", "path": "src/hypothesis/internal/coverage.py"}]} | 1,634 | 205 |
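The record above boils down to one concrete warning: `coverage.py` keeps the `branch-check` handle open for the life of the process, which becomes a test failure once warnings are promoted to errors. The sketch below is an illustrative, stand-alone reduction of the pattern the golden diff adopts (truncate once, then reopen in append mode for every write); the `reset_log` helper and the demo branch names are ours, not hypothesis internals.

```python
import json

# Stand-alone reduction of the append-per-write pattern from the diff above.
# BRANCH_CHECK and reset_log() are illustrative names, not hypothesis internals.
BRANCH_CHECK = 'branch-check'
_written = set()

def reset_log():
    # truncate once at start-up instead of keeping a long-lived handle around
    with open(BRANCH_CHECK, 'w'):
        pass

def record_branch(name, value):
    key = (name, value)
    if key in _written:          # same de-duplication as the original module
        return
    _written.add(key)
    # open/close per write: no handle survives to be garbage-collected later,
    # so promoting ResourceWarning to an error no longer breaks the test run
    with open(BRANCH_CHECK, 'a') as log:
        log.write(json.dumps({'name': name, 'value': value}) + '\n')

if __name__ == '__main__':
    reset_log()
    record_branch('check_valid_size passed', True)
    record_branch('check_valid_size passed', True)   # ignored: already recorded
```

For the other lead in the issue, the value under discussion has the shape `PYTHONWARNINGS=error,default:::setuptools,default:::pip`, with `error` first and per-module `default` entries after it; whether the module field really gives one entry per package is exactly the open question the issue records.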
gh_patches_debug_18261 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4296 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing template translations confirm-email
**URL:** https://meinberlin-dev.liqd.net/accounts/confirm-email/
**user:** user who want to register
**expected behaviour:**
Translation is shown https://github.com/liqd/a4-meinberlin/blob/80ac3005df15322a330c2ce98833a11fb35d8bc8/locale/de_DE/LC_MESSAGES/django.po#L2180
From mail-override text https://github.com/liqd/a4-meinberlin/blob/80ac3005df15322a330c2ce98833a11fb35d8bc8/meinberlin/apps/contrib/django_standard_messages.py#L13
**behaviour:** django-allauth text is used and not translated https://github.com/pennersr/django-allauth/blob/672507e517eb762f76afae0ec3670d96c6afc143/allauth/templates/account/verified_email_required.html#L16
**important screensize:**
**device & browser:**
**Comment/Question:**
The same is true for https://meinberlin-dev.liqd.net/accounts/password/reset/done/ which also shows the django-allauth text https://github.com/pennersr/django-allauth/blob/672507e517eb762f76afae0ec3670d96c6afc143/allauth/templates/account/password_reset_done.html#L15
But this is the same issue, as override text and translation are also actually in our repo.
Screenshot?

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/contrib/django_standard_messages.py`
Content:
```
1 def _(s):
2 return s
3
4
5 django_standard_messages_to_override = [
6 _("You have signed out."),
7 _("Verify Your E-mail Address"),
8 _("You must type the same password each time."),
9 _("You have confirmed %(email)s."),
10 _("You cannot remove your primary e-mail address (%(email)s)."),
11 _("We have sent you an e-mail. Please contact us if "
12 "you do not receive it within a few minutes."),
13 _("We have sent an e-mail to you for verification. "
14 "Follow the link provided to finalize the signup process. "
15 "Please contact us if you do not receive it within a few minutes."),
16 _(u'You must select a minimum of %(limit_value)d choices.'),
17 _(u'You must select a maximum of %(limit_value)d choices.'),
18 _('Enter a valid email address.')
19 ]
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/contrib/django_standard_messages.py b/meinberlin/apps/contrib/django_standard_messages.py
--- a/meinberlin/apps/contrib/django_standard_messages.py
+++ b/meinberlin/apps/contrib/django_standard_messages.py
@@ -12,7 +12,13 @@
"you do not receive it within a few minutes."),
_("We have sent an e-mail to you for verification. "
"Follow the link provided to finalize the signup process. "
- "Please contact us if you do not receive it within a few minutes."),
+ "If you do not see the verification e-mail in your main inbox, "
+ "check your spam folder. "
+ "Please contact us if you do not receive the verification e-mail "
+ "within a few minutes."),
+ _("We have sent you an e-mail. If you have not received it "
+ "please check your spam folder. Otherwise contact us if you "
+ "do not receive it in a few minutes."),
_(u'You must select a minimum of %(limit_value)d choices.'),
_(u'You must select a maximum of %(limit_value)d choices.'),
_('Enter a valid email address.')
| {"golden_diff": "diff --git a/meinberlin/apps/contrib/django_standard_messages.py b/meinberlin/apps/contrib/django_standard_messages.py\n--- a/meinberlin/apps/contrib/django_standard_messages.py\n+++ b/meinberlin/apps/contrib/django_standard_messages.py\n@@ -12,7 +12,13 @@\n \"you do not receive it within a few minutes.\"),\n _(\"We have sent an e-mail to you for verification. \"\n \"Follow the link provided to finalize the signup process. \"\n- \"Please contact us if you do not receive it within a few minutes.\"),\n+ \"If you do not see the verification e-mail in your main inbox, \"\n+ \"check your spam folder. \"\n+ \"Please contact us if you do not receive the verification e-mail \"\n+ \"within a few minutes.\"),\n+ _(\"We have sent you an e-mail. If you have not received it \"\n+ \"please check your spam folder. Otherwise contact us if you \"\n+ \"do not receive it in a few minutes.\"),\n _(u'You must select a minimum of %(limit_value)d choices.'),\n _(u'You must select a maximum of %(limit_value)d choices.'),\n _('Enter a valid email address.')\n", "issue": "Missing template translations confirm-email\n**URL:** https://meinberlin-dev.liqd.net/accounts/confirm-email/\r\n**user:** user who want to register\r\n**expected behaviour:** \r\nTranslation is shown https://github.com/liqd/a4-meinberlin/blob/80ac3005df15322a330c2ce98833a11fb35d8bc8/locale/de_DE/LC_MESSAGES/django.po#L2180\r\nFrom mail-override text https://github.com/liqd/a4-meinberlin/blob/80ac3005df15322a330c2ce98833a11fb35d8bc8/meinberlin/apps/contrib/django_standard_messages.py#L13\r\n**behaviour:** django-allauth text is used and not translated https://github.com/pennersr/django-allauth/blob/672507e517eb762f76afae0ec3670d96c6afc143/allauth/templates/account/verified_email_required.html#L16\r\n**important screensize:**\r\n**device & browser:** \r\n\r\n**Comment/Question:** \r\nThe same is true for https://meinberlin-dev.liqd.net/accounts/password/reset/done/ which also shows the django-allauth text https://github.com/pennersr/django-allauth/blob/672507e517eb762f76afae0ec3670d96c6afc143/allauth/templates/account/password_reset_done.html#L15\r\nBut this is the same issue, as override text and translation are also actually in our repo. \r\n\r\nScreenshot?\r\n\r\n\r\n\n", "before_files": [{"content": "def _(s):\n return s\n\n\ndjango_standard_messages_to_override = [\n _(\"You have signed out.\"),\n _(\"Verify Your E-mail Address\"),\n _(\"You must type the same password each time.\"),\n _(\"You have confirmed %(email)s.\"),\n _(\"You cannot remove your primary e-mail address (%(email)s).\"),\n _(\"We have sent you an e-mail. Please contact us if \"\n \"you do not receive it within a few minutes.\"),\n _(\"We have sent an e-mail to you for verification. \"\n \"Follow the link provided to finalize the signup process. \"\n \"Please contact us if you do not receive it within a few minutes.\"),\n _(u'You must select a minimum of %(limit_value)d choices.'),\n _(u'You must select a maximum of %(limit_value)d choices.'),\n _('Enter a valid email address.')\n]\n", "path": "meinberlin/apps/contrib/django_standard_messages.py"}], "after_files": [{"content": "def _(s):\n return s\n\n\ndjango_standard_messages_to_override = [\n _(\"You have signed out.\"),\n _(\"Verify Your E-mail Address\"),\n _(\"You must type the same password each time.\"),\n _(\"You have confirmed %(email)s.\"),\n _(\"You cannot remove your primary e-mail address (%(email)s).\"),\n _(\"We have sent you an e-mail. 
Please contact us if \"\n \"you do not receive it within a few minutes.\"),\n _(\"We have sent an e-mail to you for verification. \"\n \"Follow the link provided to finalize the signup process. \"\n \"If you do not see the verification e-mail in your main inbox, \"\n \"check your spam folder. \"\n \"Please contact us if you do not receive the verification e-mail \"\n \"within a few minutes.\"),\n _(\"We have sent you an e-mail. If you have not received it \"\n \"please check your spam folder. Otherwise contact us if you \"\n \"do not receive it in a few minutes.\"),\n _(u'You must select a minimum of %(limit_value)d choices.'),\n _(u'You must select a maximum of %(limit_value)d choices.'),\n _('Enter a valid email address.')\n]\n", "path": "meinberlin/apps/contrib/django_standard_messages.py"}]} | 926 | 267 |
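The override list above works through a marker-function idiom: `_()` does nothing at runtime and exists only so string-extraction tooling (makemessages/xgettext) sees the allauth wording inside this repository and adds it to the project's `.po` catalogue. Below is a framework-free illustration of that idiom; the literals are copied from the patched list above, and the Django `gettext` call appears only in a comment because resolving translations needs a configured Django project.

```python
def _(s):
    # deliberate no-op: only marks the literal so makemessages/xgettext collect it
    return s

# literals copied from the override list above; they must match what allauth
# actually emits, which is exactly what had drifted apart in the reported bug
django_standard_messages_to_override = [
    _("Verify Your E-mail Address"),
    _("We have sent an e-mail to you for verification. "
      "Follow the link provided to finalize the signup process. "
      "If you do not see the verification e-mail in your main inbox, "
      "check your spam folder. "
      "Please contact us if you do not receive the verification e-mail "
      "within a few minutes."),
]

# At request time Django would translate the same literal, roughly:
#   from django.utils.translation import gettext
#   gettext(django_standard_messages_to_override[0])  # de_DE text from django.po
print(len(django_standard_messages_to_override))  # sanity check: 2 entries marked
```

The practical consequence, and the heart of the reported bug, is that the marked literal must match the text allauth actually emits; once the wording drifts, the catalogue entry is never looked up and the untranslated upstream string leaks through.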
gh_patches_debug_9637 | rasdani/github-patches | git_diff | pre-commit__pre-commit-2029 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`check-useless-excludes` meta hook doesn't seem to work well with broken symlinks
Repro steps
1. Add a submodule
2. Add a symlink to somewhere inside this submodule
3. Use `check-symlinks`
4. Deinit submodule (to simulate the case when people clone without `--recursive`)
4. Notice that it now complains about the symlink created on 2
5. Add an exclude entry with the path to that symlink
6. Use `check-useless-excludes`
7. Notice that it complains that the added exclude entry doesn't apply.
For a working example, check out https://github.com/pkoch/test-check-useless-excludes (without `--recursive`, or run `git submodule deinit --all`) and run `pre-commit run --all`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/meta_hooks/check_useless_excludes.py`
Content:
```
1 import argparse
2 import re
3 from typing import Optional
4 from typing import Sequence
5
6 from cfgv import apply_defaults
7
8 import pre_commit.constants as C
9 from pre_commit import git
10 from pre_commit.clientlib import load_config
11 from pre_commit.clientlib import MANIFEST_HOOK_DICT
12 from pre_commit.commands.run import Classifier
13
14
15 def exclude_matches_any(
16 filenames: Sequence[str],
17 include: str,
18 exclude: str,
19 ) -> bool:
20 if exclude == '^$':
21 return True
22 include_re, exclude_re = re.compile(include), re.compile(exclude)
23 for filename in filenames:
24 if include_re.search(filename) and exclude_re.search(filename):
25 return True
26 return False
27
28
29 def check_useless_excludes(config_file: str) -> int:
30 config = load_config(config_file)
31 filenames = git.get_all_files()
32 classifier = Classifier.from_config(
33 filenames, config['files'], config['exclude'],
34 )
35 retv = 0
36
37 exclude = config['exclude']
38 if not exclude_matches_any(filenames, '', exclude):
39 print(
40 f'The global exclude pattern {exclude!r} does not match any files',
41 )
42 retv = 1
43
44 for repo in config['repos']:
45 for hook in repo['hooks']:
46 # Not actually a manifest dict, but this more accurately reflects
47 # the defaults applied during runtime
48 hook = apply_defaults(hook, MANIFEST_HOOK_DICT)
49 names = classifier.filenames
50 types = hook['types']
51 types_or = hook['types_or']
52 exclude_types = hook['exclude_types']
53 names = classifier.by_types(names, types, types_or, exclude_types)
54 include, exclude = hook['files'], hook['exclude']
55 if not exclude_matches_any(names, include, exclude):
56 print(
57 f'The exclude pattern {exclude!r} for {hook["id"]} does '
58 f'not match any files',
59 )
60 retv = 1
61
62 return retv
63
64
65 def main(argv: Optional[Sequence[str]] = None) -> int:
66 parser = argparse.ArgumentParser()
67 parser.add_argument('filenames', nargs='*', default=[C.CONFIG_FILE])
68 args = parser.parse_args(argv)
69
70 retv = 0
71 for filename in args.filenames:
72 retv |= check_useless_excludes(filename)
73 return retv
74
75
76 if __name__ == '__main__':
77 exit(main())
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/meta_hooks/check_useless_excludes.py b/pre_commit/meta_hooks/check_useless_excludes.py
--- a/pre_commit/meta_hooks/check_useless_excludes.py
+++ b/pre_commit/meta_hooks/check_useless_excludes.py
@@ -43,6 +43,9 @@
for repo in config['repos']:
for hook in repo['hooks']:
+ # the default of manifest hooks is `types: [file]` but we may
+ # be configuring a symlink hook while there's a broken symlink
+ hook.setdefault('types', [])
# Not actually a manifest dict, but this more accurately reflects
# the defaults applied during runtime
hook = apply_defaults(hook, MANIFEST_HOOK_DICT)
| {"golden_diff": "diff --git a/pre_commit/meta_hooks/check_useless_excludes.py b/pre_commit/meta_hooks/check_useless_excludes.py\n--- a/pre_commit/meta_hooks/check_useless_excludes.py\n+++ b/pre_commit/meta_hooks/check_useless_excludes.py\n@@ -43,6 +43,9 @@\n \n for repo in config['repos']:\n for hook in repo['hooks']:\n+ # the default of manifest hooks is `types: [file]` but we may\n+ # be configuring a symlink hook while there's a broken symlink\n+ hook.setdefault('types', [])\n # Not actually a manifest dict, but this more accurately reflects\n # the defaults applied during runtime\n hook = apply_defaults(hook, MANIFEST_HOOK_DICT)\n", "issue": "`check-useless-excludes` meta hook doesn't seem to work well with broken symlinks\nRepro steps\r\n1. Add a submodule\r\n2. Add a symlink to somewhere inside this submodule\r\n3. Use `check-symlinks`\r\n4. Deinit submodule (to simulate the case when people clone without `--recursive`)\r\n4. Notice that it now complains about the symlink created on 2\r\n5. Add an exclude entry with the path to that symlink\r\n6. Use `check-useless-excludes`\r\n7. Notice that it complains that the added exclude entry doesn't apply.\r\n\r\nFor a working example, check out https://github.com/pkoch/test-check-useless-excludes (without `--recursive`, or run `git submodule deinit --all`) and run `pre-commit run --all`.\n", "before_files": [{"content": "import argparse\nimport re\nfrom typing import Optional\nfrom typing import Sequence\n\nfrom cfgv import apply_defaults\n\nimport pre_commit.constants as C\nfrom pre_commit import git\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.clientlib import MANIFEST_HOOK_DICT\nfrom pre_commit.commands.run import Classifier\n\n\ndef exclude_matches_any(\n filenames: Sequence[str],\n include: str,\n exclude: str,\n) -> bool:\n if exclude == '^$':\n return True\n include_re, exclude_re = re.compile(include), re.compile(exclude)\n for filename in filenames:\n if include_re.search(filename) and exclude_re.search(filename):\n return True\n return False\n\n\ndef check_useless_excludes(config_file: str) -> int:\n config = load_config(config_file)\n filenames = git.get_all_files()\n classifier = Classifier.from_config(\n filenames, config['files'], config['exclude'],\n )\n retv = 0\n\n exclude = config['exclude']\n if not exclude_matches_any(filenames, '', exclude):\n print(\n f'The global exclude pattern {exclude!r} does not match any files',\n )\n retv = 1\n\n for repo in config['repos']:\n for hook in repo['hooks']:\n # Not actually a manifest dict, but this more accurately reflects\n # the defaults applied during runtime\n hook = apply_defaults(hook, MANIFEST_HOOK_DICT)\n names = classifier.filenames\n types = hook['types']\n types_or = hook['types_or']\n exclude_types = hook['exclude_types']\n names = classifier.by_types(names, types, types_or, exclude_types)\n include, exclude = hook['files'], hook['exclude']\n if not exclude_matches_any(names, include, exclude):\n print(\n f'The exclude pattern {exclude!r} for {hook[\"id\"]} does '\n f'not match any files',\n )\n retv = 1\n\n return retv\n\n\ndef main(argv: Optional[Sequence[str]] = None) -> int:\n parser = argparse.ArgumentParser()\n parser.add_argument('filenames', nargs='*', default=[C.CONFIG_FILE])\n args = parser.parse_args(argv)\n\n retv = 0\n for filename in args.filenames:\n retv |= check_useless_excludes(filename)\n return retv\n\n\nif __name__ == '__main__':\n exit(main())\n", "path": "pre_commit/meta_hooks/check_useless_excludes.py"}], "after_files": 
[{"content": "import argparse\nimport re\nfrom typing import Optional\nfrom typing import Sequence\n\nfrom cfgv import apply_defaults\n\nimport pre_commit.constants as C\nfrom pre_commit import git\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.clientlib import MANIFEST_HOOK_DICT\nfrom pre_commit.commands.run import Classifier\n\n\ndef exclude_matches_any(\n filenames: Sequence[str],\n include: str,\n exclude: str,\n) -> bool:\n if exclude == '^$':\n return True\n include_re, exclude_re = re.compile(include), re.compile(exclude)\n for filename in filenames:\n if include_re.search(filename) and exclude_re.search(filename):\n return True\n return False\n\n\ndef check_useless_excludes(config_file: str) -> int:\n config = load_config(config_file)\n filenames = git.get_all_files()\n classifier = Classifier.from_config(\n filenames, config['files'], config['exclude'],\n )\n retv = 0\n\n exclude = config['exclude']\n if not exclude_matches_any(filenames, '', exclude):\n print(\n f'The global exclude pattern {exclude!r} does not match any files',\n )\n retv = 1\n\n for repo in config['repos']:\n for hook in repo['hooks']:\n # the default of manifest hooks is `types: [file]` but we may\n # be configuring a symlink hook while there's a broken symlink\n hook.setdefault('types', [])\n # Not actually a manifest dict, but this more accurately reflects\n # the defaults applied during runtime\n hook = apply_defaults(hook, MANIFEST_HOOK_DICT)\n names = classifier.filenames\n types = hook['types']\n types_or = hook['types_or']\n exclude_types = hook['exclude_types']\n names = classifier.by_types(names, types, types_or, exclude_types)\n include, exclude = hook['files'], hook['exclude']\n if not exclude_matches_any(names, include, exclude):\n print(\n f'The exclude pattern {exclude!r} for {hook[\"id\"]} does '\n f'not match any files',\n )\n retv = 1\n\n return retv\n\n\ndef main(argv: Optional[Sequence[str]] = None) -> int:\n parser = argparse.ArgumentParser()\n parser.add_argument('filenames', nargs='*', default=[C.CONFIG_FILE])\n args = parser.parse_args(argv)\n\n retv = 0\n for filename in args.filenames:\n retv |= check_useless_excludes(filename)\n return retv\n\n\nif __name__ == '__main__':\n exit(main())\n", "path": "pre_commit/meta_hooks/check_useless_excludes.py"}]} | 1,100 | 159 |
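The single `hook.setdefault('types', [])` line above matters because the manifest schema defaults `types` to `['file']`, and a broken symlink is not classified as a regular file, so the classifier silently dropped the only path the exclude was written for. The snippet below is a deliberately simplified model of that interaction: `apply_defaults` here is a stand-in for the real `cfgv.apply_defaults`, and the hook id and exclude pattern are invented for illustration.

```python
# Simplified model of why pre-seeding `types` changes the outcome.
MANIFEST_HOOK_DEFAULTS = {'types': ['file'], 'exclude': '^$'}  # stand-in for MANIFEST_HOOK_DICT

def apply_defaults(hook, defaults):
    # stand-in for cfgv.apply_defaults: fill in whatever the user did not set
    merged = dict(defaults)
    merged.update(hook)
    return merged

user_hook = {'id': 'check-symlinks', 'exclude': '^linked-into-submodule$'}

without_fix = apply_defaults(dict(user_hook), MANIFEST_HOOK_DEFAULTS)

patched = dict(user_hook)
patched.setdefault('types', [])          # the added line from the golden diff
with_fix = apply_defaults(patched, MANIFEST_HOOK_DEFAULTS)

print(without_fix['types'])  # ['file'] -> broken symlink filtered out, exclude looks useless
print(with_fix['types'])     # []       -> no type filter, exclude matches the symlink again
```

In the real hook the classifier then matches `exclude` against the surviving filenames, so an empty `types` list is what lets the broken-symlink path reach the pattern at all.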
gh_patches_debug_4164 | rasdani/github-patches | git_diff | lutris__lutris-1859 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
get_children_pids_of_thread(): process.py:57:get_children_pids_of_thread:ProcessLookupError: [Errno 3] No such process: '/proc/25892/task/25892/children'
Here's another Fedora report.
https://bugzilla.redhat.com/show_bug.cgi?id=1686988
cmdline: /usr/bin/python3 -s /usr/bin/lutris-wrapper 0 0 /home/u13881/.local/share/lutris/runtime/winetricks/winetricks --unattended
crash_function: get_children_pids_of_thread
exception_type: ProcessLookupError
executable: /usr/bin/lutris-wrapper
interpreter: python3-3.7.2-4.fc29.x86_64
kernel: 4.20.13-200.fc29.x86_64
runlevel: N 5
type: Python3
uid: 1000
Truncated backtrace:
process.py:57:get_children_pids_of_thread:ProcessLookupError: [Errno 3] No such process: '/proc/25892/task/25892/children'
Traceback (most recent call last):
File "/usr/bin/lutris-wrapper", line 110, in <module>
main()
File "/usr/bin/lutris-wrapper", line 91, in main
if not monitor.refresh_process_status():
File "/usr/lib/python3.7/site-packages/lutris/util/monitor.py", line 98, in refresh_process_status
for child in self.iter_children(Process(os.getpid())):
File "/usr/lib/python3.7/site-packages/lutris/util/process.py", line 20, in __init__
self.get_children()
File "/usr/lib/python3.7/site-packages/lutris/util/process.py", line 67, in get_children
self.children.append(Process(child_pid, parent=self))
File "/usr/lib/python3.7/site-packages/lutris/util/process.py", line 20, in __init__
self.get_children()
File "/usr/lib/python3.7/site-packages/lutris/util/process.py", line 67, in get_children
self.children.append(Process(child_pid, parent=self))
File "/usr/lib/python3.7/site-packages/lutris/util/process.py", line 20, in __init__
self.get_children()
File "/usr/lib/python3.7/site-packages/lutris/util/process.py", line 66, in get_children
for child_pid in self.get_children_pids_of_thread(tid):
File "/usr/lib/python3.7/site-packages/lutris/util/process.py", line 57, in get_children_pids_of_thread
with open(children_path) as children_file:
ProcessLookupError: [Errno 3] No such process: '/proc/25892/task/25892/children'
Local variables in innermost frame:
self: Process 25892
tid: '25892'
children_path: '/proc/25892/task/25892/children'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/util/process.py`
Content:
```
1 """Class to manipulate a process"""
2 import os
3 from lutris.util.log import logger
4 from lutris.util.system import path_exists
5
6
7 class InvalidPid(Exception):
8 """Exception raised when an operation on a non-existent PID is called"""
9
10
11 class Process:
12 """Python abstraction a Linux process"""
13 def __init__(self, pid, parent=None):
14 try:
15 self.pid = int(pid)
16 except ValueError:
17 raise InvalidPid("'%s' is not a valid pid" % pid)
18 self.children = []
19 self.parent = None
20 self.get_children()
21
22 def __repr__(self):
23 return "Process {}".format(self.pid)
24
25 def __str__(self):
26 return "{} ({}:{})".format(self.name, self.pid, self.state)
27
28 def get_stat(self, parsed=True):
29 stat_filename = "/proc/{}/stat".format(self.pid)
30 if not path_exists(stat_filename):
31 return None
32 with open(stat_filename) as stat_file:
33 try:
34 _stat = stat_file.readline()
35 except (ProcessLookupError, FileNotFoundError):
36 logger.warning("Unable to read stat for process %s", self.pid)
37 return None
38 if parsed:
39 return _stat[_stat.rfind(")") + 1:].split()
40 return _stat
41
42 def get_thread_ids(self):
43 """Return a list of thread ids opened by process."""
44 basedir = "/proc/{}/task/".format(self.pid)
45 if os.path.isdir(basedir):
46 try:
47 return [tid for tid in os.listdir(basedir)]
48 except FileNotFoundError:
49 return []
50 else:
51 return []
52
53 def get_children_pids_of_thread(self, tid):
54 """Return pids of child processes opened by thread `tid` of process."""
55 children_path = "/proc/{}/task/{}/children".format(self.pid, tid)
56 try:
57 with open(children_path) as children_file:
58 children_content = children_file.read()
59 except FileNotFoundError:
60 children_content = ""
61 return children_content.strip().split()
62
63 def get_children(self):
64 self.children = []
65 for tid in self.get_thread_ids():
66 for child_pid in self.get_children_pids_of_thread(tid):
67 self.children.append(Process(child_pid, parent=self))
68
69 @property
70 def name(self):
71 """Filename of the executable."""
72 _stat = self.get_stat(parsed=False)
73 if _stat:
74 return _stat[_stat.find("(") + 1:_stat.rfind(")")]
75 return None
76
77 @property
78 def state(self):
79 """One character from the string "RSDZTW" where R is running, S is
80 sleeping in an interruptible wait, D is waiting in uninterruptible disk
81 sleep, Z is zombie, T is traced or stopped (on a signal), and W is
82 paging.
83 """
84 _stat = self.get_stat()
85 if _stat:
86 return _stat[0]
87 return None
88
89 @property
90 def cmdline(self):
91 """Return command line used to run the process `pid`."""
92 cmdline_path = "/proc/{}/cmdline".format(self.pid)
93 with open(cmdline_path) as cmdline_file:
94 _cmdline = cmdline_file.read().replace("\x00", " ")
95 return _cmdline
96
97 @property
98 def cwd(self):
99 """Return current working dir of process"""
100 cwd_path = "/proc/%d/cwd" % int(self.pid)
101 return os.readlink(cwd_path)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lutris/util/process.py b/lutris/util/process.py
--- a/lutris/util/process.py
+++ b/lutris/util/process.py
@@ -56,7 +56,7 @@
try:
with open(children_path) as children_file:
children_content = children_file.read()
- except FileNotFoundError:
+ except (FileNotFoundError, ProcessLookupError):
children_content = ""
return children_content.strip().split()
| {"golden_diff": "diff --git a/lutris/util/process.py b/lutris/util/process.py\n--- a/lutris/util/process.py\n+++ b/lutris/util/process.py\n@@ -56,7 +56,7 @@\n try:\n with open(children_path) as children_file:\n children_content = children_file.read()\n- except FileNotFoundError:\n+ except (FileNotFoundError, ProcessLookupError):\n children_content = \"\"\n return children_content.strip().split()\n", "issue": "get_children_pids_of_thread(): process.py:57:get_children_pids_of_thread:ProcessLookupError: [Errno 3] No such process: '/proc/25892/task/25892/children'\nHere's another Fedora report.\r\n\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1686988\r\n\r\ncmdline: /usr/bin/python3 -s /usr/bin/lutris-wrapper 0 0 /home/u13881/.local/share/lutris/runtime/winetricks/winetricks --unattended\r\ncrash_function: get_children_pids_of_thread\r\nexception_type: ProcessLookupError\r\nexecutable: /usr/bin/lutris-wrapper\r\ninterpreter: python3-3.7.2-4.fc29.x86_64\r\nkernel: 4.20.13-200.fc29.x86_64\r\nrunlevel: N 5\r\ntype: Python3\r\nuid: 1000\r\n\r\nTruncated backtrace:\r\nprocess.py:57:get_children_pids_of_thread:ProcessLookupError: [Errno 3] No such process: '/proc/25892/task/25892/children'\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/bin/lutris-wrapper\", line 110, in <module>\r\n main()\r\n File \"/usr/bin/lutris-wrapper\", line 91, in main\r\n if not monitor.refresh_process_status():\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/monitor.py\", line 98, in refresh_process_status\r\n for child in self.iter_children(Process(os.getpid())):\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/process.py\", line 20, in __init__\r\n self.get_children()\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/process.py\", line 67, in get_children\r\n self.children.append(Process(child_pid, parent=self))\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/process.py\", line 20, in __init__\r\n self.get_children()\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/process.py\", line 67, in get_children\r\n self.children.append(Process(child_pid, parent=self))\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/process.py\", line 20, in __init__\r\n self.get_children()\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/process.py\", line 66, in get_children\r\n for child_pid in self.get_children_pids_of_thread(tid):\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/process.py\", line 57, in get_children_pids_of_thread\r\n with open(children_path) as children_file:\r\nProcessLookupError: [Errno 3] No such process: '/proc/25892/task/25892/children'\r\n\r\nLocal variables in innermost frame:\r\nself: Process 25892\r\ntid: '25892'\r\nchildren_path: '/proc/25892/task/25892/children'\n", "before_files": [{"content": "\"\"\"Class to manipulate a process\"\"\"\nimport os\nfrom lutris.util.log import logger\nfrom lutris.util.system import path_exists\n\n\nclass InvalidPid(Exception):\n \"\"\"Exception raised when an operation on a non-existent PID is called\"\"\"\n\n\nclass Process:\n \"\"\"Python abstraction a Linux process\"\"\"\n def __init__(self, pid, parent=None):\n try:\n self.pid = int(pid)\n except ValueError:\n raise InvalidPid(\"'%s' is not a valid pid\" % pid)\n self.children = []\n self.parent = None\n self.get_children()\n\n def __repr__(self):\n return \"Process {}\".format(self.pid)\n\n def __str__(self):\n return \"{} ({}:{})\".format(self.name, self.pid, self.state)\n\n def get_stat(self, parsed=True):\n stat_filename = 
\"/proc/{}/stat\".format(self.pid)\n if not path_exists(stat_filename):\n return None\n with open(stat_filename) as stat_file:\n try:\n _stat = stat_file.readline()\n except (ProcessLookupError, FileNotFoundError):\n logger.warning(\"Unable to read stat for process %s\", self.pid)\n return None\n if parsed:\n return _stat[_stat.rfind(\")\") + 1:].split()\n return _stat\n\n def get_thread_ids(self):\n \"\"\"Return a list of thread ids opened by process.\"\"\"\n basedir = \"/proc/{}/task/\".format(self.pid)\n if os.path.isdir(basedir):\n try:\n return [tid for tid in os.listdir(basedir)]\n except FileNotFoundError:\n return []\n else:\n return []\n\n def get_children_pids_of_thread(self, tid):\n \"\"\"Return pids of child processes opened by thread `tid` of process.\"\"\"\n children_path = \"/proc/{}/task/{}/children\".format(self.pid, tid)\n try:\n with open(children_path) as children_file:\n children_content = children_file.read()\n except FileNotFoundError:\n children_content = \"\"\n return children_content.strip().split()\n\n def get_children(self):\n self.children = []\n for tid in self.get_thread_ids():\n for child_pid in self.get_children_pids_of_thread(tid):\n self.children.append(Process(child_pid, parent=self))\n\n @property\n def name(self):\n \"\"\"Filename of the executable.\"\"\"\n _stat = self.get_stat(parsed=False)\n if _stat:\n return _stat[_stat.find(\"(\") + 1:_stat.rfind(\")\")]\n return None\n\n @property\n def state(self):\n \"\"\"One character from the string \"RSDZTW\" where R is running, S is\n sleeping in an interruptible wait, D is waiting in uninterruptible disk\n sleep, Z is zombie, T is traced or stopped (on a signal), and W is\n paging.\n \"\"\"\n _stat = self.get_stat()\n if _stat:\n return _stat[0]\n return None\n\n @property\n def cmdline(self):\n \"\"\"Return command line used to run the process `pid`.\"\"\"\n cmdline_path = \"/proc/{}/cmdline\".format(self.pid)\n with open(cmdline_path) as cmdline_file:\n _cmdline = cmdline_file.read().replace(\"\\x00\", \" \")\n return _cmdline\n\n @property\n def cwd(self):\n \"\"\"Return current working dir of process\"\"\"\n cwd_path = \"/proc/%d/cwd\" % int(self.pid)\n return os.readlink(cwd_path)\n", "path": "lutris/util/process.py"}], "after_files": [{"content": "\"\"\"Class to manipulate a process\"\"\"\nimport os\nfrom lutris.util.log import logger\nfrom lutris.util.system import path_exists\n\n\nclass InvalidPid(Exception):\n \"\"\"Exception raised when an operation on a non-existent PID is called\"\"\"\n\n\nclass Process:\n \"\"\"Python abstraction a Linux process\"\"\"\n def __init__(self, pid, parent=None):\n try:\n self.pid = int(pid)\n except ValueError:\n raise InvalidPid(\"'%s' is not a valid pid\" % pid)\n self.children = []\n self.parent = None\n self.get_children()\n\n def __repr__(self):\n return \"Process {}\".format(self.pid)\n\n def __str__(self):\n return \"{} ({}:{})\".format(self.name, self.pid, self.state)\n\n def get_stat(self, parsed=True):\n stat_filename = \"/proc/{}/stat\".format(self.pid)\n if not path_exists(stat_filename):\n return None\n with open(stat_filename) as stat_file:\n try:\n _stat = stat_file.readline()\n except (ProcessLookupError, FileNotFoundError):\n logger.warning(\"Unable to read stat for process %s\", self.pid)\n return None\n if parsed:\n return _stat[_stat.rfind(\")\") + 1:].split()\n return _stat\n\n def get_thread_ids(self):\n \"\"\"Return a list of thread ids opened by process.\"\"\"\n basedir = \"/proc/{}/task/\".format(self.pid)\n if os.path.isdir(basedir):\n 
try:\n return [tid for tid in os.listdir(basedir)]\n except FileNotFoundError:\n return []\n else:\n return []\n\n def get_children_pids_of_thread(self, tid):\n \"\"\"Return pids of child processes opened by thread `tid` of process.\"\"\"\n children_path = \"/proc/{}/task/{}/children\".format(self.pid, tid)\n try:\n with open(children_path) as children_file:\n children_content = children_file.read()\n except (FileNotFoundError, ProcessLookupError):\n children_content = \"\"\n return children_content.strip().split()\n\n def get_children(self):\n self.children = []\n for tid in self.get_thread_ids():\n for child_pid in self.get_children_pids_of_thread(tid):\n self.children.append(Process(child_pid, parent=self))\n\n @property\n def name(self):\n \"\"\"Filename of the executable.\"\"\"\n _stat = self.get_stat(parsed=False)\n if _stat:\n return _stat[_stat.find(\"(\") + 1:_stat.rfind(\")\")]\n return None\n\n @property\n def state(self):\n \"\"\"One character from the string \"RSDZTW\" where R is running, S is\n sleeping in an interruptible wait, D is waiting in uninterruptible disk\n sleep, Z is zombie, T is traced or stopped (on a signal), and W is\n paging.\n \"\"\"\n _stat = self.get_stat()\n if _stat:\n return _stat[0]\n return None\n\n @property\n def cmdline(self):\n \"\"\"Return command line used to run the process `pid`.\"\"\"\n cmdline_path = \"/proc/{}/cmdline\".format(self.pid)\n with open(cmdline_path) as cmdline_file:\n _cmdline = cmdline_file.read().replace(\"\\x00\", \" \")\n return _cmdline\n\n @property\n def cwd(self):\n \"\"\"Return current working dir of process\"\"\"\n cwd_path = \"/proc/%d/cwd\" % int(self.pid)\n return os.readlink(cwd_path)\n", "path": "lutris/util/process.py"}]} | 1,908 | 98 |
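The patch above only widens an `except` clause, but the reason deserves a note: `/proc/<pid>/task/<tid>/children` can vanish between listing the thread ids and opening the file, and when the thread is already gone the kernel reports the failure as `ProcessLookupError` rather than `FileNotFoundError`. The sketch below isolates that defensive read; it is Linux-only because it walks `/proc`, and the demo pid at the bottom is an arbitrary stale value.

```python
import os

def children_of_thread(pid, tid):
    """Best-effort read of /proc/<pid>/task/<tid>/children; empty list on any race."""
    children_path = "/proc/{}/task/{}/children".format(pid, tid)
    try:
        with open(children_path) as children_file:
            content = children_file.read()
    except (FileNotFoundError, ProcessLookupError):
        # the thread exited (or never existed): treat it as having no children
        content = ""
    return content.strip().split()

if __name__ == "__main__":
    me = os.getpid()
    print(children_of_thread(me, me))         # current process, main thread
    print(children_of_thread(987654321, 1))   # stale pid: [] instead of a traceback
```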
gh_patches_debug_24181 | rasdani/github-patches | git_diff | ansible__awx-13528 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception handling for Conjur Secrets Manager Lookup not working properly
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Bug Summary
When using the Conjur Secrets Manager Lookup credential type against Conjur Cloud, exception handling does not properly switch to the proper URI endpoint automatically. Any attempts currently fail with HTTP 404.
### AWX version
0.1.dev32763+g3d73b80
### Select the relevant components
- [X] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
docker development environment
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
1. Select "CyberArk Conjur Secrets Manager Lookup" as the credential type for a new credential object in AWX.
2. Configure for a Conjur Cloud tenant.
3. Run "Test" and a failure with "HTTP 404" is returned.
### Expected results
The test should pass.
### Actual results
The test fails with an HTTP 404 error code.
### Additional information
This fix is being implemented in a PR within moments.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awx/main/credential_plugins/conjur.py`
Content:
```
1 from .plugin import CredentialPlugin, CertFiles, raise_for_status
2
3 from urllib.parse import urljoin, quote
4
5 from django.utils.translation import gettext_lazy as _
6 import requests
7
8
9 conjur_inputs = {
10 'fields': [
11 {
12 'id': 'url',
13 'label': _('Conjur URL'),
14 'type': 'string',
15 'format': 'url',
16 },
17 {
18 'id': 'api_key',
19 'label': _('API Key'),
20 'type': 'string',
21 'secret': True,
22 },
23 {
24 'id': 'account',
25 'label': _('Account'),
26 'type': 'string',
27 },
28 {
29 'id': 'username',
30 'label': _('Username'),
31 'type': 'string',
32 },
33 {'id': 'cacert', 'label': _('Public Key Certificate'), 'type': 'string', 'multiline': True},
34 ],
35 'metadata': [
36 {
37 'id': 'secret_path',
38 'label': _('Secret Identifier'),
39 'type': 'string',
40 'help_text': _('The identifier for the secret e.g., /some/identifier'),
41 },
42 {
43 'id': 'secret_version',
44 'label': _('Secret Version'),
45 'type': 'string',
46 'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),
47 },
48 ],
49 'required': ['url', 'api_key', 'account', 'username'],
50 }
51
52
53 def conjur_backend(**kwargs):
54 url = kwargs['url']
55 api_key = kwargs['api_key']
56 account = quote(kwargs['account'], safe='')
57 username = quote(kwargs['username'], safe='')
58 secret_path = quote(kwargs['secret_path'], safe='')
59 version = kwargs.get('secret_version')
60 cacert = kwargs.get('cacert', None)
61
62 auth_kwargs = {
63 'headers': {'Content-Type': 'text/plain', 'Accept-Encoding': 'base64'},
64 'data': api_key,
65 'allow_redirects': False,
66 }
67
68 with CertFiles(cacert) as cert:
69 # https://www.conjur.org/api.html#authentication-authenticate-post
70 auth_kwargs['verify'] = cert
71 try:
72 resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)
73 except requests.exceptions.ConnectionError:
74 resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)
75 raise_for_status(resp)
76 token = resp.content.decode('utf-8')
77
78 lookup_kwargs = {
79 'headers': {'Authorization': 'Token token="{}"'.format(token)},
80 'allow_redirects': False,
81 }
82
83 # https://www.conjur.org/api.html#secrets-retrieve-a-secret-get
84 path = urljoin(url, '/'.join(['secrets', account, 'variable', secret_path]))
85 path_conjurcloud = urljoin(url, '/'.join(['api', 'secrets', account, 'variable', secret_path]))
86 if version:
87 ver = "version={}".format(version)
88 path = '?'.join([path, ver])
89 path_conjurcloud = '?'.join([path_conjurcloud, ver])
90
91 with CertFiles(cacert) as cert:
92 lookup_kwargs['verify'] = cert
93 try:
94 resp = requests.get(path, timeout=30, **lookup_kwargs)
95 except requests.exceptions.ConnectionError:
96 resp = requests.get(path_conjurcloud, timeout=30, **lookup_kwargs)
97 raise_for_status(resp)
98 return resp.text
99
100
101 conjur_plugin = CredentialPlugin('CyberArk Conjur Secrets Manager Lookup', inputs=conjur_inputs, backend=conjur_backend)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awx/main/credential_plugins/conjur.py b/awx/main/credential_plugins/conjur.py
--- a/awx/main/credential_plugins/conjur.py
+++ b/awx/main/credential_plugins/conjur.py
@@ -70,7 +70,8 @@
auth_kwargs['verify'] = cert
try:
resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)
- except requests.exceptions.ConnectionError:
+ resp.raise_for_status()
+ except requests.exceptions.HTTPError:
resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)
raise_for_status(resp)
token = resp.content.decode('utf-8')
@@ -92,7 +93,8 @@
lookup_kwargs['verify'] = cert
try:
resp = requests.get(path, timeout=30, **lookup_kwargs)
- except requests.exceptions.ConnectionError:
+ resp.raise_for_status()
+ except requests.exceptions.HTTPError:
resp = requests.get(path_conjurcloud, timeout=30, **lookup_kwargs)
raise_for_status(resp)
return resp.text
| {"golden_diff": "diff --git a/awx/main/credential_plugins/conjur.py b/awx/main/credential_plugins/conjur.py\n--- a/awx/main/credential_plugins/conjur.py\n+++ b/awx/main/credential_plugins/conjur.py\n@@ -70,7 +70,8 @@\n auth_kwargs['verify'] = cert\n try:\n resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)\n- except requests.exceptions.ConnectionError:\n+ resp.raise_for_status()\n+ except requests.exceptions.HTTPError:\n resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)\n raise_for_status(resp)\n token = resp.content.decode('utf-8')\n@@ -92,7 +93,8 @@\n lookup_kwargs['verify'] = cert\n try:\n resp = requests.get(path, timeout=30, **lookup_kwargs)\n- except requests.exceptions.ConnectionError:\n+ resp.raise_for_status()\n+ except requests.exceptions.HTTPError:\n resp = requests.get(path_conjurcloud, timeout=30, **lookup_kwargs)\n raise_for_status(resp)\n return resp.text\n", "issue": "Exception handling for Conjur Secrets Manager Lookup not working properly\n### Please confirm the following\n\n- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).\n- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.\n- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.\n\n### Bug Summary\n\nWhen using the Conjur Secrets Manager Lookup credential type against Conjur Cloud, exception handling does not properly switch to the proper URI endpoint automatically. Any attempts currently fail with HTTP 404.\n\n### AWX version\n\n0.1.dev32763+g3d73b80\n\n### Select the relevant components\n\n- [X] UI\n- [ ] API\n- [ ] Docs\n- [ ] Collection\n- [ ] CLI\n- [ ] Other\n\n### Installation method\n\ndocker development environment\n\n### Modifications\n\nno\n\n### Ansible version\n\n_No response_\n\n### Operating system\n\n_No response_\n\n### Web browser\n\n_No response_\n\n### Steps to reproduce\n\n1. Select \"CyberArk Conjur Secrets Manager Lookup\" as the credential type for a new credential object in AWX.\r\n2. Configure for a Conjur Cloud tenant.\r\n3. 
Run \"Test\" and a failure with \"HTTP 404\" is returned.\n\n### Expected results\n\nThe test should pass.\n\n### Actual results\n\nThe test fails with an HTTP 404 error code.\n\n### Additional information\n\nThis fix is being implemented in a PR within moments.\n", "before_files": [{"content": "from .plugin import CredentialPlugin, CertFiles, raise_for_status\n\nfrom urllib.parse import urljoin, quote\n\nfrom django.utils.translation import gettext_lazy as _\nimport requests\n\n\nconjur_inputs = {\n 'fields': [\n {\n 'id': 'url',\n 'label': _('Conjur URL'),\n 'type': 'string',\n 'format': 'url',\n },\n {\n 'id': 'api_key',\n 'label': _('API Key'),\n 'type': 'string',\n 'secret': True,\n },\n {\n 'id': 'account',\n 'label': _('Account'),\n 'type': 'string',\n },\n {\n 'id': 'username',\n 'label': _('Username'),\n 'type': 'string',\n },\n {'id': 'cacert', 'label': _('Public Key Certificate'), 'type': 'string', 'multiline': True},\n ],\n 'metadata': [\n {\n 'id': 'secret_path',\n 'label': _('Secret Identifier'),\n 'type': 'string',\n 'help_text': _('The identifier for the secret e.g., /some/identifier'),\n },\n {\n 'id': 'secret_version',\n 'label': _('Secret Version'),\n 'type': 'string',\n 'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),\n },\n ],\n 'required': ['url', 'api_key', 'account', 'username'],\n}\n\n\ndef conjur_backend(**kwargs):\n url = kwargs['url']\n api_key = kwargs['api_key']\n account = quote(kwargs['account'], safe='')\n username = quote(kwargs['username'], safe='')\n secret_path = quote(kwargs['secret_path'], safe='')\n version = kwargs.get('secret_version')\n cacert = kwargs.get('cacert', None)\n\n auth_kwargs = {\n 'headers': {'Content-Type': 'text/plain', 'Accept-Encoding': 'base64'},\n 'data': api_key,\n 'allow_redirects': False,\n }\n\n with CertFiles(cacert) as cert:\n # https://www.conjur.org/api.html#authentication-authenticate-post\n auth_kwargs['verify'] = cert\n try:\n resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)\n except requests.exceptions.ConnectionError:\n resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)\n raise_for_status(resp)\n token = resp.content.decode('utf-8')\n\n lookup_kwargs = {\n 'headers': {'Authorization': 'Token token=\"{}\"'.format(token)},\n 'allow_redirects': False,\n }\n\n # https://www.conjur.org/api.html#secrets-retrieve-a-secret-get\n path = urljoin(url, '/'.join(['secrets', account, 'variable', secret_path]))\n path_conjurcloud = urljoin(url, '/'.join(['api', 'secrets', account, 'variable', secret_path]))\n if version:\n ver = \"version={}\".format(version)\n path = '?'.join([path, ver])\n path_conjurcloud = '?'.join([path_conjurcloud, ver])\n\n with CertFiles(cacert) as cert:\n lookup_kwargs['verify'] = cert\n try:\n resp = requests.get(path, timeout=30, **lookup_kwargs)\n except requests.exceptions.ConnectionError:\n resp = requests.get(path_conjurcloud, timeout=30, **lookup_kwargs)\n raise_for_status(resp)\n return resp.text\n\n\nconjur_plugin = CredentialPlugin('CyberArk Conjur Secrets Manager Lookup', inputs=conjur_inputs, backend=conjur_backend)\n", "path": "awx/main/credential_plugins/conjur.py"}], "after_files": [{"content": "from .plugin import CredentialPlugin, CertFiles, raise_for_status\n\nfrom urllib.parse import urljoin, quote\n\nfrom django.utils.translation import gettext_lazy as _\nimport requests\n\n\nconjur_inputs = {\n 'fields': 
[\n {\n 'id': 'url',\n 'label': _('Conjur URL'),\n 'type': 'string',\n 'format': 'url',\n },\n {\n 'id': 'api_key',\n 'label': _('API Key'),\n 'type': 'string',\n 'secret': True,\n },\n {\n 'id': 'account',\n 'label': _('Account'),\n 'type': 'string',\n },\n {\n 'id': 'username',\n 'label': _('Username'),\n 'type': 'string',\n },\n {'id': 'cacert', 'label': _('Public Key Certificate'), 'type': 'string', 'multiline': True},\n ],\n 'metadata': [\n {\n 'id': 'secret_path',\n 'label': _('Secret Identifier'),\n 'type': 'string',\n 'help_text': _('The identifier for the secret e.g., /some/identifier'),\n },\n {\n 'id': 'secret_version',\n 'label': _('Secret Version'),\n 'type': 'string',\n 'help_text': _('Used to specify a specific secret version (if left empty, the latest version will be used).'),\n },\n ],\n 'required': ['url', 'api_key', 'account', 'username'],\n}\n\n\ndef conjur_backend(**kwargs):\n url = kwargs['url']\n api_key = kwargs['api_key']\n account = quote(kwargs['account'], safe='')\n username = quote(kwargs['username'], safe='')\n secret_path = quote(kwargs['secret_path'], safe='')\n version = kwargs.get('secret_version')\n cacert = kwargs.get('cacert', None)\n\n auth_kwargs = {\n 'headers': {'Content-Type': 'text/plain', 'Accept-Encoding': 'base64'},\n 'data': api_key,\n 'allow_redirects': False,\n }\n\n with CertFiles(cacert) as cert:\n # https://www.conjur.org/api.html#authentication-authenticate-post\n auth_kwargs['verify'] = cert\n try:\n resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)\n resp.raise_for_status()\n except requests.exceptions.HTTPError:\n resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)\n raise_for_status(resp)\n token = resp.content.decode('utf-8')\n\n lookup_kwargs = {\n 'headers': {'Authorization': 'Token token=\"{}\"'.format(token)},\n 'allow_redirects': False,\n }\n\n # https://www.conjur.org/api.html#secrets-retrieve-a-secret-get\n path = urljoin(url, '/'.join(['secrets', account, 'variable', secret_path]))\n path_conjurcloud = urljoin(url, '/'.join(['api', 'secrets', account, 'variable', secret_path]))\n if version:\n ver = \"version={}\".format(version)\n path = '?'.join([path, ver])\n path_conjurcloud = '?'.join([path_conjurcloud, ver])\n\n with CertFiles(cacert) as cert:\n lookup_kwargs['verify'] = cert\n try:\n resp = requests.get(path, timeout=30, **lookup_kwargs)\n resp.raise_for_status()\n except requests.exceptions.HTTPError:\n resp = requests.get(path_conjurcloud, timeout=30, **lookup_kwargs)\n raise_for_status(resp)\n return resp.text\n\n\nconjur_plugin = CredentialPlugin('CyberArk Conjur Secrets Manager Lookup', inputs=conjur_inputs, backend=conjur_backend)\n", "path": "awx/main/credential_plugins/conjur.py"}]} | 1,621 | 268 |
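The golden diff in the row above switches the Conjur Cloud fallback from catching `ConnectionError` to checking the HTTP status explicitly: the cloud tenant answers the legacy path with a 404 response rather than refusing the connection, so the old handler never fired. A minimal sketch of that try-primary-then-fall-back pattern, assuming a hypothetical `get_with_fallback` helper and example URLs (neither the name nor the hosts come from the AWX code shown here):

```python
import requests


def get_with_fallback(primary_url: str, fallback_url: str, **kwargs) -> requests.Response:
    """Try the self-hosted endpoint first; if it answers with an HTTP error
    (e.g. 404 from a Conjur Cloud tenant), retry the '/api'-prefixed path.
    Helper name and URLs are illustrative assumptions, not AWX code."""
    try:
        resp = requests.get(primary_url, timeout=30, **kwargs)
        resp.raise_for_status()  # turn 4xx/5xx into HTTPError so the except clause can fall back
        return resp
    except requests.exceptions.HTTPError:
        resp = requests.get(fallback_url, timeout=30, **kwargs)
        resp.raise_for_status()
        return resp


# Hypothetical usage:
# resp = get_with_fallback(
#     "https://conjur.example.com/secrets/acct/variable/my%2Fsecret",
#     "https://conjur.example.com/api/secrets/acct/variable/my%2Fsecret",
#     headers={"Authorization": 'Token token="..."'},
# )
```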
gh_patches_debug_15683 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-2556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ME failing since at least 2018-10-25
ME has been failing since 2018-10-25
Based on automated runs it appears that ME has not run successfully in 2 days (2018-10-25).
```
loaded Open States pupa settings...
me (scrape, import)
bills: {}
Traceback (most recent call last):
File "/opt/openstates/venv-pupa//bin/pupa", line 11, in <module>
load_entry_point('pupa', 'console_scripts', 'pupa')()
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py", line 68, in main
subcommands[args.subcommand].handle(args, other)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 301, in do_handle
self.check_session_list(juris)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 228, in check_session_list
scraped_sessions = juris.get_session_list()
File "/opt/openstates/openstates/openstates/me/__init__.py", line 98, in get_session_list
sessions.remove('jb-Test')
ValueError: list.remove(x): x not in list
```
Visit http://bobsled.openstates.org for more info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/me/__init__.py`
Content:
```
1 from pupa.scrape import Jurisdiction, Organization
2 from openstates.utils import url_xpath
3
4 from .bills import MEBillScraper
5 # from .people import MEPersonScraper
6 # from .committees import MECommitteeScraper
7
8
9 class Maine(Jurisdiction):
10 division_id = "ocd-division/country:us/state:me"
11 classification = "government"
12 name = "Maine"
13 url = "http://legislature.maine.gov"
14 scrapers = {
15 'bills': MEBillScraper,
16 # 'people': MEPersonScraper,
17 # 'committees': MECommitteeScraper,
18 }
19 legislative_sessions = [
20 {
21 "_scraped_name": "121st Legislature",
22 "identifier": "121",
23 "name": "121st Legislature (2003-2004)"
24 },
25 {
26 "_scraped_name": "122nd Legislature",
27 "identifier": "122",
28 "name": "122nd Legislature (2005-2006)"
29 },
30 {
31 "_scraped_name": "123rd Legislature",
32 "identifier": "123",
33 "name": "123rd Legislature (2007-2008)"
34 },
35 {
36 "_scraped_name": "124th Legislature",
37 "identifier": "124",
38 "name": "124th Legislature (2009-2010)"
39 },
40 {
41 "_scraped_name": "125th Legislature",
42 "identifier": "125",
43 "name": "125th Legislature (2011-2012)"
44 },
45 {
46 "_scraped_name": "126th Legislature",
47 "identifier": "126",
48 "name": "126th Legislature (2013-2014)"
49 },
50 {
51 "_scraped_name": "127th Legislature",
52 "identifier": "127",
53 "name": "127th Legislature (2015-2016)"
54 },
55 {
56 "_scraped_name": "128th Legislature",
57 "identifier": "128",
58 "name": "128th Legislature (2017-2018)",
59 "start_date": "2016-12-07",
60 "end_date": "2017-06-14",
61 }
62 ]
63 ignored_scraped_sessions = []
64
65 def get_organizations(self):
66 legislature_name = "Maine Legislature"
67 lower_chamber_name = "House"
68 lower_seats = 151
69 lower_title = "Representative"
70 upper_chamber_name = "Senate"
71 upper_seats = 35
72 upper_title = "Senator"
73
74 legislature = Organization(name=legislature_name,
75 classification="legislature")
76 upper = Organization(upper_chamber_name, classification='upper',
77 parent_id=legislature._id)
78 lower = Organization(lower_chamber_name, classification='lower',
79 parent_id=legislature._id)
80
81 for n in range(1, upper_seats + 1):
82 upper.add_post(
83 label=str(n), role=upper_title,
84 division_id='{}/sldu:{}'.format(self.division_id, n))
85 for n in range(1, lower_seats + 1):
86 lower.add_post(
87 label=str(n), role=lower_title,
88 division_id='{}/sldl:{}'.format(self.division_id, n))
89
90 yield legislature
91 yield Organization(name='Office of the Governor', classification='executive')
92 yield upper
93 yield lower
94
95 def get_session_list(self):
96 sessions = url_xpath('http://www.mainelegislature.org/LawMakerWeb/advancedsearch.asp',
97 '//select[@name="LegSession"]/option/text()')
98 sessions.remove('jb-Test')
99 sessions.remove('2001-2002')
100 return sessions
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/me/__init__.py b/openstates/me/__init__.py
--- a/openstates/me/__init__.py
+++ b/openstates/me/__init__.py
@@ -60,7 +60,9 @@
"end_date": "2017-06-14",
}
]
- ignored_scraped_sessions = []
+ ignored_scraped_sessions = [
+ '2001-2002'
+ ]
def get_organizations(self):
legislature_name = "Maine Legislature"
@@ -95,6 +97,4 @@
def get_session_list(self):
sessions = url_xpath('http://www.mainelegislature.org/LawMakerWeb/advancedsearch.asp',
'//select[@name="LegSession"]/option/text()')
- sessions.remove('jb-Test')
- sessions.remove('2001-2002')
return sessions
| {"golden_diff": "diff --git a/openstates/me/__init__.py b/openstates/me/__init__.py\n--- a/openstates/me/__init__.py\n+++ b/openstates/me/__init__.py\n@@ -60,7 +60,9 @@\n \"end_date\": \"2017-06-14\",\n }\n ]\n- ignored_scraped_sessions = []\n+ ignored_scraped_sessions = [\n+ '2001-2002'\n+ ]\n \n def get_organizations(self):\n legislature_name = \"Maine Legislature\"\n@@ -95,6 +97,4 @@\n def get_session_list(self):\n sessions = url_xpath('http://www.mainelegislature.org/LawMakerWeb/advancedsearch.asp',\n '//select[@name=\"LegSession\"]/option/text()')\n- sessions.remove('jb-Test')\n- sessions.remove('2001-2002')\n return sessions\n", "issue": "ME failing since at least 2018-10-25\nME has been failing since 2018-10-25\n\nBased on automated runs it appears that ME has not run successfully in 2 days (2018-10-25).\n\n\n```\n loaded Open States pupa settings...\nme (scrape, import)\n bills: {}\nTraceback (most recent call last):\n File \"/opt/openstates/venv-pupa//bin/pupa\", line 11, in <module>\n load_entry_point('pupa', 'console_scripts', 'pupa')()\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py\", line 68, in main\n subcommands[args.subcommand].handle(args, other)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 260, in handle\n return self.do_handle(args, other, juris)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 301, in do_handle\n self.check_session_list(juris)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 228, in check_session_list\n scraped_sessions = juris.get_session_list()\n File \"/opt/openstates/openstates/openstates/me/__init__.py\", line 98, in get_session_list\n sessions.remove('jb-Test')\nValueError: list.remove(x): x not in list\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\nfrom openstates.utils import url_xpath\n\nfrom .bills import MEBillScraper\n# from .people import MEPersonScraper\n# from .committees import MECommitteeScraper\n\n\nclass Maine(Jurisdiction):\n division_id = \"ocd-division/country:us/state:me\"\n classification = \"government\"\n name = \"Maine\"\n url = \"http://legislature.maine.gov\"\n scrapers = {\n 'bills': MEBillScraper,\n # 'people': MEPersonScraper,\n # 'committees': MECommitteeScraper,\n }\n legislative_sessions = [\n {\n \"_scraped_name\": \"121st Legislature\",\n \"identifier\": \"121\",\n \"name\": \"121st Legislature (2003-2004)\"\n },\n {\n \"_scraped_name\": \"122nd Legislature\",\n \"identifier\": \"122\",\n \"name\": \"122nd Legislature (2005-2006)\"\n },\n {\n \"_scraped_name\": \"123rd Legislature\",\n \"identifier\": \"123\",\n \"name\": \"123rd Legislature (2007-2008)\"\n },\n {\n \"_scraped_name\": \"124th Legislature\",\n \"identifier\": \"124\",\n \"name\": \"124th Legislature (2009-2010)\"\n },\n {\n \"_scraped_name\": \"125th Legislature\",\n \"identifier\": \"125\",\n \"name\": \"125th Legislature (2011-2012)\"\n },\n {\n \"_scraped_name\": \"126th Legislature\",\n \"identifier\": \"126\",\n \"name\": \"126th Legislature (2013-2014)\"\n },\n {\n \"_scraped_name\": \"127th Legislature\",\n \"identifier\": \"127\",\n \"name\": \"127th Legislature (2015-2016)\"\n },\n {\n \"_scraped_name\": \"128th Legislature\",\n \"identifier\": \"128\",\n \"name\": \"128th Legislature (2017-2018)\",\n \"start_date\": \"2016-12-07\",\n \"end_date\": \"2017-06-14\",\n }\n ]\n ignored_scraped_sessions = []\n\n def 
get_organizations(self):\n legislature_name = \"Maine Legislature\"\n lower_chamber_name = \"House\"\n lower_seats = 151\n lower_title = \"Representative\"\n upper_chamber_name = \"Senate\"\n upper_seats = 35\n upper_title = \"Senator\"\n\n legislature = Organization(name=legislature_name,\n classification=\"legislature\")\n upper = Organization(upper_chamber_name, classification='upper',\n parent_id=legislature._id)\n lower = Organization(lower_chamber_name, classification='lower',\n parent_id=legislature._id)\n\n for n in range(1, upper_seats + 1):\n upper.add_post(\n label=str(n), role=upper_title,\n division_id='{}/sldu:{}'.format(self.division_id, n))\n for n in range(1, lower_seats + 1):\n lower.add_post(\n label=str(n), role=lower_title,\n division_id='{}/sldl:{}'.format(self.division_id, n))\n\n yield legislature\n yield Organization(name='Office of the Governor', classification='executive')\n yield upper\n yield lower\n\n def get_session_list(self):\n sessions = url_xpath('http://www.mainelegislature.org/LawMakerWeb/advancedsearch.asp',\n '//select[@name=\"LegSession\"]/option/text()')\n sessions.remove('jb-Test')\n sessions.remove('2001-2002')\n return sessions\n", "path": "openstates/me/__init__.py"}], "after_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\nfrom openstates.utils import url_xpath\n\nfrom .bills import MEBillScraper\n# from .people import MEPersonScraper\n# from .committees import MECommitteeScraper\n\n\nclass Maine(Jurisdiction):\n division_id = \"ocd-division/country:us/state:me\"\n classification = \"government\"\n name = \"Maine\"\n url = \"http://legislature.maine.gov\"\n scrapers = {\n 'bills': MEBillScraper,\n # 'people': MEPersonScraper,\n # 'committees': MECommitteeScraper,\n }\n legislative_sessions = [\n {\n \"_scraped_name\": \"121st Legislature\",\n \"identifier\": \"121\",\n \"name\": \"121st Legislature (2003-2004)\"\n },\n {\n \"_scraped_name\": \"122nd Legislature\",\n \"identifier\": \"122\",\n \"name\": \"122nd Legislature (2005-2006)\"\n },\n {\n \"_scraped_name\": \"123rd Legislature\",\n \"identifier\": \"123\",\n \"name\": \"123rd Legislature (2007-2008)\"\n },\n {\n \"_scraped_name\": \"124th Legislature\",\n \"identifier\": \"124\",\n \"name\": \"124th Legislature (2009-2010)\"\n },\n {\n \"_scraped_name\": \"125th Legislature\",\n \"identifier\": \"125\",\n \"name\": \"125th Legislature (2011-2012)\"\n },\n {\n \"_scraped_name\": \"126th Legislature\",\n \"identifier\": \"126\",\n \"name\": \"126th Legislature (2013-2014)\"\n },\n {\n \"_scraped_name\": \"127th Legislature\",\n \"identifier\": \"127\",\n \"name\": \"127th Legislature (2015-2016)\"\n },\n {\n \"_scraped_name\": \"128th Legislature\",\n \"identifier\": \"128\",\n \"name\": \"128th Legislature (2017-2018)\",\n \"start_date\": \"2016-12-07\",\n \"end_date\": \"2017-06-14\",\n }\n ]\n ignored_scraped_sessions = [\n '2001-2002'\n ]\n\n def get_organizations(self):\n legislature_name = \"Maine Legislature\"\n lower_chamber_name = \"House\"\n lower_seats = 151\n lower_title = \"Representative\"\n upper_chamber_name = \"Senate\"\n upper_seats = 35\n upper_title = \"Senator\"\n\n legislature = Organization(name=legislature_name,\n classification=\"legislature\")\n upper = Organization(upper_chamber_name, classification='upper',\n parent_id=legislature._id)\n lower = Organization(lower_chamber_name, classification='lower',\n parent_id=legislature._id)\n\n for n in range(1, upper_seats + 1):\n upper.add_post(\n label=str(n), role=upper_title,\n 
division_id='{}/sldu:{}'.format(self.division_id, n))\n for n in range(1, lower_seats + 1):\n lower.add_post(\n label=str(n), role=lower_title,\n division_id='{}/sldl:{}'.format(self.division_id, n))\n\n yield legislature\n yield Organization(name='Office of the Governor', classification='executive')\n yield upper\n yield lower\n\n def get_session_list(self):\n sessions = url_xpath('http://www.mainelegislature.org/LawMakerWeb/advancedsearch.asp',\n '//select[@name=\"LegSession\"]/option/text()')\n return sessions\n", "path": "openstates/me/__init__.py"}]} | 1,725 | 208 |
gh_patches_debug_3041 | rasdani/github-patches | git_diff | ethereum__web3.py-1107 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Backport 1094 to v4 branch
### What was wrong?
https://github.com/ethereum/web3.py/issues/1094#issuecomment-428259232 needs to be backported to the v4 branch.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from setuptools import (
4 find_packages,
5 setup,
6 )
7
8 extras_require = {
9 'tester': [
10 "eth-tester[py-evm]==0.1.0-beta.33",
11 "py-geth>=2.0.1,<3.0.0",
12 ],
13 'testrpc': ["eth-testrpc>=1.3.3,<2.0.0"],
14 'linter': [
15 "flake8==3.4.1",
16 "isort>=4.2.15,<5",
17 ],
18 'docs': [
19 "mock",
20 "sphinx-better-theme>=0.1.4",
21 "click>=5.1",
22 "configparser==3.5.0",
23 "contextlib2>=0.5.4",
24 #"eth-testrpc>=0.8.0",
25 #"ethereum-tester-client>=1.1.0",
26 "ethtoken",
27 "py-geth>=1.4.0",
28 "py-solc>=0.4.0",
29 "pytest>=2.7.2",
30 "sphinx",
31 "sphinx_rtd_theme>=0.1.9",
32 "toposort>=1.4",
33 "urllib3",
34 "web3>=2.1.0",
35 "wheel"
36 ],
37 'dev': [
38 "bumpversion",
39 "flaky>=3.3.0",
40 "hypothesis>=3.31.2",
41 "pytest>=3.5.0,<4",
42 "pytest-mock==1.*",
43 "pytest-pythonpath>=0.3",
44 "pytest-watch==4.*",
45 "pytest-xdist==1.*",
46 "tox>=1.8.0",
47 "tqdm",
48 "when-changed"
49 ]
50 }
51
52 extras_require['dev'] = (
53 extras_require['tester'] +
54 extras_require['linter'] +
55 extras_require['docs'] +
56 extras_require['dev']
57 )
58
59 setup(
60 name='web3',
61 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
62 version='4.7.2',
63 description="""Web3.py""",
64 long_description_markdown_filename='README.md',
65 author='Piper Merriam',
66 author_email='[email protected]',
67 url='https://github.com/ethereum/web3.py',
68 include_package_data=True,
69 install_requires=[
70 "toolz>=0.9.0,<1.0.0;implementation_name=='pypy'",
71 "cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'",
72 "eth-abi>=1.2.0,<2.0.0",
73 "eth-account>=0.2.1,<0.4.0",
74 "eth-utils>=1.2.0,<2.0.0",
75 "hexbytes>=0.1.0,<1.0.0",
76 "lru-dict>=1.1.6,<2.0.0",
77 "eth-hash[pycryptodome]>=0.2.0,<1.0.0",
78 "requests>=2.16.0,<3.0.0",
79 "websockets>=6.0.0,<7.0.0",
80 "pypiwin32>=223;platform_system=='Windows'",
81 ],
82 setup_requires=['setuptools-markdown'],
83 python_requires='>=3.5, <4',
84 extras_require=extras_require,
85 py_modules=['web3', 'ens'],
86 license="MIT",
87 zip_safe=False,
88 keywords='ethereum',
89 packages=find_packages(exclude=["tests", "tests.*"]),
90 classifiers=[
91 'Development Status :: 5 - Production/Stable',
92 'Intended Audience :: Developers',
93 'License :: OSI Approved :: MIT License',
94 'Natural Language :: English',
95 'Programming Language :: Python :: 3',
96 'Programming Language :: Python :: 3.5',
97 'Programming Language :: Python :: 3.6',
98 ],
99 )
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -80,7 +80,7 @@
"pypiwin32>=223;platform_system=='Windows'",
],
setup_requires=['setuptools-markdown'],
- python_requires='>=3.5, <4',
+ python_requires='>=3.5.3,<4',
extras_require=extras_require,
py_modules=['web3', 'ens'],
license="MIT",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -80,7 +80,7 @@\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n- python_requires='>=3.5, <4',\n+ python_requires='>=3.5.3,<4',\n extras_require=extras_require,\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n", "issue": "Backport 1094 to v4 branch\n### What was wrong?\r\n\r\nhttps://github.com/ethereum/web3.py/issues/1094#issuecomment-428259232 needs to be backported to the v4 branch.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.33\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n #\"eth-testrpc>=0.8.0\",\n #\"ethereum-tester-client>=1.1.0\",\n \"ethtoken\",\n \"py-geth>=1.4.0\",\n \"py-solc>=0.4.0\",\n \"pytest>=2.7.2\",\n \"sphinx\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"urllib3\",\n \"web3>=2.1.0\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.3.0\",\n \"hypothesis>=3.31.2\",\n \"pytest>=3.5.0,<4\",\n \"pytest-mock==1.*\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch==4.*\",\n \"pytest-xdist==1.*\",\n \"tox>=1.8.0\",\n \"tqdm\",\n \"when-changed\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester'] +\n extras_require['linter'] +\n extras_require['docs'] +\n extras_require['dev']\n)\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.7.2',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.2.0,<2.0.0\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.2.0,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require=extras_require,\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.33\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n 
\"contextlib2>=0.5.4\",\n #\"eth-testrpc>=0.8.0\",\n #\"ethereum-tester-client>=1.1.0\",\n \"ethtoken\",\n \"py-geth>=1.4.0\",\n \"py-solc>=0.4.0\",\n \"pytest>=2.7.2\",\n \"sphinx\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"urllib3\",\n \"web3>=2.1.0\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.3.0\",\n \"hypothesis>=3.31.2\",\n \"pytest>=3.5.0,<4\",\n \"pytest-mock==1.*\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch==4.*\",\n \"pytest-xdist==1.*\",\n \"tox>=1.8.0\",\n \"tqdm\",\n \"when-changed\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester'] +\n extras_require['linter'] +\n extras_require['docs'] +\n extras_require['dev']\n)\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.7.2',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.2.0,<2.0.0\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.2.0,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5.3,<4',\n extras_require=extras_require,\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}]} | 1,409 | 109 |
gh_patches_debug_11266 | rasdani/github-patches | git_diff | marshmallow-code__webargs-943 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Testsuite fails under pytest==8.2.0 with `'AsyncHTTPTestCase' has no attribute 'runTest'`
We currently have some test failures in basically all python versions, starting ~2 days ago. At first glance, it looks like an issue with `tornado`'s `AsyncHTTPTestCase`, but `tornado` doesn't have a recent release.
Looking at what projects updated recently, I flagged `pytest` as a good candidate for investigation, and testing with `pytest=8.1.2` works fine. So something related to unittest TestCases changed in 8.2.0 in a way that breaks tornado tests.
For reference, here's one of the error traces:
```
____________________ ERROR collecting tests/test_tornadoparser.py ____________________
.tox/py311/lib/python3.11/site-packages/tornado/testing.py:180: in __init__
setattr(self, methodName, _TestMethodWrapper(getattr(self, methodName)))
E AttributeError: 'AsyncHTTPTestCase' object has no attribute 'runTest'
____________________ ERROR collecting tests/test_tornadoparser.py ____________________
.tox/py311/lib/python3.11/site-packages/tornado/testing.py:180: in __init__
setattr(self, methodName, _TestMethodWrapper(getattr(self, methodName)))
E AttributeError: 'TestApp' object has no attribute 'runTest'
____________________ ERROR collecting tests/test_tornadoparser.py ____________________
.tox/py311/lib/python3.11/site-packages/tornado/testing.py:180: in __init__
setattr(self, methodName, _TestMethodWrapper(getattr(self, methodName)))
E AttributeError: 'TestValidateApp' object has no attribute 'runTest'
```
`runTest` looks like a dynamically created method in unittest, whose name gets passed on init to TestCase.
I started looking at changes to `pytest`'s unittest module ( https://github.com/pytest-dev/pytest/blame/main/src/_pytest/unittest.py ), but I'm out of time for digging into this right now.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/webargs/__init__.py`
Content:
```
1 from __future__ import annotations
2
3 import importlib.metadata
4
5 # Make marshmallow's validation functions importable from webargs
6 from marshmallow import validate
7 from marshmallow.utils import missing
8 from packaging.version import Version
9
10 from webargs import fields
11 from webargs.core import ValidationError
12
13 # TODO: Deprecate __version__ et al.
14 __version__ = importlib.metadata.version("webargs")
15 __parsed_version__ = Version(__version__)
16 __version_info__: tuple[int, int, int] | tuple[int, int, int, str, int] = (
17 __parsed_version__.release
18 ) # type: ignore[assignment]
19 if __parsed_version__.pre:
20 __version_info__ += __parsed_version__.pre # type: ignore[assignment]
21 __all__ = ("ValidationError", "fields", "missing", "validate")
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/webargs/__init__.py b/src/webargs/__init__.py
--- a/src/webargs/__init__.py
+++ b/src/webargs/__init__.py
@@ -14,8 +14,8 @@
__version__ = importlib.metadata.version("webargs")
__parsed_version__ = Version(__version__)
__version_info__: tuple[int, int, int] | tuple[int, int, int, str, int] = (
- __parsed_version__.release
-) # type: ignore[assignment]
+ __parsed_version__.release # type: ignore[assignment]
+)
if __parsed_version__.pre:
__version_info__ += __parsed_version__.pre # type: ignore[assignment]
__all__ = ("ValidationError", "fields", "missing", "validate")
| {"golden_diff": "diff --git a/src/webargs/__init__.py b/src/webargs/__init__.py\n--- a/src/webargs/__init__.py\n+++ b/src/webargs/__init__.py\n@@ -14,8 +14,8 @@\n __version__ = importlib.metadata.version(\"webargs\")\n __parsed_version__ = Version(__version__)\n __version_info__: tuple[int, int, int] | tuple[int, int, int, str, int] = (\n- __parsed_version__.release\n-) # type: ignore[assignment]\n+ __parsed_version__.release # type: ignore[assignment]\n+)\n if __parsed_version__.pre:\n __version_info__ += __parsed_version__.pre # type: ignore[assignment]\n __all__ = (\"ValidationError\", \"fields\", \"missing\", \"validate\")\n", "issue": "Testsuite fails under pytest==8.2.0 with `'AsyncHTTPTestCase' has no attribute 'runTest'`\nWe currently have some test failures in basically all python versions, starting ~2 days ago. At first glance, it looks like an issue with `tornado`'s `AsyncHTTPTestCase`, but `tornado` doesn't have a recent release.\r\n\r\nLooking at what projects updated recently, I flagged `pytest` as a good candidate for investigation, and testing with `pytest=8.1.2` works fine. So something related to unittest TestCases changed in 8.2.0 in a way that breaks tornado tests.\r\n\r\nFor reference, here's one of the error traces:\r\n```\r\n____________________ ERROR collecting tests/test_tornadoparser.py ____________________\r\n.tox/py311/lib/python3.11/site-packages/tornado/testing.py:180: in __init__\r\n setattr(self, methodName, _TestMethodWrapper(getattr(self, methodName)))\r\nE AttributeError: 'AsyncHTTPTestCase' object has no attribute 'runTest'\r\n____________________ ERROR collecting tests/test_tornadoparser.py ____________________\r\n.tox/py311/lib/python3.11/site-packages/tornado/testing.py:180: in __init__\r\n setattr(self, methodName, _TestMethodWrapper(getattr(self, methodName)))\r\nE AttributeError: 'TestApp' object has no attribute 'runTest'\r\n____________________ ERROR collecting tests/test_tornadoparser.py ____________________\r\n.tox/py311/lib/python3.11/site-packages/tornado/testing.py:180: in __init__\r\n setattr(self, methodName, _TestMethodWrapper(getattr(self, methodName)))\r\nE AttributeError: 'TestValidateApp' object has no attribute 'runTest'\r\n```\r\n\r\n`runTest` looks like a dynamically created method in unittest, whose name gets passed on init to TestCase.\r\nI started looking at changes to `pytest`'s unittest module ( https://github.com/pytest-dev/pytest/blame/main/src/_pytest/unittest.py ), but I'm out of time for digging into this right now.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport importlib.metadata\n\n# Make marshmallow's validation functions importable from webargs\nfrom marshmallow import validate\nfrom marshmallow.utils import missing\nfrom packaging.version import Version\n\nfrom webargs import fields\nfrom webargs.core import ValidationError\n\n# TODO: Deprecate __version__ et al.\n__version__ = importlib.metadata.version(\"webargs\")\n__parsed_version__ = Version(__version__)\n__version_info__: tuple[int, int, int] | tuple[int, int, int, str, int] = (\n __parsed_version__.release\n) # type: ignore[assignment]\nif __parsed_version__.pre:\n __version_info__ += __parsed_version__.pre # type: ignore[assignment]\n__all__ = (\"ValidationError\", \"fields\", \"missing\", \"validate\")\n", "path": "src/webargs/__init__.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport importlib.metadata\n\n# Make marshmallow's validation functions importable from webargs\nfrom marshmallow 
import validate\nfrom marshmallow.utils import missing\nfrom packaging.version import Version\n\nfrom webargs import fields\nfrom webargs.core import ValidationError\n\n# TODO: Deprecate __version__ et al.\n__version__ = importlib.metadata.version(\"webargs\")\n__parsed_version__ = Version(__version__)\n__version_info__: tuple[int, int, int] | tuple[int, int, int, str, int] = (\n __parsed_version__.release # type: ignore[assignment]\n)\nif __parsed_version__.pre:\n __version_info__ += __parsed_version__.pre # type: ignore[assignment]\n__all__ = (\"ValidationError\", \"fields\", \"missing\", \"validate\")\n", "path": "src/webargs/__init__.py"}]} | 907 | 174 |
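The webargs diff in this row only repositions a `type: ignore` comment, but the surrounding module is a compact illustration of deriving a `version_info`-style tuple from `packaging.version.Version`. A self-contained sketch of that construction (the function name is an assumption; the `release` and `pre` attributes are the actual packaging API):

```python
from packaging.version import Version


def version_info(version_string: str) -> tuple:
    """Numeric release segment plus the pre-release pair when present."""
    parsed = Version(version_string)
    info = parsed.release          # e.g. (8, 4, 0)
    if parsed.pre:                 # e.g. ('b', 1) for '8.4.0b1'
        info += parsed.pre
    return info


print(version_info("8.4.0"))    # (8, 4, 0)
print(version_info("8.4.0b1"))  # (8, 4, 0, 'b', 1)
```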
gh_patches_debug_22777 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-1370 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NC - Committee assignments missing for a house member
State: NC
There may be more, I just happen to find this today. Jay Adams is missing committee assignments. The API does not show any in roles.
https://openstates.org/api/v1/legislators/NCL000338/
The NCGA site shows committee assignments here: http://www.ncleg.net/gascripts/members/reports/committeeAssignments.pl?nUserid=697&Chamber=H
There may be something broken and it may affect more members in NC.
I have assignments for others. Have not determined what is different between Adams and the others
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/nc/committees.py`
Content:
```
1 from billy.scrape.committees import CommitteeScraper, Committee
2
3 import lxml.html
4
5 class NCCommitteeScraper(CommitteeScraper):
6 jurisdiction = 'nc'
7
8 def scrape_committee(self, committee, url):
9 url = url.replace(' ', '%20') + '&bPrintable=true'
10 data = self.get(url).text
11 doc = lxml.html.fromstring(data)
12 for row in doc.xpath('//table/tr'):
13 children = row.getchildren()
14 if len(children) != 2:
15 self.log('skipping members for ' + committee['committee'])
16 continue
17 mtype, members = row.getchildren()
18 if mtype.text == 'Members':
19 for m in members.getchildren():
20 committee.add_member(m.text)
21 else:
22 committee.add_member(members.text_content(), mtype.text)
23
24
25 def scrape(self, term, chambers):
26 base_url = 'http://www.ncga.state.nc.us/gascripts/Committees/Committees.asp?bPrintable=true&sAction=ViewCommitteeType&sActionDetails='
27
28 chamber_slugs = {'upper': ['Senate%20Standing', 'Senate%20Select'],
29 'lower': ['House%20Standing', 'House%20Select']}
30
31 for chamber in chambers:
32 for ctype in chamber_slugs[chamber]:
33 data = self.get(base_url + ctype).text
34 doc = lxml.html.fromstring(data)
35 doc.make_links_absolute(base_url+ctype)
36 for comm in doc.xpath('//ul/li/a'):
37 name = comm.text
38 # skip committee of whole Senate
39 if 'Whole Senate' in name:
40 continue
41 url = comm.get('href')
42 committee = Committee(chamber, name)
43 self.scrape_committee(committee, url)
44 committee.add_source(url)
45 if not committee['members']:
46 self.warning('empty committee: %s', name)
47 else:
48 self.save_committee(committee)
49
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/nc/committees.py b/openstates/nc/committees.py
--- a/openstates/nc/committees.py
+++ b/openstates/nc/committees.py
@@ -17,10 +17,23 @@
mtype, members = row.getchildren()
if mtype.text == 'Members':
for m in members.getchildren():
- committee.add_member(m.text)
+ member_name = self._clean_member_name(m.text)
+ committee.add_member(member_name)
else:
- committee.add_member(members.text_content(), mtype.text)
+ member_name = self._clean_member_name(members.text_content())
+ committee.add_member(member_name, mtype.text)
+ def _clean_member_name(self, name):
+ """Names are displayed as "Office. LastName", e.g. "Rep. Adamsa". This strips the "Office. "
+
+ This helps the NameMatcher in billy link this to the correct legislator.
+ """
+ for prefix in ['Rep. ', 'Sen. ']:
+ if name.startswith(prefix):
+ return name.replace(prefix, '')
+
+ # If none hit, return the name as is
+ return name
def scrape(self, term, chambers):
base_url = 'http://www.ncga.state.nc.us/gascripts/Committees/Committees.asp?bPrintable=true&sAction=ViewCommitteeType&sActionDetails='
| {"golden_diff": "diff --git a/openstates/nc/committees.py b/openstates/nc/committees.py\n--- a/openstates/nc/committees.py\n+++ b/openstates/nc/committees.py\n@@ -17,10 +17,23 @@\n mtype, members = row.getchildren()\n if mtype.text == 'Members':\n for m in members.getchildren():\n- committee.add_member(m.text)\n+ member_name = self._clean_member_name(m.text)\n+ committee.add_member(member_name)\n else:\n- committee.add_member(members.text_content(), mtype.text)\n+ member_name = self._clean_member_name(members.text_content())\n+ committee.add_member(member_name, mtype.text)\n \n+ def _clean_member_name(self, name):\n+ \"\"\"Names are displayed as \"Office. LastName\", e.g. \"Rep. Adamsa\". This strips the \"Office. \"\n+\n+ This helps the NameMatcher in billy link this to the correct legislator.\n+ \"\"\"\n+ for prefix in ['Rep. ', 'Sen. ']:\n+ if name.startswith(prefix):\n+ return name.replace(prefix, '')\n+\n+ # If none hit, return the name as is\n+ return name\n \n def scrape(self, term, chambers):\n base_url = 'http://www.ncga.state.nc.us/gascripts/Committees/Committees.asp?bPrintable=true&sAction=ViewCommitteeType&sActionDetails='\n", "issue": "NC - Committee assignments missing for a house member\nState: NC\r\n\r\nThere may be more, I just happen to find this today. Jay Adams is missing committee assignments. The API does not show any in roles. \r\n\r\nhttps://openstates.org/api/v1/legislators/NCL000338/\r\n\r\nThe NCGA site shows committee assignments here: http://www.ncleg.net/gascripts/members/reports/committeeAssignments.pl?nUserid=697&Chamber=H\r\n\r\nThere may be something broken and it may affect more members in NC.\r\n\r\nI have assignments for others. Have not determined what is different between Adams and the others\n", "before_files": [{"content": "from billy.scrape.committees import CommitteeScraper, Committee\n\nimport lxml.html\n\nclass NCCommitteeScraper(CommitteeScraper):\n jurisdiction = 'nc'\n\n def scrape_committee(self, committee, url):\n url = url.replace(' ', '%20') + '&bPrintable=true'\n data = self.get(url).text\n doc = lxml.html.fromstring(data)\n for row in doc.xpath('//table/tr'):\n children = row.getchildren()\n if len(children) != 2:\n self.log('skipping members for ' + committee['committee'])\n continue\n mtype, members = row.getchildren()\n if mtype.text == 'Members':\n for m in members.getchildren():\n committee.add_member(m.text)\n else:\n committee.add_member(members.text_content(), mtype.text)\n\n\n def scrape(self, term, chambers):\n base_url = 'http://www.ncga.state.nc.us/gascripts/Committees/Committees.asp?bPrintable=true&sAction=ViewCommitteeType&sActionDetails='\n\n chamber_slugs = {'upper': ['Senate%20Standing', 'Senate%20Select'],\n 'lower': ['House%20Standing', 'House%20Select']}\n\n for chamber in chambers:\n for ctype in chamber_slugs[chamber]:\n data = self.get(base_url + ctype).text\n doc = lxml.html.fromstring(data)\n doc.make_links_absolute(base_url+ctype)\n for comm in doc.xpath('//ul/li/a'):\n name = comm.text\n # skip committee of whole Senate\n if 'Whole Senate' in name:\n continue\n url = comm.get('href')\n committee = Committee(chamber, name)\n self.scrape_committee(committee, url)\n committee.add_source(url)\n if not committee['members']:\n self.warning('empty committee: %s', name)\n else:\n self.save_committee(committee)\n\n", "path": "openstates/nc/committees.py"}], "after_files": [{"content": "from billy.scrape.committees import CommitteeScraper, Committee\n\nimport lxml.html\n\nclass NCCommitteeScraper(CommitteeScraper):\n 
jurisdiction = 'nc'\n\n def scrape_committee(self, committee, url):\n url = url.replace(' ', '%20') + '&bPrintable=true'\n data = self.get(url).text\n doc = lxml.html.fromstring(data)\n for row in doc.xpath('//table/tr'):\n children = row.getchildren()\n if len(children) != 2:\n self.log('skipping members for ' + committee['committee'])\n continue\n mtype, members = row.getchildren()\n if mtype.text == 'Members':\n for m in members.getchildren():\n member_name = self._clean_member_name(m.text)\n committee.add_member(member_name)\n else:\n member_name = self._clean_member_name(members.text_content())\n committee.add_member(member_name, mtype.text)\n\n def _clean_member_name(self, name):\n \"\"\"Names are displayed as \"Office. LastName\", e.g. \"Rep. Adamsa\". This strips the \"Office. \"\n\n This helps the NameMatcher in billy link this to the correct legislator.\n \"\"\"\n for prefix in ['Rep. ', 'Sen. ']:\n if name.startswith(prefix):\n return name.replace(prefix, '')\n\n # If none hit, return the name as is\n return name\n\n def scrape(self, term, chambers):\n base_url = 'http://www.ncga.state.nc.us/gascripts/Committees/Committees.asp?bPrintable=true&sAction=ViewCommitteeType&sActionDetails='\n\n chamber_slugs = {'upper': ['Senate%20Standing', 'Senate%20Select'],\n 'lower': ['House%20Standing', 'House%20Select']}\n\n for chamber in chambers:\n for ctype in chamber_slugs[chamber]:\n data = self.get(base_url + ctype).text\n doc = lxml.html.fromstring(data)\n doc.make_links_absolute(base_url+ctype)\n for comm in doc.xpath('//ul/li/a'):\n name = comm.text\n # skip committee of whole Senate\n if 'Whole Senate' in name:\n continue\n url = comm.get('href')\n committee = Committee(chamber, name)\n self.scrape_committee(committee, url)\n committee.add_source(url)\n if not committee['members']:\n self.warning('empty committee: %s', name)\n else:\n self.save_committee(committee)\n\n", "path": "openstates/nc/committees.py"}]} | 913 | 317 |