problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_1292 | rasdani/github-patches | git_diff | ray-project__ray-1471 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Travis test failures in test_catalog.py.
The Travis builds all seem to be failing in `test_catalog.py`.
I can reproduce some failures locally with `gym` version `0.9.5`.
Gym pushed a new version today, so that may be the issue https://pypi.python.org/pypi/gym.
For example,
```
$ python -m pytest python/ray/rllib/test/test_catalog.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.14, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /home/travis/build/robertnishihara/ray-private-travis/python, inifile:

collecting 0 items
collecting 5 items
collecting 5 items
collected 5 items

python/ray/rllib/test/test_catalog.py ...FF [100%]

=================================== FAILURES ===================================
____________________ ModelCatalogTest.testGymPreprocessors _____________________
self = <ray.rllib.test.test_catalog.ModelCatalogTest testMethod=testGymPreprocessors>
    def testGymPreprocessors(self):
        p1 = ModelCatalog.get_preprocessor(
            get_registry(), gym.make("CartPole-v0"))
        self.assertEqual(type(p1), NoPreprocessor)

        p2 = ModelCatalog.get_preprocessor(
>           get_registry(), gym.make("FrozenLake-v0"))
python/ray/rllib/test/test_catalog.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
python/ray/rllib/models/catalog.py:215: in get_preprocessor
    return preprocessor(env.observation_space, options)
python/ray/rllib/models/preprocessors.py:23: in __init__
    self._init()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ray.rllib.models.preprocessors.OneHotPreprocessor object at 0x7fad2df67dd0>
    def _init(self):
>       assert self._obs_space.shape == ()
E       AssertionError
python/ray/rllib/models/preprocessors.py:81: AssertionError
----------------------------- Captured stdout call -----------------------------
Observation shape is (4,)
Not using any observation preprocessor.
Observation shape is (16,)
Using one-hot preprocessor for discrete envs.
----------------------------- Captured stderr call -----------------------------
[2018-01-25 07:26:43,537] Making new env: CartPole-v0
[2018-01-25 07:26:43,540] Making new env: FrozenLake-v0
------------------------------ Captured log call -------------------------------
registration.py 120 INFO Making new env: CartPole-v0
registration.py 120 INFO Making new env: FrozenLake-v0
____________________ ModelCatalogTest.testTuplePreprocessor ____________________
self = <ray.rllib.test.test_catalog.ModelCatalogTest testMethod=testTuplePreprocessor>
    def testTuplePreprocessor(self):
        ray.init()

        class TupleEnv(object):
            def __init__(self):
                self.observation_space = Tuple(
                    [Discrete(5), Box(0, 1, shape=(3,))])
        p1 = ModelCatalog.get_preprocessor(
>           get_registry(), TupleEnv())
python/ray/rllib/test/test_catalog.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
python/ray/rllib/models/catalog.py:215: in get_preprocessor
    return preprocessor(env.observation_space, options)
python/ray/rllib/models/preprocessors.py:23: in __init__
    self._init()
python/ray/rllib/models/preprocessors.py:112: in _init
    preprocessor = get_preprocessor(space)(space, self._options)
python/ray/rllib/models/preprocessors.py:23: in __init__
    self._init()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ray.rllib.models.preprocessors.OneHotPreprocessor object at 0x7fad4ff234d0>
    def _init(self):
>       assert self._obs_space.shape == ()
E       AssertionError
python/ray/rllib/models/preprocessors.py:81: AssertionError
----------------------------- Captured stdout call -----------------------------
Waiting for redis server at 127.0.0.1:44545 to respond...
Waiting for redis server at 127.0.0.1:60007 to respond...
Starting local scheduler with the following resources: {'GPU': 0, 'CPU': 2}.
Failed to start the UI, you may need to run 'pip install jupyter'.
Observation shape is ((5,), (3,))
Using a TupleFlatteningPreprocessor
Creating sub-preprocessor for Discrete(5)
Observation shape is (5,)
Using one-hot preprocessor for discrete envs.
----------------------------- Captured stderr call -----------------------------
Allowing the Plasma store to use up to 3.13728GB of memory.
Starting object store with directory /dev/shm and huge page support disabled
Disconnecting client on fd 22
[INFO] (/home/travis/build/robertnishihara/ray-private-travis/src/local_scheduler/local_scheduler.cc:171) Killed worker pid 14098 which hadn't started yet.
[INFO] (/home/travis/build/robertnishihara/ray-private-travis/src/local_scheduler/local_scheduler.cc:171) Killed worker pid 14099 which hadn't started yet.
Disconnecting client on fd 20
Disconnecting client on fd 18
====================== 2 failed, 3 passed in 7.09 seconds ======================
travis_time:end:224e60d5:start=1516865197573618638,finish=1516865205120814512,duration=7547195874

The command "python -m pytest python/ray/rllib/test/test_catalog.py" exited with 1.
```
--- END ISSUE ---
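The traceback above prints "Observation shape is (16,)" for FrozenLake's `Discrete(16)` observation space, which is exactly what trips the `assert self._obs_space.shape == ()` in `OneHotPreprocessor`. A quick way to check the suspicion that the new gym release changed how `Discrete` reports its shape is a probe along these lines (a minimal sketch, not part of the repository; the per-version behaviour of `gym` is an assumption inferred from the failing assert):

```python
# Hypothetical probe: see what shape gym reports for a Discrete
# observation space under the locally installed version.
import gym

space = gym.make("FrozenLake-v0").observation_space  # Discrete(16)
print(type(space).__name__, getattr(space, "shape", None))
# Under gym 0.9.5 this appears to print a non-empty shape such as (16,),
# so the `assert self._obs_space.shape == ()` check can no longer hold.
```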
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/rllib/models/preprocessors.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4 import cv2
5 import numpy as np
6 import gym
7
8 ATARI_OBS_SHAPE = (210, 160, 3)
9 ATARI_RAM_OBS_SHAPE = (128,)
10
11
12 class Preprocessor(object):
13 """Defines an abstract observation preprocessor function.
14
15 Attributes:
16 shape (obj): Shape of the preprocessed output.
17 """
18
19 def __init__(self, obs_space, options):
20 legacy_patch_shapes(obs_space)
21 self._obs_space = obs_space
22 self._options = options
23 self._init()
24
25 def _init(self):
26 pass
27
28 def transform(self, observation):
29 """Returns the preprocessed observation."""
30 raise NotImplementedError
31
32
33 class AtariPixelPreprocessor(Preprocessor):
34 def _init(self):
35 self._grayscale = self._options.get("grayscale", False)
36 self._zero_mean = self._options.get("zero_mean", True)
37 self._dim = self._options.get("dim", 80)
38 self._channel_major = self._options.get("channel_major", False)
39 if self._grayscale:
40 self.shape = (self._dim, self._dim, 1)
41 else:
42 self.shape = (self._dim, self._dim, 3)
43
44 # channel_major requires (# in-channels, row dim, col dim)
45 if self._channel_major:
46 self.shape = self.shape[-1:] + self.shape[:-1]
47
48 def transform(self, observation):
49 """Downsamples images from (210, 160, 3) by the configured factor."""
50 scaled = observation[25:-25, :, :]
51 if self._dim < 80:
52 scaled = cv2.resize(scaled, (80, 80))
53 # OpenAI: Resize by half, then down to 42x42 (essentially mipmapping).
54 # If we resize directly we lose pixels that, when mapped to 42x42,
55 # aren't close enough to the pixel boundary.
56 scaled = cv2.resize(scaled, (self._dim, self._dim))
57 if self._grayscale:
58 scaled = scaled.mean(2)
59 scaled = scaled.astype(np.float32)
60 # Rescale needed for maintaining 1 channel
61 scaled = np.reshape(scaled, [self._dim, self._dim, 1])
62 if self._zero_mean:
63 scaled = (scaled - 128) / 128
64 else:
65 scaled *= 1.0 / 255.0
66 if self._channel_major:
67 scaled = np.reshape(scaled, self.shape)
68 return scaled
69
70
71 class AtariRamPreprocessor(Preprocessor):
72 def _init(self):
73 self.shape = (128,)
74
75 def transform(self, observation):
76 return (observation - 128) / 128
77
78
79 class OneHotPreprocessor(Preprocessor):
80 def _init(self):
81 assert self._obs_space.shape == ()
82 self.shape = (self._obs_space.n,)
83
84 def transform(self, observation):
85 arr = np.zeros(self._obs_space.n)
86 arr[observation] = 1
87 return arr
88
89
90 class NoPreprocessor(Preprocessor):
91 def _init(self):
92 self.shape = self._obs_space.shape
93
94 def transform(self, observation):
95 return observation
96
97
98 class TupleFlatteningPreprocessor(Preprocessor):
99 """Preprocesses each tuple element, then flattens it all into a vector.
100
101 If desired, the vector output can be unpacked via tf.reshape() within a
102 custom model to handle each component separately.
103 """
104
105 def _init(self):
106 assert isinstance(self._obs_space, gym.spaces.Tuple)
107 size = 0
108 self.preprocessors = []
109 for i in range(len(self._obs_space.spaces)):
110 space = self._obs_space.spaces[i]
111 print("Creating sub-preprocessor for", space)
112 preprocessor = get_preprocessor(space)(space, self._options)
113 self.preprocessors.append(preprocessor)
114 size += np.product(preprocessor.shape)
115 self.shape = (size,)
116
117 def transform(self, observation):
118 assert len(observation) == len(self.preprocessors), observation
119 return np.concatenate([
120 np.reshape(p.transform(o), [np.product(p.shape)])
121 for (o, p) in zip(observation, self.preprocessors)])
122
123
124 def get_preprocessor(space):
125 """Returns an appropriate preprocessor class for the given space."""
126
127 legacy_patch_shapes(space)
128 obs_shape = space.shape
129 print("Observation shape is {}".format(obs_shape))
130
131 if isinstance(space, gym.spaces.Discrete):
132 print("Using one-hot preprocessor for discrete envs.")
133 preprocessor = OneHotPreprocessor
134 elif obs_shape == ATARI_OBS_SHAPE:
135 print("Assuming Atari pixel env, using AtariPixelPreprocessor.")
136 preprocessor = AtariPixelPreprocessor
137 elif obs_shape == ATARI_RAM_OBS_SHAPE:
138 print("Assuming Atari ram env, using AtariRamPreprocessor.")
139 preprocessor = AtariRamPreprocessor
140 elif isinstance(space, gym.spaces.Tuple):
141 print("Using a TupleFlatteningPreprocessor")
142 preprocessor = TupleFlatteningPreprocessor
143 else:
144 print("Not using any observation preprocessor.")
145 preprocessor = NoPreprocessor
146
147 return preprocessor
148
149
150 def legacy_patch_shapes(space):
151 """Assigns shapes to spaces that don't have shapes.
152
153 This is only needed for older gym versions that don't set shapes properly
154 for Tuple and Discrete spaces.
155 """
156
157 if not hasattr(space, "shape"):
158 if isinstance(space, gym.spaces.Discrete):
159 space.shape = ()
160 elif isinstance(space, gym.spaces.Tuple):
161 shapes = []
162 for s in space.spaces:
163 shape = legacy_patch_shapes(s)
164 shapes.append(shape)
165 space.shape = tuple(shapes)
166
167 return space.shape
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/ray/rllib/models/preprocessors.py b/python/ray/rllib/models/preprocessors.py
--- a/python/ray/rllib/models/preprocessors.py
+++ b/python/ray/rllib/models/preprocessors.py
@@ -78,7 +78,6 @@
 
 class OneHotPreprocessor(Preprocessor):
     def _init(self):
-        assert self._obs_space.shape == ()
         self.shape = (self._obs_space.n,)
 
     def transform(self, observation):
| {"golden_diff": "diff --git a/python/ray/rllib/models/preprocessors.py b/python/ray/rllib/models/preprocessors.py\n--- a/python/ray/rllib/models/preprocessors.py\n+++ b/python/ray/rllib/models/preprocessors.py\n@@ -78,7 +78,6 @@\n \n class OneHotPreprocessor(Preprocessor):\n def _init(self):\n- assert self._obs_space.shape == ()\n self.shape = (self._obs_space.n,)\n \n def transform(self, observation):\n", "issue": "Travis test failures in test_catalog.py.\nThe Travis builds all seem to be failing in `test_catalog.py`.\r\n\r\nI can reproduce some failures locally with `gym` version `0.9.5`.\r\n\r\nGym pushed a new version today, so that may be the issue https://pypi.python.org/pypi/gym.\r\n\r\nFor example,\r\n\r\n```\r\n$ python -m pytest python/ray/rllib/test/test_catalog.py\r\n\u001b[1m============================= test session starts ==============================\u001b[0m\r\nplatform linux2 -- Python 2.7.14, pytest-3.3.2, py-1.5.2, pluggy-0.6.0\r\nrootdir: /home/travis/build/robertnishihara/ray-private-travis/python, inifile:\r\n\u001b[1m\r\ncollecting 0 items \u001b[0m\u001b[1m\r\ncollecting 5 items \u001b[0m\u001b[1m\r\ncollecting 5 items \u001b[0m\u001b[1m\r\ncollected 5 items \u001b[0m\r\n\r\npython/ray/rllib/test/test_catalog.py ...FF\u001b[36m [100%]\u001b[0m\r\n\r\n=================================== FAILURES ===================================\r\n\u001b[1m\u001b[31m____________________ ModelCatalogTest.testGymPreprocessors _____________________\u001b[0m\r\n\r\nself = <ray.rllib.test.test_catalog.ModelCatalogTest testMethod=testGymPreprocessors>\r\n\r\n\u001b[1m def testGymPreprocessors(self):\u001b[0m\r\n\u001b[1m p1 = ModelCatalog.get_preprocessor(\u001b[0m\r\n\u001b[1m get_registry(), gym.make(\"CartPole-v0\"))\u001b[0m\r\n\u001b[1m self.assertEqual(type(p1), NoPreprocessor)\u001b[0m\r\n\u001b[1m \u001b[0m\r\n\u001b[1m p2 = ModelCatalog.get_preprocessor(\u001b[0m\r\n\u001b[1m> get_registry(), gym.make(\"FrozenLake-v0\"))\u001b[0m\r\n\r\n\u001b[1m\u001b[31mpython/ray/rllib/test/test_catalog.py\u001b[0m:41: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\u001b[1m\u001b[31mpython/ray/rllib/models/catalog.py\u001b[0m:215: in get_preprocessor\r\n\u001b[1m return preprocessor(env.observation_space, options)\u001b[0m\r\n\u001b[1m\u001b[31mpython/ray/rllib/models/preprocessors.py\u001b[0m:23: in __init__\r\n\u001b[1m self._init()\u001b[0m\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <ray.rllib.models.preprocessors.OneHotPreprocessor object at 0x7fad2df67dd0>\r\n\r\n\u001b[1m def _init(self):\u001b[0m\r\n\u001b[1m> assert self._obs_space.shape == ()\u001b[0m\r\n\u001b[1m\u001b[31mE AssertionError\u001b[0m\r\n\r\n\u001b[1m\u001b[31mpython/ray/rllib/models/preprocessors.py\u001b[0m:81: AssertionError\r\n----------------------------- Captured stdout call -----------------------------\r\nObservation shape is (4,)\r\nNot using any observation preprocessor.\r\nObservation shape is (16,)\r\nUsing one-hot preprocessor for discrete envs.\r\n----------------------------- Captured stderr call -----------------------------\r\n[2018-01-25 07:26:43,537] Making new env: CartPole-v0\r\n[2018-01-25 07:26:43,540] Making new env: FrozenLake-v0\r\n------------------------------ Captured log call -------------------------------\r\nregistration.py 120 INFO Making new env: CartPole-v0\r\nregistration.py 120 INFO Making new env: FrozenLake-v0\r\n\u001b[1m\u001b[31m____________________ 
ModelCatalogTest.testTuplePreprocessor ____________________\u001b[0m\r\n\r\nself = <ray.rllib.test.test_catalog.ModelCatalogTest testMethod=testTuplePreprocessor>\r\n\r\n\u001b[1m def testTuplePreprocessor(self):\u001b[0m\r\n\u001b[1m ray.init()\u001b[0m\r\n\u001b[1m \u001b[0m\r\n\u001b[1m class TupleEnv(object):\u001b[0m\r\n\u001b[1m def __init__(self):\u001b[0m\r\n\u001b[1m self.observation_space = Tuple(\u001b[0m\r\n\u001b[1m [Discrete(5), Box(0, 1, shape=(3,))])\u001b[0m\r\n\u001b[1m p1 = ModelCatalog.get_preprocessor(\u001b[0m\r\n\u001b[1m> get_registry(), TupleEnv())\u001b[0m\r\n\r\n\u001b[1m\u001b[31mpython/ray/rllib/test/test_catalog.py\u001b[0m:52: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\u001b[1m\u001b[31mpython/ray/rllib/models/catalog.py\u001b[0m:215: in get_preprocessor\r\n\u001b[1m return preprocessor(env.observation_space, options)\u001b[0m\r\n\u001b[1m\u001b[31mpython/ray/rllib/models/preprocessors.py\u001b[0m:23: in __init__\r\n\u001b[1m self._init()\u001b[0m\r\n\u001b[1m\u001b[31mpython/ray/rllib/models/preprocessors.py\u001b[0m:112: in _init\r\n\u001b[1m preprocessor = get_preprocessor(space)(space, self._options)\u001b[0m\r\n\u001b[1m\u001b[31mpython/ray/rllib/models/preprocessors.py\u001b[0m:23: in __init__\r\n\u001b[1m self._init()\u001b[0m\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <ray.rllib.models.preprocessors.OneHotPreprocessor object at 0x7fad4ff234d0>\r\n\r\n\u001b[1m def _init(self):\u001b[0m\r\n\u001b[1m> assert self._obs_space.shape == ()\u001b[0m\r\n\u001b[1m\u001b[31mE AssertionError\u001b[0m\r\n\r\n\u001b[1m\u001b[31mpython/ray/rllib/models/preprocessors.py\u001b[0m:81: AssertionError\r\n----------------------------- Captured stdout call -----------------------------\r\nWaiting for redis server at 127.0.0.1:44545 to respond...\r\nWaiting for redis server at 127.0.0.1:60007 to respond...\r\nStarting local scheduler with the following resources: {'GPU': 0, 'CPU': 2}.\r\nFailed to start the UI, you may need to run 'pip install jupyter'.\r\nObservation shape is ((5,), (3,))\r\nUsing a TupleFlatteningPreprocessor\r\nCreating sub-preprocessor for Discrete(5)\r\nObservation shape is (5,)\r\nUsing one-hot preprocessor for discrete envs.\r\n----------------------------- Captured stderr call -----------------------------\r\nAllowing the Plasma store to use up to 3.13728GB of memory.\r\nStarting object store with directory /dev/shm and huge page support disabled\r\nDisconnecting client on fd 22\r\n[INFO] (/home/travis/build/robertnishihara/ray-private-travis/src/local_scheduler/local_scheduler.cc:171) Killed worker pid 14098 which hadn't started yet.\r\n[INFO] (/home/travis/build/robertnishihara/ray-private-travis/src/local_scheduler/local_scheduler.cc:171) Killed worker pid 14099 which hadn't started yet.\r\nDisconnecting client on fd 20\r\nDisconnecting client on fd 18\r\n\u001b[1m\u001b[31m====================== 2 failed, 3 passed in 7.09 seconds ======================\u001b[0m\r\n\r\ntravis_time:end:224e60d5:start=1516865197573618638,finish=1516865205120814512,duration=7547195874\r\n\u001b[0K\r\n\u001b[31;1mThe command \"python -m pytest python/ray/rllib/test/test_catalog.py\" exited with 1.\u001b[0m\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport cv2\nimport numpy as np\nimport gym\n\nATARI_OBS_SHAPE = (210, 160, 3)\nATARI_RAM_OBS_SHAPE = 
(128,)\n\n\nclass Preprocessor(object):\n \"\"\"Defines an abstract observation preprocessor function.\n\n Attributes:\n shape (obj): Shape of the preprocessed output.\n \"\"\"\n\n def __init__(self, obs_space, options):\n legacy_patch_shapes(obs_space)\n self._obs_space = obs_space\n self._options = options\n self._init()\n\n def _init(self):\n pass\n\n def transform(self, observation):\n \"\"\"Returns the preprocessed observation.\"\"\"\n raise NotImplementedError\n\n\nclass AtariPixelPreprocessor(Preprocessor):\n def _init(self):\n self._grayscale = self._options.get(\"grayscale\", False)\n self._zero_mean = self._options.get(\"zero_mean\", True)\n self._dim = self._options.get(\"dim\", 80)\n self._channel_major = self._options.get(\"channel_major\", False)\n if self._grayscale:\n self.shape = (self._dim, self._dim, 1)\n else:\n self.shape = (self._dim, self._dim, 3)\n\n # channel_major requires (# in-channels, row dim, col dim)\n if self._channel_major:\n self.shape = self.shape[-1:] + self.shape[:-1]\n\n def transform(self, observation):\n \"\"\"Downsamples images from (210, 160, 3) by the configured factor.\"\"\"\n scaled = observation[25:-25, :, :]\n if self._dim < 80:\n scaled = cv2.resize(scaled, (80, 80))\n # OpenAI: Resize by half, then down to 42x42 (essentially mipmapping).\n # If we resize directly we lose pixels that, when mapped to 42x42,\n # aren't close enough to the pixel boundary.\n scaled = cv2.resize(scaled, (self._dim, self._dim))\n if self._grayscale:\n scaled = scaled.mean(2)\n scaled = scaled.astype(np.float32)\n # Rescale needed for maintaining 1 channel\n scaled = np.reshape(scaled, [self._dim, self._dim, 1])\n if self._zero_mean:\n scaled = (scaled - 128) / 128\n else:\n scaled *= 1.0 / 255.0\n if self._channel_major:\n scaled = np.reshape(scaled, self.shape)\n return scaled\n\n\nclass AtariRamPreprocessor(Preprocessor):\n def _init(self):\n self.shape = (128,)\n\n def transform(self, observation):\n return (observation - 128) / 128\n\n\nclass OneHotPreprocessor(Preprocessor):\n def _init(self):\n assert self._obs_space.shape == ()\n self.shape = (self._obs_space.n,)\n\n def transform(self, observation):\n arr = np.zeros(self._obs_space.n)\n arr[observation] = 1\n return arr\n\n\nclass NoPreprocessor(Preprocessor):\n def _init(self):\n self.shape = self._obs_space.shape\n\n def transform(self, observation):\n return observation\n\n\nclass TupleFlatteningPreprocessor(Preprocessor):\n \"\"\"Preprocesses each tuple element, then flattens it all into a vector.\n\n If desired, the vector output can be unpacked via tf.reshape() within a\n custom model to handle each component separately.\n \"\"\"\n\n def _init(self):\n assert isinstance(self._obs_space, gym.spaces.Tuple)\n size = 0\n self.preprocessors = []\n for i in range(len(self._obs_space.spaces)):\n space = self._obs_space.spaces[i]\n print(\"Creating sub-preprocessor for\", space)\n preprocessor = get_preprocessor(space)(space, self._options)\n self.preprocessors.append(preprocessor)\n size += np.product(preprocessor.shape)\n self.shape = (size,)\n\n def transform(self, observation):\n assert len(observation) == len(self.preprocessors), observation\n return np.concatenate([\n np.reshape(p.transform(o), [np.product(p.shape)])\n for (o, p) in zip(observation, self.preprocessors)])\n\n\ndef get_preprocessor(space):\n \"\"\"Returns an appropriate preprocessor class for the given space.\"\"\"\n\n legacy_patch_shapes(space)\n obs_shape = space.shape\n print(\"Observation shape is {}\".format(obs_shape))\n\n if 
isinstance(space, gym.spaces.Discrete):\n print(\"Using one-hot preprocessor for discrete envs.\")\n preprocessor = OneHotPreprocessor\n elif obs_shape == ATARI_OBS_SHAPE:\n print(\"Assuming Atari pixel env, using AtariPixelPreprocessor.\")\n preprocessor = AtariPixelPreprocessor\n elif obs_shape == ATARI_RAM_OBS_SHAPE:\n print(\"Assuming Atari ram env, using AtariRamPreprocessor.\")\n preprocessor = AtariRamPreprocessor\n elif isinstance(space, gym.spaces.Tuple):\n print(\"Using a TupleFlatteningPreprocessor\")\n preprocessor = TupleFlatteningPreprocessor\n else:\n print(\"Not using any observation preprocessor.\")\n preprocessor = NoPreprocessor\n\n return preprocessor\n\n\ndef legacy_patch_shapes(space):\n \"\"\"Assigns shapes to spaces that don't have shapes.\n\n This is only needed for older gym versions that don't set shapes properly\n for Tuple and Discrete spaces.\n \"\"\"\n\n if not hasattr(space, \"shape\"):\n if isinstance(space, gym.spaces.Discrete):\n space.shape = ()\n elif isinstance(space, gym.spaces.Tuple):\n shapes = []\n for s in space.spaces:\n shape = legacy_patch_shapes(s)\n shapes.append(shape)\n space.shape = tuple(shapes)\n\n return space.shape\n", "path": "python/ray/rllib/models/preprocessors.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport cv2\nimport numpy as np\nimport gym\n\nATARI_OBS_SHAPE = (210, 160, 3)\nATARI_RAM_OBS_SHAPE = (128,)\n\n\nclass Preprocessor(object):\n \"\"\"Defines an abstract observation preprocessor function.\n\n Attributes:\n shape (obj): Shape of the preprocessed output.\n \"\"\"\n\n def __init__(self, obs_space, options):\n legacy_patch_shapes(obs_space)\n self._obs_space = obs_space\n self._options = options\n self._init()\n\n def _init(self):\n pass\n\n def transform(self, observation):\n \"\"\"Returns the preprocessed observation.\"\"\"\n raise NotImplementedError\n\n\nclass AtariPixelPreprocessor(Preprocessor):\n def _init(self):\n self._grayscale = self._options.get(\"grayscale\", False)\n self._zero_mean = self._options.get(\"zero_mean\", True)\n self._dim = self._options.get(\"dim\", 80)\n self._channel_major = self._options.get(\"channel_major\", False)\n if self._grayscale:\n self.shape = (self._dim, self._dim, 1)\n else:\n self.shape = (self._dim, self._dim, 3)\n\n # channel_major requires (# in-channels, row dim, col dim)\n if self._channel_major:\n self.shape = self.shape[-1:] + self.shape[:-1]\n\n def transform(self, observation):\n \"\"\"Downsamples images from (210, 160, 3) by the configured factor.\"\"\"\n scaled = observation[25:-25, :, :]\n if self._dim < 80:\n scaled = cv2.resize(scaled, (80, 80))\n # OpenAI: Resize by half, then down to 42x42 (essentially mipmapping).\n # If we resize directly we lose pixels that, when mapped to 42x42,\n # aren't close enough to the pixel boundary.\n scaled = cv2.resize(scaled, (self._dim, self._dim))\n if self._grayscale:\n scaled = scaled.mean(2)\n scaled = scaled.astype(np.float32)\n # Rescale needed for maintaining 1 channel\n scaled = np.reshape(scaled, [self._dim, self._dim, 1])\n if self._zero_mean:\n scaled = (scaled - 128) / 128\n else:\n scaled *= 1.0 / 255.0\n if self._channel_major:\n scaled = np.reshape(scaled, self.shape)\n return scaled\n\n\nclass AtariRamPreprocessor(Preprocessor):\n def _init(self):\n self.shape = (128,)\n\n def transform(self, observation):\n return (observation - 128) / 128\n\n\nclass OneHotPreprocessor(Preprocessor):\n def _init(self):\n 
self.shape = (self._obs_space.n,)\n\n def transform(self, observation):\n arr = np.zeros(self._obs_space.n)\n arr[observation] = 1\n return arr\n\n\nclass NoPreprocessor(Preprocessor):\n def _init(self):\n self.shape = self._obs_space.shape\n\n def transform(self, observation):\n return observation\n\n\nclass TupleFlatteningPreprocessor(Preprocessor):\n \"\"\"Preprocesses each tuple element, then flattens it all into a vector.\n\n If desired, the vector output can be unpacked via tf.reshape() within a\n custom model to handle each component separately.\n \"\"\"\n\n def _init(self):\n assert isinstance(self._obs_space, gym.spaces.Tuple)\n size = 0\n self.preprocessors = []\n for i in range(len(self._obs_space.spaces)):\n space = self._obs_space.spaces[i]\n print(\"Creating sub-preprocessor for\", space)\n preprocessor = get_preprocessor(space)(space, self._options)\n self.preprocessors.append(preprocessor)\n size += np.product(preprocessor.shape)\n self.shape = (size,)\n\n def transform(self, observation):\n assert len(observation) == len(self.preprocessors), observation\n return np.concatenate([\n np.reshape(p.transform(o), [np.product(p.shape)])\n for (o, p) in zip(observation, self.preprocessors)])\n\n\ndef get_preprocessor(space):\n \"\"\"Returns an appropriate preprocessor class for the given space.\"\"\"\n\n legacy_patch_shapes(space)\n obs_shape = space.shape\n print(\"Observation shape is {}\".format(obs_shape))\n\n if isinstance(space, gym.spaces.Discrete):\n print(\"Using one-hot preprocessor for discrete envs.\")\n preprocessor = OneHotPreprocessor\n elif obs_shape == ATARI_OBS_SHAPE:\n print(\"Assuming Atari pixel env, using AtariPixelPreprocessor.\")\n preprocessor = AtariPixelPreprocessor\n elif obs_shape == ATARI_RAM_OBS_SHAPE:\n print(\"Assuming Atari ram env, using AtariRamPreprocessor.\")\n preprocessor = AtariRamPreprocessor\n elif isinstance(space, gym.spaces.Tuple):\n print(\"Using a TupleFlatteningPreprocessor\")\n preprocessor = TupleFlatteningPreprocessor\n else:\n print(\"Not using any observation preprocessor.\")\n preprocessor = NoPreprocessor\n\n return preprocessor\n\n\ndef legacy_patch_shapes(space):\n \"\"\"Assigns shapes to spaces that don't have shapes.\n\n This is only needed for older gym versions that don't set shapes properly\n for Tuple and Discrete spaces.\n \"\"\"\n\n if not hasattr(space, \"shape\"):\n if isinstance(space, gym.spaces.Discrete):\n space.shape = ()\n elif isinstance(space, gym.spaces.Tuple):\n shapes = []\n for s in space.spaces:\n shape = legacy_patch_shapes(s)\n shapes.append(shape)\n space.shape = tuple(shapes)\n\n return space.shape\n", "path": "python/ray/rllib/models/preprocessors.py"}]} | 3,840 | 105 |
gh_patches_debug_21483 | rasdani/github-patches | git_diff | dotkom__onlineweb4-775 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Registering with an invalid email (according to smtplib) causes errors when sending emails
SMTPRecipientsRefused at /auth/register/
{u'[email protected]': (501, '5.1.3 Bad recipient address syntax')}
Request Method: POST
Request URL: http://moonshine.online.ntnu.no/auth/register/
Django Version: 1.6.1
Exception Type: SMTPRecipientsRefused
Exception Value:
{u'[email protected]': (501, '5.1.3 Bad recipient address syntax')}
Exception Location: /usr/lib/python2.7/smtplib.py in sendmail, line 733
--- END ISSUE ---
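The `SMTPRecipientsRefused` raised by `smtplib` is a subclass of `SMTPException`, so the view can guard the send instead of letting the error propagate as a 500. Below is a minimal sketch of that pattern; the recipient, subject, and sender values are placeholders for illustration (the sender stands in for `settings.DEFAULT_FROM_EMAIL`), not values taken from the codebase:

```python
# Minimal sketch of guarding Django's send_mail against SMTP errors.
from smtplib import SMTPException

from django.core.mail import send_mail


def send_verification_email(recipient, body):
    try:
        send_mail(u'Verifiser din konto', body,
                  '[email protected]',  # placeholder for settings.DEFAULT_FROM_EMAIL
                  [recipient])
    except SMTPException:
        # The SMTP server refused the recipient (for example a syntactically
        # invalid address), so report a friendly error instead of crashing.
        return False
    return True
```

The golden diff for this task applies the same idea directly around the `send_mail` call in the `register` view.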
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/authentication/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import uuid
4 import re
5
6 from django.contrib import auth
7 from django.contrib import messages
8 from django.core.mail import send_mail
9 from django.shortcuts import render, redirect, get_object_or_404
10 from django.http import HttpResponseRedirect
11 from django.utils.translation import ugettext as _
12 from django.views.decorators.debug import sensitive_post_parameters
13
14 from django.conf import settings
15 from apps.authentication.forms import (LoginForm, RegisterForm,
16 RecoveryForm, ChangePasswordForm)
17 from apps.authentication.models import OnlineUser as User, RegisterToken, Email
18
19
20 @sensitive_post_parameters()
21 def login(request):
22 redirect_url = request.REQUEST.get('next', '')
23 if request.method == 'POST':
24 form = LoginForm(request.POST)
25 if form.login(request):
26 messages.success(request, _(u'Du er nå logget inn.'))
27 if redirect_url:
28 return HttpResponseRedirect(redirect_url)
29 return HttpResponseRedirect('/')
30 else: form = LoginForm(request.POST, auto_id=True)
31 else:
32 form = LoginForm()
33
34 response_dict = { 'form' : form, 'next' : redirect_url}
35 return render(request, 'auth/login.html', response_dict)
36
37
38 def logout(request):
39 auth.logout(request)
40 messages.success(request, _(u'Du er nå logget ut.'))
41 return HttpResponseRedirect('/')
42
43
44 @sensitive_post_parameters()
45 def register(request):
46 if request.user.is_authenticated():
47 messages.error(request, _(u'Registrering av ny konto krever at du er logget ut.'))
48 return HttpResponseRedirect('/')
49 else:
50 if request.method == 'POST':
51 form = RegisterForm(request.POST)
52 if form.is_valid():
53 cleaned = form.cleaned_data
54
55 # Create user
56 user = User(
57 username=cleaned['username'],
58 first_name=cleaned['first_name'].title(),
59 last_name=cleaned['last_name'].title(),
60 )
61 # Set remaining fields
62 user.phone_number=cleaned['phone']
63 user.address=cleaned['address'].title()
64 user.zip_code=cleaned['zip_code']
65 # Store password properly
66 user.set_password(cleaned['password'])
67 # Users need to be manually activated
68 user.is_active = False
69 user.save()
70
71 # Set email address
72 email = Email(
73 user=user,
74 email=cleaned['email'].lower(),
75 )
76 email.primary = True
77 email.save()
78
79 # Create the registration token
80 token = uuid.uuid4().hex
81 rt = RegisterToken(user=user, email=cleaned['email'], token=token)
82 rt.save()
83
84 email_message = _(u"""
85 En konto har blitt registrert på online.ntnu.no med denne epostadressen. Dersom du ikke
86 har utført denne handlingen ber vi deg se bort fra denne eposten.
87
88 For å bruke denne kontoen kreves det at du verifiserer epostadressen. Du kan gjøre
89 dette ved å besøke linken under.
90
91 http://%s/auth/verify/%s/
92
93 Denne lenken vil være gyldig i 24 timer. Dersom du behøver å få tilsendt en ny lenke
94 kan dette gjøres med funksjonen for å gjenopprette passord.
95 """) % (request.META['HTTP_HOST'], token)
96
97 send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])
98
99 messages.success(request, _(u'Registreringen var vellykket. Se tilsendt epost for verifiseringsinstrukser.'))
100
101 return HttpResponseRedirect('/')
102 else:
103 form = RegisterForm(request.POST, auto_id=True)
104 else:
105 form = RegisterForm()
106
107 return render(request, 'auth/register.html', {'form': form, })
108
109
110 def verify(request, token):
111 rt = get_object_or_404(RegisterToken, token=token)
112
113 if rt.is_valid:
114 email = get_object_or_404(Email, email=rt.email)
115 email.verified = True
116 email.save()
117
118 user = getattr(rt, 'user')
119
120 # If it is a stud email, set the ntnu_username for user
121 if re.match(r'[^@][email protected]', rt.email):
122 user.ntnu_username = rt.email.split("@")[0]
123
124 user_activated = False
125 if not user.is_active:
126 user.is_active = True
127 user_activated = True
128
129 user.save()
130 rt.delete()
131
132 if user_activated:
133 messages.success(request, _(u'Bruker %s ble aktivert. Du kan nå logge inn.') % user.username)
134 return redirect('auth_login')
135 else:
136 messages.success(request, _(u'Eposten %s er nå verifisert.') % email)
137 return redirect('profiles')
138 else:
139 messages.error(request, _(u'Denne lenken er utløpt. Bruk gjenopprett passord for å få tilsendt en ny lenke.'))
140 return HttpResponseRedirect('/')
141
142
143 def recover(request):
144 if request.user.is_authenticated():
145 messages.error(request, _(u'Gjenoppretning av passord krever at du er logget ut.'))
146 return HttpResponseRedirect('/')
147 else:
148 if request.method == 'POST':
149 form = RecoveryForm(request.POST)
150 if form.is_valid():
151 email_string = form.cleaned_data['email']
152 emails = Email.objects.filter(email=email_string)
153
154 if len(emails) == 0:
155 messages.error(request, _(u'Denne eposten er ikke registrert i våre systemer.'))
156 return HttpResponseRedirect('/')
157
158 email = emails[0]
159
160 # Create the registration token
161 token = uuid.uuid4().hex
162 rt = RegisterToken(user=email.user, email=email.email, token=token)
163 rt.save()
164
165 email_message = _(u"""
166 Vi har mottat forespørsel om å gjenopprette passordet for kontoen bundet til %s.
167 Dersom du ikke har bedt om denne handlingen ber vi deg se bort fra denne eposten.
168
169 Brukernavn: %s
170
171 Hvis du ønsker å gjennomføre en gjenoppretning av passord, bruk lenken under.
172
173 http://%s/auth/set_password/%s/
174
175 Denne lenken vil være gyldig i 24 timer. Dersom du behøver å få tilsendt en ny lenke
176 kan dette gjøres med funksjonen for å gjenopprette passord.
177 """) % (email.email, email.user.username, request.META['HTTP_HOST'], token)
178
179 send_mail(_(u'Gjenoppretning av passord'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])
180
181 messages.success(request, _(u'En lenke for gjenoppretning har blitt sendt til %s.') % email.email)
182
183 return HttpResponseRedirect('/')
184 else:
185 form = RecoveryForm(request.POST, auto_id=True)
186 else:
187 form = RecoveryForm()
188
189 return render(request, 'auth/recover.html', {'form': form})
190
191
192 @sensitive_post_parameters()
193 def set_password(request, token=None):
194 if request.user.is_authenticated():
195 return HttpResponseRedirect('/')
196 else:
197 tokens = RegisterToken.objects.filter(token=token)
198
199 if tokens.count() == 1:
200 rt = tokens[0]
201 if rt.is_valid:
202 if request.method == 'POST':
203 form = ChangePasswordForm(request.POST, auto_id=True)
204 if form.is_valid():
205 user = getattr(rt, 'user')
206
207 user.is_active = True
208 user.set_password(form.cleaned_data['new_password'])
209 user.save()
210
211 rt.delete()
212
213 messages.success(request, _(u'Bruker %s har gjennomført vellykket gjenoppretning av passord. Du kan nå logge inn.') % user.username)
214
215 return HttpResponseRedirect('/')
216 else:
217 form = ChangePasswordForm()
218
219 messages.success(request, _(u'Lenken er akseptert. Vennligst skriv inn ønsket passord.'))
220
221 return render(request, 'auth/set_password.html', {'form': form, 'token': token})
222
223 else:
224 messages.error(request, _(u'Lenken er ugyldig. Vennligst bruk gjenoppretning av passord for å få tilsendt en ny lenke.'))
225 return HttpResponseRedirect('/')
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
 | diff --git a/apps/authentication/views.py b/apps/authentication/views.py
--- a/apps/authentication/views.py
+++ b/apps/authentication/views.py
@@ -2,6 +2,7 @@
import uuid
import re
+from smtplib import SMTPException
from django.contrib import auth
from django.contrib import messages
@@ -93,8 +94,11 @@
Denne lenken vil være gyldig i 24 timer. Dersom du behøver å få tilsendt en ny lenke
kan dette gjøres med funksjonen for å gjenopprette passord.
""") % (request.META['HTTP_HOST'], token)
-
- send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])
+ try:
+ send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])
+ except SMTPException as e:
+ messages.error(request, u'Det oppstod en kritisk feil, ' + e)
+ return redirect('home')
messages.success(request, _(u'Registreringen var vellykket. Se tilsendt epost for verifiseringsinstrukser.'))
| {"golden_diff": "diff --git a/apps/authentication/views.py b/apps/authentication/views.py\n--- a/apps/authentication/views.py\n+++ b/apps/authentication/views.py\n@@ -2,6 +2,7 @@\n \n import uuid\n import re\n+from smtplib import SMTPException\n \n from django.contrib import auth\n from django.contrib import messages\n@@ -93,8 +94,11 @@\n Denne lenken vil v\u00e6re gyldig i 24 timer. Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\n kan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n \"\"\") % (request.META['HTTP_HOST'], token)\n-\n- send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n+ try:\n+ send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n+ except SMTPException as e:\n+ messages.error(request, u'Det oppstod en kritisk feil, ' + e)\n+ return redirect('home')\n \n messages.success(request, _(u'Registreringen var vellykket. Se tilsendt epost for verifiseringsinstrukser.'))\n", "issue": "Registering with an invalid email (according to smtplib) causes errors when sending emails\nSMTPRecipientsRefused at /auth/register/\n{u'[email protected]': (501, '5.1.3 Bad recipient address syntax')}\nRequest Method: POST\nRequest URL: http://moonshine.online.ntnu.no/auth/register/\nDjango Version: 1.6.1\nException Type: SMTPRecipientsRefused\nException Value: \n{u'[email protected]': (501, '5.1.3 Bad recipient address syntax')}\nException Location: /usr/lib/python2.7/smtplib.py in sendmail, line 733\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport uuid\nimport re\n\nfrom django.contrib import auth\nfrom django.contrib import messages\nfrom django.core.mail import send_mail\nfrom django.shortcuts import render, redirect, get_object_or_404\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.debug import sensitive_post_parameters\n\nfrom django.conf import settings\nfrom apps.authentication.forms import (LoginForm, RegisterForm, \n RecoveryForm, ChangePasswordForm)\nfrom apps.authentication.models import OnlineUser as User, RegisterToken, Email\n\n\n@sensitive_post_parameters()\ndef login(request):\n redirect_url = request.REQUEST.get('next', '')\n if request.method == 'POST':\n form = LoginForm(request.POST)\n if form.login(request):\n messages.success(request, _(u'Du er n\u00e5 logget inn.'))\n if redirect_url:\n return HttpResponseRedirect(redirect_url)\n return HttpResponseRedirect('/')\n else: form = LoginForm(request.POST, auto_id=True)\n else:\n form = LoginForm()\n\n response_dict = { 'form' : form, 'next' : redirect_url}\n return render(request, 'auth/login.html', response_dict)\n\n\ndef logout(request):\n auth.logout(request)\n messages.success(request, _(u'Du er n\u00e5 logget ut.'))\n return HttpResponseRedirect('/')\n\n\n@sensitive_post_parameters()\ndef register(request):\n if request.user.is_authenticated():\n messages.error(request, _(u'Registrering av ny konto krever at du er logget ut.'))\n return HttpResponseRedirect('/')\n else:\n if request.method == 'POST':\n form = RegisterForm(request.POST)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n # Create user\n user = User(\n username=cleaned['username'], \n first_name=cleaned['first_name'].title(), \n last_name=cleaned['last_name'].title(),\n )\n # Set remaining fields\n user.phone_number=cleaned['phone']\n user.address=cleaned['address'].title()\n user.zip_code=cleaned['zip_code']\n # Store password properly\n 
user.set_password(cleaned['password'])\n # Users need to be manually activated\n user.is_active = False\n user.save()\n\n # Set email address\n email = Email(\n user=user,\n email=cleaned['email'].lower(),\n )\n email.primary = True\n email.save() \n\n # Create the registration token\n token = uuid.uuid4().hex\n rt = RegisterToken(user=user, email=cleaned['email'], token=token)\n rt.save()\n\n email_message = _(u\"\"\"\nEn konto har blitt registrert p\u00e5 online.ntnu.no med denne epostadressen. Dersom du ikke\nhar utf\u00f8rt denne handlingen ber vi deg se bort fra denne eposten.\n\nFor \u00e5 bruke denne kontoen kreves det at du verifiserer epostadressen. Du kan gj\u00f8re\ndette ved \u00e5 bes\u00f8ke linken under.\n\nhttp://%s/auth/verify/%s/\n\nDenne lenken vil v\u00e6re gyldig i 24 timer. Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\nkan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n\"\"\") % (request.META['HTTP_HOST'], token)\n\n send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n\n messages.success(request, _(u'Registreringen var vellykket. Se tilsendt epost for verifiseringsinstrukser.'))\n\n return HttpResponseRedirect('/') \n else:\n form = RegisterForm(request.POST, auto_id=True)\n else:\n form = RegisterForm()\n\n return render(request, 'auth/register.html', {'form': form, })\n\n\ndef verify(request, token):\n rt = get_object_or_404(RegisterToken, token=token)\n \n if rt.is_valid:\n email = get_object_or_404(Email, email=rt.email)\n email.verified = True\n email.save()\n \n user = getattr(rt, 'user')\n\n # If it is a stud email, set the ntnu_username for user\n if re.match(r'[^@][email protected]', rt.email):\n user.ntnu_username = rt.email.split(\"@\")[0]\n\n user_activated = False\n if not user.is_active:\n user.is_active = True\n user_activated = True\n\n user.save()\n rt.delete()\n\n if user_activated:\n messages.success(request, _(u'Bruker %s ble aktivert. Du kan n\u00e5 logge inn.') % user.username)\n return redirect('auth_login')\n else:\n messages.success(request, _(u'Eposten %s er n\u00e5 verifisert.') % email)\n return redirect('profiles')\n else:\n messages.error(request, _(u'Denne lenken er utl\u00f8pt. Bruk gjenopprett passord for \u00e5 f\u00e5 tilsendt en ny lenke.'))\n return HttpResponseRedirect('/') \n \n\ndef recover(request):\n if request.user.is_authenticated():\n messages.error(request, _(u'Gjenoppretning av passord krever at du er logget ut.'))\n return HttpResponseRedirect('/')\n else:\n if request.method == 'POST':\n form = RecoveryForm(request.POST)\n if form.is_valid():\n email_string = form.cleaned_data['email']\n emails = Email.objects.filter(email=email_string)\n\n if len(emails) == 0:\n messages.error(request, _(u'Denne eposten er ikke registrert i v\u00e5re systemer.'))\n return HttpResponseRedirect('/') \n\n email = emails[0]\n \n # Create the registration token\n token = uuid.uuid4().hex\n rt = RegisterToken(user=email.user, email=email.email, token=token)\n rt.save()\n\n email_message = _(u\"\"\"\nVi har mottat foresp\u00f8rsel om \u00e5 gjenopprette passordet for kontoen bundet til %s.\nDersom du ikke har bedt om denne handlingen ber vi deg se bort fra denne eposten.\n\nBrukernavn: %s\n\nHvis du \u00f8nsker \u00e5 gjennomf\u00f8re en gjenoppretning av passord, bruk lenken under.\n\nhttp://%s/auth/set_password/%s/\n\nDenne lenken vil v\u00e6re gyldig i 24 timer. 
Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\nkan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n\"\"\") % (email.email, email.user.username, request.META['HTTP_HOST'], token)\n\n send_mail(_(u'Gjenoppretning av passord'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n\n messages.success(request, _(u'En lenke for gjenoppretning har blitt sendt til %s.') % email.email)\n\n return HttpResponseRedirect('/') \n else:\n form = RecoveryForm(request.POST, auto_id=True)\n else:\n form = RecoveryForm()\n\n return render(request, 'auth/recover.html', {'form': form})\n\n\n@sensitive_post_parameters()\ndef set_password(request, token=None): \n if request.user.is_authenticated():\n return HttpResponseRedirect('/')\n else:\n tokens = RegisterToken.objects.filter(token=token)\n \n if tokens.count() == 1:\n rt = tokens[0]\n if rt.is_valid:\n if request.method == 'POST':\n form = ChangePasswordForm(request.POST, auto_id=True)\n if form.is_valid():\n user = getattr(rt, 'user')\n\n user.is_active = True\n user.set_password(form.cleaned_data['new_password'])\n user.save()\n \n rt.delete()\n\n messages.success(request, _(u'Bruker %s har gjennomf\u00f8rt vellykket gjenoppretning av passord. Du kan n\u00e5 logge inn.') % user.username)\n \n return HttpResponseRedirect('/') \n else:\n form = ChangePasswordForm()\n\n messages.success(request, _(u'Lenken er akseptert. Vennligst skriv inn \u00f8nsket passord.'))\n\n return render(request, 'auth/set_password.html', {'form': form, 'token': token})\n\n else:\n messages.error(request, _(u'Lenken er ugyldig. Vennligst bruk gjenoppretning av passord for \u00e5 f\u00e5 tilsendt en ny lenke.'))\n return HttpResponseRedirect('/') \n", "path": "apps/authentication/views.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport uuid\nimport re\nfrom smtplib import SMTPException\n\nfrom django.contrib import auth\nfrom django.contrib import messages\nfrom django.core.mail import send_mail\nfrom django.shortcuts import render, redirect, get_object_or_404\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.debug import sensitive_post_parameters\n\nfrom django.conf import settings\nfrom apps.authentication.forms import (LoginForm, RegisterForm, \n RecoveryForm, ChangePasswordForm)\nfrom apps.authentication.models import OnlineUser as User, RegisterToken, Email\n\n\n@sensitive_post_parameters()\ndef login(request):\n redirect_url = request.REQUEST.get('next', '')\n if request.method == 'POST':\n form = LoginForm(request.POST)\n if form.login(request):\n messages.success(request, _(u'Du er n\u00e5 logget inn.'))\n if redirect_url:\n return HttpResponseRedirect(redirect_url)\n return HttpResponseRedirect('/')\n else: form = LoginForm(request.POST, auto_id=True)\n else:\n form = LoginForm()\n\n response_dict = { 'form' : form, 'next' : redirect_url}\n return render(request, 'auth/login.html', response_dict)\n\n\ndef logout(request):\n auth.logout(request)\n messages.success(request, _(u'Du er n\u00e5 logget ut.'))\n return HttpResponseRedirect('/')\n\n\n@sensitive_post_parameters()\ndef register(request):\n if request.user.is_authenticated():\n messages.error(request, _(u'Registrering av ny konto krever at du er logget ut.'))\n return HttpResponseRedirect('/')\n else:\n if request.method == 'POST':\n form = RegisterForm(request.POST)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n # Create user\n user = User(\n username=cleaned['username'], 
\n first_name=cleaned['first_name'].title(), \n last_name=cleaned['last_name'].title(),\n )\n # Set remaining fields\n user.phone_number=cleaned['phone']\n user.address=cleaned['address'].title()\n user.zip_code=cleaned['zip_code']\n # Store password properly\n user.set_password(cleaned['password'])\n # Users need to be manually activated\n user.is_active = False\n user.save()\n\n # Set email address\n email = Email(\n user=user,\n email=cleaned['email'].lower(),\n )\n email.primary = True\n email.save() \n\n # Create the registration token\n token = uuid.uuid4().hex\n rt = RegisterToken(user=user, email=cleaned['email'], token=token)\n rt.save()\n\n email_message = _(u\"\"\"\nEn konto har blitt registrert p\u00e5 online.ntnu.no med denne epostadressen. Dersom du ikke\nhar utf\u00f8rt denne handlingen ber vi deg se bort fra denne eposten.\n\nFor \u00e5 bruke denne kontoen kreves det at du verifiserer epostadressen. Du kan gj\u00f8re\ndette ved \u00e5 bes\u00f8ke linken under.\n\nhttp://%s/auth/verify/%s/\n\nDenne lenken vil v\u00e6re gyldig i 24 timer. Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\nkan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n\"\"\") % (request.META['HTTP_HOST'], token)\n try:\n send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n except SMTPException as e:\n messages.error(request, u'Det oppstod en kritisk feil, ' + e)\n return redirect('home')\n\n messages.success(request, _(u'Registreringen var vellykket. Se tilsendt epost for verifiseringsinstrukser.'))\n\n return HttpResponseRedirect('/') \n else:\n form = RegisterForm(request.POST, auto_id=True)\n else:\n form = RegisterForm()\n\n return render(request, 'auth/register.html', {'form': form, })\n\n\ndef verify(request, token):\n rt = get_object_or_404(RegisterToken, token=token)\n \n if rt.is_valid:\n email = get_object_or_404(Email, email=rt.email)\n email.verified = True\n email.save()\n \n user = getattr(rt, 'user')\n\n # If it is a stud email, set the ntnu_username for user\n if re.match(r'[^@][email protected]', rt.email):\n user.ntnu_username = rt.email.split(\"@\")[0]\n\n user_activated = False\n if not user.is_active:\n user.is_active = True\n user_activated = True\n\n user.save()\n rt.delete()\n\n if user_activated:\n messages.success(request, _(u'Bruker %s ble aktivert. Du kan n\u00e5 logge inn.') % user.username)\n return redirect('auth_login')\n else:\n messages.success(request, _(u'Eposten %s er n\u00e5 verifisert.') % email)\n return redirect('profiles')\n else:\n messages.error(request, _(u'Denne lenken er utl\u00f8pt. 
Bruk gjenopprett passord for \u00e5 f\u00e5 tilsendt en ny lenke.'))\n return HttpResponseRedirect('/') \n \n\ndef recover(request):\n if request.user.is_authenticated():\n messages.error(request, _(u'Gjenoppretning av passord krever at du er logget ut.'))\n return HttpResponseRedirect('/')\n else:\n if request.method == 'POST':\n form = RecoveryForm(request.POST)\n if form.is_valid():\n email_string = form.cleaned_data['email']\n emails = Email.objects.filter(email=email_string)\n\n if len(emails) == 0:\n messages.error(request, _(u'Denne eposten er ikke registrert i v\u00e5re systemer.'))\n return HttpResponseRedirect('/') \n\n email = emails[0]\n \n # Create the registration token\n token = uuid.uuid4().hex\n rt = RegisterToken(user=email.user, email=email.email, token=token)\n rt.save()\n\n email_message = _(u\"\"\"\nVi har mottat foresp\u00f8rsel om \u00e5 gjenopprette passordet for kontoen bundet til %s.\nDersom du ikke har bedt om denne handlingen ber vi deg se bort fra denne eposten.\n\nBrukernavn: %s\n\nHvis du \u00f8nsker \u00e5 gjennomf\u00f8re en gjenoppretning av passord, bruk lenken under.\n\nhttp://%s/auth/set_password/%s/\n\nDenne lenken vil v\u00e6re gyldig i 24 timer. Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\nkan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n\"\"\") % (email.email, email.user.username, request.META['HTTP_HOST'], token)\n\n send_mail(_(u'Gjenoppretning av passord'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n\n messages.success(request, _(u'En lenke for gjenoppretning har blitt sendt til %s.') % email.email)\n\n return HttpResponseRedirect('/') \n else:\n form = RecoveryForm(request.POST, auto_id=True)\n else:\n form = RecoveryForm()\n\n return render(request, 'auth/recover.html', {'form': form})\n\n\n@sensitive_post_parameters()\ndef set_password(request, token=None): \n if request.user.is_authenticated():\n return HttpResponseRedirect('/')\n else:\n tokens = RegisterToken.objects.filter(token=token)\n \n if tokens.count() == 1:\n rt = tokens[0]\n if rt.is_valid:\n if request.method == 'POST':\n form = ChangePasswordForm(request.POST, auto_id=True)\n if form.is_valid():\n user = getattr(rt, 'user')\n\n user.is_active = True\n user.set_password(form.cleaned_data['new_password'])\n user.save()\n \n rt.delete()\n\n messages.success(request, _(u'Bruker %s har gjennomf\u00f8rt vellykket gjenoppretning av passord. Du kan n\u00e5 logge inn.') % user.username)\n \n return HttpResponseRedirect('/') \n else:\n form = ChangePasswordForm()\n\n messages.success(request, _(u'Lenken er akseptert. Vennligst skriv inn \u00f8nsket passord.'))\n\n return render(request, 'auth/set_password.html', {'form': form, 'token': token})\n\n else:\n messages.error(request, _(u'Lenken er ugyldig. Vennligst bruk gjenoppretning av passord for \u00e5 f\u00e5 tilsendt en ny lenke.'))\n return HttpResponseRedirect('/') \n", "path": "apps/authentication/views.py"}]} | 2,871 | 277 |
gh_patches_debug_24940 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-1548 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Log message hiding interferes with the pytest log capture fixture.
The log message hiding merged in #1535 interferes with pytest's log capturing.
In the example below I am using the [caplog fixture](https://docs.pytest.org/en/latest/logging.html#caplog-fixture) to capture the log messages and test that they are correct.
In this simple case I am just asserting the number of messages. This results in an empty record list on the first call with 3.69.11, but it works as expected with 3.69.10.
```python
import logging
from hypothesis import given, strategies
@given(x=strategies.floats(min_value=-3, max_value=3))
def test_logcapture(x, caplog):
    with caplog.at_level(logging.CRITICAL, logger='root'):
        caplog.clear()
        logging.critical(f"x is {x}")
        assert len(caplog.records) == 1
```
--- END ISSUE ---
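The behaviour in the report is consistent with `logging.disable(logging.CRITICAL)` being active while the test body runs: disabled records never reach any handler, including the one pytest's `caplog` fixture installs. A standard-library-only sketch of that mechanism follows; the handler class here is illustrative and only stands in for what caplog attaches, it is not pytest's actual implementation:

```python
# Illustrative sketch: logging.disable() hides records from every handler,
# which is why a capturing handler (like caplog's) ends up empty.
import logging


class ListHandler(logging.Handler):
    """Toy capturing handler; stands in for what caplog attaches."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)


handler = ListHandler()
logging.getLogger().addHandler(handler)

logging.disable(logging.CRITICAL)   # suppresses CRITICAL and everything below
logging.critical("hidden")          # never reaches the handler
logging.disable(logging.NOTSET)     # lift the suppression
logging.critical("visible")         # now the handler records it

print(len(handler.records))  # 1, only the message sent after re-enabling
```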
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hypothesis-python/src/hypothesis/control.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import logging
21 import traceback
22
23 from hypothesis import Verbosity, settings
24 from hypothesis.errors import CleanupFailed, InvalidArgument, \
25 UnsatisfiedAssumption
26 from hypothesis.reporting import report
27 from hypothesis.utils.dynamicvariables import DynamicVariable
28
29 if False:
30 from typing import Any, AnyStr # noqa
31
32
33 def reject():
34 raise UnsatisfiedAssumption()
35
36
37 def assume(condition):
38 # type: (Any) -> bool
39 """Calling ``assume`` is like an :ref:`assert <python:assert>` that marks
40 the example as bad, rather than failing the test.
41
42 This allows you to specify properties that you *assume* will be
43 true, and let Hypothesis try to avoid similar examples in future.
44 """
45 if not condition:
46 raise UnsatisfiedAssumption()
47 return True
48
49
50 _current_build_context = DynamicVariable(None)
51
52
53 def current_build_context():
54 context = _current_build_context.value
55 if context is None:
56 raise InvalidArgument(
57 u'No build context registered')
58 return context
59
60
61 class BuildContext(object):
62
63 def __init__(self, data, is_final=False, close_on_capture=True):
64 self.data = data
65 self.tasks = []
66 self.is_final = is_final
67 self.close_on_capture = close_on_capture
68 self.close_on_del = False
69 self.notes = []
70 self.original_logging_disable = logging.NOTSET
71
72 def __enter__(self):
73 if not self.is_final:
74 self.original_logging_disable = logging.root.manager.disable
75 logging.disable(logging.CRITICAL)
76 self.assign_variable = _current_build_context.with_value(self)
77 self.assign_variable.__enter__()
78 return self
79
80 def __exit__(self, exc_type, exc_value, tb):
81 self.assign_variable.__exit__(exc_type, exc_value, tb)
82 if not self.is_final:
83 logging.disable(self.original_logging_disable)
84 if self.close() and exc_type is None:
85 raise CleanupFailed()
86
87 def local(self):
88 return _current_build_context.with_value(self)
89
90 def close(self):
91 any_failed = False
92 for task in self.tasks:
93 try:
94 task()
95 except BaseException:
96 any_failed = True
97 report(traceback.format_exc())
98 return any_failed
99
100
101 def cleanup(teardown):
102 """Register a function to be called when the current test has finished
103 executing. Any exceptions thrown in teardown will be printed but not
104 rethrown.
105
106 Inside a test this isn't very interesting, because you can just use
107 a finally block, but note that you can use this inside map, flatmap,
108 etc. in order to e.g. insist that a value is closed at the end.
109 """
110 context = _current_build_context.value
111 if context is None:
112 raise InvalidArgument(
113 u'Cannot register cleanup outside of build context')
114 context.tasks.append(teardown)
115
116
117 def note(value):
118 # type: (AnyStr) -> None
119 """Report this value in the final execution."""
120 context = _current_build_context.value
121 if context is None:
122 raise InvalidArgument(
123 'Cannot make notes outside of a test')
124 context.notes.append(value)
125 if context.is_final or settings.default.verbosity >= Verbosity.verbose:
126 report(value)
127
128
129 def event(value):
130 # type: (AnyStr) -> None
131 """Record an event that occurred this test. Statistics on number of test
132 runs with each event will be reported at the end if you run Hypothesis in
133 statistics reporting mode.
134
135 Events should be strings or convertible to them.
136 """
137 context = _current_build_context.value
138 if context is None:
139 raise InvalidArgument(
140 'Cannot make record events outside of a test')
141
142 if context.data is not None:
143 context.data.note_event(value)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hypothesis-python/src/hypothesis/control.py b/hypothesis-python/src/hypothesis/control.py
--- a/hypothesis-python/src/hypothesis/control.py
+++ b/hypothesis-python/src/hypothesis/control.py
@@ -17,7 +17,6 @@
from __future__ import division, print_function, absolute_import
-import logging
import traceback
from hypothesis import Verbosity, settings
@@ -67,20 +66,14 @@
self.close_on_capture = close_on_capture
self.close_on_del = False
self.notes = []
- self.original_logging_disable = logging.NOTSET
def __enter__(self):
- if not self.is_final:
- self.original_logging_disable = logging.root.manager.disable
- logging.disable(logging.CRITICAL)
self.assign_variable = _current_build_context.with_value(self)
self.assign_variable.__enter__()
return self
def __exit__(self, exc_type, exc_value, tb):
self.assign_variable.__exit__(exc_type, exc_value, tb)
- if not self.is_final:
- logging.disable(self.original_logging_disable)
if self.close() and exc_type is None:
raise CleanupFailed()
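
A minimal regression-test sketch for the behaviour this patch restores. It simply mirrors the reproducer from the issue; the test name is illustrative, and it assumes pytest's function-scoped `caplog` fixture:

```python
# With logging.disable() removed from BuildContext, records emitted inside a
# @given body reach pytest's caplog handler again.
import logging

from hypothesis import given, strategies as st


@given(x=st.integers())
def test_caplog_sees_records_inside_given(x, caplog):
    with caplog.at_level(logging.CRITICAL):
        caplog.clear()
        logging.critical("x is %s", x)
        assert len(caplog.records) == 1
```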
| {"golden_diff": "diff --git a/hypothesis-python/src/hypothesis/control.py b/hypothesis-python/src/hypothesis/control.py\n--- a/hypothesis-python/src/hypothesis/control.py\n+++ b/hypothesis-python/src/hypothesis/control.py\n@@ -17,7 +17,6 @@\n \n from __future__ import division, print_function, absolute_import\n \n-import logging\n import traceback\n \n from hypothesis import Verbosity, settings\n@@ -67,20 +66,14 @@\n self.close_on_capture = close_on_capture\n self.close_on_del = False\n self.notes = []\n- self.original_logging_disable = logging.NOTSET\n \n def __enter__(self):\n- if not self.is_final:\n- self.original_logging_disable = logging.root.manager.disable\n- logging.disable(logging.CRITICAL)\n self.assign_variable = _current_build_context.with_value(self)\n self.assign_variable.__enter__()\n return self\n \n def __exit__(self, exc_type, exc_value, tb):\n self.assign_variable.__exit__(exc_type, exc_value, tb)\n- if not self.is_final:\n- logging.disable(self.original_logging_disable)\n if self.close() and exc_type is None:\n raise CleanupFailed()\n", "issue": "Log message hiding interfers with pytest log caputure fixture.\nThe log message hiding merged in #1535 interferes with pytests log capturing. \r\n\r\nIn the example below I am using the [caplog fixture](https://docs.pytest.org/en/latest/logging.html#caplog-fixture) to capture the log message and test that they are correct.\r\nIn this simple case just asserting the number of messages. This results in an empty record list in the first call with 3.69.11 but works as expected with 3.69.10\r\n\r\n```python\r\nimport logging\r\nfrom hypothesis import given, strategies\r\n\r\n@given(x=strategies.floats(min_value=-3, max_value=3))\r\ndef test_logcapture(x, caplog):\r\n\r\n with caplog.at_level(logging.CRITICAL, logger='root'):\r\n caplog.clear()\r\n logging.critical(f\"x is {x}\")\r\n assert len(caplog.records) == 1 \r\n```\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport logging\nimport traceback\n\nfrom hypothesis import Verbosity, settings\nfrom hypothesis.errors import CleanupFailed, InvalidArgument, \\\n UnsatisfiedAssumption\nfrom hypothesis.reporting import report\nfrom hypothesis.utils.dynamicvariables import DynamicVariable\n\nif False:\n from typing import Any, AnyStr # noqa\n\n\ndef reject():\n raise UnsatisfiedAssumption()\n\n\ndef assume(condition):\n # type: (Any) -> bool\n \"\"\"Calling ``assume`` is like an :ref:`assert <python:assert>` that marks\n the example as bad, rather than failing the test.\n\n This allows you to specify properties that you *assume* will be\n true, and let Hypothesis try to avoid similar examples in future.\n \"\"\"\n if not condition:\n raise UnsatisfiedAssumption()\n return True\n\n\n_current_build_context = DynamicVariable(None)\n\n\ndef current_build_context():\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n u'No build context registered')\n return context\n\n\nclass BuildContext(object):\n\n def __init__(self, data, is_final=False, close_on_capture=True):\n self.data = data\n self.tasks = []\n self.is_final = is_final\n self.close_on_capture = close_on_capture\n self.close_on_del = False\n self.notes = []\n self.original_logging_disable = logging.NOTSET\n\n def __enter__(self):\n if not self.is_final:\n self.original_logging_disable = logging.root.manager.disable\n logging.disable(logging.CRITICAL)\n self.assign_variable = _current_build_context.with_value(self)\n self.assign_variable.__enter__()\n return self\n\n def __exit__(self, exc_type, exc_value, tb):\n self.assign_variable.__exit__(exc_type, exc_value, tb)\n if not self.is_final:\n logging.disable(self.original_logging_disable)\n if self.close() and exc_type is None:\n raise CleanupFailed()\n\n def local(self):\n return _current_build_context.with_value(self)\n\n def close(self):\n any_failed = False\n for task in self.tasks:\n try:\n task()\n except BaseException:\n any_failed = True\n report(traceback.format_exc())\n return any_failed\n\n\ndef cleanup(teardown):\n \"\"\"Register a function to be called when the current test has finished\n executing. Any exceptions thrown in teardown will be printed but not\n rethrown.\n\n Inside a test this isn't very interesting, because you can just use\n a finally block, but note that you can use this inside map, flatmap,\n etc. in order to e.g. insist that a value is closed at the end.\n \"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n u'Cannot register cleanup outside of build context')\n context.tasks.append(teardown)\n\n\ndef note(value):\n # type: (AnyStr) -> None\n \"\"\"Report this value in the final execution.\"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n 'Cannot make notes outside of a test')\n context.notes.append(value)\n if context.is_final or settings.default.verbosity >= Verbosity.verbose:\n report(value)\n\n\ndef event(value):\n # type: (AnyStr) -> None\n \"\"\"Record an event that occurred this test. 
Statistics on number of test\n runs with each event will be reported at the end if you run Hypothesis in\n statistics reporting mode.\n\n Events should be strings or convertible to them.\n \"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n 'Cannot make record events outside of a test')\n\n if context.data is not None:\n context.data.note_event(value)\n", "path": "hypothesis-python/src/hypothesis/control.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport traceback\n\nfrom hypothesis import Verbosity, settings\nfrom hypothesis.errors import CleanupFailed, InvalidArgument, \\\n UnsatisfiedAssumption\nfrom hypothesis.reporting import report\nfrom hypothesis.utils.dynamicvariables import DynamicVariable\n\nif False:\n from typing import Any, AnyStr # noqa\n\n\ndef reject():\n raise UnsatisfiedAssumption()\n\n\ndef assume(condition):\n # type: (Any) -> bool\n \"\"\"Calling ``assume`` is like an :ref:`assert <python:assert>` that marks\n the example as bad, rather than failing the test.\n\n This allows you to specify properties that you *assume* will be\n true, and let Hypothesis try to avoid similar examples in future.\n \"\"\"\n if not condition:\n raise UnsatisfiedAssumption()\n return True\n\n\n_current_build_context = DynamicVariable(None)\n\n\ndef current_build_context():\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n u'No build context registered')\n return context\n\n\nclass BuildContext(object):\n\n def __init__(self, data, is_final=False, close_on_capture=True):\n self.data = data\n self.tasks = []\n self.is_final = is_final\n self.close_on_capture = close_on_capture\n self.close_on_del = False\n self.notes = []\n\n def __enter__(self):\n self.assign_variable = _current_build_context.with_value(self)\n self.assign_variable.__enter__()\n return self\n\n def __exit__(self, exc_type, exc_value, tb):\n self.assign_variable.__exit__(exc_type, exc_value, tb)\n if self.close() and exc_type is None:\n raise CleanupFailed()\n\n def local(self):\n return _current_build_context.with_value(self)\n\n def close(self):\n any_failed = False\n for task in self.tasks:\n try:\n task()\n except BaseException:\n any_failed = True\n report(traceback.format_exc())\n return any_failed\n\n\ndef cleanup(teardown):\n \"\"\"Register a function to be called when the current test has finished\n executing. Any exceptions thrown in teardown will be printed but not\n rethrown.\n\n Inside a test this isn't very interesting, because you can just use\n a finally block, but note that you can use this inside map, flatmap,\n etc. in order to e.g. 
insist that a value is closed at the end.\n \"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n u'Cannot register cleanup outside of build context')\n context.tasks.append(teardown)\n\n\ndef note(value):\n # type: (AnyStr) -> None\n \"\"\"Report this value in the final execution.\"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n 'Cannot make notes outside of a test')\n context.notes.append(value)\n if context.is_final or settings.default.verbosity >= Verbosity.verbose:\n report(value)\n\n\ndef event(value):\n # type: (AnyStr) -> None\n \"\"\"Record an event that occurred this test. Statistics on number of test\n runs with each event will be reported at the end if you run Hypothesis in\n statistics reporting mode.\n\n Events should be strings or convertible to them.\n \"\"\"\n context = _current_build_context.value\n if context is None:\n raise InvalidArgument(\n 'Cannot make record events outside of a test')\n\n if context.data is not None:\n context.data.note_event(value)\n", "path": "hypothesis-python/src/hypothesis/control.py"}]} | 1,801 | 262 |
gh_patches_debug_10841 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-435 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Manual creation of Languages results in Bugsplash at page view.
### Describe the Bug
If a system is set up manually without the test data, the manual language setup results in an error when opening the list view of the pages.

### Steps to Reproduce
1. Create Languages (Like German with code DE_de)
2. Create Region (Like Berlin)
3. Add Language Tree model like German -> English
4. Click on Pages and see error
### Expected Behavior
The usual list should be displayed.
### Actual Behavior
Error message from Django.
### Additional Information
I guess this has something to do with manually setting the language code and this can't be matched by django.
--- END ISSUE ---
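Before the files below, a minimal sketch of the suspected failure mode: `QuerySet.get()` raises `DoesNotExist` when a language code has no matching `Language` row, which surfaces as the error on the page list. The import paths are taken from the files shown below and are assumptions about the project layout:

```python
# Hypothetical Django shell session illustrating the crash and a defensive lookup.
from cms.models import Language
from cms.templatetags.content_filters import get_language

try:
    get_language("en-us")                 # wraps Language.objects.get(code=...)
except Language.DoesNotExist:
    print("no Language row for this code -> the page list view errors out")

# filter(...).first() returns None instead of raising:
print(Language.objects.filter(code="en-us").first())
```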
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cms/templatetags/content_filters.py`
Content:
```
1 import logging
2
3 from django import template
4
5 from ..models import Language
6
7 logger = logging.getLogger(__name__)
8 register = template.Library()
9
10
11 @register.simple_tag
12 def get_translation(instance, language_code):
13 return instance.translations.filter(language__code=language_code).first()
14
15
16 @register.simple_tag
17 def translated_language_name(language_code):
18 return Language.objects.get(code=language_code).translated_name
19
20 @register.simple_tag
21 def get_language(language_code):
22 return Language.objects.get(code=language_code)
23
24 # Unify the language codes of backend and content languages
25 @register.simple_tag
26 def unify_language_code(language_code):
27 if language_code == 'en-gb':
28 return 'en-us'
29 return language_code
30
31
32 @register.filter
33 def get_int_list(data, list_name):
34 return [int(item) for item in data.getlist(list_name)]
35
36
37 @register.filter
38 def is_empty(iterable):
39 return not bool(iterable)
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cms/templatetags/content_filters.py b/src/cms/templatetags/content_filters.py
--- a/src/cms/templatetags/content_filters.py
+++ b/src/cms/templatetags/content_filters.py
@@ -15,11 +15,14 @@
@register.simple_tag
def translated_language_name(language_code):
- return Language.objects.get(code=language_code).translated_name
+ language = Language.objects.filter(code=language_code)
+ if language.exists():
+ return language.first().translated_name
+ return ''
@register.simple_tag
def get_language(language_code):
- return Language.objects.get(code=language_code)
+ return Language.objects.filter(code=language_code).first()
# Unify the language codes of backend and content languages
@register.simple_tag
| {"golden_diff": "diff --git a/src/cms/templatetags/content_filters.py b/src/cms/templatetags/content_filters.py\n--- a/src/cms/templatetags/content_filters.py\n+++ b/src/cms/templatetags/content_filters.py\n@@ -15,11 +15,14 @@\n \n @register.simple_tag\n def translated_language_name(language_code):\n- return Language.objects.get(code=language_code).translated_name\n+ language = Language.objects.filter(code=language_code)\n+ if language.exists():\n+ return language.first().translated_name\n+ return ''\n \n @register.simple_tag\n def get_language(language_code):\n- return Language.objects.get(code=language_code)\n+ return Language.objects.filter(code=language_code).first()\n \n # Unify the language codes of backend and content languages\n @register.simple_tag\n", "issue": "Manual creation of Languages results in Bugsplash at page view.\n### Describe the Bug\r\nIf a system is set up manually without the test data. The manual language setup results in a error when opening the list view of the pages.\r\n\r\n\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create Languages (Like German with code DE_de)\r\n2. Create Region (Like Berlin)\r\n3. Add Language Tree model like German -> English\r\n4. Click on Pages and see error\r\n\r\n### Expected Behavior\r\nThe usual list should be displayed. \r\n\r\n\r\n### Actual Behavior\r\nError message from Django.\r\n\r\n\r\n### Additional Information\r\nI guess this has something to do with manually setting the language code and this can't be matched by django.\r\n\r\n\nManual creation of Languages results in Bugsplash at page view.\n### Describe the Bug\r\nIf a system is set up manually without the test data. The manual language setup results in a error when opening the list view of the pages.\r\n\r\n\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create Languages (Like German with code DE_de)\r\n2. Create Region (Like Berlin)\r\n3. Add Language Tree model like German -> English\r\n4. Click on Pages and see error\r\n\r\n### Expected Behavior\r\nThe usual list should be displayed. 
\r\n\r\n\r\n### Actual Behavior\r\nError message from Django.\r\n\r\n\r\n### Additional Information\r\nI guess this has something to do with manually setting the language code and this can't be matched by django.\r\n\r\n\n", "before_files": [{"content": "import logging\n\nfrom django import template\n\nfrom ..models import Language\n\nlogger = logging.getLogger(__name__)\nregister = template.Library()\n\n\[email protected]_tag\ndef get_translation(instance, language_code):\n return instance.translations.filter(language__code=language_code).first()\n\n\[email protected]_tag\ndef translated_language_name(language_code):\n return Language.objects.get(code=language_code).translated_name\n\[email protected]_tag\ndef get_language(language_code):\n return Language.objects.get(code=language_code)\n\n# Unify the language codes of backend and content languages\[email protected]_tag\ndef unify_language_code(language_code):\n if language_code == 'en-gb':\n return 'en-us'\n return language_code\n\n\[email protected]\ndef get_int_list(data, list_name):\n return [int(item) for item in data.getlist(list_name)]\n\n\[email protected]\ndef is_empty(iterable):\n return not bool(iterable)\n", "path": "src/cms/templatetags/content_filters.py"}], "after_files": [{"content": "import logging\n\nfrom django import template\n\nfrom ..models import Language\n\nlogger = logging.getLogger(__name__)\nregister = template.Library()\n\n\[email protected]_tag\ndef get_translation(instance, language_code):\n return instance.translations.filter(language__code=language_code).first()\n\n\[email protected]_tag\ndef translated_language_name(language_code):\n language = Language.objects.filter(code=language_code)\n if language.exists():\n return language.first().translated_name\n return ''\n\[email protected]_tag\ndef get_language(language_code):\n return Language.objects.filter(code=language_code).first()\n\n# Unify the language codes of backend and content languages\[email protected]_tag\ndef unify_language_code(language_code):\n if language_code == 'en-gb':\n return 'en-us'\n return language_code\n\n\[email protected]\ndef get_int_list(data, list_name):\n return [int(item) for item in data.getlist(list_name)]\n\n\[email protected]\ndef is_empty(iterable):\n return not bool(iterable)\n", "path": "src/cms/templatetags/content_filters.py"}]} | 949 | 178 |
gh_patches_debug_229 | rasdani/github-patches | git_diff | facebookresearch__hydra-1808 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] hydra-optuna-sweeper 1.1.0 requires numpy<1.20.0
# 🐛 Bug
## Description
<!-- A clear and concise description of what the bug is. -->
I used the guide from
https://hydra.cc/docs/plugins/optuna_sweeper/
and installed hydra-optuna-sweeper:
```bash
pip install hydra-optuna-sweeper --upgrade
```
But it seems this plugin requires numpy<1.20.0:

**Edit:**
I searched for optuna's requirements and found this:
https://github.com/optuna/optuna/blob/cbae80476c15b6d39e1d8851dc6a501c63c3ca92/setup.py#L35
Why does hydra-optuna-sweeper need to use numpy<1.20.0?
--- END ISSUE ---
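A quick, hedged way to confirm on an affected environment where the `numpy<1.20.0` pin comes from; the package names are the published PyPI names, and the exact output depends on the installed releases:

```python
# Requires Python 3.8+ for importlib.metadata.
from importlib.metadata import requires, version

print(version("hydra-optuna-sweeper"))
print(requires("hydra-optuna-sweeper"))  # shows the pinned optuna range
print(requires("optuna"))                # the numpy<1.20.0 pin referenced above lives here
```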
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 __version__ = "1.1.0"
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py b/plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py
--- a/plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py
+++ b/plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py
@@ -1,3 +1,3 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-__version__ = "1.1.0"
+__version__ = "1.1.1"
| {"golden_diff": "diff --git a/plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py b/plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py\n--- a/plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py\n+++ b/plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py\n@@ -1,3 +1,3 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n \n-__version__ = \"1.1.0\"\n+__version__ = \"1.1.1\"\n", "issue": "[Bug] hydra-optuna-sweeper 1.1.0 requires numpy<1.20.0\n# \ud83d\udc1b Bug\r\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nI used the guide from\r\nhttps://hydra.cc/docs/plugins/optuna_sweeper/\r\n\r\nAnd install hydra-optuna-sweeper:\r\n\r\n```bash \r\npip install hydra-optuna-sweeper --upgrade\r\n```\r\n\r\nBut it seems this plugin requires numpy<1.20.0:\r\n\r\n\r\n\r\n**Edit:**\r\n\r\nI searched for optuna's requirements, found this:\r\n\r\nhttps://github.com/optuna/optuna/blob/cbae80476c15b6d39e1d8851dc6a501c63c3ca92/setup.py#L35\r\n\r\nWhy hydra-optuna-sweeper need to use numpy<1.20.0?\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n__version__ = \"1.1.0\"\n", "path": "plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n__version__ = \"1.1.1\"\n", "path": "plugins/hydra_optuna_sweeper/hydra_plugins/hydra_optuna_sweeper/__init__.py"}]} | 577 | 163 |
gh_patches_debug_43516 | rasdani/github-patches | git_diff | ResonantGeoData__ResonantGeoData-577 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Client: API token saving bug on MacOS
@banesullivan was experiencing the following issue:
When calling `create_rgd_client`, if there isn't already a token stored locally (in `$HOME/.rgd/token`), the client is supposed to make a request to the server to create it, and save it in that file. On MacOS, it seems this might not be occurring. The file doesn't appear to ever be created (notably though, the `.rgd` folder _is_ present). Furthermore, if you try to manually populate that file with your token, it will correctly read it, but the file will then be gone afterwards.
This doesn't actually affect authorization, as it still just fetches the token from the API and stores it in memory, but the storage issue needs to be looked into.
--- END ISSUE ---
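A minimal sketch for checking whether the token file actually round-trips on disk, assuming the `~/.rgd/token` location described above; it only exercises the save/read path, not authentication:

```python
# Hypothetical check of the on-disk token path from the issue report.
from pathlib import Path

token_path = Path.home() / ".rgd" / "token"
token_path.parent.mkdir(parents=True, exist_ok=True)
token_path.write_text("dummy-token")

print(token_path.exists())      # expected True; the report suggests the file goes missing
print(token_path.read_text())   # expected "dummy-token"
```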
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django-rgd/client/rgd_client/client.py`
Content:
```
1 import getpass
2 import os
3 from typing import List, Optional, Type
4
5 import requests
6
7 from .plugin import CorePlugin
8 from .session import RgdClientSession, clone_session
9 from .utils import API_KEY_DIR_PATH, API_KEY_FILE_NAME, DEFAULT_RGD_API
10
11
12 class RgdClient:
13 def __init__(
14 self,
15 api_url: str = DEFAULT_RGD_API,
16 username: Optional[str] = None,
17 password: Optional[str] = None,
18 save: Optional[bool] = True,
19 ) -> None:
20 """
21 Initialize the base RGD Client.
22
23 Args:
24 api_url: The base url of the RGD API instance.
25 username: The username to authenticate to the instance with, if any.
26 password: The password associated with the provided username. If None, a prompt will be provided.
27 save: Whether or not to save the logged-in user's API key to disk for future use.
28
29 Returns:
30 A base RgdClient instance.
31 """
32 # Look for an API key in the environment. If it's not there, check username/password
33 api_key = _read_api_key()
34 if api_key is None:
35 if username is not None and password is None:
36 password = getpass.getpass()
37
38 # Get an API key for this user and save it to disk
39 if username and password:
40 api_key = _get_api_key(api_url, username, password, save)
41
42 auth_header = f'Token {api_key}'
43
44 self.session = RgdClientSession(base_url=api_url, auth_header=auth_header)
45 self.rgd = CorePlugin(clone_session(self.session))
46
47 def clear_token(self):
48 """Delete a locally-stored API key."""
49 (API_KEY_DIR_PATH / API_KEY_FILE_NAME).unlink(missing_ok=True)
50
51
52 def _get_api_key(api_url: str, username: str, password: str, save: bool) -> str:
53 """Get an RGD API Key for the given user from the server, and save it if requested."""
54 resp = requests.post(f'{api_url}/api-token-auth', {'username': username, 'password': password})
55 resp.raise_for_status()
56 token = resp.json()['token']
57 if save:
58 API_KEY_DIR_PATH.mkdir(parents=True, exist_ok=True)
59 with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'w') as fd:
60 fd.write(token)
61 return token
62
63
64 def _read_api_key() -> Optional[str]:
65 """
66 Retrieve an RGD API Key from the users environment.
67
68 This function checks for an environment variable named RGD_API_TOKEN and returns it if it exists.
69 If it does not exist, it looks for a file located at ~/.rgd/token and returns its contents.
70 """
71 token = os.getenv('RGD_API_TOKEN', None)
72 if token is not None:
73 return token
74
75 try:
76 # read the first line of the text file at ~/.rgd/token
77 with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'r') as fd:
78 return fd.readline().strip()
79 except FileNotFoundError:
80 return None
81
82
83 def create_rgd_client(
84 api_url: str = DEFAULT_RGD_API,
85 username: Optional[str] = None,
86 password: Optional[str] = None,
87 save: Optional[bool] = True,
88 extra_plugins: Optional[List[Type]] = None,
89 ):
90 # Avoid circular import
91 from ._plugin_utils import _inject_plugin_deps, _plugin_classes, _plugin_instances
92
93 # Create initial client
94 client = RgdClient(api_url, username, password, save)
95
96 # Perform plugin initialization
97 plugin_classes = _plugin_classes(extra_plugins=extra_plugins)
98 plugin_instances = _plugin_instances(client, plugin_classes)
99 _inject_plugin_deps(plugin_instances)
100
101 return client
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django-rgd/client/rgd_client/client.py b/django-rgd/client/rgd_client/client.py
--- a/django-rgd/client/rgd_client/client.py
+++ b/django-rgd/client/rgd_client/client.py
@@ -1,4 +1,5 @@
import getpass
+import logging
import os
from typing import List, Optional, Type
@@ -8,6 +9,8 @@
from .session import RgdClientSession, clone_session
from .utils import API_KEY_DIR_PATH, API_KEY_FILE_NAME, DEFAULT_RGD_API
+logger = logging.getLogger(__name__)
+
class RgdClient:
def __init__(
@@ -30,7 +33,7 @@
A base RgdClient instance.
"""
# Look for an API key in the environment. If it's not there, check username/password
- api_key = _read_api_key()
+ api_key = _read_api_key(api_url=api_url, username=username, password=password)
if api_key is None:
if username is not None and password is None:
password = getpass.getpass()
@@ -38,6 +41,10 @@
# Get an API key for this user and save it to disk
if username and password:
api_key = _get_api_key(api_url, username, password, save)
+ if api_key is None:
+ logger.error(
+ 'Failed to retrieve API key; are your username and password correct?'
+ )
auth_header = f'Token {api_key}'
@@ -49,11 +56,12 @@
(API_KEY_DIR_PATH / API_KEY_FILE_NAME).unlink(missing_ok=True)
-def _get_api_key(api_url: str, username: str, password: str, save: bool) -> str:
+def _get_api_key(api_url: str, username: str, password: str, save: bool) -> Optional[str]:
"""Get an RGD API Key for the given user from the server, and save it if requested."""
resp = requests.post(f'{api_url}/api-token-auth', {'username': username, 'password': password})
- resp.raise_for_status()
- token = resp.json()['token']
+ token = resp.json().get('token')
+ if token is None:
+ return None
if save:
API_KEY_DIR_PATH.mkdir(parents=True, exist_ok=True)
with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'w') as fd:
@@ -61,7 +69,7 @@
return token
-def _read_api_key() -> Optional[str]:
+def _read_api_key(api_url: str, username: str = None, password: str = None) -> Optional[str]:
"""
Retrieve an RGD API Key from the users environment.
@@ -75,10 +83,29 @@
try:
# read the first line of the text file at ~/.rgd/token
with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'r') as fd:
- return fd.readline().strip()
+ api_key = fd.readline().strip()
except FileNotFoundError:
return None
+ # Make sure API key works by hitting a protected endpoint
+ resp = requests.get(f'{api_url}/rgd/collection', headers={'Authorization': f'Token {api_key}'})
+
+ # If it doesn't, try to get a new one and save it to ~/.rgd/token, as the current one is corrupted
+ if resp.status_code == 401:
+ logger.error('API key is invalid.')
+ # If username + password were provided, try to get a new API key with them
+ if username is not None and password is not None:
+ logger.warning('Attempting to fetch a new API key...')
+ api_key = _get_api_key(api_url, username, password, save=True)
+ if api_key is not None:
+ logger.warning('Succeeded.')
+ return api_key
+ else:
+ logger.error('Provide your username and password next time to fetch a new one.')
+ return None
+
+ return api_key
+
def create_rgd_client(
api_url: str = DEFAULT_RGD_API,
| {"golden_diff": "diff --git a/django-rgd/client/rgd_client/client.py b/django-rgd/client/rgd_client/client.py\n--- a/django-rgd/client/rgd_client/client.py\n+++ b/django-rgd/client/rgd_client/client.py\n@@ -1,4 +1,5 @@\n import getpass\n+import logging\n import os\n from typing import List, Optional, Type\n \n@@ -8,6 +9,8 @@\n from .session import RgdClientSession, clone_session\n from .utils import API_KEY_DIR_PATH, API_KEY_FILE_NAME, DEFAULT_RGD_API\n \n+logger = logging.getLogger(__name__)\n+\n \n class RgdClient:\n def __init__(\n@@ -30,7 +33,7 @@\n A base RgdClient instance.\n \"\"\"\n # Look for an API key in the environment. If it's not there, check username/password\n- api_key = _read_api_key()\n+ api_key = _read_api_key(api_url=api_url, username=username, password=password)\n if api_key is None:\n if username is not None and password is None:\n password = getpass.getpass()\n@@ -38,6 +41,10 @@\n # Get an API key for this user and save it to disk\n if username and password:\n api_key = _get_api_key(api_url, username, password, save)\n+ if api_key is None:\n+ logger.error(\n+ 'Failed to retrieve API key; are your username and password correct?'\n+ )\n \n auth_header = f'Token {api_key}'\n \n@@ -49,11 +56,12 @@\n (API_KEY_DIR_PATH / API_KEY_FILE_NAME).unlink(missing_ok=True)\n \n \n-def _get_api_key(api_url: str, username: str, password: str, save: bool) -> str:\n+def _get_api_key(api_url: str, username: str, password: str, save: bool) -> Optional[str]:\n \"\"\"Get an RGD API Key for the given user from the server, and save it if requested.\"\"\"\n resp = requests.post(f'{api_url}/api-token-auth', {'username': username, 'password': password})\n- resp.raise_for_status()\n- token = resp.json()['token']\n+ token = resp.json().get('token')\n+ if token is None:\n+ return None\n if save:\n API_KEY_DIR_PATH.mkdir(parents=True, exist_ok=True)\n with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'w') as fd:\n@@ -61,7 +69,7 @@\n return token\n \n \n-def _read_api_key() -> Optional[str]:\n+def _read_api_key(api_url: str, username: str = None, password: str = None) -> Optional[str]:\n \"\"\"\n Retrieve an RGD API Key from the users environment.\n \n@@ -75,10 +83,29 @@\n try:\n # read the first line of the text file at ~/.rgd/token\n with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'r') as fd:\n- return fd.readline().strip()\n+ api_key = fd.readline().strip()\n except FileNotFoundError:\n return None\n \n+ # Make sure API key works by hitting a protected endpoint\n+ resp = requests.get(f'{api_url}/rgd/collection', headers={'Authorization': f'Token {api_key}'})\n+\n+ # If it doesn't, try to get a new one and save it to ~/.rgd/token, as the current one is corrupted\n+ if resp.status_code == 401:\n+ logger.error('API key is invalid.')\n+ # If username + password were provided, try to get a new API key with them\n+ if username is not None and password is not None:\n+ logger.warning('Attempting to fetch a new API key...')\n+ api_key = _get_api_key(api_url, username, password, save=True)\n+ if api_key is not None:\n+ logger.warning('Succeeded.')\n+ return api_key\n+ else:\n+ logger.error('Provide your username and password next time to fetch a new one.')\n+ return None\n+\n+ return api_key\n+\n \n def create_rgd_client(\n api_url: str = DEFAULT_RGD_API,\n", "issue": "Client: API token saving bug on MacOS\n@banesullivan was experiencing the following issue:\r\n\r\nWhen calling `create_rgd_client`, if there isn't already a token stored locally (in `$HOME/.rgd/token`), the client is supposed to make a 
request to the server to create it, and save it in that file. On MacOS, it seems this might not be occurring. The file doesn't appear to ever be created (notably though, the `.rgd` folder _is_ present). Furthermore, if you try to manually populate that file with your token, it will correctly read it, but the file will then be gone afterwards.\r\n\r\nThis doesn't actually affect authorization, as it still just fetches the token from the API and stores it in memory, but the storage issue needs to be looked into.\n", "before_files": [{"content": "import getpass\nimport os\nfrom typing import List, Optional, Type\n\nimport requests\n\nfrom .plugin import CorePlugin\nfrom .session import RgdClientSession, clone_session\nfrom .utils import API_KEY_DIR_PATH, API_KEY_FILE_NAME, DEFAULT_RGD_API\n\n\nclass RgdClient:\n def __init__(\n self,\n api_url: str = DEFAULT_RGD_API,\n username: Optional[str] = None,\n password: Optional[str] = None,\n save: Optional[bool] = True,\n ) -> None:\n \"\"\"\n Initialize the base RGD Client.\n\n Args:\n api_url: The base url of the RGD API instance.\n username: The username to authenticate to the instance with, if any.\n password: The password associated with the provided username. If None, a prompt will be provided.\n save: Whether or not to save the logged-in user's API key to disk for future use.\n\n Returns:\n A base RgdClient instance.\n \"\"\"\n # Look for an API key in the environment. If it's not there, check username/password\n api_key = _read_api_key()\n if api_key is None:\n if username is not None and password is None:\n password = getpass.getpass()\n\n # Get an API key for this user and save it to disk\n if username and password:\n api_key = _get_api_key(api_url, username, password, save)\n\n auth_header = f'Token {api_key}'\n\n self.session = RgdClientSession(base_url=api_url, auth_header=auth_header)\n self.rgd = CorePlugin(clone_session(self.session))\n\n def clear_token(self):\n \"\"\"Delete a locally-stored API key.\"\"\"\n (API_KEY_DIR_PATH / API_KEY_FILE_NAME).unlink(missing_ok=True)\n\n\ndef _get_api_key(api_url: str, username: str, password: str, save: bool) -> str:\n \"\"\"Get an RGD API Key for the given user from the server, and save it if requested.\"\"\"\n resp = requests.post(f'{api_url}/api-token-auth', {'username': username, 'password': password})\n resp.raise_for_status()\n token = resp.json()['token']\n if save:\n API_KEY_DIR_PATH.mkdir(parents=True, exist_ok=True)\n with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'w') as fd:\n fd.write(token)\n return token\n\n\ndef _read_api_key() -> Optional[str]:\n \"\"\"\n Retrieve an RGD API Key from the users environment.\n\n This function checks for an environment variable named RGD_API_TOKEN and returns it if it exists.\n If it does not exist, it looks for a file located at ~/.rgd/token and returns its contents.\n \"\"\"\n token = os.getenv('RGD_API_TOKEN', None)\n if token is not None:\n return token\n\n try:\n # read the first line of the text file at ~/.rgd/token\n with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'r') as fd:\n return fd.readline().strip()\n except FileNotFoundError:\n return None\n\n\ndef create_rgd_client(\n api_url: str = DEFAULT_RGD_API,\n username: Optional[str] = None,\n password: Optional[str] = None,\n save: Optional[bool] = True,\n extra_plugins: Optional[List[Type]] = None,\n):\n # Avoid circular import\n from ._plugin_utils import _inject_plugin_deps, _plugin_classes, _plugin_instances\n\n # Create initial client\n client = RgdClient(api_url, username, password, 
save)\n\n # Perform plugin initialization\n plugin_classes = _plugin_classes(extra_plugins=extra_plugins)\n plugin_instances = _plugin_instances(client, plugin_classes)\n _inject_plugin_deps(plugin_instances)\n\n return client\n", "path": "django-rgd/client/rgd_client/client.py"}], "after_files": [{"content": "import getpass\nimport logging\nimport os\nfrom typing import List, Optional, Type\n\nimport requests\n\nfrom .plugin import CorePlugin\nfrom .session import RgdClientSession, clone_session\nfrom .utils import API_KEY_DIR_PATH, API_KEY_FILE_NAME, DEFAULT_RGD_API\n\nlogger = logging.getLogger(__name__)\n\n\nclass RgdClient:\n def __init__(\n self,\n api_url: str = DEFAULT_RGD_API,\n username: Optional[str] = None,\n password: Optional[str] = None,\n save: Optional[bool] = True,\n ) -> None:\n \"\"\"\n Initialize the base RGD Client.\n\n Args:\n api_url: The base url of the RGD API instance.\n username: The username to authenticate to the instance with, if any.\n password: The password associated with the provided username. If None, a prompt will be provided.\n save: Whether or not to save the logged-in user's API key to disk for future use.\n\n Returns:\n A base RgdClient instance.\n \"\"\"\n # Look for an API key in the environment. If it's not there, check username/password\n api_key = _read_api_key(api_url=api_url, username=username, password=password)\n if api_key is None:\n if username is not None and password is None:\n password = getpass.getpass()\n\n # Get an API key for this user and save it to disk\n if username and password:\n api_key = _get_api_key(api_url, username, password, save)\n if api_key is None:\n logger.error(\n 'Failed to retrieve API key; are your username and password correct?'\n )\n\n auth_header = f'Token {api_key}'\n\n self.session = RgdClientSession(base_url=api_url, auth_header=auth_header)\n self.rgd = CorePlugin(clone_session(self.session))\n\n def clear_token(self):\n \"\"\"Delete a locally-stored API key.\"\"\"\n (API_KEY_DIR_PATH / API_KEY_FILE_NAME).unlink(missing_ok=True)\n\n\ndef _get_api_key(api_url: str, username: str, password: str, save: bool) -> Optional[str]:\n \"\"\"Get an RGD API Key for the given user from the server, and save it if requested.\"\"\"\n resp = requests.post(f'{api_url}/api-token-auth', {'username': username, 'password': password})\n token = resp.json().get('token')\n if token is None:\n return None\n if save:\n API_KEY_DIR_PATH.mkdir(parents=True, exist_ok=True)\n with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'w') as fd:\n fd.write(token)\n return token\n\n\ndef _read_api_key(api_url: str, username: str = None, password: str = None) -> Optional[str]:\n \"\"\"\n Retrieve an RGD API Key from the users environment.\n\n This function checks for an environment variable named RGD_API_TOKEN and returns it if it exists.\n If it does not exist, it looks for a file located at ~/.rgd/token and returns its contents.\n \"\"\"\n token = os.getenv('RGD_API_TOKEN', None)\n if token is not None:\n return token\n\n try:\n # read the first line of the text file at ~/.rgd/token\n with open(API_KEY_DIR_PATH / API_KEY_FILE_NAME, 'r') as fd:\n api_key = fd.readline().strip()\n except FileNotFoundError:\n return None\n\n # Make sure API key works by hitting a protected endpoint\n resp = requests.get(f'{api_url}/rgd/collection', headers={'Authorization': f'Token {api_key}'})\n\n # If it doesn't, try to get a new one and save it to ~/.rgd/token, as the current one is corrupted\n if resp.status_code == 401:\n logger.error('API key is invalid.')\n # 
If username + password were provided, try to get a new API key with them\n if username is not None and password is not None:\n logger.warning('Attempting to fetch a new API key...')\n api_key = _get_api_key(api_url, username, password, save=True)\n if api_key is not None:\n logger.warning('Succeeded.')\n return api_key\n else:\n logger.error('Provide your username and password next time to fetch a new one.')\n return None\n\n return api_key\n\n\ndef create_rgd_client(\n api_url: str = DEFAULT_RGD_API,\n username: Optional[str] = None,\n password: Optional[str] = None,\n save: Optional[bool] = True,\n extra_plugins: Optional[List[Type]] = None,\n):\n # Avoid circular import\n from ._plugin_utils import _inject_plugin_deps, _plugin_classes, _plugin_instances\n\n # Create initial client\n client = RgdClient(api_url, username, password, save)\n\n # Perform plugin initialization\n plugin_classes = _plugin_classes(extra_plugins=extra_plugins)\n plugin_instances = _plugin_instances(client, plugin_classes)\n _inject_plugin_deps(plugin_instances)\n\n return client\n", "path": "django-rgd/client/rgd_client/client.py"}]} | 1,456 | 938 |
gh_patches_debug_36852 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-961 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python == 2.7
The readme states that
># Deprecated Python Versions
>Python == 2.7
It does not appear to work with Python == 2.7. (link to question on [SO](https://stackoverflow.com/q/56731694/1841839))
Should this note be removed from the [readme](https://github.com/googleapis/google-api-python-client/edit/master/README.md)? To me, deprecation means that it should still work, just be removed soon.
--- END ISSUE ---
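A quick, hedged check of what the installed distribution actually declares for supported Python versions; the metadata field name comes from the packaging spec, and the value depends on which release is installed:

```python
# Requires Python 3.8+ for importlib.metadata.
from importlib.metadata import metadata

meta = metadata("google-api-python-client")
# The unpatched 1.12.x release declares ">=2.7,!=3.0.*,...", while the patch
# further down raises this to ">=3.6".
print(meta["Requires-Python"])
```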
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1
2 # Copyright 2020 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import sys
17
18 import nox
19
20 test_dependencies = [
21 "google-auth",
22 "google-auth-httplib2",
23 "mox",
24 "parameterized",
25 "pyopenssl",
26 "pytest",
27 "pytest-cov",
28 "webtest",
29 "coverage",
30 "unittest2",
31 "mock",
32 ]
33
34
35 @nox.session(python=["3.7"])
36 def lint(session):
37 session.install("flake8")
38 session.run(
39 "flake8",
40 "googleapiclient",
41 "tests",
42 "--count",
43 "--select=E9,F63,F7,F82",
44 "--show-source",
45 "--statistics",
46 )
47
48
49 @nox.session(python=["2.7", "3.5", "3.6", "3.7", "3.8"])
50 @nox.parametrize(
51 "oauth2client",
52 [
53 "oauth2client<2dev",
54 "oauth2client>=2,<=3dev",
55 "oauth2client>=3,<=4dev",
56 "oauth2client>=4,<=5dev",
57 ],
58 )
59 def unit(session, oauth2client):
60 session.install(*test_dependencies)
61 session.install(oauth2client)
62 if session.python < "3.0":
63 session.install("django<2.0.0")
64 else:
65 session.install("django>=2.0.0")
66
67 session.install('.')
68
69 # Run py.test against the unit tests.
70 session.run(
71 "py.test",
72 "--quiet",
73 "--cov=googleapiclient",
74 "--cov=tests",
75 "--cov-append",
76 "--cov-config=.coveragerc",
77 "--cov-report=",
78 "--cov-fail-under=85",
79 "tests",
80 *session.posargs,
81 )
82
83
84 @nox.session(python="3.6")
85 def docs(session):
86 session.install('.')
87 session.run("python", "describe.py")
```
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script for Google API Python client.
16
17 Also installs included versions of third party libraries, if those libraries
18 are not already installed.
19 """
20 from __future__ import print_function
21
22 import sys
23
24 if sys.version_info < (2, 7):
25 print("google-api-python-client requires python version >= 2.7.", file=sys.stderr)
26 sys.exit(1)
27 if (3, 1) <= sys.version_info < (3, 4):
28 print("google-api-python-client requires python3 version >= 3.4.", file=sys.stderr)
29 sys.exit(1)
30
31 import io
32 import os
33 from setuptools import setup
34
35 packages = ["apiclient", "googleapiclient", "googleapiclient/discovery_cache"]
36
37 install_requires = [
38 "httplib2>=0.15.0,<1dev",
39 "google-auth>=1.16.0",
40 "google-auth-httplib2>=0.0.3",
41 "google-api-core>=1.21.0,<2dev",
42 "six>=1.13.0,<2dev",
43 "uritemplate>=3.0.0,<4dev",
44 ]
45
46 package_root = os.path.abspath(os.path.dirname(__file__))
47
48 readme_filename = os.path.join(package_root, "README.md")
49 with io.open(readme_filename, encoding="utf-8") as readme_file:
50 readme = readme_file.read()
51
52 version = "1.12.8"
53
54 setup(
55 name="google-api-python-client",
56 version=version,
57 description="Google API Client Library for Python",
58 long_description=readme,
59 long_description_content_type='text/markdown',
60 author="Google LLC",
61 author_email="[email protected]",
62 url="https://github.com/googleapis/google-api-python-client/",
63 install_requires=install_requires,
64 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
65 packages=packages,
66 package_data={},
67 license="Apache 2.0",
68 keywords="google api client",
69 classifiers=[
70 "Programming Language :: Python :: 2",
71 "Programming Language :: Python :: 2.7",
72 "Programming Language :: Python :: 3",
73 "Programming Language :: Python :: 3.5",
74 "Programming Language :: Python :: 3.6",
75 "Programming Language :: Python :: 3.7",
76 "Development Status :: 5 - Production/Stable",
77 "Intended Audience :: Developers",
78 "License :: OSI Approved :: Apache Software License",
79 "Operating System :: OS Independent",
80 "Topic :: Internet :: WWW/HTTP",
81 ],
82 )
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -18,6 +18,7 @@
import nox
test_dependencies = [
+ "django>=2.0.0",
"google-auth",
"google-auth-httplib2",
"mox",
@@ -46,7 +47,7 @@
)
[email protected](python=["2.7", "3.5", "3.6", "3.7", "3.8"])
[email protected](python=["3.6", "3.7", "3.8", "3.9"])
@nox.parametrize(
"oauth2client",
[
@@ -59,11 +60,6 @@
def unit(session, oauth2client):
session.install(*test_dependencies)
session.install(oauth2client)
- if session.python < "3.0":
- session.install("django<2.0.0")
- else:
- session.install("django>=2.0.0")
-
session.install('.')
# Run py.test against the unit tests.
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,11 +21,8 @@
import sys
-if sys.version_info < (2, 7):
- print("google-api-python-client requires python version >= 2.7.", file=sys.stderr)
- sys.exit(1)
-if (3, 1) <= sys.version_info < (3, 4):
- print("google-api-python-client requires python3 version >= 3.4.", file=sys.stderr)
+if sys.version_info < (3, 6):
+ print("google-api-python-client requires python3 version >= 3.6.", file=sys.stderr)
sys.exit(1)
import io
@@ -61,18 +58,17 @@
author_email="[email protected]",
url="https://github.com/googleapis/google-api-python-client/",
install_requires=install_requires,
- python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
+ python_requires=">=3.6",
packages=packages,
package_data={},
license="Apache 2.0",
keywords="google api client",
classifiers=[
- "Programming Language :: Python :: 2",
- "Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -18,6 +18,7 @@\n import nox\n \n test_dependencies = [\n+ \"django>=2.0.0\",\n \"google-auth\",\n \"google-auth-httplib2\",\n \"mox\",\n@@ -46,7 +47,7 @@\n )\n \n \[email protected](python=[\"2.7\", \"3.5\", \"3.6\", \"3.7\", \"3.8\"])\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\"])\n @nox.parametrize(\n \"oauth2client\",\n [\n@@ -59,11 +60,6 @@\n def unit(session, oauth2client):\n session.install(*test_dependencies)\n session.install(oauth2client)\n- if session.python < \"3.0\":\n- session.install(\"django<2.0.0\")\n- else:\n- session.install(\"django>=2.0.0\")\n-\n session.install('.')\n \n # Run py.test against the unit tests.\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,11 +21,8 @@\n \n import sys\n \n-if sys.version_info < (2, 7):\n- print(\"google-api-python-client requires python version >= 2.7.\", file=sys.stderr)\n- sys.exit(1)\n-if (3, 1) <= sys.version_info < (3, 4):\n- print(\"google-api-python-client requires python3 version >= 3.4.\", file=sys.stderr)\n+if sys.version_info < (3, 6):\n+ print(\"google-api-python-client requires python3 version >= 3.6.\", file=sys.stderr)\n sys.exit(1)\n \n import io\n@@ -61,18 +58,17 @@\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n- python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n+ python_requires=\">=3.6\",\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n- \"Programming Language :: Python :: 2\",\n- \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n+ \"Programming Language :: Python :: 3.8\",\n+ \"Programming Language :: Python :: 3.9\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n", "issue": "Python == 2.7\nThe read me states that\r\n\r\n># Deprecated Python Versions\r\n>Python == 2.7\r\n\r\nIt does not appear to work with Python == 2.7. (link to question on [SO](https://stackoverflow.com/q/56731694/1841839))\r\n\r\nShould this note be removed from the [readme](https://github.com/googleapis/google-api-python-client/edit/master/README.md)? 
to me deprecation means that it should still work just be removed soon.\n", "before_files": [{"content": "\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\n\nimport nox\n\ntest_dependencies = [\n \"google-auth\",\n \"google-auth-httplib2\",\n \"mox\",\n \"parameterized\",\n \"pyopenssl\",\n \"pytest\",\n \"pytest-cov\",\n \"webtest\",\n \"coverage\",\n \"unittest2\",\n \"mock\",\n]\n\n\[email protected](python=[\"3.7\"])\ndef lint(session):\n session.install(\"flake8\")\n session.run(\n \"flake8\",\n \"googleapiclient\",\n \"tests\",\n \"--count\",\n \"--select=E9,F63,F7,F82\",\n \"--show-source\",\n \"--statistics\",\n )\n\n\[email protected](python=[\"2.7\", \"3.5\", \"3.6\", \"3.7\", \"3.8\"])\[email protected](\n \"oauth2client\",\n [\n \"oauth2client<2dev\",\n \"oauth2client>=2,<=3dev\",\n \"oauth2client>=3,<=4dev\",\n \"oauth2client>=4,<=5dev\",\n ],\n)\ndef unit(session, oauth2client):\n session.install(*test_dependencies)\n session.install(oauth2client)\n if session.python < \"3.0\":\n session.install(\"django<2.0.0\")\n else:\n session.install(\"django>=2.0.0\")\n\n session.install('.')\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=googleapiclient\",\n \"--cov=tests\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=85\",\n \"tests\",\n *session.posargs,\n )\n\n\[email protected](python=\"3.6\")\ndef docs(session):\n session.install('.')\n session.run(\"python\", \"describe.py\")", "path": "noxfile.py"}, {"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 7):\n print(\"google-api-python-client requires python version >= 2.7.\", file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 4):\n print(\"google-api-python-client requires python3 version >= 3.4.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n \"httplib2>=0.15.0,<1dev\",\n \"google-auth>=1.16.0\",\n \"google-auth-httplib2>=0.0.3\",\n \"google-api-core>=1.21.0,<2dev\",\n \"six>=1.13.0,<2dev\",\n \"uritemplate>=3.0.0,<4dev\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = \"1.12.8\"\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type='text/markdown',\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\n\nimport nox\n\ntest_dependencies = [\n \"django>=2.0.0\",\n \"google-auth\",\n \"google-auth-httplib2\",\n \"mox\",\n \"parameterized\",\n \"pyopenssl\",\n \"pytest\",\n 
\"pytest-cov\",\n \"webtest\",\n \"coverage\",\n \"unittest2\",\n \"mock\",\n]\n\n\[email protected](python=[\"3.7\"])\ndef lint(session):\n session.install(\"flake8\")\n session.run(\n \"flake8\",\n \"googleapiclient\",\n \"tests\",\n \"--count\",\n \"--select=E9,F63,F7,F82\",\n \"--show-source\",\n \"--statistics\",\n )\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\"])\[email protected](\n \"oauth2client\",\n [\n \"oauth2client<2dev\",\n \"oauth2client>=2,<=3dev\",\n \"oauth2client>=3,<=4dev\",\n \"oauth2client>=4,<=5dev\",\n ],\n)\ndef unit(session, oauth2client):\n session.install(*test_dependencies)\n session.install(oauth2client)\n session.install('.')\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=googleapiclient\",\n \"--cov=tests\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=85\",\n \"tests\",\n *session.posargs,\n )\n\n\[email protected](python=\"3.6\")\ndef docs(session):\n session.install('.')\n session.run(\"python\", \"describe.py\")", "path": "noxfile.py"}, {"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (3, 6):\n print(\"google-api-python-client requires python3 version >= 3.6.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n \"httplib2>=0.15.0,<1dev\",\n \"google-auth>=1.16.0\",\n \"google-auth-httplib2>=0.0.3\",\n \"google-api-core>=1.21.0,<2dev\",\n \"six>=1.13.0,<2dev\",\n \"uritemplate>=3.0.0,<4dev\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = \"1.12.8\"\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type='text/markdown',\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=3.6\",\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software 
License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}]} | 1,958 | 659 |
gh_patches_debug_18713 | rasdani/github-patches | git_diff | pypi__warehouse-3396 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing Purges
Noticed this while setting up a new mirror. We don't seem to be purging the `project/<normalized_name>` cache key when projects are deleted.
This leads bandersnatch to get confused and fall behind until the key is purged so that the JSON API returns a 404.
--- END ISSUE ---
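For illustration, the fix will likely hinge on registering additional purge keys for the user/project association. A minimal sketch, reusing the `key_factory` and `register_origin_cache_keys` helpers that appear in the file below; the `Role` import is an assumption about where that model lives:
```python
from warehouse.cache.origin import key_factory
from warehouse.packaging.models import Role  # assumed to live alongside Project/Release


def includeme(config):
    # Purge both the owning user's page and the project page whenever a
    # Role (the user<->project association) is added, changed, or deleted.
    config.register_origin_cache_keys(
        Role,
        purge_keys=[
            key_factory("user/{obj.user.username}"),
            key_factory("project/{obj.project.normalized_name}"),
        ],
    )
```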
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/packaging/__init__.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from celery.schedules import crontab
14 from sqlalchemy.orm.base import NO_VALUE
15
16 from warehouse import db
17 from warehouse.accounts.models import User, Email
18 from warehouse.cache.origin import key_factory, receive_set
19 from warehouse.packaging.interfaces import IFileStorage
20 from warehouse.packaging.models import Project, Release
21 from warehouse.packaging.tasks import compute_trending
22
23
24 @db.listens_for(User.name, 'set')
25 def user_name_receive_set(config, target, value, oldvalue, initiator):
26 if oldvalue is not NO_VALUE:
27 receive_set(User.name, config, target)
28
29
30 @db.listens_for(Email.primary, 'set')
31 def email_primary_receive_set(config, target, value, oldvalue, initiator):
32 if oldvalue is not NO_VALUE:
33 receive_set(Email.primary, config, target)
34
35
36 def includeme(config):
37 # Register whatever file storage backend has been configured for storing
38 # our package files.
39 storage_class = config.maybe_dotted(
40 config.registry.settings["files.backend"],
41 )
42 config.register_service_factory(storage_class.create_service, IFileStorage)
43
44 # Register our origin cache keys
45 config.register_origin_cache_keys(
46 Project,
47 cache_keys=["project/{obj.normalized_name}"],
48 purge_keys=[
49 key_factory("project/{obj.normalized_name}"),
50 key_factory("user/{itr.username}", iterate_on='users'),
51 key_factory("all-projects"),
52 ],
53 )
54 config.register_origin_cache_keys(
55 Release,
56 cache_keys=["project/{obj.project.normalized_name}"],
57 purge_keys=[
58 key_factory("project/{obj.project.normalized_name}"),
59 key_factory("user/{itr.username}", iterate_on='project.users'),
60 key_factory("all-projects"),
61 ],
62 )
63 config.register_origin_cache_keys(
64 User,
65 cache_keys=["user/{obj.username}"],
66 )
67 config.register_origin_cache_keys(
68 User.name,
69 purge_keys=[
70 key_factory("user/{obj.username}"),
71 key_factory("project/{itr.normalized_name}", iterate_on='projects')
72 ],
73 )
74 config.register_origin_cache_keys(
75 Email.primary,
76 purge_keys=[
77 key_factory("user/{obj.user.username}"),
78 key_factory(
79 "project/{itr.normalized_name}",
80 iterate_on='user.projects',
81 )
82 ],
83 )
84
85 # Add a periodic task to compute trending once a day, assuming we have
86 # been configured to be able to access BigQuery.
87 if config.get_settings().get("warehouse.trending_table"):
88 config.add_periodic_task(crontab(minute=0, hour=3), compute_trending)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/packaging/__init__.py b/warehouse/packaging/__init__.py
--- a/warehouse/packaging/__init__.py
+++ b/warehouse/packaging/__init__.py
@@ -17,7 +17,7 @@
from warehouse.accounts.models import User, Email
from warehouse.cache.origin import key_factory, receive_set
from warehouse.packaging.interfaces import IFileStorage
-from warehouse.packaging.models import Project, Release
+from warehouse.packaging.models import Project, Release, Role
from warehouse.packaging.tasks import compute_trending
@@ -60,6 +60,13 @@
key_factory("all-projects"),
],
)
+ config.register_origin_cache_keys(
+ Role,
+ purge_keys=[
+ key_factory("user/{obj.user.username}"),
+ key_factory("project/{obj.project.normalized_name}")
+ ],
+ )
config.register_origin_cache_keys(
User,
cache_keys=["user/{obj.username}"],
| {"golden_diff": "diff --git a/warehouse/packaging/__init__.py b/warehouse/packaging/__init__.py\n--- a/warehouse/packaging/__init__.py\n+++ b/warehouse/packaging/__init__.py\n@@ -17,7 +17,7 @@\n from warehouse.accounts.models import User, Email\n from warehouse.cache.origin import key_factory, receive_set\n from warehouse.packaging.interfaces import IFileStorage\n-from warehouse.packaging.models import Project, Release\n+from warehouse.packaging.models import Project, Release, Role\n from warehouse.packaging.tasks import compute_trending\n \n \n@@ -60,6 +60,13 @@\n key_factory(\"all-projects\"),\n ],\n )\n+ config.register_origin_cache_keys(\n+ Role,\n+ purge_keys=[\n+ key_factory(\"user/{obj.user.username}\"),\n+ key_factory(\"project/{obj.project.normalized_name}\")\n+ ],\n+ )\n config.register_origin_cache_keys(\n User,\n cache_keys=[\"user/{obj.username}\"],\n", "issue": "Missing Purges\nNoticed this while setting up new mirror. We don't seem to be purging `project/<normalized_name>` key when projects are deleted.\r\n\r\nThis leads bandersnatch to get confused and fall behind until the key is purged so the JSON api returns a 404\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom celery.schedules import crontab\nfrom sqlalchemy.orm.base import NO_VALUE\n\nfrom warehouse import db\nfrom warehouse.accounts.models import User, Email\nfrom warehouse.cache.origin import key_factory, receive_set\nfrom warehouse.packaging.interfaces import IFileStorage\nfrom warehouse.packaging.models import Project, Release\nfrom warehouse.packaging.tasks import compute_trending\n\n\[email protected]_for(User.name, 'set')\ndef user_name_receive_set(config, target, value, oldvalue, initiator):\n if oldvalue is not NO_VALUE:\n receive_set(User.name, config, target)\n\n\[email protected]_for(Email.primary, 'set')\ndef email_primary_receive_set(config, target, value, oldvalue, initiator):\n if oldvalue is not NO_VALUE:\n receive_set(Email.primary, config, target)\n\n\ndef includeme(config):\n # Register whatever file storage backend has been configured for storing\n # our package files.\n storage_class = config.maybe_dotted(\n config.registry.settings[\"files.backend\"],\n )\n config.register_service_factory(storage_class.create_service, IFileStorage)\n\n # Register our origin cache keys\n config.register_origin_cache_keys(\n Project,\n cache_keys=[\"project/{obj.normalized_name}\"],\n purge_keys=[\n key_factory(\"project/{obj.normalized_name}\"),\n key_factory(\"user/{itr.username}\", iterate_on='users'),\n key_factory(\"all-projects\"),\n ],\n )\n config.register_origin_cache_keys(\n Release,\n cache_keys=[\"project/{obj.project.normalized_name}\"],\n purge_keys=[\n key_factory(\"project/{obj.project.normalized_name}\"),\n key_factory(\"user/{itr.username}\", iterate_on='project.users'),\n key_factory(\"all-projects\"),\n ],\n )\n config.register_origin_cache_keys(\n User,\n cache_keys=[\"user/{obj.username}\"],\n )\n config.register_origin_cache_keys(\n User.name,\n 
purge_keys=[\n key_factory(\"user/{obj.username}\"),\n key_factory(\"project/{itr.normalized_name}\", iterate_on='projects')\n ],\n )\n config.register_origin_cache_keys(\n Email.primary,\n purge_keys=[\n key_factory(\"user/{obj.user.username}\"),\n key_factory(\n \"project/{itr.normalized_name}\",\n iterate_on='user.projects',\n )\n ],\n )\n\n # Add a periodic task to compute trending once a day, assuming we have\n # been configured to be able to access BigQuery.\n if config.get_settings().get(\"warehouse.trending_table\"):\n config.add_periodic_task(crontab(minute=0, hour=3), compute_trending)\n", "path": "warehouse/packaging/__init__.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom celery.schedules import crontab\nfrom sqlalchemy.orm.base import NO_VALUE\n\nfrom warehouse import db\nfrom warehouse.accounts.models import User, Email\nfrom warehouse.cache.origin import key_factory, receive_set\nfrom warehouse.packaging.interfaces import IFileStorage\nfrom warehouse.packaging.models import Project, Release, Role\nfrom warehouse.packaging.tasks import compute_trending\n\n\[email protected]_for(User.name, 'set')\ndef user_name_receive_set(config, target, value, oldvalue, initiator):\n if oldvalue is not NO_VALUE:\n receive_set(User.name, config, target)\n\n\[email protected]_for(Email.primary, 'set')\ndef email_primary_receive_set(config, target, value, oldvalue, initiator):\n if oldvalue is not NO_VALUE:\n receive_set(Email.primary, config, target)\n\n\ndef includeme(config):\n # Register whatever file storage backend has been configured for storing\n # our package files.\n storage_class = config.maybe_dotted(\n config.registry.settings[\"files.backend\"],\n )\n config.register_service_factory(storage_class.create_service, IFileStorage)\n\n # Register our origin cache keys\n config.register_origin_cache_keys(\n Project,\n cache_keys=[\"project/{obj.normalized_name}\"],\n purge_keys=[\n key_factory(\"project/{obj.normalized_name}\"),\n key_factory(\"user/{itr.username}\", iterate_on='users'),\n key_factory(\"all-projects\"),\n ],\n )\n config.register_origin_cache_keys(\n Release,\n cache_keys=[\"project/{obj.project.normalized_name}\"],\n purge_keys=[\n key_factory(\"project/{obj.project.normalized_name}\"),\n key_factory(\"user/{itr.username}\", iterate_on='project.users'),\n key_factory(\"all-projects\"),\n ],\n )\n config.register_origin_cache_keys(\n Role,\n purge_keys=[\n key_factory(\"user/{obj.user.username}\"),\n key_factory(\"project/{obj.project.normalized_name}\")\n ],\n )\n config.register_origin_cache_keys(\n User,\n cache_keys=[\"user/{obj.username}\"],\n )\n config.register_origin_cache_keys(\n User.name,\n purge_keys=[\n key_factory(\"user/{obj.username}\"),\n key_factory(\"project/{itr.normalized_name}\", iterate_on='projects')\n ],\n )\n config.register_origin_cache_keys(\n Email.primary,\n purge_keys=[\n key_factory(\"user/{obj.user.username}\"),\n key_factory(\n \"project/{itr.normalized_name}\",\n iterate_on='user.projects',\n )\n 
],\n )\n\n # Add a periodic task to compute trending once a day, assuming we have\n # been configured to be able to access BigQuery.\n if config.get_settings().get(\"warehouse.trending_table\"):\n config.add_periodic_task(crontab(minute=0, hour=3), compute_trending)\n", "path": "warehouse/packaging/__init__.py"}]} | 1,161 | 216 |
gh_patches_debug_21636 | rasdani/github-patches | git_diff | cloudtools__troposphere-1775 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add AdvancedSecurityOptions property to AWS ElasticSearch Domain
**Description:**
AWS Elasticsearch now supports fine-grained access control via CloudFormation. We need to add AdvancedSecurityOptions and MasterUserOptions to the AWS::Elasticsearch::Domain object to enable this new functionality.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticsearch-domain.html#cfn-elasticsearch-domain-advancedsecurityoptions
--- END ISSUE ---
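For illustration, once such properties exist a template could consume them roughly as follows. This is a sketch only: the two new property classes are assumed to mirror the CloudFormation schema, and the domain name, version, and credentials are placeholders.
```python
from troposphere import Template
from troposphere.elasticsearch import (
    AdvancedSecurityOptionsInput,  # assumed new property class
    Domain,
    MasterUserOptions,             # assumed new property class
)

template = Template()
template.add_resource(Domain(
    "FineGrainedDomain",
    DomainName="example-domain",        # placeholder
    ElasticsearchVersion="7.1",
    AdvancedSecurityOptions=AdvancedSecurityOptionsInput(
        Enabled=True,
        InternalUserDatabaseEnabled=True,
        MasterUserOptions=MasterUserOptions(
            MasterUserName="admin",          # placeholder credentials
            MasterUserPassword="ChangeMe123!",
        ),
    ),
))
print(template.to_json())
```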
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/elasticsearch.py`
Content:
```
1 # Copyright (c) 2012-2015, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSProperty, AWSObject, Tags
7 from .compat import policytypes
8 from .validators import boolean, integer, integer_range, positive_integer
9
10 VALID_VOLUME_TYPES = ('standard', 'gp2', 'io1')
11
12
13 def validate_volume_type(volume_type):
14 """Validate VolumeType for ElasticsearchDomain"""
15 if volume_type not in VALID_VOLUME_TYPES:
16 raise ValueError("Elasticsearch Domain VolumeType must be one of: %s" %
17 ", ".join(VALID_VOLUME_TYPES))
18 return volume_type
19
20
21 class CognitoOptions(AWSProperty):
22 props = {
23 'Enabled': (boolean, False),
24 'IdentityPoolId': (basestring, False),
25 'RoleArn': (basestring, False),
26 'UserPoolId': (basestring, False),
27 }
28
29
30 class EBSOptions(AWSProperty):
31 props = {
32 'EBSEnabled': (boolean, False),
33 'Iops': (positive_integer, False),
34 'VolumeSize': (integer, False),
35 'VolumeType': (validate_volume_type, False)
36 }
37
38 def validate(self):
39 volume_type = self.properties.get('VolumeType')
40 iops = self.properties.get('Iops')
41 if volume_type == 'io1' and not iops:
42 raise ValueError("Must specify Iops if VolumeType is 'io1'.")
43
44
45 class ZoneAwarenessConfig(AWSProperty):
46 props = {
47 'AvailabilityZoneCount': (integer, False),
48 }
49
50
51 class ElasticsearchClusterConfig(AWSProperty):
52 props = {
53 'DedicatedMasterCount': (integer, False),
54 'DedicatedMasterEnabled': (boolean, False),
55 'DedicatedMasterType': (basestring, False),
56 'InstanceCount': (integer, False),
57 'InstanceType': (basestring, False),
58 'ZoneAwarenessConfig': (ZoneAwarenessConfig, False),
59 'ZoneAwarenessEnabled': (boolean, False)
60 }
61
62
63 class EncryptionAtRestOptions(AWSProperty):
64 props = {
65 'Enabled': (boolean, False),
66 'KmsKeyId': (basestring, False),
67 }
68
69
70 class NodeToNodeEncryptionOptions(AWSProperty):
71 props = {
72 'Enabled': (boolean, False),
73 }
74
75
76 class SnapshotOptions(AWSProperty):
77 props = {
78 'AutomatedSnapshotStartHour': (integer_range(0, 23), False)
79 }
80
81
82 class VPCOptions(AWSProperty):
83 props = {
84 "SecurityGroupIds": ([basestring], False),
85 "SubnetIds": ([basestring], False)
86 }
87
88
89 class Domain(AWSObject):
90 resource_type = "AWS::Elasticsearch::Domain"
91
92 props = {
93 'AccessPolicies': (policytypes, False),
94 'AdvancedOptions': (dict, False),
95 'CognitoOptions': (CognitoOptions, False),
96 'DomainName': (basestring, False),
97 'EBSOptions': (EBSOptions, False),
98 'ElasticsearchClusterConfig': (ElasticsearchClusterConfig, False),
99 'ElasticsearchVersion': (basestring, False),
100 'EncryptionAtRestOptions': (EncryptionAtRestOptions, False),
101 'LogPublishingOptions': (dict, False),
102 'NodeToNodeEncryptionOptions': (NodeToNodeEncryptionOptions, False),
103 'SnapshotOptions': (SnapshotOptions, False),
104 'Tags': ((Tags, list), False),
105 'VPCOptions': (VPCOptions, False),
106 }
107
108
109 # Backward compatibility
110 ElasticsearchDomain = Domain
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/elasticsearch.py b/troposphere/elasticsearch.py
--- a/troposphere/elasticsearch.py
+++ b/troposphere/elasticsearch.py
@@ -81,8 +81,24 @@
class VPCOptions(AWSProperty):
props = {
- "SecurityGroupIds": ([basestring], False),
- "SubnetIds": ([basestring], False)
+ 'SecurityGroupIds': ([basestring], False),
+ 'SubnetIds': ([basestring], False)
+ }
+
+
+class MasterUserOptions(AWSProperty):
+ props = {
+ 'MasterUserARN': (basestring, False),
+ 'MasterUserName': (basestring, False),
+ 'MasterUserPassword': (basestring, False),
+ }
+
+
+class AdvancedSecurityOptionsInput(AWSProperty):
+ props = {
+ 'Enabled': (boolean, False),
+ 'InternalUserDatabaseEnabled': (boolean, False),
+ 'MasterUserOptions': (MasterUserOptions, False),
}
@@ -92,6 +108,7 @@
props = {
'AccessPolicies': (policytypes, False),
'AdvancedOptions': (dict, False),
+ 'AdvancedSecurityOptions': (AdvancedSecurityOptionsInput, False),
'CognitoOptions': (CognitoOptions, False),
'DomainName': (basestring, False),
'EBSOptions': (EBSOptions, False),
| {"golden_diff": "diff --git a/troposphere/elasticsearch.py b/troposphere/elasticsearch.py\n--- a/troposphere/elasticsearch.py\n+++ b/troposphere/elasticsearch.py\n@@ -81,8 +81,24 @@\n \n class VPCOptions(AWSProperty):\n props = {\n- \"SecurityGroupIds\": ([basestring], False),\n- \"SubnetIds\": ([basestring], False)\n+ 'SecurityGroupIds': ([basestring], False),\n+ 'SubnetIds': ([basestring], False)\n+ }\n+\n+\n+class MasterUserOptions(AWSProperty):\n+ props = {\n+ 'MasterUserARN': (basestring, False),\n+ 'MasterUserName': (basestring, False),\n+ 'MasterUserPassword': (basestring, False),\n+ }\n+\n+\n+class AdvancedSecurityOptionsInput(AWSProperty):\n+ props = {\n+ 'Enabled': (boolean, False),\n+ 'InternalUserDatabaseEnabled': (boolean, False),\n+ 'MasterUserOptions': (MasterUserOptions, False),\n }\n \n \n@@ -92,6 +108,7 @@\n props = {\n 'AccessPolicies': (policytypes, False),\n 'AdvancedOptions': (dict, False),\n+ 'AdvancedSecurityOptions': (AdvancedSecurityOptionsInput, False),\n 'CognitoOptions': (CognitoOptions, False),\n 'DomainName': (basestring, False),\n 'EBSOptions': (EBSOptions, False),\n", "issue": "Add AdvancedSecurityOptions property to AWS ElasticSearch Domain\n**Description:**\r\n\r\nAWS Elasticsearch now supports fine-grained access control with Cloudformation. Need to add AdvancedSecurityOptions and MasterUserOptions to AWS::Elasticsearch::Domain object to enable this new functionality. \r\n\r\nhttps://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticsearch-domain.html#cfn-elasticsearch-domain-advancedsecurityoptions\n", "before_files": [{"content": "# Copyright (c) 2012-2015, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSProperty, AWSObject, Tags\nfrom .compat import policytypes\nfrom .validators import boolean, integer, integer_range, positive_integer\n\nVALID_VOLUME_TYPES = ('standard', 'gp2', 'io1')\n\n\ndef validate_volume_type(volume_type):\n \"\"\"Validate VolumeType for ElasticsearchDomain\"\"\"\n if volume_type not in VALID_VOLUME_TYPES:\n raise ValueError(\"Elasticsearch Domain VolumeType must be one of: %s\" %\n \", \".join(VALID_VOLUME_TYPES))\n return volume_type\n\n\nclass CognitoOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'IdentityPoolId': (basestring, False),\n 'RoleArn': (basestring, False),\n 'UserPoolId': (basestring, False),\n }\n\n\nclass EBSOptions(AWSProperty):\n props = {\n 'EBSEnabled': (boolean, False),\n 'Iops': (positive_integer, False),\n 'VolumeSize': (integer, False),\n 'VolumeType': (validate_volume_type, False)\n }\n\n def validate(self):\n volume_type = self.properties.get('VolumeType')\n iops = self.properties.get('Iops')\n if volume_type == 'io1' and not iops:\n raise ValueError(\"Must specify Iops if VolumeType is 'io1'.\")\n\n\nclass ZoneAwarenessConfig(AWSProperty):\n props = {\n 'AvailabilityZoneCount': (integer, False),\n }\n\n\nclass ElasticsearchClusterConfig(AWSProperty):\n props = {\n 'DedicatedMasterCount': (integer, False),\n 'DedicatedMasterEnabled': (boolean, False),\n 'DedicatedMasterType': (basestring, False),\n 'InstanceCount': (integer, False),\n 'InstanceType': (basestring, False),\n 'ZoneAwarenessConfig': (ZoneAwarenessConfig, False),\n 'ZoneAwarenessEnabled': (boolean, False)\n }\n\n\nclass EncryptionAtRestOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'KmsKeyId': (basestring, False),\n }\n\n\nclass NodeToNodeEncryptionOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, 
False),\n }\n\n\nclass SnapshotOptions(AWSProperty):\n props = {\n 'AutomatedSnapshotStartHour': (integer_range(0, 23), False)\n }\n\n\nclass VPCOptions(AWSProperty):\n props = {\n \"SecurityGroupIds\": ([basestring], False),\n \"SubnetIds\": ([basestring], False)\n }\n\n\nclass Domain(AWSObject):\n resource_type = \"AWS::Elasticsearch::Domain\"\n\n props = {\n 'AccessPolicies': (policytypes, False),\n 'AdvancedOptions': (dict, False),\n 'CognitoOptions': (CognitoOptions, False),\n 'DomainName': (basestring, False),\n 'EBSOptions': (EBSOptions, False),\n 'ElasticsearchClusterConfig': (ElasticsearchClusterConfig, False),\n 'ElasticsearchVersion': (basestring, False),\n 'EncryptionAtRestOptions': (EncryptionAtRestOptions, False),\n 'LogPublishingOptions': (dict, False),\n 'NodeToNodeEncryptionOptions': (NodeToNodeEncryptionOptions, False),\n 'SnapshotOptions': (SnapshotOptions, False),\n 'Tags': ((Tags, list), False),\n 'VPCOptions': (VPCOptions, False),\n }\n\n\n# Backward compatibility\nElasticsearchDomain = Domain\n", "path": "troposphere/elasticsearch.py"}], "after_files": [{"content": "# Copyright (c) 2012-2015, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSProperty, AWSObject, Tags\nfrom .compat import policytypes\nfrom .validators import boolean, integer, integer_range, positive_integer\n\nVALID_VOLUME_TYPES = ('standard', 'gp2', 'io1')\n\n\ndef validate_volume_type(volume_type):\n \"\"\"Validate VolumeType for ElasticsearchDomain\"\"\"\n if volume_type not in VALID_VOLUME_TYPES:\n raise ValueError(\"Elasticsearch Domain VolumeType must be one of: %s\" %\n \", \".join(VALID_VOLUME_TYPES))\n return volume_type\n\n\nclass CognitoOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'IdentityPoolId': (basestring, False),\n 'RoleArn': (basestring, False),\n 'UserPoolId': (basestring, False),\n }\n\n\nclass EBSOptions(AWSProperty):\n props = {\n 'EBSEnabled': (boolean, False),\n 'Iops': (positive_integer, False),\n 'VolumeSize': (integer, False),\n 'VolumeType': (validate_volume_type, False)\n }\n\n def validate(self):\n volume_type = self.properties.get('VolumeType')\n iops = self.properties.get('Iops')\n if volume_type == 'io1' and not iops:\n raise ValueError(\"Must specify Iops if VolumeType is 'io1'.\")\n\n\nclass ZoneAwarenessConfig(AWSProperty):\n props = {\n 'AvailabilityZoneCount': (integer, False),\n }\n\n\nclass ElasticsearchClusterConfig(AWSProperty):\n props = {\n 'DedicatedMasterCount': (integer, False),\n 'DedicatedMasterEnabled': (boolean, False),\n 'DedicatedMasterType': (basestring, False),\n 'InstanceCount': (integer, False),\n 'InstanceType': (basestring, False),\n 'ZoneAwarenessConfig': (ZoneAwarenessConfig, False),\n 'ZoneAwarenessEnabled': (boolean, False)\n }\n\n\nclass EncryptionAtRestOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'KmsKeyId': (basestring, False),\n }\n\n\nclass NodeToNodeEncryptionOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n }\n\n\nclass SnapshotOptions(AWSProperty):\n props = {\n 'AutomatedSnapshotStartHour': (integer_range(0, 23), False)\n }\n\n\nclass VPCOptions(AWSProperty):\n props = {\n 'SecurityGroupIds': ([basestring], False),\n 'SubnetIds': ([basestring], False)\n }\n\n\nclass MasterUserOptions(AWSProperty):\n props = {\n 'MasterUserARN': (basestring, False),\n 'MasterUserName': (basestring, False),\n 'MasterUserPassword': (basestring, False),\n }\n\n\nclass AdvancedSecurityOptionsInput(AWSProperty):\n props = {\n 
'Enabled': (boolean, False),\n 'InternalUserDatabaseEnabled': (boolean, False),\n 'MasterUserOptions': (MasterUserOptions, False),\n }\n\n\nclass Domain(AWSObject):\n resource_type = \"AWS::Elasticsearch::Domain\"\n\n props = {\n 'AccessPolicies': (policytypes, False),\n 'AdvancedOptions': (dict, False),\n 'AdvancedSecurityOptions': (AdvancedSecurityOptionsInput, False),\n 'CognitoOptions': (CognitoOptions, False),\n 'DomainName': (basestring, False),\n 'EBSOptions': (EBSOptions, False),\n 'ElasticsearchClusterConfig': (ElasticsearchClusterConfig, False),\n 'ElasticsearchVersion': (basestring, False),\n 'EncryptionAtRestOptions': (EncryptionAtRestOptions, False),\n 'LogPublishingOptions': (dict, False),\n 'NodeToNodeEncryptionOptions': (NodeToNodeEncryptionOptions, False),\n 'SnapshotOptions': (SnapshotOptions, False),\n 'Tags': ((Tags, list), False),\n 'VPCOptions': (VPCOptions, False),\n }\n\n\n# Backward compatibility\nElasticsearchDomain = Domain\n", "path": "troposphere/elasticsearch.py"}]} | 1,376 | 322 |
gh_patches_debug_39512 | rasdani/github-patches | git_diff | netbox-community__netbox-14265 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow plugins to add columns to tables
### NetBox version
dev / v3.6.3
### Feature type
New functionality
### Proposed functionality
Allow plugins to register columns for built-in NetBox tables.
### Use case
NetBox comes with admirably configurable tables and filters for all its built-in models, and makes it easy for plugins to provide tables like that, too. Some plugins manage fairly separate data, like DNS. However, when plugins *connect* to built-in models (instead of just providing separate models), the additional data cannot be seen from the built-in tables.
This is especially jarring for users when the data *can* be shown on the detail pages (by way of the `template_extensions` mechanism). Naturally, users tend to ask for ways to use the data displayed on the detail pages in the list/table views. As a plugin developer, I frequently build models that attach to existing NetBox models, and I get those requests a lot ("why do I have to go to the detail page of the Tenant to see the new data, can't you show it in the Tenant list?").
(I realise that there are *technically* ways to add columns to the tables right now: You can subclass/extend the built-in tables and provide them at a different URL, or, worst-of-all-worlds, patch something in. However, I think both of these are terrible in terms of maintainability and generally not good practice.)
An example could be a plugin that extends the customer management side of NetBox with additional data on Contacts, Tenants, or Sites, and would like to a) show the added data in those tables (e.g. the Site table), but also in other tables that can show Site information, e.g. the Device table.
I realise this isn't a trivial request. If you think this is generally worthwhile, I'd be happy to work on this feature!
### Database changes
_No response_
### External dependencies
_No response_
--- END ISSUE ---
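To make the subclassing workaround mentioned in the issue concrete, a plugin currently has to re-declare an entire table and serve it from its own view and URL. A rough sketch, with all plugin-specific names invented:
```python
import django_tables2 as tables

from dcim.tables import SiteTable  # built-in NetBox table


class SiteWithPluginDataTable(SiteTable):
    # The relation below is invented; a real plugin would point the accessor
    # at whatever model it attaches to Site.
    plugin_data = tables.Column(
        accessor=tables.A("plugin_record__some_field"),
        verbose_name="Plugin data",
    )

    class Meta(SiteTable.Meta):
        # The parent's Meta (model, fields, default_columns) is reused here;
        # the plugin must still provide its own view and URL to serve this
        # table, which is the maintenance burden the issue describes.
        pass
```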
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/netbox/tables/tables.py`
Content:
```
1 import django_tables2 as tables
2 from django.contrib.auth.models import AnonymousUser
3 from django.contrib.contenttypes.fields import GenericForeignKey
4 from django.contrib.contenttypes.models import ContentType
5 from django.core.exceptions import FieldDoesNotExist
6 from django.db.models.fields.related import RelatedField
7 from django.urls import reverse
8 from django.urls.exceptions import NoReverseMatch
9 from django.utils.safestring import mark_safe
10 from django.utils.translation import gettext_lazy as _
11 from django_tables2.data import TableQuerysetData
12
13 from extras.models import CustomField, CustomLink
14 from extras.choices import CustomFieldVisibilityChoices
15 from netbox.tables import columns
16 from utilities.paginator import EnhancedPaginator, get_paginate_count
17 from utilities.utils import get_viewname, highlight_string, title
18 from .template_code import *
19
20 __all__ = (
21 'BaseTable',
22 'NetBoxTable',
23 'SearchTable',
24 )
25
26
27 class BaseTable(tables.Table):
28 """
29 Base table class for NetBox objects. Adds support for:
30
31 * User configuration (column preferences)
32 * Automatic prefetching of related objects
33 * BS5 styling
34
35 :param user: Personalize table display for the given user (optional). Has no effect if AnonymousUser is passed.
36 """
37 exempt_columns = ()
38
39 class Meta:
40 attrs = {
41 'class': 'table table-hover object-list',
42 }
43
44 def __init__(self, *args, user=None, **kwargs):
45
46 super().__init__(*args, **kwargs)
47
48 # Set default empty_text if none was provided
49 if self.empty_text is None:
50 self.empty_text = f"No {self._meta.model._meta.verbose_name_plural} found"
51
52 # Determine the table columns to display by checking the following:
53 # 1. User's configuration for the table
54 # 2. Meta.default_columns
55 # 3. Meta.fields
56 selected_columns = None
57 if user is not None and not isinstance(user, AnonymousUser):
58 selected_columns = user.config.get(f"tables.{self.name}.columns")
59 if not selected_columns:
60 selected_columns = getattr(self.Meta, 'default_columns', self.Meta.fields)
61
62 # Hide non-selected columns which are not exempt
63 for column in self.columns:
64 if column.name not in [*selected_columns, *self.exempt_columns]:
65 self.columns.hide(column.name)
66
67 # Rearrange the sequence to list selected columns first, followed by all remaining columns
68 # TODO: There's probably a more clever way to accomplish this
69 self.sequence = [
70 *[c for c in selected_columns if c in self.columns.names()],
71 *[c for c in self.columns.names() if c not in selected_columns]
72 ]
73
74 # PK column should always come first
75 if 'pk' in self.sequence:
76 self.sequence.remove('pk')
77 self.sequence.insert(0, 'pk')
78
79 # Actions column should always come last
80 if 'actions' in self.sequence:
81 self.sequence.remove('actions')
82 self.sequence.append('actions')
83
84 # Dynamically update the table's QuerySet to ensure related fields are pre-fetched
85 if isinstance(self.data, TableQuerysetData):
86
87 prefetch_fields = []
88 for column in self.columns:
89 if column.visible:
90 model = getattr(self.Meta, 'model')
91 accessor = column.accessor
92 prefetch_path = []
93 for field_name in accessor.split(accessor.SEPARATOR):
94 try:
95 field = model._meta.get_field(field_name)
96 except FieldDoesNotExist:
97 break
98 if isinstance(field, RelatedField):
99 # Follow ForeignKeys to the related model
100 prefetch_path.append(field_name)
101 model = field.remote_field.model
102 elif isinstance(field, GenericForeignKey):
103 # Can't prefetch beyond a GenericForeignKey
104 prefetch_path.append(field_name)
105 break
106 if prefetch_path:
107 prefetch_fields.append('__'.join(prefetch_path))
108 self.data.data = self.data.data.prefetch_related(*prefetch_fields)
109
110 def _get_columns(self, visible=True):
111 columns = []
112 for name, column in self.columns.items():
113 if column.visible == visible and name not in self.exempt_columns:
114 columns.append((name, column.verbose_name))
115 return columns
116
117 @property
118 def name(self):
119 return self.__class__.__name__
120
121 @property
122 def available_columns(self):
123 return sorted(self._get_columns(visible=False))
124
125 @property
126 def selected_columns(self):
127 return self._get_columns(visible=True)
128
129 @property
130 def objects_count(self):
131 """
132 Return the total number of real objects represented by the Table. This is useful when dealing with
133 prefixes/IP addresses/etc., where some table rows may represent available address space.
134 """
135 if not hasattr(self, '_objects_count'):
136 self._objects_count = sum(1 for obj in self.data if hasattr(obj, 'pk'))
137 return self._objects_count
138
139 def configure(self, request):
140 """
141 Configure the table for a specific request context. This performs pagination and records
142 the user's preferred ordering logic.
143 """
144 # Save ordering preference
145 if request.user.is_authenticated:
146 if self.prefixed_order_by_field in request.GET:
147 if request.GET[self.prefixed_order_by_field]:
148 # If an ordering has been specified as a query parameter, save it as the
149 # user's preferred ordering for this table.
150 ordering = request.GET.getlist(self.prefixed_order_by_field)
151 request.user.config.set(f'tables.{self.name}.ordering', ordering, commit=True)
152 else:
153 # If the ordering has been set to none (empty), clear any existing preference.
154 request.user.config.clear(f'tables.{self.name}.ordering', commit=True)
155 elif ordering := request.user.config.get(f'tables.{self.name}.ordering'):
156 # If no ordering has been specified, set the preferred ordering (if any).
157 self.order_by = ordering
158
159 # Paginate the table results
160 paginate = {
161 'paginator_class': EnhancedPaginator,
162 'per_page': get_paginate_count(request)
163 }
164 tables.RequestConfig(request, paginate).configure(self)
165
166
167 class NetBoxTable(BaseTable):
168 """
169 Table class for most NetBox objects. Adds support for custom field & custom link columns. Includes
170 default columns for:
171
172 * PK (row selection)
173 * ID
174 * Actions
175 """
176 pk = columns.ToggleColumn(
177 visible=False
178 )
179 id = tables.Column(
180 linkify=True,
181 verbose_name=_('ID')
182 )
183 actions = columns.ActionsColumn()
184
185 exempt_columns = ('pk', 'actions')
186
187 class Meta(BaseTable.Meta):
188 pass
189
190 def __init__(self, *args, extra_columns=None, **kwargs):
191 if extra_columns is None:
192 extra_columns = []
193
194 # Add custom field & custom link columns
195 content_type = ContentType.objects.get_for_model(self._meta.model)
196 custom_fields = CustomField.objects.filter(
197 content_types=content_type
198 ).exclude(ui_visibility=CustomFieldVisibilityChoices.VISIBILITY_HIDDEN)
199
200 extra_columns.extend([
201 (f'cf_{cf.name}', columns.CustomFieldColumn(cf)) for cf in custom_fields
202 ])
203 custom_links = CustomLink.objects.filter(content_types=content_type, enabled=True)
204 extra_columns.extend([
205 (f'cl_{cl.name}', columns.CustomLinkColumn(cl)) for cl in custom_links
206 ])
207
208 super().__init__(*args, extra_columns=extra_columns, **kwargs)
209
210 @property
211 def htmx_url(self):
212 """
213 Return the base HTML request URL for embedded tables.
214 """
215 if getattr(self, 'embedded', False):
216 viewname = get_viewname(self._meta.model, action='list')
217 try:
218 return reverse(viewname)
219 except NoReverseMatch:
220 pass
221 return ''
222
223
224 class SearchTable(tables.Table):
225 object_type = columns.ContentTypeColumn(
226 verbose_name=_('Type'),
227 order_by="object___meta__verbose_name",
228 )
229 object = tables.Column(
230 verbose_name=_('Object'),
231 linkify=True,
232 order_by=('name', )
233 )
234 field = tables.Column(
235 verbose_name=_('Field'),
236 )
237 value = tables.Column(
238 verbose_name=_('Value'),
239 )
240 attrs = columns.TemplateColumn(
241 template_code=SEARCH_RESULT_ATTRS,
242 verbose_name=_('Attributes')
243 )
244
245 trim_length = 30
246
247 class Meta:
248 attrs = {
249 'class': 'table table-hover object-list',
250 }
251 empty_text = _('No results found')
252
253 def __init__(self, data, highlight=None, **kwargs):
254 self.highlight = highlight
255 super().__init__(data, **kwargs)
256
257 def render_field(self, value, record):
258 if hasattr(record.object, value):
259 return title(record.object._meta.get_field(value).verbose_name)
260 return value
261
262 def render_value(self, value):
263 if not self.highlight:
264 return value
265
266 value = highlight_string(value, self.highlight, trim_pre=self.trim_length, trim_post=self.trim_length)
267
268 return mark_safe(value)
269
```
Path: `netbox/netbox/registry.py`
Content:
```
1 import collections
2
3
4 class Registry(dict):
5 """
6 Central registry for registration of functionality. Once a Registry is initialized, keys cannot be added or
7 removed (though the value of each key is mutable).
8 """
9 def __getitem__(self, key):
10 try:
11 return super().__getitem__(key)
12 except KeyError:
13 raise KeyError(f"Invalid store: {key}")
14
15 def __setitem__(self, key, value):
16 raise TypeError("Cannot add stores to registry after initialization")
17
18 def __delitem__(self, key):
19 raise TypeError("Cannot delete stores from registry")
20
21
22 # Initialize the global registry
23 registry = Registry({
24 'counter_fields': collections.defaultdict(dict),
25 'data_backends': dict(),
26 'denormalized_fields': collections.defaultdict(list),
27 'model_features': dict(),
28 'models': collections.defaultdict(set),
29 'plugins': dict(),
30 'search': dict(),
31 'views': collections.defaultdict(dict),
32 'widgets': dict(),
33 })
34
```
Path: `netbox/utilities/tables.py`
Content:
```
1 __all__ = (
2 'get_table_ordering',
3 'linkify_phone',
4 )
5
6
7 def get_table_ordering(request, table):
8 """
9 Given a request, return the prescribed table ordering, if any. This may be necessary to determine prior to rendering
10 the table itself.
11 """
12 # Check for an explicit ordering
13 if 'sort' in request.GET:
14 return request.GET['sort'] or None
15
16 # Check for a configured preference
17 if request.user.is_authenticated:
18 if preference := request.user.config.get(f'tables.{table.__name__}.ordering'):
19 return preference
20
21
22 def linkify_phone(value):
23 """
24 Render a telephone number as a hyperlink.
25 """
26 if value is None:
27 return None
28 return f"tel:{value}"
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/netbox/registry.py b/netbox/netbox/registry.py
--- a/netbox/netbox/registry.py
+++ b/netbox/netbox/registry.py
@@ -28,6 +28,7 @@
'models': collections.defaultdict(set),
'plugins': dict(),
'search': dict(),
+ 'tables': collections.defaultdict(dict),
'views': collections.defaultdict(dict),
'widgets': dict(),
})
diff --git a/netbox/netbox/tables/tables.py b/netbox/netbox/tables/tables.py
--- a/netbox/netbox/tables/tables.py
+++ b/netbox/netbox/tables/tables.py
@@ -1,3 +1,5 @@
+from copy import deepcopy
+
import django_tables2 as tables
from django.contrib.auth.models import AnonymousUser
from django.contrib.contenttypes.fields import GenericForeignKey
@@ -12,6 +14,7 @@
from extras.models import CustomField, CustomLink
from extras.choices import CustomFieldVisibilityChoices
+from netbox.registry import registry
from netbox.tables import columns
from utilities.paginator import EnhancedPaginator, get_paginate_count
from utilities.utils import get_viewname, highlight_string, title
@@ -191,12 +194,17 @@
if extra_columns is None:
extra_columns = []
+ if registered_columns := registry['tables'].get(self.__class__):
+ extra_columns.extend([
+ # Create a copy to avoid modifying the original Column
+ (name, deepcopy(column)) for name, column in registered_columns.items()
+ ])
+
# Add custom field & custom link columns
content_type = ContentType.objects.get_for_model(self._meta.model)
custom_fields = CustomField.objects.filter(
content_types=content_type
).exclude(ui_visibility=CustomFieldVisibilityChoices.VISIBILITY_HIDDEN)
-
extra_columns.extend([
(f'cf_{cf.name}', columns.CustomFieldColumn(cf)) for cf in custom_fields
])
diff --git a/netbox/utilities/tables.py b/netbox/utilities/tables.py
--- a/netbox/utilities/tables.py
+++ b/netbox/utilities/tables.py
@@ -1,6 +1,9 @@
+from netbox.registry import registry
+
__all__ = (
'get_table_ordering',
'linkify_phone',
+ 'register_table_column'
)
@@ -26,3 +29,19 @@
if value is None:
return None
return f"tel:{value}"
+
+
+def register_table_column(column, name, *tables):
+ """
+ Register a custom column for use on one or more tables.
+
+ Args:
+ column: The column instance to register
+ name: The name of the table column
+ tables: One or more table classes
+ """
+ for table in tables:
+ reg = registry['tables'][table]
+ if name in reg:
+ raise ValueError(f"A column named {name} is already defined for table {table.__name__}")
+ reg[name] = column
| {"golden_diff": "diff --git a/netbox/netbox/registry.py b/netbox/netbox/registry.py\n--- a/netbox/netbox/registry.py\n+++ b/netbox/netbox/registry.py\n@@ -28,6 +28,7 @@\n 'models': collections.defaultdict(set),\n 'plugins': dict(),\n 'search': dict(),\n+ 'tables': collections.defaultdict(dict),\n 'views': collections.defaultdict(dict),\n 'widgets': dict(),\n })\ndiff --git a/netbox/netbox/tables/tables.py b/netbox/netbox/tables/tables.py\n--- a/netbox/netbox/tables/tables.py\n+++ b/netbox/netbox/tables/tables.py\n@@ -1,3 +1,5 @@\n+from copy import deepcopy\n+\n import django_tables2 as tables\n from django.contrib.auth.models import AnonymousUser\n from django.contrib.contenttypes.fields import GenericForeignKey\n@@ -12,6 +14,7 @@\n \n from extras.models import CustomField, CustomLink\n from extras.choices import CustomFieldVisibilityChoices\n+from netbox.registry import registry\n from netbox.tables import columns\n from utilities.paginator import EnhancedPaginator, get_paginate_count\n from utilities.utils import get_viewname, highlight_string, title\n@@ -191,12 +194,17 @@\n if extra_columns is None:\n extra_columns = []\n \n+ if registered_columns := registry['tables'].get(self.__class__):\n+ extra_columns.extend([\n+ # Create a copy to avoid modifying the original Column\n+ (name, deepcopy(column)) for name, column in registered_columns.items()\n+ ])\n+\n # Add custom field & custom link columns\n content_type = ContentType.objects.get_for_model(self._meta.model)\n custom_fields = CustomField.objects.filter(\n content_types=content_type\n ).exclude(ui_visibility=CustomFieldVisibilityChoices.VISIBILITY_HIDDEN)\n-\n extra_columns.extend([\n (f'cf_{cf.name}', columns.CustomFieldColumn(cf)) for cf in custom_fields\n ])\ndiff --git a/netbox/utilities/tables.py b/netbox/utilities/tables.py\n--- a/netbox/utilities/tables.py\n+++ b/netbox/utilities/tables.py\n@@ -1,6 +1,9 @@\n+from netbox.registry import registry\n+\n __all__ = (\n 'get_table_ordering',\n 'linkify_phone',\n+ 'register_table_column'\n )\n \n \n@@ -26,3 +29,19 @@\n if value is None:\n return None\n return f\"tel:{value}\"\n+\n+\n+def register_table_column(column, name, *tables):\n+ \"\"\"\n+ Register a custom column for use on one or more tables.\n+\n+ Args:\n+ column: The column instance to register\n+ name: The name of the table column\n+ tables: One or more table classes\n+ \"\"\"\n+ for table in tables:\n+ reg = registry['tables'][table]\n+ if name in reg:\n+ raise ValueError(f\"A column named {name} is already defined for table {table.__name__}\")\n+ reg[name] = column\n", "issue": "Allow plugins to add columns to tables\n### NetBox version\r\n\r\ndev / v3.6.3\r\n\r\n### Feature type\r\n\r\nNew functionality\r\n\r\n### Proposed functionality\r\n\r\nAllow plugins to register columns for built-in NetBox tables.\r\n\r\n\r\n### Use case\r\n\r\nNetBox comes with admirably configurable tables and filters for all its built-in models, and makes it easy for plugins to provide tables like that, too. Some plugins manage fairly separate data, like DNS. However, when plugins *connect* to built-in models (instead of just providing separate models), the additional data cannot be seen from the built-in tables.\r\n\r\nThis is especially jarring for users when the data *can* be shown on the detail pages (by way of the `template_extensions` mechanism). Naturally, users tend to ask for ways to use the data displayed on the detail pages in the list/table views. 
As a plugin developer, I frequently build models that attach to existing NetBox models, and I get those requests a lot (\"why do I have to go to the detail page of the Tenant to see the new data, can't you show it in the Tenant list?\").\r\n\r\n(I realise that there are *technically* ways to add columns to the tables right now: You can subclass/extend the built-in tables and provide them at a different URL, or, worst-of-all-worlds, patch something in. However, I think both of these are terrible in terms of maintainability and generally not good practice.)\r\n\r\nAn example could be a plugin that extends the customer management side of NetBox with additional data on Contacts, Tenants, or Sites, and would like to a) show the added data in those tables (e.g. the Site table), but also in other tables that can show Site information, e.g. the Device table.\r\n\r\nI realise this isn't a trivial request. If you think this is generally worthwhile, I'd be happy to work on this feature!\r\n\r\n### Database changes\r\n\r\n_No response_\r\n\r\n### External dependencies\r\n\r\n_No response_\n", "before_files": [{"content": "import django_tables2 as tables\nfrom django.contrib.auth.models import AnonymousUser\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import FieldDoesNotExist\nfrom django.db.models.fields.related import RelatedField\nfrom django.urls import reverse\nfrom django.urls.exceptions import NoReverseMatch\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\nfrom django_tables2.data import TableQuerysetData\n\nfrom extras.models import CustomField, CustomLink\nfrom extras.choices import CustomFieldVisibilityChoices\nfrom netbox.tables import columns\nfrom utilities.paginator import EnhancedPaginator, get_paginate_count\nfrom utilities.utils import get_viewname, highlight_string, title\nfrom .template_code import *\n\n__all__ = (\n 'BaseTable',\n 'NetBoxTable',\n 'SearchTable',\n)\n\n\nclass BaseTable(tables.Table):\n \"\"\"\n Base table class for NetBox objects. Adds support for:\n\n * User configuration (column preferences)\n * Automatic prefetching of related objects\n * BS5 styling\n\n :param user: Personalize table display for the given user (optional). Has no effect if AnonymousUser is passed.\n \"\"\"\n exempt_columns = ()\n\n class Meta:\n attrs = {\n 'class': 'table table-hover object-list',\n }\n\n def __init__(self, *args, user=None, **kwargs):\n\n super().__init__(*args, **kwargs)\n\n # Set default empty_text if none was provided\n if self.empty_text is None:\n self.empty_text = f\"No {self._meta.model._meta.verbose_name_plural} found\"\n\n # Determine the table columns to display by checking the following:\n # 1. User's configuration for the table\n # 2. Meta.default_columns\n # 3. 
Meta.fields\n selected_columns = None\n if user is not None and not isinstance(user, AnonymousUser):\n selected_columns = user.config.get(f\"tables.{self.name}.columns\")\n if not selected_columns:\n selected_columns = getattr(self.Meta, 'default_columns', self.Meta.fields)\n\n # Hide non-selected columns which are not exempt\n for column in self.columns:\n if column.name not in [*selected_columns, *self.exempt_columns]:\n self.columns.hide(column.name)\n\n # Rearrange the sequence to list selected columns first, followed by all remaining columns\n # TODO: There's probably a more clever way to accomplish this\n self.sequence = [\n *[c for c in selected_columns if c in self.columns.names()],\n *[c for c in self.columns.names() if c not in selected_columns]\n ]\n\n # PK column should always come first\n if 'pk' in self.sequence:\n self.sequence.remove('pk')\n self.sequence.insert(0, 'pk')\n\n # Actions column should always come last\n if 'actions' in self.sequence:\n self.sequence.remove('actions')\n self.sequence.append('actions')\n\n # Dynamically update the table's QuerySet to ensure related fields are pre-fetched\n if isinstance(self.data, TableQuerysetData):\n\n prefetch_fields = []\n for column in self.columns:\n if column.visible:\n model = getattr(self.Meta, 'model')\n accessor = column.accessor\n prefetch_path = []\n for field_name in accessor.split(accessor.SEPARATOR):\n try:\n field = model._meta.get_field(field_name)\n except FieldDoesNotExist:\n break\n if isinstance(field, RelatedField):\n # Follow ForeignKeys to the related model\n prefetch_path.append(field_name)\n model = field.remote_field.model\n elif isinstance(field, GenericForeignKey):\n # Can't prefetch beyond a GenericForeignKey\n prefetch_path.append(field_name)\n break\n if prefetch_path:\n prefetch_fields.append('__'.join(prefetch_path))\n self.data.data = self.data.data.prefetch_related(*prefetch_fields)\n\n def _get_columns(self, visible=True):\n columns = []\n for name, column in self.columns.items():\n if column.visible == visible and name not in self.exempt_columns:\n columns.append((name, column.verbose_name))\n return columns\n\n @property\n def name(self):\n return self.__class__.__name__\n\n @property\n def available_columns(self):\n return sorted(self._get_columns(visible=False))\n\n @property\n def selected_columns(self):\n return self._get_columns(visible=True)\n\n @property\n def objects_count(self):\n \"\"\"\n Return the total number of real objects represented by the Table. This is useful when dealing with\n prefixes/IP addresses/etc., where some table rows may represent available address space.\n \"\"\"\n if not hasattr(self, '_objects_count'):\n self._objects_count = sum(1 for obj in self.data if hasattr(obj, 'pk'))\n return self._objects_count\n\n def configure(self, request):\n \"\"\"\n Configure the table for a specific request context. 
This performs pagination and records\n the user's preferred ordering logic.\n \"\"\"\n # Save ordering preference\n if request.user.is_authenticated:\n if self.prefixed_order_by_field in request.GET:\n if request.GET[self.prefixed_order_by_field]:\n # If an ordering has been specified as a query parameter, save it as the\n # user's preferred ordering for this table.\n ordering = request.GET.getlist(self.prefixed_order_by_field)\n request.user.config.set(f'tables.{self.name}.ordering', ordering, commit=True)\n else:\n # If the ordering has been set to none (empty), clear any existing preference.\n request.user.config.clear(f'tables.{self.name}.ordering', commit=True)\n elif ordering := request.user.config.get(f'tables.{self.name}.ordering'):\n # If no ordering has been specified, set the preferred ordering (if any).\n self.order_by = ordering\n\n # Paginate the table results\n paginate = {\n 'paginator_class': EnhancedPaginator,\n 'per_page': get_paginate_count(request)\n }\n tables.RequestConfig(request, paginate).configure(self)\n\n\nclass NetBoxTable(BaseTable):\n \"\"\"\n Table class for most NetBox objects. Adds support for custom field & custom link columns. Includes\n default columns for:\n\n * PK (row selection)\n * ID\n * Actions\n \"\"\"\n pk = columns.ToggleColumn(\n visible=False\n )\n id = tables.Column(\n linkify=True,\n verbose_name=_('ID')\n )\n actions = columns.ActionsColumn()\n\n exempt_columns = ('pk', 'actions')\n\n class Meta(BaseTable.Meta):\n pass\n\n def __init__(self, *args, extra_columns=None, **kwargs):\n if extra_columns is None:\n extra_columns = []\n\n # Add custom field & custom link columns\n content_type = ContentType.objects.get_for_model(self._meta.model)\n custom_fields = CustomField.objects.filter(\n content_types=content_type\n ).exclude(ui_visibility=CustomFieldVisibilityChoices.VISIBILITY_HIDDEN)\n\n extra_columns.extend([\n (f'cf_{cf.name}', columns.CustomFieldColumn(cf)) for cf in custom_fields\n ])\n custom_links = CustomLink.objects.filter(content_types=content_type, enabled=True)\n extra_columns.extend([\n (f'cl_{cl.name}', columns.CustomLinkColumn(cl)) for cl in custom_links\n ])\n\n super().__init__(*args, extra_columns=extra_columns, **kwargs)\n\n @property\n def htmx_url(self):\n \"\"\"\n Return the base HTML request URL for embedded tables.\n \"\"\"\n if getattr(self, 'embedded', False):\n viewname = get_viewname(self._meta.model, action='list')\n try:\n return reverse(viewname)\n except NoReverseMatch:\n pass\n return ''\n\n\nclass SearchTable(tables.Table):\n object_type = columns.ContentTypeColumn(\n verbose_name=_('Type'),\n order_by=\"object___meta__verbose_name\",\n )\n object = tables.Column(\n verbose_name=_('Object'),\n linkify=True,\n order_by=('name', )\n )\n field = tables.Column(\n verbose_name=_('Field'),\n )\n value = tables.Column(\n verbose_name=_('Value'),\n )\n attrs = columns.TemplateColumn(\n template_code=SEARCH_RESULT_ATTRS,\n verbose_name=_('Attributes')\n )\n\n trim_length = 30\n\n class Meta:\n attrs = {\n 'class': 'table table-hover object-list',\n }\n empty_text = _('No results found')\n\n def __init__(self, data, highlight=None, **kwargs):\n self.highlight = highlight\n super().__init__(data, **kwargs)\n\n def render_field(self, value, record):\n if hasattr(record.object, value):\n return title(record.object._meta.get_field(value).verbose_name)\n return value\n\n def render_value(self, value):\n if not self.highlight:\n return value\n\n value = highlight_string(value, self.highlight, trim_pre=self.trim_length, 
trim_post=self.trim_length)\n\n return mark_safe(value)\n", "path": "netbox/netbox/tables/tables.py"}, {"content": "import collections\n\n\nclass Registry(dict):\n \"\"\"\n Central registry for registration of functionality. Once a Registry is initialized, keys cannot be added or\n removed (though the value of each key is mutable).\n \"\"\"\n def __getitem__(self, key):\n try:\n return super().__getitem__(key)\n except KeyError:\n raise KeyError(f\"Invalid store: {key}\")\n\n def __setitem__(self, key, value):\n raise TypeError(\"Cannot add stores to registry after initialization\")\n\n def __delitem__(self, key):\n raise TypeError(\"Cannot delete stores from registry\")\n\n\n# Initialize the global registry\nregistry = Registry({\n 'counter_fields': collections.defaultdict(dict),\n 'data_backends': dict(),\n 'denormalized_fields': collections.defaultdict(list),\n 'model_features': dict(),\n 'models': collections.defaultdict(set),\n 'plugins': dict(),\n 'search': dict(),\n 'views': collections.defaultdict(dict),\n 'widgets': dict(),\n})\n", "path": "netbox/netbox/registry.py"}, {"content": "__all__ = (\n 'get_table_ordering',\n 'linkify_phone',\n)\n\n\ndef get_table_ordering(request, table):\n \"\"\"\n Given a request, return the prescribed table ordering, if any. This may be necessary to determine prior to rendering\n the table itself.\n \"\"\"\n # Check for an explicit ordering\n if 'sort' in request.GET:\n return request.GET['sort'] or None\n\n # Check for a configured preference\n if request.user.is_authenticated:\n if preference := request.user.config.get(f'tables.{table.__name__}.ordering'):\n return preference\n\n\ndef linkify_phone(value):\n \"\"\"\n Render a telephone number as a hyperlink.\n \"\"\"\n if value is None:\n return None\n return f\"tel:{value}\"\n", "path": "netbox/utilities/tables.py"}], "after_files": [{"content": "from copy import deepcopy\n\nimport django_tables2 as tables\nfrom django.contrib.auth.models import AnonymousUser\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import FieldDoesNotExist\nfrom django.db.models.fields.related import RelatedField\nfrom django.urls import reverse\nfrom django.urls.exceptions import NoReverseMatch\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\nfrom django_tables2.data import TableQuerysetData\n\nfrom extras.models import CustomField, CustomLink\nfrom extras.choices import CustomFieldVisibilityChoices\nfrom netbox.registry import registry\nfrom netbox.tables import columns\nfrom utilities.paginator import EnhancedPaginator, get_paginate_count\nfrom utilities.utils import get_viewname, highlight_string, title\nfrom .template_code import *\n\n__all__ = (\n 'BaseTable',\n 'NetBoxTable',\n 'SearchTable',\n)\n\n\nclass BaseTable(tables.Table):\n \"\"\"\n Base table class for NetBox objects. Adds support for:\n\n * User configuration (column preferences)\n * Automatic prefetching of related objects\n * BS5 styling\n\n :param user: Personalize table display for the given user (optional). 
Has no effect if AnonymousUser is passed.\n \"\"\"\n exempt_columns = ()\n\n class Meta:\n attrs = {\n 'class': 'table table-hover object-list',\n }\n\n def __init__(self, *args, user=None, **kwargs):\n\n super().__init__(*args, **kwargs)\n\n # Set default empty_text if none was provided\n if self.empty_text is None:\n self.empty_text = f\"No {self._meta.model._meta.verbose_name_plural} found\"\n\n # Determine the table columns to display by checking the following:\n # 1. User's configuration for the table\n # 2. Meta.default_columns\n # 3. Meta.fields\n selected_columns = None\n if user is not None and not isinstance(user, AnonymousUser):\n selected_columns = user.config.get(f\"tables.{self.name}.columns\")\n if not selected_columns:\n selected_columns = getattr(self.Meta, 'default_columns', self.Meta.fields)\n\n # Hide non-selected columns which are not exempt\n for column in self.columns:\n if column.name not in [*selected_columns, *self.exempt_columns]:\n self.columns.hide(column.name)\n\n # Rearrange the sequence to list selected columns first, followed by all remaining columns\n # TODO: There's probably a more clever way to accomplish this\n self.sequence = [\n *[c for c in selected_columns if c in self.columns.names()],\n *[c for c in self.columns.names() if c not in selected_columns]\n ]\n\n # PK column should always come first\n if 'pk' in self.sequence:\n self.sequence.remove('pk')\n self.sequence.insert(0, 'pk')\n\n # Actions column should always come last\n if 'actions' in self.sequence:\n self.sequence.remove('actions')\n self.sequence.append('actions')\n\n # Dynamically update the table's QuerySet to ensure related fields are pre-fetched\n if isinstance(self.data, TableQuerysetData):\n\n prefetch_fields = []\n for column in self.columns:\n if column.visible:\n model = getattr(self.Meta, 'model')\n accessor = column.accessor\n prefetch_path = []\n for field_name in accessor.split(accessor.SEPARATOR):\n try:\n field = model._meta.get_field(field_name)\n except FieldDoesNotExist:\n break\n if isinstance(field, RelatedField):\n # Follow ForeignKeys to the related model\n prefetch_path.append(field_name)\n model = field.remote_field.model\n elif isinstance(field, GenericForeignKey):\n # Can't prefetch beyond a GenericForeignKey\n prefetch_path.append(field_name)\n break\n if prefetch_path:\n prefetch_fields.append('__'.join(prefetch_path))\n self.data.data = self.data.data.prefetch_related(*prefetch_fields)\n\n def _get_columns(self, visible=True):\n columns = []\n for name, column in self.columns.items():\n if column.visible == visible and name not in self.exempt_columns:\n columns.append((name, column.verbose_name))\n return columns\n\n @property\n def name(self):\n return self.__class__.__name__\n\n @property\n def available_columns(self):\n return sorted(self._get_columns(visible=False))\n\n @property\n def selected_columns(self):\n return self._get_columns(visible=True)\n\n @property\n def objects_count(self):\n \"\"\"\n Return the total number of real objects represented by the Table. This is useful when dealing with\n prefixes/IP addresses/etc., where some table rows may represent available address space.\n \"\"\"\n if not hasattr(self, '_objects_count'):\n self._objects_count = sum(1 for obj in self.data if hasattr(obj, 'pk'))\n return self._objects_count\n\n def configure(self, request):\n \"\"\"\n Configure the table for a specific request context. 
This performs pagination and records\n the user's preferred ordering logic.\n \"\"\"\n # Save ordering preference\n if request.user.is_authenticated:\n if self.prefixed_order_by_field in request.GET:\n if request.GET[self.prefixed_order_by_field]:\n # If an ordering has been specified as a query parameter, save it as the\n # user's preferred ordering for this table.\n ordering = request.GET.getlist(self.prefixed_order_by_field)\n request.user.config.set(f'tables.{self.name}.ordering', ordering, commit=True)\n else:\n # If the ordering has been set to none (empty), clear any existing preference.\n request.user.config.clear(f'tables.{self.name}.ordering', commit=True)\n elif ordering := request.user.config.get(f'tables.{self.name}.ordering'):\n # If no ordering has been specified, set the preferred ordering (if any).\n self.order_by = ordering\n\n # Paginate the table results\n paginate = {\n 'paginator_class': EnhancedPaginator,\n 'per_page': get_paginate_count(request)\n }\n tables.RequestConfig(request, paginate).configure(self)\n\n\nclass NetBoxTable(BaseTable):\n \"\"\"\n Table class for most NetBox objects. Adds support for custom field & custom link columns. Includes\n default columns for:\n\n * PK (row selection)\n * ID\n * Actions\n \"\"\"\n pk = columns.ToggleColumn(\n visible=False\n )\n id = tables.Column(\n linkify=True,\n verbose_name=_('ID')\n )\n actions = columns.ActionsColumn()\n\n exempt_columns = ('pk', 'actions')\n\n class Meta(BaseTable.Meta):\n pass\n\n def __init__(self, *args, extra_columns=None, **kwargs):\n if extra_columns is None:\n extra_columns = []\n\n if registered_columns := registry['tables'].get(self.__class__):\n extra_columns.extend([\n # Create a copy to avoid modifying the original Column\n (name, deepcopy(column)) for name, column in registered_columns.items()\n ])\n\n # Add custom field & custom link columns\n content_type = ContentType.objects.get_for_model(self._meta.model)\n custom_fields = CustomField.objects.filter(\n content_types=content_type\n ).exclude(ui_visibility=CustomFieldVisibilityChoices.VISIBILITY_HIDDEN)\n extra_columns.extend([\n (f'cf_{cf.name}', columns.CustomFieldColumn(cf)) for cf in custom_fields\n ])\n custom_links = CustomLink.objects.filter(content_types=content_type, enabled=True)\n extra_columns.extend([\n (f'cl_{cl.name}', columns.CustomLinkColumn(cl)) for cl in custom_links\n ])\n\n super().__init__(*args, extra_columns=extra_columns, **kwargs)\n\n @property\n def htmx_url(self):\n \"\"\"\n Return the base HTML request URL for embedded tables.\n \"\"\"\n if getattr(self, 'embedded', False):\n viewname = get_viewname(self._meta.model, action='list')\n try:\n return reverse(viewname)\n except NoReverseMatch:\n pass\n return ''\n\n\nclass SearchTable(tables.Table):\n object_type = columns.ContentTypeColumn(\n verbose_name=_('Type'),\n order_by=\"object___meta__verbose_name\",\n )\n object = tables.Column(\n verbose_name=_('Object'),\n linkify=True,\n order_by=('name', )\n )\n field = tables.Column(\n verbose_name=_('Field'),\n )\n value = tables.Column(\n verbose_name=_('Value'),\n )\n attrs = columns.TemplateColumn(\n template_code=SEARCH_RESULT_ATTRS,\n verbose_name=_('Attributes')\n )\n\n trim_length = 30\n\n class Meta:\n attrs = {\n 'class': 'table table-hover object-list',\n }\n empty_text = _('No results found')\n\n def __init__(self, data, highlight=None, **kwargs):\n self.highlight = highlight\n super().__init__(data, **kwargs)\n\n def render_field(self, value, record):\n if hasattr(record.object, value):\n return 
title(record.object._meta.get_field(value).verbose_name)\n return value\n\n def render_value(self, value):\n if not self.highlight:\n return value\n\n value = highlight_string(value, self.highlight, trim_pre=self.trim_length, trim_post=self.trim_length)\n\n return mark_safe(value)\n", "path": "netbox/netbox/tables/tables.py"}, {"content": "import collections\n\n\nclass Registry(dict):\n \"\"\"\n Central registry for registration of functionality. Once a Registry is initialized, keys cannot be added or\n removed (though the value of each key is mutable).\n \"\"\"\n def __getitem__(self, key):\n try:\n return super().__getitem__(key)\n except KeyError:\n raise KeyError(f\"Invalid store: {key}\")\n\n def __setitem__(self, key, value):\n raise TypeError(\"Cannot add stores to registry after initialization\")\n\n def __delitem__(self, key):\n raise TypeError(\"Cannot delete stores from registry\")\n\n\n# Initialize the global registry\nregistry = Registry({\n 'counter_fields': collections.defaultdict(dict),\n 'data_backends': dict(),\n 'denormalized_fields': collections.defaultdict(list),\n 'model_features': dict(),\n 'models': collections.defaultdict(set),\n 'plugins': dict(),\n 'search': dict(),\n 'tables': collections.defaultdict(dict),\n 'views': collections.defaultdict(dict),\n 'widgets': dict(),\n})\n", "path": "netbox/netbox/registry.py"}, {"content": "from netbox.registry import registry\n\n__all__ = (\n 'get_table_ordering',\n 'linkify_phone',\n 'register_table_column'\n)\n\n\ndef get_table_ordering(request, table):\n \"\"\"\n Given a request, return the prescribed table ordering, if any. This may be necessary to determine prior to rendering\n the table itself.\n \"\"\"\n # Check for an explicit ordering\n if 'sort' in request.GET:\n return request.GET['sort'] or None\n\n # Check for a configured preference\n if request.user.is_authenticated:\n if preference := request.user.config.get(f'tables.{table.__name__}.ordering'):\n return preference\n\n\ndef linkify_phone(value):\n \"\"\"\n Render a telephone number as a hyperlink.\n \"\"\"\n if value is None:\n return None\n return f\"tel:{value}\"\n\n\ndef register_table_column(column, name, *tables):\n \"\"\"\n Register a custom column for use on one or more tables.\n\n Args:\n column: The column instance to register\n name: The name of the table column\n tables: One or more table classes\n \"\"\"\n for table in tables:\n reg = registry['tables'][table]\n if name in reg:\n raise ValueError(f\"A column named {name} is already defined for table {table.__name__}\")\n reg[name] = column\n", "path": "netbox/utilities/tables.py"}]} | 3,831 | 672 |
gh_patches_debug_39486 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-491 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve project description on PyPI
[Project description](https://pypi.org/project/plasmapy/#description) is currently bland. We should detail some of the functionality, the problems that the project is trying to address, mention the OpenAstronomy affiliation, etc.
--- END ISSUE ---
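To make the request concrete, here is a minimal sketch of the kind of metadata that usually turns a bland PyPI page into a useful one: a `long_description` pulled from the README plus keywords and trove classifiers. The description text, keywords, and classifiers below are illustrative assumptions rather than the project's actual values; the change actually adopted for PlasmaPy is in the diff further down.

```python
# Illustrative sketch only: the concrete strings are placeholders, not PlasmaPy's metadata.
from setuptools import setup

with open("README.rst") as f:
    long_description = f.read()  # rendered on the PyPI project page

setup(
    name="plasmapy",
    version="0.1.dev0",
    description="A community-developed Python package for plasma physics",
    long_description=long_description,
    url="http://plasmapy.org",
    keywords=["plasma", "physics", "science"],
    classifiers=[
        "Intended Audience :: Science/Research",
        "Programming Language :: Python :: 3",
        "Topic :: Scientific/Engineering :: Physics",
    ],
)
```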
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # Licensed under a 3-clause BSD style license - see LICENSE.rst
3
4 import glob
5 import os
6 import sys
7
8 # Enforce Python version check - this is the same check as in __init__.py but
9 # this one has to happen before importing ah_bootstrap.
10 if sys.version_info < tuple((int(val) for val in "3.6".split('.'))):
11 sys.stderr.write("ERROR: plasmapy requires Python {} or later\n".format(3.6))
12 sys.exit(1)
13
14 import ah_bootstrap
15 from setuptools import setup
16
17 # A dirty hack to get around some early import/configurations ambiguities
18 if sys.version_info[0] >= 3:
19 import builtins
20 else:
21 import __builtin__ as builtins
22 builtins._ASTROPY_SETUP_ = True
23
24 from astropy_helpers.setup_helpers import (register_commands, get_debug_option,
25 get_package_info)
26 from astropy_helpers.git_helpers import get_git_devstr
27 from astropy_helpers.version_helpers import generate_version_py
28
29 # Get some values from the setup.cfg
30 try:
31 from ConfigParser import ConfigParser
32 except ImportError:
33 from configparser import ConfigParser
34
35 conf = ConfigParser()
36 conf.read(['setup.cfg'])
37 metadata = dict(conf.items('metadata'))
38
39 PACKAGENAME = metadata.get('package_name', 'plasmapy')
40 DESCRIPTION = metadata.get('description', 'plasmapy')
41 AUTHOR = metadata.get('author', 'PlasmaPy Developers')
42 AUTHOR_EMAIL = metadata.get('author_email', '')
43 LICENSE = metadata.get('license', 'unknown')
44 URL = metadata.get('url', 'http://plasmapy.org')
45
46 # order of priority for long_description:
47 # (1) set in setup.cfg,
48 # (2) load LONG_DESCRIPTION.rst,
49 # (3) load README.rst,
50 # (4) package docstring
51 readme_glob = 'README*'
52 _cfg_long_description = metadata.get('long_description', '')
53 if _cfg_long_description:
54 LONG_DESCRIPTION = _cfg_long_description
55
56 elif os.path.exists('LONG_DESCRIPTION.rst'):
57 with open('LONG_DESCRIPTION.rst') as f:
58 LONG_DESCRIPTION = f.read()
59
60 elif len(glob.glob(readme_glob)) > 0:
61 with open(glob.glob(readme_glob)[0]) as f:
62 LONG_DESCRIPTION = f.read()
63
64 else:
65 # Get the long description from the package's docstring
66 __import__(PACKAGENAME)
67 package = sys.modules[PACKAGENAME]
68 LONG_DESCRIPTION = package.__doc__
69
70 # Store the package name in a built-in variable so it's easy
71 # to get from other parts of the setup infrastructure
72 builtins._ASTROPY_PACKAGE_NAME_ = PACKAGENAME
73
74 # VERSION should be PEP440 compatible (http://www.python.org/dev/peps/pep-0440)
75 VERSION = metadata.get('version', '0.0.dev0')
76
77 # Indicates if this version is a release version
78 RELEASE = 'dev' not in VERSION
79
80 if not RELEASE:
81 VERSION += get_git_devstr(False)
82
83 # Populate the dict of setup command overrides; this should be done before
84 # invoking any other functionality from distutils since it can potentially
85 # modify distutils' behavior.
86 cmdclassd = register_commands(PACKAGENAME, VERSION, RELEASE)
87
88 # Freeze build information in version.py
89 generate_version_py(PACKAGENAME, VERSION, RELEASE,
90 get_debug_option(PACKAGENAME))
91
92 # Treat everything in scripts except README* as a script to be installed
93 scripts = [fname for fname in glob.glob(os.path.join('scripts', '*'))
94 if not os.path.basename(fname).startswith('README')]
95
96
97 # Get configuration information from all of the various subpackages.
98 # See the docstring for setup_helpers.update_package_files for more
99 # details.
100 package_info = get_package_info()
101
102 # Add the project-global data
103 package_info['package_data'].setdefault(PACKAGENAME, [])
104 package_info['package_data'][PACKAGENAME].append('data/*')
105
106 # Define entry points for command-line scripts
107 entry_points = {'console_scripts': []}
108
109 if conf.has_section('entry_points'):
110 entry_point_list = conf.items('entry_points')
111 for entry_point in entry_point_list:
112 entry_points['console_scripts'].append('{0} = {1}'.format(
113 entry_point[0], entry_point[1]))
114
115 # Include all .c files, recursively, including those generated by
116 # Cython, since we can not do this in MANIFEST.in with a "dynamic"
117 # directory name.
118 c_files = []
119 for root, dirs, files in os.walk(PACKAGENAME):
120 for filename in files:
121 if filename.endswith('.c'):
122 c_files.append(
123 os.path.join(
124 os.path.relpath(root, PACKAGENAME), filename))
125 package_info['package_data'][PACKAGENAME].extend(c_files)
126
127 # Note that requires and provides should not be included in the call to
128 # ``setup``, since these are now deprecated. See this link for more details:
129 # https://groups.google.com/forum/#!topic/astropy-dev/urYO8ckB2uM
130
131 setup(name=PACKAGENAME,
132 version=VERSION,
133 description=DESCRIPTION,
134 scripts=scripts,
135 setup_requires=metadata.get("setup_requires", None),
136 install_requires=[s.strip() for s in metadata.get('install_requires', 'astropy').split(',')],
137 author=AUTHOR,
138 author_email=AUTHOR_EMAIL,
139 license=LICENSE,
140 url=URL,
141 long_description=LONG_DESCRIPTION,
142 cmdclass=cmdclassd,
143 zip_safe=False,
144 use_2to3=False,
145 include_package_data=True,
146 entry_points=entry_points,
147 python_requires='>={}'.format("3.6"),
148 **package_info
149 )
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,6 +23,7 @@
from astropy_helpers.setup_helpers import (register_commands, get_debug_option,
get_package_info)
+from astropy_helpers.distutils_helpers import is_distutils_display_option
from astropy_helpers.git_helpers import get_git_devstr
from astropy_helpers.version_helpers import generate_version_py
@@ -124,6 +125,18 @@
os.path.relpath(root, PACKAGENAME), filename))
package_info['package_data'][PACKAGENAME].extend(c_files)
+setup_requires = ['numpy']
+
+# Make sure to have the packages needed for building PlasmaPy, but do not require them
+# when installing from an sdist as the c files are included there.
+if not os.path.exists(os.path.join(os.path.dirname(__file__), 'PKG-INFO')):
+ setup_requires.extend(['cython>=0.27.2'])
+
+# Avoid installing setup_requires dependencies if the user just
+# queries for information
+if is_distutils_display_option():
+ setup_requires = []
+
# Note that requires and provides should not be included in the call to
# ``setup``, since these are now deprecated. See this link for more details:
# https://groups.google.com/forum/#!topic/astropy-dev/urYO8ckB2uM
@@ -132,18 +145,33 @@
version=VERSION,
description=DESCRIPTION,
scripts=scripts,
- setup_requires=metadata.get("setup_requires", None),
+ setup_requires=[s.strip() for s in metadata.get('install_requires', 'astropy').split(',')],
install_requires=[s.strip() for s in metadata.get('install_requires', 'astropy').split(',')],
author=AUTHOR,
author_email=AUTHOR_EMAIL,
license=LICENSE,
url=URL,
long_description=LONG_DESCRIPTION,
+ keywords=['plasma', 'physics', 'transport', 'collisions', 'science',
+ 'atomic', 'particle', 'simulation', 'langmuir', 'tokamak',
+ 'instability', 'modeling'],
+ classifiers=[
+ 'Intended Audience :: Science/Research',
+ 'License :: OSI Approved :: BSD-2-Clause-Patent',
+ 'Operating System :: OS Independent',
+ 'Programming Language :: C',
+ 'Programming Language :: Cython',
+ 'Programming Language :: Python :: 3',
+ 'Programming Language :: Python :: Implementation :: CPython',
+ 'Topic :: Scientific/Engineering :: Astronomy',
+ 'Topic :: Scientific/Engineering :: Physics'
+ ],
cmdclass=cmdclassd,
zip_safe=False,
use_2to3=False,
include_package_data=True,
entry_points=entry_points,
python_requires='>={}'.format("3.6"),
+ tests_require=["pytest", "pytest-astropy"],
**package_info
)
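Two details of the patch above are worth spelling out: Cython is only required when building from a source checkout (an sdist ships a `PKG-INFO` file along with the pre-generated C files), and no build requirements are pulled in when setuptools is merely asked to display metadata. A rough standalone approximation of that guard (not the actual `astropy_helpers.is_distutils_display_option` implementation) might look like this:

```python
import os
import sys

# Distutils "display options" only print metadata, e.g. `python setup.py --version --classifiers`.
DISPLAY_OPTIONS = {
    "--name", "--version", "--author", "--author-email", "--url",
    "--license", "--description", "--long-description", "--keywords", "--classifiers",
}

setup_requires = ["numpy"]
# PKG-INFO is present in an sdist, where the generated .c files are already included.
if not os.path.exists(os.path.join(os.path.dirname(__file__), "PKG-INFO")):
    setup_requires.append("cython>=0.27.2")
# Skip build requirements entirely if the user only asked for information.
if sys.argv[1:] and all(arg in DISPLAY_OPTIONS for arg in sys.argv[1:]):
    setup_requires = []
```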
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,6 +23,7 @@\n \n from astropy_helpers.setup_helpers import (register_commands, get_debug_option,\n get_package_info)\n+from astropy_helpers.distutils_helpers import is_distutils_display_option\n from astropy_helpers.git_helpers import get_git_devstr\n from astropy_helpers.version_helpers import generate_version_py\n \n@@ -124,6 +125,18 @@\n os.path.relpath(root, PACKAGENAME), filename))\n package_info['package_data'][PACKAGENAME].extend(c_files)\n \n+setup_requires = ['numpy']\n+\n+# Make sure to have the packages needed for building PlasmaPy, but do not require them\n+# when installing from an sdist as the c files are included there.\n+if not os.path.exists(os.path.join(os.path.dirname(__file__), 'PKG-INFO')):\n+ setup_requires.extend(['cython>=0.27.2'])\n+\n+# Avoid installing setup_requires dependencies if the user just\n+# queries for information\n+if is_distutils_display_option():\n+ setup_requires = []\n+\n # Note that requires and provides should not be included in the call to\n # ``setup``, since these are now deprecated. See this link for more details:\n # https://groups.google.com/forum/#!topic/astropy-dev/urYO8ckB2uM\n@@ -132,18 +145,33 @@\n version=VERSION,\n description=DESCRIPTION,\n scripts=scripts,\n- setup_requires=metadata.get(\"setup_requires\", None),\n+ setup_requires=[s.strip() for s in metadata.get('install_requires', 'astropy').split(',')],\n install_requires=[s.strip() for s in metadata.get('install_requires', 'astropy').split(',')],\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n license=LICENSE,\n url=URL,\n long_description=LONG_DESCRIPTION,\n+ keywords=['plasma', 'physics', 'transport', 'collisions', 'science',\n+ 'atomic', 'particle', 'simulation', 'langmuir', 'tokamak',\n+ 'instability', 'modeling'],\n+ classifiers=[\n+ 'Intended Audience :: Science/Research',\n+ 'License :: OSI Approved :: BSD-2-Clause-Patent',\n+ 'Operating System :: OS Independent',\n+ 'Programming Language :: C',\n+ 'Programming Language :: Cython',\n+ 'Programming Language :: Python :: 3',\n+ 'Programming Language :: Python :: Implementation :: CPython',\n+ 'Topic :: Scientific/Engineering :: Astronomy',\n+ 'Topic :: Scientific/Engineering :: Physics'\n+ ],\n cmdclass=cmdclassd,\n zip_safe=False,\n use_2to3=False,\n include_package_data=True,\n entry_points=entry_points,\n python_requires='>={}'.format(\"3.6\"),\n+ tests_require=[\"pytest\", \"pytest-astropy\"],\n **package_info\n )\n", "issue": "Improve project description on PyPI\n[Project description](https://pypi.org/project/plasmapy/#description) is currently bland. 
We should detail some functionality, problems that the project is trying to address, mention openAstronomy affiliation, etc.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\nimport glob\nimport os\nimport sys\n\n# Enforce Python version check - this is the same check as in __init__.py but\n# this one has to happen before importing ah_bootstrap.\nif sys.version_info < tuple((int(val) for val in \"3.6\".split('.'))):\n sys.stderr.write(\"ERROR: plasmapy requires Python {} or later\\n\".format(3.6))\n sys.exit(1)\n\nimport ah_bootstrap\nfrom setuptools import setup\n\n# A dirty hack to get around some early import/configurations ambiguities\nif sys.version_info[0] >= 3:\n import builtins\nelse:\n import __builtin__ as builtins\nbuiltins._ASTROPY_SETUP_ = True\n\nfrom astropy_helpers.setup_helpers import (register_commands, get_debug_option,\n get_package_info)\nfrom astropy_helpers.git_helpers import get_git_devstr\nfrom astropy_helpers.version_helpers import generate_version_py\n\n# Get some values from the setup.cfg\ntry:\n from ConfigParser import ConfigParser\nexcept ImportError:\n from configparser import ConfigParser\n\nconf = ConfigParser()\nconf.read(['setup.cfg'])\nmetadata = dict(conf.items('metadata'))\n\nPACKAGENAME = metadata.get('package_name', 'plasmapy')\nDESCRIPTION = metadata.get('description', 'plasmapy')\nAUTHOR = metadata.get('author', 'PlasmaPy Developers')\nAUTHOR_EMAIL = metadata.get('author_email', '')\nLICENSE = metadata.get('license', 'unknown')\nURL = metadata.get('url', 'http://plasmapy.org')\n\n# order of priority for long_description:\n# (1) set in setup.cfg,\n# (2) load LONG_DESCRIPTION.rst,\n# (3) load README.rst,\n# (4) package docstring\nreadme_glob = 'README*'\n_cfg_long_description = metadata.get('long_description', '')\nif _cfg_long_description:\n LONG_DESCRIPTION = _cfg_long_description\n\nelif os.path.exists('LONG_DESCRIPTION.rst'):\n with open('LONG_DESCRIPTION.rst') as f:\n LONG_DESCRIPTION = f.read()\n\nelif len(glob.glob(readme_glob)) > 0:\n with open(glob.glob(readme_glob)[0]) as f:\n LONG_DESCRIPTION = f.read()\n\nelse:\n # Get the long description from the package's docstring\n __import__(PACKAGENAME)\n package = sys.modules[PACKAGENAME]\n LONG_DESCRIPTION = package.__doc__\n\n# Store the package name in a built-in variable so it's easy\n# to get from other parts of the setup infrastructure\nbuiltins._ASTROPY_PACKAGE_NAME_ = PACKAGENAME\n\n# VERSION should be PEP440 compatible (http://www.python.org/dev/peps/pep-0440)\nVERSION = metadata.get('version', '0.0.dev0')\n\n# Indicates if this version is a release version\nRELEASE = 'dev' not in VERSION\n\nif not RELEASE:\n VERSION += get_git_devstr(False)\n\n# Populate the dict of setup command overrides; this should be done before\n# invoking any other functionality from distutils since it can potentially\n# modify distutils' behavior.\ncmdclassd = register_commands(PACKAGENAME, VERSION, RELEASE)\n\n# Freeze build information in version.py\ngenerate_version_py(PACKAGENAME, VERSION, RELEASE,\n get_debug_option(PACKAGENAME))\n\n# Treat everything in scripts except README* as a script to be installed\nscripts = [fname for fname in glob.glob(os.path.join('scripts', '*'))\n if not os.path.basename(fname).startswith('README')]\n\n\n# Get configuration information from all of the various subpackages.\n# See the docstring for setup_helpers.update_package_files for more\n# details.\npackage_info = get_package_info()\n\n# Add the 
project-global data\npackage_info['package_data'].setdefault(PACKAGENAME, [])\npackage_info['package_data'][PACKAGENAME].append('data/*')\n\n# Define entry points for command-line scripts\nentry_points = {'console_scripts': []}\n\nif conf.has_section('entry_points'):\n entry_point_list = conf.items('entry_points')\n for entry_point in entry_point_list:\n entry_points['console_scripts'].append('{0} = {1}'.format(\n entry_point[0], entry_point[1]))\n\n# Include all .c files, recursively, including those generated by\n# Cython, since we can not do this in MANIFEST.in with a \"dynamic\"\n# directory name.\nc_files = []\nfor root, dirs, files in os.walk(PACKAGENAME):\n for filename in files:\n if filename.endswith('.c'):\n c_files.append(\n os.path.join(\n os.path.relpath(root, PACKAGENAME), filename))\npackage_info['package_data'][PACKAGENAME].extend(c_files)\n\n# Note that requires and provides should not be included in the call to\n# ``setup``, since these are now deprecated. See this link for more details:\n# https://groups.google.com/forum/#!topic/astropy-dev/urYO8ckB2uM\n\nsetup(name=PACKAGENAME,\n version=VERSION,\n description=DESCRIPTION,\n scripts=scripts,\n setup_requires=metadata.get(\"setup_requires\", None),\n install_requires=[s.strip() for s in metadata.get('install_requires', 'astropy').split(',')],\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n license=LICENSE,\n url=URL,\n long_description=LONG_DESCRIPTION,\n cmdclass=cmdclassd,\n zip_safe=False,\n use_2to3=False,\n include_package_data=True,\n entry_points=entry_points,\n python_requires='>={}'.format(\"3.6\"),\n **package_info\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\nimport glob\nimport os\nimport sys\n\n# Enforce Python version check - this is the same check as in __init__.py but\n# this one has to happen before importing ah_bootstrap.\nif sys.version_info < tuple((int(val) for val in \"3.6\".split('.'))):\n sys.stderr.write(\"ERROR: plasmapy requires Python {} or later\\n\".format(3.6))\n sys.exit(1)\n\nimport ah_bootstrap\nfrom setuptools import setup\n\n# A dirty hack to get around some early import/configurations ambiguities\nif sys.version_info[0] >= 3:\n import builtins\nelse:\n import __builtin__ as builtins\nbuiltins._ASTROPY_SETUP_ = True\n\nfrom astropy_helpers.setup_helpers import (register_commands, get_debug_option,\n get_package_info)\nfrom astropy_helpers.distutils_helpers import is_distutils_display_option\nfrom astropy_helpers.git_helpers import get_git_devstr\nfrom astropy_helpers.version_helpers import generate_version_py\n\n# Get some values from the setup.cfg\ntry:\n from ConfigParser import ConfigParser\nexcept ImportError:\n from configparser import ConfigParser\n\nconf = ConfigParser()\nconf.read(['setup.cfg'])\nmetadata = dict(conf.items('metadata'))\n\nPACKAGENAME = metadata.get('package_name', 'plasmapy')\nDESCRIPTION = metadata.get('description', 'plasmapy')\nAUTHOR = metadata.get('author', 'PlasmaPy Developers')\nAUTHOR_EMAIL = metadata.get('author_email', '')\nLICENSE = metadata.get('license', 'unknown')\nURL = metadata.get('url', 'http://plasmapy.org')\n\n# order of priority for long_description:\n# (1) set in setup.cfg,\n# (2) load LONG_DESCRIPTION.rst,\n# (3) load README.rst,\n# (4) package docstring\nreadme_glob = 'README*'\n_cfg_long_description = metadata.get('long_description', '')\nif _cfg_long_description:\n LONG_DESCRIPTION = _cfg_long_description\n\nelif 
os.path.exists('LONG_DESCRIPTION.rst'):\n with open('LONG_DESCRIPTION.rst') as f:\n LONG_DESCRIPTION = f.read()\n\nelif len(glob.glob(readme_glob)) > 0:\n with open(glob.glob(readme_glob)[0]) as f:\n LONG_DESCRIPTION = f.read()\n\nelse:\n # Get the long description from the package's docstring\n __import__(PACKAGENAME)\n package = sys.modules[PACKAGENAME]\n LONG_DESCRIPTION = package.__doc__\n\n# Store the package name in a built-in variable so it's easy\n# to get from other parts of the setup infrastructure\nbuiltins._ASTROPY_PACKAGE_NAME_ = PACKAGENAME\n\n# VERSION should be PEP440 compatible (http://www.python.org/dev/peps/pep-0440)\nVERSION = metadata.get('version', '0.0.dev0')\n\n# Indicates if this version is a release version\nRELEASE = 'dev' not in VERSION\n\nif not RELEASE:\n VERSION += get_git_devstr(False)\n\n# Populate the dict of setup command overrides; this should be done before\n# invoking any other functionality from distutils since it can potentially\n# modify distutils' behavior.\ncmdclassd = register_commands(PACKAGENAME, VERSION, RELEASE)\n\n# Freeze build information in version.py\ngenerate_version_py(PACKAGENAME, VERSION, RELEASE,\n get_debug_option(PACKAGENAME))\n\n# Treat everything in scripts except README* as a script to be installed\nscripts = [fname for fname in glob.glob(os.path.join('scripts', '*'))\n if not os.path.basename(fname).startswith('README')]\n\n\n# Get configuration information from all of the various subpackages.\n# See the docstring for setup_helpers.update_package_files for more\n# details.\npackage_info = get_package_info()\n\n# Add the project-global data\npackage_info['package_data'].setdefault(PACKAGENAME, [])\npackage_info['package_data'][PACKAGENAME].append('data/*')\n\n# Define entry points for command-line scripts\nentry_points = {'console_scripts': []}\n\nif conf.has_section('entry_points'):\n entry_point_list = conf.items('entry_points')\n for entry_point in entry_point_list:\n entry_points['console_scripts'].append('{0} = {1}'.format(\n entry_point[0], entry_point[1]))\n\n# Include all .c files, recursively, including those generated by\n# Cython, since we can not do this in MANIFEST.in with a \"dynamic\"\n# directory name.\nc_files = []\nfor root, dirs, files in os.walk(PACKAGENAME):\n for filename in files:\n if filename.endswith('.c'):\n c_files.append(\n os.path.join(\n os.path.relpath(root, PACKAGENAME), filename))\npackage_info['package_data'][PACKAGENAME].extend(c_files)\n\nsetup_requires = ['numpy']\n\n# Make sure to have the packages needed for building PlasmaPy, but do not require them\n# when installing from an sdist as the c files are included there.\nif not os.path.exists(os.path.join(os.path.dirname(__file__), 'PKG-INFO')):\n setup_requires.extend(['cython>=0.27.2'])\n\n# Avoid installing setup_requires dependencies if the user just\n# queries for information\nif is_distutils_display_option():\n setup_requires = []\n\n# Note that requires and provides should not be included in the call to\n# ``setup``, since these are now deprecated. 
See this link for more details:\n# https://groups.google.com/forum/#!topic/astropy-dev/urYO8ckB2uM\n\nsetup(name=PACKAGENAME,\n version=VERSION,\n description=DESCRIPTION,\n scripts=scripts,\n setup_requires=[s.strip() for s in metadata.get('install_requires', 'astropy').split(',')],\n install_requires=[s.strip() for s in metadata.get('install_requires', 'astropy').split(',')],\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n license=LICENSE,\n url=URL,\n long_description=LONG_DESCRIPTION,\n keywords=['plasma', 'physics', 'transport', 'collisions', 'science',\n 'atomic', 'particle', 'simulation', 'langmuir', 'tokamak',\n 'instability', 'modeling'],\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD-2-Clause-Patent',\n 'Operating System :: OS Independent',\n 'Programming Language :: C',\n 'Programming Language :: Cython',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Topic :: Scientific/Engineering :: Physics'\n ],\n cmdclass=cmdclassd,\n zip_safe=False,\n use_2to3=False,\n include_package_data=True,\n entry_points=entry_points,\n python_requires='>={}'.format(\"3.6\"),\n tests_require=[\"pytest\", \"pytest-astropy\"],\n **package_info\n)\n", "path": "setup.py"}]} | 1,899 | 645 |
gh_patches_debug_40928 | rasdani/github-patches | git_diff | CTFd__CTFd-1270 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to import backups with ISO 8601 date+time based flags
**Environment**:
- CTFd Version/Commit: Latest (As of 18/02/2020)
- Operating System: Docker (Underlying OS tested on Mac and Google Cloud Run)
- Web Browser and Version: Chrome for Mac Version 79.0.3945.130 (Official Build) (64-bit)
**What happened?**
When importing a backup into a fresh CTFd container, I receive the error "ValueError('unconverted data remains: Z')".
After a bit of testing, I found that the 'Z' is part of one of my flags in flag.json. Unzipping the backup, removing the Z, and re-zipping allows the backup to be imported correctly.
The full flag in the broken backup was: 1996-02-20T05:00:00Z
**What did you expect to happen?**
The backup should be imported correctly.
**How to reproduce your issue**
Create a CTF game with "1996-02-20T05:00:00Z" as a flag (no quotes). Export the CTF as a backup, then try to re-import it.
**Any associated stack traces or error logs**
"ValueError('unconverted data remains: Z')" as a browser popup
--- END ISSUE ---
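The reported error is easy to reproduce in isolation. The SQLite import path quoted below (see `import_ctf` in `CTFd/utils/exports/__init__.py`) only checks that a string value *starts* like a timestamp before handing it to `strptime`, so any flag with a trailing character such as "Z" raises exactly this `ValueError`. A minimal sketch of the failure:

```python
import re
from datetime import datetime

flag = "1996-02-20T05:00:00Z"

# re.match anchors only at the start of the string, so the trailing "Z" does not stop the match.
if re.match(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", flag):
    # strptime then rejects the leftover character:
    # ValueError: unconverted data remains: Z
    datetime.strptime(flag, "%Y-%m-%dT%H:%M:%S")
```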
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/exports/__init__.py`
Content:
```
1 import datetime
2 import json
3 import os
4 import re
5 import tempfile
6 import zipfile
7
8 import datafreeze
9 import dataset
10 import six
11 from alembic.util import CommandError
12 from datafreeze.format import SERIALIZERS
13 from datafreeze.format.fjson import JSONEncoder, JSONSerializer
14 from flask import current_app as app
15 from flask_migrate import upgrade
16 from sqlalchemy.exc import OperationalError, ProgrammingError
17
18 from CTFd.cache import cache
19 from CTFd.models import db
20 from CTFd.utils import get_app_config, set_config
21 from CTFd.utils.migrations import (
22 create_database,
23 drop_database,
24 get_current_revision,
25 stamp_latest_revision,
26 )
27 from CTFd.utils.uploads import get_uploader
28
29
30 class CTFdSerializer(JSONSerializer):
31 """
32 Slightly modified datafreeze serializer so that we can properly
33 export the CTFd database into a zip file.
34 """
35
36 def close(self):
37 for path, result in self.buckets.items():
38 result = self.wrap(result)
39
40 if self.fileobj is None:
41 fh = open(path, "wb")
42 else:
43 fh = self.fileobj
44
45 # Certain databases (MariaDB) store JSON as LONGTEXT.
46 # Before emitting a file we should standardize to valid JSON (i.e. a dict)
47 # See Issue #973
48 for i, r in enumerate(result["results"]):
49 data = r.get("requirements")
50 if data:
51 try:
52 if isinstance(data, six.string_types):
53 result["results"][i]["requirements"] = json.loads(data)
54 except ValueError:
55 pass
56
57 data = json.dumps(
58 result, cls=JSONEncoder, indent=self.export.get_int("indent")
59 )
60
61 callback = self.export.get("callback")
62 if callback:
63 data = "%s && %s(%s);" % (callback, callback, data)
64
65 if six.PY3:
66 fh.write(bytes(data, encoding="utf-8"))
67 else:
68 fh.write(data)
69 if self.fileobj is None:
70 fh.close()
71
72
73 SERIALIZERS["ctfd"] = CTFdSerializer # Load the custom serializer
74
75
76 def export_ctf():
77 # TODO: For some unknown reason dataset is only able to see alembic_version during tests.
78 # Even using a real sqlite database. This makes this test impossible to pass in sqlite.
79 db = dataset.connect(get_app_config("SQLALCHEMY_DATABASE_URI"))
80
81 # Backup database
82 backup = tempfile.NamedTemporaryFile()
83
84 backup_zip = zipfile.ZipFile(backup, "w")
85
86 tables = db.tables
87 for table in tables:
88 result = db[table].all()
89 result_file = six.BytesIO()
90 datafreeze.freeze(result, format="ctfd", fileobj=result_file)
91 result_file.seek(0)
92 backup_zip.writestr("db/{}.json".format(table), result_file.read())
93
94 # # Guarantee that alembic_version is saved into the export
95 if "alembic_version" not in tables:
96 result = {
97 "count": 1,
98 "results": [{"version_num": get_current_revision()}],
99 "meta": {},
100 }
101 result_file = six.BytesIO()
102 json.dump(result, result_file)
103 result_file.seek(0)
104 backup_zip.writestr("db/alembic_version.json", result_file.read())
105
106 # Backup uploads
107 uploader = get_uploader()
108 uploader.sync()
109
110 upload_folder = os.path.join(
111 os.path.normpath(app.root_path), app.config.get("UPLOAD_FOLDER")
112 )
113 for root, dirs, files in os.walk(upload_folder):
114 for file in files:
115 parent_dir = os.path.basename(root)
116 backup_zip.write(
117 os.path.join(root, file),
118 arcname=os.path.join("uploads", parent_dir, file),
119 )
120
121 backup_zip.close()
122 backup.seek(0)
123 return backup
124
125
126 def import_ctf(backup, erase=True):
127 if not zipfile.is_zipfile(backup):
128 raise zipfile.BadZipfile
129
130 backup = zipfile.ZipFile(backup)
131
132 members = backup.namelist()
133 max_content_length = get_app_config("MAX_CONTENT_LENGTH")
134 for f in members:
135 if f.startswith("/") or ".." in f:
136 # Abort on malicious zip files
137 raise zipfile.BadZipfile
138 info = backup.getinfo(f)
139 if max_content_length:
140 if info.file_size > max_content_length:
141 raise zipfile.LargeZipFile
142
143 try:
144 alembic_version = json.loads(backup.open("db/alembic_version.json").read())
145 alembic_version = alembic_version["results"][0]["version_num"]
146 except Exception:
147 raise Exception(
148 "Could not determine appropriate database version. This backup cannot be automatically imported."
149 )
150
151 # Check if the alembic version is from CTFd 1.x
152 if alembic_version in (
153 "1ec4a28fe0ff",
154 "2539d8b5082e",
155 "7e9efd084c5a",
156 "87733981ca0e",
157 "a4e30c94c360",
158 "c12d2a1b0926",
159 "c7225db614c1",
160 "cb3cfcc47e2f",
161 "cbf5620f8e15",
162 "d5a224bf5862",
163 "d6514ec92738",
164 "dab615389702",
165 "e62fd69bd417",
166 ):
167 raise Exception(
168 "The version of CTFd that this backup is from is too old to be automatically imported."
169 )
170
171 if erase:
172 drop_database()
173 create_database()
174 # We explicitly do not want to upgrade or stamp here.
175 # The import will have this information.
176
177 side_db = dataset.connect(get_app_config("SQLALCHEMY_DATABASE_URI"))
178 sqlite = get_app_config("SQLALCHEMY_DATABASE_URI").startswith("sqlite")
179 postgres = get_app_config("SQLALCHEMY_DATABASE_URI").startswith("postgres")
180
181 try:
182 if postgres:
183 side_db.query("SET session_replication_role=replica;")
184 else:
185 side_db.query("SET FOREIGN_KEY_CHECKS=0;")
186 except Exception:
187 print("Failed to disable foreign key checks. Continuing.")
188
189 first = [
190 "db/teams.json",
191 "db/users.json",
192 "db/challenges.json",
193 "db/dynamic_challenge.json",
194 "db/flags.json",
195 "db/hints.json",
196 "db/unlocks.json",
197 "db/awards.json",
198 "db/tags.json",
199 "db/submissions.json",
200 "db/solves.json",
201 "db/files.json",
202 "db/notifications.json",
203 "db/pages.json",
204 "db/tracking.json",
205 "db/config.json",
206 ]
207
208 for item in first:
209 if item in members:
210 members.remove(item)
211
212 members = first + members
213
214 upgrade(revision=alembic_version)
215
216 # Create tables created by plugins
217 try:
218 app.db.create_all()
219 except OperationalError as e:
220 if not postgres:
221 raise e
222 else:
223 print("Allowing error during app.db.create_all() due to Postgres")
224
225 members.remove("db/alembic_version.json")
226
227 for member in members:
228 if member.startswith("db/"):
229 table_name = member[3:-5]
230
231 try:
232 # Try to open a file but skip if it doesn't exist.
233 data = backup.open(member).read()
234 except KeyError:
235 continue
236
237 if data:
238 table = side_db[table_name]
239
240 saved = json.loads(data)
241 for entry in saved["results"]:
242 # This is a hack to get SQLite to properly accept datetime values from dataset
243 # See Issue #246
244 if sqlite:
245 for k, v in entry.items():
246 if isinstance(v, six.string_types):
247 match = re.match(
248 r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d", v
249 )
250 if match:
251 entry[k] = datetime.datetime.strptime(
252 v, "%Y-%m-%dT%H:%M:%S.%f"
253 )
254 continue
255 match = re.match(
256 r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", v
257 )
258 if match:
259 entry[k] = datetime.datetime.strptime(
260 v, "%Y-%m-%dT%H:%M:%S"
261 )
262 continue
263 # From v2.0.0 to v2.1.0 requirements could have been a string or JSON because of a SQLAlchemy issue
264 # This is a hack to ensure we can still accept older exports. See #867
265 if member in (
266 "db/challenges.json",
267 "db/hints.json",
268 "db/awards.json",
269 ):
270 requirements = entry.get("requirements")
271 if requirements and isinstance(requirements, six.string_types):
272 entry["requirements"] = json.loads(requirements)
273
274 try:
275 table.insert(entry)
276 except ProgrammingError:
277 # MariaDB does not like JSON objects and prefers strings because it internally
278 # represents JSON with LONGTEXT.
279 # See Issue #973
280 requirements = entry.get("requirements")
281 if requirements and isinstance(requirements, dict):
282 entry["requirements"] = json.dumps(requirements)
283 table.insert(entry)
284
285 db.session.commit()
286 if postgres:
287 # This command is to set the next primary key ID for the re-inserted tables in Postgres. However,
288 # this command is very difficult to translate into SQLAlchemy code. Because Postgres is not
289 # officially supported, no major work will go into this functionality.
290 # https://stackoverflow.com/a/37972960
291 if '"' not in table_name and "'" not in table_name:
292 query = "SELECT setval(pg_get_serial_sequence('{table_name}', 'id'), coalesce(max(id)+1,1), false) FROM \"{table_name}\"".format( # nosec
293 table_name=table_name
294 )
295 side_db.engine.execute(query)
296 else:
297 raise Exception(
298 "Table name {table_name} contains quotes".format(
299 table_name=table_name
300 )
301 )
302
303 # Extracting files
304 files = [f for f in backup.namelist() if f.startswith("uploads/")]
305 uploader = get_uploader()
306 for f in files:
307 filename = f.split(os.sep, 1)
308
309 if (
310 len(filename) < 2 or os.path.basename(filename[1]) == ""
311 ): # just an empty uploads directory (e.g. uploads/) or any directory
312 continue
313
314 filename = filename[1] # Get the second entry in the list (the actual filename)
315 source = backup.open(f)
316 uploader.store(fileobj=source, filename=filename)
317
318 # Alembic sqlite support is lacking so we should just create_all anyway
319 try:
320 upgrade(revision="head")
321 except (OperationalError, CommandError, RuntimeError, SystemExit, Exception):
322 app.db.create_all()
323 stamp_latest_revision()
324
325 try:
326 if postgres:
327 side_db.query("SET session_replication_role=DEFAULT;")
328 else:
329 side_db.query("SET FOREIGN_KEY_CHECKS=1;")
330 except Exception:
331 print("Failed to enable foreign key checks. Continuing.")
332
333 # Invalidate all cached data
334 cache.clear()
335
336 # Set default theme in case the current instance or the import does not provide it
337 set_config("ctf_theme", "core")
338
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/utils/exports/__init__.py b/CTFd/utils/exports/__init__.py
--- a/CTFd/utils/exports/__init__.py
+++ b/CTFd/utils/exports/__init__.py
@@ -14,9 +14,10 @@
from flask import current_app as app
from flask_migrate import upgrade
from sqlalchemy.exc import OperationalError, ProgrammingError
+from sqlalchemy.sql import sqltypes
from CTFd.cache import cache
-from CTFd.models import db
+from CTFd.models import db, get_class_by_tablename
from CTFd.utils import get_app_config, set_config
from CTFd.utils.migrations import (
create_database,
@@ -242,24 +243,36 @@
# This is a hack to get SQLite to properly accept datetime values from dataset
# See Issue #246
if sqlite:
+ direct_table = get_class_by_tablename(table.name)
for k, v in entry.items():
if isinstance(v, six.string_types):
- match = re.match(
- r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d", v
- )
- if match:
- entry[k] = datetime.datetime.strptime(
- v, "%Y-%m-%dT%H:%M:%S.%f"
+ # We only want to apply this hack to columns that are expecting a datetime object
+ try:
+ is_dt_column = (
+ type(getattr(direct_table, k).type)
+ == sqltypes.DateTime
)
- continue
- match = re.match(
- r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", v
- )
- if match:
- entry[k] = datetime.datetime.strptime(
- v, "%Y-%m-%dT%H:%M:%S"
+ except AttributeError:
+ is_dt_column = False
+
+ # If the table is expecting a datetime, we should check if the string is one and convert it
+ if is_dt_column:
+ match = re.match(
+ r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d", v
+ )
+ if match:
+ entry[k] = datetime.datetime.strptime(
+ v, "%Y-%m-%dT%H:%M:%S.%f"
+ )
+ continue
+ match = re.match(
+ r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", v
)
- continue
+ if match:
+ entry[k] = datetime.datetime.strptime(
+ v, "%Y-%m-%dT%H:%M:%S"
+ )
+ continue
# From v2.0.0 to v2.1.0 requirements could have been a string or JSON because of a SQLAlchemy issue
# This is a hack to ensure we can still accept older exports. See #867
if member in (
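The key design choice in the patch above is to stop guessing from the value and instead ask the model whether the destination column actually stores a `DateTime`; a string that merely looks like a timestamp (such as the reported flag) is then left untouched. A stripped-down sketch of that check, assuming SQLAlchemy declarative models like CTFd's:

```python
from sqlalchemy.sql import sqltypes

def expects_datetime(model, column_name):
    """Return True only when the mapped column stores a DateTime."""
    try:
        return isinstance(getattr(model, column_name).type, sqltypes.DateTime)
    except AttributeError:
        # Unknown attribute (e.g. a plugin-defined column): leave the value alone.
        return False
```

With a helper like this, the ISO-8601-looking flag content bypasses the `strptime` conversion entirely, because a flag's `content` column holds text rather than a datetime.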
| {"golden_diff": "diff --git a/CTFd/utils/exports/__init__.py b/CTFd/utils/exports/__init__.py\n--- a/CTFd/utils/exports/__init__.py\n+++ b/CTFd/utils/exports/__init__.py\n@@ -14,9 +14,10 @@\n from flask import current_app as app\n from flask_migrate import upgrade\n from sqlalchemy.exc import OperationalError, ProgrammingError\n+from sqlalchemy.sql import sqltypes\n \n from CTFd.cache import cache\n-from CTFd.models import db\n+from CTFd.models import db, get_class_by_tablename\n from CTFd.utils import get_app_config, set_config\n from CTFd.utils.migrations import (\n create_database,\n@@ -242,24 +243,36 @@\n # This is a hack to get SQLite to properly accept datetime values from dataset\n # See Issue #246\n if sqlite:\n+ direct_table = get_class_by_tablename(table.name)\n for k, v in entry.items():\n if isinstance(v, six.string_types):\n- match = re.match(\n- r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d\", v\n- )\n- if match:\n- entry[k] = datetime.datetime.strptime(\n- v, \"%Y-%m-%dT%H:%M:%S.%f\"\n+ # We only want to apply this hack to columns that are expecting a datetime object\n+ try:\n+ is_dt_column = (\n+ type(getattr(direct_table, k).type)\n+ == sqltypes.DateTime\n )\n- continue\n- match = re.match(\n- r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\", v\n- )\n- if match:\n- entry[k] = datetime.datetime.strptime(\n- v, \"%Y-%m-%dT%H:%M:%S\"\n+ except AttributeError:\n+ is_dt_column = False\n+\n+ # If the table is expecting a datetime, we should check if the string is one and convert it\n+ if is_dt_column:\n+ match = re.match(\n+ r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d\", v\n+ )\n+ if match:\n+ entry[k] = datetime.datetime.strptime(\n+ v, \"%Y-%m-%dT%H:%M:%S.%f\"\n+ )\n+ continue\n+ match = re.match(\n+ r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\", v\n )\n- continue\n+ if match:\n+ entry[k] = datetime.datetime.strptime(\n+ v, \"%Y-%m-%dT%H:%M:%S\"\n+ )\n+ continue\n # From v2.0.0 to v2.1.0 requirements could have been a string or JSON because of a SQLAlchemy issue\n # This is a hack to ensure we can still accept older exports. See #867\n if member in (\n", "issue": "Unable to import backups with ISO 8601 date+time based flags\n**Environment**:\r\n\r\n - CTFd Version/Commit: Latest (As of 18/02/2020)\r\n - Operating System: Docker (Underlying OS tested on Mac and Google Cloud Run)\r\n - Web Browser and Version: Chrome for Mac Version 79.0.3945.130 (Official Build) (64-bit)\r\n\r\n**What happened?**\r\n\r\nWhen importing a backup into a fresh CTFd container I receive the error \"ValueError('unconverted data remains: Z')\"\r\n\r\nAfter a bit of testing, the 'Z' is part of one of my flags in flag.json. Unzipping the backup, removing the Z and re-zipping allows the backup to be imported correctly.\r\n\r\nFull flag in broken backup was: 1996-02-20T05:00:00Z\r\n\r\n**What did you expect to happen?**\r\n\r\nBackup should be imported correctly\r\n\r\n**How to reproduce your issue**\r\n\r\nCreate a CTF game with \"1996-02-20T05:00:00Z\" as a flag (no quotes). 
Export this backup, then try and re-import it\r\n\r\n**Any associated stack traces or error logs**\r\n\r\n\"ValueError('unconverted data remains: Z')\" as a browser popup\r\n\r\n\n", "before_files": [{"content": "import datetime\nimport json\nimport os\nimport re\nimport tempfile\nimport zipfile\n\nimport datafreeze\nimport dataset\nimport six\nfrom alembic.util import CommandError\nfrom datafreeze.format import SERIALIZERS\nfrom datafreeze.format.fjson import JSONEncoder, JSONSerializer\nfrom flask import current_app as app\nfrom flask_migrate import upgrade\nfrom sqlalchemy.exc import OperationalError, ProgrammingError\n\nfrom CTFd.cache import cache\nfrom CTFd.models import db\nfrom CTFd.utils import get_app_config, set_config\nfrom CTFd.utils.migrations import (\n create_database,\n drop_database,\n get_current_revision,\n stamp_latest_revision,\n)\nfrom CTFd.utils.uploads import get_uploader\n\n\nclass CTFdSerializer(JSONSerializer):\n \"\"\"\n Slightly modified datafreeze serializer so that we can properly\n export the CTFd database into a zip file.\n \"\"\"\n\n def close(self):\n for path, result in self.buckets.items():\n result = self.wrap(result)\n\n if self.fileobj is None:\n fh = open(path, \"wb\")\n else:\n fh = self.fileobj\n\n # Certain databases (MariaDB) store JSON as LONGTEXT.\n # Before emitting a file we should standardize to valid JSON (i.e. a dict)\n # See Issue #973\n for i, r in enumerate(result[\"results\"]):\n data = r.get(\"requirements\")\n if data:\n try:\n if isinstance(data, six.string_types):\n result[\"results\"][i][\"requirements\"] = json.loads(data)\n except ValueError:\n pass\n\n data = json.dumps(\n result, cls=JSONEncoder, indent=self.export.get_int(\"indent\")\n )\n\n callback = self.export.get(\"callback\")\n if callback:\n data = \"%s && %s(%s);\" % (callback, callback, data)\n\n if six.PY3:\n fh.write(bytes(data, encoding=\"utf-8\"))\n else:\n fh.write(data)\n if self.fileobj is None:\n fh.close()\n\n\nSERIALIZERS[\"ctfd\"] = CTFdSerializer # Load the custom serializer\n\n\ndef export_ctf():\n # TODO: For some unknown reason dataset is only able to see alembic_version during tests.\n # Even using a real sqlite database. 
This makes this test impossible to pass in sqlite.\n db = dataset.connect(get_app_config(\"SQLALCHEMY_DATABASE_URI\"))\n\n # Backup database\n backup = tempfile.NamedTemporaryFile()\n\n backup_zip = zipfile.ZipFile(backup, \"w\")\n\n tables = db.tables\n for table in tables:\n result = db[table].all()\n result_file = six.BytesIO()\n datafreeze.freeze(result, format=\"ctfd\", fileobj=result_file)\n result_file.seek(0)\n backup_zip.writestr(\"db/{}.json\".format(table), result_file.read())\n\n # # Guarantee that alembic_version is saved into the export\n if \"alembic_version\" not in tables:\n result = {\n \"count\": 1,\n \"results\": [{\"version_num\": get_current_revision()}],\n \"meta\": {},\n }\n result_file = six.BytesIO()\n json.dump(result, result_file)\n result_file.seek(0)\n backup_zip.writestr(\"db/alembic_version.json\", result_file.read())\n\n # Backup uploads\n uploader = get_uploader()\n uploader.sync()\n\n upload_folder = os.path.join(\n os.path.normpath(app.root_path), app.config.get(\"UPLOAD_FOLDER\")\n )\n for root, dirs, files in os.walk(upload_folder):\n for file in files:\n parent_dir = os.path.basename(root)\n backup_zip.write(\n os.path.join(root, file),\n arcname=os.path.join(\"uploads\", parent_dir, file),\n )\n\n backup_zip.close()\n backup.seek(0)\n return backup\n\n\ndef import_ctf(backup, erase=True):\n if not zipfile.is_zipfile(backup):\n raise zipfile.BadZipfile\n\n backup = zipfile.ZipFile(backup)\n\n members = backup.namelist()\n max_content_length = get_app_config(\"MAX_CONTENT_LENGTH\")\n for f in members:\n if f.startswith(\"/\") or \"..\" in f:\n # Abort on malicious zip files\n raise zipfile.BadZipfile\n info = backup.getinfo(f)\n if max_content_length:\n if info.file_size > max_content_length:\n raise zipfile.LargeZipFile\n\n try:\n alembic_version = json.loads(backup.open(\"db/alembic_version.json\").read())\n alembic_version = alembic_version[\"results\"][0][\"version_num\"]\n except Exception:\n raise Exception(\n \"Could not determine appropriate database version. This backup cannot be automatically imported.\"\n )\n\n # Check if the alembic version is from CTFd 1.x\n if alembic_version in (\n \"1ec4a28fe0ff\",\n \"2539d8b5082e\",\n \"7e9efd084c5a\",\n \"87733981ca0e\",\n \"a4e30c94c360\",\n \"c12d2a1b0926\",\n \"c7225db614c1\",\n \"cb3cfcc47e2f\",\n \"cbf5620f8e15\",\n \"d5a224bf5862\",\n \"d6514ec92738\",\n \"dab615389702\",\n \"e62fd69bd417\",\n ):\n raise Exception(\n \"The version of CTFd that this backup is from is too old to be automatically imported.\"\n )\n\n if erase:\n drop_database()\n create_database()\n # We explicitly do not want to upgrade or stamp here.\n # The import will have this information.\n\n side_db = dataset.connect(get_app_config(\"SQLALCHEMY_DATABASE_URI\"))\n sqlite = get_app_config(\"SQLALCHEMY_DATABASE_URI\").startswith(\"sqlite\")\n postgres = get_app_config(\"SQLALCHEMY_DATABASE_URI\").startswith(\"postgres\")\n\n try:\n if postgres:\n side_db.query(\"SET session_replication_role=replica;\")\n else:\n side_db.query(\"SET FOREIGN_KEY_CHECKS=0;\")\n except Exception:\n print(\"Failed to disable foreign key checks. 
Continuing.\")\n\n first = [\n \"db/teams.json\",\n \"db/users.json\",\n \"db/challenges.json\",\n \"db/dynamic_challenge.json\",\n \"db/flags.json\",\n \"db/hints.json\",\n \"db/unlocks.json\",\n \"db/awards.json\",\n \"db/tags.json\",\n \"db/submissions.json\",\n \"db/solves.json\",\n \"db/files.json\",\n \"db/notifications.json\",\n \"db/pages.json\",\n \"db/tracking.json\",\n \"db/config.json\",\n ]\n\n for item in first:\n if item in members:\n members.remove(item)\n\n members = first + members\n\n upgrade(revision=alembic_version)\n\n # Create tables created by plugins\n try:\n app.db.create_all()\n except OperationalError as e:\n if not postgres:\n raise e\n else:\n print(\"Allowing error during app.db.create_all() due to Postgres\")\n\n members.remove(\"db/alembic_version.json\")\n\n for member in members:\n if member.startswith(\"db/\"):\n table_name = member[3:-5]\n\n try:\n # Try to open a file but skip if it doesn't exist.\n data = backup.open(member).read()\n except KeyError:\n continue\n\n if data:\n table = side_db[table_name]\n\n saved = json.loads(data)\n for entry in saved[\"results\"]:\n # This is a hack to get SQLite to properly accept datetime values from dataset\n # See Issue #246\n if sqlite:\n for k, v in entry.items():\n if isinstance(v, six.string_types):\n match = re.match(\n r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d\", v\n )\n if match:\n entry[k] = datetime.datetime.strptime(\n v, \"%Y-%m-%dT%H:%M:%S.%f\"\n )\n continue\n match = re.match(\n r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\", v\n )\n if match:\n entry[k] = datetime.datetime.strptime(\n v, \"%Y-%m-%dT%H:%M:%S\"\n )\n continue\n # From v2.0.0 to v2.1.0 requirements could have been a string or JSON because of a SQLAlchemy issue\n # This is a hack to ensure we can still accept older exports. See #867\n if member in (\n \"db/challenges.json\",\n \"db/hints.json\",\n \"db/awards.json\",\n ):\n requirements = entry.get(\"requirements\")\n if requirements and isinstance(requirements, six.string_types):\n entry[\"requirements\"] = json.loads(requirements)\n\n try:\n table.insert(entry)\n except ProgrammingError:\n # MariaDB does not like JSON objects and prefers strings because it internally\n # represents JSON with LONGTEXT.\n # See Issue #973\n requirements = entry.get(\"requirements\")\n if requirements and isinstance(requirements, dict):\n entry[\"requirements\"] = json.dumps(requirements)\n table.insert(entry)\n\n db.session.commit()\n if postgres:\n # This command is to set the next primary key ID for the re-inserted tables in Postgres. However,\n # this command is very difficult to translate into SQLAlchemy code. Because Postgres is not\n # officially supported, no major work will go into this functionality.\n # https://stackoverflow.com/a/37972960\n if '\"' not in table_name and \"'\" not in table_name:\n query = \"SELECT setval(pg_get_serial_sequence('{table_name}', 'id'), coalesce(max(id)+1,1), false) FROM \\\"{table_name}\\\"\".format( # nosec\n table_name=table_name\n )\n side_db.engine.execute(query)\n else:\n raise Exception(\n \"Table name {table_name} contains quotes\".format(\n table_name=table_name\n )\n )\n\n # Extracting files\n files = [f for f in backup.namelist() if f.startswith(\"uploads/\")]\n uploader = get_uploader()\n for f in files:\n filename = f.split(os.sep, 1)\n\n if (\n len(filename) < 2 or os.path.basename(filename[1]) == \"\"\n ): # just an empty uploads directory (e.g. 
uploads/) or any directory\n continue\n\n filename = filename[1] # Get the second entry in the list (the actual filename)\n source = backup.open(f)\n uploader.store(fileobj=source, filename=filename)\n\n # Alembic sqlite support is lacking so we should just create_all anyway\n try:\n upgrade(revision=\"head\")\n except (OperationalError, CommandError, RuntimeError, SystemExit, Exception):\n app.db.create_all()\n stamp_latest_revision()\n\n try:\n if postgres:\n side_db.query(\"SET session_replication_role=DEFAULT;\")\n else:\n side_db.query(\"SET FOREIGN_KEY_CHECKS=1;\")\n except Exception:\n print(\"Failed to enable foreign key checks. Continuing.\")\n\n # Invalidate all cached data\n cache.clear()\n\n # Set default theme in case the current instance or the import does not provide it\n set_config(\"ctf_theme\", \"core\")\n", "path": "CTFd/utils/exports/__init__.py"}], "after_files": [{"content": "import datetime\nimport json\nimport os\nimport re\nimport tempfile\nimport zipfile\n\nimport datafreeze\nimport dataset\nimport six\nfrom alembic.util import CommandError\nfrom datafreeze.format import SERIALIZERS\nfrom datafreeze.format.fjson import JSONEncoder, JSONSerializer\nfrom flask import current_app as app\nfrom flask_migrate import upgrade\nfrom sqlalchemy.exc import OperationalError, ProgrammingError\nfrom sqlalchemy.sql import sqltypes\n\nfrom CTFd.cache import cache\nfrom CTFd.models import db, get_class_by_tablename\nfrom CTFd.utils import get_app_config, set_config\nfrom CTFd.utils.migrations import (\n create_database,\n drop_database,\n get_current_revision,\n stamp_latest_revision,\n)\nfrom CTFd.utils.uploads import get_uploader\n\n\nclass CTFdSerializer(JSONSerializer):\n \"\"\"\n Slightly modified datafreeze serializer so that we can properly\n export the CTFd database into a zip file.\n \"\"\"\n\n def close(self):\n for path, result in self.buckets.items():\n result = self.wrap(result)\n\n if self.fileobj is None:\n fh = open(path, \"wb\")\n else:\n fh = self.fileobj\n\n # Certain databases (MariaDB) store JSON as LONGTEXT.\n # Before emitting a file we should standardize to valid JSON (i.e. a dict)\n # See Issue #973\n for i, r in enumerate(result[\"results\"]):\n data = r.get(\"requirements\")\n if data:\n try:\n if isinstance(data, six.string_types):\n result[\"results\"][i][\"requirements\"] = json.loads(data)\n except ValueError:\n pass\n\n data = json.dumps(\n result, cls=JSONEncoder, indent=self.export.get_int(\"indent\")\n )\n\n callback = self.export.get(\"callback\")\n if callback:\n data = \"%s && %s(%s);\" % (callback, callback, data)\n\n if six.PY3:\n fh.write(bytes(data, encoding=\"utf-8\"))\n else:\n fh.write(data)\n if self.fileobj is None:\n fh.close()\n\n\nSERIALIZERS[\"ctfd\"] = CTFdSerializer # Load the custom serializer\n\n\ndef export_ctf():\n # TODO: For some unknown reason dataset is only able to see alembic_version during tests.\n # Even using a real sqlite database. 
This makes this test impossible to pass in sqlite.\n db = dataset.connect(get_app_config(\"SQLALCHEMY_DATABASE_URI\"))\n\n # Backup database\n backup = tempfile.NamedTemporaryFile()\n\n backup_zip = zipfile.ZipFile(backup, \"w\")\n\n tables = db.tables\n for table in tables:\n result = db[table].all()\n result_file = six.BytesIO()\n datafreeze.freeze(result, format=\"ctfd\", fileobj=result_file)\n result_file.seek(0)\n backup_zip.writestr(\"db/{}.json\".format(table), result_file.read())\n\n # # Guarantee that alembic_version is saved into the export\n if \"alembic_version\" not in tables:\n result = {\n \"count\": 1,\n \"results\": [{\"version_num\": get_current_revision()}],\n \"meta\": {},\n }\n result_file = six.BytesIO()\n json.dump(result, result_file)\n result_file.seek(0)\n backup_zip.writestr(\"db/alembic_version.json\", result_file.read())\n\n # Backup uploads\n uploader = get_uploader()\n uploader.sync()\n\n upload_folder = os.path.join(\n os.path.normpath(app.root_path), app.config.get(\"UPLOAD_FOLDER\")\n )\n for root, dirs, files in os.walk(upload_folder):\n for file in files:\n parent_dir = os.path.basename(root)\n backup_zip.write(\n os.path.join(root, file),\n arcname=os.path.join(\"uploads\", parent_dir, file),\n )\n\n backup_zip.close()\n backup.seek(0)\n return backup\n\n\ndef import_ctf(backup, erase=True):\n if not zipfile.is_zipfile(backup):\n raise zipfile.BadZipfile\n\n backup = zipfile.ZipFile(backup)\n\n members = backup.namelist()\n max_content_length = get_app_config(\"MAX_CONTENT_LENGTH\")\n for f in members:\n if f.startswith(\"/\") or \"..\" in f:\n # Abort on malicious zip files\n raise zipfile.BadZipfile\n info = backup.getinfo(f)\n if max_content_length:\n if info.file_size > max_content_length:\n raise zipfile.LargeZipFile\n\n try:\n alembic_version = json.loads(backup.open(\"db/alembic_version.json\").read())\n alembic_version = alembic_version[\"results\"][0][\"version_num\"]\n except Exception:\n raise Exception(\n \"Could not determine appropriate database version. This backup cannot be automatically imported.\"\n )\n\n # Check if the alembic version is from CTFd 1.x\n if alembic_version in (\n \"1ec4a28fe0ff\",\n \"2539d8b5082e\",\n \"7e9efd084c5a\",\n \"87733981ca0e\",\n \"a4e30c94c360\",\n \"c12d2a1b0926\",\n \"c7225db614c1\",\n \"cb3cfcc47e2f\",\n \"cbf5620f8e15\",\n \"d5a224bf5862\",\n \"d6514ec92738\",\n \"dab615389702\",\n \"e62fd69bd417\",\n ):\n raise Exception(\n \"The version of CTFd that this backup is from is too old to be automatically imported.\"\n )\n\n if erase:\n drop_database()\n create_database()\n # We explicitly do not want to upgrade or stamp here.\n # The import will have this information.\n\n side_db = dataset.connect(get_app_config(\"SQLALCHEMY_DATABASE_URI\"))\n sqlite = get_app_config(\"SQLALCHEMY_DATABASE_URI\").startswith(\"sqlite\")\n postgres = get_app_config(\"SQLALCHEMY_DATABASE_URI\").startswith(\"postgres\")\n\n try:\n if postgres:\n side_db.query(\"SET session_replication_role=replica;\")\n else:\n side_db.query(\"SET FOREIGN_KEY_CHECKS=0;\")\n except Exception:\n print(\"Failed to disable foreign key checks. 
Continuing.\")\n\n first = [\n \"db/teams.json\",\n \"db/users.json\",\n \"db/challenges.json\",\n \"db/dynamic_challenge.json\",\n \"db/flags.json\",\n \"db/hints.json\",\n \"db/unlocks.json\",\n \"db/awards.json\",\n \"db/tags.json\",\n \"db/submissions.json\",\n \"db/solves.json\",\n \"db/files.json\",\n \"db/notifications.json\",\n \"db/pages.json\",\n \"db/tracking.json\",\n \"db/config.json\",\n ]\n\n for item in first:\n if item in members:\n members.remove(item)\n\n members = first + members\n\n upgrade(revision=alembic_version)\n\n # Create tables created by plugins\n try:\n app.db.create_all()\n except OperationalError as e:\n if not postgres:\n raise e\n else:\n print(\"Allowing error during app.db.create_all() due to Postgres\")\n\n members.remove(\"db/alembic_version.json\")\n\n for member in members:\n if member.startswith(\"db/\"):\n table_name = member[3:-5]\n\n try:\n # Try to open a file but skip if it doesn't exist.\n data = backup.open(member).read()\n except KeyError:\n continue\n\n if data:\n table = side_db[table_name]\n\n saved = json.loads(data)\n for entry in saved[\"results\"]:\n # This is a hack to get SQLite to properly accept datetime values from dataset\n # See Issue #246\n if sqlite:\n direct_table = get_class_by_tablename(table.name)\n for k, v in entry.items():\n if isinstance(v, six.string_types):\n # We only want to apply this hack to columns that are expecting a datetime object\n try:\n is_dt_column = (\n type(getattr(direct_table, k).type)\n == sqltypes.DateTime\n )\n except AttributeError:\n is_dt_column = False\n\n # If the table is expecting a datetime, we should check if the string is one and convert it\n if is_dt_column:\n match = re.match(\n r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d\", v\n )\n if match:\n entry[k] = datetime.datetime.strptime(\n v, \"%Y-%m-%dT%H:%M:%S.%f\"\n )\n continue\n match = re.match(\n r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\", v\n )\n if match:\n entry[k] = datetime.datetime.strptime(\n v, \"%Y-%m-%dT%H:%M:%S\"\n )\n continue\n # From v2.0.0 to v2.1.0 requirements could have been a string or JSON because of a SQLAlchemy issue\n # This is a hack to ensure we can still accept older exports. See #867\n if member in (\n \"db/challenges.json\",\n \"db/hints.json\",\n \"db/awards.json\",\n ):\n requirements = entry.get(\"requirements\")\n if requirements and isinstance(requirements, six.string_types):\n entry[\"requirements\"] = json.loads(requirements)\n\n try:\n table.insert(entry)\n except ProgrammingError:\n # MariaDB does not like JSON objects and prefers strings because it internally\n # represents JSON with LONGTEXT.\n # See Issue #973\n requirements = entry.get(\"requirements\")\n if requirements and isinstance(requirements, dict):\n entry[\"requirements\"] = json.dumps(requirements)\n table.insert(entry)\n\n db.session.commit()\n if postgres:\n # This command is to set the next primary key ID for the re-inserted tables in Postgres. However,\n # this command is very difficult to translate into SQLAlchemy code. 
Because Postgres is not\n # officially supported, no major work will go into this functionality.\n # https://stackoverflow.com/a/37972960\n if '\"' not in table_name and \"'\" not in table_name:\n query = \"SELECT setval(pg_get_serial_sequence('{table_name}', 'id'), coalesce(max(id)+1,1), false) FROM \\\"{table_name}\\\"\".format( # nosec\n table_name=table_name\n )\n side_db.engine.execute(query)\n else:\n raise Exception(\n \"Table name {table_name} contains quotes\".format(\n table_name=table_name\n )\n )\n\n # Extracting files\n files = [f for f in backup.namelist() if f.startswith(\"uploads/\")]\n uploader = get_uploader()\n for f in files:\n filename = f.split(os.sep, 1)\n\n if (\n len(filename) < 2 or os.path.basename(filename[1]) == \"\"\n ): # just an empty uploads directory (e.g. uploads/) or any directory\n continue\n\n filename = filename[1] # Get the second entry in the list (the actual filename)\n source = backup.open(f)\n uploader.store(fileobj=source, filename=filename)\n\n # Alembic sqlite support is lacking so we should just create_all anyway\n try:\n upgrade(revision=\"head\")\n except (OperationalError, CommandError, RuntimeError, SystemExit, Exception):\n app.db.create_all()\n stamp_latest_revision()\n\n try:\n if postgres:\n side_db.query(\"SET session_replication_role=DEFAULT;\")\n else:\n side_db.query(\"SET FOREIGN_KEY_CHECKS=1;\")\n except Exception:\n print(\"Failed to enable foreign key checks. Continuing.\")\n\n # Invalidate all cached data\n cache.clear()\n\n # Set default theme in case the current instance or the import does not provide it\n set_config(\"ctf_theme\", \"core\")\n", "path": "CTFd/utils/exports/__init__.py"}]} | 4,054 | 719 |
gh_patches_debug_10450 | rasdani/github-patches | git_diff | celery__celery-6020 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to use mysql SSL parameters in create_engine()
PR for proposed fix to this issue: https://github.com/celery/celery/pull/6020
# Checklist
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [x] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [x] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
https://github.com/celery/celery/commit/94dae1b899aae6ae2ca333773fddbc6dd603213c
This PR was made to address the following issue, which has resulted in the issue I am having now. https://github.com/celery/celery/issues/1930
#### Related Issues
https://github.com/celery/celery/issues/1930
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: celery>=4.0.0 (using it in Airflow)
# Steps to Reproduce
(see Minimally Reproducible Test Case for step by step commands. This contains information leading to the issue and a proposed fix)
In Airflow, you can set Celery configs. I was setting up Cloud SQL to use a private IP instead of a proxy. Currently, we use MySQL as the `result_backend`. Changing the host address from localhost to the private IP caused some errors, as expected.
```
OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'airflow'@'10.x.x.xxx' (using password: YES)")
```
In order to use the private IP, I need to use the SSL cert, key, and CA. By logging into the Airflow worker and scheduler pods, I confirmed that my URL and engine-arg params worked.
```
from airflow.models import DagRun
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
e = create_engine({AIRFLOW__CELERY__SQL_ALCHEMY_CONN},connect_args= {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}})
s = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=e))
s.query(DagRun).all()
```
This worked fine, so I know that my SSL certs are accessible, the engine can be created, and a session used. Non-Celery MySQL connections no longer gave an error.
The Celery documentation (https://docs.celeryproject.org/en/stable/userguide/configuration.html#conf-database-result-backend) outlines how to add engine args via `database_engine_options`. Therefore, I added
```
'database_engine_options': {
'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}}
```
However, I still get the same error.
```
OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'airflow'@'10.x.x.xxx' (using password: YES)")
```
Additionally, I get logs in the scheduler like the following:
```
{{__init__.py:56}} WARNING - Failed operation _get_task_meta_for. Retrying 1 more times.
68918-Traceback (most recent call last):
68919- File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py", line 51, in _inner
68920- return fun(*args, **kwargs)
68921- File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py", line 154, in _get_task_meta_for
68922: session = self.ResultSession()
68923: File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py", line 113, in ResultSession
68924- **self.engine_options)
68925- File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/session.py", line 59, in session_factory
68926- self.prepare_models(engine)
68927- File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/session.py", line 54, in prepare_models
68928- ResultModelBase.metadata.create_all(engine)
```
After digging through the code with @dangermike, we noticed that `get_engine` will not use the kwargs passed to it unless the session manager has been forked (https://github.com/celery/celery/blob/master/celery/backends/database/session.py#L34). Therefore, the SSL params will not be passed in our case. The only place where `self.forked = True` is set is in the after-fork cleanup handler. This was not always the case (https://github.com/celery/celery/commit/94dae1b899aae6ae2ca333773fddbc6dd603213c), but after an issue was raised about passing pool_size (https://github.com/celery/celery/issues/1930), `**kwargs` were taken out of create_engine() entirely.
Possibly something like the following would allow for kwargs to be passed in, while still addressing the pool params issue.
```
class SessionManager(object):
# ...
def get_engine(self, dburi, **kwargs):
if self.forked:
try:
return self._engines[dburi]
except KeyError:
engine = self._engines[dburi] = create_engine(dburi, **kwargs)
return engine
else:
kwargs = dict([(k, v) for k, v in kwargs.items() if not k.startswith('pool')])
return create_engine(dburi, poolclass=NullPool, **kwargs)
```
where `kwargs = dict([(k, v) for k, v in kwargs.items() if not k.startswith('pool')])` omits any pool args while keeping the rest.
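For illustration, here is a minimal, self-contained sketch of what that filter does to a typical engine-options dict; the paths and values below are placeholders, not taken from the original report:

```python
engine_options = {
    'pool_size': 5,          # pool-related -> dropped by the filter
    'pool_recycle': 1800,    # pool-related -> dropped by the filter
    'connect_args': {        # kept, so the SSL settings still reach create_engine()
        'ssl': {'ca': '/path/server-ca',
                'cert': '/path/client-cert',
                'key': '/path/client-key'},
    },
}

# Same expression as in the snippet above.
filtered = dict([(k, v) for k, v in engine_options.items()
                 if not k.startswith('pool')])

print(filtered)
# {'connect_args': {'ssl': {'ca': '/path/server-ca',
#                           'cert': '/path/client-cert',
#                           'key': '/path/client-key'}}}
```

With this in place, `create_engine(dburi, poolclass=NullPool, **filtered)` would still use `NullPool` in the un-forked case but would also receive the `connect_args` needed for SSL.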
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: >=2.7
* **Minimal Celery Version**: >=4.0.0
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
Used Airflow
### Other Dependencies
N/A
## Minimally Reproducible Test Case
In a Python shell,
build the result_backend URL with the private MySQL IP, giving something like `db+mysql://airflow:***@10.x.xx.xx/airflow`,
and define the Celery config:
```
celery_configuration =
{'broker_transport_options': {'visibility_timeout': 21600},
'result_serializer': 'pickle',
'task_acks_late': True,
'database_engine_options': { 'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}},
'task_default_queue': 'default',
'worker_concurrency': 32,
'worker_prefetch_multiplier': 1,
'event_serializer': 'json',
'accept_content': ['json', 'pickle'],
'broker_url': 'redis://{URL}/1',
'result_backend': 'db+mysql://airflow:***@10.x.xx.xx/airflow',
'task_default_exchange': 'default'}
```
the most important line here is:
` 'database_engine_options': { 'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}}`
Then try to connect to the result_backend by creating the app:
```
app = Celery(celery_app_name=airflow.executors.celery_executor,
config_source=celery_configuration)
```
create a database backend
```
dbbe = database.DatabaseBackend(url={results_backend url without the 'db+' in the beginning}, engine_options=celery_configuration['database_engine_options'], app=app)
```
and you will get the error again
```
sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'airflow'@'10.xx.xx.xxx' (using password: YES)")
(Background on this error at: http://sqlalche.me/e/e3q8)
```
# Expected Behavior
It seems like the expected behavior here would be for the connection to be successful and use the SSL certs in the **kwargs passed into `get_engine`.
# Actual Behavior
Since `self.forked` is not True (and will not become True), the engine is created with:
```
return create_engine(dburi, poolclass=NullPool)
```
since the SSL certs are not included, an error is returned and the connection is _not_ successful.
```
sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'airflow'@'10.xx.xx.xxx' (using password: YES)")
(Background on this error at: http://sqlalche.me/e/e3q8)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/backends/database/session.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """SQLAlchemy session."""
3 from __future__ import absolute_import, unicode_literals
4
5 from kombu.utils.compat import register_after_fork
6 from sqlalchemy import create_engine
7 from sqlalchemy.ext.declarative import declarative_base
8 from sqlalchemy.orm import sessionmaker
9 from sqlalchemy.pool import NullPool
10
11 ResultModelBase = declarative_base()
12
13 __all__ = ('SessionManager',)
14
15
16 def _after_fork_cleanup_session(session):
17 session._after_fork()
18
19
20 class SessionManager(object):
21 """Manage SQLAlchemy sessions."""
22
23 def __init__(self):
24 self._engines = {}
25 self._sessions = {}
26 self.forked = False
27 self.prepared = False
28 if register_after_fork is not None:
29 register_after_fork(self, _after_fork_cleanup_session)
30
31 def _after_fork(self):
32 self.forked = True
33
34 def get_engine(self, dburi, **kwargs):
35 if self.forked:
36 try:
37 return self._engines[dburi]
38 except KeyError:
39 engine = self._engines[dburi] = create_engine(dburi, **kwargs)
40 return engine
41 else:
42 return create_engine(dburi, poolclass=NullPool)
43
44 def create_session(self, dburi, short_lived_sessions=False, **kwargs):
45 engine = self.get_engine(dburi, **kwargs)
46 if self.forked:
47 if short_lived_sessions or dburi not in self._sessions:
48 self._sessions[dburi] = sessionmaker(bind=engine)
49 return engine, self._sessions[dburi]
50 return engine, sessionmaker(bind=engine)
51
52 def prepare_models(self, engine):
53 if not self.prepared:
54 ResultModelBase.metadata.create_all(engine)
55 self.prepared = True
56
57 def session_factory(self, dburi, **kwargs):
58 engine, session = self.create_session(dburi, **kwargs)
59 self.prepare_models(engine)
60 return session()
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/celery/backends/database/session.py b/celery/backends/database/session.py
--- a/celery/backends/database/session.py
+++ b/celery/backends/database/session.py
@@ -39,7 +39,9 @@
engine = self._engines[dburi] = create_engine(dburi, **kwargs)
return engine
else:
- return create_engine(dburi, poolclass=NullPool)
+ kwargs = dict([(k, v) for k, v in kwargs.items() if
+ not k.startswith('pool')])
+ return create_engine(dburi, poolclass=NullPool, **kwargs)
def create_session(self, dburi, short_lived_sessions=False, **kwargs):
engine = self.get_engine(dburi, **kwargs)
| {"golden_diff": "diff --git a/celery/backends/database/session.py b/celery/backends/database/session.py\n--- a/celery/backends/database/session.py\n+++ b/celery/backends/database/session.py\n@@ -39,7 +39,9 @@\n engine = self._engines[dburi] = create_engine(dburi, **kwargs)\n return engine\n else:\n- return create_engine(dburi, poolclass=NullPool)\n+ kwargs = dict([(k, v) for k, v in kwargs.items() if\n+ not k.startswith('pool')])\n+ return create_engine(dburi, poolclass=NullPool, **kwargs)\n \n def create_session(self, dburi, short_lived_sessions=False, **kwargs):\n engine = self.get_engine(dburi, **kwargs)\n", "issue": "Unable to use mysql SSL parameters in create_engine()\nPR for proposed fix to this issue: https://github.com/celery/celery/pull/6020\r\n\r\n# Checklist\r\n- [x] I have verified that the issue exists against the `master` branch of Celery.\r\n- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.\r\n- [x] I have read the relevant section in the\r\n [contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)\r\n on reporting bugs.\r\n- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)\r\n for similar or identical bug reports.\r\n- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)\r\n for existing proposed fixes.\r\n- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)\r\n to find out if the bug was already fixed in the master branch.\r\n- [x] I have included all related issues and possible duplicate issues\r\n in this issue (If there are none, check this box anyway).\r\n\r\n## Mandatory Debugging Information\r\n\r\n- [x] I have included the output of ``celery -A proj report`` in the issue.\r\n (if you are not able to do this, then at least specify the Celery\r\n version affected).\r\n- [x] I have verified that the issue exists against the `master` branch of Celery.\r\n- [x] I have included the contents of ``pip freeze`` in the issue.\r\n- [x] I have included all the versions of all the external dependencies required\r\n to reproduce this bug.\r\n\r\n## Optional Debugging Information\r\n- [ ] I have tried reproducing the issue on more than one Python version\r\n and/or implementation.\r\n- [x] I have tried reproducing the issue on more than one message broker and/or\r\n result backend.\r\n- [ ] I have tried reproducing the issue on more than one version of the message\r\n broker and/or result backend.\r\n- [x] I have tried reproducing the issue on more than one operating system.\r\n- [ ] I have tried reproducing the issue on more than one workers pool.\r\n- [ ] I have tried reproducing the issue with autoscaling, retries,\r\n ETA/Countdown & rate limits disabled.\r\n- [ ] I have tried reproducing the issue after downgrading\r\n and/or upgrading Celery and its dependencies.\r\n\r\n## Related Issues and Possible Duplicates\r\nhttps://github.com/celery/celery/commit/94dae1b899aae6ae2ca333773fddbc6dd603213c \r\nThis PR was made to address the following issue, which has resulted in the issue I am having now. 
https://github.com/celery/celery/issues/1930 \r\n\r\n#### Related Issues\r\n\r\nhttps://github.com/celery/celery/issues/1930\r\n\r\n#### Possible Duplicates\r\n\r\n- None\r\n\r\n## Environment & Settings\r\n<!-- Include the contents of celery --version below -->\r\n**Celery version**: celery>=4.0.0 (using it in Airflow) \r\n\r\n</p>\r\n</details>\r\n\r\n# Steps to Reproduce\r\n(see Minimally Reproducible Test Case for step by step commands. This contains information leading to the issue and a proposed fix)\r\n\r\nIn Airflow, you can set celery configs. I was setting up cloudsql to use a private IP instead of a proxy. Currently, we use mysql as the `results_backend`. Changing the host address from local host to the private IP caused some errors, as expected. \r\n\r\n```\r\nOperationalError: (_mysql_exceptions.OperationalError) (1045, \"Access denied for user 'airflow'@'10.x.x.xxx' (using password: YES)\")\r\n```\r\nIn order to use the private IP, I need to use the SSL cert, key, and ca. I confirmed that by logging into the Airflow worker and scheduler pods that my url and engine arg params worked. \r\n\r\n```\r\nfrom airflow.models import DagRun \r\nfrom sqlalchemy import create_engine\r\nfrom sqlalchemy.orm import scoped_session, sessionmaker\r\ne = create_engine({AIRFLOW__CELERY__SQL_ALCHEMY_CONN},connect_args= {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}})\r\ns = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=e))\r\ns.query(DagRun).all()\r\n```\r\n\r\nThis worked fine, so I know that the my ssl certs are accessible, the engine can be created, and a session used. Non-celery mysql connections no longer gave an error. \r\n\r\nThe Celery documentation (https://docs.celeryproject.org/en/stable/userguide/configuration.html#conf-database-result-backend) outlines how to add engine args to via `database_engine_options`. Therefore, I added \r\n```\r\n'database_engine_options': {\r\n 'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}}\r\n```\r\nHowever, I still get the same error. \r\n\r\n```\r\nOperationalError: (_mysql_exceptions.OperationalError) (1045, \"Access denied for user 'airflow'@'10.x.x.xxx' (using password: YES)\")\r\n```\r\n\r\n\r\nAdditionally, I get logs in the scheduler like the following:\r\n```\r\n {{__init__.py:56}} WARNING - Failed operation _get_task_meta_for. 
Retrying 1 more times.\r\n68918-Traceback (most recent call last):\r\n68919- File \"/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py\", line 51, in _inner\r\n68920- return fun(*args, **kwargs)\r\n68921- File \"/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py\", line 154, in _get_task_meta_for\r\n68922: session = self.ResultSession()\r\n68923: File \"/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py\", line 113, in ResultSession\r\n68924- **self.engine_options)\r\n68925- File \"/usr/local/lib/python2.7/dist-packages/celery/backends/database/session.py\", line 59, in session_factory\r\n68926- self.prepare_models(engine)\r\n68927- File \"/usr/local/lib/python2.7/dist-packages/celery/backends/database/session.py\", line 54, in prepare_models\r\n68928- ResultModelBase.metadata.create_all(engine)\r\n```\r\nAfter digging through the code with @dangermike, we noticed that `get_engine` will not use the kwargs passed to it unless it has been forked.(https://github.com/celery/celery/blob/master/celery/backends/database/session.py#L34) Therefore, the SSL params will not be passed in our case. The only place that self.forked = True is after the fork cleanup session. This used to not be the case (https://github.com/celery/celery/commit/94dae1b899aae6ae2ca333773fddbc6dd603213c), but after an issue was made about passing pool_size (https://github.com/celery/celery/issues/1930), `**kwargs` were taken out of create_engine() entirely. \r\nPossibly something like the following would allow for kwargs to be passed in, while still addressing the pool params issue.\r\n\r\n```\r\nclass SessionManager(object):\r\n # ...\r\n def get_engine(self, dburi, **kwargs):\r\n if self.forked:\r\n try:\r\n return self._engines[dburi]\r\n except KeyError:\r\n engine = self._engines[dburi] = create_engine(dburi, **kwargs)\r\n return engine\r\n else:\r\n kwargs = dict([(k, v) for k, v in kwargs.items() if not k.startswith('pool')])\r\n return create_engine(dburi, poolclass=NullPool, **kwargs)\r\n```\r\n\r\nwhere `kwargs = dict([(k, v) for k, v in kwargs.items() if not k.startswith('pool')])` omits any pool args while keeping the rest. 
\r\n\r\n\r\n## Required Dependencies\r\n<!-- Please fill the required dependencies to reproduce this issue -->\r\n* **Minimal Python Version**: >=2.7\r\n* **Minimal Celery Version**: >=4.0.0\r\n* **Minimal Kombu Version**: N/A or Unknown\r\n* **Minimal Broker Version**: N/A or Unknown\r\n* **Minimal Result Backend Version**: N/A or Unknown\r\n* **Minimal OS and/or Kernel Version**: N/A or Unknown\r\n* **Minimal Broker Client Version**: N/A or Unknown\r\n* **Minimal Result Backend Client Version**: N/A or Unknown\r\n\r\n### Python Packages\r\nUsed Airflow\r\n\r\n### Other Dependencies\r\nN/A\r\n\r\n## Minimally Reproducible Test Case\r\n\r\nIn a python shell, \r\n\r\nget the url with a private mysql IP to make result_backend, giving something like `db+mysql://airflow:***@10.x.xx.xx/airflow`\r\n\r\nand the celery config\r\n```\r\ncelery_configuration = \r\n{'broker_transport_options': {'visibility_timeout': 21600},\r\n 'result_serializer': 'pickle',\r\n 'task_acks_late': True, \r\n'database_engine_options': { 'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}}, \r\n'task_default_queue': 'default',\r\n 'worker_concurrency': 32,\r\n 'worker_prefetch_multiplier': 1, \r\n'event_serializer': 'json', \r\n'accept_content': ['json', 'pickle'], \r\n'broker_url': 'redis://{URL}/1', \r\n'result_backend': 'db+mysql://airflow:***@10.x.xx.xx/airflow', \r\n'task_default_exchange': 'default'}\r\n```\r\nthe line most important here is:\r\n` 'database_engine_options': { 'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}}`\r\n\r\nthen try to connect to result_backend by creating app.\r\n```\r\napp = Celery(celery_app_name=airflow.executors.celery_executor,\r\n config_source=celery_configuration)\r\n```\r\n\r\ncreate a database backend \r\n```\r\ndbbe = database.DatabaseBackend(url={results_backend url without the 'db+' in the beginning}, engine_options=celery_configuration['database_engine_options'], app=app)\r\n```\r\n\r\n\r\nand you will get the error again \r\n\r\n```\r\nsqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (1045, \"Access denied for user 'airflow'@'10.xx.xx.xxx' (using password: YES)\")\r\n(Background on this error at: http://sqlalche.me/e/e3q8)\r\n```\r\n\r\n\r\n\r\n# Expected Behavior\r\nIt seems like the expected behavior here would be for the connection to be successful and use the SSL certs in the **kwargs passed into `get_engine`. \r\n\r\n# Actual Behavior\r\nSince self.fork is not True, and will not be True, create_engine is made by:\r\n```\r\n return create_engine(dburi, poolclass=NullPool)\r\n```\r\nsince the SSL certs are not included, an error is returned and the connection is _not_ successful. 
\r\n\r\n```\r\nsqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (1045, \"Access denied for user 'airflow'@'10.xx.xx.xxx' (using password: YES)\")\r\n(Background on this error at: http://sqlalche.me/e/e3q8)\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"SQLAlchemy session.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nfrom kombu.utils.compat import register_after_fork\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import sessionmaker\nfrom sqlalchemy.pool import NullPool\n\nResultModelBase = declarative_base()\n\n__all__ = ('SessionManager',)\n\n\ndef _after_fork_cleanup_session(session):\n session._after_fork()\n\n\nclass SessionManager(object):\n \"\"\"Manage SQLAlchemy sessions.\"\"\"\n\n def __init__(self):\n self._engines = {}\n self._sessions = {}\n self.forked = False\n self.prepared = False\n if register_after_fork is not None:\n register_after_fork(self, _after_fork_cleanup_session)\n\n def _after_fork(self):\n self.forked = True\n\n def get_engine(self, dburi, **kwargs):\n if self.forked:\n try:\n return self._engines[dburi]\n except KeyError:\n engine = self._engines[dburi] = create_engine(dburi, **kwargs)\n return engine\n else:\n return create_engine(dburi, poolclass=NullPool)\n\n def create_session(self, dburi, short_lived_sessions=False, **kwargs):\n engine = self.get_engine(dburi, **kwargs)\n if self.forked:\n if short_lived_sessions or dburi not in self._sessions:\n self._sessions[dburi] = sessionmaker(bind=engine)\n return engine, self._sessions[dburi]\n return engine, sessionmaker(bind=engine)\n\n def prepare_models(self, engine):\n if not self.prepared:\n ResultModelBase.metadata.create_all(engine)\n self.prepared = True\n\n def session_factory(self, dburi, **kwargs):\n engine, session = self.create_session(dburi, **kwargs)\n self.prepare_models(engine)\n return session()\n", "path": "celery/backends/database/session.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"SQLAlchemy session.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nfrom kombu.utils.compat import register_after_fork\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import sessionmaker\nfrom sqlalchemy.pool import NullPool\n\nResultModelBase = declarative_base()\n\n__all__ = ('SessionManager',)\n\n\ndef _after_fork_cleanup_session(session):\n session._after_fork()\n\n\nclass SessionManager(object):\n \"\"\"Manage SQLAlchemy sessions.\"\"\"\n\n def __init__(self):\n self._engines = {}\n self._sessions = {}\n self.forked = False\n self.prepared = False\n if register_after_fork is not None:\n register_after_fork(self, _after_fork_cleanup_session)\n\n def _after_fork(self):\n self.forked = True\n\n def get_engine(self, dburi, **kwargs):\n if self.forked:\n try:\n return self._engines[dburi]\n except KeyError:\n engine = self._engines[dburi] = create_engine(dburi, **kwargs)\n return engine\n else:\n kwargs = dict([(k, v) for k, v in kwargs.items() if\n not k.startswith('pool')])\n return create_engine(dburi, poolclass=NullPool, **kwargs)\n\n def create_session(self, dburi, short_lived_sessions=False, **kwargs):\n engine = self.get_engine(dburi, **kwargs)\n if self.forked:\n if short_lived_sessions or dburi not in self._sessions:\n self._sessions[dburi] = sessionmaker(bind=engine)\n return engine, self._sessions[dburi]\n return engine, sessionmaker(bind=engine)\n\n def 
prepare_models(self, engine):\n if not self.prepared:\n ResultModelBase.metadata.create_all(engine)\n self.prepared = True\n\n def session_factory(self, dburi, **kwargs):\n engine, session = self.create_session(dburi, **kwargs)\n self.prepare_models(engine)\n return session()\n", "path": "celery/backends/database/session.py"}]} | 3,551 | 173 |
gh_patches_debug_44511 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-743 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'module' object has no attribute 'meta_of_feedstock'
https://travis-ci.org/conda-forge/staged-recipes/builds/367829917#L880
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_smithy/github.py`
Content:
```
1 from __future__ import absolute_import, print_function
2
3 import os
4 import random
5 from random import choice
6
7 import git
8 from git import Repo
9
10 import github
11 from github import Github
12 from github.GithubException import GithubException
13 from github.Organization import Organization
14 from github.Team import Team
15
16 from . import configure_feedstock
17
18
19 def gh_token():
20 try:
21 with open(os.path.expanduser('~/.conda-smithy/github.token'), 'r') as fh:
22 token = fh.read().strip()
23 except IOError:
24 msg = ('No github token. Go to https://github.com/settings/tokens/new and generate\n'
25 'a token with repo access. Put it in ~/.conda-smithy/github.token')
26 raise RuntimeError(msg)
27 return token
28
29
30 def create_team(org, name, description, repo_names=[]):
31 # PyGithub creates secret teams, and has no way of turning that off! :(
32 post_parameters = {
33 "name": name,
34 "description": description,
35 "privacy": "closed",
36 "permission": "push",
37 "repo_names": repo_names
38 }
39 headers, data = org._requester.requestJsonAndCheck(
40 "POST",
41 org.url + "/teams",
42 input=post_parameters
43 )
44 return Team(org._requester, headers, data, completed=True)
45
46
47 def add_membership(team, member):
48 headers, data = team._requester.requestJsonAndCheck(
49 "PUT",
50 team.url + "/memberships/" + member
51 )
52 return (headers, data)
53
54
55 def remove_membership(team, member):
56 headers, data = team._requester.requestJsonAndCheck(
57 "DELETE",
58 team.url + "/memberships/" + member
59 )
60 return (headers, data)
61
62
63 def has_in_members(team, member):
64 status, headers, data = team._requester.requestJson(
65 "GET",
66 team.url + "/members/" + member
67 )
68 return status == 204
69
70
71 def get_cached_team(org, team_name, description=""):
72 cached_file = os.path.expanduser('~/.conda-smithy/{}-{}-team'.format(org.login, team_name))
73 try:
74 with open(cached_file, 'r') as fh:
75 team_id = int(fh.read().strip())
76 return org.get_team(team_id)
77 except IOError:
78 pass
79
80 team = next((team for team in org.get_teams() if team.name == team_name), None)
81 if not team:
82 team = create_team(org, team_name, description, [])
83
84 with open(cached_file, 'w') as fh:
85 fh.write(str(team.id))
86
87 return team
88
89
90 def create_github_repo(args):
91 token = gh_token()
92 meta = configure_feedstock.meta_of_feedstock(args.feedstock_directory)
93
94 gh = Github(token)
95 user_or_org = None
96 if args.user is not None:
97 pass
98 # User has been defined, and organization has not.
99 user_or_org = gh.get_user()
100 else:
101 # Use the organization provided.
102 user_or_org = gh.get_organization(args.organization)
103
104 repo_name = '{}-feedstock'.format(meta.name())
105 try:
106 gh_repo = user_or_org.create_repo(repo_name, has_wiki=False,
107 description='A conda-smithy repository for {}.'.format(meta.name()))
108 print('Created {} on github'.format(gh_repo.full_name))
109 except GithubException as gh_except:
110 if gh_except.data.get('errors', [{}])[0].get('message', '') != u'name already exists on this account':
111 raise
112 gh_repo = user_or_org.get_repo(repo_name)
113 print('Github repository already exists.')
114
115 # Now add this new repo as a remote on the local clone.
116 repo = Repo(args.feedstock_directory)
117 remote_name = args.remote_name.strip()
118 if remote_name:
119 if remote_name in [remote.name for remote in repo.remotes]:
120 existing_remote = repo.remotes[remote_name]
121 if existing_remote.url != gh_repo.ssh_url:
122 print("Remote {} already exists, and doesn't point to {} "
123 "(it points to {}).".format(remote_name, gh_repo.ssh_url, existing_remote.url))
124 else:
125 repo.create_remote(remote_name, gh_repo.ssh_url)
126
127 if args.add_teams:
128 if isinstance(user_or_org, Organization):
129 configure_github_team(meta, gh_repo, user_or_org)
130
131
132 def configure_github_team(meta, gh_repo, org):
133
134 # Add a team for this repo and add the maintainers to it.
135 superlative = [
136 'awesome', 'slick', 'formidable', 'awe-inspiring',
137 'breathtaking', 'magnificent', 'wonderous', 'stunning',
138 'astonishing', 'superb', 'splendid', 'impressive',
139 'unbeatable', 'excellent', 'top', 'outstanding', 'exalted',
140 'standout', 'smashing'
141 ]
142
143 maintainers = set(
144 meta.meta.get('extra', {}).get('recipe-maintainers', [])
145 )
146 maintainers = set(maintainer.lower() for maintainer in maintainers)
147 team_name = meta.name()
148 # Try to get team or create it if it doesn't exist.
149 team = next((team for team in gh_repo.get_teams() if team.name == team_name), None)
150 current_maintainers = []
151 if not team:
152 team = create_team(
153 org,
154 team_name,
155 'The {} {} contributors!'.format(
156 choice(superlative), team_name
157 )
158 )
159 team.add_to_repos(gh_repo)
160 else:
161 current_maintainers = team.get_members()
162
163
164 # Add only the new maintainers to the team.
165 current_maintainers_handles = set([
166 e.login.lower() for e in current_maintainers
167 ])
168 for new_maintainer in maintainers - current_maintainers_handles:
169 add_membership(team, new_maintainer)
170
171 # Mention any maintainers that need to be removed (unlikely here).
172 for old_maintainer in current_maintainers_handles - maintainers:
173 print(
174 "AN OLD MEMBER ({}) NEEDS TO BE REMOVED FROM {}".format(
175 old_maintainer, repo_name
176 )
177 )
178
179 # Get the all-members team
180 team_name = 'all-members'
181 description = "All of the awesome {} contributors!".format(org.name)
182 all_members_team = get_cached_team(org, team_name, description)
183 new_conda_forge_members = set()
184
185 # Add new members to all-members
186 for new_member in maintainers - current_maintainers_handles:
187 if not has_in_members(all_members_team, new_member):
188 print(
189 "Adding a new member ({}) to {}. Welcome! :)".format(
190 new_member, org.name
191 )
192 )
193 add_membership(all_members_team, new_member)
194 new_conda_forge_members.add(new_member)
195
196 return maintainers, current_maintainers_handles, new_conda_forge_members
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_smithy/github.py b/conda_smithy/github.py
--- a/conda_smithy/github.py
+++ b/conda_smithy/github.py
@@ -1,19 +1,16 @@
from __future__ import absolute_import, print_function
import os
-import random
from random import choice
-import git
from git import Repo
-import github
from github import Github
from github.GithubException import GithubException
from github.Organization import Organization
from github.Team import Team
-from . import configure_feedstock
+import conda_build.api
def gh_token():
@@ -89,7 +86,14 @@
def create_github_repo(args):
token = gh_token()
- meta = configure_feedstock.meta_of_feedstock(args.feedstock_directory)
+ meta = conda_build.api.render(args.feedstock_directory,
+ permit_undefined_jinja=True, finalize=False,
+ bypass_env_check=True, trim_skip=False)[0][0]
+
+ if "parent_recipe" in meta.meta["extra"]:
+ feedstock_name = meta.meta["extra"]["parent_recipe"]["name"]
+ else:
+ feedstock_name = meta.name()
gh = Github(token)
user_or_org = None
@@ -101,10 +105,11 @@
# Use the organization provided.
user_or_org = gh.get_organization(args.organization)
- repo_name = '{}-feedstock'.format(meta.name())
+ repo_name = '{}-feedstock'.format(feedstock_name)
try:
- gh_repo = user_or_org.create_repo(repo_name, has_wiki=False,
- description='A conda-smithy repository for {}.'.format(meta.name()))
+ gh_repo = user_or_org.create_repo(
+ repo_name, has_wiki=False,
+ description='A conda-smithy repository for {}.'.format(feedstock_name))
print('Created {} on github'.format(gh_repo.full_name))
except GithubException as gh_except:
if gh_except.data.get('errors', [{}])[0].get('message', '') != u'name already exists on this account':
@@ -120,16 +125,17 @@
existing_remote = repo.remotes[remote_name]
if existing_remote.url != gh_repo.ssh_url:
print("Remote {} already exists, and doesn't point to {} "
- "(it points to {}).".format(remote_name, gh_repo.ssh_url, existing_remote.url))
+ "(it points to {}).".format(remote_name, gh_repo.ssh_url,
+ existing_remote.url))
else:
repo.create_remote(remote_name, gh_repo.ssh_url)
if args.add_teams:
if isinstance(user_or_org, Organization):
- configure_github_team(meta, gh_repo, user_or_org)
+ configure_github_team(meta, gh_repo, user_or_org, feedstock_name)
-def configure_github_team(meta, gh_repo, org):
+def configure_github_team(meta, gh_repo, org, feedstock_name):
# Add a team for this repo and add the maintainers to it.
superlative = [
@@ -144,7 +150,7 @@
meta.meta.get('extra', {}).get('recipe-maintainers', [])
)
maintainers = set(maintainer.lower() for maintainer in maintainers)
- team_name = meta.name()
+ team_name = feedstock_name
# Try to get team or create it if it doesn't exist.
team = next((team for team in gh_repo.get_teams() if team.name == team_name), None)
current_maintainers = []
@@ -160,7 +166,6 @@
else:
current_maintainers = team.get_members()
-
# Add only the new maintainers to the team.
current_maintainers_handles = set([
e.login.lower() for e in current_maintainers
@@ -172,7 +177,7 @@
for old_maintainer in current_maintainers_handles - maintainers:
print(
"AN OLD MEMBER ({}) NEEDS TO BE REMOVED FROM {}".format(
- old_maintainer, repo_name
+ old_maintainer, gh_repo
)
)
| {"golden_diff": "diff --git a/conda_smithy/github.py b/conda_smithy/github.py\n--- a/conda_smithy/github.py\n+++ b/conda_smithy/github.py\n@@ -1,19 +1,16 @@\n from __future__ import absolute_import, print_function\n \n import os\n-import random\n from random import choice\n \n-import git\n from git import Repo\n \n-import github\n from github import Github\n from github.GithubException import GithubException\n from github.Organization import Organization\n from github.Team import Team\n \n-from . import configure_feedstock\n+import conda_build.api\n \n \n def gh_token():\n@@ -89,7 +86,14 @@\n \n def create_github_repo(args):\n token = gh_token()\n- meta = configure_feedstock.meta_of_feedstock(args.feedstock_directory)\n+ meta = conda_build.api.render(args.feedstock_directory,\n+ permit_undefined_jinja=True, finalize=False,\n+ bypass_env_check=True, trim_skip=False)[0][0]\n+\n+ if \"parent_recipe\" in meta.meta[\"extra\"]:\n+ feedstock_name = meta.meta[\"extra\"][\"parent_recipe\"][\"name\"]\n+ else:\n+ feedstock_name = meta.name()\n \n gh = Github(token)\n user_or_org = None\n@@ -101,10 +105,11 @@\n # Use the organization provided.\n user_or_org = gh.get_organization(args.organization)\n \n- repo_name = '{}-feedstock'.format(meta.name())\n+ repo_name = '{}-feedstock'.format(feedstock_name)\n try:\n- gh_repo = user_or_org.create_repo(repo_name, has_wiki=False,\n- description='A conda-smithy repository for {}.'.format(meta.name()))\n+ gh_repo = user_or_org.create_repo(\n+ repo_name, has_wiki=False,\n+ description='A conda-smithy repository for {}.'.format(feedstock_name))\n print('Created {} on github'.format(gh_repo.full_name))\n except GithubException as gh_except:\n if gh_except.data.get('errors', [{}])[0].get('message', '') != u'name already exists on this account':\n@@ -120,16 +125,17 @@\n existing_remote = repo.remotes[remote_name]\n if existing_remote.url != gh_repo.ssh_url:\n print(\"Remote {} already exists, and doesn't point to {} \"\n- \"(it points to {}).\".format(remote_name, gh_repo.ssh_url, existing_remote.url))\n+ \"(it points to {}).\".format(remote_name, gh_repo.ssh_url,\n+ existing_remote.url))\n else:\n repo.create_remote(remote_name, gh_repo.ssh_url)\n \n if args.add_teams:\n if isinstance(user_or_org, Organization):\n- configure_github_team(meta, gh_repo, user_or_org)\n+ configure_github_team(meta, gh_repo, user_or_org, feedstock_name)\n \n \n-def configure_github_team(meta, gh_repo, org):\n+def configure_github_team(meta, gh_repo, org, feedstock_name):\n \n # Add a team for this repo and add the maintainers to it.\n superlative = [\n@@ -144,7 +150,7 @@\n meta.meta.get('extra', {}).get('recipe-maintainers', [])\n )\n maintainers = set(maintainer.lower() for maintainer in maintainers)\n- team_name = meta.name()\n+ team_name = feedstock_name\n # Try to get team or create it if it doesn't exist.\n team = next((team for team in gh_repo.get_teams() if team.name == team_name), None)\n current_maintainers = []\n@@ -160,7 +166,6 @@\n else:\n current_maintainers = team.get_members()\n \n-\n # Add only the new maintainers to the team.\n current_maintainers_handles = set([\n e.login.lower() for e in current_maintainers\n@@ -172,7 +177,7 @@\n for old_maintainer in current_maintainers_handles - maintainers:\n print(\n \"AN OLD MEMBER ({}) NEEDS TO BE REMOVED FROM {}\".format(\n- old_maintainer, repo_name\n+ old_maintainer, gh_repo\n )\n )\n", "issue": "AttributeError: 'module' object has no attribute 
'meta_of_feedstock'\nhttps://travis-ci.org/conda-forge/staged-recipes/builds/367829917#L880\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport os\nimport random\nfrom random import choice\n\nimport git\nfrom git import Repo\n\nimport github\nfrom github import Github\nfrom github.GithubException import GithubException\nfrom github.Organization import Organization\nfrom github.Team import Team\n\nfrom . import configure_feedstock\n\n\ndef gh_token():\n try:\n with open(os.path.expanduser('~/.conda-smithy/github.token'), 'r') as fh:\n token = fh.read().strip()\n except IOError:\n msg = ('No github token. Go to https://github.com/settings/tokens/new and generate\\n'\n 'a token with repo access. Put it in ~/.conda-smithy/github.token')\n raise RuntimeError(msg)\n return token\n\n\ndef create_team(org, name, description, repo_names=[]):\n # PyGithub creates secret teams, and has no way of turning that off! :(\n post_parameters = {\n \"name\": name,\n \"description\": description,\n \"privacy\": \"closed\",\n \"permission\": \"push\",\n \"repo_names\": repo_names\n }\n headers, data = org._requester.requestJsonAndCheck(\n \"POST\",\n org.url + \"/teams\",\n input=post_parameters\n )\n return Team(org._requester, headers, data, completed=True)\n\n\ndef add_membership(team, member):\n headers, data = team._requester.requestJsonAndCheck(\n \"PUT\",\n team.url + \"/memberships/\" + member\n )\n return (headers, data)\n\n\ndef remove_membership(team, member):\n headers, data = team._requester.requestJsonAndCheck(\n \"DELETE\",\n team.url + \"/memberships/\" + member\n )\n return (headers, data)\n\n\ndef has_in_members(team, member):\n status, headers, data = team._requester.requestJson(\n \"GET\",\n team.url + \"/members/\" + member\n )\n return status == 204\n\n\ndef get_cached_team(org, team_name, description=\"\"):\n cached_file = os.path.expanduser('~/.conda-smithy/{}-{}-team'.format(org.login, team_name))\n try:\n with open(cached_file, 'r') as fh:\n team_id = int(fh.read().strip())\n return org.get_team(team_id)\n except IOError:\n pass\n\n team = next((team for team in org.get_teams() if team.name == team_name), None)\n if not team:\n team = create_team(org, team_name, description, [])\n\n with open(cached_file, 'w') as fh:\n fh.write(str(team.id))\n\n return team\n\n\ndef create_github_repo(args):\n token = gh_token()\n meta = configure_feedstock.meta_of_feedstock(args.feedstock_directory)\n\n gh = Github(token)\n user_or_org = None\n if args.user is not None:\n pass\n # User has been defined, and organization has not.\n user_or_org = gh.get_user()\n else:\n # Use the organization provided.\n user_or_org = gh.get_organization(args.organization)\n\n repo_name = '{}-feedstock'.format(meta.name())\n try:\n gh_repo = user_or_org.create_repo(repo_name, has_wiki=False,\n description='A conda-smithy repository for {}.'.format(meta.name()))\n print('Created {} on github'.format(gh_repo.full_name))\n except GithubException as gh_except:\n if gh_except.data.get('errors', [{}])[0].get('message', '') != u'name already exists on this account':\n raise\n gh_repo = user_or_org.get_repo(repo_name)\n print('Github repository already exists.')\n\n # Now add this new repo as a remote on the local clone.\n repo = Repo(args.feedstock_directory)\n remote_name = args.remote_name.strip()\n if remote_name:\n if remote_name in [remote.name for remote in repo.remotes]:\n existing_remote = repo.remotes[remote_name]\n if existing_remote.url != gh_repo.ssh_url:\n 
print(\"Remote {} already exists, and doesn't point to {} \"\n \"(it points to {}).\".format(remote_name, gh_repo.ssh_url, existing_remote.url))\n else:\n repo.create_remote(remote_name, gh_repo.ssh_url)\n\n if args.add_teams:\n if isinstance(user_or_org, Organization):\n configure_github_team(meta, gh_repo, user_or_org)\n\n\ndef configure_github_team(meta, gh_repo, org):\n\n # Add a team for this repo and add the maintainers to it.\n superlative = [\n 'awesome', 'slick', 'formidable', 'awe-inspiring',\n 'breathtaking', 'magnificent', 'wonderous', 'stunning',\n 'astonishing', 'superb', 'splendid', 'impressive',\n 'unbeatable', 'excellent', 'top', 'outstanding', 'exalted',\n 'standout', 'smashing'\n ]\n\n maintainers = set(\n meta.meta.get('extra', {}).get('recipe-maintainers', [])\n )\n maintainers = set(maintainer.lower() for maintainer in maintainers)\n team_name = meta.name()\n # Try to get team or create it if it doesn't exist.\n team = next((team for team in gh_repo.get_teams() if team.name == team_name), None)\n current_maintainers = []\n if not team:\n team = create_team(\n org,\n team_name,\n 'The {} {} contributors!'.format(\n choice(superlative), team_name\n )\n )\n team.add_to_repos(gh_repo)\n else:\n current_maintainers = team.get_members()\n\n\n # Add only the new maintainers to the team.\n current_maintainers_handles = set([\n e.login.lower() for e in current_maintainers\n ])\n for new_maintainer in maintainers - current_maintainers_handles:\n add_membership(team, new_maintainer)\n\n # Mention any maintainers that need to be removed (unlikely here).\n for old_maintainer in current_maintainers_handles - maintainers:\n print(\n \"AN OLD MEMBER ({}) NEEDS TO BE REMOVED FROM {}\".format(\n old_maintainer, repo_name\n )\n )\n\n # Get the all-members team\n team_name = 'all-members'\n description = \"All of the awesome {} contributors!\".format(org.name)\n all_members_team = get_cached_team(org, team_name, description)\n new_conda_forge_members = set()\n\n # Add new members to all-members\n for new_member in maintainers - current_maintainers_handles:\n if not has_in_members(all_members_team, new_member):\n print(\n \"Adding a new member ({}) to {}. Welcome! :)\".format(\n new_member, org.name\n )\n )\n add_membership(all_members_team, new_member)\n new_conda_forge_members.add(new_member)\n\n return maintainers, current_maintainers_handles, new_conda_forge_members\n", "path": "conda_smithy/github.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport os\nfrom random import choice\n\nfrom git import Repo\n\nfrom github import Github\nfrom github.GithubException import GithubException\nfrom github.Organization import Organization\nfrom github.Team import Team\n\nimport conda_build.api\n\n\ndef gh_token():\n try:\n with open(os.path.expanduser('~/.conda-smithy/github.token'), 'r') as fh:\n token = fh.read().strip()\n except IOError:\n msg = ('No github token. Go to https://github.com/settings/tokens/new and generate\\n'\n 'a token with repo access. Put it in ~/.conda-smithy/github.token')\n raise RuntimeError(msg)\n return token\n\n\ndef create_team(org, name, description, repo_names=[]):\n # PyGithub creates secret teams, and has no way of turning that off! 
:(\n post_parameters = {\n \"name\": name,\n \"description\": description,\n \"privacy\": \"closed\",\n \"permission\": \"push\",\n \"repo_names\": repo_names\n }\n headers, data = org._requester.requestJsonAndCheck(\n \"POST\",\n org.url + \"/teams\",\n input=post_parameters\n )\n return Team(org._requester, headers, data, completed=True)\n\n\ndef add_membership(team, member):\n headers, data = team._requester.requestJsonAndCheck(\n \"PUT\",\n team.url + \"/memberships/\" + member\n )\n return (headers, data)\n\n\ndef remove_membership(team, member):\n headers, data = team._requester.requestJsonAndCheck(\n \"DELETE\",\n team.url + \"/memberships/\" + member\n )\n return (headers, data)\n\n\ndef has_in_members(team, member):\n status, headers, data = team._requester.requestJson(\n \"GET\",\n team.url + \"/members/\" + member\n )\n return status == 204\n\n\ndef get_cached_team(org, team_name, description=\"\"):\n cached_file = os.path.expanduser('~/.conda-smithy/{}-{}-team'.format(org.login, team_name))\n try:\n with open(cached_file, 'r') as fh:\n team_id = int(fh.read().strip())\n return org.get_team(team_id)\n except IOError:\n pass\n\n team = next((team for team in org.get_teams() if team.name == team_name), None)\n if not team:\n team = create_team(org, team_name, description, [])\n\n with open(cached_file, 'w') as fh:\n fh.write(str(team.id))\n\n return team\n\n\ndef create_github_repo(args):\n token = gh_token()\n meta = conda_build.api.render(args.feedstock_directory,\n permit_undefined_jinja=True, finalize=False,\n bypass_env_check=True, trim_skip=False)[0][0]\n\n if \"parent_recipe\" in meta.meta[\"extra\"]:\n feedstock_name = meta.meta[\"extra\"][\"parent_recipe\"][\"name\"]\n else:\n feedstock_name = meta.name()\n\n gh = Github(token)\n user_or_org = None\n if args.user is not None:\n pass\n # User has been defined, and organization has not.\n user_or_org = gh.get_user()\n else:\n # Use the organization provided.\n user_or_org = gh.get_organization(args.organization)\n\n repo_name = '{}-feedstock'.format(feedstock_name)\n try:\n gh_repo = user_or_org.create_repo(\n repo_name, has_wiki=False,\n description='A conda-smithy repository for {}.'.format(feedstock_name))\n print('Created {} on github'.format(gh_repo.full_name))\n except GithubException as gh_except:\n if gh_except.data.get('errors', [{}])[0].get('message', '') != u'name already exists on this account':\n raise\n gh_repo = user_or_org.get_repo(repo_name)\n print('Github repository already exists.')\n\n # Now add this new repo as a remote on the local clone.\n repo = Repo(args.feedstock_directory)\n remote_name = args.remote_name.strip()\n if remote_name:\n if remote_name in [remote.name for remote in repo.remotes]:\n existing_remote = repo.remotes[remote_name]\n if existing_remote.url != gh_repo.ssh_url:\n print(\"Remote {} already exists, and doesn't point to {} \"\n \"(it points to {}).\".format(remote_name, gh_repo.ssh_url,\n existing_remote.url))\n else:\n repo.create_remote(remote_name, gh_repo.ssh_url)\n\n if args.add_teams:\n if isinstance(user_or_org, Organization):\n configure_github_team(meta, gh_repo, user_or_org, feedstock_name)\n\n\ndef configure_github_team(meta, gh_repo, org, feedstock_name):\n\n # Add a team for this repo and add the maintainers to it.\n superlative = [\n 'awesome', 'slick', 'formidable', 'awe-inspiring',\n 'breathtaking', 'magnificent', 'wonderous', 'stunning',\n 'astonishing', 'superb', 'splendid', 'impressive',\n 'unbeatable', 'excellent', 'top', 'outstanding', 'exalted',\n 
'standout', 'smashing'\n ]\n\n maintainers = set(\n meta.meta.get('extra', {}).get('recipe-maintainers', [])\n )\n maintainers = set(maintainer.lower() for maintainer in maintainers)\n team_name = feedstock_name\n # Try to get team or create it if it doesn't exist.\n team = next((team for team in gh_repo.get_teams() if team.name == team_name), None)\n current_maintainers = []\n if not team:\n team = create_team(\n org,\n team_name,\n 'The {} {} contributors!'.format(\n choice(superlative), team_name\n )\n )\n team.add_to_repos(gh_repo)\n else:\n current_maintainers = team.get_members()\n\n # Add only the new maintainers to the team.\n current_maintainers_handles = set([\n e.login.lower() for e in current_maintainers\n ])\n for new_maintainer in maintainers - current_maintainers_handles:\n add_membership(team, new_maintainer)\n\n # Mention any maintainers that need to be removed (unlikely here).\n for old_maintainer in current_maintainers_handles - maintainers:\n print(\n \"AN OLD MEMBER ({}) NEEDS TO BE REMOVED FROM {}\".format(\n old_maintainer, gh_repo\n )\n )\n\n # Get the all-members team\n team_name = 'all-members'\n description = \"All of the awesome {} contributors!\".format(org.name)\n all_members_team = get_cached_team(org, team_name, description)\n new_conda_forge_members = set()\n\n # Add new members to all-members\n for new_member in maintainers - current_maintainers_handles:\n if not has_in_members(all_members_team, new_member):\n print(\n \"Adding a new member ({}) to {}. Welcome! :)\".format(\n new_member, org.name\n )\n )\n add_membership(all_members_team, new_member)\n new_conda_forge_members.add(new_member)\n\n return maintainers, current_maintainers_handles, new_conda_forge_members\n", "path": "conda_smithy/github.py"}]} | 2,324 | 933 |
gh_patches_debug_25159 | rasdani/github-patches | git_diff | mlflow__mlflow-9258 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typo fix
https://github.com/mlflow/mlflow/blob/9724c83bd8f0100c465e68e30651a9727de42ce0/dev/show_package_release_dates.py#L49
`package_legnth` -> `package_length`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dev/show_package_release_dates.py`
Content:
```
1 import os
2 import json
3 import sys
4 import subprocess
5 import requests
6 from concurrent.futures import ThreadPoolExecutor
7 import traceback
8
9
10 def get_distributions():
11 res = subprocess.check_output(
12 [sys.executable, "-m", "pip", "list", "--format", "json"], text=True
13 )
14 return [(pkg["name"], pkg["version"]) for pkg in json.loads(res)]
15
16
17 def get_release_date(package, version):
18 resp = requests.get(f"https://pypi.python.org/pypi/{package}/json", timeout=10)
19 if not resp.ok:
20 return ""
21
22 matched = [dist_files for ver, dist_files in resp.json()["releases"].items() if ver == version]
23 if (not matched) or (not matched[0]):
24 return ""
25
26 upload_time = matched[0][0]["upload_time"]
27 return upload_time.split("T")[0] # return year-month-day
28
29
30 def get_longest_string_length(array):
31 return len(max(array, key=len))
32
33
34 def safe_result(future, if_error=""):
35 try:
36 return future.result()
37 except Exception:
38 traceback.print_exc()
39 return if_error
40
41
42 def main():
43 distributions = get_distributions()
44 with ThreadPoolExecutor(max_workers=min(32, os.cpu_count() + 4)) as executor:
45 futures = [executor.submit(get_release_date, pkg, ver) for pkg, ver in distributions]
46 release_dates = [safe_result(f) for f in futures]
47
48 packages, versions = list(zip(*distributions))
49 package_legnth = get_longest_string_length(packages)
50 version_length = get_longest_string_length(versions)
51 release_date_length = len("Release Date")
52 print("Package".ljust(package_legnth), "Version".ljust(version_length), "Release Date")
53 print("-" * (package_legnth + version_length + release_date_length + 2))
54 for package, version, release_date in sorted(
55 zip(packages, versions, release_dates),
56 # Sort by release date in descending order
57 key=lambda x: x[2],
58 reverse=True,
59 ):
60 print(
61 package.ljust(package_legnth),
62 version.ljust(version_length),
63 release_date.ljust(release_date_length),
64 )
65
66
67 if __name__ == "__main__":
68 main()
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dev/show_package_release_dates.py b/dev/show_package_release_dates.py
--- a/dev/show_package_release_dates.py
+++ b/dev/show_package_release_dates.py
@@ -46,11 +46,11 @@
release_dates = [safe_result(f) for f in futures]
packages, versions = list(zip(*distributions))
- package_legnth = get_longest_string_length(packages)
+ package_length = get_longest_string_length(packages)
version_length = get_longest_string_length(versions)
release_date_length = len("Release Date")
- print("Package".ljust(package_legnth), "Version".ljust(version_length), "Release Date")
- print("-" * (package_legnth + version_length + release_date_length + 2))
+ print("Package".ljust(package_length), "Version".ljust(version_length), "Release Date")
+ print("-" * (package_length + version_length + release_date_length + 2))
for package, version, release_date in sorted(
zip(packages, versions, release_dates),
# Sort by release date in descending order
@@ -58,7 +58,7 @@
reverse=True,
):
print(
- package.ljust(package_legnth),
+ package.ljust(package_length),
version.ljust(version_length),
release_date.ljust(release_date_length),
)
| {"golden_diff": "diff --git a/dev/show_package_release_dates.py b/dev/show_package_release_dates.py\n--- a/dev/show_package_release_dates.py\n+++ b/dev/show_package_release_dates.py\n@@ -46,11 +46,11 @@\n release_dates = [safe_result(f) for f in futures]\n \n packages, versions = list(zip(*distributions))\n- package_legnth = get_longest_string_length(packages)\n+ package_length = get_longest_string_length(packages)\n version_length = get_longest_string_length(versions)\n release_date_length = len(\"Release Date\")\n- print(\"Package\".ljust(package_legnth), \"Version\".ljust(version_length), \"Release Date\")\n- print(\"-\" * (package_legnth + version_length + release_date_length + 2))\n+ print(\"Package\".ljust(package_length), \"Version\".ljust(version_length), \"Release Date\")\n+ print(\"-\" * (package_length + version_length + release_date_length + 2))\n for package, version, release_date in sorted(\n zip(packages, versions, release_dates),\n # Sort by release date in descending order\n@@ -58,7 +58,7 @@\n reverse=True,\n ):\n print(\n- package.ljust(package_legnth),\n+ package.ljust(package_length),\n version.ljust(version_length),\n release_date.ljust(release_date_length),\n )\n", "issue": "Typo fix\nhttps://github.com/mlflow/mlflow/blob/9724c83bd8f0100c465e68e30651a9727de42ce0/dev/show_package_release_dates.py#L49\r\n\r\n`package_legnth` -> `package_length`\n", "before_files": [{"content": "import os\nimport json\nimport sys\nimport subprocess\nimport requests\nfrom concurrent.futures import ThreadPoolExecutor\nimport traceback\n\n\ndef get_distributions():\n res = subprocess.check_output(\n [sys.executable, \"-m\", \"pip\", \"list\", \"--format\", \"json\"], text=True\n )\n return [(pkg[\"name\"], pkg[\"version\"]) for pkg in json.loads(res)]\n\n\ndef get_release_date(package, version):\n resp = requests.get(f\"https://pypi.python.org/pypi/{package}/json\", timeout=10)\n if not resp.ok:\n return \"\"\n\n matched = [dist_files for ver, dist_files in resp.json()[\"releases\"].items() if ver == version]\n if (not matched) or (not matched[0]):\n return \"\"\n\n upload_time = matched[0][0][\"upload_time\"]\n return upload_time.split(\"T\")[0] # return year-month-day\n\n\ndef get_longest_string_length(array):\n return len(max(array, key=len))\n\n\ndef safe_result(future, if_error=\"\"):\n try:\n return future.result()\n except Exception:\n traceback.print_exc()\n return if_error\n\n\ndef main():\n distributions = get_distributions()\n with ThreadPoolExecutor(max_workers=min(32, os.cpu_count() + 4)) as executor:\n futures = [executor.submit(get_release_date, pkg, ver) for pkg, ver in distributions]\n release_dates = [safe_result(f) for f in futures]\n\n packages, versions = list(zip(*distributions))\n package_legnth = get_longest_string_length(packages)\n version_length = get_longest_string_length(versions)\n release_date_length = len(\"Release Date\")\n print(\"Package\".ljust(package_legnth), \"Version\".ljust(version_length), \"Release Date\")\n print(\"-\" * (package_legnth + version_length + release_date_length + 2))\n for package, version, release_date in sorted(\n zip(packages, versions, release_dates),\n # Sort by release date in descending order\n key=lambda x: x[2],\n reverse=True,\n ):\n print(\n package.ljust(package_legnth),\n version.ljust(version_length),\n release_date.ljust(release_date_length),\n )\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "dev/show_package_release_dates.py"}], "after_files": [{"content": "import os\nimport json\nimport sys\nimport 
subprocess\nimport requests\nfrom concurrent.futures import ThreadPoolExecutor\nimport traceback\n\n\ndef get_distributions():\n res = subprocess.check_output(\n [sys.executable, \"-m\", \"pip\", \"list\", \"--format\", \"json\"], text=True\n )\n return [(pkg[\"name\"], pkg[\"version\"]) for pkg in json.loads(res)]\n\n\ndef get_release_date(package, version):\n resp = requests.get(f\"https://pypi.python.org/pypi/{package}/json\", timeout=10)\n if not resp.ok:\n return \"\"\n\n matched = [dist_files for ver, dist_files in resp.json()[\"releases\"].items() if ver == version]\n if (not matched) or (not matched[0]):\n return \"\"\n\n upload_time = matched[0][0][\"upload_time\"]\n return upload_time.split(\"T\")[0] # return year-month-day\n\n\ndef get_longest_string_length(array):\n return len(max(array, key=len))\n\n\ndef safe_result(future, if_error=\"\"):\n try:\n return future.result()\n except Exception:\n traceback.print_exc()\n return if_error\n\n\ndef main():\n distributions = get_distributions()\n with ThreadPoolExecutor(max_workers=min(32, os.cpu_count() + 4)) as executor:\n futures = [executor.submit(get_release_date, pkg, ver) for pkg, ver in distributions]\n release_dates = [safe_result(f) for f in futures]\n\n packages, versions = list(zip(*distributions))\n package_length = get_longest_string_length(packages)\n version_length = get_longest_string_length(versions)\n release_date_length = len(\"Release Date\")\n print(\"Package\".ljust(package_length), \"Version\".ljust(version_length), \"Release Date\")\n print(\"-\" * (package_length + version_length + release_date_length + 2))\n for package, version, release_date in sorted(\n zip(packages, versions, release_dates),\n # Sort by release date in descending order\n key=lambda x: x[2],\n reverse=True,\n ):\n print(\n package.ljust(package_length),\n version.ljust(version_length),\n release_date.ljust(release_date_length),\n )\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "dev/show_package_release_dates.py"}]} | 962 | 301 |
gh_patches_debug_27386 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-8360 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problems with Spiders built on YextSpider
I've noticed a few of the spiders whose results I was using in my OSM tool have recently stopped returning any results, including five_guys_de_es_fr_gb.py, matalan_gb.py, and three_gb.py.
A common feature of these spiders is that they use the YextSpider class. Checking some other spiders that are also based on this class reveals they've also stopped working. The spider stats suggest 404 and 403 errors are preventing the data from being retrieved.
If this is a general problem affecting multiple spiders, would someone be able to take a look and see if it can be fixed?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/independent_financial_us.py`
Content:
```
1 from locations.categories import Categories, apply_category
2 from locations.storefinders.yext import YextSpider
3
4
5 class IndependentFinancialUSSpider(YextSpider):
6 name = "independent_financial_us"
7 item_attributes = {"brand": "Independent Financial", "brand_wikidata": "Q6016398"}
8 api_key = "ee4600854cf5501c53831bf944472e57"
9 wanted_types = ["location", "atm"]
10
11 def parse_item(self, item, location):
12 if location["meta"]["entityType"] == "location":
13 apply_category(Categories.BANK, item)
14 item["ref"] = location.get("c_branchCode", location["meta"].get("id"))
15 item["name"] = " ".join(filter(None, [location.get("name"), location.get("geomodifier")]))
16 elif location["meta"]["entityType"] == "atm":
17 apply_category(Categories.ATM, item)
18 item["name"] = location.get("geomodifier")
19 item["website"] = location.get("c_pagesURL")
20 item.pop("email", None)
21 item["extras"].pop("contact:instagram", None)
22 item.pop("twitter", None)
23 item.pop("facebook", None)
24 yield item
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/independent_financial_us.py b/locations/spiders/independent_financial_us.py
--- a/locations/spiders/independent_financial_us.py
+++ b/locations/spiders/independent_financial_us.py
@@ -1,24 +1,18 @@
from locations.categories import Categories, apply_category
-from locations.storefinders.yext import YextSpider
+from locations.storefinders.yext_answers import YextAnswersSpider
-class IndependentFinancialUSSpider(YextSpider):
+class IndependentFinancialUSSpider(YextAnswersSpider):
name = "independent_financial_us"
item_attributes = {"brand": "Independent Financial", "brand_wikidata": "Q6016398"}
api_key = "ee4600854cf5501c53831bf944472e57"
- wanted_types = ["location", "atm"]
+ experience_key = "independent-financial-search"
- def parse_item(self, item, location):
- if location["meta"]["entityType"] == "location":
- apply_category(Categories.BANK, item)
- item["ref"] = location.get("c_branchCode", location["meta"].get("id"))
- item["name"] = " ".join(filter(None, [location.get("name"), location.get("geomodifier")]))
- elif location["meta"]["entityType"] == "atm":
+ def parse_item(self, location, item):
+ if location["data"]["type"] == "atm":
apply_category(Categories.ATM, item)
- item["name"] = location.get("geomodifier")
- item["website"] = location.get("c_pagesURL")
- item.pop("email", None)
- item["extras"].pop("contact:instagram", None)
- item.pop("twitter", None)
- item.pop("facebook", None)
+ elif location["data"]["type"] == "location":
+ apply_category(Categories.BANK, item)
+ else:
+ self.logger.error("Unknown location type: {}".format(location["data"]["type"]))
yield item
| {"golden_diff": "diff --git a/locations/spiders/independent_financial_us.py b/locations/spiders/independent_financial_us.py\n--- a/locations/spiders/independent_financial_us.py\n+++ b/locations/spiders/independent_financial_us.py\n@@ -1,24 +1,18 @@\n from locations.categories import Categories, apply_category\n-from locations.storefinders.yext import YextSpider\n+from locations.storefinders.yext_answers import YextAnswersSpider\n \n \n-class IndependentFinancialUSSpider(YextSpider):\n+class IndependentFinancialUSSpider(YextAnswersSpider):\n name = \"independent_financial_us\"\n item_attributes = {\"brand\": \"Independent Financial\", \"brand_wikidata\": \"Q6016398\"}\n api_key = \"ee4600854cf5501c53831bf944472e57\"\n- wanted_types = [\"location\", \"atm\"]\n+ experience_key = \"independent-financial-search\"\n \n- def parse_item(self, item, location):\n- if location[\"meta\"][\"entityType\"] == \"location\":\n- apply_category(Categories.BANK, item)\n- item[\"ref\"] = location.get(\"c_branchCode\", location[\"meta\"].get(\"id\"))\n- item[\"name\"] = \" \".join(filter(None, [location.get(\"name\"), location.get(\"geomodifier\")]))\n- elif location[\"meta\"][\"entityType\"] == \"atm\":\n+ def parse_item(self, location, item):\n+ if location[\"data\"][\"type\"] == \"atm\":\n apply_category(Categories.ATM, item)\n- item[\"name\"] = location.get(\"geomodifier\")\n- item[\"website\"] = location.get(\"c_pagesURL\")\n- item.pop(\"email\", None)\n- item[\"extras\"].pop(\"contact:instagram\", None)\n- item.pop(\"twitter\", None)\n- item.pop(\"facebook\", None)\n+ elif location[\"data\"][\"type\"] == \"location\":\n+ apply_category(Categories.BANK, item)\n+ else:\n+ self.logger.error(\"Unknown location type: {}\".format(location[\"data\"][\"type\"]))\n yield item\n", "issue": "Problems with Spiders built on YextSpider\nI've noticed a few of the spiders whose results I was using in my OSM tool have recently stopped returning any results, including five_guys_de_es_fr_gb.py , matalan_gb.py , and three_gb.py .\r\n\r\nA common feature of these spiders is that they use the YextSpider class. Checking some other spiders that are also based on this class, reveals they've also stopped working. 
The spider stats suggest 404 and 403 errors are preventing the data being retrieved.\r\n\r\nIf this is a general problem affecting multiple spiders, would someone be able to take a look and see if it can be fixed?\n", "before_files": [{"content": "from locations.categories import Categories, apply_category\nfrom locations.storefinders.yext import YextSpider\n\n\nclass IndependentFinancialUSSpider(YextSpider):\n name = \"independent_financial_us\"\n item_attributes = {\"brand\": \"Independent Financial\", \"brand_wikidata\": \"Q6016398\"}\n api_key = \"ee4600854cf5501c53831bf944472e57\"\n wanted_types = [\"location\", \"atm\"]\n\n def parse_item(self, item, location):\n if location[\"meta\"][\"entityType\"] == \"location\":\n apply_category(Categories.BANK, item)\n item[\"ref\"] = location.get(\"c_branchCode\", location[\"meta\"].get(\"id\"))\n item[\"name\"] = \" \".join(filter(None, [location.get(\"name\"), location.get(\"geomodifier\")]))\n elif location[\"meta\"][\"entityType\"] == \"atm\":\n apply_category(Categories.ATM, item)\n item[\"name\"] = location.get(\"geomodifier\")\n item[\"website\"] = location.get(\"c_pagesURL\")\n item.pop(\"email\", None)\n item[\"extras\"].pop(\"contact:instagram\", None)\n item.pop(\"twitter\", None)\n item.pop(\"facebook\", None)\n yield item\n", "path": "locations/spiders/independent_financial_us.py"}], "after_files": [{"content": "from locations.categories import Categories, apply_category\nfrom locations.storefinders.yext_answers import YextAnswersSpider\n\n\nclass IndependentFinancialUSSpider(YextAnswersSpider):\n name = \"independent_financial_us\"\n item_attributes = {\"brand\": \"Independent Financial\", \"brand_wikidata\": \"Q6016398\"}\n api_key = \"ee4600854cf5501c53831bf944472e57\"\n experience_key = \"independent-financial-search\"\n\n def parse_item(self, location, item):\n if location[\"data\"][\"type\"] == \"atm\":\n apply_category(Categories.ATM, item)\n elif location[\"data\"][\"type\"] == \"location\":\n apply_category(Categories.BANK, item)\n else:\n self.logger.error(\"Unknown location type: {}\".format(location[\"data\"][\"type\"]))\n yield item\n", "path": "locations/spiders/independent_financial_us.py"}]} | 735 | 477 |
gh_patches_debug_39027 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-140 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tenseal dependency for HE is not available on ARM aarch64
The tenseal dependency is not available for the ARM aarch64 platform, causing installation to fail. This has been reported for local development on Mac M1 and will affect other non-x86 architectures (Jetson, Clara AGX, IBM POWER, etc.).
The tenseal dependency is only required when using the HEBuilder module, and it looks like all other functionality could be used without this dependency. Can tenseal be made optional, with the caveat that HE is not available without tenseal?
One option would be to provide an alternate install, a `requirements-no-tenseal.txt` that includes everything but tenseal. For example, I generated this file in a clean venv on my Linux machine using:
```
pip download nvflare -d /tmp -v \
| grep Collecting \
| awk '{print $2}' \
| tr '[:upper:]' '[:lower:]' \
| grep -v tenseal \
| tee requirements-no-tenseal.txt
```
and verified that I can install nvflare and all deps except tenseal by copying to an aarch64 system (in this case a Jetson TX2) with:
```
python3 -m pip install --no-deps -r requirements-no-tenseal.txt
```
This is a pretty awkward solution. It would be much cleaner to remove the tenseal dependency in the default packaging, since HE is optional, and note in the docs that tenseal must be installed when using HE.
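A minimal sketch of what that cleaner approach could look like (hypothetical, not a tested patch — the extra name and messages are placeholders): move tenseal into an optional extra in `setup.py` and guard the import inside the HE builder so provisioning simply skips HE when tenseal is missing.
```python
# --- setup.py (sketch): keep the core install lean, expose HE as an extra ---
# install_requires=[
#     "PyYAML", "psutil", "numpy", "grpcio", "google-api-python-client", "cryptography",
# ],
# extras_require={"HE": ["tenseal==0.3.0"]},  # opt in via: pip install nvflare[HE]

# --- nvflare/lighter/impl/he.py (sketch): tolerate a missing tenseal ---
try:
    import tenseal as ts
except ImportError:  # e.g. aarch64, where no tenseal wheel exists
    ts = None

from nvflare.lighter.spec import Builder


class HEBuilder(Builder):
    def __init__(self, *args, **kwargs):
        if ts is None:
            print("*** tenseal is not installed; HEBuilder is ignored. ***")
            return
        # ... set up the TenSEAL context exactly as today ...

    def build(self, study, ctx):
        if ts is None:
            return  # no tenseal context files are generated
        # ... write server/client tenseal contexts exactly as today ...
```
With something along these lines, `pip install nvflare` would work on aarch64 (and any other platform without tenseal wheels), and HE users could opt in through the extra.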
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) 2021, NVIDIA CORPORATION.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
16 import os
17 import shutil
18 from datetime import datetime
19
20 from setuptools import find_packages, setup
21
22 import versioneer
23 # read the contents of your README file
24 this_directory = os.path.abspath(os.path.dirname(__file__))
25 with open(os.path.join(this_directory, "README.md"), encoding="utf-8") as f:
26 long_description = f.read()
27
28 # with open(os.path.join(this_directory, "nvflare", "__init__.py"), encoding="utf-8") as f:
29 # for line in f.readlines():
30 # if "__version__" in line:
31 # init_version = line.split("=")[1].strip().strip('"')
32 #
33 # nvfl_version = os.environ.get("NVFL_VERSION", init_version)
34 # yymmdd = datetime.today().strftime("%y%m%d")
35 # nvfl_nightly_version = f"{nvfl_version}.dev{yymmdd}"
36 #
37 # if os.environ.get("NVFL_RELEASE"):
38 # package_name = "nvflare"
39 # version = nvfl_version
40 # else:
41 # package_name = "nvflare-nightly"
42 # version = nvfl_nightly_version
43 #
44
45 if os.path.exists(os.path.join(this_directory, "nvflare", "poc.zip")):
46 os.remove(os.path.join(this_directory, "nvflare", "poc.zip"))
47 shutil.make_archive(base_name="poc", format="zip", root_dir=os.path.join(this_directory, "nvflare"), base_dir="poc")
48 shutil.move("poc.zip", os.path.join(this_directory, "nvflare", "poc.zip"))
49 package_name = "nvflare"
50
51 setup(
52 name=package_name,
53 version=versioneer.get_version(),
54 cmdclass=versioneer.get_cmdclass(),
55 description="Federated Learning Application Runtime Environment",
56 url="https://github.com/NVIDIA/NVFlare",
57 package_dir={"nvflare": "nvflare"},
58 packages=find_packages(
59 where=".",
60 include=[
61 "*",
62 ],
63 exclude=[
64 "test",
65 ],
66 ),
67 package_data={"": ["*.yml", "*.html", "poc.zip"]},
68 zip_safe=True,
69 license_files=("LICENSE.pdf",),
70 classifiers=[
71 "Programming Language :: Python :: 3",
72 "License :: Other/Proprietary License",
73 "Operating System :: OS Independent",
74 ],
75 long_description=long_description,
76 long_description_content_type="text/markdown",
77 python_requires=">=3.7",
78 # install_requires=list(pkutils.parse_requirements("requirements.txt")),
79 install_requires=[
80 "PyYAML",
81 "psutil",
82 "numpy",
83 "grpcio",
84 "google-api-python-client",
85 "cryptography",
86 "tenseal==0.3.0",
87 ],
88 entry_points={
89 "console_scripts": [
90 "provision=nvflare.lighter.provision:main",
91 "poc=nvflare.lighter.poc:main",
92 "authz_preview=nvflare.fuel.hci.tools.authz_preview:main",
93 ],
94 },
95 )
96
97 os.remove(os.path.join(this_directory, "nvflare", "poc.zip"))
98
```
Path: `nvflare/lighter/impl/he.py`
Content:
```
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 import tenseal as ts
18
19 from nvflare.lighter.spec import Builder
20
21
22 class HEBuilder(Builder):
23 def __init__(
24 self,
25 poly_modulus_degree=8192,
26 coeff_mod_bit_sizes=[60, 40, 40],
27 scale_bits=40,
28 scheme="CKKS",
29 ):
30 """Build Homomorphic related contents.
31
32 Generates Tenseal homomorphic encryption context for server and client and writes them to server and client
33 participant folders.
34
35 Args:
36 poly_modulus_degree: defaults to 8192.
37 coeff_mod_bit_sizes: defaults to [60, 40, 40].
38 scale_bits: defaults to 40.
39 scheme: defaults to "CKKS".
40 """
41 self._context = None
42 self.scheme_type_mapping = {
43 "CKKS": ts.SCHEME_TYPE.CKKS,
44 "BFV": ts.SCHEME_TYPE.BFV,
45 }
46 self.poly_modulus_degree = poly_modulus_degree
47 self.coeff_mod_bit_sizes = coeff_mod_bit_sizes
48 self.scale_bits = scale_bits
49 _scheme = scheme
50 # Setup TenSEAL context
51 self.scheme_type = self.scheme_type_mapping[_scheme]
52 self.serialized = None
53
54 def initialize(self, ctx):
55 self._context = ts.context(
56 self.scheme_type,
57 poly_modulus_degree=self.poly_modulus_degree,
58 coeff_mod_bit_sizes=self.coeff_mod_bit_sizes,
59 encryption_type=ts.ENCRYPTION_TYPE.SYMMETRIC,
60 )
61 # dynamically call different generate keys method
62 # getattr(self._context, f'generate_{self.key_type}_keys')()
63 self._context.generate_relin_keys()
64 self._context.global_scale = 2 ** self.scale_bits
65
66 def build(self, study, ctx):
67 server = study.get_participants_by_type("server")
68 dest_dir = self.get_kit_dir(server, ctx)
69 with open(os.path.join(dest_dir, "server_context.tenseal"), "wb") as f:
70 f.write(self.get_serialized_context())
71 for client in study.get_participants_by_type("client", first_only=False):
72 dest_dir = self.get_kit_dir(client, ctx)
73 with open(os.path.join(dest_dir, "client_context.tenseal"), "wb") as f:
74 f.write(self.get_serialized_context(is_client=True))
75
76 def get_serialized_context(self, is_client=False):
77 _serialized_context = self._context.serialize(
78 save_public_key=is_client,
79 save_secret_key=is_client,
80 save_galois_keys=False,
81 save_relin_keys=True,
82 )
83 return _serialized_context
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nvflare/lighter/impl/he.py b/nvflare/lighter/impl/he.py
--- a/nvflare/lighter/impl/he.py
+++ b/nvflare/lighter/impl/he.py
@@ -14,7 +14,10 @@
import os
-import tenseal as ts
+try:
+ import tenseal as ts
+except ImportError:
+ ts = None
from nvflare.lighter.spec import Builder
@@ -38,6 +41,11 @@
scale_bits: defaults to 40.
scheme: defaults to "CKKS".
"""
+ if ts is None:
+ print(
+ "\n *** tenseal is not installed. HEBuilder is ignored and no tenseal files will be generated. ***\n"
+ )
+ return
self._context = None
self.scheme_type_mapping = {
"CKKS": ts.SCHEME_TYPE.CKKS,
@@ -52,6 +60,8 @@
self.serialized = None
def initialize(self, ctx):
+ if ts is None:
+ return
self._context = ts.context(
self.scheme_type,
poly_modulus_degree=self.poly_modulus_degree,
@@ -64,6 +74,8 @@
self._context.global_scale = 2 ** self.scale_bits
def build(self, study, ctx):
+ if ts is None:
+ return
server = study.get_participants_by_type("server")
dest_dir = self.get_kit_dir(server, ctx)
with open(os.path.join(dest_dir, "server_context.tenseal"), "wb") as f:
@@ -74,6 +86,8 @@
f.write(self.get_serialized_context(is_client=True))
def get_serialized_context(self, is_client=False):
+ if ts is None:
+ return
_serialized_context = self._context.serialize(
save_public_key=is_client,
save_secret_key=is_client,
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -76,15 +76,8 @@
long_description_content_type="text/markdown",
python_requires=">=3.7",
# install_requires=list(pkutils.parse_requirements("requirements.txt")),
- install_requires=[
- "PyYAML",
- "psutil",
- "numpy",
- "grpcio",
- "google-api-python-client",
- "cryptography",
- "tenseal==0.3.0",
- ],
+ install_requires=["PyYAML", "psutil", "numpy", "grpcio", "google-api-python-client", "cryptography"],
+ extras_require={"HE": ["tenseal==0.3.0"]},
entry_points={
"console_scripts": [
"provision=nvflare.lighter.provision:main",
| {"golden_diff": "diff --git a/nvflare/lighter/impl/he.py b/nvflare/lighter/impl/he.py\n--- a/nvflare/lighter/impl/he.py\n+++ b/nvflare/lighter/impl/he.py\n@@ -14,7 +14,10 @@\n \n import os\n \n-import tenseal as ts\n+try:\n+ import tenseal as ts\n+except ImportError:\n+ ts = None\n \n from nvflare.lighter.spec import Builder\n \n@@ -38,6 +41,11 @@\n scale_bits: defaults to 40.\n scheme: defaults to \"CKKS\".\n \"\"\"\n+ if ts is None:\n+ print(\n+ \"\\n *** tenseal is not installed. HEBuilder is ignored and no tenseal files will be generated. ***\\n\"\n+ )\n+ return\n self._context = None\n self.scheme_type_mapping = {\n \"CKKS\": ts.SCHEME_TYPE.CKKS,\n@@ -52,6 +60,8 @@\n self.serialized = None\n \n def initialize(self, ctx):\n+ if ts is None:\n+ return\n self._context = ts.context(\n self.scheme_type,\n poly_modulus_degree=self.poly_modulus_degree,\n@@ -64,6 +74,8 @@\n self._context.global_scale = 2 ** self.scale_bits\n \n def build(self, study, ctx):\n+ if ts is None:\n+ return\n server = study.get_participants_by_type(\"server\")\n dest_dir = self.get_kit_dir(server, ctx)\n with open(os.path.join(dest_dir, \"server_context.tenseal\"), \"wb\") as f:\n@@ -74,6 +86,8 @@\n f.write(self.get_serialized_context(is_client=True))\n \n def get_serialized_context(self, is_client=False):\n+ if ts is None:\n+ return\n _serialized_context = self._context.serialize(\n save_public_key=is_client,\n save_secret_key=is_client,\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -76,15 +76,8 @@\n long_description_content_type=\"text/markdown\",\n python_requires=\">=3.7\",\n # install_requires=list(pkutils.parse_requirements(\"requirements.txt\")),\n- install_requires=[\n- \"PyYAML\",\n- \"psutil\",\n- \"numpy\",\n- \"grpcio\",\n- \"google-api-python-client\",\n- \"cryptography\",\n- \"tenseal==0.3.0\",\n- ],\n+ install_requires=[\"PyYAML\", \"psutil\", \"numpy\", \"grpcio\", \"google-api-python-client\", \"cryptography\"],\n+ extras_require={\"HE\": [\"tenseal==0.3.0\"]},\n entry_points={\n \"console_scripts\": [\n \"provision=nvflare.lighter.provision:main\",\n", "issue": "Tenseal dependency for HE is not available on ARM aarch64\nThe tenseal dependency is not available for the ARM aarch64 platform, causing installation to fail. This has been reported for local development on Mac M1 and will affect other non-x86 architectures, Jetson, Clara AGX, IBM POWER, etc..\r\n\r\nThe tenseal dependency is only required when using the HEBuilder module, and it looks like all other functionality could be used without this dependency. Can tenseal be made optional, with the caveat that HE is not available without tenseal?\r\n\r\nOne option would be providing an alternate install, a `requirements-no-tenseal.txt` that includes everything but tenseal. For example, I generated this file in a clean venv on my linux machine using:\r\n```\r\npip download nvflare -d /tmp -v \\\r\n | grep Collecting \\\r\n | awk '{print $2}' \\\r\n | tr '[:upper:]' '[:lower:]' \\\r\n | grep -v tenseal \\\r\n | tee requirements-no-tenseal.txt\r\n```\r\nand verified that I can install nvflare and all deps except tenseal by copying to an aarch64 system (in this case a Jetson TX2) with:\r\n```\r\npython3 -m pip install --no-deps -r requirements-no-tenseal.txt\r\n```\r\n\r\nThis is a pretty awkward solution. 
It would be much cleaner to remove the tenseal dependency in the default packaging, since HE is optional, and note in the docs that tenseal must be installed when using HE.\n", "before_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.\nimport os\nimport shutil\nfrom datetime import datetime\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n# read the contents of your README file\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\n# with open(os.path.join(this_directory, \"nvflare\", \"__init__.py\"), encoding=\"utf-8\") as f:\n# for line in f.readlines():\n# if \"__version__\" in line:\n# init_version = line.split(\"=\")[1].strip().strip('\"')\n# \n# nvfl_version = os.environ.get(\"NVFL_VERSION\", init_version)\n# yymmdd = datetime.today().strftime(\"%y%m%d\")\n# nvfl_nightly_version = f\"{nvfl_version}.dev{yymmdd}\"\n# \n# if os.environ.get(\"NVFL_RELEASE\"):\n# package_name = \"nvflare\"\n# version = nvfl_version\n# else:\n# package_name = \"nvflare-nightly\"\n# version = nvfl_nightly_version\n# \n\nif os.path.exists(os.path.join(this_directory, \"nvflare\", \"poc.zip\")):\n os.remove(os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\nshutil.make_archive(base_name=\"poc\", format=\"zip\", root_dir=os.path.join(this_directory, \"nvflare\"), base_dir=\"poc\")\nshutil.move(\"poc.zip\", os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\npackage_name = \"nvflare\"\n\nsetup(\n name=package_name,\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n description=\"Federated Learning Application Runtime Environment\",\n url=\"https://github.com/NVIDIA/NVFlare\",\n package_dir={\"nvflare\": \"nvflare\"},\n packages=find_packages(\n where=\".\",\n include=[\n \"*\",\n ],\n exclude=[\n \"test\",\n ],\n ),\n package_data={\"\": [\"*.yml\", \"*.html\", \"poc.zip\"]},\n zip_safe=True,\n license_files=(\"LICENSE.pdf\",),\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"License :: Other/Proprietary License\",\n \"Operating System :: OS Independent\",\n ],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n python_requires=\">=3.7\",\n # install_requires=list(pkutils.parse_requirements(\"requirements.txt\")),\n install_requires=[\n \"PyYAML\",\n \"psutil\",\n \"numpy\",\n \"grpcio\",\n \"google-api-python-client\",\n \"cryptography\",\n \"tenseal==0.3.0\",\n ],\n entry_points={\n \"console_scripts\": [\n \"provision=nvflare.lighter.provision:main\",\n \"poc=nvflare.lighter.poc:main\",\n \"authz_preview=nvflare.fuel.hci.tools.authz_preview:main\",\n ],\n },\n)\n\nos.remove(os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\n", "path": "setup.py"}, {"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport tenseal as ts\n\nfrom nvflare.lighter.spec import Builder\n\n\nclass HEBuilder(Builder):\n def __init__(\n self,\n poly_modulus_degree=8192,\n coeff_mod_bit_sizes=[60, 40, 40],\n scale_bits=40,\n scheme=\"CKKS\",\n ):\n \"\"\"Build Homomorphic related contents.\n\n Generates Tenseal homomorphic encryption context for server and client and writes them to server and client\n participant folders.\n\n Args:\n poly_modulus_degree: defaults to 8192.\n coeff_mod_bit_sizes: defaults to [60, 40, 40].\n scale_bits: defaults to 40.\n scheme: defaults to \"CKKS\".\n \"\"\"\n self._context = None\n self.scheme_type_mapping = {\n \"CKKS\": ts.SCHEME_TYPE.CKKS,\n \"BFV\": ts.SCHEME_TYPE.BFV,\n }\n self.poly_modulus_degree = poly_modulus_degree\n self.coeff_mod_bit_sizes = coeff_mod_bit_sizes\n self.scale_bits = scale_bits\n _scheme = scheme\n # Setup TenSEAL context\n self.scheme_type = self.scheme_type_mapping[_scheme]\n self.serialized = None\n\n def initialize(self, ctx):\n self._context = ts.context(\n self.scheme_type,\n poly_modulus_degree=self.poly_modulus_degree,\n coeff_mod_bit_sizes=self.coeff_mod_bit_sizes,\n encryption_type=ts.ENCRYPTION_TYPE.SYMMETRIC,\n )\n # dynamically call different generate keys method\n # getattr(self._context, f'generate_{self.key_type}_keys')()\n self._context.generate_relin_keys()\n self._context.global_scale = 2 ** self.scale_bits\n\n def build(self, study, ctx):\n server = study.get_participants_by_type(\"server\")\n dest_dir = self.get_kit_dir(server, ctx)\n with open(os.path.join(dest_dir, \"server_context.tenseal\"), \"wb\") as f:\n f.write(self.get_serialized_context())\n for client in study.get_participants_by_type(\"client\", first_only=False):\n dest_dir = self.get_kit_dir(client, ctx)\n with open(os.path.join(dest_dir, \"client_context.tenseal\"), \"wb\") as f:\n f.write(self.get_serialized_context(is_client=True))\n\n def get_serialized_context(self, is_client=False):\n _serialized_context = self._context.serialize(\n save_public_key=is_client,\n save_secret_key=is_client,\n save_galois_keys=False,\n save_relin_keys=True,\n )\n return _serialized_context\n", "path": "nvflare/lighter/impl/he.py"}], "after_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Copyright (c) 2021, NVIDIA CORPORATION. 
All rights reserved.\nimport os\nimport shutil\nfrom datetime import datetime\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n# read the contents of your README file\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\n# with open(os.path.join(this_directory, \"nvflare\", \"__init__.py\"), encoding=\"utf-8\") as f:\n# for line in f.readlines():\n# if \"__version__\" in line:\n# init_version = line.split(\"=\")[1].strip().strip('\"')\n# \n# nvfl_version = os.environ.get(\"NVFL_VERSION\", init_version)\n# yymmdd = datetime.today().strftime(\"%y%m%d\")\n# nvfl_nightly_version = f\"{nvfl_version}.dev{yymmdd}\"\n# \n# if os.environ.get(\"NVFL_RELEASE\"):\n# package_name = \"nvflare\"\n# version = nvfl_version\n# else:\n# package_name = \"nvflare-nightly\"\n# version = nvfl_nightly_version\n# \n\nif os.path.exists(os.path.join(this_directory, \"nvflare\", \"poc.zip\")):\n os.remove(os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\nshutil.make_archive(base_name=\"poc\", format=\"zip\", root_dir=os.path.join(this_directory, \"nvflare\"), base_dir=\"poc\")\nshutil.move(\"poc.zip\", os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\npackage_name = \"nvflare\"\n\nsetup(\n name=package_name,\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n description=\"Federated Learning Application Runtime Environment\",\n url=\"https://github.com/NVIDIA/NVFlare\",\n package_dir={\"nvflare\": \"nvflare\"},\n packages=find_packages(\n where=\".\",\n include=[\n \"*\",\n ],\n exclude=[\n \"test\",\n ],\n ),\n package_data={\"\": [\"*.yml\", \"*.html\", \"poc.zip\"]},\n zip_safe=True,\n license_files=(\"LICENSE.pdf\",),\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"License :: Other/Proprietary License\",\n \"Operating System :: OS Independent\",\n ],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n python_requires=\">=3.7\",\n # install_requires=list(pkutils.parse_requirements(\"requirements.txt\")),\n install_requires=[\"PyYAML\", \"psutil\", \"numpy\", \"grpcio\", \"google-api-python-client\", \"cryptography\"],\n extras_require={\"HE\": [\"tenseal==0.3.0\"]},\n entry_points={\n \"console_scripts\": [\n \"provision=nvflare.lighter.provision:main\",\n \"poc=nvflare.lighter.poc:main\",\n \"authz_preview=nvflare.fuel.hci.tools.authz_preview:main\",\n ],\n },\n)\n\nos.remove(os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\n", "path": "setup.py"}, {"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\ntry:\n import tenseal as ts\nexcept ImportError:\n ts = None\n\nfrom nvflare.lighter.spec import Builder\n\n\nclass HEBuilder(Builder):\n def __init__(\n self,\n poly_modulus_degree=8192,\n coeff_mod_bit_sizes=[60, 40, 40],\n scale_bits=40,\n scheme=\"CKKS\",\n ):\n \"\"\"Build Homomorphic related contents.\n\n Generates Tenseal homomorphic encryption context for server and client and writes them to server and client\n participant folders.\n\n Args:\n poly_modulus_degree: defaults to 8192.\n coeff_mod_bit_sizes: defaults to [60, 40, 40].\n scale_bits: defaults to 40.\n scheme: defaults to \"CKKS\".\n \"\"\"\n if ts is None:\n print(\n \"\\n *** tenseal is not installed. HEBuilder is ignored and no tenseal files will be generated. ***\\n\"\n )\n return\n self._context = None\n self.scheme_type_mapping = {\n \"CKKS\": ts.SCHEME_TYPE.CKKS,\n \"BFV\": ts.SCHEME_TYPE.BFV,\n }\n self.poly_modulus_degree = poly_modulus_degree\n self.coeff_mod_bit_sizes = coeff_mod_bit_sizes\n self.scale_bits = scale_bits\n _scheme = scheme\n # Setup TenSEAL context\n self.scheme_type = self.scheme_type_mapping[_scheme]\n self.serialized = None\n\n def initialize(self, ctx):\n if ts is None:\n return\n self._context = ts.context(\n self.scheme_type,\n poly_modulus_degree=self.poly_modulus_degree,\n coeff_mod_bit_sizes=self.coeff_mod_bit_sizes,\n encryption_type=ts.ENCRYPTION_TYPE.SYMMETRIC,\n )\n # dynamically call different generate keys method\n # getattr(self._context, f'generate_{self.key_type}_keys')()\n self._context.generate_relin_keys()\n self._context.global_scale = 2 ** self.scale_bits\n\n def build(self, study, ctx):\n if ts is None:\n return\n server = study.get_participants_by_type(\"server\")\n dest_dir = self.get_kit_dir(server, ctx)\n with open(os.path.join(dest_dir, \"server_context.tenseal\"), \"wb\") as f:\n f.write(self.get_serialized_context())\n for client in study.get_participants_by_type(\"client\", first_only=False):\n dest_dir = self.get_kit_dir(client, ctx)\n with open(os.path.join(dest_dir, \"client_context.tenseal\"), \"wb\") as f:\n f.write(self.get_serialized_context(is_client=True))\n\n def get_serialized_context(self, is_client=False):\n if ts is None:\n return\n _serialized_context = self._context.serialize(\n save_public_key=is_client,\n save_secret_key=is_client,\n save_galois_keys=False,\n save_relin_keys=True,\n )\n return _serialized_context\n", "path": "nvflare/lighter/impl/he.py"}]} | 2,494 | 632 |
gh_patches_debug_18081 | rasdani/github-patches | git_diff | jupyterhub__zero-to-jupyterhub-k8s-31 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
add link to github repo on website
I think there should be an obvious link on the website to the repo / issue tracker so that people can leave feedback more easily.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/source/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Learning with JupyterHub documentation build configuration file, created by
5 # sphinx-quickstart on Fri Mar 17 16:07:58 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 # import os
21 # import sys
22 # sys.path.insert(0, os.path.abspath('.'))
23 import recommonmark
24
25 # -- General configuration ------------------------------------------------
26
27 # If your documentation needs a minimal Sphinx version, state it here.
28 #
29 # needs_sphinx = '1.0'
30
31 # Add any Sphinx extension module names here, as strings. They can be
32 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
33 # ones.
34 extensions = ['sphinx.ext.mathjax']
35
36 # Add any paths that contain templates here, relative to this directory.
37 templates_path = ['_templates']
38
39 source_parsers = {
40 '.md': 'recommonmark.parser.CommonMarkParser',
41 }
42
43 # The suffix(es) of source filenames.
44 # You can specify multiple suffix as a list of string:
45 #
46 # source_suffix = ['.rst', '.md']
47 source_suffix = ['.rst', '.md']
48
49 # The master toctree document.
50 master_doc = 'index'
51
52 # General information about the project.
53 project = u'Zero to JupyterHub with Kubernetes'
54 copyright = u'2017, Project Jupyter team'
55 author = u'Project Jupyter team'
56
57 # The version info for the project you're documenting, acts as replacement for
58 # |version| and |release|, also used in various other places throughout the
59 # built documents.
60 #
61 # The short X.Y version.
62 version = '0.1'
63 # The full version, including alpha/beta/rc tags.
64 release = '0.1'
65
66 # The language for content autogenerated by Sphinx. Refer to documentation
67 # for a list of supported languages.
68 #
69 # This is also used if you do content translation via gettext catalogs.
70 # Usually you set "language" from the command line for these cases.
71 language = None
72
73 # List of patterns, relative to source directory, that match files and
74 # directories to ignore when looking for source files.
75 # This patterns also effect to html_static_path and html_extra_path
76 exclude_patterns = []
77
78 # The name of the Pygments (syntax highlighting) style to use.
79 pygments_style = 'sphinx'
80
81 # If true, `todo` and `todoList` produce output, else they produce nothing.
82 todo_include_todos = False
83
84
85 # -- Options for HTML output ----------------------------------------------
86
87 # The theme to use for HTML and HTML Help pages. See the documentation for
88 # a list of builtin themes.
89 #
90 html_theme = 'alabaster'
91 html_favicon = '_static/images/logo/favicon.ico'
92 html_logo = '_static/images/logo/logo.png'
93
94 # Theme options are theme-specific and customize the look and feel of a theme
95 # further. For a list of options available for each theme, see the
96 # documentation.
97 #
98 # html_theme_options = {}
99
100 # Add any paths that contain custom static files (such as style sheets) here,
101 # relative to this directory. They are copied after the builtin static files,
102 # so a file named "default.css" will overwrite the builtin "default.css".
103 html_static_path = ['_static']
104
105
106 # -- Options for HTMLHelp output ------------------------------------------
107
108 # Output file base name for HTML help builder.
109 htmlhelp_basename = 'ZeroToJupyterhubDoc'
110
111
112 # -- Options for LaTeX output ---------------------------------------------
113
114 latex_elements = {
115 # The paper size ('letterpaper' or 'a4paper').
116 #
117 # 'papersize': 'letterpaper',
118
119 # The font size ('10pt', '11pt' or '12pt').
120 #
121 # 'pointsize': '10pt',
122
123 # Additional stuff for the LaTeX preamble.
124 #
125 # 'preamble': '',
126
127 # Latex figure (float) alignment
128 #
129 # 'figure_align': 'htbp',
130 }
131
132 # Grouping the document tree into LaTeX files. List of tuples
133 # (source start file, target name, title,
134 # author, documentclass [howto, manual, or own class]).
135 latex_documents = [
136 (master_doc, 'ZeroToJupyterhubDoc.tex', 'Zero to JupyterHub',
137 'Chris Holdgraf', 'manual'),
138 ]
139
140
141 # -- Options for manual page output ---------------------------------------
142
143 # One entry per manual page. List of tuples
144 # (source start file, name, description, authors, manual section).
145 man_pages = [
146 (master_doc, 'zerotojupyterhub', 'Zero to JupyterHub',
147 [author], 1)
148 ]
149
150
151 # -- Options for Texinfo output -------------------------------------------
152
153 # Grouping the document tree into Texinfo files. List of tuples
154 # (source start file, target name, title, author,
155 # dir menu entry, description, category)
156 texinfo_documents = [
157 (master_doc, 'ZeroToJupyterhubDoc', 'Zero to JupyterHub',
158 author, 'ZeroToJupyterhubDoc', 'One line description of project.',
159 'Miscellaneous'),
160 ]
161
162
163
164 # -- Options for Epub output ----------------------------------------------
165
166 # Bibliographic Dublin Core info.
167 epub_title = project
168 epub_author = author
169 epub_publisher = author
170 epub_copyright = copyright
171
172 # The unique identifier of the text. This can be a ISBN number
173 # or the project homepage.
174 #
175 # epub_identifier = ''
176
177 # A unique identification for the text.
178 #
179 # epub_uid = ''
180
181 # A list of files that should not be packed into the epub file.
182 epub_exclude_files = ['search.html']
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doc/source/conf.py b/doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -95,13 +95,23 @@
# further. For a list of options available for each theme, see the
# documentation.
#
-# html_theme_options = {}
+html_theme_options = {
+ 'show_powered_by': False,
+ 'github_user': 'jupyterhub',
+ 'github_repo': 'zero-to-jupyterhub-k8s',
+ 'github_banner': False,
+ 'show_related': False
+}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
+html_sidebars = {
+ 'index': ['sidebarintro.html'],
+}
+
# -- Options for HTMLHelp output ------------------------------------------
| {"golden_diff": "diff --git a/doc/source/conf.py b/doc/source/conf.py\n--- a/doc/source/conf.py\n+++ b/doc/source/conf.py\n@@ -95,13 +95,23 @@\n # further. For a list of options available for each theme, see the\n # documentation.\n #\n-# html_theme_options = {}\n+html_theme_options = {\n+ 'show_powered_by': False,\n+ 'github_user': 'jupyterhub',\n+ 'github_repo': 'zero-to-jupyterhub-k8s',\n+ 'github_banner': False,\n+ 'show_related': False\n+}\n \n # Add any paths that contain custom static files (such as style sheets) here,\n # relative to this directory. They are copied after the builtin static files,\n # so a file named \"default.css\" will overwrite the builtin \"default.css\".\n html_static_path = ['_static']\n \n+html_sidebars = {\n+ 'index': ['sidebarintro.html'],\n+}\n+\n \n # -- Options for HTMLHelp output ------------------------------------------\n", "issue": "add link to github repo on website\nI think there should be an obvious link on the website to the repo / issue tracker so that people can leave feedback more easily.\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Learning with JupyterHub documentation build configuration file, created by\n# sphinx-quickstart on Fri Mar 17 16:07:58 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nimport recommonmark\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.mathjax']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\nsource_parsers = {\n '.md': 'recommonmark.parser.CommonMarkParser',\n}\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = ['.rst', '.md']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Zero to JupyterHub with Kubernetes'\ncopyright = u'2017, Project Jupyter team'\nauthor = u'Project Jupyter team'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.1'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.1'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'alabaster'\nhtml_favicon = '_static/images/logo/favicon.ico'\nhtml_logo = '_static/images/logo/logo.png'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ZeroToJupyterhubDoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'ZeroToJupyterhubDoc.tex', 'Zero to JupyterHub',\n 'Chris Holdgraf', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'zerotojupyterhub', 'Zero to JupyterHub',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'ZeroToJupyterhubDoc', 'Zero to JupyterHub',\n author, 'ZeroToJupyterhubDoc', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n", "path": "doc/source/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Learning with JupyterHub documentation build configuration file, created by\n# sphinx-quickstart on Fri Mar 17 16:07:58 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nimport recommonmark\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.mathjax']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\nsource_parsers = {\n '.md': 'recommonmark.parser.CommonMarkParser',\n}\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = ['.rst', '.md']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Zero to JupyterHub with Kubernetes'\ncopyright = u'2017, Project Jupyter team'\nauthor = u'Project Jupyter team'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.1'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.1'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'alabaster'\nhtml_favicon = '_static/images/logo/favicon.ico'\nhtml_logo = '_static/images/logo/logo.png'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'show_powered_by': False,\n 'github_user': 'jupyterhub',\n 'github_repo': 'zero-to-jupyterhub-k8s',\n 'github_banner': False,\n 'show_related': False\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nhtml_sidebars = {\n 'index': ['sidebarintro.html'],\n}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ZeroToJupyterhubDoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'ZeroToJupyterhubDoc.tex', 'Zero to JupyterHub',\n 'Chris Holdgraf', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'zerotojupyterhub', 'Zero to JupyterHub',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'ZeroToJupyterhubDoc', 'Zero to JupyterHub',\n author, 'ZeroToJupyterhubDoc', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n", "path": "doc/source/conf.py"}]} | 2,052 | 219 |
gh_patches_debug_950 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-2204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
torch.div() (1.6.0) does not have 'rounding_mode' parameter
According to the torch 1.6.0 documentation here: https://pytorch.org/docs/1.6.0/generated/torch.div.html?highlight=torch%20div#torch.div
there is no 'rounding_mode' parameter. 
But in the translator:
https://github.com/OpenNMT/OpenNMT-py/blob/0f411ce11a83b18c0223ac94ccc11a35403763df/onmt/translate/beam_search.py#L282
That's why I receive this error:
```
onmt_translate -model ./../output/test/nmt/f0/run/model_step_100.pt -src ./../output/test/nmt/f0/src-test.txt -output ./../output/test/nmt/f0/test.epoch100.pred.csv -gpu 0 --min_length 2 -verbose
[2022-09-15 20:32:19,980 INFO] Translating shard 0.
Traceback (most recent call last):
File "c:\programdata\anaconda3\envs\nuecg\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\programdata\anaconda3\envs\nuecg\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\envs\nuecg\Scripts\onmt_translate.exe\__main__.py", line 7, in <module>
File "c:\programdata\anaconda3\envs\nuecg\lib\site-packages\onmt\bin\translate.py", line 54, in main
translate(opt)
File "c:\programdata\anaconda3\envs\nuecg\lib\site-packages\onmt\bin\translate.py", line 38, in translate
align_debug=opt.align_debug
File "c:\programdata\anaconda3\envs\nuecg\lib\site-packages\onmt\translate\translator.py", line 440, in translate
phrase_table=phrase_table)
File "c:\programdata\anaconda3\envs\nuecg\lib\site-packages\onmt\translate\translator.py", line 487, in _translate
batch, data.src_vocabs, attn_debug
File "c:\programdata\anaconda3\envs\nuecg\lib\site-packages\onmt\translate\translator.py", line 861, in translate_batch
batch, src_vocabs, decode_strategy
File "c:\programdata\anaconda3\envs\nuecg\lib\site-packages\onmt\translate\translator.py", line 947, in _translate_batch_with_strategy
decode_strategy.advance(log_probs, attn)
File "c:\programdata\anaconda3\envs\nuecg\lib\site-packages\onmt\translate\beam_search.py", line 283, in advance
rounding_mode='trunc')
TypeError: div() got an unexpected keyword argument 'rounding_mode'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from setuptools import setup, find_packages
3 from os import path
4
5 this_directory = path.abspath(path.dirname(__file__))
6 with open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:
7 long_description = f.read()
8
9 setup(
10 name='OpenNMT-py',
11 description='A python implementation of OpenNMT',
12 long_description=long_description,
13 long_description_content_type='text/markdown',
14 version='2.3.0',
15 packages=find_packages(),
16 project_urls={
17 "Documentation": "http://opennmt.net/OpenNMT-py/",
18 "Forum": "http://forum.opennmt.net/",
19 "Gitter": "https://gitter.im/OpenNMT/OpenNMT-py",
20 "Source": "https://github.com/OpenNMT/OpenNMT-py/"
21 },
22 python_requires=">=3.5",
23 install_requires=[
24 "torch>=1.6.0",
25 "torchtext==0.5.0",
26 "configargparse",
27 "tensorboard>=2.3",
28 "flask",
29 "waitress",
30 "pyonmttok>=1.23,<2",
31 "pyyaml",
32 "sacrebleu"
33 ],
34 entry_points={
35 "console_scripts": [
36 "onmt_server=onmt.bin.server:main",
37 "onmt_train=onmt.bin.train:main",
38 "onmt_translate=onmt.bin.translate:main",
39 "onmt_translate_dynamic=onmt.bin.translate_dynamic:main",
40 "onmt_release_model=onmt.bin.release_model:main",
41 "onmt_average_models=onmt.bin.average_models:main",
42 "onmt_build_vocab=onmt.bin.build_vocab:main"
43 ],
44 }
45 )
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,7 +21,7 @@
},
python_requires=">=3.5",
install_requires=[
- "torch>=1.6.0",
+ "torch>=1.9.0",
"torchtext==0.5.0",
"configargparse",
"tensorboard>=2.3",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,7 +21,7 @@\n },\n python_requires=\">=3.5\",\n install_requires=[\n- \"torch>=1.6.0\",\n+ \"torch>=1.9.0\",\n \"torchtext==0.5.0\",\n \"configargparse\",\n \"tensorboard>=2.3\",\n", "issue": "torch.div() (1.6.0) does not have 'rounding_mode' parameter\nAccording to the torch 1.6.0 here: https://pytorch.org/docs/1.6.0/generated/torch.div.html?highlight=torch%20div#torch.div\r\nthere is no 'rounding_mode' parameter. \r\n\r\nBut in translator:\r\nhttps://github.com/OpenNMT/OpenNMT-py/blob/0f411ce11a83b18c0223ac94ccc11a35403763df/onmt/translate/beam_search.py#L282\r\n\r\nThat's why I receive this error:\r\n```\r\nonmt_translate -model ./../output/test/nmt/f0/run/model_step_100.pt -src ./../output/test/nmt/f0/src-test.txt -output ./../output/test/nmt/f0/test.epoch100.pred.csv -gpu 0 --min_length 2 -verbose \r\n[2022-09-15 20:32:19,980 INFO] Translating shard 0.\r\nTraceback (most recent call last):\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\nuecg\\Scripts\\onmt_translate.exe\\__main__.py\", line 7, in <module>\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\site-packages\\onmt\\bin\\translate.py\", line 54, in main\r\n translate(opt)\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\site-packages\\onmt\\bin\\translate.py\", line 38, in translate\r\n align_debug=opt.align_debug\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\site-packages\\onmt\\translate\\translator.py\", line 440, in translate\r\n phrase_table=phrase_table)\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\site-packages\\onmt\\translate\\translator.py\", line 487, in _translate\r\n batch, data.src_vocabs, attn_debug\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\site-packages\\onmt\\translate\\translator.py\", line 861, in translate_batch\r\n batch, src_vocabs, decode_strategy\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\site-packages\\onmt\\translate\\translator.py\", line 947, in _translate_batch_with_strategy\r\n decode_strategy.advance(log_probs, attn)\r\n File \"c:\\programdata\\anaconda3\\envs\\nuecg\\lib\\site-packages\\onmt\\translate\\beam_search.py\", line 283, in advance\r\n rounding_mode='trunc')\r\nTypeError: div() got an unexpected keyword argument 'rounding_mode'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import setup, find_packages\nfrom os import path\n\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name='OpenNMT-py',\n description='A python implementation of OpenNMT',\n long_description=long_description,\n long_description_content_type='text/markdown',\n version='2.3.0',\n packages=find_packages(),\n project_urls={\n \"Documentation\": \"http://opennmt.net/OpenNMT-py/\",\n \"Forum\": \"http://forum.opennmt.net/\",\n \"Gitter\": \"https://gitter.im/OpenNMT/OpenNMT-py\",\n \"Source\": \"https://github.com/OpenNMT/OpenNMT-py/\"\n },\n python_requires=\">=3.5\",\n install_requires=[\n \"torch>=1.6.0\",\n \"torchtext==0.5.0\",\n \"configargparse\",\n \"tensorboard>=2.3\",\n \"flask\",\n \"waitress\",\n \"pyonmttok>=1.23,<2\",\n \"pyyaml\",\n \"sacrebleu\"\n ],\n 
entry_points={\n \"console_scripts\": [\n \"onmt_server=onmt.bin.server:main\",\n \"onmt_train=onmt.bin.train:main\",\n \"onmt_translate=onmt.bin.translate:main\",\n \"onmt_translate_dynamic=onmt.bin.translate_dynamic:main\",\n \"onmt_release_model=onmt.bin.release_model:main\",\n \"onmt_average_models=onmt.bin.average_models:main\",\n \"onmt_build_vocab=onmt.bin.build_vocab:main\"\n ],\n }\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import setup, find_packages\nfrom os import path\n\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name='OpenNMT-py',\n description='A python implementation of OpenNMT',\n long_description=long_description,\n long_description_content_type='text/markdown',\n version='2.3.0',\n packages=find_packages(),\n project_urls={\n \"Documentation\": \"http://opennmt.net/OpenNMT-py/\",\n \"Forum\": \"http://forum.opennmt.net/\",\n \"Gitter\": \"https://gitter.im/OpenNMT/OpenNMT-py\",\n \"Source\": \"https://github.com/OpenNMT/OpenNMT-py/\"\n },\n python_requires=\">=3.5\",\n install_requires=[\n \"torch>=1.9.0\",\n \"torchtext==0.5.0\",\n \"configargparse\",\n \"tensorboard>=2.3\",\n \"flask\",\n \"waitress\",\n \"pyonmttok>=1.23,<2\",\n \"pyyaml\",\n \"sacrebleu\"\n ],\n entry_points={\n \"console_scripts\": [\n \"onmt_server=onmt.bin.server:main\",\n \"onmt_train=onmt.bin.train:main\",\n \"onmt_translate=onmt.bin.translate:main\",\n \"onmt_translate_dynamic=onmt.bin.translate_dynamic:main\",\n \"onmt_release_model=onmt.bin.release_model:main\",\n \"onmt_average_models=onmt.bin.average_models:main\",\n \"onmt_build_vocab=onmt.bin.build_vocab:main\"\n ],\n }\n)\n", "path": "setup.py"}]} | 1,461 | 95 |
gh_patches_debug_9160 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3313 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider ymca is broken
During the global build at 2021-06-02-14-42-40, spider **ymca** failed with **0 features** and **87 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/ymca.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ymca.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ymca.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/ymca.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from datetime import datetime
3 import json
4 import re
5 from urllib.parse import urlencode
6
7 import scrapy
8
9 from locations.items import GeojsonPointItem
10 from locations.hours import OpeningHours
11
12 SINGLE_POINT_STATES = [
13 ("0,64.0685,-152.2782,AK"),
14 ("1,20.6538883744,-157.8631750471,HI"),
15 ]
16
17 HUNDRED_MILES_STATES = {"MT", "WY", "SD", "ND", "NE", "NV", "AZ", "NM", "UT", "ID"}
18 TWENTYFIVE_MILES_STATES = {"MD", "OH", "FL", "IL", "IA", "WI", "MN", "RI", "MA", "NH",
19 "SC", "NC", "NJ", "WA", "CA", "PA", "NY"}
20 ADDITONAL_CITIES = [
21 "Los Angeles, CA",
22 "New York, NY",
23 "Boston, MA",
24 "Philadelphia, PA",
25 "Dallas, TX",
26 "Houston, TX",
27 "Seattle, WA",
28 "San Francisco, CA",
29 "Denver, CO",
30 "Minneapolis, MN",
31 "Omaha, NE",
32 "St. Louis, MO",
33 "Chicago, IL",
34 "Montgomery, AL",
35 "Orlando, FL",
36 "St. Petersburg, FL",
37 "Atlanta, GA",
38 "Poughkeepsie, NY",
39 "Hartford, CT",
40 "Concord, NH"
41 ]
42
43
44 class YmcaSpider(scrapy.Spider):
45 name = "ymca"
46 item_attributes = { 'brand': "YMCA" }
47 allowed_domains = ["ymca.net"]
48 download_delay = 0.5
49
50 def start_requests(self):
51 url = 'https://www.ymca.net/find-your-y/?'
52
53 for point in SINGLE_POINT_STATES:
54 _, lat, lon, state = point.strip().split(',')
55 params = {"address": "{},{}".format(lat, lon)}
56 yield scrapy.Request(url=url + urlencode(params))
57
58 with open('./locations/searchable_points/us_centroids_100mile_radius_state.csv') as points:
59 next(points)
60 for point in points:
61 _, lat, lon, state = point.strip().split(',')
62 if state in HUNDRED_MILES_STATES:
63 params = {"address": "{},{}".format(lat, lon)}
64 yield scrapy.Request(url=url + urlencode(params))
65
66 with open('./locations/searchable_points/us_centroids_25mile_radius_state.csv') as points:
67 next(points)
68 for point in points:
69 _, lat, lon, state = point.strip().split(',')
70 if state in TWENTYFIVE_MILES_STATES:
71 params = {"address": "{},{}".format(lat, lon)}
72 yield scrapy.Request(url=url + urlencode(params))
73
74 with open('./locations/searchable_points/us_centroids_50mile_radius_state.csv') as points:
75 next(points)
76 for point in points:
77 _, lat, lon, state = point.strip().split(',')
78 if state not in HUNDRED_MILES_STATES.union(TWENTYFIVE_MILES_STATES).union({"AK", "HI"}):
79 params = {"address": "{},{}".format(lat, lon)}
80 yield scrapy.Request(url=url + urlencode(params))
81
82 for city in ADDITONAL_CITIES:
83 params = {"address": city}
84 yield scrapy.Request(url=url + urlencode(params))
85
86 def parse_hours(self, hours):
87 opening_hours = OpeningHours()
88
89 for hour in hours:
90 hour = hour.strip()
91 if hour == "Hours of Operation:":
92 continue
93
94 try:
95 day, open_time, close_time = re.search(r'(.*?):\s(.*?)\s-\s(.*?)$', hour).groups()
96 except AttributeError: # closed
97 continue
98 open_time = open_time.replace('.', '')
99 close_time = close_time.replace('.', '')
100
101 open_time = (datetime.strptime(open_time, '%I:%M %p')
102 if ":" in open_time
103 else datetime.strptime(open_time, '%I %p')).strftime('%H:%M')
104 close_time = (datetime.strptime(close_time, '%I:%M %p')
105 if ":" in close_time
106 else datetime.strptime(close_time, '%I %p')).strftime('%H:%M')
107
108 opening_hours.add_range(day=day[:2],
109 open_time=open_time,
110 close_time=close_time,
111 time_format='%H:%M')
112 return opening_hours.as_opening_hours()
113
114 def parse_location(self, response):
115 p = response.xpath('//main//p[1]/text()').extract()
116 p = [x.strip() for x in p if x.strip()]
117
118 phone = p.pop(-1) # last line is phone number
119 city, state, postcode = re.search(r'(.*?), ([A-Z]{2}) ([\d-]+)$', p.pop(-1)).groups() # next to last line is city/state/zip
120 address = " ".join(p) # every thing left is street address
121
122 properties = {
123 'ref': re.search(r'.+/?id=(.+)', response.url).group(1),
124 'name': response.xpath('//main//h1/text()').extract_first(),
125 'addr_full': address,
126 'city': city,
127 'state': state,
128 'postcode': postcode,
129 'country': 'US',
130 'lat': float(response.xpath('//div[@id="y-profile-position"]/@data-latitude').extract_first()),
131 'lon': float(response.xpath('//div[@id="y-profile-position"]/@data-longitude').extract_first()),
132 'phone': phone.replace("Phone: ", ""),
133 'website': response.xpath('//div[@id="y-profile-position"]/@data-url').extract_first()
134 }
135
136 properties['opening_hours'] = self.parse_hours(response.xpath('//main//p[contains(text(), "Hours")]/text()').extract())
137
138 yield GeojsonPointItem(**properties)
139
140 def parse(self, response):
141 urls = response.xpath('//main//ul[not(contains(@class, "ymca-pagination"))]/li/h3//a/@href').extract()
142
143 for url in urls:
144 yield scrapy.Request(response.urljoin(url), callback=self.parse_location)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/ymca.py b/locations/spiders/ymca.py
--- a/locations/spiders/ymca.py
+++ b/locations/spiders/ymca.py
@@ -44,11 +44,11 @@
class YmcaSpider(scrapy.Spider):
name = "ymca"
item_attributes = { 'brand': "YMCA" }
- allowed_domains = ["ymca.net"]
+ allowed_domains = ["ymca.org"]
download_delay = 0.5
def start_requests(self):
- url = 'https://www.ymca.net/find-your-y/?'
+ url = 'https://www.ymca.org/find-your-y/?'
for point in SINGLE_POINT_STATES:
_, lat, lon, state = point.strip().split(',')
| {"golden_diff": "diff --git a/locations/spiders/ymca.py b/locations/spiders/ymca.py\n--- a/locations/spiders/ymca.py\n+++ b/locations/spiders/ymca.py\n@@ -44,11 +44,11 @@\n class YmcaSpider(scrapy.Spider):\n name = \"ymca\"\n item_attributes = { 'brand': \"YMCA\" }\n- allowed_domains = [\"ymca.net\"]\n+ allowed_domains = [\"ymca.org\"]\n download_delay = 0.5\n \n def start_requests(self):\n- url = 'https://www.ymca.net/find-your-y/?'\n+ url = 'https://www.ymca.org/find-your-y/?'\n \n for point in SINGLE_POINT_STATES:\n _, lat, lon, state = point.strip().split(',')\n", "issue": "Spider ymca is broken\nDuring the global build at 2021-06-02-14-42-40, spider **ymca** failed with **0 features** and **87 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/ymca.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ymca.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ymca.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom datetime import datetime\nimport json\nimport re\nfrom urllib.parse import urlencode\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nSINGLE_POINT_STATES = [\n (\"0,64.0685,-152.2782,AK\"),\n (\"1,20.6538883744,-157.8631750471,HI\"),\n]\n\nHUNDRED_MILES_STATES = {\"MT\", \"WY\", \"SD\", \"ND\", \"NE\", \"NV\", \"AZ\", \"NM\", \"UT\", \"ID\"}\nTWENTYFIVE_MILES_STATES = {\"MD\", \"OH\", \"FL\", \"IL\", \"IA\", \"WI\", \"MN\", \"RI\", \"MA\", \"NH\",\n \"SC\", \"NC\", \"NJ\", \"WA\", \"CA\", \"PA\", \"NY\"}\nADDITONAL_CITIES = [\n \"Los Angeles, CA\",\n \"New York, NY\",\n \"Boston, MA\",\n \"Philadelphia, PA\",\n \"Dallas, TX\",\n \"Houston, TX\",\n \"Seattle, WA\",\n \"San Francisco, CA\",\n \"Denver, CO\",\n \"Minneapolis, MN\",\n \"Omaha, NE\",\n \"St. Louis, MO\",\n \"Chicago, IL\",\n \"Montgomery, AL\",\n \"Orlando, FL\",\n \"St. 
Petersburg, FL\",\n \"Atlanta, GA\",\n \"Poughkeepsie, NY\",\n \"Hartford, CT\",\n \"Concord, NH\"\n]\n\n\nclass YmcaSpider(scrapy.Spider):\n name = \"ymca\"\n item_attributes = { 'brand': \"YMCA\" }\n allowed_domains = [\"ymca.net\"]\n download_delay = 0.5\n\n def start_requests(self):\n url = 'https://www.ymca.net/find-your-y/?'\n\n for point in SINGLE_POINT_STATES:\n _, lat, lon, state = point.strip().split(',')\n params = {\"address\": \"{},{}\".format(lat, lon)}\n yield scrapy.Request(url=url + urlencode(params))\n\n with open('./locations/searchable_points/us_centroids_100mile_radius_state.csv') as points:\n next(points)\n for point in points:\n _, lat, lon, state = point.strip().split(',')\n if state in HUNDRED_MILES_STATES:\n params = {\"address\": \"{},{}\".format(lat, lon)}\n yield scrapy.Request(url=url + urlencode(params))\n\n with open('./locations/searchable_points/us_centroids_25mile_radius_state.csv') as points:\n next(points)\n for point in points:\n _, lat, lon, state = point.strip().split(',')\n if state in TWENTYFIVE_MILES_STATES:\n params = {\"address\": \"{},{}\".format(lat, lon)}\n yield scrapy.Request(url=url + urlencode(params))\n\n with open('./locations/searchable_points/us_centroids_50mile_radius_state.csv') as points:\n next(points)\n for point in points:\n _, lat, lon, state = point.strip().split(',')\n if state not in HUNDRED_MILES_STATES.union(TWENTYFIVE_MILES_STATES).union({\"AK\", \"HI\"}):\n params = {\"address\": \"{},{}\".format(lat, lon)}\n yield scrapy.Request(url=url + urlencode(params))\n\n for city in ADDITONAL_CITIES:\n params = {\"address\": city}\n yield scrapy.Request(url=url + urlencode(params))\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for hour in hours:\n hour = hour.strip()\n if hour == \"Hours of Operation:\":\n continue\n\n try:\n day, open_time, close_time = re.search(r'(.*?):\\s(.*?)\\s-\\s(.*?)$', hour).groups()\n except AttributeError: # closed\n continue\n open_time = open_time.replace('.', '')\n close_time = close_time.replace('.', '')\n\n open_time = (datetime.strptime(open_time, '%I:%M %p')\n if \":\" in open_time\n else datetime.strptime(open_time, '%I %p')).strftime('%H:%M')\n close_time = (datetime.strptime(close_time, '%I:%M %p')\n if \":\" in close_time\n else datetime.strptime(close_time, '%I %p')).strftime('%H:%M')\n\n opening_hours.add_range(day=day[:2],\n open_time=open_time,\n close_time=close_time,\n time_format='%H:%M')\n return opening_hours.as_opening_hours()\n\n def parse_location(self, response):\n p = response.xpath('//main//p[1]/text()').extract()\n p = [x.strip() for x in p if x.strip()]\n\n phone = p.pop(-1) # last line is phone number\n city, state, postcode = re.search(r'(.*?), ([A-Z]{2}) ([\\d-]+)$', p.pop(-1)).groups() # next to last line is city/state/zip\n address = \" \".join(p) # every thing left is street address\n\n properties = {\n 'ref': re.search(r'.+/?id=(.+)', response.url).group(1),\n 'name': response.xpath('//main//h1/text()').extract_first(),\n 'addr_full': address,\n 'city': city,\n 'state': state,\n 'postcode': postcode,\n 'country': 'US',\n 'lat': float(response.xpath('//div[@id=\"y-profile-position\"]/@data-latitude').extract_first()),\n 'lon': float(response.xpath('//div[@id=\"y-profile-position\"]/@data-longitude').extract_first()),\n 'phone': phone.replace(\"Phone: \", \"\"),\n 'website': response.xpath('//div[@id=\"y-profile-position\"]/@data-url').extract_first()\n }\n \n properties['opening_hours'] = 
self.parse_hours(response.xpath('//main//p[contains(text(), \"Hours\")]/text()').extract())\n \n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n urls = response.xpath('//main//ul[not(contains(@class, \"ymca-pagination\"))]/li/h3//a/@href').extract()\n\n for url in urls:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_location)\n", "path": "locations/spiders/ymca.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom datetime import datetime\nimport json\nimport re\nfrom urllib.parse import urlencode\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nSINGLE_POINT_STATES = [\n (\"0,64.0685,-152.2782,AK\"),\n (\"1,20.6538883744,-157.8631750471,HI\"),\n]\n\nHUNDRED_MILES_STATES = {\"MT\", \"WY\", \"SD\", \"ND\", \"NE\", \"NV\", \"AZ\", \"NM\", \"UT\", \"ID\"}\nTWENTYFIVE_MILES_STATES = {\"MD\", \"OH\", \"FL\", \"IL\", \"IA\", \"WI\", \"MN\", \"RI\", \"MA\", \"NH\",\n \"SC\", \"NC\", \"NJ\", \"WA\", \"CA\", \"PA\", \"NY\"}\nADDITONAL_CITIES = [\n \"Los Angeles, CA\",\n \"New York, NY\",\n \"Boston, MA\",\n \"Philadelphia, PA\",\n \"Dallas, TX\",\n \"Houston, TX\",\n \"Seattle, WA\",\n \"San Francisco, CA\",\n \"Denver, CO\",\n \"Minneapolis, MN\",\n \"Omaha, NE\",\n \"St. Louis, MO\",\n \"Chicago, IL\",\n \"Montgomery, AL\",\n \"Orlando, FL\",\n \"St. Petersburg, FL\",\n \"Atlanta, GA\",\n \"Poughkeepsie, NY\",\n \"Hartford, CT\",\n \"Concord, NH\"\n]\n\n\nclass YmcaSpider(scrapy.Spider):\n name = \"ymca\"\n item_attributes = { 'brand': \"YMCA\" }\n allowed_domains = [\"ymca.org\"]\n download_delay = 0.5\n\n def start_requests(self):\n url = 'https://www.ymca.org/find-your-y/?'\n\n for point in SINGLE_POINT_STATES:\n _, lat, lon, state = point.strip().split(',')\n params = {\"address\": \"{},{}\".format(lat, lon)}\n yield scrapy.Request(url=url + urlencode(params))\n\n with open('./locations/searchable_points/us_centroids_100mile_radius_state.csv') as points:\n next(points)\n for point in points:\n _, lat, lon, state = point.strip().split(',')\n if state in HUNDRED_MILES_STATES:\n params = {\"address\": \"{},{}\".format(lat, lon)}\n yield scrapy.Request(url=url + urlencode(params))\n\n with open('./locations/searchable_points/us_centroids_25mile_radius_state.csv') as points:\n next(points)\n for point in points:\n _, lat, lon, state = point.strip().split(',')\n if state in TWENTYFIVE_MILES_STATES:\n params = {\"address\": \"{},{}\".format(lat, lon)}\n yield scrapy.Request(url=url + urlencode(params))\n\n with open('./locations/searchable_points/us_centroids_50mile_radius_state.csv') as points:\n next(points)\n for point in points:\n _, lat, lon, state = point.strip().split(',')\n if state not in HUNDRED_MILES_STATES.union(TWENTYFIVE_MILES_STATES).union({\"AK\", \"HI\"}):\n params = {\"address\": \"{},{}\".format(lat, lon)}\n yield scrapy.Request(url=url + urlencode(params))\n\n for city in ADDITONAL_CITIES:\n params = {\"address\": city}\n yield scrapy.Request(url=url + urlencode(params))\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for hour in hours:\n hour = hour.strip()\n if hour == \"Hours of Operation:\":\n continue\n\n try:\n day, open_time, close_time = re.search(r'(.*?):\\s(.*?)\\s-\\s(.*?)$', hour).groups()\n except AttributeError: # closed\n continue\n open_time = open_time.replace('.', '')\n close_time = close_time.replace('.', '')\n\n open_time = (datetime.strptime(open_time, '%I:%M %p')\n if \":\" in open_time\n else datetime.strptime(open_time, 
'%I %p')).strftime('%H:%M')\n close_time = (datetime.strptime(close_time, '%I:%M %p')\n if \":\" in close_time\n else datetime.strptime(close_time, '%I %p')).strftime('%H:%M')\n\n opening_hours.add_range(day=day[:2],\n open_time=open_time,\n close_time=close_time,\n time_format='%H:%M')\n return opening_hours.as_opening_hours()\n\n def parse_location(self, response):\n p = response.xpath('//main//p[1]/text()').extract()\n p = [x.strip() for x in p if x.strip()]\n\n phone = p.pop(-1) # last line is phone number\n city, state, postcode = re.search(r'(.*?), ([A-Z]{2}) ([\\d-]+)$', p.pop(-1)).groups() # next to last line is city/state/zip\n address = \" \".join(p) # every thing left is street address\n\n properties = {\n 'ref': re.search(r'.+/?id=(.+)', response.url).group(1),\n 'name': response.xpath('//main//h1/text()').extract_first(),\n 'addr_full': address,\n 'city': city,\n 'state': state,\n 'postcode': postcode,\n 'country': 'US',\n 'lat': float(response.xpath('//div[@id=\"y-profile-position\"]/@data-latitude').extract_first()),\n 'lon': float(response.xpath('//div[@id=\"y-profile-position\"]/@data-longitude').extract_first()),\n 'phone': phone.replace(\"Phone: \", \"\"),\n 'website': response.xpath('//div[@id=\"y-profile-position\"]/@data-url').extract_first()\n }\n \n properties['opening_hours'] = self.parse_hours(response.xpath('//main//p[contains(text(), \"Hours\")]/text()').extract())\n \n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n urls = response.xpath('//main//ul[not(contains(@class, \"ymca-pagination\"))]/li/h3//a/@href').extract()\n\n for url in urls:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_location)\n", "path": "locations/spiders/ymca.py"}]} | 2,155 | 179 |
gh_patches_debug_40667 | rasdani/github-patches | git_diff | nilearn__nilearn-1054 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
expandpath should be applied to the "memory" argument
The CacheMixin should be modified so that when the argument is a basestring, it is expanded, with the same logic to turn this off as the globbing that we already have.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nilearn/__init__.py`
Content:
```
1 """
2 Machine Learning module for NeuroImaging in python
3 ==================================================
4
5 Documentation is available in the docstrings and online at
6 http://nilearn.github.io.
7
8 Contents
9 --------
10 Nilearn aims at simplifying the use of the scikit-learn package in the context of
11 neuroimaging. It provides specific input/output functions, algorithms and
12 visualization tools.
13
14 Submodules
15 ---------
16 datasets --- Utilities to download NeuroImaging datasets
17 decoding --- Decoding tools and algorithms
18 decomposition --- Includes a subject level variant of the ICA
19 algorithm called Canonical ICA
20 connectome --- Set of tools for computing functional connectivity matrices
21 and for sparse multi-subjects learning of Gaussian graphical models
22 image --- Set of functions defining mathematical operations
23 working on Niimg-like objects
24 input_data --- includes scikit-learn tranformers and tools to
25 preprocess neuro-imaging data
26 masking --- Utilities to compute and operate on brain masks
27 mass_univariate --- Defines a Massively Univariate Linear Model
28 estimated with OLS and permutation test
29 plotting --- Plotting code for nilearn
30 region --- Set of functions for extracting region-defined
31 signals
32 signal --- Set of preprocessing functions for time series
33 """
34
35 import gzip
36
37 from .version import _check_module_dependencies, __version__
38
39 _check_module_dependencies()
40
41 # Monkey-patch gzip to have faster reads on large gzip files
42 if hasattr(gzip.GzipFile, 'max_read_chunk'):
43 gzip.GzipFile.max_read_chunk = 100 * 1024 * 1024 # 100Mb
44
45 # Boolean controlling the default globbing technique when using check_niimg
46 # Default value it True, set it to False to completely deactivate use of glob
47 # module
48 EXPAND_PATH_WILDCARDS = True
49
50 # Boolean controlling whether the joblib caches should be
51 # flushed if the version of certain modules changes (eg nibabel, as it
52 # does not respect the backward compatibility in some of its internal
53 # structures
54 # This is used in nilearn._utils.cache_mixin
55 CHECK_CACHE_VERSION = True
56
57 # list all submodules available in nilearn and version
58 __all__ = ['datasets', 'decoding', 'decomposition', 'connectome',
59 'image', 'input_data', 'masking', 'mass_univariate', 'plotting',
60 'region', 'signal', '__version__']
61
```
Path: `nilearn/_utils/cache_mixin.py`
Content:
```
1 """
2 Mixin for cache with joblib
3 """
4 # Author: Gael Varoquaux, Alexandre Abraham, Philippe Gervais
5 # License: simplified BSD
6
7 import json
8 import warnings
9 import os
10 import shutil
11 from distutils.version import LooseVersion
12
13 import nibabel
14 from sklearn.externals.joblib import Memory
15
16 MEMORY_CLASSES = (Memory, )
17
18 try:
19 from joblib import Memory as JoblibMemory
20 MEMORY_CLASSES = (Memory, JoblibMemory)
21 except ImportError:
22 pass
23
24 import nilearn
25
26 from .compat import _basestring
27
28 __CACHE_CHECKED = dict()
29
30
31 def _safe_cache(memory, func, **kwargs):
32 """ A wrapper for mem.cache that flushes the cache if the version
33 number of nibabel has changed.
34 """
35 cachedir = memory.cachedir
36
37 if cachedir is None or cachedir in __CACHE_CHECKED:
38 return memory.cache(func, **kwargs)
39
40 version_file = os.path.join(cachedir, 'module_versions.json')
41
42 versions = dict()
43 if os.path.exists(version_file):
44 with open(version_file, 'r') as _version_file:
45 versions = json.load(_version_file)
46
47 modules = (nibabel, )
48 # Keep only the major + minor version numbers
49 my_versions = dict((m.__name__, LooseVersion(m.__version__).version[:2])
50 for m in modules)
51 commons = set(versions.keys()).intersection(set(my_versions.keys()))
52 collisions = [m for m in commons if versions[m] != my_versions[m]]
53
54 # Flush cache if version collision
55 if len(collisions) > 0:
56 if nilearn.CHECK_CACHE_VERSION:
57 warnings.warn("Incompatible cache in %s: "
58 "different version of nibabel. Deleting "
59 "the cache. Put nilearn.CHECK_CACHE_VERSION "
60 "to false to avoid this behavior."
61 % cachedir)
62 try:
63 tmp_dir = (os.path.split(cachedir)[:-1]
64 + ('old_%i' % os.getpid(), ))
65 tmp_dir = os.path.join(*tmp_dir)
66 # We use rename + unlink to be more robust to race
67 # conditions
68 os.rename(cachedir, tmp_dir)
69 shutil.rmtree(tmp_dir)
70 except OSError:
71 # Another process could have removed this dir
72 pass
73
74 try:
75 os.makedirs(cachedir)
76 except OSError:
77 # File exists?
78 pass
79 else:
80 warnings.warn("Incompatible cache in %s: "
81 "old version of nibabel." % cachedir)
82
83 # Write json files if configuration is different
84 if versions != my_versions:
85 with open(version_file, 'w') as _version_file:
86 json.dump(my_versions, _version_file)
87
88 __CACHE_CHECKED[cachedir] = True
89
90 return memory.cache(func, **kwargs)
91
92
93 def cache(func, memory, func_memory_level=None, memory_level=None,
94 **kwargs):
95 """ Return a joblib.Memory object.
96
97 The memory_level determines the level above which the wrapped
98 function output is cached. By specifying a numeric value for
99 this level, the user can to control the amount of cache memory
100 used. This function will cache the function call or not
101 depending on the cache level.
102
103 Parameters
104 ----------
105 func: function
106 The function which output is to be cached.
107
108 memory: instance of joblib.Memory or string
109 Used to cache the function call.
110
111 func_memory_level: int, optional
112 The memory_level from which caching must be enabled for the wrapped
113 function.
114
115 memory_level: int, optional
116 The memory_level used to determine if function call must
117 be cached or not (if user_memory_level is equal of greater than
118 func_memory_level the function is cached)
119
120 kwargs: keyword arguments
121 The keyword arguments passed to memory.cache
122
123 Returns
124 -------
125 mem: joblib.MemorizedFunc
126 object that wraps the function func. This object may be
127 a no-op, if the requested level is lower than the value given
128 to _cache()). For consistency, a joblib.Memory object is always
129 returned.
130 """
131 verbose = kwargs.get('verbose', 0)
132
133 # memory_level and func_memory_level must be both None or both integers.
134 memory_levels = [memory_level, func_memory_level]
135 both_params_integers = all(isinstance(lvl, int) for lvl in memory_levels)
136 both_params_none = all(lvl is None for lvl in memory_levels)
137
138 if not (both_params_integers or both_params_none):
139 raise ValueError('Reference and user memory levels must be both None '
140 'or both integers.')
141
142 if memory is not None and (func_memory_level is None or
143 memory_level >= func_memory_level):
144 if isinstance(memory, _basestring):
145 memory = Memory(cachedir=memory, verbose=verbose)
146 if not isinstance(memory, MEMORY_CLASSES):
147 raise TypeError("'memory' argument must be a string or a "
148 "joblib.Memory object. "
149 "%s %s was given." % (memory, type(memory)))
150 if (memory.cachedir is None and memory_level is not None
151 and memory_level > 1):
152 warnings.warn("Caching has been enabled (memory_level = %d) "
153 "but no Memory object or path has been provided"
154 " (parameter memory). Caching deactivated for "
155 "function %s." %
156 (memory_level, func.__name__),
157 stacklevel=2)
158 else:
159 memory = Memory(cachedir=None, verbose=verbose)
160 return _safe_cache(memory, func, **kwargs)
161
162
163 class CacheMixin(object):
164 """Mixin to add caching to a class.
165
166 This class is a thin layer on top of joblib.Memory, that mainly adds a
167 "caching level", similar to a "log level".
168
169 Usage: to cache the results of a method, wrap it in self._cache()
170 defined by this class. Caching is performed only if the user-specified
171 cache level (self._memory_level) is greater than the value given as a
172 parameter to self._cache(). See _cache() documentation for details.
173 """
174 def _cache(self, func, func_memory_level=1, **kwargs):
175 """Return a joblib.Memory object.
176
177 The memory_level determines the level above which the wrapped
178 function output is cached. By specifying a numeric value for
179 this level, the user can to control the amount of cache memory
180 used. This function will cache the function call or not
181 depending on the cache level.
182
183 Parameters
184 ----------
185 func: function
186 The function the output of which is to be cached.
187
188 memory_level: int
189 The memory_level from which caching must be enabled for the wrapped
190 function.
191
192 Returns
193 -------
194 mem: joblib.Memory
195 object that wraps the function func. This object may be
196 a no-op, if the requested level is lower than the value given
197 to _cache()). For consistency, a joblib.Memory object is always
198 returned.
199
200 """
201
202 verbose = getattr(self, 'verbose', 0)
203
204 # Creates attributes if they don't exist
205 # This is to make creating them in __init__() optional.
206 if not hasattr(self, "memory_level"):
207 self.memory_level = 0
208 if not hasattr(self, "memory"):
209 self.memory = Memory(cachedir=None, verbose=verbose)
210 if isinstance(self.memory, _basestring):
211 self.memory = Memory(cachedir=self.memory, verbose=verbose)
212
213 # If cache level is 0 but a memory object has been provided, set
214 # memory_level to 1 with a warning.
215 if self.memory_level == 0:
216 if (isinstance(self.memory, _basestring)
217 or self.memory.cachedir is not None):
218 warnings.warn("memory_level is currently set to 0 but "
219 "a Memory object has been provided. "
220 "Setting memory_level to 1.")
221 self.memory_level = 1
222
223 return cache(func, self.memory, func_memory_level=func_memory_level,
224 memory_level=self.memory_level, **kwargs)
225
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nilearn/__init__.py b/nilearn/__init__.py
--- a/nilearn/__init__.py
+++ b/nilearn/__init__.py
@@ -43,8 +43,9 @@
gzip.GzipFile.max_read_chunk = 100 * 1024 * 1024 # 100Mb
# Boolean controlling the default globbing technique when using check_niimg
-# Default value it True, set it to False to completely deactivate use of glob
-# module
+# and the os.path.expanduser usage in CacheMixin.
+# Default value it True, set it to False to completely deactivate this
+# behavior.
EXPAND_PATH_WILDCARDS = True
# Boolean controlling whether the joblib caches should be
diff --git a/nilearn/_utils/cache_mixin.py b/nilearn/_utils/cache_mixin.py
--- a/nilearn/_utils/cache_mixin.py
+++ b/nilearn/_utils/cache_mixin.py
@@ -208,17 +208,47 @@
if not hasattr(self, "memory"):
self.memory = Memory(cachedir=None, verbose=verbose)
if isinstance(self.memory, _basestring):
- self.memory = Memory(cachedir=self.memory, verbose=verbose)
+ cache_dir = self.memory
+ if nilearn.EXPAND_PATH_WILDCARDS:
+ cache_dir = os.path.expanduser(cache_dir)
+
+ # Perform some verifications on given path.
+ split_cache_dir = os.path.split(cache_dir)
+ if (len(split_cache_dir) > 1 and
+ (not os.path.exists(split_cache_dir[0]) and
+ split_cache_dir[0] != '')):
+ if (not nilearn.EXPAND_PATH_WILDCARDS and
+ cache_dir.startswith("~")):
+ # Maybe the user want to enable expanded user path.
+ error_msg = ("Given cache path parent directory doesn't "
+ "exists, you gave '{0}'. Enabling "
+ "nilearn.EXPAND_PATH_WILDCARDS could solve "
+ "this issue.".format(split_cache_dir[0]))
+ elif self.memory.startswith("~"):
+ # Path built on top of expanded user path doesn't exist.
+ error_msg = ("Given cache path parent directory doesn't "
+ "exists, you gave '{0}' which was expanded "
+ "as '{1}' but doesn't exist either. Use "
+ "nilearn.EXPAND_PATH_WILDCARDS to deactivate "
+ "auto expand user path (~) behavior."
+ .format(split_cache_dir[0],
+ os.path.dirname(self.memory)))
+ else:
+ # The given cache base path doesn't exist.
+ error_msg = ("Given cache path parent directory doesn't "
+ "exists, you gave '{0}'."
+ .format(split_cache_dir[0]))
+ raise ValueError(error_msg)
+
+ self.memory = Memory(cachedir=cache_dir, verbose=verbose)
# If cache level is 0 but a memory object has been provided, set
# memory_level to 1 with a warning.
- if self.memory_level == 0:
- if (isinstance(self.memory, _basestring)
- or self.memory.cachedir is not None):
- warnings.warn("memory_level is currently set to 0 but "
- "a Memory object has been provided. "
- "Setting memory_level to 1.")
- self.memory_level = 1
+ if self.memory_level == 0 and self.memory.cachedir is not None:
+ warnings.warn("memory_level is currently set to 0 but "
+ "a Memory object has been provided. "
+ "Setting memory_level to 1.")
+ self.memory_level = 1
return cache(func, self.memory, func_memory_level=func_memory_level,
memory_level=self.memory_level, **kwargs)
| {"golden_diff": "diff --git a/nilearn/__init__.py b/nilearn/__init__.py\n--- a/nilearn/__init__.py\n+++ b/nilearn/__init__.py\n@@ -43,8 +43,9 @@\n gzip.GzipFile.max_read_chunk = 100 * 1024 * 1024 # 100Mb\n \n # Boolean controlling the default globbing technique when using check_niimg\n-# Default value it True, set it to False to completely deactivate use of glob\n-# module\n+# and the os.path.expanduser usage in CacheMixin.\n+# Default value it True, set it to False to completely deactivate this\n+# behavior.\n EXPAND_PATH_WILDCARDS = True\n \n # Boolean controlling whether the joblib caches should be\ndiff --git a/nilearn/_utils/cache_mixin.py b/nilearn/_utils/cache_mixin.py\n--- a/nilearn/_utils/cache_mixin.py\n+++ b/nilearn/_utils/cache_mixin.py\n@@ -208,17 +208,47 @@\n if not hasattr(self, \"memory\"):\n self.memory = Memory(cachedir=None, verbose=verbose)\n if isinstance(self.memory, _basestring):\n- self.memory = Memory(cachedir=self.memory, verbose=verbose)\n+ cache_dir = self.memory\n+ if nilearn.EXPAND_PATH_WILDCARDS:\n+ cache_dir = os.path.expanduser(cache_dir)\n+\n+ # Perform some verifications on given path.\n+ split_cache_dir = os.path.split(cache_dir)\n+ if (len(split_cache_dir) > 1 and\n+ (not os.path.exists(split_cache_dir[0]) and\n+ split_cache_dir[0] != '')):\n+ if (not nilearn.EXPAND_PATH_WILDCARDS and\n+ cache_dir.startswith(\"~\")):\n+ # Maybe the user want to enable expanded user path.\n+ error_msg = (\"Given cache path parent directory doesn't \"\n+ \"exists, you gave '{0}'. Enabling \"\n+ \"nilearn.EXPAND_PATH_WILDCARDS could solve \"\n+ \"this issue.\".format(split_cache_dir[0]))\n+ elif self.memory.startswith(\"~\"):\n+ # Path built on top of expanded user path doesn't exist.\n+ error_msg = (\"Given cache path parent directory doesn't \"\n+ \"exists, you gave '{0}' which was expanded \"\n+ \"as '{1}' but doesn't exist either. Use \"\n+ \"nilearn.EXPAND_PATH_WILDCARDS to deactivate \"\n+ \"auto expand user path (~) behavior.\"\n+ .format(split_cache_dir[0],\n+ os.path.dirname(self.memory)))\n+ else:\n+ # The given cache base path doesn't exist.\n+ error_msg = (\"Given cache path parent directory doesn't \"\n+ \"exists, you gave '{0}'.\"\n+ .format(split_cache_dir[0]))\n+ raise ValueError(error_msg)\n+\n+ self.memory = Memory(cachedir=cache_dir, verbose=verbose)\n \n # If cache level is 0 but a memory object has been provided, set\n # memory_level to 1 with a warning.\n- if self.memory_level == 0:\n- if (isinstance(self.memory, _basestring)\n- or self.memory.cachedir is not None):\n- warnings.warn(\"memory_level is currently set to 0 but \"\n- \"a Memory object has been provided. \"\n- \"Setting memory_level to 1.\")\n- self.memory_level = 1\n+ if self.memory_level == 0 and self.memory.cachedir is not None:\n+ warnings.warn(\"memory_level is currently set to 0 but \"\n+ \"a Memory object has been provided. 
\"\n+ \"Setting memory_level to 1.\")\n+ self.memory_level = 1\n \n return cache(func, self.memory, func_memory_level=func_memory_level,\n memory_level=self.memory_level, **kwargs)\n", "issue": "expandpath should be applied to the \"memory\" argument\nThe CacheMixin should be modified so that when the argument is a basestring, it is expanded, with the same logic to turn this off as the globbing that we already have.\n\n", "before_files": [{"content": "\"\"\"\nMachine Learning module for NeuroImaging in python\n==================================================\n\nDocumentation is available in the docstrings and online at\nhttp://nilearn.github.io.\n\nContents\n--------\nNilearn aims at simplifying the use of the scikit-learn package in the context of\nneuroimaging. It provides specific input/output functions, algorithms and\nvisualization tools.\n\nSubmodules\n---------\ndatasets --- Utilities to download NeuroImaging datasets\ndecoding --- Decoding tools and algorithms\ndecomposition --- Includes a subject level variant of the ICA\n algorithm called Canonical ICA\nconnectome --- Set of tools for computing functional connectivity matrices\n and for sparse multi-subjects learning of Gaussian graphical models\nimage --- Set of functions defining mathematical operations\n working on Niimg-like objects\ninput_data --- includes scikit-learn tranformers and tools to\n preprocess neuro-imaging data\nmasking --- Utilities to compute and operate on brain masks\nmass_univariate --- Defines a Massively Univariate Linear Model\n estimated with OLS and permutation test\nplotting --- Plotting code for nilearn\nregion --- Set of functions for extracting region-defined\n signals\nsignal --- Set of preprocessing functions for time series\n\"\"\"\n\nimport gzip\n\nfrom .version import _check_module_dependencies, __version__\n\n_check_module_dependencies()\n\n# Monkey-patch gzip to have faster reads on large gzip files\nif hasattr(gzip.GzipFile, 'max_read_chunk'):\n gzip.GzipFile.max_read_chunk = 100 * 1024 * 1024 # 100Mb\n\n# Boolean controlling the default globbing technique when using check_niimg\n# Default value it True, set it to False to completely deactivate use of glob\n# module\nEXPAND_PATH_WILDCARDS = True\n\n# Boolean controlling whether the joblib caches should be\n# flushed if the version of certain modules changes (eg nibabel, as it\n# does not respect the backward compatibility in some of its internal\n# structures\n# This is used in nilearn._utils.cache_mixin\nCHECK_CACHE_VERSION = True\n\n# list all submodules available in nilearn and version\n__all__ = ['datasets', 'decoding', 'decomposition', 'connectome',\n 'image', 'input_data', 'masking', 'mass_univariate', 'plotting',\n 'region', 'signal', '__version__']\n", "path": "nilearn/__init__.py"}, {"content": "\"\"\"\nMixin for cache with joblib\n\"\"\"\n# Author: Gael Varoquaux, Alexandre Abraham, Philippe Gervais\n# License: simplified BSD\n\nimport json\nimport warnings\nimport os\nimport shutil\nfrom distutils.version import LooseVersion\n\nimport nibabel\nfrom sklearn.externals.joblib import Memory\n\nMEMORY_CLASSES = (Memory, )\n\ntry:\n from joblib import Memory as JoblibMemory\n MEMORY_CLASSES = (Memory, JoblibMemory)\nexcept ImportError:\n pass\n\nimport nilearn\n\nfrom .compat import _basestring\n\n__CACHE_CHECKED = dict()\n\n\ndef _safe_cache(memory, func, **kwargs):\n \"\"\" A wrapper for mem.cache that flushes the cache if the version\n number of nibabel has changed.\n \"\"\"\n cachedir = memory.cachedir\n\n if cachedir is None or 
cachedir in __CACHE_CHECKED:\n return memory.cache(func, **kwargs)\n\n version_file = os.path.join(cachedir, 'module_versions.json')\n\n versions = dict()\n if os.path.exists(version_file):\n with open(version_file, 'r') as _version_file:\n versions = json.load(_version_file)\n\n modules = (nibabel, )\n # Keep only the major + minor version numbers\n my_versions = dict((m.__name__, LooseVersion(m.__version__).version[:2])\n for m in modules)\n commons = set(versions.keys()).intersection(set(my_versions.keys()))\n collisions = [m for m in commons if versions[m] != my_versions[m]]\n\n # Flush cache if version collision\n if len(collisions) > 0:\n if nilearn.CHECK_CACHE_VERSION:\n warnings.warn(\"Incompatible cache in %s: \"\n \"different version of nibabel. Deleting \"\n \"the cache. Put nilearn.CHECK_CACHE_VERSION \"\n \"to false to avoid this behavior.\"\n % cachedir)\n try:\n tmp_dir = (os.path.split(cachedir)[:-1]\n + ('old_%i' % os.getpid(), ))\n tmp_dir = os.path.join(*tmp_dir)\n # We use rename + unlink to be more robust to race\n # conditions\n os.rename(cachedir, tmp_dir)\n shutil.rmtree(tmp_dir)\n except OSError:\n # Another process could have removed this dir\n pass\n\n try:\n os.makedirs(cachedir)\n except OSError:\n # File exists?\n pass\n else:\n warnings.warn(\"Incompatible cache in %s: \"\n \"old version of nibabel.\" % cachedir)\n\n # Write json files if configuration is different\n if versions != my_versions:\n with open(version_file, 'w') as _version_file:\n json.dump(my_versions, _version_file)\n\n __CACHE_CHECKED[cachedir] = True\n\n return memory.cache(func, **kwargs)\n\n\ndef cache(func, memory, func_memory_level=None, memory_level=None,\n **kwargs):\n \"\"\" Return a joblib.Memory object.\n\n The memory_level determines the level above which the wrapped\n function output is cached. By specifying a numeric value for\n this level, the user can to control the amount of cache memory\n used. This function will cache the function call or not\n depending on the cache level.\n\n Parameters\n ----------\n func: function\n The function which output is to be cached.\n\n memory: instance of joblib.Memory or string\n Used to cache the function call.\n\n func_memory_level: int, optional\n The memory_level from which caching must be enabled for the wrapped\n function.\n\n memory_level: int, optional\n The memory_level used to determine if function call must\n be cached or not (if user_memory_level is equal of greater than\n func_memory_level the function is cached)\n\n kwargs: keyword arguments\n The keyword arguments passed to memory.cache\n\n Returns\n -------\n mem: joblib.MemorizedFunc\n object that wraps the function func. This object may be\n a no-op, if the requested level is lower than the value given\n to _cache()). 
For consistency, a joblib.Memory object is always\n returned.\n \"\"\"\n verbose = kwargs.get('verbose', 0)\n\n # memory_level and func_memory_level must be both None or both integers.\n memory_levels = [memory_level, func_memory_level]\n both_params_integers = all(isinstance(lvl, int) for lvl in memory_levels)\n both_params_none = all(lvl is None for lvl in memory_levels)\n\n if not (both_params_integers or both_params_none):\n raise ValueError('Reference and user memory levels must be both None '\n 'or both integers.')\n\n if memory is not None and (func_memory_level is None or\n memory_level >= func_memory_level):\n if isinstance(memory, _basestring):\n memory = Memory(cachedir=memory, verbose=verbose)\n if not isinstance(memory, MEMORY_CLASSES):\n raise TypeError(\"'memory' argument must be a string or a \"\n \"joblib.Memory object. \"\n \"%s %s was given.\" % (memory, type(memory)))\n if (memory.cachedir is None and memory_level is not None\n and memory_level > 1):\n warnings.warn(\"Caching has been enabled (memory_level = %d) \"\n \"but no Memory object or path has been provided\"\n \" (parameter memory). Caching deactivated for \"\n \"function %s.\" %\n (memory_level, func.__name__),\n stacklevel=2)\n else:\n memory = Memory(cachedir=None, verbose=verbose)\n return _safe_cache(memory, func, **kwargs)\n\n\nclass CacheMixin(object):\n \"\"\"Mixin to add caching to a class.\n\n This class is a thin layer on top of joblib.Memory, that mainly adds a\n \"caching level\", similar to a \"log level\".\n\n Usage: to cache the results of a method, wrap it in self._cache()\n defined by this class. Caching is performed only if the user-specified\n cache level (self._memory_level) is greater than the value given as a\n parameter to self._cache(). See _cache() documentation for details.\n \"\"\"\n def _cache(self, func, func_memory_level=1, **kwargs):\n \"\"\"Return a joblib.Memory object.\n\n The memory_level determines the level above which the wrapped\n function output is cached. By specifying a numeric value for\n this level, the user can to control the amount of cache memory\n used. This function will cache the function call or not\n depending on the cache level.\n\n Parameters\n ----------\n func: function\n The function the output of which is to be cached.\n\n memory_level: int\n The memory_level from which caching must be enabled for the wrapped\n function.\n\n Returns\n -------\n mem: joblib.Memory\n object that wraps the function func. This object may be\n a no-op, if the requested level is lower than the value given\n to _cache()). For consistency, a joblib.Memory object is always\n returned.\n\n \"\"\"\n\n verbose = getattr(self, 'verbose', 0)\n\n # Creates attributes if they don't exist\n # This is to make creating them in __init__() optional.\n if not hasattr(self, \"memory_level\"):\n self.memory_level = 0\n if not hasattr(self, \"memory\"):\n self.memory = Memory(cachedir=None, verbose=verbose)\n if isinstance(self.memory, _basestring):\n self.memory = Memory(cachedir=self.memory, verbose=verbose)\n\n # If cache level is 0 but a memory object has been provided, set\n # memory_level to 1 with a warning.\n if self.memory_level == 0:\n if (isinstance(self.memory, _basestring)\n or self.memory.cachedir is not None):\n warnings.warn(\"memory_level is currently set to 0 but \"\n \"a Memory object has been provided. 
\"\n \"Setting memory_level to 1.\")\n self.memory_level = 1\n\n return cache(func, self.memory, func_memory_level=func_memory_level,\n memory_level=self.memory_level, **kwargs)\n", "path": "nilearn/_utils/cache_mixin.py"}], "after_files": [{"content": "\"\"\"\nMachine Learning module for NeuroImaging in python\n==================================================\n\nDocumentation is available in the docstrings and online at\nhttp://nilearn.github.io.\n\nContents\n--------\nNilearn aims at simplifying the use of the scikit-learn package in the context of\nneuroimaging. It provides specific input/output functions, algorithms and\nvisualization tools.\n\nSubmodules\n---------\ndatasets --- Utilities to download NeuroImaging datasets\ndecoding --- Decoding tools and algorithms\ndecomposition --- Includes a subject level variant of the ICA\n algorithm called Canonical ICA\nconnectome --- Set of tools for computing functional connectivity matrices\n and for sparse multi-subjects learning of Gaussian graphical models\nimage --- Set of functions defining mathematical operations\n working on Niimg-like objects\ninput_data --- includes scikit-learn tranformers and tools to\n preprocess neuro-imaging data\nmasking --- Utilities to compute and operate on brain masks\nmass_univariate --- Defines a Massively Univariate Linear Model\n estimated with OLS and permutation test\nplotting --- Plotting code for nilearn\nregion --- Set of functions for extracting region-defined\n signals\nsignal --- Set of preprocessing functions for time series\n\"\"\"\n\nimport gzip\n\nfrom .version import _check_module_dependencies, __version__\n\n_check_module_dependencies()\n\n# Monkey-patch gzip to have faster reads on large gzip files\nif hasattr(gzip.GzipFile, 'max_read_chunk'):\n gzip.GzipFile.max_read_chunk = 100 * 1024 * 1024 # 100Mb\n\n# Boolean controlling the default globbing technique when using check_niimg\n# and the os.path.expanduser usage in CacheMixin.\n# Default value it True, set it to False to completely deactivate this\n# behavior.\nEXPAND_PATH_WILDCARDS = True\n\n# Boolean controlling whether the joblib caches should be\n# flushed if the version of certain modules changes (eg nibabel, as it\n# does not respect the backward compatibility in some of its internal\n# structures\n# This is used in nilearn._utils.cache_mixin\nCHECK_CACHE_VERSION = True\n\n# list all submodules available in nilearn and version\n__all__ = ['datasets', 'decoding', 'decomposition', 'connectome',\n 'image', 'input_data', 'masking', 'mass_univariate', 'plotting',\n 'region', 'signal', '__version__']\n", "path": "nilearn/__init__.py"}, {"content": "\"\"\"\nMixin for cache with joblib\n\"\"\"\n# Author: Gael Varoquaux, Alexandre Abraham, Philippe Gervais\n# License: simplified BSD\n\nimport json\nimport warnings\nimport os\nimport shutil\nfrom distutils.version import LooseVersion\n\nimport nibabel\nfrom sklearn.externals.joblib import Memory\n\nMEMORY_CLASSES = (Memory, )\n\ntry:\n from joblib import Memory as JoblibMemory\n MEMORY_CLASSES = (Memory, JoblibMemory)\nexcept ImportError:\n pass\n\nimport nilearn\n\nfrom .compat import _basestring\n\n__CACHE_CHECKED = dict()\n\n\ndef _safe_cache(memory, func, **kwargs):\n \"\"\" A wrapper for mem.cache that flushes the cache if the version\n number of nibabel has changed.\n \"\"\"\n cachedir = memory.cachedir\n\n if cachedir is None or cachedir in __CACHE_CHECKED:\n return memory.cache(func, **kwargs)\n\n version_file = os.path.join(cachedir, 'module_versions.json')\n\n versions = 
dict()\n if os.path.exists(version_file):\n with open(version_file, 'r') as _version_file:\n versions = json.load(_version_file)\n\n modules = (nibabel, )\n # Keep only the major + minor version numbers\n my_versions = dict((m.__name__, LooseVersion(m.__version__).version[:2])\n for m in modules)\n commons = set(versions.keys()).intersection(set(my_versions.keys()))\n collisions = [m for m in commons if versions[m] != my_versions[m]]\n\n # Flush cache if version collision\n if len(collisions) > 0:\n if nilearn.CHECK_CACHE_VERSION:\n warnings.warn(\"Incompatible cache in %s: \"\n \"different version of nibabel. Deleting \"\n \"the cache. Put nilearn.CHECK_CACHE_VERSION \"\n \"to false to avoid this behavior.\"\n % cachedir)\n try:\n tmp_dir = (os.path.split(cachedir)[:-1]\n + ('old_%i' % os.getpid(), ))\n tmp_dir = os.path.join(*tmp_dir)\n # We use rename + unlink to be more robust to race\n # conditions\n os.rename(cachedir, tmp_dir)\n shutil.rmtree(tmp_dir)\n except OSError:\n # Another process could have removed this dir\n pass\n\n try:\n os.makedirs(cachedir)\n except OSError:\n # File exists?\n pass\n else:\n warnings.warn(\"Incompatible cache in %s: \"\n \"old version of nibabel.\" % cachedir)\n\n # Write json files if configuration is different\n if versions != my_versions:\n with open(version_file, 'w') as _version_file:\n json.dump(my_versions, _version_file)\n\n __CACHE_CHECKED[cachedir] = True\n\n return memory.cache(func, **kwargs)\n\n\ndef cache(func, memory, func_memory_level=None, memory_level=None,\n **kwargs):\n \"\"\" Return a joblib.Memory object.\n\n The memory_level determines the level above which the wrapped\n function output is cached. By specifying a numeric value for\n this level, the user can to control the amount of cache memory\n used. This function will cache the function call or not\n depending on the cache level.\n\n Parameters\n ----------\n func: function\n The function which output is to be cached.\n\n memory: instance of joblib.Memory or string\n Used to cache the function call.\n\n func_memory_level: int, optional\n The memory_level from which caching must be enabled for the wrapped\n function.\n\n memory_level: int, optional\n The memory_level used to determine if function call must\n be cached or not (if user_memory_level is equal of greater than\n func_memory_level the function is cached)\n\n kwargs: keyword arguments\n The keyword arguments passed to memory.cache\n\n Returns\n -------\n mem: joblib.MemorizedFunc\n object that wraps the function func. This object may be\n a no-op, if the requested level is lower than the value given\n to _cache()). For consistency, a joblib.Memory object is always\n returned.\n \"\"\"\n verbose = kwargs.get('verbose', 0)\n\n # memory_level and func_memory_level must be both None or both integers.\n memory_levels = [memory_level, func_memory_level]\n both_params_integers = all(isinstance(lvl, int) for lvl in memory_levels)\n both_params_none = all(lvl is None for lvl in memory_levels)\n\n if not (both_params_integers or both_params_none):\n raise ValueError('Reference and user memory levels must be both None '\n 'or both integers.')\n\n if memory is not None and (func_memory_level is None or\n memory_level >= func_memory_level):\n if isinstance(memory, _basestring):\n memory = Memory(cachedir=memory, verbose=verbose)\n if not isinstance(memory, MEMORY_CLASSES):\n raise TypeError(\"'memory' argument must be a string or a \"\n \"joblib.Memory object. 
\"\n \"%s %s was given.\" % (memory, type(memory)))\n if (memory.cachedir is None and memory_level is not None\n and memory_level > 1):\n warnings.warn(\"Caching has been enabled (memory_level = %d) \"\n \"but no Memory object or path has been provided\"\n \" (parameter memory). Caching deactivated for \"\n \"function %s.\" %\n (memory_level, func.__name__),\n stacklevel=2)\n else:\n memory = Memory(cachedir=None, verbose=verbose)\n return _safe_cache(memory, func, **kwargs)\n\n\nclass CacheMixin(object):\n \"\"\"Mixin to add caching to a class.\n\n This class is a thin layer on top of joblib.Memory, that mainly adds a\n \"caching level\", similar to a \"log level\".\n\n Usage: to cache the results of a method, wrap it in self._cache()\n defined by this class. Caching is performed only if the user-specified\n cache level (self._memory_level) is greater than the value given as a\n parameter to self._cache(). See _cache() documentation for details.\n \"\"\"\n def _cache(self, func, func_memory_level=1, **kwargs):\n \"\"\"Return a joblib.Memory object.\n\n The memory_level determines the level above which the wrapped\n function output is cached. By specifying a numeric value for\n this level, the user can to control the amount of cache memory\n used. This function will cache the function call or not\n depending on the cache level.\n\n Parameters\n ----------\n func: function\n The function the output of which is to be cached.\n\n memory_level: int\n The memory_level from which caching must be enabled for the wrapped\n function.\n\n Returns\n -------\n mem: joblib.Memory\n object that wraps the function func. This object may be\n a no-op, if the requested level is lower than the value given\n to _cache()). For consistency, a joblib.Memory object is always\n returned.\n\n \"\"\"\n\n verbose = getattr(self, 'verbose', 0)\n\n # Creates attributes if they don't exist\n # This is to make creating them in __init__() optional.\n if not hasattr(self, \"memory_level\"):\n self.memory_level = 0\n if not hasattr(self, \"memory\"):\n self.memory = Memory(cachedir=None, verbose=verbose)\n if isinstance(self.memory, _basestring):\n cache_dir = self.memory\n if nilearn.EXPAND_PATH_WILDCARDS:\n cache_dir = os.path.expanduser(cache_dir)\n\n # Perform some verifications on given path.\n split_cache_dir = os.path.split(cache_dir)\n if (len(split_cache_dir) > 1 and\n (not os.path.exists(split_cache_dir[0]) and\n split_cache_dir[0] != '')):\n if (not nilearn.EXPAND_PATH_WILDCARDS and\n cache_dir.startswith(\"~\")):\n # Maybe the user want to enable expanded user path.\n error_msg = (\"Given cache path parent directory doesn't \"\n \"exists, you gave '{0}'. Enabling \"\n \"nilearn.EXPAND_PATH_WILDCARDS could solve \"\n \"this issue.\".format(split_cache_dir[0]))\n elif self.memory.startswith(\"~\"):\n # Path built on top of expanded user path doesn't exist.\n error_msg = (\"Given cache path parent directory doesn't \"\n \"exists, you gave '{0}' which was expanded \"\n \"as '{1}' but doesn't exist either. 
Use \"\n \"nilearn.EXPAND_PATH_WILDCARDS to deactivate \"\n \"auto expand user path (~) behavior.\"\n .format(split_cache_dir[0],\n os.path.dirname(self.memory)))\n else:\n # The given cache base path doesn't exist.\n error_msg = (\"Given cache path parent directory doesn't \"\n \"exists, you gave '{0}'.\"\n .format(split_cache_dir[0]))\n raise ValueError(error_msg)\n\n self.memory = Memory(cachedir=cache_dir, verbose=verbose)\n\n # If cache level is 0 but a memory object has been provided, set\n # memory_level to 1 with a warning.\n if self.memory_level == 0 and self.memory.cachedir is not None:\n warnings.warn(\"memory_level is currently set to 0 but \"\n \"a Memory object has been provided. \"\n \"Setting memory_level to 1.\")\n self.memory_level = 1\n\n return cache(func, self.memory, func_memory_level=func_memory_level,\n memory_level=self.memory_level, **kwargs)\n", "path": "nilearn/_utils/cache_mixin.py"}]} | 3,312 | 865 |
gh_patches_debug_10001 | rasdani/github-patches | git_diff | apache__airflow-31477 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cleanup-pod CLI command fails due to incorrect host
### Apache Airflow version
2.6.1
### What happened
When running `airflow kubernetes cleanup-pods`, the API call to delete a pod fails. A snippet of the log is below:
```
urllib3.exceptions.MaxRetryError:
HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /api/v1/namespaces/airflow/pods/my-task-avd79fq1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f52f9aebfd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
[The Kubernetes client provisioned in _delete_pod](https://github.com/apache/airflow/blob/main/airflow/cli/commands/kubernetes_command.py#L151) incorrectly has the host as `http://localhost`. On the scheduler pod, if I start a Python environment, I can see that the configuration differs from the `get_kube_client()` configuration:
```
>>> get_kube_client().api_client.configuration.host
'https://172.20.0.1:443'
>>> client.CoreV1Api().api_client.configuration.host
'http://localhost/'
```
On Airflow 2.5.3 these two clients have the same configuration.
It's possible I have some mistake in my configuration but I'm not sure what it could be. The above fails on 2.6.0 also.
### What you think should happen instead
Pods should clean up without error
### How to reproduce
Run the following from a Kubernetes deployment of Airflow:
```python
from airflow.kubernetes.kube_client import get_kube_client
from kubernetes import client
print(get_kube_client().api_client.configuration.host)
print(client.CoreV1Api().api_client.configuration.host)
```
Alternatively run `airflow kubernetes cleanup-pods` with pods available for cleanup
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Using `in_cluster` configuration for KubernetesExecutor
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `airflow/cli/commands/kubernetes_command.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 """Kubernetes sub-commands."""
18 from __future__ import annotations
19
20 import os
21 import sys
22 from datetime import datetime, timedelta
23
24 from kubernetes import client
25 from kubernetes.client.api_client import ApiClient
26 from kubernetes.client.rest import ApiException
27
28 from airflow.executors.kubernetes_executor import KubeConfig, create_pod_id
29 from airflow.kubernetes import pod_generator
30 from airflow.kubernetes.kube_client import get_kube_client
31 from airflow.kubernetes.pod_generator import PodGenerator
32 from airflow.models import DagRun, TaskInstance
33 from airflow.utils import cli as cli_utils, yaml
34 from airflow.utils.cli import get_dag
35
36
37 @cli_utils.action_cli
38 def generate_pod_yaml(args):
39 """Generates yaml files for each task in the DAG. Used for testing output of KubernetesExecutor."""
40 execution_date = args.execution_date
41 dag = get_dag(subdir=args.subdir, dag_id=args.dag_id)
42 yaml_output_path = args.output_path
43 dr = DagRun(dag.dag_id, execution_date=execution_date)
44 kube_config = KubeConfig()
45 for task in dag.tasks:
46 ti = TaskInstance(task, None)
47 ti.dag_run = dr
48 pod = PodGenerator.construct_pod(
49 dag_id=args.dag_id,
50 task_id=ti.task_id,
51 pod_id=create_pod_id(args.dag_id, ti.task_id),
52 try_number=ti.try_number,
53 kube_image=kube_config.kube_image,
54 date=ti.execution_date,
55 args=ti.command_as_list(),
56 pod_override_object=PodGenerator.from_obj(ti.executor_config),
57 scheduler_job_id="worker-config",
58 namespace=kube_config.executor_namespace,
59 base_worker_pod=PodGenerator.deserialize_model_file(kube_config.pod_template_file),
60 with_mutation_hook=True,
61 )
62 api_client = ApiClient()
63 date_string = pod_generator.datetime_to_label_safe_datestring(execution_date)
64 yaml_file_name = f"{args.dag_id}_{ti.task_id}_{date_string}.yml"
65 os.makedirs(os.path.dirname(yaml_output_path + "/airflow_yaml_output/"), exist_ok=True)
66 with open(yaml_output_path + "/airflow_yaml_output/" + yaml_file_name, "w") as output:
67 sanitized_pod = api_client.sanitize_for_serialization(pod)
68 output.write(yaml.dump(sanitized_pod))
69 print(f"YAML output can be found at {yaml_output_path}/airflow_yaml_output/")
70
71
72 @cli_utils.action_cli
73 def cleanup_pods(args):
74 """Clean up k8s pods in evicted/failed/succeeded/pending states."""
75 namespace = args.namespace
76
77 min_pending_minutes = args.min_pending_minutes
78 # protect newly created pods from deletion
79 if min_pending_minutes < 5:
80 min_pending_minutes = 5
81
82 # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/
83 # All Containers in the Pod have terminated in success, and will not be restarted.
84 pod_succeeded = "succeeded"
85
86 # The Pod has been accepted by the Kubernetes cluster,
87 # but one or more of the containers has not been set up and made ready to run.
88 pod_pending = "pending"
89
90 # All Containers in the Pod have terminated, and at least one Container has terminated in failure.
91 # That is, the Container either exited with non-zero status or was terminated by the system.
92 pod_failed = "failed"
93
94 # https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/
95 pod_reason_evicted = "evicted"
96 # If pod is failed and restartPolicy is:
97 # * Always: Restart Container; Pod phase stays Running.
98 # * OnFailure: Restart Container; Pod phase stays Running.
99 # * Never: Pod phase becomes Failed.
100 pod_restart_policy_never = "never"
101
102 print("Loading Kubernetes configuration")
103 kube_client = get_kube_client()
104 print(f"Listing pods in namespace {namespace}")
105 airflow_pod_labels = [
106 "dag_id",
107 "task_id",
108 "try_number",
109 "airflow_version",
110 ]
111 list_kwargs = {"namespace": namespace, "limit": 500, "label_selector": ",".join(airflow_pod_labels)}
112
113 while True:
114 pod_list = kube_client.list_namespaced_pod(**list_kwargs)
115 for pod in pod_list.items:
116 pod_name = pod.metadata.name
117 print(f"Inspecting pod {pod_name}")
118 pod_phase = pod.status.phase.lower()
119 pod_reason = pod.status.reason.lower() if pod.status.reason else ""
120 pod_restart_policy = pod.spec.restart_policy.lower()
121 current_time = datetime.now(pod.metadata.creation_timestamp.tzinfo)
122
123 if (
124 pod_phase == pod_succeeded
125 or (pod_phase == pod_failed and pod_restart_policy == pod_restart_policy_never)
126 or (pod_reason == pod_reason_evicted)
127 or (
128 pod_phase == pod_pending
129 and current_time - pod.metadata.creation_timestamp
130 > timedelta(minutes=min_pending_minutes)
131 )
132 ):
133 print(
134 f'Deleting pod "{pod_name}" phase "{pod_phase}" and reason "{pod_reason}", '
135 f'restart policy "{pod_restart_policy}"'
136 )
137 try:
138 _delete_pod(pod.metadata.name, namespace)
139 except ApiException as e:
140 print(f"Can't remove POD: {e}", file=sys.stderr)
141 continue
142 print(f"No action taken on pod {pod_name}")
143 continue_token = pod_list.metadata._continue
144 if not continue_token:
145 break
146 list_kwargs["_continue"] = continue_token
147
148
149 def _delete_pod(name, namespace):
150 """Helper Function for cleanup_pods."""
151 core_v1 = client.CoreV1Api()
152 delete_options = client.V1DeleteOptions()
153 print(f'Deleting POD "{name}" from "{namespace}" namespace')
154 api_response = core_v1.delete_namespaced_pod(name=name, namespace=namespace, body=delete_options)
155 print(api_response)
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/airflow/cli/commands/kubernetes_command.py b/airflow/cli/commands/kubernetes_command.py
--- a/airflow/cli/commands/kubernetes_command.py
+++ b/airflow/cli/commands/kubernetes_command.py
@@ -148,8 +148,8 @@
def _delete_pod(name, namespace):
"""Helper Function for cleanup_pods."""
- core_v1 = client.CoreV1Api()
+ kube_client = get_kube_client()
delete_options = client.V1DeleteOptions()
print(f'Deleting POD "{name}" from "{namespace}" namespace')
- api_response = core_v1.delete_namespaced_pod(name=name, namespace=namespace, body=delete_options)
+ api_response = kube_client.delete_namespaced_pod(name=name, namespace=namespace, body=delete_options)
print(api_response)
| {"golden_diff": "diff --git a/airflow/cli/commands/kubernetes_command.py b/airflow/cli/commands/kubernetes_command.py\n--- a/airflow/cli/commands/kubernetes_command.py\n+++ b/airflow/cli/commands/kubernetes_command.py\n@@ -148,8 +148,8 @@\n \n def _delete_pod(name, namespace):\n \"\"\"Helper Function for cleanup_pods.\"\"\"\n- core_v1 = client.CoreV1Api()\n+ kube_client = get_kube_client()\n delete_options = client.V1DeleteOptions()\n print(f'Deleting POD \"{name}\" from \"{namespace}\" namespace')\n- api_response = core_v1.delete_namespaced_pod(name=name, namespace=namespace, body=delete_options)\n+ api_response = kube_client.delete_namespaced_pod(name=name, namespace=namespace, body=delete_options)\n print(api_response)\n", "issue": "cleanup-pod CLI command fails due to incorrect host\n### Apache Airflow version\r\n\r\n2.6.1\r\n\r\n### What happened\r\n\r\nWhen running `airflow kubernetes cleanup-pods`, the API call to delete a pod fails. A snippet of the log is below:\r\n\r\n```\r\nurllib3.exceptions.MaxRetryError:\r\nHTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /api/v1/namespaces/airflow/pods/my-task-avd79fq1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f52f9aebfd0>: Failed to establish a new connection: [Errno 111] Connection refused'))\r\n```\r\n\r\n[The Kubernetes client provisioned in _delete_pod](https://github.com/apache/airflow/blob/main/airflow/cli/commands/kubernetes_command.py#L151) incorrectly has the host as `http:localhost`. On the scheduler pod if I start a Python environment I can see that the configuration differs from the `get_kube_client()` configuration:\r\n\r\n```\r\n>>> get_kube_client().api_client.configuration.host\r\n'https://172.20.0.1:443'\r\n>>> client.CoreV1Api().api_client.configuration.host\r\n'http://localhost/'\r\n```\r\n\r\nOn Airflow 2.5.3 these two clients have the same configuration.\r\n\r\nIt's possible I have some mistake in my configuration but I'm not sure what it could be. The above fails on 2.6.0 also.\r\n\r\n### What you think should happen instead\r\n\r\nPods should clean up without error\r\n\r\n### How to reproduce\r\n\r\nRun the following from a Kubernetes deployment of Airflow:\r\n\r\n```python\r\nfrom airflow.kubernetes.kube_client import get_kube_client\r\nfrom kubernetes import client\r\n\r\nprint(get_kube_client().api_client.configuration.host)\r\nprint(client.CoreV1Api().api_client.configuration.host)\r\n```\r\n\r\nAlternatively run `airflow kubernetes cleanup-pods` with pods available for cleanup\r\n\r\n### Operating System\r\n\r\nDebian GNU/Linux 11 (bullseye)\r\n\r\n### Versions of Apache Airflow Providers\r\n\r\n_No response_\r\n\r\n### Deployment\r\n\r\nOfficial Apache Airflow Helm Chart\r\n\r\n### Deployment details\r\n\r\nUsing `in_cluster` configuration for KubernetesExecutor\r\n\r\n### Anything else\r\n\r\n_No response_\r\n\r\n### Are you willing to submit PR?\r\n\r\n- [X] Yes I am willing to submit a PR!\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. 
The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Kubernetes sub-commands.\"\"\"\nfrom __future__ import annotations\n\nimport os\nimport sys\nfrom datetime import datetime, timedelta\n\nfrom kubernetes import client\nfrom kubernetes.client.api_client import ApiClient\nfrom kubernetes.client.rest import ApiException\n\nfrom airflow.executors.kubernetes_executor import KubeConfig, create_pod_id\nfrom airflow.kubernetes import pod_generator\nfrom airflow.kubernetes.kube_client import get_kube_client\nfrom airflow.kubernetes.pod_generator import PodGenerator\nfrom airflow.models import DagRun, TaskInstance\nfrom airflow.utils import cli as cli_utils, yaml\nfrom airflow.utils.cli import get_dag\n\n\n@cli_utils.action_cli\ndef generate_pod_yaml(args):\n \"\"\"Generates yaml files for each task in the DAG. Used for testing output of KubernetesExecutor.\"\"\"\n execution_date = args.execution_date\n dag = get_dag(subdir=args.subdir, dag_id=args.dag_id)\n yaml_output_path = args.output_path\n dr = DagRun(dag.dag_id, execution_date=execution_date)\n kube_config = KubeConfig()\n for task in dag.tasks:\n ti = TaskInstance(task, None)\n ti.dag_run = dr\n pod = PodGenerator.construct_pod(\n dag_id=args.dag_id,\n task_id=ti.task_id,\n pod_id=create_pod_id(args.dag_id, ti.task_id),\n try_number=ti.try_number,\n kube_image=kube_config.kube_image,\n date=ti.execution_date,\n args=ti.command_as_list(),\n pod_override_object=PodGenerator.from_obj(ti.executor_config),\n scheduler_job_id=\"worker-config\",\n namespace=kube_config.executor_namespace,\n base_worker_pod=PodGenerator.deserialize_model_file(kube_config.pod_template_file),\n with_mutation_hook=True,\n )\n api_client = ApiClient()\n date_string = pod_generator.datetime_to_label_safe_datestring(execution_date)\n yaml_file_name = f\"{args.dag_id}_{ti.task_id}_{date_string}.yml\"\n os.makedirs(os.path.dirname(yaml_output_path + \"/airflow_yaml_output/\"), exist_ok=True)\n with open(yaml_output_path + \"/airflow_yaml_output/\" + yaml_file_name, \"w\") as output:\n sanitized_pod = api_client.sanitize_for_serialization(pod)\n output.write(yaml.dump(sanitized_pod))\n print(f\"YAML output can be found at {yaml_output_path}/airflow_yaml_output/\")\n\n\n@cli_utils.action_cli\ndef cleanup_pods(args):\n \"\"\"Clean up k8s pods in evicted/failed/succeeded/pending states.\"\"\"\n namespace = args.namespace\n\n min_pending_minutes = args.min_pending_minutes\n # protect newly created pods from deletion\n if min_pending_minutes < 5:\n min_pending_minutes = 5\n\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/\n # All Containers in the Pod have terminated in success, and will not be restarted.\n pod_succeeded = \"succeeded\"\n\n # The Pod has been accepted by the Kubernetes cluster,\n # but one or more of the containers has not been set up and made ready to run.\n pod_pending = \"pending\"\n\n # All Containers in the Pod have terminated, and at least one Container has terminated in failure.\n # That is, the Container 
either exited with non-zero status or was terminated by the system.\n pod_failed = \"failed\"\n\n # https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/\n pod_reason_evicted = \"evicted\"\n # If pod is failed and restartPolicy is:\n # * Always: Restart Container; Pod phase stays Running.\n # * OnFailure: Restart Container; Pod phase stays Running.\n # * Never: Pod phase becomes Failed.\n pod_restart_policy_never = \"never\"\n\n print(\"Loading Kubernetes configuration\")\n kube_client = get_kube_client()\n print(f\"Listing pods in namespace {namespace}\")\n airflow_pod_labels = [\n \"dag_id\",\n \"task_id\",\n \"try_number\",\n \"airflow_version\",\n ]\n list_kwargs = {\"namespace\": namespace, \"limit\": 500, \"label_selector\": \",\".join(airflow_pod_labels)}\n\n while True:\n pod_list = kube_client.list_namespaced_pod(**list_kwargs)\n for pod in pod_list.items:\n pod_name = pod.metadata.name\n print(f\"Inspecting pod {pod_name}\")\n pod_phase = pod.status.phase.lower()\n pod_reason = pod.status.reason.lower() if pod.status.reason else \"\"\n pod_restart_policy = pod.spec.restart_policy.lower()\n current_time = datetime.now(pod.metadata.creation_timestamp.tzinfo)\n\n if (\n pod_phase == pod_succeeded\n or (pod_phase == pod_failed and pod_restart_policy == pod_restart_policy_never)\n or (pod_reason == pod_reason_evicted)\n or (\n pod_phase == pod_pending\n and current_time - pod.metadata.creation_timestamp\n > timedelta(minutes=min_pending_minutes)\n )\n ):\n print(\n f'Deleting pod \"{pod_name}\" phase \"{pod_phase}\" and reason \"{pod_reason}\", '\n f'restart policy \"{pod_restart_policy}\"'\n )\n try:\n _delete_pod(pod.metadata.name, namespace)\n except ApiException as e:\n print(f\"Can't remove POD: {e}\", file=sys.stderr)\n continue\n print(f\"No action taken on pod {pod_name}\")\n continue_token = pod_list.metadata._continue\n if not continue_token:\n break\n list_kwargs[\"_continue\"] = continue_token\n\n\ndef _delete_pod(name, namespace):\n \"\"\"Helper Function for cleanup_pods.\"\"\"\n core_v1 = client.CoreV1Api()\n delete_options = client.V1DeleteOptions()\n print(f'Deleting POD \"{name}\" from \"{namespace}\" namespace')\n api_response = core_v1.delete_namespaced_pod(name=name, namespace=namespace, body=delete_options)\n print(api_response)\n", "path": "airflow/cli/commands/kubernetes_command.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Kubernetes sub-commands.\"\"\"\nfrom __future__ import annotations\n\nimport os\nimport sys\nfrom datetime import datetime, timedelta\n\nfrom kubernetes import client\nfrom kubernetes.client.api_client import ApiClient\nfrom kubernetes.client.rest import ApiException\n\nfrom airflow.executors.kubernetes_executor import KubeConfig, create_pod_id\nfrom airflow.kubernetes import pod_generator\nfrom airflow.kubernetes.kube_client import get_kube_client\nfrom airflow.kubernetes.pod_generator import PodGenerator\nfrom airflow.models import DagRun, TaskInstance\nfrom airflow.utils import cli as cli_utils, yaml\nfrom airflow.utils.cli import get_dag\n\n\n@cli_utils.action_cli\ndef generate_pod_yaml(args):\n \"\"\"Generates yaml files for each task in the DAG. Used for testing output of KubernetesExecutor.\"\"\"\n execution_date = args.execution_date\n dag = get_dag(subdir=args.subdir, dag_id=args.dag_id)\n yaml_output_path = args.output_path\n dr = DagRun(dag.dag_id, execution_date=execution_date)\n kube_config = KubeConfig()\n for task in dag.tasks:\n ti = TaskInstance(task, None)\n ti.dag_run = dr\n pod = PodGenerator.construct_pod(\n dag_id=args.dag_id,\n task_id=ti.task_id,\n pod_id=create_pod_id(args.dag_id, ti.task_id),\n try_number=ti.try_number,\n kube_image=kube_config.kube_image,\n date=ti.execution_date,\n args=ti.command_as_list(),\n pod_override_object=PodGenerator.from_obj(ti.executor_config),\n scheduler_job_id=\"worker-config\",\n namespace=kube_config.executor_namespace,\n base_worker_pod=PodGenerator.deserialize_model_file(kube_config.pod_template_file),\n with_mutation_hook=True,\n )\n api_client = ApiClient()\n date_string = pod_generator.datetime_to_label_safe_datestring(execution_date)\n yaml_file_name = f\"{args.dag_id}_{ti.task_id}_{date_string}.yml\"\n os.makedirs(os.path.dirname(yaml_output_path + \"/airflow_yaml_output/\"), exist_ok=True)\n with open(yaml_output_path + \"/airflow_yaml_output/\" + yaml_file_name, \"w\") as output:\n sanitized_pod = api_client.sanitize_for_serialization(pod)\n output.write(yaml.dump(sanitized_pod))\n print(f\"YAML output can be found at {yaml_output_path}/airflow_yaml_output/\")\n\n\n@cli_utils.action_cli\ndef cleanup_pods(args):\n \"\"\"Clean up k8s pods in evicted/failed/succeeded/pending states.\"\"\"\n namespace = args.namespace\n\n min_pending_minutes = args.min_pending_minutes\n # protect newly created pods from deletion\n if min_pending_minutes < 5:\n min_pending_minutes = 5\n\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/\n # All Containers in the Pod have terminated in success, and will not be restarted.\n pod_succeeded = \"succeeded\"\n\n # The Pod has been accepted by the Kubernetes cluster,\n # but one or more of the containers has not been set up and made ready to run.\n pod_pending = \"pending\"\n\n # All Containers in the Pod have terminated, and at least one Container has terminated in failure.\n # That is, the Container either exited with non-zero status or was terminated by the system.\n pod_failed = \"failed\"\n\n # https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/\n pod_reason_evicted = \"evicted\"\n # If pod is failed and restartPolicy is:\n # * Always: Restart Container; Pod phase stays Running.\n # * OnFailure: Restart Container; Pod phase stays Running.\n # * Never: Pod phase becomes Failed.\n pod_restart_policy_never = \"never\"\n\n print(\"Loading Kubernetes 
configuration\")\n kube_client = get_kube_client()\n print(f\"Listing pods in namespace {namespace}\")\n airflow_pod_labels = [\n \"dag_id\",\n \"task_id\",\n \"try_number\",\n \"airflow_version\",\n ]\n list_kwargs = {\"namespace\": namespace, \"limit\": 500, \"label_selector\": \",\".join(airflow_pod_labels)}\n\n while True:\n pod_list = kube_client.list_namespaced_pod(**list_kwargs)\n for pod in pod_list.items:\n pod_name = pod.metadata.name\n print(f\"Inspecting pod {pod_name}\")\n pod_phase = pod.status.phase.lower()\n pod_reason = pod.status.reason.lower() if pod.status.reason else \"\"\n pod_restart_policy = pod.spec.restart_policy.lower()\n current_time = datetime.now(pod.metadata.creation_timestamp.tzinfo)\n\n if (\n pod_phase == pod_succeeded\n or (pod_phase == pod_failed and pod_restart_policy == pod_restart_policy_never)\n or (pod_reason == pod_reason_evicted)\n or (\n pod_phase == pod_pending\n and current_time - pod.metadata.creation_timestamp\n > timedelta(minutes=min_pending_minutes)\n )\n ):\n print(\n f'Deleting pod \"{pod_name}\" phase \"{pod_phase}\" and reason \"{pod_reason}\", '\n f'restart policy \"{pod_restart_policy}\"'\n )\n try:\n _delete_pod(pod.metadata.name, namespace)\n except ApiException as e:\n print(f\"Can't remove POD: {e}\", file=sys.stderr)\n continue\n print(f\"No action taken on pod {pod_name}\")\n continue_token = pod_list.metadata._continue\n if not continue_token:\n break\n list_kwargs[\"_continue\"] = continue_token\n\n\ndef _delete_pod(name, namespace):\n \"\"\"Helper Function for cleanup_pods.\"\"\"\n kube_client = get_kube_client()\n delete_options = client.V1DeleteOptions()\n print(f'Deleting POD \"{name}\" from \"{namespace}\" namespace')\n api_response = kube_client.delete_namespaced_pod(name=name, namespace=namespace, body=delete_options)\n print(api_response)\n", "path": "airflow/cli/commands/kubernetes_command.py"}]} | 2,611 | 180 |
gh_patches_debug_13922 | rasdani/github-patches | git_diff | huggingface__accelerate-445 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`psutil` required by utils/modeling.py but it isn't declared as a dependency
### System Info
```Shell
Accelerate `0.10.0.dev0` on Debian Bullseye running Python 3.10.5.
File "/opt/venv/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 276, in get_max_memory
import psutil
ModuleNotFoundError: No module named 'psutil'
```
I'm not sure if you have any minimum version you need to satisfy for `psutil` as a runtime dependency but I see that there are no constraints on it as a `test` dependency in setup.py.
If you don't have any requirements, I'm happy to just add it myself and open a patch PR.
Thanks!
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
Use `device_map="auto"` when loading any model that supports it.
### Expected behavior
```Shell
I expect that `psutil` is declared as a runtime dependency of the `accelerate` package instead of having to install it myself.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2021 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup
16 from setuptools import find_packages
17
18 extras = {}
19 extras["quality"] = ["black ~= 22.0", "isort >= 5.5.4", "flake8 >= 3.8.3"]
20 extras["docs"] = []
21 extras["test"] = [
22 "psutil",
23 "pytest",
24 "pytest-xdist",
25 "pytest-subtests",
26 "datasets",
27 "evaluate",
28 "transformers",
29 "scipy",
30 "sklearn",
31 "parameterized",
32 "deepspeed",
33 ]
34
35 extras["test_trackers"] = ["wandb", "comet-ml", "tensorboard"]
36 extras["dev"] = extras["quality"] + extras["test"]
37
38 extras["sagemaker"] = [
39 "sagemaker", # boto3 is a required package in sagemaker
40 ]
41
42 setup(
43 name="accelerate",
44 version="0.10.0.dev0",
45 description="Accelerate",
46 long_description=open("README.md", "r", encoding="utf-8").read(),
47 long_description_content_type="text/markdown",
48 keywords="deep learning",
49 license="Apache",
50 author="The HuggingFace team",
51 author_email="[email protected]",
52 url="https://github.com/huggingface/accelerate",
53 package_dir={"": "src"},
54 packages=find_packages("src"),
55 entry_points={
56 "console_scripts": [
57 "accelerate=accelerate.commands.accelerate_cli:main",
58 "accelerate-config=accelerate.commands.config:main",
59 "accelerate-launch=accelerate.commands.launch:main",
60 ]
61 },
62 python_requires=">=3.7.0",
63 install_requires=["numpy>=1.17", "packaging>=20.0", "pyyaml", "torch>=1.4.0"],
64 extras_require=extras,
65 classifiers=[
66 "Development Status :: 5 - Production/Stable",
67 "Intended Audience :: Developers",
68 "Intended Audience :: Education",
69 "Intended Audience :: Science/Research",
70 "License :: OSI Approved :: Apache Software License",
71 "Operating System :: OS Independent",
72 "Programming Language :: Python :: 3",
73 "Programming Language :: Python :: 3.7",
74 "Topic :: Scientific/Engineering :: Artificial Intelligence",
75 ],
76 )
77
78 # Release checklist
79 # 1. Change the version in __init__.py and setup.py.
80 # 2. Commit these changes with the message: "Release: VERSION"
81 # 3. Add a tag in git to mark the release: "git tag VERSION -m 'Adds tag VERSION for pypi' "
82 # Push the tag to git: git push --tags origin main
83 # 4. Run the following commands in the top-level directory:
84 # python setup.py bdist_wheel
85 # python setup.py sdist
86 # 5. Upload the package to the pypi test server first:
87 # twine upload dist/* -r pypitest
88 # twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/
89 # 6. Check that you can install it in a virtualenv by running:
90 # pip install -i https://testpypi.python.org/pypi accelerate
91 # accelerate env
92 # accelerate test
93 # 7. Upload the final version to actual pypi:
94 # twine upload dist/* -r pypi
95 # 8. Add release notes to the tag in github once everything is looking hunky-dory.
96 # 9. Update the version in __init__.py, setup.py to the new version "-dev" and push to master
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -19,7 +19,6 @@
extras["quality"] = ["black ~= 22.0", "isort >= 5.5.4", "flake8 >= 3.8.3"]
extras["docs"] = []
extras["test"] = [
- "psutil",
"pytest",
"pytest-xdist",
"pytest-subtests",
@@ -60,7 +59,7 @@
]
},
python_requires=">=3.7.0",
- install_requires=["numpy>=1.17", "packaging>=20.0", "pyyaml", "torch>=1.4.0"],
+ install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.4.0"],
extras_require=extras,
classifiers=[
"Development Status :: 5 - Production/Stable",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -19,7 +19,6 @@\n extras[\"quality\"] = [\"black ~= 22.0\", \"isort >= 5.5.4\", \"flake8 >= 3.8.3\"]\n extras[\"docs\"] = []\n extras[\"test\"] = [\n- \"psutil\",\n \"pytest\",\n \"pytest-xdist\",\n \"pytest-subtests\",\n@@ -60,7 +59,7 @@\n ]\n },\n python_requires=\">=3.7.0\",\n- install_requires=[\"numpy>=1.17\", \"packaging>=20.0\", \"pyyaml\", \"torch>=1.4.0\"],\n+ install_requires=[\"numpy>=1.17\", \"packaging>=20.0\", \"psutil\", \"pyyaml\", \"torch>=1.4.0\"],\n extras_require=extras,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n", "issue": "`psutil` required by utils/modeling.py but it isn't declared as a dependency\n### System Info\n\n```Shell\nAccelerate `0.10.0.dev0` on Debian Bullseye running Python 3.10.5.\r\n\r\n\r\nFile \"/opt/venv/lib/python3.10/site-packages/accelerate/utils/modeling.py\", line 276, in get_max_memory\r\n import psutil\r\nModuleNotFoundError: No module named 'psutil'\r\n```\r\n\r\nI'm not sure if you have any minimum version you need to satisfy for `psutil` as a runtime dependency but I see that there are no constraints on it as a `test` dependency in setup.py.\r\n\r\nIf you don't have any requirements, I'm happy to just add it myself and open a patch PR.\r\n\r\nThanks!\n```\n\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nUse `device_map=\"auto\"` when loading any model that supports it.\n\n### Expected behavior\n\n```Shell\nI expect that `psutil` is declared as a runtime dependency of the `accelerate` package instead of having to install it myself.\n```\n\n", "before_files": [{"content": "# Copyright 2021 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nextras = {}\nextras[\"quality\"] = [\"black ~= 22.0\", \"isort >= 5.5.4\", \"flake8 >= 3.8.3\"]\nextras[\"docs\"] = []\nextras[\"test\"] = [\n \"psutil\",\n \"pytest\",\n \"pytest-xdist\",\n \"pytest-subtests\",\n \"datasets\",\n \"evaluate\",\n \"transformers\",\n \"scipy\",\n \"sklearn\",\n \"parameterized\",\n \"deepspeed\",\n]\n\nextras[\"test_trackers\"] = [\"wandb\", \"comet-ml\", \"tensorboard\"]\nextras[\"dev\"] = extras[\"quality\"] + extras[\"test\"]\n\nextras[\"sagemaker\"] = [\n \"sagemaker\", # boto3 is a required package in sagemaker\n]\n\nsetup(\n name=\"accelerate\",\n version=\"0.10.0.dev0\",\n description=\"Accelerate\",\n long_description=open(\"README.md\", \"r\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n keywords=\"deep learning\",\n license=\"Apache\",\n author=\"The HuggingFace team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/huggingface/accelerate\",\n package_dir={\"\": \"src\"},\n packages=find_packages(\"src\"),\n entry_points={\n \"console_scripts\": [\n \"accelerate=accelerate.commands.accelerate_cli:main\",\n \"accelerate-config=accelerate.commands.config:main\",\n \"accelerate-launch=accelerate.commands.launch:main\",\n ]\n },\n python_requires=\">=3.7.0\",\n install_requires=[\"numpy>=1.17\", \"packaging>=20.0\", \"pyyaml\", \"torch>=1.4.0\"],\n extras_require=extras,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n\n# Release checklist\n# 1. Change the version in __init__.py and setup.py.\n# 2. Commit these changes with the message: \"Release: VERSION\"\n# 3. Add a tag in git to mark the release: \"git tag VERSION -m 'Adds tag VERSION for pypi' \"\n# Push the tag to git: git push --tags origin main\n# 4. Run the following commands in the top-level directory:\n# python setup.py bdist_wheel\n# python setup.py sdist\n# 5. Upload the package to the pypi test server first:\n# twine upload dist/* -r pypitest\n# twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/\n# 6. Check that you can install it in a virtualenv by running:\n# pip install -i https://testpypi.python.org/pypi accelerate\n# accelerate env\n# accelerate test\n# 7. Upload the final version to actual pypi:\n# twine upload dist/* -r pypi\n# 8. Add release notes to the tag in github once everything is looking hunky-dory.\n# 9. 
Update the version in __init__.py, setup.py to the new version \"-dev\" and push to master\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nextras = {}\nextras[\"quality\"] = [\"black ~= 22.0\", \"isort >= 5.5.4\", \"flake8 >= 3.8.3\"]\nextras[\"docs\"] = []\nextras[\"test\"] = [\n \"pytest\",\n \"pytest-xdist\",\n \"pytest-subtests\",\n \"datasets\",\n \"evaluate\",\n \"transformers\",\n \"scipy\",\n \"sklearn\",\n \"parameterized\",\n \"deepspeed\",\n]\n\nextras[\"test_trackers\"] = [\"wandb\", \"comet-ml\", \"tensorboard\"]\nextras[\"dev\"] = extras[\"quality\"] + extras[\"test\"]\n\nextras[\"sagemaker\"] = [\n \"sagemaker\", # boto3 is a required package in sagemaker\n]\n\nsetup(\n name=\"accelerate\",\n version=\"0.10.0.dev0\",\n description=\"Accelerate\",\n long_description=open(\"README.md\", \"r\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n keywords=\"deep learning\",\n license=\"Apache\",\n author=\"The HuggingFace team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/huggingface/accelerate\",\n package_dir={\"\": \"src\"},\n packages=find_packages(\"src\"),\n entry_points={\n \"console_scripts\": [\n \"accelerate=accelerate.commands.accelerate_cli:main\",\n \"accelerate-config=accelerate.commands.config:main\",\n \"accelerate-launch=accelerate.commands.launch:main\",\n ]\n },\n python_requires=\">=3.7.0\",\n install_requires=[\"numpy>=1.17\", \"packaging>=20.0\", \"psutil\", \"pyyaml\", \"torch>=1.4.0\"],\n extras_require=extras,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n\n# Release checklist\n# 1. Change the version in __init__.py and setup.py.\n# 2. Commit these changes with the message: \"Release: VERSION\"\n# 3. Add a tag in git to mark the release: \"git tag VERSION -m 'Adds tag VERSION for pypi' \"\n# Push the tag to git: git push --tags origin main\n# 4. Run the following commands in the top-level directory:\n# python setup.py bdist_wheel\n# python setup.py sdist\n# 5. Upload the package to the pypi test server first:\n# twine upload dist/* -r pypitest\n# twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/\n# 6. Check that you can install it in a virtualenv by running:\n# pip install -i https://testpypi.python.org/pypi accelerate\n# accelerate env\n# accelerate test\n# 7. Upload the final version to actual pypi:\n# twine upload dist/* -r pypi\n# 8. 
Add release notes to the tag in github once everything is looking hunky-dory.\n# 9. Update the version in __init__.py, setup.py to the new version \"-dev\" and push to master\n", "path": "setup.py"}]} | 1,676 | 222 |
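The patch above simply promotes `psutil` from the `test` extra to a hard runtime requirement. A stripped-down `setup.py` sketch of that pattern (only the dependency lines mirror the patch; the extras list here is trimmed for brevity):

```python
from setuptools import setup, find_packages

setup(
    name="accelerate",
    version="0.10.0.dev0",
    package_dir={"": "src"},
    packages=find_packages("src"),
    python_requires=">=3.7.0",
    # Runtime dependencies: anything imported by library code itself belongs here,
    # so features like device_map="auto" work from a plain `pip install accelerate`.
    install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.4.0"],
    # Test-only tooling stays behind an extra: `pip install accelerate[test]`.
    extras_require={"test": ["pytest", "pytest-xdist", "pytest-subtests"]},
)
```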
gh_patches_debug_9144 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-1305 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add challenge filtering using featured parameter.
We need to add filtering in challenge model on the basis of `featured` parameter.
Add challenge filtering using featured parameter.
We need to add filtering in challenge model on the basis of `featured` parameter.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/challenges/admin.py`
Content:
```
1 from django.contrib import admin
2
3 from base.admin import ImportExportTimeStampedAdmin
4
5 from .models import (Challenge,
6 ChallengeConfiguration,
7 ChallengePhase,
8 ChallengePhaseSplit,
9 DatasetSplit,
10 Leaderboard,
11 LeaderboardData,
12 StarChallenge,)
13
14
15 @admin.register(Challenge)
16 class ChallengeAdmin(ImportExportTimeStampedAdmin):
17 list_display = ("title", "start_date", "end_date", "creator", "published", "enable_forum", "anonymous_leaderboard")
18 list_filter = ("creator", "published", "enable_forum", "anonymous_leaderboard")
19 search_fields = ("title", "creator")
20
21
22 @admin.register(DatasetSplit)
23 class DatasetSplitAdmin(ImportExportTimeStampedAdmin):
24 list_display = ("name", "codename")
25 list_filter = ("name", "codename")
26 search_fields = ("name", "codename")
27
28
29 @admin.register(ChallengePhase)
30 class ChallengePhaseAdmin(ImportExportTimeStampedAdmin):
31 list_display = ("name", "challenge", "start_date", "end_date", "test_annotation", "is_public", "leaderboard_public")
32 list_filter = ("leaderboard_public", "challenge")
33 search_fields = ("name",)
34
35
36 @admin.register(Leaderboard)
37 class LeaderboardAdmin(ImportExportTimeStampedAdmin):
38 list_display = ("id", "schema")
39 search_fields = ("id",)
40
41
42 @admin.register(ChallengePhaseSplit)
43 class ChallengePhaseSplitAdmin(ImportExportTimeStampedAdmin):
44 list_display = ("id", "challenge_phase", "dataset_split", "leaderboard", "visibility")
45 list_filter = ("challenge_phase", "dataset_split", "leaderboard", "visibility")
46 search_fields = ("challenge_phase", "dataset_split", "leaderboard")
47
48
49 @admin.register(LeaderboardData)
50 class LeaderboardDataAdmin(ImportExportTimeStampedAdmin):
51 list_display = ("challenge_phase_split", "submission", "leaderboard", "result")
52 list_filter = ("challenge_phase_split", "leaderboard",)
53 search_fields = ("challenge_phase_split", "submission", "leaderboard", "result")
54
55
56 @admin.register(ChallengeConfiguration)
57 class ChallengeConfigurationAdmin(ImportExportTimeStampedAdmin):
58 list_display = ('user', 'challenge', 'is_created', 'zip_configuration',)
59 list_filter = ('user', 'is_created',)
60 search_fields = ('user', 'challenge',)
61
62
63 @admin.register(StarChallenge)
64 class StarChallengeAdmin(ImportExportTimeStampedAdmin):
65 list_display = ('user', 'challenge', 'is_starred')
66 search_fields = ('user', 'challenge',)
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/challenges/admin.py b/apps/challenges/admin.py
--- a/apps/challenges/admin.py
+++ b/apps/challenges/admin.py
@@ -14,8 +14,9 @@
@admin.register(Challenge)
class ChallengeAdmin(ImportExportTimeStampedAdmin):
- list_display = ("title", "start_date", "end_date", "creator", "published", "enable_forum", "anonymous_leaderboard")
- list_filter = ("creator", "published", "enable_forum", "anonymous_leaderboard")
+ list_display = ("title", "start_date", "end_date", "creator", "published", "enable_forum", "anonymous_leaderboard",
+ "featured")
+ list_filter = ("creator", "published", "enable_forum", "anonymous_leaderboard", "featured")
search_fields = ("title", "creator")
| {"golden_diff": "diff --git a/apps/challenges/admin.py b/apps/challenges/admin.py\n--- a/apps/challenges/admin.py\n+++ b/apps/challenges/admin.py\n@@ -14,8 +14,9 @@\n \n @admin.register(Challenge)\n class ChallengeAdmin(ImportExportTimeStampedAdmin):\n- list_display = (\"title\", \"start_date\", \"end_date\", \"creator\", \"published\", \"enable_forum\", \"anonymous_leaderboard\")\n- list_filter = (\"creator\", \"published\", \"enable_forum\", \"anonymous_leaderboard\")\n+ list_display = (\"title\", \"start_date\", \"end_date\", \"creator\", \"published\", \"enable_forum\", \"anonymous_leaderboard\",\n+ \"featured\")\n+ list_filter = (\"creator\", \"published\", \"enable_forum\", \"anonymous_leaderboard\", \"featured\")\n search_fields = (\"title\", \"creator\")\n", "issue": "Add challenge filtering using featured parameter.\nWe need to add filtering in challenge model on the basis of `featured` parameter.\nAdd challenge filtering using featured parameter.\nWe need to add filtering in challenge model on the basis of `featured` parameter.\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom base.admin import ImportExportTimeStampedAdmin\n\nfrom .models import (Challenge,\n ChallengeConfiguration,\n ChallengePhase,\n ChallengePhaseSplit,\n DatasetSplit,\n Leaderboard,\n LeaderboardData,\n StarChallenge,)\n\n\[email protected](Challenge)\nclass ChallengeAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"title\", \"start_date\", \"end_date\", \"creator\", \"published\", \"enable_forum\", \"anonymous_leaderboard\")\n list_filter = (\"creator\", \"published\", \"enable_forum\", \"anonymous_leaderboard\")\n search_fields = (\"title\", \"creator\")\n\n\[email protected](DatasetSplit)\nclass DatasetSplitAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"name\", \"codename\")\n list_filter = (\"name\", \"codename\")\n search_fields = (\"name\", \"codename\")\n\n\[email protected](ChallengePhase)\nclass ChallengePhaseAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"name\", \"challenge\", \"start_date\", \"end_date\", \"test_annotation\", \"is_public\", \"leaderboard_public\")\n list_filter = (\"leaderboard_public\", \"challenge\")\n search_fields = (\"name\",)\n\n\[email protected](Leaderboard)\nclass LeaderboardAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"id\", \"schema\")\n search_fields = (\"id\",)\n\n\[email protected](ChallengePhaseSplit)\nclass ChallengePhaseSplitAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"id\", \"challenge_phase\", \"dataset_split\", \"leaderboard\", \"visibility\")\n list_filter = (\"challenge_phase\", \"dataset_split\", \"leaderboard\", \"visibility\")\n search_fields = (\"challenge_phase\", \"dataset_split\", \"leaderboard\")\n\n\[email protected](LeaderboardData)\nclass LeaderboardDataAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"challenge_phase_split\", \"submission\", \"leaderboard\", \"result\")\n list_filter = (\"challenge_phase_split\", \"leaderboard\",)\n search_fields = (\"challenge_phase_split\", \"submission\", \"leaderboard\", \"result\")\n\n\[email protected](ChallengeConfiguration)\nclass ChallengeConfigurationAdmin(ImportExportTimeStampedAdmin):\n list_display = ('user', 'challenge', 'is_created', 'zip_configuration',)\n list_filter = ('user', 'is_created',)\n search_fields = ('user', 'challenge',)\n\n\[email protected](StarChallenge)\nclass StarChallengeAdmin(ImportExportTimeStampedAdmin):\n list_display = ('user', 'challenge', 'is_starred')\n search_fields = ('user', 
'challenge',)\n", "path": "apps/challenges/admin.py"}], "after_files": [{"content": "from django.contrib import admin\n\nfrom base.admin import ImportExportTimeStampedAdmin\n\nfrom .models import (Challenge,\n ChallengeConfiguration,\n ChallengePhase,\n ChallengePhaseSplit,\n DatasetSplit,\n Leaderboard,\n LeaderboardData,\n StarChallenge,)\n\n\[email protected](Challenge)\nclass ChallengeAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"title\", \"start_date\", \"end_date\", \"creator\", \"published\", \"enable_forum\", \"anonymous_leaderboard\",\n \"featured\")\n list_filter = (\"creator\", \"published\", \"enable_forum\", \"anonymous_leaderboard\", \"featured\")\n search_fields = (\"title\", \"creator\")\n\n\[email protected](DatasetSplit)\nclass DatasetSplitAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"name\", \"codename\")\n list_filter = (\"name\", \"codename\")\n search_fields = (\"name\", \"codename\")\n\n\[email protected](ChallengePhase)\nclass ChallengePhaseAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"name\", \"challenge\", \"start_date\", \"end_date\", \"test_annotation\", \"is_public\", \"leaderboard_public\")\n list_filter = (\"leaderboard_public\", \"challenge\")\n search_fields = (\"name\",)\n\n\[email protected](Leaderboard)\nclass LeaderboardAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"id\", \"schema\")\n search_fields = (\"id\",)\n\n\[email protected](ChallengePhaseSplit)\nclass ChallengePhaseSplitAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"id\", \"challenge_phase\", \"dataset_split\", \"leaderboard\", \"visibility\")\n list_filter = (\"challenge_phase\", \"dataset_split\", \"leaderboard\", \"visibility\")\n search_fields = (\"challenge_phase\", \"dataset_split\", \"leaderboard\")\n\n\[email protected](LeaderboardData)\nclass LeaderboardDataAdmin(ImportExportTimeStampedAdmin):\n list_display = (\"challenge_phase_split\", \"submission\", \"leaderboard\", \"result\")\n list_filter = (\"challenge_phase_split\", \"leaderboard\",)\n search_fields = (\"challenge_phase_split\", \"submission\", \"leaderboard\", \"result\")\n\n\[email protected](ChallengeConfiguration)\nclass ChallengeConfigurationAdmin(ImportExportTimeStampedAdmin):\n list_display = ('user', 'challenge', 'is_created', 'zip_configuration',)\n list_filter = ('user', 'is_created',)\n search_fields = ('user', 'challenge',)\n\n\[email protected](StarChallenge)\nclass StarChallengeAdmin(ImportExportTimeStampedAdmin):\n list_display = ('user', 'challenge', 'is_starred')\n search_fields = ('user', 'challenge',)\n", "path": "apps/challenges/admin.py"}]} | 983 | 181 |
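The fix amounts to adding `featured` to the admin's `list_display` and `list_filter`. A minimal Django admin sketch of that pattern, assuming a Django project in which `Challenge` defines a boolean `featured` field (the base class and field list are simplified relative to the repository's `ImportExportTimeStampedAdmin`):

```python
from django.contrib import admin

from .models import Challenge  # assumes Challenge has a BooleanField named `featured`


@admin.register(Challenge)
class ChallengeAdmin(admin.ModelAdmin):
    # Showing the flag in the change list and exposing it as a sidebar filter
    # lets staff narrow the queryset to featured challenges only.
    list_display = ("title", "creator", "published", "featured")
    list_filter = ("published", "featured")
    search_fields = ("title", "creator")
```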
gh_patches_debug_49480 | rasdani/github-patches | git_diff | numpy__numpy-15425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dir(numpy) returns duplicate "testing"
<!-- Please describe the issue in detail here, and fill in the fields below -->
### Reproducing code example:
<!-- A short code example that reproduces the problem/missing feature. It should be
self-contained, i.e., possible to run as-is via 'python myproblem.py' -->
```python
import numpy as np
>>> np.__version__
'1.18.1'
>>> len(dir(np))
620
>>> np.testing
<module 'numpy.testing' from 'C:\\Python\\Python38\\lib\\site-packages\\numpy\\testing\\__init__.py'>
>>> len(dir(np))
621
>>> [i for i in dir(np) if i == "testing"]
['testing', 'testing']
```
### Error:
"testing" appears twice in dir(np)
### Numpy/Python version information:
<!-- Output from 'import sys, numpy; print(numpy.__version__, sys.version)' -->
Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32
>>> np.__version__
'1.18.1'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpy/__init__.py`
Content:
```
1 """
2 NumPy
3 =====
4
5 Provides
6 1. An array object of arbitrary homogeneous items
7 2. Fast mathematical operations over arrays
8 3. Linear Algebra, Fourier Transforms, Random Number Generation
9
10 How to use the documentation
11 ----------------------------
12 Documentation is available in two forms: docstrings provided
13 with the code, and a loose standing reference guide, available from
14 `the NumPy homepage <https://www.scipy.org>`_.
15
16 We recommend exploring the docstrings using
17 `IPython <https://ipython.org>`_, an advanced Python shell with
18 TAB-completion and introspection capabilities. See below for further
19 instructions.
20
21 The docstring examples assume that `numpy` has been imported as `np`::
22
23 >>> import numpy as np
24
25 Code snippets are indicated by three greater-than signs::
26
27 >>> x = 42
28 >>> x = x + 1
29
30 Use the built-in ``help`` function to view a function's docstring::
31
32 >>> help(np.sort)
33 ... # doctest: +SKIP
34
35 For some objects, ``np.info(obj)`` may provide additional help. This is
36 particularly true if you see the line "Help on ufunc object:" at the top
37 of the help() page. Ufuncs are implemented in C, not Python, for speed.
38 The native Python help() does not know how to view their help, but our
39 np.info() function does.
40
41 To search for documents containing a keyword, do::
42
43 >>> np.lookfor('keyword')
44 ... # doctest: +SKIP
45
46 General-purpose documents like a glossary and help on the basic concepts
47 of numpy are available under the ``doc`` sub-module::
48
49 >>> from numpy import doc
50 >>> help(doc)
51 ... # doctest: +SKIP
52
53 Available subpackages
54 ---------------------
55 doc
56 Topical documentation on broadcasting, indexing, etc.
57 lib
58 Basic functions used by several sub-packages.
59 random
60 Core Random Tools
61 linalg
62 Core Linear Algebra Tools
63 fft
64 Core FFT routines
65 polynomial
66 Polynomial tools
67 testing
68 NumPy testing tools
69 f2py
70 Fortran to Python Interface Generator.
71 distutils
72 Enhancements to distutils with support for
73 Fortran compilers support and more.
74
75 Utilities
76 ---------
77 test
78 Run numpy unittests
79 show_config
80 Show numpy build configuration
81 dual
82 Overwrite certain functions with high-performance Scipy tools
83 matlib
84 Make everything matrices.
85 __version__
86 NumPy version string
87
88 Viewing documentation using IPython
89 -----------------------------------
90 Start IPython with the NumPy profile (``ipython -p numpy``), which will
91 import `numpy` under the alias `np`. Then, use the ``cpaste`` command to
92 paste examples into the shell. To see which functions are available in
93 `numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use
94 ``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow
95 down the list. To view the docstring for a function, use
96 ``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view
97 the source code).
98
99 Copies vs. in-place operation
100 -----------------------------
101 Most of the functions in `numpy` return a copy of the array argument
102 (e.g., `np.sort`). In-place versions of these functions are often
103 available as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.
104 Exceptions to this rule are documented.
105
106 """
107 import sys
108 import warnings
109
110 from ._globals import ModuleDeprecationWarning, VisibleDeprecationWarning
111 from ._globals import _NoValue
112
113 # We first need to detect if we're being called as part of the numpy setup
114 # procedure itself in a reliable manner.
115 try:
116 __NUMPY_SETUP__
117 except NameError:
118 __NUMPY_SETUP__ = False
119
120 if __NUMPY_SETUP__:
121 sys.stderr.write('Running from numpy source directory.\n')
122 else:
123 try:
124 from numpy.__config__ import show as show_config
125 except ImportError:
126 msg = """Error importing numpy: you should not try to import numpy from
127 its source directory; please exit the numpy source tree, and relaunch
128 your python interpreter from there."""
129 raise ImportError(msg)
130
131 from .version import git_revision as __git_revision__
132 from .version import version as __version__
133
134 __all__ = ['ModuleDeprecationWarning',
135 'VisibleDeprecationWarning']
136
137 # Allow distributors to run custom init code
138 from . import _distributor_init
139
140 from . import core
141 from .core import *
142 from . import compat
143 from . import lib
144 # FIXME: why have numpy.lib if everything is imported here??
145 from .lib import *
146
147 from . import linalg
148 from . import fft
149 from . import polynomial
150 from . import random
151 from . import ctypeslib
152 from . import ma
153 from . import matrixlib as _mat
154 from .matrixlib import *
155 from .compat import long
156
157 # Make these accessible from numpy name-space
158 # but not imported in from numpy import *
159 # TODO[gh-6103]: Deprecate these
160 if sys.version_info[0] >= 3:
161 from builtins import bool, int, float, complex, object, str
162 unicode = str
163 else:
164 from __builtin__ import bool, int, float, complex, object, unicode, str
165
166 from .core import round, abs, max, min
167 # now that numpy modules are imported, can initialize limits
168 core.getlimits._register_known_types()
169
170 __all__.extend(['__version__', 'show_config'])
171 __all__.extend(core.__all__)
172 __all__.extend(_mat.__all__)
173 __all__.extend(lib.__all__)
174 __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])
175
176 # These are added by `from .core import *` and `core.__all__`, but we
177 # overwrite them above with builtins we do _not_ want to export.
178 __all__.remove('long')
179 __all__.remove('unicode')
180
181 # Remove things that are in the numpy.lib but not in the numpy namespace
182 # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)
183 # that prevents adding more things to the main namespace by accident.
184 # The list below will grow until the `from .lib import *` fixme above is
185 # taken care of
186 __all__.remove('Arrayterator')
187 del Arrayterator
188
189 # Filter out Cython harmless warnings
190 warnings.filterwarnings("ignore", message="numpy.dtype size changed")
191 warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
192 warnings.filterwarnings("ignore", message="numpy.ndarray size changed")
193
194 # oldnumeric and numarray were removed in 1.9. In case some packages import
195 # but do not use them, we define them here for backward compatibility.
196 oldnumeric = 'removed'
197 numarray = 'removed'
198
199 if sys.version_info[:2] >= (3, 7):
200 # Importing Tester requires importing all of UnitTest which is not a
201 # cheap import Since it is mainly used in test suits, we lazy import it
202 # here to save on the order of 10 ms of import time for most users
203 #
204 # The previous way Tester was imported also had a side effect of adding
205 # the full `numpy.testing` namespace
206 #
207 # module level getattr is only supported in 3.7 onwards
208 # https://www.python.org/dev/peps/pep-0562/
209 def __getattr__(attr):
210 if attr == 'testing':
211 import numpy.testing as testing
212 return testing
213 elif attr == 'Tester':
214 from .testing import Tester
215 return Tester
216 else:
217 raise AttributeError("module {!r} has no attribute "
218 "{!r}".format(__name__, attr))
219
220 def __dir__():
221 return list(globals().keys()) + ['Tester', 'testing']
222
223 else:
224 # We don't actually use this ourselves anymore, but I'm not 100% sure that
225 # no-one else in the world is using it (though I hope not)
226 from .testing import Tester
227
228 # Pytest testing
229 from numpy._pytesttester import PytestTester
230 test = PytestTester(__name__)
231 del PytestTester
232
233
234 def _sanity_check():
235 """
236 Quick sanity checks for common bugs caused by environment.
237 There are some cases e.g. with wrong BLAS ABI that cause wrong
238 results under specific runtime conditions that are not necessarily
239 achieved during test suite runs, and it is useful to catch those early.
240
241 See https://github.com/numpy/numpy/issues/8577 and other
242 similar bug reports.
243
244 """
245 try:
246 x = ones(2, dtype=float32)
247 if not abs(x.dot(x) - 2.0) < 1e-5:
248 raise AssertionError()
249 except AssertionError:
250 msg = ("The current Numpy installation ({!r}) fails to "
251 "pass simple sanity checks. This can be caused for example "
252 "by incorrect BLAS library being linked in, or by mixing "
253 "package managers (pip, conda, apt, ...). Search closed "
254 "numpy issues for similar problems.")
255 raise RuntimeError(msg.format(__file__))
256
257 _sanity_check()
258 del _sanity_check
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numpy/__init__.py b/numpy/__init__.py
--- a/numpy/__init__.py
+++ b/numpy/__init__.py
@@ -218,7 +218,7 @@
"{!r}".format(__name__, attr))
def __dir__():
- return list(globals().keys()) + ['Tester', 'testing']
+ return list(globals().keys() | {'Tester', 'testing'})
else:
# We don't actually use this ourselves anymore, but I'm not 100% sure that
| {"golden_diff": "diff --git a/numpy/__init__.py b/numpy/__init__.py\n--- a/numpy/__init__.py\n+++ b/numpy/__init__.py\n@@ -218,7 +218,7 @@\n \"{!r}\".format(__name__, attr))\n \n def __dir__():\n- return list(globals().keys()) + ['Tester', 'testing']\n+ return list(globals().keys() | {'Tester', 'testing'})\n \n else:\n # We don't actually use this ourselves anymore, but I'm not 100% sure that\n", "issue": "dir(numpy) returns duplicate \"testing\"\n<!-- Please describe the issue in detail here, and fill in the fields below -->\r\n\r\n### Reproducing code example:\r\n\r\n<!-- A short code example that reproduces the problem/missing feature. It should be\r\nself-contained, i.e., possible to run as-is via 'python myproblem.py' -->\r\n\r\n```python\r\nimport numpy as np\r\n>>> np.__version__\r\n'1.18.1'\r\n>>> len(dir(np))\r\n620\r\n>>> np.testing\r\n<module 'numpy.testing' from 'C:\\\\Python\\\\Python38\\\\lib\\\\site-packages\\\\numpy\\\\testing\\\\__init__.py'>\r\n>>> len(dir(np))\r\n621\r\n>>> [i for i in dir(np) if i == \"testing\"]\r\n['testing', 'testing']\r\n```\r\n### Error:\r\n\"testing\" appears twice in dir(np)\r\n\r\n\r\n### Numpy/Python version information:\r\n\r\n<!-- Output from 'import sys, numpy; print(numpy.__version__, sys.version)' -->\r\nPython 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32\r\n>>> np.__version__\r\n'1.18.1'\n", "before_files": [{"content": "\"\"\"\nNumPy\n=====\n\nProvides\n 1. An array object of arbitrary homogeneous items\n 2. Fast mathematical operations over arrays\n 3. Linear Algebra, Fourier Transforms, Random Number Generation\n\nHow to use the documentation\n----------------------------\nDocumentation is available in two forms: docstrings provided\nwith the code, and a loose standing reference guide, available from\n`the NumPy homepage <https://www.scipy.org>`_.\n\nWe recommend exploring the docstrings using\n`IPython <https://ipython.org>`_, an advanced Python shell with\nTAB-completion and introspection capabilities. See below for further\ninstructions.\n\nThe docstring examples assume that `numpy` has been imported as `np`::\n\n >>> import numpy as np\n\nCode snippets are indicated by three greater-than signs::\n\n >>> x = 42\n >>> x = x + 1\n\nUse the built-in ``help`` function to view a function's docstring::\n\n >>> help(np.sort)\n ... # doctest: +SKIP\n\nFor some objects, ``np.info(obj)`` may provide additional help. This is\nparticularly true if you see the line \"Help on ufunc object:\" at the top\nof the help() page. Ufuncs are implemented in C, not Python, for speed.\nThe native Python help() does not know how to view their help, but our\nnp.info() function does.\n\nTo search for documents containing a keyword, do::\n\n >>> np.lookfor('keyword')\n ... # doctest: +SKIP\n\nGeneral-purpose documents like a glossary and help on the basic concepts\nof numpy are available under the ``doc`` sub-module::\n\n >>> from numpy import doc\n >>> help(doc)\n ... 
# doctest: +SKIP\n\nAvailable subpackages\n---------------------\ndoc\n Topical documentation on broadcasting, indexing, etc.\nlib\n Basic functions used by several sub-packages.\nrandom\n Core Random Tools\nlinalg\n Core Linear Algebra Tools\nfft\n Core FFT routines\npolynomial\n Polynomial tools\ntesting\n NumPy testing tools\nf2py\n Fortran to Python Interface Generator.\ndistutils\n Enhancements to distutils with support for\n Fortran compilers support and more.\n\nUtilities\n---------\ntest\n Run numpy unittests\nshow_config\n Show numpy build configuration\ndual\n Overwrite certain functions with high-performance Scipy tools\nmatlib\n Make everything matrices.\n__version__\n NumPy version string\n\nViewing documentation using IPython\n-----------------------------------\nStart IPython with the NumPy profile (``ipython -p numpy``), which will\nimport `numpy` under the alias `np`. Then, use the ``cpaste`` command to\npaste examples into the shell. To see which functions are available in\n`numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use\n``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow\ndown the list. To view the docstring for a function, use\n``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view\nthe source code).\n\nCopies vs. in-place operation\n-----------------------------\nMost of the functions in `numpy` return a copy of the array argument\n(e.g., `np.sort`). In-place versions of these functions are often\navailable as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.\nExceptions to this rule are documented.\n\n\"\"\"\nimport sys\nimport warnings\n\nfrom ._globals import ModuleDeprecationWarning, VisibleDeprecationWarning\nfrom ._globals import _NoValue\n\n# We first need to detect if we're being called as part of the numpy setup\n# procedure itself in a reliable manner.\ntry:\n __NUMPY_SETUP__\nexcept NameError:\n __NUMPY_SETUP__ = False\n\nif __NUMPY_SETUP__:\n sys.stderr.write('Running from numpy source directory.\\n')\nelse:\n try:\n from numpy.__config__ import show as show_config\n except ImportError:\n msg = \"\"\"Error importing numpy: you should not try to import numpy from\n its source directory; please exit the numpy source tree, and relaunch\n your python interpreter from there.\"\"\"\n raise ImportError(msg)\n\n from .version import git_revision as __git_revision__\n from .version import version as __version__\n\n __all__ = ['ModuleDeprecationWarning',\n 'VisibleDeprecationWarning']\n\n # Allow distributors to run custom init code\n from . import _distributor_init\n\n from . import core\n from .core import *\n from . import compat\n from . import lib\n # FIXME: why have numpy.lib if everything is imported here??\n from .lib import *\n\n from . import linalg\n from . import fft\n from . import polynomial\n from . import random\n from . import ctypeslib\n from . import ma\n from . 
import matrixlib as _mat\n from .matrixlib import *\n from .compat import long\n\n # Make these accessible from numpy name-space\n # but not imported in from numpy import *\n # TODO[gh-6103]: Deprecate these\n if sys.version_info[0] >= 3:\n from builtins import bool, int, float, complex, object, str\n unicode = str\n else:\n from __builtin__ import bool, int, float, complex, object, unicode, str\n\n from .core import round, abs, max, min\n # now that numpy modules are imported, can initialize limits\n core.getlimits._register_known_types()\n\n __all__.extend(['__version__', 'show_config'])\n __all__.extend(core.__all__)\n __all__.extend(_mat.__all__)\n __all__.extend(lib.__all__)\n __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])\n\n # These are added by `from .core import *` and `core.__all__`, but we\n # overwrite them above with builtins we do _not_ want to export.\n __all__.remove('long')\n __all__.remove('unicode')\n\n # Remove things that are in the numpy.lib but not in the numpy namespace\n # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)\n # that prevents adding more things to the main namespace by accident.\n # The list below will grow until the `from .lib import *` fixme above is\n # taken care of\n __all__.remove('Arrayterator')\n del Arrayterator\n\n # Filter out Cython harmless warnings\n warnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ndarray size changed\")\n\n # oldnumeric and numarray were removed in 1.9. In case some packages import\n # but do not use them, we define them here for backward compatibility.\n oldnumeric = 'removed'\n numarray = 'removed'\n\n if sys.version_info[:2] >= (3, 7):\n # Importing Tester requires importing all of UnitTest which is not a\n # cheap import Since it is mainly used in test suits, we lazy import it\n # here to save on the order of 10 ms of import time for most users\n #\n # The previous way Tester was imported also had a side effect of adding\n # the full `numpy.testing` namespace\n #\n # module level getattr is only supported in 3.7 onwards\n # https://www.python.org/dev/peps/pep-0562/\n def __getattr__(attr):\n if attr == 'testing':\n import numpy.testing as testing\n return testing\n elif attr == 'Tester':\n from .testing import Tester\n return Tester\n else:\n raise AttributeError(\"module {!r} has no attribute \"\n \"{!r}\".format(__name__, attr))\n\n def __dir__():\n return list(globals().keys()) + ['Tester', 'testing']\n\n else:\n # We don't actually use this ourselves anymore, but I'm not 100% sure that\n # no-one else in the world is using it (though I hope not)\n from .testing import Tester\n\n # Pytest testing\n from numpy._pytesttester import PytestTester\n test = PytestTester(__name__)\n del PytestTester\n\n\n def _sanity_check():\n \"\"\"\n Quick sanity checks for common bugs caused by environment.\n There are some cases e.g. with wrong BLAS ABI that cause wrong\n results under specific runtime conditions that are not necessarily\n achieved during test suite runs, and it is useful to catch those early.\n\n See https://github.com/numpy/numpy/issues/8577 and other\n similar bug reports.\n\n \"\"\"\n try:\n x = ones(2, dtype=float32)\n if not abs(x.dot(x) - 2.0) < 1e-5:\n raise AssertionError()\n except AssertionError:\n msg = (\"The current Numpy installation ({!r}) fails to \"\n \"pass simple sanity checks. 
This can be caused for example \"\n \"by incorrect BLAS library being linked in, or by mixing \"\n \"package managers (pip, conda, apt, ...). Search closed \"\n \"numpy issues for similar problems.\")\n raise RuntimeError(msg.format(__file__))\n\n _sanity_check()\n del _sanity_check\n", "path": "numpy/__init__.py"}], "after_files": [{"content": "\"\"\"\nNumPy\n=====\n\nProvides\n 1. An array object of arbitrary homogeneous items\n 2. Fast mathematical operations over arrays\n 3. Linear Algebra, Fourier Transforms, Random Number Generation\n\nHow to use the documentation\n----------------------------\nDocumentation is available in two forms: docstrings provided\nwith the code, and a loose standing reference guide, available from\n`the NumPy homepage <https://www.scipy.org>`_.\n\nWe recommend exploring the docstrings using\n`IPython <https://ipython.org>`_, an advanced Python shell with\nTAB-completion and introspection capabilities. See below for further\ninstructions.\n\nThe docstring examples assume that `numpy` has been imported as `np`::\n\n >>> import numpy as np\n\nCode snippets are indicated by three greater-than signs::\n\n >>> x = 42\n >>> x = x + 1\n\nUse the built-in ``help`` function to view a function's docstring::\n\n >>> help(np.sort)\n ... # doctest: +SKIP\n\nFor some objects, ``np.info(obj)`` may provide additional help. This is\nparticularly true if you see the line \"Help on ufunc object:\" at the top\nof the help() page. Ufuncs are implemented in C, not Python, for speed.\nThe native Python help() does not know how to view their help, but our\nnp.info() function does.\n\nTo search for documents containing a keyword, do::\n\n >>> np.lookfor('keyword')\n ... # doctest: +SKIP\n\nGeneral-purpose documents like a glossary and help on the basic concepts\nof numpy are available under the ``doc`` sub-module::\n\n >>> from numpy import doc\n >>> help(doc)\n ... # doctest: +SKIP\n\nAvailable subpackages\n---------------------\ndoc\n Topical documentation on broadcasting, indexing, etc.\nlib\n Basic functions used by several sub-packages.\nrandom\n Core Random Tools\nlinalg\n Core Linear Algebra Tools\nfft\n Core FFT routines\npolynomial\n Polynomial tools\ntesting\n NumPy testing tools\nf2py\n Fortran to Python Interface Generator.\ndistutils\n Enhancements to distutils with support for\n Fortran compilers support and more.\n\nUtilities\n---------\ntest\n Run numpy unittests\nshow_config\n Show numpy build configuration\ndual\n Overwrite certain functions with high-performance Scipy tools\nmatlib\n Make everything matrices.\n__version__\n NumPy version string\n\nViewing documentation using IPython\n-----------------------------------\nStart IPython with the NumPy profile (``ipython -p numpy``), which will\nimport `numpy` under the alias `np`. Then, use the ``cpaste`` command to\npaste examples into the shell. To see which functions are available in\n`numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use\n``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow\ndown the list. To view the docstring for a function, use\n``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view\nthe source code).\n\nCopies vs. in-place operation\n-----------------------------\nMost of the functions in `numpy` return a copy of the array argument\n(e.g., `np.sort`). In-place versions of these functions are often\navailable as array methods, i.e. 
``x = np.array([1,2,3]); x.sort()``.\nExceptions to this rule are documented.\n\n\"\"\"\nimport sys\nimport warnings\n\nfrom ._globals import ModuleDeprecationWarning, VisibleDeprecationWarning\nfrom ._globals import _NoValue\n\n# We first need to detect if we're being called as part of the numpy setup\n# procedure itself in a reliable manner.\ntry:\n __NUMPY_SETUP__\nexcept NameError:\n __NUMPY_SETUP__ = False\n\nif __NUMPY_SETUP__:\n sys.stderr.write('Running from numpy source directory.\\n')\nelse:\n try:\n from numpy.__config__ import show as show_config\n except ImportError:\n msg = \"\"\"Error importing numpy: you should not try to import numpy from\n its source directory; please exit the numpy source tree, and relaunch\n your python interpreter from there.\"\"\"\n raise ImportError(msg)\n\n from .version import git_revision as __git_revision__\n from .version import version as __version__\n\n __all__ = ['ModuleDeprecationWarning',\n 'VisibleDeprecationWarning']\n\n # Allow distributors to run custom init code\n from . import _distributor_init\n\n from . import core\n from .core import *\n from . import compat\n from . import lib\n # FIXME: why have numpy.lib if everything is imported here??\n from .lib import *\n\n from . import linalg\n from . import fft\n from . import polynomial\n from . import random\n from . import ctypeslib\n from . import ma\n from . import matrixlib as _mat\n from .matrixlib import *\n from .compat import long\n\n # Make these accessible from numpy name-space\n # but not imported in from numpy import *\n # TODO[gh-6103]: Deprecate these\n if sys.version_info[0] >= 3:\n from builtins import bool, int, float, complex, object, str\n unicode = str\n else:\n from __builtin__ import bool, int, float, complex, object, unicode, str\n\n from .core import round, abs, max, min\n # now that numpy modules are imported, can initialize limits\n core.getlimits._register_known_types()\n\n __all__.extend(['__version__', 'show_config'])\n __all__.extend(core.__all__)\n __all__.extend(_mat.__all__)\n __all__.extend(lib.__all__)\n __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])\n\n # These are added by `from .core import *` and `core.__all__`, but we\n # overwrite them above with builtins we do _not_ want to export.\n __all__.remove('long')\n __all__.remove('unicode')\n\n # Remove things that are in the numpy.lib but not in the numpy namespace\n # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)\n # that prevents adding more things to the main namespace by accident.\n # The list below will grow until the `from .lib import *` fixme above is\n # taken care of\n __all__.remove('Arrayterator')\n del Arrayterator\n\n # Filter out Cython harmless warnings\n warnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ndarray size changed\")\n\n # oldnumeric and numarray were removed in 1.9. 
In case some packages import\n # but do not use them, we define them here for backward compatibility.\n oldnumeric = 'removed'\n numarray = 'removed'\n\n if sys.version_info[:2] >= (3, 7):\n # Importing Tester requires importing all of UnitTest which is not a\n # cheap import Since it is mainly used in test suits, we lazy import it\n # here to save on the order of 10 ms of import time for most users\n #\n # The previous way Tester was imported also had a side effect of adding\n # the full `numpy.testing` namespace\n #\n # module level getattr is only supported in 3.7 onwards\n # https://www.python.org/dev/peps/pep-0562/\n def __getattr__(attr):\n if attr == 'testing':\n import numpy.testing as testing\n return testing\n elif attr == 'Tester':\n from .testing import Tester\n return Tester\n else:\n raise AttributeError(\"module {!r} has no attribute \"\n \"{!r}\".format(__name__, attr))\n\n def __dir__():\n return list(globals().keys() | {'Tester', 'testing'})\n\n else:\n # We don't actually use this ourselves anymore, but I'm not 100% sure that\n # no-one else in the world is using it (though I hope not)\n from .testing import Tester\n\n # Pytest testing\n from numpy._pytesttester import PytestTester\n test = PytestTester(__name__)\n del PytestTester\n\n\n def _sanity_check():\n \"\"\"\n Quick sanity checks for common bugs caused by environment.\n There are some cases e.g. with wrong BLAS ABI that cause wrong\n results under specific runtime conditions that are not necessarily\n achieved during test suite runs, and it is useful to catch those early.\n\n See https://github.com/numpy/numpy/issues/8577 and other\n similar bug reports.\n\n \"\"\"\n try:\n x = ones(2, dtype=float32)\n if not abs(x.dot(x) - 2.0) < 1e-5:\n raise AssertionError()\n except AssertionError:\n msg = (\"The current Numpy installation ({!r}) fails to \"\n \"pass simple sanity checks. This can be caused for example \"\n \"by incorrect BLAS library being linked in, or by mixing \"\n \"package managers (pip, conda, apt, ...). Search closed \"\n \"numpy issues for similar problems.\")\n raise RuntimeError(msg.format(__file__))\n\n _sanity_check()\n del _sanity_check\n", "path": "numpy/__init__.py"}]} | 3,315 | 127 |
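The one-line fix works because accessing `np.testing` triggers `import numpy.testing`, which binds the name `testing` into the package's `globals()`; plain list concatenation in `__dir__` then reports the name twice, while a set union cannot. A self-contained illustration of just that behaviour, with dicts standing in for the module's globals:

```python
# Stand-ins for numpy's globals before and after `numpy.testing` has been imported;
# importing a submodule binds it as an attribute on the parent package.
globals_before = {"__name__": "numpy", "ones": object()}
globals_after = {**globals_before, "testing": object()}

def dir_with_concat(module_globals):
    # Old __dir__: list concatenation repeats names already present in globals().
    return sorted(list(module_globals.keys()) + ["Tester", "testing"])

def dir_with_union(module_globals):
    # Patched __dir__: set union de-duplicates before dir() sorts the result.
    return sorted(module_globals.keys() | {"Tester", "testing"})

print(dir_with_concat(globals_after).count("testing"))  # 2 -> the duplicate in the report
print(dir_with_union(globals_after).count("testing"))   # 1
```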
gh_patches_debug_5820 | rasdani/github-patches | git_diff | aws__aws-cli-761 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
datapipeline query-object --query argument is shadowed
The top level `--query` option shadows the `--query` from datapipeline query-object. This can be addressed in the argrename customization model. We can also take this opportunity to remove the `cli_name` from the `.extra.json` files in botocore.
--- END ISSUE ---
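Before the file listing below, it may help to see the shape of the rename hook involved: a `<service>.<operation>.<arg>` mapping drives a `building-argument-table` handler that moves the colliding entry to a new name. A rough, self-contained sketch, where the helper body is a simplified stand-in for `awscli.customizations.utils.rename_argument` and the table is a plain dict:

```python
def rename_argument(argument_table, existing_name, new_name):
    # Simplified stand-in: move the argument object under a name that no longer
    # collides with the global --query option.
    argument_table[new_name] = argument_table.pop(existing_name)

ARGUMENT_RENAMES = {
    # '*' in the operation slot applies the rename across the whole service.
    "datapipeline.*.query": "objects-query",
}

argument_table = {"query": "<query arg>", "limit": "<limit arg>"}
for key, new_name in ARGUMENT_RENAMES.items():
    original = key.rsplit(".", 1)[1]
    if original in argument_table:
        rename_argument(argument_table, original, new_name)

print(argument_table)  # {'limit': '<limit arg>', 'objects-query': '<query arg>'}
```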
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/argrename.py`
Content:
```
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 """
14 """
15
16 from awscli.customizations import utils
17
18
19 ARGUMENT_RENAMES = {
20 # Mapping of original arg to renamed arg.
21 # The key is <service>.<operation>.argname
22 # The first part of the key is used for event registration
23 # so if you wanted to rename something for an entire service you
24 # could say 'ec2.*.dry-run': 'renamed-arg-name', or if you wanted
25 # to rename across all services you could say '*.*.dry-run': 'new-name'.
26 'ec2.create-image.no-no-reboot': 'reboot',
27 'ec2.*.no-egress': 'ingress',
28 'ec2.*.no-disable-api-termination': 'enable-api-termination',
29 }
30
31
32 def register_arg_renames(cli):
33 for original, new_name in ARGUMENT_RENAMES.items():
34 event_portion, original_arg_name = original.rsplit('.', 1)
35 cli.register('building-argument-table.%s' % event_portion,
36 rename_arg(original_arg_name, new_name))
37
38
39 def rename_arg(original_arg_name, new_name):
40 def _rename_arg(argument_table, **kwargs):
41 if original_arg_name in argument_table:
42 utils.rename_argument(argument_table, original_arg_name, new_name)
43 return _rename_arg
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awscli/customizations/argrename.py b/awscli/customizations/argrename.py
--- a/awscli/customizations/argrename.py
+++ b/awscli/customizations/argrename.py
@@ -26,6 +26,11 @@
'ec2.create-image.no-no-reboot': 'reboot',
'ec2.*.no-egress': 'ingress',
'ec2.*.no-disable-api-termination': 'enable-api-termination',
+ 'opsworks.*.region': 'stack-region',
+ 'elastictranscoder.*.output': 'job-output',
+ 'swf.register-activity-type.version': 'activity-version',
+ 'swf.register-workflow-type.version': 'workflow-version',
+ 'datapipeline.*.query': 'objects-query',
}
| {"golden_diff": "diff --git a/awscli/customizations/argrename.py b/awscli/customizations/argrename.py\n--- a/awscli/customizations/argrename.py\n+++ b/awscli/customizations/argrename.py\n@@ -26,6 +26,11 @@\n 'ec2.create-image.no-no-reboot': 'reboot',\n 'ec2.*.no-egress': 'ingress',\n 'ec2.*.no-disable-api-termination': 'enable-api-termination',\n+ 'opsworks.*.region': 'stack-region',\n+ 'elastictranscoder.*.output': 'job-output',\n+ 'swf.register-activity-type.version': 'activity-version',\n+ 'swf.register-workflow-type.version': 'workflow-version',\n+ 'datapipeline.*.query': 'objects-query',\n }\n", "issue": "datapipeline query-object --query argument is shadowed\nThe top level `--query` option shadows the `--query` from datapipeline query-object. This can be addressed in the argrename customization model. We can also take this opportunity to remove the `cli_name` from the `.extra.json` files in botocore.\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"\n\"\"\"\n\nfrom awscli.customizations import utils\n\n\nARGUMENT_RENAMES = {\n # Mapping of original arg to renamed arg.\n # The key is <service>.<operation>.argname\n # The first part of the key is used for event registration\n # so if you wanted to rename something for an entire service you\n # could say 'ec2.*.dry-run': 'renamed-arg-name', or if you wanted\n # to rename across all services you could say '*.*.dry-run': 'new-name'.\n 'ec2.create-image.no-no-reboot': 'reboot',\n 'ec2.*.no-egress': 'ingress',\n 'ec2.*.no-disable-api-termination': 'enable-api-termination',\n}\n\n\ndef register_arg_renames(cli):\n for original, new_name in ARGUMENT_RENAMES.items():\n event_portion, original_arg_name = original.rsplit('.', 1)\n cli.register('building-argument-table.%s' % event_portion,\n rename_arg(original_arg_name, new_name))\n\n\ndef rename_arg(original_arg_name, new_name):\n def _rename_arg(argument_table, **kwargs):\n if original_arg_name in argument_table:\n utils.rename_argument(argument_table, original_arg_name, new_name)\n return _rename_arg\n", "path": "awscli/customizations/argrename.py"}], "after_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"\n\"\"\"\n\nfrom awscli.customizations import utils\n\n\nARGUMENT_RENAMES = {\n # Mapping of original arg to renamed arg.\n # The key is <service>.<operation>.argname\n # The first part of the key is used for event registration\n # so if you wanted to rename something for an entire service you\n # could say 'ec2.*.dry-run': 'renamed-arg-name', or if you wanted\n # to rename across all services you could say '*.*.dry-run': 'new-name'.\n 'ec2.create-image.no-no-reboot': 'reboot',\n 'ec2.*.no-egress': 'ingress',\n 'ec2.*.no-disable-api-termination': 'enable-api-termination',\n 'opsworks.*.region': 'stack-region',\n 'elastictranscoder.*.output': 'job-output',\n 'swf.register-activity-type.version': 'activity-version',\n 'swf.register-workflow-type.version': 'workflow-version',\n 'datapipeline.*.query': 'objects-query',\n}\n\n\ndef register_arg_renames(cli):\n for original, new_name in ARGUMENT_RENAMES.items():\n event_portion, original_arg_name = original.rsplit('.', 1)\n cli.register('building-argument-table.%s' % event_portion,\n rename_arg(original_arg_name, new_name))\n\n\ndef rename_arg(original_arg_name, new_name):\n def _rename_arg(argument_table, **kwargs):\n if original_arg_name in argument_table:\n utils.rename_argument(argument_table, original_arg_name, new_name)\n return _rename_arg\n", "path": "awscli/customizations/argrename.py"}]} | 829 | 176 |
gh_patches_debug_6551 | rasdani/github-patches | git_diff | wagtail__wagtail-8396 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
wagtail.core.utils should be updated to wagtail.coreutils on 3.0 upgrade
The contents of `wagtail.core.utils` have been moved to `wagtail.coreutils` rather than `wagtail.utils`, but the release note doesn't mention this and the `wagtail updatemodulepaths` command doesn't account for it.
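The fix in the `updatemodulepaths` command is likely just one more entry in `UpdateModulePaths.REPLACEMENTS`, placed before the generic `wagtail.core` rule so the more specific pattern wins (otherwise the broad rule rewrites `wagtail.core.utils` to `wagtail.utils` first and the specific rule never matches). A sketch of the Wagtail 3.0 portion of that list:

```python
REPLACEMENTS = [
    # ... Wagtail 2.0 entries ...
    # Added in Wagtail 3.0
    (re.compile(r"\bwagtail\.tests\b"), "wagtail.test"),
    (re.compile(r"\bwagtail\.core\.utils\b"), "wagtail.coreutils"),  # must precede the generic rule
    (re.compile(r"\bwagtail\.core\b"), "wagtail"),
    # ...
]
```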
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/bin/wagtail.py`
Content:
```
1 #!/usr/bin/env python
2 import fileinput
3 import fnmatch
4 import os
5 import re
6 import sys
7 from argparse import ArgumentParser
8 from difflib import unified_diff
9
10 from django.core.management import ManagementUtility
11
12 CURRENT_PYTHON = sys.version_info[:2]
13 REQUIRED_PYTHON = (3, 5)
14
15 if CURRENT_PYTHON < REQUIRED_PYTHON:
16 sys.stderr.write(
17 "This version of Wagtail requires Python {}.{} or above - you are running {}.{}\n".format(
18 *(REQUIRED_PYTHON + CURRENT_PYTHON)
19 )
20 )
21 sys.exit(1)
22
23
24 def pluralize(value, arg="s"):
25 return "" if value == 1 else arg
26
27
28 class Command:
29 description = None
30
31 def create_parser(self, command_name=None):
32 if command_name is None:
33 prog = None
34 else:
35 # hack the prog name as reported to ArgumentParser to include the command
36 prog = "%s %s" % (prog_name(), command_name)
37
38 parser = ArgumentParser(
39 description=getattr(self, "description", None), add_help=False, prog=prog
40 )
41 self.add_arguments(parser)
42 return parser
43
44 def add_arguments(self, parser):
45 pass
46
47 def print_help(self, command_name):
48 parser = self.create_parser(command_name=command_name)
49 parser.print_help()
50
51 def execute(self, argv):
52 parser = self.create_parser()
53 options = parser.parse_args(sys.argv[2:])
54 options_dict = vars(options)
55 self.run(**options_dict)
56
57
58 class CreateProject(Command):
59 description = "Creates the directory structure for a new Wagtail project."
60
61 def add_arguments(self, parser):
62 parser.add_argument("project_name", help="Name for your Wagtail project")
63 parser.add_argument(
64 "dest_dir",
65 nargs="?",
66 help="Destination directory inside which to create the project",
67 )
68
69 def run(self, project_name=None, dest_dir=None):
70 # Make sure given name is not already in use by another python package/module.
71 try:
72 __import__(project_name)
73 except ImportError:
74 pass
75 else:
76 sys.exit(
77 "'%s' conflicts with the name of an existing "
78 "Python module and cannot be used as a project "
79 "name. Please try another name." % project_name
80 )
81
82 print(
83 "Creating a Wagtail project called %(project_name)s"
84 % {"project_name": project_name}
85 ) # noqa
86
87 # Create the project from the Wagtail template using startapp
88
89 # First find the path to Wagtail
90 import wagtail
91
92 wagtail_path = os.path.dirname(wagtail.__file__)
93 template_path = os.path.join(wagtail_path, "project_template")
94
95 # Call django-admin startproject
96 utility_args = [
97 "django-admin",
98 "startproject",
99 "--template=" + template_path,
100 "--ext=html,rst",
101 "--name=Dockerfile",
102 project_name,
103 ]
104
105 if dest_dir:
106 utility_args.append(dest_dir)
107
108 utility = ManagementUtility(utility_args)
109 utility.execute()
110
111 print(
112 "Success! %(project_name)s has been created"
113 % {"project_name": project_name}
114 ) # noqa
115
116
117 class UpdateModulePaths(Command):
118 description = "Update a Wagtail project tree to use Wagtail 2.x module paths"
119
120 REPLACEMENTS = [
121 # Added in Wagtail 2.0
122 (re.compile(r"\bwagtail\.wagtailcore\b"), "wagtail"),
123 (re.compile(r"\bwagtail\.wagtailadmin\b"), "wagtail.admin"),
124 (re.compile(r"\bwagtail\.wagtaildocs\b"), "wagtail.documents"),
125 (re.compile(r"\bwagtail\.wagtailembeds\b"), "wagtail.embeds"),
126 (re.compile(r"\bwagtail\.wagtailimages\b"), "wagtail.images"),
127 (re.compile(r"\bwagtail\.wagtailsearch\b"), "wagtail.search"),
128 (re.compile(r"\bwagtail\.wagtailsites\b"), "wagtail.sites"),
129 (re.compile(r"\bwagtail\.wagtailsnippets\b"), "wagtail.snippets"),
130 (re.compile(r"\bwagtail\.wagtailusers\b"), "wagtail.users"),
131 (re.compile(r"\bwagtail\.wagtailforms\b"), "wagtail.contrib.forms"),
132 (re.compile(r"\bwagtail\.wagtailredirects\b"), "wagtail.contrib.redirects"),
133 (
134 re.compile(r"\bwagtail\.contrib\.wagtailfrontendcache\b"),
135 "wagtail.contrib.frontend_cache",
136 ),
137 (
138 re.compile(r"\bwagtail\.contrib\.wagtailroutablepage\b"),
139 "wagtail.contrib.routable_page",
140 ),
141 (
142 re.compile(r"\bwagtail\.contrib\.wagtailsearchpromotions\b"),
143 "wagtail.contrib.search_promotions",
144 ),
145 (
146 re.compile(r"\bwagtail\.contrib\.wagtailsitemaps\b"),
147 "wagtail.contrib.sitemaps",
148 ),
149 (
150 re.compile(r"\bwagtail\.contrib\.wagtailstyleguide\b"),
151 "wagtail.contrib.styleguide",
152 ),
153 # Added in Wagtail 3.0
154 (re.compile(r"\bwagtail\.tests\b"), "wagtail.test"),
155 (re.compile(r"\bwagtail\.core\b"), "wagtail"),
156 (re.compile(r"\bwagtail\.admin\.edit_handlers\b"), "wagtail.admin.panels"),
157 (
158 re.compile(r"\bwagtail\.contrib\.forms\.edit_handlers\b"),
159 "wagtail.contrib.forms.panels",
160 ),
161 ]
162
163 def add_arguments(self, parser):
164 parser.add_argument("root_path", nargs="?", help="Path to your project's root")
165 parser.add_argument(
166 "--list",
167 action="store_true",
168 dest="list_files",
169 help="Show the list of files to change, without modifying them",
170 )
171 parser.add_argument(
172 "--diff",
173 action="store_true",
174 help="Show the changes that would be made, without modifying the files",
175 )
176 parser.add_argument(
177 "--ignore-dir",
178 action="append",
179 dest="ignored_dirs",
180 metavar="NAME",
181 help="Ignore files in this directory",
182 )
183 parser.add_argument(
184 "--ignore-file",
185 action="append",
186 dest="ignored_patterns",
187 metavar="NAME",
188 help="Ignore files with this name (supports wildcards)",
189 )
190
191 def run(
192 self,
193 root_path=None,
194 list_files=False,
195 diff=False,
196 ignored_dirs=None,
197 ignored_patterns=None,
198 ):
199 if root_path is None:
200 root_path = os.getcwd()
201
202 absolute_ignored_dirs = [
203 os.path.abspath(dir_path) + os.sep for dir_path in (ignored_dirs or [])
204 ]
205
206 if ignored_patterns is None:
207 ignored_patterns = []
208
209 checked_file_count = 0
210 changed_file_count = 0
211
212 for (dirpath, dirnames, filenames) in os.walk(root_path):
213 dirpath_with_slash = os.path.abspath(dirpath) + os.sep
214 if any(
215 dirpath_with_slash.startswith(ignored_dir)
216 for ignored_dir in absolute_ignored_dirs
217 ):
218 continue
219
220 for filename in filenames:
221 if not filename.lower().endswith(".py"):
222 continue
223
224 if any(
225 fnmatch.fnmatch(filename, pattern) for pattern in ignored_patterns
226 ):
227 continue
228
229 path = os.path.join(dirpath, filename)
230 relative_path = os.path.relpath(path, start=root_path)
231 checked_file_count += 1
232
233 if diff:
234 change_count = self._show_diff(path, relative_path=relative_path)
235 else:
236 if list_files:
237 change_count = self._count_changes(path)
238 else: # actually update
239 change_count = self._rewrite_file(path)
240 if change_count:
241 print(
242 "%s - %d change%s"
243 % (relative_path, change_count, pluralize(change_count))
244 ) # NOQA
245
246 if change_count:
247 changed_file_count += 1
248
249 if diff or list_files:
250 print(
251 "\nChecked %d .py file%s, %d file%s to update."
252 % (
253 checked_file_count,
254 pluralize(checked_file_count),
255 changed_file_count,
256 pluralize(changed_file_count),
257 )
258 ) # NOQA
259 else:
260 print(
261 "\nChecked %d .py file%s, %d file%s updated."
262 % (
263 checked_file_count,
264 pluralize(checked_file_count),
265 changed_file_count,
266 pluralize(changed_file_count),
267 )
268 ) # NOQA
269
270 def _rewrite_line(self, line):
271 for pattern, repl in self.REPLACEMENTS:
272 line = re.sub(pattern, repl, line)
273 return line
274
275 def _show_diff(self, filename, relative_path=None):
276 change_count = 0
277 original = []
278 updated = []
279
280 with open(filename) as f:
281 for original_line in f:
282 original.append(original_line)
283
284 line = self._rewrite_line(original_line)
285 updated.append(line)
286 if line != original_line:
287 change_count += 1
288
289 if change_count:
290 relative_path = relative_path or filename
291
292 sys.stdout.writelines(
293 unified_diff(
294 original,
295 updated,
296 fromfile="%s:before" % relative_path,
297 tofile="%s:after" % relative_path,
298 )
299 )
300
301 return change_count
302
303 def _count_changes(self, filename):
304 change_count = 0
305
306 with open(filename) as f:
307 for original_line in f:
308 line = self._rewrite_line(original_line)
309 if line != original_line:
310 change_count += 1
311
312 return change_count
313
314 def _rewrite_file(self, filename):
315 change_count = 0
316
317 with fileinput.FileInput(filename, inplace=True) as f:
318 for original_line in f:
319 line = self._rewrite_line(original_line)
320 print(line, end="") # NOQA
321 if line != original_line:
322 change_count += 1
323
324 return change_count
325
326
327 class Version(Command):
328 description = "List which version of Wagtail you are using"
329
330 def run(self):
331 import wagtail
332
333 version = wagtail.get_version(wagtail.VERSION)
334
335 print("You are using Wagtail %(version)s" % {"version": version})
336
337
338 COMMANDS = {
339 "start": CreateProject(),
340 "updatemodulepaths": UpdateModulePaths(),
341 "--version": Version(),
342 }
343
344
345 def prog_name():
346 return os.path.basename(sys.argv[0])
347
348
349 def help_index():
350 print(
351 "Type '%s help <subcommand>' for help on a specific subcommand.\n" % prog_name()
352 ) # NOQA
353 print("Available subcommands:\n") # NOQA
354 for name, cmd in sorted(COMMANDS.items()):
355 print(" %s%s" % (name.ljust(20), cmd.description)) # NOQA
356
357
358 def unknown_command(command):
359 print("Unknown command: '%s'" % command) # NOQA
360 print("Type '%s help' for usage." % prog_name()) # NOQA
361 sys.exit(1)
362
363
364 def main():
365 try:
366 command_name = sys.argv[1]
367 except IndexError:
368 help_index()
369 return
370
371 if command_name == "help":
372 try:
373 help_command_name = sys.argv[2]
374 except IndexError:
375 help_index()
376 return
377
378 try:
379 command = COMMANDS[help_command_name]
380 except KeyError:
381 unknown_command(help_command_name)
382 return
383
384 command.print_help(help_command_name)
385 return
386
387 try:
388 command = COMMANDS[command_name]
389 except KeyError:
390 unknown_command(command_name)
391 return
392
393 command.execute(sys.argv)
394
395
396 if __name__ == "__main__":
397 main()
398
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/bin/wagtail.py b/wagtail/bin/wagtail.py
--- a/wagtail/bin/wagtail.py
+++ b/wagtail/bin/wagtail.py
@@ -152,6 +152,7 @@
),
# Added in Wagtail 3.0
(re.compile(r"\bwagtail\.tests\b"), "wagtail.test"),
+ (re.compile(r"\bwagtail\.core\.utils\b"), "wagtail.coreutils"),
(re.compile(r"\bwagtail\.core\b"), "wagtail"),
(re.compile(r"\bwagtail\.admin\.edit_handlers\b"), "wagtail.admin.panels"),
(
| {"golden_diff": "diff --git a/wagtail/bin/wagtail.py b/wagtail/bin/wagtail.py\n--- a/wagtail/bin/wagtail.py\n+++ b/wagtail/bin/wagtail.py\n@@ -152,6 +152,7 @@\n ),\n # Added in Wagtail 3.0\n (re.compile(r\"\\bwagtail\\.tests\\b\"), \"wagtail.test\"),\n+ (re.compile(r\"\\bwagtail\\.core\\.utils\\b\"), \"wagtail.coreutils\"),\n (re.compile(r\"\\bwagtail\\.core\\b\"), \"wagtail\"),\n (re.compile(r\"\\bwagtail\\.admin\\.edit_handlers\\b\"), \"wagtail.admin.panels\"),\n (\n", "issue": "wagtail.core.utils should be updated to wagtail.coreutils on 3.0 upgrade\nThe contents of `wagtail.core.utils` have been moved to `wagtail.coreutils` rather than `wagtail.utils`, but the release note doesn't mention this and the `wagtail upgrademodulepaths` command doesn't account for it.\n", "before_files": [{"content": "#!/usr/bin/env python\nimport fileinput\nimport fnmatch\nimport os\nimport re\nimport sys\nfrom argparse import ArgumentParser\nfrom difflib import unified_diff\n\nfrom django.core.management import ManagementUtility\n\nCURRENT_PYTHON = sys.version_info[:2]\nREQUIRED_PYTHON = (3, 5)\n\nif CURRENT_PYTHON < REQUIRED_PYTHON:\n sys.stderr.write(\n \"This version of Wagtail requires Python {}.{} or above - you are running {}.{}\\n\".format(\n *(REQUIRED_PYTHON + CURRENT_PYTHON)\n )\n )\n sys.exit(1)\n\n\ndef pluralize(value, arg=\"s\"):\n return \"\" if value == 1 else arg\n\n\nclass Command:\n description = None\n\n def create_parser(self, command_name=None):\n if command_name is None:\n prog = None\n else:\n # hack the prog name as reported to ArgumentParser to include the command\n prog = \"%s %s\" % (prog_name(), command_name)\n\n parser = ArgumentParser(\n description=getattr(self, \"description\", None), add_help=False, prog=prog\n )\n self.add_arguments(parser)\n return parser\n\n def add_arguments(self, parser):\n pass\n\n def print_help(self, command_name):\n parser = self.create_parser(command_name=command_name)\n parser.print_help()\n\n def execute(self, argv):\n parser = self.create_parser()\n options = parser.parse_args(sys.argv[2:])\n options_dict = vars(options)\n self.run(**options_dict)\n\n\nclass CreateProject(Command):\n description = \"Creates the directory structure for a new Wagtail project.\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"project_name\", help=\"Name for your Wagtail project\")\n parser.add_argument(\n \"dest_dir\",\n nargs=\"?\",\n help=\"Destination directory inside which to create the project\",\n )\n\n def run(self, project_name=None, dest_dir=None):\n # Make sure given name is not already in use by another python package/module.\n try:\n __import__(project_name)\n except ImportError:\n pass\n else:\n sys.exit(\n \"'%s' conflicts with the name of an existing \"\n \"Python module and cannot be used as a project \"\n \"name. 
Please try another name.\" % project_name\n )\n\n print(\n \"Creating a Wagtail project called %(project_name)s\"\n % {\"project_name\": project_name}\n ) # noqa\n\n # Create the project from the Wagtail template using startapp\n\n # First find the path to Wagtail\n import wagtail\n\n wagtail_path = os.path.dirname(wagtail.__file__)\n template_path = os.path.join(wagtail_path, \"project_template\")\n\n # Call django-admin startproject\n utility_args = [\n \"django-admin\",\n \"startproject\",\n \"--template=\" + template_path,\n \"--ext=html,rst\",\n \"--name=Dockerfile\",\n project_name,\n ]\n\n if dest_dir:\n utility_args.append(dest_dir)\n\n utility = ManagementUtility(utility_args)\n utility.execute()\n\n print(\n \"Success! %(project_name)s has been created\"\n % {\"project_name\": project_name}\n ) # noqa\n\n\nclass UpdateModulePaths(Command):\n description = \"Update a Wagtail project tree to use Wagtail 2.x module paths\"\n\n REPLACEMENTS = [\n # Added in Wagtail 2.0\n (re.compile(r\"\\bwagtail\\.wagtailcore\\b\"), \"wagtail\"),\n (re.compile(r\"\\bwagtail\\.wagtailadmin\\b\"), \"wagtail.admin\"),\n (re.compile(r\"\\bwagtail\\.wagtaildocs\\b\"), \"wagtail.documents\"),\n (re.compile(r\"\\bwagtail\\.wagtailembeds\\b\"), \"wagtail.embeds\"),\n (re.compile(r\"\\bwagtail\\.wagtailimages\\b\"), \"wagtail.images\"),\n (re.compile(r\"\\bwagtail\\.wagtailsearch\\b\"), \"wagtail.search\"),\n (re.compile(r\"\\bwagtail\\.wagtailsites\\b\"), \"wagtail.sites\"),\n (re.compile(r\"\\bwagtail\\.wagtailsnippets\\b\"), \"wagtail.snippets\"),\n (re.compile(r\"\\bwagtail\\.wagtailusers\\b\"), \"wagtail.users\"),\n (re.compile(r\"\\bwagtail\\.wagtailforms\\b\"), \"wagtail.contrib.forms\"),\n (re.compile(r\"\\bwagtail\\.wagtailredirects\\b\"), \"wagtail.contrib.redirects\"),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailfrontendcache\\b\"),\n \"wagtail.contrib.frontend_cache\",\n ),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailroutablepage\\b\"),\n \"wagtail.contrib.routable_page\",\n ),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailsearchpromotions\\b\"),\n \"wagtail.contrib.search_promotions\",\n ),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailsitemaps\\b\"),\n \"wagtail.contrib.sitemaps\",\n ),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailstyleguide\\b\"),\n \"wagtail.contrib.styleguide\",\n ),\n # Added in Wagtail 3.0\n (re.compile(r\"\\bwagtail\\.tests\\b\"), \"wagtail.test\"),\n (re.compile(r\"\\bwagtail\\.core\\b\"), \"wagtail\"),\n (re.compile(r\"\\bwagtail\\.admin\\.edit_handlers\\b\"), \"wagtail.admin.panels\"),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.forms\\.edit_handlers\\b\"),\n \"wagtail.contrib.forms.panels\",\n ),\n ]\n\n def add_arguments(self, parser):\n parser.add_argument(\"root_path\", nargs=\"?\", help=\"Path to your project's root\")\n parser.add_argument(\n \"--list\",\n action=\"store_true\",\n dest=\"list_files\",\n help=\"Show the list of files to change, without modifying them\",\n )\n parser.add_argument(\n \"--diff\",\n action=\"store_true\",\n help=\"Show the changes that would be made, without modifying the files\",\n )\n parser.add_argument(\n \"--ignore-dir\",\n action=\"append\",\n dest=\"ignored_dirs\",\n metavar=\"NAME\",\n help=\"Ignore files in this directory\",\n )\n parser.add_argument(\n \"--ignore-file\",\n action=\"append\",\n dest=\"ignored_patterns\",\n metavar=\"NAME\",\n help=\"Ignore files with this name (supports wildcards)\",\n )\n\n def run(\n self,\n root_path=None,\n list_files=False,\n diff=False,\n ignored_dirs=None,\n 
ignored_patterns=None,\n ):\n if root_path is None:\n root_path = os.getcwd()\n\n absolute_ignored_dirs = [\n os.path.abspath(dir_path) + os.sep for dir_path in (ignored_dirs or [])\n ]\n\n if ignored_patterns is None:\n ignored_patterns = []\n\n checked_file_count = 0\n changed_file_count = 0\n\n for (dirpath, dirnames, filenames) in os.walk(root_path):\n dirpath_with_slash = os.path.abspath(dirpath) + os.sep\n if any(\n dirpath_with_slash.startswith(ignored_dir)\n for ignored_dir in absolute_ignored_dirs\n ):\n continue\n\n for filename in filenames:\n if not filename.lower().endswith(\".py\"):\n continue\n\n if any(\n fnmatch.fnmatch(filename, pattern) for pattern in ignored_patterns\n ):\n continue\n\n path = os.path.join(dirpath, filename)\n relative_path = os.path.relpath(path, start=root_path)\n checked_file_count += 1\n\n if diff:\n change_count = self._show_diff(path, relative_path=relative_path)\n else:\n if list_files:\n change_count = self._count_changes(path)\n else: # actually update\n change_count = self._rewrite_file(path)\n if change_count:\n print(\n \"%s - %d change%s\"\n % (relative_path, change_count, pluralize(change_count))\n ) # NOQA\n\n if change_count:\n changed_file_count += 1\n\n if diff or list_files:\n print(\n \"\\nChecked %d .py file%s, %d file%s to update.\"\n % (\n checked_file_count,\n pluralize(checked_file_count),\n changed_file_count,\n pluralize(changed_file_count),\n )\n ) # NOQA\n else:\n print(\n \"\\nChecked %d .py file%s, %d file%s updated.\"\n % (\n checked_file_count,\n pluralize(checked_file_count),\n changed_file_count,\n pluralize(changed_file_count),\n )\n ) # NOQA\n\n def _rewrite_line(self, line):\n for pattern, repl in self.REPLACEMENTS:\n line = re.sub(pattern, repl, line)\n return line\n\n def _show_diff(self, filename, relative_path=None):\n change_count = 0\n original = []\n updated = []\n\n with open(filename) as f:\n for original_line in f:\n original.append(original_line)\n\n line = self._rewrite_line(original_line)\n updated.append(line)\n if line != original_line:\n change_count += 1\n\n if change_count:\n relative_path = relative_path or filename\n\n sys.stdout.writelines(\n unified_diff(\n original,\n updated,\n fromfile=\"%s:before\" % relative_path,\n tofile=\"%s:after\" % relative_path,\n )\n )\n\n return change_count\n\n def _count_changes(self, filename):\n change_count = 0\n\n with open(filename) as f:\n for original_line in f:\n line = self._rewrite_line(original_line)\n if line != original_line:\n change_count += 1\n\n return change_count\n\n def _rewrite_file(self, filename):\n change_count = 0\n\n with fileinput.FileInput(filename, inplace=True) as f:\n for original_line in f:\n line = self._rewrite_line(original_line)\n print(line, end=\"\") # NOQA\n if line != original_line:\n change_count += 1\n\n return change_count\n\n\nclass Version(Command):\n description = \"List which version of Wagtail you are using\"\n\n def run(self):\n import wagtail\n\n version = wagtail.get_version(wagtail.VERSION)\n\n print(\"You are using Wagtail %(version)s\" % {\"version\": version})\n\n\nCOMMANDS = {\n \"start\": CreateProject(),\n \"updatemodulepaths\": UpdateModulePaths(),\n \"--version\": Version(),\n}\n\n\ndef prog_name():\n return os.path.basename(sys.argv[0])\n\n\ndef help_index():\n print(\n \"Type '%s help <subcommand>' for help on a specific subcommand.\\n\" % prog_name()\n ) # NOQA\n print(\"Available subcommands:\\n\") # NOQA\n for name, cmd in sorted(COMMANDS.items()):\n print(\" %s%s\" % (name.ljust(20), 
cmd.description)) # NOQA\n\n\ndef unknown_command(command):\n print(\"Unknown command: '%s'\" % command) # NOQA\n print(\"Type '%s help' for usage.\" % prog_name()) # NOQA\n sys.exit(1)\n\n\ndef main():\n try:\n command_name = sys.argv[1]\n except IndexError:\n help_index()\n return\n\n if command_name == \"help\":\n try:\n help_command_name = sys.argv[2]\n except IndexError:\n help_index()\n return\n\n try:\n command = COMMANDS[help_command_name]\n except KeyError:\n unknown_command(help_command_name)\n return\n\n command.print_help(help_command_name)\n return\n\n try:\n command = COMMANDS[command_name]\n except KeyError:\n unknown_command(command_name)\n return\n\n command.execute(sys.argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "wagtail/bin/wagtail.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport fileinput\nimport fnmatch\nimport os\nimport re\nimport sys\nfrom argparse import ArgumentParser\nfrom difflib import unified_diff\n\nfrom django.core.management import ManagementUtility\n\nCURRENT_PYTHON = sys.version_info[:2]\nREQUIRED_PYTHON = (3, 5)\n\nif CURRENT_PYTHON < REQUIRED_PYTHON:\n sys.stderr.write(\n \"This version of Wagtail requires Python {}.{} or above - you are running {}.{}\\n\".format(\n *(REQUIRED_PYTHON + CURRENT_PYTHON)\n )\n )\n sys.exit(1)\n\n\ndef pluralize(value, arg=\"s\"):\n return \"\" if value == 1 else arg\n\n\nclass Command:\n description = None\n\n def create_parser(self, command_name=None):\n if command_name is None:\n prog = None\n else:\n # hack the prog name as reported to ArgumentParser to include the command\n prog = \"%s %s\" % (prog_name(), command_name)\n\n parser = ArgumentParser(\n description=getattr(self, \"description\", None), add_help=False, prog=prog\n )\n self.add_arguments(parser)\n return parser\n\n def add_arguments(self, parser):\n pass\n\n def print_help(self, command_name):\n parser = self.create_parser(command_name=command_name)\n parser.print_help()\n\n def execute(self, argv):\n parser = self.create_parser()\n options = parser.parse_args(sys.argv[2:])\n options_dict = vars(options)\n self.run(**options_dict)\n\n\nclass CreateProject(Command):\n description = \"Creates the directory structure for a new Wagtail project.\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"project_name\", help=\"Name for your Wagtail project\")\n parser.add_argument(\n \"dest_dir\",\n nargs=\"?\",\n help=\"Destination directory inside which to create the project\",\n )\n\n def run(self, project_name=None, dest_dir=None):\n # Make sure given name is not already in use by another python package/module.\n try:\n __import__(project_name)\n except ImportError:\n pass\n else:\n sys.exit(\n \"'%s' conflicts with the name of an existing \"\n \"Python module and cannot be used as a project \"\n \"name. 
Please try another name.\" % project_name\n )\n\n print(\n \"Creating a Wagtail project called %(project_name)s\"\n % {\"project_name\": project_name}\n ) # noqa\n\n # Create the project from the Wagtail template using startapp\n\n # First find the path to Wagtail\n import wagtail\n\n wagtail_path = os.path.dirname(wagtail.__file__)\n template_path = os.path.join(wagtail_path, \"project_template\")\n\n # Call django-admin startproject\n utility_args = [\n \"django-admin\",\n \"startproject\",\n \"--template=\" + template_path,\n \"--ext=html,rst\",\n \"--name=Dockerfile\",\n project_name,\n ]\n\n if dest_dir:\n utility_args.append(dest_dir)\n\n utility = ManagementUtility(utility_args)\n utility.execute()\n\n print(\n \"Success! %(project_name)s has been created\"\n % {\"project_name\": project_name}\n ) # noqa\n\n\nclass UpdateModulePaths(Command):\n description = \"Update a Wagtail project tree to use Wagtail 2.x module paths\"\n\n REPLACEMENTS = [\n # Added in Wagtail 2.0\n (re.compile(r\"\\bwagtail\\.wagtailcore\\b\"), \"wagtail\"),\n (re.compile(r\"\\bwagtail\\.wagtailadmin\\b\"), \"wagtail.admin\"),\n (re.compile(r\"\\bwagtail\\.wagtaildocs\\b\"), \"wagtail.documents\"),\n (re.compile(r\"\\bwagtail\\.wagtailembeds\\b\"), \"wagtail.embeds\"),\n (re.compile(r\"\\bwagtail\\.wagtailimages\\b\"), \"wagtail.images\"),\n (re.compile(r\"\\bwagtail\\.wagtailsearch\\b\"), \"wagtail.search\"),\n (re.compile(r\"\\bwagtail\\.wagtailsites\\b\"), \"wagtail.sites\"),\n (re.compile(r\"\\bwagtail\\.wagtailsnippets\\b\"), \"wagtail.snippets\"),\n (re.compile(r\"\\bwagtail\\.wagtailusers\\b\"), \"wagtail.users\"),\n (re.compile(r\"\\bwagtail\\.wagtailforms\\b\"), \"wagtail.contrib.forms\"),\n (re.compile(r\"\\bwagtail\\.wagtailredirects\\b\"), \"wagtail.contrib.redirects\"),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailfrontendcache\\b\"),\n \"wagtail.contrib.frontend_cache\",\n ),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailroutablepage\\b\"),\n \"wagtail.contrib.routable_page\",\n ),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailsearchpromotions\\b\"),\n \"wagtail.contrib.search_promotions\",\n ),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailsitemaps\\b\"),\n \"wagtail.contrib.sitemaps\",\n ),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.wagtailstyleguide\\b\"),\n \"wagtail.contrib.styleguide\",\n ),\n # Added in Wagtail 3.0\n (re.compile(r\"\\bwagtail\\.tests\\b\"), \"wagtail.test\"),\n (re.compile(r\"\\bwagtail\\.core\\.utils\\b\"), \"wagtail.coreutils\"),\n (re.compile(r\"\\bwagtail\\.core\\b\"), \"wagtail\"),\n (re.compile(r\"\\bwagtail\\.admin\\.edit_handlers\\b\"), \"wagtail.admin.panels\"),\n (\n re.compile(r\"\\bwagtail\\.contrib\\.forms\\.edit_handlers\\b\"),\n \"wagtail.contrib.forms.panels\",\n ),\n ]\n\n def add_arguments(self, parser):\n parser.add_argument(\"root_path\", nargs=\"?\", help=\"Path to your project's root\")\n parser.add_argument(\n \"--list\",\n action=\"store_true\",\n dest=\"list_files\",\n help=\"Show the list of files to change, without modifying them\",\n )\n parser.add_argument(\n \"--diff\",\n action=\"store_true\",\n help=\"Show the changes that would be made, without modifying the files\",\n )\n parser.add_argument(\n \"--ignore-dir\",\n action=\"append\",\n dest=\"ignored_dirs\",\n metavar=\"NAME\",\n help=\"Ignore files in this directory\",\n )\n parser.add_argument(\n \"--ignore-file\",\n action=\"append\",\n dest=\"ignored_patterns\",\n metavar=\"NAME\",\n help=\"Ignore files with this name (supports wildcards)\",\n )\n\n def run(\n self,\n 
root_path=None,\n list_files=False,\n diff=False,\n ignored_dirs=None,\n ignored_patterns=None,\n ):\n if root_path is None:\n root_path = os.getcwd()\n\n absolute_ignored_dirs = [\n os.path.abspath(dir_path) + os.sep for dir_path in (ignored_dirs or [])\n ]\n\n if ignored_patterns is None:\n ignored_patterns = []\n\n checked_file_count = 0\n changed_file_count = 0\n\n for (dirpath, dirnames, filenames) in os.walk(root_path):\n dirpath_with_slash = os.path.abspath(dirpath) + os.sep\n if any(\n dirpath_with_slash.startswith(ignored_dir)\n for ignored_dir in absolute_ignored_dirs\n ):\n continue\n\n for filename in filenames:\n if not filename.lower().endswith(\".py\"):\n continue\n\n if any(\n fnmatch.fnmatch(filename, pattern) for pattern in ignored_patterns\n ):\n continue\n\n path = os.path.join(dirpath, filename)\n relative_path = os.path.relpath(path, start=root_path)\n checked_file_count += 1\n\n if diff:\n change_count = self._show_diff(path, relative_path=relative_path)\n else:\n if list_files:\n change_count = self._count_changes(path)\n else: # actually update\n change_count = self._rewrite_file(path)\n if change_count:\n print(\n \"%s - %d change%s\"\n % (relative_path, change_count, pluralize(change_count))\n ) # NOQA\n\n if change_count:\n changed_file_count += 1\n\n if diff or list_files:\n print(\n \"\\nChecked %d .py file%s, %d file%s to update.\"\n % (\n checked_file_count,\n pluralize(checked_file_count),\n changed_file_count,\n pluralize(changed_file_count),\n )\n ) # NOQA\n else:\n print(\n \"\\nChecked %d .py file%s, %d file%s updated.\"\n % (\n checked_file_count,\n pluralize(checked_file_count),\n changed_file_count,\n pluralize(changed_file_count),\n )\n ) # NOQA\n\n def _rewrite_line(self, line):\n for pattern, repl in self.REPLACEMENTS:\n line = re.sub(pattern, repl, line)\n return line\n\n def _show_diff(self, filename, relative_path=None):\n change_count = 0\n original = []\n updated = []\n\n with open(filename) as f:\n for original_line in f:\n original.append(original_line)\n\n line = self._rewrite_line(original_line)\n updated.append(line)\n if line != original_line:\n change_count += 1\n\n if change_count:\n relative_path = relative_path or filename\n\n sys.stdout.writelines(\n unified_diff(\n original,\n updated,\n fromfile=\"%s:before\" % relative_path,\n tofile=\"%s:after\" % relative_path,\n )\n )\n\n return change_count\n\n def _count_changes(self, filename):\n change_count = 0\n\n with open(filename) as f:\n for original_line in f:\n line = self._rewrite_line(original_line)\n if line != original_line:\n change_count += 1\n\n return change_count\n\n def _rewrite_file(self, filename):\n change_count = 0\n\n with fileinput.FileInput(filename, inplace=True) as f:\n for original_line in f:\n line = self._rewrite_line(original_line)\n print(line, end=\"\") # NOQA\n if line != original_line:\n change_count += 1\n\n return change_count\n\n\nclass Version(Command):\n description = \"List which version of Wagtail you are using\"\n\n def run(self):\n import wagtail\n\n version = wagtail.get_version(wagtail.VERSION)\n\n print(\"You are using Wagtail %(version)s\" % {\"version\": version})\n\n\nCOMMANDS = {\n \"start\": CreateProject(),\n \"updatemodulepaths\": UpdateModulePaths(),\n \"--version\": Version(),\n}\n\n\ndef prog_name():\n return os.path.basename(sys.argv[0])\n\n\ndef help_index():\n print(\n \"Type '%s help <subcommand>' for help on a specific subcommand.\\n\" % prog_name()\n ) # NOQA\n print(\"Available subcommands:\\n\") # NOQA\n for name, cmd in 
sorted(COMMANDS.items()):\n print(\" %s%s\" % (name.ljust(20), cmd.description)) # NOQA\n\n\ndef unknown_command(command):\n print(\"Unknown command: '%s'\" % command) # NOQA\n print(\"Type '%s help' for usage.\" % prog_name()) # NOQA\n sys.exit(1)\n\n\ndef main():\n try:\n command_name = sys.argv[1]\n except IndexError:\n help_index()\n return\n\n if command_name == \"help\":\n try:\n help_command_name = sys.argv[2]\n except IndexError:\n help_index()\n return\n\n try:\n command = COMMANDS[help_command_name]\n except KeyError:\n unknown_command(help_command_name)\n return\n\n command.print_help(help_command_name)\n return\n\n try:\n command = COMMANDS[command_name]\n except KeyError:\n unknown_command(command_name)\n return\n\n command.execute(sys.argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "wagtail/bin/wagtail.py"}]} | 4,092 | 157 |
gh_patches_debug_8313 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-933 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add "epoch" options to basic templates
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Add "epochs" option to parser of 'basic_examples/lightning_module_template.py'
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
Thanks to 'basic_examples/lightning_module_template.py', I could build my deep learning model. Some beginners like me might build their model from this basic template. However, there are no options to manipulate epochs. I just thought that what people use often should be included in the basic template, so I uploaded my issue.
### Pitch
<!-- A clear and concise description of what you want to happen. -->
I suggest that the basic template include an "epochs" option.
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
Add "epoch" options to parser of 'basic_examples/lightning_module_template.py'
```python
parser.add_argument('--epochs', default=10, type=int, metavar='N',
help='number of total epochs to run')
trainer = pl.Trainer(max_epochs=hparams.epochs)
```
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
I am really enjoying PytorchLightning framework. Thanks 😄
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pl_examples/basic_examples/lightning_module_template.py`
Content:
```
1 """
2 Example template for defining a system
3 """
4 import logging as log
5 import os
6 from argparse import ArgumentParser
7 from collections import OrderedDict
8
9 import torch
10 import torch.nn as nn
11 import torch.nn.functional as F
12 import torchvision.transforms as transforms
13 from torch import optim
14 from torch.utils.data import DataLoader
15 from torch.utils.data.distributed import DistributedSampler
16 from torchvision.datasets import MNIST
17
18 import pytorch_lightning as pl
19
20
21 class LightningTemplateModel(pl.LightningModule):
22 """
23 Sample model to show how to define a template
24 """
25
26 def __init__(self, hparams):
27 """
28 Pass in parsed HyperOptArgumentParser to the model
29 :param hparams:
30 """
31 # init superclass
32 super(LightningTemplateModel, self).__init__()
33 self.hparams = hparams
34
35 self.batch_size = hparams.batch_size
36
37 # if you specify an example input, the summary will show input/output for each layer
38 self.example_input_array = torch.rand(5, 28 * 28)
39
40 # build model
41 self.__build_model()
42
43 # ---------------------
44 # MODEL SETUP
45 # ---------------------
46 def __build_model(self):
47 """
48 Layout model
49 :return:
50 """
51 self.c_d1 = nn.Linear(in_features=self.hparams.in_features,
52 out_features=self.hparams.hidden_dim)
53 self.c_d1_bn = nn.BatchNorm1d(self.hparams.hidden_dim)
54 self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)
55
56 self.c_d2 = nn.Linear(in_features=self.hparams.hidden_dim,
57 out_features=self.hparams.out_features)
58
59 # ---------------------
60 # TRAINING
61 # ---------------------
62 def forward(self, x):
63 """
64 No special modification required for lightning, define as you normally would
65 :param x:
66 :return:
67 """
68
69 x = self.c_d1(x)
70 x = torch.tanh(x)
71 x = self.c_d1_bn(x)
72 x = self.c_d1_drop(x)
73
74 x = self.c_d2(x)
75 logits = F.log_softmax(x, dim=1)
76
77 return logits
78
79 def loss(self, labels, logits):
80 nll = F.nll_loss(logits, labels)
81 return nll
82
83 def training_step(self, batch, batch_idx):
84 """
85 Lightning calls this inside the training loop
86 :param batch:
87 :return:
88 """
89 # forward pass
90 x, y = batch
91 x = x.view(x.size(0), -1)
92
93 y_hat = self.forward(x)
94
95 # calculate loss
96 loss_val = self.loss(y, y_hat)
97
98 # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
99 if self.trainer.use_dp or self.trainer.use_ddp2:
100 loss_val = loss_val.unsqueeze(0)
101
102 tqdm_dict = {'train_loss': loss_val}
103 output = OrderedDict({
104 'loss': loss_val,
105 'progress_bar': tqdm_dict,
106 'log': tqdm_dict
107 })
108
109 # can also return just a scalar instead of a dict (return loss_val)
110 return output
111
112 def validation_step(self, batch, batch_idx):
113 """
114 Lightning calls this inside the validation loop
115 :param batch:
116 :return:
117 """
118 x, y = batch
119 x = x.view(x.size(0), -1)
120 y_hat = self.forward(x)
121
122 loss_val = self.loss(y, y_hat)
123
124 # acc
125 labels_hat = torch.argmax(y_hat, dim=1)
126 val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)
127 val_acc = torch.tensor(val_acc)
128
129 if self.on_gpu:
130 val_acc = val_acc.cuda(loss_val.device.index)
131
132 # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
133 if self.trainer.use_dp or self.trainer.use_ddp2:
134 loss_val = loss_val.unsqueeze(0)
135 val_acc = val_acc.unsqueeze(0)
136
137 output = OrderedDict({
138 'val_loss': loss_val,
139 'val_acc': val_acc,
140 })
141
142 # can also return just a scalar instead of a dict (return loss_val)
143 return output
144
145 def validation_end(self, outputs):
146 """
147 Called at the end of validation to aggregate outputs
148 :param outputs: list of individual outputs of each validation step
149 :return:
150 """
151 # if returned a scalar from validation_step, outputs is a list of tensor scalars
152 # we return just the average in this case (if we want)
153 # return torch.stack(outputs).mean()
154
155 val_loss_mean = 0
156 val_acc_mean = 0
157 for output in outputs:
158 val_loss = output['val_loss']
159
160 # reduce manually when using dp
161 if self.trainer.use_dp or self.trainer.use_ddp2:
162 val_loss = torch.mean(val_loss)
163 val_loss_mean += val_loss
164
165 # reduce manually when using dp
166 val_acc = output['val_acc']
167 if self.trainer.use_dp or self.trainer.use_ddp2:
168 val_acc = torch.mean(val_acc)
169
170 val_acc_mean += val_acc
171
172 val_loss_mean /= len(outputs)
173 val_acc_mean /= len(outputs)
174 tqdm_dict = {'val_loss': val_loss_mean, 'val_acc': val_acc_mean}
175 result = {'progress_bar': tqdm_dict, 'log': tqdm_dict, 'val_loss': val_loss_mean}
176 return result
177
178 # ---------------------
179 # TRAINING SETUP
180 # ---------------------
181 def configure_optimizers(self):
182 """
183 return whatever optimizers we want here
184 :return: list of optimizers
185 """
186 optimizer = optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
187 scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
188 return [optimizer], [scheduler]
189
190 def __dataloader(self, train):
191 # init data generators
192 transform = transforms.Compose([transforms.ToTensor(),
193 transforms.Normalize((0.5,), (1.0,))])
194 dataset = MNIST(root=self.hparams.data_root, train=train,
195 transform=transform, download=False)
196
197 # when using multi-node (ddp) we need to add the datasampler
198 batch_size = self.hparams.batch_size
199
200 loader = DataLoader(
201 dataset=dataset,
202 batch_size=batch_size,
203 num_workers=0
204 )
205
206 return loader
207
208 def prepare_data(self):
209 transform = transforms.Compose([transforms.ToTensor(),
210 transforms.Normalize((0.5,), (1.0,))])
211 dataset = MNIST(root=self.hparams.data_root, train=True,
212 transform=transform, download=True)
213 dataset = MNIST(root=self.hparams.data_root, train=False,
214 transform=transform, download=True)
215
216 def train_dataloader(self):
217 log.info('Training data loader called.')
218 return self.__dataloader(train=True)
219
220 def val_dataloader(self):
221 log.info('Validation data loader called.')
222 return self.__dataloader(train=False)
223
224 def test_dataloader(self):
225 log.info('Test data loader called.')
226 return self.__dataloader(train=False)
227
228 @staticmethod
229 def add_model_specific_args(parent_parser, root_dir): # pragma: no cover
230 """
231 Parameters you define here will be available to your model through self.hparams
232 :param parent_parser:
233 :param root_dir:
234 :return:
235 """
236 parser = ArgumentParser(parents=[parent_parser])
237
238 # param overwrites
239 # parser.set_defaults(gradient_clip_val=5.0)
240
241 # network params
242 parser.add_argument('--in_features', default=28 * 28, type=int)
243 parser.add_argument('--out_features', default=10, type=int)
244 # use 500 for CPU, 50000 for GPU to see speed difference
245 parser.add_argument('--hidden_dim', default=50000, type=int)
246 parser.add_argument('--drop_prob', default=0.2, type=float)
247 parser.add_argument('--learning_rate', default=0.001, type=float)
248
249 # data
250 parser.add_argument('--data_root', default=os.path.join(root_dir, 'mnist'), type=str)
251
252 # training params (opt)
253 parser.add_argument('--optimizer_name', default='adam', type=str)
254 parser.add_argument('--batch_size', default=64, type=int)
255 return parser
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pl_examples/basic_examples/lightning_module_template.py b/pl_examples/basic_examples/lightning_module_template.py
--- a/pl_examples/basic_examples/lightning_module_template.py
+++ b/pl_examples/basic_examples/lightning_module_template.py
@@ -250,6 +250,7 @@
parser.add_argument('--data_root', default=os.path.join(root_dir, 'mnist'), type=str)
# training params (opt)
+ parser.add_argument('--epochs', default=20, type=int)
parser.add_argument('--optimizer_name', default='adam', type=str)
parser.add_argument('--batch_size', default=64, type=int)
return parser
| {"golden_diff": "diff --git a/pl_examples/basic_examples/lightning_module_template.py b/pl_examples/basic_examples/lightning_module_template.py\n--- a/pl_examples/basic_examples/lightning_module_template.py\n+++ b/pl_examples/basic_examples/lightning_module_template.py\n@@ -250,6 +250,7 @@\n parser.add_argument('--data_root', default=os.path.join(root_dir, 'mnist'), type=str)\n \n # training params (opt)\n+ parser.add_argument('--epochs', default=20, type=int)\n parser.add_argument('--optimizer_name', default='adam', type=str)\n parser.add_argument('--batch_size', default=64, type=int)\n return parser\n", "issue": "Add \"epoch\" options to basic templates\n## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\nAdd \"epochs\" option to parser of 'basic_examples/lightning_module_template.py'\r\n\r\n### Motivation\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\nThanks to 'basic_examples/lightning_module_template.py', I could build my deep learning model. Some beginners like me might build their model from this basic template. However, there are no options to manipulate epochs. I just thought that what people use often should be included in the basic template, so I uploaded my issue.\r\n\r\n### Pitch\r\n<!-- A clear and concise description of what you want to happen. -->\r\nI suggest that the basic template includes \"epoch\" option in the basic template.\r\n\r\n### Alternatives\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\r\nAdd \"epoch\" options to parser of 'basic_examples/lightning_module_template.py'\r\n```python\r\nparser.add_argument('--epochs', default=10, type=int, metavar='N',\r\n help='number of total epochs to run')\r\n\r\ntrainer = pl.Trainer(max_epochs=hparams.epochs)\r\n```\r\n\r\n### Additional context\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\nI am really enjoying PytorchLightning framework. 
Thanks \ud83d\ude04 \r\n\n", "before_files": [{"content": "\"\"\"\nExample template for defining a system\n\"\"\"\nimport logging as log\nimport os\nfrom argparse import ArgumentParser\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.transforms as transforms\nfrom torch import optim\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data.distributed import DistributedSampler\nfrom torchvision.datasets import MNIST\n\nimport pytorch_lightning as pl\n\n\nclass LightningTemplateModel(pl.LightningModule):\n \"\"\"\n Sample model to show how to define a template\n \"\"\"\n\n def __init__(self, hparams):\n \"\"\"\n Pass in parsed HyperOptArgumentParser to the model\n :param hparams:\n \"\"\"\n # init superclass\n super(LightningTemplateModel, self).__init__()\n self.hparams = hparams\n\n self.batch_size = hparams.batch_size\n\n # if you specify an example input, the summary will show input/output for each layer\n self.example_input_array = torch.rand(5, 28 * 28)\n\n # build model\n self.__build_model()\n\n # ---------------------\n # MODEL SETUP\n # ---------------------\n def __build_model(self):\n \"\"\"\n Layout model\n :return:\n \"\"\"\n self.c_d1 = nn.Linear(in_features=self.hparams.in_features,\n out_features=self.hparams.hidden_dim)\n self.c_d1_bn = nn.BatchNorm1d(self.hparams.hidden_dim)\n self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)\n\n self.c_d2 = nn.Linear(in_features=self.hparams.hidden_dim,\n out_features=self.hparams.out_features)\n\n # ---------------------\n # TRAINING\n # ---------------------\n def forward(self, x):\n \"\"\"\n No special modification required for lightning, define as you normally would\n :param x:\n :return:\n \"\"\"\n\n x = self.c_d1(x)\n x = torch.tanh(x)\n x = self.c_d1_bn(x)\n x = self.c_d1_drop(x)\n\n x = self.c_d2(x)\n logits = F.log_softmax(x, dim=1)\n\n return logits\n\n def loss(self, labels, logits):\n nll = F.nll_loss(logits, labels)\n return nll\n\n def training_step(self, batch, batch_idx):\n \"\"\"\n Lightning calls this inside the training loop\n :param batch:\n :return:\n \"\"\"\n # forward pass\n x, y = batch\n x = x.view(x.size(0), -1)\n\n y_hat = self.forward(x)\n\n # calculate loss\n loss_val = self.loss(y, y_hat)\n\n # in DP mode (default) make sure if result is scalar, there's another dim in the beginning\n if self.trainer.use_dp or self.trainer.use_ddp2:\n loss_val = loss_val.unsqueeze(0)\n\n tqdm_dict = {'train_loss': loss_val}\n output = OrderedDict({\n 'loss': loss_val,\n 'progress_bar': tqdm_dict,\n 'log': tqdm_dict\n })\n\n # can also return just a scalar instead of a dict (return loss_val)\n return output\n\n def validation_step(self, batch, batch_idx):\n \"\"\"\n Lightning calls this inside the validation loop\n :param batch:\n :return:\n \"\"\"\n x, y = batch\n x = x.view(x.size(0), -1)\n y_hat = self.forward(x)\n\n loss_val = self.loss(y, y_hat)\n\n # acc\n labels_hat = torch.argmax(y_hat, dim=1)\n val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)\n val_acc = torch.tensor(val_acc)\n\n if self.on_gpu:\n val_acc = val_acc.cuda(loss_val.device.index)\n\n # in DP mode (default) make sure if result is scalar, there's another dim in the beginning\n if self.trainer.use_dp or self.trainer.use_ddp2:\n loss_val = loss_val.unsqueeze(0)\n val_acc = val_acc.unsqueeze(0)\n\n output = OrderedDict({\n 'val_loss': loss_val,\n 'val_acc': val_acc,\n })\n\n # can also return just a scalar instead of a dict (return loss_val)\n return output\n\n 
def validation_end(self, outputs):\n \"\"\"\n Called at the end of validation to aggregate outputs\n :param outputs: list of individual outputs of each validation step\n :return:\n \"\"\"\n # if returned a scalar from validation_step, outputs is a list of tensor scalars\n # we return just the average in this case (if we want)\n # return torch.stack(outputs).mean()\n\n val_loss_mean = 0\n val_acc_mean = 0\n for output in outputs:\n val_loss = output['val_loss']\n\n # reduce manually when using dp\n if self.trainer.use_dp or self.trainer.use_ddp2:\n val_loss = torch.mean(val_loss)\n val_loss_mean += val_loss\n\n # reduce manually when using dp\n val_acc = output['val_acc']\n if self.trainer.use_dp or self.trainer.use_ddp2:\n val_acc = torch.mean(val_acc)\n\n val_acc_mean += val_acc\n\n val_loss_mean /= len(outputs)\n val_acc_mean /= len(outputs)\n tqdm_dict = {'val_loss': val_loss_mean, 'val_acc': val_acc_mean}\n result = {'progress_bar': tqdm_dict, 'log': tqdm_dict, 'val_loss': val_loss_mean}\n return result\n\n # ---------------------\n # TRAINING SETUP\n # ---------------------\n def configure_optimizers(self):\n \"\"\"\n return whatever optimizers we want here\n :return: list of optimizers\n \"\"\"\n optimizer = optim.Adam(self.parameters(), lr=self.hparams.learning_rate)\n scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)\n return [optimizer], [scheduler]\n\n def __dataloader(self, train):\n # init data generators\n transform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (1.0,))])\n dataset = MNIST(root=self.hparams.data_root, train=train,\n transform=transform, download=False)\n\n # when using multi-node (ddp) we need to add the datasampler\n batch_size = self.hparams.batch_size\n\n loader = DataLoader(\n dataset=dataset,\n batch_size=batch_size,\n num_workers=0\n )\n\n return loader\n\n def prepare_data(self):\n transform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (1.0,))])\n dataset = MNIST(root=self.hparams.data_root, train=True,\n transform=transform, download=True)\n dataset = MNIST(root=self.hparams.data_root, train=False,\n transform=transform, download=True)\n\n def train_dataloader(self):\n log.info('Training data loader called.')\n return self.__dataloader(train=True)\n\n def val_dataloader(self):\n log.info('Validation data loader called.')\n return self.__dataloader(train=False)\n\n def test_dataloader(self):\n log.info('Test data loader called.')\n return self.__dataloader(train=False)\n\n @staticmethod\n def add_model_specific_args(parent_parser, root_dir): # pragma: no cover\n \"\"\"\n Parameters you define here will be available to your model through self.hparams\n :param parent_parser:\n :param root_dir:\n :return:\n \"\"\"\n parser = ArgumentParser(parents=[parent_parser])\n\n # param overwrites\n # parser.set_defaults(gradient_clip_val=5.0)\n\n # network params\n parser.add_argument('--in_features', default=28 * 28, type=int)\n parser.add_argument('--out_features', default=10, type=int)\n # use 500 for CPU, 50000 for GPU to see speed difference\n parser.add_argument('--hidden_dim', default=50000, type=int)\n parser.add_argument('--drop_prob', default=0.2, type=float)\n parser.add_argument('--learning_rate', default=0.001, type=float)\n\n # data\n parser.add_argument('--data_root', default=os.path.join(root_dir, 'mnist'), type=str)\n\n # training params (opt)\n parser.add_argument('--optimizer_name', default='adam', type=str)\n parser.add_argument('--batch_size', default=64, type=int)\n 
return parser\n", "path": "pl_examples/basic_examples/lightning_module_template.py"}], "after_files": [{"content": "\"\"\"\nExample template for defining a system\n\"\"\"\nimport logging as log\nimport os\nfrom argparse import ArgumentParser\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.transforms as transforms\nfrom torch import optim\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data.distributed import DistributedSampler\nfrom torchvision.datasets import MNIST\n\nimport pytorch_lightning as pl\n\n\nclass LightningTemplateModel(pl.LightningModule):\n \"\"\"\n Sample model to show how to define a template\n \"\"\"\n\n def __init__(self, hparams):\n \"\"\"\n Pass in parsed HyperOptArgumentParser to the model\n :param hparams:\n \"\"\"\n # init superclass\n super(LightningTemplateModel, self).__init__()\n self.hparams = hparams\n\n self.batch_size = hparams.batch_size\n\n # if you specify an example input, the summary will show input/output for each layer\n self.example_input_array = torch.rand(5, 28 * 28)\n\n # build model\n self.__build_model()\n\n # ---------------------\n # MODEL SETUP\n # ---------------------\n def __build_model(self):\n \"\"\"\n Layout model\n :return:\n \"\"\"\n self.c_d1 = nn.Linear(in_features=self.hparams.in_features,\n out_features=self.hparams.hidden_dim)\n self.c_d1_bn = nn.BatchNorm1d(self.hparams.hidden_dim)\n self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)\n\n self.c_d2 = nn.Linear(in_features=self.hparams.hidden_dim,\n out_features=self.hparams.out_features)\n\n # ---------------------\n # TRAINING\n # ---------------------\n def forward(self, x):\n \"\"\"\n No special modification required for lightning, define as you normally would\n :param x:\n :return:\n \"\"\"\n\n x = self.c_d1(x)\n x = torch.tanh(x)\n x = self.c_d1_bn(x)\n x = self.c_d1_drop(x)\n\n x = self.c_d2(x)\n logits = F.log_softmax(x, dim=1)\n\n return logits\n\n def loss(self, labels, logits):\n nll = F.nll_loss(logits, labels)\n return nll\n\n def training_step(self, batch, batch_idx):\n \"\"\"\n Lightning calls this inside the training loop\n :param batch:\n :return:\n \"\"\"\n # forward pass\n x, y = batch\n x = x.view(x.size(0), -1)\n\n y_hat = self.forward(x)\n\n # calculate loss\n loss_val = self.loss(y, y_hat)\n\n # in DP mode (default) make sure if result is scalar, there's another dim in the beginning\n if self.trainer.use_dp or self.trainer.use_ddp2:\n loss_val = loss_val.unsqueeze(0)\n\n tqdm_dict = {'train_loss': loss_val}\n output = OrderedDict({\n 'loss': loss_val,\n 'progress_bar': tqdm_dict,\n 'log': tqdm_dict\n })\n\n # can also return just a scalar instead of a dict (return loss_val)\n return output\n\n def validation_step(self, batch, batch_idx):\n \"\"\"\n Lightning calls this inside the validation loop\n :param batch:\n :return:\n \"\"\"\n x, y = batch\n x = x.view(x.size(0), -1)\n y_hat = self.forward(x)\n\n loss_val = self.loss(y, y_hat)\n\n # acc\n labels_hat = torch.argmax(y_hat, dim=1)\n val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)\n val_acc = torch.tensor(val_acc)\n\n if self.on_gpu:\n val_acc = val_acc.cuda(loss_val.device.index)\n\n # in DP mode (default) make sure if result is scalar, there's another dim in the beginning\n if self.trainer.use_dp or self.trainer.use_ddp2:\n loss_val = loss_val.unsqueeze(0)\n val_acc = val_acc.unsqueeze(0)\n\n output = OrderedDict({\n 'val_loss': loss_val,\n 'val_acc': val_acc,\n })\n\n # can also return just a scalar 
instead of a dict (return loss_val)\n return output\n\n def validation_end(self, outputs):\n \"\"\"\n Called at the end of validation to aggregate outputs\n :param outputs: list of individual outputs of each validation step\n :return:\n \"\"\"\n # if returned a scalar from validation_step, outputs is a list of tensor scalars\n # we return just the average in this case (if we want)\n # return torch.stack(outputs).mean()\n\n val_loss_mean = 0\n val_acc_mean = 0\n for output in outputs:\n val_loss = output['val_loss']\n\n # reduce manually when using dp\n if self.trainer.use_dp or self.trainer.use_ddp2:\n val_loss = torch.mean(val_loss)\n val_loss_mean += val_loss\n\n # reduce manually when using dp\n val_acc = output['val_acc']\n if self.trainer.use_dp or self.trainer.use_ddp2:\n val_acc = torch.mean(val_acc)\n\n val_acc_mean += val_acc\n\n val_loss_mean /= len(outputs)\n val_acc_mean /= len(outputs)\n tqdm_dict = {'val_loss': val_loss_mean, 'val_acc': val_acc_mean}\n result = {'progress_bar': tqdm_dict, 'log': tqdm_dict, 'val_loss': val_loss_mean}\n return result\n\n # ---------------------\n # TRAINING SETUP\n # ---------------------\n def configure_optimizers(self):\n \"\"\"\n return whatever optimizers we want here\n :return: list of optimizers\n \"\"\"\n optimizer = optim.Adam(self.parameters(), lr=self.hparams.learning_rate)\n scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)\n return [optimizer], [scheduler]\n\n def __dataloader(self, train):\n # init data generators\n transform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (1.0,))])\n dataset = MNIST(root=self.hparams.data_root, train=train,\n transform=transform, download=False)\n\n # when using multi-node (ddp) we need to add the datasampler\n batch_size = self.hparams.batch_size\n\n loader = DataLoader(\n dataset=dataset,\n batch_size=batch_size,\n num_workers=0\n )\n\n return loader\n\n def prepare_data(self):\n transform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (1.0,))])\n dataset = MNIST(root=self.hparams.data_root, train=True,\n transform=transform, download=True)\n dataset = MNIST(root=self.hparams.data_root, train=False,\n transform=transform, download=True)\n\n def train_dataloader(self):\n log.info('Training data loader called.')\n return self.__dataloader(train=True)\n\n def val_dataloader(self):\n log.info('Validation data loader called.')\n return self.__dataloader(train=False)\n\n def test_dataloader(self):\n log.info('Test data loader called.')\n return self.__dataloader(train=False)\n\n @staticmethod\n def add_model_specific_args(parent_parser, root_dir): # pragma: no cover\n \"\"\"\n Parameters you define here will be available to your model through self.hparams\n :param parent_parser:\n :param root_dir:\n :return:\n \"\"\"\n parser = ArgumentParser(parents=[parent_parser])\n\n # param overwrites\n # parser.set_defaults(gradient_clip_val=5.0)\n\n # network params\n parser.add_argument('--in_features', default=28 * 28, type=int)\n parser.add_argument('--out_features', default=10, type=int)\n # use 500 for CPU, 50000 for GPU to see speed difference\n parser.add_argument('--hidden_dim', default=50000, type=int)\n parser.add_argument('--drop_prob', default=0.2, type=float)\n parser.add_argument('--learning_rate', default=0.001, type=float)\n\n # data\n parser.add_argument('--data_root', default=os.path.join(root_dir, 'mnist'), type=str)\n\n # training params (opt)\n parser.add_argument('--epochs', default=20, type=int)\n 
parser.add_argument('--optimizer_name', default='adam', type=str)\n parser.add_argument('--batch_size', default=64, type=int)\n return parser\n", "path": "pl_examples/basic_examples/lightning_module_template.py"}]} | 3,093 | 143 |
gh_patches_debug_32480 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-549 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IterableDataset breaks 1.1 compatibility
A recently introduced feature unfortunately breaks compatibility with Pytorch 1.1.0. 
**Describe the bug**
IterableDataset support, introduced in [issue 323](https://github.com/williamFalcon/pytorch-lightning/issues/323), requires Pytorch 1.2.0+.
**To Reproduce**
In a python environment with Pytorch 1.1.0 do:
import pytorch_lightning
**Expected behavior**
Compatibility with Pytorch 1.1.0. I'm filing it as a bug report rather than a docs fix since the dependency on 1.2.0+ introduced by [issue 323](https://github.com/williamFalcon/pytorch-lightning/issues/323) doesn't seem to be intentional.
--- END ISSUE ---
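For orientation before the file listing: the root cause is simply that `from torch.utils.data import IterableDataset` runs unconditionally at import time, and that symbol only exists from PyTorch 1.2 onwards. A minimal sketch of a backward-compatible guard is shown here; the flag handling and the helper name `_wraps_iterable_dataset` are invented for illustration and are not part of the library:

```python
try:
    from torch.utils.data import IterableDataset  # added in PyTorch 1.2
except ImportError:  # PyTorch 1.1 and earlier: the class does not exist yet
    IterableDataset = None


def _wraps_iterable_dataset(dataloader):
    # Only attempt the isinstance check when the class could actually be imported,
    # so importing this module no longer fails on PyTorch 1.1.
    return IterableDataset is not None and isinstance(dataloader.dataset, IterableDataset)
```

Any fix along these lines also has to guard every later `isinstance(..., IterableDataset)` check, which is what the patch further below does with an `EXIST_ITER_DATASET` flag.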
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/trainer/data_loading_mixin.py`
Content:
```
1 import warnings
2
3 import torch.distributed as dist
4 from torch.utils.data import IterableDataset
5 from torch.utils.data.distributed import DistributedSampler
6
7 from pytorch_lightning.utilities.debugging import MisconfigurationException
8
9 try:
10 from apex import amp
11
12 APEX_AVAILABLE = True
13 except ImportError:
14 APEX_AVAILABLE = False
15
16
17 class TrainerDataLoadingMixin(object):
18 def init_train_dataloader(self, model):
19 """
20 Dataloaders are provided by the model
21 :param model:
22 :return:
23 """
24 self.get_train_dataloader = model.train_dataloader
25
26 # determine number of training batches
27 if isinstance(self.get_train_dataloader().dataset, IterableDataset):
28 self.nb_training_batches = float('inf')
29 else:
30 self.nb_training_batches = len(self.get_train_dataloader())
31 self.nb_training_batches = int(self.nb_training_batches * self.train_percent_check)
32
33 # determine when to check validation
34 # if int passed in, val checks that often
35 # otherwise, it checks in [0, 1.0] % range of a training epoch
36 if isinstance(self.val_check_interval, int):
37 self.val_check_batch = self.val_check_interval
38 else:
39 self.val_check_batch = int(self.nb_training_batches * self.val_check_interval)
40 self.val_check_batch = max(1, self.val_check_batch)
41
42 on_ddp = self.use_ddp or self.use_ddp2
43 if on_ddp and not isinstance(self.get_train_dataloader().sampler, DistributedSampler):
44 msg = """
45 You're using multiple gpus and multiple nodes without using a DistributedSampler
46 to assign a subset of your data to each process. To silence this warning, pass a
47 DistributedSampler to your DataLoader.
48
49 ie: this:
50 dataset = myDataset()
51 dataloader = Dataloader(dataset)
52
53 becomes:
54 dataset = myDataset()
55 dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
56 dataloader = Dataloader(dataset, sampler=dist_sampler)
57
58 If you want each process to load the full dataset, ignore this warning.
59 """
60 if msg not in self.shown_warnings and self.proc_rank == 0:
61 self.shown_warnings.add(msg)
62 warnings.warn(msg)
63
64 def init_val_dataloader(self, model):
65 """
66 Dataloaders are provided by the model
67 :param model:
68 :return:
69 """
70 self.get_val_dataloaders = model.val_dataloader
71
72 # determine number of validation batches
73 # val datasets could be none, 1 or 2+
74 if self.get_val_dataloaders() is not None:
75 self.nb_val_batches = sum(len(dataloader) for dataloader in self.get_val_dataloaders())
76 self.nb_val_batches = int(self.nb_val_batches * self.val_percent_check)
77 self.nb_val_batches = max(1, self.nb_val_batches)
78
79 on_ddp = self.use_ddp or self.use_ddp2
80 if on_ddp and self.get_val_dataloaders() is not None:
81 for dataloader in self.get_val_dataloaders():
82 if not isinstance(dataloader.sampler, DistributedSampler):
83 msg = """
84 Your val_dataloader(s) don't use DistributedSampler.
85
86 You're using multiple gpus and multiple nodes without using a
87 DistributedSampler to assign a subset of your data to each process.
88 To silence this warning, pass a DistributedSampler to your DataLoader.
89
90 ie: this:
91 dataset = myDataset()
92 dataloader = Dataloader(dataset)
93
94 becomes:
95 dataset = myDataset()
96 dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
97 dataloader = Dataloader(dataset, sampler=dist_sampler)
98
99 If you want each process to load the full dataset, ignore this warning.
100 """
101 if msg not in self.shown_warnings and self.proc_rank == 0:
102 self.shown_warnings.add(msg)
103 warnings.warn(msg)
104 break
105
106 def init_test_dataloader(self, model):
107 """
108 Dataloaders are provided by the model
109 :param model:
110 :return:
111 """
112
113 self.get_test_dataloaders = model.test_dataloader
114
115 # determine number of test batches
116 if self.get_test_dataloaders() is not None:
117 len_sum = sum(len(dataloader) for dataloader in self.get_test_dataloaders())
118 self.nb_test_batches = len_sum
119 self.nb_test_batches = int(self.nb_test_batches * self.test_percent_check)
120 self.nb_test_batches = max(1, self.nb_test_batches)
121
122 on_ddp = self.use_ddp or self.use_ddp2
123 if on_ddp and self.get_test_dataloaders() is not None:
124 for dataloader in self.get_test_dataloaders():
125 if not isinstance(dataloader.sampler, DistributedSampler):
126 msg = """
127 Your test_dataloader(s) don't use DistributedSampler.
128
129 You're using multiple gpus and multiple nodes without using a
130 DistributedSampler to assign a subset of your data to each process.
131 To silence this warning, pass a DistributedSampler to your DataLoader.
132
133 ie: this:
134 dataset = myDataset()
135 dataloader = Dataloader(dataset)
136
137 becomes:
138 dataset = myDataset()
139 dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
140 dataloader = Dataloader(dataset, sampler=dist_sampler)
141
142 If you want each process to load the full dataset, ignore this warning.
143 """
144 if msg not in self.shown_warnings and self.proc_rank == 0:
145 self.shown_warnings.add(msg)
146 warnings.warn(msg)
147 break
148
149 def get_dataloaders(self, model):
150 """
151 Dataloaders are provided by the model
152 :param model:
153 :return:
154 """
155
156 self.init_train_dataloader(model)
157 self.init_test_dataloader(model)
158 self.init_val_dataloader(model)
159
160 if self.use_ddp or self.use_ddp2:
161 # wait for all processes to catch up
162 dist.barrier()
163
164 # load each dataloader
165 self.get_train_dataloader()
166 self.get_test_dataloaders()
167 self.get_val_dataloaders()
168
169 # support IterableDataset for train data
170 self.is_iterable_train_dataloader = isinstance(self.get_train_dataloader().dataset, IterableDataset)
171 if self.is_iterable_train_dataloader and not isinstance(self.val_check_interval, int):
172 m = '''
173 When using an iterableDataset for train_dataloader,
174 Trainer(val_check_interval) must be an int.
175 An int k specifies checking validation every k training batches
176 '''
177 raise MisconfigurationException(m)
178
179 def determine_data_use_amount(self, train_percent_check, val_percent_check,
180 test_percent_check, overfit_pct):
181 """
182 Use less data for debugging purposes
183 """
184 self.train_percent_check = train_percent_check
185 self.val_percent_check = val_percent_check
186 self.test_percent_check = test_percent_check
187 if overfit_pct > 0:
188 self.train_percent_check = overfit_pct
189 self.val_percent_check = overfit_pct
190 self.test_percent_check = overfit_pct
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_lightning/trainer/data_loading_mixin.py b/pytorch_lightning/trainer/data_loading_mixin.py
--- a/pytorch_lightning/trainer/data_loading_mixin.py
+++ b/pytorch_lightning/trainer/data_loading_mixin.py
@@ -1,7 +1,17 @@
import warnings
import torch.distributed as dist
-from torch.utils.data import IterableDataset
+try:
+ # loading for pyTorch 1.3
+ from torch.utils.data import IterableDataset
+except ImportError:
+ # loading for pyTorch 1.1
+ import torch
+ warnings.warn('Your version of pyTorch %s does not support `IterableDataset`,'
+ ' please upgrade to 1.2+' % torch.__version__, ImportWarning)
+ EXIST_ITER_DATASET = False
+else:
+ EXIST_ITER_DATASET = True
from torch.utils.data.distributed import DistributedSampler
from pytorch_lightning.utilities.debugging import MisconfigurationException
@@ -24,7 +34,7 @@
self.get_train_dataloader = model.train_dataloader
# determine number of training batches
- if isinstance(self.get_train_dataloader().dataset, IterableDataset):
+ if EXIST_ITER_DATASET and isinstance(self.get_train_dataloader().dataset, IterableDataset):
self.nb_training_batches = float('inf')
else:
self.nb_training_batches = len(self.get_train_dataloader())
@@ -167,7 +177,8 @@
self.get_val_dataloaders()
# support IterableDataset for train data
- self.is_iterable_train_dataloader = isinstance(self.get_train_dataloader().dataset, IterableDataset)
+ self.is_iterable_train_dataloader = (
+ EXIST_ITER_DATASET and isinstance(self.get_train_dataloader().dataset, IterableDataset))
if self.is_iterable_train_dataloader and not isinstance(self.val_check_interval, int):
m = '''
When using an iterableDataset for train_dataloader,
| {"golden_diff": "diff --git a/pytorch_lightning/trainer/data_loading_mixin.py b/pytorch_lightning/trainer/data_loading_mixin.py\n--- a/pytorch_lightning/trainer/data_loading_mixin.py\n+++ b/pytorch_lightning/trainer/data_loading_mixin.py\n@@ -1,7 +1,17 @@\n import warnings\n \n import torch.distributed as dist\n-from torch.utils.data import IterableDataset\n+try:\n+ # loading for pyTorch 1.3\n+ from torch.utils.data import IterableDataset\n+except ImportError:\n+ # loading for pyTorch 1.1\n+ import torch\n+ warnings.warn('Your version of pyTorch %s does not support `IterableDataset`,'\n+ ' please upgrade to 1.2+' % torch.__version__, ImportWarning)\n+ EXIST_ITER_DATASET = False\n+else:\n+ EXIST_ITER_DATASET = True\n from torch.utils.data.distributed import DistributedSampler\n \n from pytorch_lightning.utilities.debugging import MisconfigurationException\n@@ -24,7 +34,7 @@\n self.get_train_dataloader = model.train_dataloader\n \n # determine number of training batches\n- if isinstance(self.get_train_dataloader().dataset, IterableDataset):\n+ if EXIST_ITER_DATASET and isinstance(self.get_train_dataloader().dataset, IterableDataset):\n self.nb_training_batches = float('inf')\n else:\n self.nb_training_batches = len(self.get_train_dataloader())\n@@ -167,7 +177,8 @@\n self.get_val_dataloaders()\n \n # support IterableDataset for train data\n- self.is_iterable_train_dataloader = isinstance(self.get_train_dataloader().dataset, IterableDataset)\n+ self.is_iterable_train_dataloader = (\n+ EXIST_ITER_DATASET and isinstance(self.get_train_dataloader().dataset, IterableDataset))\n if self.is_iterable_train_dataloader and not isinstance(self.val_check_interval, int):\n m = '''\n When using an iterableDataset for train_dataloader,\n", "issue": "IterableDataset breaks 1.1 compatibility\nA recently introduced feature unfortunately breaks compability with Pytorch 1.1.0. \r\n\r\n**Describe the bug**\r\nIterableDataset support, introduced in [issue 323](https://github.com/williamFalcon/pytorch-lightning/issues/323), requires Pytorch 1.2.0+. \r\n\r\n**To Reproduce**\r\nIn a python environment with Pytorch 1.1.0 do:\r\nimport pytorch_lightning\r\n\r\n**Expected behavior**\r\nCompatibility with Pytorch 1.1.0. 
I'm filing it as a bug report rather than a docs fix since the dependency on 1.2.0+ introduced by [issue 323](https://github.com/williamFalcon/pytorch-lightning/issues/323) doesn't seem to be intentional.\r\n\r\n\r\n\n", "before_files": [{"content": "import warnings\n\nimport torch.distributed as dist\nfrom torch.utils.data import IterableDataset\nfrom torch.utils.data.distributed import DistributedSampler\n\nfrom pytorch_lightning.utilities.debugging import MisconfigurationException\n\ntry:\n from apex import amp\n\n APEX_AVAILABLE = True\nexcept ImportError:\n APEX_AVAILABLE = False\n\n\nclass TrainerDataLoadingMixin(object):\n def init_train_dataloader(self, model):\n \"\"\"\n Dataloaders are provided by the model\n :param model:\n :return:\n \"\"\"\n self.get_train_dataloader = model.train_dataloader\n\n # determine number of training batches\n if isinstance(self.get_train_dataloader().dataset, IterableDataset):\n self.nb_training_batches = float('inf')\n else:\n self.nb_training_batches = len(self.get_train_dataloader())\n self.nb_training_batches = int(self.nb_training_batches * self.train_percent_check)\n\n # determine when to check validation\n # if int passed in, val checks that often\n # otherwise, it checks in [0, 1.0] % range of a training epoch\n if isinstance(self.val_check_interval, int):\n self.val_check_batch = self.val_check_interval\n else:\n self.val_check_batch = int(self.nb_training_batches * self.val_check_interval)\n self.val_check_batch = max(1, self.val_check_batch)\n\n on_ddp = self.use_ddp or self.use_ddp2\n if on_ddp and not isinstance(self.get_train_dataloader().sampler, DistributedSampler):\n msg = \"\"\"\n You're using multiple gpus and multiple nodes without using a DistributedSampler\n to assign a subset of your data to each process. 
To silence this warning, pass a\n DistributedSampler to your DataLoader.\n\n ie: this:\n dataset = myDataset()\n dataloader = Dataloader(dataset)\n\n becomes:\n dataset = myDataset()\n dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\n dataloader = Dataloader(dataset, sampler=dist_sampler)\n\n If you want each process to load the full dataset, ignore this warning.\n \"\"\"\n if msg not in self.shown_warnings and self.proc_rank == 0:\n self.shown_warnings.add(msg)\n warnings.warn(msg)\n\n def init_val_dataloader(self, model):\n \"\"\"\n Dataloaders are provided by the model\n :param model:\n :return:\n \"\"\"\n self.get_val_dataloaders = model.val_dataloader\n\n # determine number of validation batches\n # val datasets could be none, 1 or 2+\n if self.get_val_dataloaders() is not None:\n self.nb_val_batches = sum(len(dataloader) for dataloader in self.get_val_dataloaders())\n self.nb_val_batches = int(self.nb_val_batches * self.val_percent_check)\n self.nb_val_batches = max(1, self.nb_val_batches)\n\n on_ddp = self.use_ddp or self.use_ddp2\n if on_ddp and self.get_val_dataloaders() is not None:\n for dataloader in self.get_val_dataloaders():\n if not isinstance(dataloader.sampler, DistributedSampler):\n msg = \"\"\"\n Your val_dataloader(s) don't use DistributedSampler.\n\n You're using multiple gpus and multiple nodes without using a\n DistributedSampler to assign a subset of your data to each process.\n To silence this warning, pass a DistributedSampler to your DataLoader.\n\n ie: this:\n dataset = myDataset()\n dataloader = Dataloader(dataset)\n\n becomes:\n dataset = myDataset()\n dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\n dataloader = Dataloader(dataset, sampler=dist_sampler)\n\n If you want each process to load the full dataset, ignore this warning.\n \"\"\"\n if msg not in self.shown_warnings and self.proc_rank == 0:\n self.shown_warnings.add(msg)\n warnings.warn(msg)\n break\n\n def init_test_dataloader(self, model):\n \"\"\"\n Dataloaders are provided by the model\n :param model:\n :return:\n \"\"\"\n\n self.get_test_dataloaders = model.test_dataloader\n\n # determine number of test batches\n if self.get_test_dataloaders() is not None:\n len_sum = sum(len(dataloader) for dataloader in self.get_test_dataloaders())\n self.nb_test_batches = len_sum\n self.nb_test_batches = int(self.nb_test_batches * self.test_percent_check)\n self.nb_test_batches = max(1, self.nb_test_batches)\n\n on_ddp = self.use_ddp or self.use_ddp2\n if on_ddp and self.get_test_dataloaders() is not None:\n for dataloader in self.get_test_dataloaders():\n if not isinstance(dataloader.sampler, DistributedSampler):\n msg = \"\"\"\n Your test_dataloader(s) don't use DistributedSampler.\n\n You're using multiple gpus and multiple nodes without using a\n DistributedSampler to assign a subset of your data to each process.\n To silence this warning, pass a DistributedSampler to your DataLoader.\n\n ie: this:\n dataset = myDataset()\n dataloader = Dataloader(dataset)\n\n becomes:\n dataset = myDataset()\n dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\n dataloader = Dataloader(dataset, sampler=dist_sampler)\n\n If you want each process to load the full dataset, ignore this warning.\n \"\"\"\n if msg not in self.shown_warnings and self.proc_rank == 0:\n self.shown_warnings.add(msg)\n warnings.warn(msg)\n break\n\n def get_dataloaders(self, model):\n \"\"\"\n Dataloaders are provided by the model\n :param model:\n :return:\n \"\"\"\n\n 
self.init_train_dataloader(model)\n self.init_test_dataloader(model)\n self.init_val_dataloader(model)\n\n if self.use_ddp or self.use_ddp2:\n # wait for all processes to catch up\n dist.barrier()\n\n # load each dataloader\n self.get_train_dataloader()\n self.get_test_dataloaders()\n self.get_val_dataloaders()\n\n # support IterableDataset for train data\n self.is_iterable_train_dataloader = isinstance(self.get_train_dataloader().dataset, IterableDataset)\n if self.is_iterable_train_dataloader and not isinstance(self.val_check_interval, int):\n m = '''\n When using an iterableDataset for train_dataloader,\n Trainer(val_check_interval) must be an int.\n An int k specifies checking validation every k training batches\n '''\n raise MisconfigurationException(m)\n\n def determine_data_use_amount(self, train_percent_check, val_percent_check,\n test_percent_check, overfit_pct):\n \"\"\"\n Use less data for debugging purposes\n \"\"\"\n self.train_percent_check = train_percent_check\n self.val_percent_check = val_percent_check\n self.test_percent_check = test_percent_check\n if overfit_pct > 0:\n self.train_percent_check = overfit_pct\n self.val_percent_check = overfit_pct\n self.test_percent_check = overfit_pct\n", "path": "pytorch_lightning/trainer/data_loading_mixin.py"}], "after_files": [{"content": "import warnings\n\nimport torch.distributed as dist\ntry:\n # loading for pyTorch 1.3\n from torch.utils.data import IterableDataset\nexcept ImportError:\n # loading for pyTorch 1.1\n import torch\n warnings.warn('Your version of pyTorch %s does not support `IterableDataset`,'\n ' please upgrade to 1.2+' % torch.__version__, ImportWarning)\n EXIST_ITER_DATASET = False\nelse:\n EXIST_ITER_DATASET = True\nfrom torch.utils.data.distributed import DistributedSampler\n\nfrom pytorch_lightning.utilities.debugging import MisconfigurationException\n\ntry:\n from apex import amp\n\n APEX_AVAILABLE = True\nexcept ImportError:\n APEX_AVAILABLE = False\n\n\nclass TrainerDataLoadingMixin(object):\n def init_train_dataloader(self, model):\n \"\"\"\n Dataloaders are provided by the model\n :param model:\n :return:\n \"\"\"\n self.get_train_dataloader = model.train_dataloader\n\n # determine number of training batches\n if EXIST_ITER_DATASET and isinstance(self.get_train_dataloader().dataset, IterableDataset):\n self.nb_training_batches = float('inf')\n else:\n self.nb_training_batches = len(self.get_train_dataloader())\n self.nb_training_batches = int(self.nb_training_batches * self.train_percent_check)\n\n # determine when to check validation\n # if int passed in, val checks that often\n # otherwise, it checks in [0, 1.0] % range of a training epoch\n if isinstance(self.val_check_interval, int):\n self.val_check_batch = self.val_check_interval\n else:\n self.val_check_batch = int(self.nb_training_batches * self.val_check_interval)\n self.val_check_batch = max(1, self.val_check_batch)\n\n on_ddp = self.use_ddp or self.use_ddp2\n if on_ddp and not isinstance(self.get_train_dataloader().sampler, DistributedSampler):\n msg = \"\"\"\n You're using multiple gpus and multiple nodes without using a DistributedSampler\n to assign a subset of your data to each process. 
To silence this warning, pass a\n DistributedSampler to your DataLoader.\n\n ie: this:\n dataset = myDataset()\n dataloader = Dataloader(dataset)\n\n becomes:\n dataset = myDataset()\n dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\n dataloader = Dataloader(dataset, sampler=dist_sampler)\n\n If you want each process to load the full dataset, ignore this warning.\n \"\"\"\n if msg not in self.shown_warnings and self.proc_rank == 0:\n self.shown_warnings.add(msg)\n warnings.warn(msg)\n\n def init_val_dataloader(self, model):\n \"\"\"\n Dataloaders are provided by the model\n :param model:\n :return:\n \"\"\"\n self.get_val_dataloaders = model.val_dataloader\n\n # determine number of validation batches\n # val datasets could be none, 1 or 2+\n if self.get_val_dataloaders() is not None:\n self.nb_val_batches = sum(len(dataloader) for dataloader in self.get_val_dataloaders())\n self.nb_val_batches = int(self.nb_val_batches * self.val_percent_check)\n self.nb_val_batches = max(1, self.nb_val_batches)\n\n on_ddp = self.use_ddp or self.use_ddp2\n if on_ddp and self.get_val_dataloaders() is not None:\n for dataloader in self.get_val_dataloaders():\n if not isinstance(dataloader.sampler, DistributedSampler):\n msg = \"\"\"\n Your val_dataloader(s) don't use DistributedSampler.\n\n You're using multiple gpus and multiple nodes without using a\n DistributedSampler to assign a subset of your data to each process.\n To silence this warning, pass a DistributedSampler to your DataLoader.\n\n ie: this:\n dataset = myDataset()\n dataloader = Dataloader(dataset)\n\n becomes:\n dataset = myDataset()\n dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\n dataloader = Dataloader(dataset, sampler=dist_sampler)\n\n If you want each process to load the full dataset, ignore this warning.\n \"\"\"\n if msg not in self.shown_warnings and self.proc_rank == 0:\n self.shown_warnings.add(msg)\n warnings.warn(msg)\n break\n\n def init_test_dataloader(self, model):\n \"\"\"\n Dataloaders are provided by the model\n :param model:\n :return:\n \"\"\"\n\n self.get_test_dataloaders = model.test_dataloader\n\n # determine number of test batches\n if self.get_test_dataloaders() is not None:\n len_sum = sum(len(dataloader) for dataloader in self.get_test_dataloaders())\n self.nb_test_batches = len_sum\n self.nb_test_batches = int(self.nb_test_batches * self.test_percent_check)\n self.nb_test_batches = max(1, self.nb_test_batches)\n\n on_ddp = self.use_ddp or self.use_ddp2\n if on_ddp and self.get_test_dataloaders() is not None:\n for dataloader in self.get_test_dataloaders():\n if not isinstance(dataloader.sampler, DistributedSampler):\n msg = \"\"\"\n Your test_dataloader(s) don't use DistributedSampler.\n\n You're using multiple gpus and multiple nodes without using a\n DistributedSampler to assign a subset of your data to each process.\n To silence this warning, pass a DistributedSampler to your DataLoader.\n\n ie: this:\n dataset = myDataset()\n dataloader = Dataloader(dataset)\n\n becomes:\n dataset = myDataset()\n dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\n dataloader = Dataloader(dataset, sampler=dist_sampler)\n\n If you want each process to load the full dataset, ignore this warning.\n \"\"\"\n if msg not in self.shown_warnings and self.proc_rank == 0:\n self.shown_warnings.add(msg)\n warnings.warn(msg)\n break\n\n def get_dataloaders(self, model):\n \"\"\"\n Dataloaders are provided by the model\n :param model:\n :return:\n \"\"\"\n\n 
self.init_train_dataloader(model)\n self.init_test_dataloader(model)\n self.init_val_dataloader(model)\n\n if self.use_ddp or self.use_ddp2:\n # wait for all processes to catch up\n dist.barrier()\n\n # load each dataloader\n self.get_train_dataloader()\n self.get_test_dataloaders()\n self.get_val_dataloaders()\n\n # support IterableDataset for train data\n self.is_iterable_train_dataloader = (\n EXIST_ITER_DATASET and isinstance(self.get_train_dataloader().dataset, IterableDataset))\n if self.is_iterable_train_dataloader and not isinstance(self.val_check_interval, int):\n m = '''\n When using an iterableDataset for train_dataloader,\n Trainer(val_check_interval) must be an int.\n An int k specifies checking validation every k training batches\n '''\n raise MisconfigurationException(m)\n\n def determine_data_use_amount(self, train_percent_check, val_percent_check,\n test_percent_check, overfit_pct):\n \"\"\"\n Use less data for debugging purposes\n \"\"\"\n self.train_percent_check = train_percent_check\n self.val_percent_check = val_percent_check\n self.test_percent_check = test_percent_check\n if overfit_pct > 0:\n self.train_percent_check = overfit_pct\n self.val_percent_check = overfit_pct\n self.test_percent_check = overfit_pct\n", "path": "pytorch_lightning/trainer/data_loading_mixin.py"}]} | 2,462 | 433 |
gh_patches_debug_58044 | rasdani/github-patches | git_diff | PyGithub__PyGithub-2084 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in calculating totalCount of review requests
I want to get the count of all review requests (users and teams), but there is a problem when using the following code:
```
for pull in repo.get_pulls(state='all'):
review_reqs = pull.get_review_requests()
users = review_reqs[0]
teams = review_reqs[1]
print(users.totalCount)
print(teams.totalCount)
```
It always prints `2` for the count of users and teams, no matter what their actual count is.
For example, in the following case you can see that both the users and the teams lists are empty, yet totalCount still returns 2 for both of them:

--- END ISSUE ---
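Before diving into the file below, it helps to see why the number is always exactly 2. `totalCount` re-requests the first page with `per_page=1`; the review-requests endpoint answers with a single JSON object of the form `{"users": [...], "teams": [...]}` and no `link` header, so the fallback `len(data)` counts the two dictionary keys instead of the reviewers. A tiny sketch (the payload is hand-written here to mirror the issue's screenshot, not fetched from the API):

```python
# Shape of the response body for GET /repos/{owner}/{repo}/pulls/{number}/requested_reviewers
payload = {"users": [], "teams": []}

print(len(payload))           # 2  -> what totalCount currently reports (the number of keys)
print(len(payload["users"]))  # 0  -> the number the caller actually wants
```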
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `github/PaginatedList.py`
Content:
```
1 ############################ Copyrights and license ############################
2 # #
3 # Copyright 2012 Vincent Jacques <[email protected]> #
4 # Copyright 2012 Zearin <[email protected]> #
5 # Copyright 2013 AKFish <[email protected]> #
6 # Copyright 2013 Bill Mill <[email protected]> #
7 # Copyright 2013 Vincent Jacques <[email protected]> #
8 # Copyright 2013 davidbrai <[email protected]> #
9 # Copyright 2014 Thialfihar <[email protected]> #
10 # Copyright 2014 Vincent Jacques <[email protected]> #
11 # Copyright 2015 Dan Vanderkam <[email protected]> #
12 # Copyright 2015 Eliot Walker <[email protected]> #
13 # Copyright 2016 Peter Buckley <[email protected]> #
14 # Copyright 2017 Jannis Gebauer <[email protected]> #
15 # Copyright 2018 Gilad Shefer <[email protected]> #
16 # Copyright 2018 Joel Koglin <[email protected]> #
17 # Copyright 2018 Wan Liuyang <[email protected]> #
18 # Copyright 2018 sfdye <[email protected]> #
19 # #
20 # This file is part of PyGithub. #
21 # http://pygithub.readthedocs.io/ #
22 # #
23 # PyGithub is free software: you can redistribute it and/or modify it under #
24 # the terms of the GNU Lesser General Public License as published by the Free #
25 # Software Foundation, either version 3 of the License, or (at your option) #
26 # any later version. #
27 # #
28 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
29 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
30 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
31 # details. #
32 # #
33 # You should have received a copy of the GNU Lesser General Public License #
34 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
35 # #
36 ################################################################################
37
38 from urllib.parse import parse_qs
39
40
41 class PaginatedListBase:
42 def __init__(self):
43 self.__elements = list()
44
45 def __getitem__(self, index):
46 assert isinstance(index, (int, slice))
47 if isinstance(index, int):
48 self.__fetchToIndex(index)
49 return self.__elements[index]
50 else:
51 return self._Slice(self, index)
52
53 def __iter__(self):
54 yield from self.__elements
55 while self._couldGrow():
56 newElements = self._grow()
57 yield from newElements
58
59 def _isBiggerThan(self, index):
60 return len(self.__elements) > index or self._couldGrow()
61
62 def __fetchToIndex(self, index):
63 while len(self.__elements) <= index and self._couldGrow():
64 self._grow()
65
66 def _grow(self):
67 newElements = self._fetchNextPage()
68 self.__elements += newElements
69 return newElements
70
71 class _Slice:
72 def __init__(self, theList, theSlice):
73 self.__list = theList
74 self.__start = theSlice.start or 0
75 self.__stop = theSlice.stop
76 self.__step = theSlice.step or 1
77
78 def __iter__(self):
79 index = self.__start
80 while not self.__finished(index):
81 if self.__list._isBiggerThan(index):
82 yield self.__list[index]
83 index += self.__step
84 else:
85 return
86
87 def __finished(self, index):
88 return self.__stop is not None and index >= self.__stop
89
90
91 class PaginatedList(PaginatedListBase):
92 """
93 This class abstracts the `pagination of the API <https://docs.github.com/en/rest/guides/traversing-with-pagination>`_.
94
95 You can simply enumerate through instances of this class::
96
97 for repo in user.get_repos():
98 print(repo.name)
99
100 If you want to know the total number of items in the list::
101
102 print(user.get_repos().totalCount)
103
104 You can also index them or take slices::
105
106 second_repo = user.get_repos()[1]
107 first_repos = user.get_repos()[:10]
108
109 If you want to iterate in reversed order, just do::
110
111 for repo in user.get_repos().reversed:
112 print(repo.name)
113
114 And if you really need it, you can explicitly access a specific page::
115
116 some_repos = user.get_repos().get_page(0)
117 some_other_repos = user.get_repos().get_page(3)
118 """
119
120 def __init__(
121 self,
122 contentClass,
123 requester,
124 firstUrl,
125 firstParams,
126 headers=None,
127 list_item="items",
128 ):
129 super().__init__()
130 self.__requester = requester
131 self.__contentClass = contentClass
132 self.__firstUrl = firstUrl
133 self.__firstParams = firstParams or ()
134 self.__nextUrl = firstUrl
135 self.__nextParams = firstParams or {}
136 self.__headers = headers
137 self.__list_item = list_item
138 if self.__requester.per_page != 30:
139 self.__nextParams["per_page"] = self.__requester.per_page
140 self._reversed = False
141 self.__totalCount = None
142
143 @property
144 def totalCount(self):
145 if not self.__totalCount:
146 params = {} if self.__nextParams is None else self.__nextParams.copy()
147 # set per_page = 1 so the totalCount is just the number of pages
148 params.update({"per_page": 1})
149 headers, data = self.__requester.requestJsonAndCheck(
150 "GET", self.__firstUrl, parameters=params, headers=self.__headers
151 )
152 if "link" not in headers:
153 if data and "total_count" in data:
154 self.__totalCount = data["total_count"]
155 elif data:
156 self.__totalCount = len(data)
157 else:
158 self.__totalCount = 0
159 else:
160 links = self.__parseLinkHeader(headers)
161 lastUrl = links.get("last")
162 if lastUrl:
163 self.__totalCount = int(parse_qs(lastUrl)["page"][0])
164 else:
165 self.__totalCount = 0
166 return self.__totalCount
167
168 def _getLastPageUrl(self):
169 headers, data = self.__requester.requestJsonAndCheck(
170 "GET", self.__firstUrl, parameters=self.__nextParams, headers=self.__headers
171 )
172 links = self.__parseLinkHeader(headers)
173 lastUrl = links.get("last")
174 return lastUrl
175
176 @property
177 def reversed(self):
178 r = PaginatedList(
179 self.__contentClass,
180 self.__requester,
181 self.__firstUrl,
182 self.__firstParams,
183 self.__headers,
184 self.__list_item,
185 )
186 r.__reverse()
187 return r
188
189 def __reverse(self):
190 self._reversed = True
191 lastUrl = self._getLastPageUrl()
192 if lastUrl:
193 self.__nextUrl = lastUrl
194
195 def _couldGrow(self):
196 return self.__nextUrl is not None
197
198 def _fetchNextPage(self):
199 headers, data = self.__requester.requestJsonAndCheck(
200 "GET", self.__nextUrl, parameters=self.__nextParams, headers=self.__headers
201 )
202 data = data if data else []
203
204 self.__nextUrl = None
205 if len(data) > 0:
206 links = self.__parseLinkHeader(headers)
207 if self._reversed:
208 if "prev" in links:
209 self.__nextUrl = links["prev"]
210 elif "next" in links:
211 self.__nextUrl = links["next"]
212 self.__nextParams = None
213
214 if self.__list_item in data:
215 self.__totalCount = data.get("total_count")
216 data = data[self.__list_item]
217
218 content = [
219 self.__contentClass(self.__requester, headers, element, completed=False)
220 for element in data
221 if element is not None
222 ]
223 if self._reversed:
224 return content[::-1]
225 return content
226
227 def __parseLinkHeader(self, headers):
228 links = {}
229 if "link" in headers:
230 linkHeaders = headers["link"].split(", ")
231 for linkHeader in linkHeaders:
232 url, rel, *rest = linkHeader.split("; ")
233 url = url[1:-1]
234 rel = rel[5:-1]
235 links[rel] = url
236 return links
237
238 def get_page(self, page):
239 params = dict(self.__firstParams)
240 if page != 0:
241 params["page"] = page + 1
242 if self.__requester.per_page != 30:
243 params["per_page"] = self.__requester.per_page
244 headers, data = self.__requester.requestJsonAndCheck(
245 "GET", self.__firstUrl, parameters=params, headers=self.__headers
246 )
247
248 if self.__list_item in data:
249 self.__totalCount = data.get("total_count")
250 data = data[self.__list_item]
251
252 return [
253 self.__contentClass(self.__requester, headers, element, completed=False)
254 for element in data
255 ]
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/github/PaginatedList.py b/github/PaginatedList.py
--- a/github/PaginatedList.py
+++ b/github/PaginatedList.py
@@ -153,6 +153,8 @@
if data and "total_count" in data:
self.__totalCount = data["total_count"]
elif data:
+ if isinstance(data, dict):
+ data = data[self.__list_item]
self.__totalCount = len(data)
else:
self.__totalCount = 0
| {"golden_diff": "diff --git a/github/PaginatedList.py b/github/PaginatedList.py\n--- a/github/PaginatedList.py\n+++ b/github/PaginatedList.py\n@@ -153,6 +153,8 @@\n if data and \"total_count\" in data:\n self.__totalCount = data[\"total_count\"]\n elif data:\n+ if isinstance(data, dict):\n+ data = data[self.__list_item]\n self.__totalCount = len(data)\n else:\n self.__totalCount = 0\n", "issue": "Bug in calculating totalCount of review requests\nI want to get the count of all review requests (users and teams), but there is a problem when using the following code:\r\n\r\n```\r\nfor pull in repo.get_pulls(state='all'):\r\n review_reqs = pull.get_review_requests()\r\n users = review_reqs[0]\r\n teams = review_reqs[1]\r\n print(users.totalCount)\r\n print(teams.totalCount)\r\n```\r\nIt always prints `2` for the count of users and teams no matter what is the actual count of them.\r\n\r\nFor example, in the following case, you can see that both users and teams list is empty but again totalCount returns 2 for both of them:\r\n\r\n\r\n\n", "before_files": [{"content": "############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Bill Mill <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 davidbrai <[email protected]> #\n# Copyright 2014 Thialfihar <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2015 Dan Vanderkam <[email protected]> #\n# Copyright 2015 Eliot Walker <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2018 Gilad Shefer <[email protected]> #\n# Copyright 2018 Joel Koglin <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\nfrom urllib.parse import parse_qs\n\n\nclass PaginatedListBase:\n def __init__(self):\n self.__elements = list()\n\n def __getitem__(self, index):\n assert isinstance(index, (int, slice))\n if isinstance(index, int):\n self.__fetchToIndex(index)\n return self.__elements[index]\n else:\n return self._Slice(self, index)\n\n def __iter__(self):\n yield from self.__elements\n while self._couldGrow():\n newElements = self._grow()\n yield from newElements\n\n def _isBiggerThan(self, index):\n return len(self.__elements) > index or self._couldGrow()\n\n def __fetchToIndex(self, index):\n while len(self.__elements) <= index and self._couldGrow():\n self._grow()\n\n def _grow(self):\n newElements = self._fetchNextPage()\n self.__elements += newElements\n return newElements\n\n class _Slice:\n def __init__(self, theList, theSlice):\n self.__list = theList\n self.__start = theSlice.start or 0\n self.__stop = theSlice.stop\n self.__step = theSlice.step or 1\n\n def __iter__(self):\n index = self.__start\n while not self.__finished(index):\n if self.__list._isBiggerThan(index):\n yield self.__list[index]\n index += self.__step\n else:\n return\n\n def __finished(self, index):\n return self.__stop is not None and index >= self.__stop\n\n\nclass PaginatedList(PaginatedListBase):\n \"\"\"\n This class abstracts the `pagination of the API <https://docs.github.com/en/rest/guides/traversing-with-pagination>`_.\n\n You can simply enumerate through instances of this class::\n\n for repo in user.get_repos():\n print(repo.name)\n\n If you want to know the total number of items in the list::\n\n print(user.get_repos().totalCount)\n\n You can also index them or take slices::\n\n second_repo = user.get_repos()[1]\n first_repos = user.get_repos()[:10]\n\n If you want to iterate in reversed order, just do::\n\n for repo in user.get_repos().reversed:\n print(repo.name)\n\n And if you really need it, you can explicitly access a specific page::\n\n some_repos = user.get_repos().get_page(0)\n some_other_repos = user.get_repos().get_page(3)\n \"\"\"\n\n def __init__(\n self,\n contentClass,\n requester,\n firstUrl,\n firstParams,\n headers=None,\n list_item=\"items\",\n ):\n super().__init__()\n self.__requester = requester\n self.__contentClass = contentClass\n self.__firstUrl = firstUrl\n self.__firstParams = firstParams or ()\n self.__nextUrl = firstUrl\n self.__nextParams = firstParams or {}\n self.__headers = headers\n self.__list_item = list_item\n if self.__requester.per_page != 30:\n self.__nextParams[\"per_page\"] = self.__requester.per_page\n self._reversed = False\n self.__totalCount = None\n\n @property\n def totalCount(self):\n if not self.__totalCount:\n params = {} if self.__nextParams is None else self.__nextParams.copy()\n # set per_page = 1 so the totalCount is just the number of pages\n params.update({\"per_page\": 1})\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=params, headers=self.__headers\n )\n if \"link\" not in headers:\n if data and \"total_count\" in data:\n self.__totalCount = data[\"total_count\"]\n elif data:\n self.__totalCount = len(data)\n else:\n self.__totalCount = 0\n else:\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n if lastUrl:\n self.__totalCount = int(parse_qs(lastUrl)[\"page\"][0])\n else:\n self.__totalCount = 0\n return self.__totalCount\n\n def _getLastPageUrl(self):\n headers, data = 
self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=self.__nextParams, headers=self.__headers\n )\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n return lastUrl\n\n @property\n def reversed(self):\n r = PaginatedList(\n self.__contentClass,\n self.__requester,\n self.__firstUrl,\n self.__firstParams,\n self.__headers,\n self.__list_item,\n )\n r.__reverse()\n return r\n\n def __reverse(self):\n self._reversed = True\n lastUrl = self._getLastPageUrl()\n if lastUrl:\n self.__nextUrl = lastUrl\n\n def _couldGrow(self):\n return self.__nextUrl is not None\n\n def _fetchNextPage(self):\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__nextUrl, parameters=self.__nextParams, headers=self.__headers\n )\n data = data if data else []\n\n self.__nextUrl = None\n if len(data) > 0:\n links = self.__parseLinkHeader(headers)\n if self._reversed:\n if \"prev\" in links:\n self.__nextUrl = links[\"prev\"]\n elif \"next\" in links:\n self.__nextUrl = links[\"next\"]\n self.__nextParams = None\n\n if self.__list_item in data:\n self.__totalCount = data.get(\"total_count\")\n data = data[self.__list_item]\n\n content = [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n if element is not None\n ]\n if self._reversed:\n return content[::-1]\n return content\n\n def __parseLinkHeader(self, headers):\n links = {}\n if \"link\" in headers:\n linkHeaders = headers[\"link\"].split(\", \")\n for linkHeader in linkHeaders:\n url, rel, *rest = linkHeader.split(\"; \")\n url = url[1:-1]\n rel = rel[5:-1]\n links[rel] = url\n return links\n\n def get_page(self, page):\n params = dict(self.__firstParams)\n if page != 0:\n params[\"page\"] = page + 1\n if self.__requester.per_page != 30:\n params[\"per_page\"] = self.__requester.per_page\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=params, headers=self.__headers\n )\n\n if self.__list_item in data:\n self.__totalCount = data.get(\"total_count\")\n data = data[self.__list_item]\n\n return [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n ]\n", "path": "github/PaginatedList.py"}], "after_files": [{"content": "############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Bill Mill <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 davidbrai <[email protected]> #\n# Copyright 2014 Thialfihar <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2015 Dan Vanderkam <[email protected]> #\n# Copyright 2015 Eliot Walker <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2018 Gilad Shefer <[email protected]> #\n# Copyright 2018 Joel Koglin <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. 
#\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n################################################################################\n\nfrom urllib.parse import parse_qs\n\n\nclass PaginatedListBase:\n def __init__(self):\n self.__elements = list()\n\n def __getitem__(self, index):\n assert isinstance(index, (int, slice))\n if isinstance(index, int):\n self.__fetchToIndex(index)\n return self.__elements[index]\n else:\n return self._Slice(self, index)\n\n def __iter__(self):\n yield from self.__elements\n while self._couldGrow():\n newElements = self._grow()\n yield from newElements\n\n def _isBiggerThan(self, index):\n return len(self.__elements) > index or self._couldGrow()\n\n def __fetchToIndex(self, index):\n while len(self.__elements) <= index and self._couldGrow():\n self._grow()\n\n def _grow(self):\n newElements = self._fetchNextPage()\n self.__elements += newElements\n return newElements\n\n class _Slice:\n def __init__(self, theList, theSlice):\n self.__list = theList\n self.__start = theSlice.start or 0\n self.__stop = theSlice.stop\n self.__step = theSlice.step or 1\n\n def __iter__(self):\n index = self.__start\n while not self.__finished(index):\n if self.__list._isBiggerThan(index):\n yield self.__list[index]\n index += self.__step\n else:\n return\n\n def __finished(self, index):\n return self.__stop is not None and index >= self.__stop\n\n\nclass PaginatedList(PaginatedListBase):\n \"\"\"\n This class abstracts the `pagination of the API <https://docs.github.com/en/rest/guides/traversing-with-pagination>`_.\n\n You can simply enumerate through instances of this class::\n\n for repo in user.get_repos():\n print(repo.name)\n\n If you want to know the total number of items in the list::\n\n print(user.get_repos().totalCount)\n\n You can also index them or take slices::\n\n second_repo = user.get_repos()[1]\n first_repos = user.get_repos()[:10]\n\n If you want to iterate in reversed order, just do::\n\n for repo in user.get_repos().reversed:\n print(repo.name)\n\n And if you really need it, you can explicitly access a specific page::\n\n some_repos = user.get_repos().get_page(0)\n some_other_repos = user.get_repos().get_page(3)\n \"\"\"\n\n def __init__(\n self,\n contentClass,\n requester,\n firstUrl,\n firstParams,\n headers=None,\n list_item=\"items\",\n ):\n super().__init__()\n self.__requester = requester\n self.__contentClass = contentClass\n self.__firstUrl = firstUrl\n self.__firstParams = firstParams or ()\n self.__nextUrl = firstUrl\n self.__nextParams = firstParams or {}\n self.__headers = headers\n self.__list_item = list_item\n if self.__requester.per_page != 30:\n self.__nextParams[\"per_page\"] = self.__requester.per_page\n self._reversed = False\n self.__totalCount = None\n\n @property\n def totalCount(self):\n if not self.__totalCount:\n params = {} if self.__nextParams is None else self.__nextParams.copy()\n # set per_page = 1 so the totalCount is just the number of pages\n params.update({\"per_page\": 1})\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=params, headers=self.__headers\n )\n if \"link\" not in headers:\n if data and \"total_count\" 
in data:\n self.__totalCount = data[\"total_count\"]\n elif data:\n if isinstance(data, dict):\n data = data[self.__list_item]\n self.__totalCount = len(data)\n else:\n self.__totalCount = 0\n else:\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n if lastUrl:\n self.__totalCount = int(parse_qs(lastUrl)[\"page\"][0])\n else:\n self.__totalCount = 0\n return self.__totalCount\n\n def _getLastPageUrl(self):\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=self.__nextParams, headers=self.__headers\n )\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n return lastUrl\n\n @property\n def reversed(self):\n r = PaginatedList(\n self.__contentClass,\n self.__requester,\n self.__firstUrl,\n self.__firstParams,\n self.__headers,\n self.__list_item,\n )\n r.__reverse()\n return r\n\n def __reverse(self):\n self._reversed = True\n lastUrl = self._getLastPageUrl()\n if lastUrl:\n self.__nextUrl = lastUrl\n\n def _couldGrow(self):\n return self.__nextUrl is not None\n\n def _fetchNextPage(self):\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__nextUrl, parameters=self.__nextParams, headers=self.__headers\n )\n data = data if data else []\n\n self.__nextUrl = None\n if len(data) > 0:\n links = self.__parseLinkHeader(headers)\n if self._reversed:\n if \"prev\" in links:\n self.__nextUrl = links[\"prev\"]\n elif \"next\" in links:\n self.__nextUrl = links[\"next\"]\n self.__nextParams = None\n\n if self.__list_item in data:\n self.__totalCount = data.get(\"total_count\")\n data = data[self.__list_item]\n\n content = [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n if element is not None\n ]\n if self._reversed:\n return content[::-1]\n return content\n\n def __parseLinkHeader(self, headers):\n links = {}\n if \"link\" in headers:\n linkHeaders = headers[\"link\"].split(\", \")\n for linkHeader in linkHeaders:\n url, rel, *rest = linkHeader.split(\"; \")\n url = url[1:-1]\n rel = rel[5:-1]\n links[rel] = url\n return links\n\n def get_page(self, page):\n params = dict(self.__firstParams)\n if page != 0:\n params[\"page\"] = page + 1\n if self.__requester.per_page != 30:\n params[\"per_page\"] = self.__requester.per_page\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=params, headers=self.__headers\n )\n\n if self.__list_item in data:\n self.__totalCount = data.get(\"total_count\")\n data = data[self.__list_item]\n\n return [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n ]\n", "path": "github/PaginatedList.py"}]} | 3,236 | 112 |
gh_patches_debug_17055 | rasdani/github-patches | git_diff | svthalia__concrexit-1680 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show non-current FoodEvents in API v2.
### Describe the bug
The `api/v2/food/events/` and `api/v2/food/events/<pk>/` endpoints currently do not return FoodEvents that are not current.
I think to change that we’d only need to replace some `FoodEvent.current_objects.all()`s with `FoodEvent.objects.all()`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/pizzas/api/v2/views.py`
Content:
```
1 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
2 from rest_framework.generics import (
3 ListAPIView,
4 RetrieveAPIView,
5 get_object_or_404,
6 CreateAPIView,
7 DestroyAPIView,
8 UpdateAPIView,
9 )
10
11 from rest_framework import filters as framework_filters, status
12 from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
13 from rest_framework.response import Response
14
15 from pizzas.api.v2 import filters
16 from pizzas.api.v2.serializers import (
17 ProductSerializer,
18 FoodOrderSerializer,
19 FoodOrderUpdateSerializer,
20 FoodOrderCreateSerializer,
21 )
22 from pizzas.api.v2.serializers.food_event import FoodEventSerializer
23 from pizzas.models import FoodEvent, Product, FoodOrder
24 from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
25
26
27 class FoodEventListView(ListAPIView):
28 """Returns an overview of all food events."""
29
30 serializer_class = FoodEventSerializer
31 queryset = FoodEvent.current_objects.all()
32 filter_backends = (
33 framework_filters.OrderingFilter,
34 filters.FoodEventDateFilterBackend,
35 )
36 ordering_fields = ("start", "end")
37 permission_classes = [
38 IsAuthenticatedOrTokenHasScope,
39 DjangoModelPermissionsOrAnonReadOnly,
40 ]
41 required_scopes = ["food:read"]
42
43
44 class FoodEventDetailView(RetrieveAPIView):
45 """Returns one single food event."""
46
47 serializer_class = FoodEventSerializer
48 queryset = FoodEvent.current_objects.all()
49 permission_classes = [
50 IsAuthenticatedOrTokenHasScope,
51 DjangoModelPermissionsOrAnonReadOnly,
52 ]
53 required_scopes = ["food:read"]
54
55
56 class FoodEventProductsListView(ListAPIView):
57 """Returns an overview of all products."""
58
59 serializer_class = ProductSerializer
60 queryset = Product.available_products.all()
61 filter_backends = (framework_filters.SearchFilter,)
62 search_fields = ("name",)
63 permission_classes = [
64 IsAuthenticatedOrTokenHasScope,
65 DjangoModelPermissionsOrAnonReadOnly,
66 ]
67 required_scopes = ["food:read"]
68
69
70 class FoodEventOrderDetailView(
71 RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView
72 ):
73 """Returns details of a food order."""
74
75 permission_classes = [
76 IsAuthenticatedOrTokenHasScopeForMethod,
77 DjangoModelPermissionsOrAnonReadOnly,
78 ]
79 required_scopes_per_method = {
80 "GET": ["food:read"],
81 "POST": ["food:order"],
82 "PUT": ["food:order"],
83 "PATCH": ["food:order"],
84 "DELETE": ["food:order"],
85 }
86
87 def get_serializer_class(self):
88 if self.request.method.lower() == "get":
89 return FoodOrderSerializer
90 if self.request.method.lower() == "post":
91 return FoodOrderCreateSerializer
92 return FoodOrderUpdateSerializer
93
94 def get_queryset(self):
95 return FoodOrder.objects.filter(food_event=self.food_event)
96
97 def get_object(self):
98 queryset = self.filter_queryset(self.get_queryset())
99 obj = get_object_or_404(queryset, member=self.request.member)
100
101 # May raise a permission denied
102 self.check_object_permissions(self.request, obj)
103
104 return obj
105
106 def dispatch(self, request, *args, **kwargs):
107 self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get("pk"))
108 return super().dispatch(request, *args, **kwargs)
109
110 def update(self, request, *args, **kwargs):
111 super().update(request, *args, **kwargs)
112 instance = self.get_object()
113 return Response(
114 FoodOrderSerializer(instance, context=self.get_serializer_context()).data
115 )
116
117 def create(self, request, *args, **kwargs):
118 serializer = self.get_serializer(data=request.data)
119 serializer.is_valid(raise_exception=True)
120 instance = serializer.save(food_event=self.food_event)
121 return Response(
122 FoodOrderSerializer(instance, context=self.get_serializer_context()).data,
123 status=status.HTTP_201_CREATED,
124 )
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py
--- a/website/pizzas/api/v2/views.py
+++ b/website/pizzas/api/v2/views.py
@@ -28,7 +28,7 @@
"""Returns an overview of all food events."""
serializer_class = FoodEventSerializer
- queryset = FoodEvent.current_objects.all()
+ queryset = FoodEvent.objects.all()
filter_backends = (
framework_filters.OrderingFilter,
filters.FoodEventDateFilterBackend,
@@ -45,7 +45,7 @@
"""Returns one single food event."""
serializer_class = FoodEventSerializer
- queryset = FoodEvent.current_objects.all()
+ queryset = FoodEvent.objects.all()
permission_classes = [
IsAuthenticatedOrTokenHasScope,
DjangoModelPermissionsOrAnonReadOnly,
| {"golden_diff": "diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py\n--- a/website/pizzas/api/v2/views.py\n+++ b/website/pizzas/api/v2/views.py\n@@ -28,7 +28,7 @@\n \"\"\"Returns an overview of all food events.\"\"\"\n \n serializer_class = FoodEventSerializer\n- queryset = FoodEvent.current_objects.all()\n+ queryset = FoodEvent.objects.all()\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.FoodEventDateFilterBackend,\n@@ -45,7 +45,7 @@\n \"\"\"Returns one single food event.\"\"\"\n \n serializer_class = FoodEventSerializer\n- queryset = FoodEvent.current_objects.all()\n+ queryset = FoodEvent.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n", "issue": "Show non-current FoodEvents in API v2.\n### Describe the bug\r\nThe `api/v2/food/events/` and `api/v2/food/events/<pk>/` endpoints currently do not return FoodEvents that are not current. \r\n\r\nI think to change that we\u2019d only need to replace some `FoodEvent.current_objects.all()`s with `FoodEvent.objects.all()`.\r\n\n", "before_files": [{"content": "from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.generics import (\n ListAPIView,\n RetrieveAPIView,\n get_object_or_404,\n CreateAPIView,\n DestroyAPIView,\n UpdateAPIView,\n)\n\nfrom rest_framework import filters as framework_filters, status\nfrom rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\nfrom rest_framework.response import Response\n\nfrom pizzas.api.v2 import filters\nfrom pizzas.api.v2.serializers import (\n ProductSerializer,\n FoodOrderSerializer,\n FoodOrderUpdateSerializer,\n FoodOrderCreateSerializer,\n)\nfrom pizzas.api.v2.serializers.food_event import FoodEventSerializer\nfrom pizzas.models import FoodEvent, Product, FoodOrder\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass FoodEventListView(ListAPIView):\n \"\"\"Returns an overview of all food events.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.current_objects.all()\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.FoodEventDateFilterBackend,\n )\n ordering_fields = (\"start\", \"end\")\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventDetailView(RetrieveAPIView):\n \"\"\"Returns one single food event.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.current_objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n\n serializer_class = ProductSerializer\n queryset = Product.available_products.all()\n filter_backends = (framework_filters.SearchFilter,)\n search_fields = (\"name\",)\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventOrderDetailView(\n RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView\n):\n \"\"\"Returns details of a food order.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes_per_method = {\n \"GET\": [\"food:read\"],\n \"POST\": [\"food:order\"],\n \"PUT\": [\"food:order\"],\n \"PATCH\": 
[\"food:order\"],\n \"DELETE\": [\"food:order\"],\n }\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"get\":\n return FoodOrderSerializer\n if self.request.method.lower() == \"post\":\n return FoodOrderCreateSerializer\n return FoodOrderUpdateSerializer\n\n def get_queryset(self):\n return FoodOrder.objects.filter(food_event=self.food_event)\n\n def get_object(self):\n queryset = self.filter_queryset(self.get_queryset())\n obj = get_object_or_404(queryset, member=self.request.member)\n\n # May raise a permission denied\n self.check_object_permissions(self.request, obj)\n\n return obj\n\n def dispatch(self, request, *args, **kwargs):\n self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get(\"pk\"))\n return super().dispatch(request, *args, **kwargs)\n\n def update(self, request, *args, **kwargs):\n super().update(request, *args, **kwargs)\n instance = self.get_object()\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n instance = serializer.save(food_event=self.food_event)\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data,\n status=status.HTTP_201_CREATED,\n )\n", "path": "website/pizzas/api/v2/views.py"}], "after_files": [{"content": "from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.generics import (\n ListAPIView,\n RetrieveAPIView,\n get_object_or_404,\n CreateAPIView,\n DestroyAPIView,\n UpdateAPIView,\n)\n\nfrom rest_framework import filters as framework_filters, status\nfrom rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\nfrom rest_framework.response import Response\n\nfrom pizzas.api.v2 import filters\nfrom pizzas.api.v2.serializers import (\n ProductSerializer,\n FoodOrderSerializer,\n FoodOrderUpdateSerializer,\n FoodOrderCreateSerializer,\n)\nfrom pizzas.api.v2.serializers.food_event import FoodEventSerializer\nfrom pizzas.models import FoodEvent, Product, FoodOrder\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass FoodEventListView(ListAPIView):\n \"\"\"Returns an overview of all food events.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.FoodEventDateFilterBackend,\n )\n ordering_fields = (\"start\", \"end\")\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventDetailView(RetrieveAPIView):\n \"\"\"Returns one single food event.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n\n serializer_class = ProductSerializer\n queryset = Product.available_products.all()\n filter_backends = (framework_filters.SearchFilter,)\n search_fields = (\"name\",)\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventOrderDetailView(\n RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView\n):\n \"\"\"Returns 
details of a food order.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes_per_method = {\n \"GET\": [\"food:read\"],\n \"POST\": [\"food:order\"],\n \"PUT\": [\"food:order\"],\n \"PATCH\": [\"food:order\"],\n \"DELETE\": [\"food:order\"],\n }\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"get\":\n return FoodOrderSerializer\n if self.request.method.lower() == \"post\":\n return FoodOrderCreateSerializer\n return FoodOrderUpdateSerializer\n\n def get_queryset(self):\n return FoodOrder.objects.filter(food_event=self.food_event)\n\n def get_object(self):\n queryset = self.filter_queryset(self.get_queryset())\n obj = get_object_or_404(queryset, member=self.request.member)\n\n # May raise a permission denied\n self.check_object_permissions(self.request, obj)\n\n return obj\n\n def dispatch(self, request, *args, **kwargs):\n self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get(\"pk\"))\n return super().dispatch(request, *args, **kwargs)\n\n def update(self, request, *args, **kwargs):\n super().update(request, *args, **kwargs)\n instance = self.get_object()\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n instance = serializer.save(food_event=self.food_event)\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data,\n status=status.HTTP_201_CREATED,\n )\n", "path": "website/pizzas/api/v2/views.py"}]} | 1,460 | 190 |
gh_patches_debug_30501 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-306 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Calendar used as a trigger for automations never fire.
So I created an automation that uses the generated calendar to notify me when I have to put the bins out using the new calendar triggers.
However, the automation never runs because the trigger never fires.
I debugged this a bit and found the following issue:
HA asks the integration [here](https://github.com/home-assistant/core/blob/dev/homeassistant/components/calendar/trigger.py#L98) for all applicable events. However, the returned list is not quite correct. The timestamps are datetimes and the integration checks only the date component [here](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/calendar.py#L53).
In my case, my local timezone is Europe/Berlin, which is currently UTC+2. HA gives UTC timestamps to the integration, so they are shifted by two hours "into the past" (not really, but you know what I mean). This means that the date check is wrong, as it misses the events for the day.
I changed the following and it worked in my testing but maybe you have a better idea on how to fix that:
```python
async def async_get_events(self, hass, start_datetime, end_datetime):
"""Return all events within specified time span."""
collections = []
for a in self._scraper.get_upcoming(include_today=True):
event = self._convert(a)
if event.start_datetime_local >= start_datetime and event.end_datetime_local <= end_datetime:
collections.append(event)
return collections
def _convert(self, collection):
"""Convert an collection into a Home Assistant calendar event."""
return CalendarEvent(
summary=collection.type,
start=collection.date,
end=collection.date,
)
```
Essentially, I convert to a HA calendar event first and then let HA convert the start/end times of the event to local time to compare them against the given start/end times which are still in UTC. But both are now proper datetime objects with timezone information so comparing them works fine.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/calendar.py`
Content:
```
1 """Calendar platform support for Waste Collection Schedule."""
2
3 import logging
4 from datetime import timedelta
5
6 from homeassistant.components.calendar import CalendarEntity, CalendarEvent
7
8 from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (
9 Scraper,
10 )
11
12 _LOGGER = logging.getLogger(__name__)
13
14
15 async def async_setup_platform(hass, config, async_add_entities, discovery_info=None):
16 """Set up calendar platform."""
17 # We only want this platform to be set up via discovery.
18 if discovery_info is None:
19 return
20
21 entities = []
22
23 api = discovery_info["api"]
24
25 for scraper in api.scrapers:
26 dedicated_calendar_types = scraper.get_dedicated_calendar_types()
27 global_calendar_types = scraper.get_global_calendar_types()
28
29 if dedicated_calendar_types is not None:
30 for type in dedicated_calendar_types:
31 unique_id = calc_unique_calendar_id(scraper, type)
32
33 entities.append(
34 WasteCollectionCalendar(
35 api,
36 scraper,
37 scraper.get_calendar_title_for_type(type),
38 [scraper.get_collection_type(type)],
39 unique_id,
40 )
41 )
42
43 if global_calendar_types is not None or dedicated_calendar_types is None:
44 unique_id = calc_unique_calendar_id(scraper)
45 entities.append(
46 WasteCollectionCalendar(
47 api,
48 scraper,
49 scraper.calendar_title,
50 [
51 scraper.get_collection_type(type)
52 for type in global_calendar_types
53 ]
54 if global_calendar_types is not None
55 else None,
56 unique_id,
57 )
58 )
59
60 async_add_entities(entities)
61
62
63 class WasteCollectionCalendar(CalendarEntity):
64 """Calendar entity class."""
65
66 def __init__(self, api, scraper, name, types, unique_id: str):
67 self._api = api
68 self._scraper = scraper
69 self._name = name
70 self._types = types
71 self._unique_id = unique_id
72 self._attr_unique_id = unique_id
73
74 @property
75 def name(self):
76 """Return entity name."""
77 return self._name
78
79 @property
80 def event(self):
81 """Return next collection event."""
82 collections = self._scraper.get_upcoming(
83 count=1, include_today=True, types=self._types
84 )
85
86 if len(collections) == 0:
87 return None
88 else:
89 return self._convert(collections[0])
90
91 async def async_get_events(self, hass, start_date, end_date):
92 """Return all events within specified time span."""
93 collections = []
94 for a in self._scraper.get_upcoming(include_today=True, types=self._types):
95 if a.date >= start_date.date() and a.date <= end_date.date():
96 collections.append(self._convert(a))
97 return collections
98
99 def _convert(self, collection):
100 """Convert an collection into a Home Assistant calendar event."""
101 return CalendarEvent(
102 summary=collection.type,
103 start=collection.date,
104 end=collection.date + timedelta(days=1),
105 )
106
107
108 def calc_unique_calendar_id(scraper: Scraper, type: str | None = None):
109 return scraper.unique_id + ("_" + type if type is not None else "") + "_calendar"
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/calendar.py b/custom_components/waste_collection_schedule/calendar.py
--- a/custom_components/waste_collection_schedule/calendar.py
+++ b/custom_components/waste_collection_schedule/calendar.py
@@ -1,9 +1,11 @@
"""Calendar platform support for Waste Collection Schedule."""
import logging
-from datetime import timedelta
+from datetime import timedelta, timezone, datetime
from homeassistant.components.calendar import CalendarEntity, CalendarEvent
+from homeassistant.core import HomeAssistant
+from homeassistant.util.dt import DEFAULT_TIME_ZONE
from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (
Scraper,
@@ -88,15 +90,23 @@
else:
return self._convert(collections[0])
- async def async_get_events(self, hass, start_date, end_date):
+ async def async_get_events(
+ self, hass: HomeAssistant, start_date: datetime, end_date: datetime
+ ):
"""Return all events within specified time span."""
- collections = []
- for a in self._scraper.get_upcoming(include_today=True, types=self._types):
- if a.date >= start_date.date() and a.date <= end_date.date():
- collections.append(self._convert(a))
- return collections
+ events = []
- def _convert(self, collection):
+ for collection in self._scraper.get_upcoming(
+ include_today=True, types=self._types
+ ):
+ event = self._convert(collection)
+
+ if start_date <= event.start_datetime_local <= end_date:
+ events.append(event)
+
+ return events
+
+ def _convert(self, collection) -> CalendarEvent:
"""Convert an collection into a Home Assistant calendar event."""
return CalendarEvent(
summary=collection.type,
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/calendar.py b/custom_components/waste_collection_schedule/calendar.py\n--- a/custom_components/waste_collection_schedule/calendar.py\n+++ b/custom_components/waste_collection_schedule/calendar.py\n@@ -1,9 +1,11 @@\n \"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n \n import logging\n-from datetime import timedelta\n+from datetime import timedelta, timezone, datetime\n \n from homeassistant.components.calendar import CalendarEntity, CalendarEvent\n+from homeassistant.core import HomeAssistant\n+from homeassistant.util.dt import DEFAULT_TIME_ZONE\n \n from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (\n Scraper,\n@@ -88,15 +90,23 @@\n else:\n return self._convert(collections[0])\n \n- async def async_get_events(self, hass, start_date, end_date):\n+ async def async_get_events(\n+ self, hass: HomeAssistant, start_date: datetime, end_date: datetime\n+ ):\n \"\"\"Return all events within specified time span.\"\"\"\n- collections = []\n- for a in self._scraper.get_upcoming(include_today=True, types=self._types):\n- if a.date >= start_date.date() and a.date <= end_date.date():\n- collections.append(self._convert(a))\n- return collections\n+ events = []\n \n- def _convert(self, collection):\n+ for collection in self._scraper.get_upcoming(\n+ include_today=True, types=self._types\n+ ):\n+ event = self._convert(collection)\n+\n+ if start_date <= event.start_datetime_local <= end_date:\n+ events.append(event)\n+\n+ return events\n+\n+ def _convert(self, collection) -> CalendarEvent:\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\n return CalendarEvent(\n summary=collection.type,\n", "issue": "Calendar used as a trigger for automations never fire.\nSo I created an automation that uses the generated calendar to notify me when I have to put the bins out using the new calendar triggers.\r\n\r\nHowever, the automation never runs because the trigger never fires.\r\n\r\nI debugged this a bit and found tha following issue:\r\n\r\nHA asks the integration [here](https://github.com/home-assistant/core/blob/dev/homeassistant/components/calendar/trigger.py#L98) for all applicable events. However, the returned list is not quite correct. The timestamps are datetimes and the integration checks only the date component [here](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/calendar.py#L53).\r\n\r\nIn my case, my local timezone is Europe/Berlin, which is currently UTC+2. HA gives UTC timestamps to the integration, so they are shifted by two hours \"into the past\" (not really, but you know what I mean). 
This means that the date check is wrong, as it misses the events for the day.\r\n\r\nI changed the following and it worked in my testing but maybe you have a better idea on how to fix that:\r\n\r\n```python\r\n async def async_get_events(self, hass, start_datetime, end_datetime):\r\n \"\"\"Return all events within specified time span.\"\"\"\r\n collections = []\r\n for a in self._scraper.get_upcoming(include_today=True):\r\n event = self._convert(a)\r\n if event.start_datetime_local >= start_datetime and event.end_datetime_local <= end_datetime:\r\n collections.append(event)\r\n return collections\r\n\r\n def _convert(self, collection):\r\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\r\n return CalendarEvent(\r\n summary=collection.type,\r\n start=collection.date,\r\n end=collection.date,\r\n )\r\n```\r\n\r\nEssentially, I convert to a HA calender event first and then let HA convert the start/end times of the event to local time to compare them against the given start/end times which are still in UTC. But both are now proper datetime objects with timezone information so comparing them works fine.\n", "before_files": [{"content": "\"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n\nimport logging\nfrom datetime import timedelta\n\nfrom homeassistant.components.calendar import CalendarEntity, CalendarEvent\n\nfrom custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (\n Scraper,\n)\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def async_setup_platform(hass, config, async_add_entities, discovery_info=None):\n \"\"\"Set up calendar platform.\"\"\"\n # We only want this platform to be set up via discovery.\n if discovery_info is None:\n return\n\n entities = []\n\n api = discovery_info[\"api\"]\n\n for scraper in api.scrapers:\n dedicated_calendar_types = scraper.get_dedicated_calendar_types()\n global_calendar_types = scraper.get_global_calendar_types()\n\n if dedicated_calendar_types is not None:\n for type in dedicated_calendar_types:\n unique_id = calc_unique_calendar_id(scraper, type)\n\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.get_calendar_title_for_type(type),\n [scraper.get_collection_type(type)],\n unique_id,\n )\n )\n\n if global_calendar_types is not None or dedicated_calendar_types is None:\n unique_id = calc_unique_calendar_id(scraper)\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.calendar_title,\n [\n scraper.get_collection_type(type)\n for type in global_calendar_types\n ]\n if global_calendar_types is not None\n else None,\n unique_id,\n )\n )\n\n async_add_entities(entities)\n\n\nclass WasteCollectionCalendar(CalendarEntity):\n \"\"\"Calendar entity class.\"\"\"\n\n def __init__(self, api, scraper, name, types, unique_id: str):\n self._api = api\n self._scraper = scraper\n self._name = name\n self._types = types\n self._unique_id = unique_id\n self._attr_unique_id = unique_id\n\n @property\n def name(self):\n \"\"\"Return entity name.\"\"\"\n return self._name\n\n @property\n def event(self):\n \"\"\"Return next collection event.\"\"\"\n collections = self._scraper.get_upcoming(\n count=1, include_today=True, types=self._types\n )\n\n if len(collections) == 0:\n return None\n else:\n return self._convert(collections[0])\n\n async def async_get_events(self, hass, start_date, end_date):\n \"\"\"Return all events within specified time span.\"\"\"\n collections = []\n for a in self._scraper.get_upcoming(include_today=True, 
types=self._types):\n if a.date >= start_date.date() and a.date <= end_date.date():\n collections.append(self._convert(a))\n return collections\n\n def _convert(self, collection):\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\n return CalendarEvent(\n summary=collection.type,\n start=collection.date,\n end=collection.date + timedelta(days=1),\n )\n\n\ndef calc_unique_calendar_id(scraper: Scraper, type: str | None = None):\n return scraper.unique_id + (\"_\" + type if type is not None else \"\") + \"_calendar\"\n", "path": "custom_components/waste_collection_schedule/calendar.py"}], "after_files": [{"content": "\"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n\nimport logging\nfrom datetime import timedelta, timezone, datetime\n\nfrom homeassistant.components.calendar import CalendarEntity, CalendarEvent\nfrom homeassistant.core import HomeAssistant\nfrom homeassistant.util.dt import DEFAULT_TIME_ZONE\n\nfrom custom_components.waste_collection_schedule.waste_collection_schedule.scraper import (\n Scraper,\n)\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def async_setup_platform(hass, config, async_add_entities, discovery_info=None):\n \"\"\"Set up calendar platform.\"\"\"\n # We only want this platform to be set up via discovery.\n if discovery_info is None:\n return\n\n entities = []\n\n api = discovery_info[\"api\"]\n\n for scraper in api.scrapers:\n dedicated_calendar_types = scraper.get_dedicated_calendar_types()\n global_calendar_types = scraper.get_global_calendar_types()\n\n if dedicated_calendar_types is not None:\n for type in dedicated_calendar_types:\n unique_id = calc_unique_calendar_id(scraper, type)\n\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.get_calendar_title_for_type(type),\n [scraper.get_collection_type(type)],\n unique_id,\n )\n )\n\n if global_calendar_types is not None or dedicated_calendar_types is None:\n unique_id = calc_unique_calendar_id(scraper)\n entities.append(\n WasteCollectionCalendar(\n api,\n scraper,\n scraper.calendar_title,\n [\n scraper.get_collection_type(type)\n for type in global_calendar_types\n ]\n if global_calendar_types is not None\n else None,\n unique_id,\n )\n )\n\n async_add_entities(entities)\n\n\nclass WasteCollectionCalendar(CalendarEntity):\n \"\"\"Calendar entity class.\"\"\"\n\n def __init__(self, api, scraper, name, types, unique_id: str):\n self._api = api\n self._scraper = scraper\n self._name = name\n self._types = types\n self._unique_id = unique_id\n self._attr_unique_id = unique_id\n\n @property\n def name(self):\n \"\"\"Return entity name.\"\"\"\n return self._name\n\n @property\n def event(self):\n \"\"\"Return next collection event.\"\"\"\n collections = self._scraper.get_upcoming(\n count=1, include_today=True, types=self._types\n )\n\n if len(collections) == 0:\n return None\n else:\n return self._convert(collections[0])\n\n async def async_get_events(\n self, hass: HomeAssistant, start_date: datetime, end_date: datetime\n ):\n \"\"\"Return all events within specified time span.\"\"\"\n events = []\n\n for collection in self._scraper.get_upcoming(\n include_today=True, types=self._types\n ):\n event = self._convert(collection)\n\n if start_date <= event.start_datetime_local <= end_date:\n events.append(event)\n\n return events\n\n def _convert(self, collection) -> CalendarEvent:\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\n return CalendarEvent(\n summary=collection.type,\n start=collection.date,\n 
end=collection.date + timedelta(days=1),\n )\n\n\ndef calc_unique_calendar_id(scraper: Scraper, type: str | None = None):\n return scraper.unique_id + (\"_\" + type if type is not None else \"\") + \"_calendar\"\n", "path": "custom_components/waste_collection_schedule/calendar.py"}]} | 1,596 | 392 |
gh_patches_debug_36612 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2633 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider officedepot is broken
During the global build at 2021-08-18-14-42-26, spider **officedepot** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/officedepot.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/officedepot.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/officedepot.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/officedepot.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7
8 class OfficedepotSpider(scrapy.Spider):
9 name = 'officedepot'
10 allowed_domains = ['www.officedepot.com']
11 start_urls = ['https://www.officedepot.com/storelocator/states/']
12
13 def parse_store(self, response):
14 o = OpeningHours()
15 for d in response.xpath('//time[@itemprop="openingHours"]/@datetime').extract():
16 day, times = d.split(' ', 1)
17 s, f = times.split('-')
18
19 # They seem to have a bug where they put down 24:00 when they mean noon
20 if s == '24:00': s = '12:00'
21
22 o.add_range(day, s, f)
23
24
25 store_number_results = response.xpath('//dt[@class="lsp_number"]/text()')
26 if store_number_results:
27 ref = store_number_results[-1].extract().strip()
28
29 yield GeojsonPointItem(
30 lat=response.xpath('//meta[@itemprop="latitude"]/@content').extract_first(),
31 lon=response.xpath('//meta[@itemprop="longitude"]/@content').extract_first(),
32 phone=response.xpath('//p[@itemprop="telephone"]/text()').extract_first(),
33 addr_full=response.xpath('//p[@itemprop="streetAddress"]/text()').extract_first(),
34 city=response.xpath('//p[@itemprop="addressLocality"]/text()').extract_first(),
35 state=response.xpath('//p[@itemprop="addressRegion"]/text()').extract_first(),
36 postcode=response.xpath('//p[@itemprop="postalCode"]/text()').extract_first(),
37 website=response.url,
38 ref=ref,
39 opening_hours=o.as_opening_hours(),
40 )
41
42 def parse(self, response):
43 for state in response.xpath('//div[@style="float: left; width: 200px;"]/a/@href').extract():
44 yield scrapy.Request(
45 response.urljoin(state),
46 callback=self.parse,
47 )
48
49 for store in response.xpath('//div[@style="float: left; width: 300px; padding-top: 10px;"]/a/@href').extract():
50 yield scrapy.Request(
51 response.urljoin(store),
52 callback=self.parse_store,
53 )
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/officedepot.py b/locations/spiders/officedepot.py
--- a/locations/spiders/officedepot.py
+++ b/locations/spiders/officedepot.py
@@ -1,4 +1,5 @@
# -*- coding: utf-8 -*-
+import json
import scrapy
from locations.items import GeojsonPointItem
@@ -7,8 +8,41 @@
class OfficedepotSpider(scrapy.Spider):
name = 'officedepot'
- allowed_domains = ['www.officedepot.com']
- start_urls = ['https://www.officedepot.com/storelocator/states/']
+ allowed_domains = ["where2getit.com"]
+
+ def start_requests(self):
+ url = 'https://locations.where2getit.com/officedepot/rest/getlist?like=0.9145201524205426&lang=en_US'
+
+ headers = {
+ 'Accept-Language': 'en-US,en;q=0.9',
+ 'Origin': 'https://hosted.where2getit.com',
+ 'Accept-Encoding': 'gzip, deflate, br',
+ 'Accept': 'application/json, text/javascript, */*; q=0.01',
+ 'Referer': 'https://hosted.where2getit.com/officedepot/2015/index1.html',
+ 'Connection': 'keep-alive',
+ 'Content-Type': 'application/json',
+ }
+
+ form_data = {
+ "request": {
+ "appkey": "592778B0-A13B-11EB-B3DB-84030D516365",
+ "formdata": {
+ "order": "city",
+ "objectname": "Locator::Store",
+ "softmatch": "1",
+ "where": {
+ }
+ }
+ }
+ }
+
+ yield scrapy.http.FormRequest(
+ url=url,
+ method='POST',
+ body=json.dumps(form_data),
+ headers=headers,
+ callback=self.parse,
+ )
def parse_store(self, response):
o = OpeningHours()
@@ -40,14 +74,20 @@
)
def parse(self, response):
- for state in response.xpath('//div[@style="float: left; width: 200px;"]/a/@href').extract():
- yield scrapy.Request(
- response.urljoin(state),
- callback=self.parse,
- )
-
- for store in response.xpath('//div[@style="float: left; width: 300px; padding-top: 10px;"]/a/@href').extract():
- yield scrapy.Request(
- response.urljoin(store),
- callback=self.parse_store,
- )
+ data = json.loads(response.body_as_unicode())
+
+ for store in data["response"]["collection"]:
+ properties = {
+ 'ref': store["clientkey"],
+ 'name': store.get("name"),
+ 'addr_full': store["address1"],
+ 'city': store["city"],
+ 'state': store["state"],
+ 'postcode': store["postalcode"],
+ 'country': store["country"],
+ 'lat': store["latitude"],
+ 'lon': store["longitude"],
+ 'phone': store["phone"],
+ }
+
+ yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/officedepot.py b/locations/spiders/officedepot.py\n--- a/locations/spiders/officedepot.py\n+++ b/locations/spiders/officedepot.py\n@@ -1,4 +1,5 @@\n # -*- coding: utf-8 -*-\n+import json\n import scrapy\n \n from locations.items import GeojsonPointItem\n@@ -7,8 +8,41 @@\n \n class OfficedepotSpider(scrapy.Spider):\n name = 'officedepot'\n- allowed_domains = ['www.officedepot.com']\n- start_urls = ['https://www.officedepot.com/storelocator/states/']\n+ allowed_domains = [\"where2getit.com\"]\n+\n+ def start_requests(self):\n+ url = 'https://locations.where2getit.com/officedepot/rest/getlist?like=0.9145201524205426&lang=en_US'\n+\n+ headers = {\n+ 'Accept-Language': 'en-US,en;q=0.9',\n+ 'Origin': 'https://hosted.where2getit.com',\n+ 'Accept-Encoding': 'gzip, deflate, br',\n+ 'Accept': 'application/json, text/javascript, */*; q=0.01',\n+ 'Referer': 'https://hosted.where2getit.com/officedepot/2015/index1.html',\n+ 'Connection': 'keep-alive',\n+ 'Content-Type': 'application/json',\n+ }\n+\n+ form_data = {\n+ \"request\": {\n+ \"appkey\": \"592778B0-A13B-11EB-B3DB-84030D516365\",\n+ \"formdata\": {\n+ \"order\": \"city\",\n+ \"objectname\": \"Locator::Store\",\n+ \"softmatch\": \"1\",\n+ \"where\": {\n+ }\n+ }\n+ }\n+ }\n+\n+ yield scrapy.http.FormRequest(\n+ url=url,\n+ method='POST',\n+ body=json.dumps(form_data),\n+ headers=headers,\n+ callback=self.parse,\n+ )\n \n def parse_store(self, response):\n o = OpeningHours()\n@@ -40,14 +74,20 @@\n )\n \n def parse(self, response):\n- for state in response.xpath('//div[@style=\"float: left; width: 200px;\"]/a/@href').extract():\n- yield scrapy.Request(\n- response.urljoin(state),\n- callback=self.parse,\n- )\n-\n- for store in response.xpath('//div[@style=\"float: left; width: 300px; padding-top: 10px;\"]/a/@href').extract():\n- yield scrapy.Request(\n- response.urljoin(store),\n- callback=self.parse_store,\n- )\n+ data = json.loads(response.body_as_unicode())\n+\n+ for store in data[\"response\"][\"collection\"]:\n+ properties = {\n+ 'ref': store[\"clientkey\"],\n+ 'name': store.get(\"name\"),\n+ 'addr_full': store[\"address1\"],\n+ 'city': store[\"city\"],\n+ 'state': store[\"state\"],\n+ 'postcode': store[\"postalcode\"],\n+ 'country': store[\"country\"],\n+ 'lat': store[\"latitude\"],\n+ 'lon': store[\"longitude\"],\n+ 'phone': store[\"phone\"],\n+ }\n+\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider officedepot is broken\nDuring the global build at 2021-08-18-14-42-26, spider **officedepot** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/officedepot.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/officedepot.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/officedepot.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass OfficedepotSpider(scrapy.Spider):\n name = 'officedepot'\n allowed_domains = ['www.officedepot.com']\n start_urls = ['https://www.officedepot.com/storelocator/states/']\n\n def parse_store(self, response):\n o = OpeningHours()\n for d in response.xpath('//time[@itemprop=\"openingHours\"]/@datetime').extract():\n day, times = d.split(' ', 1)\n s, f = times.split('-')\n\n # They seem to have a bug where they put down 24:00 when they mean noon\n if s == 
'24:00': s = '12:00'\n\n o.add_range(day, s, f)\n\n\n store_number_results = response.xpath('//dt[@class=\"lsp_number\"]/text()')\n if store_number_results:\n ref = store_number_results[-1].extract().strip()\n\n yield GeojsonPointItem(\n lat=response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first(),\n lon=response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first(),\n phone=response.xpath('//p[@itemprop=\"telephone\"]/text()').extract_first(),\n addr_full=response.xpath('//p[@itemprop=\"streetAddress\"]/text()').extract_first(),\n city=response.xpath('//p[@itemprop=\"addressLocality\"]/text()').extract_first(),\n state=response.xpath('//p[@itemprop=\"addressRegion\"]/text()').extract_first(),\n postcode=response.xpath('//p[@itemprop=\"postalCode\"]/text()').extract_first(),\n website=response.url,\n ref=ref,\n opening_hours=o.as_opening_hours(),\n )\n\n def parse(self, response):\n for state in response.xpath('//div[@style=\"float: left; width: 200px;\"]/a/@href').extract():\n yield scrapy.Request(\n response.urljoin(state),\n callback=self.parse,\n )\n\n for store in response.xpath('//div[@style=\"float: left; width: 300px; padding-top: 10px;\"]/a/@href').extract():\n yield scrapy.Request(\n response.urljoin(store),\n callback=self.parse_store,\n )\n", "path": "locations/spiders/officedepot.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass OfficedepotSpider(scrapy.Spider):\n name = 'officedepot'\n allowed_domains = [\"where2getit.com\"]\n\n def start_requests(self):\n url = 'https://locations.where2getit.com/officedepot/rest/getlist?like=0.9145201524205426&lang=en_US'\n\n headers = {\n 'Accept-Language': 'en-US,en;q=0.9',\n 'Origin': 'https://hosted.where2getit.com',\n 'Accept-Encoding': 'gzip, deflate, br',\n 'Accept': 'application/json, text/javascript, */*; q=0.01',\n 'Referer': 'https://hosted.where2getit.com/officedepot/2015/index1.html',\n 'Connection': 'keep-alive',\n 'Content-Type': 'application/json',\n }\n\n form_data = {\n \"request\": {\n \"appkey\": \"592778B0-A13B-11EB-B3DB-84030D516365\",\n \"formdata\": {\n \"order\": \"city\",\n \"objectname\": \"Locator::Store\",\n \"softmatch\": \"1\",\n \"where\": {\n }\n }\n }\n }\n\n yield scrapy.http.FormRequest(\n url=url,\n method='POST',\n body=json.dumps(form_data),\n headers=headers,\n callback=self.parse,\n )\n\n def parse_store(self, response):\n o = OpeningHours()\n for d in response.xpath('//time[@itemprop=\"openingHours\"]/@datetime').extract():\n day, times = d.split(' ', 1)\n s, f = times.split('-')\n\n # They seem to have a bug where they put down 24:00 when they mean noon\n if s == '24:00': s = '12:00'\n\n o.add_range(day, s, f)\n\n\n store_number_results = response.xpath('//dt[@class=\"lsp_number\"]/text()')\n if store_number_results:\n ref = store_number_results[-1].extract().strip()\n\n yield GeojsonPointItem(\n lat=response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first(),\n lon=response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first(),\n phone=response.xpath('//p[@itemprop=\"telephone\"]/text()').extract_first(),\n addr_full=response.xpath('//p[@itemprop=\"streetAddress\"]/text()').extract_first(),\n city=response.xpath('//p[@itemprop=\"addressLocality\"]/text()').extract_first(),\n state=response.xpath('//p[@itemprop=\"addressRegion\"]/text()').extract_first(),\n 
postcode=response.xpath('//p[@itemprop=\"postalCode\"]/text()').extract_first(),\n website=response.url,\n ref=ref,\n opening_hours=o.as_opening_hours(),\n )\n\n def parse(self, response):\n data = json.loads(response.body_as_unicode())\n\n for store in data[\"response\"][\"collection\"]:\n properties = {\n 'ref': store[\"clientkey\"],\n 'name': store.get(\"name\"),\n 'addr_full': store[\"address1\"],\n 'city': store[\"city\"],\n 'state': store[\"state\"],\n 'postcode': store[\"postalcode\"],\n 'country': store[\"country\"],\n 'lat': store[\"latitude\"],\n 'lon': store[\"longitude\"],\n 'phone': store[\"phone\"],\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/officedepot.py"}]} | 1,058 | 777 |
gh_patches_debug_29567 | rasdani/github-patches | git_diff | GPflow__GPflow-1350 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tf2.2-rc1 gast requirement
Hi,
gpflow 2.0.0-rc1 has gast requirement 0.2.2.
TensorFlow has gast requirement 0.3.3 from 2.2-rc1, which is incompatible with gpflow requirement.
Best Regards,
Marco
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # pylint: skip-file
5
6 import os
7 import sys
8 from pathlib import Path
9
10 from pkg_resources import parse_version
11 from setuptools import find_packages, setup
12
13 is_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7
14 on_rtd = os.environ.get('READTHEDOCS', None) == 'True' # copied from the docs
15
16 # Dependencies of GPflow
17 requirements = [
18 'numpy>=1.10.0',
19 'scipy>=0.18.0',
20 'multipledispatch>=0.4.9',
21 'tabulate',
22 'gast==0.2.2',
23 ]
24
25 if not is_py37:
26 requirements.append("dataclasses")
27
28 if not on_rtd:
29 requirements.append("tensorflow-probability>=0.9")
30
31 min_tf_version = '2.1.0'
32 tf_cpu = 'tensorflow'
33 tf_gpu = 'tensorflow-gpu'
34
35 # Only detect TF if not installed or outdated. If not, do not do not list as
36 # requirement to avoid installing over e.g. tensorflow-gpu
37 # To avoid this, rely on importing rather than the package name (like pip).
38
39 try:
40 # If tf not installed, import raises ImportError
41 import tensorflow as tf
42 if parse_version(tf.__version__) < parse_version(min_tf_version):
43 # TF pre-installed, but below the minimum required version
44 raise DeprecationWarning("TensorFlow version below minimum requirement")
45 except (ImportError, DeprecationWarning):
46 # Add TensorFlow to dependencies to trigger installation/update
47 if not on_rtd:
48 # Do not add TF if we are installing GPflow on readthedocs
49 requirements.append(tf_cpu)
50
51 with open(str(Path(".", "VERSION").absolute())) as version_file:
52 version = version_file.read().strip()
53
54 packages = find_packages('.', exclude=["tests"])
55
56 setup(name='gpflow',
57 version=version,
58 author="James Hensman, Alex Matthews",
59 author_email="[email protected]",
60 description="Gaussian process methods in TensorFlow",
61 license="Apache License 2.0",
62 keywords="machine-learning gaussian-processes kernels tensorflow",
63 url="http://github.com/GPflow/GPflow",
64 packages=packages,
65 include_package_data=True,
66 install_requires=requirements,
67 extras_require={'Tensorflow with GPU': [tf_gpu]},
68 python_requires=">=3.6",
69 classifiers=[
70 'License :: OSI Approved :: Apache Software License',
71 'Natural Language :: English',
72 'Operating System :: MacOS :: MacOS X',
73 'Operating System :: Microsoft :: Windows',
74 'Operating System :: POSIX :: Linux',
75 'Programming Language :: Python :: 3.6',
76 'Topic :: Scientific/Engineering :: Artificial Intelligence'
77 ])
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,8 +18,7 @@
'numpy>=1.10.0',
'scipy>=0.18.0',
'multipledispatch>=0.4.9',
- 'tabulate',
- 'gast==0.2.2',
+ 'tabulate'
]
if not is_py37:
@@ -32,6 +31,22 @@
tf_cpu = 'tensorflow'
tf_gpu = 'tensorflow-gpu'
+
+# for latest_version() [see https://github.com/GPflow/GPflow/issues/1348]:
+def latest_version(package_name):
+ import json
+ from urllib import request
+ import re
+
+ url = f"https://pypi.python.org/pypi/{package_name}/json"
+ data = json.load(request.urlopen(url))
+ # filter out rc and beta releases and, more generally, any releases that
+ # do not contain exclusively numbers and dots.
+ versions = [parse_version(v) for v in data["releases"].keys() if re.match("^[0-9.]+$", v)]
+ versions.sort()
+ return versions[-1] # return latest version
+
+
# Only detect TF if not installed or outdated. If not, do not do not list as
# requirement to avoid installing over e.g. tensorflow-gpu
# To avoid this, rely on importing rather than the package name (like pip).
@@ -47,6 +62,9 @@
if not on_rtd:
# Do not add TF if we are installing GPflow on readthedocs
requirements.append(tf_cpu)
+ gast_requirement = 'gast>=0.2.2,<0.3' if latest_version('tensorflow') < parse_version('2.2') else 'gast>=0.3.3'
+ requirements.append(gast_requirement)
+
with open(str(Path(".", "VERSION").absolute())) as version_file:
version = version_file.read().strip()
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,8 +18,7 @@\n 'numpy>=1.10.0',\n 'scipy>=0.18.0',\n 'multipledispatch>=0.4.9',\n- 'tabulate',\n- 'gast==0.2.2',\n+ 'tabulate'\n ]\n \n if not is_py37:\n@@ -32,6 +31,22 @@\n tf_cpu = 'tensorflow'\n tf_gpu = 'tensorflow-gpu'\n \n+\n+# for latest_version() [see https://github.com/GPflow/GPflow/issues/1348]:\n+def latest_version(package_name):\n+ import json\n+ from urllib import request\n+ import re\n+\n+ url = f\"https://pypi.python.org/pypi/{package_name}/json\"\n+ data = json.load(request.urlopen(url))\n+ # filter out rc and beta releases and, more generally, any releases that\n+ # do not contain exclusively numbers and dots.\n+ versions = [parse_version(v) for v in data[\"releases\"].keys() if re.match(\"^[0-9.]+$\", v)] \n+ versions.sort()\n+ return versions[-1] # return latest version\n+\n+\n # Only detect TF if not installed or outdated. If not, do not do not list as\n # requirement to avoid installing over e.g. tensorflow-gpu\n # To avoid this, rely on importing rather than the package name (like pip).\n@@ -47,6 +62,9 @@\n if not on_rtd:\n # Do not add TF if we are installing GPflow on readthedocs\n requirements.append(tf_cpu)\n+ gast_requirement = 'gast>=0.2.2,<0.3' if latest_version('tensorflow') < parse_version('2.2') else 'gast>=0.3.3'\n+ requirements.append(gast_requirement)\n+ \n \n with open(str(Path(\".\", \"VERSION\").absolute())) as version_file:\n version = version_file.read().strip()\n", "issue": "tf2.2-rc1 gast requirement\nHi,\r\n\r\ngpflow 2.0.0-rc1 has gast requirement 0.2.2.\r\n\r\nTensorFlow has gast requirement 0.3.3 from 2.2-rc1, which is incompatible with gpflow requirement.\r\n\r\nBest Regards,\r\n\r\nMarco\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# pylint: skip-file\n\nimport os\nimport sys\nfrom pathlib import Path\n\nfrom pkg_resources import parse_version\nfrom setuptools import find_packages, setup\n\nis_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7\non_rtd = os.environ.get('READTHEDOCS', None) == 'True' # copied from the docs\n\n# Dependencies of GPflow\nrequirements = [\n 'numpy>=1.10.0',\n 'scipy>=0.18.0',\n 'multipledispatch>=0.4.9',\n 'tabulate',\n 'gast==0.2.2',\n]\n\nif not is_py37:\n requirements.append(\"dataclasses\")\n\nif not on_rtd:\n requirements.append(\"tensorflow-probability>=0.9\")\n\nmin_tf_version = '2.1.0'\ntf_cpu = 'tensorflow'\ntf_gpu = 'tensorflow-gpu'\n\n# Only detect TF if not installed or outdated. If not, do not do not list as\n# requirement to avoid installing over e.g. 
tensorflow-gpu\n# To avoid this, rely on importing rather than the package name (like pip).\n\ntry:\n # If tf not installed, import raises ImportError\n import tensorflow as tf\n if parse_version(tf.__version__) < parse_version(min_tf_version):\n # TF pre-installed, but below the minimum required version\n raise DeprecationWarning(\"TensorFlow version below minimum requirement\")\nexcept (ImportError, DeprecationWarning):\n # Add TensorFlow to dependencies to trigger installation/update\n if not on_rtd:\n # Do not add TF if we are installing GPflow on readthedocs\n requirements.append(tf_cpu)\n\nwith open(str(Path(\".\", \"VERSION\").absolute())) as version_file:\n version = version_file.read().strip()\n\npackages = find_packages('.', exclude=[\"tests\"])\n\nsetup(name='gpflow',\n version=version,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=\"Gaussian process methods in TensorFlow\",\n license=\"Apache License 2.0\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"http://github.com/GPflow/GPflow\",\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n extras_require={'Tensorflow with GPU': [tf_gpu]},\n python_requires=\">=3.6\",\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence'\n ])\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# pylint: skip-file\n\nimport os\nimport sys\nfrom pathlib import Path\n\nfrom pkg_resources import parse_version\nfrom setuptools import find_packages, setup\n\nis_py37 = sys.version_info.major == 3 and sys.version_info.minor == 7\non_rtd = os.environ.get('READTHEDOCS', None) == 'True' # copied from the docs\n\n# Dependencies of GPflow\nrequirements = [\n 'numpy>=1.10.0',\n 'scipy>=0.18.0',\n 'multipledispatch>=0.4.9',\n 'tabulate'\n]\n\nif not is_py37:\n requirements.append(\"dataclasses\")\n\nif not on_rtd:\n requirements.append(\"tensorflow-probability>=0.9\")\n\nmin_tf_version = '2.1.0'\ntf_cpu = 'tensorflow'\ntf_gpu = 'tensorflow-gpu'\n\n\n# for latest_version() [see https://github.com/GPflow/GPflow/issues/1348]:\ndef latest_version(package_name):\n import json\n from urllib import request\n import re\n\n url = f\"https://pypi.python.org/pypi/{package_name}/json\"\n data = json.load(request.urlopen(url))\n # filter out rc and beta releases and, more generally, any releases that\n # do not contain exclusively numbers and dots.\n versions = [parse_version(v) for v in data[\"releases\"].keys() if re.match(\"^[0-9.]+$\", v)] \n versions.sort()\n return versions[-1] # return latest version\n\n\n# Only detect TF if not installed or outdated. If not, do not do not list as\n# requirement to avoid installing over e.g. 
tensorflow-gpu\n# To avoid this, rely on importing rather than the package name (like pip).\n\ntry:\n # If tf not installed, import raises ImportError\n import tensorflow as tf\n if parse_version(tf.__version__) < parse_version(min_tf_version):\n # TF pre-installed, but below the minimum required version\n raise DeprecationWarning(\"TensorFlow version below minimum requirement\")\nexcept (ImportError, DeprecationWarning):\n # Add TensorFlow to dependencies to trigger installation/update\n if not on_rtd:\n # Do not add TF if we are installing GPflow on readthedocs\n requirements.append(tf_cpu)\n gast_requirement = 'gast>=0.2.2,<0.3' if latest_version('tensorflow') < parse_version('2.2') else 'gast>=0.3.3'\n requirements.append(gast_requirement)\n \n\nwith open(str(Path(\".\", \"VERSION\").absolute())) as version_file:\n version = version_file.read().strip()\n\npackages = find_packages('.', exclude=[\"tests\"])\n\nsetup(name='gpflow',\n version=version,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=\"Gaussian process methods in TensorFlow\",\n license=\"Apache License 2.0\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"http://github.com/GPflow/GPflow\",\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n extras_require={'Tensorflow with GPU': [tf_gpu]},\n python_requires=\">=3.6\",\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence'\n ])\n", "path": "setup.py"}]} | 1,085 | 462 |
gh_patches_debug_18286 | rasdani/github-patches | git_diff | spotify__luigi-908 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception when a command-line parameter contains a non-ASCII character
Hey,
We've just upgraded to Luigi 1.1.2, and now we get an error when launching Tasks that have UTF-8 characters in their parameters (using Python 2.7.3).
Here's the stack trace we get:
```
Traceback (most recent call last):
File "examples/top_artists.py", line 283, in <module>
luigi.run()
File "/home/jonas/.virtualenvs/website/local/lib/python2.7/site-packages/luigi/interface.py", line 434, in run
return interface.run(tasks, worker_scheduler_factory, override_defaults=override_defaults)
File "/home/jonas/.virtualenvs/website/local/lib/python2.7/site-packages/luigi/interface.py", line 165, in run
not(lock.acquire_for(env_params.lock_pid_dir, env_params.lock_size))):
File "/home/jonas/.virtualenvs/website/local/lib/python2.7/site-packages/luigi/lock.py", line 59, in acquire_for
my_pid, my_cmd, pid_file = get_info(pid_dir)
File "/home/jonas/.virtualenvs/website/local/lib/python2.7/site-packages/luigi/lock.py", line 44, in get_info
pid_file = os.path.join(pid_dir, hashlib.md5(my_cmd.encode('utf8')).hexdigest()) + '.pid'
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 97: ordinal not in range(128)
```
Here I've just added a `plop = luigi.Parameter()` to `AggregateArtists`, and called it with
`python examples/top_artists.py AggregateArtists --local-scheduler --date-interval 2012-06 --plop à`
The error seems to come from the [Python3 refactoring](https://github.com/spotify/luigi/pull/745). Removing `.encode('utf8')` on [this line](https://github.com/spotify/luigi/blob/master/luigi/lock.py#L44) seems to solve the problem, but I guess it was added for a reason ^^.
Any ideas of how we could solve this on our side, other than removing the encode? (I'll try to add a test for it later)
--- END ISSUE ---
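
The traceback points at a Python 2 encoding pitfall: `os.popen(...).readline()` returns a byte string there, so calling `.encode('utf8')` on it makes the interpreter first decode the bytes with the ASCII codec, which is what fails on `0xc3`. A minimal sketch of a version-aware digest helper is shown below; the helper names are invented for illustration and this is not necessarily the patch the maintainers applied.

```python
import hashlib
import os
import sys


def _cmd_digest(my_cmd):
    # Python 3: md5() needs bytes, so encode the textual command line.
    # Python 2: the command is already a byte string; encoding it again would
    # trigger an implicit ASCII decode and fail on non-ASCII arguments.
    if sys.version_info[0] >= 3:
        my_cmd = my_cmd.encode("utf8")
    return hashlib.md5(my_cmd).hexdigest()


def _pid_file_path(pid_dir, my_cmd):
    return os.path.join(pid_dir, _cmd_digest(my_cmd)) + ".pid"
```
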
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `luigi/lock.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2012-2015 Spotify AB
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 #
17 """
18 Locking functionality when launching things from the command line.
19 Uses a pidfile.
20 This prevents multiple identical workflows to be launched simultaneously.
21 """
22 from __future__ import print_function
23
24 import hashlib
25 import os
26
27
28 def getpcmd(pid):
29 """
30 Returns command of process.
31
32 :param pid:
33 """
34 cmd = 'ps -p %s -o command=' % (pid,)
35 p = os.popen(cmd, 'r')
36 return p.readline().strip()
37
38
39 def get_info(pid_dir):
40 # Check the name and pid of this process
41 my_pid = os.getpid()
42 my_cmd = getpcmd(my_pid)
43
44 pid_file = os.path.join(pid_dir, hashlib.md5(my_cmd.encode('utf8')).hexdigest()) + '.pid'
45
46 return my_pid, my_cmd, pid_file
47
48
49 def acquire_for(pid_dir, num_available=1):
50 """
51 Makes sure the process is only run once at the same time with the same name.
52
53 Notice that we since we check the process name, different parameters to the same
54 command can spawn multiple processes at the same time, i.e. running
55 "/usr/bin/my_process" does not prevent anyone from launching
56 "/usr/bin/my_process --foo bar".
57 """
58
59 my_pid, my_cmd, pid_file = get_info(pid_dir)
60
61 # Check if there is a pid file corresponding to this name
62 if not os.path.exists(pid_dir):
63 os.mkdir(pid_dir)
64 os.chmod(pid_dir, 0o777)
65
66 pids = set()
67 pid_cmds = {}
68 if os.path.exists(pid_file):
69 # There is such a file - read the pid and look up its process name
70 pids.update(filter(None, map(str.strip, open(pid_file))))
71 pid_cmds = dict((pid, getpcmd(pid)) for pid in pids)
72 matching_pids = list(filter(lambda pid: pid_cmds[pid] == my_cmd, pids))
73
74 if len(matching_pids) >= num_available:
75 # We are already running under a different pid
76 print('Pid(s)', ', '.join(matching_pids), 'already running')
77 return False
78 else:
79 # The pid belongs to something else, we could
80 pass
81 pid_cmds[str(my_pid)] = my_cmd
82
83 # Write pids
84 pids.add(str(my_pid))
85 with open(pid_file, 'w') as f:
86 f.writelines('%s\n' % (pid, ) for pid in filter(pid_cmds.__getitem__, pids))
87
88 # Make the file writable by all
89 if os.name == 'nt':
90 pass
91 else:
92 s = os.stat(pid_file)
93 if os.getuid() == s.st_uid:
94 os.chmod(pid_file, s.st_mode | 0o777)
95
96 return True
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/luigi/lock.py b/luigi/lock.py
--- a/luigi/lock.py
+++ b/luigi/lock.py
@@ -24,6 +24,8 @@
import hashlib
import os
+from luigi import six
+
def getpcmd(pid):
"""
@@ -32,16 +34,23 @@
:param pid:
"""
cmd = 'ps -p %s -o command=' % (pid,)
- p = os.popen(cmd, 'r')
- return p.readline().strip()
+ with os.popen(cmd, 'r') as p:
+ return p.readline().strip()
-def get_info(pid_dir):
+def get_info(pid_dir, my_pid=None):
# Check the name and pid of this process
- my_pid = os.getpid()
+ if my_pid is None:
+ my_pid = os.getpid()
+
my_cmd = getpcmd(my_pid)
- pid_file = os.path.join(pid_dir, hashlib.md5(my_cmd.encode('utf8')).hexdigest()) + '.pid'
+ if six.PY3:
+ cmd_hash = my_cmd.encode('utf8')
+ else:
+ cmd_hash = my_cmd
+
+ pid_file = os.path.join(pid_dir, hashlib.md5(cmd_hash).hexdigest()) + '.pid'
return my_pid, my_cmd, pid_file
| {"golden_diff": "diff --git a/luigi/lock.py b/luigi/lock.py\n--- a/luigi/lock.py\n+++ b/luigi/lock.py\n@@ -24,6 +24,8 @@\n import hashlib\n import os\n \n+from luigi import six\n+\n \n def getpcmd(pid):\n \"\"\"\n@@ -32,16 +34,23 @@\n :param pid:\n \"\"\"\n cmd = 'ps -p %s -o command=' % (pid,)\n- p = os.popen(cmd, 'r')\n- return p.readline().strip()\n+ with os.popen(cmd, 'r') as p:\n+ return p.readline().strip()\n \n \n-def get_info(pid_dir):\n+def get_info(pid_dir, my_pid=None):\n # Check the name and pid of this process\n- my_pid = os.getpid()\n+ if my_pid is None:\n+ my_pid = os.getpid()\n+\n my_cmd = getpcmd(my_pid)\n \n- pid_file = os.path.join(pid_dir, hashlib.md5(my_cmd.encode('utf8')).hexdigest()) + '.pid'\n+ if six.PY3:\n+ cmd_hash = my_cmd.encode('utf8')\n+ else:\n+ cmd_hash = my_cmd\n+\n+ pid_file = os.path.join(pid_dir, hashlib.md5(cmd_hash).hexdigest()) + '.pid'\n \n return my_pid, my_cmd, pid_file\n", "issue": "Exception when a command-line parametter contains a non-ascii character\nHey,\n\nWe've just upgraded to Luigi 1.1.2, and now we get an error when launching Tasks that have utf-8 characters on the parameters. (using python 2.7.3)\n\nHere's the stacktrace we get : \n\n```\nTraceback (most recent call last):\n File \"examples/top_artists.py\", line 283, in <module>\n luigi.run()\n File \"/home/jonas/.virtualenvs/website/local/lib/python2.7/site-packages/luigi/interface.py\", line 434, in run\n return interface.run(tasks, worker_scheduler_factory, override_defaults=override_defaults)\n File \"/home/jonas/.virtualenvs/website/local/lib/python2.7/site-packages/luigi/interface.py\", line 165, in run\n not(lock.acquire_for(env_params.lock_pid_dir, env_params.lock_size))):\n File \"/home/jonas/.virtualenvs/website/local/lib/python2.7/site-packages/luigi/lock.py\", line 59, in acquire_for\n my_pid, my_cmd, pid_file = get_info(pid_dir)\n File \"/home/jonas/.virtualenvs/website/local/lib/python2.7/site-packages/luigi/lock.py\", line 44, in get_info\n pid_file = os.path.join(pid_dir, hashlib.md5(my_cmd.encode('utf8')).hexdigest()) + '.pid'\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 97: ordinal not in range(128)\n```\n\nHere I've just added a `plop = luigi.Parameter()` to `AggregateArtists`, and called it with \n`python examples/top_artists.py AggregateArtists --local-scheduler --date-interval 2012-06 --plop \u00e0`\n\nThe error seems to come from the [Python3 refactoring](https://github.com/spotify/luigi/pull/745). Removing `.encode('utf8')` on [this line](https://github.com/spotify/luigi/blob/master/luigi/lock.py#L44) seems to solve the problem, but I guess it was added for a reason ^^.\n\nAny ideas of how we could solve this on our side, other than removing the encode? 
(I'll try to add a test for it later)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2012-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\"\"\"\nLocking functionality when launching things from the command line.\nUses a pidfile.\nThis prevents multiple identical workflows to be launched simultaneously.\n\"\"\"\nfrom __future__ import print_function\n\nimport hashlib\nimport os\n\n\ndef getpcmd(pid):\n \"\"\"\n Returns command of process.\n\n :param pid:\n \"\"\"\n cmd = 'ps -p %s -o command=' % (pid,)\n p = os.popen(cmd, 'r')\n return p.readline().strip()\n\n\ndef get_info(pid_dir):\n # Check the name and pid of this process\n my_pid = os.getpid()\n my_cmd = getpcmd(my_pid)\n\n pid_file = os.path.join(pid_dir, hashlib.md5(my_cmd.encode('utf8')).hexdigest()) + '.pid'\n\n return my_pid, my_cmd, pid_file\n\n\ndef acquire_for(pid_dir, num_available=1):\n \"\"\"\n Makes sure the process is only run once at the same time with the same name.\n\n Notice that we since we check the process name, different parameters to the same\n command can spawn multiple processes at the same time, i.e. running\n \"/usr/bin/my_process\" does not prevent anyone from launching\n \"/usr/bin/my_process --foo bar\".\n \"\"\"\n\n my_pid, my_cmd, pid_file = get_info(pid_dir)\n\n # Check if there is a pid file corresponding to this name\n if not os.path.exists(pid_dir):\n os.mkdir(pid_dir)\n os.chmod(pid_dir, 0o777)\n\n pids = set()\n pid_cmds = {}\n if os.path.exists(pid_file):\n # There is such a file - read the pid and look up its process name\n pids.update(filter(None, map(str.strip, open(pid_file))))\n pid_cmds = dict((pid, getpcmd(pid)) for pid in pids)\n matching_pids = list(filter(lambda pid: pid_cmds[pid] == my_cmd, pids))\n\n if len(matching_pids) >= num_available:\n # We are already running under a different pid\n print('Pid(s)', ', '.join(matching_pids), 'already running')\n return False\n else:\n # The pid belongs to something else, we could\n pass\n pid_cmds[str(my_pid)] = my_cmd\n\n # Write pids\n pids.add(str(my_pid))\n with open(pid_file, 'w') as f:\n f.writelines('%s\\n' % (pid, ) for pid in filter(pid_cmds.__getitem__, pids))\n\n # Make the file writable by all\n if os.name == 'nt':\n pass\n else:\n s = os.stat(pid_file)\n if os.getuid() == s.st_uid:\n os.chmod(pid_file, s.st_mode | 0o777)\n\n return True\n", "path": "luigi/lock.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2012-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations 
under the License.\n#\n\"\"\"\nLocking functionality when launching things from the command line.\nUses a pidfile.\nThis prevents multiple identical workflows to be launched simultaneously.\n\"\"\"\nfrom __future__ import print_function\n\nimport hashlib\nimport os\n\nfrom luigi import six\n\n\ndef getpcmd(pid):\n \"\"\"\n Returns command of process.\n\n :param pid:\n \"\"\"\n cmd = 'ps -p %s -o command=' % (pid,)\n with os.popen(cmd, 'r') as p:\n return p.readline().strip()\n\n\ndef get_info(pid_dir, my_pid=None):\n # Check the name and pid of this process\n if my_pid is None:\n my_pid = os.getpid()\n\n my_cmd = getpcmd(my_pid)\n\n if six.PY3:\n cmd_hash = my_cmd.encode('utf8')\n else:\n cmd_hash = my_cmd\n\n pid_file = os.path.join(pid_dir, hashlib.md5(cmd_hash).hexdigest()) + '.pid'\n\n return my_pid, my_cmd, pid_file\n\n\ndef acquire_for(pid_dir, num_available=1):\n \"\"\"\n Makes sure the process is only run once at the same time with the same name.\n\n Notice that we since we check the process name, different parameters to the same\n command can spawn multiple processes at the same time, i.e. running\n \"/usr/bin/my_process\" does not prevent anyone from launching\n \"/usr/bin/my_process --foo bar\".\n \"\"\"\n\n my_pid, my_cmd, pid_file = get_info(pid_dir)\n\n # Check if there is a pid file corresponding to this name\n if not os.path.exists(pid_dir):\n os.mkdir(pid_dir)\n os.chmod(pid_dir, 0o777)\n\n pids = set()\n pid_cmds = {}\n if os.path.exists(pid_file):\n # There is such a file - read the pid and look up its process name\n pids.update(filter(None, map(str.strip, open(pid_file))))\n pid_cmds = dict((pid, getpcmd(pid)) for pid in pids)\n matching_pids = list(filter(lambda pid: pid_cmds[pid] == my_cmd, pids))\n\n if len(matching_pids) >= num_available:\n # We are already running under a different pid\n print('Pid(s)', ', '.join(matching_pids), 'already running')\n return False\n else:\n # The pid belongs to something else, we could\n pass\n pid_cmds[str(my_pid)] = my_cmd\n\n # Write pids\n pids.add(str(my_pid))\n with open(pid_file, 'w') as f:\n f.writelines('%s\\n' % (pid, ) for pid in filter(pid_cmds.__getitem__, pids))\n\n # Make the file writable by all\n if os.name == 'nt':\n pass\n else:\n s = os.stat(pid_file)\n if os.getuid() == s.st_uid:\n os.chmod(pid_file, s.st_mode | 0o777)\n\n return True\n", "path": "luigi/lock.py"}]} | 1,724 | 313 |
gh_patches_debug_32795 | rasdani/github-patches | git_diff | litestar-org__litestar-1999 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lots of `pydantic` warnings: `.dict()` and `.json()` are deprecated
### Description
You can find lots of `DeprecationWarning` instances here: https://github.com/litestar-org/litestar/actions/runs/5578844701/jobs/10193581342
I propose to add a compat layer that calls `.model_dump` and `.model_dump_json` on v2 and `.dict` and `.json` on v1, since both pydantic versions are supported right now.
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
`main`
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
## Funding
* If you would like to see an issue prioritized, make a pledge towards it!
* We receive the pledge once the issue is completed & verified
<a href="https://polar.sh/litestar-org/litestar/issues/1996">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/1996/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/1996/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
--- END ISSUE ---
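
A minimal sketch of the proposed compat layer follows; the function names are placeholders chosen here, not necessarily the helpers the maintainers added, and the `hasattr` probe is just one way to tell pydantic v1 and v2 apart.

```python
from typing import Any, Dict

import pydantic


def dump_model(model: pydantic.BaseModel) -> Dict[str, Any]:
    # pydantic v2 exposes model_dump(); fall back to dict() on v1 installs.
    if hasattr(model, "model_dump"):
        return model.model_dump(mode="json")
    return model.dict()


def dump_model_json(model: pydantic.BaseModel) -> str:
    # pydantic v2 exposes model_dump_json(); fall back to json() on v1 installs.
    if hasattr(model, "model_dump_json"):
        return model.model_dump_json()
    return model.json()
```

An alternative to the `hasattr` probe is branching on `pydantic.VERSION.startswith("1")`, which the codebase already does elsewhere.
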
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/_openapi/schema_generation/examples.py`
Content:
```
1 from __future__ import annotations
2
3 from dataclasses import replace
4 from enum import Enum
5 from typing import TYPE_CHECKING, Any
6
7 from _decimal import Decimal
8 from polyfactory.exceptions import ParameterException
9 from polyfactory.field_meta import FieldMeta, Null
10 from polyfactory.utils.helpers import unwrap_annotation
11
12 from litestar.openapi.spec import Example
13 from litestar.types import Empty
14 from litestar.utils import is_pydantic_model_instance
15
16 try:
17 from polyfactory.factories.pydantic_factory import ModelFactory as Factory
18 except ImportError:
19 from polyfactory.factories import DataclassFactory as Factory # type: ignore[assignment]
20
21
22 if TYPE_CHECKING:
23 from litestar.typing import FieldDefinition
24
25
26 Factory.seed_random(10)
27
28
29 def _normalize_example_value(value: Any) -> Any:
30 """Normalize the example value to make it look a bit prettier."""
31 value = unwrap_annotation(annotation=value, random=Factory.__random__)
32 if isinstance(value, (Decimal, float)):
33 value = round(float(value), 2)
34 if isinstance(value, Enum):
35 value = value.value
36 if is_pydantic_model_instance(value):
37 value = value.dict()
38 if isinstance(value, (list, set)):
39 value = [_normalize_example_value(v) for v in value]
40 if isinstance(value, dict):
41 for k, v in value.items():
42 value[k] = _normalize_example_value(v)
43 return value
44
45
46 def _create_field_meta(field: FieldDefinition) -> FieldMeta:
47 return FieldMeta.from_type(
48 annotation=field.annotation,
49 default=field.default if field.default is not Empty else Null,
50 name=field.name,
51 random=Factory.__random__,
52 )
53
54
55 def create_examples_for_field(field: FieldDefinition) -> list[Example]:
56 """Create an OpenAPI Example instance.
57
58 Args:
59 field: A signature field.
60
61 Returns:
62 A list including a single example.
63 """
64 try:
65 field_meta = _create_field_meta(replace(field, annotation=_normalize_example_value(field.annotation)))
66 value = Factory.get_field_value(field_meta)
67 return [Example(description=f"Example {field.name} value", value=value)]
68 except ParameterException:
69 return []
70
```
Path: `litestar/contrib/pydantic/__init__.py`
Content:
```
1 from .pydantic_dto_factory import PydanticDTO
2 from .pydantic_init_plugin import PydanticInitPlugin
3 from .pydantic_schema_plugin import PydanticSchemaPlugin
4
5 __all__ = ("PydanticDTO", "PydanticInitPlugin", "PydanticSchemaPlugin")
6
```
Path: `litestar/contrib/pydantic/pydantic_init_plugin.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Any, Callable, TypeVar, cast
4 from uuid import UUID
5
6 from msgspec import ValidationError
7
8 from litestar.exceptions import MissingDependencyException
9 from litestar.plugins import InitPluginProtocol
10 from litestar.serialization._msgspec_utils import ExtendedMsgSpecValidationError
11 from litestar.utils import is_class_and_subclass, is_pydantic_model_class
12
13 if TYPE_CHECKING:
14 from litestar.config.app import AppConfig
15
16 try:
17 import pydantic
18 except ImportError as e:
19 raise MissingDependencyException("pydantic") from e
20
21 T = TypeVar("T")
22
23
24 def _dec_pydantic(model_type: type[pydantic.BaseModel], value: Any) -> pydantic.BaseModel:
25 try:
26 return (
27 model_type.model_validate(value, strict=False)
28 if hasattr(model_type, "model_validate")
29 else model_type.parse_obj(value)
30 )
31 except pydantic.ValidationError as e:
32 raise ExtendedMsgSpecValidationError(errors=cast("list[dict[str, Any]]", e.errors())) from e
33
34
35 def _dec_pydantic_uuid(
36 uuid_type: type[pydantic.UUID1] | type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5],
37 value: Any,
38 ) -> type[pydantic.UUID1] | type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5]: # pragma: no cover
39 if isinstance(value, str):
40 value = uuid_type(value)
41
42 elif isinstance(value, (bytes, bytearray)):
43 try:
44 value = uuid_type(value.decode())
45 except ValueError:
46 # 16 bytes in big-endian order as the bytes argument fail
47 # the above check
48 value = uuid_type(bytes=value)
49 elif isinstance(value, UUID):
50 value = uuid_type(str(value))
51
52 if not isinstance(value, uuid_type):
53 raise ValidationError(f"Invalid UUID: {value!r}")
54
55 if value._required_version != value.version: # pyright: ignore
56 raise ValidationError(f"Invalid UUID version: {value!r}")
57
58 return cast("type[pydantic.UUID1] | type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5]", value)
59
60
61 def _is_pydantic_uuid(value: Any) -> bool: # pragma: no cover
62 return is_class_and_subclass(value, (pydantic.UUID1, pydantic.UUID3, pydantic.UUID4, pydantic.UUID5))
63
64
65 _base_encoders: dict[Any, Callable[[Any], Any]] = {
66 pydantic.EmailStr: str,
67 pydantic.NameEmail: str,
68 pydantic.ByteSize: lambda val: val.real,
69 }
70
71
72 class PydanticInitPlugin(InitPluginProtocol):
73 @classmethod
74 def encoders(cls) -> dict[Any, Callable[[Any], Any]]:
75 if pydantic.VERSION.startswith("1"): # pragma: no cover
76 return {**_base_encoders, **cls._create_pydantic_v1_encoders()}
77 return {**_base_encoders, **cls._create_pydantic_v2_encoders()}
78
79 @classmethod
80 def decoders(cls) -> list[tuple[Callable[[Any], bool], Callable[[Any, Any], Any]]]:
81 decoders: list[tuple[Callable[[Any], bool], Callable[[Any, Any], Any]]] = [
82 (is_pydantic_model_class, _dec_pydantic)
83 ]
84
85 if pydantic.VERSION.startswith("1"): # pragma: no cover
86 decoders.append((_is_pydantic_uuid, _dec_pydantic_uuid))
87
88 return decoders
89
90 @staticmethod
91 def _create_pydantic_v1_encoders() -> dict[Any, Callable[[Any], Any]]: # pragma: no cover
92 return {
93 pydantic.BaseModel: lambda model: {
94 k: v.decode() if isinstance(v, bytes) else v for k, v in model.dict().items()
95 },
96 pydantic.SecretField: str,
97 pydantic.StrictBool: int,
98 pydantic.color.Color: str, # pyright: ignore
99 pydantic.ConstrainedBytes: lambda val: val.decode("utf-8"),
100 pydantic.ConstrainedDate: lambda val: val.isoformat(),
101 }
102
103 @staticmethod
104 def _create_pydantic_v2_encoders() -> dict[Any, Callable[[Any], Any]]:
105 try:
106 from pydantic_extra_types import color
107 except ImportError:
108 color = None # type: ignore[assignment]
109 encoders: dict[Any, Callable[[Any], Any]] = {
110 pydantic.BaseModel: lambda model: model.model_dump(mode="json"),
111 pydantic.types.SecretStr: lambda val: "**********" if val else "",
112 pydantic.types.SecretBytes: lambda val: "**********" if val else "",
113 }
114 if color:
115 encoders[color.Color] = str
116 return encoders
117
118 def on_app_init(self, app_config: AppConfig) -> AppConfig:
119 app_config.type_encoders = {**self.encoders(), **(app_config.type_encoders or {})}
120 app_config.type_decoders = [*self.decoders(), *(app_config.type_decoders or [])]
121 return app_config
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/_openapi/schema_generation/examples.py b/litestar/_openapi/schema_generation/examples.py
--- a/litestar/_openapi/schema_generation/examples.py
+++ b/litestar/_openapi/schema_generation/examples.py
@@ -34,7 +34,9 @@
if isinstance(value, Enum):
value = value.value
if is_pydantic_model_instance(value):
- value = value.dict()
+ from litestar.contrib.pydantic import _model_dump
+
+ value = _model_dump(value)
if isinstance(value, (list, set)):
value = [_normalize_example_value(v) for v in value]
if isinstance(value, dict):
diff --git a/litestar/contrib/pydantic/__init__.py b/litestar/contrib/pydantic/__init__.py
--- a/litestar/contrib/pydantic/__init__.py
+++ b/litestar/contrib/pydantic/__init__.py
@@ -1,5 +1,24 @@
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Any
+
from .pydantic_dto_factory import PydanticDTO
from .pydantic_init_plugin import PydanticInitPlugin
from .pydantic_schema_plugin import PydanticSchemaPlugin
+if TYPE_CHECKING:
+ import pydantic
+
__all__ = ("PydanticDTO", "PydanticInitPlugin", "PydanticSchemaPlugin")
+
+
+def _model_dump(model: pydantic.BaseModel, *, by_alias: bool = False) -> dict[str, Any]:
+ return (
+ model.model_dump(mode="json", by_alias=by_alias)
+ if hasattr(model, "model_dump")
+ else model.dict(by_alias=by_alias)
+ )
+
+
+def _model_dump_json(model: pydantic.BaseModel) -> str:
+ return model.model_dump_json() if hasattr(model, "model_dump_json") else model.json()
diff --git a/litestar/contrib/pydantic/pydantic_init_plugin.py b/litestar/contrib/pydantic/pydantic_init_plugin.py
--- a/litestar/contrib/pydantic/pydantic_init_plugin.py
+++ b/litestar/contrib/pydantic/pydantic_init_plugin.py
@@ -90,9 +90,7 @@
@staticmethod
def _create_pydantic_v1_encoders() -> dict[Any, Callable[[Any], Any]]: # pragma: no cover
return {
- pydantic.BaseModel: lambda model: {
- k: v.decode() if isinstance(v, bytes) else v for k, v in model.dict().items()
- },
+ pydantic.BaseModel: lambda model: model.dict(),
pydantic.SecretField: str,
pydantic.StrictBool: int,
pydantic.color.Color: str, # pyright: ignore
| {"golden_diff": "diff --git a/litestar/_openapi/schema_generation/examples.py b/litestar/_openapi/schema_generation/examples.py\n--- a/litestar/_openapi/schema_generation/examples.py\n+++ b/litestar/_openapi/schema_generation/examples.py\n@@ -34,7 +34,9 @@\n if isinstance(value, Enum):\n value = value.value\n if is_pydantic_model_instance(value):\n- value = value.dict()\n+ from litestar.contrib.pydantic import _model_dump\n+\n+ value = _model_dump(value)\n if isinstance(value, (list, set)):\n value = [_normalize_example_value(v) for v in value]\n if isinstance(value, dict):\ndiff --git a/litestar/contrib/pydantic/__init__.py b/litestar/contrib/pydantic/__init__.py\n--- a/litestar/contrib/pydantic/__init__.py\n+++ b/litestar/contrib/pydantic/__init__.py\n@@ -1,5 +1,24 @@\n+from __future__ import annotations\n+\n+from typing import TYPE_CHECKING, Any\n+\n from .pydantic_dto_factory import PydanticDTO\n from .pydantic_init_plugin import PydanticInitPlugin\n from .pydantic_schema_plugin import PydanticSchemaPlugin\n \n+if TYPE_CHECKING:\n+ import pydantic\n+\n __all__ = (\"PydanticDTO\", \"PydanticInitPlugin\", \"PydanticSchemaPlugin\")\n+\n+\n+def _model_dump(model: pydantic.BaseModel, *, by_alias: bool = False) -> dict[str, Any]:\n+ return (\n+ model.model_dump(mode=\"json\", by_alias=by_alias)\n+ if hasattr(model, \"model_dump\")\n+ else model.dict(by_alias=by_alias)\n+ )\n+\n+\n+def _model_dump_json(model: pydantic.BaseModel) -> str:\n+ return model.model_dump_json() if hasattr(model, \"model_dump_json\") else model.json()\ndiff --git a/litestar/contrib/pydantic/pydantic_init_plugin.py b/litestar/contrib/pydantic/pydantic_init_plugin.py\n--- a/litestar/contrib/pydantic/pydantic_init_plugin.py\n+++ b/litestar/contrib/pydantic/pydantic_init_plugin.py\n@@ -90,9 +90,7 @@\n @staticmethod\n def _create_pydantic_v1_encoders() -> dict[Any, Callable[[Any], Any]]: # pragma: no cover\n return {\n- pydantic.BaseModel: lambda model: {\n- k: v.decode() if isinstance(v, bytes) else v for k, v in model.dict().items()\n- },\n+ pydantic.BaseModel: lambda model: model.dict(),\n pydantic.SecretField: str,\n pydantic.StrictBool: int,\n pydantic.color.Color: str, # pyright: ignore\n", "issue": "Lots of `pydantic` warnings: `.dict()` and `.json()` are deprecated\n### Description\n\nYou can find lots of `DeprecationWarning` instances here: https://github.com/litestar-org/litestar/actions/runs/5578844701/jobs/10193581342\r\n\r\nI propose to add a compat layer to call `.model_dump` and `.model_dump_json` on v2 and `.dict` and `.json` on v1, since they are both supported right now.\n\n### URL to code causing the issue\n\n_No response_\n\n### MCVE\n\n_No response_\n\n### Steps to reproduce\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Litestar Version\n\n`main`\n\n### Platform\n\n- [ ] Linux\n- [X] Mac\n- [ ] Windows\n- [ ] Other (Please specify in the description above)\n\n<!-- POLAR PLEDGE BADGE START -->\n## Funding\n* If you would like to see an issue prioritized, make a pledge towards it!\n* We receive the pledge once the issue is completed & verified\n\n<a href=\"https://polar.sh/litestar-org/litestar/issues/1996\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/1996/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/1996/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": 
"from __future__ import annotations\n\nfrom dataclasses import replace\nfrom enum import Enum\nfrom typing import TYPE_CHECKING, Any\n\nfrom _decimal import Decimal\nfrom polyfactory.exceptions import ParameterException\nfrom polyfactory.field_meta import FieldMeta, Null\nfrom polyfactory.utils.helpers import unwrap_annotation\n\nfrom litestar.openapi.spec import Example\nfrom litestar.types import Empty\nfrom litestar.utils import is_pydantic_model_instance\n\ntry:\n from polyfactory.factories.pydantic_factory import ModelFactory as Factory\nexcept ImportError:\n from polyfactory.factories import DataclassFactory as Factory # type: ignore[assignment]\n\n\nif TYPE_CHECKING:\n from litestar.typing import FieldDefinition\n\n\nFactory.seed_random(10)\n\n\ndef _normalize_example_value(value: Any) -> Any:\n \"\"\"Normalize the example value to make it look a bit prettier.\"\"\"\n value = unwrap_annotation(annotation=value, random=Factory.__random__)\n if isinstance(value, (Decimal, float)):\n value = round(float(value), 2)\n if isinstance(value, Enum):\n value = value.value\n if is_pydantic_model_instance(value):\n value = value.dict()\n if isinstance(value, (list, set)):\n value = [_normalize_example_value(v) for v in value]\n if isinstance(value, dict):\n for k, v in value.items():\n value[k] = _normalize_example_value(v)\n return value\n\n\ndef _create_field_meta(field: FieldDefinition) -> FieldMeta:\n return FieldMeta.from_type(\n annotation=field.annotation,\n default=field.default if field.default is not Empty else Null,\n name=field.name,\n random=Factory.__random__,\n )\n\n\ndef create_examples_for_field(field: FieldDefinition) -> list[Example]:\n \"\"\"Create an OpenAPI Example instance.\n\n Args:\n field: A signature field.\n\n Returns:\n A list including a single example.\n \"\"\"\n try:\n field_meta = _create_field_meta(replace(field, annotation=_normalize_example_value(field.annotation)))\n value = Factory.get_field_value(field_meta)\n return [Example(description=f\"Example {field.name} value\", value=value)]\n except ParameterException:\n return []\n", "path": "litestar/_openapi/schema_generation/examples.py"}, {"content": "from .pydantic_dto_factory import PydanticDTO\nfrom .pydantic_init_plugin import PydanticInitPlugin\nfrom .pydantic_schema_plugin import PydanticSchemaPlugin\n\n__all__ = (\"PydanticDTO\", \"PydanticInitPlugin\", \"PydanticSchemaPlugin\")\n", "path": "litestar/contrib/pydantic/__init__.py"}, {"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, TypeVar, cast\nfrom uuid import UUID\n\nfrom msgspec import ValidationError\n\nfrom litestar.exceptions import MissingDependencyException\nfrom litestar.plugins import InitPluginProtocol\nfrom litestar.serialization._msgspec_utils import ExtendedMsgSpecValidationError\nfrom litestar.utils import is_class_and_subclass, is_pydantic_model_class\n\nif TYPE_CHECKING:\n from litestar.config.app import AppConfig\n\ntry:\n import pydantic\nexcept ImportError as e:\n raise MissingDependencyException(\"pydantic\") from e\n\nT = TypeVar(\"T\")\n\n\ndef _dec_pydantic(model_type: type[pydantic.BaseModel], value: Any) -> pydantic.BaseModel:\n try:\n return (\n model_type.model_validate(value, strict=False)\n if hasattr(model_type, \"model_validate\")\n else model_type.parse_obj(value)\n )\n except pydantic.ValidationError as e:\n raise ExtendedMsgSpecValidationError(errors=cast(\"list[dict[str, Any]]\", e.errors())) from e\n\n\ndef _dec_pydantic_uuid(\n uuid_type: type[pydantic.UUID1] | 
type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5],\n value: Any,\n) -> type[pydantic.UUID1] | type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5]: # pragma: no cover\n if isinstance(value, str):\n value = uuid_type(value)\n\n elif isinstance(value, (bytes, bytearray)):\n try:\n value = uuid_type(value.decode())\n except ValueError:\n # 16 bytes in big-endian order as the bytes argument fail\n # the above check\n value = uuid_type(bytes=value)\n elif isinstance(value, UUID):\n value = uuid_type(str(value))\n\n if not isinstance(value, uuid_type):\n raise ValidationError(f\"Invalid UUID: {value!r}\")\n\n if value._required_version != value.version: # pyright: ignore\n raise ValidationError(f\"Invalid UUID version: {value!r}\")\n\n return cast(\"type[pydantic.UUID1] | type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5]\", value)\n\n\ndef _is_pydantic_uuid(value: Any) -> bool: # pragma: no cover\n return is_class_and_subclass(value, (pydantic.UUID1, pydantic.UUID3, pydantic.UUID4, pydantic.UUID5))\n\n\n_base_encoders: dict[Any, Callable[[Any], Any]] = {\n pydantic.EmailStr: str,\n pydantic.NameEmail: str,\n pydantic.ByteSize: lambda val: val.real,\n}\n\n\nclass PydanticInitPlugin(InitPluginProtocol):\n @classmethod\n def encoders(cls) -> dict[Any, Callable[[Any], Any]]:\n if pydantic.VERSION.startswith(\"1\"): # pragma: no cover\n return {**_base_encoders, **cls._create_pydantic_v1_encoders()}\n return {**_base_encoders, **cls._create_pydantic_v2_encoders()}\n\n @classmethod\n def decoders(cls) -> list[tuple[Callable[[Any], bool], Callable[[Any, Any], Any]]]:\n decoders: list[tuple[Callable[[Any], bool], Callable[[Any, Any], Any]]] = [\n (is_pydantic_model_class, _dec_pydantic)\n ]\n\n if pydantic.VERSION.startswith(\"1\"): # pragma: no cover\n decoders.append((_is_pydantic_uuid, _dec_pydantic_uuid))\n\n return decoders\n\n @staticmethod\n def _create_pydantic_v1_encoders() -> dict[Any, Callable[[Any], Any]]: # pragma: no cover\n return {\n pydantic.BaseModel: lambda model: {\n k: v.decode() if isinstance(v, bytes) else v for k, v in model.dict().items()\n },\n pydantic.SecretField: str,\n pydantic.StrictBool: int,\n pydantic.color.Color: str, # pyright: ignore\n pydantic.ConstrainedBytes: lambda val: val.decode(\"utf-8\"),\n pydantic.ConstrainedDate: lambda val: val.isoformat(),\n }\n\n @staticmethod\n def _create_pydantic_v2_encoders() -> dict[Any, Callable[[Any], Any]]:\n try:\n from pydantic_extra_types import color\n except ImportError:\n color = None # type: ignore[assignment]\n encoders: dict[Any, Callable[[Any], Any]] = {\n pydantic.BaseModel: lambda model: model.model_dump(mode=\"json\"),\n pydantic.types.SecretStr: lambda val: \"**********\" if val else \"\",\n pydantic.types.SecretBytes: lambda val: \"**********\" if val else \"\",\n }\n if color:\n encoders[color.Color] = str\n return encoders\n\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n app_config.type_encoders = {**self.encoders(), **(app_config.type_encoders or {})}\n app_config.type_decoders = [*self.decoders(), *(app_config.type_decoders or [])]\n return app_config\n", "path": "litestar/contrib/pydantic/pydantic_init_plugin.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom dataclasses import replace\nfrom enum import Enum\nfrom typing import TYPE_CHECKING, Any\n\nfrom _decimal import Decimal\nfrom polyfactory.exceptions import ParameterException\nfrom polyfactory.field_meta import FieldMeta, Null\nfrom polyfactory.utils.helpers 
import unwrap_annotation\n\nfrom litestar.openapi.spec import Example\nfrom litestar.types import Empty\nfrom litestar.utils import is_pydantic_model_instance\n\ntry:\n from polyfactory.factories.pydantic_factory import ModelFactory as Factory\nexcept ImportError:\n from polyfactory.factories import DataclassFactory as Factory # type: ignore[assignment]\n\n\nif TYPE_CHECKING:\n from litestar.typing import FieldDefinition\n\n\nFactory.seed_random(10)\n\n\ndef _normalize_example_value(value: Any) -> Any:\n \"\"\"Normalize the example value to make it look a bit prettier.\"\"\"\n value = unwrap_annotation(annotation=value, random=Factory.__random__)\n if isinstance(value, (Decimal, float)):\n value = round(float(value), 2)\n if isinstance(value, Enum):\n value = value.value\n if is_pydantic_model_instance(value):\n from litestar.contrib.pydantic import _model_dump\n\n value = _model_dump(value)\n if isinstance(value, (list, set)):\n value = [_normalize_example_value(v) for v in value]\n if isinstance(value, dict):\n for k, v in value.items():\n value[k] = _normalize_example_value(v)\n return value\n\n\ndef _create_field_meta(field: FieldDefinition) -> FieldMeta:\n return FieldMeta.from_type(\n annotation=field.annotation,\n default=field.default if field.default is not Empty else Null,\n name=field.name,\n random=Factory.__random__,\n )\n\n\ndef create_examples_for_field(field: FieldDefinition) -> list[Example]:\n \"\"\"Create an OpenAPI Example instance.\n\n Args:\n field: A signature field.\n\n Returns:\n A list including a single example.\n \"\"\"\n try:\n field_meta = _create_field_meta(replace(field, annotation=_normalize_example_value(field.annotation)))\n value = Factory.get_field_value(field_meta)\n return [Example(description=f\"Example {field.name} value\", value=value)]\n except ParameterException:\n return []\n", "path": "litestar/_openapi/schema_generation/examples.py"}, {"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any\n\nfrom .pydantic_dto_factory import PydanticDTO\nfrom .pydantic_init_plugin import PydanticInitPlugin\nfrom .pydantic_schema_plugin import PydanticSchemaPlugin\n\nif TYPE_CHECKING:\n import pydantic\n\n__all__ = (\"PydanticDTO\", \"PydanticInitPlugin\", \"PydanticSchemaPlugin\")\n\n\ndef _model_dump(model: pydantic.BaseModel, *, by_alias: bool = False) -> dict[str, Any]:\n return (\n model.model_dump(mode=\"json\", by_alias=by_alias)\n if hasattr(model, \"model_dump\")\n else model.dict(by_alias=by_alias)\n )\n\n\ndef _model_dump_json(model: pydantic.BaseModel) -> str:\n return model.model_dump_json() if hasattr(model, \"model_dump_json\") else model.json()\n", "path": "litestar/contrib/pydantic/__init__.py"}, {"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Callable, TypeVar, cast\nfrom uuid import UUID\n\nfrom msgspec import ValidationError\n\nfrom litestar.exceptions import MissingDependencyException\nfrom litestar.plugins import InitPluginProtocol\nfrom litestar.serialization._msgspec_utils import ExtendedMsgSpecValidationError\nfrom litestar.utils import is_class_and_subclass, is_pydantic_model_class\n\nif TYPE_CHECKING:\n from litestar.config.app import AppConfig\n\ntry:\n import pydantic\nexcept ImportError as e:\n raise MissingDependencyException(\"pydantic\") from e\n\nT = TypeVar(\"T\")\n\n\ndef _dec_pydantic(model_type: type[pydantic.BaseModel], value: Any) -> pydantic.BaseModel:\n try:\n return (\n model_type.model_validate(value, strict=False)\n if hasattr(model_type, 
\"model_validate\")\n else model_type.parse_obj(value)\n )\n except pydantic.ValidationError as e:\n raise ExtendedMsgSpecValidationError(errors=cast(\"list[dict[str, Any]]\", e.errors())) from e\n\n\ndef _dec_pydantic_uuid(\n uuid_type: type[pydantic.UUID1] | type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5],\n value: Any,\n) -> type[pydantic.UUID1] | type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5]: # pragma: no cover\n if isinstance(value, str):\n value = uuid_type(value)\n\n elif isinstance(value, (bytes, bytearray)):\n try:\n value = uuid_type(value.decode())\n except ValueError:\n # 16 bytes in big-endian order as the bytes argument fail\n # the above check\n value = uuid_type(bytes=value)\n elif isinstance(value, UUID):\n value = uuid_type(str(value))\n\n if not isinstance(value, uuid_type):\n raise ValidationError(f\"Invalid UUID: {value!r}\")\n\n if value._required_version != value.version: # pyright: ignore\n raise ValidationError(f\"Invalid UUID version: {value!r}\")\n\n return cast(\"type[pydantic.UUID1] | type[pydantic.UUID3] | type[pydantic.UUID4] | type[pydantic.UUID5]\", value)\n\n\ndef _is_pydantic_uuid(value: Any) -> bool: # pragma: no cover\n return is_class_and_subclass(value, (pydantic.UUID1, pydantic.UUID3, pydantic.UUID4, pydantic.UUID5))\n\n\n_base_encoders: dict[Any, Callable[[Any], Any]] = {\n pydantic.EmailStr: str,\n pydantic.NameEmail: str,\n pydantic.ByteSize: lambda val: val.real,\n}\n\n\nclass PydanticInitPlugin(InitPluginProtocol):\n @classmethod\n def encoders(cls) -> dict[Any, Callable[[Any], Any]]:\n if pydantic.VERSION.startswith(\"1\"): # pragma: no cover\n return {**_base_encoders, **cls._create_pydantic_v1_encoders()}\n return {**_base_encoders, **cls._create_pydantic_v2_encoders()}\n\n @classmethod\n def decoders(cls) -> list[tuple[Callable[[Any], bool], Callable[[Any, Any], Any]]]:\n decoders: list[tuple[Callable[[Any], bool], Callable[[Any, Any], Any]]] = [\n (is_pydantic_model_class, _dec_pydantic)\n ]\n\n if pydantic.VERSION.startswith(\"1\"): # pragma: no cover\n decoders.append((_is_pydantic_uuid, _dec_pydantic_uuid))\n\n return decoders\n\n @staticmethod\n def _create_pydantic_v1_encoders() -> dict[Any, Callable[[Any], Any]]: # pragma: no cover\n return {\n pydantic.BaseModel: lambda model: model.dict(),\n pydantic.SecretField: str,\n pydantic.StrictBool: int,\n pydantic.color.Color: str, # pyright: ignore\n pydantic.ConstrainedBytes: lambda val: val.decode(\"utf-8\"),\n pydantic.ConstrainedDate: lambda val: val.isoformat(),\n }\n\n @staticmethod\n def _create_pydantic_v2_encoders() -> dict[Any, Callable[[Any], Any]]:\n try:\n from pydantic_extra_types import color\n except ImportError:\n color = None # type: ignore[assignment]\n encoders: dict[Any, Callable[[Any], Any]] = {\n pydantic.BaseModel: lambda model: model.model_dump(mode=\"json\"),\n pydantic.types.SecretStr: lambda val: \"**********\" if val else \"\",\n pydantic.types.SecretBytes: lambda val: \"**********\" if val else \"\",\n }\n if color:\n encoders[color.Color] = str\n return encoders\n\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n app_config.type_encoders = {**self.encoders(), **(app_config.type_encoders or {})}\n app_config.type_decoders = [*self.decoders(), *(app_config.type_decoders or [])]\n return app_config\n", "path": "litestar/contrib/pydantic/pydantic_init_plugin.py"}]} | 2,794 | 637 |
gh_patches_debug_22058 | rasdani/github-patches | git_diff | pex-tool__pex-258 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update PyPI page
It would be nice if the `README.rst` were included in the `setup.py` `long_description` in addition to the `CHANGES.rst` so that users browsing PyPI could read the README without having to travel to GitHub.
It would also be nice if the trove classifiers in `setup.py` reflected which versions of Python are officially supported (e.g. `'Programming Language :: Python :: 3.5'`).
--- END ISSUE ---
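
A hedged sketch of what the request would look like in `setup.py` is below; the classifier list is illustrative rather than an authoritative statement of which interpreters the project supports, and the version handling is reduced to a placeholder.

```python
import os

from setuptools import setup

here = os.path.dirname(__file__)

# Concatenate the README and the changelog so the PyPI page shows both.
with open(os.path.join(here, "README.rst")) as fp:
    long_description = fp.read() + "\n"
with open(os.path.join(here, "CHANGES.rst")) as fp:
    long_description += fp.read()

setup(
    name="pex",
    version="0.0.0",  # placeholder; the real value comes from pex/version.py
    description="The PEX packaging toolchain.",
    long_description=long_description,
    classifiers=[
        "Programming Language :: Python :: 2.7",
        "Programming Language :: Python :: 3.4",
        "Programming Language :: Python :: 3.5",
    ],
)
```
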
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import os
5
6 from setuptools import setup
7
8
9 with open(os.path.join(os.path.dirname(__file__), 'CHANGES.rst')) as fp:
10 LONG_DESCRIPTION = fp.read()
11
12
13 # This seems to be a fairly standard version file pattern.
14 #
15 # Populates the following variables:
16 # __version__
17 # __setuptools_requirement
18 # __wheel_requirement
19 __version__ = ''
20 version_py_file = os.path.join(os.path.dirname(__file__), 'pex', 'version.py')
21 with open(version_py_file) as version_py:
22 exec(compile(version_py.read(), version_py_file, 'exec'))
23
24
25 setup(
26 name = 'pex',
27 version = __version__,
28 description = "The PEX packaging toolchain.",
29 long_description = LONG_DESCRIPTION,
30 url = 'https://github.com/pantsbuild/pex',
31 license = 'Apache License, Version 2.0',
32 zip_safe = True,
33 classifiers = [
34 'Intended Audience :: Developers',
35 'License :: OSI Approved :: Apache Software License',
36 'Operating System :: OS Independent',
37 'Programming Language :: Python',
38 ],
39 packages = [
40 'pex',
41 'pex.bin',
42 'pex.commands',
43 ],
44 install_requires = [
45 SETUPTOOLS_REQUIREMENT,
46 ],
47 tests_require = [
48 'mock',
49 'twitter.common.contextutil>=0.3.1,<0.4.0',
50 'twitter.common.lang>=0.3.1,<0.4.0',
51 'twitter.common.testing>=0.3.1,<0.4.0',
52 'twitter.common.dirutil>=0.3.1,<0.4.0',
53 'pytest',
54 ],
55 entry_points = {
56 'distutils.commands': [
57 'bdist_pex = pex.commands.bdist_pex:bdist_pex',
58 ],
59 'console_scripts': [
60 'pex = pex.bin.pex:main',
61 ],
62 },
63 )
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,9 +5,11 @@
from setuptools import setup
+with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as fp:
+ LONG_DESCRIPTION = fp.read() + '\n'
with open(os.path.join(os.path.dirname(__file__), 'CHANGES.rst')) as fp:
- LONG_DESCRIPTION = fp.read()
+ LONG_DESCRIPTION += fp.read()
# This seems to be a fairly standard version file pattern.
@@ -33,8 +35,17 @@
classifiers = [
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
- 'Operating System :: OS Independent',
+ 'Operating System :: Unix',
+ 'Operating System :: POSIX :: Linux',
+ 'Operating System :: MacOS :: MacOS X',
'Programming Language :: Python',
+ 'Programming Language :: Python :: 2',
+ 'Programming Language :: Python :: 2.6',
+ 'Programming Language :: Python :: 2.7',
+ 'Programming Language :: Python :: 3',
+ 'Programming Language :: Python :: 3.3',
+ 'Programming Language :: Python :: 3.4',
+ 'Programming Language :: Python :: 3.5',
],
packages = [
'pex',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,9 +5,11 @@\n \n from setuptools import setup\n \n+with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as fp:\n+ LONG_DESCRIPTION = fp.read() + '\\n'\n \n with open(os.path.join(os.path.dirname(__file__), 'CHANGES.rst')) as fp:\n- LONG_DESCRIPTION = fp.read()\n+ LONG_DESCRIPTION += fp.read()\n \n \n # This seems to be a fairly standard version file pattern.\n@@ -33,8 +35,17 @@\n classifiers = [\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n- 'Operating System :: OS Independent',\n+ 'Operating System :: Unix',\n+ 'Operating System :: POSIX :: Linux',\n+ 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: Python',\n+ 'Programming Language :: Python :: 2',\n+ 'Programming Language :: Python :: 2.6',\n+ 'Programming Language :: Python :: 2.7',\n+ 'Programming Language :: Python :: 3',\n+ 'Programming Language :: Python :: 3.3',\n+ 'Programming Language :: Python :: 3.4',\n+ 'Programming Language :: Python :: 3.5',\n ],\n packages = [\n 'pex',\n", "issue": "Update PyPI page\nIt would be nice if the `README.rst` were included in the `setup.py` `long_description` in addition to the `CHANGES.rst` so that users browsing PyPI could read the README without having to travel to GitHub.\n\nWould also be nice if the trove classifiers in `setup.py` reflected which versions of Python were officially supported (e.g. `'Programming Language :: Python :: 3.5'`).\n\n", "before_files": [{"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport os\n\nfrom setuptools import setup\n\n\nwith open(os.path.join(os.path.dirname(__file__), 'CHANGES.rst')) as fp:\n LONG_DESCRIPTION = fp.read()\n\n\n# This seems to be a fairly standard version file pattern.\n#\n# Populates the following variables:\n# __version__\n# __setuptools_requirement\n# __wheel_requirement\n__version__ = ''\nversion_py_file = os.path.join(os.path.dirname(__file__), 'pex', 'version.py')\nwith open(version_py_file) as version_py:\n exec(compile(version_py.read(), version_py_file, 'exec'))\n\n\nsetup(\n name = 'pex',\n version = __version__,\n description = \"The PEX packaging toolchain.\",\n long_description = LONG_DESCRIPTION,\n url = 'https://github.com/pantsbuild/pex',\n license = 'Apache License, Version 2.0',\n zip_safe = True,\n classifiers = [\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n ],\n packages = [\n 'pex',\n 'pex.bin',\n 'pex.commands',\n ],\n install_requires = [\n SETUPTOOLS_REQUIREMENT,\n ],\n tests_require = [\n 'mock',\n 'twitter.common.contextutil>=0.3.1,<0.4.0',\n 'twitter.common.lang>=0.3.1,<0.4.0',\n 'twitter.common.testing>=0.3.1,<0.4.0',\n 'twitter.common.dirutil>=0.3.1,<0.4.0',\n 'pytest',\n ],\n entry_points = {\n 'distutils.commands': [\n 'bdist_pex = pex.commands.bdist_pex:bdist_pex',\n ],\n 'console_scripts': [\n 'pex = pex.bin.pex:main',\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport os\n\nfrom setuptools import setup\n\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as fp:\n LONG_DESCRIPTION = fp.read() + '\\n'\n\nwith 
open(os.path.join(os.path.dirname(__file__), 'CHANGES.rst')) as fp:\n LONG_DESCRIPTION += fp.read()\n\n\n# This seems to be a fairly standard version file pattern.\n#\n# Populates the following variables:\n# __version__\n# __setuptools_requirement\n# __wheel_requirement\n__version__ = ''\nversion_py_file = os.path.join(os.path.dirname(__file__), 'pex', 'version.py')\nwith open(version_py_file) as version_py:\n exec(compile(version_py.read(), version_py_file, 'exec'))\n\n\nsetup(\n name = 'pex',\n version = __version__,\n description = \"The PEX packaging toolchain.\",\n long_description = LONG_DESCRIPTION,\n url = 'https://github.com/pantsbuild/pex',\n license = 'Apache License, Version 2.0',\n zip_safe = True,\n classifiers = [\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: Unix',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],\n packages = [\n 'pex',\n 'pex.bin',\n 'pex.commands',\n ],\n install_requires = [\n SETUPTOOLS_REQUIREMENT,\n ],\n tests_require = [\n 'mock',\n 'twitter.common.contextutil>=0.3.1,<0.4.0',\n 'twitter.common.lang>=0.3.1,<0.4.0',\n 'twitter.common.testing>=0.3.1,<0.4.0',\n 'twitter.common.dirutil>=0.3.1,<0.4.0',\n 'pytest',\n ],\n entry_points = {\n 'distutils.commands': [\n 'bdist_pex = pex.commands.bdist_pex:bdist_pex',\n ],\n 'console_scripts': [\n 'pex = pex.bin.pex:main',\n ],\n },\n)\n", "path": "setup.py"}]} | 926 | 298 |
gh_patches_debug_4022 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-3030 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use samtranslate `passthrough_metadata` to support ignoring checks on Serverless resources
### Is this feature request related to a new rule or cfn-lint capabilities?
_No response_
### Describe the feature you'd like to request
Based on this: https://github.com/aws/serverless-application-model/pull/2224, it looks like it should be possible to ask SAM to bring all the Metadata stored on the original AWS::Serverless::* resources through to the translated resources. Doing that should allow cfn-lint to check the Metadata section for resource level ignores.
I actually tried changing the parameter myself but couldn't get it working. I must be missing something....
### Describe the solution you'd like
Enable `passthrough_metadata` on the samtranslate `translate` call to bring ALL the metadata through and properly support resource level ignores on serverless resources.
### Additional context
This is where I tried to add the parameter: https://github.com/aws-cloudformation/cfn-lint/blob/main/src/cfnlint/template/transforms/_sam.py#L155
### Is this something that you'd be interested in working on?
- [X] 👋 I may be able to implement this feature request
### Would this feature include a breaking change?
- [ ] ⚠️ This feature might incur a breaking change
--- END ISSUE ---
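To make the request concrete, here is a minimal sketch (not taken from the repository) of the behavior being asked for. It assumes a `samtranslator` release that already exposes `passthrough_metadata`, reuses the managed-policy loading and region handling that `_sam.py` below already performs, and uses made-up resource names and properties; the `cfn-lint`/`config`/`ignore_checks` keys follow cfn-lint's documented Metadata convention for resource-level ignores.

```python
import os

from samtranslator.parser import parser
from samtranslator.translator.translator import Translator

from cfnlint.data import Serverless
from cfnlint.helpers import load_resource

os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")  # SAM's ARN generator needs a region

template = {
    "Transform": "AWS::Serverless-2016-10-31",
    "Resources": {
        "MyFunction": {
            "Type": "AWS::Serverless::Function",
            # resource-level ignore that should survive the transform
            "Metadata": {"cfn-lint": {"config": {"ignore_checks": ["E3012"]}}},
            "Properties": {
                "Runtime": "python3.8",
                "Handler": "index.handler",
                "CodeUri": "s3://bucket/key",
            },
        }
    },
}

translator = Translator(
    managed_policy_map=load_resource(Serverless, "ManagedPolicies.json"),
    sam_parser=parser.Parser(),
)
translated = translator.translate(
    sam_template=template,
    parameter_values={},
    passthrough_metadata=True,  # the flag this issue asks cfn-lint to pass
)

# Expectation: the generated AWS::Lambda::Function keeps the Metadata block, so
# cfn-lint can honour the ignore_checks entry on the translated resource.
print(translated["Resources"]["MyFunction"]["Metadata"])
```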
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/template/transforms/_sam.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5
6 from __future__ import annotations
7
8 import logging
9 import os
10 from typing import Any
11
12 import samtranslator
13 from samtranslator.parser import parser
14 from samtranslator.public.exceptions import InvalidDocumentException
15 from samtranslator.sdk import resource
16 from samtranslator.translator.translator import Translator
17
18 from cfnlint.data import Serverless
19 from cfnlint.decode.utils import convert_dict
20 from cfnlint.helpers import format_json_string, load_resource
21 from cfnlint.template.transforms._types import TransformResult
22
23 LOGGER = logging.getLogger("cfnlint")
24
25 samtranslator_logger = logging.getLogger("samtranslator")
26 samtranslator_logger.setLevel(logging.CRITICAL)
27
28
29 # Override SAM validation as cfn-lint does thoese
30 # checks already
31 # pylint: disable=unused-argument
32 def valid_override(self):
33 return resource.SamResourceType.has_value(self.type)
34
35
36 # pylint: disable=redefined-outer-name
37 resource.SamResource.valid = valid_override
38
39
40 class Transform:
41 """
42 Application Serverless Module tranform Wrapper.
43 Based on code from AWS SAM CLI:
44 https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py
45 """
46
47 def __init__(self, filename, template, region):
48 """
49 Initialize Transform class
50 """
51 self._filename = filename
52 self._template = template
53 self._region = region
54 self._parameters = {}
55
56 self._managed_policy_map = self.load_managed_policies()
57 self._sam_parser = parser.Parser()
58
59 def template(self):
60 """Get the template"""
61 return self._template
62
63 def load_managed_policies(self):
64 """
65 Load the ManagedPolicies locally, based on the AWS-CLI:
66 https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json
67 """
68 return load_resource(Serverless, "ManagedPolicies.json")
69
70 def _replace_local_codeuri(self):
71 """
72 Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in
73 AWS::Serverless::Api to a fake S3 Uri. This is to support running the
74 SAM Translator with valid values for these fields. If this is not done,
75 the template is invalid in the eyes of SAM Translator (the translator
76 does not support local paths)
77 """
78
79 all_resources = self._template.get("Resources", {})
80
81 template_globals = self._template.get("Globals", {})
82 auto_publish_alias = template_globals.get("Function", {}).get(
83 "AutoPublishAlias"
84 )
85 if isinstance(auto_publish_alias, dict):
86 if len(auto_publish_alias) == 1:
87 for k, v in auto_publish_alias.items():
88 if k == "Ref":
89 if v in self._template.get("Parameters"):
90 self._parameters[v] = "Alias"
91
92 for _, resource in all_resources.items():
93 resource_type = resource.get("Type")
94 resource_dict = resource.get("Properties")
95
96 if resource_type == "AWS::Serverless::Function":
97 if resource_dict.get("PackageType") == "Image":
98 Transform._update_to_s3_uri("ImageUri", resource_dict)
99 else:
100 Transform._update_to_s3_uri("CodeUri", resource_dict)
101 auto_publish_alias = resource_dict.get("AutoPublishAlias")
102 if isinstance(auto_publish_alias, dict):
103 if len(auto_publish_alias) == 1:
104 for k, v in auto_publish_alias.items():
105 if k == "Ref":
106 if v in self._template.get("Parameters"):
107 self._parameters[v] = "Alias"
108 if resource_type in ["AWS::Serverless::LayerVersion"]:
109 if resource_dict.get("ContentUri"):
110 Transform._update_to_s3_uri("ContentUri", resource_dict)
111 if resource_type == "AWS::Serverless::Application":
112 if resource_dict.get("Location"):
113 if isinstance(resource_dict.get("Location"), dict):
114 resource_dict["Location"] = ""
115 Transform._update_to_s3_uri("Location", resource_dict)
116 if resource_type == "AWS::Serverless::Api":
117 if (
118 "DefinitionBody" not in resource_dict
119 and "Auth" not in resource_dict
120 and "Cors" not in resource_dict
121 and "DisableExecuteApiEndpoint" not in resource_dict
122 ):
123 Transform._update_to_s3_uri("DefinitionUri", resource_dict)
124 else:
125 resource_dict["DefinitionBody"] = ""
126 if resource_type == "AWS::Serverless::StateMachine" and resource_dict.get(
127 "DefinitionUri"
128 ):
129 Transform._update_to_s3_uri("DefinitionUri", resource_dict)
130
131 def transform_template(self):
132 """
133 Transform the Template using the Serverless Application Model.
134 """
135 matches = []
136
137 try:
138 # Output the SAM Translator version in debug mode
139 LOGGER.info("SAM Translator: %s", samtranslator.__version__)
140
141 sam_translator = Translator(
142 managed_policy_map=self._managed_policy_map, sam_parser=self._sam_parser
143 )
144
145 self._replace_local_codeuri()
146
147 # Tell SAM to use the region we're linting in, this has to be
148 # controlled using the default AWS mechanisms, see also:
149 # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py
150 LOGGER.info("Setting AWS_DEFAULT_REGION to %s", self._region)
151 os.environ["AWS_DEFAULT_REGION"] = self._region
152
153 self._template = convert_dict(
154 sam_translator.translate(
155 sam_template=self._template, parameter_values=self._parameters
156 )
157 )
158
159 LOGGER.info(
160 "Transformed template: \n%s", format_json_string(self._template)
161 )
162 except InvalidDocumentException as e:
163 # pylint: disable=import-outside-toplevel
164 from cfnlint.match import Match # pylint: disable=cyclic-import
165 from cfnlint.rules import TransformError # pylint: disable=cyclic-import
166
167 message = "Error transforming template: {0}"
168 for cause in e.causes:
169 matches.append(
170 Match(
171 1,
172 1,
173 1,
174 1,
175 self._filename,
176 TransformError(),
177 message.format(cause.message),
178 )
179 )
180 except Exception as e: # pylint: disable=W0703
181 # pylint: disable=import-outside-toplevel
182 from cfnlint.match import Match # pylint: disable=cyclic-import
183 from cfnlint.rules import TransformError # pylint: disable=cyclic-import
184
185 LOGGER.debug("Error transforming template: %s", str(e))
186 LOGGER.debug("Stack trace: %s", e, exc_info=True)
187 message = "Error transforming template: {0}"
188 matches.append(
189 Match(
190 1, 1, 1, 1, self._filename, TransformError(), message.format(str(e))
191 )
192 )
193
194 return matches
195
196 @staticmethod
197 def is_s3_uri(uri):
198 """
199 Checks the uri and determines if it is a valid S3 Uri
200 Parameters
201 ----------
202 uri str, required
203 Uri to check
204 Returns
205 -------
206 bool
207 Returns True if the uri given is an S3 uri, otherwise False
208 """
209 return isinstance(uri, str) and uri.startswith("s3://")
210
211 @staticmethod
212 def _update_to_s3_uri(
213 property_key, resource_property_dict, s3_uri_value="s3://bucket/value"
214 ):
215 """
216 Updates the 'property_key' in the 'resource_property_dict' to the
217 value of 's3_uri_value'
218 Note: The function will mutate the resource_property_dict that is pass
219 in Parameters
220 ----------
221 property_key str, required
222 Key in the resource_property_dict
223 resource_property_dict dict, required
224 Property dictionary of a Resource in the template to replace
225 s3_uri_value str, optional
226 Value to update the value of the property_key to
227 """
228 uri_property = resource_property_dict.get(property_key, ".")
229
230 # ignore if dict or already an S3 Uri
231 if isinstance(uri_property, dict):
232 if len(uri_property) == 1:
233 for k in uri_property.keys():
234 if k in ["Ref", "Fn::Sub"]:
235 resource_property_dict[property_key] = s3_uri_value
236 return
237 if Transform.is_s3_uri(uri_property):
238 return
239
240 resource_property_dict[property_key] = s3_uri_value
241
242
243 def sam(cfn: Any) -> TransformResult:
244 transform = Transform(cfn.filename, cfn.template, cfn.regions[0])
245 matches = transform.transform_template()
246 if matches:
247 return matches, None
248 return matches, transform.template()
249
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/template/transforms/_sam.py b/src/cfnlint/template/transforms/_sam.py
--- a/src/cfnlint/template/transforms/_sam.py
+++ b/src/cfnlint/template/transforms/_sam.py
@@ -152,7 +152,9 @@
self._template = convert_dict(
sam_translator.translate(
- sam_template=self._template, parameter_values=self._parameters
+ sam_template=self._template,
+ parameter_values=self._parameters,
+ passthrough_metadata=True,
)
)
| {"golden_diff": "diff --git a/src/cfnlint/template/transforms/_sam.py b/src/cfnlint/template/transforms/_sam.py\n--- a/src/cfnlint/template/transforms/_sam.py\n+++ b/src/cfnlint/template/transforms/_sam.py\n@@ -152,7 +152,9 @@\n \n self._template = convert_dict(\n sam_translator.translate(\n- sam_template=self._template, parameter_values=self._parameters\n+ sam_template=self._template,\n+ parameter_values=self._parameters,\n+ passthrough_metadata=True,\n )\n )\n", "issue": "Use samtranslate `passthrough_metadata` to support ignoring checks on Serverless resources\n### Is this feature request related to a new rule or cfn-lint capabilities?\n\n_No response_\n\n### Describe the feature you'd like to request\n\nBased on this: https://github.com/aws/serverless-application-model/pull/2224, it looks like it should be possible to ask SAM to bring all the Metadata stored on the original AWS::Serverless::* resources through to the translated resources. Doing that should allow cfn-lint to check the Metadata section for resource level ignores.\r\n\r\nI actually tried changing the parameter myself but couldn't get it working. I must be missing something....\n\n### Describe the solution you'd like\n\nEnable `passthrough_metadata` on the samtranslate `translate` call to bring ALL the metadata through and properly support resource level ignores on serverless resources.\n\n### Additional context\n\nThis is where I tried to add the parameter: https://github.com/aws-cloudformation/cfn-lint/blob/main/src/cfnlint/template/transforms/_sam.py#L155\n\n### Is this something that you'd be interested in working on?\n\n- [X] \ud83d\udc4b I may be able to implement this feature request\n\n### Would this feature include a breaking change?\n\n- [ ] \u26a0\ufe0f This feature might incur a breaking change\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport os\nfrom typing import Any\n\nimport samtranslator\nfrom samtranslator.parser import parser\nfrom samtranslator.public.exceptions import InvalidDocumentException\nfrom samtranslator.sdk import resource\nfrom samtranslator.translator.translator import Translator\n\nfrom cfnlint.data import Serverless\nfrom cfnlint.decode.utils import convert_dict\nfrom cfnlint.helpers import format_json_string, load_resource\nfrom cfnlint.template.transforms._types import TransformResult\n\nLOGGER = logging.getLogger(\"cfnlint\")\n\nsamtranslator_logger = logging.getLogger(\"samtranslator\")\nsamtranslator_logger.setLevel(logging.CRITICAL)\n\n\n# Override SAM validation as cfn-lint does thoese\n# checks already\n# pylint: disable=unused-argument\ndef valid_override(self):\n return resource.SamResourceType.has_value(self.type)\n\n\n# pylint: disable=redefined-outer-name\nresource.SamResource.valid = valid_override\n\n\nclass Transform:\n \"\"\"\n Application Serverless Module tranform Wrapper.\n Based on code from AWS SAM CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py\n \"\"\"\n\n def __init__(self, filename, template, region):\n \"\"\"\n Initialize Transform class\n \"\"\"\n self._filename = filename\n self._template = template\n self._region = region\n self._parameters = {}\n\n self._managed_policy_map = self.load_managed_policies()\n self._sam_parser = parser.Parser()\n\n def template(self):\n \"\"\"Get the template\"\"\"\n return self._template\n\n def load_managed_policies(self):\n \"\"\"\n Load the ManagedPolicies locally, based on the AWS-CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json\n \"\"\"\n return load_resource(Serverless, \"ManagedPolicies.json\")\n\n def _replace_local_codeuri(self):\n \"\"\"\n Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in\n AWS::Serverless::Api to a fake S3 Uri. This is to support running the\n SAM Translator with valid values for these fields. 
If this is not done,\n the template is invalid in the eyes of SAM Translator (the translator\n does not support local paths)\n \"\"\"\n\n all_resources = self._template.get(\"Resources\", {})\n\n template_globals = self._template.get(\"Globals\", {})\n auto_publish_alias = template_globals.get(\"Function\", {}).get(\n \"AutoPublishAlias\"\n )\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == \"Ref\":\n if v in self._template.get(\"Parameters\"):\n self._parameters[v] = \"Alias\"\n\n for _, resource in all_resources.items():\n resource_type = resource.get(\"Type\")\n resource_dict = resource.get(\"Properties\")\n\n if resource_type == \"AWS::Serverless::Function\":\n if resource_dict.get(\"PackageType\") == \"Image\":\n Transform._update_to_s3_uri(\"ImageUri\", resource_dict)\n else:\n Transform._update_to_s3_uri(\"CodeUri\", resource_dict)\n auto_publish_alias = resource_dict.get(\"AutoPublishAlias\")\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == \"Ref\":\n if v in self._template.get(\"Parameters\"):\n self._parameters[v] = \"Alias\"\n if resource_type in [\"AWS::Serverless::LayerVersion\"]:\n if resource_dict.get(\"ContentUri\"):\n Transform._update_to_s3_uri(\"ContentUri\", resource_dict)\n if resource_type == \"AWS::Serverless::Application\":\n if resource_dict.get(\"Location\"):\n if isinstance(resource_dict.get(\"Location\"), dict):\n resource_dict[\"Location\"] = \"\"\n Transform._update_to_s3_uri(\"Location\", resource_dict)\n if resource_type == \"AWS::Serverless::Api\":\n if (\n \"DefinitionBody\" not in resource_dict\n and \"Auth\" not in resource_dict\n and \"Cors\" not in resource_dict\n and \"DisableExecuteApiEndpoint\" not in resource_dict\n ):\n Transform._update_to_s3_uri(\"DefinitionUri\", resource_dict)\n else:\n resource_dict[\"DefinitionBody\"] = \"\"\n if resource_type == \"AWS::Serverless::StateMachine\" and resource_dict.get(\n \"DefinitionUri\"\n ):\n Transform._update_to_s3_uri(\"DefinitionUri\", resource_dict)\n\n def transform_template(self):\n \"\"\"\n Transform the Template using the Serverless Application Model.\n \"\"\"\n matches = []\n\n try:\n # Output the SAM Translator version in debug mode\n LOGGER.info(\"SAM Translator: %s\", samtranslator.__version__)\n\n sam_translator = Translator(\n managed_policy_map=self._managed_policy_map, sam_parser=self._sam_parser\n )\n\n self._replace_local_codeuri()\n\n # Tell SAM to use the region we're linting in, this has to be\n # controlled using the default AWS mechanisms, see also:\n # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py\n LOGGER.info(\"Setting AWS_DEFAULT_REGION to %s\", self._region)\n os.environ[\"AWS_DEFAULT_REGION\"] = self._region\n\n self._template = convert_dict(\n sam_translator.translate(\n sam_template=self._template, parameter_values=self._parameters\n )\n )\n\n LOGGER.info(\n \"Transformed template: \\n%s\", format_json_string(self._template)\n )\n except InvalidDocumentException as e:\n # pylint: disable=import-outside-toplevel\n from cfnlint.match import Match # pylint: disable=cyclic-import\n from cfnlint.rules import TransformError # pylint: disable=cyclic-import\n\n message = \"Error transforming template: {0}\"\n for cause in e.causes:\n matches.append(\n Match(\n 1,\n 1,\n 1,\n 1,\n self._filename,\n TransformError(),\n message.format(cause.message),\n )\n )\n 
except Exception as e: # pylint: disable=W0703\n # pylint: disable=import-outside-toplevel\n from cfnlint.match import Match # pylint: disable=cyclic-import\n from cfnlint.rules import TransformError # pylint: disable=cyclic-import\n\n LOGGER.debug(\"Error transforming template: %s\", str(e))\n LOGGER.debug(\"Stack trace: %s\", e, exc_info=True)\n message = \"Error transforming template: {0}\"\n matches.append(\n Match(\n 1, 1, 1, 1, self._filename, TransformError(), message.format(str(e))\n )\n )\n\n return matches\n\n @staticmethod\n def is_s3_uri(uri):\n \"\"\"\n Checks the uri and determines if it is a valid S3 Uri\n Parameters\n ----------\n uri str, required\n Uri to check\n Returns\n -------\n bool\n Returns True if the uri given is an S3 uri, otherwise False\n \"\"\"\n return isinstance(uri, str) and uri.startswith(\"s3://\")\n\n @staticmethod\n def _update_to_s3_uri(\n property_key, resource_property_dict, s3_uri_value=\"s3://bucket/value\"\n ):\n \"\"\"\n Updates the 'property_key' in the 'resource_property_dict' to the\n value of 's3_uri_value'\n Note: The function will mutate the resource_property_dict that is pass\n in Parameters\n ----------\n property_key str, required\n Key in the resource_property_dict\n resource_property_dict dict, required\n Property dictionary of a Resource in the template to replace\n s3_uri_value str, optional\n Value to update the value of the property_key to\n \"\"\"\n uri_property = resource_property_dict.get(property_key, \".\")\n\n # ignore if dict or already an S3 Uri\n if isinstance(uri_property, dict):\n if len(uri_property) == 1:\n for k in uri_property.keys():\n if k in [\"Ref\", \"Fn::Sub\"]:\n resource_property_dict[property_key] = s3_uri_value\n return\n if Transform.is_s3_uri(uri_property):\n return\n\n resource_property_dict[property_key] = s3_uri_value\n\n\ndef sam(cfn: Any) -> TransformResult:\n transform = Transform(cfn.filename, cfn.template, cfn.regions[0])\n matches = transform.transform_template()\n if matches:\n return matches, None\n return matches, transform.template()\n", "path": "src/cfnlint/template/transforms/_sam.py"}], "after_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport os\nfrom typing import Any\n\nimport samtranslator\nfrom samtranslator.parser import parser\nfrom samtranslator.public.exceptions import InvalidDocumentException\nfrom samtranslator.sdk import resource\nfrom samtranslator.translator.translator import Translator\n\nfrom cfnlint.data import Serverless\nfrom cfnlint.decode.utils import convert_dict\nfrom cfnlint.helpers import format_json_string, load_resource\nfrom cfnlint.template.transforms._types import TransformResult\n\nLOGGER = logging.getLogger(\"cfnlint\")\n\nsamtranslator_logger = logging.getLogger(\"samtranslator\")\nsamtranslator_logger.setLevel(logging.CRITICAL)\n\n\n# Override SAM validation as cfn-lint does thoese\n# checks already\n# pylint: disable=unused-argument\ndef valid_override(self):\n return resource.SamResourceType.has_value(self.type)\n\n\n# pylint: disable=redefined-outer-name\nresource.SamResource.valid = valid_override\n\n\nclass Transform:\n \"\"\"\n Application Serverless Module tranform Wrapper.\n Based on code from AWS SAM CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py\n \"\"\"\n\n def __init__(self, filename, template, region):\n \"\"\"\n Initialize Transform class\n \"\"\"\n self._filename = filename\n self._template = template\n self._region = region\n self._parameters = {}\n\n self._managed_policy_map = self.load_managed_policies()\n self._sam_parser = parser.Parser()\n\n def template(self):\n \"\"\"Get the template\"\"\"\n return self._template\n\n def load_managed_policies(self):\n \"\"\"\n Load the ManagedPolicies locally, based on the AWS-CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json\n \"\"\"\n return load_resource(Serverless, \"ManagedPolicies.json\")\n\n def _replace_local_codeuri(self):\n \"\"\"\n Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in\n AWS::Serverless::Api to a fake S3 Uri. This is to support running the\n SAM Translator with valid values for these fields. 
If this is not done,\n the template is invalid in the eyes of SAM Translator (the translator\n does not support local paths)\n \"\"\"\n\n all_resources = self._template.get(\"Resources\", {})\n\n template_globals = self._template.get(\"Globals\", {})\n auto_publish_alias = template_globals.get(\"Function\", {}).get(\n \"AutoPublishAlias\"\n )\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == \"Ref\":\n if v in self._template.get(\"Parameters\"):\n self._parameters[v] = \"Alias\"\n\n for _, resource in all_resources.items():\n resource_type = resource.get(\"Type\")\n resource_dict = resource.get(\"Properties\")\n\n if resource_type == \"AWS::Serverless::Function\":\n if resource_dict.get(\"PackageType\") == \"Image\":\n Transform._update_to_s3_uri(\"ImageUri\", resource_dict)\n else:\n Transform._update_to_s3_uri(\"CodeUri\", resource_dict)\n auto_publish_alias = resource_dict.get(\"AutoPublishAlias\")\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == \"Ref\":\n if v in self._template.get(\"Parameters\"):\n self._parameters[v] = \"Alias\"\n if resource_type in [\"AWS::Serverless::LayerVersion\"]:\n if resource_dict.get(\"ContentUri\"):\n Transform._update_to_s3_uri(\"ContentUri\", resource_dict)\n if resource_type == \"AWS::Serverless::Application\":\n if resource_dict.get(\"Location\"):\n if isinstance(resource_dict.get(\"Location\"), dict):\n resource_dict[\"Location\"] = \"\"\n Transform._update_to_s3_uri(\"Location\", resource_dict)\n if resource_type == \"AWS::Serverless::Api\":\n if (\n \"DefinitionBody\" not in resource_dict\n and \"Auth\" not in resource_dict\n and \"Cors\" not in resource_dict\n and \"DisableExecuteApiEndpoint\" not in resource_dict\n ):\n Transform._update_to_s3_uri(\"DefinitionUri\", resource_dict)\n else:\n resource_dict[\"DefinitionBody\"] = \"\"\n if resource_type == \"AWS::Serverless::StateMachine\" and resource_dict.get(\n \"DefinitionUri\"\n ):\n Transform._update_to_s3_uri(\"DefinitionUri\", resource_dict)\n\n def transform_template(self):\n \"\"\"\n Transform the Template using the Serverless Application Model.\n \"\"\"\n matches = []\n\n try:\n # Output the SAM Translator version in debug mode\n LOGGER.info(\"SAM Translator: %s\", samtranslator.__version__)\n\n sam_translator = Translator(\n managed_policy_map=self._managed_policy_map, sam_parser=self._sam_parser\n )\n\n self._replace_local_codeuri()\n\n # Tell SAM to use the region we're linting in, this has to be\n # controlled using the default AWS mechanisms, see also:\n # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py\n LOGGER.info(\"Setting AWS_DEFAULT_REGION to %s\", self._region)\n os.environ[\"AWS_DEFAULT_REGION\"] = self._region\n\n self._template = convert_dict(\n sam_translator.translate(\n sam_template=self._template,\n parameter_values=self._parameters,\n passthrough_metadata=True,\n )\n )\n\n LOGGER.info(\n \"Transformed template: \\n%s\", format_json_string(self._template)\n )\n except InvalidDocumentException as e:\n # pylint: disable=import-outside-toplevel\n from cfnlint.match import Match # pylint: disable=cyclic-import\n from cfnlint.rules import TransformError # pylint: disable=cyclic-import\n\n message = \"Error transforming template: {0}\"\n for cause in e.causes:\n matches.append(\n Match(\n 1,\n 1,\n 1,\n 1,\n self._filename,\n TransformError(),\n 
message.format(cause.message),\n )\n )\n except Exception as e: # pylint: disable=W0703\n # pylint: disable=import-outside-toplevel\n from cfnlint.match import Match # pylint: disable=cyclic-import\n from cfnlint.rules import TransformError # pylint: disable=cyclic-import\n\n LOGGER.debug(\"Error transforming template: %s\", str(e))\n LOGGER.debug(\"Stack trace: %s\", e, exc_info=True)\n message = \"Error transforming template: {0}\"\n matches.append(\n Match(\n 1, 1, 1, 1, self._filename, TransformError(), message.format(str(e))\n )\n )\n\n return matches\n\n @staticmethod\n def is_s3_uri(uri):\n \"\"\"\n Checks the uri and determines if it is a valid S3 Uri\n Parameters\n ----------\n uri str, required\n Uri to check\n Returns\n -------\n bool\n Returns True if the uri given is an S3 uri, otherwise False\n \"\"\"\n return isinstance(uri, str) and uri.startswith(\"s3://\")\n\n @staticmethod\n def _update_to_s3_uri(\n property_key, resource_property_dict, s3_uri_value=\"s3://bucket/value\"\n ):\n \"\"\"\n Updates the 'property_key' in the 'resource_property_dict' to the\n value of 's3_uri_value'\n Note: The function will mutate the resource_property_dict that is pass\n in Parameters\n ----------\n property_key str, required\n Key in the resource_property_dict\n resource_property_dict dict, required\n Property dictionary of a Resource in the template to replace\n s3_uri_value str, optional\n Value to update the value of the property_key to\n \"\"\"\n uri_property = resource_property_dict.get(property_key, \".\")\n\n # ignore if dict or already an S3 Uri\n if isinstance(uri_property, dict):\n if len(uri_property) == 1:\n for k in uri_property.keys():\n if k in [\"Ref\", \"Fn::Sub\"]:\n resource_property_dict[property_key] = s3_uri_value\n return\n if Transform.is_s3_uri(uri_property):\n return\n\n resource_property_dict[property_key] = s3_uri_value\n\n\ndef sam(cfn: Any) -> TransformResult:\n transform = Transform(cfn.filename, cfn.template, cfn.regions[0])\n matches = transform.transform_template()\n if matches:\n return matches, None\n return matches, transform.template()\n", "path": "src/cfnlint/template/transforms/_sam.py"}]} | 3,091 | 122 |
gh_patches_debug_13814 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1043 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Permission denied when installed this environment
Hello.
I'm not sure the permission error is caused by pre-commit, but it looks like it.
Here is my config:
````yaml
- repo: git://github.com/dnephin/pre-commit-golang
rev: master
hooks:
- id: go-fmt
- repo: git://github.com/golangci/golangci-lint
rev: master
hooks:
- id: golangci-lint
args:
- --config .golangci.yml
- repo: git://github.com/detailyang/pre-commit-shell
rev: v1.0.6
hooks:
- id: shell-lint
args: [--color=always, "--exclude=SC1090,SC1091,SC2206"]
````
It looks like Python 3.7's `shutil.rmtree` raises the error shown below:
````python
An unexpected error has occurred: PermissionError: [Errno 13] Permission denied: '/Users/detailyang/.cache/pre-commit/repo2ba1f3b5/golangenv-default/pkg/mod/github.com/!burnt!sushi/[email protected]/.gitignore'
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 447, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
PermissionError: [Errno 13] Permission denied: '.gitignore'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/pre_commit/error_handler.py", line 46, in error_handler
yield
File "/usr/local/lib/python3.7/site-packages/pre_commit/main.py", line 294, in main
return run(args.config, store, args)
File "/usr/local/lib/python3.7/site-packages/pre_commit/commands/run.py", line 285, in run
install_hook_envs(hooks, store)
File "/usr/local/lib/python3.7/site-packages/pre_commit/repository.py", line 215, in install_hook_envs
hook.install()
File "/usr/local/lib/python3.7/site-packages/pre_commit/repository.py", line 90, in install
rmtree(self.prefix.path(venv))
File "/usr/local/lib/python3.7/site-packages/pre_commit/util.py", line 171, in rmtree
shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 491, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 429, in _rmtree_safe_fd
_rmtree_safe_fd(dirfd, fullname, onerror)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 429, in _rmtree_safe_fd
_rmtree_safe_fd(dirfd, fullname, onerror)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 429, in _rmtree_safe_fd
_rmtree_safe_fd(dirfd, fullname, onerror)
[Previous line repeated 2 more times]
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 449, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/usr/local/lib/python3.7/site-packages/pre_commit/util.py", line 168, in handle_remove_readonly
func(path)
PermissionError: [Errno 13] Permission denied: '/Users/detailyang/.cache/pre-commit/repo2ba1f3b5/golangenv-default/pkg/mod/github.com/!burnt!sushi/[email protected]/.gitignore'
````
--- END ISSUE ---
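The key detail in the traceback is that `unlink` still fails with `EACCES` even though the `onerror` handler has made the *file* writable: on POSIX, deleting a file requires write permission on its parent *directory*, and Go writes its module cache as read-only directories (mode 0555) containing read-only files. Below is a minimal sketch of the situation and of a handler that also restores write permission on the parent directory (the same idea as the golden diff further down in this record); it assumes a POSIX system and a non-root user, since root bypasses the permission check and the error never fires.

```python
import errno
import os
import shutil
import stat
import tempfile


def handle_remove_readonly(func, path, exc):
    # Like pre-commit's handler, but also add write permission on the parent
    # directory, since unlinking needs write access to the directory itself.
    excvalue = exc[1]
    if func in (os.rmdir, os.remove, os.unlink) and excvalue.errno == errno.EACCES:
        for p in (path, os.path.dirname(path)):
            os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)
        func(path)
    else:
        raise


# Recreate the shape of Go's module cache: a read-only file in a read-only directory.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "pkg")
os.mkdir(pkg)
open(os.path.join(pkg, ".gitignore"), "w").close()
os.chmod(os.path.join(pkg, ".gitignore"), 0o444)
os.chmod(pkg, 0o555)

# A plain shutil.rmtree(root) raises PermissionError here; with the handler the
# tree is removed cleanly.
shutil.rmtree(root, ignore_errors=False, onerror=handle_remove_readonly)
assert not os.path.exists(root)
```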
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/util.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import errno
5 import os.path
6 import shutil
7 import stat
8 import subprocess
9 import sys
10 import tempfile
11
12 import six
13
14 from pre_commit import five
15 from pre_commit import parse_shebang
16
17 if sys.version_info >= (3, 7): # pragma: no cover (PY37+)
18 from importlib.resources import open_binary
19 from importlib.resources import read_text
20 else: # pragma: no cover (<PY37)
21 from importlib_resources import open_binary
22 from importlib_resources import read_text
23
24
25 def mkdirp(path):
26 try:
27 os.makedirs(path)
28 except OSError:
29 if not os.path.exists(path):
30 raise
31
32
33 @contextlib.contextmanager
34 def clean_path_on_failure(path):
35 """Cleans up the directory on an exceptional failure."""
36 try:
37 yield
38 except BaseException:
39 if os.path.exists(path):
40 rmtree(path)
41 raise
42
43
44 @contextlib.contextmanager
45 def noop_context():
46 yield
47
48
49 @contextlib.contextmanager
50 def tmpdir():
51 """Contextmanager to create a temporary directory. It will be cleaned up
52 afterwards.
53 """
54 tempdir = tempfile.mkdtemp()
55 try:
56 yield tempdir
57 finally:
58 rmtree(tempdir)
59
60
61 def resource_bytesio(filename):
62 return open_binary('pre_commit.resources', filename)
63
64
65 def resource_text(filename):
66 return read_text('pre_commit.resources', filename)
67
68
69 def make_executable(filename):
70 original_mode = os.stat(filename).st_mode
71 os.chmod(
72 filename, original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,
73 )
74
75
76 class CalledProcessError(RuntimeError):
77 def __init__(self, returncode, cmd, expected_returncode, output=None):
78 super(CalledProcessError, self).__init__(
79 returncode, cmd, expected_returncode, output,
80 )
81 self.returncode = returncode
82 self.cmd = cmd
83 self.expected_returncode = expected_returncode
84 self.output = output
85
86 def to_bytes(self):
87 output = []
88 for maybe_text in self.output:
89 if maybe_text:
90 output.append(
91 b'\n ' +
92 five.to_bytes(maybe_text).replace(b'\n', b'\n '),
93 )
94 else:
95 output.append(b'(none)')
96
97 return b''.join((
98 five.to_bytes(
99 'Command: {!r}\n'
100 'Return code: {}\n'
101 'Expected return code: {}\n'.format(
102 self.cmd, self.returncode, self.expected_returncode,
103 ),
104 ),
105 b'Output: ', output[0], b'\n',
106 b'Errors: ', output[1], b'\n',
107 ))
108
109 def to_text(self):
110 return self.to_bytes().decode('UTF-8')
111
112 if six.PY2: # pragma: no cover (py2)
113 __str__ = to_bytes
114 __unicode__ = to_text
115 else: # pragma: no cover (py3)
116 __bytes__ = to_bytes
117 __str__ = to_text
118
119
120 def cmd_output(*cmd, **kwargs):
121 retcode = kwargs.pop('retcode', 0)
122 encoding = kwargs.pop('encoding', 'UTF-8')
123
124 popen_kwargs = {
125 'stdin': subprocess.PIPE,
126 'stdout': subprocess.PIPE,
127 'stderr': subprocess.PIPE,
128 }
129
130 # py2/py3 on windows are more strict about the types here
131 cmd = tuple(five.n(arg) for arg in cmd)
132 kwargs['env'] = {
133 five.n(key): five.n(value)
134 for key, value in kwargs.pop('env', {}).items()
135 } or None
136
137 try:
138 cmd = parse_shebang.normalize_cmd(cmd)
139 except parse_shebang.ExecutableNotFoundError as e:
140 returncode, stdout, stderr = e.to_output()
141 else:
142 popen_kwargs.update(kwargs)
143 proc = subprocess.Popen(cmd, **popen_kwargs)
144 stdout, stderr = proc.communicate()
145 returncode = proc.returncode
146 if encoding is not None and stdout is not None:
147 stdout = stdout.decode(encoding)
148 if encoding is not None and stderr is not None:
149 stderr = stderr.decode(encoding)
150
151 if retcode is not None and retcode != returncode:
152 raise CalledProcessError(
153 returncode, cmd, retcode, output=(stdout, stderr),
154 )
155
156 return returncode, stdout, stderr
157
158
159 def rmtree(path):
160 """On windows, rmtree fails for readonly dirs."""
161 def handle_remove_readonly(func, path, exc): # pragma: no cover (windows)
162 excvalue = exc[1]
163 if (
164 func in (os.rmdir, os.remove, os.unlink) and
165 excvalue.errno == errno.EACCES
166 ):
167 os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
168 func(path)
169 else:
170 raise
171 shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)
172
173
174 def parse_version(s):
175 """poor man's version comparison"""
176 return tuple(int(p) for p in s.split('.'))
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/util.py b/pre_commit/util.py
--- a/pre_commit/util.py
+++ b/pre_commit/util.py
@@ -158,13 +158,14 @@
def rmtree(path):
"""On windows, rmtree fails for readonly dirs."""
- def handle_remove_readonly(func, path, exc): # pragma: no cover (windows)
+ def handle_remove_readonly(func, path, exc):
excvalue = exc[1]
if (
func in (os.rmdir, os.remove, os.unlink) and
excvalue.errno == errno.EACCES
):
- os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
+ for p in (path, os.path.dirname(path)):
+ os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)
func(path)
else:
raise
| {"golden_diff": "diff --git a/pre_commit/util.py b/pre_commit/util.py\n--- a/pre_commit/util.py\n+++ b/pre_commit/util.py\n@@ -158,13 +158,14 @@\n \n def rmtree(path):\n \"\"\"On windows, rmtree fails for readonly dirs.\"\"\"\n- def handle_remove_readonly(func, path, exc): # pragma: no cover (windows)\n+ def handle_remove_readonly(func, path, exc):\n excvalue = exc[1]\n if (\n func in (os.rmdir, os.remove, os.unlink) and\n excvalue.errno == errno.EACCES\n ):\n- os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)\n+ for p in (path, os.path.dirname(path)):\n+ os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)\n func(path)\n else:\n raise\n", "issue": "Permission denied when installed this environment\nHello.\r\n\r\nI'm not sure the permission is about the pre-commit but it looks likes so.\r\n\r\nHere is my config\r\n\r\n````yaml\r\n- repo: git://github.com/dnephin/pre-commit-golang\r\n rev: master\r\n hooks:\r\n - id: go-fmt\r\n\r\n- repo: git://github.com/golangci/golangci-lint\r\n rev: master\r\n hooks:\r\n - id: golangci-lint\r\n args:\r\n - --config .golangci.yml\r\n\r\n- repo: git://github.com/detailyang/pre-commit-shell\r\n rev: v1.0.6\r\n hooks:\r\n - id: shell-lint\r\n args: [--color=always, \"--exclude=SC1090,SC1091,SC2206\"]\r\n````\r\n\r\nit looks like python3.7 shutil.rmtree raise the erorr as the following:\r\n````python\r\nAn unexpected error has occurred: PermissionError: [Errno 13] Permission denied: '/Users/detailyang/.cache/pre-commit/repo2ba1f3b5/golangenv-default/pkg/mod/github.com/!burnt!sushi/[email protected]/.gitignore'\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py\", line 447, in _rmtree_safe_fd\r\n os.unlink(entry.name, dir_fd=topfd)\r\nPermissionError: [Errno 13] Permission denied: '.gitignore'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/pre_commit/error_handler.py\", line 46, in error_handler\r\n yield\r\n File \"/usr/local/lib/python3.7/site-packages/pre_commit/main.py\", line 294, in main\r\n return run(args.config, store, args)\r\n File \"/usr/local/lib/python3.7/site-packages/pre_commit/commands/run.py\", line 285, in run\r\n install_hook_envs(hooks, store)\r\n File \"/usr/local/lib/python3.7/site-packages/pre_commit/repository.py\", line 215, in install_hook_envs\r\n hook.install()\r\n File \"/usr/local/lib/python3.7/site-packages/pre_commit/repository.py\", line 90, in install\r\n rmtree(self.prefix.path(venv))\r\n File \"/usr/local/lib/python3.7/site-packages/pre_commit/util.py\", line 171, in rmtree\r\n shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)\r\n File \"/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py\", line 491, in rmtree\r\n _rmtree_safe_fd(fd, path, onerror)\r\n File \"/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py\", line 429, in _rmtree_safe_fd\r\n _rmtree_safe_fd(dirfd, fullname, onerror)\r\n File \"/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py\", line 429, in _rmtree_safe_fd\r\n _rmtree_safe_fd(dirfd, fullname, onerror)\r\n File \"/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py\", line 429, in _rmtree_safe_fd\r\n _rmtree_safe_fd(dirfd, fullname, onerror)\r\n [Previous line repeated 2 more times]\r\n 
File \"/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py\", line 449, in _rmtree_safe_fd\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"/usr/local/lib/python3.7/site-packages/pre_commit/util.py\", line 168, in handle_remove_readonly\r\n func(path)\r\nPermissionError: [Errno 13] Permission denied: '/Users/detailyang/.cache/pre-commit/repo2ba1f3b5/golangenv-default/pkg/mod/github.com/!burnt!sushi/[email protected]/.gitignore'\r\n````\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport errno\nimport os.path\nimport shutil\nimport stat\nimport subprocess\nimport sys\nimport tempfile\n\nimport six\n\nfrom pre_commit import five\nfrom pre_commit import parse_shebang\n\nif sys.version_info >= (3, 7): # pragma: no cover (PY37+)\n from importlib.resources import open_binary\n from importlib.resources import read_text\nelse: # pragma: no cover (<PY37)\n from importlib_resources import open_binary\n from importlib_resources import read_text\n\n\ndef mkdirp(path):\n try:\n os.makedirs(path)\n except OSError:\n if not os.path.exists(path):\n raise\n\n\[email protected]\ndef clean_path_on_failure(path):\n \"\"\"Cleans up the directory on an exceptional failure.\"\"\"\n try:\n yield\n except BaseException:\n if os.path.exists(path):\n rmtree(path)\n raise\n\n\[email protected]\ndef noop_context():\n yield\n\n\[email protected]\ndef tmpdir():\n \"\"\"Contextmanager to create a temporary directory. It will be cleaned up\n afterwards.\n \"\"\"\n tempdir = tempfile.mkdtemp()\n try:\n yield tempdir\n finally:\n rmtree(tempdir)\n\n\ndef resource_bytesio(filename):\n return open_binary('pre_commit.resources', filename)\n\n\ndef resource_text(filename):\n return read_text('pre_commit.resources', filename)\n\n\ndef make_executable(filename):\n original_mode = os.stat(filename).st_mode\n os.chmod(\n filename, original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,\n )\n\n\nclass CalledProcessError(RuntimeError):\n def __init__(self, returncode, cmd, expected_returncode, output=None):\n super(CalledProcessError, self).__init__(\n returncode, cmd, expected_returncode, output,\n )\n self.returncode = returncode\n self.cmd = cmd\n self.expected_returncode = expected_returncode\n self.output = output\n\n def to_bytes(self):\n output = []\n for maybe_text in self.output:\n if maybe_text:\n output.append(\n b'\\n ' +\n five.to_bytes(maybe_text).replace(b'\\n', b'\\n '),\n )\n else:\n output.append(b'(none)')\n\n return b''.join((\n five.to_bytes(\n 'Command: {!r}\\n'\n 'Return code: {}\\n'\n 'Expected return code: {}\\n'.format(\n self.cmd, self.returncode, self.expected_returncode,\n ),\n ),\n b'Output: ', output[0], b'\\n',\n b'Errors: ', output[1], b'\\n',\n ))\n\n def to_text(self):\n return self.to_bytes().decode('UTF-8')\n\n if six.PY2: # pragma: no cover (py2)\n __str__ = to_bytes\n __unicode__ = to_text\n else: # pragma: no cover (py3)\n __bytes__ = to_bytes\n __str__ = to_text\n\n\ndef cmd_output(*cmd, **kwargs):\n retcode = kwargs.pop('retcode', 0)\n encoding = kwargs.pop('encoding', 'UTF-8')\n\n popen_kwargs = {\n 'stdin': subprocess.PIPE,\n 'stdout': subprocess.PIPE,\n 'stderr': subprocess.PIPE,\n }\n\n # py2/py3 on windows are more strict about the types here\n cmd = tuple(five.n(arg) for arg in cmd)\n kwargs['env'] = {\n five.n(key): five.n(value)\n for key, value in kwargs.pop('env', {}).items()\n } or None\n\n try:\n cmd = parse_shebang.normalize_cmd(cmd)\n except 
parse_shebang.ExecutableNotFoundError as e:\n returncode, stdout, stderr = e.to_output()\n else:\n popen_kwargs.update(kwargs)\n proc = subprocess.Popen(cmd, **popen_kwargs)\n stdout, stderr = proc.communicate()\n returncode = proc.returncode\n if encoding is not None and stdout is not None:\n stdout = stdout.decode(encoding)\n if encoding is not None and stderr is not None:\n stderr = stderr.decode(encoding)\n\n if retcode is not None and retcode != returncode:\n raise CalledProcessError(\n returncode, cmd, retcode, output=(stdout, stderr),\n )\n\n return returncode, stdout, stderr\n\n\ndef rmtree(path):\n \"\"\"On windows, rmtree fails for readonly dirs.\"\"\"\n def handle_remove_readonly(func, path, exc): # pragma: no cover (windows)\n excvalue = exc[1]\n if (\n func in (os.rmdir, os.remove, os.unlink) and\n excvalue.errno == errno.EACCES\n ):\n os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)\n func(path)\n else:\n raise\n shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)\n\n\ndef parse_version(s):\n \"\"\"poor man's version comparison\"\"\"\n return tuple(int(p) for p in s.split('.'))\n", "path": "pre_commit/util.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport errno\nimport os.path\nimport shutil\nimport stat\nimport subprocess\nimport sys\nimport tempfile\n\nimport six\n\nfrom pre_commit import five\nfrom pre_commit import parse_shebang\n\nif sys.version_info >= (3, 7): # pragma: no cover (PY37+)\n from importlib.resources import open_binary\n from importlib.resources import read_text\nelse: # pragma: no cover (<PY37)\n from importlib_resources import open_binary\n from importlib_resources import read_text\n\n\ndef mkdirp(path):\n try:\n os.makedirs(path)\n except OSError:\n if not os.path.exists(path):\n raise\n\n\[email protected]\ndef clean_path_on_failure(path):\n \"\"\"Cleans up the directory on an exceptional failure.\"\"\"\n try:\n yield\n except BaseException:\n if os.path.exists(path):\n rmtree(path)\n raise\n\n\[email protected]\ndef noop_context():\n yield\n\n\[email protected]\ndef tmpdir():\n \"\"\"Contextmanager to create a temporary directory. 
It will be cleaned up\n afterwards.\n \"\"\"\n tempdir = tempfile.mkdtemp()\n try:\n yield tempdir\n finally:\n rmtree(tempdir)\n\n\ndef resource_bytesio(filename):\n return open_binary('pre_commit.resources', filename)\n\n\ndef resource_text(filename):\n return read_text('pre_commit.resources', filename)\n\n\ndef make_executable(filename):\n original_mode = os.stat(filename).st_mode\n os.chmod(\n filename, original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,\n )\n\n\nclass CalledProcessError(RuntimeError):\n def __init__(self, returncode, cmd, expected_returncode, output=None):\n super(CalledProcessError, self).__init__(\n returncode, cmd, expected_returncode, output,\n )\n self.returncode = returncode\n self.cmd = cmd\n self.expected_returncode = expected_returncode\n self.output = output\n\n def to_bytes(self):\n output = []\n for maybe_text in self.output:\n if maybe_text:\n output.append(\n b'\\n ' +\n five.to_bytes(maybe_text).replace(b'\\n', b'\\n '),\n )\n else:\n output.append(b'(none)')\n\n return b''.join((\n five.to_bytes(\n 'Command: {!r}\\n'\n 'Return code: {}\\n'\n 'Expected return code: {}\\n'.format(\n self.cmd, self.returncode, self.expected_returncode,\n ),\n ),\n b'Output: ', output[0], b'\\n',\n b'Errors: ', output[1], b'\\n',\n ))\n\n def to_text(self):\n return self.to_bytes().decode('UTF-8')\n\n if six.PY2: # pragma: no cover (py2)\n __str__ = to_bytes\n __unicode__ = to_text\n else: # pragma: no cover (py3)\n __bytes__ = to_bytes\n __str__ = to_text\n\n\ndef cmd_output(*cmd, **kwargs):\n retcode = kwargs.pop('retcode', 0)\n encoding = kwargs.pop('encoding', 'UTF-8')\n\n popen_kwargs = {\n 'stdin': subprocess.PIPE,\n 'stdout': subprocess.PIPE,\n 'stderr': subprocess.PIPE,\n }\n\n # py2/py3 on windows are more strict about the types here\n cmd = tuple(five.n(arg) for arg in cmd)\n kwargs['env'] = {\n five.n(key): five.n(value)\n for key, value in kwargs.pop('env', {}).items()\n } or None\n\n try:\n cmd = parse_shebang.normalize_cmd(cmd)\n except parse_shebang.ExecutableNotFoundError as e:\n returncode, stdout, stderr = e.to_output()\n else:\n popen_kwargs.update(kwargs)\n proc = subprocess.Popen(cmd, **popen_kwargs)\n stdout, stderr = proc.communicate()\n returncode = proc.returncode\n if encoding is not None and stdout is not None:\n stdout = stdout.decode(encoding)\n if encoding is not None and stderr is not None:\n stderr = stderr.decode(encoding)\n\n if retcode is not None and retcode != returncode:\n raise CalledProcessError(\n returncode, cmd, retcode, output=(stdout, stderr),\n )\n\n return returncode, stdout, stderr\n\n\ndef rmtree(path):\n \"\"\"On windows, rmtree fails for readonly dirs.\"\"\"\n def handle_remove_readonly(func, path, exc):\n excvalue = exc[1]\n if (\n func in (os.rmdir, os.remove, os.unlink) and\n excvalue.errno == errno.EACCES\n ):\n for p in (path, os.path.dirname(path)):\n os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)\n func(path)\n else:\n raise\n shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)\n\n\ndef parse_version(s):\n \"\"\"poor man's version comparison\"\"\"\n return tuple(int(p) for p in s.split('.'))\n", "path": "pre_commit/util.py"}]} | 2,823 | 204 |
gh_patches_debug_8079 | rasdani/github-patches | git_diff | apache__airflow-16601 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Airflow logging secrets masker assumes dict_key is type `str`
**Apache Airflow version**: 2.1.0
**What happened**:
Airflow logging assumes dict_key is of type `str`:
```
logging.info("Dictionary where key is int type: %s", modified_table_mapping)
File "/usr/lib64/python3.6/logging/__init__.py", line 1902, in info
root.info(msg, *args, **kwargs)
File "/usr/lib64/python3.6/logging/__init__.py", line 1308, in info
self._log(INFO, msg, args, **kwargs)
File "/usr/lib64/python3.6/logging/__init__.py", line 1444, in _log
self.handle(record)
File "/usr/lib64/python3.6/logging/__init__.py", line 1453, in handle
if (not self.disabled) and self.filter(record):
File "/usr/lib64/python3.6/logging/__init__.py", line 720, in filter
result = f.filter(record)
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 157, in filter
record.__dict__[k] = self.redact(v)
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 193, in redact
return {dict_key: self.redact(subval, dict_key) for dict_key, subval in item.items()}
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 193, in <dictcomp>
return {dict_key: self.redact(subval, dict_key) for dict_key, subval in item.items()}
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 189, in redact
if name and should_hide_value_for_key(name):
File "/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py", line 74, in should_hide_value_for_key
name = name.strip().lower()
AttributeError: 'int' object has no attribute 'strip'
```
**How to reproduce it**:
Define a dictionary whose keys are of type `int` and log it in any Airflow task (a minimal sketch follows this issue block).
--- END ISSUE ---
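The crash happens because `SecretsMasker.redact` walks dict items and passes each key as the `name` argument to `should_hide_value_for_key`, which immediately calls `name.strip()`. A minimal trigger and one possible guard are sketched below, assuming a working Airflow installation with `hide_sensitive_var_conn_fields` left at its default. Note that in the newer version of the file shown further down, the broad `except Exception` inside `_redact` swallows this error and logs an "Unable to redact" warning instead of crashing the task, although the `AttributeError` is still raised internally. The guard here is illustrative only; the project's actual change is the golden diff at the end of this record.

```python
from airflow.utils.log.secrets_masker import (
    get_sensitive_variables_fields,
    should_hide_value_for_key,
)

# Trigger: any non-str name reaches name.strip() and blows up.
try:
    should_hide_value_for_key(1)
except AttributeError as err:
    print(err)  # 'int' object has no attribute 'strip'


# One possible guard (illustrative only): treat non-string names as non-sensitive.
def should_hide_value_for_key_guarded(name):
    from airflow import settings

    if isinstance(name, str) and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:
        name = name.strip().lower()
        return any(s in name for s in get_sensitive_variables_fields())
    return False


assert should_hide_value_for_key_guarded(1) is False
assert should_hide_value_for_key_guarded("password") is True
```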
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `airflow/utils/log/secrets_masker.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 """Mask sensitive information from logs"""
18 import collections
19 import logging
20 import re
21 from typing import TYPE_CHECKING, Iterable, Optional, Set, TypeVar, Union
22
23 from airflow.compat.functools import cache, cached_property
24
25 if TYPE_CHECKING:
26 from airflow.typing_compat import RePatternType
27
28 RedactableItem = TypeVar('RedactableItem')
29
30
31 log = logging.getLogger(__name__)
32
33
34 DEFAULT_SENSITIVE_FIELDS = frozenset(
35 {
36 'access_token',
37 'api_key',
38 'apikey',
39 'authorization',
40 'passphrase',
41 'passwd',
42 'password',
43 'private_key',
44 'secret',
45 }
46 )
47 """Names of fields (Connection extra, Variable key name etc.) that are deemed sensitive"""
48
49
50 @cache
51 def get_sensitive_variables_fields():
52 """Get comma-separated sensitive Variable Fields from airflow.cfg."""
53 from airflow.configuration import conf
54
55 sensitive_fields = DEFAULT_SENSITIVE_FIELDS.copy()
56 sensitive_variable_fields = conf.get('core', 'sensitive_var_conn_names')
57 if sensitive_variable_fields:
58 sensitive_fields |= frozenset({field.strip() for field in sensitive_variable_fields.split(',')})
59 return sensitive_fields
60
61
62 def should_hide_value_for_key(name):
63 """Should the value for this given name (Variable name, or key in conn.extra_dejson) be hidden"""
64 from airflow import settings
65
66 if name and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:
67 name = name.strip().lower()
68 return any(s in name for s in get_sensitive_variables_fields())
69 return False
70
71
72 def mask_secret(secret: Union[str, dict, Iterable], name: str = None) -> None:
73 """
74 Mask a secret from appearing in the task logs.
75
76 If ``name`` is provided, then it will only be masked if the name matches
77 one of the configured "sensitive" names.
78
79 If ``secret`` is a dict or a iterable (excluding str) then it will be
80 recursively walked and keys with sensitive names will be hidden.
81 """
82 # Delay import
83 from airflow import settings
84
85 # Filtering all log messages is not a free process, so we only do it when
86 # running tasks
87 if not settings.MASK_SECRETS_IN_LOGS or not secret:
88 return
89
90 _secrets_masker().add_mask(secret, name)
91
92
93 def redact(value: "RedactableItem", name: str = None) -> "RedactableItem":
94 """Redact any secrets found in ``value``."""
95 return _secrets_masker().redact(value, name)
96
97
98 @cache
99 def _secrets_masker() -> "SecretsMasker":
100
101 for flt in logging.getLogger('airflow.task').filters:
102 if isinstance(flt, SecretsMasker):
103 return flt
104 raise RuntimeError("No SecretsMasker found!")
105
106
107 class SecretsMasker(logging.Filter):
108 """Redact secrets from logs"""
109
110 replacer: Optional["RePatternType"] = None
111 patterns: Set[str]
112
113 ALREADY_FILTERED_FLAG = "__SecretsMasker_filtered"
114 MAX_RECURSION_DEPTH = 5
115
116 def __init__(self):
117 super().__init__()
118 self.patterns = set()
119
120 @cached_property
121 def _record_attrs_to_ignore(self) -> Iterable[str]:
122 # Doing log.info(..., extra={'foo': 2}) sets extra properties on
123 # record, i.e. record.foo. And we need to filter those too. Fun
124 #
125 # Create a record, and look at what attributes are on it, and ignore
126 # all the default ones!
127
128 record = logging.getLogRecordFactory()(
129 # name, level, pathname, lineno, msg, args, exc_info, func=None, sinfo=None,
130 "x",
131 logging.INFO,
132 __file__,
133 1,
134 "",
135 tuple(),
136 exc_info=None,
137 func="funcname",
138 )
139 return frozenset(record.__dict__).difference({'msg', 'args'})
140
141 def filter(self, record) -> bool:
142 if self.ALREADY_FILTERED_FLAG in record.__dict__:
143 # Filters are attached to multiple handlers and logs, keep a
144 # "private" flag that stops us needing to process it more than once
145 return True
146
147 if self.replacer:
148 for k, v in record.__dict__.items():
149 if k in self._record_attrs_to_ignore:
150 continue
151 record.__dict__[k] = self.redact(v)
152 if record.exc_info and record.exc_info[1] is not None:
153 exc = record.exc_info[1]
154 # I'm not sure if this is a good idea!
155 exc.args = (self.redact(v) for v in exc.args)
156 record.__dict__[self.ALREADY_FILTERED_FLAG] = True
157
158 return True
159
160 def _redact_all(self, item: "RedactableItem", depth: int) -> "RedactableItem":
161 if depth > self.MAX_RECURSION_DEPTH or isinstance(item, str):
162 return '***'
163 if isinstance(item, dict):
164 return {dict_key: self._redact_all(subval, depth + 1) for dict_key, subval in item.items()}
165 elif isinstance(item, (tuple, set)):
166 # Turn set in to tuple!
167 return tuple(self._redact_all(subval, depth + 1) for subval in item)
168 elif isinstance(item, list):
169 return list(self._redact_all(subval, depth + 1) for subval in item)
170 else:
171 return item
172
173 # pylint: disable=too-many-return-statements
174 def _redact(self, item: "RedactableItem", name: Optional[str], depth: int) -> "RedactableItem":
175 # Avoid spending too much effort on redacting on deeply nested
176 # structures. This also avoid infinite recursion if a structure has
177 # reference to self.
178 if depth > self.MAX_RECURSION_DEPTH:
179 return item
180 try:
181 if name and should_hide_value_for_key(name):
182 return self._redact_all(item, depth)
183 if isinstance(item, dict):
184 return {
185 dict_key: self._redact(subval, name=dict_key, depth=(depth + 1))
186 for dict_key, subval in item.items()
187 }
188 elif isinstance(item, str):
189 if self.replacer:
190 # We can't replace specific values, but the key-based redacting
191 # can still happen, so we can't short-circuit, we need to walk
192 # the structure.
193 return self.replacer.sub('***', item)
194 return item
195 elif isinstance(item, (tuple, set)):
196 # Turn set in to tuple!
197 return tuple(self._redact(subval, name=None, depth=(depth + 1)) for subval in item)
198 elif isinstance(item, list):
199 return [self._redact(subval, name=None, depth=(depth + 1)) for subval in item]
200 else:
201 return item
202 # I think this should never happen, but it does not hurt to leave it just in case
203 except Exception as e: # pylint: disable=broad-except
204 log.warning(
205 "Unable to redact %r, please report this via <https://github.com/apache/airflow/issues>. "
206 "Error was: %s: %s",
207 item,
208 type(e).__name__,
209 str(e),
210 )
211 return item
212
213 def redact(self, item: "RedactableItem", name: Optional[str] = None) -> "RedactableItem":
214 """Redact an any secrets found in ``item``, if it is a string.
215
216 If ``name`` is given, and it's a "sensitive" name (see
217 :func:`should_hide_value_for_key`) then all string values in the item
218 is redacted.
219 """
220 return self._redact(item, name, depth=0)
221
222 # pylint: enable=too-many-return-statements
223 def add_mask(self, secret: Union[str, dict, Iterable], name: str = None):
224 """Add a new secret to be masked to this filter instance."""
225 if isinstance(secret, dict):
226 for k, v in secret.items():
227 self.add_mask(v, k)
228 elif isinstance(secret, str):
229 if not secret:
230 return
231 pattern = re.escape(secret)
232 if pattern not in self.patterns and (not name or should_hide_value_for_key(name)):
233 self.patterns.add(pattern)
234 self.replacer = re.compile('|'.join(self.patterns))
235 elif isinstance(secret, collections.abc.Iterable):
236 for v in secret:
237 self.add_mask(v, name)
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/airflow/utils/log/secrets_masker.py b/airflow/utils/log/secrets_masker.py
--- a/airflow/utils/log/secrets_masker.py
+++ b/airflow/utils/log/secrets_masker.py
@@ -63,7 +63,7 @@
"""Should the value for this given name (Variable name, or key in conn.extra_dejson) be hidden"""
from airflow import settings
- if name and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:
+ if isinstance(name, str) and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:
name = name.strip().lower()
return any(s in name for s in get_sensitive_variables_fields())
return False
| {"golden_diff": "diff --git a/airflow/utils/log/secrets_masker.py b/airflow/utils/log/secrets_masker.py\n--- a/airflow/utils/log/secrets_masker.py\n+++ b/airflow/utils/log/secrets_masker.py\n@@ -63,7 +63,7 @@\n \"\"\"Should the value for this given name (Variable name, or key in conn.extra_dejson) be hidden\"\"\"\n from airflow import settings\n \n- if name and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:\n+ if isinstance(name, str) and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:\n name = name.strip().lower()\n return any(s in name for s in get_sensitive_variables_fields())\n return False\n", "issue": "Airflow logging secrets masker assumes dict_key is type `str`\n**Apache Airflow version**: 2.1.0\r\n\r\n**What happened**:\r\nAirflow logging assume dict_key is type `str`\r\n```\r\n logging.info(\"Dictionary where key is int type: %s\", modified_table_mapping)\r\n File \"/usr/lib64/python3.6/logging/__init__.py\", line 1902, in info\r\n root.info(msg, *args, **kwargs)\r\n File \"/usr/lib64/python3.6/logging/__init__.py\", line 1308, in info\r\n self._log(INFO, msg, args, **kwargs)\r\n File \"/usr/lib64/python3.6/logging/__init__.py\", line 1444, in _log\r\n self.handle(record)\r\n File \"/usr/lib64/python3.6/logging/__init__.py\", line 1453, in handle\r\n if (not self.disabled) and self.filter(record):\r\n File \"/usr/lib64/python3.6/logging/__init__.py\", line 720, in filter\r\n result = f.filter(record)\r\n File \"/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py\", line 157, in filter\r\n record.__dict__[k] = self.redact(v)\r\n File \"/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py\", line 193, in redact\r\n return {dict_key: self.redact(subval, dict_key) for dict_key, subval in item.items()}\r\n File \"/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py\", line 193, in <dictcomp>\r\n return {dict_key: self.redact(subval, dict_key) for dict_key, subval in item.items()}\r\n File \"/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py\", line 189, in redact\r\n if name and should_hide_value_for_key(name):\r\n File \"/bb/bin/airflow_env/lib/python3.6/site-packages/airflow/utils/log/secrets_masker.py\", line 74, in should_hide_value_for_key\r\n name = name.strip().lower()\r\nAttributeError: 'int' object has no attribute 'strip'\r\n```\r\n\r\n**How to reproduce it**:\r\nDefine a dictionary where the type of keys is `int` and print it in any Airflow tasks.\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Mask sensitive information from logs\"\"\"\nimport collections\nimport logging\nimport re\nfrom typing import TYPE_CHECKING, Iterable, Optional, Set, TypeVar, Union\n\nfrom airflow.compat.functools import cache, cached_property\n\nif TYPE_CHECKING:\n from airflow.typing_compat import RePatternType\n\n RedactableItem = TypeVar('RedactableItem')\n\n\nlog = logging.getLogger(__name__)\n\n\nDEFAULT_SENSITIVE_FIELDS = frozenset(\n {\n 'access_token',\n 'api_key',\n 'apikey',\n 'authorization',\n 'passphrase',\n 'passwd',\n 'password',\n 'private_key',\n 'secret',\n }\n)\n\"\"\"Names of fields (Connection extra, Variable key name etc.) that are deemed sensitive\"\"\"\n\n\n@cache\ndef get_sensitive_variables_fields():\n \"\"\"Get comma-separated sensitive Variable Fields from airflow.cfg.\"\"\"\n from airflow.configuration import conf\n\n sensitive_fields = DEFAULT_SENSITIVE_FIELDS.copy()\n sensitive_variable_fields = conf.get('core', 'sensitive_var_conn_names')\n if sensitive_variable_fields:\n sensitive_fields |= frozenset({field.strip() for field in sensitive_variable_fields.split(',')})\n return sensitive_fields\n\n\ndef should_hide_value_for_key(name):\n \"\"\"Should the value for this given name (Variable name, or key in conn.extra_dejson) be hidden\"\"\"\n from airflow import settings\n\n if name and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:\n name = name.strip().lower()\n return any(s in name for s in get_sensitive_variables_fields())\n return False\n\n\ndef mask_secret(secret: Union[str, dict, Iterable], name: str = None) -> None:\n \"\"\"\n Mask a secret from appearing in the task logs.\n\n If ``name`` is provided, then it will only be masked if the name matches\n one of the configured \"sensitive\" names.\n\n If ``secret`` is a dict or a iterable (excluding str) then it will be\n recursively walked and keys with sensitive names will be hidden.\n \"\"\"\n # Delay import\n from airflow import settings\n\n # Filtering all log messages is not a free process, so we only do it when\n # running tasks\n if not settings.MASK_SECRETS_IN_LOGS or not secret:\n return\n\n _secrets_masker().add_mask(secret, name)\n\n\ndef redact(value: \"RedactableItem\", name: str = None) -> \"RedactableItem\":\n \"\"\"Redact any secrets found in ``value``.\"\"\"\n return _secrets_masker().redact(value, name)\n\n\n@cache\ndef _secrets_masker() -> \"SecretsMasker\":\n\n for flt in logging.getLogger('airflow.task').filters:\n if isinstance(flt, SecretsMasker):\n return flt\n raise RuntimeError(\"No SecretsMasker found!\")\n\n\nclass SecretsMasker(logging.Filter):\n \"\"\"Redact secrets from logs\"\"\"\n\n replacer: Optional[\"RePatternType\"] = None\n patterns: Set[str]\n\n ALREADY_FILTERED_FLAG = \"__SecretsMasker_filtered\"\n MAX_RECURSION_DEPTH = 5\n\n def __init__(self):\n super().__init__()\n self.patterns = set()\n\n @cached_property\n def _record_attrs_to_ignore(self) -> Iterable[str]:\n # Doing log.info(..., extra={'foo': 2}) sets extra properties on\n # record, i.e. record.foo. And we need to filter those too. 
Fun\n #\n # Create a record, and look at what attributes are on it, and ignore\n # all the default ones!\n\n record = logging.getLogRecordFactory()(\n # name, level, pathname, lineno, msg, args, exc_info, func=None, sinfo=None,\n \"x\",\n logging.INFO,\n __file__,\n 1,\n \"\",\n tuple(),\n exc_info=None,\n func=\"funcname\",\n )\n return frozenset(record.__dict__).difference({'msg', 'args'})\n\n def filter(self, record) -> bool:\n if self.ALREADY_FILTERED_FLAG in record.__dict__:\n # Filters are attached to multiple handlers and logs, keep a\n # \"private\" flag that stops us needing to process it more than once\n return True\n\n if self.replacer:\n for k, v in record.__dict__.items():\n if k in self._record_attrs_to_ignore:\n continue\n record.__dict__[k] = self.redact(v)\n if record.exc_info and record.exc_info[1] is not None:\n exc = record.exc_info[1]\n # I'm not sure if this is a good idea!\n exc.args = (self.redact(v) for v in exc.args)\n record.__dict__[self.ALREADY_FILTERED_FLAG] = True\n\n return True\n\n def _redact_all(self, item: \"RedactableItem\", depth: int) -> \"RedactableItem\":\n if depth > self.MAX_RECURSION_DEPTH or isinstance(item, str):\n return '***'\n if isinstance(item, dict):\n return {dict_key: self._redact_all(subval, depth + 1) for dict_key, subval in item.items()}\n elif isinstance(item, (tuple, set)):\n # Turn set in to tuple!\n return tuple(self._redact_all(subval, depth + 1) for subval in item)\n elif isinstance(item, list):\n return list(self._redact_all(subval, depth + 1) for subval in item)\n else:\n return item\n\n # pylint: disable=too-many-return-statements\n def _redact(self, item: \"RedactableItem\", name: Optional[str], depth: int) -> \"RedactableItem\":\n # Avoid spending too much effort on redacting on deeply nested\n # structures. This also avoid infinite recursion if a structure has\n # reference to self.\n if depth > self.MAX_RECURSION_DEPTH:\n return item\n try:\n if name and should_hide_value_for_key(name):\n return self._redact_all(item, depth)\n if isinstance(item, dict):\n return {\n dict_key: self._redact(subval, name=dict_key, depth=(depth + 1))\n for dict_key, subval in item.items()\n }\n elif isinstance(item, str):\n if self.replacer:\n # We can't replace specific values, but the key-based redacting\n # can still happen, so we can't short-circuit, we need to walk\n # the structure.\n return self.replacer.sub('***', item)\n return item\n elif isinstance(item, (tuple, set)):\n # Turn set in to tuple!\n return tuple(self._redact(subval, name=None, depth=(depth + 1)) for subval in item)\n elif isinstance(item, list):\n return [self._redact(subval, name=None, depth=(depth + 1)) for subval in item]\n else:\n return item\n # I think this should never happen, but it does not hurt to leave it just in case\n except Exception as e: # pylint: disable=broad-except\n log.warning(\n \"Unable to redact %r, please report this via <https://github.com/apache/airflow/issues>. 
\"\n \"Error was: %s: %s\",\n item,\n type(e).__name__,\n str(e),\n )\n return item\n\n def redact(self, item: \"RedactableItem\", name: Optional[str] = None) -> \"RedactableItem\":\n \"\"\"Redact an any secrets found in ``item``, if it is a string.\n\n If ``name`` is given, and it's a \"sensitive\" name (see\n :func:`should_hide_value_for_key`) then all string values in the item\n is redacted.\n \"\"\"\n return self._redact(item, name, depth=0)\n\n # pylint: enable=too-many-return-statements\n def add_mask(self, secret: Union[str, dict, Iterable], name: str = None):\n \"\"\"Add a new secret to be masked to this filter instance.\"\"\"\n if isinstance(secret, dict):\n for k, v in secret.items():\n self.add_mask(v, k)\n elif isinstance(secret, str):\n if not secret:\n return\n pattern = re.escape(secret)\n if pattern not in self.patterns and (not name or should_hide_value_for_key(name)):\n self.patterns.add(pattern)\n self.replacer = re.compile('|'.join(self.patterns))\n elif isinstance(secret, collections.abc.Iterable):\n for v in secret:\n self.add_mask(v, name)\n", "path": "airflow/utils/log/secrets_masker.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Mask sensitive information from logs\"\"\"\nimport collections\nimport logging\nimport re\nfrom typing import TYPE_CHECKING, Iterable, Optional, Set, TypeVar, Union\n\nfrom airflow.compat.functools import cache, cached_property\n\nif TYPE_CHECKING:\n from airflow.typing_compat import RePatternType\n\n RedactableItem = TypeVar('RedactableItem')\n\n\nlog = logging.getLogger(__name__)\n\n\nDEFAULT_SENSITIVE_FIELDS = frozenset(\n {\n 'access_token',\n 'api_key',\n 'apikey',\n 'authorization',\n 'passphrase',\n 'passwd',\n 'password',\n 'private_key',\n 'secret',\n }\n)\n\"\"\"Names of fields (Connection extra, Variable key name etc.) 
that are deemed sensitive\"\"\"\n\n\n@cache\ndef get_sensitive_variables_fields():\n \"\"\"Get comma-separated sensitive Variable Fields from airflow.cfg.\"\"\"\n from airflow.configuration import conf\n\n sensitive_fields = DEFAULT_SENSITIVE_FIELDS.copy()\n sensitive_variable_fields = conf.get('core', 'sensitive_var_conn_names')\n if sensitive_variable_fields:\n sensitive_fields |= frozenset({field.strip() for field in sensitive_variable_fields.split(',')})\n return sensitive_fields\n\n\ndef should_hide_value_for_key(name):\n \"\"\"Should the value for this given name (Variable name, or key in conn.extra_dejson) be hidden\"\"\"\n from airflow import settings\n\n if isinstance(name, str) and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:\n name = name.strip().lower()\n return any(s in name for s in get_sensitive_variables_fields())\n return False\n\n\ndef mask_secret(secret: Union[str, dict, Iterable], name: str = None) -> None:\n \"\"\"\n Mask a secret from appearing in the task logs.\n\n If ``name`` is provided, then it will only be masked if the name matches\n one of the configured \"sensitive\" names.\n\n If ``secret`` is a dict or a iterable (excluding str) then it will be\n recursively walked and keys with sensitive names will be hidden.\n \"\"\"\n # Delay import\n from airflow import settings\n\n # Filtering all log messages is not a free process, so we only do it when\n # running tasks\n if not settings.MASK_SECRETS_IN_LOGS or not secret:\n return\n\n _secrets_masker().add_mask(secret, name)\n\n\ndef redact(value: \"RedactableItem\", name: str = None) -> \"RedactableItem\":\n \"\"\"Redact any secrets found in ``value``.\"\"\"\n return _secrets_masker().redact(value, name)\n\n\n@cache\ndef _secrets_masker() -> \"SecretsMasker\":\n\n for flt in logging.getLogger('airflow.task').filters:\n if isinstance(flt, SecretsMasker):\n return flt\n raise RuntimeError(\"No SecretsMasker found!\")\n\n\nclass SecretsMasker(logging.Filter):\n \"\"\"Redact secrets from logs\"\"\"\n\n replacer: Optional[\"RePatternType\"] = None\n patterns: Set[str]\n\n ALREADY_FILTERED_FLAG = \"__SecretsMasker_filtered\"\n MAX_RECURSION_DEPTH = 5\n\n def __init__(self):\n super().__init__()\n self.patterns = set()\n\n @cached_property\n def _record_attrs_to_ignore(self) -> Iterable[str]:\n # Doing log.info(..., extra={'foo': 2}) sets extra properties on\n # record, i.e. record.foo. And we need to filter those too. 
Fun\n #\n # Create a record, and look at what attributes are on it, and ignore\n # all the default ones!\n\n record = logging.getLogRecordFactory()(\n # name, level, pathname, lineno, msg, args, exc_info, func=None, sinfo=None,\n \"x\",\n logging.INFO,\n __file__,\n 1,\n \"\",\n tuple(),\n exc_info=None,\n func=\"funcname\",\n )\n return frozenset(record.__dict__).difference({'msg', 'args'})\n\n def filter(self, record) -> bool:\n if self.ALREADY_FILTERED_FLAG in record.__dict__:\n # Filters are attached to multiple handlers and logs, keep a\n # \"private\" flag that stops us needing to process it more than once\n return True\n\n if self.replacer:\n for k, v in record.__dict__.items():\n if k in self._record_attrs_to_ignore:\n continue\n record.__dict__[k] = self.redact(v)\n if record.exc_info and record.exc_info[1] is not None:\n exc = record.exc_info[1]\n # I'm not sure if this is a good idea!\n exc.args = (self.redact(v) for v in exc.args)\n record.__dict__[self.ALREADY_FILTERED_FLAG] = True\n\n return True\n\n def _redact_all(self, item: \"RedactableItem\", depth: int) -> \"RedactableItem\":\n if depth > self.MAX_RECURSION_DEPTH or isinstance(item, str):\n return '***'\n if isinstance(item, dict):\n return {dict_key: self._redact_all(subval, depth + 1) for dict_key, subval in item.items()}\n elif isinstance(item, (tuple, set)):\n # Turn set in to tuple!\n return tuple(self._redact_all(subval, depth + 1) for subval in item)\n elif isinstance(item, list):\n return list(self._redact_all(subval, depth + 1) for subval in item)\n else:\n return item\n\n # pylint: disable=too-many-return-statements\n def _redact(self, item: \"RedactableItem\", name: Optional[str], depth: int) -> \"RedactableItem\":\n # Avoid spending too much effort on redacting on deeply nested\n # structures. This also avoid infinite recursion if a structure has\n # reference to self.\n if depth > self.MAX_RECURSION_DEPTH:\n return item\n try:\n if name and should_hide_value_for_key(name):\n return self._redact_all(item, depth)\n if isinstance(item, dict):\n return {\n dict_key: self._redact(subval, name=dict_key, depth=(depth + 1))\n for dict_key, subval in item.items()\n }\n elif isinstance(item, str):\n if self.replacer:\n # We can't replace specific values, but the key-based redacting\n # can still happen, so we can't short-circuit, we need to walk\n # the structure.\n return self.replacer.sub('***', item)\n return item\n elif isinstance(item, (tuple, set)):\n # Turn set in to tuple!\n return tuple(self._redact(subval, name=None, depth=(depth + 1)) for subval in item)\n elif isinstance(item, list):\n return [self._redact(subval, name=None, depth=(depth + 1)) for subval in item]\n else:\n return item\n # I think this should never happen, but it does not hurt to leave it just in case\n except Exception as e: # pylint: disable=broad-except\n log.warning(\n \"Unable to redact %r, please report this via <https://github.com/apache/airflow/issues>. 
\"\n \"Error was: %s: %s\",\n item,\n type(e).__name__,\n str(e),\n )\n return item\n\n def redact(self, item: \"RedactableItem\", name: Optional[str] = None) -> \"RedactableItem\":\n \"\"\"Redact an any secrets found in ``item``, if it is a string.\n\n If ``name`` is given, and it's a \"sensitive\" name (see\n :func:`should_hide_value_for_key`) then all string values in the item\n is redacted.\n \"\"\"\n return self._redact(item, name, depth=0)\n\n # pylint: enable=too-many-return-statements\n def add_mask(self, secret: Union[str, dict, Iterable], name: str = None):\n \"\"\"Add a new secret to be masked to this filter instance.\"\"\"\n if isinstance(secret, dict):\n for k, v in secret.items():\n self.add_mask(v, k)\n elif isinstance(secret, str):\n if not secret:\n return\n pattern = re.escape(secret)\n if pattern not in self.patterns and (not name or should_hide_value_for_key(name)):\n self.patterns.add(pattern)\n self.replacer = re.compile('|'.join(self.patterns))\n elif isinstance(secret, collections.abc.Iterable):\n for v in secret:\n self.add_mask(v, name)\n", "path": "airflow/utils/log/secrets_masker.py"}]} | 3,479 | 152 |
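A minimal, self-contained Python sketch of the failure and of the `isinstance(name, str)` guard that the golden diff above introduces. None of this is Airflow code: the `SENSITIVE` set, the two check functions, and the tiny `redact` walk are illustrative stand-ins for `should_hide_value_for_key` and `SecretsMasker._redact`.

```python
# Minimal reproduction of the reported crash and of the guarded check.
SENSITIVE = {"password", "secret", "api_key"}  # illustrative stand-in set

def should_hide_value_for_key_buggy(name):
    # Crashes with AttributeError when a dict key is an int, e.g. {1: "x"}.
    if name:
        name = name.strip().lower()
        return any(s in name for s in SENSITIVE)
    return False

def should_hide_value_for_key_fixed(name):
    # Mirrors the golden diff: only string keys are treated as candidate names.
    if isinstance(name, str):
        name = name.strip().lower()
        return any(s in name for s in SENSITIVE)
    return False

def redact(item, name=None, check=should_hide_value_for_key_fixed):
    # Greatly simplified recursive walk in the spirit of SecretsMasker._redact.
    if name is not None and check(name):
        return "***"
    if isinstance(item, dict):
        return {k: redact(v, name=k, check=check) for k, v in item.items()}
    return item

print(redact({1: "value under an int key", "password": "hunter2"}))
# -> {1: 'value under an int key', 'password': '***'}
```

Calling `should_hide_value_for_key_buggy(1)` instead raises `AttributeError: 'int' object has no attribute 'strip'`, which is the traceback reported in the issue.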
gh_patches_debug_13819 | rasdani/github-patches | git_diff | pypa__pipenv-5380 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Could not find a version that satisfies the requirement" for package in private repository starting from pipenv==2022.8.31
### Issue description
With a Pipfile like this...
```
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[[source]]
url = "https://pypi.ourinternalrepo.com/simple"
verify_ssl = true
name = "our-pypi"
[packages]
..etc..
ourinternalpackage = {version = "==0.5.9", index = "our-pypi"}
```
that generates a Pipfile.lock like this...
```
{
"_meta": {
"hash": {
"sha256": "something"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.8"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.org/simple",
"verify_ssl": true
},
{
"name": "our-pypi",
"url": "https://pypi.ourinternalrepo.com/simple",
"verify_ssl": true
}
]
},
"default": {
"ourinternalpackage ": {
"hashes": [
"sha256:something"
],
"index": "our-pypi",
"version": "==0.1.1"
},
...etc
```
but when we execute `pipenv sync --verbose`, it generates a pip install command like so
```
/usr/local/lib/python3.8/site-packages/pipenv/patched/pip/__pip-runner__.py install -i https://pypi.org/simple --no-input --upgrade --no-deps --exists-action=i -r /tmp/pipenv-etc-requirements/pipenv-etc-hashed-reqs.txt
```
(note "-i https://pypi.org/simple")
and as might be expected, this fails with `6.552 ERROR: Could not find a version that satisfies the requirement ourinternalpackage==0.1.1 (from versions: none)`. This appears to have started happening with 2022.8.31--when we `pip install -U pipenv==2022.8.30` and run `pipenv sync` from there, the package resolves successfully.
### Expected result
I would expect the package install to generate with a -i pointed at the `our-pypi` source rather than the default.
### Actual result
```
6.552 ERROR: Could not find a version that satisfies the requirement ourinternalpackage==0.1.1 (from versions: none)
```
There are also bunches of these, that may or may not be irrelevant. I can't provide the full --verbose output because cleansing it of work-internal information would be too much hassle, but I have it around if anyone wants specific snippets.
```
An error occurred while installing alembic==1.8.0; python_version >= '3.7' --hash=sha256:a2d4d90da70b30e70352cd9455e35873a255a31402a438fe24815758d7a0e5e1 --hash=sha256:b5ae4bbfc7d1302ed413989d39474d102e7cfa158f6d5969d2497955ffe85a30! Will try again.
```
### Steps to replicate
See above
-------------------------------------------------------------------------------
Please run `$ pipenv --support`, and paste the results here. Don't put backticks (`` ` ``) around it! The output already contains Markdown formatting.
<details><summary>$ pipenv --support</summary>
Pipenv version: `'2022.9.8'`
Pipenv location: `'/usr/local/lib/python3.8/site-packages/pipenv'`
Python location: `'/usr/local/bin/python'`
OS Name: `'posix'`
User pip version: `'22.2.2'`
user Python installations found:
- `3.8.14`: `/usr/local/bin/python`
- `3.8.14`: `/usr/local/bin/python3`
- `3.8.14`: `/usr/local/bin/python3.8`
- `3.7.3`: `/usr/bin/python3`
- `3.7.3`: `/usr/bin/python3.7`
- `3.7.3`: `/usr/bin/python3.7m`
- `2.7.16`: `/usr/bin/python`
- `2.7.16`: `/usr/bin/python2.7`
- `2.7.16`: `/usr/bin/python2`
PEP 508 Information:
```
{'implementation_name': 'cpython',
'implementation_version': '3.8.14',
'os_name': 'posix',
'platform_machine': 'x86_64',
'platform_python_implementation': 'CPython',
'platform_release': '5.4.0-1051-gcp',
'platform_system': 'Linux',
'platform_version': '#55~18.04.1-Ubuntu SMP Sun Aug 1 20:38:04 UTC 2021',
'python_full_version': '3.8.14',
'python_version': '3.8',
'sys_platform': 'linux'}
```
System environment variables:
- `PIPENV_VENV_IN_PROJECT`
- `HOSTNAME`
- `PYTHON_VERSION`
- `PWD`
- `PYTHON_SETUPTOOLS_VERSION`
- `PIPENV_CACHE_DIR`
- `HOME`
- `LANG`
- `LS_COLORS`
- `GPG_KEY`
- `PYTHONPATH`
- `TERM`
- `SHLVL`
- `PYTHON_PIP_VERSION`
- `PYTHON_GET_PIP_SHA256`
- `PYTHON_GET_PIP_URL`
- `PATH`
- `_`
- `PIP_DISABLE_PIP_VERSION_CHECK`
- `PIP_PYTHON_PATH`
- `PYTHONDONTWRITEBYTECODE`
- `PYTHONFINDER_IGNORE_UNSUPPORTED`
Pipenv–specific environment variables:
- `PIPENV_VENV_IN_PROJECT`: `1`
- `PIPENV_CACHE_DIR`: `/tmp`
Debug–specific environment variables:
- `PATH`: `/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin`
- `LANG`: `C.UTF-8`
- `PWD`: `/code`
---------------------------
Contents of `Pipfile` ('/code/Pipfile'):
```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
[dev-packages]
[requires]
python_version = "3.8"
```
Contents of `Pipfile.lock` ('/code/Pipfile.lock'):
(see above for selected snippets, full output not included for reasons stated above)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pipenv/utils/indexes.py`
Content:
```
1 from __future__ import annotations
2
3 import re
4 from collections.abc import Mapping
5
6 from pipenv.exceptions import PipenvUsageError
7 from pipenv.patched.pip._vendor.urllib3.util import parse_url
8 from pipenv.utils.constants import MYPY_RUNNING
9
10 from .internet import create_mirror_source, is_pypi_url
11
12 if MYPY_RUNNING:
13 from typing import List, Optional, Union # noqa
14
15 from pipenv.project import Project, TSource # noqa
16
17
18 def prepare_pip_source_args(sources, pip_args=None):
19 if pip_args is None:
20 pip_args = []
21 if sources:
22 # Add the source to pip.
23 package_url = sources[0].get("url")
24 if not package_url:
25 raise PipenvUsageError("[[source]] section does not contain a URL.")
26 pip_args.extend(["-i", package_url])
27 # Trust the host if it's not verified.
28 if not sources[0].get("verify_ssl", True):
29 url_parts = parse_url(package_url)
30 url_port = f":{url_parts.port}" if url_parts.port else ""
31 pip_args.extend(["--trusted-host", f"{url_parts.host}{url_port}"])
32 # Add additional sources as extra indexes.
33 if len(sources) > 1:
34 for source in sources[1:]:
35 url = source.get("url")
36 if not url: # not harmless, just don't continue
37 continue
38 pip_args.extend(["--extra-index-url", url])
39 # Trust the host if it's not verified.
40 if not source.get("verify_ssl", True):
41 url_parts = parse_url(url)
42 url_port = f":{url_parts.port}" if url_parts.port else ""
43 pip_args.extend(["--trusted-host", f"{url_parts.host}{url_port}"])
44 return pip_args
45
46
47 def get_project_index(
48 project: Project,
49 index: Optional[Union[str, TSource]] = None,
50 trusted_hosts: Optional[List[str]] = None,
51 ) -> TSource:
52 from pipenv.project import SourceNotFound
53
54 if trusted_hosts is None:
55 trusted_hosts = []
56 if isinstance(index, Mapping):
57 return project.find_source(index.get("url"))
58 try:
59 source = project.find_source(index)
60 except SourceNotFound:
61 index_url = parse_url(index)
62 src_name = project.src_name_from_url(index)
63 verify_ssl = index_url.host not in trusted_hosts
64 source = {"url": index, "verify_ssl": verify_ssl, "name": src_name}
65 return source
66
67
68 def get_source_list(
69 project: Project,
70 index: Optional[Union[str, TSource]] = None,
71 extra_indexes: Optional[Union[str, List[str]]] = None,
72 trusted_hosts: Optional[List[str]] = None,
73 pypi_mirror: Optional[str] = None,
74 ) -> List[TSource]:
75 sources: List[TSource] = []
76 if index:
77 sources.append(get_project_index(project, index))
78 if extra_indexes:
79 if isinstance(extra_indexes, str):
80 extra_indexes = [extra_indexes]
81
82 for source in extra_indexes:
83 extra_src = get_project_index(project, source)
84 if not sources or extra_src["url"] != sources[0]["url"]:
85 sources.append(extra_src)
86
87 for source in project.pipfile_sources:
88 if not sources or source["url"] != sources[0]["url"]:
89 sources.append(source)
90
91 if not sources:
92 sources = project.pipfile_sources[:]
93 if pypi_mirror:
94 sources = [
95 create_mirror_source(pypi_mirror) if is_pypi_url(source["url"]) else source
96 for source in sources
97 ]
98 return sources
99
100
101 def parse_indexes(line, strict=False):
102 from argparse import ArgumentParser
103
104 comment_re = re.compile(r"(?:^|\s+)#.*$")
105 line = comment_re.sub("", line)
106 parser = ArgumentParser("indexes", allow_abbrev=False)
107 parser.add_argument("-i", "--index-url", dest="index")
108 parser.add_argument("--extra-index-url", dest="extra_index")
109 parser.add_argument("--trusted-host", dest="trusted_host")
110 args, remainder = parser.parse_known_args(line.split())
111 index = args.index
112 extra_index = args.extra_index
113 trusted_host = args.trusted_host
114 if (
115 strict
116 and sum(bool(arg) for arg in (index, extra_index, trusted_host, remainder)) > 1
117 ):
118 raise ValueError("Index arguments must be on their own lines.")
119 return index, extra_index, trusted_host, remainder
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pipenv/utils/indexes.py b/pipenv/utils/indexes.py
--- a/pipenv/utils/indexes.py
+++ b/pipenv/utils/indexes.py
@@ -84,12 +84,12 @@
if not sources or extra_src["url"] != sources[0]["url"]:
sources.append(extra_src)
- for source in project.pipfile_sources:
+ for source in project.sources:
if not sources or source["url"] != sources[0]["url"]:
sources.append(source)
if not sources:
- sources = project.pipfile_sources[:]
+ sources = project.sources[:]
if pypi_mirror:
sources = [
create_mirror_source(pypi_mirror) if is_pypi_url(source["url"]) else source
| {"golden_diff": "diff --git a/pipenv/utils/indexes.py b/pipenv/utils/indexes.py\n--- a/pipenv/utils/indexes.py\n+++ b/pipenv/utils/indexes.py\n@@ -84,12 +84,12 @@\n if not sources or extra_src[\"url\"] != sources[0][\"url\"]:\n sources.append(extra_src)\n \n- for source in project.pipfile_sources:\n+ for source in project.sources:\n if not sources or source[\"url\"] != sources[0][\"url\"]:\n sources.append(source)\n \n if not sources:\n- sources = project.pipfile_sources[:]\n+ sources = project.sources[:]\n if pypi_mirror:\n sources = [\n create_mirror_source(pypi_mirror) if is_pypi_url(source[\"url\"]) else source\n", "issue": "\"Could not find a version that satisfies the requirement\" for package in private repository starting from pipenv==2022.8.31 \n### Issue description\r\n\r\nWith a Pipfile like this...\r\n```\r\n[[source]]\r\nurl = \"https://pypi.org/simple\"\r\nverify_ssl = true\r\nname = \"pypi\"\r\n\r\n[[source]]\r\nurl = \"https://pypi.ourinternalrepo.com/simple\"\r\nverify_ssl = true\r\nname = \"our-pypi\"\r\n\r\n[packages]\r\n..etc..\r\nourinternalpackage = {version = \"==0.5.9\", index = \"our-pypi\"}\r\n```\r\nthat generates a Pipfile.lock like this...\r\n```\r\n{\r\n \"_meta\": {\r\n \"hash\": {\r\n \"sha256\": \"something\"\r\n },\r\n \"pipfile-spec\": 6,\r\n \"requires\": {\r\n \"python_version\": \"3.8\"\r\n },\r\n \"sources\": [\r\n {\r\n \"name\": \"pypi\",\r\n \"url\": \"https://pypi.org/simple\",\r\n \"verify_ssl\": true\r\n },\r\n {\r\n \"name\": \"our-pypi\",\r\n \"url\": \"https://pypi.ourinternalrepo.com/simple\",\r\n \"verify_ssl\": true\r\n }\r\n ]\r\n },\r\n \"default\": {\r\n \"ourinternalpackage \": {\r\n \"hashes\": [\r\n \"sha256:something\"\r\n ],\r\n \"index\": \"our-pypi\",\r\n \"version\": \"==0.1.1\"\r\n },\r\n ...etc\r\n```\r\n\r\nbut when we execute `pipenv sync --verbose`, it generates a pip install command like so\r\n\r\n```\r\n/usr/local/lib/python3.8/site-packages/pipenv/patched/pip/__pip-runner__.py install -i https://pypi.org/simple --no-input --upgrade --no-deps --exists-action=i -r /tmp/pipenv-etc-requirements/pipenv-etc-hashed-reqs.txt\r\n```\r\n\r\n(note \"-i https://pypi.org/simple\")\r\n\r\nand as might be expected, this fails with `6.552 ERROR: Could not find a version that satisfies the requirement ourinternalpackage==0.1.1 (from versions: none)`. This appears to have started happening with 2022.8.31--when we `pip install -U pipenv==2022.8.30` and run `pipenv sync` from there, the package resolves successfully.\r\n\r\n### Expected result\r\n\r\nI would expect the package install to generate with a -i pointed at the `our-pypi` source rather than the default.\r\n\r\n### Actual result\r\n\r\n```\r\n6.552 ERROR: Could not find a version that satisfies the requirement ourinternalpackage==0.1.1 (from versions: none)\r\n```\r\n\r\nThere are also bunches of these, that may or may not be irrelevant. I can't provide the full --verbose output because cleansing it of work-internal information would be too much hassle, but I have it around if anyone wants specific snippets.\r\n\r\n```\r\nAn error occurred while installing alembic==1.8.0; python_version >= '3.7' --hash=sha256:a2d4d90da70b30e70352cd9455e35873a255a31402a438fe24815758d7a0e5e1 --hash=sha256:b5ae4bbfc7d1302ed413989d39474d102e7cfa158f6d5969d2497955ffe85a30! Will try again.\r\n```\r\n### Steps to replicate\r\n\r\nSee above\r\n\r\n-------------------------------------------------------------------------------\r\n\r\nPlease run `$ pipenv --support`, and paste the results here. 
Don't put backticks (`` ` ``) around it! The output already contains Markdown formatting.\r\n<details><summary>$ pipenv --support</summary>\r\n\r\nPipenv version: `'2022.9.8'`\r\n\r\nPipenv location: `'/usr/local/lib/python3.8/site-packages/pipenv'`\r\n\r\nPython location: `'/usr/local/bin/python'`\r\n\r\nOS Name: `'posix'`\r\n\r\nUser pip version: `'22.2.2'`\r\n\r\nuser Python installations found:\r\n\r\n - `3.8.14`: `/usr/local/bin/python`\r\n - `3.8.14`: `/usr/local/bin/python3`\r\n - `3.8.14`: `/usr/local/bin/python3.8`\r\n - `3.7.3`: `/usr/bin/python3`\r\n - `3.7.3`: `/usr/bin/python3.7`\r\n - `3.7.3`: `/usr/bin/python3.7m`\r\n - `2.7.16`: `/usr/bin/python`\r\n - `2.7.16`: `/usr/bin/python2.7`\r\n - `2.7.16`: `/usr/bin/python2`\r\n\r\nPEP 508 Information:\r\n\r\n```\r\n{'implementation_name': 'cpython',\r\n 'implementation_version': '3.8.14',\r\n 'os_name': 'posix',\r\n 'platform_machine': 'x86_64',\r\n 'platform_python_implementation': 'CPython',\r\n 'platform_release': '5.4.0-1051-gcp',\r\n 'platform_system': 'Linux',\r\n 'platform_version': '#55~18.04.1-Ubuntu SMP Sun Aug 1 20:38:04 UTC 2021',\r\n 'python_full_version': '3.8.14',\r\n 'python_version': '3.8',\r\n 'sys_platform': 'linux'}\r\n```\r\n\r\nSystem environment variables:\r\n\r\n - `PIPENV_VENV_IN_PROJECT`\r\n - `HOSTNAME`\r\n - `PYTHON_VERSION`\r\n - `PWD`\r\n - `PYTHON_SETUPTOOLS_VERSION`\r\n - `PIPENV_CACHE_DIR`\r\n - `HOME`\r\n - `LANG`\r\n - `LS_COLORS`\r\n - `GPG_KEY`\r\n - `PYTHONPATH`\r\n - `TERM`\r\n - `SHLVL`\r\n - `PYTHON_PIP_VERSION`\r\n - `PYTHON_GET_PIP_SHA256`\r\n - `PYTHON_GET_PIP_URL`\r\n - `PATH`\r\n - `_`\r\n - `PIP_DISABLE_PIP_VERSION_CHECK`\r\n - `PIP_PYTHON_PATH`\r\n - `PYTHONDONTWRITEBYTECODE`\r\n - `PYTHONFINDER_IGNORE_UNSUPPORTED`\r\n\r\nPipenv\u2013specific environment variables:\r\n\r\n - `PIPENV_VENV_IN_PROJECT`: `1`\r\n - `PIPENV_CACHE_DIR`: `/tmp`\r\n\r\nDebug\u2013specific environment variables:\r\n\r\n - `PATH`: `/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin`\r\n - `LANG`: `C.UTF-8`\r\n - `PWD`: `/code`\r\n\r\n\r\n---------------------------\r\n\r\nContents of `Pipfile` ('/code/Pipfile'):\r\n\r\n```toml\r\n[[source]]\r\nurl = \"https://pypi.org/simple\"\r\nverify_ssl = true\r\nname = \"pypi\"\r\n\r\n[packages]\r\n\r\n[dev-packages]\r\n\r\n[requires]\r\npython_version = \"3.8\"\r\n\r\n```\r\n\r\n\r\nContents of `Pipfile.lock` ('/code/Pipfile.lock'):\r\n\r\n(see above for selected snippets, full output not included for reasons stated above)\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport re\nfrom collections.abc import Mapping\n\nfrom pipenv.exceptions import PipenvUsageError\nfrom pipenv.patched.pip._vendor.urllib3.util import parse_url\nfrom pipenv.utils.constants import MYPY_RUNNING\n\nfrom .internet import create_mirror_source, is_pypi_url\n\nif MYPY_RUNNING:\n from typing import List, Optional, Union # noqa\n\n from pipenv.project import Project, TSource # noqa\n\n\ndef prepare_pip_source_args(sources, pip_args=None):\n if pip_args is None:\n pip_args = []\n if sources:\n # Add the source to pip.\n package_url = sources[0].get(\"url\")\n if not package_url:\n raise PipenvUsageError(\"[[source]] section does not contain a URL.\")\n pip_args.extend([\"-i\", package_url])\n # Trust the host if it's not verified.\n if not sources[0].get(\"verify_ssl\", True):\n url_parts = parse_url(package_url)\n url_port = f\":{url_parts.port}\" if url_parts.port else \"\"\n pip_args.extend([\"--trusted-host\", 
f\"{url_parts.host}{url_port}\"])\n # Add additional sources as extra indexes.\n if len(sources) > 1:\n for source in sources[1:]:\n url = source.get(\"url\")\n if not url: # not harmless, just don't continue\n continue\n pip_args.extend([\"--extra-index-url\", url])\n # Trust the host if it's not verified.\n if not source.get(\"verify_ssl\", True):\n url_parts = parse_url(url)\n url_port = f\":{url_parts.port}\" if url_parts.port else \"\"\n pip_args.extend([\"--trusted-host\", f\"{url_parts.host}{url_port}\"])\n return pip_args\n\n\ndef get_project_index(\n project: Project,\n index: Optional[Union[str, TSource]] = None,\n trusted_hosts: Optional[List[str]] = None,\n) -> TSource:\n from pipenv.project import SourceNotFound\n\n if trusted_hosts is None:\n trusted_hosts = []\n if isinstance(index, Mapping):\n return project.find_source(index.get(\"url\"))\n try:\n source = project.find_source(index)\n except SourceNotFound:\n index_url = parse_url(index)\n src_name = project.src_name_from_url(index)\n verify_ssl = index_url.host not in trusted_hosts\n source = {\"url\": index, \"verify_ssl\": verify_ssl, \"name\": src_name}\n return source\n\n\ndef get_source_list(\n project: Project,\n index: Optional[Union[str, TSource]] = None,\n extra_indexes: Optional[Union[str, List[str]]] = None,\n trusted_hosts: Optional[List[str]] = None,\n pypi_mirror: Optional[str] = None,\n) -> List[TSource]:\n sources: List[TSource] = []\n if index:\n sources.append(get_project_index(project, index))\n if extra_indexes:\n if isinstance(extra_indexes, str):\n extra_indexes = [extra_indexes]\n\n for source in extra_indexes:\n extra_src = get_project_index(project, source)\n if not sources or extra_src[\"url\"] != sources[0][\"url\"]:\n sources.append(extra_src)\n\n for source in project.pipfile_sources:\n if not sources or source[\"url\"] != sources[0][\"url\"]:\n sources.append(source)\n\n if not sources:\n sources = project.pipfile_sources[:]\n if pypi_mirror:\n sources = [\n create_mirror_source(pypi_mirror) if is_pypi_url(source[\"url\"]) else source\n for source in sources\n ]\n return sources\n\n\ndef parse_indexes(line, strict=False):\n from argparse import ArgumentParser\n\n comment_re = re.compile(r\"(?:^|\\s+)#.*$\")\n line = comment_re.sub(\"\", line)\n parser = ArgumentParser(\"indexes\", allow_abbrev=False)\n parser.add_argument(\"-i\", \"--index-url\", dest=\"index\")\n parser.add_argument(\"--extra-index-url\", dest=\"extra_index\")\n parser.add_argument(\"--trusted-host\", dest=\"trusted_host\")\n args, remainder = parser.parse_known_args(line.split())\n index = args.index\n extra_index = args.extra_index\n trusted_host = args.trusted_host\n if (\n strict\n and sum(bool(arg) for arg in (index, extra_index, trusted_host, remainder)) > 1\n ):\n raise ValueError(\"Index arguments must be on their own lines.\")\n return index, extra_index, trusted_host, remainder\n", "path": "pipenv/utils/indexes.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport re\nfrom collections.abc import Mapping\n\nfrom pipenv.exceptions import PipenvUsageError\nfrom pipenv.patched.pip._vendor.urllib3.util import parse_url\nfrom pipenv.utils.constants import MYPY_RUNNING\n\nfrom .internet import create_mirror_source, is_pypi_url\n\nif MYPY_RUNNING:\n from typing import List, Optional, Union # noqa\n\n from pipenv.project import Project, TSource # noqa\n\n\ndef prepare_pip_source_args(sources, pip_args=None):\n if pip_args is None:\n pip_args = []\n if sources:\n # Add the source to pip.\n 
package_url = sources[0].get(\"url\")\n if not package_url:\n raise PipenvUsageError(\"[[source]] section does not contain a URL.\")\n pip_args.extend([\"-i\", package_url])\n # Trust the host if it's not verified.\n if not sources[0].get(\"verify_ssl\", True):\n url_parts = parse_url(package_url)\n url_port = f\":{url_parts.port}\" if url_parts.port else \"\"\n pip_args.extend([\"--trusted-host\", f\"{url_parts.host}{url_port}\"])\n # Add additional sources as extra indexes.\n if len(sources) > 1:\n for source in sources[1:]:\n url = source.get(\"url\")\n if not url: # not harmless, just don't continue\n continue\n pip_args.extend([\"--extra-index-url\", url])\n # Trust the host if it's not verified.\n if not source.get(\"verify_ssl\", True):\n url_parts = parse_url(url)\n url_port = f\":{url_parts.port}\" if url_parts.port else \"\"\n pip_args.extend([\"--trusted-host\", f\"{url_parts.host}{url_port}\"])\n return pip_args\n\n\ndef get_project_index(\n project: Project,\n index: Optional[Union[str, TSource]] = None,\n trusted_hosts: Optional[List[str]] = None,\n) -> TSource:\n from pipenv.project import SourceNotFound\n\n if trusted_hosts is None:\n trusted_hosts = []\n if isinstance(index, Mapping):\n return project.find_source(index.get(\"url\"))\n try:\n source = project.find_source(index)\n except SourceNotFound:\n index_url = parse_url(index)\n src_name = project.src_name_from_url(index)\n verify_ssl = index_url.host not in trusted_hosts\n source = {\"url\": index, \"verify_ssl\": verify_ssl, \"name\": src_name}\n return source\n\n\ndef get_source_list(\n project: Project,\n index: Optional[Union[str, TSource]] = None,\n extra_indexes: Optional[Union[str, List[str]]] = None,\n trusted_hosts: Optional[List[str]] = None,\n pypi_mirror: Optional[str] = None,\n) -> List[TSource]:\n sources: List[TSource] = []\n if index:\n sources.append(get_project_index(project, index))\n if extra_indexes:\n if isinstance(extra_indexes, str):\n extra_indexes = [extra_indexes]\n\n for source in extra_indexes:\n extra_src = get_project_index(project, source)\n if not sources or extra_src[\"url\"] != sources[0][\"url\"]:\n sources.append(extra_src)\n\n for source in project.sources:\n if not sources or source[\"url\"] != sources[0][\"url\"]:\n sources.append(source)\n\n if not sources:\n sources = project.sources[:]\n if pypi_mirror:\n sources = [\n create_mirror_source(pypi_mirror) if is_pypi_url(source[\"url\"]) else source\n for source in sources\n ]\n return sources\n\n\ndef parse_indexes(line, strict=False):\n from argparse import ArgumentParser\n\n comment_re = re.compile(r\"(?:^|\\s+)#.*$\")\n line = comment_re.sub(\"\", line)\n parser = ArgumentParser(\"indexes\", allow_abbrev=False)\n parser.add_argument(\"-i\", \"--index-url\", dest=\"index\")\n parser.add_argument(\"--extra-index-url\", dest=\"extra_index\")\n parser.add_argument(\"--trusted-host\", dest=\"trusted_host\")\n args, remainder = parser.parse_known_args(line.split())\n index = args.index\n extra_index = args.extra_index\n trusted_host = args.trusted_host\n if (\n strict\n and sum(bool(arg) for arg in (index, extra_index, trusted_host, remainder)) > 1\n ):\n raise ValueError(\"Index arguments must be on their own lines.\")\n return index, extra_index, trusted_host, remainder\n", "path": "pipenv/utils/indexes.py"}]} | 3,093 | 171 |
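A stripped-down sketch of how `prepare_pip_source_args` (quoted in the files above) turns a source list into pip flags; it shows why a source list that has lost the internal repository can only ever produce `-i https://pypi.org/simple`. `build_index_args` is an illustrative name, not the pipenv function, and the SSL/trusted-host handling is omitted.

```python
# Stripped-down restatement of prepare_pip_source_args (illustration only).
def build_index_args(sources):
    args = []
    if sources:
        # The first source becomes pip's primary index ...
        args += ["-i", sources[0]["url"]]
        # ... and every remaining source becomes an extra index.
        for source in sources[1:]:
            args += ["--extra-index-url", source["url"]]
    return args

sources = [
    {"name": "pypi", "url": "https://pypi.org/simple"},
    {"name": "our-pypi", "url": "https://pypi.ourinternalrepo.com/simple"},
]

# If the second source is dropped while building the list, pip only ever sees pypi.org:
print(build_index_args(sources[:1]))
# -> ['-i', 'https://pypi.org/simple']

# With the full source list, the internal repository is passed along as well:
print(build_index_args(sources))
# -> ['-i', 'https://pypi.org/simple',
#     '--extra-index-url', 'https://pypi.ourinternalrepo.com/simple']
```

The golden diff itself fixes the list-building step: `get_source_list` now falls back to `project.sources` rather than `project.pipfile_sources`, presumably so that the sources recorded for the lockfile are included before the arguments above are built.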
gh_patches_debug_19082 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1322 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Version to 0.17.0
See discussion in #1166.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/contrib/legacy_search/__init__.py`
Content:
```
1 # coding: utf-8
2
3 from __future__ import absolute_import, unicode_literals
4
5 import os
6 import logging
7 from mkdocs import utils
8 from mkdocs.plugins import BasePlugin
9 from mkdocs.contrib.legacy_search.search_index import SearchIndex
10
11
12 log = logging.getLogger(__name__)
13
14
15 class SearchPlugin(BasePlugin):
16 """ Add a search feature to MkDocs. """
17
18 def on_config(self, config, **kwargs):
19 "Add plugin templates and scripts to config."
20 path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates')
21 config['theme'].dirs.append(path)
22 config['theme'].static_templates.add('search.html')
23 config['extra_javascript'].append('search/require.js')
24 config['extra_javascript'].append('search/search.js')
25 return config
26
27 def on_pre_build(self, config, **kwargs):
28 "Create search index instance for later use."
29 self.search_index = SearchIndex()
30
31 def on_page_context(self, context, **kwargs):
32 "Add page to search index."
33 self.search_index.add_entry_from_context(context['page'])
34
35 def on_post_build(self, config, **kwargs):
36 "Build search index."
37 search_index = self.search_index.generate_search_index()
38 json_output_path = os.path.join(config['site_dir'], 'search', 'search_index.json')
39 utils.write_file(search_index.encode('utf-8'), json_output_path)
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/contrib/legacy_search/__init__.py b/mkdocs/contrib/legacy_search/__init__.py
--- a/mkdocs/contrib/legacy_search/__init__.py
+++ b/mkdocs/contrib/legacy_search/__init__.py
@@ -17,11 +17,13 @@
def on_config(self, config, **kwargs):
"Add plugin templates and scripts to config."
- path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates')
- config['theme'].dirs.append(path)
- config['theme'].static_templates.add('search.html')
- config['extra_javascript'].append('search/require.js')
- config['extra_javascript'].append('search/search.js')
+ if 'include_search_page' in config['theme'] and config['theme']['include_search_page']:
+ config['theme'].static_templates.add('search.html')
+ if not ('search_index_only' in config['theme'] and config['theme']['search_index_only']):
+ path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates')
+ config['theme'].dirs.append(path)
+ config['extra_javascript'].append('search/require.js')
+ config['extra_javascript'].append('search/search.js')
return config
def on_pre_build(self, config, **kwargs):
| {"golden_diff": "diff --git a/mkdocs/contrib/legacy_search/__init__.py b/mkdocs/contrib/legacy_search/__init__.py\n--- a/mkdocs/contrib/legacy_search/__init__.py\n+++ b/mkdocs/contrib/legacy_search/__init__.py\n@@ -17,11 +17,13 @@\n \n def on_config(self, config, **kwargs):\n \"Add plugin templates and scripts to config.\"\n- path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates')\n- config['theme'].dirs.append(path)\n- config['theme'].static_templates.add('search.html')\n- config['extra_javascript'].append('search/require.js')\n- config['extra_javascript'].append('search/search.js')\n+ if 'include_search_page' in config['theme'] and config['theme']['include_search_page']:\n+ config['theme'].static_templates.add('search.html')\n+ if not ('search_index_only' in config['theme'] and config['theme']['search_index_only']):\n+ path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates')\n+ config['theme'].dirs.append(path)\n+ config['extra_javascript'].append('search/require.js')\n+ config['extra_javascript'].append('search/search.js')\n return config\n \n def on_pre_build(self, config, **kwargs):\n", "issue": "Version to 0.17.0\nSee discussion in #1166.\n", "before_files": [{"content": "# coding: utf-8\n\nfrom __future__ import absolute_import, unicode_literals\n\nimport os\nimport logging\nfrom mkdocs import utils\nfrom mkdocs.plugins import BasePlugin\nfrom mkdocs.contrib.legacy_search.search_index import SearchIndex\n\n\nlog = logging.getLogger(__name__)\n\n\nclass SearchPlugin(BasePlugin):\n \"\"\" Add a search feature to MkDocs. \"\"\"\n\n def on_config(self, config, **kwargs):\n \"Add plugin templates and scripts to config.\"\n path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates')\n config['theme'].dirs.append(path)\n config['theme'].static_templates.add('search.html')\n config['extra_javascript'].append('search/require.js')\n config['extra_javascript'].append('search/search.js')\n return config\n\n def on_pre_build(self, config, **kwargs):\n \"Create search index instance for later use.\"\n self.search_index = SearchIndex()\n\n def on_page_context(self, context, **kwargs):\n \"Add page to search index.\"\n self.search_index.add_entry_from_context(context['page'])\n\n def on_post_build(self, config, **kwargs):\n \"Build search index.\"\n search_index = self.search_index.generate_search_index()\n json_output_path = os.path.join(config['site_dir'], 'search', 'search_index.json')\n utils.write_file(search_index.encode('utf-8'), json_output_path)\n", "path": "mkdocs/contrib/legacy_search/__init__.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom __future__ import absolute_import, unicode_literals\n\nimport os\nimport logging\nfrom mkdocs import utils\nfrom mkdocs.plugins import BasePlugin\nfrom mkdocs.contrib.legacy_search.search_index import SearchIndex\n\n\nlog = logging.getLogger(__name__)\n\n\nclass SearchPlugin(BasePlugin):\n \"\"\" Add a search feature to MkDocs. 
\"\"\"\n\n def on_config(self, config, **kwargs):\n \"Add plugin templates and scripts to config.\"\n if 'include_search_page' in config['theme'] and config['theme']['include_search_page']:\n config['theme'].static_templates.add('search.html')\n if not ('search_index_only' in config['theme'] and config['theme']['search_index_only']):\n path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates')\n config['theme'].dirs.append(path)\n config['extra_javascript'].append('search/require.js')\n config['extra_javascript'].append('search/search.js')\n return config\n\n def on_pre_build(self, config, **kwargs):\n \"Create search index instance for later use.\"\n self.search_index = SearchIndex()\n\n def on_page_context(self, context, **kwargs):\n \"Add page to search index.\"\n self.search_index.add_entry_from_context(context['page'])\n\n def on_post_build(self, config, **kwargs):\n \"Build search index.\"\n search_index = self.search_index.generate_search_index()\n json_output_path = os.path.join(config['site_dir'], 'search', 'search_index.json')\n utils.write_file(search_index.encode('utf-8'), json_output_path)\n", "path": "mkdocs/contrib/legacy_search/__init__.py"}]} | 660 | 301 |
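A standalone restatement of the patched `on_config` hook from the golden diff above, written over a plain dict so it runs outside MkDocs; in MkDocs itself `config['theme']` is a `Theme` object whose `dirs` and `static_templates` are attributes, as in the original plugin.

```python
import os

# The patched hook from the golden diff, restated over a plain dict so it runs
# outside MkDocs. In MkDocs, config['theme'] is a Theme object instead.
def on_config(config, template_dir):
    theme = config['theme']
    # Only register the search.html template if the theme wants a search page.
    if 'include_search_page' in theme and theme['include_search_page']:
        theme['static_templates'].add('search.html')
    # Themes that only consume the generated index skip the bundled templates/JS.
    if not ('search_index_only' in theme and theme['search_index_only']):
        theme['dirs'].append(template_dir)
        config['extra_javascript'].append('search/require.js')
        config['extra_javascript'].append('search/search.js')
    return config

config = {
    'theme': {
        'include_search_page': True,
        'search_index_only': False,
        'static_templates': set(),
        'dirs': [],
    },
    'extra_javascript': [],
}
print(on_config(config, os.path.join('legacy_search', 'templates')))
```

As the flag names suggest, `include_search_page` lets a theme request the rendered search page, while `search_index_only` lets it provide its own search UI and consume only the generated `search_index.json`.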
gh_patches_debug_35568 | rasdani/github-patches | git_diff | rotki__rotki-839 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing token from balances, requires pagination for aleth.io
## Problem Definition
I'm missing a token from my balances. When I query the aleth.io API for my token balances I get a paginated response back. Rotki only loads tokens from the first page (10 items).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/externalapis/alethio.py`
Content:
```
1 import logging
2 from json.decoder import JSONDecodeError
3 from typing import Any, Dict, List, Optional, Union, overload
4
5 import gevent
6 import requests
7 from eth_utils.address import to_checksum_address
8 from typing_extensions import Literal
9
10 from rotkehlchen.assets.asset import EthereumToken
11 from rotkehlchen.db.dbhandler import DBHandler
12 from rotkehlchen.errors import RemoteError
13 from rotkehlchen.externalapis.interface import ExternalServiceWithApiKey
14 from rotkehlchen.fval import FVal
15 from rotkehlchen.logging import RotkehlchenLogsAdapter
16 from rotkehlchen.typing import ChecksumEthAddress, EthTokenInfo, ExternalService
17 from rotkehlchen.user_messages import MessagesAggregator
18 from rotkehlchen.utils.serialization import rlk_jsonloads_dict
19
20 logger = logging.getLogger(__name__)
21 log = RotkehlchenLogsAdapter(logger)
22
23
24 class Alethio(ExternalServiceWithApiKey):
25
26 def __init__(
27 self,
28 database: DBHandler,
29 msg_aggregator: MessagesAggregator,
30 all_eth_tokens: List[EthTokenInfo],
31 ) -> None:
32 super().__init__(database=database, service_name=ExternalService.ALETHIO)
33 self.msg_aggregator = msg_aggregator
34 self.session = requests.session()
35 self.all_tokens = all_eth_tokens
36 self.session.headers.update({'User-Agent': 'rotkehlchen'})
37
38 @overload # noqa: F811
39 def _query( # pylint: disable=no-self-use
40 self,
41 root_endpoint: Literal['accounts'],
42 path: str,
43 ) -> List[Dict[str, Any]]:
44 ...
45
46 @overload # noqa: F811
47 def _query( # pylint: disable=no-self-use
48 self,
49 root_endpoint: Literal['foo'],
50 path: str,
51 ) -> Dict[str, Any]:
52 ...
53
54 def _query(self, root_endpoint: str, path: str) -> Union[Dict[str, Any], List]: # noqa: F811
55 query_str = f'https://api.aleth.io/v1/{root_endpoint}/{path}'
56 log.debug(f'Querying alethio for {query_str}')
57
58 api_key = self._get_api_key()
59 if api_key:
60 self.session.headers.update({'Authorization': f'Bearer {api_key}'})
61
62 backoff = 1
63 backoff_limit = 13
64 while backoff < backoff_limit:
65 try:
66 response = self.session.get(query_str)
67 except requests.exceptions.ConnectionError as e:
68 if 'Max retries exceeded with url' in str(e):
69 log.debug(
70 f'Got max retries exceeded from alethio. Will '
71 f'backoff for {backoff} seconds.',
72 )
73 gevent.sleep(backoff)
74 backoff = backoff * 2
75 if backoff >= backoff_limit:
76 raise RemoteError(
77 'Getting alethio max connections error even '
78 'after we incrementally backed off',
79 )
80 continue
81
82 raise RemoteError(f'Alethio API request failed due to {str(e)}')
83
84 if response.status_code == 429:
85 log.debug(
86 f'Got response: {response.text} from alethio. Will '
87 f'backoff for {backoff} seconds.',
88 )
89 gevent.sleep(backoff)
90 backoff = backoff * 2
91 if backoff >= backoff_limit:
92 raise RemoteError(
93 'Alethio keeps returning rate limit errors even '
94 'after we incrementally backed off',
95 )
96 continue
97
98 if response.status_code != 200:
99 raise RemoteError(
100 f'Alethio API request {response.url} failed '
101 f'with HTTP status code {response.status_code} and text '
102 f'{response.text}',
103 )
104
105 try:
106 json_ret = rlk_jsonloads_dict(response.text)
107 except JSONDecodeError:
108 raise RemoteError(f'alethio returned invalid JSON response: {response.text}')
109
110 data = json_ret.get('data', None)
111 if data is None:
112 errors = json_ret.get('errors', None)
113 if errors is None:
114 msg = f'Unexpected alethio response: {response.text}'
115 else:
116 msg = str(errors)
117 raise RemoteError(f'alethio response error: {msg}')
118
119 # if we got here we should return
120 break
121
122 return data
123
124 def token_address_to_identifier(self, address: ChecksumEthAddress) -> Optional[EthTokenInfo]:
125 # TODO: Cache these stuff in a mapping
126 for token in self.all_tokens:
127 if token.address == address:
128 return token
129
130 return None
131
132 def get_token_balances(self, account: ChecksumEthAddress) -> Dict[EthereumToken, FVal]:
133 """Auto-detect which tokens are owned and get token balances for the account
134
135 The returned balance is already normalized for the token's decimals.
136
137 May raise:
138 - RemoteError if there is a problem contacting aleth.io
139 """
140 balances = {}
141 data = self._query(root_endpoint='accounts', path=f'{account}/tokenBalances')
142 for entry in data:
143 entry_type = entry.get('type', None)
144 if entry_type == 'TokenBalance':
145
146 attributes = entry.get('attributes', None)
147 balance = None
148 if attributes is not None:
149 balance = attributes.get('balance', None)
150 if balance is None:
151 continue
152
153 relationships = entry.get('relationships', None)
154 if relationships is None:
155 continue
156 token = relationships.get('token', None)
157 if token is None:
158 continue
159 if 'data' not in token:
160 continue
161 if 'id' not in token['data']:
162 continue
163
164 token_address = to_checksum_address(token['data']['id'])
165 token_info = self.token_address_to_identifier(token_address)
166 if token_info is None:
167 continue
168
169 amount = FVal(balance) / (FVal(10) ** FVal(token_info.decimal))
170 balances[EthereumToken(token_info.symbol)] = amount
171
172 return balances
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rotkehlchen/externalapis/alethio.py b/rotkehlchen/externalapis/alethio.py
--- a/rotkehlchen/externalapis/alethio.py
+++ b/rotkehlchen/externalapis/alethio.py
@@ -40,6 +40,7 @@
self,
root_endpoint: Literal['accounts'],
path: str,
+ full_query_str: Optional[str] = None,
) -> List[Dict[str, Any]]:
...
@@ -48,11 +49,21 @@
self,
root_endpoint: Literal['foo'],
path: str,
+ full_query_str: Optional[str] = None,
) -> Dict[str, Any]:
...
- def _query(self, root_endpoint: str, path: str) -> Union[Dict[str, Any], List]: # noqa: F811
- query_str = f'https://api.aleth.io/v1/{root_endpoint}/{path}'
+ def _query( # noqa: F811
+ self,
+ root_endpoint: str,
+ path: str,
+ full_query_str: Optional[str] = None,
+ ) -> Union[Dict[str, Any], List]: # noqa: F811
+ if full_query_str:
+ # If this is a pagination call
+ query_str = full_query_str
+ else:
+ query_str = f'https://api.aleth.io/v1/{root_endpoint}/{path}?page[limit]=100'
log.debug(f'Querying alethio for {query_str}')
api_key = self._get_api_key()
@@ -116,6 +127,34 @@
msg = str(errors)
raise RemoteError(f'alethio response error: {msg}')
+ has_next = False
+ try:
+ has_next = json_ret['meta']['page']['hasNext']
+ except KeyError:
+ raise RemoteError(
+ f'Alethio response does not contain pagination information: {response.text}',
+ )
+
+ if has_next:
+ try:
+ link = json_ret['links']['next']
+ except KeyError:
+ raise RemoteError(
+ f'Alethio response does not contain next page link: {response.text}',
+ )
+
+ next_data = self._query( # type: ignore
+ root_endpoint=root_endpoint,
+ path=path,
+ full_query_str=link,
+ )
+ if root_endpoint == 'accounts':
+ data.extend(next_data)
+ else:
+ raise AssertionError(
+ 'Have not yet implemented alethio endpoints returning non lists',
+ )
+
# if we got here we should return
break
| {"golden_diff": "diff --git a/rotkehlchen/externalapis/alethio.py b/rotkehlchen/externalapis/alethio.py\n--- a/rotkehlchen/externalapis/alethio.py\n+++ b/rotkehlchen/externalapis/alethio.py\n@@ -40,6 +40,7 @@\n self,\n root_endpoint: Literal['accounts'],\n path: str,\n+ full_query_str: Optional[str] = None,\n ) -> List[Dict[str, Any]]:\n ...\n \n@@ -48,11 +49,21 @@\n self,\n root_endpoint: Literal['foo'],\n path: str,\n+ full_query_str: Optional[str] = None,\n ) -> Dict[str, Any]:\n ...\n \n- def _query(self, root_endpoint: str, path: str) -> Union[Dict[str, Any], List]: # noqa: F811\n- query_str = f'https://api.aleth.io/v1/{root_endpoint}/{path}'\n+ def _query( # noqa: F811\n+ self,\n+ root_endpoint: str,\n+ path: str,\n+ full_query_str: Optional[str] = None,\n+ ) -> Union[Dict[str, Any], List]: # noqa: F811\n+ if full_query_str:\n+ # If this is a pagination call\n+ query_str = full_query_str\n+ else:\n+ query_str = f'https://api.aleth.io/v1/{root_endpoint}/{path}?page[limit]=100'\n log.debug(f'Querying alethio for {query_str}')\n \n api_key = self._get_api_key()\n@@ -116,6 +127,34 @@\n msg = str(errors)\n raise RemoteError(f'alethio response error: {msg}')\n \n+ has_next = False\n+ try:\n+ has_next = json_ret['meta']['page']['hasNext']\n+ except KeyError:\n+ raise RemoteError(\n+ f'Alethio response does not contain pagination information: {response.text}',\n+ )\n+\n+ if has_next:\n+ try:\n+ link = json_ret['links']['next']\n+ except KeyError:\n+ raise RemoteError(\n+ f'Alethio response does not contain next page link: {response.text}',\n+ )\n+\n+ next_data = self._query( # type: ignore\n+ root_endpoint=root_endpoint,\n+ path=path,\n+ full_query_str=link,\n+ )\n+ if root_endpoint == 'accounts':\n+ data.extend(next_data)\n+ else:\n+ raise AssertionError(\n+ 'Have not yet implemented alethio endpoints returning non lists',\n+ )\n+\n # if we got here we should return\n break\n", "issue": "Missing token from balances, requires pagination for aleth.io\n## Problem Definition\r\n\r\nI'm missing a token from my balances. When I query aleth.io API for my token balances I get a paginated response back. 
Rotki does only load tokens from the first page (10 items).\n", "before_files": [{"content": "import logging\nfrom json.decoder import JSONDecodeError\nfrom typing import Any, Dict, List, Optional, Union, overload\n\nimport gevent\nimport requests\nfrom eth_utils.address import to_checksum_address\nfrom typing_extensions import Literal\n\nfrom rotkehlchen.assets.asset import EthereumToken\nfrom rotkehlchen.db.dbhandler import DBHandler\nfrom rotkehlchen.errors import RemoteError\nfrom rotkehlchen.externalapis.interface import ExternalServiceWithApiKey\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.typing import ChecksumEthAddress, EthTokenInfo, ExternalService\nfrom rotkehlchen.user_messages import MessagesAggregator\nfrom rotkehlchen.utils.serialization import rlk_jsonloads_dict\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass Alethio(ExternalServiceWithApiKey):\n\n def __init__(\n self,\n database: DBHandler,\n msg_aggregator: MessagesAggregator,\n all_eth_tokens: List[EthTokenInfo],\n ) -> None:\n super().__init__(database=database, service_name=ExternalService.ALETHIO)\n self.msg_aggregator = msg_aggregator\n self.session = requests.session()\n self.all_tokens = all_eth_tokens\n self.session.headers.update({'User-Agent': 'rotkehlchen'})\n\n @overload # noqa: F811\n def _query( # pylint: disable=no-self-use\n self,\n root_endpoint: Literal['accounts'],\n path: str,\n ) -> List[Dict[str, Any]]:\n ...\n\n @overload # noqa: F811\n def _query( # pylint: disable=no-self-use\n self,\n root_endpoint: Literal['foo'],\n path: str,\n ) -> Dict[str, Any]:\n ...\n\n def _query(self, root_endpoint: str, path: str) -> Union[Dict[str, Any], List]: # noqa: F811\n query_str = f'https://api.aleth.io/v1/{root_endpoint}/{path}'\n log.debug(f'Querying alethio for {query_str}')\n\n api_key = self._get_api_key()\n if api_key:\n self.session.headers.update({'Authorization': f'Bearer {api_key}'})\n\n backoff = 1\n backoff_limit = 13\n while backoff < backoff_limit:\n try:\n response = self.session.get(query_str)\n except requests.exceptions.ConnectionError as e:\n if 'Max retries exceeded with url' in str(e):\n log.debug(\n f'Got max retries exceeded from alethio. Will '\n f'backoff for {backoff} seconds.',\n )\n gevent.sleep(backoff)\n backoff = backoff * 2\n if backoff >= backoff_limit:\n raise RemoteError(\n 'Getting alethio max connections error even '\n 'after we incrementally backed off',\n )\n continue\n\n raise RemoteError(f'Alethio API request failed due to {str(e)}')\n\n if response.status_code == 429:\n log.debug(\n f'Got response: {response.text} from alethio. 
Will '\n f'backoff for {backoff} seconds.',\n )\n gevent.sleep(backoff)\n backoff = backoff * 2\n if backoff >= backoff_limit:\n raise RemoteError(\n 'Alethio keeps returning rate limit errors even '\n 'after we incrementally backed off',\n )\n continue\n\n if response.status_code != 200:\n raise RemoteError(\n f'Alethio API request {response.url} failed '\n f'with HTTP status code {response.status_code} and text '\n f'{response.text}',\n )\n\n try:\n json_ret = rlk_jsonloads_dict(response.text)\n except JSONDecodeError:\n raise RemoteError(f'alethio returned invalid JSON response: {response.text}')\n\n data = json_ret.get('data', None)\n if data is None:\n errors = json_ret.get('errors', None)\n if errors is None:\n msg = f'Unexpected alethio response: {response.text}'\n else:\n msg = str(errors)\n raise RemoteError(f'alethio response error: {msg}')\n\n # if we got here we should return\n break\n\n return data\n\n def token_address_to_identifier(self, address: ChecksumEthAddress) -> Optional[EthTokenInfo]:\n # TODO: Cache these stuff in a mapping\n for token in self.all_tokens:\n if token.address == address:\n return token\n\n return None\n\n def get_token_balances(self, account: ChecksumEthAddress) -> Dict[EthereumToken, FVal]:\n \"\"\"Auto-detect which tokens are owned and get token balances for the account\n\n The returned balance is already normalized for the token's decimals.\n\n May raise:\n - RemoteError if there is a problem contacting aleth.io\n \"\"\"\n balances = {}\n data = self._query(root_endpoint='accounts', path=f'{account}/tokenBalances')\n for entry in data:\n entry_type = entry.get('type', None)\n if entry_type == 'TokenBalance':\n\n attributes = entry.get('attributes', None)\n balance = None\n if attributes is not None:\n balance = attributes.get('balance', None)\n if balance is None:\n continue\n\n relationships = entry.get('relationships', None)\n if relationships is None:\n continue\n token = relationships.get('token', None)\n if token is None:\n continue\n if 'data' not in token:\n continue\n if 'id' not in token['data']:\n continue\n\n token_address = to_checksum_address(token['data']['id'])\n token_info = self.token_address_to_identifier(token_address)\n if token_info is None:\n continue\n\n amount = FVal(balance) / (FVal(10) ** FVal(token_info.decimal))\n balances[EthereumToken(token_info.symbol)] = amount\n\n return balances\n", "path": "rotkehlchen/externalapis/alethio.py"}], "after_files": [{"content": "import logging\nfrom json.decoder import JSONDecodeError\nfrom typing import Any, Dict, List, Optional, Union, overload\n\nimport gevent\nimport requests\nfrom eth_utils.address import to_checksum_address\nfrom typing_extensions import Literal\n\nfrom rotkehlchen.assets.asset import EthereumToken\nfrom rotkehlchen.db.dbhandler import DBHandler\nfrom rotkehlchen.errors import RemoteError\nfrom rotkehlchen.externalapis.interface import ExternalServiceWithApiKey\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.typing import ChecksumEthAddress, EthTokenInfo, ExternalService\nfrom rotkehlchen.user_messages import MessagesAggregator\nfrom rotkehlchen.utils.serialization import rlk_jsonloads_dict\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass Alethio(ExternalServiceWithApiKey):\n\n def __init__(\n self,\n database: DBHandler,\n msg_aggregator: MessagesAggregator,\n all_eth_tokens: List[EthTokenInfo],\n ) -> None:\n super().__init__(database=database, 
service_name=ExternalService.ALETHIO)\n self.msg_aggregator = msg_aggregator\n self.session = requests.session()\n self.all_tokens = all_eth_tokens\n self.session.headers.update({'User-Agent': 'rotkehlchen'})\n\n @overload # noqa: F811\n def _query( # pylint: disable=no-self-use\n self,\n root_endpoint: Literal['accounts'],\n path: str,\n full_query_str: Optional[str] = None,\n ) -> List[Dict[str, Any]]:\n ...\n\n @overload # noqa: F811\n def _query( # pylint: disable=no-self-use\n self,\n root_endpoint: Literal['foo'],\n path: str,\n full_query_str: Optional[str] = None,\n ) -> Dict[str, Any]:\n ...\n\n def _query( # noqa: F811\n self,\n root_endpoint: str,\n path: str,\n full_query_str: Optional[str] = None,\n ) -> Union[Dict[str, Any], List]: # noqa: F811\n if full_query_str:\n # If this is a pagination call\n query_str = full_query_str\n else:\n query_str = f'https://api.aleth.io/v1/{root_endpoint}/{path}?page[limit]=100'\n log.debug(f'Querying alethio for {query_str}')\n\n api_key = self._get_api_key()\n if api_key:\n self.session.headers.update({'Authorization': f'Bearer {api_key}'})\n\n backoff = 1\n backoff_limit = 13\n while backoff < backoff_limit:\n try:\n response = self.session.get(query_str)\n except requests.exceptions.ConnectionError as e:\n if 'Max retries exceeded with url' in str(e):\n log.debug(\n f'Got max retries exceeded from alethio. Will '\n f'backoff for {backoff} seconds.',\n )\n gevent.sleep(backoff)\n backoff = backoff * 2\n if backoff >= backoff_limit:\n raise RemoteError(\n 'Getting alethio max connections error even '\n 'after we incrementally backed off',\n )\n continue\n\n raise RemoteError(f'Alethio API request failed due to {str(e)}')\n\n if response.status_code == 429:\n log.debug(\n f'Got response: {response.text} from alethio. 
Will '\n f'backoff for {backoff} seconds.',\n )\n gevent.sleep(backoff)\n backoff = backoff * 2\n if backoff >= backoff_limit:\n raise RemoteError(\n 'Alethio keeps returning rate limit errors even '\n 'after we incrementally backed off',\n )\n continue\n\n if response.status_code != 200:\n raise RemoteError(\n f'Alethio API request {response.url} failed '\n f'with HTTP status code {response.status_code} and text '\n f'{response.text}',\n )\n\n try:\n json_ret = rlk_jsonloads_dict(response.text)\n except JSONDecodeError:\n raise RemoteError(f'alethio returned invalid JSON response: {response.text}')\n\n data = json_ret.get('data', None)\n if data is None:\n errors = json_ret.get('errors', None)\n if errors is None:\n msg = f'Unexpected alethio response: {response.text}'\n else:\n msg = str(errors)\n raise RemoteError(f'alethio response error: {msg}')\n\n has_next = False\n try:\n has_next = json_ret['meta']['page']['hasNext']\n except KeyError:\n raise RemoteError(\n f'Alethio response does not contain pagination information: {response.text}',\n )\n\n if has_next:\n try:\n link = json_ret['links']['next']\n except KeyError:\n raise RemoteError(\n f'Alethio response does not contain next page link: {response.text}',\n )\n\n next_data = self._query( # type: ignore\n root_endpoint=root_endpoint,\n path=path,\n full_query_str=link,\n )\n if root_endpoint == 'accounts':\n data.extend(next_data)\n else:\n raise AssertionError(\n 'Have not yet implemented alethio endpoints returning non lists',\n )\n\n # if we got here we should return\n break\n\n return data\n\n def token_address_to_identifier(self, address: ChecksumEthAddress) -> Optional[EthTokenInfo]:\n # TODO: Cache these stuff in a mapping\n for token in self.all_tokens:\n if token.address == address:\n return token\n\n return None\n\n def get_token_balances(self, account: ChecksumEthAddress) -> Dict[EthereumToken, FVal]:\n \"\"\"Auto-detect which tokens are owned and get token balances for the account\n\n The returned balance is already normalized for the token's decimals.\n\n May raise:\n - RemoteError if there is a problem contacting aleth.io\n \"\"\"\n balances = {}\n data = self._query(root_endpoint='accounts', path=f'{account}/tokenBalances')\n for entry in data:\n entry_type = entry.get('type', None)\n if entry_type == 'TokenBalance':\n\n attributes = entry.get('attributes', None)\n balance = None\n if attributes is not None:\n balance = attributes.get('balance', None)\n if balance is None:\n continue\n\n relationships = entry.get('relationships', None)\n if relationships is None:\n continue\n token = relationships.get('token', None)\n if token is None:\n continue\n if 'data' not in token:\n continue\n if 'id' not in token['data']:\n continue\n\n token_address = to_checksum_address(token['data']['id'])\n token_info = self.token_address_to_identifier(token_address)\n if token_info is None:\n continue\n\n amount = FVal(balance) / (FVal(10) ** FVal(token_info.decimal))\n balances[EthereumToken(token_info.symbol)] = amount\n\n return balances\n", "path": "rotkehlchen/externalapis/alethio.py"}]} | 2,076 | 623 |
gh_patches_debug_79 | rasdani/github-patches | git_diff | flairNLP__flair-447 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
__version__ attribute?
I'm always frustrated when flair doesn't have a __version__ attribute... :-)
Please, add a __version__ attribute to the module.
Thank you!
DC
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flair/__init__.py`
Content:
```
1 import torch
2
3 from . import data
4 from . import models
5 from . import visual
6 from . import trainers
7
8 import logging.config
9
10
11 logging.config.dictConfig({
12 'version': 1,
13 'disable_existing_loggers': False,
14 'formatters': {
15 'standard': {
16 'format': '%(asctime)-15s %(message)s'
17 },
18 },
19 'handlers': {
20 'console': {
21 'level': 'INFO',
22 'class': 'logging.StreamHandler',
23 'formatter': 'standard',
24 'stream': 'ext://sys.stdout'
25 },
26 },
27 'loggers': {
28 'flair': {
29 'handlers': ['console'],
30 'level': 'INFO',
31 'propagate': False
32 }
33 },
34 'root': {
35 'handlers': ['console'],
36 'level': 'WARNING'
37 }
38 })
39
40 logger = logging.getLogger('flair')
41
42
43 device = None
44 if torch.cuda.is_available():
45 device = torch.device('cuda:0')
46 else:
47 device = torch.device('cpu')
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flair/__init__.py b/flair/__init__.py
--- a/flair/__init__.py
+++ b/flair/__init__.py
@@ -7,6 +7,7 @@
import logging.config
+__version__ = "0.4.1"
logging.config.dictConfig({
'version': 1,
| {"golden_diff": "diff --git a/flair/__init__.py b/flair/__init__.py\n--- a/flair/__init__.py\n+++ b/flair/__init__.py\n@@ -7,6 +7,7 @@\n \n import logging.config\n \n+__version__ = \"0.4.1\"\n \n logging.config.dictConfig({\n 'version': 1,\n", "issue": "__version__ attribute?\nI'm always frustrated when flair doesn't have a __version__attribute... :-)\r\n\r\nPlease, add a __version__attribute to the module.\r\n\r\nThank you!\r\nDC\r\n\n", "before_files": [{"content": "import torch\n\nfrom . import data\nfrom . import models\nfrom . import visual\nfrom . import trainers\n\nimport logging.config\n\n\nlogging.config.dictConfig({\n 'version': 1,\n 'disable_existing_loggers': False,\n 'formatters': {\n 'standard': {\n 'format': '%(asctime)-15s %(message)s'\n },\n },\n 'handlers': {\n 'console': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler',\n 'formatter': 'standard',\n 'stream': 'ext://sys.stdout'\n },\n },\n 'loggers': {\n 'flair': {\n 'handlers': ['console'],\n 'level': 'INFO',\n 'propagate': False\n }\n },\n 'root': {\n 'handlers': ['console'],\n 'level': 'WARNING'\n }\n})\n\nlogger = logging.getLogger('flair')\n\n\ndevice = None\nif torch.cuda.is_available():\n device = torch.device('cuda:0')\nelse:\n device = torch.device('cpu')\n", "path": "flair/__init__.py"}], "after_files": [{"content": "import torch\n\nfrom . import data\nfrom . import models\nfrom . import visual\nfrom . import trainers\n\nimport logging.config\n\n__version__ = \"0.4.1\"\n\nlogging.config.dictConfig({\n 'version': 1,\n 'disable_existing_loggers': False,\n 'formatters': {\n 'standard': {\n 'format': '%(asctime)-15s %(message)s'\n },\n },\n 'handlers': {\n 'console': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler',\n 'formatter': 'standard',\n 'stream': 'ext://sys.stdout'\n },\n },\n 'loggers': {\n 'flair': {\n 'handlers': ['console'],\n 'level': 'INFO',\n 'propagate': False\n }\n },\n 'root': {\n 'handlers': ['console'],\n 'level': 'WARNING'\n }\n})\n\nlogger = logging.getLogger('flair')\n\n\ndevice = None\nif torch.cuda.is_available():\n device = torch.device('cuda:0')\nelse:\n device = torch.device('cpu')\n", "path": "flair/__init__.py"}]} | 616 | 76 |
gh_patches_debug_29935 | rasdani/github-patches | git_diff | google__jax-2481 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add multivariate normal pdf evalutation to jax.scipy
It would be great to have a multivariate Gaussian pdf/logpdf implementation, similar to the univariate version in [jax.scipy.stats.norm](https://jax.readthedocs.io/en/latest/_modules/jax/scipy/stats/norm.html#logpdf). I am currently working with this hacky function:
```
@jit
def multi_gauss_logpdf(x, mean, cov):
""" Calculate the probability density of a
sample from the multivariate normal. """
D = mean.shape[0]
(sign, logdet) = np.linalg.slogdet(cov)
p1 = D*np.log(2*np.pi) + logdet
p2 = (x-mean).T @ np.linalg.inv(cov) @ (x-mean)
return -1./2 * (p1 + p2)
batch_logpdf = vmap(multi_gauss_logpdf, in_axes=(0, None, None))
```
My `lax`/primitive knowledge is still fairly limited, but I will try to put together a PR. Any recommendations on how to speed things up?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jax/scipy/stats/multivariate_normal.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import numpy as np
17 import scipy.stats as osp_stats
18
19 from ... import lax
20 from ...numpy.lax_numpy import _promote_dtypes_inexact, _constant_like, _wraps
21 from ...numpy.lax_numpy import dot, subtract, einsum
22 from ...numpy.linalg import det, inv
23
24
25 @_wraps(osp_stats.multivariate_normal.logpdf, update_doc=False)
26 def logpdf(x, mean, cov):
27 x, mean, cov = _promote_dtypes_inexact(x, mean, cov)
28 two = _constant_like(x, 2)
29 dim = _constant_like(x, mean.shape[0])
30 det_sig = det(cov).astype(cov.dtype)
31 log_normalizer = lax.log(lax.mul(lax.pow(_constant_like(x, 2 * np.pi), dim),
32 det_sig))
33 x_shape = x.shape[:-1]
34 if x_shape:
35 x_2d = x.reshape((-1, mean.shape[0]))
36 quadratic = einsum("ij,jk,ik->i", subtract(x_2d, mean), inv(cov),
37 subtract(x_2d, mean)).reshape(x_shape).astype(cov.dtype)
38 else:
39 quadratic = dot(dot(subtract(x, mean), inv(cov)), subtract(x, mean).T).astype(cov.dtype)
40 return lax.div(lax.neg(lax.add(log_normalizer, quadratic)), two)
41
42 @_wraps(osp_stats.multivariate_normal.pdf, update_doc=False)
43 def pdf(x, mean, cov):
44 return lax.exp(logpdf(x, mean, cov))
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jax/scipy/stats/multivariate_normal.py b/jax/scipy/stats/multivariate_normal.py
--- a/jax/scipy/stats/multivariate_normal.py
+++ b/jax/scipy/stats/multivariate_normal.py
@@ -17,27 +17,29 @@
import scipy.stats as osp_stats
from ... import lax
+from ...lax_linalg import cholesky, triangular_solve
+from ... import numpy as jnp
from ...numpy.lax_numpy import _promote_dtypes_inexact, _constant_like, _wraps
-from ...numpy.lax_numpy import dot, subtract, einsum
-from ...numpy.linalg import det, inv
@_wraps(osp_stats.multivariate_normal.logpdf, update_doc=False)
def logpdf(x, mean, cov):
x, mean, cov = _promote_dtypes_inexact(x, mean, cov)
- two = _constant_like(x, 2)
- dim = _constant_like(x, mean.shape[0])
- det_sig = det(cov).astype(cov.dtype)
- log_normalizer = lax.log(lax.mul(lax.pow(_constant_like(x, 2 * np.pi), dim),
- det_sig))
- x_shape = x.shape[:-1]
- if x_shape:
- x_2d = x.reshape((-1, mean.shape[0]))
- quadratic = einsum("ij,jk,ik->i", subtract(x_2d, mean), inv(cov),
- subtract(x_2d, mean)).reshape(x_shape).astype(cov.dtype)
+ if not mean.shape:
+ return -1/2 * (x - mean) ** 2 / cov - 1/2 * (np.log(2*np.pi) + jnp.log(cov))
else:
- quadratic = dot(dot(subtract(x, mean), inv(cov)), subtract(x, mean).T).astype(cov.dtype)
- return lax.div(lax.neg(lax.add(log_normalizer, quadratic)), two)
+ n = mean.shape[-1]
+ if not np.shape(cov):
+ y = x - mean
+ return (-1/2 * jnp.einsum('...i,...i->...', y, y) / cov
+ - n/2 * (np.log(2*np.pi) + jnp.log(cov)))
+ else:
+ if cov.ndim < 2 or cov.shape[-2:] != (n, n):
+ raise ValueError("multivariate_normal.logpdf got incompatible shapes")
+ L = cholesky(cov)
+ y = triangular_solve(L, x - mean, lower=True, transpose_a=True)
+ return (-1/2 * jnp.einsum('...i,...i->...', y, y) - n/2*np.log(2*np.pi)
+ - jnp.log(L.diagonal()).sum())
@_wraps(osp_stats.multivariate_normal.pdf, update_doc=False)
def pdf(x, mean, cov):
| {"golden_diff": "diff --git a/jax/scipy/stats/multivariate_normal.py b/jax/scipy/stats/multivariate_normal.py\n--- a/jax/scipy/stats/multivariate_normal.py\n+++ b/jax/scipy/stats/multivariate_normal.py\n@@ -17,27 +17,29 @@\n import scipy.stats as osp_stats\n \n from ... import lax\n+from ...lax_linalg import cholesky, triangular_solve\n+from ... import numpy as jnp\n from ...numpy.lax_numpy import _promote_dtypes_inexact, _constant_like, _wraps\n-from ...numpy.lax_numpy import dot, subtract, einsum\n-from ...numpy.linalg import det, inv\n \n \n @_wraps(osp_stats.multivariate_normal.logpdf, update_doc=False)\n def logpdf(x, mean, cov):\n x, mean, cov = _promote_dtypes_inexact(x, mean, cov)\n- two = _constant_like(x, 2)\n- dim = _constant_like(x, mean.shape[0])\n- det_sig = det(cov).astype(cov.dtype)\n- log_normalizer = lax.log(lax.mul(lax.pow(_constant_like(x, 2 * np.pi), dim),\n- det_sig))\n- x_shape = x.shape[:-1]\n- if x_shape:\n- x_2d = x.reshape((-1, mean.shape[0]))\n- quadratic = einsum(\"ij,jk,ik->i\", subtract(x_2d, mean), inv(cov), \n- subtract(x_2d, mean)).reshape(x_shape).astype(cov.dtype)\n+ if not mean.shape:\n+ return -1/2 * (x - mean) ** 2 / cov - 1/2 * (np.log(2*np.pi) + jnp.log(cov))\n else:\n- quadratic = dot(dot(subtract(x, mean), inv(cov)), subtract(x, mean).T).astype(cov.dtype)\n- return lax.div(lax.neg(lax.add(log_normalizer, quadratic)), two)\n+ n = mean.shape[-1]\n+ if not np.shape(cov):\n+ y = x - mean\n+ return (-1/2 * jnp.einsum('...i,...i->...', y, y) / cov\n+ - n/2 * (np.log(2*np.pi) + jnp.log(cov)))\n+ else:\n+ if cov.ndim < 2 or cov.shape[-2:] != (n, n):\n+ raise ValueError(\"multivariate_normal.logpdf got incompatible shapes\")\n+ L = cholesky(cov)\n+ y = triangular_solve(L, x - mean, lower=True, transpose_a=True)\n+ return (-1/2 * jnp.einsum('...i,...i->...', y, y) - n/2*np.log(2*np.pi)\n+ - jnp.log(L.diagonal()).sum())\n \n @_wraps(osp_stats.multivariate_normal.pdf, update_doc=False)\n def pdf(x, mean, cov):\n", "issue": "Add multivariate normal pdf evalutation to jax.scipy\nIt would be great to have a Multivariate gaussian pdf/logpdf implementation, similar to the univariate version in [jax.scipy.stats.norm](https://jax.readthedocs.io/en/latest/_modules/jax/scipy/stats/norm.html#logpdf). I am currently working with this hacky function: \r\n\r\n```\r\n@jit\r\ndef multi_gauss_logpdf(x, mean, cov):\r\n \"\"\" Calculate the probability density of a\r\n sample from the multivariate normal. \"\"\"\r\n D = mean.shape[0]\r\n (sign, logdet) = np.linalg.slogdet(cov)\r\n p1 = D*np.log(2*np.pi) + logdet\r\n p2 = (x-mean).T @ np.linalg.inv(cov) @ (x-mean)\r\n return -1./2 * (p1 + p2)\r\n\r\nbatch_logpdf = vmap(multi_gauss_logpdf, in_axes=(0, None, None))\r\n```\r\n\r\nMy `lax`/primitive knowledge is still fairly limited but I will try to put together a pr. 
Any recommendations how to speed things up?\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport numpy as np\nimport scipy.stats as osp_stats\n\nfrom ... import lax\nfrom ...numpy.lax_numpy import _promote_dtypes_inexact, _constant_like, _wraps\nfrom ...numpy.lax_numpy import dot, subtract, einsum\nfrom ...numpy.linalg import det, inv\n\n\n@_wraps(osp_stats.multivariate_normal.logpdf, update_doc=False)\ndef logpdf(x, mean, cov):\n x, mean, cov = _promote_dtypes_inexact(x, mean, cov)\n two = _constant_like(x, 2)\n dim = _constant_like(x, mean.shape[0])\n det_sig = det(cov).astype(cov.dtype)\n log_normalizer = lax.log(lax.mul(lax.pow(_constant_like(x, 2 * np.pi), dim),\n det_sig))\n x_shape = x.shape[:-1]\n if x_shape:\n x_2d = x.reshape((-1, mean.shape[0]))\n quadratic = einsum(\"ij,jk,ik->i\", subtract(x_2d, mean), inv(cov), \n subtract(x_2d, mean)).reshape(x_shape).astype(cov.dtype)\n else:\n quadratic = dot(dot(subtract(x, mean), inv(cov)), subtract(x, mean).T).astype(cov.dtype)\n return lax.div(lax.neg(lax.add(log_normalizer, quadratic)), two)\n\n@_wraps(osp_stats.multivariate_normal.pdf, update_doc=False)\ndef pdf(x, mean, cov):\n return lax.exp(logpdf(x, mean, cov))\n", "path": "jax/scipy/stats/multivariate_normal.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport numpy as np\nimport scipy.stats as osp_stats\n\nfrom ... import lax\nfrom ...lax_linalg import cholesky, triangular_solve\nfrom ... 
import numpy as jnp\nfrom ...numpy.lax_numpy import _promote_dtypes_inexact, _constant_like, _wraps\n\n\n@_wraps(osp_stats.multivariate_normal.logpdf, update_doc=False)\ndef logpdf(x, mean, cov):\n x, mean, cov = _promote_dtypes_inexact(x, mean, cov)\n if not mean.shape:\n return -1/2 * (x - mean) ** 2 / cov - 1/2 * (np.log(2*np.pi) + jnp.log(cov))\n else:\n n = mean.shape[-1]\n if not np.shape(cov):\n y = x - mean\n return (-1/2 * jnp.einsum('...i,...i->...', y, y) / cov\n - n/2 * (np.log(2*np.pi) + jnp.log(cov)))\n else:\n if cov.ndim < 2 or cov.shape[-2:] != (n, n):\n raise ValueError(\"multivariate_normal.logpdf got incompatible shapes\")\n L = cholesky(cov)\n y = triangular_solve(L, x - mean, lower=True, transpose_a=True)\n return (-1/2 * jnp.einsum('...i,...i->...', y, y) - n/2*np.log(2*np.pi)\n - jnp.log(L.diagonal()).sum())\n\n@_wraps(osp_stats.multivariate_normal.pdf, update_doc=False)\ndef pdf(x, mean, cov):\n return lax.exp(logpdf(x, mean, cov))\n", "path": "jax/scipy/stats/multivariate_normal.py"}]} | 1,053 | 655 |
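
The patch replaces the `det`/`inv`-based log-pdf with a Cholesky factorisation plus a triangular solve, which is cheaper and numerically safer. The NumPy/SciPy sketch below (an independent check, not project code) verifies the identity the patch relies on: with cov = L Lᵀ, log|cov| = 2 * sum(log diag(L)) and the quadratic form (x - mean)ᵀ cov⁻¹ (x - mean) equals the squared norm of L⁻¹ (x - mean).

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
cov = A @ A.T + n * np.eye(n)        # symmetric positive definite
mean = rng.normal(size=n)
x = rng.normal(size=n)

# cov = L @ L.T, so log|cov| = 2 * sum(log(diag(L))) and
# (x - mean)^T cov^{-1} (x - mean) = ||L^{-1} (x - mean)||^2.
L = np.linalg.cholesky(cov)
y = np.linalg.solve(L, x - mean)     # lower-triangular solve
logpdf = -0.5 * (y @ y) - 0.5 * n * np.log(2 * np.pi) - np.log(np.diag(L)).sum()

assert np.isclose(logpdf, multivariate_normal.logpdf(x, mean=mean, cov=cov))
```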
gh_patches_debug_17452 | rasdani/github-patches | git_diff | streamlink__streamlink-5908 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.vkplay: vkplay.live has moved to another domain (live.vkplay.ru)
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
streamlink 6.7.2
### Description
https://vk.com/wall-212496568_91026
yesterday, vkplay live changed its domain. if you specify the old domain in the link to the stream, then everything still works, but on the site itself there are links to a new domain, to which the existing plugin does not respond.
I just tried to change the updated part of the domain in the plugin code (vkplay.live -> live.vkplay.ru ), and everything seems to be working well. It's a bit difficult for me to create a pull request, but here's the corrected plugin on gist:
https://gist.github.com/oexlkinq/eef0a260dddad473c5febafd91b980d9
the old domain is also listed in the documentation (https://streamlink.github.io/plugins.html#vkplay)
### Debug log
```text
streamlink https://live.vkplay.ru/ruwarface 720p --loglevel=debug
[cli][debug] OS: Linux-6.8.1-arch1-1-x86_64-with-glibc2.39
[cli][debug] Python: 3.11.8
[cli][debug] OpenSSL: OpenSSL 3.2.1 30 Jan 2024
[cli][debug] Streamlink: 6.7.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2024.2.2
[cli][debug] exceptiongroup: 1.2.0
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 5.1.0
[cli][debug] pycountry: 23.12.11
[cli][debug] pycryptodome: 3.20.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.31.0
[cli][debug] trio: 0.25.0
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.10.0
[cli][debug] urllib3: 1.26.18
[cli][debug] websocket-client: 1.7.0
[cli][debug] Arguments:
[cli][debug] url=https://live.vkplay.ru/ruwarface
[cli][debug] stream=['720p']
[cli][debug] --loglevel=debug
error: No plugin can handle URL: https://live.vkplay.ru/ruwarface
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/vkplay.py`
Content:
```
1 """
2 $description Russian live-streaming platform for gaming and esports, owned by VKontakte.
3 $url vkplay.live
4 $type live
5 $metadata id
6 $metadata author
7 $metadata category
8 $metadata title
9 """
10
11 import logging
12 import re
13
14 from streamlink.plugin import Plugin, pluginmatcher
15 from streamlink.plugin.api import validate
16 from streamlink.stream.hls import HLSStream
17
18
19 log = logging.getLogger(__name__)
20
21
22 @pluginmatcher(re.compile(
23 r"https?://vkplay\.live/(?P<channel_name>\w+)/?$",
24 ))
25 class VKplay(Plugin):
26 API_URL = "https://api.vkplay.live/v1"
27
28 def _get_streams(self):
29 self.author = self.match.group("channel_name")
30 log.debug(f"Channel name: {self.author}")
31
32 data = self.session.http.get(
33 f"{self.API_URL}/blog/{self.author}/public_video_stream",
34 headers={"Referer": self.url},
35 acceptable_status=(200, 404),
36 schema=validate.Schema(
37 validate.parse_json(),
38 validate.any(
39 validate.all(
40 {"error": str, "error_description": str},
41 validate.get("error_description"),
42 ),
43 validate.all(
44 {
45 validate.optional("category"): validate.all(
46 {
47 "title": str,
48 },
49 validate.get("title"),
50 ),
51 "title": str,
52 "data": validate.any(
53 [
54 validate.all(
55 {
56 "vid": str,
57 "playerUrls": [
58 validate.all(
59 {
60 "type": str,
61 "url": validate.any("", validate.url()),
62 },
63 validate.union_get("type", "url"),
64 ),
65 ],
66 },
67 validate.union_get("vid", "playerUrls"),
68 ),
69 ],
70 [],
71 ),
72 },
73 validate.union_get(
74 "category",
75 "title",
76 ("data", 0),
77 ),
78 ),
79 ),
80 ),
81 )
82 if isinstance(data, str):
83 log.error(data)
84 return
85
86 self.category, self.title, streamdata = data
87 if not streamdata:
88 return
89
90 self.id, streams = streamdata
91
92 for streamtype, streamurl in streams:
93 if streamurl and streamtype == "live_hls":
94 return HLSStream.parse_variant_playlist(self.session, streamurl)
95
96
97 __plugin__ = VKplay
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/vkplay.py b/src/streamlink/plugins/vkplay.py
--- a/src/streamlink/plugins/vkplay.py
+++ b/src/streamlink/plugins/vkplay.py
@@ -1,6 +1,6 @@
"""
$description Russian live-streaming platform for gaming and esports, owned by VKontakte.
-$url vkplay.live
+$url live.vkplay.ru
$type live
$metadata id
$metadata author
@@ -20,13 +20,13 @@
@pluginmatcher(re.compile(
- r"https?://vkplay\.live/(?P<channel_name>\w+)/?$",
+ r"https?://(?:live\.vkplay\.ru|vkplay\.live)/(?P<channel_name>\w+)/?$",
))
class VKplay(Plugin):
- API_URL = "https://api.vkplay.live/v1"
+ API_URL = "https://api.live.vkplay.ru/v1"
def _get_streams(self):
- self.author = self.match.group("channel_name")
+ self.author = self.match["channel_name"]
log.debug(f"Channel name: {self.author}")
data = self.session.http.get(
| {"golden_diff": "diff --git a/src/streamlink/plugins/vkplay.py b/src/streamlink/plugins/vkplay.py\n--- a/src/streamlink/plugins/vkplay.py\n+++ b/src/streamlink/plugins/vkplay.py\n@@ -1,6 +1,6 @@\n \"\"\"\n $description Russian live-streaming platform for gaming and esports, owned by VKontakte.\n-$url vkplay.live\n+$url live.vkplay.ru\n $type live\n $metadata id\n $metadata author\n@@ -20,13 +20,13 @@\n \n \n @pluginmatcher(re.compile(\n- r\"https?://vkplay\\.live/(?P<channel_name>\\w+)/?$\",\n+ r\"https?://(?:live\\.vkplay\\.ru|vkplay\\.live)/(?P<channel_name>\\w+)/?$\",\n ))\n class VKplay(Plugin):\n- API_URL = \"https://api.vkplay.live/v1\"\n+ API_URL = \"https://api.live.vkplay.ru/v1\"\n \n def _get_streams(self):\n- self.author = self.match.group(\"channel_name\")\n+ self.author = self.match[\"channel_name\"]\n log.debug(f\"Channel name: {self.author}\")\n \n data = self.session.http.get(\n", "issue": "plugins.vkplay: vkplay.live has moved to another domain (live.vkplay.ru)\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nstreamlink 6.7.2\n\n### Description\n\nhttps://vk.com/wall-212496568_91026\r\n\r\nyesterday, vkplay live changed its domain. if you specify the old domain in the link to the stream, then everything still works, but on the site itself there are links to a new domain, to which the existing plugin does not respond.\r\n\r\nI just tried to change the updated part of the domain in the plugin code (vkplay.live -> live.vkplay.ru ), and everything seems to be working well. 
It's a bit difficult for me to create a pull request, but here's the corrected plugin on gist:\r\nhttps://gist.github.com/oexlkinq/eef0a260dddad473c5febafd91b980d9\r\n\r\nthe old domain is also listed in the documentation (https://streamlink.github.io/plugins.html#vkplay)\n\n### Debug log\n\n```text\nstreamlink https://live.vkplay.ru/ruwarface 720p --loglevel=debug\r\n[cli][debug] OS: Linux-6.8.1-arch1-1-x86_64-with-glibc2.39\r\n[cli][debug] Python: 3.11.8\r\n[cli][debug] OpenSSL: OpenSSL 3.2.1 30 Jan 2024\r\n[cli][debug] Streamlink: 6.7.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2024.2.2\r\n[cli][debug] exceptiongroup: 1.2.0\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 5.1.0\r\n[cli][debug] pycountry: 23.12.11\r\n[cli][debug] pycryptodome: 3.20.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] trio: 0.25.0\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.10.0\r\n[cli][debug] urllib3: 1.26.18\r\n[cli][debug] websocket-client: 1.7.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://live.vkplay.ru/ruwarface\r\n[cli][debug] stream=['720p']\r\n[cli][debug] --loglevel=debug\r\nerror: No plugin can handle URL: https://live.vkplay.ru/ruwarface\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Russian live-streaming platform for gaming and esports, owned by VKontakte.\n$url vkplay.live\n$type live\n$metadata id\n$metadata author\n$metadata category\n$metadata title\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://vkplay\\.live/(?P<channel_name>\\w+)/?$\",\n))\nclass VKplay(Plugin):\n API_URL = \"https://api.vkplay.live/v1\"\n\n def _get_streams(self):\n self.author = self.match.group(\"channel_name\")\n log.debug(f\"Channel name: {self.author}\")\n\n data = self.session.http.get(\n f\"{self.API_URL}/blog/{self.author}/public_video_stream\",\n headers={\"Referer\": self.url},\n acceptable_status=(200, 404),\n schema=validate.Schema(\n validate.parse_json(),\n validate.any(\n validate.all(\n {\"error\": str, \"error_description\": str},\n validate.get(\"error_description\"),\n ),\n validate.all(\n {\n validate.optional(\"category\"): validate.all(\n {\n \"title\": str,\n },\n validate.get(\"title\"),\n ),\n \"title\": str,\n \"data\": validate.any(\n [\n validate.all(\n {\n \"vid\": str,\n \"playerUrls\": [\n validate.all(\n {\n \"type\": str,\n \"url\": validate.any(\"\", validate.url()),\n },\n validate.union_get(\"type\", \"url\"),\n ),\n ],\n },\n validate.union_get(\"vid\", \"playerUrls\"),\n ),\n ],\n [],\n ),\n },\n validate.union_get(\n \"category\",\n \"title\",\n (\"data\", 0),\n ),\n ),\n ),\n ),\n )\n if isinstance(data, str):\n log.error(data)\n return\n\n self.category, self.title, streamdata = data\n if not streamdata:\n return\n\n self.id, streams = streamdata\n\n for streamtype, streamurl in streams:\n if streamurl and streamtype == \"live_hls\":\n return HLSStream.parse_variant_playlist(self.session, streamurl)\n\n\n__plugin__ = VKplay\n", "path": "src/streamlink/plugins/vkplay.py"}], "after_files": [{"content": "\"\"\"\n$description Russian live-streaming platform for gaming and esports, owned by VKontakte.\n$url live.vkplay.ru\n$type live\n$metadata id\n$metadata author\n$metadata category\n$metadata title\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin 
import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:live\\.vkplay\\.ru|vkplay\\.live)/(?P<channel_name>\\w+)/?$\",\n))\nclass VKplay(Plugin):\n API_URL = \"https://api.live.vkplay.ru/v1\"\n\n def _get_streams(self):\n self.author = self.match[\"channel_name\"]\n log.debug(f\"Channel name: {self.author}\")\n\n data = self.session.http.get(\n f\"{self.API_URL}/blog/{self.author}/public_video_stream\",\n headers={\"Referer\": self.url},\n acceptable_status=(200, 404),\n schema=validate.Schema(\n validate.parse_json(),\n validate.any(\n validate.all(\n {\"error\": str, \"error_description\": str},\n validate.get(\"error_description\"),\n ),\n validate.all(\n {\n validate.optional(\"category\"): validate.all(\n {\n \"title\": str,\n },\n validate.get(\"title\"),\n ),\n \"title\": str,\n \"data\": validate.any(\n [\n validate.all(\n {\n \"vid\": str,\n \"playerUrls\": [\n validate.all(\n {\n \"type\": str,\n \"url\": validate.any(\"\", validate.url()),\n },\n validate.union_get(\"type\", \"url\"),\n ),\n ],\n },\n validate.union_get(\"vid\", \"playerUrls\"),\n ),\n ],\n [],\n ),\n },\n validate.union_get(\n \"category\",\n \"title\",\n (\"data\", 0),\n ),\n ),\n ),\n ),\n )\n if isinstance(data, str):\n log.error(data)\n return\n\n self.category, self.title, streamdata = data\n if not streamdata:\n return\n\n self.id, streams = streamdata\n\n for streamtype, streamurl in streams:\n if streamurl and streamtype == \"live_hls\":\n return HLSStream.parse_variant_playlist(self.session, streamurl)\n\n\n__plugin__ = VKplay\n", "path": "src/streamlink/plugins/vkplay.py"}]} | 1,762 | 261 |
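
The essence of the patch is the widened `pluginmatcher` regex, which accepts the new `live.vkplay.ru` domain while keeping the legacy `vkplay.live` one working. A quick standalone check of that pattern (the regex is copied from the diff above; the channel name is just an example):

```python
import re

# Combined pattern from the patch: new domain first, old domain kept for compatibility.
URL_RE = re.compile(r"https?://(?:live\.vkplay\.ru|vkplay\.live)/(?P<channel_name>\w+)/?$")

for url in ("https://live.vkplay.ru/ruwarface", "https://vkplay.live/ruwarface"):
    match = URL_RE.match(url)
    assert match is not None and match["channel_name"] == "ruwarface"
print("both domains match")
```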
gh_patches_debug_3690 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-6318 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The HTML code of ePUBs is visible
I don't know whether this is a bug in zds-site or in zmd.
Create a post and publish it. The ePUB export contains the expected content, but the HTML tags are visible (which makes the file unreadable).
The bug is present with zmd 10 and 11.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/utils/templatetags/emarkdown.py`
Content:
```
1 import re
2 import json
3 import logging
4 from requests import post, HTTPError
5
6 from django import template
7 from django.conf import settings
8 from django.template.defaultfilters import stringfilter
9 from django.utils.safestring import mark_safe
10 from django.utils.translation import gettext_lazy as _
11
12 logger = logging.getLogger(__name__)
13 register = template.Library()
14 """
15 Markdown related filters.
16 """
17
18 # Constants
19 MAX_ATTEMPTS = 3
20 MD_PARSING_ERROR = _("Une erreur est survenue dans la génération de texte Markdown. Veuillez rapporter le bug.")
21
22 FORMAT_ENDPOINTS = {
23 "html": "/html",
24 "texfile": "/latex-document",
25 "epub": "/epub",
26 "tex": "/latex",
27 }
28
29
30 def _render_markdown_once(md_input, *, output_format="html", **kwargs):
31 """
32 Returns None on error (error details are logged). No retry mechanism.
33 """
34
35 def log_args():
36 logger.error(f"md_input: {md_input!r}")
37 logger.error(f"kwargs: {kwargs!r}")
38
39 inline = kwargs.get("inline", False) is True
40 full_json = kwargs.pop("full_json", False)
41
42 if settings.ZDS_APP["zmd"]["disable_pings"] is True:
43 kwargs["disable_ping"] = True
44
45 endpoint = FORMAT_ENDPOINTS[output_format]
46
47 try:
48 timeout = 10
49 real_input = str(md_input)
50 if output_format.startswith("tex") or full_json:
51 # latex may be really long to generate but it is also restrained by server configuration
52 timeout = 120
53 # use manifest renderer
54 real_input = md_input
55 response = post(
56 "{}{}".format(settings.ZDS_APP["zmd"]["server"], endpoint),
57 json={
58 "opts": kwargs,
59 "md": real_input,
60 },
61 timeout=timeout,
62 )
63 except HTTPError:
64 logger.exception("An HTTP error happened, markdown rendering failed")
65 log_args()
66 return "", {}, []
67
68 if response.status_code == 413:
69 return "", {}, [{"message": str(_("Texte trop volumineux."))}]
70
71 if response.status_code != 200:
72 logger.error(f"The markdown server replied with status {response.status_code} (expected 200)")
73 log_args()
74 return "", {}, []
75
76 try:
77 content, metadata, messages = response.json()
78 logger.debug("Result %s, %s, %s", content, metadata, messages)
79 if messages:
80 logger.error("Markdown errors %s", json.dumps(messages))
81 if isinstance(content, str):
82 content = content.strip()
83 if inline:
84 content = content.replace("</p>\n", "\n\n").replace("\n<p>", "\n")
85 if full_json:
86 return content, metadata, messages
87 return mark_safe(content), metadata, messages
88 except: # noqa
89 logger.exception("Unexpected exception raised")
90 log_args()
91 return "", {}, []
92
93
94 def render_markdown(md_input, *, on_error=None, disable_jsfiddle=True, **kwargs):
95 """Render a markdown string.
96
97 Returns a tuple ``(rendered_content, metadata)``, where
98 ``rendered_content`` is a string and ``metadata`` is a dict.
99
100 Handles errors gracefully by returning an user-friendly HTML
101 string which explains that the Markdown rendering has failed
102 (without any technical details).
103
104 """
105 opts = {"disable_jsfiddle": disable_jsfiddle}
106 opts.update(kwargs)
107 content, metadata, messages = _render_markdown_once(md_input, **opts)
108 if messages and on_error:
109 on_error([m["message"] for m in messages])
110 if content is not None:
111 # Success!
112 return content, metadata, messages
113
114 # Oops, something went wrong
115
116 attempts = kwargs.get("attempts", 0)
117 inline = kwargs.get("inline", False) is True
118
119 if attempts < MAX_ATTEMPTS:
120 if not kwargs:
121 kwargs = dict()
122 return render_markdown(md_input, **dict(kwargs, attempts=attempts + 1))
123
124 logger.error("Max attempt count reached, giving up")
125 logger.error(f"md_input: {md_input!r}")
126 logger.error(f"kwargs: {kwargs!r}")
127
128 # FIXME: This cannot work with LaTeX.
129 if inline:
130 return mark_safe(f"<p>{json.dumps(messages)}</p>"), metadata, []
131 else:
132 return mark_safe(f'<div class="error ico-after"><p>{json.dumps(messages)}</p></div>'), metadata, []
133
134
135 def render_markdown_stats(md_input, **kwargs):
136 """
137 Returns contents statistics (words and chars)
138 """
139 kwargs["stats"] = True
140 kwargs["disable_images_download"] = True
141 logger.setLevel(logging.INFO)
142 content, metadata, messages = _render_markdown_once(md_input, output_format="tex", **kwargs)
143 if metadata:
144 return metadata.get("stats", {}).get("signs", {})
145 return None
146
147
148 @register.filter(name="epub_markdown", needs_autoescape=False)
149 def epub_markdown(md_input, image_directory):
150 media_root = str(settings.MEDIA_ROOT)
151 if not media_root.endswith("/"):
152 media_root += "/"
153 replaced_media_url = settings.MEDIA_URL
154 if replaced_media_url.startswith("/"):
155 replaced_media_url = replaced_media_url[1:]
156 return (
157 emarkdown(
158 md_input,
159 output_format="epub",
160 images_download_dir=image_directory.absolute,
161 local_url_to_local_path=[settings.MEDIA_URL + "galleries/[0-9]+", image_directory.relative],
162 )
163 .replace('src"/', f'src="{media_root}')
164 .replace(f'src="{media_root}{replaced_media_url}', f'src="{media_root}')
165 )
166
167
168 @register.filter(needs_autoescape=False)
169 @stringfilter
170 def emarkdown(md_input, use_jsfiddle="", **kwargs):
171 """
172 :param str md_input: Markdown string.
173 :return: HTML string.
174 :rtype: str
175 """
176 disable_jsfiddle = use_jsfiddle != "js"
177 content, metadata, messages = render_markdown(
178 md_input,
179 on_error=lambda m: logger.error("Markdown errors %s", str(m)),
180 **dict(kwargs, disable_jsfiddle=disable_jsfiddle),
181 )
182 kwargs.get("metadata", {}).update(metadata)
183 return content or ""
184
185
186 @register.filter(needs_autoescape=False)
187 @stringfilter
188 def emarkdown_preview(md_input, use_jsfiddle="", **kwargs):
189 """
190 Filter markdown string and render it to html.
191
192 :param str md_input: Markdown string.
193 :return: HTML string.
194 :rtype: str
195 """
196 disable_jsfiddle = use_jsfiddle != "js"
197
198 content, metadata, messages = render_markdown(md_input, **dict(kwargs, disable_jsfiddle=disable_jsfiddle))
199
200 if messages:
201 content = _(
202 '</div><div class="preview-error"><strong>Erreur du serveur Markdown:</strong>\n{}'.format(
203 "<br>- ".join([m["message"] for m in messages])
204 )
205 )
206 content = mark_safe(content)
207
208 return content
209
210
211 @register.filter(needs_autoescape=False)
212 @stringfilter
213 def emarkdown_inline(text):
214 """
215 Parses inline elements only and renders HTML. Mainly for member signatures.
216 Although they are inline elements, pings are disabled.
217
218 :param str text: Markdown string.
219 :return: HTML string.
220 :rtype: str
221 """
222 rendered = emarkdown(text, inline=True)
223 return mark_safe(rendered.replace("<a href=", '<a rel="nofollow" href='))
224
225
226 def sub_hd(match, count):
227 """Replace header shifted."""
228 subt = match.group(1)
229 lvl = match.group("level")
230 header = match.group("header")
231 end = match.group(4)
232
233 new_content = subt + "#" * count + lvl + header + end
234
235 return new_content
236
237
238 def shift_heading(text, count):
239 """
240 Shift header in markdown document.
241
242 :param str text: Text to filter.
243 :param int count:
244 :return: Filtered text.
245 :rtype: str
246 """
247 text_by_code = re.split("(```|~~~)", text)
248 starting_code = None
249 for i, element in enumerate(text_by_code):
250 if element in ["```", "~~~"] and not starting_code:
251 starting_code = element
252 elif element == starting_code:
253 starting_code = None
254 elif starting_code is None:
255 text_by_code[i] = re.sub(
256 r"(^|\n)(?P<level>#{1,4})(?P<header>.*?)#*(\n|$)", lambda t: sub_hd(t, count), text_by_code[i]
257 )
258
259 return "".join(text_by_code)
260
261
262 @register.filter("shift_heading_1")
263 def shift_heading_1(text):
264 return shift_heading(text, 1)
265
266
267 @register.filter("shift_heading_2")
268 def shift_heading_2(text):
269 return shift_heading(text, 2)
270
271
272 @register.filter("shift_heading_3")
273 def shift_heading_3(text):
274 return shift_heading(text, 3)
275
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zds/utils/templatetags/emarkdown.py b/zds/utils/templatetags/emarkdown.py
--- a/zds/utils/templatetags/emarkdown.py
+++ b/zds/utils/templatetags/emarkdown.py
@@ -153,7 +153,7 @@
replaced_media_url = settings.MEDIA_URL
if replaced_media_url.startswith("/"):
replaced_media_url = replaced_media_url[1:]
- return (
+ return mark_safe(
emarkdown(
md_input,
output_format="epub",
| {"golden_diff": "diff --git a/zds/utils/templatetags/emarkdown.py b/zds/utils/templatetags/emarkdown.py\n--- a/zds/utils/templatetags/emarkdown.py\n+++ b/zds/utils/templatetags/emarkdown.py\n@@ -153,7 +153,7 @@\n replaced_media_url = settings.MEDIA_URL\n if replaced_media_url.startswith(\"/\"):\n replaced_media_url = replaced_media_url[1:]\n- return (\n+ return mark_safe(\n emarkdown(\n md_input,\n output_format=\"epub\",\n", "issue": "Le code HTML des ePUBs est visible\nJe ne sais pas si c'est un bug de zds-site ou zmd.\r\n\r\nCr\u00e9er un billet, le publier. L'export au format ePUB contient le contenu attendu, mais on voit les balises HTML (ce qui rend le fichier illisible).\r\n\r\nLe bug est pr\u00e9sent avec zmd 10 et 11.\r\n\n", "before_files": [{"content": "import re\nimport json\nimport logging\nfrom requests import post, HTTPError\n\nfrom django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import stringfilter\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\nlogger = logging.getLogger(__name__)\nregister = template.Library()\n\"\"\"\nMarkdown related filters.\n\"\"\"\n\n# Constants\nMAX_ATTEMPTS = 3\nMD_PARSING_ERROR = _(\"Une erreur est survenue dans la g\u00e9n\u00e9ration de texte Markdown. Veuillez rapporter le bug.\")\n\nFORMAT_ENDPOINTS = {\n \"html\": \"/html\",\n \"texfile\": \"/latex-document\",\n \"epub\": \"/epub\",\n \"tex\": \"/latex\",\n}\n\n\ndef _render_markdown_once(md_input, *, output_format=\"html\", **kwargs):\n \"\"\"\n Returns None on error (error details are logged). No retry mechanism.\n \"\"\"\n\n def log_args():\n logger.error(f\"md_input: {md_input!r}\")\n logger.error(f\"kwargs: {kwargs!r}\")\n\n inline = kwargs.get(\"inline\", False) is True\n full_json = kwargs.pop(\"full_json\", False)\n\n if settings.ZDS_APP[\"zmd\"][\"disable_pings\"] is True:\n kwargs[\"disable_ping\"] = True\n\n endpoint = FORMAT_ENDPOINTS[output_format]\n\n try:\n timeout = 10\n real_input = str(md_input)\n if output_format.startswith(\"tex\") or full_json:\n # latex may be really long to generate but it is also restrained by server configuration\n timeout = 120\n # use manifest renderer\n real_input = md_input\n response = post(\n \"{}{}\".format(settings.ZDS_APP[\"zmd\"][\"server\"], endpoint),\n json={\n \"opts\": kwargs,\n \"md\": real_input,\n },\n timeout=timeout,\n )\n except HTTPError:\n logger.exception(\"An HTTP error happened, markdown rendering failed\")\n log_args()\n return \"\", {}, []\n\n if response.status_code == 413:\n return \"\", {}, [{\"message\": str(_(\"Texte trop volumineux.\"))}]\n\n if response.status_code != 200:\n logger.error(f\"The markdown server replied with status {response.status_code} (expected 200)\")\n log_args()\n return \"\", {}, []\n\n try:\n content, metadata, messages = response.json()\n logger.debug(\"Result %s, %s, %s\", content, metadata, messages)\n if messages:\n logger.error(\"Markdown errors %s\", json.dumps(messages))\n if isinstance(content, str):\n content = content.strip()\n if inline:\n content = content.replace(\"</p>\\n\", \"\\n\\n\").replace(\"\\n<p>\", \"\\n\")\n if full_json:\n return content, metadata, messages\n return mark_safe(content), metadata, messages\n except: # noqa\n logger.exception(\"Unexpected exception raised\")\n log_args()\n return \"\", {}, []\n\n\ndef render_markdown(md_input, *, on_error=None, disable_jsfiddle=True, **kwargs):\n \"\"\"Render a markdown string.\n\n Returns a tuple ``(rendered_content, 
metadata)``, where\n ``rendered_content`` is a string and ``metadata`` is a dict.\n\n Handles errors gracefully by returning an user-friendly HTML\n string which explains that the Markdown rendering has failed\n (without any technical details).\n\n \"\"\"\n opts = {\"disable_jsfiddle\": disable_jsfiddle}\n opts.update(kwargs)\n content, metadata, messages = _render_markdown_once(md_input, **opts)\n if messages and on_error:\n on_error([m[\"message\"] for m in messages])\n if content is not None:\n # Success!\n return content, metadata, messages\n\n # Oops, something went wrong\n\n attempts = kwargs.get(\"attempts\", 0)\n inline = kwargs.get(\"inline\", False) is True\n\n if attempts < MAX_ATTEMPTS:\n if not kwargs:\n kwargs = dict()\n return render_markdown(md_input, **dict(kwargs, attempts=attempts + 1))\n\n logger.error(\"Max attempt count reached, giving up\")\n logger.error(f\"md_input: {md_input!r}\")\n logger.error(f\"kwargs: {kwargs!r}\")\n\n # FIXME: This cannot work with LaTeX.\n if inline:\n return mark_safe(f\"<p>{json.dumps(messages)}</p>\"), metadata, []\n else:\n return mark_safe(f'<div class=\"error ico-after\"><p>{json.dumps(messages)}</p></div>'), metadata, []\n\n\ndef render_markdown_stats(md_input, **kwargs):\n \"\"\"\n Returns contents statistics (words and chars)\n \"\"\"\n kwargs[\"stats\"] = True\n kwargs[\"disable_images_download\"] = True\n logger.setLevel(logging.INFO)\n content, metadata, messages = _render_markdown_once(md_input, output_format=\"tex\", **kwargs)\n if metadata:\n return metadata.get(\"stats\", {}).get(\"signs\", {})\n return None\n\n\[email protected](name=\"epub_markdown\", needs_autoescape=False)\ndef epub_markdown(md_input, image_directory):\n media_root = str(settings.MEDIA_ROOT)\n if not media_root.endswith(\"/\"):\n media_root += \"/\"\n replaced_media_url = settings.MEDIA_URL\n if replaced_media_url.startswith(\"/\"):\n replaced_media_url = replaced_media_url[1:]\n return (\n emarkdown(\n md_input,\n output_format=\"epub\",\n images_download_dir=image_directory.absolute,\n local_url_to_local_path=[settings.MEDIA_URL + \"galleries/[0-9]+\", image_directory.relative],\n )\n .replace('src\"/', f'src=\"{media_root}')\n .replace(f'src=\"{media_root}{replaced_media_url}', f'src=\"{media_root}')\n )\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown(md_input, use_jsfiddle=\"\", **kwargs):\n \"\"\"\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = use_jsfiddle != \"js\"\n content, metadata, messages = render_markdown(\n md_input,\n on_error=lambda m: logger.error(\"Markdown errors %s\", str(m)),\n **dict(kwargs, disable_jsfiddle=disable_jsfiddle),\n )\n kwargs.get(\"metadata\", {}).update(metadata)\n return content or \"\"\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_preview(md_input, use_jsfiddle=\"\", **kwargs):\n \"\"\"\n Filter markdown string and render it to html.\n\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = use_jsfiddle != \"js\"\n\n content, metadata, messages = render_markdown(md_input, **dict(kwargs, disable_jsfiddle=disable_jsfiddle))\n\n if messages:\n content = _(\n '</div><div class=\"preview-error\"><strong>Erreur du serveur Markdown:</strong>\\n{}'.format(\n \"<br>- \".join([m[\"message\"] for m in messages])\n )\n )\n content = mark_safe(content)\n\n return content\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_inline(text):\n 
\"\"\"\n Parses inline elements only and renders HTML. Mainly for member signatures.\n Although they are inline elements, pings are disabled.\n\n :param str text: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n rendered = emarkdown(text, inline=True)\n return mark_safe(rendered.replace(\"<a href=\", '<a rel=\"nofollow\" href='))\n\n\ndef sub_hd(match, count):\n \"\"\"Replace header shifted.\"\"\"\n subt = match.group(1)\n lvl = match.group(\"level\")\n header = match.group(\"header\")\n end = match.group(4)\n\n new_content = subt + \"#\" * count + lvl + header + end\n\n return new_content\n\n\ndef shift_heading(text, count):\n \"\"\"\n Shift header in markdown document.\n\n :param str text: Text to filter.\n :param int count:\n :return: Filtered text.\n :rtype: str\n \"\"\"\n text_by_code = re.split(\"(```|~~~)\", text)\n starting_code = None\n for i, element in enumerate(text_by_code):\n if element in [\"```\", \"~~~\"] and not starting_code:\n starting_code = element\n elif element == starting_code:\n starting_code = None\n elif starting_code is None:\n text_by_code[i] = re.sub(\n r\"(^|\\n)(?P<level>#{1,4})(?P<header>.*?)#*(\\n|$)\", lambda t: sub_hd(t, count), text_by_code[i]\n )\n\n return \"\".join(text_by_code)\n\n\[email protected](\"shift_heading_1\")\ndef shift_heading_1(text):\n return shift_heading(text, 1)\n\n\[email protected](\"shift_heading_2\")\ndef shift_heading_2(text):\n return shift_heading(text, 2)\n\n\[email protected](\"shift_heading_3\")\ndef shift_heading_3(text):\n return shift_heading(text, 3)\n", "path": "zds/utils/templatetags/emarkdown.py"}], "after_files": [{"content": "import re\nimport json\nimport logging\nfrom requests import post, HTTPError\n\nfrom django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import stringfilter\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\nlogger = logging.getLogger(__name__)\nregister = template.Library()\n\"\"\"\nMarkdown related filters.\n\"\"\"\n\n# Constants\nMAX_ATTEMPTS = 3\nMD_PARSING_ERROR = _(\"Une erreur est survenue dans la g\u00e9n\u00e9ration de texte Markdown. Veuillez rapporter le bug.\")\n\nFORMAT_ENDPOINTS = {\n \"html\": \"/html\",\n \"texfile\": \"/latex-document\",\n \"epub\": \"/epub\",\n \"tex\": \"/latex\",\n}\n\n\ndef _render_markdown_once(md_input, *, output_format=\"html\", **kwargs):\n \"\"\"\n Returns None on error (error details are logged). 
No retry mechanism.\n \"\"\"\n\n def log_args():\n logger.error(f\"md_input: {md_input!r}\")\n logger.error(f\"kwargs: {kwargs!r}\")\n\n inline = kwargs.get(\"inline\", False) is True\n full_json = kwargs.pop(\"full_json\", False)\n\n if settings.ZDS_APP[\"zmd\"][\"disable_pings\"] is True:\n kwargs[\"disable_ping\"] = True\n\n endpoint = FORMAT_ENDPOINTS[output_format]\n\n try:\n timeout = 10\n real_input = str(md_input)\n if output_format.startswith(\"tex\") or full_json:\n # latex may be really long to generate but it is also restrained by server configuration\n timeout = 120\n # use manifest renderer\n real_input = md_input\n response = post(\n \"{}{}\".format(settings.ZDS_APP[\"zmd\"][\"server\"], endpoint),\n json={\n \"opts\": kwargs,\n \"md\": real_input,\n },\n timeout=timeout,\n )\n except HTTPError:\n logger.exception(\"An HTTP error happened, markdown rendering failed\")\n log_args()\n return \"\", {}, []\n\n if response.status_code == 413:\n return \"\", {}, [{\"message\": str(_(\"Texte trop volumineux.\"))}]\n\n if response.status_code != 200:\n logger.error(f\"The markdown server replied with status {response.status_code} (expected 200)\")\n log_args()\n return \"\", {}, []\n\n try:\n content, metadata, messages = response.json()\n logger.debug(\"Result %s, %s, %s\", content, metadata, messages)\n if messages:\n logger.error(\"Markdown errors %s\", json.dumps(messages))\n if isinstance(content, str):\n content = content.strip()\n if inline:\n content = content.replace(\"</p>\\n\", \"\\n\\n\").replace(\"\\n<p>\", \"\\n\")\n if full_json:\n return content, metadata, messages\n return mark_safe(content), metadata, messages\n except: # noqa\n logger.exception(\"Unexpected exception raised\")\n log_args()\n return \"\", {}, []\n\n\ndef render_markdown(md_input, *, on_error=None, disable_jsfiddle=True, **kwargs):\n \"\"\"Render a markdown string.\n\n Returns a tuple ``(rendered_content, metadata)``, where\n ``rendered_content`` is a string and ``metadata`` is a dict.\n\n Handles errors gracefully by returning an user-friendly HTML\n string which explains that the Markdown rendering has failed\n (without any technical details).\n\n \"\"\"\n opts = {\"disable_jsfiddle\": disable_jsfiddle}\n opts.update(kwargs)\n content, metadata, messages = _render_markdown_once(md_input, **opts)\n if messages and on_error:\n on_error([m[\"message\"] for m in messages])\n if content is not None:\n # Success!\n return content, metadata, messages\n\n # Oops, something went wrong\n\n attempts = kwargs.get(\"attempts\", 0)\n inline = kwargs.get(\"inline\", False) is True\n\n if attempts < MAX_ATTEMPTS:\n if not kwargs:\n kwargs = dict()\n return render_markdown(md_input, **dict(kwargs, attempts=attempts + 1))\n\n logger.error(\"Max attempt count reached, giving up\")\n logger.error(f\"md_input: {md_input!r}\")\n logger.error(f\"kwargs: {kwargs!r}\")\n\n # FIXME: This cannot work with LaTeX.\n if inline:\n return mark_safe(f\"<p>{json.dumps(messages)}</p>\"), metadata, []\n else:\n return mark_safe(f'<div class=\"error ico-after\"><p>{json.dumps(messages)}</p></div>'), metadata, []\n\n\ndef render_markdown_stats(md_input, **kwargs):\n \"\"\"\n Returns contents statistics (words and chars)\n \"\"\"\n kwargs[\"stats\"] = True\n kwargs[\"disable_images_download\"] = True\n logger.setLevel(logging.INFO)\n content, metadata, messages = _render_markdown_once(md_input, output_format=\"tex\", **kwargs)\n if metadata:\n return metadata.get(\"stats\", {}).get(\"signs\", {})\n return None\n\n\[email 
protected](name=\"epub_markdown\", needs_autoescape=False)\ndef epub_markdown(md_input, image_directory):\n media_root = str(settings.MEDIA_ROOT)\n if not media_root.endswith(\"/\"):\n media_root += \"/\"\n replaced_media_url = settings.MEDIA_URL\n if replaced_media_url.startswith(\"/\"):\n replaced_media_url = replaced_media_url[1:]\n return mark_safe(\n emarkdown(\n md_input,\n output_format=\"epub\",\n images_download_dir=image_directory.absolute,\n local_url_to_local_path=[settings.MEDIA_URL + \"galleries/[0-9]+\", image_directory.relative],\n )\n .replace('src\"/', f'src=\"{media_root}')\n .replace(f'src=\"{media_root}{replaced_media_url}', f'src=\"{media_root}')\n )\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown(md_input, use_jsfiddle=\"\", **kwargs):\n \"\"\"\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = use_jsfiddle != \"js\"\n content, metadata, messages = render_markdown(\n md_input,\n on_error=lambda m: logger.error(\"Markdown errors %s\", str(m)),\n **dict(kwargs, disable_jsfiddle=disable_jsfiddle),\n )\n kwargs.get(\"metadata\", {}).update(metadata)\n return content or \"\"\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_preview(md_input, use_jsfiddle=\"\", **kwargs):\n \"\"\"\n Filter markdown string and render it to html.\n\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = use_jsfiddle != \"js\"\n\n content, metadata, messages = render_markdown(md_input, **dict(kwargs, disable_jsfiddle=disable_jsfiddle))\n\n if messages:\n content = _(\n '</div><div class=\"preview-error\"><strong>Erreur du serveur Markdown:</strong>\\n{}'.format(\n \"<br>- \".join([m[\"message\"] for m in messages])\n )\n )\n content = mark_safe(content)\n\n return content\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_inline(text):\n \"\"\"\n Parses inline elements only and renders HTML. 
Mainly for member signatures.\n Although they are inline elements, pings are disabled.\n\n :param str text: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n rendered = emarkdown(text, inline=True)\n return mark_safe(rendered.replace(\"<a href=\", '<a rel=\"nofollow\" href='))\n\n\ndef sub_hd(match, count):\n \"\"\"Replace header shifted.\"\"\"\n subt = match.group(1)\n lvl = match.group(\"level\")\n header = match.group(\"header\")\n end = match.group(4)\n\n new_content = subt + \"#\" * count + lvl + header + end\n\n return new_content\n\n\ndef shift_heading(text, count):\n \"\"\"\n Shift header in markdown document.\n\n :param str text: Text to filter.\n :param int count:\n :return: Filtered text.\n :rtype: str\n \"\"\"\n text_by_code = re.split(\"(```|~~~)\", text)\n starting_code = None\n for i, element in enumerate(text_by_code):\n if element in [\"```\", \"~~~\"] and not starting_code:\n starting_code = element\n elif element == starting_code:\n starting_code = None\n elif starting_code is None:\n text_by_code[i] = re.sub(\n r\"(^|\\n)(?P<level>#{1,4})(?P<header>.*?)#*(\\n|$)\", lambda t: sub_hd(t, count), text_by_code[i]\n )\n\n return \"\".join(text_by_code)\n\n\[email protected](\"shift_heading_1\")\ndef shift_heading_1(text):\n return shift_heading(text, 1)\n\n\[email protected](\"shift_heading_2\")\ndef shift_heading_2(text):\n return shift_heading(text, 2)\n\n\[email protected](\"shift_heading_3\")\ndef shift_heading_3(text):\n return shift_heading(text, 3)\n", "path": "zds/utils/templatetags/emarkdown.py"}]} | 3,085 | 128 |
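Why the one-line `mark_safe` change in the `epub_markdown` patch above is enough: `emarkdown()` hands back a string that Django has already marked safe, but the chained `str.replace()` calls return a plain `str`, so the template engine escapes the HTML a second time when the ePUB is rendered and the raw tags become visible as text. The sketch below illustrates that behaviour; it assumes a recent Django install and is only an illustration, not code taken from the zds repository.

```python
from django.utils.html import conditional_escape  # what autoescaping applies to {{ value }}
from django.utils.safestring import mark_safe

safe = mark_safe("<p>hello</p>")
print(conditional_escape(safe))              # <p>hello</p>  (left untouched, already safe)

plain = safe.replace("hello", "world")       # str.replace() returns a plain str, "safe" flag lost
print(conditional_escape(plain))             # <p>world</p>  (escaped, tags show as text)
print(conditional_escape(mark_safe(plain)))  # <p>world</p>  (safe again)
```

Re-wrapping the final, already-substituted string in `mark_safe()` is therefore the minimal fix: the replacements still run on the rendered HTML, and the result keeps its safe flag all the way into the ePUB template.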
gh_patches_debug_30683 | rasdani/github-patches | git_diff | scikit-image__scikit-image-4313 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docstring in util.pad returns <no docstring>
## Description
`util.pad()` returns an empty docstring; however, [the docstring is there](https://github.com/scikit-image/scikit-image/blob/v0.12.2/skimage/util/arraypad.py#L1117).
Any idea what's happening? Thanks, y'all!
## Way to reproduce
```python
In [1]: from skimage import util
In [2]: util.pad?
Signature: util.pad(*args, **kwargs)
Docstring: <no docstring>
File: ~/<__array_function__ internals>
Type: function
```
## Version information
```python
# Paste the output of the following python commands
>>> from __future__ import print_function
>>> import sys; print(sys.version)
3.7.4 (default, Aug 13 2019, 20:35:49)
[GCC 7.3.0]
>>> import platform; print(platform.platform())
Linux-5.3.0-20-generic-x86_64-with-debian-buster-sid
>>> import skimage; print("scikit-image version: {}".format(skimage.__version__))
scikit-image version: 0.16.2
>>> import numpy; print("numpy version: {}".format(numpy.__version__))
numpy version: 1.17.3
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/util/__init__.py`
Content:
```
1 from .dtype import (img_as_float32, img_as_float64, img_as_float,
2 img_as_int, img_as_uint, img_as_ubyte,
3 img_as_bool, dtype_limits)
4 from .shape import view_as_blocks, view_as_windows
5 from .noise import random_noise
6 from .apply_parallel import apply_parallel
7
8 from .arraycrop import crop
9 from .compare import compare_images
10 from ._regular_grid import regular_grid, regular_seeds
11 from .unique import unique_rows
12 from ._invert import invert
13 from ._montage import montage
14
15 from .._shared.utils import copy_func
16
17 from numpy import pad as numpy_pad
18 pad = copy_func(numpy_pad, name='pad')
19
20
21 __all__ = ['img_as_float32',
22 'img_as_float64',
23 'img_as_float',
24 'img_as_int',
25 'img_as_uint',
26 'img_as_ubyte',
27 'img_as_bool',
28 'dtype_limits',
29 'view_as_blocks',
30 'view_as_windows',
31 'pad',
32 'crop',
33 'compare_images',
34 'montage',
35 'random_noise',
36 'regular_grid',
37 'regular_seeds',
38 'apply_parallel',
39 'invert',
40 'unique_rows',
41 ]
42
```
Path: `skimage/_shared/utils.py`
Content:
```
1 import warnings
2 import functools
3 import sys
4 import numpy as np
5 import types
6 import numbers
7
8 from ..util import img_as_float
9 from ._warnings import all_warnings, warn
10
11 __all__ = ['deprecated', 'get_bound_method_class', 'all_warnings',
12 'safe_as_int', 'check_nD', 'check_shape_equality', 'warn']
13
14
15 class skimage_deprecation(Warning):
16 """Create our own deprecation class, since Python >= 2.7
17 silences deprecations by default.
18
19 """
20 pass
21
22
23 class deprecated(object):
24 """Decorator to mark deprecated functions with warning.
25
26 Adapted from <http://wiki.python.org/moin/PythonDecoratorLibrary>.
27
28 Parameters
29 ----------
30 alt_func : str
31 If given, tell user what function to use instead.
32 behavior : {'warn', 'raise'}
33 Behavior during call to deprecated function: 'warn' = warn user that
34 function is deprecated; 'raise' = raise error.
35 removed_version : str
36 The package version in which the deprecated function will be removed.
37 """
38
39 def __init__(self, alt_func=None, behavior='warn', removed_version=None):
40 self.alt_func = alt_func
41 self.behavior = behavior
42 self.removed_version = removed_version
43
44 def __call__(self, func):
45
46 alt_msg = ''
47 if self.alt_func is not None:
48 alt_msg = ' Use ``%s`` instead.' % self.alt_func
49 rmv_msg = ''
50 if self.removed_version is not None:
51 rmv_msg = (' and will be removed in version %s' %
52 self.removed_version)
53
54 msg = ('Function ``%s`` is deprecated' % func.__name__ +
55 rmv_msg + '.' + alt_msg)
56
57 @functools.wraps(func)
58 def wrapped(*args, **kwargs):
59 if self.behavior == 'warn':
60 func_code = func.__code__
61 warnings.simplefilter('always', skimage_deprecation)
62 warnings.warn_explicit(msg,
63 category=skimage_deprecation,
64 filename=func_code.co_filename,
65 lineno=func_code.co_firstlineno + 1)
66 elif self.behavior == 'raise':
67 raise skimage_deprecation(msg)
68 return func(*args, **kwargs)
69
70 # modify doc string to display deprecation warning
71 doc = '**Deprecated function**.' + alt_msg
72 if wrapped.__doc__ is None:
73 wrapped.__doc__ = doc
74 else:
75 wrapped.__doc__ = doc + '\n\n ' + wrapped.__doc__
76
77 return wrapped
78
79
80 def get_bound_method_class(m):
81 """Return the class for a bound method.
82
83 """
84 return m.im_class if sys.version < '3' else m.__self__.__class__
85
86
87 def safe_as_int(val, atol=1e-3):
88 """
89 Attempt to safely cast values to integer format.
90
91 Parameters
92 ----------
93 val : scalar or iterable of scalars
94 Number or container of numbers which are intended to be interpreted as
95 integers, e.g., for indexing purposes, but which may not carry integer
96 type.
97 atol : float
98 Absolute tolerance away from nearest integer to consider values in
99 ``val`` functionally integers.
100
101 Returns
102 -------
103 val_int : NumPy scalar or ndarray of dtype `np.int64`
104 Returns the input value(s) coerced to dtype `np.int64` assuming all
105 were within ``atol`` of the nearest integer.
106
107 Notes
108 -----
109 This operation calculates ``val`` modulo 1, which returns the mantissa of
110 all values. Then all mantissas greater than 0.5 are subtracted from one.
111 Finally, the absolute tolerance from zero is calculated. If it is less
112 than ``atol`` for all value(s) in ``val``, they are rounded and returned
113 in an integer array. Or, if ``val`` was a scalar, a NumPy scalar type is
114 returned.
115
116 If any value(s) are outside the specified tolerance, an informative error
117 is raised.
118
119 Examples
120 --------
121 >>> safe_as_int(7.0)
122 7
123
124 >>> safe_as_int([9, 4, 2.9999999999])
125 array([9, 4, 3])
126
127 >>> safe_as_int(53.1)
128 Traceback (most recent call last):
129 ...
130 ValueError: Integer argument required but received 53.1, check inputs.
131
132 >>> safe_as_int(53.01, atol=0.01)
133 53
134
135 """
136 mod = np.asarray(val) % 1 # Extract mantissa
137
138 # Check for and subtract any mod values > 0.5 from 1
139 if mod.ndim == 0: # Scalar input, cannot be indexed
140 if mod > 0.5:
141 mod = 1 - mod
142 else: # Iterable input, now ndarray
143 mod[mod > 0.5] = 1 - mod[mod > 0.5] # Test on each side of nearest int
144
145 try:
146 np.testing.assert_allclose(mod, 0, atol=atol)
147 except AssertionError:
148 raise ValueError("Integer argument required but received "
149 "{0}, check inputs.".format(val))
150
151 return np.round(val).astype(np.int64)
152
153
154 def check_shape_equality(im1, im2):
155 """Raise an error if the shape do not match."""
156 if not im1.shape == im2.shape:
157 raise ValueError('Input images must have the same dimensions.')
158 return
159
160
161 def check_nD(array, ndim, arg_name='image'):
162 """
163 Verify an array meets the desired ndims and array isn't empty.
164
165 Parameters
166 ----------
167 array : array-like
168 Input array to be validated
169 ndim : int or iterable of ints
170 Allowable ndim or ndims for the array.
171 arg_name : str, optional
172 The name of the array in the original function.
173
174 """
175 array = np.asanyarray(array)
176 msg_incorrect_dim = "The parameter `%s` must be a %s-dimensional array"
177 msg_empty_array = "The parameter `%s` cannot be an empty array"
178 if isinstance(ndim, int):
179 ndim = [ndim]
180 if array.size == 0:
181 raise ValueError(msg_empty_array % (arg_name))
182 if not array.ndim in ndim:
183 raise ValueError(msg_incorrect_dim % (arg_name, '-or-'.join([str(n) for n in ndim])))
184
185
186 def copy_func(f, name=None):
187 """Create a copy of a function.
188
189 Parameters
190 ----------
191 f : function
192 Function to copy.
193 name : str, optional
194 Name of new function.
195
196 """
197 return types.FunctionType(f.__code__, f.__globals__, name or f.__name__,
198 f.__defaults__, f.__closure__)
199
200
201 def check_random_state(seed):
202 """Turn seed into a `np.random.RandomState` instance.
203
204 Parameters
205 ----------
206 seed : None, int or np.random.RandomState
207 If `seed` is None, return the RandomState singleton used by `np.random`.
208 If `seed` is an int, return a new RandomState instance seeded with `seed`.
209 If `seed` is already a RandomState instance, return it.
210
211 Raises
212 ------
213 ValueError
214 If `seed` is of the wrong type.
215
216 """
217 # Function originally from scikit-learn's module sklearn.utils.validation
218 if seed is None or seed is np.random:
219 return np.random.mtrand._rand
220 if isinstance(seed, (numbers.Integral, np.integer)):
221 return np.random.RandomState(seed)
222 if isinstance(seed, np.random.RandomState):
223 return seed
224 raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
225 ' instance' % seed)
226
227
228 def convert_to_float(image, preserve_range):
229 """Convert input image to float image with the appropriate range.
230
231 Parameters
232 ----------
233 image : ndarray
234 Input image.
235 preserve_range : bool
236 Determines if the range of the image should be kept or transformed
237 using img_as_float. Also see
238 https://scikit-image.org/docs/dev/user_guide/data_types.html
239
240 Notes:
241 ------
242 * Input images with `float32` data type are not upcast.
243
244 Returns
245 -------
246 image : ndarray
247 Transformed version of the input.
248
249 """
250 if preserve_range:
251 # Convert image to double only if it is not single or double
252 # precision float
253 if image.dtype.char not in 'df':
254 image = image.astype(float)
255 else:
256 image = img_as_float(image)
257 return image
258
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/_shared/utils.py b/skimage/_shared/utils.py
--- a/skimage/_shared/utils.py
+++ b/skimage/_shared/utils.py
@@ -2,7 +2,6 @@
import functools
import sys
import numpy as np
-import types
import numbers
from ..util import img_as_float
@@ -183,21 +182,6 @@
raise ValueError(msg_incorrect_dim % (arg_name, '-or-'.join([str(n) for n in ndim])))
-def copy_func(f, name=None):
- """Create a copy of a function.
-
- Parameters
- ----------
- f : function
- Function to copy.
- name : str, optional
- Name of new function.
-
- """
- return types.FunctionType(f.__code__, f.__globals__, name or f.__name__,
- f.__defaults__, f.__closure__)
-
-
def check_random_state(seed):
"""Turn seed into a `np.random.RandomState` instance.
diff --git a/skimage/util/__init__.py b/skimage/util/__init__.py
--- a/skimage/util/__init__.py
+++ b/skimage/util/__init__.py
@@ -1,3 +1,6 @@
+import functools
+import warnings
+import numpy as np
from .dtype import (img_as_float32, img_as_float64, img_as_float,
img_as_int, img_as_uint, img_as_ubyte,
img_as_bool, dtype_limits)
@@ -12,10 +15,13 @@
from ._invert import invert
from ._montage import montage
-from .._shared.utils import copy_func
-from numpy import pad as numpy_pad
-pad = copy_func(numpy_pad, name='pad')
+@functools.wraps(np.pad)
+def pad(*args, **kwargs):
+ warnings.warn("skimage.util.pad is deprecated and will be removed in "
+ "version 0.19. Please use numpy.pad instead.",
+ FutureWarning, stacklevel=2)
+ return np.pad(*args, **kwargs)
__all__ = ['img_as_float32',
| {"golden_diff": "diff --git a/skimage/_shared/utils.py b/skimage/_shared/utils.py\n--- a/skimage/_shared/utils.py\n+++ b/skimage/_shared/utils.py\n@@ -2,7 +2,6 @@\n import functools\n import sys\n import numpy as np\n-import types\n import numbers\n \n from ..util import img_as_float\n@@ -183,21 +182,6 @@\n raise ValueError(msg_incorrect_dim % (arg_name, '-or-'.join([str(n) for n in ndim])))\n \n \n-def copy_func(f, name=None):\n- \"\"\"Create a copy of a function.\n-\n- Parameters\n- ----------\n- f : function\n- Function to copy.\n- name : str, optional\n- Name of new function.\n-\n- \"\"\"\n- return types.FunctionType(f.__code__, f.__globals__, name or f.__name__,\n- f.__defaults__, f.__closure__)\n-\n-\n def check_random_state(seed):\n \"\"\"Turn seed into a `np.random.RandomState` instance.\n \ndiff --git a/skimage/util/__init__.py b/skimage/util/__init__.py\n--- a/skimage/util/__init__.py\n+++ b/skimage/util/__init__.py\n@@ -1,3 +1,6 @@\n+import functools\n+import warnings\n+import numpy as np\n from .dtype import (img_as_float32, img_as_float64, img_as_float,\n img_as_int, img_as_uint, img_as_ubyte,\n img_as_bool, dtype_limits)\n@@ -12,10 +15,13 @@\n from ._invert import invert\n from ._montage import montage\n \n-from .._shared.utils import copy_func\n \n-from numpy import pad as numpy_pad\n-pad = copy_func(numpy_pad, name='pad')\[email protected](np.pad)\n+def pad(*args, **kwargs):\n+ warnings.warn(\"skimage.util.pad is deprecated and will be removed in \"\n+ \"version 0.19. Please use numpy.pad instead.\",\n+ FutureWarning, stacklevel=2)\n+ return np.pad(*args, **kwargs)\n \n \n __all__ = ['img_as_float32',\n", "issue": "Docstring in util.pad returns <no docstring>\n## Description\r\n\r\n`util.pad()` returns an empty docstring; however, [the docstring is there](https://github.com/scikit-image/scikit-image/blob/v0.12.2/skimage/util/arraypad.py#L1117).\r\nAny idea on what's happening? Thanks y'all!\r\n\r\n## Way to reproduce\r\n```python\r\nIn [1]: from skimage import util \r\n\r\nIn [2]: util.pad? 
\r\nSignature: util.pad(*args, **kwargs)\r\nDocstring: <no docstring>\r\nFile: ~/<__array_function__ internals>\r\nType: function\r\n```\r\n\r\n\r\n## Version information\r\n```python\r\n# Paste the output of the following python commands\r\n>>> from __future__ import print_function\r\n\r\n>>> import sys; print(sys.version)\r\n3.7.4 (default, Aug 13 2019, 20:35:49) \r\n[GCC 7.3.0]\r\n\r\n>>> import platform; print(platform.platform())\r\nLinux-5.3.0-20-generic-x86_64-with-debian-buster-sid\r\n\r\n>>> import skimage; print(\"scikit-image version: {}\".format(skimage.__version__))\r\nscikit-image version: 0.16.2\r\n\r\n>>> import numpy; print(\"numpy version: {}\".format(numpy.__version__))\r\nnumpy version: 1.17.3\r\n```\n", "before_files": [{"content": "from .dtype import (img_as_float32, img_as_float64, img_as_float,\n img_as_int, img_as_uint, img_as_ubyte,\n img_as_bool, dtype_limits)\nfrom .shape import view_as_blocks, view_as_windows\nfrom .noise import random_noise\nfrom .apply_parallel import apply_parallel\n\nfrom .arraycrop import crop\nfrom .compare import compare_images\nfrom ._regular_grid import regular_grid, regular_seeds\nfrom .unique import unique_rows\nfrom ._invert import invert\nfrom ._montage import montage\n\nfrom .._shared.utils import copy_func\n\nfrom numpy import pad as numpy_pad\npad = copy_func(numpy_pad, name='pad')\n\n\n__all__ = ['img_as_float32',\n 'img_as_float64',\n 'img_as_float',\n 'img_as_int',\n 'img_as_uint',\n 'img_as_ubyte',\n 'img_as_bool',\n 'dtype_limits',\n 'view_as_blocks',\n 'view_as_windows',\n 'pad',\n 'crop',\n 'compare_images',\n 'montage',\n 'random_noise',\n 'regular_grid',\n 'regular_seeds',\n 'apply_parallel',\n 'invert',\n 'unique_rows',\n ]\n", "path": "skimage/util/__init__.py"}, {"content": "import warnings\nimport functools\nimport sys\nimport numpy as np\nimport types\nimport numbers\n\nfrom ..util import img_as_float\nfrom ._warnings import all_warnings, warn\n\n__all__ = ['deprecated', 'get_bound_method_class', 'all_warnings',\n 'safe_as_int', 'check_nD', 'check_shape_equality', 'warn']\n\n\nclass skimage_deprecation(Warning):\n \"\"\"Create our own deprecation class, since Python >= 2.7\n silences deprecations by default.\n\n \"\"\"\n pass\n\n\nclass deprecated(object):\n \"\"\"Decorator to mark deprecated functions with warning.\n\n Adapted from <http://wiki.python.org/moin/PythonDecoratorLibrary>.\n\n Parameters\n ----------\n alt_func : str\n If given, tell user what function to use instead.\n behavior : {'warn', 'raise'}\n Behavior during call to deprecated function: 'warn' = warn user that\n function is deprecated; 'raise' = raise error.\n removed_version : str\n The package version in which the deprecated function will be removed.\n \"\"\"\n\n def __init__(self, alt_func=None, behavior='warn', removed_version=None):\n self.alt_func = alt_func\n self.behavior = behavior\n self.removed_version = removed_version\n\n def __call__(self, func):\n\n alt_msg = ''\n if self.alt_func is not None:\n alt_msg = ' Use ``%s`` instead.' % self.alt_func\n rmv_msg = ''\n if self.removed_version is not None:\n rmv_msg = (' and will be removed in version %s' %\n self.removed_version)\n\n msg = ('Function ``%s`` is deprecated' % func.__name__ +\n rmv_msg + '.' 
+ alt_msg)\n\n @functools.wraps(func)\n def wrapped(*args, **kwargs):\n if self.behavior == 'warn':\n func_code = func.__code__\n warnings.simplefilter('always', skimage_deprecation)\n warnings.warn_explicit(msg,\n category=skimage_deprecation,\n filename=func_code.co_filename,\n lineno=func_code.co_firstlineno + 1)\n elif self.behavior == 'raise':\n raise skimage_deprecation(msg)\n return func(*args, **kwargs)\n\n # modify doc string to display deprecation warning\n doc = '**Deprecated function**.' + alt_msg\n if wrapped.__doc__ is None:\n wrapped.__doc__ = doc\n else:\n wrapped.__doc__ = doc + '\\n\\n ' + wrapped.__doc__\n\n return wrapped\n\n\ndef get_bound_method_class(m):\n \"\"\"Return the class for a bound method.\n\n \"\"\"\n return m.im_class if sys.version < '3' else m.__self__.__class__\n\n\ndef safe_as_int(val, atol=1e-3):\n \"\"\"\n Attempt to safely cast values to integer format.\n\n Parameters\n ----------\n val : scalar or iterable of scalars\n Number or container of numbers which are intended to be interpreted as\n integers, e.g., for indexing purposes, but which may not carry integer\n type.\n atol : float\n Absolute tolerance away from nearest integer to consider values in\n ``val`` functionally integers.\n\n Returns\n -------\n val_int : NumPy scalar or ndarray of dtype `np.int64`\n Returns the input value(s) coerced to dtype `np.int64` assuming all\n were within ``atol`` of the nearest integer.\n\n Notes\n -----\n This operation calculates ``val`` modulo 1, which returns the mantissa of\n all values. Then all mantissas greater than 0.5 are subtracted from one.\n Finally, the absolute tolerance from zero is calculated. If it is less\n than ``atol`` for all value(s) in ``val``, they are rounded and returned\n in an integer array. 
Or, if ``val`` was a scalar, a NumPy scalar type is\n returned.\n\n If any value(s) are outside the specified tolerance, an informative error\n is raised.\n\n Examples\n --------\n >>> safe_as_int(7.0)\n 7\n\n >>> safe_as_int([9, 4, 2.9999999999])\n array([9, 4, 3])\n\n >>> safe_as_int(53.1)\n Traceback (most recent call last):\n ...\n ValueError: Integer argument required but received 53.1, check inputs.\n\n >>> safe_as_int(53.01, atol=0.01)\n 53\n\n \"\"\"\n mod = np.asarray(val) % 1 # Extract mantissa\n\n # Check for and subtract any mod values > 0.5 from 1\n if mod.ndim == 0: # Scalar input, cannot be indexed\n if mod > 0.5:\n mod = 1 - mod\n else: # Iterable input, now ndarray\n mod[mod > 0.5] = 1 - mod[mod > 0.5] # Test on each side of nearest int\n\n try:\n np.testing.assert_allclose(mod, 0, atol=atol)\n except AssertionError:\n raise ValueError(\"Integer argument required but received \"\n \"{0}, check inputs.\".format(val))\n\n return np.round(val).astype(np.int64)\n\n\ndef check_shape_equality(im1, im2):\n \"\"\"Raise an error if the shape do not match.\"\"\"\n if not im1.shape == im2.shape:\n raise ValueError('Input images must have the same dimensions.')\n return\n\n\ndef check_nD(array, ndim, arg_name='image'):\n \"\"\"\n Verify an array meets the desired ndims and array isn't empty.\n\n Parameters\n ----------\n array : array-like\n Input array to be validated\n ndim : int or iterable of ints\n Allowable ndim or ndims for the array.\n arg_name : str, optional\n The name of the array in the original function.\n\n \"\"\"\n array = np.asanyarray(array)\n msg_incorrect_dim = \"The parameter `%s` must be a %s-dimensional array\"\n msg_empty_array = \"The parameter `%s` cannot be an empty array\"\n if isinstance(ndim, int):\n ndim = [ndim]\n if array.size == 0:\n raise ValueError(msg_empty_array % (arg_name))\n if not array.ndim in ndim:\n raise ValueError(msg_incorrect_dim % (arg_name, '-or-'.join([str(n) for n in ndim])))\n\n\ndef copy_func(f, name=None):\n \"\"\"Create a copy of a function.\n\n Parameters\n ----------\n f : function\n Function to copy.\n name : str, optional\n Name of new function.\n\n \"\"\"\n return types.FunctionType(f.__code__, f.__globals__, name or f.__name__,\n f.__defaults__, f.__closure__)\n\n\ndef check_random_state(seed):\n \"\"\"Turn seed into a `np.random.RandomState` instance.\n\n Parameters\n ----------\n seed : None, int or np.random.RandomState\n If `seed` is None, return the RandomState singleton used by `np.random`.\n If `seed` is an int, return a new RandomState instance seeded with `seed`.\n If `seed` is already a RandomState instance, return it.\n\n Raises\n ------\n ValueError\n If `seed` is of the wrong type.\n\n \"\"\"\n # Function originally from scikit-learn's module sklearn.utils.validation\n if seed is None or seed is np.random:\n return np.random.mtrand._rand\n if isinstance(seed, (numbers.Integral, np.integer)):\n return np.random.RandomState(seed)\n if isinstance(seed, np.random.RandomState):\n return seed\n raise ValueError('%r cannot be used to seed a numpy.random.RandomState'\n ' instance' % seed)\n\n\ndef convert_to_float(image, preserve_range):\n \"\"\"Convert input image to float image with the appropriate range.\n\n Parameters\n ----------\n image : ndarray\n Input image.\n preserve_range : bool\n Determines if the range of the image should be kept or transformed\n using img_as_float. 
Also see\n https://scikit-image.org/docs/dev/user_guide/data_types.html\n\n Notes:\n ------\n * Input images with `float32` data type are not upcast.\n\n Returns\n -------\n image : ndarray\n Transformed version of the input.\n\n \"\"\"\n if preserve_range:\n # Convert image to double only if it is not single or double\n # precision float\n if image.dtype.char not in 'df':\n image = image.astype(float)\n else:\n image = img_as_float(image)\n return image\n", "path": "skimage/_shared/utils.py"}], "after_files": [{"content": "import functools\nimport warnings\nimport numpy as np\nfrom .dtype import (img_as_float32, img_as_float64, img_as_float,\n img_as_int, img_as_uint, img_as_ubyte,\n img_as_bool, dtype_limits)\nfrom .shape import view_as_blocks, view_as_windows\nfrom .noise import random_noise\nfrom .apply_parallel import apply_parallel\n\nfrom .arraycrop import crop\nfrom .compare import compare_images\nfrom ._regular_grid import regular_grid, regular_seeds\nfrom .unique import unique_rows\nfrom ._invert import invert\nfrom ._montage import montage\n\n\[email protected](np.pad)\ndef pad(*args, **kwargs):\n warnings.warn(\"skimage.util.pad is deprecated and will be removed in \"\n \"version 0.19. Please use numpy.pad instead.\",\n FutureWarning, stacklevel=2)\n return np.pad(*args, **kwargs)\n\n\n__all__ = ['img_as_float32',\n 'img_as_float64',\n 'img_as_float',\n 'img_as_int',\n 'img_as_uint',\n 'img_as_ubyte',\n 'img_as_bool',\n 'dtype_limits',\n 'view_as_blocks',\n 'view_as_windows',\n 'pad',\n 'crop',\n 'compare_images',\n 'montage',\n 'random_noise',\n 'regular_grid',\n 'regular_seeds',\n 'apply_parallel',\n 'invert',\n 'unique_rows',\n ]\n", "path": "skimage/util/__init__.py"}, {"content": "import warnings\nimport functools\nimport sys\nimport numpy as np\nimport numbers\n\nfrom ..util import img_as_float\nfrom ._warnings import all_warnings, warn\n\n__all__ = ['deprecated', 'get_bound_method_class', 'all_warnings',\n 'safe_as_int', 'check_nD', 'check_shape_equality', 'warn']\n\n\nclass skimage_deprecation(Warning):\n \"\"\"Create our own deprecation class, since Python >= 2.7\n silences deprecations by default.\n\n \"\"\"\n pass\n\n\nclass deprecated(object):\n \"\"\"Decorator to mark deprecated functions with warning.\n\n Adapted from <http://wiki.python.org/moin/PythonDecoratorLibrary>.\n\n Parameters\n ----------\n alt_func : str\n If given, tell user what function to use instead.\n behavior : {'warn', 'raise'}\n Behavior during call to deprecated function: 'warn' = warn user that\n function is deprecated; 'raise' = raise error.\n removed_version : str\n The package version in which the deprecated function will be removed.\n \"\"\"\n\n def __init__(self, alt_func=None, behavior='warn', removed_version=None):\n self.alt_func = alt_func\n self.behavior = behavior\n self.removed_version = removed_version\n\n def __call__(self, func):\n\n alt_msg = ''\n if self.alt_func is not None:\n alt_msg = ' Use ``%s`` instead.' % self.alt_func\n rmv_msg = ''\n if self.removed_version is not None:\n rmv_msg = (' and will be removed in version %s' %\n self.removed_version)\n\n msg = ('Function ``%s`` is deprecated' % func.__name__ +\n rmv_msg + '.' 
+ alt_msg)\n\n @functools.wraps(func)\n def wrapped(*args, **kwargs):\n if self.behavior == 'warn':\n func_code = func.__code__\n warnings.simplefilter('always', skimage_deprecation)\n warnings.warn_explicit(msg,\n category=skimage_deprecation,\n filename=func_code.co_filename,\n lineno=func_code.co_firstlineno + 1)\n elif self.behavior == 'raise':\n raise skimage_deprecation(msg)\n return func(*args, **kwargs)\n\n # modify doc string to display deprecation warning\n doc = '**Deprecated function**.' + alt_msg\n if wrapped.__doc__ is None:\n wrapped.__doc__ = doc\n else:\n wrapped.__doc__ = doc + '\\n\\n ' + wrapped.__doc__\n\n return wrapped\n\n\ndef get_bound_method_class(m):\n \"\"\"Return the class for a bound method.\n\n \"\"\"\n return m.im_class if sys.version < '3' else m.__self__.__class__\n\n\ndef safe_as_int(val, atol=1e-3):\n \"\"\"\n Attempt to safely cast values to integer format.\n\n Parameters\n ----------\n val : scalar or iterable of scalars\n Number or container of numbers which are intended to be interpreted as\n integers, e.g., for indexing purposes, but which may not carry integer\n type.\n atol : float\n Absolute tolerance away from nearest integer to consider values in\n ``val`` functionally integers.\n\n Returns\n -------\n val_int : NumPy scalar or ndarray of dtype `np.int64`\n Returns the input value(s) coerced to dtype `np.int64` assuming all\n were within ``atol`` of the nearest integer.\n\n Notes\n -----\n This operation calculates ``val`` modulo 1, which returns the mantissa of\n all values. Then all mantissas greater than 0.5 are subtracted from one.\n Finally, the absolute tolerance from zero is calculated. If it is less\n than ``atol`` for all value(s) in ``val``, they are rounded and returned\n in an integer array. 
Or, if ``val`` was a scalar, a NumPy scalar type is\n returned.\n\n If any value(s) are outside the specified tolerance, an informative error\n is raised.\n\n Examples\n --------\n >>> safe_as_int(7.0)\n 7\n\n >>> safe_as_int([9, 4, 2.9999999999])\n array([9, 4, 3])\n\n >>> safe_as_int(53.1)\n Traceback (most recent call last):\n ...\n ValueError: Integer argument required but received 53.1, check inputs.\n\n >>> safe_as_int(53.01, atol=0.01)\n 53\n\n \"\"\"\n mod = np.asarray(val) % 1 # Extract mantissa\n\n # Check for and subtract any mod values > 0.5 from 1\n if mod.ndim == 0: # Scalar input, cannot be indexed\n if mod > 0.5:\n mod = 1 - mod\n else: # Iterable input, now ndarray\n mod[mod > 0.5] = 1 - mod[mod > 0.5] # Test on each side of nearest int\n\n try:\n np.testing.assert_allclose(mod, 0, atol=atol)\n except AssertionError:\n raise ValueError(\"Integer argument required but received \"\n \"{0}, check inputs.\".format(val))\n\n return np.round(val).astype(np.int64)\n\n\ndef check_shape_equality(im1, im2):\n \"\"\"Raise an error if the shape do not match.\"\"\"\n if not im1.shape == im2.shape:\n raise ValueError('Input images must have the same dimensions.')\n return\n\n\ndef check_nD(array, ndim, arg_name='image'):\n \"\"\"\n Verify an array meets the desired ndims and array isn't empty.\n\n Parameters\n ----------\n array : array-like\n Input array to be validated\n ndim : int or iterable of ints\n Allowable ndim or ndims for the array.\n arg_name : str, optional\n The name of the array in the original function.\n\n \"\"\"\n array = np.asanyarray(array)\n msg_incorrect_dim = \"The parameter `%s` must be a %s-dimensional array\"\n msg_empty_array = \"The parameter `%s` cannot be an empty array\"\n if isinstance(ndim, int):\n ndim = [ndim]\n if array.size == 0:\n raise ValueError(msg_empty_array % (arg_name))\n if not array.ndim in ndim:\n raise ValueError(msg_incorrect_dim % (arg_name, '-or-'.join([str(n) for n in ndim])))\n\n\ndef check_random_state(seed):\n \"\"\"Turn seed into a `np.random.RandomState` instance.\n\n Parameters\n ----------\n seed : None, int or np.random.RandomState\n If `seed` is None, return the RandomState singleton used by `np.random`.\n If `seed` is an int, return a new RandomState instance seeded with `seed`.\n If `seed` is already a RandomState instance, return it.\n\n Raises\n ------\n ValueError\n If `seed` is of the wrong type.\n\n \"\"\"\n # Function originally from scikit-learn's module sklearn.utils.validation\n if seed is None or seed is np.random:\n return np.random.mtrand._rand\n if isinstance(seed, (numbers.Integral, np.integer)):\n return np.random.RandomState(seed)\n if isinstance(seed, np.random.RandomState):\n return seed\n raise ValueError('%r cannot be used to seed a numpy.random.RandomState'\n ' instance' % seed)\n\n\ndef convert_to_float(image, preserve_range):\n \"\"\"Convert input image to float image with the appropriate range.\n\n Parameters\n ----------\n image : ndarray\n Input image.\n preserve_range : bool\n Determines if the range of the image should be kept or transformed\n using img_as_float. 
Also see\n https://scikit-image.org/docs/dev/user_guide/data_types.html\n\n Notes:\n ------\n * Input images with `float32` data type are not upcast.\n\n Returns\n -------\n image : ndarray\n Transformed version of the input.\n\n \"\"\"\n if preserve_range:\n # Convert image to double only if it is not single or double\n # precision float\n if image.dtype.char not in 'df':\n image = image.astype(float)\n else:\n image = img_as_float(image)\n return image\n", "path": "skimage/_shared/utils.py"}]} | 3,515 | 479 |
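The `<no docstring>` symptom in the report above comes from how the old `copy_func` helper rebuilt the function: `np.pad` is exposed through NumPy's `__array_function__` dispatcher, and its docstring is attached to the function object rather than stored in its code object, so `types.FunctionType(f.__code__, ...)` produces a copy whose reported signature is `(*args, **kwargs)` and whose `__doc__` is `None`, exactly as the IPython output shows. The golden diff keeps the metadata by decorating a thin wrapper with `functools.wraps(np.pad)` instead. A small standalone sketch of that pattern follows; the warning text is copied from the diff, and the snippet is an illustration rather than the exact library code.

```python
import functools
import warnings

import numpy as np


@functools.wraps(np.pad)  # copies __doc__, __name__, __module__ and sets __wrapped__ from np.pad
def pad(*args, **kwargs):
    warnings.warn("skimage.util.pad is deprecated and will be removed in "
                  "version 0.19. Please use numpy.pad instead.",
                  FutureWarning, stacklevel=2)
    return np.pad(*args, **kwargs)


print(pad.__doc__ is not None)  # True: help(pad) now shows numpy.pad's docstring
print(pad.__name__)             # "pad"
```

Because `__wrapped__` points back at `np.pad`, `inspect.signature(pad)` also resolves to the real parameter list instead of `(*args, **kwargs)`.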
gh_patches_debug_7066 | rasdani/github-patches | git_diff | keras-team__keras-nlp-1211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AutoGraph error upon model.fit() in GPT
**Describe the bug**
<details><summary>StackTrace</summary>
<p>
OperatorNotAllowedInGraphError Traceback (most recent call last)
[<ipython-input-10-6ef525f3a2b6>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model.fit(train_ds, validation_data=val_ds, verbose=2, epochs=EPOCHS)
1 frames
[/usr/local/lib/python3.10/dist-packages/keras_core/src/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)
121 # To get the full stack trace, call:
122 # `keras_core.config.disable_traceback_filtering()`
--> 123 raise e.with_traceback(filtered_tb) from None
124 finally:
125 del filtered_tb
[/usr/local/lib/python3.10/dist-packages/keras_nlp/src/metrics/perplexity.py](https://localhost:8080/#) in result(self)
161
162 def result(self):
--> 163 if self._number_of_samples == 0:
164 return 0.0
165 perplexity_score = ops.exp(
OperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function.
</p>
</details>
**To Reproduce**
[Colab Notebook](https://colab.research.google.com/drive/1779-TFbLUbVapOl8BQS1aLJF2LlLT2ZS?usp=sharing)
**Would you like to help us fix it?**
Yes
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras_nlp/metrics/perplexity.py`
Content:
```
1 # Copyright 2023 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Perplexity metric."""
16
17 from keras_nlp.api_export import keras_nlp_export
18 from keras_nlp.backend import keras
19 from keras_nlp.backend import ops
20 from keras_nlp.utils.tensor_utils import is_floating_dtype
21
22
23 @keras_nlp_export("keras_nlp.metrics.Perplexity")
24 class Perplexity(keras.metrics.Metric):
25 """Perplexity metric.
26
27 This class implements the perplexity metric. In short, this class calculates
28 the cross entropy loss and takes its exponent.
29 Note: This implementation is not suitable for fixed-size windows.
30
31 Args:
32 from_logits: bool. If True, `y_pred` (input to `update_state()`) should
33 be the logits as returned by the model. Otherwise, `y_pred` is a
34 tensor of probabilities.
35 mask_token_id: int. ID of the token to be masked. If provided, the mask
36 is computed for this class. Note that if this field is provided, and
37 if the `sample_weight` field in `update_state()` is also provided,
38 we will compute the final `sample_weight` as the element-wise
39 product of the mask and the `sample_weight`.
40 dtype: string or tf.dtypes.Dtype. Precision of metric computation. If
41 not specified, it defaults to `"float32"`.
42 name: string. Name of the metric instance.
43 **kwargs: Other keyword arguments.
44
45 Examples:
46
47 1. Calculate perplexity by calling update_state() and result().
48 1.1. `sample_weight`, and `mask_token_id` are not provided.
49 >>> np.random.seed(42)
50 >>> perplexity = keras_nlp.metrics.Perplexity(name="perplexity")
51 >>> target = np.random.randint(10, size=[2, 5])
52 >>> logits = np.random.uniform(size=(2, 5, 10))
53 >>> perplexity.update_state(target, logits)
54 >>> perplexity.result()
55 <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>
56
57 1.2. `sample_weight` specified (masking token with ID 0).
58 >>> np.random.seed(42)
59 >>> perplexity = keras_nlp.metrics.Perplexity(name="perplexity")
60 >>> target = np.random.randint(10, size=[2, 5])
61 >>> logits = np.random.uniform(size=(2, 5, 10))
62 >>> sample_weight = (target != 0).astype("float32")
63 >>> perplexity.update_state(target, logits, sample_weight)
64 >>> perplexity.result()
65 <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>
66
67 2. Call perplexity directly.
68 >>> np.random.seed(42)
69 >>> perplexity = keras_nlp.metrics.Perplexity(name="perplexity")
70 >>> target = np.random.randint(10, size=[2, 5])
71 >>> logits = np.random.uniform(size=(2, 5, 10))
72 >>> perplexity(target, logits)
73 <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>
74
75 3. Provide the padding token ID and let the class compute the mask on its
76 own.
77 >>> np.random.seed(42)
78 >>> perplexity = keras_nlp.metrics.Perplexity(mask_token_id=0)
79 >>> target = np.random.randint(10, size=[2, 5])
80 >>> logits = np.random.uniform(size=(2, 5, 10))
81 >>> perplexity(target, logits)
82 <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>
83 """
84
85 def __init__(
86 self,
87 from_logits=False,
88 mask_token_id=None,
89 dtype="float32",
90 name="perplexity",
91 **kwargs,
92 ):
93 if not is_floating_dtype(dtype):
94 raise ValueError(
95 "`dtype` must be a floating point type. "
96 f"Received: dtype={dtype}"
97 )
98
99 super().__init__(name=name, dtype=dtype, **kwargs)
100
101 self._crossentropy = keras.losses.SparseCategoricalCrossentropy(
102 from_logits=from_logits, reduction="sum"
103 )
104
105 self.from_logits = from_logits
106 self.mask_token_id = mask_token_id
107
108 self._aggregate_crossentropy = self.add_weight(
109 shape=(),
110 initializer="zeros",
111 dtype=self.dtype,
112 name="aggregate_crossentropy",
113 )
114 self._number_of_samples = self.add_weight(
115 shape=(),
116 initializer="zeros",
117 dtype=self.dtype,
118 name="number_of_samples",
119 )
120
121 def update_state(self, y_true, y_pred, sample_weight=None):
122 # y_true shape: (batch_size, seq_len)
123 # y_pred shape: (batch_size, seq_len, vocab_size)
124 y_true = ops.cast(y_true, self.dtype)
125 y_pred = ops.cast(y_pred, self.dtype)
126
127 if sample_weight is not None:
128 sample_weight = ops.cast(sample_weight, self.dtype)
129
130 batch_size = ops.cast(ops.shape(y_true)[0], self.dtype)
131
132 if self.mask_token_id is not None:
133 mask = ops.cast(
134 ops.logical_not(ops.equal(y_true, self.mask_token_id)),
135 self.dtype,
136 )
137 if sample_weight is None:
138 sample_weight = mask
139 else:
140 sample_weight = ops.multiply(mask, sample_weight)
141
142 # Calculate the Cross Entropy Loss.
143 crossentropy_value = ops.cast(
144 self._crossentropy(y_true, y_pred, sample_weight=sample_weight),
145 self.dtype,
146 ) # scalar
147
148 # Divide the loss by the number of non-masked tokens
149 if sample_weight is not None:
150 crossentropy_value = crossentropy_value / ops.sum(
151 sample_weight
152 ) # scalar
153 else:
154 crossentropy_value = crossentropy_value / (
155 ops.cast(ops.shape(y_true)[0], self.dtype)
156 * ops.cast(ops.shape(y_true)[1], self.dtype)
157 ) # scalar
158
159 self._aggregate_crossentropy.assign_add(batch_size * crossentropy_value)
160 self._number_of_samples.assign_add(batch_size)
161
162 def result(self):
163 if self._number_of_samples == 0:
164 return 0.0
165 perplexity_score = ops.exp(
166 self._aggregate_crossentropy / self._number_of_samples
167 )
168 return perplexity_score
169
170 def reset_state(self):
171 self._aggregate_crossentropy.assign(0.0)
172 self._number_of_samples.assign(0.0)
173
174 def get_config(self):
175 config = super().get_config()
176 config.update(
177 {
178 "from_logits": self.from_logits,
179 "mask_token_id": self.mask_token_id,
180 }
181 )
182 return config
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras_nlp/metrics/perplexity.py b/keras_nlp/metrics/perplexity.py
--- a/keras_nlp/metrics/perplexity.py
+++ b/keras_nlp/metrics/perplexity.py
@@ -160,10 +160,10 @@
self._number_of_samples.assign_add(batch_size)
def result(self):
- if self._number_of_samples == 0:
- return 0.0
- perplexity_score = ops.exp(
- self._aggregate_crossentropy / self._number_of_samples
+ perplexity_score = ops.where(
+ ops.equal(self._number_of_samples, 0),
+ 0,
+ ops.exp(self._aggregate_crossentropy / self._number_of_samples),
)
return perplexity_score
| {"golden_diff": "diff --git a/keras_nlp/metrics/perplexity.py b/keras_nlp/metrics/perplexity.py\n--- a/keras_nlp/metrics/perplexity.py\n+++ b/keras_nlp/metrics/perplexity.py\n@@ -160,10 +160,10 @@\n self._number_of_samples.assign_add(batch_size)\n \n def result(self):\n- if self._number_of_samples == 0:\n- return 0.0\n- perplexity_score = ops.exp(\n- self._aggregate_crossentropy / self._number_of_samples\n+ perplexity_score = ops.where(\n+ ops.equal(self._number_of_samples, 0),\n+ 0,\n+ ops.exp(self._aggregate_crossentropy / self._number_of_samples),\n )\n return perplexity_score\n", "issue": "AutoGraph error upon model.fit() in GPT \n**Describe the bug**\r\n\r\n<details><summary>StackTrace</summary>\r\n<p>\r\n\r\nOperatorNotAllowedInGraphError Traceback (most recent call last)\r\n[<ipython-input-10-6ef525f3a2b6>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 model.fit(train_ds, validation_data=val_ds, verbose=2, epochs=EPOCHS)\r\n\r\n1 frames\r\n[/usr/local/lib/python3.10/dist-packages/keras_core/src/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)\r\n 121 # To get the full stack trace, call:\r\n 122 # `keras_core.config.disable_traceback_filtering()`\r\n--> 123 raise e.with_traceback(filtered_tb) from None\r\n 124 finally:\r\n 125 del filtered_tb\r\n\r\n[/usr/local/lib/python3.10/dist-packages/keras_nlp/src/metrics/perplexity.py](https://localhost:8080/#) in result(self)\r\n 161 \r\n 162 def result(self):\r\n--> 163 if self._number_of_samples == 0:\r\n 164 return 0.0\r\n 165 perplexity_score = ops.exp(\r\n\r\nOperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function.\r\n\r\n</p>\r\n</details> \r\n\r\n**To Reproduce**\r\n[Colab Notebook](https://colab.research.google.com/drive/1779-TFbLUbVapOl8BQS1aLJF2LlLT2ZS?usp=sharing)\r\n\r\n\r\n\r\n**Would you like to help us fix it?**\r\nYes\n", "before_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Perplexity metric.\"\"\"\n\nfrom keras_nlp.api_export import keras_nlp_export\nfrom keras_nlp.backend import keras\nfrom keras_nlp.backend import ops\nfrom keras_nlp.utils.tensor_utils import is_floating_dtype\n\n\n@keras_nlp_export(\"keras_nlp.metrics.Perplexity\")\nclass Perplexity(keras.metrics.Metric):\n \"\"\"Perplexity metric.\n\n This class implements the perplexity metric. In short, this class calculates\n the cross entropy loss and takes its exponent.\n Note: This implementation is not suitable for fixed-size windows.\n\n Args:\n from_logits: bool. If True, `y_pred` (input to `update_state()`) should\n be the logits as returned by the model. Otherwise, `y_pred` is a\n tensor of probabilities.\n mask_token_id: int. ID of the token to be masked. If provided, the mask\n is computed for this class. 
Note that if this field is provided, and\n if the `sample_weight` field in `update_state()` is also provided,\n we will compute the final `sample_weight` as the element-wise\n product of the mask and the `sample_weight`.\n dtype: string or tf.dtypes.Dtype. Precision of metric computation. If\n not specified, it defaults to `\"float32\"`.\n name: string. Name of the metric instance.\n **kwargs: Other keyword arguments.\n\n Examples:\n\n 1. Calculate perplexity by calling update_state() and result().\n 1.1. `sample_weight`, and `mask_token_id` are not provided.\n >>> np.random.seed(42)\n >>> perplexity = keras_nlp.metrics.Perplexity(name=\"perplexity\")\n >>> target = np.random.randint(10, size=[2, 5])\n >>> logits = np.random.uniform(size=(2, 5, 10))\n >>> perplexity.update_state(target, logits)\n >>> perplexity.result()\n <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>\n\n 1.2. `sample_weight` specified (masking token with ID 0).\n >>> np.random.seed(42)\n >>> perplexity = keras_nlp.metrics.Perplexity(name=\"perplexity\")\n >>> target = np.random.randint(10, size=[2, 5])\n >>> logits = np.random.uniform(size=(2, 5, 10))\n >>> sample_weight = (target != 0).astype(\"float32\")\n >>> perplexity.update_state(target, logits, sample_weight)\n >>> perplexity.result()\n <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>\n\n 2. Call perplexity directly.\n >>> np.random.seed(42)\n >>> perplexity = keras_nlp.metrics.Perplexity(name=\"perplexity\")\n >>> target = np.random.randint(10, size=[2, 5])\n >>> logits = np.random.uniform(size=(2, 5, 10))\n >>> perplexity(target, logits)\n <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>\n\n 3. Provide the padding token ID and let the class compute the mask on its\n own.\n >>> np.random.seed(42)\n >>> perplexity = keras_nlp.metrics.Perplexity(mask_token_id=0)\n >>> target = np.random.randint(10, size=[2, 5])\n >>> logits = np.random.uniform(size=(2, 5, 10))\n >>> perplexity(target, logits)\n <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>\n \"\"\"\n\n def __init__(\n self,\n from_logits=False,\n mask_token_id=None,\n dtype=\"float32\",\n name=\"perplexity\",\n **kwargs,\n ):\n if not is_floating_dtype(dtype):\n raise ValueError(\n \"`dtype` must be a floating point type. 
\"\n f\"Received: dtype={dtype}\"\n )\n\n super().__init__(name=name, dtype=dtype, **kwargs)\n\n self._crossentropy = keras.losses.SparseCategoricalCrossentropy(\n from_logits=from_logits, reduction=\"sum\"\n )\n\n self.from_logits = from_logits\n self.mask_token_id = mask_token_id\n\n self._aggregate_crossentropy = self.add_weight(\n shape=(),\n initializer=\"zeros\",\n dtype=self.dtype,\n name=\"aggregate_crossentropy\",\n )\n self._number_of_samples = self.add_weight(\n shape=(),\n initializer=\"zeros\",\n dtype=self.dtype,\n name=\"number_of_samples\",\n )\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n # y_true shape: (batch_size, seq_len)\n # y_pred shape: (batch_size, seq_len, vocab_size)\n y_true = ops.cast(y_true, self.dtype)\n y_pred = ops.cast(y_pred, self.dtype)\n\n if sample_weight is not None:\n sample_weight = ops.cast(sample_weight, self.dtype)\n\n batch_size = ops.cast(ops.shape(y_true)[0], self.dtype)\n\n if self.mask_token_id is not None:\n mask = ops.cast(\n ops.logical_not(ops.equal(y_true, self.mask_token_id)),\n self.dtype,\n )\n if sample_weight is None:\n sample_weight = mask\n else:\n sample_weight = ops.multiply(mask, sample_weight)\n\n # Calculate the Cross Entropy Loss.\n crossentropy_value = ops.cast(\n self._crossentropy(y_true, y_pred, sample_weight=sample_weight),\n self.dtype,\n ) # scalar\n\n # Divide the loss by the number of non-masked tokens\n if sample_weight is not None:\n crossentropy_value = crossentropy_value / ops.sum(\n sample_weight\n ) # scalar\n else:\n crossentropy_value = crossentropy_value / (\n ops.cast(ops.shape(y_true)[0], self.dtype)\n * ops.cast(ops.shape(y_true)[1], self.dtype)\n ) # scalar\n\n self._aggregate_crossentropy.assign_add(batch_size * crossentropy_value)\n self._number_of_samples.assign_add(batch_size)\n\n def result(self):\n if self._number_of_samples == 0:\n return 0.0\n perplexity_score = ops.exp(\n self._aggregate_crossentropy / self._number_of_samples\n )\n return perplexity_score\n\n def reset_state(self):\n self._aggregate_crossentropy.assign(0.0)\n self._number_of_samples.assign(0.0)\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"from_logits\": self.from_logits,\n \"mask_token_id\": self.mask_token_id,\n }\n )\n return config\n", "path": "keras_nlp/metrics/perplexity.py"}], "after_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Perplexity metric.\"\"\"\n\nfrom keras_nlp.api_export import keras_nlp_export\nfrom keras_nlp.backend import keras\nfrom keras_nlp.backend import ops\nfrom keras_nlp.utils.tensor_utils import is_floating_dtype\n\n\n@keras_nlp_export(\"keras_nlp.metrics.Perplexity\")\nclass Perplexity(keras.metrics.Metric):\n \"\"\"Perplexity metric.\n\n This class implements the perplexity metric. In short, this class calculates\n the cross entropy loss and takes its exponent.\n Note: This implementation is not suitable for fixed-size windows.\n\n Args:\n from_logits: bool. 
If True, `y_pred` (input to `update_state()`) should\n be the logits as returned by the model. Otherwise, `y_pred` is a\n tensor of probabilities.\n mask_token_id: int. ID of the token to be masked. If provided, the mask\n is computed for this class. Note that if this field is provided, and\n if the `sample_weight` field in `update_state()` is also provided,\n we will compute the final `sample_weight` as the element-wise\n product of the mask and the `sample_weight`.\n dtype: string or tf.dtypes.Dtype. Precision of metric computation. If\n not specified, it defaults to `\"float32\"`.\n name: string. Name of the metric instance.\n **kwargs: Other keyword arguments.\n\n Examples:\n\n 1. Calculate perplexity by calling update_state() and result().\n 1.1. `sample_weight`, and `mask_token_id` are not provided.\n >>> np.random.seed(42)\n >>> perplexity = keras_nlp.metrics.Perplexity(name=\"perplexity\")\n >>> target = np.random.randint(10, size=[2, 5])\n >>> logits = np.random.uniform(size=(2, 5, 10))\n >>> perplexity.update_state(target, logits)\n >>> perplexity.result()\n <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>\n\n 1.2. `sample_weight` specified (masking token with ID 0).\n >>> np.random.seed(42)\n >>> perplexity = keras_nlp.metrics.Perplexity(name=\"perplexity\")\n >>> target = np.random.randint(10, size=[2, 5])\n >>> logits = np.random.uniform(size=(2, 5, 10))\n >>> sample_weight = (target != 0).astype(\"float32\")\n >>> perplexity.update_state(target, logits, sample_weight)\n >>> perplexity.result()\n <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>\n\n 2. Call perplexity directly.\n >>> np.random.seed(42)\n >>> perplexity = keras_nlp.metrics.Perplexity(name=\"perplexity\")\n >>> target = np.random.randint(10, size=[2, 5])\n >>> logits = np.random.uniform(size=(2, 5, 10))\n >>> perplexity(target, logits)\n <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>\n\n 3. Provide the padding token ID and let the class compute the mask on its\n own.\n >>> np.random.seed(42)\n >>> perplexity = keras_nlp.metrics.Perplexity(mask_token_id=0)\n >>> target = np.random.randint(10, size=[2, 5])\n >>> logits = np.random.uniform(size=(2, 5, 10))\n >>> perplexity(target, logits)\n <tf.Tensor: shape=(), dtype=float32, numpy=14.352535>\n \"\"\"\n\n def __init__(\n self,\n from_logits=False,\n mask_token_id=None,\n dtype=\"float32\",\n name=\"perplexity\",\n **kwargs,\n ):\n if not is_floating_dtype(dtype):\n raise ValueError(\n \"`dtype` must be a floating point type. 
\"\n f\"Received: dtype={dtype}\"\n )\n\n super().__init__(name=name, dtype=dtype, **kwargs)\n\n self._crossentropy = keras.losses.SparseCategoricalCrossentropy(\n from_logits=from_logits, reduction=\"sum\"\n )\n\n self.from_logits = from_logits\n self.mask_token_id = mask_token_id\n\n self._aggregate_crossentropy = self.add_weight(\n shape=(),\n initializer=\"zeros\",\n dtype=self.dtype,\n name=\"aggregate_crossentropy\",\n )\n self._number_of_samples = self.add_weight(\n shape=(),\n initializer=\"zeros\",\n dtype=self.dtype,\n name=\"number_of_samples\",\n )\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n # y_true shape: (batch_size, seq_len)\n # y_pred shape: (batch_size, seq_len, vocab_size)\n y_true = ops.cast(y_true, self.dtype)\n y_pred = ops.cast(y_pred, self.dtype)\n\n if sample_weight is not None:\n sample_weight = ops.cast(sample_weight, self.dtype)\n\n batch_size = ops.cast(ops.shape(y_true)[0], self.dtype)\n\n if self.mask_token_id is not None:\n mask = ops.cast(\n ops.logical_not(ops.equal(y_true, self.mask_token_id)),\n self.dtype,\n )\n if sample_weight is None:\n sample_weight = mask\n else:\n sample_weight = ops.multiply(mask, sample_weight)\n\n # Calculate the Cross Entropy Loss.\n crossentropy_value = ops.cast(\n self._crossentropy(y_true, y_pred, sample_weight=sample_weight),\n self.dtype,\n ) # scalar\n\n # Divide the loss by the number of non-masked tokens\n if sample_weight is not None:\n crossentropy_value = crossentropy_value / ops.sum(\n sample_weight\n ) # scalar\n else:\n crossentropy_value = crossentropy_value / (\n ops.cast(ops.shape(y_true)[0], self.dtype)\n * ops.cast(ops.shape(y_true)[1], self.dtype)\n ) # scalar\n\n self._aggregate_crossentropy.assign_add(batch_size * crossentropy_value)\n self._number_of_samples.assign_add(batch_size)\n\n def result(self):\n perplexity_score = ops.where(\n ops.equal(self._number_of_samples, 0),\n 0,\n ops.exp(self._aggregate_crossentropy / self._number_of_samples),\n )\n return perplexity_score\n\n def reset_state(self):\n self._aggregate_crossentropy.assign(0.0)\n self._number_of_samples.assign(0.0)\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"from_logits\": self.from_logits,\n \"mask_token_id\": self.mask_token_id,\n }\n )\n return config\n", "path": "keras_nlp/metrics/perplexity.py"}]} | 2,769 | 181 |
gh_patches_debug_27253 | rasdani/github-patches | git_diff | NVIDIA-Merlin__NVTabular-1580 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Groupby aggregations on non float64 columns throws an error
**Describe the bug**
When running `Groupby` on columns with a dtype other than `float64`, I receive the following error:
```
TypeError: Dtype discrepancy detected for column measurement_std: operator Groupby reported dtype `float32` but returned dtype `float64`.
```
**Steps/Code to reproduce bug**

Here is code that can be copied over
```
import cudf
import nvtabular as nvt
gdf = cudf.DataFrame(data={'cat': ['a', 'a', 'b'], 'timestamp': [1, 2, 1], 'measurement': [0.1, 0.2, 0.5]})
gdf.measurement = gdf.measurement.astype('float32')
out = ['measurement', 'cat'] >> nvt.ops.Groupby('cat', aggs=['std'])
ds = nvt.Dataset(gdf)
wf = nvt.Workflow(out)
res = wf.fit_transform(ds)
res.compute()
```
**Expected behavior**
The transformation to work, results to be returned without error.
**Environment details (please complete the following information):**
`merlin-pytorch-training:22.05` image with all Merlin repos updated and pip installed today
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvtabular/ops/groupby.py`
Content:
```
1 #
2 # Copyright (c) 2021, NVIDIA CORPORATION.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 import numpy
16 from dask.dataframe.utils import meta_nonempty
17
18 from merlin.core.dispatch import DataFrameType, annotate
19 from merlin.schema import Schema
20
21 from .operator import ColumnSelector, Operator
22
23
24 class Groupby(Operator):
25 """Groupby Transformation
26
27 Locally transform each partition of a Dataset with one or
28 more groupby aggregations.
29
30 WARNING: This transformation does NOT move data between
31 partitions. Please make sure that the target Dataset object
32 is already shuffled by ``groupby_cols``, otherwise the
33 output may be incorrect. See: ``Dataset.shuffle_by_keys``.
34
35 Example usage::
36
37 groupby_cols = ['user_id', 'session_id']
38 dataset = dataset.shuffle_by_keys(keys=groupby_cols)
39
40 groupby_features = [
41 'user_id', 'session_id', 'month', 'prod_id',
42 ] >> ops.Groupby(
43 groupby_cols=groupby_cols,
44 sort_cols=['month'],
45 aggs={
46 'prod_id': 'list',
47 'month': ['first', 'last'],
48 },
49 )
50 processor = nvtabular.Workflow(groupby_features)
51
52 workflow.fit(dataset)
53 dataset_transformed = workflow.transform(dataset)
54
55 Parameters
56 -----------
57 groupby_cols : str or list of str
58 The column names to be used as groupby keys.
59 WARNING: Ensure the dataset was partitioned by those
60 groupby keys (see above for an example).
61 sort_cols : str or list of str
62 Columns to be used to sort each partition before
63 groupby aggregation is performed. If this argument
64 is not specified, the results will not be sorted.
65 aggs : dict, list or str
66 Groupby aggregations to perform. Supported list-based
67 aggregations include "list", "first" & "last". Most
68 conventional aggregations supported by Pandas/cuDF are
69 also allowed (e.g. "sum", "count", "max", "mean", etc.).
70 name_sep : str
71 String separator to use for new column names.
72 """
73
74 def __init__(
75 self, groupby_cols=None, sort_cols=None, aggs="list", name_sep="_", ascending=True
76 ):
77 self.groupby_cols = groupby_cols
78 self.sort_cols = sort_cols or []
79 if isinstance(self.groupby_cols, str):
80 self.groupby_cols = [self.groupby_cols]
81 if isinstance(self.sort_cols, str):
82 self.sort_cols = [self.sort_cols]
83 self.ascending = ascending
84
85 # Split aggregations into "conventional" aggregations
86 # and "list-based" aggregations. After this block,
87 # we will have a dictionary for each of these cases.
88 # We use the "__all__" key to specify aggregations
89 # that will be performed on all (non-key) columns.
90 self.list_aggs, self.conv_aggs = {}, {}
91 if isinstance(aggs, str):
92 aggs = {"__all__": [aggs]}
93 elif isinstance(aggs, list):
94 aggs = {"__all__": aggs}
95 for col, v in aggs.items():
96 _aggs = v if isinstance(v, list) else [v]
97 _conv_aggs, _list_aggs = set(), set()
98 for _agg in _aggs:
99 if is_list_agg(_agg):
100 _list_aggs.add("list" if _agg == list else _agg)
101 _conv_aggs.add(list)
102 else:
103 _conv_aggs.add(_agg)
104 if _conv_aggs:
105 self.conv_aggs[col] = list(_conv_aggs)
106 if _list_aggs:
107 self.list_aggs[col] = list(_list_aggs)
108
109 self.name_sep = name_sep
110 super().__init__()
111
112 @annotate("Groupby_op", color="darkgreen", domain="nvt_python")
113 def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType:
114 # Sort if necessary
115 if self.sort_cols:
116 df = df.sort_values(self.sort_cols, ascending=self.ascending, ignore_index=True)
117
118 # List aggregations do not work with empty data.
119 # Use synthetic metadata to predict output columns.
120 empty_df = not len(df)
121
122 _df = meta_nonempty(df) if empty_df else df
123
124 # Get "complete" aggregation dicts
125 _list_aggs, _conv_aggs = _get_agg_dicts(
126 self.groupby_cols, self.list_aggs, self.conv_aggs, col_selector
127 )
128
129 # Apply aggregations
130 new_df = _apply_aggs(
131 _df,
132 self.groupby_cols,
133 _list_aggs,
134 _conv_aggs,
135 name_sep=self.name_sep,
136 ascending=self.ascending,
137 )
138
139 if empty_df:
140 return new_df.iloc[:0]
141 return new_df
142
143 transform.__doc__ = Operator.transform.__doc__
144
145 def compute_output_schema(
146 self, input_schema: Schema, col_selector: ColumnSelector, prev_output_schema: Schema = None
147 ) -> Schema:
148 if not col_selector and hasattr(self, "target"):
149 col_selector = (
150 ColumnSelector(self.target) if isinstance(self.target, list) else self.target
151 )
152 return super().compute_output_schema(input_schema, col_selector, prev_output_schema)
153
154 def column_mapping(self, col_selector):
155 column_mapping = {}
156
157 for groupby_col in self.groupby_cols:
158 if groupby_col in col_selector.names:
159 column_mapping[groupby_col] = [groupby_col]
160
161 _list_aggs, _conv_aggs = _get_agg_dicts(
162 self.groupby_cols, self.list_aggs, self.conv_aggs, col_selector
163 )
164
165 for input_col_name, aggs in _list_aggs.items():
166 output_col_names = _columns_out_from_aggs(
167 {input_col_name: aggs}, name_sep=self.name_sep
168 )
169 for output_col_name in output_col_names:
170 column_mapping[output_col_name] = [input_col_name]
171
172 for input_col_name, aggs in _conv_aggs.items():
173 output_col_names = _columns_out_from_aggs(
174 {input_col_name: aggs}, name_sep=self.name_sep
175 )
176 for output_col_name in output_col_names:
177 column_mapping[output_col_name] = [input_col_name]
178
179 return column_mapping
180
181 def _compute_dtype(self, col_schema, input_schema):
182 col_schema = super()._compute_dtype(col_schema, input_schema)
183
184 dtype = col_schema.dtype
185 is_list = col_schema.is_list
186
187 dtypes = {"count": numpy.int32, "mean": numpy.float32}
188
189 is_lists = {"list": True}
190
191 for col_name in input_schema.column_names:
192 combined_aggs = _aggs_for_column(col_name, self.conv_aggs)
193 combined_aggs += _aggs_for_column(col_name, self.list_aggs)
194 for agg in combined_aggs:
195 if col_schema.name.endswith(f"{self.name_sep}{agg}"):
196 dtype = dtypes.get(agg, dtype)
197 is_list = is_lists.get(agg, is_list)
198 break
199
200 return col_schema.with_dtype(dtype, is_list=is_list, is_ragged=is_list)
201
202
203 def _aggs_for_column(col_name, agg_dict):
204 return agg_dict.get(col_name, []) + agg_dict.get("__all__", [])
205
206
207 def _columns_out_from_aggs(aggs, name_sep="_"):
208 # Helper function for `output_column_names`
209 _agg_cols = []
210 for k, v in aggs.items():
211 for _v in v:
212 if isinstance(_v, str):
213 _agg_cols.append(name_sep.join([k, _v]))
214 return _agg_cols
215
216
217 def _apply_aggs(_df, groupby_cols, _list_aggs, _conv_aggs, name_sep="_", ascending=True):
218
219 # Apply conventional aggs
220 _columns = list(set(groupby_cols) | set(_conv_aggs) | set(_list_aggs))
221 df = _df[_columns].groupby(groupby_cols).agg(_conv_aggs).reset_index()
222
223 df.columns = [
224 name_sep.join([n for n in name if n != ""]) for name in df.columns.to_flat_index()
225 ]
226
227 # Handle custom aggs (e.g. "first" and "last")
228 for col, aggs in _list_aggs.items():
229 for _agg in aggs:
230 if is_list_agg(_agg, custom=True):
231 df[f"{col}{name_sep}{_agg}"] = _first_or_last(
232 df[f"{col}{name_sep}list"], _agg, ascending=ascending
233 )
234 if "list" not in aggs:
235 df.drop(columns=[col + f"{name_sep}list"], inplace=True)
236
237 for col in df.columns:
238 if col.endswith(f"{name_sep}count"):
239 df[col] = df[col].astype(numpy.int32)
240 elif col.endswith(f"{name_sep}mean"):
241 df[col] = df[col].astype(numpy.float32)
242
243 return df
244
245
246 def _get_agg_dicts(groupby_cols, list_aggs, conv_aggs, columns):
247 # Get updated aggregation dicts. This should map "__all__"
248 # to specific columns, and remove elements that are not
249 # in `columns`.
250 _allowed_cols = [c for c in columns.names if c not in groupby_cols]
251 _list_aggs = _ensure_agg_dict(list_aggs, _allowed_cols)
252 _conv_aggs = _ensure_agg_dict(conv_aggs, _allowed_cols)
253 return _list_aggs, _conv_aggs
254
255
256 def _ensure_agg_dict(_aggs, _allowed_cols):
257 # Make sure aggregation dict has legal keys
258 if "__all__" in _aggs:
259 return {col: _aggs["__all__"] for col in _allowed_cols}
260 else:
261 return {k: v for k, v in _aggs.items() if k in _allowed_cols}
262
263
264 def is_list_agg(agg, custom=False):
265 # check if `agg` is a supported list aggregation
266 if custom:
267 return agg in ("first", "last")
268 else:
269 return agg in ("list", list, "first", "last")
270
271
272 def _first_or_last(x, kind, ascending=True):
273 # Redirect to _first or _last
274 if kind == "first" and ascending:
275 return _first(x)
276 elif kind == "last" and not ascending:
277 return _first(x)
278 else:
279 return _last(x)
280
281
282 def _first(x):
283 # Convert each element of a list column to be the first
284 # item in the list
285 if hasattr(x, "list"):
286 # cuDF-specific behavior
287 offsets = x.list._column.offsets
288 elements = x.list._column.elements
289 return elements[offsets[:-1]]
290 else:
291 # cpu/pandas
292 return x.apply(lambda y: y[0])
293
294
295 def _last(x):
296 # Convert each element of a list column to be the last
297 # item in the list
298 if hasattr(x, "list"):
299 # cuDF-specific behavior
300 offsets = x.list._column.offsets
301 elements = x.list._column.elements
302 return elements[offsets[1:].values - 1]
303 else:
304 # cpu/pandas
305 return x.apply(lambda y: y[-1])
306
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nvtabular/ops/groupby.py b/nvtabular/ops/groupby.py
--- a/nvtabular/ops/groupby.py
+++ b/nvtabular/ops/groupby.py
@@ -12,6 +12,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+import re
+
import numpy
from dask.dataframe.utils import meta_nonempty
@@ -184,7 +186,14 @@
dtype = col_schema.dtype
is_list = col_schema.is_list
- dtypes = {"count": numpy.int32, "mean": numpy.float32}
+ dtypes = {
+ "count": numpy.int32,
+ "nunique": numpy.int32,
+ "mean": numpy.float32,
+ "var": numpy.float32,
+ "std": numpy.float32,
+ "median": numpy.float32,
+ }
is_lists = {"list": True}
@@ -235,9 +244,9 @@
df.drop(columns=[col + f"{name_sep}list"], inplace=True)
for col in df.columns:
- if col.endswith(f"{name_sep}count"):
+ if re.search(f"{name_sep}(count|nunique)", col):
df[col] = df[col].astype(numpy.int32)
- elif col.endswith(f"{name_sep}mean"):
+ elif re.search(f"{name_sep}(mean|median|std|var)", col):
df[col] = df[col].astype(numpy.float32)
return df
| {"golden_diff": "diff --git a/nvtabular/ops/groupby.py b/nvtabular/ops/groupby.py\n--- a/nvtabular/ops/groupby.py\n+++ b/nvtabular/ops/groupby.py\n@@ -12,6 +12,8 @@\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n+import re\n+\n import numpy\n from dask.dataframe.utils import meta_nonempty\n \n@@ -184,7 +186,14 @@\n dtype = col_schema.dtype\n is_list = col_schema.is_list\n \n- dtypes = {\"count\": numpy.int32, \"mean\": numpy.float32}\n+ dtypes = {\n+ \"count\": numpy.int32,\n+ \"nunique\": numpy.int32,\n+ \"mean\": numpy.float32,\n+ \"var\": numpy.float32,\n+ \"std\": numpy.float32,\n+ \"median\": numpy.float32,\n+ }\n \n is_lists = {\"list\": True}\n \n@@ -235,9 +244,9 @@\n df.drop(columns=[col + f\"{name_sep}list\"], inplace=True)\n \n for col in df.columns:\n- if col.endswith(f\"{name_sep}count\"):\n+ if re.search(f\"{name_sep}(count|nunique)\", col):\n df[col] = df[col].astype(numpy.int32)\n- elif col.endswith(f\"{name_sep}mean\"):\n+ elif re.search(f\"{name_sep}(mean|median|std|var)\", col):\n df[col] = df[col].astype(numpy.float32)\n \n return df\n", "issue": "[BUG] Groupby aggregations on non float64 columns throws an error\n**Describe the bug**\r\nWhen running `Groupby` on columns with a dtype other than `float64`, I receive the following error:\r\n\r\n```\r\nTypeError: Dtype discrepancy detected for column measurement_std: operator Groupby reported dtype `float32` but returned dtype `float64`.\r\n```\r\n\r\n**Steps/Code to reproduce bug**\r\n\r\n\r\nHere is code that can be copied over\r\n```\r\nimport cudf\r\nimport nvtabular as nvt\r\n\r\ngdf = cudf.DataFrame(data={'cat': ['a', 'a', 'b'], 'timestamp': [1, 2, 1], 'measurement': [0.1, 0.2, 0.5]})\r\n\r\ngdf.measurement = gdf.measurement.astype('float32')\r\n\r\nout = ['measurement', 'cat'] >> nvt.ops.Groupby('cat', aggs=['std'])\r\n\r\n\r\nds = nvt.Dataset(gdf)\r\nwf = nvt.Workflow(out)\r\nres = wf.fit_transform(ds)\r\n\r\nres.compute()\r\n```\r\n\r\n**Expected behavior**\r\nThe transformation to work, results to be returned without error.\r\n\r\n**Environment details (please complete the following information):**\r\n`merlin-pytorch-training:22.05` image with all Merlin repos updated and pip installed today\n", "before_files": [{"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport numpy\nfrom dask.dataframe.utils import meta_nonempty\n\nfrom merlin.core.dispatch import DataFrameType, annotate\nfrom merlin.schema import Schema\n\nfrom .operator import ColumnSelector, Operator\n\n\nclass Groupby(Operator):\n \"\"\"Groupby Transformation\n\n Locally transform each partition of a Dataset with one or\n more groupby aggregations.\n\n WARNING: This transformation does NOT move data between\n partitions. Please make sure that the target Dataset object\n is already shuffled by ``groupby_cols``, otherwise the\n output may be incorrect. 
See: ``Dataset.shuffle_by_keys``.\n\n Example usage::\n\n groupby_cols = ['user_id', 'session_id']\n dataset = dataset.shuffle_by_keys(keys=groupby_cols)\n\n groupby_features = [\n 'user_id', 'session_id', 'month', 'prod_id',\n ] >> ops.Groupby(\n groupby_cols=groupby_cols,\n sort_cols=['month'],\n aggs={\n 'prod_id': 'list',\n 'month': ['first', 'last'],\n },\n )\n processor = nvtabular.Workflow(groupby_features)\n\n workflow.fit(dataset)\n dataset_transformed = workflow.transform(dataset)\n\n Parameters\n -----------\n groupby_cols : str or list of str\n The column names to be used as groupby keys.\n WARNING: Ensure the dataset was partitioned by those\n groupby keys (see above for an example).\n sort_cols : str or list of str\n Columns to be used to sort each partition before\n groupby aggregation is performed. If this argument\n is not specified, the results will not be sorted.\n aggs : dict, list or str\n Groupby aggregations to perform. Supported list-based\n aggregations include \"list\", \"first\" & \"last\". Most\n conventional aggregations supported by Pandas/cuDF are\n also allowed (e.g. \"sum\", \"count\", \"max\", \"mean\", etc.).\n name_sep : str\n String separator to use for new column names.\n \"\"\"\n\n def __init__(\n self, groupby_cols=None, sort_cols=None, aggs=\"list\", name_sep=\"_\", ascending=True\n ):\n self.groupby_cols = groupby_cols\n self.sort_cols = sort_cols or []\n if isinstance(self.groupby_cols, str):\n self.groupby_cols = [self.groupby_cols]\n if isinstance(self.sort_cols, str):\n self.sort_cols = [self.sort_cols]\n self.ascending = ascending\n\n # Split aggregations into \"conventional\" aggregations\n # and \"list-based\" aggregations. After this block,\n # we will have a dictionary for each of these cases.\n # We use the \"__all__\" key to specify aggregations\n # that will be performed on all (non-key) columns.\n self.list_aggs, self.conv_aggs = {}, {}\n if isinstance(aggs, str):\n aggs = {\"__all__\": [aggs]}\n elif isinstance(aggs, list):\n aggs = {\"__all__\": aggs}\n for col, v in aggs.items():\n _aggs = v if isinstance(v, list) else [v]\n _conv_aggs, _list_aggs = set(), set()\n for _agg in _aggs:\n if is_list_agg(_agg):\n _list_aggs.add(\"list\" if _agg == list else _agg)\n _conv_aggs.add(list)\n else:\n _conv_aggs.add(_agg)\n if _conv_aggs:\n self.conv_aggs[col] = list(_conv_aggs)\n if _list_aggs:\n self.list_aggs[col] = list(_list_aggs)\n\n self.name_sep = name_sep\n super().__init__()\n\n @annotate(\"Groupby_op\", color=\"darkgreen\", domain=\"nvt_python\")\n def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType:\n # Sort if necessary\n if self.sort_cols:\n df = df.sort_values(self.sort_cols, ascending=self.ascending, ignore_index=True)\n\n # List aggregations do not work with empty data.\n # Use synthetic metadata to predict output columns.\n empty_df = not len(df)\n\n _df = meta_nonempty(df) if empty_df else df\n\n # Get \"complete\" aggregation dicts\n _list_aggs, _conv_aggs = _get_agg_dicts(\n self.groupby_cols, self.list_aggs, self.conv_aggs, col_selector\n )\n\n # Apply aggregations\n new_df = _apply_aggs(\n _df,\n self.groupby_cols,\n _list_aggs,\n _conv_aggs,\n name_sep=self.name_sep,\n ascending=self.ascending,\n )\n\n if empty_df:\n return new_df.iloc[:0]\n return new_df\n\n transform.__doc__ = Operator.transform.__doc__\n\n def compute_output_schema(\n self, input_schema: Schema, col_selector: ColumnSelector, prev_output_schema: Schema = None\n ) -> Schema:\n if not col_selector and hasattr(self, 
\"target\"):\n col_selector = (\n ColumnSelector(self.target) if isinstance(self.target, list) else self.target\n )\n return super().compute_output_schema(input_schema, col_selector, prev_output_schema)\n\n def column_mapping(self, col_selector):\n column_mapping = {}\n\n for groupby_col in self.groupby_cols:\n if groupby_col in col_selector.names:\n column_mapping[groupby_col] = [groupby_col]\n\n _list_aggs, _conv_aggs = _get_agg_dicts(\n self.groupby_cols, self.list_aggs, self.conv_aggs, col_selector\n )\n\n for input_col_name, aggs in _list_aggs.items():\n output_col_names = _columns_out_from_aggs(\n {input_col_name: aggs}, name_sep=self.name_sep\n )\n for output_col_name in output_col_names:\n column_mapping[output_col_name] = [input_col_name]\n\n for input_col_name, aggs in _conv_aggs.items():\n output_col_names = _columns_out_from_aggs(\n {input_col_name: aggs}, name_sep=self.name_sep\n )\n for output_col_name in output_col_names:\n column_mapping[output_col_name] = [input_col_name]\n\n return column_mapping\n\n def _compute_dtype(self, col_schema, input_schema):\n col_schema = super()._compute_dtype(col_schema, input_schema)\n\n dtype = col_schema.dtype\n is_list = col_schema.is_list\n\n dtypes = {\"count\": numpy.int32, \"mean\": numpy.float32}\n\n is_lists = {\"list\": True}\n\n for col_name in input_schema.column_names:\n combined_aggs = _aggs_for_column(col_name, self.conv_aggs)\n combined_aggs += _aggs_for_column(col_name, self.list_aggs)\n for agg in combined_aggs:\n if col_schema.name.endswith(f\"{self.name_sep}{agg}\"):\n dtype = dtypes.get(agg, dtype)\n is_list = is_lists.get(agg, is_list)\n break\n\n return col_schema.with_dtype(dtype, is_list=is_list, is_ragged=is_list)\n\n\ndef _aggs_for_column(col_name, agg_dict):\n return agg_dict.get(col_name, []) + agg_dict.get(\"__all__\", [])\n\n\ndef _columns_out_from_aggs(aggs, name_sep=\"_\"):\n # Helper function for `output_column_names`\n _agg_cols = []\n for k, v in aggs.items():\n for _v in v:\n if isinstance(_v, str):\n _agg_cols.append(name_sep.join([k, _v]))\n return _agg_cols\n\n\ndef _apply_aggs(_df, groupby_cols, _list_aggs, _conv_aggs, name_sep=\"_\", ascending=True):\n\n # Apply conventional aggs\n _columns = list(set(groupby_cols) | set(_conv_aggs) | set(_list_aggs))\n df = _df[_columns].groupby(groupby_cols).agg(_conv_aggs).reset_index()\n\n df.columns = [\n name_sep.join([n for n in name if n != \"\"]) for name in df.columns.to_flat_index()\n ]\n\n # Handle custom aggs (e.g. \"first\" and \"last\")\n for col, aggs in _list_aggs.items():\n for _agg in aggs:\n if is_list_agg(_agg, custom=True):\n df[f\"{col}{name_sep}{_agg}\"] = _first_or_last(\n df[f\"{col}{name_sep}list\"], _agg, ascending=ascending\n )\n if \"list\" not in aggs:\n df.drop(columns=[col + f\"{name_sep}list\"], inplace=True)\n\n for col in df.columns:\n if col.endswith(f\"{name_sep}count\"):\n df[col] = df[col].astype(numpy.int32)\n elif col.endswith(f\"{name_sep}mean\"):\n df[col] = df[col].astype(numpy.float32)\n\n return df\n\n\ndef _get_agg_dicts(groupby_cols, list_aggs, conv_aggs, columns):\n # Get updated aggregation dicts. 
This should map \"__all__\"\n # to specific columns, and remove elements that are not\n # in `columns`.\n _allowed_cols = [c for c in columns.names if c not in groupby_cols]\n _list_aggs = _ensure_agg_dict(list_aggs, _allowed_cols)\n _conv_aggs = _ensure_agg_dict(conv_aggs, _allowed_cols)\n return _list_aggs, _conv_aggs\n\n\ndef _ensure_agg_dict(_aggs, _allowed_cols):\n # Make sure aggregation dict has legal keys\n if \"__all__\" in _aggs:\n return {col: _aggs[\"__all__\"] for col in _allowed_cols}\n else:\n return {k: v for k, v in _aggs.items() if k in _allowed_cols}\n\n\ndef is_list_agg(agg, custom=False):\n # check if `agg` is a supported list aggregation\n if custom:\n return agg in (\"first\", \"last\")\n else:\n return agg in (\"list\", list, \"first\", \"last\")\n\n\ndef _first_or_last(x, kind, ascending=True):\n # Redirect to _first or _last\n if kind == \"first\" and ascending:\n return _first(x)\n elif kind == \"last\" and not ascending:\n return _first(x)\n else:\n return _last(x)\n\n\ndef _first(x):\n # Convert each element of a list column to be the first\n # item in the list\n if hasattr(x, \"list\"):\n # cuDF-specific behavior\n offsets = x.list._column.offsets\n elements = x.list._column.elements\n return elements[offsets[:-1]]\n else:\n # cpu/pandas\n return x.apply(lambda y: y[0])\n\n\ndef _last(x):\n # Convert each element of a list column to be the last\n # item in the list\n if hasattr(x, \"list\"):\n # cuDF-specific behavior\n offsets = x.list._column.offsets\n elements = x.list._column.elements\n return elements[offsets[1:].values - 1]\n else:\n # cpu/pandas\n return x.apply(lambda y: y[-1])\n", "path": "nvtabular/ops/groupby.py"}], "after_files": [{"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport re\n\nimport numpy\nfrom dask.dataframe.utils import meta_nonempty\n\nfrom merlin.core.dispatch import DataFrameType, annotate\nfrom merlin.schema import Schema\n\nfrom .operator import ColumnSelector, Operator\n\n\nclass Groupby(Operator):\n \"\"\"Groupby Transformation\n\n Locally transform each partition of a Dataset with one or\n more groupby aggregations.\n\n WARNING: This transformation does NOT move data between\n partitions. Please make sure that the target Dataset object\n is already shuffled by ``groupby_cols``, otherwise the\n output may be incorrect. 
See: ``Dataset.shuffle_by_keys``.\n\n Example usage::\n\n groupby_cols = ['user_id', 'session_id']\n dataset = dataset.shuffle_by_keys(keys=groupby_cols)\n\n groupby_features = [\n 'user_id', 'session_id', 'month', 'prod_id',\n ] >> ops.Groupby(\n groupby_cols=groupby_cols,\n sort_cols=['month'],\n aggs={\n 'prod_id': 'list',\n 'month': ['first', 'last'],\n },\n )\n processor = nvtabular.Workflow(groupby_features)\n\n workflow.fit(dataset)\n dataset_transformed = workflow.transform(dataset)\n\n Parameters\n -----------\n groupby_cols : str or list of str\n The column names to be used as groupby keys.\n WARNING: Ensure the dataset was partitioned by those\n groupby keys (see above for an example).\n sort_cols : str or list of str\n Columns to be used to sort each partition before\n groupby aggregation is performed. If this argument\n is not specified, the results will not be sorted.\n aggs : dict, list or str\n Groupby aggregations to perform. Supported list-based\n aggregations include \"list\", \"first\" & \"last\". Most\n conventional aggregations supported by Pandas/cuDF are\n also allowed (e.g. \"sum\", \"count\", \"max\", \"mean\", etc.).\n name_sep : str\n String separator to use for new column names.\n \"\"\"\n\n def __init__(\n self, groupby_cols=None, sort_cols=None, aggs=\"list\", name_sep=\"_\", ascending=True\n ):\n self.groupby_cols = groupby_cols\n self.sort_cols = sort_cols or []\n if isinstance(self.groupby_cols, str):\n self.groupby_cols = [self.groupby_cols]\n if isinstance(self.sort_cols, str):\n self.sort_cols = [self.sort_cols]\n self.ascending = ascending\n\n # Split aggregations into \"conventional\" aggregations\n # and \"list-based\" aggregations. After this block,\n # we will have a dictionary for each of these cases.\n # We use the \"__all__\" key to specify aggregations\n # that will be performed on all (non-key) columns.\n self.list_aggs, self.conv_aggs = {}, {}\n if isinstance(aggs, str):\n aggs = {\"__all__\": [aggs]}\n elif isinstance(aggs, list):\n aggs = {\"__all__\": aggs}\n for col, v in aggs.items():\n _aggs = v if isinstance(v, list) else [v]\n _conv_aggs, _list_aggs = set(), set()\n for _agg in _aggs:\n if is_list_agg(_agg):\n _list_aggs.add(\"list\" if _agg == list else _agg)\n _conv_aggs.add(list)\n else:\n _conv_aggs.add(_agg)\n if _conv_aggs:\n self.conv_aggs[col] = list(_conv_aggs)\n if _list_aggs:\n self.list_aggs[col] = list(_list_aggs)\n\n self.name_sep = name_sep\n super().__init__()\n\n @annotate(\"Groupby_op\", color=\"darkgreen\", domain=\"nvt_python\")\n def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType:\n # Sort if necessary\n if self.sort_cols:\n df = df.sort_values(self.sort_cols, ascending=self.ascending, ignore_index=True)\n\n # List aggregations do not work with empty data.\n # Use synthetic metadata to predict output columns.\n empty_df = not len(df)\n\n _df = meta_nonempty(df) if empty_df else df\n\n # Get \"complete\" aggregation dicts\n _list_aggs, _conv_aggs = _get_agg_dicts(\n self.groupby_cols, self.list_aggs, self.conv_aggs, col_selector\n )\n\n # Apply aggregations\n new_df = _apply_aggs(\n _df,\n self.groupby_cols,\n _list_aggs,\n _conv_aggs,\n name_sep=self.name_sep,\n ascending=self.ascending,\n )\n\n if empty_df:\n return new_df.iloc[:0]\n return new_df\n\n transform.__doc__ = Operator.transform.__doc__\n\n def compute_output_schema(\n self, input_schema: Schema, col_selector: ColumnSelector, prev_output_schema: Schema = None\n ) -> Schema:\n if not col_selector and hasattr(self, 
\"target\"):\n col_selector = (\n ColumnSelector(self.target) if isinstance(self.target, list) else self.target\n )\n return super().compute_output_schema(input_schema, col_selector, prev_output_schema)\n\n def column_mapping(self, col_selector):\n column_mapping = {}\n\n for groupby_col in self.groupby_cols:\n if groupby_col in col_selector.names:\n column_mapping[groupby_col] = [groupby_col]\n\n _list_aggs, _conv_aggs = _get_agg_dicts(\n self.groupby_cols, self.list_aggs, self.conv_aggs, col_selector\n )\n\n for input_col_name, aggs in _list_aggs.items():\n output_col_names = _columns_out_from_aggs(\n {input_col_name: aggs}, name_sep=self.name_sep\n )\n for output_col_name in output_col_names:\n column_mapping[output_col_name] = [input_col_name]\n\n for input_col_name, aggs in _conv_aggs.items():\n output_col_names = _columns_out_from_aggs(\n {input_col_name: aggs}, name_sep=self.name_sep\n )\n for output_col_name in output_col_names:\n column_mapping[output_col_name] = [input_col_name]\n\n return column_mapping\n\n def _compute_dtype(self, col_schema, input_schema):\n col_schema = super()._compute_dtype(col_schema, input_schema)\n\n dtype = col_schema.dtype\n is_list = col_schema.is_list\n\n dtypes = {\n \"count\": numpy.int32,\n \"nunique\": numpy.int32,\n \"mean\": numpy.float32,\n \"var\": numpy.float32,\n \"std\": numpy.float32,\n \"median\": numpy.float32,\n }\n\n is_lists = {\"list\": True}\n\n for col_name in input_schema.column_names:\n combined_aggs = _aggs_for_column(col_name, self.conv_aggs)\n combined_aggs += _aggs_for_column(col_name, self.list_aggs)\n for agg in combined_aggs:\n if col_schema.name.endswith(f\"{self.name_sep}{agg}\"):\n dtype = dtypes.get(agg, dtype)\n is_list = is_lists.get(agg, is_list)\n break\n\n return col_schema.with_dtype(dtype, is_list=is_list, is_ragged=is_list)\n\n\ndef _aggs_for_column(col_name, agg_dict):\n return agg_dict.get(col_name, []) + agg_dict.get(\"__all__\", [])\n\n\ndef _columns_out_from_aggs(aggs, name_sep=\"_\"):\n # Helper function for `output_column_names`\n _agg_cols = []\n for k, v in aggs.items():\n for _v in v:\n if isinstance(_v, str):\n _agg_cols.append(name_sep.join([k, _v]))\n return _agg_cols\n\n\ndef _apply_aggs(_df, groupby_cols, _list_aggs, _conv_aggs, name_sep=\"_\", ascending=True):\n\n # Apply conventional aggs\n _columns = list(set(groupby_cols) | set(_conv_aggs) | set(_list_aggs))\n df = _df[_columns].groupby(groupby_cols).agg(_conv_aggs).reset_index()\n\n df.columns = [\n name_sep.join([n for n in name if n != \"\"]) for name in df.columns.to_flat_index()\n ]\n\n # Handle custom aggs (e.g. \"first\" and \"last\")\n for col, aggs in _list_aggs.items():\n for _agg in aggs:\n if is_list_agg(_agg, custom=True):\n df[f\"{col}{name_sep}{_agg}\"] = _first_or_last(\n df[f\"{col}{name_sep}list\"], _agg, ascending=ascending\n )\n if \"list\" not in aggs:\n df.drop(columns=[col + f\"{name_sep}list\"], inplace=True)\n\n for col in df.columns:\n if re.search(f\"{name_sep}(count|nunique)\", col):\n df[col] = df[col].astype(numpy.int32)\n elif re.search(f\"{name_sep}(mean|median|std|var)\", col):\n df[col] = df[col].astype(numpy.float32)\n\n return df\n\n\ndef _get_agg_dicts(groupby_cols, list_aggs, conv_aggs, columns):\n # Get updated aggregation dicts. 
This should map \"__all__\"\n # to specific columns, and remove elements that are not\n # in `columns`.\n _allowed_cols = [c for c in columns.names if c not in groupby_cols]\n _list_aggs = _ensure_agg_dict(list_aggs, _allowed_cols)\n _conv_aggs = _ensure_agg_dict(conv_aggs, _allowed_cols)\n return _list_aggs, _conv_aggs\n\n\ndef _ensure_agg_dict(_aggs, _allowed_cols):\n # Make sure aggregation dict has legal keys\n if \"__all__\" in _aggs:\n return {col: _aggs[\"__all__\"] for col in _allowed_cols}\n else:\n return {k: v for k, v in _aggs.items() if k in _allowed_cols}\n\n\ndef is_list_agg(agg, custom=False):\n # check if `agg` is a supported list aggregation\n if custom:\n return agg in (\"first\", \"last\")\n else:\n return agg in (\"list\", list, \"first\", \"last\")\n\n\ndef _first_or_last(x, kind, ascending=True):\n # Redirect to _first or _last\n if kind == \"first\" and ascending:\n return _first(x)\n elif kind == \"last\" and not ascending:\n return _first(x)\n else:\n return _last(x)\n\n\ndef _first(x):\n # Convert each element of a list column to be the first\n # item in the list\n if hasattr(x, \"list\"):\n # cuDF-specific behavior\n offsets = x.list._column.offsets\n elements = x.list._column.elements\n return elements[offsets[:-1]]\n else:\n # cpu/pandas\n return x.apply(lambda y: y[0])\n\n\ndef _last(x):\n # Convert each element of a list column to be the last\n # item in the list\n if hasattr(x, \"list\"):\n # cuDF-specific behavior\n offsets = x.list._column.offsets\n elements = x.list._column.elements\n return elements[offsets[1:].values - 1]\n else:\n # cpu/pandas\n return x.apply(lambda y: y[-1])\n", "path": "nvtabular/ops/groupby.py"}]} | 4,040 | 377 |
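The Groupby patch above widens the dtype handling from `count`/`mean` alone to the aggregations named in the issue (`std`, `var`, `median`, `nunique`), and rewrites the post-aggregation cast as a regex match on the suffixed column name. A standalone sketch of that cast step, assuming a pandas DataFrame whose aggregate columns follow the `<column><name_sep><agg>` naming used by the operator (the helper name is illustrative):

```python
import re

import numpy as np
import pandas as pd

def coerce_agg_dtypes(df: pd.DataFrame, name_sep: str = "_") -> pd.DataFrame:
    # Integer-valued aggregations are kept as int32; floating-point
    # aggregations are narrowed to float32 so the returned dtypes match
    # what the operator's schema computation reports.
    for col in df.columns:
        if re.search(f"{name_sep}(count|nunique)", col):
            df[col] = df[col].astype(np.int32)
        elif re.search(f"{name_sep}(mean|median|std|var)", col):
            df[col] = df[col].astype(np.float32)
    return df
```

This mirrors the two `re.search` branches in the golden diff; the same suffix-to-dtype mapping is what the added `dtypes` dictionary encodes on the schema side.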
gh_patches_debug_958 | rasdani/github-patches | git_diff | nvaccess__nvda-10921 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
VS Code: Reduce the number of times one has to use NVDA+Space to switch modes.
I just filed [this issue](https://github.com/microsoft/vscode/issues/93087) against VS Code where I suggest to use targeted role="document" in those places that produce HTML output for consumption, to make NVDA switch in and out of browse mode in a smart, automated, manner, reducing the number of times one has to use NVDA+Space to toggle modes. Examples I found while using the 1.44 VS Code Insider builds were:
* The Welcome page
* The details page for an extension
* The ReadMe file that may be displayed after an extension has been installed.
@leonardder suggested that, once this lands in stable, a modification might be needed for the VS Code app module. So filing this issue here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/appModules/code.py`
Content:
```
1 #appModules/code.py
2 #A part of NonVisual Desktop Access (NVDA)
3 #Copyright (C) 2019 NV Access Limited, Babbage B.V.
4 #This file is covered by the GNU General Public License.
5 #See the file COPYING for more details.
6
7 import appModuleHandler
8
9 class AppModule(appModuleHandler.AppModule):
10 disableBrowseModeByDefault = True
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/source/appModules/code.py b/source/appModules/code.py
deleted file mode 100644
--- a/source/appModules/code.py
+++ /dev/null
@@ -1,10 +0,0 @@
-#appModules/code.py
-#A part of NonVisual Desktop Access (NVDA)
-#Copyright (C) 2019 NV Access Limited, Babbage B.V.
-#This file is covered by the GNU General Public License.
-#See the file COPYING for more details.
-
-import appModuleHandler
-
-class AppModule(appModuleHandler.AppModule):
- disableBrowseModeByDefault = True
| {"golden_diff": "diff --git a/source/appModules/code.py b/source/appModules/code.py\ndeleted file mode 100644\n--- a/source/appModules/code.py\n+++ /dev/null\n@@ -1,10 +0,0 @@\n-#appModules/code.py\n-#A part of NonVisual Desktop Access (NVDA)\n-#Copyright (C) 2019 NV Access Limited, Babbage B.V.\n-#This file is covered by the GNU General Public License.\n-#See the file COPYING for more details.\n-\n-import appModuleHandler\n-\n-class AppModule(appModuleHandler.AppModule):\n-\tdisableBrowseModeByDefault = True\n", "issue": "VS Code: Reduce the number of times one has to use NVDA+Space to switch modes.\nI just filed [this issue](https://github.com/microsoft/vscode/issues/93087) against VS Code where I suggest to use targeted role=\"document\" in those places that produce HTML output for consumption, to make NVDA switch in and out of browse mode in a smart, automated, manner, reducing the number of times one has to use NVDA+Space to toggle modes. Examples I found while using the 1.44 VS Code Insider builds were:\r\n\r\n* The Welcome page\r\n* The details page for an extension\r\n* The ReadMe file that may be displayed after an extension has been installed.\r\n\r\n@leonardder suggested that, once this lands in stable, a modification might be needed for the VS Code app module. So filing this issue here.\n", "before_files": [{"content": "#appModules/code.py\n#A part of NonVisual Desktop Access (NVDA)\n#Copyright (C) 2019 NV Access Limited, Babbage B.V.\n#This file is covered by the GNU General Public License.\n#See the file COPYING for more details.\n\nimport appModuleHandler\n\nclass AppModule(appModuleHandler.AppModule):\n\tdisableBrowseModeByDefault = True\n", "path": "source/appModules/code.py"}], "after_files": [{"content": null, "path": "source/appModules/code.py"}]} | 535 | 139 |
gh_patches_debug_39654 | rasdani/github-patches | git_diff | ansible-collections__community.general-4794 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
community.general.sudoers should do a syntax check
### Summary
The `sudoers` module currently allows writing broken `sudoers` files, for example when `commands` is not an absolute path. This kind of user error can easily be detected by running `visudo -c -f ...` on the new file. I think it makes sense for the `sudoers` module to run this check.
Some errors, such as duplicate aliases, can only be detected in the context of the other `sudoers` files. According to [this post](https://serverfault.com/a/901906) a more comprehensive check can be done by running a full configuration check `visudo -c` after installing the new file. However, I'm not sure if this is worth implementing. The `sudoers` module does not currently seem to be capable of producing syntax error that can not be detected by a simple `visudo -c -f ...`.
### Issue Type
Feature Idea
### Component Name
sudoers
### Additional Information
Slightly modified example task from the documentation:
```yaml
- name: >-
Allow the alice user to run sudo /bin/systemctl restart my-service or
sudo /bin/systemctl reload my-service, but a password is required
community.general.sudoers:
name: alice-service
user: alice
commands:
- systemctl restart my-service
```
`sudo` commands complain about the syntax error:
```
visudo -c -f /etc/sudoers.d/alice-service
alice-service:1:21: syntax error
alice ALL=NOPASSWD: systemctl restart my-service
^~~~~~~~~
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/modules/system/sudoers.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4
5 # Copyright: (c) 2019, Jon Ellis (@JonEllis) <[email protected]>
6 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
7
8 from __future__ import absolute_import, division, print_function
9 __metaclass__ = type
10
11
12 DOCUMENTATION = '''
13 ---
14 module: sudoers
15 short_description: Manage sudoers files
16 version_added: "4.3.0"
17 description:
18 - This module allows for the manipulation of sudoers files.
19 author:
20 - "Jon Ellis (@JonEllis) <[email protected]>"
21 options:
22 commands:
23 description:
24 - The commands allowed by the sudoers rule.
25 - Multiple can be added by passing a list of commands.
26 - Use C(ALL) for all commands.
27 type: list
28 elements: str
29 group:
30 description:
31 - The name of the group for the sudoers rule.
32 - This option cannot be used in conjunction with I(user).
33 type: str
34 name:
35 required: true
36 description:
37 - The name of the sudoers rule.
38 - This will be used for the filename for the sudoers file managed by this rule.
39 type: str
40 nopassword:
41 description:
42 - Whether a password will be required to run the sudo'd command.
43 default: true
44 type: bool
45 runas:
46 description:
47 - Specify the target user the command(s) will run as.
48 type: str
49 version_added: 4.7.0
50 sudoers_path:
51 description:
52 - The path which sudoers config files will be managed in.
53 default: /etc/sudoers.d
54 type: str
55 state:
56 default: "present"
57 choices:
58 - present
59 - absent
60 description:
61 - Whether the rule should exist or not.
62 type: str
63 user:
64 description:
65 - The name of the user for the sudoers rule.
66 - This option cannot be used in conjunction with I(group).
67 type: str
68 '''
69
70 EXAMPLES = '''
71 - name: Allow the backup user to sudo /usr/local/bin/backup
72 community.general.sudoers:
73 name: allow-backup
74 state: present
75 user: backup
76 commands: /usr/local/bin/backup
77
78 - name: Allow the bob user to run any commands as alice with sudo -u alice
79 community.general.sudoers:
80 name: bob-do-as-alice
81 state: present
82 user: bob
83 runas: alice
84 commands: ALL
85
86 - name: >-
87 Allow the monitoring group to run sudo /usr/local/bin/gather-app-metrics
88 without requiring a password
89 community.general.sudoers:
90 name: monitor-app
91 group: monitoring
92 commands: /usr/local/bin/gather-app-metrics
93
94 - name: >-
95 Allow the alice user to run sudo /bin/systemctl restart my-service or
96 sudo /bin/systemctl reload my-service, but a password is required
97 community.general.sudoers:
98 name: alice-service
99 user: alice
100 commands:
101 - /bin/systemctl restart my-service
102 - /bin/systemctl reload my-service
103 nopassword: false
104
105 - name: Revoke the previous sudo grants given to the alice user
106 community.general.sudoers:
107 name: alice-service
108 state: absent
109 '''
110
111 import os
112 from ansible.module_utils.basic import AnsibleModule
113 from ansible.module_utils.common.text.converters import to_native
114
115
116 class Sudoers(object):
117
118 FILE_MODE = 0o440
119
120 def __init__(self, module):
121 self.check_mode = module.check_mode
122 self.name = module.params['name']
123 self.user = module.params['user']
124 self.group = module.params['group']
125 self.state = module.params['state']
126 self.nopassword = module.params['nopassword']
127 self.runas = module.params['runas']
128 self.sudoers_path = module.params['sudoers_path']
129 self.file = os.path.join(self.sudoers_path, self.name)
130 self.commands = module.params['commands']
131
132 def write(self):
133 if self.check_mode:
134 return
135
136 with open(self.file, 'w') as f:
137 f.write(self.content())
138
139 os.chmod(self.file, self.FILE_MODE)
140
141 def delete(self):
142 if self.check_mode:
143 return
144
145 os.remove(self.file)
146
147 def exists(self):
148 return os.path.exists(self.file)
149
150 def matches(self):
151 with open(self.file, 'r') as f:
152 content_matches = f.read() == self.content()
153
154 current_mode = os.stat(self.file).st_mode & 0o777
155 mode_matches = current_mode == self.FILE_MODE
156
157 return content_matches and mode_matches
158
159 def content(self):
160 if self.user:
161 owner = self.user
162 elif self.group:
163 owner = '%{group}'.format(group=self.group)
164
165 commands_str = ', '.join(self.commands)
166 nopasswd_str = 'NOPASSWD:' if self.nopassword else ''
167 runas_str = '({runas})'.format(runas=self.runas) if self.runas is not None else ''
168 return "{owner} ALL={runas}{nopasswd} {commands}\n".format(owner=owner, runas=runas_str, nopasswd=nopasswd_str, commands=commands_str)
169
170 def run(self):
171 if self.state == 'absent':
172 if self.exists():
173 self.delete()
174 return True
175 else:
176 return False
177
178 if self.exists() and self.matches():
179 return False
180
181 self.write()
182 return True
183
184
185 def main():
186 argument_spec = {
187 'commands': {
188 'type': 'list',
189 'elements': 'str',
190 },
191 'group': {},
192 'name': {
193 'required': True,
194 },
195 'nopassword': {
196 'type': 'bool',
197 'default': True,
198 },
199 'runas': {
200 'type': 'str',
201 'default': None,
202 },
203 'sudoers_path': {
204 'type': 'str',
205 'default': '/etc/sudoers.d',
206 },
207 'state': {
208 'default': 'present',
209 'choices': ['present', 'absent'],
210 },
211 'user': {},
212 }
213
214 module = AnsibleModule(
215 argument_spec=argument_spec,
216 mutually_exclusive=[['user', 'group']],
217 supports_check_mode=True,
218 required_if=[('state', 'present', ['commands'])],
219 )
220
221 sudoers = Sudoers(module)
222
223 try:
224 changed = sudoers.run()
225 module.exit_json(changed=changed)
226 except Exception as e:
227 module.fail_json(msg=to_native(e))
228
229
230 if __name__ == '__main__':
231 main()
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/modules/system/sudoers.py b/plugins/modules/system/sudoers.py
--- a/plugins/modules/system/sudoers.py
+++ b/plugins/modules/system/sudoers.py
@@ -65,6 +65,15 @@
- The name of the user for the sudoers rule.
- This option cannot be used in conjunction with I(group).
type: str
+ validation:
+ description:
+ - If C(absent), the sudoers rule will be added without validation.
+ - If C(detect) and visudo is available, then the sudoers rule will be validated by visudo.
+ - If C(required), visudo must be available to validate the sudoers rule.
+ type: str
+ default: detect
+ choices: [ absent, detect, required ]
+ version_added: 5.2.0
'''
EXAMPLES = '''
@@ -118,6 +127,8 @@
FILE_MODE = 0o440
def __init__(self, module):
+ self.module = module
+
self.check_mode = module.check_mode
self.name = module.params['name']
self.user = module.params['user']
@@ -128,6 +139,7 @@
self.sudoers_path = module.params['sudoers_path']
self.file = os.path.join(self.sudoers_path, self.name)
self.commands = module.params['commands']
+ self.validation = module.params['validation']
def write(self):
if self.check_mode:
@@ -167,6 +179,20 @@
runas_str = '({runas})'.format(runas=self.runas) if self.runas is not None else ''
return "{owner} ALL={runas}{nopasswd} {commands}\n".format(owner=owner, runas=runas_str, nopasswd=nopasswd_str, commands=commands_str)
+ def validate(self):
+ if self.validation == 'absent':
+ return
+
+ visudo_path = self.module.get_bin_path('visudo', required=self.validation == 'required')
+ if visudo_path is None:
+ return
+
+ check_command = [visudo_path, '-c', '-f', '-']
+ rc, stdout, stderr = self.module.run_command(check_command, data=self.content())
+
+ if rc != 0:
+ raise Exception('Failed to validate sudoers rule:\n{stdout}'.format(stdout=stdout))
+
def run(self):
if self.state == 'absent':
if self.exists():
@@ -175,6 +201,8 @@
else:
return False
+ self.validate()
+
if self.exists() and self.matches():
return False
@@ -209,6 +237,10 @@
'choices': ['present', 'absent'],
},
'user': {},
+ 'validation': {
+ 'default': 'detect',
+ 'choices': ['absent', 'detect', 'required']
+ },
}
module = AnsibleModule(
| {"golden_diff": "diff --git a/plugins/modules/system/sudoers.py b/plugins/modules/system/sudoers.py\n--- a/plugins/modules/system/sudoers.py\n+++ b/plugins/modules/system/sudoers.py\n@@ -65,6 +65,15 @@\n - The name of the user for the sudoers rule.\n - This option cannot be used in conjunction with I(group).\n type: str\n+ validation:\n+ description:\n+ - If C(absent), the sudoers rule will be added without validation.\n+ - If C(detect) and visudo is available, then the sudoers rule will be validated by visudo.\n+ - If C(required), visudo must be available to validate the sudoers rule.\n+ type: str\n+ default: detect\n+ choices: [ absent, detect, required ]\n+ version_added: 5.2.0\n '''\n \n EXAMPLES = '''\n@@ -118,6 +127,8 @@\n FILE_MODE = 0o440\n \n def __init__(self, module):\n+ self.module = module\n+\n self.check_mode = module.check_mode\n self.name = module.params['name']\n self.user = module.params['user']\n@@ -128,6 +139,7 @@\n self.sudoers_path = module.params['sudoers_path']\n self.file = os.path.join(self.sudoers_path, self.name)\n self.commands = module.params['commands']\n+ self.validation = module.params['validation']\n \n def write(self):\n if self.check_mode:\n@@ -167,6 +179,20 @@\n runas_str = '({runas})'.format(runas=self.runas) if self.runas is not None else ''\n return \"{owner} ALL={runas}{nopasswd} {commands}\\n\".format(owner=owner, runas=runas_str, nopasswd=nopasswd_str, commands=commands_str)\n \n+ def validate(self):\n+ if self.validation == 'absent':\n+ return\n+\n+ visudo_path = self.module.get_bin_path('visudo', required=self.validation == 'required')\n+ if visudo_path is None:\n+ return\n+\n+ check_command = [visudo_path, '-c', '-f', '-']\n+ rc, stdout, stderr = self.module.run_command(check_command, data=self.content())\n+\n+ if rc != 0:\n+ raise Exception('Failed to validate sudoers rule:\\n{stdout}'.format(stdout=stdout))\n+\n def run(self):\n if self.state == 'absent':\n if self.exists():\n@@ -175,6 +201,8 @@\n else:\n return False\n \n+ self.validate()\n+\n if self.exists() and self.matches():\n return False\n \n@@ -209,6 +237,10 @@\n 'choices': ['present', 'absent'],\n },\n 'user': {},\n+ 'validation': {\n+ 'default': 'detect',\n+ 'choices': ['absent', 'detect', 'required']\n+ },\n }\n \n module = AnsibleModule(\n", "issue": "community.general.sudoers should do a syntax check\n### Summary\n\nThe `sudoers` module currently allows writing broken `sudoers` files, for example when `commands` is not an absolute path. This kind of user error can easily be detected by running `visudo -c -f ...` on the new file. I think it makes sense for the `sudoers` module to run this check.\r\n\r\nSome errors, such as duplicate aliases, can only be detected in the context of the other `sudoers` files. According to [this post](https://serverfault.com/a/901906) a more comprehensive check can be done by running a full configuration check `visudo -c` after installing the new file. However, I'm not sure if this is worth implementing. 
The `sudoers` module does not currently seem to be capable of producing syntax error that can not be detected by a simple `visudo -c -f ...`.\n\n### Issue Type\n\nFeature Idea\n\n### Component Name\n\nsudoers\n\n### Additional Information\n\nSlightly modified example task from the documentation:\r\n```yaml\r\n- name: >-\r\n Allow the alice user to run sudo /bin/systemctl restart my-service or\r\n sudo /bin/systemctl reload my-service, but a password is required\r\n community.general.sudoers:\r\n name: alice-service\r\n user: alice\r\n commands:\r\n - systemctl restart my-service\r\n```\r\n\r\n`sudo` commands complain about the syntax error:\r\n```\r\nvisudo -c -f /etc/sudoers.d/alice-service\r\nalice-service:1:21: syntax error\r\nalice ALL=NOPASSWD: systemctl restart my-service\r\n ^~~~~~~~~\r\n```\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n\n# Copyright: (c) 2019, Jon Ellis (@JonEllis) <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: sudoers\nshort_description: Manage sudoers files\nversion_added: \"4.3.0\"\ndescription:\n - This module allows for the manipulation of sudoers files.\nauthor:\n - \"Jon Ellis (@JonEllis) <[email protected]>\"\noptions:\n commands:\n description:\n - The commands allowed by the sudoers rule.\n - Multiple can be added by passing a list of commands.\n - Use C(ALL) for all commands.\n type: list\n elements: str\n group:\n description:\n - The name of the group for the sudoers rule.\n - This option cannot be used in conjunction with I(user).\n type: str\n name:\n required: true\n description:\n - The name of the sudoers rule.\n - This will be used for the filename for the sudoers file managed by this rule.\n type: str\n nopassword:\n description:\n - Whether a password will be required to run the sudo'd command.\n default: true\n type: bool\n runas:\n description:\n - Specify the target user the command(s) will run as.\n type: str\n version_added: 4.7.0\n sudoers_path:\n description:\n - The path which sudoers config files will be managed in.\n default: /etc/sudoers.d\n type: str\n state:\n default: \"present\"\n choices:\n - present\n - absent\n description:\n - Whether the rule should exist or not.\n type: str\n user:\n description:\n - The name of the user for the sudoers rule.\n - This option cannot be used in conjunction with I(group).\n type: str\n'''\n\nEXAMPLES = '''\n- name: Allow the backup user to sudo /usr/local/bin/backup\n community.general.sudoers:\n name: allow-backup\n state: present\n user: backup\n commands: /usr/local/bin/backup\n\n- name: Allow the bob user to run any commands as alice with sudo -u alice\n community.general.sudoers:\n name: bob-do-as-alice\n state: present\n user: bob\n runas: alice\n commands: ALL\n\n- name: >-\n Allow the monitoring group to run sudo /usr/local/bin/gather-app-metrics\n without requiring a password\n community.general.sudoers:\n name: monitor-app\n group: monitoring\n commands: /usr/local/bin/gather-app-metrics\n\n- name: >-\n Allow the alice user to run sudo /bin/systemctl restart my-service or\n sudo /bin/systemctl reload my-service, but a password is required\n community.general.sudoers:\n name: alice-service\n user: alice\n commands:\n - /bin/systemctl restart my-service\n - /bin/systemctl reload 
my-service\n nopassword: false\n\n- name: Revoke the previous sudo grants given to the alice user\n community.general.sudoers:\n name: alice-service\n state: absent\n'''\n\nimport os\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.common.text.converters import to_native\n\n\nclass Sudoers(object):\n\n FILE_MODE = 0o440\n\n def __init__(self, module):\n self.check_mode = module.check_mode\n self.name = module.params['name']\n self.user = module.params['user']\n self.group = module.params['group']\n self.state = module.params['state']\n self.nopassword = module.params['nopassword']\n self.runas = module.params['runas']\n self.sudoers_path = module.params['sudoers_path']\n self.file = os.path.join(self.sudoers_path, self.name)\n self.commands = module.params['commands']\n\n def write(self):\n if self.check_mode:\n return\n\n with open(self.file, 'w') as f:\n f.write(self.content())\n\n os.chmod(self.file, self.FILE_MODE)\n\n def delete(self):\n if self.check_mode:\n return\n\n os.remove(self.file)\n\n def exists(self):\n return os.path.exists(self.file)\n\n def matches(self):\n with open(self.file, 'r') as f:\n content_matches = f.read() == self.content()\n\n current_mode = os.stat(self.file).st_mode & 0o777\n mode_matches = current_mode == self.FILE_MODE\n\n return content_matches and mode_matches\n\n def content(self):\n if self.user:\n owner = self.user\n elif self.group:\n owner = '%{group}'.format(group=self.group)\n\n commands_str = ', '.join(self.commands)\n nopasswd_str = 'NOPASSWD:' if self.nopassword else ''\n runas_str = '({runas})'.format(runas=self.runas) if self.runas is not None else ''\n return \"{owner} ALL={runas}{nopasswd} {commands}\\n\".format(owner=owner, runas=runas_str, nopasswd=nopasswd_str, commands=commands_str)\n\n def run(self):\n if self.state == 'absent':\n if self.exists():\n self.delete()\n return True\n else:\n return False\n\n if self.exists() and self.matches():\n return False\n\n self.write()\n return True\n\n\ndef main():\n argument_spec = {\n 'commands': {\n 'type': 'list',\n 'elements': 'str',\n },\n 'group': {},\n 'name': {\n 'required': True,\n },\n 'nopassword': {\n 'type': 'bool',\n 'default': True,\n },\n 'runas': {\n 'type': 'str',\n 'default': None,\n },\n 'sudoers_path': {\n 'type': 'str',\n 'default': '/etc/sudoers.d',\n },\n 'state': {\n 'default': 'present',\n 'choices': ['present', 'absent'],\n },\n 'user': {},\n }\n\n module = AnsibleModule(\n argument_spec=argument_spec,\n mutually_exclusive=[['user', 'group']],\n supports_check_mode=True,\n required_if=[('state', 'present', ['commands'])],\n )\n\n sudoers = Sudoers(module)\n\n try:\n changed = sudoers.run()\n module.exit_json(changed=changed)\n except Exception as e:\n module.fail_json(msg=to_native(e))\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/system/sudoers.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n\n# Copyright: (c) 2019, Jon Ellis (@JonEllis) <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: sudoers\nshort_description: Manage sudoers files\nversion_added: \"4.3.0\"\ndescription:\n - This module allows for the manipulation of sudoers files.\nauthor:\n - \"Jon Ellis (@JonEllis) <[email protected]>\"\noptions:\n commands:\n description:\n - The commands allowed by the sudoers rule.\n - Multiple can 
be added by passing a list of commands.\n - Use C(ALL) for all commands.\n type: list\n elements: str\n group:\n description:\n - The name of the group for the sudoers rule.\n - This option cannot be used in conjunction with I(user).\n type: str\n name:\n required: true\n description:\n - The name of the sudoers rule.\n - This will be used for the filename for the sudoers file managed by this rule.\n type: str\n nopassword:\n description:\n - Whether a password will be required to run the sudo'd command.\n default: true\n type: bool\n runas:\n description:\n - Specify the target user the command(s) will run as.\n type: str\n version_added: 4.7.0\n sudoers_path:\n description:\n - The path which sudoers config files will be managed in.\n default: /etc/sudoers.d\n type: str\n state:\n default: \"present\"\n choices:\n - present\n - absent\n description:\n - Whether the rule should exist or not.\n type: str\n user:\n description:\n - The name of the user for the sudoers rule.\n - This option cannot be used in conjunction with I(group).\n type: str\n validation:\n description:\n - If C(absent), the sudoers rule will be added without validation.\n - If C(detect) and visudo is available, then the sudoers rule will be validated by visudo.\n - If C(required), visudo must be available to validate the sudoers rule.\n type: str\n default: detect\n choices: [ absent, detect, required ]\n version_added: 5.2.0\n'''\n\nEXAMPLES = '''\n- name: Allow the backup user to sudo /usr/local/bin/backup\n community.general.sudoers:\n name: allow-backup\n state: present\n user: backup\n commands: /usr/local/bin/backup\n\n- name: Allow the bob user to run any commands as alice with sudo -u alice\n community.general.sudoers:\n name: bob-do-as-alice\n state: present\n user: bob\n runas: alice\n commands: ALL\n\n- name: >-\n Allow the monitoring group to run sudo /usr/local/bin/gather-app-metrics\n without requiring a password\n community.general.sudoers:\n name: monitor-app\n group: monitoring\n commands: /usr/local/bin/gather-app-metrics\n\n- name: >-\n Allow the alice user to run sudo /bin/systemctl restart my-service or\n sudo /bin/systemctl reload my-service, but a password is required\n community.general.sudoers:\n name: alice-service\n user: alice\n commands:\n - /bin/systemctl restart my-service\n - /bin/systemctl reload my-service\n nopassword: false\n\n- name: Revoke the previous sudo grants given to the alice user\n community.general.sudoers:\n name: alice-service\n state: absent\n'''\n\nimport os\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.common.text.converters import to_native\n\n\nclass Sudoers(object):\n\n FILE_MODE = 0o440\n\n def __init__(self, module):\n self.module = module\n\n self.check_mode = module.check_mode\n self.name = module.params['name']\n self.user = module.params['user']\n self.group = module.params['group']\n self.state = module.params['state']\n self.nopassword = module.params['nopassword']\n self.runas = module.params['runas']\n self.sudoers_path = module.params['sudoers_path']\n self.file = os.path.join(self.sudoers_path, self.name)\n self.commands = module.params['commands']\n self.validation = module.params['validation']\n\n def write(self):\n if self.check_mode:\n return\n\n with open(self.file, 'w') as f:\n f.write(self.content())\n\n os.chmod(self.file, self.FILE_MODE)\n\n def delete(self):\n if self.check_mode:\n return\n\n os.remove(self.file)\n\n def exists(self):\n return os.path.exists(self.file)\n\n def matches(self):\n with 
open(self.file, 'r') as f:\n content_matches = f.read() == self.content()\n\n current_mode = os.stat(self.file).st_mode & 0o777\n mode_matches = current_mode == self.FILE_MODE\n\n return content_matches and mode_matches\n\n def content(self):\n if self.user:\n owner = self.user\n elif self.group:\n owner = '%{group}'.format(group=self.group)\n\n commands_str = ', '.join(self.commands)\n nopasswd_str = 'NOPASSWD:' if self.nopassword else ''\n runas_str = '({runas})'.format(runas=self.runas) if self.runas is not None else ''\n return \"{owner} ALL={runas}{nopasswd} {commands}\\n\".format(owner=owner, runas=runas_str, nopasswd=nopasswd_str, commands=commands_str)\n\n def validate(self):\n if self.validation == 'absent':\n return\n\n visudo_path = self.module.get_bin_path('visudo', required=self.validation == 'required')\n if visudo_path is None:\n return\n\n check_command = [visudo_path, '-c', '-f', '-']\n rc, stdout, stderr = self.module.run_command(check_command, data=self.content())\n\n if rc != 0:\n raise Exception('Failed to validate sudoers rule:\\n{stdout}'.format(stdout=stdout))\n\n def run(self):\n if self.state == 'absent':\n if self.exists():\n self.delete()\n return True\n else:\n return False\n\n self.validate()\n\n if self.exists() and self.matches():\n return False\n\n self.write()\n return True\n\n\ndef main():\n argument_spec = {\n 'commands': {\n 'type': 'list',\n 'elements': 'str',\n },\n 'group': {},\n 'name': {\n 'required': True,\n },\n 'nopassword': {\n 'type': 'bool',\n 'default': True,\n },\n 'runas': {\n 'type': 'str',\n 'default': None,\n },\n 'sudoers_path': {\n 'type': 'str',\n 'default': '/etc/sudoers.d',\n },\n 'state': {\n 'default': 'present',\n 'choices': ['present', 'absent'],\n },\n 'user': {},\n 'validation': {\n 'default': 'detect',\n 'choices': ['absent', 'detect', 'required']\n },\n }\n\n module = AnsibleModule(\n argument_spec=argument_spec,\n mutually_exclusive=[['user', 'group']],\n supports_check_mode=True,\n required_if=[('state', 'present', ['commands'])],\n )\n\n sudoers = Sudoers(module)\n\n try:\n changed = sudoers.run()\n module.exit_json(changed=changed)\n except Exception as e:\n module.fail_json(msg=to_native(e))\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/system/sudoers.py"}]} | 2,752 | 687 |
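The patch above renders the rule and pipes it to `visudo -c -f -`, failing the task when visudo reports a syntax error. A minimal standalone sketch of that same check outside Ansible, assuming `visudo` is installed on the host and using `shutil.which` in place of `module.get_bin_path` (the helper name and `validation` values mirror the patch but are otherwise illustrative):

```python
import shutil
import subprocess


def validate_sudoers_rule(content: str, validation: str = "detect") -> None:
    # Mirrors the 'absent' / 'detect' / 'required' options added by the patch.
    if validation == "absent":
        return
    visudo = shutil.which("visudo")
    if visudo is None:
        if validation == "required":
            raise RuntimeError("visudo is required for validation but was not found")
        return  # 'detect': silently skip the check when visudo is unavailable
    result = subprocess.run(
        [visudo, "-c", "-f", "-"], input=content, capture_output=True, text=True
    )
    if result.returncode != 0:
        raise ValueError("Failed to validate sudoers rule:\n" + result.stdout)


if __name__ == "__main__":
    # The absolute-path rule from the docs passes; the issue's bare
    # "systemctl restart my-service" rule would raise instead.
    validate_sudoers_rule("alice ALL=NOPASSWD: /bin/systemctl restart my-service\n")
```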
gh_patches_debug_27836 | rasdani/github-patches | git_diff | encode__starlette-151 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wildcard domains and TrustedHostMiddleware
Support for wildcard domains
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/trustedhost.py`
Content:
```
1 from starlette.datastructures import Headers
2 from starlette.responses import PlainTextResponse
3 from starlette.types import ASGIApp, ASGIInstance, Scope
4 import typing
5
6
7 class TrustedHostMiddleware:
8 def __init__(
9 self, app: ASGIApp, allowed_hosts: typing.Sequence[str] = ["*"]
10 ) -> None:
11 self.app = app
12 self.allowed_hosts = allowed_hosts
13 self.allow_any = "*" in allowed_hosts
14
15 def __call__(self, scope: Scope) -> ASGIInstance:
16 if scope["type"] in ("http", "websocket") and not self.allow_any:
17 headers = Headers(scope=scope)
18 host = headers.get("host")
19 if host not in self.allowed_hosts:
20 return PlainTextResponse("Invalid host header", status_code=400)
21
22 return self.app(scope)
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/middleware/trustedhost.py b/starlette/middleware/trustedhost.py
--- a/starlette/middleware/trustedhost.py
+++ b/starlette/middleware/trustedhost.py
@@ -4,10 +4,17 @@
import typing
+ENFORCE_DOMAIN_WILDCARD = "Domain wildcard patterns must be like '*.example.com'."
+
+
class TrustedHostMiddleware:
def __init__(
self, app: ASGIApp, allowed_hosts: typing.Sequence[str] = ["*"]
) -> None:
+ for pattern in allowed_hosts:
+ assert "*" not in pattern[1:], ENFORCE_DOMAIN_WILDCARD
+ if pattern.startswith("*") and pattern != "*":
+ assert pattern.startswith("*."), ENFORCE_DOMAIN_WILDCARD
self.app = app
self.allowed_hosts = allowed_hosts
self.allow_any = "*" in allowed_hosts
@@ -15,8 +22,15 @@
def __call__(self, scope: Scope) -> ASGIInstance:
if scope["type"] in ("http", "websocket") and not self.allow_any:
headers = Headers(scope=scope)
- host = headers.get("host")
- if host not in self.allowed_hosts:
+ host = headers.get("host", "").split(":")[0]
+ for pattern in self.allowed_hosts:
+ if (
+ host == pattern
+ or pattern.startswith("*")
+ and host.endswith(pattern[1:])
+ ):
+ break
+ else:
return PlainTextResponse("Invalid host header", status_code=400)
return self.app(scope)
| {"golden_diff": "diff --git a/starlette/middleware/trustedhost.py b/starlette/middleware/trustedhost.py\n--- a/starlette/middleware/trustedhost.py\n+++ b/starlette/middleware/trustedhost.py\n@@ -4,10 +4,17 @@\n import typing\n \n \n+ENFORCE_DOMAIN_WILDCARD = \"Domain wildcard patterns must be like '*.example.com'.\"\n+\n+\n class TrustedHostMiddleware:\n def __init__(\n self, app: ASGIApp, allowed_hosts: typing.Sequence[str] = [\"*\"]\n ) -> None:\n+ for pattern in allowed_hosts:\n+ assert \"*\" not in pattern[1:], ENFORCE_DOMAIN_WILDCARD\n+ if pattern.startswith(\"*\") and pattern != \"*\":\n+ assert pattern.startswith(\"*.\"), ENFORCE_DOMAIN_WILDCARD\n self.app = app\n self.allowed_hosts = allowed_hosts\n self.allow_any = \"*\" in allowed_hosts\n@@ -15,8 +22,15 @@\n def __call__(self, scope: Scope) -> ASGIInstance:\n if scope[\"type\"] in (\"http\", \"websocket\") and not self.allow_any:\n headers = Headers(scope=scope)\n- host = headers.get(\"host\")\n- if host not in self.allowed_hosts:\n+ host = headers.get(\"host\", \"\").split(\":\")[0]\n+ for pattern in self.allowed_hosts:\n+ if (\n+ host == pattern\n+ or pattern.startswith(\"*\")\n+ and host.endswith(pattern[1:])\n+ ):\n+ break\n+ else:\n return PlainTextResponse(\"Invalid host header\", status_code=400)\n \n return self.app(scope)\n", "issue": "Wildcard domains and TrustedhostMiddleware\nSupport for wildcard domains\n", "before_files": [{"content": "from starlette.datastructures import Headers\nfrom starlette.responses import PlainTextResponse\nfrom starlette.types import ASGIApp, ASGIInstance, Scope\nimport typing\n\n\nclass TrustedHostMiddleware:\n def __init__(\n self, app: ASGIApp, allowed_hosts: typing.Sequence[str] = [\"*\"]\n ) -> None:\n self.app = app\n self.allowed_hosts = allowed_hosts\n self.allow_any = \"*\" in allowed_hosts\n\n def __call__(self, scope: Scope) -> ASGIInstance:\n if scope[\"type\"] in (\"http\", \"websocket\") and not self.allow_any:\n headers = Headers(scope=scope)\n host = headers.get(\"host\")\n if host not in self.allowed_hosts:\n return PlainTextResponse(\"Invalid host header\", status_code=400)\n\n return self.app(scope)\n", "path": "starlette/middleware/trustedhost.py"}], "after_files": [{"content": "from starlette.datastructures import Headers\nfrom starlette.responses import PlainTextResponse\nfrom starlette.types import ASGIApp, ASGIInstance, Scope\nimport typing\n\n\nENFORCE_DOMAIN_WILDCARD = \"Domain wildcard patterns must be like '*.example.com'.\"\n\n\nclass TrustedHostMiddleware:\n def __init__(\n self, app: ASGIApp, allowed_hosts: typing.Sequence[str] = [\"*\"]\n ) -> None:\n for pattern in allowed_hosts:\n assert \"*\" not in pattern[1:], ENFORCE_DOMAIN_WILDCARD\n if pattern.startswith(\"*\") and pattern != \"*\":\n assert pattern.startswith(\"*.\"), ENFORCE_DOMAIN_WILDCARD\n self.app = app\n self.allowed_hosts = allowed_hosts\n self.allow_any = \"*\" in allowed_hosts\n\n def __call__(self, scope: Scope) -> ASGIInstance:\n if scope[\"type\"] in (\"http\", \"websocket\") and not self.allow_any:\n headers = Headers(scope=scope)\n host = headers.get(\"host\", \"\").split(\":\")[0]\n for pattern in self.allowed_hosts:\n if (\n host == pattern\n or pattern.startswith(\"*\")\n and host.endswith(pattern[1:])\n ):\n break\n else:\n return PlainTextResponse(\"Invalid host header\", status_code=400)\n\n return self.app(scope)\n", "path": "starlette/middleware/trustedhost.py"}]} | 492 | 355 |
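The accepted fix strips the port from the `Host` header and accepts either an exact match or a `*.domain` suffix match (with `*` alone matching everything). A self-contained sketch of that matching rule outside Starlette, with an illustrative function name:

```python
def host_is_allowed(host_header: str, allowed_hosts: list[str]) -> bool:
    # "*" allows everything; "*.example.com" allows any subdomain of example.com.
    if "*" in allowed_hosts:
        return True
    host = host_header.split(":")[0]  # drop the port, as the patched middleware does
    return any(
        host == pattern or (pattern.startswith("*") and host.endswith(pattern[1:]))
        for pattern in allowed_hosts
    )


assert host_is_allowed("api.example.com:8000", ["*.example.com"])
assert host_is_allowed("example.com", ["example.com", "*.example.com"])
assert not host_is_allowed("evil.com", ["*.example.com"])
```

Note that, as in the patch, the bare domain has to be listed separately; `*.example.com` on its own does not match `example.com`.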
gh_patches_debug_30801 | rasdani/github-patches | git_diff | numba__numba-1719 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LinkedList jitclass example is broken
```
Internal error:
TypeError: Invalid store of %"deferred.4329823704.value" to {i8*, {i32, {%"deferred.4329823704.data", i8}}*} in <numba.datamodel.models.OptionalModel object at 0x106713278>
File "linkedlist.py", line 53
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/targets/optional.py`
Content:
```
1 from __future__ import print_function, absolute_import, division
2
3 from numba import types, cgutils
4
5 from .imputils import lower_cast
6
7
8 def make_optional(valtype):
9 """
10 Return the Structure representation of a optional value
11 """
12 return cgutils.create_struct_proxy(types.Optional(valtype))
13
14
15 def always_return_true_impl(context, builder, sig, args):
16 return cgutils.true_bit
17
18
19 def always_return_false_impl(context, builder, sig, args):
20 return cgutils.false_bit
21
22
23 @lower_cast(types.Any, types.Optional)
24 def any_to_optional(context, builder, fromty, toty, val):
25 if fromty == types.none:
26 return context.make_optional_none(builder, toty.type)
27 else:
28 val = context.cast(builder, val, fromty, toty.type)
29 return context.make_optional_value(builder, toty.type, val)
30
31 @lower_cast(types.Optional, types.Any)
32 def optional_to_any(context, builder, fromty, toty, val):
33 optty = context.make_optional(fromty)
34 optval = optty(context, builder, value=val)
35 validbit = cgutils.as_bool_bit(builder, optval.valid)
36 with builder.if_then(builder.not_(validbit), likely=False):
37 msg = "expected %s, got None" % (fromty.type,)
38 context.call_conv.return_user_exc(builder, TypeError, (msg,))
39
40 return optval.data
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numba/targets/optional.py b/numba/targets/optional.py
--- a/numba/targets/optional.py
+++ b/numba/targets/optional.py
@@ -20,6 +20,39 @@
return cgutils.false_bit
+@lower_cast(types.Optional, types.Optional)
+def optional_to_optional(context, builder, fromty, toty, val):
+ """
+ The handling of optional->optional cast must be special cased for
+ correct propagation of None value. Given type T and U. casting of
+ T? to U? (? denotes optional) should always succeed. If the from-value
+ is None, the None value the casted value (U?) should be None; otherwise,
+ the from-value is casted to U. This is different from casting T? to U,
+ which requires the from-value must not be None.
+ """
+ optty = context.make_optional(fromty)
+ optval = optty(context, builder, value=val)
+ validbit = cgutils.as_bool_bit(builder, optval.valid)
+ # Create uninitialized optional value
+ outoptty = context.make_optional(toty)
+ outoptval = outoptty(context, builder)
+
+ with builder.if_else(validbit) as (is_valid, is_not_valid):
+ with is_valid:
+ # Cast internal value
+ outoptval.valid = cgutils.true_bit
+ outoptval.data = context.cast(builder, optval.data,
+ fromty.type, toty.type)
+
+ with is_not_valid:
+ # Store None to result
+ outoptval.valid = cgutils.false_bit
+ outoptval.data = cgutils.get_null_value(
+ outoptval.data.type)
+
+ return outoptval._getvalue()
+
+
@lower_cast(types.Any, types.Optional)
def any_to_optional(context, builder, fromty, toty, val):
if fromty == types.none:
@@ -28,6 +61,7 @@
val = context.cast(builder, val, fromty, toty.type)
return context.make_optional_value(builder, toty.type, val)
+
@lower_cast(types.Optional, types.Any)
def optional_to_any(context, builder, fromty, toty, val):
optty = context.make_optional(fromty)
| {"golden_diff": "diff --git a/numba/targets/optional.py b/numba/targets/optional.py\n--- a/numba/targets/optional.py\n+++ b/numba/targets/optional.py\n@@ -20,6 +20,39 @@\n return cgutils.false_bit\n \n \n+@lower_cast(types.Optional, types.Optional)\n+def optional_to_optional(context, builder, fromty, toty, val):\n+ \"\"\"\n+ The handling of optional->optional cast must be special cased for\n+ correct propagation of None value. Given type T and U. casting of\n+ T? to U? (? denotes optional) should always succeed. If the from-value\n+ is None, the None value the casted value (U?) should be None; otherwise,\n+ the from-value is casted to U. This is different from casting T? to U,\n+ which requires the from-value must not be None.\n+ \"\"\"\n+ optty = context.make_optional(fromty)\n+ optval = optty(context, builder, value=val)\n+ validbit = cgutils.as_bool_bit(builder, optval.valid)\n+ # Create uninitialized optional value\n+ outoptty = context.make_optional(toty)\n+ outoptval = outoptty(context, builder)\n+\n+ with builder.if_else(validbit) as (is_valid, is_not_valid):\n+ with is_valid:\n+ # Cast internal value\n+ outoptval.valid = cgutils.true_bit\n+ outoptval.data = context.cast(builder, optval.data,\n+ fromty.type, toty.type)\n+\n+ with is_not_valid:\n+ # Store None to result\n+ outoptval.valid = cgutils.false_bit\n+ outoptval.data = cgutils.get_null_value(\n+ outoptval.data.type)\n+\n+ return outoptval._getvalue()\n+\n+\n @lower_cast(types.Any, types.Optional)\n def any_to_optional(context, builder, fromty, toty, val):\n if fromty == types.none:\n@@ -28,6 +61,7 @@\n val = context.cast(builder, val, fromty, toty.type)\n return context.make_optional_value(builder, toty.type, val)\n \n+\n @lower_cast(types.Optional, types.Any)\n def optional_to_any(context, builder, fromty, toty, val):\n optty = context.make_optional(fromty)\n", "issue": "LinkedList jitclass example is broken\n```\nInternal error:\nTypeError: Invalid store of %\"deferred.4329823704.value\" to {i8*, {i32, {%\"deferred.4329823704.data\", i8}}*} in <numba.datamodel.models.OptionalModel object at 0x106713278>\nFile \"linkedlist.py\", line 53\n```\n\n", "before_files": [{"content": "from __future__ import print_function, absolute_import, division\n\nfrom numba import types, cgutils\n\nfrom .imputils import lower_cast\n\n\ndef make_optional(valtype):\n \"\"\"\n Return the Structure representation of a optional value\n \"\"\"\n return cgutils.create_struct_proxy(types.Optional(valtype))\n\n\ndef always_return_true_impl(context, builder, sig, args):\n return cgutils.true_bit\n\n\ndef always_return_false_impl(context, builder, sig, args):\n return cgutils.false_bit\n\n\n@lower_cast(types.Any, types.Optional)\ndef any_to_optional(context, builder, fromty, toty, val):\n if fromty == types.none:\n return context.make_optional_none(builder, toty.type)\n else:\n val = context.cast(builder, val, fromty, toty.type)\n return context.make_optional_value(builder, toty.type, val)\n\n@lower_cast(types.Optional, types.Any)\ndef optional_to_any(context, builder, fromty, toty, val):\n optty = context.make_optional(fromty)\n optval = optty(context, builder, value=val)\n validbit = cgutils.as_bool_bit(builder, optval.valid)\n with builder.if_then(builder.not_(validbit), likely=False):\n msg = \"expected %s, got None\" % (fromty.type,)\n context.call_conv.return_user_exc(builder, TypeError, (msg,))\n\n return optval.data\n", "path": "numba/targets/optional.py"}], "after_files": [{"content": "from __future__ import print_function, absolute_import, 
division\n\nfrom numba import types, cgutils\n\nfrom .imputils import lower_cast\n\n\ndef make_optional(valtype):\n \"\"\"\n Return the Structure representation of a optional value\n \"\"\"\n return cgutils.create_struct_proxy(types.Optional(valtype))\n\n\ndef always_return_true_impl(context, builder, sig, args):\n return cgutils.true_bit\n\n\ndef always_return_false_impl(context, builder, sig, args):\n return cgutils.false_bit\n\n\n@lower_cast(types.Optional, types.Optional)\ndef optional_to_optional(context, builder, fromty, toty, val):\n \"\"\"\n The handling of optional->optional cast must be special cased for\n correct propagation of None value. Given type T and U. casting of\n T? to U? (? denotes optional) should always succeed. If the from-value\n is None, the None value the casted value (U?) should be None; otherwise,\n the from-value is casted to U. This is different from casting T? to U,\n which requires the from-value must not be None.\n \"\"\"\n optty = context.make_optional(fromty)\n optval = optty(context, builder, value=val)\n validbit = cgutils.as_bool_bit(builder, optval.valid)\n # Create uninitialized optional value\n outoptty = context.make_optional(toty)\n outoptval = outoptty(context, builder)\n\n with builder.if_else(validbit) as (is_valid, is_not_valid):\n with is_valid:\n # Cast internal value\n outoptval.valid = cgutils.true_bit\n outoptval.data = context.cast(builder, optval.data,\n fromty.type, toty.type)\n\n with is_not_valid:\n # Store None to result\n outoptval.valid = cgutils.false_bit\n outoptval.data = cgutils.get_null_value(\n outoptval.data.type)\n\n return outoptval._getvalue()\n\n\n@lower_cast(types.Any, types.Optional)\ndef any_to_optional(context, builder, fromty, toty, val):\n if fromty == types.none:\n return context.make_optional_none(builder, toty.type)\n else:\n val = context.cast(builder, val, fromty, toty.type)\n return context.make_optional_value(builder, toty.type, val)\n\n\n@lower_cast(types.Optional, types.Any)\ndef optional_to_any(context, builder, fromty, toty, val):\n optty = context.make_optional(fromty)\n optval = optty(context, builder, value=val)\n validbit = cgutils.as_bool_bit(builder, optval.valid)\n with builder.if_then(builder.not_(validbit), likely=False):\n msg = \"expected %s, got None\" % (fromty.type,)\n context.call_conv.return_user_exc(builder, TypeError, (msg,))\n\n return optval.data\n", "path": "numba/targets/optional.py"}]} | 743 | 525 |
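The fix adds a dedicated Optional-to-Optional lowering so a None value propagates instead of being forced through the strict Optional-to-Any path that raises on None. Its intended semantics, sketched as ordinary Python rather than LLVM lowering code (these helpers are illustrative, not numba API):

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")


def cast_optional_to_optional(value: Optional[T], cast: Callable[[T], U]) -> Optional[U]:
    # Optional -> Optional: a missing value stays missing, a present value is cast.
    return None if value is None else cast(value)


def cast_optional_to_any(value: Optional[T], cast: Callable[[T], U]) -> U:
    # Optional -> non-Optional: unwrapping None is an error, as in optional_to_any.
    if value is None:
        raise TypeError("expected a value, got None")
    return cast(value)


assert cast_optional_to_optional(None, float) is None
assert cast_optional_to_optional(3, float) == 3.0
```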
gh_patches_debug_3087 | rasdani/github-patches | git_diff | svthalia__concrexit-3485 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Increase profile retention period
<!-- Please add the appropriate label for what change should be made -->
### What?
Currently, Profiles are data-minimised 1 month after the member's last membership ends.
We should increase that period to 3 months. I have discussed this with @JeeVee11.
### Why?
It happens quite often that people forget to renew their membership, and decide to do it later than the first of October. In those cases, currently, their profile will already have been wiped. To make it easier for those people to still renew, we should extend the retention period a bit longer.
### How?
Change a constant somewhere in `members/services.py`, and update the privacy policy.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/members/services.py`
Content:
```
1 """Services defined in the members package."""
2 from collections.abc import Callable
3 from datetime import date
4 from typing import Any
5
6 from django.conf import settings
7 from django.db.models import Count, Q
8 from django.utils import timezone
9
10 from members import emails
11 from members.models import Member, Membership
12 from utils.snippets import datetime_to_lectureyear
13
14
15 def _member_group_memberships(
16 member: Member, condition: Callable[[Membership], bool]
17 ) -> dict[str, dict[str, Any]]:
18 """Determine the group membership of a user based on a condition.
19
20 :return: Object with group memberships
21 """
22 memberships = member.membergroupmembership_set.all()
23 data = {}
24
25 for membership in memberships:
26 if not condition(membership):
27 continue
28 period = {
29 "since": membership.since,
30 "until": membership.until,
31 "chair": membership.chair,
32 }
33
34 if hasattr(membership.group, "board"):
35 period["role"] = membership.role
36
37 if membership.until is None and hasattr(membership.group, "board"):
38 period["until"] = membership.group.board.until
39
40 name = membership.group.name
41 if data.get(name):
42 data[name]["periods"].append(period)
43 if data[name]["earliest"] > period["since"]:
44 data[name]["earliest"] = period["since"]
45 if period["until"] is None or (
46 data[name]["latest"] is not None
47 and data[name]["latest"] < period["until"]
48 ):
49 data[name]["latest"] = period["until"]
50 data[name]["periods"].sort(key=lambda x: x["since"])
51 else:
52 data[name] = {
53 "pk": membership.group.pk,
54 "active": membership.group.active,
55 "name": name,
56 "periods": [period],
57 "url": settings.BASE_URL + membership.group.get_absolute_url(),
58 "earliest": period["since"],
59 "latest": period["until"],
60 }
61 return data
62
63
64 def member_achievements(member) -> list:
65 """Derive a list of achievements of a member.
66
67 Committee and board memberships + mentorships
68 """
69 achievements = _member_group_memberships(
70 member,
71 lambda membership: (
72 hasattr(membership.group, "board") or hasattr(membership.group, "committee")
73 ),
74 )
75
76 mentor_years = member.mentorship_set.all()
77 for mentor_year in mentor_years:
78 name = f"Mentor in {mentor_year.year}"
79 # Ensure mentorships appear last but are sorted
80 earliest = date.today()
81 earliest = earliest.replace(year=earliest.year + mentor_year.year)
82 # Making sure it does not crash in leap years
83 if earliest.month == 2 and earliest.day == 29:
84 earliest = earliest.replace(day=28)
85 if not achievements.get(name):
86 achievements[name] = {
87 "name": name,
88 "earliest": earliest,
89 }
90 return sorted(achievements.values(), key=lambda x: x["earliest"])
91
92
93 def member_societies(member) -> list:
94 """Derive a list of societies a member was part of."""
95 societies = _member_group_memberships(
96 member, lambda membership: (hasattr(membership.group, "society"))
97 )
98 return sorted(societies.values(), key=lambda x: x["earliest"])
99
100
101 def gen_stats_member_type() -> dict[str, list]:
102 """Generate statistics about membership types."""
103 data = {
104 "labels": [],
105 "datasets": [
106 {"data": []},
107 ],
108 }
109
110 for key, display in Membership.MEMBERSHIP_TYPES:
111 data["labels"].append(str(display))
112 data["datasets"][0]["data"].append(
113 Membership.objects.filter(since__lte=date.today())
114 .filter(Q(until__isnull=True) | Q(until__gt=date.today()))
115 .filter(type=key)
116 .count()
117 )
118
119 return data
120
121
122 def gen_stats_year() -> dict[str, list]:
123 """Generate statistics on how many members (and other membership types) there were in each cohort."""
124 years = range(2015, datetime_to_lectureyear(date.today()))
125
126 data = {
127 "labels": list(years),
128 "datasets": [
129 {"label": str(display), "data": []}
130 for _, display in Membership.MEMBERSHIP_TYPES
131 ],
132 }
133
134 for index, (key, _) in enumerate(Membership.MEMBERSHIP_TYPES):
135 for year in years:
136 data["datasets"][index]["data"].append(
137 Membership.objects.filter(since__lte=date(year=year, month=9, day=1))
138 .filter(
139 Q(until__isnull=True) | Q(until__gt=date(year=year, month=9, day=1))
140 )
141 .filter(type=key)
142 .count()
143 )
144
145 return data
146
147
148 def gen_stats_active_members() -> dict[str, list]:
149 """Generate statistics about active members."""
150 return {
151 "labels": ["Active Members", "Non-active Members"],
152 "datasets": [
153 {
154 "data": [
155 Member.active_members.count(),
156 Member.current_members.count() - Member.active_members.count(),
157 ]
158 }
159 ],
160 }
161
162
163 def verify_email_change(change_request) -> None:
164 """Mark the email change request as verified.
165
166 :param change_request: the email change request
167 """
168 change_request.verified = True
169 change_request.save()
170
171 process_email_change(change_request)
172
173
174 def confirm_email_change(change_request) -> None:
175 """Mark the email change request as verified.
176
177 :param change_request: the email change request
178 """
179 change_request.confirmed = True
180 change_request.save()
181
182 process_email_change(change_request)
183
184
185 def process_email_change(change_request) -> None:
186 """Change the user's email address if the request was completed and send the completion email.
187
188 :param change_request: the email change request
189 """
190 if not change_request.completed:
191 return
192
193 member = change_request.member
194 member.email = change_request.email
195 member.save()
196
197 emails.send_email_change_completion_message(change_request)
198
199
200 def execute_data_minimisation(dry_run=False, members=None) -> list[Member]:
201 """Clean the profiles of members/users of whom the last membership ended at least 31 days ago.
202
203 :param dry_run: does not really remove data if True
204 :param members: queryset of members to process, optional
205 :return: list of processed members
206 """
207 if not members:
208 members = Member.objects
209 members = (
210 members.annotate(membership_count=Count("membership"))
211 .exclude(
212 (
213 Q(membership__until__isnull=True)
214 | Q(membership__until__gt=timezone.now().date())
215 )
216 & Q(membership_count__gt=0)
217 )
218 .distinct()
219 .prefetch_related("membership_set", "profile")
220 )
221 deletion_period = timezone.now().date() - timezone.timedelta(days=31)
222 processed_members = []
223 for member in members:
224 if (
225 member.latest_membership is None
226 or member.latest_membership.until <= deletion_period
227 ):
228 processed_members.append(member)
229 profile = member.profile
230 profile.student_number = None
231 profile.phone_number = None
232 profile.address_street = None
233 profile.address_street2 = None
234 profile.address_postal_code = None
235 profile.address_city = None
236 profile.address_country = None
237 profile.birthday = None
238 profile.emergency_contact_phone_number = None
239 profile.emergency_contact = None
240 profile.is_minimized = True
241 if not dry_run:
242 profile.save()
243
244 return processed_members
245
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/members/services.py b/website/members/services.py
--- a/website/members/services.py
+++ b/website/members/services.py
@@ -218,7 +218,7 @@
.distinct()
.prefetch_related("membership_set", "profile")
)
- deletion_period = timezone.now().date() - timezone.timedelta(days=31)
+ deletion_period = timezone.now().date() - timezone.timedelta(days=90)
processed_members = []
for member in members:
if (
| {"golden_diff": "diff --git a/website/members/services.py b/website/members/services.py\n--- a/website/members/services.py\n+++ b/website/members/services.py\n@@ -218,7 +218,7 @@\n .distinct()\n .prefetch_related(\"membership_set\", \"profile\")\n )\n- deletion_period = timezone.now().date() - timezone.timedelta(days=31)\n+ deletion_period = timezone.now().date() - timezone.timedelta(days=90)\n processed_members = []\n for member in members:\n if (\n", "issue": "Increase profile retention period\n<!-- Please add the appropriate label for what change should be made -->\r\n\r\n### What?\r\nCurrently, Profiles are dataminimised 1 month after the member's last membership ends.\r\nWe should increase that period to 3 months. I have discussed this with @JeeVee11.\r\n\r\n### Why?\r\nIt happens quite often that people forget to renew their membership, and decide to do it later than the first of october. In those cases, currently, their profile will already have been wiped. To make it easier for those people to still renew, we should extend the retention period a bit longer.\r\n\r\n### How?\r\nChange a constant somewhere in `members/services.py`, and update the privacy policy. \r\n\n", "before_files": [{"content": "\"\"\"Services defined in the members package.\"\"\"\nfrom collections.abc import Callable\nfrom datetime import date\nfrom typing import Any\n\nfrom django.conf import settings\nfrom django.db.models import Count, Q\nfrom django.utils import timezone\n\nfrom members import emails\nfrom members.models import Member, Membership\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef _member_group_memberships(\n member: Member, condition: Callable[[Membership], bool]\n) -> dict[str, dict[str, Any]]:\n \"\"\"Determine the group membership of a user based on a condition.\n\n :return: Object with group memberships\n \"\"\"\n memberships = member.membergroupmembership_set.all()\n data = {}\n\n for membership in memberships:\n if not condition(membership):\n continue\n period = {\n \"since\": membership.since,\n \"until\": membership.until,\n \"chair\": membership.chair,\n }\n\n if hasattr(membership.group, \"board\"):\n period[\"role\"] = membership.role\n\n if membership.until is None and hasattr(membership.group, \"board\"):\n period[\"until\"] = membership.group.board.until\n\n name = membership.group.name\n if data.get(name):\n data[name][\"periods\"].append(period)\n if data[name][\"earliest\"] > period[\"since\"]:\n data[name][\"earliest\"] = period[\"since\"]\n if period[\"until\"] is None or (\n data[name][\"latest\"] is not None\n and data[name][\"latest\"] < period[\"until\"]\n ):\n data[name][\"latest\"] = period[\"until\"]\n data[name][\"periods\"].sort(key=lambda x: x[\"since\"])\n else:\n data[name] = {\n \"pk\": membership.group.pk,\n \"active\": membership.group.active,\n \"name\": name,\n \"periods\": [period],\n \"url\": settings.BASE_URL + membership.group.get_absolute_url(),\n \"earliest\": period[\"since\"],\n \"latest\": period[\"until\"],\n }\n return data\n\n\ndef member_achievements(member) -> list:\n \"\"\"Derive a list of achievements of a member.\n\n Committee and board memberships + mentorships\n \"\"\"\n achievements = _member_group_memberships(\n member,\n lambda membership: (\n hasattr(membership.group, \"board\") or hasattr(membership.group, \"committee\")\n ),\n )\n\n mentor_years = member.mentorship_set.all()\n for mentor_year in mentor_years:\n name = f\"Mentor in {mentor_year.year}\"\n # Ensure mentorships appear last but are sorted\n earliest = 
date.today()\n earliest = earliest.replace(year=earliest.year + mentor_year.year)\n # Making sure it does not crash in leap years\n if earliest.month == 2 and earliest.day == 29:\n earliest = earliest.replace(day=28)\n if not achievements.get(name):\n achievements[name] = {\n \"name\": name,\n \"earliest\": earliest,\n }\n return sorted(achievements.values(), key=lambda x: x[\"earliest\"])\n\n\ndef member_societies(member) -> list:\n \"\"\"Derive a list of societies a member was part of.\"\"\"\n societies = _member_group_memberships(\n member, lambda membership: (hasattr(membership.group, \"society\"))\n )\n return sorted(societies.values(), key=lambda x: x[\"earliest\"])\n\n\ndef gen_stats_member_type() -> dict[str, list]:\n \"\"\"Generate statistics about membership types.\"\"\"\n data = {\n \"labels\": [],\n \"datasets\": [\n {\"data\": []},\n ],\n }\n\n for key, display in Membership.MEMBERSHIP_TYPES:\n data[\"labels\"].append(str(display))\n data[\"datasets\"][0][\"data\"].append(\n Membership.objects.filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n\n return data\n\n\ndef gen_stats_year() -> dict[str, list]:\n \"\"\"Generate statistics on how many members (and other membership types) there were in each cohort.\"\"\"\n years = range(2015, datetime_to_lectureyear(date.today()))\n\n data = {\n \"labels\": list(years),\n \"datasets\": [\n {\"label\": str(display), \"data\": []}\n for _, display in Membership.MEMBERSHIP_TYPES\n ],\n }\n\n for index, (key, _) in enumerate(Membership.MEMBERSHIP_TYPES):\n for year in years:\n data[\"datasets\"][index][\"data\"].append(\n Membership.objects.filter(since__lte=date(year=year, month=9, day=1))\n .filter(\n Q(until__isnull=True) | Q(until__gt=date(year=year, month=9, day=1))\n )\n .filter(type=key)\n .count()\n )\n\n return data\n\n\ndef gen_stats_active_members() -> dict[str, list]:\n \"\"\"Generate statistics about active members.\"\"\"\n return {\n \"labels\": [\"Active Members\", \"Non-active Members\"],\n \"datasets\": [\n {\n \"data\": [\n Member.active_members.count(),\n Member.current_members.count() - Member.active_members.count(),\n ]\n }\n ],\n }\n\n\ndef verify_email_change(change_request) -> None:\n \"\"\"Mark the email change request as verified.\n\n :param change_request: the email change request\n \"\"\"\n change_request.verified = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef confirm_email_change(change_request) -> None:\n \"\"\"Mark the email change request as verified.\n\n :param change_request: the email change request\n \"\"\"\n change_request.confirmed = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef process_email_change(change_request) -> None:\n \"\"\"Change the user's email address if the request was completed and send the completion email.\n\n :param change_request: the email change request\n \"\"\"\n if not change_request.completed:\n return\n\n member = change_request.member\n member.email = change_request.email\n member.save()\n\n emails.send_email_change_completion_message(change_request)\n\n\ndef execute_data_minimisation(dry_run=False, members=None) -> list[Member]:\n \"\"\"Clean the profiles of members/users of whom the last membership ended at least 31 days ago.\n\n :param dry_run: does not really remove data if True\n :param members: queryset of members to process, optional\n :return: list of processed members\n \"\"\"\n if not members:\n members = Member.objects\n 
members = (\n members.annotate(membership_count=Count(\"membership\"))\n .exclude(\n (\n Q(membership__until__isnull=True)\n | Q(membership__until__gt=timezone.now().date())\n )\n & Q(membership_count__gt=0)\n )\n .distinct()\n .prefetch_related(\"membership_set\", \"profile\")\n )\n deletion_period = timezone.now().date() - timezone.timedelta(days=31)\n processed_members = []\n for member in members:\n if (\n member.latest_membership is None\n or member.latest_membership.until <= deletion_period\n ):\n processed_members.append(member)\n profile = member.profile\n profile.student_number = None\n profile.phone_number = None\n profile.address_street = None\n profile.address_street2 = None\n profile.address_postal_code = None\n profile.address_city = None\n profile.address_country = None\n profile.birthday = None\n profile.emergency_contact_phone_number = None\n profile.emergency_contact = None\n profile.is_minimized = True\n if not dry_run:\n profile.save()\n\n return processed_members\n", "path": "website/members/services.py"}], "after_files": [{"content": "\"\"\"Services defined in the members package.\"\"\"\nfrom collections.abc import Callable\nfrom datetime import date\nfrom typing import Any\n\nfrom django.conf import settings\nfrom django.db.models import Count, Q\nfrom django.utils import timezone\n\nfrom members import emails\nfrom members.models import Member, Membership\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef _member_group_memberships(\n member: Member, condition: Callable[[Membership], bool]\n) -> dict[str, dict[str, Any]]:\n \"\"\"Determine the group membership of a user based on a condition.\n\n :return: Object with group memberships\n \"\"\"\n memberships = member.membergroupmembership_set.all()\n data = {}\n\n for membership in memberships:\n if not condition(membership):\n continue\n period = {\n \"since\": membership.since,\n \"until\": membership.until,\n \"chair\": membership.chair,\n }\n\n if hasattr(membership.group, \"board\"):\n period[\"role\"] = membership.role\n\n if membership.until is None and hasattr(membership.group, \"board\"):\n period[\"until\"] = membership.group.board.until\n\n name = membership.group.name\n if data.get(name):\n data[name][\"periods\"].append(period)\n if data[name][\"earliest\"] > period[\"since\"]:\n data[name][\"earliest\"] = period[\"since\"]\n if period[\"until\"] is None or (\n data[name][\"latest\"] is not None\n and data[name][\"latest\"] < period[\"until\"]\n ):\n data[name][\"latest\"] = period[\"until\"]\n data[name][\"periods\"].sort(key=lambda x: x[\"since\"])\n else:\n data[name] = {\n \"pk\": membership.group.pk,\n \"active\": membership.group.active,\n \"name\": name,\n \"periods\": [period],\n \"url\": settings.BASE_URL + membership.group.get_absolute_url(),\n \"earliest\": period[\"since\"],\n \"latest\": period[\"until\"],\n }\n return data\n\n\ndef member_achievements(member) -> list:\n \"\"\"Derive a list of achievements of a member.\n\n Committee and board memberships + mentorships\n \"\"\"\n achievements = _member_group_memberships(\n member,\n lambda membership: (\n hasattr(membership.group, \"board\") or hasattr(membership.group, \"committee\")\n ),\n )\n\n mentor_years = member.mentorship_set.all()\n for mentor_year in mentor_years:\n name = f\"Mentor in {mentor_year.year}\"\n # Ensure mentorships appear last but are sorted\n earliest = date.today()\n earliest = earliest.replace(year=earliest.year + mentor_year.year)\n # Making sure it does not crash in leap years\n if earliest.month == 2 and 
earliest.day == 29:\n earliest = earliest.replace(day=28)\n if not achievements.get(name):\n achievements[name] = {\n \"name\": name,\n \"earliest\": earliest,\n }\n return sorted(achievements.values(), key=lambda x: x[\"earliest\"])\n\n\ndef member_societies(member) -> list:\n \"\"\"Derive a list of societies a member was part of.\"\"\"\n societies = _member_group_memberships(\n member, lambda membership: (hasattr(membership.group, \"society\"))\n )\n return sorted(societies.values(), key=lambda x: x[\"earliest\"])\n\n\ndef gen_stats_member_type() -> dict[str, list]:\n \"\"\"Generate statistics about membership types.\"\"\"\n data = {\n \"labels\": [],\n \"datasets\": [\n {\"data\": []},\n ],\n }\n\n for key, display in Membership.MEMBERSHIP_TYPES:\n data[\"labels\"].append(str(display))\n data[\"datasets\"][0][\"data\"].append(\n Membership.objects.filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n\n return data\n\n\ndef gen_stats_year() -> dict[str, list]:\n \"\"\"Generate statistics on how many members (and other membership types) there were in each cohort.\"\"\"\n years = range(2015, datetime_to_lectureyear(date.today()))\n\n data = {\n \"labels\": list(years),\n \"datasets\": [\n {\"label\": str(display), \"data\": []}\n for _, display in Membership.MEMBERSHIP_TYPES\n ],\n }\n\n for index, (key, _) in enumerate(Membership.MEMBERSHIP_TYPES):\n for year in years:\n data[\"datasets\"][index][\"data\"].append(\n Membership.objects.filter(since__lte=date(year=year, month=9, day=1))\n .filter(\n Q(until__isnull=True) | Q(until__gt=date(year=year, month=9, day=1))\n )\n .filter(type=key)\n .count()\n )\n\n return data\n\n\ndef gen_stats_active_members() -> dict[str, list]:\n \"\"\"Generate statistics about active members.\"\"\"\n return {\n \"labels\": [\"Active Members\", \"Non-active Members\"],\n \"datasets\": [\n {\n \"data\": [\n Member.active_members.count(),\n Member.current_members.count() - Member.active_members.count(),\n ]\n }\n ],\n }\n\n\ndef verify_email_change(change_request) -> None:\n \"\"\"Mark the email change request as verified.\n\n :param change_request: the email change request\n \"\"\"\n change_request.verified = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef confirm_email_change(change_request) -> None:\n \"\"\"Mark the email change request as verified.\n\n :param change_request: the email change request\n \"\"\"\n change_request.confirmed = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef process_email_change(change_request) -> None:\n \"\"\"Change the user's email address if the request was completed and send the completion email.\n\n :param change_request: the email change request\n \"\"\"\n if not change_request.completed:\n return\n\n member = change_request.member\n member.email = change_request.email\n member.save()\n\n emails.send_email_change_completion_message(change_request)\n\n\ndef execute_data_minimisation(dry_run=False, members=None) -> list[Member]:\n \"\"\"Clean the profiles of members/users of whom the last membership ended at least 31 days ago.\n\n :param dry_run: does not really remove data if True\n :param members: queryset of members to process, optional\n :return: list of processed members\n \"\"\"\n if not members:\n members = Member.objects\n members = (\n members.annotate(membership_count=Count(\"membership\"))\n .exclude(\n (\n Q(membership__until__isnull=True)\n | 
Q(membership__until__gt=timezone.now().date())\n )\n & Q(membership_count__gt=0)\n )\n .distinct()\n .prefetch_related(\"membership_set\", \"profile\")\n )\n deletion_period = timezone.now().date() - timezone.timedelta(days=90)\n processed_members = []\n for member in members:\n if (\n member.latest_membership is None\n or member.latest_membership.until <= deletion_period\n ):\n processed_members.append(member)\n profile = member.profile\n profile.student_number = None\n profile.phone_number = None\n profile.address_street = None\n profile.address_street2 = None\n profile.address_postal_code = None\n profile.address_city = None\n profile.address_country = None\n profile.birthday = None\n profile.emergency_contact_phone_number = None\n profile.emergency_contact = None\n profile.is_minimized = True\n if not dry_run:\n profile.save()\n\n return processed_members\n", "path": "website/members/services.py"}]} | 2,707 | 118 |
gh_patches_debug_28871 | rasdani/github-patches | git_diff | dask__distributed-8528 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Proxy to worker dashboard not working
```
dask scheduler
dask worker <scheduler addr> --dashboard
```
Then navigate to `http://127.0.0.1:8787/info/main/workers.html` and click on dashboard. This should proxy to the worker dashboard but instead it triggers the below exception (with a 500 error code, of course)
```python-traceback
/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py:607: UserWarning: The Tornado web application does not have an 'identity_provider' defined in its settings. In future releases of jupyter_server, this will be a required key for all subclasses of `JupyterHandler`. For an example, see the jupyter_server source code for how to add an identity provider to the tornado settings: https://github.com/jupyter-server/jupyter_server/blob/v2.0.0/jupyter_server/serverapp.py#L242
if type(self.identity_provider) is IdentityProvider and mod_obj.__name__ != __name__:
2023-11-10 14:28:01,113 - tornado.application - ERROR - Uncaught exception GET /proxy/50121/192.168.2.53/status (127.0.0.1)
HTTPServerRequest(protocol='http', host='127.0.0.1:8787', method='GET', uri='/proxy/50121/192.168.2.53/status', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py", line 1765, in _execute
result = await result # type: ignore
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py", line 622, in prepare
_user = await _user
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/auth/identity.py", line 240, in _get_user
_cookie_user = self.get_user_cookie(handler)
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/auth/identity.py", line 398, in get_user_cookie
_user_cookie = handler.get_secure_cookie(
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py", line 836, in get_signed_cookie
self.require_setting("cookie_secret", "secure cookies")
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py", line 1669, in require_setting
raise Exception(
Exception: You must define the 'cookie_secret' setting in your application to use secure cookies
2023-11-10 14:28:01,121 - tornado.application - ERROR - Uncaught exception in write_error
Traceback (most recent call last):
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py", line 1765, in _execute
result = await result # type: ignore
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py", line 622, in prepare
_user = await _user
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/auth/identity.py", line 240, in _get_user
_cookie_user = self.get_user_cookie(handler)
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/auth/identity.py", line 398, in get_user_cookie
_user_cookie = handler.get_secure_cookie(
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py", line 836, in get_signed_cookie
self.require_setting("cookie_secret", "secure cookies")
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py", line 1669, in require_setting
raise Exception(
Exception: You must define the 'cookie_secret' setting in your application to use secure cookies
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py", line 1294, in send_error
self.write_error(status_code, **kwargs)
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py", line 719, in write_error
html = self.render_template("%s.html" % status_code, **ns)
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py", line 645, in render_template
ns.update(self.template_namespace)
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py", line 655, in template_namespace
logged_in=self.logged_in,
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py", line 197, in logged_in
user = self.current_user
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py", line 1420, in current_user
self._current_user = self.get_current_user()
File "/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py", line 176, in get_current_user
raise RuntimeError(msg)
RuntimeError: Calling `GlobalProxyHandler.get_current_user()` directly is deprecated in jupyter-server 2.0. Use `self.current_user` instead (works in all versions).
```
```
jupyter_server 2.10.0 pyhd8ed1ab_0 conda-forge
jupyter_server_terminals 0.4.4 pyhd8ed1ab_1 conda-forge
jupyter-server-proxy 4.1.0 pyhd8ed1ab_0 conda-forge
tornado 6.3.3 py310h2aa6e3c_1 conda-forge
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/http/proxy.py`
Content:
```
1 from __future__ import annotations
2
3 import logging
4
5 from tornado import web
6
7 logger = logging.getLogger(__name__)
8
9 try:
10 from jupyter_server_proxy.handlers import ProxyHandler
11
12 class GlobalProxyHandler(ProxyHandler):
13 """
14 A tornado request handler that proxies HTTP and websockets
15 from a port to any valid endpoint'.
16 """
17
18 def initialize(self, dask_server=None, extra=None):
19 self.scheduler = dask_server
20 self.extra = extra or {}
21
22 async def http_get(self, port, host, proxied_path):
23 # route here first
24 # incoming URI /proxy/{port}/{host}/{proxied_path}
25
26 self.host = host
27
28 # rewrite uri for jupyter-server-proxy handling
29 uri = f"/proxy/{port}/{proxied_path}"
30 self.request.uri = uri
31
32 # slash is removed during regex in handler
33 proxied_path = "/%s" % proxied_path
34
35 worker = f"{self.host}:{port}"
36 if not check_worker_dashboard_exits(self.scheduler, worker):
37 msg = "Worker <%s> does not exist" % worker
38 self.set_status(400)
39 self.finish(msg)
40 return
41 return await self.proxy(port, proxied_path)
42
43 async def open(self, port, host, proxied_path):
44 # finally, proxy to other address/port
45 return await self.proxy_open(host, port, proxied_path)
46
47 def post(self, port, proxied_path):
48 return self.proxy(port, proxied_path)
49
50 def put(self, port, proxied_path):
51 return self.proxy(port, proxied_path)
52
53 def delete(self, port, proxied_path):
54 return self.proxy(port, proxied_path)
55
56 def head(self, port, proxied_path):
57 return self.proxy(port, proxied_path)
58
59 def patch(self, port, proxied_path):
60 return self.proxy(port, proxied_path)
61
62 def options(self, port, proxied_path):
63 return self.proxy(port, proxied_path)
64
65 def proxy(self, port, proxied_path):
66 # router here second
67 # returns ProxyHandler coroutine
68 return super().proxy(self.host, port, proxied_path)
69
70 except ImportError:
71 logger.info(
72 "To route to workers diagnostics web server "
73 "please install jupyter-server-proxy: "
74 "python -m pip install jupyter-server-proxy"
75 )
76
77 class GlobalProxyHandler(web.RequestHandler): # type: ignore
78 """Minimal Proxy handler when jupyter-server-proxy is not installed"""
79
80 def initialize(self, dask_server=None, extra=None):
81 self.server = dask_server
82 self.extra = extra or {}
83
84 def get(self, port, host, proxied_path):
85 worker_url = f"{host}:{port}/{proxied_path}"
86 msg = """
87 <p> Try navigating to <a href=http://{}>{}</a> for your worker dashboard </p>
88
89 <p>
90 Dask tried to proxy you to that page through your
91 Scheduler's dashboard connection, but you don't have
92 jupyter-server-proxy installed. You may want to install it
93 with either conda or pip, and then restart your scheduler.
94 </p>
95
96 <p><pre> conda install jupyter-server-proxy -c conda-forge </pre></p>
97 <p><pre> python -m pip install jupyter-server-proxy</pre></p>
98
99 <p>
100 The link above should work though if your workers are on a
101 sufficiently open network. This is common on single machines,
102 but less common in production clusters. Your IT administrators
103 will know more
104 </p>
105 """.format(
106 worker_url,
107 worker_url,
108 )
109 self.write(msg)
110
111
112 def check_worker_dashboard_exits(scheduler, worker):
113 """Check addr:port exists as a worker in scheduler list
114
115 Parameters
116 ----------
117 worker : str
118 addr:port
119
120 Returns
121 -------
122 bool
123 """
124 addr, port = worker.split(":")
125 workers = list(scheduler.workers.values())
126 for w in workers:
127 bokeh_port = w.services.get("dashboard", "")
128 if addr == w.host and port == str(bokeh_port):
129 return True
130 return False
131
132
133 routes: list[tuple] = [(r"proxy/(\d+)/(.*?)/(.*)", GlobalProxyHandler, {})]
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/distributed/http/proxy.py b/distributed/http/proxy.py
--- a/distributed/http/proxy.py
+++ b/distributed/http/proxy.py
@@ -19,6 +19,15 @@
self.scheduler = dask_server
self.extra = extra or {}
+ # `get_current_user` and `prepare` method impls reference
+ # issue in tornado & jupyter server compat here
+ # https://github.com/jupyter-server/jupyter_server/issues/1012
+ def get_current_user(self):
+ return "dask"
+
+ async def prepare(self):
+ web.authenticated(lambda rq: None)(self)
+
async def http_get(self, port, host, proxied_path):
# route here first
# incoming URI /proxy/{port}/{host}/{proxied_path}
@@ -29,6 +38,9 @@
uri = f"/proxy/{port}/{proxied_path}"
self.request.uri = uri
+ if self.host not in self.host_allowlist:
+ self.host_allowlist.append(self.host)
+
# slash is removed during regex in handler
proxied_path = "/%s" % proxied_path
@@ -41,6 +53,8 @@
return await self.proxy(port, proxied_path)
async def open(self, port, host, proxied_path):
+ if host not in self.host_allowlist:
+ self.host_allowlist.append(host)
# finally, proxy to other address/port
return await self.proxy_open(host, port, proxied_path)
| {"golden_diff": "diff --git a/distributed/http/proxy.py b/distributed/http/proxy.py\n--- a/distributed/http/proxy.py\n+++ b/distributed/http/proxy.py\n@@ -19,6 +19,15 @@\n self.scheduler = dask_server\n self.extra = extra or {}\n \n+ # `get_current_user` and `prepare` method impls reference\n+ # issue in tornado & jupyter server compat here\n+ # https://github.com/jupyter-server/jupyter_server/issues/1012\n+ def get_current_user(self):\n+ return \"dask\"\n+\n+ async def prepare(self):\n+ web.authenticated(lambda rq: None)(self)\n+\n async def http_get(self, port, host, proxied_path):\n # route here first\n # incoming URI /proxy/{port}/{host}/{proxied_path}\n@@ -29,6 +38,9 @@\n uri = f\"/proxy/{port}/{proxied_path}\"\n self.request.uri = uri\n \n+ if self.host not in self.host_allowlist:\n+ self.host_allowlist.append(self.host)\n+\n # slash is removed during regex in handler\n proxied_path = \"/%s\" % proxied_path\n \n@@ -41,6 +53,8 @@\n return await self.proxy(port, proxied_path)\n \n async def open(self, port, host, proxied_path):\n+ if host not in self.host_allowlist:\n+ self.host_allowlist.append(host)\n # finally, proxy to other address/port\n return await self.proxy_open(host, port, proxied_path)\n", "issue": "Proxy to worker dashboard not working\n\r\n\r\n```\r\ndask scheduler\r\ndask worker <scheduler addr> --dashboard\r\n```\r\n\r\nThen navigate to `http://127.0.0.1:8787/info/main/workers.html` and click on dashboard. This should proxy to the worker dashboard but instead it triggers the below exception (with a 500 error code, of course)\r\n\r\n```python-traceback\r\n/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py:607: UserWarning: The Tornado web application does not have an 'identity_provider' defined in its settings. In future releases of jupyter_server, this will be a required key for all subclasses of `JupyterHandler`. 
For an example, see the jupyter_server source code for how to add an identity provider to the tornado settings: https://github.com/jupyter-server/jupyter_server/blob/v2.0.0/jupyter_server/serverapp.py#L242\r\n if type(self.identity_provider) is IdentityProvider and mod_obj.__name__ != __name__:\r\n2023-11-10 14:28:01,113 - tornado.application - ERROR - Uncaught exception GET /proxy/50121/192.168.2.53/status (127.0.0.1)\r\nHTTPServerRequest(protocol='http', host='127.0.0.1:8787', method='GET', uri='/proxy/50121/192.168.2.53/status', version='HTTP/1.1', remote_ip='127.0.0.1')\r\nTraceback (most recent call last):\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py\", line 1765, in _execute\r\n result = await result # type: ignore\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py\", line 622, in prepare\r\n _user = await _user\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/auth/identity.py\", line 240, in _get_user\r\n _cookie_user = self.get_user_cookie(handler)\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/auth/identity.py\", line 398, in get_user_cookie\r\n _user_cookie = handler.get_secure_cookie(\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py\", line 836, in get_signed_cookie\r\n self.require_setting(\"cookie_secret\", \"secure cookies\")\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py\", line 1669, in require_setting\r\n raise Exception(\r\nException: You must define the 'cookie_secret' setting in your application to use secure cookies\r\n2023-11-10 14:28:01,121 - tornado.application - ERROR - Uncaught exception in write_error\r\nTraceback (most recent call last):\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py\", line 1765, in _execute\r\n result = await result # type: ignore\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py\", line 622, in prepare\r\n _user = await _user\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/auth/identity.py\", line 240, in _get_user\r\n _cookie_user = self.get_user_cookie(handler)\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/auth/identity.py\", line 398, in get_user_cookie\r\n _user_cookie = handler.get_secure_cookie(\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py\", line 836, in get_signed_cookie\r\n self.require_setting(\"cookie_secret\", \"secure cookies\")\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py\", line 1669, in require_setting\r\n raise Exception(\r\nException: You must define the 'cookie_secret' setting in your application to use secure cookies\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py\", line 1294, in send_error\r\n self.write_error(status_code, **kwargs)\r\n File 
\"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py\", line 719, in write_error\r\n html = self.render_template(\"%s.html\" % status_code, **ns)\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py\", line 645, in render_template\r\n ns.update(self.template_namespace)\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py\", line 655, in template_namespace\r\n logged_in=self.logged_in,\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py\", line 197, in logged_in\r\n user = self.current_user\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/tornado/web.py\", line 1420, in current_user\r\n self._current_user = self.get_current_user()\r\n File \"/Users/fjetter/miniforge3/envs/coiled-benchmarks-310/lib/python3.10/site-packages/jupyter_server/base/handlers.py\", line 176, in get_current_user\r\n raise RuntimeError(msg)\r\nRuntimeError: Calling `GlobalProxyHandler.get_current_user()` directly is deprecated in jupyter-server 2.0. Use `self.current_user` instead (works in all versions).\r\n```\r\n\r\n\r\n```\r\njupyter_server 2.10.0 pyhd8ed1ab_0 conda-forge\r\njupyter_server_terminals 0.4.4 pyhd8ed1ab_1 conda-forge\r\njupyter-server-proxy 4.1.0 pyhd8ed1ab_0 conda-forge\r\ntornado 6.3.3 py310h2aa6e3c_1 conda-forge\r\n```\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\n\nfrom tornado import web\n\nlogger = logging.getLogger(__name__)\n\ntry:\n from jupyter_server_proxy.handlers import ProxyHandler\n\n class GlobalProxyHandler(ProxyHandler):\n \"\"\"\n A tornado request handler that proxies HTTP and websockets\n from a port to any valid endpoint'.\n \"\"\"\n\n def initialize(self, dask_server=None, extra=None):\n self.scheduler = dask_server\n self.extra = extra or {}\n\n async def http_get(self, port, host, proxied_path):\n # route here first\n # incoming URI /proxy/{port}/{host}/{proxied_path}\n\n self.host = host\n\n # rewrite uri for jupyter-server-proxy handling\n uri = f\"/proxy/{port}/{proxied_path}\"\n self.request.uri = uri\n\n # slash is removed during regex in handler\n proxied_path = \"/%s\" % proxied_path\n\n worker = f\"{self.host}:{port}\"\n if not check_worker_dashboard_exits(self.scheduler, worker):\n msg = \"Worker <%s> does not exist\" % worker\n self.set_status(400)\n self.finish(msg)\n return\n return await self.proxy(port, proxied_path)\n\n async def open(self, port, host, proxied_path):\n # finally, proxy to other address/port\n return await self.proxy_open(host, port, proxied_path)\n\n def post(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def put(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def delete(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def head(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def patch(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def options(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def proxy(self, port, proxied_path):\n # router here second\n # returns ProxyHandler coroutine\n return super().proxy(self.host, port, proxied_path)\n\nexcept ImportError:\n logger.info(\n \"To route to workers diagnostics web server \"\n \"please install 
jupyter-server-proxy: \"\n \"python -m pip install jupyter-server-proxy\"\n )\n\n class GlobalProxyHandler(web.RequestHandler): # type: ignore\n \"\"\"Minimal Proxy handler when jupyter-server-proxy is not installed\"\"\"\n\n def initialize(self, dask_server=None, extra=None):\n self.server = dask_server\n self.extra = extra or {}\n\n def get(self, port, host, proxied_path):\n worker_url = f\"{host}:{port}/{proxied_path}\"\n msg = \"\"\"\n <p> Try navigating to <a href=http://{}>{}</a> for your worker dashboard </p>\n\n <p>\n Dask tried to proxy you to that page through your\n Scheduler's dashboard connection, but you don't have\n jupyter-server-proxy installed. You may want to install it\n with either conda or pip, and then restart your scheduler.\n </p>\n\n <p><pre> conda install jupyter-server-proxy -c conda-forge </pre></p>\n <p><pre> python -m pip install jupyter-server-proxy</pre></p>\n\n <p>\n The link above should work though if your workers are on a\n sufficiently open network. This is common on single machines,\n but less common in production clusters. Your IT administrators\n will know more\n </p>\n \"\"\".format(\n worker_url,\n worker_url,\n )\n self.write(msg)\n\n\ndef check_worker_dashboard_exits(scheduler, worker):\n \"\"\"Check addr:port exists as a worker in scheduler list\n\n Parameters\n ----------\n worker : str\n addr:port\n\n Returns\n -------\n bool\n \"\"\"\n addr, port = worker.split(\":\")\n workers = list(scheduler.workers.values())\n for w in workers:\n bokeh_port = w.services.get(\"dashboard\", \"\")\n if addr == w.host and port == str(bokeh_port):\n return True\n return False\n\n\nroutes: list[tuple] = [(r\"proxy/(\\d+)/(.*?)/(.*)\", GlobalProxyHandler, {})]\n", "path": "distributed/http/proxy.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport logging\n\nfrom tornado import web\n\nlogger = logging.getLogger(__name__)\n\ntry:\n from jupyter_server_proxy.handlers import ProxyHandler\n\n class GlobalProxyHandler(ProxyHandler):\n \"\"\"\n A tornado request handler that proxies HTTP and websockets\n from a port to any valid endpoint'.\n \"\"\"\n\n def initialize(self, dask_server=None, extra=None):\n self.scheduler = dask_server\n self.extra = extra or {}\n\n # `get_current_user` and `prepare` method impls reference\n # issue in tornado & jupyter server compat here\n # https://github.com/jupyter-server/jupyter_server/issues/1012\n def get_current_user(self):\n return \"dask\"\n\n async def prepare(self):\n web.authenticated(lambda rq: None)(self)\n\n async def http_get(self, port, host, proxied_path):\n # route here first\n # incoming URI /proxy/{port}/{host}/{proxied_path}\n\n self.host = host\n\n # rewrite uri for jupyter-server-proxy handling\n uri = f\"/proxy/{port}/{proxied_path}\"\n self.request.uri = uri\n\n if self.host not in self.host_allowlist:\n self.host_allowlist.append(self.host)\n\n # slash is removed during regex in handler\n proxied_path = \"/%s\" % proxied_path\n\n worker = f\"{self.host}:{port}\"\n if not check_worker_dashboard_exits(self.scheduler, worker):\n msg = \"Worker <%s> does not exist\" % worker\n self.set_status(400)\n self.finish(msg)\n return\n return await self.proxy(port, proxied_path)\n\n async def open(self, port, host, proxied_path):\n if host not in self.host_allowlist:\n self.host_allowlist.append(host)\n # finally, proxy to other address/port\n return await self.proxy_open(host, port, proxied_path)\n\n def post(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def put(self, 
port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def delete(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def head(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def patch(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def options(self, port, proxied_path):\n return self.proxy(port, proxied_path)\n\n def proxy(self, port, proxied_path):\n # router here second\n # returns ProxyHandler coroutine\n return super().proxy(self.host, port, proxied_path)\n\nexcept ImportError:\n logger.info(\n \"To route to workers diagnostics web server \"\n \"please install jupyter-server-proxy: \"\n \"python -m pip install jupyter-server-proxy\"\n )\n\n class GlobalProxyHandler(web.RequestHandler): # type: ignore\n \"\"\"Minimal Proxy handler when jupyter-server-proxy is not installed\"\"\"\n\n def initialize(self, dask_server=None, extra=None):\n self.server = dask_server\n self.extra = extra or {}\n\n def get(self, port, host, proxied_path):\n worker_url = f\"{host}:{port}/{proxied_path}\"\n msg = \"\"\"\n <p> Try navigating to <a href=http://{}>{}</a> for your worker dashboard </p>\n\n <p>\n Dask tried to proxy you to that page through your\n Scheduler's dashboard connection, but you don't have\n jupyter-server-proxy installed. You may want to install it\n with either conda or pip, and then restart your scheduler.\n </p>\n\n <p><pre> conda install jupyter-server-proxy -c conda-forge </pre></p>\n <p><pre> python -m pip install jupyter-server-proxy</pre></p>\n\n <p>\n The link above should work though if your workers are on a\n sufficiently open network. This is common on single machines,\n but less common in production clusters. Your IT administrators\n will know more\n </p>\n \"\"\".format(\n worker_url,\n worker_url,\n )\n self.write(msg)\n\n\ndef check_worker_dashboard_exits(scheduler, worker):\n \"\"\"Check addr:port exists as a worker in scheduler list\n\n Parameters\n ----------\n worker : str\n addr:port\n\n Returns\n -------\n bool\n \"\"\"\n addr, port = worker.split(\":\")\n workers = list(scheduler.workers.values())\n for w in workers:\n bokeh_port = w.services.get(\"dashboard\", \"\")\n if addr == w.host and port == str(bokeh_port):\n return True\n return False\n\n\nroutes: list[tuple] = [(r\"proxy/(\\d+)/(.*?)/(.*)\", GlobalProxyHandler, {})]\n", "path": "distributed/http/proxy.py"}]} | 3,244 | 348 |
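The golden diff above amounts to two compatibility shims: one for jupyter_server 2.x, which otherwise insists on a cookie-backed identity provider (see jupyter-server/jupyter_server#1012), and one for jupyter-server-proxy's host allow-list. The sketch below condenses that pattern; the class name is illustrative, it assumes jupyter-server-proxy (>= 3) and tornado are importable, and the overrides simply mirror the patched GlobalProxyHandler rather than define any new API.

```python
# Minimal sketch of the compatibility pattern used in the patch above.
# Assumes jupyter-server-proxy (>= 3) and tornado are installed; the class
# name is illustrative and the overrides mirror the patched GlobalProxyHandler.
from jupyter_server_proxy.handlers import ProxyHandler
from tornado import web


class CompatDashboardProxy(ProxyHandler):
    """Proxy handler that sidesteps jupyter_server 2.x identity checks."""

    def get_current_user(self):
        # jupyter_server 2.x rejects requests unless an identity provider and
        # cookie_secret are configured; a fixed user keeps the proxy
        # self-contained (see jupyter-server/jupyter_server#1012).
        return "dask"

    async def prepare(self):
        # Re-run tornado's authenticated() check against that fixed user
        # instead of jupyter_server's cookie-based lookup.
        web.authenticated(lambda handler: None)(self)

    async def http_get(self, port, host, proxied_path):
        # jupyter-server-proxy only proxies to allow-listed hosts, so register
        # the target host before handing off.
        if host not in self.host_allowlist:
            self.host_allowlist.append(host)
        # Restore the leading slash that the dask route's regex strips.
        return await self.proxy(host, port, "/" + proxied_path)
```

The fixed-user shortcut trades per-request authentication for simplicity; a deployment that needs real authentication would configure an identity provider instead of this override.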
gh_patches_debug_32533 | rasdani/github-patches | git_diff | pypa__setuptools-2573 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
upload_docs raises AttributeError with Sphinx 1.6
In Sphinx 1.6, `builder_target_dir` was replaced with `builder_target_dirs`, see sphinx-doc/sphinx@2afa0b6627f7e5afb188d5a60c8c4767f6250774 and sphinx-doc/sphinx#3476.
This causes an AttributeError in setuptools:
```pytb
Traceback (most recent call last):
File "setup.py", line 67, in <module>
setup(**setup_args)
File "/usr/lib/python3.5/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.5/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.5/distutils/dist.py", line 973, in run_command
cmd_obj.ensure_finalized()
File "/usr/lib/python3.5/distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/home/dmitry/.local/lib/python3.5/site-packages/setuptools/command/upload_docs.py", line 65, in finalize_options
self.target_dir = build_sphinx.builder_target_dir
File "/usr/lib/python3.5/distutils/cmd.py", line 103, in __getattr__
raise AttributeError(attr)
AttributeError: builder_target_dir
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/command/upload_docs.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """upload_docs
3
4 Implements a Distutils 'upload_docs' subcommand (upload documentation to
5 PyPI's pythonhosted.org).
6 """
7
8 from base64 import standard_b64encode
9 from distutils import log
10 from distutils.errors import DistutilsOptionError
11 import os
12 import socket
13 import zipfile
14 import tempfile
15 import shutil
16 import itertools
17 import functools
18 import http.client
19 import urllib.parse
20
21 from pkg_resources import iter_entry_points
22 from .upload import upload
23
24
25 def _encode(s):
26 return s.encode('utf-8', 'surrogateescape')
27
28
29 class upload_docs(upload):
30 # override the default repository as upload_docs isn't
31 # supported by Warehouse (and won't be).
32 DEFAULT_REPOSITORY = 'https://pypi.python.org/pypi/'
33
34 description = 'Upload documentation to PyPI'
35
36 user_options = [
37 ('repository=', 'r',
38 "url of repository [default: %s]" % upload.DEFAULT_REPOSITORY),
39 ('show-response', None,
40 'display full response text from server'),
41 ('upload-dir=', None, 'directory to upload'),
42 ]
43 boolean_options = upload.boolean_options
44
45 def has_sphinx(self):
46 if self.upload_dir is None:
47 for ep in iter_entry_points('distutils.commands', 'build_sphinx'):
48 return True
49
50 sub_commands = [('build_sphinx', has_sphinx)]
51
52 def initialize_options(self):
53 upload.initialize_options(self)
54 self.upload_dir = None
55 self.target_dir = None
56
57 def finalize_options(self):
58 upload.finalize_options(self)
59 if self.upload_dir is None:
60 if self.has_sphinx():
61 build_sphinx = self.get_finalized_command('build_sphinx')
62 self.target_dir = build_sphinx.builder_target_dir
63 else:
64 build = self.get_finalized_command('build')
65 self.target_dir = os.path.join(build.build_base, 'docs')
66 else:
67 self.ensure_dirname('upload_dir')
68 self.target_dir = self.upload_dir
69 if 'pypi.python.org' in self.repository:
70 log.warn("Upload_docs command is deprecated. Use RTD instead.")
71 self.announce('Using upload directory %s' % self.target_dir)
72
73 def create_zipfile(self, filename):
74 zip_file = zipfile.ZipFile(filename, "w")
75 try:
76 self.mkpath(self.target_dir) # just in case
77 for root, dirs, files in os.walk(self.target_dir):
78 if root == self.target_dir and not files:
79 tmpl = "no files found in upload directory '%s'"
80 raise DistutilsOptionError(tmpl % self.target_dir)
81 for name in files:
82 full = os.path.join(root, name)
83 relative = root[len(self.target_dir):].lstrip(os.path.sep)
84 dest = os.path.join(relative, name)
85 zip_file.write(full, dest)
86 finally:
87 zip_file.close()
88
89 def run(self):
90 # Run sub commands
91 for cmd_name in self.get_sub_commands():
92 self.run_command(cmd_name)
93
94 tmp_dir = tempfile.mkdtemp()
95 name = self.distribution.metadata.get_name()
96 zip_file = os.path.join(tmp_dir, "%s.zip" % name)
97 try:
98 self.create_zipfile(zip_file)
99 self.upload_file(zip_file)
100 finally:
101 shutil.rmtree(tmp_dir)
102
103 @staticmethod
104 def _build_part(item, sep_boundary):
105 key, values = item
106 title = '\nContent-Disposition: form-data; name="%s"' % key
107 # handle multiple entries for the same name
108 if not isinstance(values, list):
109 values = [values]
110 for value in values:
111 if isinstance(value, tuple):
112 title += '; filename="%s"' % value[0]
113 value = value[1]
114 else:
115 value = _encode(value)
116 yield sep_boundary
117 yield _encode(title)
118 yield b"\n\n"
119 yield value
120 if value and value[-1:] == b'\r':
121 yield b'\n' # write an extra newline (lurve Macs)
122
123 @classmethod
124 def _build_multipart(cls, data):
125 """
126 Build up the MIME payload for the POST data
127 """
128 boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'
129 sep_boundary = b'\n--' + boundary.encode('ascii')
130 end_boundary = sep_boundary + b'--'
131 end_items = end_boundary, b"\n",
132 builder = functools.partial(
133 cls._build_part,
134 sep_boundary=sep_boundary,
135 )
136 part_groups = map(builder, data.items())
137 parts = itertools.chain.from_iterable(part_groups)
138 body_items = itertools.chain(parts, end_items)
139 content_type = 'multipart/form-data; boundary=%s' % boundary
140 return b''.join(body_items), content_type
141
142 def upload_file(self, filename):
143 with open(filename, 'rb') as f:
144 content = f.read()
145 meta = self.distribution.metadata
146 data = {
147 ':action': 'doc_upload',
148 'name': meta.get_name(),
149 'content': (os.path.basename(filename), content),
150 }
151 # set up the authentication
152 credentials = _encode(self.username + ':' + self.password)
153 credentials = standard_b64encode(credentials).decode('ascii')
154 auth = "Basic " + credentials
155
156 body, ct = self._build_multipart(data)
157
158 msg = "Submitting documentation to %s" % (self.repository)
159 self.announce(msg, log.INFO)
160
161 # build the Request
162 # We can't use urllib2 since we need to send the Basic
163 # auth right with the first request
164 schema, netloc, url, params, query, fragments = \
165 urllib.parse.urlparse(self.repository)
166 assert not params and not query and not fragments
167 if schema == 'http':
168 conn = http.client.HTTPConnection(netloc)
169 elif schema == 'https':
170 conn = http.client.HTTPSConnection(netloc)
171 else:
172 raise AssertionError("unsupported schema " + schema)
173
174 data = ''
175 try:
176 conn.connect()
177 conn.putrequest("POST", url)
178 content_type = ct
179 conn.putheader('Content-type', content_type)
180 conn.putheader('Content-length', str(len(body)))
181 conn.putheader('Authorization', auth)
182 conn.endheaders()
183 conn.send(body)
184 except socket.error as e:
185 self.announce(str(e), log.ERROR)
186 return
187
188 r = conn.getresponse()
189 if r.status == 200:
190 msg = 'Server response (%s): %s' % (r.status, r.reason)
191 self.announce(msg, log.INFO)
192 elif r.status == 301:
193 location = r.getheader('Location')
194 if location is None:
195 location = 'https://pythonhosted.org/%s/' % meta.get_name()
196 msg = 'Upload successful. Visit %s' % location
197 self.announce(msg, log.INFO)
198 else:
199 msg = 'Upload failed (%s): %s' % (r.status, r.reason)
200 self.announce(msg, log.ERROR)
201 if self.show_response:
202 print('-' * 75, r.read(), '-' * 75)
203
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setuptools/command/upload_docs.py b/setuptools/command/upload_docs.py
--- a/setuptools/command/upload_docs.py
+++ b/setuptools/command/upload_docs.py
@@ -2,7 +2,7 @@
"""upload_docs
Implements a Distutils 'upload_docs' subcommand (upload documentation to
-PyPI's pythonhosted.org).
+sites other than PyPi such as devpi).
"""
from base64 import standard_b64encode
@@ -31,7 +31,7 @@
# supported by Warehouse (and won't be).
DEFAULT_REPOSITORY = 'https://pypi.python.org/pypi/'
- description = 'Upload documentation to PyPI'
+ description = 'Upload documentation to sites other than PyPi such as devpi'
user_options = [
('repository=', 'r',
@@ -59,7 +59,7 @@
if self.upload_dir is None:
if self.has_sphinx():
build_sphinx = self.get_finalized_command('build_sphinx')
- self.target_dir = build_sphinx.builder_target_dir
+ self.target_dir = dict(build_sphinx.builder_target_dirs)['html']
else:
build = self.get_finalized_command('build')
self.target_dir = os.path.join(build.build_base, 'docs')
@@ -67,7 +67,7 @@
self.ensure_dirname('upload_dir')
self.target_dir = self.upload_dir
if 'pypi.python.org' in self.repository:
- log.warn("Upload_docs command is deprecated. Use RTD instead.")
+ log.warn("Upload_docs command is deprecated for PyPi. Use RTD instead.")
self.announce('Using upload directory %s' % self.target_dir)
def create_zipfile(self, filename):
| {"golden_diff": "diff --git a/setuptools/command/upload_docs.py b/setuptools/command/upload_docs.py\n--- a/setuptools/command/upload_docs.py\n+++ b/setuptools/command/upload_docs.py\n@@ -2,7 +2,7 @@\n \"\"\"upload_docs\n \n Implements a Distutils 'upload_docs' subcommand (upload documentation to\n-PyPI's pythonhosted.org).\n+sites other than PyPi such as devpi).\n \"\"\"\n \n from base64 import standard_b64encode\n@@ -31,7 +31,7 @@\n # supported by Warehouse (and won't be).\n DEFAULT_REPOSITORY = 'https://pypi.python.org/pypi/'\n \n- description = 'Upload documentation to PyPI'\n+ description = 'Upload documentation to sites other than PyPi such as devpi'\n \n user_options = [\n ('repository=', 'r',\n@@ -59,7 +59,7 @@\n if self.upload_dir is None:\n if self.has_sphinx():\n build_sphinx = self.get_finalized_command('build_sphinx')\n- self.target_dir = build_sphinx.builder_target_dir\n+ self.target_dir = dict(build_sphinx.builder_target_dirs)['html']\n else:\n build = self.get_finalized_command('build')\n self.target_dir = os.path.join(build.build_base, 'docs')\n@@ -67,7 +67,7 @@\n self.ensure_dirname('upload_dir')\n self.target_dir = self.upload_dir\n if 'pypi.python.org' in self.repository:\n- log.warn(\"Upload_docs command is deprecated. Use RTD instead.\")\n+ log.warn(\"Upload_docs command is deprecated for PyPi. Use RTD instead.\")\n self.announce('Using upload directory %s' % self.target_dir)\n \n def create_zipfile(self, filename):\n", "issue": "upload_docs raises AttributeError with Sphinx 1.6\nIn Sphinx 1.6, `builder_target_dir` was replaced with `builder_target_dirs`, see sphinx-doc/sphinx@2afa0b6627f7e5afb188d5a60c8c4767f6250774 and sphinx-doc/sphinx#3476.\r\n\r\nThis causes an AttributeError in setuptools:\r\n\r\n```pytb\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 67, in <module>\r\n setup(**setup_args)\r\n File \"/usr/lib/python3.5/distutils/core.py\", line 148, in setup\r\n dist.run_commands()\r\n File \"/usr/lib/python3.5/distutils/dist.py\", line 955, in run_commands\r\n self.run_command(cmd)\r\n File \"/usr/lib/python3.5/distutils/dist.py\", line 973, in run_command\r\n cmd_obj.ensure_finalized()\r\n File \"/usr/lib/python3.5/distutils/cmd.py\", line 107, in ensure_finalized\r\n self.finalize_options()\r\n File \"/home/dmitry/.local/lib/python3.5/site-packages/setuptools/command/upload_docs.py\", line 65, in finalize_options\r\n self.target_dir = build_sphinx.builder_target_dir\r\n File \"/usr/lib/python3.5/distutils/cmd.py\", line 103, in __getattr__\r\n raise AttributeError(attr)\r\nAttributeError: builder_target_dir\r\n```\nupload_docs raises AttributeError with Sphinx 1.6\nIn Sphinx 1.6, `builder_target_dir` was replaced with `builder_target_dirs`, see sphinx-doc/sphinx@2afa0b6627f7e5afb188d5a60c8c4767f6250774 and sphinx-doc/sphinx#3476.\r\n\r\nThis causes an AttributeError in setuptools:\r\n\r\n```pytb\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 67, in <module>\r\n setup(**setup_args)\r\n File \"/usr/lib/python3.5/distutils/core.py\", line 148, in setup\r\n dist.run_commands()\r\n File \"/usr/lib/python3.5/distutils/dist.py\", line 955, in run_commands\r\n self.run_command(cmd)\r\n File \"/usr/lib/python3.5/distutils/dist.py\", line 973, in run_command\r\n cmd_obj.ensure_finalized()\r\n File \"/usr/lib/python3.5/distutils/cmd.py\", line 107, in ensure_finalized\r\n self.finalize_options()\r\n File \"/home/dmitry/.local/lib/python3.5/site-packages/setuptools/command/upload_docs.py\", line 65, in finalize_options\r\n 
self.target_dir = build_sphinx.builder_target_dir\r\n File \"/usr/lib/python3.5/distutils/cmd.py\", line 103, in __getattr__\r\n raise AttributeError(attr)\r\nAttributeError: builder_target_dir\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"upload_docs\n\nImplements a Distutils 'upload_docs' subcommand (upload documentation to\nPyPI's pythonhosted.org).\n\"\"\"\n\nfrom base64 import standard_b64encode\nfrom distutils import log\nfrom distutils.errors import DistutilsOptionError\nimport os\nimport socket\nimport zipfile\nimport tempfile\nimport shutil\nimport itertools\nimport functools\nimport http.client\nimport urllib.parse\n\nfrom pkg_resources import iter_entry_points\nfrom .upload import upload\n\n\ndef _encode(s):\n return s.encode('utf-8', 'surrogateescape')\n\n\nclass upload_docs(upload):\n # override the default repository as upload_docs isn't\n # supported by Warehouse (and won't be).\n DEFAULT_REPOSITORY = 'https://pypi.python.org/pypi/'\n\n description = 'Upload documentation to PyPI'\n\n user_options = [\n ('repository=', 'r',\n \"url of repository [default: %s]\" % upload.DEFAULT_REPOSITORY),\n ('show-response', None,\n 'display full response text from server'),\n ('upload-dir=', None, 'directory to upload'),\n ]\n boolean_options = upload.boolean_options\n\n def has_sphinx(self):\n if self.upload_dir is None:\n for ep in iter_entry_points('distutils.commands', 'build_sphinx'):\n return True\n\n sub_commands = [('build_sphinx', has_sphinx)]\n\n def initialize_options(self):\n upload.initialize_options(self)\n self.upload_dir = None\n self.target_dir = None\n\n def finalize_options(self):\n upload.finalize_options(self)\n if self.upload_dir is None:\n if self.has_sphinx():\n build_sphinx = self.get_finalized_command('build_sphinx')\n self.target_dir = build_sphinx.builder_target_dir\n else:\n build = self.get_finalized_command('build')\n self.target_dir = os.path.join(build.build_base, 'docs')\n else:\n self.ensure_dirname('upload_dir')\n self.target_dir = self.upload_dir\n if 'pypi.python.org' in self.repository:\n log.warn(\"Upload_docs command is deprecated. 
Use RTD instead.\")\n self.announce('Using upload directory %s' % self.target_dir)\n\n def create_zipfile(self, filename):\n zip_file = zipfile.ZipFile(filename, \"w\")\n try:\n self.mkpath(self.target_dir) # just in case\n for root, dirs, files in os.walk(self.target_dir):\n if root == self.target_dir and not files:\n tmpl = \"no files found in upload directory '%s'\"\n raise DistutilsOptionError(tmpl % self.target_dir)\n for name in files:\n full = os.path.join(root, name)\n relative = root[len(self.target_dir):].lstrip(os.path.sep)\n dest = os.path.join(relative, name)\n zip_file.write(full, dest)\n finally:\n zip_file.close()\n\n def run(self):\n # Run sub commands\n for cmd_name in self.get_sub_commands():\n self.run_command(cmd_name)\n\n tmp_dir = tempfile.mkdtemp()\n name = self.distribution.metadata.get_name()\n zip_file = os.path.join(tmp_dir, \"%s.zip\" % name)\n try:\n self.create_zipfile(zip_file)\n self.upload_file(zip_file)\n finally:\n shutil.rmtree(tmp_dir)\n\n @staticmethod\n def _build_part(item, sep_boundary):\n key, values = item\n title = '\\nContent-Disposition: form-data; name=\"%s\"' % key\n # handle multiple entries for the same name\n if not isinstance(values, list):\n values = [values]\n for value in values:\n if isinstance(value, tuple):\n title += '; filename=\"%s\"' % value[0]\n value = value[1]\n else:\n value = _encode(value)\n yield sep_boundary\n yield _encode(title)\n yield b\"\\n\\n\"\n yield value\n if value and value[-1:] == b'\\r':\n yield b'\\n' # write an extra newline (lurve Macs)\n\n @classmethod\n def _build_multipart(cls, data):\n \"\"\"\n Build up the MIME payload for the POST data\n \"\"\"\n boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'\n sep_boundary = b'\\n--' + boundary.encode('ascii')\n end_boundary = sep_boundary + b'--'\n end_items = end_boundary, b\"\\n\",\n builder = functools.partial(\n cls._build_part,\n sep_boundary=sep_boundary,\n )\n part_groups = map(builder, data.items())\n parts = itertools.chain.from_iterable(part_groups)\n body_items = itertools.chain(parts, end_items)\n content_type = 'multipart/form-data; boundary=%s' % boundary\n return b''.join(body_items), content_type\n\n def upload_file(self, filename):\n with open(filename, 'rb') as f:\n content = f.read()\n meta = self.distribution.metadata\n data = {\n ':action': 'doc_upload',\n 'name': meta.get_name(),\n 'content': (os.path.basename(filename), content),\n }\n # set up the authentication\n credentials = _encode(self.username + ':' + self.password)\n credentials = standard_b64encode(credentials).decode('ascii')\n auth = \"Basic \" + credentials\n\n body, ct = self._build_multipart(data)\n\n msg = \"Submitting documentation to %s\" % (self.repository)\n self.announce(msg, log.INFO)\n\n # build the Request\n # We can't use urllib2 since we need to send the Basic\n # auth right with the first request\n schema, netloc, url, params, query, fragments = \\\n urllib.parse.urlparse(self.repository)\n assert not params and not query and not fragments\n if schema == 'http':\n conn = http.client.HTTPConnection(netloc)\n elif schema == 'https':\n conn = http.client.HTTPSConnection(netloc)\n else:\n raise AssertionError(\"unsupported schema \" + schema)\n\n data = ''\n try:\n conn.connect()\n conn.putrequest(\"POST\", url)\n content_type = ct\n conn.putheader('Content-type', content_type)\n conn.putheader('Content-length', str(len(body)))\n conn.putheader('Authorization', auth)\n conn.endheaders()\n conn.send(body)\n except socket.error as e:\n 
self.announce(str(e), log.ERROR)\n return\n\n r = conn.getresponse()\n if r.status == 200:\n msg = 'Server response (%s): %s' % (r.status, r.reason)\n self.announce(msg, log.INFO)\n elif r.status == 301:\n location = r.getheader('Location')\n if location is None:\n location = 'https://pythonhosted.org/%s/' % meta.get_name()\n msg = 'Upload successful. Visit %s' % location\n self.announce(msg, log.INFO)\n else:\n msg = 'Upload failed (%s): %s' % (r.status, r.reason)\n self.announce(msg, log.ERROR)\n if self.show_response:\n print('-' * 75, r.read(), '-' * 75)\n", "path": "setuptools/command/upload_docs.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"upload_docs\n\nImplements a Distutils 'upload_docs' subcommand (upload documentation to\nsites other than PyPi such as devpi).\n\"\"\"\n\nfrom base64 import standard_b64encode\nfrom distutils import log\nfrom distutils.errors import DistutilsOptionError\nimport os\nimport socket\nimport zipfile\nimport tempfile\nimport shutil\nimport itertools\nimport functools\nimport http.client\nimport urllib.parse\n\nfrom pkg_resources import iter_entry_points\nfrom .upload import upload\n\n\ndef _encode(s):\n return s.encode('utf-8', 'surrogateescape')\n\n\nclass upload_docs(upload):\n # override the default repository as upload_docs isn't\n # supported by Warehouse (and won't be).\n DEFAULT_REPOSITORY = 'https://pypi.python.org/pypi/'\n\n description = 'Upload documentation to sites other than PyPi such as devpi'\n\n user_options = [\n ('repository=', 'r',\n \"url of repository [default: %s]\" % upload.DEFAULT_REPOSITORY),\n ('show-response', None,\n 'display full response text from server'),\n ('upload-dir=', None, 'directory to upload'),\n ]\n boolean_options = upload.boolean_options\n\n def has_sphinx(self):\n if self.upload_dir is None:\n for ep in iter_entry_points('distutils.commands', 'build_sphinx'):\n return True\n\n sub_commands = [('build_sphinx', has_sphinx)]\n\n def initialize_options(self):\n upload.initialize_options(self)\n self.upload_dir = None\n self.target_dir = None\n\n def finalize_options(self):\n upload.finalize_options(self)\n if self.upload_dir is None:\n if self.has_sphinx():\n build_sphinx = self.get_finalized_command('build_sphinx')\n self.target_dir = dict(build_sphinx.builder_target_dirs)['html']\n else:\n build = self.get_finalized_command('build')\n self.target_dir = os.path.join(build.build_base, 'docs')\n else:\n self.ensure_dirname('upload_dir')\n self.target_dir = self.upload_dir\n if 'pypi.python.org' in self.repository:\n log.warn(\"Upload_docs command is deprecated for PyPi. 
Use RTD instead.\")\n self.announce('Using upload directory %s' % self.target_dir)\n\n def create_zipfile(self, filename):\n zip_file = zipfile.ZipFile(filename, \"w\")\n try:\n self.mkpath(self.target_dir) # just in case\n for root, dirs, files in os.walk(self.target_dir):\n if root == self.target_dir and not files:\n tmpl = \"no files found in upload directory '%s'\"\n raise DistutilsOptionError(tmpl % self.target_dir)\n for name in files:\n full = os.path.join(root, name)\n relative = root[len(self.target_dir):].lstrip(os.path.sep)\n dest = os.path.join(relative, name)\n zip_file.write(full, dest)\n finally:\n zip_file.close()\n\n def run(self):\n # Run sub commands\n for cmd_name in self.get_sub_commands():\n self.run_command(cmd_name)\n\n tmp_dir = tempfile.mkdtemp()\n name = self.distribution.metadata.get_name()\n zip_file = os.path.join(tmp_dir, \"%s.zip\" % name)\n try:\n self.create_zipfile(zip_file)\n self.upload_file(zip_file)\n finally:\n shutil.rmtree(tmp_dir)\n\n @staticmethod\n def _build_part(item, sep_boundary):\n key, values = item\n title = '\\nContent-Disposition: form-data; name=\"%s\"' % key\n # handle multiple entries for the same name\n if not isinstance(values, list):\n values = [values]\n for value in values:\n if isinstance(value, tuple):\n title += '; filename=\"%s\"' % value[0]\n value = value[1]\n else:\n value = _encode(value)\n yield sep_boundary\n yield _encode(title)\n yield b\"\\n\\n\"\n yield value\n if value and value[-1:] == b'\\r':\n yield b'\\n' # write an extra newline (lurve Macs)\n\n @classmethod\n def _build_multipart(cls, data):\n \"\"\"\n Build up the MIME payload for the POST data\n \"\"\"\n boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'\n sep_boundary = b'\\n--' + boundary.encode('ascii')\n end_boundary = sep_boundary + b'--'\n end_items = end_boundary, b\"\\n\",\n builder = functools.partial(\n cls._build_part,\n sep_boundary=sep_boundary,\n )\n part_groups = map(builder, data.items())\n parts = itertools.chain.from_iterable(part_groups)\n body_items = itertools.chain(parts, end_items)\n content_type = 'multipart/form-data; boundary=%s' % boundary\n return b''.join(body_items), content_type\n\n def upload_file(self, filename):\n with open(filename, 'rb') as f:\n content = f.read()\n meta = self.distribution.metadata\n data = {\n ':action': 'doc_upload',\n 'name': meta.get_name(),\n 'content': (os.path.basename(filename), content),\n }\n # set up the authentication\n credentials = _encode(self.username + ':' + self.password)\n credentials = standard_b64encode(credentials).decode('ascii')\n auth = \"Basic \" + credentials\n\n body, ct = self._build_multipart(data)\n\n msg = \"Submitting documentation to %s\" % (self.repository)\n self.announce(msg, log.INFO)\n\n # build the Request\n # We can't use urllib2 since we need to send the Basic\n # auth right with the first request\n schema, netloc, url, params, query, fragments = \\\n urllib.parse.urlparse(self.repository)\n assert not params and not query and not fragments\n if schema == 'http':\n conn = http.client.HTTPConnection(netloc)\n elif schema == 'https':\n conn = http.client.HTTPSConnection(netloc)\n else:\n raise AssertionError(\"unsupported schema \" + schema)\n\n data = ''\n try:\n conn.connect()\n conn.putrequest(\"POST\", url)\n content_type = ct\n conn.putheader('Content-type', content_type)\n conn.putheader('Content-length', str(len(body)))\n conn.putheader('Authorization', auth)\n conn.endheaders()\n conn.send(body)\n except socket.error as e:\n 
self.announce(str(e), log.ERROR)\n return\n\n r = conn.getresponse()\n if r.status == 200:\n msg = 'Server response (%s): %s' % (r.status, r.reason)\n self.announce(msg, log.INFO)\n elif r.status == 301:\n location = r.getheader('Location')\n if location is None:\n location = 'https://pythonhosted.org/%s/' % meta.get_name()\n msg = 'Upload successful. Visit %s' % location\n self.announce(msg, log.INFO)\n else:\n msg = 'Upload failed (%s): %s' % (r.status, r.reason)\n self.announce(msg, log.ERROR)\n if self.show_response:\n print('-' * 75, r.read(), '-' * 75)\n", "path": "setuptools/command/upload_docs.py"}]} | 3,003 | 382 |
gh_patches_debug_41428 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-13415 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
www.watchindianporn.net parser is broken
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.06.12**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--verbose', '-iw', '--write-description', '-R', '10', '-o', '%(title)s-
%(id)s.%(ext)s', '-a', 'youtube-dl.txt', '--external-downloader', 'curl', '--external-downloader-arg
s', '-C - -L']
[debug] Batch file urls: ['http://www.watchindianporn.net/video/up-bhoji-lifting-her-saree-and-expos
ing-her-dirty-gaand-qsnHOGU7Ey1.html']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.06.12
[debug] Python version 3.4.4 - Windows-7-6.1.7601-SP1
[debug] exe versions: none
[debug] Proxy map: {}
[WatchIndianPorn] up-bhoji-lifting-her-saree-and-exposing-her-dirty-gaand: Downloading webpage
ERROR: Unable to extract url; please report this issue on https://yt-dl.org/bug . Make sure you are
using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verb
ose flag and include its complete output.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpkyaecyzu\build\youtube_dl\Youtu
beDL.py", line 762, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpkyaecyzu\build\youtube_dl\extra
ctor\common.py", line 433, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpkyaecyzu\build\youtube_dl\extra
ctor\watchindianporn.py", line 44, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpkyaecyzu\build\youtube_dl\extra
ctor\common.py", line 791, in _html_search_regex
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpkyaecyzu\build\youtube_dl\extra
ctor\common.py", line 782, in _search_regex
youtube_dl.utils.RegexNotFoundError: Unable to extract url; please report this issue on https://yt-d
l.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to c
all youtube-dl with the --verbose flag and include its complete output.
```
---
The www.watchindianporn.net parser is broken; the console log is included above.
--- END ISSUE ---
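The traceback above points at the `url` regex around line 44 of `watchindianporn.py` no longer matching the downloaded page. A minimal diagnostic sketch, assuming `webpage` holds the HTML returned for the failing URL:

```python
import re

# The pattern the current extractor relies on (from watchindianporn.py, _real_extract).
old_url_pattern = r"url: escape\('([^']+)'\)"

if re.search(old_url_pattern, webpage) is None:
    # The old JS snippet is gone; check for an HTML5 <source> tag instead.
    print(re.findall(r'<source[^>]+src=["\']([^"\']+)', webpage))
```

If the old pattern no longer matches while a `<source src="...">` tag is present, the site has moved to an HTML5 player and the extractor has to read the media URL from the markup instead.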
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/watchindianporn.py`
Content:
```
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import re
5
6 from .common import InfoExtractor
7 from ..utils import (
8 unified_strdate,
9 parse_duration,
10 int_or_none,
11 )
12
13
14 class WatchIndianPornIE(InfoExtractor):
15 IE_DESC = 'Watch Indian Porn'
16 _VALID_URL = r'https?://(?:www\.)?watchindianporn\.net/(?:[^/]+/)*video/(?P<display_id>[^/]+)-(?P<id>[a-zA-Z0-9]+)\.html'
17 _TEST = {
18 'url': 'http://www.watchindianporn.net/video/hot-milf-from-kerala-shows-off-her-gorgeous-large-breasts-on-camera-RZa2avywNPa.html',
19 'md5': '249589a164dde236ec65832bfce17440',
20 'info_dict': {
21 'id': 'RZa2avywNPa',
22 'display_id': 'hot-milf-from-kerala-shows-off-her-gorgeous-large-breasts-on-camera',
23 'ext': 'mp4',
24 'title': 'Hot milf from kerala shows off her gorgeous large breasts on camera',
25 'thumbnail': r're:^https?://.*\.jpg$',
26 'uploader': 'LoveJay',
27 'upload_date': '20160428',
28 'duration': 226,
29 'view_count': int,
30 'comment_count': int,
31 'categories': list,
32 'age_limit': 18,
33 }
34 }
35
36 def _real_extract(self, url):
37 mobj = re.match(self._VALID_URL, url)
38 video_id = mobj.group('id')
39 display_id = mobj.group('display_id')
40
41 webpage = self._download_webpage(url, display_id)
42
43 video_url = self._html_search_regex(
44 r"url: escape\('([^']+)'\)", webpage, 'url')
45
46 title = self._html_search_regex(
47 r'<h2 class="he2"><span>(.*?)</span>',
48 webpage, 'title')
49 thumbnail = self._html_search_regex(
50 r'<span id="container"><img\s+src="([^"]+)"',
51 webpage, 'thumbnail', fatal=False)
52
53 uploader = self._html_search_regex(
54 r'class="aupa">\s*(.*?)</a>',
55 webpage, 'uploader')
56 upload_date = unified_strdate(self._html_search_regex(
57 r'Added: <strong>(.+?)</strong>', webpage, 'upload date', fatal=False))
58
59 duration = parse_duration(self._search_regex(
60 r'<td>Time:\s*</td>\s*<td align="right"><span>\s*(.+?)\s*</span>',
61 webpage, 'duration', fatal=False))
62
63 view_count = int_or_none(self._search_regex(
64 r'<td>Views:\s*</td>\s*<td align="right"><span>\s*(\d+)\s*</span>',
65 webpage, 'view count', fatal=False))
66 comment_count = int_or_none(self._search_regex(
67 r'<td>Comments:\s*</td>\s*<td align="right"><span>\s*(\d+)\s*</span>',
68 webpage, 'comment count', fatal=False))
69
70 categories = re.findall(
71 r'<a href="[^"]+/search/video/desi"><span>([^<]+)</span></a>',
72 webpage)
73
74 return {
75 'id': video_id,
76 'display_id': display_id,
77 'url': video_url,
78 'http_headers': {
79 'Referer': url,
80 },
81 'title': title,
82 'thumbnail': thumbnail,
83 'uploader': uploader,
84 'upload_date': upload_date,
85 'duration': duration,
86 'view_count': view_count,
87 'comment_count': comment_count,
88 'categories': categories,
89 'age_limit': 18,
90 }
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/youtube_dl/extractor/watchindianporn.py b/youtube_dl/extractor/watchindianporn.py
--- a/youtube_dl/extractor/watchindianporn.py
+++ b/youtube_dl/extractor/watchindianporn.py
@@ -4,11 +4,7 @@
import re
from .common import InfoExtractor
-from ..utils import (
- unified_strdate,
- parse_duration,
- int_or_none,
-)
+from ..utils import parse_duration
class WatchIndianPornIE(InfoExtractor):
@@ -23,11 +19,8 @@
'ext': 'mp4',
'title': 'Hot milf from kerala shows off her gorgeous large breasts on camera',
'thumbnail': r're:^https?://.*\.jpg$',
- 'uploader': 'LoveJay',
- 'upload_date': '20160428',
'duration': 226,
'view_count': int,
- 'comment_count': int,
'categories': list,
'age_limit': 18,
}
@@ -40,51 +33,36 @@
webpage = self._download_webpage(url, display_id)
- video_url = self._html_search_regex(
- r"url: escape\('([^']+)'\)", webpage, 'url')
+ info_dict = self._parse_html5_media_entries(url, webpage, video_id)[0]
- title = self._html_search_regex(
- r'<h2 class="he2"><span>(.*?)</span>',
- webpage, 'title')
- thumbnail = self._html_search_regex(
- r'<span id="container"><img\s+src="([^"]+)"',
- webpage, 'thumbnail', fatal=False)
-
- uploader = self._html_search_regex(
- r'class="aupa">\s*(.*?)</a>',
- webpage, 'uploader')
- upload_date = unified_strdate(self._html_search_regex(
- r'Added: <strong>(.+?)</strong>', webpage, 'upload date', fatal=False))
+ title = self._html_search_regex((
+ r'<title>(.+?)\s*-\s*Indian\s+Porn</title>',
+ r'<h4>(.+?)</h4>'
+ ), webpage, 'title')
duration = parse_duration(self._search_regex(
- r'<td>Time:\s*</td>\s*<td align="right"><span>\s*(.+?)\s*</span>',
+ r'Time:\s*<strong>\s*(.+?)\s*</strong>',
webpage, 'duration', fatal=False))
- view_count = int_or_none(self._search_regex(
- r'<td>Views:\s*</td>\s*<td align="right"><span>\s*(\d+)\s*</span>',
+ view_count = int(self._search_regex(
+ r'(?s)Time:\s*<strong>.*?</strong>.*?<strong>\s*(\d+)\s*</strong>',
webpage, 'view count', fatal=False))
- comment_count = int_or_none(self._search_regex(
- r'<td>Comments:\s*</td>\s*<td align="right"><span>\s*(\d+)\s*</span>',
- webpage, 'comment count', fatal=False))
categories = re.findall(
- r'<a href="[^"]+/search/video/desi"><span>([^<]+)</span></a>',
+ r'<a[^>]+class=[\'"]categories[\'"][^>]*>\s*([^<]+)\s*</a>',
webpage)
- return {
+ info_dict.update({
'id': video_id,
'display_id': display_id,
- 'url': video_url,
'http_headers': {
'Referer': url,
},
'title': title,
- 'thumbnail': thumbnail,
- 'uploader': uploader,
- 'upload_date': upload_date,
'duration': duration,
'view_count': view_count,
- 'comment_count': comment_count,
'categories': categories,
'age_limit': 18,
- }
+ })
+
+ return info_dict
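The patch above delegates the media-URL extraction to `_parse_html5_media_entries` and keeps only lightweight regexes for the metadata. The title lookup now passes a tuple of candidate patterns (which youtube-dl's `_search_regex` tries in order, if memory serves); a plain-`re` sketch of that fallback idea, with `webpage` assumed to hold the page HTML:

```python
import re

def first_match(patterns, webpage, default=None):
    # Try each candidate pattern in order and return the first capture group,
    # mirroring the tuple-of-patterns style used in the patched extractor.
    for pattern in patterns:
        m = re.search(pattern, webpage)
        if m:
            return m.group(1)
    return default

title = first_match((
    r'<title>(.+?)\s*-\s*Indian\s+Porn</title>',
    r'<h4>(.+?)</h4>',
), webpage)
```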
| {"golden_diff": "diff --git a/youtube_dl/extractor/watchindianporn.py b/youtube_dl/extractor/watchindianporn.py\n--- a/youtube_dl/extractor/watchindianporn.py\n+++ b/youtube_dl/extractor/watchindianporn.py\n@@ -4,11 +4,7 @@\n import re\n \n from .common import InfoExtractor\n-from ..utils import (\n- unified_strdate,\n- parse_duration,\n- int_or_none,\n-)\n+from ..utils import parse_duration\n \n \n class WatchIndianPornIE(InfoExtractor):\n@@ -23,11 +19,8 @@\n 'ext': 'mp4',\n 'title': 'Hot milf from kerala shows off her gorgeous large breasts on camera',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n- 'uploader': 'LoveJay',\n- 'upload_date': '20160428',\n 'duration': 226,\n 'view_count': int,\n- 'comment_count': int,\n 'categories': list,\n 'age_limit': 18,\n }\n@@ -40,51 +33,36 @@\n \n webpage = self._download_webpage(url, display_id)\n \n- video_url = self._html_search_regex(\n- r\"url: escape\\('([^']+)'\\)\", webpage, 'url')\n+ info_dict = self._parse_html5_media_entries(url, webpage, video_id)[0]\n \n- title = self._html_search_regex(\n- r'<h2 class=\"he2\"><span>(.*?)</span>',\n- webpage, 'title')\n- thumbnail = self._html_search_regex(\n- r'<span id=\"container\"><img\\s+src=\"([^\"]+)\"',\n- webpage, 'thumbnail', fatal=False)\n-\n- uploader = self._html_search_regex(\n- r'class=\"aupa\">\\s*(.*?)</a>',\n- webpage, 'uploader')\n- upload_date = unified_strdate(self._html_search_regex(\n- r'Added: <strong>(.+?)</strong>', webpage, 'upload date', fatal=False))\n+ title = self._html_search_regex((\n+ r'<title>(.+?)\\s*-\\s*Indian\\s+Porn</title>',\n+ r'<h4>(.+?)</h4>'\n+ ), webpage, 'title')\n \n duration = parse_duration(self._search_regex(\n- r'<td>Time:\\s*</td>\\s*<td align=\"right\"><span>\\s*(.+?)\\s*</span>',\n+ r'Time:\\s*<strong>\\s*(.+?)\\s*</strong>',\n webpage, 'duration', fatal=False))\n \n- view_count = int_or_none(self._search_regex(\n- r'<td>Views:\\s*</td>\\s*<td align=\"right\"><span>\\s*(\\d+)\\s*</span>',\n+ view_count = int(self._search_regex(\n+ r'(?s)Time:\\s*<strong>.*?</strong>.*?<strong>\\s*(\\d+)\\s*</strong>',\n webpage, 'view count', fatal=False))\n- comment_count = int_or_none(self._search_regex(\n- r'<td>Comments:\\s*</td>\\s*<td align=\"right\"><span>\\s*(\\d+)\\s*</span>',\n- webpage, 'comment count', fatal=False))\n \n categories = re.findall(\n- r'<a href=\"[^\"]+/search/video/desi\"><span>([^<]+)</span></a>',\n+ r'<a[^>]+class=[\\'\"]categories[\\'\"][^>]*>\\s*([^<]+)\\s*</a>',\n webpage)\n \n- return {\n+ info_dict.update({\n 'id': video_id,\n 'display_id': display_id,\n- 'url': video_url,\n 'http_headers': {\n 'Referer': url,\n },\n 'title': title,\n- 'thumbnail': thumbnail,\n- 'uploader': uploader,\n- 'upload_date': upload_date,\n 'duration': duration,\n 'view_count': view_count,\n- 'comment_count': comment_count,\n 'categories': categories,\n 'age_limit': 18,\n- }\n+ })\n+\n+ return info_dict\n", "issue": "www.watchindianporn.net parser is broken\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.06.12**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n\r\n### What is the purpose of your *issue*?\r\n- [x] Bug report (encountered problems with youtube-dl)\r\n- 
[ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n---\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['--verbose', '-iw', '--write-description', '-R', '10', '-o', '%(title)s-\r\n%(id)s.%(ext)s', '-a', 'youtube-dl.txt', '--external-downloader', 'curl', '--external-downloader-arg\r\ns', '-C - -L']\r\n[debug] Batch file urls: ['http://www.watchindianporn.net/video/up-bhoji-lifting-her-saree-and-expos\r\ning-her-dirty-gaand-qsnHOGU7Ey1.html']\r\n[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252\r\n[debug] youtube-dl version 2017.06.12\r\n[debug] Python version 3.4.4 - Windows-7-6.1.7601-SP1\r\n[debug] exe versions: none\r\n[debug] Proxy map: {}\r\n[WatchIndianPorn] up-bhoji-lifting-her-saree-and-exposing-her-dirty-gaand: Downloading webpage\r\nERROR: Unable to extract url; please report this issue on https://yt-dl.org/bug . Make sure you are\r\nusing the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verb\r\nose flag and include its complete output.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpkyaecyzu\\build\\youtube_dl\\Youtu\r\nbeDL.py\", line 762, in extract_info\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpkyaecyzu\\build\\youtube_dl\\extra\r\nctor\\common.py\", line 433, in extract\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpkyaecyzu\\build\\youtube_dl\\extra\r\nctor\\watchindianporn.py\", line 44, in _real_extract\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpkyaecyzu\\build\\youtube_dl\\extra\r\nctor\\common.py\", line 791, in _html_search_regex\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\rg3\\tmpkyaecyzu\\build\\youtube_dl\\extra\r\nctor\\common.py\", line 782, in _search_regex\r\nyoutube_dl.utils.RegexNotFoundError: Unable to extract url; please report this issue on https://yt-d\r\nl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to c\r\nall youtube-dl with the --verbose flag and include its complete output.\r\n```\r\n---\r\nwww.watchindianporn.net parser is broken. 
console log submitted above.\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n unified_strdate,\n parse_duration,\n int_or_none,\n)\n\n\nclass WatchIndianPornIE(InfoExtractor):\n IE_DESC = 'Watch Indian Porn'\n _VALID_URL = r'https?://(?:www\\.)?watchindianporn\\.net/(?:[^/]+/)*video/(?P<display_id>[^/]+)-(?P<id>[a-zA-Z0-9]+)\\.html'\n _TEST = {\n 'url': 'http://www.watchindianporn.net/video/hot-milf-from-kerala-shows-off-her-gorgeous-large-breasts-on-camera-RZa2avywNPa.html',\n 'md5': '249589a164dde236ec65832bfce17440',\n 'info_dict': {\n 'id': 'RZa2avywNPa',\n 'display_id': 'hot-milf-from-kerala-shows-off-her-gorgeous-large-breasts-on-camera',\n 'ext': 'mp4',\n 'title': 'Hot milf from kerala shows off her gorgeous large breasts on camera',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'uploader': 'LoveJay',\n 'upload_date': '20160428',\n 'duration': 226,\n 'view_count': int,\n 'comment_count': int,\n 'categories': list,\n 'age_limit': 18,\n }\n }\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n display_id = mobj.group('display_id')\n\n webpage = self._download_webpage(url, display_id)\n\n video_url = self._html_search_regex(\n r\"url: escape\\('([^']+)'\\)\", webpage, 'url')\n\n title = self._html_search_regex(\n r'<h2 class=\"he2\"><span>(.*?)</span>',\n webpage, 'title')\n thumbnail = self._html_search_regex(\n r'<span id=\"container\"><img\\s+src=\"([^\"]+)\"',\n webpage, 'thumbnail', fatal=False)\n\n uploader = self._html_search_regex(\n r'class=\"aupa\">\\s*(.*?)</a>',\n webpage, 'uploader')\n upload_date = unified_strdate(self._html_search_regex(\n r'Added: <strong>(.+?)</strong>', webpage, 'upload date', fatal=False))\n\n duration = parse_duration(self._search_regex(\n r'<td>Time:\\s*</td>\\s*<td align=\"right\"><span>\\s*(.+?)\\s*</span>',\n webpage, 'duration', fatal=False))\n\n view_count = int_or_none(self._search_regex(\n r'<td>Views:\\s*</td>\\s*<td align=\"right\"><span>\\s*(\\d+)\\s*</span>',\n webpage, 'view count', fatal=False))\n comment_count = int_or_none(self._search_regex(\n r'<td>Comments:\\s*</td>\\s*<td align=\"right\"><span>\\s*(\\d+)\\s*</span>',\n webpage, 'comment count', fatal=False))\n\n categories = re.findall(\n r'<a href=\"[^\"]+/search/video/desi\"><span>([^<]+)</span></a>',\n webpage)\n\n return {\n 'id': video_id,\n 'display_id': display_id,\n 'url': video_url,\n 'http_headers': {\n 'Referer': url,\n },\n 'title': title,\n 'thumbnail': thumbnail,\n 'uploader': uploader,\n 'upload_date': upload_date,\n 'duration': duration,\n 'view_count': view_count,\n 'comment_count': comment_count,\n 'categories': categories,\n 'age_limit': 18,\n }\n", "path": "youtube_dl/extractor/watchindianporn.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import parse_duration\n\n\nclass WatchIndianPornIE(InfoExtractor):\n IE_DESC = 'Watch Indian Porn'\n _VALID_URL = r'https?://(?:www\\.)?watchindianporn\\.net/(?:[^/]+/)*video/(?P<display_id>[^/]+)-(?P<id>[a-zA-Z0-9]+)\\.html'\n _TEST = {\n 'url': 'http://www.watchindianporn.net/video/hot-milf-from-kerala-shows-off-her-gorgeous-large-breasts-on-camera-RZa2avywNPa.html',\n 'md5': '249589a164dde236ec65832bfce17440',\n 'info_dict': {\n 'id': 'RZa2avywNPa',\n 'display_id': 'hot-milf-from-kerala-shows-off-her-gorgeous-large-breasts-on-camera',\n 'ext': 
'mp4',\n 'title': 'Hot milf from kerala shows off her gorgeous large breasts on camera',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'duration': 226,\n 'view_count': int,\n 'categories': list,\n 'age_limit': 18,\n }\n }\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n display_id = mobj.group('display_id')\n\n webpage = self._download_webpage(url, display_id)\n\n info_dict = self._parse_html5_media_entries(url, webpage, video_id)[0]\n\n title = self._html_search_regex((\n r'<title>(.+?)\\s*-\\s*Indian\\s+Porn</title>',\n r'<h4>(.+?)</h4>'\n ), webpage, 'title')\n\n duration = parse_duration(self._search_regex(\n r'Time:\\s*<strong>\\s*(.+?)\\s*</strong>',\n webpage, 'duration', fatal=False))\n\n view_count = int(self._search_regex(\n r'(?s)Time:\\s*<strong>.*?</strong>.*?<strong>\\s*(\\d+)\\s*</strong>',\n webpage, 'view count', fatal=False))\n\n categories = re.findall(\n r'<a[^>]+class=[\\'\"]categories[\\'\"][^>]*>\\s*([^<]+)\\s*</a>',\n webpage)\n\n info_dict.update({\n 'id': video_id,\n 'display_id': display_id,\n 'http_headers': {\n 'Referer': url,\n },\n 'title': title,\n 'duration': duration,\n 'view_count': view_count,\n 'categories': categories,\n 'age_limit': 18,\n })\n\n return info_dict\n", "path": "youtube_dl/extractor/watchindianporn.py"}]} | 2,222 | 931 |
gh_patches_debug_14145 | rasdani/github-patches | git_diff | facebookresearch__nevergrad-11 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hello-world install question
## Steps to reproduce
Install nevergrad
```
pip3 install -e [email protected]:facebookresearch/nevergrad@master#egg=nevergrad
Obtaining nevergrad from [email protected]:facebookresearch/nevergrad@master#egg=nevergrad
Cloning [email protected]:facebookresearch/nevergrad (to revision master) to ./src/nevergrad
Warning: Permanently added the RSA host key for IP address '192.30.255.112' to the list of known hosts.
[email protected]: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Command "git clone -q [email protected]:facebookresearch/nevergrad /Users/ME/Documents/workspace/temp/src/nevergrad" failed with error code 128 in None
```
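The failure above is an SSH authentication problem rather than a packaging one: the `[email protected]:...` form requires a GitHub SSH key, and pip's VCS syntax also expects a `git+` scheme prefix. An HTTPS form such as `pip3 install -e git+https://github.com/facebookresearch/nevergrad@master#egg=nevergrad` should avoid the key requirement (assuming standard pip VCS URL handling).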
OK. Trying the git clone and `python3 setup.py install` method instead; that seems to work.
Run the sample program:
python3 mynevergrad.py
```
from nevergrad.optimization import optimizerlib
def square(x):
return (x - .5)**2
optimizer = optimizerlib.OnePlusOne(dimension=1, budget=100, num_workers=5)
recommendation = optimizer.optimize(square, executor=None, batch_mode=True)
```
## Observed Results
```
Traceback (most recent call last):
File "mynevergrad.py", line 6, in <module>
from nevergrad.optimization import optimizerlib
ModuleNotFoundError: No module named 'nevergrad.optimization'
```
## Expected Results
It should run the sample
## Relevant Code
```
import pkg_resources
for d in pkg_resources.working_set:
print(d)
```
DOES include `nevergrad 0.1.0`
This is very likely an install, python3, homebrew "installed in user directory", or paths issue, but given that `nevergrad 0.1.0` shows up in the list, it is odd...
--- END ISSUE ---
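Since `nevergrad 0.1.0` appears in `pkg_resources.working_set` but `nevergrad.optimization` cannot be imported, a quick check of which submodules were actually installed narrows the problem down. A sketch (Python 3.6+), run against the `setup.py install` described above:

```python
import pkgutil
import nevergrad

# List the submodules/subpackages that actually made it into site-packages.
installed = sorted(m.name for m in pkgutil.iter_modules(nevergrad.__path__))
print(installed)  # 'optimization' missing here points at setup.py, not at paths
```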
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
3 #
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 # from distutils.core import setup
8 from setuptools import setup
9
10
11 with open('requirements.txt') as f:
12 requirements = f.read().splitlines()
13
14
15 setup(name='nevergrad',
16 version='0.1.0',
17 description='Gradient-free optimization toolbox',
18 author='Facebook AI Research',
19 packages=['nevergrad'],
20 install_requires=requirements,)
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -7,14 +7,22 @@
# from distutils.core import setup
from setuptools import setup
-
with open('requirements.txt') as f:
requirements = f.read().splitlines()
-
-setup(name='nevergrad',
- version='0.1.0',
- description='Gradient-free optimization toolbox',
- author='Facebook AI Research',
- packages=['nevergrad'],
- install_requires=requirements,)
+setup(
+ name='nevergrad',
+ version='0.1.0',
+ description='Gradient-free optimization toolbox',
+ author='Facebook AI Research',
+ packages=[
+ 'nevergrad',
+ 'nevergrad.benchmark',
+ 'nevergrad.benchmark.additional',
+ 'nevergrad.common',
+ 'nevergrad.functions',
+ 'nevergrad.instrumentation',
+ 'nevergrad.optimization',
+ ],
+ install_requires=requirements,
+)
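The patch enumerates the subpackages by hand. A common alternative, not what the patch does but the usual setuptools idiom for avoiding exactly this omission, is to let `find_packages()` discover them:

```python
from setuptools import setup, find_packages

with open('requirements.txt') as f:
    requirements = f.read().splitlines()

setup(
    name='nevergrad',
    version='0.1.0',
    description='Gradient-free optimization toolbox',
    author='Facebook AI Research',
    packages=find_packages(),  # picks up nevergrad.* subpackages automatically
    install_requires=requirements,
)
```

Either way, the original `packages=['nevergrad']` installed only the top-level package, which is exactly why `nevergrad.optimization` was missing at import time.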
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,14 +7,22 @@\n # from distutils.core import setup\n from setuptools import setup\n \n-\n with open('requirements.txt') as f:\n requirements = f.read().splitlines()\n \n-\n-setup(name='nevergrad',\n- version='0.1.0',\n- description='Gradient-free optimization toolbox',\n- author='Facebook AI Research',\n- packages=['nevergrad'],\n- install_requires=requirements,)\n+setup(\n+ name='nevergrad',\n+ version='0.1.0',\n+ description='Gradient-free optimization toolbox',\n+ author='Facebook AI Research',\n+ packages=[\n+ 'nevergrad',\n+ 'nevergrad.benchmark',\n+ 'nevergrad.benchmark.additional',\n+ 'nevergrad.common',\n+ 'nevergrad.functions',\n+ 'nevergrad.instrumentation',\n+ 'nevergrad.optimization',\n+ ],\n+ install_requires=requirements,\n+)\n", "issue": "Hello-world install question\n## Steps to reproduce\r\n\r\nInstall nevergrad\r\n\r\n```\r\npip3 install -e [email protected]:facebookresearch/nevergrad@master#egg=nevergrad\r\nObtaining nevergrad from [email protected]:facebookresearch/nevergrad@master#egg=nevergrad\r\n Cloning [email protected]:facebookresearch/nevergrad (to revision master) to ./src/nevergrad\r\nWarning: Permanently added the RSA host key for IP address '192.30.255.112' to the list of known hosts.\r\[email protected]: Permission denied (publickey).\r\nfatal: Could not read from remote repository.\r\n\r\nPlease make sure you have the correct access rights\r\nand the repository exists.\r\nCommand \"git clone -q [email protected]:facebookresearch/nevergrad /Users/ME/Documents/workspace/temp/src/nevergrad\" failed with error code 128 in None\r\n```\r\n\r\nok. trying git clone and `python3 setup.py install` method. That seems to work.\r\n\r\nrun the sample program:\r\n\r\npython3 mynevergrad.py\r\n\r\n```\r\nfrom nevergrad.optimization import optimizerlib\r\n\r\ndef square(x):\r\n return (x - .5)**2\r\n\r\noptimizer = optimizerlib.OnePlusOne(dimension=1, budget=100, num_workers=5)\r\nrecommendation = optimizer.optimize(square, executor=None, batch_mode=True)\r\n```\r\n\r\n\r\n## Observed Results\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"mynevergrad.py\", line 6, in <module>\r\n from nevergrad.optimization import optimizerlib\r\nModuleNotFoundError: No module named 'nevergrad.optimization'\r\n```\r\n\r\n\r\n## Expected Results\r\n\r\nIt should run the sample\r\n\r\n## Relevant Code\r\n\r\n```\r\nimport pkg_resources\r\nfor d in pkg_resources.working_set:\r\n\tprint(d)\r\n```\r\nDOES include `nevergrad 0.1.0`\r\n\r\n\r\nThis is very likely an install, python3, homebrew \"installed in user directory\", or paths issue, but given that `nevergrad 0.1.0` shows up in the list, it is odd...\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n# from distutils.core import setup\nfrom setuptools import setup\n\n\nwith open('requirements.txt') as f:\n requirements = f.read().splitlines()\n\n\nsetup(name='nevergrad',\n version='0.1.0',\n description='Gradient-free optimization toolbox',\n author='Facebook AI Research',\n packages=['nevergrad'],\n install_requires=requirements,)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n# from distutils.core import setup\nfrom setuptools import setup\n\nwith open('requirements.txt') as f:\n requirements = f.read().splitlines()\n\nsetup(\n name='nevergrad',\n version='0.1.0',\n description='Gradient-free optimization toolbox',\n author='Facebook AI Research',\n packages=[\n 'nevergrad',\n 'nevergrad.benchmark',\n 'nevergrad.benchmark.additional',\n 'nevergrad.common',\n 'nevergrad.functions',\n 'nevergrad.instrumentation',\n 'nevergrad.optimization',\n ],\n install_requires=requirements,\n)\n", "path": "setup.py"}]} | 870 | 219 |
gh_patches_debug_38179 | rasdani/github-patches | git_diff | XanaduAI__strawberryfields-589 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
quantum neural net example output is NaN when using more than 1 mode
#### Issue description
The quantum_neural_network.py example output is NaN when using more than one mode (it works as expected for a single mode).

* *Expected behavior:* Not NaN

* *Actual behavior:*
```
Beginning optimization
Rep: 0 Cost: 7.0005 Fidelity: 0.0000 Trace: 1.0000
Rep: 1 Cost: nan Fidelity: nan Trace: nan
Rep: 2 Cost: nan Fidelity: nan Trace: nan
Rep: 3 Cost: nan Fidelity: nan Trace: nan
```
* *Reproduces how often:* 100%
* *System information:*
```
Strawberry Fields: a Python library for continuous-variable quantum circuits.
Copyright 2018-2020 Xanadu Quantum Technologies Inc.
Python version: 3.8.5
Platform info: Linux-5.8.0-53-generic-x86_64-with-glibc2.10
Installation path: /home/jonas/anaconda3/envs/strawberry/lib/python3.8/site-packages/strawberryfields
Strawberry Fields version: 0.17.0
Numpy version: 1.19.2
Scipy version: 1.4.1
SymPy version: 1.7.1
NetworkX version: 2.5
The Walrus version: 0.14.0
Blackbird version: 0.3.1-dev
TensorFlow version: 2.2.0
```
#### Source code and tracebacks
Update line 135 of the example quantum_neural_network.py from `modes = 1` to `modes = 2`.
--- END ISSUE ---
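The patch below suggests the root cause: with more than one mode some amplitudes of `ket - target_state` are exactly zero, and the gradient of `tf.abs` on a complex tensor is undefined (0/0) at zero, so the very first update already fills the weights with NaN. A minimal sketch of that failure mode, assuming TensorFlow 2.x complex gradients:

```python
import tensorflow as tf

z = tf.Variable([0.0 + 0.0j], dtype=tf.complex64)
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.abs(z))  # |z| = sqrt(re^2 + im^2)
print(tape.gradient(loss, z))  # the zero entry yields NaN, not a finite gradient
```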
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/quantum_neural_network.py`
Content:
```
1 #!/usr/bin/env python3
2 import numpy as np
3 import tensorflow as tf
4 import strawberryfields as sf
5 from strawberryfields import ops
6
7
8 # =========================================================================
9 # Utility functions
10 # =========================================================================
11
12
13 # define interferometer
14 def interferometer(params, q):
15 """Parameterised interferometer acting on ``N`` modes.
16
17 Args:
18 params (list[float]): list of length ``max(1, N-1) + (N-1)*N`` parameters.
19
20 * The first ``N(N-1)/2`` parameters correspond to the beamsplitter angles
21 * The second ``N(N-1)/2`` parameters correspond to the beamsplitter phases
22 * The final ``N-1`` parameters correspond to local rotation on the first N-1 modes
23
24 q (list[RegRef]): list of Strawberry Fields quantum registers the interferometer
25 is to be applied to
26 """
27 N = len(q)
28 theta = params[:N*(N-1)//2]
29 phi = params[N*(N-1)//2:N*(N-1)]
30 rphi = params[-N+1:]
31
32 if N == 1:
33 # the interferometer is a single rotation
34 ops.Rgate(rphi[0]) | q[0]
35 return
36
37 n = 0 # keep track of free parameters
38
39 # Apply the rectangular beamsplitter array
40 # The array depth is N
41 for l in range(N):
42 for k, (q1, q2) in enumerate(zip(q[:-1], q[1:])):
43 # skip even or odd pairs depending on layer
44 if (l + k) % 2 != 1:
45 ops.BSgate(theta[n], phi[n]) | (q1, q2)
46 n += 1
47
48 # apply the final local phase shifts to all modes except the last one
49 for i in range(max(1, N - 1)):
50 ops.Rgate(rphi[i]) | q[i]
51 # Rgate only applied to first N - 1 modes
52
53
54 # define layer
55 def layer(params, q):
56 """CV quantum neural network layer acting on ``N`` modes.
57
58 Args:
59 params (list[float]): list of length ``2*(max(1, N-1) + N**2 + n)`` containing
60 the number of parameters for the layer
61 q (list[RegRef]): list of Strawberry Fields quantum registers the layer
62 is to be applied to
63 """
64 N = len(q)
65 M = int(N * (N - 1)) + max(1, N - 1)
66
67 int1 = params[:M]
68 s = params[M:M+N]
69 int2 = params[M+N:2*M+N]
70 dr = params[2*M+N:2*M+2*N]
71 dp = params[2*M+2*N:2*M+3*N]
72 k = params[2*M+3*N:2*M+4*N]
73
74 # begin layer
75 interferometer(int1, q)
76
77 for i in range(N):
78 ops.Sgate(s[i]) | q[i]
79
80 interferometer(int2, q)
81
82 for i in range(N):
83 ops.Dgate(dr[i], dp[i]) | q[i]
84 ops.Kgate(k[i]) | q[i]
85 # end layer
86
87
88 def init_weights(modes, layers, active_sd=0.0001, passive_sd=0.1):
89 """Initialize a 2D TensorFlow Variable containing normally-distributed
90 random weights for an ``N`` mode quantum neural network with ``L`` layers.
91
92 Args:
93 modes (int): the number of modes in the quantum neural network
94 layers (int): the number of layers in the quantum neural network
95 active_sd (float): the standard deviation used when initializing
96 the normally-distributed weights for the active parameters
97 (displacement, squeezing, and Kerr magnitude)
98 passive_sd (float): the standard deviation used when initializing
99 the normally-distributed weights for the passive parameters
100 (beamsplitter angles and all gate phases)
101
102 Returns:
103 tf.Variable[tf.float32]: A TensorFlow Variable of shape
104 ``[layers, 2*(max(1, modes-1) + modes**2 + modes)]``, where the Lth
105 row represents the layer parameters for the Lth layer.
106 """
107 # Number of interferometer parameters:
108 M = int(modes * (modes - 1)) + max(1, modes - 1)
109
110 # Create the TensorFlow variables
111 int1_weights = tf.random.normal(shape=[layers, M], stddev=passive_sd)
112 s_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)
113 int2_weights = tf.random.normal(shape=[layers, M], stddev=passive_sd)
114 dr_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)
115 dp_weights = tf.random.normal(shape=[layers, modes], stddev=passive_sd)
116 k_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)
117
118 weights = tf.concat([int1_weights, s_weights, int2_weights, dr_weights, dp_weights, k_weights], axis=1)
119 weights = tf.Variable(weights)
120
121 return weights
122
123
124 # =========================================================================
125 # Define the optimization problem
126 # =========================================================================
127
128
129 # set the random seed
130 tf.random.set_seed(137)
131 np.random.seed(137)
132
133
134 # define width and depth of CV quantum neural network
135 modes = 1
136 layers = 8
137 cutoff_dim = 6
138
139
140 # defining desired state (single photon state)
141 target_state = np.zeros(cutoff_dim)
142 target_state[1] = 1
143 target_state = tf.constant(target_state, dtype=tf.complex64)
144
145
146 # initialize engine and program
147 eng = sf.Engine(backend="tf", backend_options={"cutoff_dim": cutoff_dim})
148 qnn = sf.Program(modes)
149
150
151 # initialize QNN weights
152 weights = init_weights(modes, layers)
153 num_params = np.prod(weights.shape)
154
155
156 # Create array of Strawberry Fields symbolic gate arguments, matching
157 # the size of the weights Variable.
158 sf_params = np.arange(num_params).reshape(weights.shape).astype(np.str)
159 sf_params = np.array([qnn.params(*i) for i in sf_params])
160
161
162 # Construct the symbolic Strawberry Fields program by
163 # looping and applying layers to the program.
164 with qnn.context as q:
165 for k in range(layers):
166 layer(sf_params[k], q)
167
168
169 def cost(weights):
170 # Create a dictionary mapping from the names of the Strawberry Fields
171 # symbolic gate parameters to the TensorFlow weight values.
172 mapping = {p.name: w for p, w in zip(sf_params.flatten(), tf.reshape(weights, [-1]))}
173
174 # run the engine
175 state = eng.run(qnn, args=mapping).state
176 ket = state.ket()
177
178 difference = tf.reduce_sum(tf.abs(ket - target_state))
179 fidelity = tf.abs(tf.reduce_sum(tf.math.conj(ket) * target_state)) ** 2
180 return difference, fidelity, ket, tf.math.real(state.trace())
181
182
183 # set up optimizer
184 opt = tf.keras.optimizers.Adam()
185 cost_before, fidelity_before, _, _ = cost(weights)
186
187 print("Beginning optimization")
188
189
190 # Perform the optimization
191 for i in range(1000):
192 # reset the engine if it has already been executed
193 if eng.run_progs:
194 eng.reset()
195
196 with tf.GradientTape() as tape:
197 loss, fid, _, trace = cost(weights)
198
199 # one repetition of the optimization
200 gradients = tape.gradient(loss, weights)
201 opt.apply_gradients(zip([gradients], [weights]))
202
203 # Prints progress at every rep
204 if i % 1 == 0:
205 print("Rep: {} Cost: {:.4f} Fidelity: {:.4f} Trace: {:.4f}".format(i, loss, fid, trace))
206
207
208 cost_after, fidelity_after, ket_after, _ = cost(weights)
209
210
211 print("\nFidelity before optimization: ", fidelity_before.numpy())
212 print("Fidelity after optimization: ", fidelity_after.numpy())
213 print("\nTarget state: ", target_state.numpy())
214 print("Output state: ", np.round(ket_after.numpy(), decimals=3))
215
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/quantum_neural_network.py b/examples/quantum_neural_network.py
--- a/examples/quantum_neural_network.py
+++ b/examples/quantum_neural_network.py
@@ -25,9 +25,9 @@
is to be applied to
"""
N = len(q)
- theta = params[:N*(N-1)//2]
- phi = params[N*(N-1)//2:N*(N-1)]
- rphi = params[-N+1:]
+ theta = params[: N * (N - 1) // 2]
+ phi = params[N * (N - 1) // 2 : N * (N - 1)]
+ rphi = params[-N + 1 :]
if N == 1:
# the interferometer is a single rotation
@@ -65,11 +65,11 @@
M = int(N * (N - 1)) + max(1, N - 1)
int1 = params[:M]
- s = params[M:M+N]
- int2 = params[M+N:2*M+N]
- dr = params[2*M+N:2*M+2*N]
- dp = params[2*M+2*N:2*M+3*N]
- k = params[2*M+3*N:2*M+4*N]
+ s = params[M : M + N]
+ int2 = params[M + N : 2 * M + N]
+ dr = params[2 * M + N : 2 * M + 2 * N]
+ dp = params[2 * M + 2 * N : 2 * M + 3 * N]
+ k = params[2 * M + 3 * N : 2 * M + 4 * N]
# begin layer
interferometer(int1, q)
@@ -115,7 +115,9 @@
dp_weights = tf.random.normal(shape=[layers, modes], stddev=passive_sd)
k_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)
- weights = tf.concat([int1_weights, s_weights, int2_weights, dr_weights, dp_weights, k_weights], axis=1)
+ weights = tf.concat(
+ [int1_weights, s_weights, int2_weights, dr_weights, dp_weights, k_weights], axis=1
+ )
weights = tf.Variable(weights)
return weights
@@ -166,6 +168,20 @@
layer(sf_params[k], q)
+def safe_abs(x):
+ # Helper function to deal with tensor terms near zero
+
+ # Check where we have near zero terms
+ EPS = 1e-15
+ x = tf.where(tf.abs(x) < EPS, tf.zeros_like(x), x)
+ zero = tf.constant(0, dtype=tf.complex64)
+ x_ok = tf.not_equal(x, zero)
+
+ # To make sure, swap out the zeros with ones
+ safe_x = tf.where(x_ok, x, tf.ones_like(x, dtype=tf.complex64))
+ return tf.where(x_ok, tf.abs(safe_x), tf.zeros_like(x, dtype=tf.float32))
+
+
def cost(weights):
# Create a dictionary mapping from the names of the Strawberry Fields
# symbolic gate parameters to the TensorFlow weight values.
@@ -175,7 +191,7 @@
state = eng.run(qnn, args=mapping).state
ket = state.ket()
- difference = tf.reduce_sum(tf.abs(ket - target_state))
+ difference = tf.reduce_sum(safe_abs(ket - target_state))
fidelity = tf.abs(tf.reduce_sum(tf.math.conj(ket) * target_state)) ** 2
return difference, fidelity, ket, tf.math.real(state.trace())
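The `safe_abs` helper above uses the usual double-`tf.where` trick: near-zero entries are routed through a branch whose value and gradient are both well defined, so the non-selected `tf.abs` branch can no longer contribute NaN. A quick sanity check, a sketch that reuses `safe_abs` exactly as defined in the patch:

```python
import tensorflow as tf

# safe_abs is assumed to be the double-tf.where helper added by the patch above.
z = tf.Variable([0.0 + 0.0j, 0.1 + 0.2j], dtype=tf.complex64)
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(safe_abs(z))
print(tape.gradient(loss, z))  # finite everywhere, including the exactly-zero entry
```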
| {"golden_diff": "diff --git a/examples/quantum_neural_network.py b/examples/quantum_neural_network.py\n--- a/examples/quantum_neural_network.py\n+++ b/examples/quantum_neural_network.py\n@@ -25,9 +25,9 @@\n is to be applied to\n \"\"\"\n N = len(q)\n- theta = params[:N*(N-1)//2]\n- phi = params[N*(N-1)//2:N*(N-1)]\n- rphi = params[-N+1:]\n+ theta = params[: N * (N - 1) // 2]\n+ phi = params[N * (N - 1) // 2 : N * (N - 1)]\n+ rphi = params[-N + 1 :]\n \n if N == 1:\n # the interferometer is a single rotation\n@@ -65,11 +65,11 @@\n M = int(N * (N - 1)) + max(1, N - 1)\n \n int1 = params[:M]\n- s = params[M:M+N]\n- int2 = params[M+N:2*M+N]\n- dr = params[2*M+N:2*M+2*N]\n- dp = params[2*M+2*N:2*M+3*N]\n- k = params[2*M+3*N:2*M+4*N]\n+ s = params[M : M + N]\n+ int2 = params[M + N : 2 * M + N]\n+ dr = params[2 * M + N : 2 * M + 2 * N]\n+ dp = params[2 * M + 2 * N : 2 * M + 3 * N]\n+ k = params[2 * M + 3 * N : 2 * M + 4 * N]\n \n # begin layer\n interferometer(int1, q)\n@@ -115,7 +115,9 @@\n dp_weights = tf.random.normal(shape=[layers, modes], stddev=passive_sd)\n k_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n \n- weights = tf.concat([int1_weights, s_weights, int2_weights, dr_weights, dp_weights, k_weights], axis=1)\n+ weights = tf.concat(\n+ [int1_weights, s_weights, int2_weights, dr_weights, dp_weights, k_weights], axis=1\n+ )\n weights = tf.Variable(weights)\n \n return weights\n@@ -166,6 +168,20 @@\n layer(sf_params[k], q)\n \n \n+def safe_abs(x):\n+ # Helper function to deal with tensor terms near zero\n+\n+ # Check where we have near zero terms\n+ EPS = 1e-15\n+ x = tf.where(tf.abs(x) < EPS, tf.zeros_like(x), x)\n+ zero = tf.constant(0, dtype=tf.complex64)\n+ x_ok = tf.not_equal(x, zero)\n+\n+ # To make sure, swap out the zeros with ones\n+ safe_x = tf.where(x_ok, x, tf.ones_like(x, dtype=tf.complex64))\n+ return tf.where(x_ok, tf.abs(safe_x), tf.zeros_like(x, dtype=tf.float32))\n+\n+\n def cost(weights):\n # Create a dictionary mapping from the names of the Strawberry Fields\n # symbolic gate parameters to the TensorFlow weight values.\n@@ -175,7 +191,7 @@\n state = eng.run(qnn, args=mapping).state\n ket = state.ket()\n \n- difference = tf.reduce_sum(tf.abs(ket - target_state))\n+ difference = tf.reduce_sum(safe_abs(ket - target_state))\n fidelity = tf.abs(tf.reduce_sum(tf.math.conj(ket) * target_state)) ** 2\n return difference, fidelity, ket, tf.math.real(state.trace())\n", "issue": "quantum neural net example output is Nan when using more than 1 mode\n#### Issue description\r\n\r\nThe quantum_neural_network.py example output is Nan when using more than 1 mode. 
(working as expected for 1 mode)\r\n\r\n* *Expected behavior:* Not Nan\r\n\r\n* *Actual behavior:* ```Beginning optimization\r\nRep: 0 Cost: 7.0005 Fidelity: 0.0000 Trace: 1.0000\r\nRep: 1 Cost: nan Fidelity: nan Trace: nan\r\nRep: 2 Cost: nan Fidelity: nan Trace: nan\r\nRep: 3 Cost: nan Fidelity: nan Trace: nan```\r\n\r\n* *Reproduces how often:* 100%\r\n* *System information:* \r\n``` Strawberry Fields: a Python library for continuous-variable quantum circuits.\r\nCopyright 2018-2020 Xanadu Quantum Technologies Inc.\r\n\r\nPython version: 3.8.5\r\nPlatform info: Linux-5.8.0-53-generic-x86_64-with-glibc2.10\r\nInstallation path: /home/jonas/anaconda3/envs/strawberry/lib/python3.8/site-packages/strawberryfields\r\nStrawberry Fields version: 0.17.0\r\nNumpy version: 1.19.2\r\nScipy version: 1.4.1\r\nSymPy version: 1.7.1\r\nNetworkX version: 2.5\r\nThe Walrus version: 0.14.0\r\nBlackbird version: 0.3.1-dev\r\nTensorFlow version: 2.2.0\r\n```\r\n\r\n\r\n#### Source code and tracebacks\r\nupdate line 135 of the example quantum_neural_network.py from ```modes = 1``` to ```modes = 2``` \r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport numpy as np\nimport tensorflow as tf\nimport strawberryfields as sf\nfrom strawberryfields import ops\n\n\n# =========================================================================\n# Utility functions\n# =========================================================================\n\n\n# define interferometer\ndef interferometer(params, q):\n \"\"\"Parameterised interferometer acting on ``N`` modes.\n\n Args:\n params (list[float]): list of length ``max(1, N-1) + (N-1)*N`` parameters.\n\n * The first ``N(N-1)/2`` parameters correspond to the beamsplitter angles\n * The second ``N(N-1)/2`` parameters correspond to the beamsplitter phases\n * The final ``N-1`` parameters correspond to local rotation on the first N-1 modes\n\n q (list[RegRef]): list of Strawberry Fields quantum registers the interferometer\n is to be applied to\n \"\"\"\n N = len(q)\n theta = params[:N*(N-1)//2]\n phi = params[N*(N-1)//2:N*(N-1)]\n rphi = params[-N+1:]\n\n if N == 1:\n # the interferometer is a single rotation\n ops.Rgate(rphi[0]) | q[0]\n return\n\n n = 0 # keep track of free parameters\n\n # Apply the rectangular beamsplitter array\n # The array depth is N\n for l in range(N):\n for k, (q1, q2) in enumerate(zip(q[:-1], q[1:])):\n # skip even or odd pairs depending on layer\n if (l + k) % 2 != 1:\n ops.BSgate(theta[n], phi[n]) | (q1, q2)\n n += 1\n\n # apply the final local phase shifts to all modes except the last one\n for i in range(max(1, N - 1)):\n ops.Rgate(rphi[i]) | q[i]\n # Rgate only applied to first N - 1 modes\n\n\n# define layer\ndef layer(params, q):\n \"\"\"CV quantum neural network layer acting on ``N`` modes.\n\n Args:\n params (list[float]): list of length ``2*(max(1, N-1) + N**2 + n)`` containing\n the number of parameters for the layer\n q (list[RegRef]): list of Strawberry Fields quantum registers the layer\n is to be applied to\n \"\"\"\n N = len(q)\n M = int(N * (N - 1)) + max(1, N - 1)\n\n int1 = params[:M]\n s = params[M:M+N]\n int2 = params[M+N:2*M+N]\n dr = params[2*M+N:2*M+2*N]\n dp = params[2*M+2*N:2*M+3*N]\n k = params[2*M+3*N:2*M+4*N]\n\n # begin layer\n interferometer(int1, q)\n\n for i in range(N):\n ops.Sgate(s[i]) | q[i]\n\n interferometer(int2, q)\n\n for i in range(N):\n ops.Dgate(dr[i], dp[i]) | q[i]\n ops.Kgate(k[i]) | q[i]\n # end layer\n\n\ndef init_weights(modes, layers, active_sd=0.0001, passive_sd=0.1):\n 
\"\"\"Initialize a 2D TensorFlow Variable containing normally-distributed\n random weights for an ``N`` mode quantum neural network with ``L`` layers.\n\n Args:\n modes (int): the number of modes in the quantum neural network\n layers (int): the number of layers in the quantum neural network\n active_sd (float): the standard deviation used when initializing\n the normally-distributed weights for the active parameters\n (displacement, squeezing, and Kerr magnitude)\n passive_sd (float): the standard deviation used when initializing\n the normally-distributed weights for the passive parameters\n (beamsplitter angles and all gate phases)\n\n Returns:\n tf.Variable[tf.float32]: A TensorFlow Variable of shape\n ``[layers, 2*(max(1, modes-1) + modes**2 + modes)]``, where the Lth\n row represents the layer parameters for the Lth layer.\n \"\"\"\n # Number of interferometer parameters:\n M = int(modes * (modes - 1)) + max(1, modes - 1)\n\n # Create the TensorFlow variables\n int1_weights = tf.random.normal(shape=[layers, M], stddev=passive_sd)\n s_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n int2_weights = tf.random.normal(shape=[layers, M], stddev=passive_sd)\n dr_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n dp_weights = tf.random.normal(shape=[layers, modes], stddev=passive_sd)\n k_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n\n weights = tf.concat([int1_weights, s_weights, int2_weights, dr_weights, dp_weights, k_weights], axis=1)\n weights = tf.Variable(weights)\n\n return weights\n\n\n# =========================================================================\n# Define the optimization problem\n# =========================================================================\n\n\n# set the random seed\ntf.random.set_seed(137)\nnp.random.seed(137)\n\n\n# define width and depth of CV quantum neural network\nmodes = 1\nlayers = 8\ncutoff_dim = 6\n\n\n# defining desired state (single photon state)\ntarget_state = np.zeros(cutoff_dim)\ntarget_state[1] = 1\ntarget_state = tf.constant(target_state, dtype=tf.complex64)\n\n\n# initialize engine and program\neng = sf.Engine(backend=\"tf\", backend_options={\"cutoff_dim\": cutoff_dim})\nqnn = sf.Program(modes)\n\n\n# initialize QNN weights\nweights = init_weights(modes, layers)\nnum_params = np.prod(weights.shape)\n\n\n# Create array of Strawberry Fields symbolic gate arguments, matching\n# the size of the weights Variable.\nsf_params = np.arange(num_params).reshape(weights.shape).astype(np.str)\nsf_params = np.array([qnn.params(*i) for i in sf_params])\n\n\n# Construct the symbolic Strawberry Fields program by\n# looping and applying layers to the program.\nwith qnn.context as q:\n for k in range(layers):\n layer(sf_params[k], q)\n\n\ndef cost(weights):\n # Create a dictionary mapping from the names of the Strawberry Fields\n # symbolic gate parameters to the TensorFlow weight values.\n mapping = {p.name: w for p, w in zip(sf_params.flatten(), tf.reshape(weights, [-1]))}\n\n # run the engine\n state = eng.run(qnn, args=mapping).state\n ket = state.ket()\n\n difference = tf.reduce_sum(tf.abs(ket - target_state))\n fidelity = tf.abs(tf.reduce_sum(tf.math.conj(ket) * target_state)) ** 2\n return difference, fidelity, ket, tf.math.real(state.trace())\n\n\n# set up optimizer\nopt = tf.keras.optimizers.Adam()\ncost_before, fidelity_before, _, _ = cost(weights)\n\nprint(\"Beginning optimization\")\n\n\n# Perform the optimization\nfor i in range(1000):\n # reset the engine if it has already 
been executed\n if eng.run_progs:\n eng.reset()\n\n with tf.GradientTape() as tape:\n loss, fid, _, trace = cost(weights)\n\n # one repetition of the optimization\n gradients = tape.gradient(loss, weights)\n opt.apply_gradients(zip([gradients], [weights]))\n\n # Prints progress at every rep\n if i % 1 == 0:\n print(\"Rep: {} Cost: {:.4f} Fidelity: {:.4f} Trace: {:.4f}\".format(i, loss, fid, trace))\n\n\ncost_after, fidelity_after, ket_after, _ = cost(weights)\n\n\nprint(\"\\nFidelity before optimization: \", fidelity_before.numpy())\nprint(\"Fidelity after optimization: \", fidelity_after.numpy())\nprint(\"\\nTarget state: \", target_state.numpy())\nprint(\"Output state: \", np.round(ket_after.numpy(), decimals=3))\n", "path": "examples/quantum_neural_network.py"}], "after_files": [{"content": "#!/usr/bin/env python3\nimport numpy as np\nimport tensorflow as tf\nimport strawberryfields as sf\nfrom strawberryfields import ops\n\n\n# =========================================================================\n# Utility functions\n# =========================================================================\n\n\n# define interferometer\ndef interferometer(params, q):\n \"\"\"Parameterised interferometer acting on ``N`` modes.\n\n Args:\n params (list[float]): list of length ``max(1, N-1) + (N-1)*N`` parameters.\n\n * The first ``N(N-1)/2`` parameters correspond to the beamsplitter angles\n * The second ``N(N-1)/2`` parameters correspond to the beamsplitter phases\n * The final ``N-1`` parameters correspond to local rotation on the first N-1 modes\n\n q (list[RegRef]): list of Strawberry Fields quantum registers the interferometer\n is to be applied to\n \"\"\"\n N = len(q)\n theta = params[: N * (N - 1) // 2]\n phi = params[N * (N - 1) // 2 : N * (N - 1)]\n rphi = params[-N + 1 :]\n\n if N == 1:\n # the interferometer is a single rotation\n ops.Rgate(rphi[0]) | q[0]\n return\n\n n = 0 # keep track of free parameters\n\n # Apply the rectangular beamsplitter array\n # The array depth is N\n for l in range(N):\n for k, (q1, q2) in enumerate(zip(q[:-1], q[1:])):\n # skip even or odd pairs depending on layer\n if (l + k) % 2 != 1:\n ops.BSgate(theta[n], phi[n]) | (q1, q2)\n n += 1\n\n # apply the final local phase shifts to all modes except the last one\n for i in range(max(1, N - 1)):\n ops.Rgate(rphi[i]) | q[i]\n # Rgate only applied to first N - 1 modes\n\n\n# define layer\ndef layer(params, q):\n \"\"\"CV quantum neural network layer acting on ``N`` modes.\n\n Args:\n params (list[float]): list of length ``2*(max(1, N-1) + N**2 + n)`` containing\n the number of parameters for the layer\n q (list[RegRef]): list of Strawberry Fields quantum registers the layer\n is to be applied to\n \"\"\"\n N = len(q)\n M = int(N * (N - 1)) + max(1, N - 1)\n\n int1 = params[:M]\n s = params[M : M + N]\n int2 = params[M + N : 2 * M + N]\n dr = params[2 * M + N : 2 * M + 2 * N]\n dp = params[2 * M + 2 * N : 2 * M + 3 * N]\n k = params[2 * M + 3 * N : 2 * M + 4 * N]\n\n # begin layer\n interferometer(int1, q)\n\n for i in range(N):\n ops.Sgate(s[i]) | q[i]\n\n interferometer(int2, q)\n\n for i in range(N):\n ops.Dgate(dr[i], dp[i]) | q[i]\n ops.Kgate(k[i]) | q[i]\n # end layer\n\n\ndef init_weights(modes, layers, active_sd=0.0001, passive_sd=0.1):\n \"\"\"Initialize a 2D TensorFlow Variable containing normally-distributed\n random weights for an ``N`` mode quantum neural network with ``L`` layers.\n\n Args:\n modes (int): the number of modes in the quantum neural network\n layers (int): the number of layers in the 
quantum neural network\n active_sd (float): the standard deviation used when initializing\n the normally-distributed weights for the active parameters\n (displacement, squeezing, and Kerr magnitude)\n passive_sd (float): the standard deviation used when initializing\n the normally-distributed weights for the passive parameters\n (beamsplitter angles and all gate phases)\n\n Returns:\n tf.Variable[tf.float32]: A TensorFlow Variable of shape\n ``[layers, 2*(max(1, modes-1) + modes**2 + modes)]``, where the Lth\n row represents the layer parameters for the Lth layer.\n \"\"\"\n # Number of interferometer parameters:\n M = int(modes * (modes - 1)) + max(1, modes - 1)\n\n # Create the TensorFlow variables\n int1_weights = tf.random.normal(shape=[layers, M], stddev=passive_sd)\n s_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n int2_weights = tf.random.normal(shape=[layers, M], stddev=passive_sd)\n dr_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n dp_weights = tf.random.normal(shape=[layers, modes], stddev=passive_sd)\n k_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n\n weights = tf.concat(\n [int1_weights, s_weights, int2_weights, dr_weights, dp_weights, k_weights], axis=1\n )\n weights = tf.Variable(weights)\n\n return weights\n\n\n# =========================================================================\n# Define the optimization problem\n# =========================================================================\n\n\n# set the random seed\ntf.random.set_seed(137)\nnp.random.seed(137)\n\n\n# define width and depth of CV quantum neural network\nmodes = 1\nlayers = 8\ncutoff_dim = 6\n\n\n# defining desired state (single photon state)\ntarget_state = np.zeros(cutoff_dim)\ntarget_state[1] = 1\ntarget_state = tf.constant(target_state, dtype=tf.complex64)\n\n\n# initialize engine and program\neng = sf.Engine(backend=\"tf\", backend_options={\"cutoff_dim\": cutoff_dim})\nqnn = sf.Program(modes)\n\n\n# initialize QNN weights\nweights = init_weights(modes, layers)\nnum_params = np.prod(weights.shape)\n\n\n# Create array of Strawberry Fields symbolic gate arguments, matching\n# the size of the weights Variable.\nsf_params = np.arange(num_params).reshape(weights.shape).astype(np.str)\nsf_params = np.array([qnn.params(*i) for i in sf_params])\n\n\n# Construct the symbolic Strawberry Fields program by\n# looping and applying layers to the program.\nwith qnn.context as q:\n for k in range(layers):\n layer(sf_params[k], q)\n\n\ndef safe_abs(x):\n # Helper function to deal with tensor terms near zero\n\n # Check where we have near zero terms\n EPS = 1e-15\n x = tf.where(tf.abs(x) < EPS, tf.zeros_like(x), x)\n zero = tf.constant(0, dtype=tf.complex64)\n x_ok = tf.not_equal(x, zero)\n\n # To make sure, swap out the zeros with ones\n safe_x = tf.where(x_ok, x, tf.ones_like(x, dtype=tf.complex64))\n return tf.where(x_ok, tf.abs(safe_x), tf.zeros_like(x, dtype=tf.float32))\n\n\ndef cost(weights):\n # Create a dictionary mapping from the names of the Strawberry Fields\n # symbolic gate parameters to the TensorFlow weight values.\n mapping = {p.name: w for p, w in zip(sf_params.flatten(), tf.reshape(weights, [-1]))}\n\n # run the engine\n state = eng.run(qnn, args=mapping).state\n ket = state.ket()\n\n difference = tf.reduce_sum(safe_abs(ket - target_state))\n fidelity = tf.abs(tf.reduce_sum(tf.math.conj(ket) * target_state)) ** 2\n return difference, fidelity, ket, tf.math.real(state.trace())\n\n\n# set up optimizer\nopt = 
tf.keras.optimizers.Adam()\ncost_before, fidelity_before, _, _ = cost(weights)\n\nprint(\"Beginning optimization\")\n\n\n# Perform the optimization\nfor i in range(1000):\n # reset the engine if it has already been executed\n if eng.run_progs:\n eng.reset()\n\n with tf.GradientTape() as tape:\n loss, fid, _, trace = cost(weights)\n\n # one repetition of the optimization\n gradients = tape.gradient(loss, weights)\n opt.apply_gradients(zip([gradients], [weights]))\n\n # Prints progress at every rep\n if i % 1 == 0:\n print(\"Rep: {} Cost: {:.4f} Fidelity: {:.4f} Trace: {:.4f}\".format(i, loss, fid, trace))\n\n\ncost_after, fidelity_after, ket_after, _ = cost(weights)\n\n\nprint(\"\\nFidelity before optimization: \", fidelity_before.numpy())\nprint(\"Fidelity after optimization: \", fidelity_after.numpy())\nprint(\"\\nTarget state: \", target_state.numpy())\nprint(\"Output state: \", np.round(ket_after.numpy(), decimals=3))\n", "path": "examples/quantum_neural_network.py"}]} | 3,007 | 865 |
gh_patches_debug_16505 | rasdani/github-patches | git_diff | conda__conda-build-2005 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`conda skeleton pypi pyinstrument` tries to index Anaconda.org
_From @olgabot on September 9, 2016 22:4_
I want to post my PyPI package to Anaconda.org and I'm following along with [this](http://conda.pydata.org/docs/build_tutorials/pkgs.html?highlight=anaconda%20upload) tutorial, which uses `pyinstrument` as an example, but I'm getting an error that seems to stem from a `conda index` command. This error happens no matter which PyPI package I use, even ones I own. Do you know what may be happening?
Side note: It would be helpful to know what to expect to see in a "skeleton"-created folder.
```
$ conda skeleton pypi pyinstrument
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): pypi.python.org
Warning, the following versions were found for pyinstrument
0.10.1
0.11
0.12
0.13
0.13.1
Using 0.13.1
Use --version to specify a different version.
Using url https://pypi.python.org/packages/64/56/d7a0d48973dcf58ea74d5f004e16e94969e03ae783b46f86f42f35a6b81b/pyinstrument-0.13.1.tar.gz (44 KB) for pyinstrument.
Downloading pyinstrument
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): pypi.python.org
INFO:fetch.start:('pyinstrument-0', 45395)
INFO:fetch.update:16384
INFO:fetch.update:32768
INFO:fetch.update:45395
INFO:fetch.update:45395
INFO:fetch.stop:None
Unpacking pyinstrument...
done
working in /var/folders/6l/83vj6nxn6g1b6l2c9ycym4hc0000gn/T/tmp9bdr5b9aconda_skeleton_pyinstrument-0.13.1.tar.gz
Error:
Indexing a copy of the Anaconda conda package channel is neither
necessary nor supported. If you wish to add your own packages,
you can do so by adding them to a separate channel.
```
Here's the output from `conda info`:
```
$ conda info
Warning: could not import binstar_client ('args' object has no attribute 'site')Current conda install:
platform : osx-64
conda version : 4.1.12
conda-env version : 2.5.2
conda-build version : 2.0.1
python version : 3.5.2.final.0
requests version : 2.10.0
root environment : /Users/olga/anaconda3 (writable)
default environment : /Users/olga/anaconda3
envs directories : /Users/olga/anaconda3/envs
package cache : /Users/olga/anaconda3/pkgs
channel URLs : https://conda.anaconda.org/r/osx-64/
https://conda.anaconda.org/r/noarch/
https://conda.anaconda.org/bioconda/osx-64/
https://conda.anaconda.org/bioconda/noarch/
https://repo.continuum.io/pkgs/free/osx-64/
https://repo.continuum.io/pkgs/free/noarch/
https://repo.continuum.io/pkgs/pro/osx-64/
https://repo.continuum.io/pkgs/pro/noarch/
config file : /Users/olga/.condarc
offline mode : False
is foreign system : False
```
_Copied from original issue: conda/conda#3406_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_build/index.py`
Content:
```
1 '''
2 Functions related to creating repodata index files.
3 '''
4
5 from __future__ import absolute_import, division, print_function
6
7 import os
8 import bz2
9 import sys
10 import json
11 import tarfile
12 from os.path import isfile, join, getmtime
13
14 from conda_build.utils import file_info, get_lock, try_acquire_locks
15 from .conda_interface import PY3, md5_file
16
17
18 def read_index_tar(tar_path, config, lock):
19 """ Returns the index.json dict inside the given package tarball. """
20 locks = []
21 if config.locking:
22 locks = [lock]
23 with try_acquire_locks(locks, config.timeout):
24 with tarfile.open(tar_path) as t:
25 try:
26 return json.loads(t.extractfile('info/index.json').read().decode('utf-8'))
27 except EOFError:
28 raise RuntimeError("Could not extract %s. File probably corrupt."
29 % tar_path)
30 except OSError as e:
31 raise RuntimeError("Could not extract %s (%s)" % (tar_path, e))
32 except tarfile.ReadError:
33 raise RuntimeError("Could not extract metadata from %s. "
34 "File probably corrupt." % tar_path)
35
36
37 def write_repodata(repodata, dir_path, lock, config=None):
38 """ Write updated repodata.json and repodata.json.bz2 """
39 if not config:
40 import conda_build.config
41 config = conda_build.config.config
42 locks = []
43 if config.locking:
44 locks = [lock]
45 with try_acquire_locks(locks, config.timeout):
46 data = json.dumps(repodata, indent=2, sort_keys=True)
47 # strip trailing whitespace
48 data = '\n'.join(line.rstrip() for line in data.splitlines())
49 # make sure we have newline at the end
50 if not data.endswith('\n'):
51 data += '\n'
52 with open(join(dir_path, 'repodata.json'), 'w') as fo:
53 fo.write(data)
54 with open(join(dir_path, 'repodata.json.bz2'), 'wb') as fo:
55 fo.write(bz2.compress(data.encode('utf-8')))
56
57
58 def update_index(dir_path, config, force=False, check_md5=False, remove=True, lock=None,
59 could_be_mirror=True):
60 """
61 Update all index files in dir_path with changed packages.
62
63 :param verbose: Should detailed status messages be output?
64 :type verbose: bool
65 :param force: Whether to re-index all packages (including those that
66 haven't changed) or not.
67 :type force: bool
68 :param check_md5: Whether to check MD5s instead of mtimes for determining
69 if a package changed.
70 :type check_md5: bool
71 """
72
73 if config.verbose:
74 print("updating index in:", dir_path)
75 index_path = join(dir_path, '.index.json')
76 if not os.path.isdir(dir_path):
77 os.makedirs(dir_path)
78
79 if not lock:
80 lock = get_lock(dir_path)
81
82 locks = []
83 if config.locking:
84 locks.append(lock)
85
86 with try_acquire_locks(locks, config.timeout):
87 if force:
88 index = {}
89 else:
90 try:
91 mode_dict = {'mode': 'r', 'encoding': 'utf-8'} if PY3 else {'mode': 'rb'}
92 with open(index_path, **mode_dict) as fi:
93 index = json.load(fi)
94 except (IOError, ValueError):
95 index = {}
96
97 files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))
98 if could_be_mirror and any(fn.startswith('_license-') for fn in files):
99 sys.exit("""\
100 Error:
101 Indexing a copy of the Anaconda conda package channel is neither
102 necessary nor supported. If you wish to add your own packages,
103 you can do so by adding them to a separate channel.
104 """)
105 for fn in files:
106 path = join(dir_path, fn)
107 if fn in index:
108 if check_md5:
109 if index[fn]['md5'] == md5_file(path):
110 continue
111 elif index[fn]['mtime'] == getmtime(path):
112 continue
113 if config.verbose:
114 print('updating:', fn)
115 d = read_index_tar(path, config, lock=lock)
116 d.update(file_info(path))
117 index[fn] = d
118
119 for fn in files:
120 index[fn]['sig'] = '.' if isfile(join(dir_path, fn + '.sig')) else None
121
122 if remove:
123 # remove files from the index which are not on disk
124 for fn in set(index) - files:
125 if config.verbose:
126 print("removing:", fn)
127 del index[fn]
128
129 # Deal with Python 2 and 3's different json module type reqs
130 mode_dict = {'mode': 'w', 'encoding': 'utf-8'} if PY3 else {'mode': 'wb'}
131 with open(index_path, **mode_dict) as fo:
132 json.dump(index, fo, indent=2, sort_keys=True, default=str)
133
134 # --- new repodata
135 for fn in index:
136 info = index[fn]
137 for varname in 'arch', 'platform', 'mtime', 'ucs':
138 try:
139 del info[varname]
140 except KeyError:
141 pass
142
143 if 'requires' in info and 'depends' not in info:
144 info['depends'] = info['requires']
145
146 repodata = {'packages': index, 'info': {}}
147 write_repodata(repodata, dir_path, lock=lock, config=config)
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_build/index.py b/conda_build/index.py
--- a/conda_build/index.py
+++ b/conda_build/index.py
@@ -6,7 +6,6 @@
import os
import bz2
-import sys
import json
import tarfile
from os.path import isfile, join, getmtime
@@ -95,13 +94,6 @@
index = {}
files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))
- if could_be_mirror and any(fn.startswith('_license-') for fn in files):
- sys.exit("""\
- Error:
- Indexing a copy of the Anaconda conda package channel is neither
- necessary nor supported. If you wish to add your own packages,
- you can do so by adding them to a separate channel.
- """)
for fn in files:
path = join(dir_path, fn)
if fn in index:
| {"golden_diff": "diff --git a/conda_build/index.py b/conda_build/index.py\n--- a/conda_build/index.py\n+++ b/conda_build/index.py\n@@ -6,7 +6,6 @@\n \n import os\n import bz2\n-import sys\n import json\n import tarfile\n from os.path import isfile, join, getmtime\n@@ -95,13 +94,6 @@\n index = {}\n \n files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))\n- if could_be_mirror and any(fn.startswith('_license-') for fn in files):\n- sys.exit(\"\"\"\\\n- Error:\n- Indexing a copy of the Anaconda conda package channel is neither\n- necessary nor supported. If you wish to add your own packages,\n- you can do so by adding them to a separate channel.\n- \"\"\")\n for fn in files:\n path = join(dir_path, fn)\n if fn in index:\n", "issue": "`conda skeleton pypi pyinstrument` tries to index Anaconda.org\n_From @olgabot on September 9, 2016 22:4_\n\nI want to post my PyPI package to Anaconda.org and I'm following along with [this](http://conda.pydata.org/docs/build_tutorials/pkgs.html?highlight=anaconda%20upload) tutorial which uses `pyinstrument` as an example, but I'm getting this error that seems to be stemming from a `conda index` command. This error happens no matter which PyPI package I use, even ones I own. Do you know what may be happening?\n\nSide note: It would be helpful to know what to expect to see in a \"skeleton\"-created folder.\n\n```\n$ conda skeleton pypi pyinstrument\nINFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): pypi.python.org\nWarning, the following versions were found for pyinstrument\n0.10.1\n0.11\n0.12\n0.13\n0.13.1\nUsing 0.13.1\nUse --version to specify a different version.\nUsing url https://pypi.python.org/packages/64/56/d7a0d48973dcf58ea74d5f004e16e94969e03ae783b46f86f42f35a6b81b/pyinstrument-0.13.1.tar.gz (44 KB) for pyinstrument.\nDownloading pyinstrument\nINFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): pypi.python.org\nINFO:fetch.start:('pyinstrument-0', 45395)\nINFO:fetch.update:16384\nINFO:fetch.update:32768\nINFO:fetch.update:45395\nINFO:fetch.update:45395\nINFO:fetch.stop:None\nUnpacking pyinstrument...\ndone\nworking in /var/folders/6l/83vj6nxn6g1b6l2c9ycym4hc0000gn/T/tmp9bdr5b9aconda_skeleton_pyinstrument-0.13.1.tar.gz\nError:\n Indexing a copy of the Anaconda conda package channel is neither\n necessary nor supported. 
If you wish to add your own packages,\n you can do so by adding them to a separate channel.\n```\n\nHere's the output from `conda info`:\n\n```\n$ conda info\nWarning: could not import binstar_client ('args' object has no attribute 'site')Current conda install:\n\n platform : osx-64\n conda version : 4.1.12\n conda-env version : 2.5.2\n conda-build version : 2.0.1\n python version : 3.5.2.final.0\n requests version : 2.10.0\n root environment : /Users/olga/anaconda3 (writable)\n default environment : /Users/olga/anaconda3\n envs directories : /Users/olga/anaconda3/envs\n package cache : /Users/olga/anaconda3/pkgs\n channel URLs : https://conda.anaconda.org/r/osx-64/\n https://conda.anaconda.org/r/noarch/\n https://conda.anaconda.org/bioconda/osx-64/\n https://conda.anaconda.org/bioconda/noarch/\n https://repo.continuum.io/pkgs/free/osx-64/\n https://repo.continuum.io/pkgs/free/noarch/\n https://repo.continuum.io/pkgs/pro/osx-64/\n https://repo.continuum.io/pkgs/pro/noarch/\n config file : /Users/olga/.condarc\n offline mode : False\n is foreign system : False\n```\n\n\n_Copied from original issue: conda/conda#3406_\n", "before_files": [{"content": "'''\nFunctions related to creating repodata index files.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport bz2\nimport sys\nimport json\nimport tarfile\nfrom os.path import isfile, join, getmtime\n\nfrom conda_build.utils import file_info, get_lock, try_acquire_locks\nfrom .conda_interface import PY3, md5_file\n\n\ndef read_index_tar(tar_path, config, lock):\n \"\"\" Returns the index.json dict inside the given package tarball. \"\"\"\n locks = []\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n with tarfile.open(tar_path) as t:\n try:\n return json.loads(t.extractfile('info/index.json').read().decode('utf-8'))\n except EOFError:\n raise RuntimeError(\"Could not extract %s. File probably corrupt.\"\n % tar_path)\n except OSError as e:\n raise RuntimeError(\"Could not extract %s (%s)\" % (tar_path, e))\n except tarfile.ReadError:\n raise RuntimeError(\"Could not extract metadata from %s. 
\"\n \"File probably corrupt.\" % tar_path)\n\n\ndef write_repodata(repodata, dir_path, lock, config=None):\n \"\"\" Write updated repodata.json and repodata.json.bz2 \"\"\"\n if not config:\n import conda_build.config\n config = conda_build.config.config\n locks = []\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n data = json.dumps(repodata, indent=2, sort_keys=True)\n # strip trailing whitespace\n data = '\\n'.join(line.rstrip() for line in data.splitlines())\n # make sure we have newline at the end\n if not data.endswith('\\n'):\n data += '\\n'\n with open(join(dir_path, 'repodata.json'), 'w') as fo:\n fo.write(data)\n with open(join(dir_path, 'repodata.json.bz2'), 'wb') as fo:\n fo.write(bz2.compress(data.encode('utf-8')))\n\n\ndef update_index(dir_path, config, force=False, check_md5=False, remove=True, lock=None,\n could_be_mirror=True):\n \"\"\"\n Update all index files in dir_path with changed packages.\n\n :param verbose: Should detailed status messages be output?\n :type verbose: bool\n :param force: Whether to re-index all packages (including those that\n haven't changed) or not.\n :type force: bool\n :param check_md5: Whether to check MD5s instead of mtimes for determining\n if a package changed.\n :type check_md5: bool\n \"\"\"\n\n if config.verbose:\n print(\"updating index in:\", dir_path)\n index_path = join(dir_path, '.index.json')\n if not os.path.isdir(dir_path):\n os.makedirs(dir_path)\n\n if not lock:\n lock = get_lock(dir_path)\n\n locks = []\n if config.locking:\n locks.append(lock)\n\n with try_acquire_locks(locks, config.timeout):\n if force:\n index = {}\n else:\n try:\n mode_dict = {'mode': 'r', 'encoding': 'utf-8'} if PY3 else {'mode': 'rb'}\n with open(index_path, **mode_dict) as fi:\n index = json.load(fi)\n except (IOError, ValueError):\n index = {}\n\n files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))\n if could_be_mirror and any(fn.startswith('_license-') for fn in files):\n sys.exit(\"\"\"\\\n Error:\n Indexing a copy of the Anaconda conda package channel is neither\n necessary nor supported. If you wish to add your own packages,\n you can do so by adding them to a separate channel.\n \"\"\")\n for fn in files:\n path = join(dir_path, fn)\n if fn in index:\n if check_md5:\n if index[fn]['md5'] == md5_file(path):\n continue\n elif index[fn]['mtime'] == getmtime(path):\n continue\n if config.verbose:\n print('updating:', fn)\n d = read_index_tar(path, config, lock=lock)\n d.update(file_info(path))\n index[fn] = d\n\n for fn in files:\n index[fn]['sig'] = '.' 
if isfile(join(dir_path, fn + '.sig')) else None\n\n if remove:\n # remove files from the index which are not on disk\n for fn in set(index) - files:\n if config.verbose:\n print(\"removing:\", fn)\n del index[fn]\n\n # Deal with Python 2 and 3's different json module type reqs\n mode_dict = {'mode': 'w', 'encoding': 'utf-8'} if PY3 else {'mode': 'wb'}\n with open(index_path, **mode_dict) as fo:\n json.dump(index, fo, indent=2, sort_keys=True, default=str)\n\n # --- new repodata\n for fn in index:\n info = index[fn]\n for varname in 'arch', 'platform', 'mtime', 'ucs':\n try:\n del info[varname]\n except KeyError:\n pass\n\n if 'requires' in info and 'depends' not in info:\n info['depends'] = info['requires']\n\n repodata = {'packages': index, 'info': {}}\n write_repodata(repodata, dir_path, lock=lock, config=config)\n", "path": "conda_build/index.py"}], "after_files": [{"content": "'''\nFunctions related to creating repodata index files.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport bz2\nimport json\nimport tarfile\nfrom os.path import isfile, join, getmtime\n\nfrom conda_build.utils import file_info, get_lock, try_acquire_locks\nfrom .conda_interface import PY3, md5_file\n\n\ndef read_index_tar(tar_path, config, lock):\n \"\"\" Returns the index.json dict inside the given package tarball. \"\"\"\n locks = []\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n with tarfile.open(tar_path) as t:\n try:\n return json.loads(t.extractfile('info/index.json').read().decode('utf-8'))\n except EOFError:\n raise RuntimeError(\"Could not extract %s. File probably corrupt.\"\n % tar_path)\n except OSError as e:\n raise RuntimeError(\"Could not extract %s (%s)\" % (tar_path, e))\n except tarfile.ReadError:\n raise RuntimeError(\"Could not extract metadata from %s. 
\"\n \"File probably corrupt.\" % tar_path)\n\n\ndef write_repodata(repodata, dir_path, lock, config=None):\n \"\"\" Write updated repodata.json and repodata.json.bz2 \"\"\"\n if not config:\n import conda_build.config\n config = conda_build.config.config\n locks = []\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n data = json.dumps(repodata, indent=2, sort_keys=True)\n # strip trailing whitespace\n data = '\\n'.join(line.rstrip() for line in data.splitlines())\n # make sure we have newline at the end\n if not data.endswith('\\n'):\n data += '\\n'\n with open(join(dir_path, 'repodata.json'), 'w') as fo:\n fo.write(data)\n with open(join(dir_path, 'repodata.json.bz2'), 'wb') as fo:\n fo.write(bz2.compress(data.encode('utf-8')))\n\n\ndef update_index(dir_path, config, force=False, check_md5=False, remove=True, lock=None,\n could_be_mirror=True):\n \"\"\"\n Update all index files in dir_path with changed packages.\n\n :param verbose: Should detailed status messages be output?\n :type verbose: bool\n :param force: Whether to re-index all packages (including those that\n haven't changed) or not.\n :type force: bool\n :param check_md5: Whether to check MD5s instead of mtimes for determining\n if a package changed.\n :type check_md5: bool\n \"\"\"\n\n if config.verbose:\n print(\"updating index in:\", dir_path)\n index_path = join(dir_path, '.index.json')\n if not os.path.isdir(dir_path):\n os.makedirs(dir_path)\n\n if not lock:\n lock = get_lock(dir_path)\n\n locks = []\n if config.locking:\n locks.append(lock)\n\n with try_acquire_locks(locks, config.timeout):\n if force:\n index = {}\n else:\n try:\n mode_dict = {'mode': 'r', 'encoding': 'utf-8'} if PY3 else {'mode': 'rb'}\n with open(index_path, **mode_dict) as fi:\n index = json.load(fi)\n except (IOError, ValueError):\n index = {}\n\n files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))\n for fn in files:\n path = join(dir_path, fn)\n if fn in index:\n if check_md5:\n if index[fn]['md5'] == md5_file(path):\n continue\n elif index[fn]['mtime'] == getmtime(path):\n continue\n if config.verbose:\n print('updating:', fn)\n d = read_index_tar(path, config, lock=lock)\n d.update(file_info(path))\n index[fn] = d\n\n for fn in files:\n index[fn]['sig'] = '.' if isfile(join(dir_path, fn + '.sig')) else None\n\n if remove:\n # remove files from the index which are not on disk\n for fn in set(index) - files:\n if config.verbose:\n print(\"removing:\", fn)\n del index[fn]\n\n # Deal with Python 2 and 3's different json module type reqs\n mode_dict = {'mode': 'w', 'encoding': 'utf-8'} if PY3 else {'mode': 'wb'}\n with open(index_path, **mode_dict) as fo:\n json.dump(index, fo, indent=2, sort_keys=True, default=str)\n\n # --- new repodata\n for fn in index:\n info = index[fn]\n for varname in 'arch', 'platform', 'mtime', 'ucs':\n try:\n del info[varname]\n except KeyError:\n pass\n\n if 'requires' in info and 'depends' not in info:\n info['depends'] = info['requires']\n\n repodata = {'packages': index, 'info': {}}\n write_repodata(repodata, dir_path, lock=lock, config=config)\n", "path": "conda_build/index.py"}]} | 2,716 | 212 |
gh_patches_debug_22162 | rasdani/github-patches | git_diff | bids-standard__pybids-71 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Force UTF-8 encoding anywhere JSON files are read in
Per an issue on the mailing list, we should explicitly force UTF-8 encoding whenever we work with JSON files, because the BIDS spec already mandates UTF-8 for spec compliance. Otherwise we risk failure on platforms where the default encoding is something else.
--- END ISSUE ---
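The requested change amounts to passing an explicit encoding whenever a JSON sidecar is opened, which is what the patch later in this entry does. A minimal standalone sketch of that idea (the helper name and example file name are invented for illustration and are not part of pybids' API):

```python
import json
from pathlib import Path


def load_json_utf8(path):
    # Decode JSON sidecars as UTF-8 explicitly, per the BIDS spec,
    # instead of relying on the platform's default locale encoding.
    with Path(path).open("r", encoding="utf-8") as f:
        return json.load(f)


# Hypothetical usage:
# metadata = load_json_utf8("sub-01_task-rest_bold.json")
```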
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bids/grabbids/bids_layout.py`
Content:
```
1 import os
2 import re
3 import json
4
5 from os.path import dirname
6 from os.path import abspath
7 from os.path import join as pathjoin
8 from os.path import basename
9
10 from grabbit import Layout
11
12 __all__ = ['BIDSLayout']
13
14
15 class BIDSLayout(Layout):
16 def __init__(self, path, config=None, **kwargs):
17 if config is None:
18 root = dirname(abspath(__file__))
19 config = pathjoin(root, 'config', 'bids.json')
20 super(BIDSLayout, self).__init__(path, config,
21 dynamic_getters=True, **kwargs)
22
23 def _validate_file(self, f):
24 # Return False to exclude a file from indexing. This should call
25 # some kind of validation regex.
26 return True
27
28 def _get_nearest_helper(self, path, extension, type=None, **kwargs):
29 path = abspath(path)
30
31 if path not in self.files:
32 raise ValueError("File '%s' could not be found in the current BIDS"
33 " project." % path)
34
35 if not type:
36 # Constrain the search to .json files with the same type as target
37 type = self.files[path].entities['type']
38
39 tmp = self.get_nearest(path, extensions=extension, all_=True,
40 type=type, ignore_strict_entities=['type'],
41 **kwargs)
42
43 if len(tmp):
44 return tmp
45 else:
46 return None
47
48 def get_metadata(self, path, **kwargs):
49
50 potentialJSONs = self._get_nearest_helper(path, '.json', **kwargs)
51 if not isinstance(potentialJSONs, list): return potentialJSONs
52
53 merged_param_dict = {}
54 for json_file_path in reversed(potentialJSONs):
55 if os.path.exists(json_file_path):
56 param_dict = json.load(open(json_file_path, "r"))
57 merged_param_dict.update(param_dict)
58
59 return merged_param_dict
60
61 def get_bvec(self, path, **kwargs):
62 tmp = self._get_nearest_helper(path, 'bvec', type='dwi', **kwargs)[0]
63 if isinstance(tmp, list):
64 return tmp[0]
65 else:
66 return tmp
67
68 def get_bval(self, path, **kwargs):
69 tmp = self._get_nearest_helper(path, 'bval', type='dwi', **kwargs)[0]
70 if isinstance(tmp, list):
71 return tmp[0]
72 else:
73 return tmp
74
75 def get_events(self, path, **kwargs):
76 tmp = self._get_nearest_helper(path, '.tsv', type='events', **kwargs)
77 if isinstance(tmp, list):
78 return tmp[0]
79 else:
80 return tmp
81
82 def get_fieldmap(self, path, return_list=False):
83 fieldmaps = self._get_fieldmaps(path)
84
85 if return_list:
86 return fieldmaps
87 else:
88 if len(fieldmaps) == 1:
89 return fieldmaps[0]
90 elif len(fieldmaps) > 1:
91 raise ValueError("More than one fieldmap found, but the "
92 "'return_list' argument was set to False. "
93 "Either ensure that there is only one "
94 "fieldmap for this image, or set the "
95 "'return_list' argument to True and handle "
96 "the result as a list.")
97 else: # len(fieldmaps) == 0
98 return None
99
100 def _get_fieldmaps(self, path):
101 sub = os.path.split(path)[1].split("_")[0].split("sub-")[1]
102 fieldmap_set = []
103 type_ = '(phase1|phasediff|epi|fieldmap)'
104 for file in self.get(subject=sub, type=type_,
105 extensions=['nii.gz', 'nii']):
106 metadata = self.get_metadata(file.filename)
107 if metadata and "IntendedFor" in metadata.keys():
108 if isinstance(metadata["IntendedFor"], list):
109 intended_for = metadata["IntendedFor"]
110 else:
111 intended_for = [metadata["IntendedFor"]]
112 if any([path.endswith(suffix) for suffix in intended_for]):
113 cur_fieldmap = {}
114 if file.type == "phasediff":
115 cur_fieldmap = {"phasediff": file.filename,
116 "magnitude1": file.filename.replace(
117 "phasediff", "magnitude1"),
118 "magnitude2": file.filename.replace(
119 "phasediff", "magnitude2"),
120 "type": "phasediff"}
121 elif file.type == "phase1":
122 cur_fieldmap["phase1"] = file.filename
123 cur_fieldmap["magnitude1"] = \
124 file.filename.replace("phase1", "magnitude1")
125 cur_fieldmap["phase2"] = \
126 file.filename.replace("phase1", "phase2")
127 cur_fieldmap["magnitude2"] = \
128 file.filename.replace("phase1", "magnitude2")
129 cur_fieldmap["type"] = "phase"
130 elif file.type == "epi":
131 cur_fieldmap["epi"] = file.filename
132 cur_fieldmap["type"] = "epi"
133 elif file.type == "fieldmap":
134 cur_fieldmap["fieldmap"] = file.filename
135 cur_fieldmap["magnitude"] = \
136 file.filename.replace("fieldmap", "magnitude")
137 cur_fieldmap["type"] = "fieldmap"
138 fieldmap_set.append(cur_fieldmap)
139 return fieldmap_set
140
141 def find_match(self, target, source=None):
142
143 # Try to take the easy way out
144 if source is not None:
145 _target = source.split('.')[0] + '.' + target
146 if os.path.exists(_target):
147 return target
148
149 if target in list(self.entities.keys()):
150 candidates = list(self.entities[target].files.keys())
151 else:
152 candidates = []
153
154 for root, directories, filenames in os.walk(self.root):
155 for f in filenames:
156 if re.search(target + '$', f):
157 if os.path.sep == "\\":
158 f = f.replace("\\", "\\\\")
159 candidates.append(f)
160
161 if source is None:
162 return candidates
163
164 # Walk up the file hierarchy from source, find first match
165 if not os.path.exists(source):
166 raise OSError("The file '%s' doesn't exist." % source)
167 elif not source.startswith(self.root):
168 raise ValueError("The file '%s' is not contained "
169 "within the current project "
170 "directory (%s)." % (source, self.root))
171 rel = os.path.relpath(dirname(source), self.root)
172 sep = os.path.sep
173 chunks = rel.split(sep)
174 n_chunks = len(chunks)
175 for i in range(n_chunks, -1, -1):
176 path = pathjoin(self.root, *chunks[:i])
177 patt = path + '\%s[^\%s]+$' % (sep, sep)
178 if sep == "\\":
179 patt = path + '\\[^\\]+$'
180 patt = patt.replace("\\", "\\\\")
181 matches = [x for x in candidates if re.search(patt, x)]
182 if matches:
183 if len(matches) == 1:
184 return matches[0]
185 else:
186 raise ValueError("Ambiguous target: more than one "
187 "candidate file found in "
188 "directory '%s'." % path)
189 return None
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bids/grabbids/bids_layout.py b/bids/grabbids/bids_layout.py
--- a/bids/grabbids/bids_layout.py
+++ b/bids/grabbids/bids_layout.py
@@ -5,7 +5,6 @@
from os.path import dirname
from os.path import abspath
from os.path import join as pathjoin
-from os.path import basename
from grabbit import Layout
@@ -48,12 +47,15 @@
def get_metadata(self, path, **kwargs):
potentialJSONs = self._get_nearest_helper(path, '.json', **kwargs)
- if not isinstance(potentialJSONs, list): return potentialJSONs
+
+ if not isinstance(potentialJSONs, list):
+ return potentialJSONs
merged_param_dict = {}
for json_file_path in reversed(potentialJSONs):
if os.path.exists(json_file_path):
- param_dict = json.load(open(json_file_path, "r"))
+ param_dict = json.load(open(json_file_path, "r",
+ encoding='utf-8'))
merged_param_dict.update(param_dict)
return merged_param_dict
| {"golden_diff": "diff --git a/bids/grabbids/bids_layout.py b/bids/grabbids/bids_layout.py\n--- a/bids/grabbids/bids_layout.py\n+++ b/bids/grabbids/bids_layout.py\n@@ -5,7 +5,6 @@\n from os.path import dirname\n from os.path import abspath\n from os.path import join as pathjoin\n-from os.path import basename\n \n from grabbit import Layout\n \n@@ -48,12 +47,15 @@\n def get_metadata(self, path, **kwargs):\n \n potentialJSONs = self._get_nearest_helper(path, '.json', **kwargs)\n- if not isinstance(potentialJSONs, list): return potentialJSONs\n+\n+ if not isinstance(potentialJSONs, list):\n+ return potentialJSONs\n \n merged_param_dict = {}\n for json_file_path in reversed(potentialJSONs):\n if os.path.exists(json_file_path):\n- param_dict = json.load(open(json_file_path, \"r\"))\n+ param_dict = json.load(open(json_file_path, \"r\",\n+ encoding='utf-8'))\n merged_param_dict.update(param_dict)\n \n return merged_param_dict\n", "issue": "Force UTF-8 encoding anywhere JSON files are read in\nPer an issue on the mailing list, we should explicitly force UTF-8 encoding whenever we work with JSON files, because the BIDS spec already mandates UTF-8 for spec compliance. Otherwise we risk failure on platforms where the default encoding is something else.\n", "before_files": [{"content": "import os\nimport re\nimport json\n\nfrom os.path import dirname\nfrom os.path import abspath\nfrom os.path import join as pathjoin\nfrom os.path import basename\n\nfrom grabbit import Layout\n\n__all__ = ['BIDSLayout']\n\n\nclass BIDSLayout(Layout):\n def __init__(self, path, config=None, **kwargs):\n if config is None:\n root = dirname(abspath(__file__))\n config = pathjoin(root, 'config', 'bids.json')\n super(BIDSLayout, self).__init__(path, config,\n dynamic_getters=True, **kwargs)\n\n def _validate_file(self, f):\n # Return False to exclude a file from indexing. 
This should call\n # some kind of validation regex.\n return True\n\n def _get_nearest_helper(self, path, extension, type=None, **kwargs):\n path = abspath(path)\n\n if path not in self.files:\n raise ValueError(\"File '%s' could not be found in the current BIDS\"\n \" project.\" % path)\n\n if not type:\n # Constrain the search to .json files with the same type as target\n type = self.files[path].entities['type']\n\n tmp = self.get_nearest(path, extensions=extension, all_=True,\n type=type, ignore_strict_entities=['type'],\n **kwargs)\n\n if len(tmp):\n return tmp\n else:\n return None\n\n def get_metadata(self, path, **kwargs):\n\n potentialJSONs = self._get_nearest_helper(path, '.json', **kwargs)\n if not isinstance(potentialJSONs, list): return potentialJSONs\n\n merged_param_dict = {}\n for json_file_path in reversed(potentialJSONs):\n if os.path.exists(json_file_path):\n param_dict = json.load(open(json_file_path, \"r\"))\n merged_param_dict.update(param_dict)\n\n return merged_param_dict\n\n def get_bvec(self, path, **kwargs):\n tmp = self._get_nearest_helper(path, 'bvec', type='dwi', **kwargs)[0]\n if isinstance(tmp, list):\n return tmp[0]\n else:\n return tmp\n\n def get_bval(self, path, **kwargs):\n tmp = self._get_nearest_helper(path, 'bval', type='dwi', **kwargs)[0]\n if isinstance(tmp, list):\n return tmp[0]\n else:\n return tmp\n\n def get_events(self, path, **kwargs):\n tmp = self._get_nearest_helper(path, '.tsv', type='events', **kwargs)\n if isinstance(tmp, list):\n return tmp[0]\n else:\n return tmp\n\n def get_fieldmap(self, path, return_list=False):\n fieldmaps = self._get_fieldmaps(path)\n\n if return_list:\n return fieldmaps\n else:\n if len(fieldmaps) == 1:\n return fieldmaps[0]\n elif len(fieldmaps) > 1:\n raise ValueError(\"More than one fieldmap found, but the \"\n \"'return_list' argument was set to False. 
\"\n \"Either ensure that there is only one \"\n \"fieldmap for this image, or set the \"\n \"'return_list' argument to True and handle \"\n \"the result as a list.\")\n else: # len(fieldmaps) == 0\n return None\n\n def _get_fieldmaps(self, path):\n sub = os.path.split(path)[1].split(\"_\")[0].split(\"sub-\")[1]\n fieldmap_set = []\n type_ = '(phase1|phasediff|epi|fieldmap)'\n for file in self.get(subject=sub, type=type_,\n extensions=['nii.gz', 'nii']):\n metadata = self.get_metadata(file.filename)\n if metadata and \"IntendedFor\" in metadata.keys():\n if isinstance(metadata[\"IntendedFor\"], list):\n intended_for = metadata[\"IntendedFor\"]\n else:\n intended_for = [metadata[\"IntendedFor\"]]\n if any([path.endswith(suffix) for suffix in intended_for]):\n cur_fieldmap = {}\n if file.type == \"phasediff\":\n cur_fieldmap = {\"phasediff\": file.filename,\n \"magnitude1\": file.filename.replace(\n \"phasediff\", \"magnitude1\"),\n \"magnitude2\": file.filename.replace(\n \"phasediff\", \"magnitude2\"),\n \"type\": \"phasediff\"}\n elif file.type == \"phase1\":\n cur_fieldmap[\"phase1\"] = file.filename\n cur_fieldmap[\"magnitude1\"] = \\\n file.filename.replace(\"phase1\", \"magnitude1\")\n cur_fieldmap[\"phase2\"] = \\\n file.filename.replace(\"phase1\", \"phase2\")\n cur_fieldmap[\"magnitude2\"] = \\\n file.filename.replace(\"phase1\", \"magnitude2\")\n cur_fieldmap[\"type\"] = \"phase\"\n elif file.type == \"epi\":\n cur_fieldmap[\"epi\"] = file.filename\n cur_fieldmap[\"type\"] = \"epi\"\n elif file.type == \"fieldmap\":\n cur_fieldmap[\"fieldmap\"] = file.filename\n cur_fieldmap[\"magnitude\"] = \\\n file.filename.replace(\"fieldmap\", \"magnitude\")\n cur_fieldmap[\"type\"] = \"fieldmap\"\n fieldmap_set.append(cur_fieldmap)\n return fieldmap_set\n\n def find_match(self, target, source=None):\n\n # Try to take the easy way out\n if source is not None:\n _target = source.split('.')[0] + '.' 
+ target\n if os.path.exists(_target):\n return target\n\n if target in list(self.entities.keys()):\n candidates = list(self.entities[target].files.keys())\n else:\n candidates = []\n\n for root, directories, filenames in os.walk(self.root):\n for f in filenames:\n if re.search(target + '$', f):\n if os.path.sep == \"\\\\\":\n f = f.replace(\"\\\\\", \"\\\\\\\\\")\n candidates.append(f)\n\n if source is None:\n return candidates\n\n # Walk up the file hierarchy from source, find first match\n if not os.path.exists(source):\n raise OSError(\"The file '%s' doesn't exist.\" % source)\n elif not source.startswith(self.root):\n raise ValueError(\"The file '%s' is not contained \"\n \"within the current project \"\n \"directory (%s).\" % (source, self.root))\n rel = os.path.relpath(dirname(source), self.root)\n sep = os.path.sep\n chunks = rel.split(sep)\n n_chunks = len(chunks)\n for i in range(n_chunks, -1, -1):\n path = pathjoin(self.root, *chunks[:i])\n patt = path + '\\%s[^\\%s]+$' % (sep, sep)\n if sep == \"\\\\\":\n patt = path + '\\\\[^\\\\]+$'\n patt = patt.replace(\"\\\\\", \"\\\\\\\\\")\n matches = [x for x in candidates if re.search(patt, x)]\n if matches:\n if len(matches) == 1:\n return matches[0]\n else:\n raise ValueError(\"Ambiguous target: more than one \"\n \"candidate file found in \"\n \"directory '%s'.\" % path)\n return None\n", "path": "bids/grabbids/bids_layout.py"}], "after_files": [{"content": "import os\nimport re\nimport json\n\nfrom os.path import dirname\nfrom os.path import abspath\nfrom os.path import join as pathjoin\n\nfrom grabbit import Layout\n\n__all__ = ['BIDSLayout']\n\n\nclass BIDSLayout(Layout):\n def __init__(self, path, config=None, **kwargs):\n if config is None:\n root = dirname(abspath(__file__))\n config = pathjoin(root, 'config', 'bids.json')\n super(BIDSLayout, self).__init__(path, config,\n dynamic_getters=True, **kwargs)\n\n def _validate_file(self, f):\n # Return False to exclude a file from indexing. 
This should call\n # some kind of validation regex.\n return True\n\n def _get_nearest_helper(self, path, extension, type=None, **kwargs):\n path = abspath(path)\n\n if path not in self.files:\n raise ValueError(\"File '%s' could not be found in the current BIDS\"\n \" project.\" % path)\n\n if not type:\n # Constrain the search to .json files with the same type as target\n type = self.files[path].entities['type']\n\n tmp = self.get_nearest(path, extensions=extension, all_=True,\n type=type, ignore_strict_entities=['type'],\n **kwargs)\n\n if len(tmp):\n return tmp\n else:\n return None\n\n def get_metadata(self, path, **kwargs):\n\n potentialJSONs = self._get_nearest_helper(path, '.json', **kwargs)\n\n if not isinstance(potentialJSONs, list):\n return potentialJSONs\n\n merged_param_dict = {}\n for json_file_path in reversed(potentialJSONs):\n if os.path.exists(json_file_path):\n param_dict = json.load(open(json_file_path, \"r\",\n encoding='utf-8'))\n merged_param_dict.update(param_dict)\n\n return merged_param_dict\n\n def get_bvec(self, path, **kwargs):\n tmp = self._get_nearest_helper(path, 'bvec', type='dwi', **kwargs)[0]\n if isinstance(tmp, list):\n return tmp[0]\n else:\n return tmp\n\n def get_bval(self, path, **kwargs):\n tmp = self._get_nearest_helper(path, 'bval', type='dwi', **kwargs)[0]\n if isinstance(tmp, list):\n return tmp[0]\n else:\n return tmp\n\n def get_events(self, path, **kwargs):\n tmp = self._get_nearest_helper(path, '.tsv', type='events', **kwargs)\n if isinstance(tmp, list):\n return tmp[0]\n else:\n return tmp\n\n def get_fieldmap(self, path, return_list=False):\n fieldmaps = self._get_fieldmaps(path)\n\n if return_list:\n return fieldmaps\n else:\n if len(fieldmaps) == 1:\n return fieldmaps[0]\n elif len(fieldmaps) > 1:\n raise ValueError(\"More than one fieldmap found, but the \"\n \"'return_list' argument was set to False. 
\"\n \"Either ensure that there is only one \"\n \"fieldmap for this image, or set the \"\n \"'return_list' argument to True and handle \"\n \"the result as a list.\")\n else: # len(fieldmaps) == 0\n return None\n\n def _get_fieldmaps(self, path):\n sub = os.path.split(path)[1].split(\"_\")[0].split(\"sub-\")[1]\n fieldmap_set = []\n type_ = '(phase1|phasediff|epi|fieldmap)'\n for file in self.get(subject=sub, type=type_,\n extensions=['nii.gz', 'nii']):\n metadata = self.get_metadata(file.filename)\n if metadata and \"IntendedFor\" in metadata.keys():\n if isinstance(metadata[\"IntendedFor\"], list):\n intended_for = metadata[\"IntendedFor\"]\n else:\n intended_for = [metadata[\"IntendedFor\"]]\n if any([path.endswith(suffix) for suffix in intended_for]):\n cur_fieldmap = {}\n if file.type == \"phasediff\":\n cur_fieldmap = {\"phasediff\": file.filename,\n \"magnitude1\": file.filename.replace(\n \"phasediff\", \"magnitude1\"),\n \"magnitude2\": file.filename.replace(\n \"phasediff\", \"magnitude2\"),\n \"type\": \"phasediff\"}\n elif file.type == \"phase1\":\n cur_fieldmap[\"phase1\"] = file.filename\n cur_fieldmap[\"magnitude1\"] = \\\n file.filename.replace(\"phase1\", \"magnitude1\")\n cur_fieldmap[\"phase2\"] = \\\n file.filename.replace(\"phase1\", \"phase2\")\n cur_fieldmap[\"magnitude2\"] = \\\n file.filename.replace(\"phase1\", \"magnitude2\")\n cur_fieldmap[\"type\"] = \"phase\"\n elif file.type == \"epi\":\n cur_fieldmap[\"epi\"] = file.filename\n cur_fieldmap[\"type\"] = \"epi\"\n elif file.type == \"fieldmap\":\n cur_fieldmap[\"fieldmap\"] = file.filename\n cur_fieldmap[\"magnitude\"] = \\\n file.filename.replace(\"fieldmap\", \"magnitude\")\n cur_fieldmap[\"type\"] = \"fieldmap\"\n fieldmap_set.append(cur_fieldmap)\n return fieldmap_set\n\n def find_match(self, target, source=None):\n\n # Try to take the easy way out\n if source is not None:\n _target = source.split('.')[0] + '.' + target\n if os.path.exists(_target):\n return target\n\n if target in list(self.entities.keys()):\n candidates = list(self.entities[target].files.keys())\n else:\n candidates = []\n\n for root, directories, filenames in os.walk(self.root):\n for f in filenames:\n if re.search(target + '$', f):\n if os.path.sep == \"\\\\\":\n f = f.replace(\"\\\\\", \"\\\\\\\\\")\n candidates.append(f)\n\n if source is None:\n return candidates\n\n # Walk up the file hierarchy from source, find first match\n if not os.path.exists(source):\n raise OSError(\"The file '%s' doesn't exist.\" % source)\n elif not source.startswith(self.root):\n raise ValueError(\"The file '%s' is not contained \"\n \"within the current project \"\n \"directory (%s).\" % (source, self.root))\n rel = os.path.relpath(dirname(source), self.root)\n sep = os.path.sep\n chunks = rel.split(sep)\n n_chunks = len(chunks)\n for i in range(n_chunks, -1, -1):\n path = pathjoin(self.root, *chunks[:i])\n patt = path + '\\%s[^\\%s]+$' % (sep, sep)\n if sep == \"\\\\\":\n patt = path + '\\\\[^\\\\]+$'\n patt = patt.replace(\"\\\\\", \"\\\\\\\\\")\n matches = [x for x in candidates if re.search(patt, x)]\n if matches:\n if len(matches) == 1:\n return matches[0]\n else:\n raise ValueError(\"Ambiguous target: more than one \"\n \"candidate file found in \"\n \"directory '%s'.\" % path)\n return None\n", "path": "bids/grabbids/bids_layout.py"}]} | 2,365 | 257 |
gh_patches_debug_33030 | rasdani/github-patches | git_diff | pypa__cibuildwheel-1613 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
setup.py setup() not detected in __name__ == '__main__' block
### Description
My setup.py setup() includes:
python_requires=">=3.8"
However, cibuildwheel still tries, and fails, to compile under Python 3.6.
I understand there is [CIBW_BUILD / CIBW_SKIP](https://cibuildwheel.readthedocs.io/en/stable/options/#build-skip), but that then duplicates the python_requires information.
I can add a \[project\] section to pyproject.toml but that leads to a lot of problems because it ends up fighting with setup() parameters and they **really** don't like it.
I believe cibuildwheel should establish the Python version support automatically whether it comes from setuptools or pyproject.toml, and not try to build on unsupported versions. My [pyproject.toml](https://github.com/rogerbinns/apsw/blob/master/pyproject.toml) is:
````
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
````
### Build log
https://github.com/rogerbinns/apsw/actions/runs/6175182758/job/16761477543
### CI config
https://github.com/rogerbinns/apsw/actions/runs/6175182758/workflow
--- END ISSUE ---
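For reference, the pattern the title describes, where `setup()` is invoked only under the `__main__` guard so a scan that expects an unguarded top-level call misses `python_requires`, looks roughly like this (a hypothetical setup.py, not the reporter's actual file):

```python
# setup.py (hypothetical): setup() is called only inside the __main__ guard,
# so tooling that only inspects a top-level, unguarded setup() call will not
# see python_requires here.
from setuptools import setup

if __name__ == "__main__":
    setup(
        name="example-package",
        version="0.1.0",
        python_requires=">=3.8",
    )
```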
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cibuildwheel/projectfiles.py`
Content:
```
1 from __future__ import annotations
2
3 import ast
4 import configparser
5 import contextlib
6 from pathlib import Path
7
8 from ._compat import tomllib
9
10
11 class Analyzer(ast.NodeVisitor):
12 def __init__(self) -> None:
13 self.requires_python: str | None = None
14
15 def visit(self, node: ast.AST) -> None:
16 for inner_node in ast.walk(node):
17 for child in ast.iter_child_nodes(inner_node):
18 child.parent = inner_node # type: ignore[attr-defined]
19 super().visit(node)
20
21 def visit_keyword(self, node: ast.keyword) -> None:
22 self.generic_visit(node)
23 # Must not be nested in an if or other structure
24 # This will be Module -> Expr -> Call -> keyword
25 if (
26 node.arg == "python_requires"
27 and not hasattr(node.parent.parent.parent, "parent") # type: ignore[attr-defined]
28 and isinstance(node.value, ast.Constant)
29 ):
30 self.requires_python = node.value.value
31
32
33 def setup_py_python_requires(content: str) -> str | None:
34 try:
35 tree = ast.parse(content)
36 analyzer = Analyzer()
37 analyzer.visit(tree)
38 return analyzer.requires_python or None
39 except Exception: # pylint: disable=broad-except
40 return None
41
42
43 def get_requires_python_str(package_dir: Path) -> str | None:
44 """Return the python requires string from the most canonical source available, or None"""
45
46 # Read in from pyproject.toml:project.requires-python
47 with contextlib.suppress(FileNotFoundError):
48 with (package_dir / "pyproject.toml").open("rb") as f1:
49 info = tomllib.load(f1)
50 with contextlib.suppress(KeyError, IndexError, TypeError):
51 return str(info["project"]["requires-python"])
52
53 # Read in from setup.cfg:options.python_requires
54 config = configparser.ConfigParser()
55 with contextlib.suppress(FileNotFoundError):
56 config.read(package_dir / "setup.cfg")
57 with contextlib.suppress(KeyError, IndexError, TypeError):
58 return str(config["options"]["python_requires"])
59
60 setup_py = package_dir / "setup.py"
61 with contextlib.suppress(FileNotFoundError), setup_py.open(encoding="utf8") as f2:
62 return setup_py_python_requires(f2.read())
63
64 return None
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cibuildwheel/projectfiles.py b/cibuildwheel/projectfiles.py
--- a/cibuildwheel/projectfiles.py
+++ b/cibuildwheel/projectfiles.py
@@ -8,6 +8,43 @@
from ._compat import tomllib
+def get_parent(node: ast.AST | None, depth: int = 1) -> ast.AST | None:
+ for _ in range(depth):
+ node = getattr(node, "parent", None)
+ return node
+
+
+def is_main(parent: ast.AST | None) -> bool:
+ if parent is None:
+ return False
+
+ # This would be much nicer with 3.10's pattern matching!
+ if not isinstance(parent, ast.If):
+ return False
+ if not isinstance(parent.test, ast.Compare):
+ return False
+
+ try:
+ (op,) = parent.test.ops
+ (comp,) = parent.test.comparators
+ except ValueError:
+ return False
+
+ if not isinstance(op, ast.Eq):
+ return False
+
+ values = {comp, parent.test.left}
+
+ mains = {x for x in values if isinstance(x, ast.Constant) and x.value == "__main__"}
+ if len(mains) != 1:
+ return False
+ consts = {x for x in values if isinstance(x, ast.Name) and x.id == "__name__"}
+ if len(consts) != 1:
+ return False
+
+ return True
+
+
class Analyzer(ast.NodeVisitor):
def __init__(self) -> None:
self.requires_python: str | None = None
@@ -19,13 +56,22 @@
super().visit(node)
def visit_keyword(self, node: ast.keyword) -> None:
+ # Must not be nested except for if __name__ == "__main__"
+
self.generic_visit(node)
- # Must not be nested in an if or other structure
# This will be Module -> Expr -> Call -> keyword
+ parent = get_parent(node, 4)
+ unnested = parent is None
+
+ # This will be Module -> If -> Expr -> Call -> keyword
+ name_main_unnested = (
+ parent is not None and get_parent(parent) is None and is_main(get_parent(node, 3))
+ )
+
if (
node.arg == "python_requires"
- and not hasattr(node.parent.parent.parent, "parent") # type: ignore[attr-defined]
and isinstance(node.value, ast.Constant)
+ and (unnested or name_main_unnested)
):
self.requires_python = node.value.value
| {"golden_diff": "diff --git a/cibuildwheel/projectfiles.py b/cibuildwheel/projectfiles.py\n--- a/cibuildwheel/projectfiles.py\n+++ b/cibuildwheel/projectfiles.py\n@@ -8,6 +8,43 @@\n from ._compat import tomllib\n \n \n+def get_parent(node: ast.AST | None, depth: int = 1) -> ast.AST | None:\n+ for _ in range(depth):\n+ node = getattr(node, \"parent\", None)\n+ return node\n+\n+\n+def is_main(parent: ast.AST | None) -> bool:\n+ if parent is None:\n+ return False\n+\n+ # This would be much nicer with 3.10's pattern matching!\n+ if not isinstance(parent, ast.If):\n+ return False\n+ if not isinstance(parent.test, ast.Compare):\n+ return False\n+\n+ try:\n+ (op,) = parent.test.ops\n+ (comp,) = parent.test.comparators\n+ except ValueError:\n+ return False\n+\n+ if not isinstance(op, ast.Eq):\n+ return False\n+\n+ values = {comp, parent.test.left}\n+\n+ mains = {x for x in values if isinstance(x, ast.Constant) and x.value == \"__main__\"}\n+ if len(mains) != 1:\n+ return False\n+ consts = {x for x in values if isinstance(x, ast.Name) and x.id == \"__name__\"}\n+ if len(consts) != 1:\n+ return False\n+\n+ return True\n+\n+\n class Analyzer(ast.NodeVisitor):\n def __init__(self) -> None:\n self.requires_python: str | None = None\n@@ -19,13 +56,22 @@\n super().visit(node)\n \n def visit_keyword(self, node: ast.keyword) -> None:\n+ # Must not be nested except for if __name__ == \"__main__\"\n+\n self.generic_visit(node)\n- # Must not be nested in an if or other structure\n # This will be Module -> Expr -> Call -> keyword\n+ parent = get_parent(node, 4)\n+ unnested = parent is None\n+\n+ # This will be Module -> If -> Expr -> Call -> keyword\n+ name_main_unnested = (\n+ parent is not None and get_parent(parent) is None and is_main(get_parent(node, 3))\n+ )\n+\n if (\n node.arg == \"python_requires\"\n- and not hasattr(node.parent.parent.parent, \"parent\") # type: ignore[attr-defined]\n and isinstance(node.value, ast.Constant)\n+ and (unnested or name_main_unnested)\n ):\n self.requires_python = node.value.value\n", "issue": "setup.py setup() not detected in __name__ == '__main__' block\n### Description\n\nMy setup.py setup() includes:\r\n\r\n python_requires=\">=3.8\"\r\n\r\nHowever cibuildwheel still tries and fails to compile under Python 3.6.\r\n\r\nI understand there is [CIBW_BUILD / CIBW_SKIP](https://cibuildwheel.readthedocs.io/en/stable/options/#build-skip) but that is then duplicating the python requires information.\r\n\r\nI can add a \\[project\\] section to pyproject.toml but that leads to a lot of problems because it ends up fighting with setup() parameters and they **really** don't like it.\r\n\r\nI believe cibuildwheel should establish the Python version support automatically whether it comes from setuptools or pyproject.toml, and not try to build on unsupported versions. 
My [pyproject.toml](https://github.com/rogerbinns/apsw/blob/master/pyproject.toml) is:\r\n\r\n````\r\n[build-system]\r\nrequires = [\"setuptools\"]\r\nbuild-backend = \"setuptools.build_meta\"\r\n````\r\n\n\n### Build log\n\nhttps://github.com/rogerbinns/apsw/actions/runs/6175182758/job/16761477543\n\n### CI config\n\nhttps://github.com/rogerbinns/apsw/actions/runs/6175182758/workflow\n", "before_files": [{"content": "from __future__ import annotations\n\nimport ast\nimport configparser\nimport contextlib\nfrom pathlib import Path\n\nfrom ._compat import tomllib\n\n\nclass Analyzer(ast.NodeVisitor):\n def __init__(self) -> None:\n self.requires_python: str | None = None\n\n def visit(self, node: ast.AST) -> None:\n for inner_node in ast.walk(node):\n for child in ast.iter_child_nodes(inner_node):\n child.parent = inner_node # type: ignore[attr-defined]\n super().visit(node)\n\n def visit_keyword(self, node: ast.keyword) -> None:\n self.generic_visit(node)\n # Must not be nested in an if or other structure\n # This will be Module -> Expr -> Call -> keyword\n if (\n node.arg == \"python_requires\"\n and not hasattr(node.parent.parent.parent, \"parent\") # type: ignore[attr-defined]\n and isinstance(node.value, ast.Constant)\n ):\n self.requires_python = node.value.value\n\n\ndef setup_py_python_requires(content: str) -> str | None:\n try:\n tree = ast.parse(content)\n analyzer = Analyzer()\n analyzer.visit(tree)\n return analyzer.requires_python or None\n except Exception: # pylint: disable=broad-except\n return None\n\n\ndef get_requires_python_str(package_dir: Path) -> str | None:\n \"\"\"Return the python requires string from the most canonical source available, or None\"\"\"\n\n # Read in from pyproject.toml:project.requires-python\n with contextlib.suppress(FileNotFoundError):\n with (package_dir / \"pyproject.toml\").open(\"rb\") as f1:\n info = tomllib.load(f1)\n with contextlib.suppress(KeyError, IndexError, TypeError):\n return str(info[\"project\"][\"requires-python\"])\n\n # Read in from setup.cfg:options.python_requires\n config = configparser.ConfigParser()\n with contextlib.suppress(FileNotFoundError):\n config.read(package_dir / \"setup.cfg\")\n with contextlib.suppress(KeyError, IndexError, TypeError):\n return str(config[\"options\"][\"python_requires\"])\n\n setup_py = package_dir / \"setup.py\"\n with contextlib.suppress(FileNotFoundError), setup_py.open(encoding=\"utf8\") as f2:\n return setup_py_python_requires(f2.read())\n\n return None\n", "path": "cibuildwheel/projectfiles.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport ast\nimport configparser\nimport contextlib\nfrom pathlib import Path\n\nfrom ._compat import tomllib\n\n\ndef get_parent(node: ast.AST | None, depth: int = 1) -> ast.AST | None:\n for _ in range(depth):\n node = getattr(node, \"parent\", None)\n return node\n\n\ndef is_main(parent: ast.AST | None) -> bool:\n if parent is None:\n return False\n\n # This would be much nicer with 3.10's pattern matching!\n if not isinstance(parent, ast.If):\n return False\n if not isinstance(parent.test, ast.Compare):\n return False\n\n try:\n (op,) = parent.test.ops\n (comp,) = parent.test.comparators\n except ValueError:\n return False\n\n if not isinstance(op, ast.Eq):\n return False\n\n values = {comp, parent.test.left}\n\n mains = {x for x in values if isinstance(x, ast.Constant) and x.value == \"__main__\"}\n if len(mains) != 1:\n return False\n consts = {x for x in values if isinstance(x, ast.Name) and x.id == \"__name__\"}\n if 
len(consts) != 1:\n return False\n\n return True\n\n\nclass Analyzer(ast.NodeVisitor):\n def __init__(self) -> None:\n self.requires_python: str | None = None\n\n def visit(self, node: ast.AST) -> None:\n for inner_node in ast.walk(node):\n for child in ast.iter_child_nodes(inner_node):\n child.parent = inner_node # type: ignore[attr-defined]\n super().visit(node)\n\n def visit_keyword(self, node: ast.keyword) -> None:\n # Must not be nested except for if __name__ == \"__main__\"\n\n self.generic_visit(node)\n # This will be Module -> Expr -> Call -> keyword\n parent = get_parent(node, 4)\n unnested = parent is None\n\n # This will be Module -> If -> Expr -> Call -> keyword\n name_main_unnested = (\n parent is not None and get_parent(parent) is None and is_main(get_parent(node, 3))\n )\n\n if (\n node.arg == \"python_requires\"\n and isinstance(node.value, ast.Constant)\n and (unnested or name_main_unnested)\n ):\n self.requires_python = node.value.value\n\n\ndef setup_py_python_requires(content: str) -> str | None:\n try:\n tree = ast.parse(content)\n analyzer = Analyzer()\n analyzer.visit(tree)\n return analyzer.requires_python or None\n except Exception: # pylint: disable=broad-except\n return None\n\n\ndef get_requires_python_str(package_dir: Path) -> str | None:\n \"\"\"Return the python requires string from the most canonical source available, or None\"\"\"\n\n # Read in from pyproject.toml:project.requires-python\n with contextlib.suppress(FileNotFoundError):\n with (package_dir / \"pyproject.toml\").open(\"rb\") as f1:\n info = tomllib.load(f1)\n with contextlib.suppress(KeyError, IndexError, TypeError):\n return str(info[\"project\"][\"requires-python\"])\n\n # Read in from setup.cfg:options.python_requires\n config = configparser.ConfigParser()\n with contextlib.suppress(FileNotFoundError):\n config.read(package_dir / \"setup.cfg\")\n with contextlib.suppress(KeyError, IndexError, TypeError):\n return str(config[\"options\"][\"python_requires\"])\n\n setup_py = package_dir / \"setup.py\"\n with contextlib.suppress(FileNotFoundError), setup_py.open(encoding=\"utf8\") as f2:\n return setup_py_python_requires(f2.read())\n\n return None\n", "path": "cibuildwheel/projectfiles.py"}]} | 1,180 | 596 |
gh_patches_debug_42711 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-2463 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fail to fetch shard using multi-process in Python.
```python
class ImageDataset(IterableDataset):
def __init__(self, data_shard_service, shuffle=False):
self.data_shard_service = data_shard_service
self._shuffle = shuffle
def __iter__(self):
while True:
index = self.data_shard_service.fetch_shard()
image, label = read_images(index)
yield image, label
dataset = ImageDataset(
allreduce_controller.data_shard_service, shuffle=True
)
data_loader = DataLoader(
dataset=dataset,
batch_size=args.batch_size,
num_workers=2
)
```

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticai_api/pytorch/controller.py`
Content:
```
1 # Copyright 2020 The ElasticDL Authors. All rights reserved.
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 import os
15 import time
16 import traceback
17
18 from elasticai_api.common.base_controller import (
19 DEFAULT_MAX_ALLREDUCE_RETRY_NUM,
20 RETRY_ALLREDUCE_INTERVAL_SECS,
21 AllReduceController,
22 )
23 from elasticai_api.common.constants import WorkerEnv
24 from elasticai_api.common.data_shard_service import DataShardService
25 from elasticai_api.common.master_client import build_master_client
26 from elasticai_api.util.log_utils import default_logger as logger
27
28 try:
29 import horovod.torch as hvd
30 from horovod.common.exceptions import HorovodInternalError
31 from horovod.torch.functions import (
32 broadcast_optimizer_state,
33 broadcast_parameters,
34 broadcast_object,
35 )
36
37 except ImportError:
38 hvd = None
39
40
41 def create_elastic_controller(batch_size, num_epochs=None, dataset_size=None):
42 """Create an elastic AllReduce controller with data shard service.
43 Users can use the `controller.data_shard_service` to get data
44 shards like:
45 ```python
46 while True:
47 shard = controller.data_shard_service.fetch_shard()
48 for i in range(shard.start, shard.end):
49 yield i
50 ```
51
52 Users also can use the controller to do an elastic training.
53
54 ```python
55 model = ...
56 optimizer = optim.SGD(model.parameters(), lr=0.1)
57 optimizer = hvd.DistributedOptimizer(optimizer)
58
59 controller.set_broadcast_model(model)
60 ontroller.set_broadcast_optimizer(optimizer)
61 model.train()
62 for batch_idx, (data, target) in enumerate(data_loader):
63 # Use the elastic function to wrap the training function with a batch.
64 elastic_train_one_batch = allreduce_controller.elastic_run(
65 train_one_batch
66 )
67
68 def train_one_batch(model, optimizer, data, target):
69 optimizer.zero_grad()
70 output = model(data)
71 loss = F.nll_loss(output, target)
72 loss.backward()
73 optimizer.step()
74 return loss
75 ```
76
77 Args:
78 batch_size: The batch size of a single worker.
79 num_epochs: The number of epochs.
80 dataset_size: The total size of dataset.
81 """
82 master_client = build_master_client()
83 data_shard_service = DataShardService(
84 master_client, batch_size, num_epochs, dataset_size
85 )
86
87 controller = PyTorchAllReduceController(master_client, data_shard_service)
88 controller.init_horovod_locally()
89 return controller
90
91
92 class PyTorchAllReduceController(AllReduceController):
93 def __init__(self, master_client, data_shard_service):
94 super(PyTorchAllReduceController, self).__init__(
95 master_client, data_shard_service
96 )
97 self._model = None
98 self._optimizer = None
99 self.backward_passes_per_step = 1
100 # ElasticDL master should set the number of workers into envs.
101 self.global_batch_num_per_step = int(
102 os.getenv(WorkerEnv.WORKER_NUM, 1)
103 )
104 self.global_completed_batch_num = 0
105
106 def set_broadcast_model(self, model):
107 self._model = model
108
109 def set_broadcast_optimizer(self, optimizer):
110 self._optimizer = optimizer
111
112 def broadcast(self):
113 broadcast_parameters(self._model.state_dict(), root_rank=0)
114 broadcast_optimizer_state(self._optimizer, root_rank=0)
115 self.global_completed_batch_num = broadcast_object(
116 self.global_completed_batch_num, name="GlobalCompletedBatchNum"
117 )
118
119 def train_one_batch_with_retries(self, func, *args, **kwargs):
120 self.reset_backward_passes_per_step()
121 allreduce_success = False
122 for _ in range(DEFAULT_MAX_ALLREDUCE_RETRY_NUM):
123 try:
124 self._broadcast_if_needed()
125 result = func(*args, **kwargs)
126 allreduce_success = True
127 break
128 except HorovodInternalError:
129 logger.warning(
130 "Failed to perform allreduce operation on "
131 "the gradients. Retrying..."
132 )
133 # Those error message show that the communication
134 # to merge gradient fails and we can rebuild the
135 # communication.
136 self.restore()
137 except RuntimeError:
138 traceback.print_exc()
139 self.restore()
140 if not allreduce_success:
141 raise RuntimeError("Failed to perform allreduce.")
142 self._update_completed_minibatches()
143 return result
144
145 def restore(self):
146 time.sleep(RETRY_ALLREDUCE_INTERVAL_SECS)
147 # Call `load_state_dict` to reset the state of Horovod optimizer
148 self._optimizer.load_state_dict(self._optimizer.state_dict())
149 self._optimizer.zero_grad()
150 self._rendezvous_manager.init_horovod_if_needed()
151
152 def _update_completed_minibatches(self):
153 if (
154 hasattr(self._optimizer, "fixed_global_batch_size")
155 and self._optimizer.fixed_global_batch_size
156 ):
157 if self._optimizer.update_gradients:
158 self.global_completed_batch_num += (
159 self.global_batch_num_per_step
160 )
161 else:
162 self.global_completed_batch_num += hvd.size()
163
164 def reset_backward_passes_per_step(self):
165 # Only reset backward_passes_per_step when using the optimizer
166 # with fixed_global_batch_size
167 if (
168 hasattr(self._optimizer, "fixed_global_batch_size")
169 and self._optimizer.fixed_global_batch_size
170 ):
171 world_size = hvd.size()
172 rank = hvd.rank()
173 self.backward_passes_per_step = int(
174 self.global_batch_num_per_step / world_size
175 )
176 if rank < self.global_batch_num_per_step % world_size:
177 self.backward_passes_per_step += 1
178 if (
179 self.backward_passes_per_step
180 != self._optimizer.backward_passes_per_step
181 ):
182 self._optimizer.set_backward_passes_per_step(
183 self.backward_passes_per_step
184 )
185 logger.info(
186 "Backward passes per step = {}".format(
187 self._optimizer.backward_passes_per_step
188 )
189 )
190
```
Path: `elasticai_api/common/data_shard_service.py`
Content:
```
1 # Copyright 2020 The ElasticDL Authors. All rights reserved.
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 import threading
15 from collections import deque
16
17 from elasticai_api.common.constants import TaskExecCounterKey
18 from elasticai_api.common.master_client import build_master_client
19 from elasticai_api.proto import elasticai_api_pb2
20
21
22 def build_data_shard_service(
23 batch_size,
24 num_epochs=None,
25 dataset_size=None,
26 task_type=elasticai_api_pb2.TRAINING,
27 ):
28 master_client = build_master_client()
29 return DataShardService(
30 batch_size=batch_size,
31 master_client=master_client,
32 num_epochs=num_epochs,
33 dataset_size=dataset_size,
34 task_type=task_type,
35 )
36
37
38 class DataShardService(object):
39 def __init__(
40 self,
41 master_client,
42 batch_size,
43 num_epochs=None,
44 dataset_size=None,
45 task_type=elasticai_api_pb2.TRAINING,
46 ):
47 self._mc = master_client
48 self._batch_size = batch_size
49 self._num_epochs = num_epochs
50 self._dataset_size = dataset_size
51 self._task_type = task_type
52 self._lock = threading.Lock()
53 self._failed_record_count = 0
54 self._reported_record_count = 0
55 self._current_task = None
56 self._pending_tasks = deque()
57 self._report_training_params()
58
59 def _report_training_params(self):
60 if self._num_epochs and self._dataset_size:
61 self._mc.report_training_params(
62 self._batch_size, self._num_epochs, self._dataset_size
63 )
64
65 def get_current_task(self):
66 return self._current_task
67
68 def get_task(self, task_type=None):
69 task = self._mc.get_task(task_type)
70 if task.type == self._task_type:
71 with self._lock:
72 self._pending_tasks.append(task)
73 if len(self._pending_tasks) == 1:
74 self._current_task = task
75
76 return task
77
78 def _report_task(self, task, err_msg=""):
79 if self._failed_record_count != 0:
80 exec_counters = {
81 TaskExecCounterKey.FAIL_COUNT: self._failed_record_count
82 }
83 else:
84 exec_counters = None
85 self._mc.report_task_result(
86 task.task_id, err_msg, exec_counters=exec_counters
87 )
88
89 def report_batch_done(self, batch_size=None, err_msg=""):
90 """
91 Report the number of records in the latest processed batch,
92 so DynamicShardingManager knows if some pending tasks are finished
93 and report_task_result to the master.
94 Return True if there are some finished tasks, False otherwise.
95 """
96 record_count = batch_size if batch_size else self._batch_size
97 self._reported_record_count += record_count
98 if err_msg:
99 self._failed_record_count += record_count
100
101 if not self._pending_tasks:
102 return False
103 task = self._pending_tasks[0]
104 total_record_num = task.shard.end - task.shard.start
105 if self._reported_record_count >= total_record_num:
106 # Keep popping tasks until the reported record count is less
107 # than the size of the current data since `batch_size` may be
108 # larger than `shard.end - shard.start`
109 with self._lock:
110 while (
111 self._pending_tasks
112 and self._reported_record_count
113 >= self._pending_tasks[0].shard.end
114 - self._pending_tasks[0].shard.start
115 ):
116 self._reported_record_count -= (
117 self._pending_tasks[0].shard.end
118 - self._pending_tasks[0].shard.start
119 )
120 task = self._pending_tasks.popleft()
121 self._report_task(task, err_msg)
122 self._failed_record_count = 0
123 if self._pending_tasks:
124 self._current_task = self._pending_tasks[0]
125 return True
126 return False
127
128 def fetch_shard(self):
129 """Fetch data shard and each shard contains the name,
130 start and end index.
131 """
132 task = self.get_task(self._task_type)
133 if task.type != self._task_type:
134 return None
135
136 return task.shard
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticai_api/common/data_shard_service.py b/elasticai_api/common/data_shard_service.py
--- a/elasticai_api/common/data_shard_service.py
+++ b/elasticai_api/common/data_shard_service.py
@@ -11,8 +11,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import random
import threading
+import time
from collections import deque
+from multiprocessing import SimpleQueue
from elasticai_api.common.constants import TaskExecCounterKey
from elasticai_api.common.master_client import build_master_client
@@ -134,3 +137,51 @@
return None
return task.shard
+
+
+class RecordIndexService(DataShardService):
+ def __init__(
+ self,
+ master_client,
+ batch_size,
+ num_epochs=None,
+ dataset_size=None,
+ task_type=elasticai_api_pb2.TRAINING,
+ shuffle=False,
+ ):
+ super(RecordIndexService, self).__init__(
+ master_client, batch_size, num_epochs, dataset_size, task_type
+ )
+ self._shuffle = shuffle
+ self._shard_queue = SimpleQueue()
+ threading.Thread(
+ target=self._get_shard_indices,
+ name="fetch_shard_indices",
+ daemon=True,
+ ).start()
+
+ def _get_shard_indices(self):
+ while True:
+ if self._shard_queue.empty():
+ task = self.get_task(self._task_type)
+ if not task.shard or task.type != self._task_type:
+ break
+ ids = list(range(task.shard.start, task.shard.end))
+ if self._shuffle:
+ random.shuffle(ids)
+ for i in ids:
+ self._shard_queue.put(i)
+ else:
+ time.sleep(1)
+
+ def fetch_record_index(self):
+ """Fetch an index of the record. The function get an index
+ from a queue because there may be multiple sub-process to call
+ the function.
+ """
+ for _ in range(30):
+ if not self._shard_queue.empty():
+ return self._shard_queue.get()
+ else:
+ time.sleep(1)
+ raise StopIteration
diff --git a/elasticai_api/pytorch/controller.py b/elasticai_api/pytorch/controller.py
--- a/elasticai_api/pytorch/controller.py
+++ b/elasticai_api/pytorch/controller.py
@@ -21,7 +21,7 @@
AllReduceController,
)
from elasticai_api.common.constants import WorkerEnv
-from elasticai_api.common.data_shard_service import DataShardService
+from elasticai_api.common.data_shard_service import RecordIndexService
from elasticai_api.common.master_client import build_master_client
from elasticai_api.util.log_utils import default_logger as logger
@@ -38,15 +38,14 @@
hvd = None
-def create_elastic_controller(batch_size, num_epochs=None, dataset_size=None):
- """Create an elastic AllReduce controller with data shard service.
+def create_elastic_controller(
+ batch_size, num_epochs=None, dataset_size=None, shuffle=False
+):
+ """Create an elastic AllReduce controller with record index service.
Users can use the `controller.data_shard_service` to get data
shards like:
```python
- while True:
- shard = controller.data_shard_service.fetch_shard()
- for i in range(shard.start, shard.end):
- yield i
+ index = controller.data_shard_service.fetch_record_index()
```
Users also can use the controller to do an elastic training.
@@ -80,11 +79,17 @@
dataset_size: The total size of dataset.
"""
master_client = build_master_client()
- data_shard_service = DataShardService(
- master_client, batch_size, num_epochs, dataset_size
+ record_index_service = RecordIndexService(
+ master_client=master_client,
+ batch_size=batch_size,
+ num_epochs=num_epochs,
+ dataset_size=dataset_size,
+ shuffle=shuffle,
)
- controller = PyTorchAllReduceController(master_client, data_shard_service)
+ controller = PyTorchAllReduceController(
+ master_client, record_index_service
+ )
controller.init_horovod_locally()
return controller
| {"golden_diff": "diff --git a/elasticai_api/common/data_shard_service.py b/elasticai_api/common/data_shard_service.py\n--- a/elasticai_api/common/data_shard_service.py\n+++ b/elasticai_api/common/data_shard_service.py\n@@ -11,8 +11,11 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import random\n import threading\n+import time\n from collections import deque\n+from multiprocessing import SimpleQueue\n \n from elasticai_api.common.constants import TaskExecCounterKey\n from elasticai_api.common.master_client import build_master_client\n@@ -134,3 +137,51 @@\n return None\n \n return task.shard\n+\n+\n+class RecordIndexService(DataShardService):\n+ def __init__(\n+ self,\n+ master_client,\n+ batch_size,\n+ num_epochs=None,\n+ dataset_size=None,\n+ task_type=elasticai_api_pb2.TRAINING,\n+ shuffle=False,\n+ ):\n+ super(RecordIndexService, self).__init__(\n+ master_client, batch_size, num_epochs, dataset_size, task_type\n+ )\n+ self._shuffle = shuffle\n+ self._shard_queue = SimpleQueue()\n+ threading.Thread(\n+ target=self._get_shard_indices,\n+ name=\"fetch_shard_indices\",\n+ daemon=True,\n+ ).start()\n+\n+ def _get_shard_indices(self):\n+ while True:\n+ if self._shard_queue.empty():\n+ task = self.get_task(self._task_type)\n+ if not task.shard or task.type != self._task_type:\n+ break\n+ ids = list(range(task.shard.start, task.shard.end))\n+ if self._shuffle:\n+ random.shuffle(ids)\n+ for i in ids:\n+ self._shard_queue.put(i)\n+ else:\n+ time.sleep(1)\n+\n+ def fetch_record_index(self):\n+ \"\"\"Fetch an index of the record. The function get an index\n+ from a queue because there may be multiple sub-process to call\n+ the function.\n+ \"\"\"\n+ for _ in range(30):\n+ if not self._shard_queue.empty():\n+ return self._shard_queue.get()\n+ else:\n+ time.sleep(1)\n+ raise StopIteration\ndiff --git a/elasticai_api/pytorch/controller.py b/elasticai_api/pytorch/controller.py\n--- a/elasticai_api/pytorch/controller.py\n+++ b/elasticai_api/pytorch/controller.py\n@@ -21,7 +21,7 @@\n AllReduceController,\n )\n from elasticai_api.common.constants import WorkerEnv\n-from elasticai_api.common.data_shard_service import DataShardService\n+from elasticai_api.common.data_shard_service import RecordIndexService\n from elasticai_api.common.master_client import build_master_client\n from elasticai_api.util.log_utils import default_logger as logger\n \n@@ -38,15 +38,14 @@\n hvd = None\n \n \n-def create_elastic_controller(batch_size, num_epochs=None, dataset_size=None):\n- \"\"\"Create an elastic AllReduce controller with data shard service.\n+def create_elastic_controller(\n+ batch_size, num_epochs=None, dataset_size=None, shuffle=False\n+):\n+ \"\"\"Create an elastic AllReduce controller with record index service.\n Users can use the `controller.data_shard_service` to get data\n shards like:\n ```python\n- while True:\n- shard = controller.data_shard_service.fetch_shard()\n- for i in range(shard.start, shard.end):\n- yield i\n+ index = controller.data_shard_service.fetch_record_index()\n ```\n \n Users also can use the controller to do an elastic training.\n@@ -80,11 +79,17 @@\n dataset_size: The total size of dataset.\n \"\"\"\n master_client = build_master_client()\n- data_shard_service = DataShardService(\n- master_client, batch_size, num_epochs, dataset_size\n+ record_index_service = RecordIndexService(\n+ master_client=master_client,\n+ batch_size=batch_size,\n+ num_epochs=num_epochs,\n+ dataset_size=dataset_size,\n+ shuffle=shuffle,\n )\n 
\n- controller = PyTorchAllReduceController(master_client, data_shard_service)\n+ controller = PyTorchAllReduceController(\n+ master_client, record_index_service\n+ )\n controller.init_horovod_locally()\n return controller\n", "issue": "Fail to fetch shard using multi-process in Python.\n```python\r\nclass ImageDataset(IterableDataset):\r\n def __init__(self, data_shard_service, shuffle=False):\r\n self.data_shard_service = data_shard_service\r\n self._shuffle = shuffle\r\n\r\n def __iter__(self):\r\n while True:\r\n index = self.data_shard_service.fetch_shard()\r\n image, label = read_images(index)\r\n yield image, label\r\n\r\ndataset = ImageDataset(\r\n allreduce_controller.data_shard_service, shuffle=True\r\n )\r\n\r\ndata_loader = DataLoader(\r\n dataset=dataset,\r\n batch_size=args.batch_size,\r\n num_workers=2\r\n )\r\n```\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The ElasticDL Authors. All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport time\nimport traceback\n\nfrom elasticai_api.common.base_controller import (\n DEFAULT_MAX_ALLREDUCE_RETRY_NUM,\n RETRY_ALLREDUCE_INTERVAL_SECS,\n AllReduceController,\n)\nfrom elasticai_api.common.constants import WorkerEnv\nfrom elasticai_api.common.data_shard_service import DataShardService\nfrom elasticai_api.common.master_client import build_master_client\nfrom elasticai_api.util.log_utils import default_logger as logger\n\ntry:\n import horovod.torch as hvd\n from horovod.common.exceptions import HorovodInternalError\n from horovod.torch.functions import (\n broadcast_optimizer_state,\n broadcast_parameters,\n broadcast_object,\n )\n\nexcept ImportError:\n hvd = None\n\n\ndef create_elastic_controller(batch_size, num_epochs=None, dataset_size=None):\n \"\"\"Create an elastic AllReduce controller with data shard service.\n Users can use the `controller.data_shard_service` to get data\n shards like:\n ```python\n while True:\n shard = controller.data_shard_service.fetch_shard()\n for i in range(shard.start, shard.end):\n yield i\n ```\n\n Users also can use the controller to do an elastic training.\n\n ```python\n model = ...\n optimizer = optim.SGD(model.parameters(), lr=0.1)\n optimizer = hvd.DistributedOptimizer(optimizer)\n\n controller.set_broadcast_model(model)\n ontroller.set_broadcast_optimizer(optimizer)\n model.train()\n for batch_idx, (data, target) in enumerate(data_loader):\n # Use the elastic function to wrap the training function with a batch.\n elastic_train_one_batch = allreduce_controller.elastic_run(\n train_one_batch\n )\n\n def train_one_batch(model, optimizer, data, target):\n optimizer.zero_grad()\n output = model(data)\n loss = F.nll_loss(output, target)\n loss.backward()\n optimizer.step()\n return loss\n ```\n\n Args:\n batch_size: The batch size of a single worker.\n num_epochs: The number of epochs.\n dataset_size: The total size of dataset.\n \"\"\"\n master_client = build_master_client()\n data_shard_service = DataShardService(\n master_client, batch_size, 
num_epochs, dataset_size\n )\n\n controller = PyTorchAllReduceController(master_client, data_shard_service)\n controller.init_horovod_locally()\n return controller\n\n\nclass PyTorchAllReduceController(AllReduceController):\n def __init__(self, master_client, data_shard_service):\n super(PyTorchAllReduceController, self).__init__(\n master_client, data_shard_service\n )\n self._model = None\n self._optimizer = None\n self.backward_passes_per_step = 1\n # ElasticDL master should set the number of workers into envs.\n self.global_batch_num_per_step = int(\n os.getenv(WorkerEnv.WORKER_NUM, 1)\n )\n self.global_completed_batch_num = 0\n\n def set_broadcast_model(self, model):\n self._model = model\n\n def set_broadcast_optimizer(self, optimizer):\n self._optimizer = optimizer\n\n def broadcast(self):\n broadcast_parameters(self._model.state_dict(), root_rank=0)\n broadcast_optimizer_state(self._optimizer, root_rank=0)\n self.global_completed_batch_num = broadcast_object(\n self.global_completed_batch_num, name=\"GlobalCompletedBatchNum\"\n )\n\n def train_one_batch_with_retries(self, func, *args, **kwargs):\n self.reset_backward_passes_per_step()\n allreduce_success = False\n for _ in range(DEFAULT_MAX_ALLREDUCE_RETRY_NUM):\n try:\n self._broadcast_if_needed()\n result = func(*args, **kwargs)\n allreduce_success = True\n break\n except HorovodInternalError:\n logger.warning(\n \"Failed to perform allreduce operation on \"\n \"the gradients. Retrying...\"\n )\n # Those error message show that the communication\n # to merge gradient fails and we can rebuild the\n # communication.\n self.restore()\n except RuntimeError:\n traceback.print_exc()\n self.restore()\n if not allreduce_success:\n raise RuntimeError(\"Failed to perform allreduce.\")\n self._update_completed_minibatches()\n return result\n\n def restore(self):\n time.sleep(RETRY_ALLREDUCE_INTERVAL_SECS)\n # Call `load_state_dict` to reset the state of Horovod optimizer\n self._optimizer.load_state_dict(self._optimizer.state_dict())\n self._optimizer.zero_grad()\n self._rendezvous_manager.init_horovod_if_needed()\n\n def _update_completed_minibatches(self):\n if (\n hasattr(self._optimizer, \"fixed_global_batch_size\")\n and self._optimizer.fixed_global_batch_size\n ):\n if self._optimizer.update_gradients:\n self.global_completed_batch_num += (\n self.global_batch_num_per_step\n )\n else:\n self.global_completed_batch_num += hvd.size()\n\n def reset_backward_passes_per_step(self):\n # Only reset backward_passes_per_step when using the optimizer\n # with fixed_global_batch_size\n if (\n hasattr(self._optimizer, \"fixed_global_batch_size\")\n and self._optimizer.fixed_global_batch_size\n ):\n world_size = hvd.size()\n rank = hvd.rank()\n self.backward_passes_per_step = int(\n self.global_batch_num_per_step / world_size\n )\n if rank < self.global_batch_num_per_step % world_size:\n self.backward_passes_per_step += 1\n if (\n self.backward_passes_per_step\n != self._optimizer.backward_passes_per_step\n ):\n self._optimizer.set_backward_passes_per_step(\n self.backward_passes_per_step\n )\n logger.info(\n \"Backward passes per step = {}\".format(\n self._optimizer.backward_passes_per_step\n )\n )\n", "path": "elasticai_api/pytorch/controller.py"}, {"content": "# Copyright 2020 The ElasticDL Authors. 
All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport threading\nfrom collections import deque\n\nfrom elasticai_api.common.constants import TaskExecCounterKey\nfrom elasticai_api.common.master_client import build_master_client\nfrom elasticai_api.proto import elasticai_api_pb2\n\n\ndef build_data_shard_service(\n batch_size,\n num_epochs=None,\n dataset_size=None,\n task_type=elasticai_api_pb2.TRAINING,\n):\n master_client = build_master_client()\n return DataShardService(\n batch_size=batch_size,\n master_client=master_client,\n num_epochs=num_epochs,\n dataset_size=dataset_size,\n task_type=task_type,\n )\n\n\nclass DataShardService(object):\n def __init__(\n self,\n master_client,\n batch_size,\n num_epochs=None,\n dataset_size=None,\n task_type=elasticai_api_pb2.TRAINING,\n ):\n self._mc = master_client\n self._batch_size = batch_size\n self._num_epochs = num_epochs\n self._dataset_size = dataset_size\n self._task_type = task_type\n self._lock = threading.Lock()\n self._failed_record_count = 0\n self._reported_record_count = 0\n self._current_task = None\n self._pending_tasks = deque()\n self._report_training_params()\n\n def _report_training_params(self):\n if self._num_epochs and self._dataset_size:\n self._mc.report_training_params(\n self._batch_size, self._num_epochs, self._dataset_size\n )\n\n def get_current_task(self):\n return self._current_task\n\n def get_task(self, task_type=None):\n task = self._mc.get_task(task_type)\n if task.type == self._task_type:\n with self._lock:\n self._pending_tasks.append(task)\n if len(self._pending_tasks) == 1:\n self._current_task = task\n\n return task\n\n def _report_task(self, task, err_msg=\"\"):\n if self._failed_record_count != 0:\n exec_counters = {\n TaskExecCounterKey.FAIL_COUNT: self._failed_record_count\n }\n else:\n exec_counters = None\n self._mc.report_task_result(\n task.task_id, err_msg, exec_counters=exec_counters\n )\n\n def report_batch_done(self, batch_size=None, err_msg=\"\"):\n \"\"\"\n Report the number of records in the latest processed batch,\n so DynamicShardingManager knows if some pending tasks are finished\n and report_task_result to the master.\n Return True if there are some finished tasks, False otherwise.\n \"\"\"\n record_count = batch_size if batch_size else self._batch_size\n self._reported_record_count += record_count\n if err_msg:\n self._failed_record_count += record_count\n\n if not self._pending_tasks:\n return False\n task = self._pending_tasks[0]\n total_record_num = task.shard.end - task.shard.start\n if self._reported_record_count >= total_record_num:\n # Keep popping tasks until the reported record count is less\n # than the size of the current data since `batch_size` may be\n # larger than `shard.end - shard.start`\n with self._lock:\n while (\n self._pending_tasks\n and self._reported_record_count\n >= self._pending_tasks[0].shard.end\n - self._pending_tasks[0].shard.start\n ):\n self._reported_record_count -= (\n self._pending_tasks[0].shard.end\n - 
self._pending_tasks[0].shard.start\n )\n task = self._pending_tasks.popleft()\n self._report_task(task, err_msg)\n self._failed_record_count = 0\n if self._pending_tasks:\n self._current_task = self._pending_tasks[0]\n return True\n return False\n\n def fetch_shard(self):\n \"\"\"Fetch data shard and each shard contains the name,\n start and end index.\n \"\"\"\n task = self.get_task(self._task_type)\n if task.type != self._task_type:\n return None\n\n return task.shard\n", "path": "elasticai_api/common/data_shard_service.py"}], "after_files": [{"content": "# Copyright 2020 The ElasticDL Authors. All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport time\nimport traceback\n\nfrom elasticai_api.common.base_controller import (\n DEFAULT_MAX_ALLREDUCE_RETRY_NUM,\n RETRY_ALLREDUCE_INTERVAL_SECS,\n AllReduceController,\n)\nfrom elasticai_api.common.constants import WorkerEnv\nfrom elasticai_api.common.data_shard_service import RecordIndexService\nfrom elasticai_api.common.master_client import build_master_client\nfrom elasticai_api.util.log_utils import default_logger as logger\n\ntry:\n import horovod.torch as hvd\n from horovod.common.exceptions import HorovodInternalError\n from horovod.torch.functions import (\n broadcast_optimizer_state,\n broadcast_parameters,\n broadcast_object,\n )\n\nexcept ImportError:\n hvd = None\n\n\ndef create_elastic_controller(\n batch_size, num_epochs=None, dataset_size=None, shuffle=False\n):\n \"\"\"Create an elastic AllReduce controller with record index service.\n Users can use the `controller.data_shard_service` to get data\n shards like:\n ```python\n index = controller.data_shard_service.fetch_record_index()\n ```\n\n Users also can use the controller to do an elastic training.\n\n ```python\n model = ...\n optimizer = optim.SGD(model.parameters(), lr=0.1)\n optimizer = hvd.DistributedOptimizer(optimizer)\n\n controller.set_broadcast_model(model)\n ontroller.set_broadcast_optimizer(optimizer)\n model.train()\n for batch_idx, (data, target) in enumerate(data_loader):\n # Use the elastic function to wrap the training function with a batch.\n elastic_train_one_batch = allreduce_controller.elastic_run(\n train_one_batch\n )\n\n def train_one_batch(model, optimizer, data, target):\n optimizer.zero_grad()\n output = model(data)\n loss = F.nll_loss(output, target)\n loss.backward()\n optimizer.step()\n return loss\n ```\n\n Args:\n batch_size: The batch size of a single worker.\n num_epochs: The number of epochs.\n dataset_size: The total size of dataset.\n \"\"\"\n master_client = build_master_client()\n record_index_service = RecordIndexService(\n master_client=master_client,\n batch_size=batch_size,\n num_epochs=num_epochs,\n dataset_size=dataset_size,\n shuffle=shuffle,\n )\n\n controller = PyTorchAllReduceController(\n master_client, record_index_service\n )\n controller.init_horovod_locally()\n return controller\n\n\nclass PyTorchAllReduceController(AllReduceController):\n def __init__(self, master_client, 
data_shard_service):\n super(PyTorchAllReduceController, self).__init__(\n master_client, data_shard_service\n )\n self._model = None\n self._optimizer = None\n self.backward_passes_per_step = 1\n # ElasticDL master should set the number of workers into envs.\n self.global_batch_num_per_step = int(\n os.getenv(WorkerEnv.WORKER_NUM, 1)\n )\n self.global_completed_batch_num = 0\n\n def set_broadcast_model(self, model):\n self._model = model\n\n def set_broadcast_optimizer(self, optimizer):\n self._optimizer = optimizer\n\n def broadcast(self):\n broadcast_parameters(self._model.state_dict(), root_rank=0)\n broadcast_optimizer_state(self._optimizer, root_rank=0)\n self.global_completed_batch_num = broadcast_object(\n self.global_completed_batch_num, name=\"GlobalCompletedBatchNum\"\n )\n\n def train_one_batch_with_retries(self, func, *args, **kwargs):\n self.reset_backward_passes_per_step()\n allreduce_success = False\n for _ in range(DEFAULT_MAX_ALLREDUCE_RETRY_NUM):\n try:\n self._broadcast_if_needed()\n result = func(*args, **kwargs)\n allreduce_success = True\n break\n except HorovodInternalError:\n logger.warning(\n \"Failed to perform allreduce operation on \"\n \"the gradients. Retrying...\"\n )\n # Those error message show that the communication\n # to merge gradient fails and we can rebuild the\n # communication.\n self.restore()\n except RuntimeError:\n traceback.print_exc()\n self.restore()\n if not allreduce_success:\n raise RuntimeError(\"Failed to perform allreduce.\")\n self._update_completed_minibatches()\n return result\n\n def restore(self):\n time.sleep(RETRY_ALLREDUCE_INTERVAL_SECS)\n # Call `load_state_dict` to reset the state of Horovod optimizer\n self._optimizer.load_state_dict(self._optimizer.state_dict())\n self._optimizer.zero_grad()\n self._rendezvous_manager.init_horovod_if_needed()\n\n def _update_completed_minibatches(self):\n if (\n hasattr(self._optimizer, \"fixed_global_batch_size\")\n and self._optimizer.fixed_global_batch_size\n ):\n if self._optimizer.update_gradients:\n self.global_completed_batch_num += (\n self.global_batch_num_per_step\n )\n else:\n self.global_completed_batch_num += hvd.size()\n\n def reset_backward_passes_per_step(self):\n # Only reset backward_passes_per_step when using the optimizer\n # with fixed_global_batch_size\n if (\n hasattr(self._optimizer, \"fixed_global_batch_size\")\n and self._optimizer.fixed_global_batch_size\n ):\n world_size = hvd.size()\n rank = hvd.rank()\n self.backward_passes_per_step = int(\n self.global_batch_num_per_step / world_size\n )\n if rank < self.global_batch_num_per_step % world_size:\n self.backward_passes_per_step += 1\n if (\n self.backward_passes_per_step\n != self._optimizer.backward_passes_per_step\n ):\n self._optimizer.set_backward_passes_per_step(\n self.backward_passes_per_step\n )\n logger.info(\n \"Backward passes per step = {}\".format(\n self._optimizer.backward_passes_per_step\n )\n )\n", "path": "elasticai_api/pytorch/controller.py"}, {"content": "# Copyright 2020 The ElasticDL Authors. 
All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport random\nimport threading\nimport time\nfrom collections import deque\nfrom multiprocessing import SimpleQueue\n\nfrom elasticai_api.common.constants import TaskExecCounterKey\nfrom elasticai_api.common.master_client import build_master_client\nfrom elasticai_api.proto import elasticai_api_pb2\n\n\ndef build_data_shard_service(\n batch_size,\n num_epochs=None,\n dataset_size=None,\n task_type=elasticai_api_pb2.TRAINING,\n):\n master_client = build_master_client()\n return DataShardService(\n batch_size=batch_size,\n master_client=master_client,\n num_epochs=num_epochs,\n dataset_size=dataset_size,\n task_type=task_type,\n )\n\n\nclass DataShardService(object):\n def __init__(\n self,\n master_client,\n batch_size,\n num_epochs=None,\n dataset_size=None,\n task_type=elasticai_api_pb2.TRAINING,\n ):\n self._mc = master_client\n self._batch_size = batch_size\n self._num_epochs = num_epochs\n self._dataset_size = dataset_size\n self._task_type = task_type\n self._lock = threading.Lock()\n self._failed_record_count = 0\n self._reported_record_count = 0\n self._current_task = None\n self._pending_tasks = deque()\n self._report_training_params()\n\n def _report_training_params(self):\n if self._num_epochs and self._dataset_size:\n self._mc.report_training_params(\n self._batch_size, self._num_epochs, self._dataset_size\n )\n\n def get_current_task(self):\n return self._current_task\n\n def get_task(self, task_type=None):\n task = self._mc.get_task(task_type)\n if task.type == self._task_type:\n with self._lock:\n self._pending_tasks.append(task)\n if len(self._pending_tasks) == 1:\n self._current_task = task\n\n return task\n\n def _report_task(self, task, err_msg=\"\"):\n if self._failed_record_count != 0:\n exec_counters = {\n TaskExecCounterKey.FAIL_COUNT: self._failed_record_count\n }\n else:\n exec_counters = None\n self._mc.report_task_result(\n task.task_id, err_msg, exec_counters=exec_counters\n )\n\n def report_batch_done(self, batch_size=None, err_msg=\"\"):\n \"\"\"\n Report the number of records in the latest processed batch,\n so DynamicShardingManager knows if some pending tasks are finished\n and report_task_result to the master.\n Return True if there are some finished tasks, False otherwise.\n \"\"\"\n record_count = batch_size if batch_size else self._batch_size\n self._reported_record_count += record_count\n if err_msg:\n self._failed_record_count += record_count\n\n if not self._pending_tasks:\n return False\n task = self._pending_tasks[0]\n total_record_num = task.shard.end - task.shard.start\n if self._reported_record_count >= total_record_num:\n # Keep popping tasks until the reported record count is less\n # than the size of the current data since `batch_size` may be\n # larger than `shard.end - shard.start`\n with self._lock:\n while (\n self._pending_tasks\n and self._reported_record_count\n >= self._pending_tasks[0].shard.end\n - self._pending_tasks[0].shard.start\n ):\n 
self._reported_record_count -= (\n self._pending_tasks[0].shard.end\n - self._pending_tasks[0].shard.start\n )\n task = self._pending_tasks.popleft()\n self._report_task(task, err_msg)\n self._failed_record_count = 0\n if self._pending_tasks:\n self._current_task = self._pending_tasks[0]\n return True\n return False\n\n def fetch_shard(self):\n \"\"\"Fetch data shard and each shard contains the name,\n start and end index.\n \"\"\"\n task = self.get_task(self._task_type)\n if task.type != self._task_type:\n return None\n\n return task.shard\n\n\nclass RecordIndexService(DataShardService):\n def __init__(\n self,\n master_client,\n batch_size,\n num_epochs=None,\n dataset_size=None,\n task_type=elasticai_api_pb2.TRAINING,\n shuffle=False,\n ):\n super(RecordIndexService, self).__init__(\n master_client, batch_size, num_epochs, dataset_size, task_type\n )\n self._shuffle = shuffle\n self._shard_queue = SimpleQueue()\n threading.Thread(\n target=self._get_shard_indices,\n name=\"fetch_shard_indices\",\n daemon=True,\n ).start()\n\n def _get_shard_indices(self):\n while True:\n if self._shard_queue.empty():\n task = self.get_task(self._task_type)\n if not task.shard or task.type != self._task_type:\n break\n ids = list(range(task.shard.start, task.shard.end))\n if self._shuffle:\n random.shuffle(ids)\n for i in ids:\n self._shard_queue.put(i)\n else:\n time.sleep(1)\n\n def fetch_record_index(self):\n \"\"\"Fetch an index of the record. The function get an index\n from a queue because there may be multiple sub-process to call\n the function.\n \"\"\"\n for _ in range(30):\n if not self._shard_queue.empty():\n return self._shard_queue.get()\n else:\n time.sleep(1)\n raise StopIteration\n", "path": "elasticai_api/common/data_shard_service.py"}]} | 3,675 | 987 |
gh_patches_debug_14713 | rasdani/github-patches | git_diff | pypa__setuptools-3309 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
package_data not doing recursive glob calls
Even though glob supports it, glob support for recursive ** searching is not enabled when specifying package_data. This is not clear in the documentation.
https://github.com/pypa/setuptools/blob/a94ccbf404a79d56f9b171024dee361de9a948da/setuptools/command/build_py.py#L106
This means that only single `*` searches in a specific directory are supported. By changing the `glob` function to support recursive, this would allow us to specify `**` patterns to a directory instead of having to specify every sub-directory in the structure.
Reference:
https://github.com/python/cpython/blob/master/Lib/glob.py#L18
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/command/build_py.py`
Content:
```
1 from glob import glob
2 from distutils.util import convert_path
3 import distutils.command.build_py as orig
4 import os
5 import fnmatch
6 import textwrap
7 import io
8 import distutils.errors
9 import itertools
10 import stat
11 from setuptools.extern.more_itertools import unique_everseen
12
13
14 def make_writable(target):
15 os.chmod(target, os.stat(target).st_mode | stat.S_IWRITE)
16
17
18 class build_py(orig.build_py):
19 """Enhanced 'build_py' command that includes data files with packages
20
21 The data files are specified via a 'package_data' argument to 'setup()'.
22 See 'setuptools.dist.Distribution' for more details.
23
24 Also, this version of the 'build_py' command allows you to specify both
25 'py_modules' and 'packages' in the same setup operation.
26 """
27
28 def finalize_options(self):
29 orig.build_py.finalize_options(self)
30 self.package_data = self.distribution.package_data
31 self.exclude_package_data = self.distribution.exclude_package_data or {}
32 if 'data_files' in self.__dict__:
33 del self.__dict__['data_files']
34 self.__updated_files = []
35
36 def run(self):
37 """Build modules, packages, and copy data files to build directory"""
38 if not self.py_modules and not self.packages:
39 return
40
41 if self.py_modules:
42 self.build_modules()
43
44 if self.packages:
45 self.build_packages()
46 self.build_package_data()
47
48 # Only compile actual .py files, using our base class' idea of what our
49 # output files are.
50 self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0))
51
52 def __getattr__(self, attr):
53 "lazily compute data files"
54 if attr == 'data_files':
55 self.data_files = self._get_data_files()
56 return self.data_files
57 return orig.build_py.__getattr__(self, attr)
58
59 def build_module(self, module, module_file, package):
60 outfile, copied = orig.build_py.build_module(self, module, module_file, package)
61 if copied:
62 self.__updated_files.append(outfile)
63 return outfile, copied
64
65 def _get_data_files(self):
66 """Generate list of '(package,src_dir,build_dir,filenames)' tuples"""
67 self.analyze_manifest()
68 return list(map(self._get_pkg_data_files, self.packages or ()))
69
70 def get_data_files_without_manifest(self):
71 """
72 Generate list of ``(package,src_dir,build_dir,filenames)`` tuples,
73 but without triggering any attempt to analyze or build the manifest.
74 """
75 # Prevent eventual errors from unset `manifest_files`
76 # (that would otherwise be set by `analyze_manifest`)
77 self.__dict__.setdefault('manifest_files', {})
78 return list(map(self._get_pkg_data_files, self.packages or ()))
79
80 def _get_pkg_data_files(self, package):
81 # Locate package source directory
82 src_dir = self.get_package_dir(package)
83
84 # Compute package build directory
85 build_dir = os.path.join(*([self.build_lib] + package.split('.')))
86
87 # Strip directory from globbed filenames
88 filenames = [
89 os.path.relpath(file, src_dir)
90 for file in self.find_data_files(package, src_dir)
91 ]
92 return package, src_dir, build_dir, filenames
93
94 def find_data_files(self, package, src_dir):
95 """Return filenames for package's data files in 'src_dir'"""
96 patterns = self._get_platform_patterns(
97 self.package_data,
98 package,
99 src_dir,
100 )
101 globs_expanded = map(glob, patterns)
102 # flatten the expanded globs into an iterable of matches
103 globs_matches = itertools.chain.from_iterable(globs_expanded)
104 glob_files = filter(os.path.isfile, globs_matches)
105 files = itertools.chain(
106 self.manifest_files.get(package, []),
107 glob_files,
108 )
109 return self.exclude_data_files(package, src_dir, files)
110
111 def build_package_data(self):
112 """Copy data files into build directory"""
113 for package, src_dir, build_dir, filenames in self.data_files:
114 for filename in filenames:
115 target = os.path.join(build_dir, filename)
116 self.mkpath(os.path.dirname(target))
117 srcfile = os.path.join(src_dir, filename)
118 outf, copied = self.copy_file(srcfile, target)
119 make_writable(target)
120 srcfile = os.path.abspath(srcfile)
121
122 def analyze_manifest(self):
123 self.manifest_files = mf = {}
124 if not self.distribution.include_package_data:
125 return
126 src_dirs = {}
127 for package in self.packages or ():
128 # Locate package source directory
129 src_dirs[assert_relative(self.get_package_dir(package))] = package
130
131 self.run_command('egg_info')
132 ei_cmd = self.get_finalized_command('egg_info')
133 for path in ei_cmd.filelist.files:
134 d, f = os.path.split(assert_relative(path))
135 prev = None
136 oldf = f
137 while d and d != prev and d not in src_dirs:
138 prev = d
139 d, df = os.path.split(d)
140 f = os.path.join(df, f)
141 if d in src_dirs:
142 if path.endswith('.py') and f == oldf:
143 continue # it's a module, not data
144 mf.setdefault(src_dirs[d], []).append(path)
145
146 def get_data_files(self):
147 pass # Lazily compute data files in _get_data_files() function.
148
149 def check_package(self, package, package_dir):
150 """Check namespace packages' __init__ for declare_namespace"""
151 try:
152 return self.packages_checked[package]
153 except KeyError:
154 pass
155
156 init_py = orig.build_py.check_package(self, package, package_dir)
157 self.packages_checked[package] = init_py
158
159 if not init_py or not self.distribution.namespace_packages:
160 return init_py
161
162 for pkg in self.distribution.namespace_packages:
163 if pkg == package or pkg.startswith(package + '.'):
164 break
165 else:
166 return init_py
167
168 with io.open(init_py, 'rb') as f:
169 contents = f.read()
170 if b'declare_namespace' not in contents:
171 raise distutils.errors.DistutilsError(
172 "Namespace package problem: %s is a namespace package, but "
173 "its\n__init__.py does not call declare_namespace()! Please "
174 'fix it.\n(See the setuptools manual under '
175 '"Namespace Packages" for details.)\n"' % (package,)
176 )
177 return init_py
178
179 def initialize_options(self):
180 self.packages_checked = {}
181 orig.build_py.initialize_options(self)
182
183 def get_package_dir(self, package):
184 res = orig.build_py.get_package_dir(self, package)
185 if self.distribution.src_root is not None:
186 return os.path.join(self.distribution.src_root, res)
187 return res
188
189 def exclude_data_files(self, package, src_dir, files):
190 """Filter filenames for package's data files in 'src_dir'"""
191 files = list(files)
192 patterns = self._get_platform_patterns(
193 self.exclude_package_data,
194 package,
195 src_dir,
196 )
197 match_groups = (fnmatch.filter(files, pattern) for pattern in patterns)
198 # flatten the groups of matches into an iterable of matches
199 matches = itertools.chain.from_iterable(match_groups)
200 bad = set(matches)
201 keepers = (fn for fn in files if fn not in bad)
202 # ditch dupes
203 return list(unique_everseen(keepers))
204
205 @staticmethod
206 def _get_platform_patterns(spec, package, src_dir):
207 """
208 yield platform-specific path patterns (suitable for glob
209 or fn_match) from a glob-based spec (such as
210 self.package_data or self.exclude_package_data)
211 matching package in src_dir.
212 """
213 raw_patterns = itertools.chain(
214 spec.get('', []),
215 spec.get(package, []),
216 )
217 return (
218 # Each pattern has to be converted to a platform-specific path
219 os.path.join(src_dir, convert_path(pattern))
220 for pattern in raw_patterns
221 )
222
223
224 def assert_relative(path):
225 if not os.path.isabs(path):
226 return path
227 from distutils.errors import DistutilsSetupError
228
229 msg = (
230 textwrap.dedent(
231 """
232 Error: setup script specifies an absolute path:
233
234 %s
235
236 setup() arguments must *always* be /-separated paths relative to the
237 setup.py directory, *never* absolute paths.
238 """
239 ).lstrip()
240 % path
241 )
242 raise DistutilsSetupError(msg)
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setuptools/command/build_py.py b/setuptools/command/build_py.py
--- a/setuptools/command/build_py.py
+++ b/setuptools/command/build_py.py
@@ -1,3 +1,4 @@
+from functools import partial
from glob import glob
from distutils.util import convert_path
import distutils.command.build_py as orig
@@ -98,7 +99,7 @@
package,
src_dir,
)
- globs_expanded = map(glob, patterns)
+ globs_expanded = map(partial(glob, recursive=True), patterns)
# flatten the expanded globs into an iterable of matches
globs_matches = itertools.chain.from_iterable(globs_expanded)
glob_files = filter(os.path.isfile, globs_matches)
| {"golden_diff": "diff --git a/setuptools/command/build_py.py b/setuptools/command/build_py.py\n--- a/setuptools/command/build_py.py\n+++ b/setuptools/command/build_py.py\n@@ -1,3 +1,4 @@\n+from functools import partial\n from glob import glob\n from distutils.util import convert_path\n import distutils.command.build_py as orig\n@@ -98,7 +99,7 @@\n package,\n src_dir,\n )\n- globs_expanded = map(glob, patterns)\n+ globs_expanded = map(partial(glob, recursive=True), patterns)\n # flatten the expanded globs into an iterable of matches\n globs_matches = itertools.chain.from_iterable(globs_expanded)\n glob_files = filter(os.path.isfile, globs_matches)\n", "issue": "package_data not doing recursive glob calls\nEven though glob supports it, glob support for recursive ** searching is not enabled when specifying package_data. This is not clear in the documentation.\r\n\r\nhttps://github.com/pypa/setuptools/blob/a94ccbf404a79d56f9b171024dee361de9a948da/setuptools/command/build_py.py#L106\r\n\r\nThis means that only single `*` searches in a specific directory are supported. By changing the `glob` function to support recursive, this would allow us to specify `**` patterns to a directory instead of having to specify every sub-directory in the structure.\r\n\r\nReference:\r\nhttps://github.com/python/cpython/blob/master/Lib/glob.py#L18\n", "before_files": [{"content": "from glob import glob\nfrom distutils.util import convert_path\nimport distutils.command.build_py as orig\nimport os\nimport fnmatch\nimport textwrap\nimport io\nimport distutils.errors\nimport itertools\nimport stat\nfrom setuptools.extern.more_itertools import unique_everseen\n\n\ndef make_writable(target):\n os.chmod(target, os.stat(target).st_mode | stat.S_IWRITE)\n\n\nclass build_py(orig.build_py):\n \"\"\"Enhanced 'build_py' command that includes data files with packages\n\n The data files are specified via a 'package_data' argument to 'setup()'.\n See 'setuptools.dist.Distribution' for more details.\n\n Also, this version of the 'build_py' command allows you to specify both\n 'py_modules' and 'packages' in the same setup operation.\n \"\"\"\n\n def finalize_options(self):\n orig.build_py.finalize_options(self)\n self.package_data = self.distribution.package_data\n self.exclude_package_data = self.distribution.exclude_package_data or {}\n if 'data_files' in self.__dict__:\n del self.__dict__['data_files']\n self.__updated_files = []\n\n def run(self):\n \"\"\"Build modules, packages, and copy data files to build directory\"\"\"\n if not self.py_modules and not self.packages:\n return\n\n if self.py_modules:\n self.build_modules()\n\n if self.packages:\n self.build_packages()\n self.build_package_data()\n\n # Only compile actual .py files, using our base class' idea of what our\n # output files are.\n self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0))\n\n def __getattr__(self, attr):\n \"lazily compute data files\"\n if attr == 'data_files':\n self.data_files = self._get_data_files()\n return self.data_files\n return orig.build_py.__getattr__(self, attr)\n\n def build_module(self, module, module_file, package):\n outfile, copied = orig.build_py.build_module(self, module, module_file, package)\n if copied:\n self.__updated_files.append(outfile)\n return outfile, copied\n\n def _get_data_files(self):\n \"\"\"Generate list of '(package,src_dir,build_dir,filenames)' tuples\"\"\"\n self.analyze_manifest()\n return list(map(self._get_pkg_data_files, self.packages or ()))\n\n def 
get_data_files_without_manifest(self):\n \"\"\"\n Generate list of ``(package,src_dir,build_dir,filenames)`` tuples,\n but without triggering any attempt to analyze or build the manifest.\n \"\"\"\n # Prevent eventual errors from unset `manifest_files`\n # (that would otherwise be set by `analyze_manifest`)\n self.__dict__.setdefault('manifest_files', {})\n return list(map(self._get_pkg_data_files, self.packages or ()))\n\n def _get_pkg_data_files(self, package):\n # Locate package source directory\n src_dir = self.get_package_dir(package)\n\n # Compute package build directory\n build_dir = os.path.join(*([self.build_lib] + package.split('.')))\n\n # Strip directory from globbed filenames\n filenames = [\n os.path.relpath(file, src_dir)\n for file in self.find_data_files(package, src_dir)\n ]\n return package, src_dir, build_dir, filenames\n\n def find_data_files(self, package, src_dir):\n \"\"\"Return filenames for package's data files in 'src_dir'\"\"\"\n patterns = self._get_platform_patterns(\n self.package_data,\n package,\n src_dir,\n )\n globs_expanded = map(glob, patterns)\n # flatten the expanded globs into an iterable of matches\n globs_matches = itertools.chain.from_iterable(globs_expanded)\n glob_files = filter(os.path.isfile, globs_matches)\n files = itertools.chain(\n self.manifest_files.get(package, []),\n glob_files,\n )\n return self.exclude_data_files(package, src_dir, files)\n\n def build_package_data(self):\n \"\"\"Copy data files into build directory\"\"\"\n for package, src_dir, build_dir, filenames in self.data_files:\n for filename in filenames:\n target = os.path.join(build_dir, filename)\n self.mkpath(os.path.dirname(target))\n srcfile = os.path.join(src_dir, filename)\n outf, copied = self.copy_file(srcfile, target)\n make_writable(target)\n srcfile = os.path.abspath(srcfile)\n\n def analyze_manifest(self):\n self.manifest_files = mf = {}\n if not self.distribution.include_package_data:\n return\n src_dirs = {}\n for package in self.packages or ():\n # Locate package source directory\n src_dirs[assert_relative(self.get_package_dir(package))] = package\n\n self.run_command('egg_info')\n ei_cmd = self.get_finalized_command('egg_info')\n for path in ei_cmd.filelist.files:\n d, f = os.path.split(assert_relative(path))\n prev = None\n oldf = f\n while d and d != prev and d not in src_dirs:\n prev = d\n d, df = os.path.split(d)\n f = os.path.join(df, f)\n if d in src_dirs:\n if path.endswith('.py') and f == oldf:\n continue # it's a module, not data\n mf.setdefault(src_dirs[d], []).append(path)\n\n def get_data_files(self):\n pass # Lazily compute data files in _get_data_files() function.\n\n def check_package(self, package, package_dir):\n \"\"\"Check namespace packages' __init__ for declare_namespace\"\"\"\n try:\n return self.packages_checked[package]\n except KeyError:\n pass\n\n init_py = orig.build_py.check_package(self, package, package_dir)\n self.packages_checked[package] = init_py\n\n if not init_py or not self.distribution.namespace_packages:\n return init_py\n\n for pkg in self.distribution.namespace_packages:\n if pkg == package or pkg.startswith(package + '.'):\n break\n else:\n return init_py\n\n with io.open(init_py, 'rb') as f:\n contents = f.read()\n if b'declare_namespace' not in contents:\n raise distutils.errors.DistutilsError(\n \"Namespace package problem: %s is a namespace package, but \"\n \"its\\n__init__.py does not call declare_namespace()! 
Please \"\n 'fix it.\\n(See the setuptools manual under '\n '\"Namespace Packages\" for details.)\\n\"' % (package,)\n )\n return init_py\n\n def initialize_options(self):\n self.packages_checked = {}\n orig.build_py.initialize_options(self)\n\n def get_package_dir(self, package):\n res = orig.build_py.get_package_dir(self, package)\n if self.distribution.src_root is not None:\n return os.path.join(self.distribution.src_root, res)\n return res\n\n def exclude_data_files(self, package, src_dir, files):\n \"\"\"Filter filenames for package's data files in 'src_dir'\"\"\"\n files = list(files)\n patterns = self._get_platform_patterns(\n self.exclude_package_data,\n package,\n src_dir,\n )\n match_groups = (fnmatch.filter(files, pattern) for pattern in patterns)\n # flatten the groups of matches into an iterable of matches\n matches = itertools.chain.from_iterable(match_groups)\n bad = set(matches)\n keepers = (fn for fn in files if fn not in bad)\n # ditch dupes\n return list(unique_everseen(keepers))\n\n @staticmethod\n def _get_platform_patterns(spec, package, src_dir):\n \"\"\"\n yield platform-specific path patterns (suitable for glob\n or fn_match) from a glob-based spec (such as\n self.package_data or self.exclude_package_data)\n matching package in src_dir.\n \"\"\"\n raw_patterns = itertools.chain(\n spec.get('', []),\n spec.get(package, []),\n )\n return (\n # Each pattern has to be converted to a platform-specific path\n os.path.join(src_dir, convert_path(pattern))\n for pattern in raw_patterns\n )\n\n\ndef assert_relative(path):\n if not os.path.isabs(path):\n return path\n from distutils.errors import DistutilsSetupError\n\n msg = (\n textwrap.dedent(\n \"\"\"\n Error: setup script specifies an absolute path:\n\n %s\n\n setup() arguments must *always* be /-separated paths relative to the\n setup.py directory, *never* absolute paths.\n \"\"\"\n ).lstrip()\n % path\n )\n raise DistutilsSetupError(msg)\n", "path": "setuptools/command/build_py.py"}], "after_files": [{"content": "from functools import partial\nfrom glob import glob\nfrom distutils.util import convert_path\nimport distutils.command.build_py as orig\nimport os\nimport fnmatch\nimport textwrap\nimport io\nimport distutils.errors\nimport itertools\nimport stat\nfrom setuptools.extern.more_itertools import unique_everseen\n\n\ndef make_writable(target):\n os.chmod(target, os.stat(target).st_mode | stat.S_IWRITE)\n\n\nclass build_py(orig.build_py):\n \"\"\"Enhanced 'build_py' command that includes data files with packages\n\n The data files are specified via a 'package_data' argument to 'setup()'.\n See 'setuptools.dist.Distribution' for more details.\n\n Also, this version of the 'build_py' command allows you to specify both\n 'py_modules' and 'packages' in the same setup operation.\n \"\"\"\n\n def finalize_options(self):\n orig.build_py.finalize_options(self)\n self.package_data = self.distribution.package_data\n self.exclude_package_data = self.distribution.exclude_package_data or {}\n if 'data_files' in self.__dict__:\n del self.__dict__['data_files']\n self.__updated_files = []\n\n def run(self):\n \"\"\"Build modules, packages, and copy data files to build directory\"\"\"\n if not self.py_modules and not self.packages:\n return\n\n if self.py_modules:\n self.build_modules()\n\n if self.packages:\n self.build_packages()\n self.build_package_data()\n\n # Only compile actual .py files, using our base class' idea of what our\n # output files are.\n self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0))\n\n 
def __getattr__(self, attr):\n \"lazily compute data files\"\n if attr == 'data_files':\n self.data_files = self._get_data_files()\n return self.data_files\n return orig.build_py.__getattr__(self, attr)\n\n def build_module(self, module, module_file, package):\n outfile, copied = orig.build_py.build_module(self, module, module_file, package)\n if copied:\n self.__updated_files.append(outfile)\n return outfile, copied\n\n def _get_data_files(self):\n \"\"\"Generate list of '(package,src_dir,build_dir,filenames)' tuples\"\"\"\n self.analyze_manifest()\n return list(map(self._get_pkg_data_files, self.packages or ()))\n\n def get_data_files_without_manifest(self):\n \"\"\"\n Generate list of ``(package,src_dir,build_dir,filenames)`` tuples,\n but without triggering any attempt to analyze or build the manifest.\n \"\"\"\n # Prevent eventual errors from unset `manifest_files`\n # (that would otherwise be set by `analyze_manifest`)\n self.__dict__.setdefault('manifest_files', {})\n return list(map(self._get_pkg_data_files, self.packages or ()))\n\n def _get_pkg_data_files(self, package):\n # Locate package source directory\n src_dir = self.get_package_dir(package)\n\n # Compute package build directory\n build_dir = os.path.join(*([self.build_lib] + package.split('.')))\n\n # Strip directory from globbed filenames\n filenames = [\n os.path.relpath(file, src_dir)\n for file in self.find_data_files(package, src_dir)\n ]\n return package, src_dir, build_dir, filenames\n\n def find_data_files(self, package, src_dir):\n \"\"\"Return filenames for package's data files in 'src_dir'\"\"\"\n patterns = self._get_platform_patterns(\n self.package_data,\n package,\n src_dir,\n )\n globs_expanded = map(partial(glob, recursive=True), patterns)\n # flatten the expanded globs into an iterable of matches\n globs_matches = itertools.chain.from_iterable(globs_expanded)\n glob_files = filter(os.path.isfile, globs_matches)\n files = itertools.chain(\n self.manifest_files.get(package, []),\n glob_files,\n )\n return self.exclude_data_files(package, src_dir, files)\n\n def build_package_data(self):\n \"\"\"Copy data files into build directory\"\"\"\n for package, src_dir, build_dir, filenames in self.data_files:\n for filename in filenames:\n target = os.path.join(build_dir, filename)\n self.mkpath(os.path.dirname(target))\n srcfile = os.path.join(src_dir, filename)\n outf, copied = self.copy_file(srcfile, target)\n make_writable(target)\n srcfile = os.path.abspath(srcfile)\n\n def analyze_manifest(self):\n self.manifest_files = mf = {}\n if not self.distribution.include_package_data:\n return\n src_dirs = {}\n for package in self.packages or ():\n # Locate package source directory\n src_dirs[assert_relative(self.get_package_dir(package))] = package\n\n self.run_command('egg_info')\n ei_cmd = self.get_finalized_command('egg_info')\n for path in ei_cmd.filelist.files:\n d, f = os.path.split(assert_relative(path))\n prev = None\n oldf = f\n while d and d != prev and d not in src_dirs:\n prev = d\n d, df = os.path.split(d)\n f = os.path.join(df, f)\n if d in src_dirs:\n if path.endswith('.py') and f == oldf:\n continue # it's a module, not data\n mf.setdefault(src_dirs[d], []).append(path)\n\n def get_data_files(self):\n pass # Lazily compute data files in _get_data_files() function.\n\n def check_package(self, package, package_dir):\n \"\"\"Check namespace packages' __init__ for declare_namespace\"\"\"\n try:\n return self.packages_checked[package]\n except KeyError:\n pass\n\n init_py = orig.build_py.check_package(self, 
package, package_dir)\n self.packages_checked[package] = init_py\n\n if not init_py or not self.distribution.namespace_packages:\n return init_py\n\n for pkg in self.distribution.namespace_packages:\n if pkg == package or pkg.startswith(package + '.'):\n break\n else:\n return init_py\n\n with io.open(init_py, 'rb') as f:\n contents = f.read()\n if b'declare_namespace' not in contents:\n raise distutils.errors.DistutilsError(\n \"Namespace package problem: %s is a namespace package, but \"\n \"its\\n__init__.py does not call declare_namespace()! Please \"\n 'fix it.\\n(See the setuptools manual under '\n '\"Namespace Packages\" for details.)\\n\"' % (package,)\n )\n return init_py\n\n def initialize_options(self):\n self.packages_checked = {}\n orig.build_py.initialize_options(self)\n\n def get_package_dir(self, package):\n res = orig.build_py.get_package_dir(self, package)\n if self.distribution.src_root is not None:\n return os.path.join(self.distribution.src_root, res)\n return res\n\n def exclude_data_files(self, package, src_dir, files):\n \"\"\"Filter filenames for package's data files in 'src_dir'\"\"\"\n files = list(files)\n patterns = self._get_platform_patterns(\n self.exclude_package_data,\n package,\n src_dir,\n )\n match_groups = (fnmatch.filter(files, pattern) for pattern in patterns)\n # flatten the groups of matches into an iterable of matches\n matches = itertools.chain.from_iterable(match_groups)\n bad = set(matches)\n keepers = (fn for fn in files if fn not in bad)\n # ditch dupes\n return list(unique_everseen(keepers))\n\n @staticmethod\n def _get_platform_patterns(spec, package, src_dir):\n \"\"\"\n yield platform-specific path patterns (suitable for glob\n or fn_match) from a glob-based spec (such as\n self.package_data or self.exclude_package_data)\n matching package in src_dir.\n \"\"\"\n raw_patterns = itertools.chain(\n spec.get('', []),\n spec.get(package, []),\n )\n return (\n # Each pattern has to be converted to a platform-specific path\n os.path.join(src_dir, convert_path(pattern))\n for pattern in raw_patterns\n )\n\n\ndef assert_relative(path):\n if not os.path.isabs(path):\n return path\n from distutils.errors import DistutilsSetupError\n\n msg = (\n textwrap.dedent(\n \"\"\"\n Error: setup script specifies an absolute path:\n\n %s\n\n setup() arguments must *always* be /-separated paths relative to the\n setup.py directory, *never* absolute paths.\n \"\"\"\n ).lstrip()\n % path\n )\n raise DistutilsSetupError(msg)\n", "path": "setuptools/command/build_py.py"}]} | 2,913 | 167 |
gh_patches_debug_13024 | rasdani/github-patches | git_diff | vega__altair-2570 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tooltip doesn't support "image" key
My understanding is that to render tooltips, Altair uses the `vega-tooltip` plugin. Per that project's [README](https://github.com/vega/vega-tooltip), one feature that library offers is the ability to render images as part of the tooltip via the `image` key. From the docs:
> Supports special keys title (becomes the title of the tooltip) and image (used as the url for an embedded image)
Using the tooltip without the `image` key is fine:
```
mydata = pd.DataFrame.from_records([{'a': 1, 'b': 1}, {'a': 2, 'b': 2}])
chart = alt.Chart(mydata).mark_circle().encode(
x='a',
y='b',
tooltip=alt.Tooltip(['a'], title='My Cool Tooltip')
).interactive()
display(chart)
```
However, when I add the `image` key, it stops working:
```
chart = alt.Chart(mydata).mark_circle().encode(
x='a',
y='b',
tooltip=alt.Tooltip(['a'], title='My Cool Tooltip', image='https://picsum.photos/200')
).interactive()
display(chart)
```
```
SchemaValidationError: Invalid specification
altair.vegalite.v4.schema.channels.Tooltip, validating 'additionalProperties'
Additional properties are not allowed ('image' was unexpected)
```
Maybe this feature is already supported but the schema is out of date? Thanks.
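For comparison, here is a minimal sketch of the pattern that vega-tooltip does document: a data column literally named `image`, passed inside the tooltip field list rather than as an argument to `alt.Tooltip` (the picsum URLs below are just placeholders):

```
# Sketch: image tooltips render when the data has a column named "image"
# and that column is passed as part of the tooltip field list.
import altair as alt
import pandas as pd

mydata = pd.DataFrame.from_records([
    {'a': 1, 'b': 1, 'image': 'https://picsum.photos/id/237/200'},
    {'a': 2, 'b': 2, 'image': 'https://picsum.photos/id/238/200'},
])

alt.Chart(mydata).mark_circle(size=200).encode(
    x='a',
    y='b',
    tooltip=['image'],  # a plain field list, not alt.Tooltip(..., image=...)
).interactive()
```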
---
Please follow these steps to make it more efficient to solve your issue:
- [N/A] Since Altair is a Python wrapper around the Vega-Lite visualization grammar, [most bugs should be reported directly to Vega-Lite](https://github.com/vega/vega-lite/issues). You can click the Action Button of your Altair chart and "Open in Vega Editor" to create a reproducible Vega-Lite example and see if you get the same error in the Vega Editor.
- [x] Search for duplicate issues.
- [x] Use the latest version of Altair.
- [x] Describe how to reproduce the bug and include the full code and data to reproduce it, ideally using a sample data set from `vega_datasets`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/examples/image_tooltip.py`
Content:
```
1 """
2 Image tooltip
3 -------------
4 This example shows how to render images in tooltips.
5 Either URLs or local file paths can be used to reference
6 the images.
7 """
8 # category: other charts
9
10 import altair as alt
11 import pandas as pd
12
13 source = pd.DataFrame.from_records(
14 [{'a': 1, 'b': 1, 'image': 'https://altair-viz.github.io/_static/altair-logo-light.png'},
15 {'a': 2, 'b': 2, 'image': 'https://avatars.githubusercontent.com/u/11796929?s=200&v=4'}]
16 )
17 alt.Chart(source).mark_circle(size=200).encode(
18 x='a',
19 y='b',
20 tooltip=['image'] # Must be a list for the image to render
21 )
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/altair/examples/image_tooltip.py b/altair/examples/image_tooltip.py
--- a/altair/examples/image_tooltip.py
+++ b/altair/examples/image_tooltip.py
@@ -3,7 +3,9 @@
-------------
This example shows how to render images in tooltips.
Either URLs or local file paths can be used to reference
-the images.
+the images. To render the image, you must use the special
+column name "image" in your data and pass it as a list to
+the tooltip encoding.
"""
# category: other charts
@@ -17,5 +19,5 @@
alt.Chart(source).mark_circle(size=200).encode(
x='a',
y='b',
- tooltip=['image'] # Must be a list for the image to render
+ tooltip=['image'] # Must be a list containing a field called "image"
)
| {"golden_diff": "diff --git a/altair/examples/image_tooltip.py b/altair/examples/image_tooltip.py\n--- a/altair/examples/image_tooltip.py\n+++ b/altair/examples/image_tooltip.py\n@@ -3,7 +3,9 @@\n -------------\n This example shows how to render images in tooltips.\n Either URLs or local file paths can be used to reference\n-the images.\n+the images. To render the image, you must use the special\n+column name \"image\" in your data and pass it as a list to\n+the tooltip encoding.\n \"\"\"\n # category: other charts\n \n@@ -17,5 +19,5 @@\n alt.Chart(source).mark_circle(size=200).encode(\n x='a',\n y='b',\n- tooltip=['image'] # Must be a list for the image to render\n+ tooltip=['image'] # Must be a list containing a field called \"image\"\n )\n", "issue": "Tooltip doesn't support \"image\" key\nMy understanding is that to render tooltips, Altair uses the `vega-tooltip` plugin. Per that project's [README](https://github.com/vega/vega-tooltip), one awesome feature that library has is the ability to render images as part of the tooltip with the image key. From the docs:\r\n\r\n> Supports special keys title (becomes the title of the tooltip) and image (used as the url for an embedded image)\r\n\r\nUsing the tooltip without the `image` key is fine:\r\n\r\n```\r\nmydata = pd.DataFrame.from_records([{'a': 1, 'b': 1}, {'a': 2, 'b': 2}])\r\nchart = alt.Chart(mydata).mark_circle().encode(\r\n x='a',\r\n y='b',\r\n tooltip=alt.Tooltip(['a'], title='My Cool Tooltip')\r\n).interactive()\r\n\r\ndisplay(chart)\r\n```\r\n\r\nHowever, when I add the `image` key, it stops working:\r\n\r\n```\r\nchart = alt.Chart(mydata).mark_circle().encode(\r\n x='a',\r\n y='b',\r\n tooltip=alt.Tooltip(['a'], title='My Cool Tooltip', image='https://picsum.photos/200')\r\n).interactive()\r\n\r\ndisplay(chart)\r\n```\r\n\r\n```\r\nSchemaValidationError: Invalid specification\r\n\r\n altair.vegalite.v4.schema.channels.Tooltip, validating 'additionalProperties'\r\n\r\n Additional properties are not allowed ('image' was unexpected)\r\n```\r\n\r\nMaybe this feature is already supported but the schema is out of date? Thanks.\r\n\r\n---\r\n\r\nPlease follow these steps to make it more efficient to solve your issue:\r\n\r\n- [N/A] Since Altair is a Python wrapper around the Vega-Lite visualization grammar, [most bugs should be reported directly to Vega-Lite](https://github.com/vega/vega-lite/issues). 
You can click the Action Button of your Altair chart and \"Open in Vega Editor\" to create a reproducible Vega-Lite example and see if you get the same error in the Vega Editor.\r\n- [x] Search for duplicate issues.\r\n- [x] Use the latest version of Altair.\r\n- [x] Describe how to reproduce the bug and include the full code and data to reproduce it, ideally using a sample data set from `vega_datasets`.\r\n\n", "before_files": [{"content": "\"\"\"\nImage tooltip\n-------------\nThis example shows how to render images in tooltips.\nEither URLs or local file paths can be used to reference\nthe images.\n\"\"\"\n# category: other charts\n\nimport altair as alt\nimport pandas as pd\n\nsource = pd.DataFrame.from_records(\n [{'a': 1, 'b': 1, 'image': 'https://altair-viz.github.io/_static/altair-logo-light.png'},\n {'a': 2, 'b': 2, 'image': 'https://avatars.githubusercontent.com/u/11796929?s=200&v=4'}]\n)\nalt.Chart(source).mark_circle(size=200).encode(\n x='a',\n y='b',\n tooltip=['image'] # Must be a list for the image to render\n)\n", "path": "altair/examples/image_tooltip.py"}], "after_files": [{"content": "\"\"\"\nImage tooltip\n-------------\nThis example shows how to render images in tooltips.\nEither URLs or local file paths can be used to reference\nthe images. To render the image, you must use the special\ncolumn name \"image\" in your data and pass it as a list to\nthe tooltip encoding.\n\"\"\"\n# category: other charts\n\nimport altair as alt\nimport pandas as pd\n\nsource = pd.DataFrame.from_records(\n [{'a': 1, 'b': 1, 'image': 'https://altair-viz.github.io/_static/altair-logo-light.png'},\n {'a': 2, 'b': 2, 'image': 'https://avatars.githubusercontent.com/u/11796929?s=200&v=4'}]\n)\nalt.Chart(source).mark_circle(size=200).encode(\n x='a',\n y='b',\n tooltip=['image'] # Must be a list containing a field called \"image\"\n)\n", "path": "altair/examples/image_tooltip.py"}]} | 952 | 195 |
gh_patches_debug_814 | rasdani/github-patches | git_diff | scrapy__scrapy-4311 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider making METAREFRESH_IGNORE_TAGS an empty list by default
As a way to allow users to fix #1422, #3768 introduced the `METAREFRESH_IGNORE_TAGS` setting.
To keep backward compatibility, the setting was introduced with `['script', 'noscript']` as its default value. However, to reproduce the behavior of web browsers, it seems the right value would be `[]`.
Should we switch the default value of the `METAREFRESH_IGNORE_TAGS` setting to `[]`, even though the change breaks backward compatibility?
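If the default does change to `[]`, projects that depend on the current behaviour could pin the old value explicitly, either in their project `settings.py` or per spider via `custom_settings`. A minimal sketch of the settings-file variant (the names mirror the defaults shipped today):

```
# settings.py of a Scrapy project (sketch): keep the pre-change behaviour
# explicitly so a new built-in default would not affect this project.
METAREFRESH_ENABLED = True
METAREFRESH_IGNORE_TAGS = ['script', 'noscript']
METAREFRESH_MAXDELAY = 100
```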
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/settings/default_settings.py`
Content:
```
1 """
2 This module contains the default values for all settings used by Scrapy.
3
4 For more information about these settings you can read the settings
5 documentation in docs/topics/settings.rst
6
7 Scrapy developers, if you add a setting here remember to:
8
9 * add it in alphabetical order
10 * group similar settings without leaving blank lines
11 * add its documentation to the available settings documentation
12 (docs/topics/settings.rst)
13
14 """
15
16 import sys
17 from importlib import import_module
18 from os.path import join, abspath, dirname
19
20 AJAXCRAWL_ENABLED = False
21
22 ASYNCIO_REACTOR = False
23
24 AUTOTHROTTLE_ENABLED = False
25 AUTOTHROTTLE_DEBUG = False
26 AUTOTHROTTLE_MAX_DELAY = 60.0
27 AUTOTHROTTLE_START_DELAY = 5.0
28 AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
29
30 BOT_NAME = 'scrapybot'
31
32 CLOSESPIDER_TIMEOUT = 0
33 CLOSESPIDER_PAGECOUNT = 0
34 CLOSESPIDER_ITEMCOUNT = 0
35 CLOSESPIDER_ERRORCOUNT = 0
36
37 COMMANDS_MODULE = ''
38
39 COMPRESSION_ENABLED = True
40
41 CONCURRENT_ITEMS = 100
42
43 CONCURRENT_REQUESTS = 16
44 CONCURRENT_REQUESTS_PER_DOMAIN = 8
45 CONCURRENT_REQUESTS_PER_IP = 0
46
47 COOKIES_ENABLED = True
48 COOKIES_DEBUG = False
49
50 DEFAULT_ITEM_CLASS = 'scrapy.item.Item'
51
52 DEFAULT_REQUEST_HEADERS = {
53 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
54 'Accept-Language': 'en',
55 }
56
57 DEPTH_LIMIT = 0
58 DEPTH_STATS_VERBOSE = False
59 DEPTH_PRIORITY = 0
60
61 DNSCACHE_ENABLED = True
62 DNSCACHE_SIZE = 10000
63 DNS_RESOLVER = 'scrapy.resolver.CachingThreadedResolver'
64 DNS_TIMEOUT = 60
65
66 DOWNLOAD_DELAY = 0
67
68 DOWNLOAD_HANDLERS = {}
69 DOWNLOAD_HANDLERS_BASE = {
70 'data': 'scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler',
71 'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',
72 'http': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
73 'https': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
74 's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',
75 'ftp': 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler',
76 }
77
78 DOWNLOAD_TIMEOUT = 180 # 3mins
79
80 DOWNLOAD_MAXSIZE = 1024*1024*1024 # 1024m
81 DOWNLOAD_WARNSIZE = 32*1024*1024 # 32m
82
83 DOWNLOAD_FAIL_ON_DATALOSS = True
84
85 DOWNLOADER = 'scrapy.core.downloader.Downloader'
86
87 DOWNLOADER_HTTPCLIENTFACTORY = 'scrapy.core.downloader.webclient.ScrapyHTTPClientFactory'
88 DOWNLOADER_CLIENTCONTEXTFACTORY = 'scrapy.core.downloader.contextfactory.ScrapyClientContextFactory'
89 DOWNLOADER_CLIENT_TLS_CIPHERS = 'DEFAULT'
90 # Use highest TLS/SSL protocol version supported by the platform, also allowing negotiation:
91 DOWNLOADER_CLIENT_TLS_METHOD = 'TLS'
92 DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING = False
93
94 DOWNLOADER_MIDDLEWARES = {}
95
96 DOWNLOADER_MIDDLEWARES_BASE = {
97 # Engine side
98 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
99 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
100 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
101 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
102 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
103 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
104 'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
105 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
106 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
107 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
108 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
109 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
110 'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
111 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
112 # Downloader side
113 }
114
115 DOWNLOADER_STATS = True
116
117 DUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'
118
119 EDITOR = 'vi'
120 if sys.platform == 'win32':
121 EDITOR = '%s -m idlelib.idle'
122
123 EXTENSIONS = {}
124
125 EXTENSIONS_BASE = {
126 'scrapy.extensions.corestats.CoreStats': 0,
127 'scrapy.extensions.telnet.TelnetConsole': 0,
128 'scrapy.extensions.memusage.MemoryUsage': 0,
129 'scrapy.extensions.memdebug.MemoryDebugger': 0,
130 'scrapy.extensions.closespider.CloseSpider': 0,
131 'scrapy.extensions.feedexport.FeedExporter': 0,
132 'scrapy.extensions.logstats.LogStats': 0,
133 'scrapy.extensions.spiderstate.SpiderState': 0,
134 'scrapy.extensions.throttle.AutoThrottle': 0,
135 }
136
137 FEED_TEMPDIR = None
138 FEED_URI = None
139 FEED_URI_PARAMS = None # a function to extend uri arguments
140 FEED_FORMAT = 'jsonlines'
141 FEED_STORE_EMPTY = False
142 FEED_EXPORT_ENCODING = None
143 FEED_EXPORT_FIELDS = None
144 FEED_STORAGES = {}
145 FEED_STORAGES_BASE = {
146 '': 'scrapy.extensions.feedexport.FileFeedStorage',
147 'file': 'scrapy.extensions.feedexport.FileFeedStorage',
148 'stdout': 'scrapy.extensions.feedexport.StdoutFeedStorage',
149 's3': 'scrapy.extensions.feedexport.S3FeedStorage',
150 'ftp': 'scrapy.extensions.feedexport.FTPFeedStorage',
151 }
152 FEED_EXPORTERS = {}
153 FEED_EXPORTERS_BASE = {
154 'json': 'scrapy.exporters.JsonItemExporter',
155 'jsonlines': 'scrapy.exporters.JsonLinesItemExporter',
156 'jl': 'scrapy.exporters.JsonLinesItemExporter',
157 'csv': 'scrapy.exporters.CsvItemExporter',
158 'xml': 'scrapy.exporters.XmlItemExporter',
159 'marshal': 'scrapy.exporters.MarshalItemExporter',
160 'pickle': 'scrapy.exporters.PickleItemExporter',
161 }
162 FEED_EXPORT_INDENT = 0
163
164 FEED_STORAGE_FTP_ACTIVE = False
165 FEED_STORAGE_S3_ACL = ''
166
167 FILES_STORE_S3_ACL = 'private'
168 FILES_STORE_GCS_ACL = ''
169
170 FTP_USER = 'anonymous'
171 FTP_PASSWORD = 'guest'
172 FTP_PASSIVE_MODE = True
173
174 HTTPCACHE_ENABLED = False
175 HTTPCACHE_DIR = 'httpcache'
176 HTTPCACHE_IGNORE_MISSING = False
177 HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
178 HTTPCACHE_EXPIRATION_SECS = 0
179 HTTPCACHE_ALWAYS_STORE = False
180 HTTPCACHE_IGNORE_HTTP_CODES = []
181 HTTPCACHE_IGNORE_SCHEMES = ['file']
182 HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = []
183 HTTPCACHE_DBM_MODULE = 'dbm'
184 HTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'
185 HTTPCACHE_GZIP = False
186
187 HTTPPROXY_ENABLED = True
188 HTTPPROXY_AUTH_ENCODING = 'latin-1'
189
190 IMAGES_STORE_S3_ACL = 'private'
191 IMAGES_STORE_GCS_ACL = ''
192
193 ITEM_PROCESSOR = 'scrapy.pipelines.ItemPipelineManager'
194
195 ITEM_PIPELINES = {}
196 ITEM_PIPELINES_BASE = {}
197
198 LOG_ENABLED = True
199 LOG_ENCODING = 'utf-8'
200 LOG_FORMATTER = 'scrapy.logformatter.LogFormatter'
201 LOG_FORMAT = '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
202 LOG_DATEFORMAT = '%Y-%m-%d %H:%M:%S'
203 LOG_STDOUT = False
204 LOG_LEVEL = 'DEBUG'
205 LOG_FILE = None
206 LOG_SHORT_NAMES = False
207
208 SCHEDULER_DEBUG = False
209
210 LOGSTATS_INTERVAL = 60.0
211
212 MAIL_HOST = 'localhost'
213 MAIL_PORT = 25
214 MAIL_FROM = 'scrapy@localhost'
215 MAIL_PASS = None
216 MAIL_USER = None
217
218 MEMDEBUG_ENABLED = False # enable memory debugging
219 MEMDEBUG_NOTIFY = [] # send memory debugging report by mail at engine shutdown
220
221 MEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0
222 MEMUSAGE_ENABLED = True
223 MEMUSAGE_LIMIT_MB = 0
224 MEMUSAGE_NOTIFY_MAIL = []
225 MEMUSAGE_WARNING_MB = 0
226
227 METAREFRESH_ENABLED = True
228 METAREFRESH_IGNORE_TAGS = ['script', 'noscript']
229 METAREFRESH_MAXDELAY = 100
230
231 NEWSPIDER_MODULE = ''
232
233 RANDOMIZE_DOWNLOAD_DELAY = True
234
235 REACTOR_THREADPOOL_MAXSIZE = 10
236
237 REDIRECT_ENABLED = True
238 REDIRECT_MAX_TIMES = 20 # uses Firefox default setting
239 REDIRECT_PRIORITY_ADJUST = +2
240
241 REFERER_ENABLED = True
242 REFERRER_POLICY = 'scrapy.spidermiddlewares.referer.DefaultReferrerPolicy'
243
244 RETRY_ENABLED = True
245 RETRY_TIMES = 2 # initial response + 2 retries = 3 requests
246 RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408, 429]
247 RETRY_PRIORITY_ADJUST = -1
248
249 ROBOTSTXT_OBEY = False
250 ROBOTSTXT_PARSER = 'scrapy.robotstxt.ProtegoRobotParser'
251 ROBOTSTXT_USER_AGENT = None
252
253 SCHEDULER = 'scrapy.core.scheduler.Scheduler'
254 SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleLifoDiskQueue'
255 SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.LifoMemoryQueue'
256 SCHEDULER_PRIORITY_QUEUE = 'scrapy.pqueues.ScrapyPriorityQueue'
257
258 SCRAPER_SLOT_MAX_ACTIVE_SIZE = 5000000
259
260 SPIDER_LOADER_CLASS = 'scrapy.spiderloader.SpiderLoader'
261 SPIDER_LOADER_WARN_ONLY = False
262
263 SPIDER_MIDDLEWARES = {}
264
265 SPIDER_MIDDLEWARES_BASE = {
266 # Engine side
267 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,
268 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,
269 'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,
270 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,
271 'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,
272 # Spider side
273 }
274
275 SPIDER_MODULES = []
276
277 STATS_CLASS = 'scrapy.statscollectors.MemoryStatsCollector'
278 STATS_DUMP = True
279
280 STATSMAILER_RCPTS = []
281
282 TEMPLATES_DIR = abspath(join(dirname(__file__), '..', 'templates'))
283
284 URLLENGTH_LIMIT = 2083
285
286 USER_AGENT = 'Scrapy/%s (+https://scrapy.org)' % import_module('scrapy').__version__
287
288 TELNETCONSOLE_ENABLED = 1
289 TELNETCONSOLE_PORT = [6023, 6073]
290 TELNETCONSOLE_HOST = '127.0.0.1'
291 TELNETCONSOLE_USERNAME = 'scrapy'
292 TELNETCONSOLE_PASSWORD = None
293
294 SPIDER_CONTRACTS = {}
295 SPIDER_CONTRACTS_BASE = {
296 'scrapy.contracts.default.UrlContract': 1,
297 'scrapy.contracts.default.CallbackKeywordArgumentsContract': 1,
298 'scrapy.contracts.default.ReturnsContract': 2,
299 'scrapy.contracts.default.ScrapesContract': 3,
300 }
301
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/settings/default_settings.py b/scrapy/settings/default_settings.py
--- a/scrapy/settings/default_settings.py
+++ b/scrapy/settings/default_settings.py
@@ -225,7 +225,7 @@
MEMUSAGE_WARNING_MB = 0
METAREFRESH_ENABLED = True
-METAREFRESH_IGNORE_TAGS = ['script', 'noscript']
+METAREFRESH_IGNORE_TAGS = []
METAREFRESH_MAXDELAY = 100
NEWSPIDER_MODULE = ''
| {"golden_diff": "diff --git a/scrapy/settings/default_settings.py b/scrapy/settings/default_settings.py\n--- a/scrapy/settings/default_settings.py\n+++ b/scrapy/settings/default_settings.py\n@@ -225,7 +225,7 @@\n MEMUSAGE_WARNING_MB = 0\n \n METAREFRESH_ENABLED = True\n-METAREFRESH_IGNORE_TAGS = ['script', 'noscript']\n+METAREFRESH_IGNORE_TAGS = []\n METAREFRESH_MAXDELAY = 100\n \n NEWSPIDER_MODULE = ''\n", "issue": "Consider making METAREFRESH_IGNORE_TAGS an empty list by default\nAs a way to allow users to fix #1422, #3768 introduced the `METAREFRESH_IGNORE_TAGS` setting.\r\n\r\nTo keep backward compatibility, the setting was introduced with `['script', 'noscript']` as default value. However, to reproduce the behavior of web browsers, it seems the right value would be `[]`.\r\n\r\nShould we switch the default value of the `METAREFRESH_IGNORE_TAGS` setting to `[]`, even though the change breaks backward compatibility?\n", "before_files": [{"content": "\"\"\"\nThis module contains the default values for all settings used by Scrapy.\n\nFor more information about these settings you can read the settings\ndocumentation in docs/topics/settings.rst\n\nScrapy developers, if you add a setting here remember to:\n\n* add it in alphabetical order\n* group similar settings without leaving blank lines\n* add its documentation to the available settings documentation\n (docs/topics/settings.rst)\n\n\"\"\"\n\nimport sys\nfrom importlib import import_module\nfrom os.path import join, abspath, dirname\n\nAJAXCRAWL_ENABLED = False\n\nASYNCIO_REACTOR = False\n\nAUTOTHROTTLE_ENABLED = False\nAUTOTHROTTLE_DEBUG = False\nAUTOTHROTTLE_MAX_DELAY = 60.0\nAUTOTHROTTLE_START_DELAY = 5.0\nAUTOTHROTTLE_TARGET_CONCURRENCY = 1.0\n\nBOT_NAME = 'scrapybot'\n\nCLOSESPIDER_TIMEOUT = 0\nCLOSESPIDER_PAGECOUNT = 0\nCLOSESPIDER_ITEMCOUNT = 0\nCLOSESPIDER_ERRORCOUNT = 0\n\nCOMMANDS_MODULE = ''\n\nCOMPRESSION_ENABLED = True\n\nCONCURRENT_ITEMS = 100\n\nCONCURRENT_REQUESTS = 16\nCONCURRENT_REQUESTS_PER_DOMAIN = 8\nCONCURRENT_REQUESTS_PER_IP = 0\n\nCOOKIES_ENABLED = True\nCOOKIES_DEBUG = False\n\nDEFAULT_ITEM_CLASS = 'scrapy.item.Item'\n\nDEFAULT_REQUEST_HEADERS = {\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n 'Accept-Language': 'en',\n}\n\nDEPTH_LIMIT = 0\nDEPTH_STATS_VERBOSE = False\nDEPTH_PRIORITY = 0\n\nDNSCACHE_ENABLED = True\nDNSCACHE_SIZE = 10000\nDNS_RESOLVER = 'scrapy.resolver.CachingThreadedResolver'\nDNS_TIMEOUT = 60\n\nDOWNLOAD_DELAY = 0\n\nDOWNLOAD_HANDLERS = {}\nDOWNLOAD_HANDLERS_BASE = {\n 'data': 'scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler',\n 'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',\n 'http': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',\n 'https': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',\n 's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',\n 'ftp': 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler',\n}\n\nDOWNLOAD_TIMEOUT = 180 # 3mins\n\nDOWNLOAD_MAXSIZE = 1024*1024*1024 # 1024m\nDOWNLOAD_WARNSIZE = 32*1024*1024 # 32m\n\nDOWNLOAD_FAIL_ON_DATALOSS = True\n\nDOWNLOADER = 'scrapy.core.downloader.Downloader'\n\nDOWNLOADER_HTTPCLIENTFACTORY = 'scrapy.core.downloader.webclient.ScrapyHTTPClientFactory'\nDOWNLOADER_CLIENTCONTEXTFACTORY = 'scrapy.core.downloader.contextfactory.ScrapyClientContextFactory'\nDOWNLOADER_CLIENT_TLS_CIPHERS = 'DEFAULT'\n# Use highest TLS/SSL protocol version supported by the platform, also allowing negotiation:\nDOWNLOADER_CLIENT_TLS_METHOD = 
'TLS'\nDOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING = False\n\nDOWNLOADER_MIDDLEWARES = {}\n\nDOWNLOADER_MIDDLEWARES_BASE = {\n # Engine side\n 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,\n 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,\n 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,\n 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,\n 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,\n 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,\n 'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,\n 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,\n 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,\n 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,\n 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,\n 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,\n 'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,\n 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,\n # Downloader side\n}\n\nDOWNLOADER_STATS = True\n\nDUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'\n\nEDITOR = 'vi'\nif sys.platform == 'win32':\n EDITOR = '%s -m idlelib.idle'\n\nEXTENSIONS = {}\n\nEXTENSIONS_BASE = {\n 'scrapy.extensions.corestats.CoreStats': 0,\n 'scrapy.extensions.telnet.TelnetConsole': 0,\n 'scrapy.extensions.memusage.MemoryUsage': 0,\n 'scrapy.extensions.memdebug.MemoryDebugger': 0,\n 'scrapy.extensions.closespider.CloseSpider': 0,\n 'scrapy.extensions.feedexport.FeedExporter': 0,\n 'scrapy.extensions.logstats.LogStats': 0,\n 'scrapy.extensions.spiderstate.SpiderState': 0,\n 'scrapy.extensions.throttle.AutoThrottle': 0,\n}\n\nFEED_TEMPDIR = None\nFEED_URI = None\nFEED_URI_PARAMS = None # a function to extend uri arguments\nFEED_FORMAT = 'jsonlines'\nFEED_STORE_EMPTY = False\nFEED_EXPORT_ENCODING = None\nFEED_EXPORT_FIELDS = None\nFEED_STORAGES = {}\nFEED_STORAGES_BASE = {\n '': 'scrapy.extensions.feedexport.FileFeedStorage',\n 'file': 'scrapy.extensions.feedexport.FileFeedStorage',\n 'stdout': 'scrapy.extensions.feedexport.StdoutFeedStorage',\n 's3': 'scrapy.extensions.feedexport.S3FeedStorage',\n 'ftp': 'scrapy.extensions.feedexport.FTPFeedStorage',\n}\nFEED_EXPORTERS = {}\nFEED_EXPORTERS_BASE = {\n 'json': 'scrapy.exporters.JsonItemExporter',\n 'jsonlines': 'scrapy.exporters.JsonLinesItemExporter',\n 'jl': 'scrapy.exporters.JsonLinesItemExporter',\n 'csv': 'scrapy.exporters.CsvItemExporter',\n 'xml': 'scrapy.exporters.XmlItemExporter',\n 'marshal': 'scrapy.exporters.MarshalItemExporter',\n 'pickle': 'scrapy.exporters.PickleItemExporter',\n}\nFEED_EXPORT_INDENT = 0\n\nFEED_STORAGE_FTP_ACTIVE = False\nFEED_STORAGE_S3_ACL = ''\n\nFILES_STORE_S3_ACL = 'private'\nFILES_STORE_GCS_ACL = ''\n\nFTP_USER = 'anonymous'\nFTP_PASSWORD = 'guest'\nFTP_PASSIVE_MODE = True\n\nHTTPCACHE_ENABLED = False\nHTTPCACHE_DIR = 'httpcache'\nHTTPCACHE_IGNORE_MISSING = False\nHTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'\nHTTPCACHE_EXPIRATION_SECS = 0\nHTTPCACHE_ALWAYS_STORE = False\nHTTPCACHE_IGNORE_HTTP_CODES = []\nHTTPCACHE_IGNORE_SCHEMES = ['file']\nHTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = []\nHTTPCACHE_DBM_MODULE = 'dbm'\nHTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'\nHTTPCACHE_GZIP = False\n\nHTTPPROXY_ENABLED = True\nHTTPPROXY_AUTH_ENCODING = 'latin-1'\n\nIMAGES_STORE_S3_ACL = 'private'\nIMAGES_STORE_GCS_ACL = ''\n\nITEM_PROCESSOR = 
'scrapy.pipelines.ItemPipelineManager'\n\nITEM_PIPELINES = {}\nITEM_PIPELINES_BASE = {}\n\nLOG_ENABLED = True\nLOG_ENCODING = 'utf-8'\nLOG_FORMATTER = 'scrapy.logformatter.LogFormatter'\nLOG_FORMAT = '%(asctime)s [%(name)s] %(levelname)s: %(message)s'\nLOG_DATEFORMAT = '%Y-%m-%d %H:%M:%S'\nLOG_STDOUT = False\nLOG_LEVEL = 'DEBUG'\nLOG_FILE = None\nLOG_SHORT_NAMES = False\n\nSCHEDULER_DEBUG = False\n\nLOGSTATS_INTERVAL = 60.0\n\nMAIL_HOST = 'localhost'\nMAIL_PORT = 25\nMAIL_FROM = 'scrapy@localhost'\nMAIL_PASS = None\nMAIL_USER = None\n\nMEMDEBUG_ENABLED = False # enable memory debugging\nMEMDEBUG_NOTIFY = [] # send memory debugging report by mail at engine shutdown\n\nMEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0\nMEMUSAGE_ENABLED = True\nMEMUSAGE_LIMIT_MB = 0\nMEMUSAGE_NOTIFY_MAIL = []\nMEMUSAGE_WARNING_MB = 0\n\nMETAREFRESH_ENABLED = True\nMETAREFRESH_IGNORE_TAGS = ['script', 'noscript']\nMETAREFRESH_MAXDELAY = 100\n\nNEWSPIDER_MODULE = ''\n\nRANDOMIZE_DOWNLOAD_DELAY = True\n\nREACTOR_THREADPOOL_MAXSIZE = 10\n\nREDIRECT_ENABLED = True\nREDIRECT_MAX_TIMES = 20 # uses Firefox default setting\nREDIRECT_PRIORITY_ADJUST = +2\n\nREFERER_ENABLED = True\nREFERRER_POLICY = 'scrapy.spidermiddlewares.referer.DefaultReferrerPolicy'\n\nRETRY_ENABLED = True\nRETRY_TIMES = 2 # initial response + 2 retries = 3 requests\nRETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408, 429]\nRETRY_PRIORITY_ADJUST = -1\n\nROBOTSTXT_OBEY = False\nROBOTSTXT_PARSER = 'scrapy.robotstxt.ProtegoRobotParser'\nROBOTSTXT_USER_AGENT = None\n\nSCHEDULER = 'scrapy.core.scheduler.Scheduler'\nSCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleLifoDiskQueue'\nSCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.LifoMemoryQueue'\nSCHEDULER_PRIORITY_QUEUE = 'scrapy.pqueues.ScrapyPriorityQueue'\n\nSCRAPER_SLOT_MAX_ACTIVE_SIZE = 5000000\n\nSPIDER_LOADER_CLASS = 'scrapy.spiderloader.SpiderLoader'\nSPIDER_LOADER_WARN_ONLY = False\n\nSPIDER_MIDDLEWARES = {}\n\nSPIDER_MIDDLEWARES_BASE = {\n # Engine side\n 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,\n 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,\n 'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,\n 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,\n 'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,\n # Spider side\n}\n\nSPIDER_MODULES = []\n\nSTATS_CLASS = 'scrapy.statscollectors.MemoryStatsCollector'\nSTATS_DUMP = True\n\nSTATSMAILER_RCPTS = []\n\nTEMPLATES_DIR = abspath(join(dirname(__file__), '..', 'templates'))\n\nURLLENGTH_LIMIT = 2083\n\nUSER_AGENT = 'Scrapy/%s (+https://scrapy.org)' % import_module('scrapy').__version__\n\nTELNETCONSOLE_ENABLED = 1\nTELNETCONSOLE_PORT = [6023, 6073]\nTELNETCONSOLE_HOST = '127.0.0.1'\nTELNETCONSOLE_USERNAME = 'scrapy'\nTELNETCONSOLE_PASSWORD = None\n\nSPIDER_CONTRACTS = {}\nSPIDER_CONTRACTS_BASE = {\n 'scrapy.contracts.default.UrlContract': 1,\n 'scrapy.contracts.default.CallbackKeywordArgumentsContract': 1,\n 'scrapy.contracts.default.ReturnsContract': 2,\n 'scrapy.contracts.default.ScrapesContract': 3,\n}\n", "path": "scrapy/settings/default_settings.py"}], "after_files": [{"content": "\"\"\"\nThis module contains the default values for all settings used by Scrapy.\n\nFor more information about these settings you can read the settings\ndocumentation in docs/topics/settings.rst\n\nScrapy developers, if you add a setting here remember to:\n\n* add it in alphabetical order\n* group similar settings without leaving blank lines\n* add its documentation to the available settings documentation\n 
(docs/topics/settings.rst)\n\n\"\"\"\n\nimport sys\nfrom importlib import import_module\nfrom os.path import join, abspath, dirname\n\nAJAXCRAWL_ENABLED = False\n\nASYNCIO_REACTOR = False\n\nAUTOTHROTTLE_ENABLED = False\nAUTOTHROTTLE_DEBUG = False\nAUTOTHROTTLE_MAX_DELAY = 60.0\nAUTOTHROTTLE_START_DELAY = 5.0\nAUTOTHROTTLE_TARGET_CONCURRENCY = 1.0\n\nBOT_NAME = 'scrapybot'\n\nCLOSESPIDER_TIMEOUT = 0\nCLOSESPIDER_PAGECOUNT = 0\nCLOSESPIDER_ITEMCOUNT = 0\nCLOSESPIDER_ERRORCOUNT = 0\n\nCOMMANDS_MODULE = ''\n\nCOMPRESSION_ENABLED = True\n\nCONCURRENT_ITEMS = 100\n\nCONCURRENT_REQUESTS = 16\nCONCURRENT_REQUESTS_PER_DOMAIN = 8\nCONCURRENT_REQUESTS_PER_IP = 0\n\nCOOKIES_ENABLED = True\nCOOKIES_DEBUG = False\n\nDEFAULT_ITEM_CLASS = 'scrapy.item.Item'\n\nDEFAULT_REQUEST_HEADERS = {\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n 'Accept-Language': 'en',\n}\n\nDEPTH_LIMIT = 0\nDEPTH_STATS_VERBOSE = False\nDEPTH_PRIORITY = 0\n\nDNSCACHE_ENABLED = True\nDNSCACHE_SIZE = 10000\nDNS_RESOLVER = 'scrapy.resolver.CachingThreadedResolver'\nDNS_TIMEOUT = 60\n\nDOWNLOAD_DELAY = 0\n\nDOWNLOAD_HANDLERS = {}\nDOWNLOAD_HANDLERS_BASE = {\n 'data': 'scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler',\n 'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',\n 'http': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',\n 'https': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',\n 's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',\n 'ftp': 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler',\n}\n\nDOWNLOAD_TIMEOUT = 180 # 3mins\n\nDOWNLOAD_MAXSIZE = 1024*1024*1024 # 1024m\nDOWNLOAD_WARNSIZE = 32*1024*1024 # 32m\n\nDOWNLOAD_FAIL_ON_DATALOSS = True\n\nDOWNLOADER = 'scrapy.core.downloader.Downloader'\n\nDOWNLOADER_HTTPCLIENTFACTORY = 'scrapy.core.downloader.webclient.ScrapyHTTPClientFactory'\nDOWNLOADER_CLIENTCONTEXTFACTORY = 'scrapy.core.downloader.contextfactory.ScrapyClientContextFactory'\nDOWNLOADER_CLIENT_TLS_CIPHERS = 'DEFAULT'\n# Use highest TLS/SSL protocol version supported by the platform, also allowing negotiation:\nDOWNLOADER_CLIENT_TLS_METHOD = 'TLS'\nDOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING = False\n\nDOWNLOADER_MIDDLEWARES = {}\n\nDOWNLOADER_MIDDLEWARES_BASE = {\n # Engine side\n 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,\n 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,\n 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,\n 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,\n 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,\n 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,\n 'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,\n 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,\n 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,\n 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,\n 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,\n 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,\n 'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,\n 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,\n # Downloader side\n}\n\nDOWNLOADER_STATS = True\n\nDUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'\n\nEDITOR = 'vi'\nif sys.platform == 'win32':\n EDITOR = '%s -m idlelib.idle'\n\nEXTENSIONS = {}\n\nEXTENSIONS_BASE = {\n 'scrapy.extensions.corestats.CoreStats': 0,\n 
'scrapy.extensions.telnet.TelnetConsole': 0,\n 'scrapy.extensions.memusage.MemoryUsage': 0,\n 'scrapy.extensions.memdebug.MemoryDebugger': 0,\n 'scrapy.extensions.closespider.CloseSpider': 0,\n 'scrapy.extensions.feedexport.FeedExporter': 0,\n 'scrapy.extensions.logstats.LogStats': 0,\n 'scrapy.extensions.spiderstate.SpiderState': 0,\n 'scrapy.extensions.throttle.AutoThrottle': 0,\n}\n\nFEED_TEMPDIR = None\nFEED_URI = None\nFEED_URI_PARAMS = None # a function to extend uri arguments\nFEED_FORMAT = 'jsonlines'\nFEED_STORE_EMPTY = False\nFEED_EXPORT_ENCODING = None\nFEED_EXPORT_FIELDS = None\nFEED_STORAGES = {}\nFEED_STORAGES_BASE = {\n '': 'scrapy.extensions.feedexport.FileFeedStorage',\n 'file': 'scrapy.extensions.feedexport.FileFeedStorage',\n 'stdout': 'scrapy.extensions.feedexport.StdoutFeedStorage',\n 's3': 'scrapy.extensions.feedexport.S3FeedStorage',\n 'ftp': 'scrapy.extensions.feedexport.FTPFeedStorage',\n}\nFEED_EXPORTERS = {}\nFEED_EXPORTERS_BASE = {\n 'json': 'scrapy.exporters.JsonItemExporter',\n 'jsonlines': 'scrapy.exporters.JsonLinesItemExporter',\n 'jl': 'scrapy.exporters.JsonLinesItemExporter',\n 'csv': 'scrapy.exporters.CsvItemExporter',\n 'xml': 'scrapy.exporters.XmlItemExporter',\n 'marshal': 'scrapy.exporters.MarshalItemExporter',\n 'pickle': 'scrapy.exporters.PickleItemExporter',\n}\nFEED_EXPORT_INDENT = 0\n\nFEED_STORAGE_FTP_ACTIVE = False\nFEED_STORAGE_S3_ACL = ''\n\nFILES_STORE_S3_ACL = 'private'\nFILES_STORE_GCS_ACL = ''\n\nFTP_USER = 'anonymous'\nFTP_PASSWORD = 'guest'\nFTP_PASSIVE_MODE = True\n\nHTTPCACHE_ENABLED = False\nHTTPCACHE_DIR = 'httpcache'\nHTTPCACHE_IGNORE_MISSING = False\nHTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'\nHTTPCACHE_EXPIRATION_SECS = 0\nHTTPCACHE_ALWAYS_STORE = False\nHTTPCACHE_IGNORE_HTTP_CODES = []\nHTTPCACHE_IGNORE_SCHEMES = ['file']\nHTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = []\nHTTPCACHE_DBM_MODULE = 'dbm'\nHTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'\nHTTPCACHE_GZIP = False\n\nHTTPPROXY_ENABLED = True\nHTTPPROXY_AUTH_ENCODING = 'latin-1'\n\nIMAGES_STORE_S3_ACL = 'private'\nIMAGES_STORE_GCS_ACL = ''\n\nITEM_PROCESSOR = 'scrapy.pipelines.ItemPipelineManager'\n\nITEM_PIPELINES = {}\nITEM_PIPELINES_BASE = {}\n\nLOG_ENABLED = True\nLOG_ENCODING = 'utf-8'\nLOG_FORMATTER = 'scrapy.logformatter.LogFormatter'\nLOG_FORMAT = '%(asctime)s [%(name)s] %(levelname)s: %(message)s'\nLOG_DATEFORMAT = '%Y-%m-%d %H:%M:%S'\nLOG_STDOUT = False\nLOG_LEVEL = 'DEBUG'\nLOG_FILE = None\nLOG_SHORT_NAMES = False\n\nSCHEDULER_DEBUG = False\n\nLOGSTATS_INTERVAL = 60.0\n\nMAIL_HOST = 'localhost'\nMAIL_PORT = 25\nMAIL_FROM = 'scrapy@localhost'\nMAIL_PASS = None\nMAIL_USER = None\n\nMEMDEBUG_ENABLED = False # enable memory debugging\nMEMDEBUG_NOTIFY = [] # send memory debugging report by mail at engine shutdown\n\nMEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0\nMEMUSAGE_ENABLED = True\nMEMUSAGE_LIMIT_MB = 0\nMEMUSAGE_NOTIFY_MAIL = []\nMEMUSAGE_WARNING_MB = 0\n\nMETAREFRESH_ENABLED = True\nMETAREFRESH_IGNORE_TAGS = []\nMETAREFRESH_MAXDELAY = 100\n\nNEWSPIDER_MODULE = ''\n\nRANDOMIZE_DOWNLOAD_DELAY = True\n\nREACTOR_THREADPOOL_MAXSIZE = 10\n\nREDIRECT_ENABLED = True\nREDIRECT_MAX_TIMES = 20 # uses Firefox default setting\nREDIRECT_PRIORITY_ADJUST = +2\n\nREFERER_ENABLED = True\nREFERRER_POLICY = 'scrapy.spidermiddlewares.referer.DefaultReferrerPolicy'\n\nRETRY_ENABLED = True\nRETRY_TIMES = 2 # initial response + 2 retries = 3 requests\nRETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408, 429]\nRETRY_PRIORITY_ADJUST = 
-1\n\nROBOTSTXT_OBEY = False\nROBOTSTXT_PARSER = 'scrapy.robotstxt.ProtegoRobotParser'\nROBOTSTXT_USER_AGENT = None\n\nSCHEDULER = 'scrapy.core.scheduler.Scheduler'\nSCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleLifoDiskQueue'\nSCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.LifoMemoryQueue'\nSCHEDULER_PRIORITY_QUEUE = 'scrapy.pqueues.ScrapyPriorityQueue'\n\nSCRAPER_SLOT_MAX_ACTIVE_SIZE = 5000000\n\nSPIDER_LOADER_CLASS = 'scrapy.spiderloader.SpiderLoader'\nSPIDER_LOADER_WARN_ONLY = False\n\nSPIDER_MIDDLEWARES = {}\n\nSPIDER_MIDDLEWARES_BASE = {\n # Engine side\n 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,\n 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,\n 'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,\n 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,\n 'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,\n # Spider side\n}\n\nSPIDER_MODULES = []\n\nSTATS_CLASS = 'scrapy.statscollectors.MemoryStatsCollector'\nSTATS_DUMP = True\n\nSTATSMAILER_RCPTS = []\n\nTEMPLATES_DIR = abspath(join(dirname(__file__), '..', 'templates'))\n\nURLLENGTH_LIMIT = 2083\n\nUSER_AGENT = 'Scrapy/%s (+https://scrapy.org)' % import_module('scrapy').__version__\n\nTELNETCONSOLE_ENABLED = 1\nTELNETCONSOLE_PORT = [6023, 6073]\nTELNETCONSOLE_HOST = '127.0.0.1'\nTELNETCONSOLE_USERNAME = 'scrapy'\nTELNETCONSOLE_PASSWORD = None\n\nSPIDER_CONTRACTS = {}\nSPIDER_CONTRACTS_BASE = {\n 'scrapy.contracts.default.UrlContract': 1,\n 'scrapy.contracts.default.CallbackKeywordArgumentsContract': 1,\n 'scrapy.contracts.default.ReturnsContract': 2,\n 'scrapy.contracts.default.ScrapesContract': 3,\n}\n", "path": "scrapy/settings/default_settings.py"}]} | 3,777 | 108 |
gh_patches_debug_17870 | rasdani/github-patches | git_diff | Flexget__Flexget-2378 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
descargas 2020 broken (again)
### Expected behaviour:
It downloads the torrent properly.
### Actual behaviour:
### Steps to reproduce:
1. Add this RSS feed: http://feeds.feedburner.com/descargas2020new
2. Run `flexget execute`.

You will get:
2019-04-28 23:00 WARNING urlrewriter series_sigo URL rewriting descargas2020 failed: Unable to locate torrent ID from url http://descargas2020.com/descargar-torrent/121574_-1556477420-55-pasos--blurayrip/
2019-04-28 23:00 ERROR entry series_sigo Failed 55 Pasos [BluRay Rip][AC3 2.0 Castellano][2019][] (None)
#### Config:
```
tasks:
series_sigo:
rss:
url: http://feeds.feedburner.com/descargas2020new
link: link
all_series: yes
seen: local
regexp:
reject:
- PCDVD
accept_all: yes
thetvdb_lookup: yes
no_entries_ok: yes
set:
filename: "{{title|pathscrub}}.torrent"
download: /home/flexget/torrent/
```
#### Log:
2019-04-28 23:00 WARNING urlrewriter series_sigo URL rewriting descargas2020 failed: Unable to locate torrent ID from url http://descargas2020.com/descargar-torrent/121574_-1556477420-55-pasos--blurayrip/
2019-04-28 23:00 ERROR entry series_sigo Failed 55 Pasos [BluRay Rip][AC3 2.0 Castellano][2019][] (None)
### Additional information:
- FlexGet version: 2.20.23
- Python version: 2.7.13
- Installation method: I don't remember
- Using daemon (yes/no): no
- OS and version: debian 9 x64
- Link to crash log:
<!---
In config and debug/crash logs, remember to redact any personal or sensitive information such as passwords, API keys, private URLs and so on.
Please verify that the following data is present before submitting your issue:
- Link to a paste service or paste above the relevant config (preferably full config, including templates if present). Please make sure the paste does not expire, if possible.
- Link to a paste service or paste above debug-level logs of the relevant task/s (use `flexget -L debug execute --tasks <Task_name>`).
- FlexGet version (use `flexget -V` to get it).
- Full Python version, for example `2.7.11` (use `python -V` to get it). Note that FlexGet is not supported for use with Python v3.0, 3.1, 3.2 or 3.6.
- Installation method (pip, git install, etc).
- Whether or not you're running FlexGet as a daemon.
- OS and version.
- Attach crash log if one was generated, in addition to the debug-level log. It can be found in the directory with your config file.
--->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/components/sites/sites/descargas2020.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3
4 import logging
5 import re
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.components.sites.urlrewriting import UrlRewritingError
10 from flexget.utils.requests import Session, TimedLimiter
11 from flexget.utils.soup import get_soup
12 from flexget.utils import requests
13
14 from flexget.entry import Entry
15 from flexget.components.sites.utils import normalize_unicode
16
17 import unicodedata
18
19 log = logging.getLogger('descargas2020')
20
21 DESCARGAS2020_TORRENT_FORMAT = 'http://descargas2020.com/download/{:0>6}.torrent'
22 REWRITABLE_REGEX = re.compile(
23 r'https?://(www.)?(descargas2020|tvsinpagar|tumejortorrent|torrentlocura|torrentrapid).com/'
24 )
25
26
27 class UrlRewriteDescargas2020(object):
28 """Descargas2020 urlrewriter and search."""
29
30 schema = {'type': 'boolean', 'default': False}
31
32 def __init__(self):
33 self._session = None
34
35 @property
36 def session(self):
37 # TODO: This is not used for all requests even ..
38 if self._session is None:
39 self._session = Session()
40 self._session.headers.update(
41 {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
42 )
43 self._session.add_domain_limiter(TimedLimiter('descargas2020.com', '2 seconds'))
44 return self._session
45
46 # urlrewriter API
47 def url_rewritable(self, task, entry):
48 url = entry['url']
49 return not url.endswith('.torrent') and REWRITABLE_REGEX.match(url)
50
51 # urlrewriter API
52 def url_rewrite(self, task, entry):
53 entry['url'] = self.parse_download_page(entry['url'], task)
54
55 @plugin.internet(log)
56 def parse_download_page(self, url, task):
57 log.verbose('Descargas2020 URL: %s', url)
58
59 try:
60 page = self.session.get(url)
61 except requests.RequestException as e:
62 raise UrlRewritingError(e)
63 try:
64 soup = get_soup(page.text)
65 except Exception as e:
66 raise UrlRewritingError(e)
67
68 torrent_id = None
69 url_format = DESCARGAS2020_TORRENT_FORMAT
70
71 torrent_id_prog = re.compile(
72 r"(?:parametros\s*=\s*\n?)\s*{\s*\n(?:\s*'\w+'\s*:.*\n)+\s*'(?:torrentID|id)'\s*:\s*'(\d+)'"
73 )
74 torrent_ids = soup.findAll(text=torrent_id_prog)
75 if torrent_ids:
76 match = torrent_id_prog.search(torrent_ids[0])
77 if match:
78 torrent_id = match.group(1)
79 if not torrent_id:
80 log.debug('torrent ID not found, searching openTorrent script')
81 match = re.search(
82 r'function openTorrent.*\n.*\{.*(\n.*)+window\.location\.href =\s*\".*\/(\d+.*)\";',
83 page.text,
84 re.MULTILINE,
85 )
86 if match:
87 torrent_id = match.group(2).rstrip('/')
88
89 if not torrent_id:
90 raise UrlRewritingError('Unable to locate torrent ID from url %s' % url)
91
92 return url_format.format(torrent_id)
93
94 def search(self, task, entry, config=None):
95 if not config:
96 log.debug('Descargas2020 disabled')
97 return set()
98 log.debug('Search Descargas2020')
99 url_search = 'http://descargas2020.com/buscar'
100 results = set()
101 for search_string in entry.get('search_strings', [entry['title']]):
102 query = normalize_unicode(search_string)
103 query = re.sub(r' \(\d\d\d\d\)$', '', query)
104 log.debug('Searching Descargas2020 %s', query)
105 query = unicodedata.normalize('NFD', query).encode('ascii', 'ignore')
106 data = {'q': query}
107 try:
108 response = task.requests.post(url_search, data=data)
109 except requests.RequestException as e:
110 log.error('Error searching Descargas2020: %s', e)
111 return results
112 content = response.content
113 soup = get_soup(content)
114 soup2 = soup.find('ul', attrs={'class': 'buscar-list'})
115 children = soup2.findAll('a', href=True)
116 for child in children:
117 entry = Entry()
118 entry['url'] = child['href']
119 entry_title = child.find('h2')
120 if entry_title is None:
121 log.debug('Ignore empty entry')
122 continue
123 entry_title = entry_title.text
124 if not entry_title:
125 continue
126 try:
127 entry_quality_lan = re.search(
128 r'.+ \[([^\]]+)\](\[[^\]]+\])+$', entry_title
129 ).group(1)
130 except AttributeError:
131 log.debug('Quality not found')
132 continue
133 entry_title = re.sub(r' \[.+]$', '', entry_title)
134 entry['title'] = entry_title + ' ' + entry_quality_lan
135 results.add(entry)
136 log.debug('Finish search Descargas2020 with %d entries', len(results))
137 return results
138
139
140 @event('plugin.register')
141 def register_plugin():
142 plugin.register(
143 UrlRewriteDescargas2020, 'descargas2020', interfaces=['urlrewriter', 'search'], api_ver=2
144 )
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flexget/components/sites/sites/descargas2020.py b/flexget/components/sites/sites/descargas2020.py
--- a/flexget/components/sites/sites/descargas2020.py
+++ b/flexget/components/sites/sites/descargas2020.py
@@ -22,7 +22,9 @@
REWRITABLE_REGEX = re.compile(
r'https?://(www.)?(descargas2020|tvsinpagar|tumejortorrent|torrentlocura|torrentrapid).com/'
)
-
+NONREWRITABLE_REGEX = re.compile(
+ r'(.*/descargar-torrent/|.*\.torrent$)'
+)
class UrlRewriteDescargas2020(object):
"""Descargas2020 urlrewriter and search."""
@@ -46,7 +48,7 @@
# urlrewriter API
def url_rewritable(self, task, entry):
url = entry['url']
- return not url.endswith('.torrent') and REWRITABLE_REGEX.match(url)
+ return REWRITABLE_REGEX.match(url) and not NONREWRITABLE_REGEX.match(url)
# urlrewriter API
def url_rewrite(self, task, entry):
| {"golden_diff": "diff --git a/flexget/components/sites/sites/descargas2020.py b/flexget/components/sites/sites/descargas2020.py\n--- a/flexget/components/sites/sites/descargas2020.py\n+++ b/flexget/components/sites/sites/descargas2020.py\n@@ -22,7 +22,9 @@\n REWRITABLE_REGEX = re.compile(\n r'https?://(www.)?(descargas2020|tvsinpagar|tumejortorrent|torrentlocura|torrentrapid).com/'\n )\n-\n+NONREWRITABLE_REGEX = re.compile(\n+ r'(.*/descargar-torrent/|.*\\.torrent$)'\n+)\n \n class UrlRewriteDescargas2020(object):\n \"\"\"Descargas2020 urlrewriter and search.\"\"\"\n@@ -46,7 +48,7 @@\n # urlrewriter API\n def url_rewritable(self, task, entry):\n url = entry['url']\n- return not url.endswith('.torrent') and REWRITABLE_REGEX.match(url)\n+ return REWRITABLE_REGEX.match(url) and not NONREWRITABLE_REGEX.match(url)\n \n # urlrewriter API\n def url_rewrite(self, task, entry):\n", "issue": "descargas 2020 broken (again)\n### Expected behaviour:\r\nit downloads torrent properly.\r\n\r\n### Actual behaviour:\r\n\r\n### Steps to reproduce:\r\nadd this rss http://feeds.feedburner.com/descargas2020new\r\nflexget execute\r\nyou will get:\r\n2019-04-28 23:00 WARNING urlrewriter series_sigo URL rewriting descargas2020 failed: Unable to locate torrent ID from url http://descargas2020.com/descargar-torrent/121574_-1556477420-55-pasos--blurayrip/\r\n2019-04-28 23:00 ERROR entry series_sigo Failed 55 Pasos [BluRay Rip][AC3 2.0 Castellano][2019][] (None)\r\n\r\n#### Config:\r\n\r\n```\r\ntasks:\r\n series_sigo:\r\n rss:\r\n url: http://feeds.feedburner.com/descargas2020new\r\n link: link\r\n all_series: yes\r\n seen: local\r\n regexp:\r\n reject:\r\n - PCDVD\r\n accept_all: yes\r\n thetvdb_lookup: yes\r\n no_entries_ok: yes\r\n set:\r\n filename: \"{{title|pathscrub}}.torrent\"\r\n download: /home/flexget/torrent/\r\n```\r\n \r\n#### Log:\r\n\r\n\r\n2019-04-28 23:00 WARNING urlrewriter series_sigo URL rewriting descargas2020 failed: Unable to locate torrent ID from url http://descargas2020.com/descargar-torrent/121574_-1556477420-55-pasos--blurayrip/\r\n2019-04-28 23:00 ERROR entry series_sigo Failed 55 Pasos [BluRay Rip][AC3 2.0 Castellano][2019][] (None)\r\n\r\n\r\n```\r\npaste log output here\r\n```\r\n</details>\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 2.20.23\r\n- Python version: 2.7.13\r\n- Installation method: i don't remember\r\n- Using daemon (yes/no): no\r\n- OS and version: debian 9 x64\r\n- Link to crash log:\r\n\r\n<!---\r\nIn config and debug/crash logs, remember to redact any personal or sensitive information such as passwords, API keys, private URLs and so on.\r\n\r\nPlease verify that the following data is present before submitting your issue:\r\n\r\n- Link to a paste service or paste above the relevant config (preferably full config, including templates if present). Please make sure the paste does not expire, if possible.\r\n- Link to a paste service or paste above debug-level logs of the relevant task/s (use `flexget -L debug execute --tasks <Task_name>`).\r\n- FlexGet version (use `flexget -V` to get it).\r\n- Full Python version, for example `2.7.11` (use `python -V` to get it). Note that FlexGet is not supported for use with Python v3.0, 3.1, 3.2 or 3.6.\r\n- Installation method (pip, git install, etc).\r\n- Whether or not you're running FlexGet as a daemon.\r\n- OS and version.\r\n- Attach crash log if one was generated, in addition to the debug-level log. 
It can be found in the directory with your config file.\r\n--->\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport re\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.components.sites.urlrewriting import UrlRewritingError\nfrom flexget.utils.requests import Session, TimedLimiter\nfrom flexget.utils.soup import get_soup\nfrom flexget.utils import requests\n\nfrom flexget.entry import Entry\nfrom flexget.components.sites.utils import normalize_unicode\n\nimport unicodedata\n\nlog = logging.getLogger('descargas2020')\n\nDESCARGAS2020_TORRENT_FORMAT = 'http://descargas2020.com/download/{:0>6}.torrent'\nREWRITABLE_REGEX = re.compile(\n r'https?://(www.)?(descargas2020|tvsinpagar|tumejortorrent|torrentlocura|torrentrapid).com/'\n)\n\n\nclass UrlRewriteDescargas2020(object):\n \"\"\"Descargas2020 urlrewriter and search.\"\"\"\n\n schema = {'type': 'boolean', 'default': False}\n\n def __init__(self):\n self._session = None\n\n @property\n def session(self):\n # TODO: This is not used for all requests even ..\n if self._session is None:\n self._session = Session()\n self._session.headers.update(\n {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}\n )\n self._session.add_domain_limiter(TimedLimiter('descargas2020.com', '2 seconds'))\n return self._session\n\n # urlrewriter API\n def url_rewritable(self, task, entry):\n url = entry['url']\n return not url.endswith('.torrent') and REWRITABLE_REGEX.match(url)\n\n # urlrewriter API\n def url_rewrite(self, task, entry):\n entry['url'] = self.parse_download_page(entry['url'], task)\n\n @plugin.internet(log)\n def parse_download_page(self, url, task):\n log.verbose('Descargas2020 URL: %s', url)\n\n try:\n page = self.session.get(url)\n except requests.RequestException as e:\n raise UrlRewritingError(e)\n try:\n soup = get_soup(page.text)\n except Exception as e:\n raise UrlRewritingError(e)\n\n torrent_id = None\n url_format = DESCARGAS2020_TORRENT_FORMAT\n\n torrent_id_prog = re.compile(\n r\"(?:parametros\\s*=\\s*\\n?)\\s*{\\s*\\n(?:\\s*'\\w+'\\s*:.*\\n)+\\s*'(?:torrentID|id)'\\s*:\\s*'(\\d+)'\"\n )\n torrent_ids = soup.findAll(text=torrent_id_prog)\n if torrent_ids:\n match = torrent_id_prog.search(torrent_ids[0])\n if match:\n torrent_id = match.group(1)\n if not torrent_id:\n log.debug('torrent ID not found, searching openTorrent script')\n match = re.search(\n r'function openTorrent.*\\n.*\\{.*(\\n.*)+window\\.location\\.href =\\s*\\\".*\\/(\\d+.*)\\\";',\n page.text,\n re.MULTILINE,\n )\n if match:\n torrent_id = match.group(2).rstrip('/')\n\n if not torrent_id:\n raise UrlRewritingError('Unable to locate torrent ID from url %s' % url)\n\n return url_format.format(torrent_id)\n\n def search(self, task, entry, config=None):\n if not config:\n log.debug('Descargas2020 disabled')\n return set()\n log.debug('Search Descargas2020')\n url_search = 'http://descargas2020.com/buscar'\n results = set()\n for search_string in entry.get('search_strings', [entry['title']]):\n query = normalize_unicode(search_string)\n query = re.sub(r' \\(\\d\\d\\d\\d\\)$', '', query)\n log.debug('Searching Descargas2020 %s', query)\n query = unicodedata.normalize('NFD', query).encode('ascii', 'ignore')\n data = {'q': query}\n try:\n response = task.requests.post(url_search, data=data)\n except requests.RequestException as e:\n log.error('Error searching Descargas2020: %s', e)\n 
return results\n content = response.content\n soup = get_soup(content)\n soup2 = soup.find('ul', attrs={'class': 'buscar-list'})\n children = soup2.findAll('a', href=True)\n for child in children:\n entry = Entry()\n entry['url'] = child['href']\n entry_title = child.find('h2')\n if entry_title is None:\n log.debug('Ignore empty entry')\n continue\n entry_title = entry_title.text\n if not entry_title:\n continue\n try:\n entry_quality_lan = re.search(\n r'.+ \\[([^\\]]+)\\](\\[[^\\]]+\\])+$', entry_title\n ).group(1)\n except AttributeError:\n log.debug('Quality not found')\n continue\n entry_title = re.sub(r' \\[.+]$', '', entry_title)\n entry['title'] = entry_title + ' ' + entry_quality_lan\n results.add(entry)\n log.debug('Finish search Descargas2020 with %d entries', len(results))\n return results\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(\n UrlRewriteDescargas2020, 'descargas2020', interfaces=['urlrewriter', 'search'], api_ver=2\n )\n", "path": "flexget/components/sites/sites/descargas2020.py"}], "after_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport re\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.components.sites.urlrewriting import UrlRewritingError\nfrom flexget.utils.requests import Session, TimedLimiter\nfrom flexget.utils.soup import get_soup\nfrom flexget.utils import requests\n\nfrom flexget.entry import Entry\nfrom flexget.components.sites.utils import normalize_unicode\n\nimport unicodedata\n\nlog = logging.getLogger('descargas2020')\n\nDESCARGAS2020_TORRENT_FORMAT = 'http://descargas2020.com/download/{:0>6}.torrent'\nREWRITABLE_REGEX = re.compile(\n r'https?://(www.)?(descargas2020|tvsinpagar|tumejortorrent|torrentlocura|torrentrapid).com/'\n)\nNONREWRITABLE_REGEX = re.compile(\n r'(.*/descargar-torrent/|.*\\.torrent$)'\n)\n\nclass UrlRewriteDescargas2020(object):\n \"\"\"Descargas2020 urlrewriter and search.\"\"\"\n\n schema = {'type': 'boolean', 'default': False}\n\n def __init__(self):\n self._session = None\n\n @property\n def session(self):\n # TODO: This is not used for all requests even ..\n if self._session is None:\n self._session = Session()\n self._session.headers.update(\n {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}\n )\n self._session.add_domain_limiter(TimedLimiter('descargas2020.com', '2 seconds'))\n return self._session\n\n # urlrewriter API\n def url_rewritable(self, task, entry):\n url = entry['url']\n return REWRITABLE_REGEX.match(url) and not NONREWRITABLE_REGEX.match(url)\n\n # urlrewriter API\n def url_rewrite(self, task, entry):\n entry['url'] = self.parse_download_page(entry['url'], task)\n\n @plugin.internet(log)\n def parse_download_page(self, url, task):\n log.verbose('Descargas2020 URL: %s', url)\n\n try:\n page = self.session.get(url)\n except requests.RequestException as e:\n raise UrlRewritingError(e)\n try:\n soup = get_soup(page.text)\n except Exception as e:\n raise UrlRewritingError(e)\n\n torrent_id = None\n url_format = DESCARGAS2020_TORRENT_FORMAT\n\n torrent_id_prog = re.compile(\n r\"(?:parametros\\s*=\\s*\\n?)\\s*{\\s*\\n(?:\\s*'\\w+'\\s*:.*\\n)+\\s*'(?:torrentID|id)'\\s*:\\s*'(\\d+)'\"\n )\n torrent_ids = soup.findAll(text=torrent_id_prog)\n if torrent_ids:\n match = torrent_id_prog.search(torrent_ids[0])\n if match:\n torrent_id = match.group(1)\n if not torrent_id:\n log.debug('torrent ID not found, 
searching openTorrent script')\n match = re.search(\n r'function openTorrent.*\\n.*\\{.*(\\n.*)+window\\.location\\.href =\\s*\\\".*\\/(\\d+.*)\\\";',\n page.text,\n re.MULTILINE,\n )\n if match:\n torrent_id = match.group(2).rstrip('/')\n\n if not torrent_id:\n raise UrlRewritingError('Unable to locate torrent ID from url %s' % url)\n\n return url_format.format(torrent_id)\n\n def search(self, task, entry, config=None):\n if not config:\n log.debug('Descargas2020 disabled')\n return set()\n log.debug('Search Descargas2020')\n url_search = 'http://descargas2020.com/buscar'\n results = set()\n for search_string in entry.get('search_strings', [entry['title']]):\n query = normalize_unicode(search_string)\n query = re.sub(r' \\(\\d\\d\\d\\d\\)$', '', query)\n log.debug('Searching Descargas2020 %s', query)\n query = unicodedata.normalize('NFD', query).encode('ascii', 'ignore')\n data = {'q': query}\n try:\n response = task.requests.post(url_search, data=data)\n except requests.RequestException as e:\n log.error('Error searching Descargas2020: %s', e)\n return results\n content = response.content\n soup = get_soup(content)\n soup2 = soup.find('ul', attrs={'class': 'buscar-list'})\n children = soup2.findAll('a', href=True)\n for child in children:\n entry = Entry()\n entry['url'] = child['href']\n entry_title = child.find('h2')\n if entry_title is None:\n log.debug('Ignore empty entry')\n continue\n entry_title = entry_title.text\n if not entry_title:\n continue\n try:\n entry_quality_lan = re.search(\n r'.+ \\[([^\\]]+)\\](\\[[^\\]]+\\])+$', entry_title\n ).group(1)\n except AttributeError:\n log.debug('Quality not found')\n continue\n entry_title = re.sub(r' \\[.+]$', '', entry_title)\n entry['title'] = entry_title + ' ' + entry_quality_lan\n results.add(entry)\n log.debug('Finish search Descargas2020 with %d entries', len(results))\n return results\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(\n UrlRewriteDescargas2020, 'descargas2020', interfaces=['urlrewriter', 'search'], api_ver=2\n )\n", "path": "flexget/components/sites/sites/descargas2020.py"}]} | 2,656 | 283 |
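A standalone sketch of the predicate this record's golden diff introduces: a URL is rewritten only when it matches the rewritable hosts and is neither a `/descargar-torrent/` page nor a direct `.torrent` link. The first sample URL is the failing one from the issue log; the second is a hypothetical listing page.

```python
import re

REWRITABLE_REGEX = re.compile(
    r'https?://(www.)?(descargas2020|tvsinpagar|tumejortorrent|torrentlocura|torrentrapid).com/'
)
NONREWRITABLE_REGEX = re.compile(r'(.*/descargar-torrent/|.*\.torrent$)')


def url_rewritable(url):
    # Patched check: the host must match, but torrent-download forms are skipped.
    return bool(REWRITABLE_REGEX.match(url) and not NONREWRITABLE_REGEX.match(url))


# URL from the issue log -> False (no longer rewritten, so the rewriter error disappears)
print(url_rewritable(
    "http://descargas2020.com/descargar-torrent/121574_-1556477420-55-pasos--blurayrip/"))
# Hypothetical listing page on the same host -> True (still rewritten)
print(url_rewritable("http://descargas2020.com/pelicula/12345/some-title/"))
```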
gh_patches_debug_2489 | rasdani/github-patches | git_diff | rucio__rucio-2776 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Account and Scope new types
Motivation
----------
For multi-vo the internal representation of scope and account will need to be different from the external representation. The translations for these should be done in a consistent way and this can be prepared beforehand.
Modification
------------
Create a new type for each of scope and account.
Convert incoming account names and scopes to these types in the API directory so that multi-vo changes are more straight forward.
If normal strings are used in core, raise an error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/rucio/vcsversion.py`
Content:
```
1
2 '''
3 This file is automatically generated; Do not edit it. :)
4 '''
5 VERSION_INFO = {
6 'final': True,
7 'version': '1.20.3',
8 'branch_nick': 'patch-0-Release__Rucio_1_20_3_preparation',
9 'revision_id': 'f05e019f7178590718bf3f1eee415cc46cb59159',
10 'revno': 8410
11 }
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/rucio/vcsversion.py b/lib/rucio/vcsversion.py
--- a/lib/rucio/vcsversion.py
+++ b/lib/rucio/vcsversion.py
@@ -4,8 +4,8 @@
'''
VERSION_INFO = {
'final': True,
- 'version': '1.20.3',
- 'branch_nick': 'patch-0-Release__Rucio_1_20_3_preparation',
- 'revision_id': 'f05e019f7178590718bf3f1eee415cc46cb59159',
- 'revno': 8410
+ 'version': '1.20.4rc1',
+ 'branch_nick': 'patch-0-Release__1_20_4rc1_preparation',
+ 'revision_id': '525812b8f83f1069d38ab78aebedb732f21e77ec',
+ 'revno': 8418
}
| {"golden_diff": "diff --git a/lib/rucio/vcsversion.py b/lib/rucio/vcsversion.py\n--- a/lib/rucio/vcsversion.py\n+++ b/lib/rucio/vcsversion.py\n@@ -4,8 +4,8 @@\n '''\n VERSION_INFO = {\n 'final': True,\n- 'version': '1.20.3',\n- 'branch_nick': 'patch-0-Release__Rucio_1_20_3_preparation',\n- 'revision_id': 'f05e019f7178590718bf3f1eee415cc46cb59159',\n- 'revno': 8410\n+ 'version': '1.20.4rc1',\n+ 'branch_nick': 'patch-0-Release__1_20_4rc1_preparation',\n+ 'revision_id': '525812b8f83f1069d38ab78aebedb732f21e77ec',\n+ 'revno': 8418\n }\n", "issue": "Account and Scope new types\nMotivation\r\n----------\r\nFor multi-vo the internal representation of scope and account will need to be different from the external representation. The translations for these should be done in a consistent way and this can be prepared beforehand.\r\n\r\n\r\nModification\r\n------------\r\nCreate a new type for each of scope and account. \r\nConvert incoming account names and scopes to these types in the API directory so that multi-vo changes are more straight forward.\r\nIf normal strings are used in core, raise an error.\r\n\n", "before_files": [{"content": "\n'''\nThis file is automatically generated; Do not edit it. :)\n'''\nVERSION_INFO = {\n 'final': True,\n 'version': '1.20.3',\n 'branch_nick': 'patch-0-Release__Rucio_1_20_3_preparation',\n 'revision_id': 'f05e019f7178590718bf3f1eee415cc46cb59159',\n 'revno': 8410\n}\n", "path": "lib/rucio/vcsversion.py"}], "after_files": [{"content": "\n'''\nThis file is automatically generated; Do not edit it. :)\n'''\nVERSION_INFO = {\n 'final': True,\n 'version': '1.20.4rc1',\n 'branch_nick': 'patch-0-Release__1_20_4rc1_preparation',\n 'revision_id': '525812b8f83f1069d38ab78aebedb732f21e77ec',\n 'revno': 8418\n}\n", "path": "lib/rucio/vcsversion.py"}]} | 497 | 254 |
gh_patches_debug_1579 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1758 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError:'db' when capturing elasticsearch span context
**Describe the bug**: ...
**To Reproduce**
1. Instrument via capture serverless
2. Perform an async_scan:
```python
async with AsyncElasticsearch(**elastic_context) as client:
async for doc in async_scan(
client=client,
index='XXX',
query=query,
_source=['field']
):
```

**Environment (please complete the following information)**
- OS: [e.g. Linux] debian:bullseye-slim (python:3.9-slim container image) x86
- Python version: 3.9
- Framework and version [e.g. Django 2.1]: N/A AWS Lambda Function
- APM Server version: 8.6.1
- Agent version: 6.14.0
**Additional context**
Add any other context about the problem here.
- Agent config options <!-- be careful not to post sensitive information -->
<details>
<summary>Click to expand</summary>
```
ELASTIC_APM_LOG_LEVEL: warning
ELASTIC_APM_LAMBDA_APM_SERVER: https://<elastic-cloud-hostname>:443
ELASTIC_APM_SECRET_TOKEN: ${env:ELASTIC_APM_SECRET_TOKEN}
ELASTIC_APM_ENVIRONMENT: ${self:provider.stage}
ELASTIC_APM_SPAN_COMPRESSION_SAME_KIND_MAX_DURATION: 50ms
```
</details>
- `requirements.txt`:
<details>
<summary>Click to expand</summary>
```toml
[tool.poetry.dependencies]
python = "^3.9"
elasticsearch = "^8"
elastic-apm = "^6.14"
tomlkit = "^0.11.1"
uvloop = "^0.17.0"
PyYAML = "^6.0"
aiobotocore = "^2.3"
urllib3 = "^1.26.10"
python-dateutil = "^2.8.2"
pyjarm = "^0.0.5"
aiohttp = "^3.8"
multidict = "^6"
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/asyncio/elasticsearch.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 import elasticapm
32 from elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule
33 from elasticapm.instrumentation.packages.elasticsearch import (
34 ElasticsearchConnectionInstrumentation,
35 ElasticsearchTransportInstrumentation,
36 )
37 from elasticapm.traces import DroppedSpan, execution_context
38
39
40 class ElasticSearchAsyncConnection(ElasticsearchConnectionInstrumentation, AsyncAbstractInstrumentedModule):
41 name = "elasticsearch_connection"
42
43 def get_instrument_list(self):
44 try:
45 import elastic_transport # noqa: F401
46
47 return [
48 ("elastic_transport._node._http_aiohttp", "AiohttpHttpNode.perform_request"),
49 ]
50 except ImportError:
51 return [
52 ("elasticsearch_async.connection", "AIOHttpConnection.perform_request"),
53 ("elasticsearch._async.http_aiohttp", "AIOHttpConnection.perform_request"),
54 ]
55
56 async def call(self, module, method, wrapped, instance, args, kwargs):
57 span = execution_context.get_span()
58 if not span or isinstance(span, DroppedSpan):
59 return await wrapped(*args, **kwargs)
60
61 self._update_context_by_request_data(span.context, instance, args, kwargs)
62
63 result = await wrapped(*args, **kwargs)
64 if hasattr(result, "meta"): # elasticsearch-py 8.x+
65 status_code = result.meta.status
66 else:
67 status_code = result[0]
68
69 span.context["http"] = {"status_code": status_code}
70
71 return result
72
73
74 class ElasticsearchAsyncTransportInstrumentation(
75 ElasticsearchTransportInstrumentation, AsyncAbstractInstrumentedModule
76 ):
77 name = "elasticsearch_connection"
78
79 instrument_list = [
80 ("elasticsearch._async.transport", "AsyncTransport.perform_request"),
81 ]
82
83 def get_instrument_list(self):
84 try:
85 import elastic_transport # noqa: F401
86
87 return [
88 ("elastic_transport", "AsyncTransport.perform_request"),
89 ]
90 except ImportError:
91 return [
92 ("elasticsearch._async.transport", "AsyncTransport.perform_request"),
93 ]
94
95 async def call(self, module, method, wrapped, instance, args, kwargs):
96 async with elasticapm.async_capture_span(
97 self._get_signature(args, kwargs),
98 span_type="db",
99 span_subtype="elasticsearch",
100 span_action="query",
101 extra={},
102 skip_frames=2,
103 leaf=True,
104 ) as span:
105 result_data = await wrapped(*args, **kwargs)
106
107 hits = self._get_hits(result_data)
108 if hits:
109 span.context["db"]["rows_affected"] = hits
110
111 return result_data
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/instrumentation/packages/asyncio/elasticsearch.py b/elasticapm/instrumentation/packages/asyncio/elasticsearch.py
--- a/elasticapm/instrumentation/packages/asyncio/elasticsearch.py
+++ b/elasticapm/instrumentation/packages/asyncio/elasticsearch.py
@@ -106,6 +106,6 @@
hits = self._get_hits(result_data)
if hits:
- span.context["db"]["rows_affected"] = hits
+ span.update_context("db", {"rows_affected": hits})
return result_data
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/asyncio/elasticsearch.py b/elasticapm/instrumentation/packages/asyncio/elasticsearch.py\n--- a/elasticapm/instrumentation/packages/asyncio/elasticsearch.py\n+++ b/elasticapm/instrumentation/packages/asyncio/elasticsearch.py\n@@ -106,6 +106,6 @@\n \n hits = self._get_hits(result_data)\n if hits:\n- span.context[\"db\"][\"rows_affected\"] = hits\n+ span.update_context(\"db\", {\"rows_affected\": hits})\n \n return result_data\n", "issue": "KeyError:'db' when capturing elasticsearch span context\n**Describe the bug**: ...\r\n\r\n**To Reproduce**\r\n\r\n1. Instrument via capture serverless\r\n2. Perform an async_scan:\r\n```python\r\nasync with AsyncElasticsearch(**elastic_context) as client:\r\n async for doc in async_scan(\r\n client=client,\r\n index='XXX',\r\n query=query,\r\n _source=['field']\r\n ):\r\n```\r\n\r\n\r\n\r\n**Environment (please complete the following information)**\r\n- OS: [e.g. Linux] debian:bullseye-slim (python:3.9-slim container image) x86\r\n- Python version: 3.9\r\n- Framework and version [e.g. Django 2.1]: N/A AWS Lambda Function\r\n- APM Server version: 8.6.1\r\n- Agent version: 6.14.0\r\n\r\n\r\n**Additional context**\r\n\r\nAdd any other context about the problem here.\r\n\r\n- Agent config options <!-- be careful not to post sensitive information -->\r\n <details>\r\n <summary>Click to expand</summary>\r\n\r\n ```\r\n ELASTIC_APM_LOG_LEVEL: warning\r\n ELASTIC_APM_LAMBDA_APM_SERVER: https://<elastic-cloud-hostname>:443\r\n ELASTIC_APM_SECRET_TOKEN: ${env:ELASTIC_APM_SECRET_TOKEN}\r\n ELASTIC_APM_ENVIRONMENT: ${self:provider.stage}\r\n ELASTIC_APM_SPAN_COMPRESSION_SAME_KIND_MAX_DURATION: 50ms\r\n ```\r\n </details>\r\n- `requirements.txt`:\r\n <details>\r\n <summary>Click to expand</summary>\r\n\r\n ```toml\r\n [tool.poetry.dependencies]\r\n python = \"^3.9\"\r\n elasticsearch = \"^8\"\r\n elastic-apm = \"^6.14\"\r\n tomlkit = \"^0.11.1\"\r\n uvloop = \"^0.17.0\"\r\n PyYAML = \"^6.0\"\r\n aiobotocore = \"^2.3\"\r\n urllib3 = \"^1.26.10\"\r\n python-dateutil = \"^2.8.2\"\r\n pyjarm = \"^0.0.5\"\r\n aiohttp = \"^3.8\"\r\n multidict = \"^6\"\r\n ```\r\n </details>\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport elasticapm\nfrom elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule\nfrom elasticapm.instrumentation.packages.elasticsearch import (\n ElasticsearchConnectionInstrumentation,\n ElasticsearchTransportInstrumentation,\n)\nfrom elasticapm.traces import DroppedSpan, execution_context\n\n\nclass ElasticSearchAsyncConnection(ElasticsearchConnectionInstrumentation, AsyncAbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport._node._http_aiohttp\", \"AiohttpHttpNode.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch_async.connection\", \"AIOHttpConnection.perform_request\"),\n (\"elasticsearch._async.http_aiohttp\", \"AIOHttpConnection.perform_request\"),\n ]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if not span or isinstance(span, DroppedSpan):\n return await wrapped(*args, **kwargs)\n\n self._update_context_by_request_data(span.context, instance, args, kwargs)\n\n result = await wrapped(*args, **kwargs)\n if hasattr(result, \"meta\"): # elasticsearch-py 8.x+\n status_code = result.meta.status\n else:\n status_code = result[0]\n\n span.context[\"http\"] = {\"status_code\": status_code}\n\n return result\n\n\nclass ElasticsearchAsyncTransportInstrumentation(\n ElasticsearchTransportInstrumentation, AsyncAbstractInstrumentedModule\n):\n name = \"elasticsearch_connection\"\n\n instrument_list = [\n (\"elasticsearch._async.transport\", \"AsyncTransport.perform_request\"),\n ]\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport\", \"AsyncTransport.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch._async.transport\", \"AsyncTransport.perform_request\"),\n ]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n async with elasticapm.async_capture_span(\n self._get_signature(args, kwargs),\n span_type=\"db\",\n span_subtype=\"elasticsearch\",\n span_action=\"query\",\n extra={},\n skip_frames=2,\n leaf=True,\n ) as span:\n result_data = await wrapped(*args, **kwargs)\n\n hits = self._get_hits(result_data)\n if hits:\n span.context[\"db\"][\"rows_affected\"] = hits\n\n return result_data\n", "path": "elasticapm/instrumentation/packages/asyncio/elasticsearch.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in 
the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport elasticapm\nfrom elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule\nfrom elasticapm.instrumentation.packages.elasticsearch import (\n ElasticsearchConnectionInstrumentation,\n ElasticsearchTransportInstrumentation,\n)\nfrom elasticapm.traces import DroppedSpan, execution_context\n\n\nclass ElasticSearchAsyncConnection(ElasticsearchConnectionInstrumentation, AsyncAbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport._node._http_aiohttp\", \"AiohttpHttpNode.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch_async.connection\", \"AIOHttpConnection.perform_request\"),\n (\"elasticsearch._async.http_aiohttp\", \"AIOHttpConnection.perform_request\"),\n ]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if not span or isinstance(span, DroppedSpan):\n return await wrapped(*args, **kwargs)\n\n self._update_context_by_request_data(span.context, instance, args, kwargs)\n\n result = await wrapped(*args, **kwargs)\n if hasattr(result, \"meta\"): # elasticsearch-py 8.x+\n status_code = result.meta.status\n else:\n status_code = result[0]\n\n span.context[\"http\"] = {\"status_code\": status_code}\n\n return result\n\n\nclass ElasticsearchAsyncTransportInstrumentation(\n ElasticsearchTransportInstrumentation, AsyncAbstractInstrumentedModule\n):\n name = \"elasticsearch_connection\"\n\n instrument_list = [\n (\"elasticsearch._async.transport\", \"AsyncTransport.perform_request\"),\n ]\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport\", \"AsyncTransport.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch._async.transport\", \"AsyncTransport.perform_request\"),\n ]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n async with elasticapm.async_capture_span(\n self._get_signature(args, kwargs),\n span_type=\"db\",\n span_subtype=\"elasticsearch\",\n span_action=\"query\",\n extra={},\n skip_frames=2,\n leaf=True,\n ) as span:\n result_data = await wrapped(*args, **kwargs)\n\n hits = self._get_hits(result_data)\n if hits:\n span.update_context(\"db\", {\"rows_affected\": hits})\n\n return result_data\n", "path": "elasticapm/instrumentation/packages/asyncio/elasticsearch.py"}]} | 
1,947 | 132 |
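The traceback in this record comes from indexing `span.context["db"]` before any `db` entry exists, and the golden diff replaces the item assignment with `span.update_context("db", {"rows_affected": hits})`. The snippet below reproduces the failure mode and the fix with a stand-in class; `update_context` here is a minimal analogue, not the agent's real implementation.

```python
class Span:
    """Stand-in for the APM span; only the context handling is modelled."""

    def __init__(self):
        self.context = {}

    def update_context(self, key, data):
        # Create the sub-dict on demand instead of assuming it exists.
        self.context.setdefault(key, {}).update(data)


span = Span()
hits = 42

try:
    span.context["db"]["rows_affected"] = hits      # original code path
except KeyError as exc:
    print("KeyError as in the issue:", exc)

span.update_context("db", {"rows_affected": hits})  # patched code path
print(span.context)
```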
gh_patches_debug_6983 | rasdani/github-patches | git_diff | frappe__frappe-6179 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"StartTLS is not supported"-Bug in LDAP integrations
`self` is used in the method `authenticate_ldap_user` which results in the Python exception:
```
NameError: global name 'self' is not defined
```
https://github.com/frappe/frappe/blob/6811f0345e35ea521942db4b29425dd71e403ab8/frappe/integrations/doctype/ldap_settings/ldap_settings.py#L89
Since all exceptions are caught regardless their type, a misleading "StartTLS is not supported" error is thrown:
https://github.com/frappe/frappe/blob/6811f0345e35ea521942db4b29425dd71e403ab8/frappe/integrations/doctype/ldap_settings/ldap_settings.py#L95
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/integrations/doctype/ldap_settings/ldap_settings.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2015, Frappe Technologies and contributors
3 # For license information, please see license.txt
4
5 from __future__ import unicode_literals
6 import frappe
7 from frappe import _
8 from frappe.utils import cstr
9 from frappe.model.document import Document
10
11 class LDAPSettings(Document):
12 def validate(self):
13 if not self.flags.ignore_mandatory:
14 self.validate_ldap_credentails()
15
16 def validate_ldap_credentails(self):
17 try:
18 import ldap
19 conn = ldap.initialize(self.ldap_server_url)
20 try:
21 if self.ssl_tls_mode == 'StartTLS':
22 conn.set_option(ldap.OPT_X_TLS_DEMAND, True)
23 if self.require_trusted_certificate == 'Yes':
24 conn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)
25 conn.start_tls_s()
26 except:
27 frappe.throw(_("StartTLS is not supported"))
28
29 conn.simple_bind_s(self.base_dn, self.get_password(raise_exception=False))
30 except ImportError:
31 msg = """
32 <div>
33 {{_("Seems ldap is not installed on system.<br>Guidelines to install ldap dependancies and python package")}},
34 <a href="https://discuss.erpnext.com/t/frappe-v-7-1-beta-ldap-dependancies/15841" target="_blank">{{_("Click here")}}</a>,
35 </div>
36 """
37 frappe.throw(msg, title=_("LDAP Not Installed"))
38
39 except ldap.LDAPError:
40 conn.unbind_s()
41 frappe.throw(_("Incorrect UserId or Password"))
42
43 def get_ldap_settings():
44 try:
45 settings = frappe.get_doc("LDAP Settings")
46
47 settings.update({
48 "method": "frappe.integrations.doctype.ldap_settings.ldap_settings.login"
49 })
50 return settings
51 except Exception:
52 # this will return blank settings
53 return frappe._dict()
54
55 @frappe.whitelist(allow_guest=True)
56 def login():
57 #### LDAP LOGIN LOGIC #####
58 args = frappe.form_dict
59 user = authenticate_ldap_user(frappe.as_unicode(args.usr), frappe.as_unicode(args.pwd))
60
61 frappe.local.login_manager.user = user.name
62 frappe.local.login_manager.post_login()
63
64 # because of a GET request!
65 frappe.db.commit()
66
67 def authenticate_ldap_user(user=None, password=None):
68 dn = None
69 params = {}
70 settings = get_ldap_settings()
71
72 try:
73 import ldap
74 except:
75 msg = """
76 <div>
77 {{_("Seems ldap is not installed on system.")}}<br>
78 <a href"https://discuss.erpnext.com/t/frappe-v-7-1-beta-ldap-dependancies/15841">{{_("Click here")}}</a>,
79 {{_("Guidelines to install ldap dependancies and python")}}
80 </div>
81 """
82 frappe.throw(msg, title=_("LDAP Not Installed"))
83
84 conn = ldap.initialize(settings.ldap_server_url)
85
86 try:
87 try:
88 # set TLS settings for secure connection
89 if self.ssl_tls_mode == 'StartTLS':
90 conn.set_option(ldap.OPT_X_TLS_DEMAND, True)
91 if self.require_trusted_certificate == 'Yes':
92 conn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)
93 conn.start_tls_s()
94 except:
95 frappe.throw(_("StartTLS is not supported"))
96
97 # simple_bind_s is synchronous binding to server, it takes two param DN and password
98 conn.simple_bind_s(settings.base_dn, settings.get_password(raise_exception=False))
99
100 #search for surnames beginning with a
101 #available options for how deep a search you want.
102 #LDAP_SCOPE_BASE, LDAP_SCOPE_ONELEVEL,LDAP_SCOPE_SUBTREE,
103 result = conn.search_s(settings.organizational_unit, ldap.SCOPE_SUBTREE,
104 settings.ldap_search_string.format(user))
105
106 for dn, r in result:
107 dn = cstr(dn)
108 params["email"] = cstr(r[settings.ldap_email_field][0])
109 params["username"] = cstr(r[settings.ldap_username_field][0])
110 params["first_name"] = cstr(r[settings.ldap_first_name_field][0])
111
112 if dn:
113 conn.simple_bind_s(dn, frappe.as_unicode(password))
114 return create_user(params)
115 else:
116 frappe.throw(_("Not a valid LDAP user"))
117
118 except ldap.LDAPError:
119 conn.unbind_s()
120 frappe.throw(_("Incorrect UserId or Password"))
121
122 def create_user(params):
123 if frappe.db.exists("User", params["email"]):
124 return frappe.get_doc("User", params["email"])
125
126 else:
127 params.update({
128 "doctype": "User",
129 "send_welcome_email": 0,
130 "language": "",
131 "user_type": "System User",
132 "roles": [{
133 "role": _("Blogger")
134 }]
135 })
136
137 user = frappe.get_doc(params).insert(ignore_permissions=True)
138 frappe.db.commit()
139
140 return user
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/integrations/doctype/ldap_settings/ldap_settings.py b/frappe/integrations/doctype/ldap_settings/ldap_settings.py
--- a/frappe/integrations/doctype/ldap_settings/ldap_settings.py
+++ b/frappe/integrations/doctype/ldap_settings/ldap_settings.py
@@ -86,7 +86,7 @@
try:
try:
# set TLS settings for secure connection
- if self.ssl_tls_mode == 'StartTLS':
+ if settings.ssl_tls_mode == 'StartTLS':
conn.set_option(ldap.OPT_X_TLS_DEMAND, True)
if self.require_trusted_certificate == 'Yes':
conn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)
| {"golden_diff": "diff --git a/frappe/integrations/doctype/ldap_settings/ldap_settings.py b/frappe/integrations/doctype/ldap_settings/ldap_settings.py\n--- a/frappe/integrations/doctype/ldap_settings/ldap_settings.py\n+++ b/frappe/integrations/doctype/ldap_settings/ldap_settings.py\n@@ -86,7 +86,7 @@\n \ttry:\n \t\ttry:\n \t\t\t# set TLS settings for secure connection\n-\t\t\tif self.ssl_tls_mode == 'StartTLS':\n+\t\t\tif settings.ssl_tls_mode == 'StartTLS':\n \t\t\t\tconn.set_option(ldap.OPT_X_TLS_DEMAND, True)\n \t\t\t\tif self.require_trusted_certificate == 'Yes':\n \t\t\t\t\tconn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)\n", "issue": "\"StartTLS is not supported\"-Bug in LDAP integrations\n`self` is used in the method `authenticate_ldap_user` which results in the Python exception:\r\n```\r\nNameError: global name 'self' is not defined\r\n```\r\nhttps://github.com/frappe/frappe/blob/6811f0345e35ea521942db4b29425dd71e403ab8/frappe/integrations/doctype/ldap_settings/ldap_settings.py#L89\r\n\r\nSince all exceptions are caught regardless their type, a misleading \"StartTLS is not supported\" error is thrown:\r\nhttps://github.com/frappe/frappe/blob/6811f0345e35ea521942db4b29425dd71e403ab8/frappe/integrations/doctype/ldap_settings/ldap_settings.py#L95\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015, Frappe Technologies and contributors\n# For license information, please see license.txt\n\nfrom __future__ import unicode_literals\nimport frappe\nfrom frappe import _\nfrom frappe.utils import cstr\nfrom frappe.model.document import Document\n\nclass LDAPSettings(Document):\n\tdef validate(self):\n\t\tif not self.flags.ignore_mandatory:\n\t\t\tself.validate_ldap_credentails()\n\n\tdef validate_ldap_credentails(self):\n\t\ttry:\n\t\t\timport ldap\n\t\t\tconn = ldap.initialize(self.ldap_server_url)\n\t\t\ttry:\n\t\t\t\tif self.ssl_tls_mode == 'StartTLS':\n\t\t\t\t\tconn.set_option(ldap.OPT_X_TLS_DEMAND, True)\n\t\t\t\t\tif self.require_trusted_certificate == 'Yes':\n\t\t\t\t\t\tconn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)\n\t\t\t\t\tconn.start_tls_s()\n\t\t\texcept:\n\t\t\t\tfrappe.throw(_(\"StartTLS is not supported\"))\n\n\t\t\tconn.simple_bind_s(self.base_dn, self.get_password(raise_exception=False))\n\t\texcept ImportError:\n\t\t\tmsg = \"\"\"\n\t\t\t\t<div>\n\t\t\t\t\t{{_(\"Seems ldap is not installed on system.<br>Guidelines to install ldap dependancies and python package\")}},\n\t\t\t\t\t<a href=\"https://discuss.erpnext.com/t/frappe-v-7-1-beta-ldap-dependancies/15841\" target=\"_blank\">{{_(\"Click here\")}}</a>,\n\t\t\t\t</div>\n\t\t\t\"\"\"\n\t\t\tfrappe.throw(msg, title=_(\"LDAP Not Installed\"))\n\n\t\texcept ldap.LDAPError:\n\t\t\tconn.unbind_s()\n\t\t\tfrappe.throw(_(\"Incorrect UserId or Password\"))\n\ndef get_ldap_settings():\n\ttry:\n\t\tsettings = frappe.get_doc(\"LDAP Settings\")\n\n\t\tsettings.update({\n\t\t\t\"method\": \"frappe.integrations.doctype.ldap_settings.ldap_settings.login\"\n\t\t})\n\t\treturn settings\n\texcept Exception:\n\t\t# this will return blank settings\n\t\treturn frappe._dict()\n\[email protected](allow_guest=True)\ndef login():\n\t#### LDAP LOGIN LOGIC #####\n\targs = frappe.form_dict\n\tuser = authenticate_ldap_user(frappe.as_unicode(args.usr), frappe.as_unicode(args.pwd))\n\n\tfrappe.local.login_manager.user = user.name\n\tfrappe.local.login_manager.post_login()\n\n\t# because of a GET request!\n\tfrappe.db.commit()\n\ndef authenticate_ldap_user(user=None, 
password=None):\n\tdn = None\n\tparams = {}\n\tsettings = get_ldap_settings()\n\n\ttry:\n\t\timport ldap\n\texcept:\n\t\tmsg = \"\"\"\n\t\t\t<div>\n\t\t\t\t{{_(\"Seems ldap is not installed on system.\")}}<br>\n\t\t\t\t<a href\"https://discuss.erpnext.com/t/frappe-v-7-1-beta-ldap-dependancies/15841\">{{_(\"Click here\")}}</a>,\n\t\t\t\t\t{{_(\"Guidelines to install ldap dependancies and python\")}}\n\t\t\t</div>\n\t\t\"\"\"\n\t\tfrappe.throw(msg, title=_(\"LDAP Not Installed\"))\n\n\tconn = ldap.initialize(settings.ldap_server_url)\n\n\ttry:\n\t\ttry:\n\t\t\t# set TLS settings for secure connection\n\t\t\tif self.ssl_tls_mode == 'StartTLS':\n\t\t\t\tconn.set_option(ldap.OPT_X_TLS_DEMAND, True)\n\t\t\t\tif self.require_trusted_certificate == 'Yes':\n\t\t\t\t\tconn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)\n\t\t\t\tconn.start_tls_s()\n\t\texcept:\n\t\t\tfrappe.throw(_(\"StartTLS is not supported\"))\n\n\t\t# simple_bind_s is synchronous binding to server, it takes two param DN and password\n\t\tconn.simple_bind_s(settings.base_dn, settings.get_password(raise_exception=False))\n\n\t\t#search for surnames beginning with a\n\t\t#available options for how deep a search you want.\n\t\t#LDAP_SCOPE_BASE, LDAP_SCOPE_ONELEVEL,LDAP_SCOPE_SUBTREE,\n\t\tresult = conn.search_s(settings.organizational_unit, ldap.SCOPE_SUBTREE,\n\t\t\tsettings.ldap_search_string.format(user))\n\n\t\tfor dn, r in result:\n\t\t\tdn = cstr(dn)\n\t\t\tparams[\"email\"] = cstr(r[settings.ldap_email_field][0])\n\t\t\tparams[\"username\"] = cstr(r[settings.ldap_username_field][0])\n\t\t\tparams[\"first_name\"] = cstr(r[settings.ldap_first_name_field][0])\n\n\t\tif dn:\n\t\t\tconn.simple_bind_s(dn, frappe.as_unicode(password))\n\t\t\treturn create_user(params)\n\t\telse:\n\t\t\tfrappe.throw(_(\"Not a valid LDAP user\"))\n\n\texcept ldap.LDAPError:\n\t\tconn.unbind_s()\n\t\tfrappe.throw(_(\"Incorrect UserId or Password\"))\n\ndef create_user(params):\n\tif frappe.db.exists(\"User\", params[\"email\"]):\n\t\treturn frappe.get_doc(\"User\", params[\"email\"])\n\n\telse:\n\t\tparams.update({\n\t\t\t\"doctype\": \"User\",\n\t\t\t\"send_welcome_email\": 0,\n\t\t\t\"language\": \"\",\n\t\t\t\"user_type\": \"System User\",\n\t\t\t\"roles\": [{\n\t\t\t\t\"role\": _(\"Blogger\")\n\t\t\t}]\n\t\t})\n\n\t\tuser = frappe.get_doc(params).insert(ignore_permissions=True)\n\t\tfrappe.db.commit()\n\n\t\treturn user\n", "path": "frappe/integrations/doctype/ldap_settings/ldap_settings.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015, Frappe Technologies and contributors\n# For license information, please see license.txt\n\nfrom __future__ import unicode_literals\nimport frappe\nfrom frappe import _\nfrom frappe.utils import cstr\nfrom frappe.model.document import Document\n\nclass LDAPSettings(Document):\n\tdef validate(self):\n\t\tif not self.flags.ignore_mandatory:\n\t\t\tself.validate_ldap_credentails()\n\n\tdef validate_ldap_credentails(self):\n\t\ttry:\n\t\t\timport ldap\n\t\t\tconn = ldap.initialize(self.ldap_server_url)\n\t\t\ttry:\n\t\t\t\tif self.ssl_tls_mode == 'StartTLS':\n\t\t\t\t\tconn.set_option(ldap.OPT_X_TLS_DEMAND, True)\n\t\t\t\t\tif self.require_trusted_certificate == 'Yes':\n\t\t\t\t\t\tconn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)\n\t\t\t\t\tconn.start_tls_s()\n\t\t\texcept:\n\t\t\t\tfrappe.throw(_(\"StartTLS is not supported\"))\n\n\t\t\tconn.simple_bind_s(self.base_dn, self.get_password(raise_exception=False))\n\t\texcept ImportError:\n\t\t\tmsg = 
\"\"\"\n\t\t\t\t<div>\n\t\t\t\t\t{{_(\"Seems ldap is not installed on system.<br>Guidelines to install ldap dependancies and python package\")}},\n\t\t\t\t\t<a href=\"https://discuss.erpnext.com/t/frappe-v-7-1-beta-ldap-dependancies/15841\" target=\"_blank\">{{_(\"Click here\")}}</a>,\n\t\t\t\t</div>\n\t\t\t\"\"\"\n\t\t\tfrappe.throw(msg, title=_(\"LDAP Not Installed\"))\n\n\t\texcept ldap.LDAPError:\n\t\t\tconn.unbind_s()\n\t\t\tfrappe.throw(_(\"Incorrect UserId or Password\"))\n\ndef get_ldap_settings():\n\ttry:\n\t\tsettings = frappe.get_doc(\"LDAP Settings\")\n\n\t\tsettings.update({\n\t\t\t\"method\": \"frappe.integrations.doctype.ldap_settings.ldap_settings.login\"\n\t\t})\n\t\treturn settings\n\texcept Exception:\n\t\t# this will return blank settings\n\t\treturn frappe._dict()\n\[email protected](allow_guest=True)\ndef login():\n\t#### LDAP LOGIN LOGIC #####\n\targs = frappe.form_dict\n\tuser = authenticate_ldap_user(frappe.as_unicode(args.usr), frappe.as_unicode(args.pwd))\n\n\tfrappe.local.login_manager.user = user.name\n\tfrappe.local.login_manager.post_login()\n\n\t# because of a GET request!\n\tfrappe.db.commit()\n\ndef authenticate_ldap_user(user=None, password=None):\n\tdn = None\n\tparams = {}\n\tsettings = get_ldap_settings()\n\n\ttry:\n\t\timport ldap\n\texcept:\n\t\tmsg = \"\"\"\n\t\t\t<div>\n\t\t\t\t{{_(\"Seems ldap is not installed on system.\")}}<br>\n\t\t\t\t<a href\"https://discuss.erpnext.com/t/frappe-v-7-1-beta-ldap-dependancies/15841\">{{_(\"Click here\")}}</a>,\n\t\t\t\t\t{{_(\"Guidelines to install ldap dependancies and python\")}}\n\t\t\t</div>\n\t\t\"\"\"\n\t\tfrappe.throw(msg, title=_(\"LDAP Not Installed\"))\n\n\tconn = ldap.initialize(settings.ldap_server_url)\n\n\ttry:\n\t\ttry:\n\t\t\t# set TLS settings for secure connection\n\t\t\tif settings.ssl_tls_mode == 'StartTLS':\n\t\t\t\tconn.set_option(ldap.OPT_X_TLS_DEMAND, True)\n\t\t\t\tif self.require_trusted_certificate == 'Yes':\n\t\t\t\t\tconn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)\n\t\t\t\tconn.start_tls_s()\n\t\texcept:\n\t\t\tfrappe.throw(_(\"StartTLS is not supported\"))\n\n\t\t# simple_bind_s is synchronous binding to server, it takes two param DN and password\n\t\tconn.simple_bind_s(settings.base_dn, settings.get_password(raise_exception=False))\n\n\t\t#search for surnames beginning with a\n\t\t#available options for how deep a search you want.\n\t\t#LDAP_SCOPE_BASE, LDAP_SCOPE_ONELEVEL,LDAP_SCOPE_SUBTREE,\n\t\tresult = conn.search_s(settings.organizational_unit, ldap.SCOPE_SUBTREE,\n\t\t\tsettings.ldap_search_string.format(user))\n\n\t\tfor dn, r in result:\n\t\t\tdn = cstr(dn)\n\t\t\tparams[\"email\"] = cstr(r[settings.ldap_email_field][0])\n\t\t\tparams[\"username\"] = cstr(r[settings.ldap_username_field][0])\n\t\t\tparams[\"first_name\"] = cstr(r[settings.ldap_first_name_field][0])\n\n\t\tif dn:\n\t\t\tconn.simple_bind_s(dn, frappe.as_unicode(password))\n\t\t\treturn create_user(params)\n\t\telse:\n\t\t\tfrappe.throw(_(\"Not a valid LDAP user\"))\n\n\texcept ldap.LDAPError:\n\t\tconn.unbind_s()\n\t\tfrappe.throw(_(\"Incorrect UserId or Password\"))\n\ndef create_user(params):\n\tif frappe.db.exists(\"User\", params[\"email\"]):\n\t\treturn frappe.get_doc(\"User\", params[\"email\"])\n\n\telse:\n\t\tparams.update({\n\t\t\t\"doctype\": \"User\",\n\t\t\t\"send_welcome_email\": 0,\n\t\t\t\"language\": \"\",\n\t\t\t\"user_type\": \"System User\",\n\t\t\t\"roles\": [{\n\t\t\t\t\"role\": _(\"Blogger\")\n\t\t\t}]\n\t\t})\n\n\t\tuser = 
frappe.get_doc(params).insert(ignore_permissions=True)\n\t\tfrappe.db.commit()\n\n\t\treturn user\n", "path": "frappe/integrations/doctype/ldap_settings/ldap_settings.py"}]} | 1,929 | 167 |
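The `NameError` behind this record is a scoping bug: `authenticate_ldap_user` is a module-level function, so `self` is undefined, and the surrounding bare `except:` converts the failure into the misleading "StartTLS is not supported" message; the golden diff reads the option from `settings` instead. A minimal reproduction of how the broad handler hides the real error (the names below are illustrative):

```python
def connect(settings):
    try:
        # `self` does not exist in a module-level function, so this raises
        # NameError rather than any TLS-related failure.
        if self.ssl_tls_mode == "StartTLS":
            pass
    except Exception as exc:  # the original code used a bare `except:`
        print("reported as 'StartTLS is not supported'; real cause:", repr(exc))

    # Patched shape: take the option from the settings object.
    if settings["ssl_tls_mode"] == "StartTLS":
        print("would negotiate StartTLS here")


connect({"ssl_tls_mode": "StartTLS"})
```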
gh_patches_debug_36352 | rasdani/github-patches | git_diff | pypa__virtualenv-1579 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Long env path names cause weird errors after activating virtual environment
After creating a virtualenv in the path
`/home/harald/git-repos/sandbox/stackoverflow/41454203-how-do-you-import-the-library-being-tested-when-running-a-nosetest/env`
I could not run the env's pip,
```
nosetest/env$ ./bin/pip
bash: ./bin/pip: /home/harald/git-repos/sandbox/stackoverflow/41454203-how-do-you-import-the-l: bad interpreter: Permission denied
```
and running just pip gives me the system's pip, with all of its packages.
--- END ISSUE ---
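For context on the failure mode: the interpreter path in the `bad interpreter` error is visibly cut short, which is what happens when a plain `#!/long/path/to/env/bin/python` shebang exceeds the operating system's length limit for interpreter lines. The fix below sidesteps shebang scripts by generating every console script through `distlib`'s `ScriptMaker`, which (as far as distlib's behaviour goes) falls back to a `/bin/sh` exec wrapper when the interpreter path is too long. A rough sketch of that approach — the helper name and the `pip = pip._internal:main` entry-point specification are illustrative, not taken from the issue:
```python
from pathlib import Path

from distlib.scripts import ScriptMaker  # the same helper the patch below switches to


def write_console_script(python_exe, target_dir, specification="pip = pip._internal:main"):
    """Generate a console script that tolerates very long interpreter paths."""
    maker = ScriptMaker(None, str(target_dir))  # no source dir is needed for entry points
    maker.clobber = True                        # overwrite an existing script
    maker.variants = {"", "X", "X.Y"}           # e.g. pip, pip3, pip3.7
    maker.set_mode = True                       # mark the result executable
    maker.executable = str(python_exe)          # interpreter the script should run under
    return [Path(p) for p in maker.make(specification)]
```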
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/virtualenv/seed/via_app_data/pip_install/base.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2
3 import logging
4 import os
5 import re
6 import shutil
7 import zipfile
8 from abc import ABCMeta, abstractmethod
9 from tempfile import mkdtemp
10 from textwrap import dedent
11
12 import six
13 from six import PY3
14
15 from virtualenv.info import IS_WIN
16 from virtualenv.util import ConfigParser
17 from virtualenv.util.path import Path, make_exe
18
19
20 @six.add_metaclass(ABCMeta)
21 class PipInstall(object):
22 def __init__(self, wheel, creator, image_folder):
23 self._wheel = wheel
24 self._creator = creator
25 self._image_dir = image_folder
26 self._extracted = False
27 self.__dist_info = None
28 self._console_entry_points = None
29
30 @abstractmethod
31 def _sync(self, src, dst):
32 raise NotImplementedError
33
34 def install(self):
35 self._extracted = True
36 # sync image
37 for filename in self._image_dir.iterdir():
38 into = self._creator.purelib / filename.name
39 if into.exists():
40 if into.is_dir() and not into.is_symlink():
41 shutil.rmtree(str(into))
42 else:
43 into.unlink()
44 self._sync(filename, into)
45 # generate console executables
46 consoles = set()
47 script_dir = self._creator.script_dir
48 for name, module in self._console_scripts.items():
49 consoles.update(self._create_console_entry_point(name, module, script_dir))
50 logging.debug("generated console scripts %s", " ".join(i.name for i in consoles))
51
52 def build_image(self):
53 # 1. first extract the wheel
54 logging.debug("build install image to %s of %s", self._image_dir, self._wheel.name)
55 with zipfile.ZipFile(str(self._wheel)) as zip_ref:
56 zip_ref.extractall(str(self._image_dir))
57 self._extracted = True
58 # 2. now add additional files not present in the package
59 new_files = self._generate_new_files()
60 # 3. finally fix the records file
61 self._fix_records(new_files)
62
63 def _records_text(self, files):
64 record_data = "\n".join(
65 "{},,".format(os.path.relpath(six.ensure_text(str(rec)), six.ensure_text(str(self._image_dir))))
66 for rec in files
67 )
68 return record_data
69
70 def _generate_new_files(self):
71 new_files = set()
72 installer = self._dist_info / "INSTALLER"
73 installer.write_text("pip\n")
74 new_files.add(installer)
75 # inject a no-op root element, as workaround for bug added
76 # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722
77 marker = self._image_dir / "{}.virtualenv".format(self._dist_info.name)
78 marker.write_text("")
79 new_files.add(marker)
80 folder = mkdtemp()
81 try:
82 to_folder = Path(folder)
83 rel = os.path.relpath(
84 six.ensure_text(str(self._creator.script_dir)), six.ensure_text(str(self._creator.purelib))
85 )
86 for name, module in self._console_scripts.items():
87 new_files.update(
88 Path(os.path.normpath(six.ensure_text(str(self._image_dir / rel / i.name))))
89 for i in self._create_console_entry_point(name, module, to_folder)
90 )
91 finally:
92 shutil.rmtree(folder, ignore_errors=True)
93 return new_files
94
95 @property
96 def _dist_info(self):
97 if self._extracted is False:
98 return None # pragma: no cover
99 if self.__dist_info is None:
100 for filename in self._image_dir.iterdir():
101 if filename.suffix == ".dist-info":
102 self.__dist_info = filename
103 break
104 else:
105 raise RuntimeError("no dist info") # pragma: no cover
106 return self.__dist_info
107
108 @abstractmethod
109 def _fix_records(self, extra_record_data):
110 raise NotImplementedError
111
112 @property
113 def _console_scripts(self):
114 if self._extracted is False:
115 return None # pragma: no cover
116 if self._console_entry_points is None:
117 self._console_entry_points = {}
118 entry_points = self._dist_info / "entry_points.txt"
119 if entry_points.exists():
120 parser = ConfigParser.ConfigParser()
121 with entry_points.open() as file_handler:
122 reader = getattr(parser, "read_file" if PY3 else "readfp")
123 reader(file_handler)
124 if "console_scripts" in parser.sections():
125 for name, value in parser.items("console_scripts"):
126 match = re.match(r"(.*?)-?\d\.?\d*", name)
127 if match:
128 name = match.groups(1)[0]
129 self._console_entry_points[name] = value
130 return self._console_entry_points
131
132 def _create_console_entry_point(self, name, value, to_folder):
133 result = []
134 if IS_WIN:
135 # windows doesn't support simple script files, so fallback to more complicated exe generator
136 from distlib.scripts import ScriptMaker
137
138 maker = ScriptMaker(None, str(to_folder))
139 maker.clobber = True # overwrite
140 maker.variants = {"", "X", "X.Y"} # create all variants
141 maker.set_mode = True # ensure they are executable
142 maker.executable = str(self._creator.exe)
143 specification = "{} = {}".format(name, value)
144 new_files = maker.make(specification)
145 result.extend(Path(i) for i in new_files)
146 else:
147 module, func = value.split(":")
148 content = (
149 dedent(
150 """
151 #!{0}
152 # -*- coding: utf-8 -*-
153 import re
154 import sys
155
156 from {1} import {2}
157
158 if __name__ == "__main__":
159 sys.argv[0] = re.sub(r"(-script.pyw?|.exe)?$", "", sys.argv[0])
160 sys.exit({2}())
161 """
162 )
163 .lstrip()
164 .format(self._creator.exe, module, func)
165 )
166
167 version = self._creator.interpreter.version_info
168 for new_name in (
169 name,
170 "{}{}".format(name, version.major),
171 "{}-{}.{}".format(name, version.major, version.minor),
172 ):
173 exe = to_folder / new_name
174 exe.write_text(content, encoding="utf-8")
175 make_exe(exe)
176 result.append(exe)
177 return result
178
179 def clear(self):
180 if self._image_dir.exists():
181 shutil.rmtree(six.ensure_text(str(self._image_dir)))
182
183 def has_image(self):
184 return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/virtualenv/seed/via_app_data/pip_install/base.py b/src/virtualenv/seed/via_app_data/pip_install/base.py
--- a/src/virtualenv/seed/via_app_data/pip_install/base.py
+++ b/src/virtualenv/seed/via_app_data/pip_install/base.py
@@ -7,14 +7,12 @@
import zipfile
from abc import ABCMeta, abstractmethod
from tempfile import mkdtemp
-from textwrap import dedent
import six
from six import PY3
-from virtualenv.info import IS_WIN
from virtualenv.util import ConfigParser
-from virtualenv.util.path import Path, make_exe
+from virtualenv.util.path import Path
@six.add_metaclass(ABCMeta)
@@ -131,49 +129,16 @@
def _create_console_entry_point(self, name, value, to_folder):
result = []
- if IS_WIN:
- # windows doesn't support simple script files, so fallback to more complicated exe generator
- from distlib.scripts import ScriptMaker
-
- maker = ScriptMaker(None, str(to_folder))
- maker.clobber = True # overwrite
- maker.variants = {"", "X", "X.Y"} # create all variants
- maker.set_mode = True # ensure they are executable
- maker.executable = str(self._creator.exe)
- specification = "{} = {}".format(name, value)
- new_files = maker.make(specification)
- result.extend(Path(i) for i in new_files)
- else:
- module, func = value.split(":")
- content = (
- dedent(
- """
- #!{0}
- # -*- coding: utf-8 -*-
- import re
- import sys
-
- from {1} import {2}
-
- if __name__ == "__main__":
- sys.argv[0] = re.sub(r"(-script.pyw?|.exe)?$", "", sys.argv[0])
- sys.exit({2}())
- """
- )
- .lstrip()
- .format(self._creator.exe, module, func)
- )
-
- version = self._creator.interpreter.version_info
- for new_name in (
- name,
- "{}{}".format(name, version.major),
- "{}-{}.{}".format(name, version.major, version.minor),
- ):
- exe = to_folder / new_name
- exe.write_text(content, encoding="utf-8")
- make_exe(exe)
- result.append(exe)
+ from distlib.scripts import ScriptMaker
+
+ maker = ScriptMaker(None, str(to_folder))
+ maker.clobber = True # overwrite
+ maker.variants = {"", "X", "X.Y"} # create all variants
+ maker.set_mode = True # ensure they are executable
+ maker.executable = str(self._creator.exe)
+ specification = "{} = {}".format(name, value)
+ new_files = maker.make(specification)
+ result.extend(Path(i) for i in new_files)
return result
def clear(self):
| {"golden_diff": "diff --git a/src/virtualenv/seed/via_app_data/pip_install/base.py b/src/virtualenv/seed/via_app_data/pip_install/base.py\n--- a/src/virtualenv/seed/via_app_data/pip_install/base.py\n+++ b/src/virtualenv/seed/via_app_data/pip_install/base.py\n@@ -7,14 +7,12 @@\n import zipfile\n from abc import ABCMeta, abstractmethod\n from tempfile import mkdtemp\n-from textwrap import dedent\n \n import six\n from six import PY3\n \n-from virtualenv.info import IS_WIN\n from virtualenv.util import ConfigParser\n-from virtualenv.util.path import Path, make_exe\n+from virtualenv.util.path import Path\n \n \n @six.add_metaclass(ABCMeta)\n@@ -131,49 +129,16 @@\n \n def _create_console_entry_point(self, name, value, to_folder):\n result = []\n- if IS_WIN:\n- # windows doesn't support simple script files, so fallback to more complicated exe generator\n- from distlib.scripts import ScriptMaker\n-\n- maker = ScriptMaker(None, str(to_folder))\n- maker.clobber = True # overwrite\n- maker.variants = {\"\", \"X\", \"X.Y\"} # create all variants\n- maker.set_mode = True # ensure they are executable\n- maker.executable = str(self._creator.exe)\n- specification = \"{} = {}\".format(name, value)\n- new_files = maker.make(specification)\n- result.extend(Path(i) for i in new_files)\n- else:\n- module, func = value.split(\":\")\n- content = (\n- dedent(\n- \"\"\"\n- #!{0}\n- # -*- coding: utf-8 -*-\n- import re\n- import sys\n-\n- from {1} import {2}\n-\n- if __name__ == \"__main__\":\n- sys.argv[0] = re.sub(r\"(-script.pyw?|.exe)?$\", \"\", sys.argv[0])\n- sys.exit({2}())\n- \"\"\"\n- )\n- .lstrip()\n- .format(self._creator.exe, module, func)\n- )\n-\n- version = self._creator.interpreter.version_info\n- for new_name in (\n- name,\n- \"{}{}\".format(name, version.major),\n- \"{}-{}.{}\".format(name, version.major, version.minor),\n- ):\n- exe = to_folder / new_name\n- exe.write_text(content, encoding=\"utf-8\")\n- make_exe(exe)\n- result.append(exe)\n+ from distlib.scripts import ScriptMaker\n+\n+ maker = ScriptMaker(None, str(to_folder))\n+ maker.clobber = True # overwrite\n+ maker.variants = {\"\", \"X\", \"X.Y\"} # create all variants\n+ maker.set_mode = True # ensure they are executable\n+ maker.executable = str(self._creator.exe)\n+ specification = \"{} = {}\".format(name, value)\n+ new_files = maker.make(specification)\n+ result.extend(Path(i) for i in new_files)\n return result\n \n def clear(self):\n", "issue": "Long env path names cause weird errors after activiting virtual environment\nAfter creating a virtualenv in the path\r\n\r\n`/home/harald/git-repos/sandbox/stackoverflow/41454203-how-do-you-import-the-library-being-tested-when-running-a-nosetest/env`\r\n\r\nI could not run the env's pip,\r\n\r\n```\r\nnosetest/env$ ./bin/pip\r\nbash: ./bin/pip: /home/harald/git-repos/sandbox/stackoverflow/41454203-how-do-you-import-the-l: bad interpreter: Permission denied\r\n```\r\n\r\nand running just pip gives me the system's pip, with all of its packages.\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport logging\nimport os\nimport re\nimport shutil\nimport zipfile\nfrom abc import ABCMeta, abstractmethod\nfrom tempfile import mkdtemp\nfrom textwrap import dedent\n\nimport six\nfrom six import PY3\n\nfrom virtualenv.info import IS_WIN\nfrom virtualenv.util import ConfigParser\nfrom virtualenv.util.path import Path, make_exe\n\n\[email protected]_metaclass(ABCMeta)\nclass PipInstall(object):\n def __init__(self, wheel, creator, image_folder):\n 
self._wheel = wheel\n self._creator = creator\n self._image_dir = image_folder\n self._extracted = False\n self.__dist_info = None\n self._console_entry_points = None\n\n @abstractmethod\n def _sync(self, src, dst):\n raise NotImplementedError\n\n def install(self):\n self._extracted = True\n # sync image\n for filename in self._image_dir.iterdir():\n into = self._creator.purelib / filename.name\n if into.exists():\n if into.is_dir() and not into.is_symlink():\n shutil.rmtree(str(into))\n else:\n into.unlink()\n self._sync(filename, into)\n # generate console executables\n consoles = set()\n script_dir = self._creator.script_dir\n for name, module in self._console_scripts.items():\n consoles.update(self._create_console_entry_point(name, module, script_dir))\n logging.debug(\"generated console scripts %s\", \" \".join(i.name for i in consoles))\n\n def build_image(self):\n # 1. first extract the wheel\n logging.debug(\"build install image to %s of %s\", self._image_dir, self._wheel.name)\n with zipfile.ZipFile(str(self._wheel)) as zip_ref:\n zip_ref.extractall(str(self._image_dir))\n self._extracted = True\n # 2. now add additional files not present in the package\n new_files = self._generate_new_files()\n # 3. finally fix the records file\n self._fix_records(new_files)\n\n def _records_text(self, files):\n record_data = \"\\n\".join(\n \"{},,\".format(os.path.relpath(six.ensure_text(str(rec)), six.ensure_text(str(self._image_dir))))\n for rec in files\n )\n return record_data\n\n def _generate_new_files(self):\n new_files = set()\n installer = self._dist_info / \"INSTALLER\"\n installer.write_text(\"pip\\n\")\n new_files.add(installer)\n # inject a no-op root element, as workaround for bug added\n # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722\n marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.name)\n marker.write_text(\"\")\n new_files.add(marker)\n folder = mkdtemp()\n try:\n to_folder = Path(folder)\n rel = os.path.relpath(\n six.ensure_text(str(self._creator.script_dir)), six.ensure_text(str(self._creator.purelib))\n )\n for name, module in self._console_scripts.items():\n new_files.update(\n Path(os.path.normpath(six.ensure_text(str(self._image_dir / rel / i.name))))\n for i in self._create_console_entry_point(name, module, to_folder)\n )\n finally:\n shutil.rmtree(folder, ignore_errors=True)\n return new_files\n\n @property\n def _dist_info(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self.__dist_info is None:\n for filename in self._image_dir.iterdir():\n if filename.suffix == \".dist-info\":\n self.__dist_info = filename\n break\n else:\n raise RuntimeError(\"no dist info\") # pragma: no cover\n return self.__dist_info\n\n @abstractmethod\n def _fix_records(self, extra_record_data):\n raise NotImplementedError\n\n @property\n def _console_scripts(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self._console_entry_points is None:\n self._console_entry_points = {}\n entry_points = self._dist_info / \"entry_points.txt\"\n if entry_points.exists():\n parser = ConfigParser.ConfigParser()\n with entry_points.open() as file_handler:\n reader = getattr(parser, \"read_file\" if PY3 else \"readfp\")\n reader(file_handler)\n if \"console_scripts\" in parser.sections():\n for name, value in parser.items(\"console_scripts\"):\n match = re.match(r\"(.*?)-?\\d\\.?\\d*\", name)\n if match:\n name = match.groups(1)[0]\n self._console_entry_points[name] = value\n return self._console_entry_points\n\n def 
_create_console_entry_point(self, name, value, to_folder):\n result = []\n if IS_WIN:\n # windows doesn't support simple script files, so fallback to more complicated exe generator\n from distlib.scripts import ScriptMaker\n\n maker = ScriptMaker(None, str(to_folder))\n maker.clobber = True # overwrite\n maker.variants = {\"\", \"X\", \"X.Y\"} # create all variants\n maker.set_mode = True # ensure they are executable\n maker.executable = str(self._creator.exe)\n specification = \"{} = {}\".format(name, value)\n new_files = maker.make(specification)\n result.extend(Path(i) for i in new_files)\n else:\n module, func = value.split(\":\")\n content = (\n dedent(\n \"\"\"\n #!{0}\n # -*- coding: utf-8 -*-\n import re\n import sys\n\n from {1} import {2}\n\n if __name__ == \"__main__\":\n sys.argv[0] = re.sub(r\"(-script.pyw?|.exe)?$\", \"\", sys.argv[0])\n sys.exit({2}())\n \"\"\"\n )\n .lstrip()\n .format(self._creator.exe, module, func)\n )\n\n version = self._creator.interpreter.version_info\n for new_name in (\n name,\n \"{}{}\".format(name, version.major),\n \"{}-{}.{}\".format(name, version.major, version.minor),\n ):\n exe = to_folder / new_name\n exe.write_text(content, encoding=\"utf-8\")\n make_exe(exe)\n result.append(exe)\n return result\n\n def clear(self):\n if self._image_dir.exists():\n shutil.rmtree(six.ensure_text(str(self._image_dir)))\n\n def has_image(self):\n return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None\n", "path": "src/virtualenv/seed/via_app_data/pip_install/base.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport logging\nimport os\nimport re\nimport shutil\nimport zipfile\nfrom abc import ABCMeta, abstractmethod\nfrom tempfile import mkdtemp\n\nimport six\nfrom six import PY3\n\nfrom virtualenv.util import ConfigParser\nfrom virtualenv.util.path import Path\n\n\[email protected]_metaclass(ABCMeta)\nclass PipInstall(object):\n def __init__(self, wheel, creator, image_folder):\n self._wheel = wheel\n self._creator = creator\n self._image_dir = image_folder\n self._extracted = False\n self.__dist_info = None\n self._console_entry_points = None\n\n @abstractmethod\n def _sync(self, src, dst):\n raise NotImplementedError\n\n def install(self):\n self._extracted = True\n # sync image\n for filename in self._image_dir.iterdir():\n into = self._creator.purelib / filename.name\n if into.exists():\n if into.is_dir() and not into.is_symlink():\n shutil.rmtree(str(into))\n else:\n into.unlink()\n self._sync(filename, into)\n # generate console executables\n consoles = set()\n script_dir = self._creator.script_dir\n for name, module in self._console_scripts.items():\n consoles.update(self._create_console_entry_point(name, module, script_dir))\n logging.debug(\"generated console scripts %s\", \" \".join(i.name for i in consoles))\n\n def build_image(self):\n # 1. first extract the wheel\n logging.debug(\"build install image to %s of %s\", self._image_dir, self._wheel.name)\n with zipfile.ZipFile(str(self._wheel)) as zip_ref:\n zip_ref.extractall(str(self._image_dir))\n self._extracted = True\n # 2. now add additional files not present in the package\n new_files = self._generate_new_files()\n # 3. 
finally fix the records file\n self._fix_records(new_files)\n\n def _records_text(self, files):\n record_data = \"\\n\".join(\n \"{},,\".format(os.path.relpath(six.ensure_text(str(rec)), six.ensure_text(str(self._image_dir))))\n for rec in files\n )\n return record_data\n\n def _generate_new_files(self):\n new_files = set()\n installer = self._dist_info / \"INSTALLER\"\n installer.write_text(\"pip\\n\")\n new_files.add(installer)\n # inject a no-op root element, as workaround for bug added\n # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722\n marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.name)\n marker.write_text(\"\")\n new_files.add(marker)\n folder = mkdtemp()\n try:\n to_folder = Path(folder)\n rel = os.path.relpath(\n six.ensure_text(str(self._creator.script_dir)), six.ensure_text(str(self._creator.purelib))\n )\n for name, module in self._console_scripts.items():\n new_files.update(\n Path(os.path.normpath(six.ensure_text(str(self._image_dir / rel / i.name))))\n for i in self._create_console_entry_point(name, module, to_folder)\n )\n finally:\n shutil.rmtree(folder, ignore_errors=True)\n return new_files\n\n @property\n def _dist_info(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self.__dist_info is None:\n for filename in self._image_dir.iterdir():\n if filename.suffix == \".dist-info\":\n self.__dist_info = filename\n break\n else:\n raise RuntimeError(\"no dist info\") # pragma: no cover\n return self.__dist_info\n\n @abstractmethod\n def _fix_records(self, extra_record_data):\n raise NotImplementedError\n\n @property\n def _console_scripts(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self._console_entry_points is None:\n self._console_entry_points = {}\n entry_points = self._dist_info / \"entry_points.txt\"\n if entry_points.exists():\n parser = ConfigParser.ConfigParser()\n with entry_points.open() as file_handler:\n reader = getattr(parser, \"read_file\" if PY3 else \"readfp\")\n reader(file_handler)\n if \"console_scripts\" in parser.sections():\n for name, value in parser.items(\"console_scripts\"):\n match = re.match(r\"(.*?)-?\\d\\.?\\d*\", name)\n if match:\n name = match.groups(1)[0]\n self._console_entry_points[name] = value\n return self._console_entry_points\n\n def _create_console_entry_point(self, name, value, to_folder):\n result = []\n from distlib.scripts import ScriptMaker\n\n maker = ScriptMaker(None, str(to_folder))\n maker.clobber = True # overwrite\n maker.variants = {\"\", \"X\", \"X.Y\"} # create all variants\n maker.set_mode = True # ensure they are executable\n maker.executable = str(self._creator.exe)\n specification = \"{} = {}\".format(name, value)\n new_files = maker.make(specification)\n result.extend(Path(i) for i in new_files)\n return result\n\n def clear(self):\n if self._image_dir.exists():\n shutil.rmtree(six.ensure_text(str(self._image_dir)))\n\n def has_image(self):\n return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None\n", "path": "src/virtualenv/seed/via_app_data/pip_install/base.py"}]} | 2,329 | 700 |
gh_patches_debug_35026 | rasdani/github-patches | git_diff | vyperlang__vyper-1400 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deployable bytecode attempts to access calldata
Hi There,
I'm trying to understand vyper constructor usage patterns. When compiling the example Vyper [erc20 contract](https://github.com/ethereum/vyper/blob/master/examples/tokens/ERC20.vy), the output (deployed) bytecode looks like this:
```
0x600035...000f3
```
The second opcode `0x35` is `CALLDATALOAD`. Given that in the EVM specification, contract creation calls assume `calldata == []`, how is calldata used in this case?
In Solidity, a comparable example would not include `CALLDATALOAD` in the executable part of deployed bytecode, reserving it instead for the runtime bytecode portion.
Let me know if I'm misunderstanding!
--- END ISSUE ---
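For context: a contract-creation call runs the init bytecode with empty calldata (constructor arguments travel appended to the code, not in calldata), so a `CALLDATALOAD` in the deploy-time portion is at best dead weight. Vyper was prepending the same `calldataload`-based method-ID header to every function, `__init__` included. The patch below splits that header in two so that only runtime functions load calldata; the sketch here just restates that split using the names introduced by the patch:
```python
from vyper.parser.lll_node import LLLnode
from vyper.utils import LOADED_LIMIT_MAP

# Store calldata[0:32] at memory offset 28 so the 4-byte method ID sits at bytes 28..31.
STORE_CALLDATA = ['seq', ['mstore', 28, ['calldataload', 0]]]
# Pin the limit constants at their fixed memory addresses.
LIMIT_MEMORY_SET = [['mstore', pos, size] for pos, size in LOADED_LIMIT_MAP.items()]

# Runtime dispatcher header: still needs the selector, so it keeps the calldata load.
FUNC_INIT_LLL = LLLnode.from_list(STORE_CALLDATA + LIMIT_MEMORY_SET, typ=None)
# Constructor header: creation calls carry no calldata, so it only sets the limits.
INIT_FUNC_INIT_LLL = LLLnode.from_list(['seq'] + LIMIT_MEMORY_SET, typ=None)
```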
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/parser/parser.py`
Content:
```
1 import ast
2 from typing import (
3 List,
4 cast,
5 )
6
7 from vyper.exceptions import (
8 EventDeclarationException,
9 FunctionDeclarationException,
10 ParserException,
11 StructureException,
12 )
13 from vyper.parser.function_definitions import (
14 is_default_func,
15 is_initializer,
16 parse_function,
17 )
18 from vyper.parser.global_context import (
19 GlobalContext,
20 )
21 from vyper.parser.lll_node import (
22 LLLnode,
23 )
24 from vyper.parser.parser_utils import (
25 annotate_and_optimize_ast,
26 )
27 from vyper.parser.pre_parser import (
28 pre_parse,
29 )
30 from vyper.signatures import (
31 sig_utils,
32 )
33 from vyper.signatures.event_signature import (
34 EventSignature,
35 )
36 from vyper.signatures.function_signature import (
37 FunctionSignature,
38 )
39 from vyper.signatures.interface import (
40 check_valid_contract_interface,
41 )
42 from vyper.utils import (
43 LOADED_LIMIT_MAP,
44 )
45
46 if not hasattr(ast, 'AnnAssign'):
47 raise Exception("Requires python 3.6 or higher for annotation support")
48
49
50 # Header code
51 INITIALIZER_LIST = ['seq', ['mstore', 28, ['calldataload', 0]]]
52 # Store limit constants at fixed addresses in memory.
53 INITIALIZER_LIST += [['mstore', pos, limit_size] for pos, limit_size in LOADED_LIMIT_MAP.items()]
54 INITIALIZER_LLL = LLLnode.from_list(INITIALIZER_LIST, typ=None)
55
56
57 def parse_to_ast(source_code: str) -> List[ast.stmt]:
58 """
59 Parses the given vyper source code and returns a list of python AST objects
60 for all statements in the source. Performs pre-processing of source code
61 before parsing as well as post-processing of the resulting AST.
62
63 :param source_code: The vyper source code to be parsed.
64 :return: The post-processed list of python AST objects for each statement in
65 ``source_code``.
66 """
67 class_types, reformatted_code = pre_parse(source_code)
68
69 if '\x00' in reformatted_code:
70 raise ParserException('No null bytes (\\x00) allowed in the source code.')
71
72 # The return type depends on the parse mode which is why we need to cast here
73 parsed_ast = cast(ast.Module, ast.parse(reformatted_code))
74 annotate_and_optimize_ast(parsed_ast, reformatted_code, class_types)
75
76 return parsed_ast.body
77
78
79 def parse_events(sigs, global_ctx):
80 for event in global_ctx._events:
81 sigs[event.target.id] = EventSignature.from_declaration(event, global_ctx)
82 return sigs
83
84
85 def parse_external_contracts(external_contracts, global_ctx):
86 for _contractname in global_ctx._contracts:
87 _contract_defs = global_ctx._contracts[_contractname]
88 _defnames = [_def.name for _def in _contract_defs]
89 contract = {}
90 if len(set(_defnames)) < len(_contract_defs):
91 raise FunctionDeclarationException(
92 "Duplicate function name: %s" % (
93 [name for name in _defnames if _defnames.count(name) > 1][0]
94 )
95 )
96
97 for _def in _contract_defs:
98 constant = False
99 # test for valid call type keyword.
100 if len(_def.body) == 1 and \
101 isinstance(_def.body[0], ast.Expr) and \
102 isinstance(_def.body[0].value, ast.Name) and \
103 _def.body[0].value.id in ('modifying', 'constant'):
104 constant = True if _def.body[0].value.id == 'constant' else False
105 else:
106 raise StructureException('constant or modifying call type must be specified', _def)
107 # Recognizes already-defined structs
108 sig = FunctionSignature.from_definition(
109 _def,
110 contract_def=True,
111 constant=constant,
112 custom_structs=global_ctx._structs,
113 constants=global_ctx._constants
114 )
115 contract[sig.name] = sig
116 external_contracts[_contractname] = contract
117
118 for interface_name, interface in global_ctx._interfaces.items():
119 external_contracts[interface_name] = {
120 sig.name: sig
121 for sig in interface
122 if isinstance(sig, FunctionSignature)
123 }
124
125 return external_contracts
126
127
128 def parse_other_functions(o,
129 otherfuncs,
130 sigs,
131 external_contracts,
132 origcode,
133 global_ctx,
134 default_function,
135 runtime_only):
136 sub = ['seq', INITIALIZER_LLL]
137 add_gas = INITIALIZER_LLL.gas
138 for _def in otherfuncs:
139 sub.append(
140 parse_function(_def, {**{'self': sigs}, **external_contracts}, origcode, global_ctx)
141 )
142 sub[-1].total_gas += add_gas
143 add_gas += 30
144 for sig in sig_utils.generate_default_arg_sigs(_def, external_contracts, global_ctx):
145 sig.gas = sub[-1].total_gas
146 sigs[sig.sig] = sig
147
148 # Add fallback function
149 if default_function:
150 default_func = parse_function(
151 default_function[0],
152 {**{'self': sigs}, **external_contracts},
153 origcode,
154 global_ctx,
155 )
156 sub.append(default_func)
157 else:
158 sub.append(LLLnode.from_list(['revert', 0, 0], typ=None, annotation='Default function'))
159 if runtime_only:
160 return sub
161 else:
162 o.append(['return', 0, ['lll', sub, 0]])
163 return o
164
165
166 # Main python parse tree => LLL method
167 def parse_tree_to_lll(code, origcode, runtime_only=False, interface_codes=None):
168 global_ctx = GlobalContext.get_global_context(code, interface_codes=interface_codes)
169 _names_def = [_def.name for _def in global_ctx._defs]
170 # Checks for duplicate function names
171 if len(set(_names_def)) < len(_names_def):
172 raise FunctionDeclarationException(
173 "Duplicate function name: %s" % (
174 [name for name in _names_def if _names_def.count(name) > 1][0]
175 )
176 )
177 _names_events = [_event.target.id for _event in global_ctx._events]
178 # Checks for duplicate event names
179 if len(set(_names_events)) < len(_names_events):
180 raise EventDeclarationException(
181 "Duplicate event name: %s" % (
182 [name for name in _names_events if _names_events.count(name) > 1][0]
183 )
184 )
185 # Initialization function
186 initfunc = [_def for _def in global_ctx._defs if is_initializer(_def)]
187 # Default function
188 defaultfunc = [_def for _def in global_ctx._defs if is_default_func(_def)]
189 # Regular functions
190 otherfuncs = [
191 _def
192 for _def
193 in global_ctx._defs
194 if not is_initializer(_def) and not is_default_func(_def)
195 ]
196 sigs = {}
197 external_contracts = {}
198 # Create the main statement
199 o = ['seq']
200 if global_ctx._events:
201 sigs = parse_events(sigs, global_ctx)
202 if global_ctx._contracts or global_ctx._interfaces:
203 external_contracts = parse_external_contracts(external_contracts, global_ctx)
204 # If there is an init func...
205 if initfunc:
206 o.append(INITIALIZER_LLL)
207 o.append(
208 parse_function(
209 initfunc[0],
210 {**{'self': sigs}, **external_contracts},
211 origcode,
212 global_ctx,
213 )
214 )
215 # If there are regular functions...
216 if otherfuncs or defaultfunc:
217 o = parse_other_functions(
218 o, otherfuncs, sigs, external_contracts, origcode, global_ctx, defaultfunc, runtime_only
219 )
220
221 # Check if interface of contract is correct.
222 check_valid_contract_interface(global_ctx, sigs)
223
224 return LLLnode.from_list(o, typ=None)
225
226
227 def parse_to_lll(kode, runtime_only=False, interface_codes=None):
228 code = parse_to_ast(kode)
229 return parse_tree_to_lll(code, kode, runtime_only=runtime_only, interface_codes=interface_codes)
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vyper/parser/parser.py b/vyper/parser/parser.py
--- a/vyper/parser/parser.py
+++ b/vyper/parser/parser.py
@@ -1,5 +1,6 @@
import ast
from typing import (
+ Any,
List,
cast,
)
@@ -46,6 +47,20 @@
if not hasattr(ast, 'AnnAssign'):
raise Exception("Requires python 3.6 or higher for annotation support")
+# Header code
+STORE_CALLDATA: List[Any] = ['seq', ['mstore', 28, ['calldataload', 0]]]
+# Store limit constants at fixed addresses in memory.
+LIMIT_MEMORY_SET: List[Any] = [
+ ['mstore', pos, limit_size]
+ for pos, limit_size in LOADED_LIMIT_MAP.items()
+]
+FUNC_INIT_LLL = LLLnode.from_list(
+ STORE_CALLDATA + LIMIT_MEMORY_SET, typ=None
+)
+INIT_FUNC_INIT_LLL = LLLnode.from_list(
+ ['seq'] + LIMIT_MEMORY_SET, typ=None
+)
+
# Header code
INITIALIZER_LIST = ['seq', ['mstore', 28, ['calldataload', 0]]]
@@ -133,8 +148,9 @@
global_ctx,
default_function,
runtime_only):
- sub = ['seq', INITIALIZER_LLL]
- add_gas = INITIALIZER_LLL.gas
+ sub = ['seq', FUNC_INIT_LLL]
+ add_gas = FUNC_INIT_LLL.gas
+
for _def in otherfuncs:
sub.append(
parse_function(_def, {**{'self': sigs}, **external_contracts}, origcode, global_ctx)
@@ -203,15 +219,14 @@
external_contracts = parse_external_contracts(external_contracts, global_ctx)
# If there is an init func...
if initfunc:
- o.append(INITIALIZER_LLL)
- o.append(
- parse_function(
- initfunc[0],
- {**{'self': sigs}, **external_contracts},
- origcode,
- global_ctx,
- )
- )
+ o.append(INIT_FUNC_INIT_LLL)
+ o.append(parse_function(
+ initfunc[0],
+ {**{'self': sigs}, **external_contracts},
+ origcode,
+ global_ctx,
+ ))
+
# If there are regular functions...
if otherfuncs or defaultfunc:
o = parse_other_functions(
| {"golden_diff": "diff --git a/vyper/parser/parser.py b/vyper/parser/parser.py\n--- a/vyper/parser/parser.py\n+++ b/vyper/parser/parser.py\n@@ -1,5 +1,6 @@\n import ast\n from typing import (\n+ Any,\n List,\n cast,\n )\n@@ -46,6 +47,20 @@\n if not hasattr(ast, 'AnnAssign'):\n raise Exception(\"Requires python 3.6 or higher for annotation support\")\n \n+# Header code\n+STORE_CALLDATA: List[Any] = ['seq', ['mstore', 28, ['calldataload', 0]]]\n+# Store limit constants at fixed addresses in memory.\n+LIMIT_MEMORY_SET: List[Any] = [\n+ ['mstore', pos, limit_size]\n+ for pos, limit_size in LOADED_LIMIT_MAP.items()\n+]\n+FUNC_INIT_LLL = LLLnode.from_list(\n+ STORE_CALLDATA + LIMIT_MEMORY_SET, typ=None\n+)\n+INIT_FUNC_INIT_LLL = LLLnode.from_list(\n+ ['seq'] + LIMIT_MEMORY_SET, typ=None\n+)\n+\n \n # Header code\n INITIALIZER_LIST = ['seq', ['mstore', 28, ['calldataload', 0]]]\n@@ -133,8 +148,9 @@\n global_ctx,\n default_function,\n runtime_only):\n- sub = ['seq', INITIALIZER_LLL]\n- add_gas = INITIALIZER_LLL.gas\n+ sub = ['seq', FUNC_INIT_LLL]\n+ add_gas = FUNC_INIT_LLL.gas\n+\n for _def in otherfuncs:\n sub.append(\n parse_function(_def, {**{'self': sigs}, **external_contracts}, origcode, global_ctx)\n@@ -203,15 +219,14 @@\n external_contracts = parse_external_contracts(external_contracts, global_ctx)\n # If there is an init func...\n if initfunc:\n- o.append(INITIALIZER_LLL)\n- o.append(\n- parse_function(\n- initfunc[0],\n- {**{'self': sigs}, **external_contracts},\n- origcode,\n- global_ctx,\n- )\n- )\n+ o.append(INIT_FUNC_INIT_LLL)\n+ o.append(parse_function(\n+ initfunc[0],\n+ {**{'self': sigs}, **external_contracts},\n+ origcode,\n+ global_ctx,\n+ ))\n+\n # If there are regular functions...\n if otherfuncs or defaultfunc:\n o = parse_other_functions(\n", "issue": "Deployable bytecode attempts to access calldata\nHi There,\r\n\r\nI'm trying to understand vyper constructor usage patterns. When compiling the example Vyper [erc20 contract](https://github.com/ethereum/vyper/blob/master/examples/tokens/ERC20.vy), the output (deployed) bytecode looks like this:\r\n\r\n```\r\n0x600035...000f3\r\n```\r\n\r\nThe second opcode `0x35` is `CALLDATALOAD`. 
Given that in the EVM specification, contract creation calls assume `calldata == []`, how is calldata used in this case?\r\n\r\nIn Solidity, a comparable example would not include `CALLDATALOAD` in the executable part of deployed bytecode, reserving it instead for the runtime bytecode portion.\r\n\r\nLet me know if I'm misunderstanding!\n", "before_files": [{"content": "import ast\nfrom typing import (\n List,\n cast,\n)\n\nfrom vyper.exceptions import (\n EventDeclarationException,\n FunctionDeclarationException,\n ParserException,\n StructureException,\n)\nfrom vyper.parser.function_definitions import (\n is_default_func,\n is_initializer,\n parse_function,\n)\nfrom vyper.parser.global_context import (\n GlobalContext,\n)\nfrom vyper.parser.lll_node import (\n LLLnode,\n)\nfrom vyper.parser.parser_utils import (\n annotate_and_optimize_ast,\n)\nfrom vyper.parser.pre_parser import (\n pre_parse,\n)\nfrom vyper.signatures import (\n sig_utils,\n)\nfrom vyper.signatures.event_signature import (\n EventSignature,\n)\nfrom vyper.signatures.function_signature import (\n FunctionSignature,\n)\nfrom vyper.signatures.interface import (\n check_valid_contract_interface,\n)\nfrom vyper.utils import (\n LOADED_LIMIT_MAP,\n)\n\nif not hasattr(ast, 'AnnAssign'):\n raise Exception(\"Requires python 3.6 or higher for annotation support\")\n\n\n# Header code\nINITIALIZER_LIST = ['seq', ['mstore', 28, ['calldataload', 0]]]\n# Store limit constants at fixed addresses in memory.\nINITIALIZER_LIST += [['mstore', pos, limit_size] for pos, limit_size in LOADED_LIMIT_MAP.items()]\nINITIALIZER_LLL = LLLnode.from_list(INITIALIZER_LIST, typ=None)\n\n\ndef parse_to_ast(source_code: str) -> List[ast.stmt]:\n \"\"\"\n Parses the given vyper source code and returns a list of python AST objects\n for all statements in the source. 
Performs pre-processing of source code\n before parsing as well as post-processing of the resulting AST.\n\n :param source_code: The vyper source code to be parsed.\n :return: The post-processed list of python AST objects for each statement in\n ``source_code``.\n \"\"\"\n class_types, reformatted_code = pre_parse(source_code)\n\n if '\\x00' in reformatted_code:\n raise ParserException('No null bytes (\\\\x00) allowed in the source code.')\n\n # The return type depends on the parse mode which is why we need to cast here\n parsed_ast = cast(ast.Module, ast.parse(reformatted_code))\n annotate_and_optimize_ast(parsed_ast, reformatted_code, class_types)\n\n return parsed_ast.body\n\n\ndef parse_events(sigs, global_ctx):\n for event in global_ctx._events:\n sigs[event.target.id] = EventSignature.from_declaration(event, global_ctx)\n return sigs\n\n\ndef parse_external_contracts(external_contracts, global_ctx):\n for _contractname in global_ctx._contracts:\n _contract_defs = global_ctx._contracts[_contractname]\n _defnames = [_def.name for _def in _contract_defs]\n contract = {}\n if len(set(_defnames)) < len(_contract_defs):\n raise FunctionDeclarationException(\n \"Duplicate function name: %s\" % (\n [name for name in _defnames if _defnames.count(name) > 1][0]\n )\n )\n\n for _def in _contract_defs:\n constant = False\n # test for valid call type keyword.\n if len(_def.body) == 1 and \\\n isinstance(_def.body[0], ast.Expr) and \\\n isinstance(_def.body[0].value, ast.Name) and \\\n _def.body[0].value.id in ('modifying', 'constant'):\n constant = True if _def.body[0].value.id == 'constant' else False\n else:\n raise StructureException('constant or modifying call type must be specified', _def)\n # Recognizes already-defined structs\n sig = FunctionSignature.from_definition(\n _def,\n contract_def=True,\n constant=constant,\n custom_structs=global_ctx._structs,\n constants=global_ctx._constants\n )\n contract[sig.name] = sig\n external_contracts[_contractname] = contract\n\n for interface_name, interface in global_ctx._interfaces.items():\n external_contracts[interface_name] = {\n sig.name: sig\n for sig in interface\n if isinstance(sig, FunctionSignature)\n }\n\n return external_contracts\n\n\ndef parse_other_functions(o,\n otherfuncs,\n sigs,\n external_contracts,\n origcode,\n global_ctx,\n default_function,\n runtime_only):\n sub = ['seq', INITIALIZER_LLL]\n add_gas = INITIALIZER_LLL.gas\n for _def in otherfuncs:\n sub.append(\n parse_function(_def, {**{'self': sigs}, **external_contracts}, origcode, global_ctx)\n )\n sub[-1].total_gas += add_gas\n add_gas += 30\n for sig in sig_utils.generate_default_arg_sigs(_def, external_contracts, global_ctx):\n sig.gas = sub[-1].total_gas\n sigs[sig.sig] = sig\n\n # Add fallback function\n if default_function:\n default_func = parse_function(\n default_function[0],\n {**{'self': sigs}, **external_contracts},\n origcode,\n global_ctx,\n )\n sub.append(default_func)\n else:\n sub.append(LLLnode.from_list(['revert', 0, 0], typ=None, annotation='Default function'))\n if runtime_only:\n return sub\n else:\n o.append(['return', 0, ['lll', sub, 0]])\n return o\n\n\n# Main python parse tree => LLL method\ndef parse_tree_to_lll(code, origcode, runtime_only=False, interface_codes=None):\n global_ctx = GlobalContext.get_global_context(code, interface_codes=interface_codes)\n _names_def = [_def.name for _def in global_ctx._defs]\n # Checks for duplicate function names\n if len(set(_names_def)) < len(_names_def):\n raise FunctionDeclarationException(\n \"Duplicate 
function name: %s\" % (\n [name for name in _names_def if _names_def.count(name) > 1][0]\n )\n )\n _names_events = [_event.target.id for _event in global_ctx._events]\n # Checks for duplicate event names\n if len(set(_names_events)) < len(_names_events):\n raise EventDeclarationException(\n \"Duplicate event name: %s\" % (\n [name for name in _names_events if _names_events.count(name) > 1][0]\n )\n )\n # Initialization function\n initfunc = [_def for _def in global_ctx._defs if is_initializer(_def)]\n # Default function\n defaultfunc = [_def for _def in global_ctx._defs if is_default_func(_def)]\n # Regular functions\n otherfuncs = [\n _def\n for _def\n in global_ctx._defs\n if not is_initializer(_def) and not is_default_func(_def)\n ]\n sigs = {}\n external_contracts = {}\n # Create the main statement\n o = ['seq']\n if global_ctx._events:\n sigs = parse_events(sigs, global_ctx)\n if global_ctx._contracts or global_ctx._interfaces:\n external_contracts = parse_external_contracts(external_contracts, global_ctx)\n # If there is an init func...\n if initfunc:\n o.append(INITIALIZER_LLL)\n o.append(\n parse_function(\n initfunc[0],\n {**{'self': sigs}, **external_contracts},\n origcode,\n global_ctx,\n )\n )\n # If there are regular functions...\n if otherfuncs or defaultfunc:\n o = parse_other_functions(\n o, otherfuncs, sigs, external_contracts, origcode, global_ctx, defaultfunc, runtime_only\n )\n\n # Check if interface of contract is correct.\n check_valid_contract_interface(global_ctx, sigs)\n\n return LLLnode.from_list(o, typ=None)\n\n\ndef parse_to_lll(kode, runtime_only=False, interface_codes=None):\n code = parse_to_ast(kode)\n return parse_tree_to_lll(code, kode, runtime_only=runtime_only, interface_codes=interface_codes)\n", "path": "vyper/parser/parser.py"}], "after_files": [{"content": "import ast\nfrom typing import (\n Any,\n List,\n cast,\n)\n\nfrom vyper.exceptions import (\n EventDeclarationException,\n FunctionDeclarationException,\n ParserException,\n StructureException,\n)\nfrom vyper.parser.function_definitions import (\n is_default_func,\n is_initializer,\n parse_function,\n)\nfrom vyper.parser.global_context import (\n GlobalContext,\n)\nfrom vyper.parser.lll_node import (\n LLLnode,\n)\nfrom vyper.parser.parser_utils import (\n annotate_and_optimize_ast,\n)\nfrom vyper.parser.pre_parser import (\n pre_parse,\n)\nfrom vyper.signatures import (\n sig_utils,\n)\nfrom vyper.signatures.event_signature import (\n EventSignature,\n)\nfrom vyper.signatures.function_signature import (\n FunctionSignature,\n)\nfrom vyper.signatures.interface import (\n check_valid_contract_interface,\n)\nfrom vyper.utils import (\n LOADED_LIMIT_MAP,\n)\n\nif not hasattr(ast, 'AnnAssign'):\n raise Exception(\"Requires python 3.6 or higher for annotation support\")\n\n# Header code\nSTORE_CALLDATA: List[Any] = ['seq', ['mstore', 28, ['calldataload', 0]]]\n# Store limit constants at fixed addresses in memory.\nLIMIT_MEMORY_SET: List[Any] = [\n ['mstore', pos, limit_size]\n for pos, limit_size in LOADED_LIMIT_MAP.items()\n]\nFUNC_INIT_LLL = LLLnode.from_list(\n STORE_CALLDATA + LIMIT_MEMORY_SET, typ=None\n)\nINIT_FUNC_INIT_LLL = LLLnode.from_list(\n ['seq'] + LIMIT_MEMORY_SET, typ=None\n)\n\n\n# Header code\nINITIALIZER_LIST = ['seq', ['mstore', 28, ['calldataload', 0]]]\n# Store limit constants at fixed addresses in memory.\nINITIALIZER_LIST += [['mstore', pos, limit_size] for pos, limit_size in LOADED_LIMIT_MAP.items()]\nINITIALIZER_LLL = LLLnode.from_list(INITIALIZER_LIST, typ=None)\n\n\ndef 
parse_to_ast(source_code: str) -> List[ast.stmt]:\n \"\"\"\n Parses the given vyper source code and returns a list of python AST objects\n for all statements in the source. Performs pre-processing of source code\n before parsing as well as post-processing of the resulting AST.\n\n :param source_code: The vyper source code to be parsed.\n :return: The post-processed list of python AST objects for each statement in\n ``source_code``.\n \"\"\"\n class_types, reformatted_code = pre_parse(source_code)\n\n if '\\x00' in reformatted_code:\n raise ParserException('No null bytes (\\\\x00) allowed in the source code.')\n\n # The return type depends on the parse mode which is why we need to cast here\n parsed_ast = cast(ast.Module, ast.parse(reformatted_code))\n annotate_and_optimize_ast(parsed_ast, reformatted_code, class_types)\n\n return parsed_ast.body\n\n\ndef parse_events(sigs, global_ctx):\n for event in global_ctx._events:\n sigs[event.target.id] = EventSignature.from_declaration(event, global_ctx)\n return sigs\n\n\ndef parse_external_contracts(external_contracts, global_ctx):\n for _contractname in global_ctx._contracts:\n _contract_defs = global_ctx._contracts[_contractname]\n _defnames = [_def.name for _def in _contract_defs]\n contract = {}\n if len(set(_defnames)) < len(_contract_defs):\n raise FunctionDeclarationException(\n \"Duplicate function name: %s\" % (\n [name for name in _defnames if _defnames.count(name) > 1][0]\n )\n )\n\n for _def in _contract_defs:\n constant = False\n # test for valid call type keyword.\n if len(_def.body) == 1 and \\\n isinstance(_def.body[0], ast.Expr) and \\\n isinstance(_def.body[0].value, ast.Name) and \\\n _def.body[0].value.id in ('modifying', 'constant'):\n constant = True if _def.body[0].value.id == 'constant' else False\n else:\n raise StructureException('constant or modifying call type must be specified', _def)\n # Recognizes already-defined structs\n sig = FunctionSignature.from_definition(\n _def,\n contract_def=True,\n constant=constant,\n custom_structs=global_ctx._structs,\n constants=global_ctx._constants\n )\n contract[sig.name] = sig\n external_contracts[_contractname] = contract\n\n for interface_name, interface in global_ctx._interfaces.items():\n external_contracts[interface_name] = {\n sig.name: sig\n for sig in interface\n if isinstance(sig, FunctionSignature)\n }\n\n return external_contracts\n\n\ndef parse_other_functions(o,\n otherfuncs,\n sigs,\n external_contracts,\n origcode,\n global_ctx,\n default_function,\n runtime_only):\n sub = ['seq', FUNC_INIT_LLL]\n add_gas = FUNC_INIT_LLL.gas\n\n for _def in otherfuncs:\n sub.append(\n parse_function(_def, {**{'self': sigs}, **external_contracts}, origcode, global_ctx)\n )\n sub[-1].total_gas += add_gas\n add_gas += 30\n for sig in sig_utils.generate_default_arg_sigs(_def, external_contracts, global_ctx):\n sig.gas = sub[-1].total_gas\n sigs[sig.sig] = sig\n\n # Add fallback function\n if default_function:\n default_func = parse_function(\n default_function[0],\n {**{'self': sigs}, **external_contracts},\n origcode,\n global_ctx,\n )\n sub.append(default_func)\n else:\n sub.append(LLLnode.from_list(['revert', 0, 0], typ=None, annotation='Default function'))\n if runtime_only:\n return sub\n else:\n o.append(['return', 0, ['lll', sub, 0]])\n return o\n\n\n# Main python parse tree => LLL method\ndef parse_tree_to_lll(code, origcode, runtime_only=False, interface_codes=None):\n global_ctx = GlobalContext.get_global_context(code, interface_codes=interface_codes)\n _names_def = 
[_def.name for _def in global_ctx._defs]\n # Checks for duplicate function names\n if len(set(_names_def)) < len(_names_def):\n raise FunctionDeclarationException(\n \"Duplicate function name: %s\" % (\n [name for name in _names_def if _names_def.count(name) > 1][0]\n )\n )\n _names_events = [_event.target.id for _event in global_ctx._events]\n # Checks for duplicate event names\n if len(set(_names_events)) < len(_names_events):\n raise EventDeclarationException(\n \"Duplicate event name: %s\" % (\n [name for name in _names_events if _names_events.count(name) > 1][0]\n )\n )\n # Initialization function\n initfunc = [_def for _def in global_ctx._defs if is_initializer(_def)]\n # Default function\n defaultfunc = [_def for _def in global_ctx._defs if is_default_func(_def)]\n # Regular functions\n otherfuncs = [\n _def\n for _def\n in global_ctx._defs\n if not is_initializer(_def) and not is_default_func(_def)\n ]\n sigs = {}\n external_contracts = {}\n # Create the main statement\n o = ['seq']\n if global_ctx._events:\n sigs = parse_events(sigs, global_ctx)\n if global_ctx._contracts or global_ctx._interfaces:\n external_contracts = parse_external_contracts(external_contracts, global_ctx)\n # If there is an init func...\n if initfunc:\n o.append(INIT_FUNC_INIT_LLL)\n o.append(parse_function(\n initfunc[0],\n {**{'self': sigs}, **external_contracts},\n origcode,\n global_ctx,\n ))\n\n # If there are regular functions...\n if otherfuncs or defaultfunc:\n o = parse_other_functions(\n o, otherfuncs, sigs, external_contracts, origcode, global_ctx, defaultfunc, runtime_only\n )\n\n # Check if interface of contract is correct.\n check_valid_contract_interface(global_ctx, sigs)\n\n return LLLnode.from_list(o, typ=None)\n\n\ndef parse_to_lll(kode, runtime_only=False, interface_codes=None):\n code = parse_to_ast(kode)\n return parse_tree_to_lll(code, kode, runtime_only=runtime_only, interface_codes=interface_codes)\n", "path": "vyper/parser/parser.py"}]} | 2,786 | 564 |
gh_patches_debug_16700 | rasdani/github-patches | git_diff | Kinto__kinto-251 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 instead of 503 when storage backend fails with default bucket
Currently we do not catch a potential backend error when creating the implicit objects, and we reference `bucket` and `collection` in the `finally` clause...
```
File "/home/travis/build/Kinto/kinto/kinto/views/buckets.py", line 155, in default_bucket
create_collection(request, bucket_id)
File "/home/travis/build/Kinto/kinto/kinto/views/buckets.py", line 123, in create_collection
already_created[collection_uri] = collection
UnboundLocalError: local variable 'collection' referenced before assignment;
```
--- END ISSUE ---
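For context: the traceback tells the story — `finally:` also runs when `create_record` raises something other than `UnicityError` (for instance a storage backend error that should surface as a 503), and in that case `collection` was never bound, so the `UnboundLocalError` replaces the original exception and the client gets a 500. A minimal, self-contained reproduction of that masking pattern (the class and function names are made up for illustration, not Kinto's actual API):
```python
class BackendError(Exception):
    """Stand-in for a storage backend failure that should map to a 503."""


class UnicityError(Exception):
    def __init__(self, record):
        self.record = record


def create_once(already_created, key, create_record):
    try:
        record = create_record({'id': key})
    except UnicityError as e:          # the only exception this code expects
        record = e.record
    finally:
        # If create_record raised BackendError instead, `record` is unbound here:
        # the UnboundLocalError replaces the original exception, and the caller
        # sees a generic 500 rather than the intended 503.
        already_created[key] = record
    return record


def broken_backend(data):
    raise BackendError("storage unavailable")


# create_once({}, "tasks", broken_backend)  # raises UnboundLocalError, not BackendError
```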
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/buckets.py`
Content:
```
1 from six import text_type
2 from uuid import UUID
3
4 from pyramid import httpexceptions
5 from pyramid.settings import asbool
6 from pyramid.security import NO_PERMISSION_REQUIRED
7 from pyramid.view import view_config
8
9 from cliquet import resource
10 from cliquet.utils import hmac_digest, build_request, reapply_cors
11 from cliquet.storage import exceptions as storage_exceptions
12
13 from kinto.authorization import RouteFactory
14 from kinto.views import NameGenerator
15 from kinto.views.collections import Collection
16
17
18 @resource.register(name='bucket',
19 collection_methods=('GET', 'POST'),
20 collection_path='/buckets',
21 record_path='/buckets/{{id}}')
22 class Bucket(resource.ProtectedResource):
23 permissions = ('read', 'write', 'collection:create', 'group:create')
24
25 def __init__(self, *args, **kwargs):
26 super(Bucket, self).__init__(*args, **kwargs)
27 self.model.id_generator = NameGenerator()
28
29 def get_parent_id(self, request):
30 # Buckets are not isolated by user, unlike Cliquet resources.
31 return ''
32
33 def delete(self):
34 result = super(Bucket, self).delete()
35
36 # Delete groups.
37 storage = self.model.storage
38 parent_id = '/buckets/%s' % self.record_id
39 storage.delete_all(collection_id='group',
40 parent_id=parent_id,
41 with_deleted=False)
42 storage.purge_deleted(collection_id='group',
43 parent_id=parent_id)
44
45 # Delete collections.
46 deleted = storage.delete_all(collection_id='collection',
47 parent_id=parent_id,
48 with_deleted=False)
49 storage.purge_deleted(collection_id='collection',
50 parent_id=parent_id)
51
52 # Delete records.
53 id_field = self.model.id_field
54 for collection in deleted:
55 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
56 collection[id_field])
57 storage.delete_all(collection_id='record',
58 parent_id=parent_id,
59 with_deleted=False)
60 storage.purge_deleted(collection_id='record', parent_id=parent_id)
61
62 return result
63
64
65 def create_bucket(request, bucket_id):
66 """Create a bucket if it doesn't exists."""
67 bucket_put = (request.method.lower() == 'put' and
68 request.path.endswith('buckets/default'))
69 # Do nothing if current request will already create the bucket.
70 if bucket_put:
71 return
72
73 # Do not intent to create multiple times per request (e.g. in batch).
74 already_created = request.bound_data.setdefault('buckets', {})
75 if bucket_id in already_created:
76 return
77
78 # Fake context to instantiate a Bucket resource.
79 context = RouteFactory(request)
80 context.get_permission_object_id = lambda r, i: '/buckets/%s' % bucket_id
81 resource = Bucket(request, context)
82 try:
83 bucket = resource.model.create_record({'id': bucket_id})
84 except storage_exceptions.UnicityError as e:
85 bucket = e.record
86 finally:
87 already_created[bucket_id] = bucket
88
89
90 def create_collection(request, bucket_id):
91 # Do nothing if current request does not involve a collection.
92 subpath = request.matchdict.get('subpath')
93 if not (subpath and subpath.startswith('collections/')):
94 return
95
96 collection_id = subpath.split('/')[1]
97 collection_uri = '/buckets/%s/collections/%s' % (bucket_id, collection_id)
98
99 # Do not intent to create multiple times per request (e.g. in batch).
100 already_created = request.bound_data.setdefault('collections', {})
101 if collection_uri in already_created:
102 return
103
104 # Do nothing if current request will already create the collection.
105 collection_put = (request.method.lower() == 'put' and
106 request.path.endswith(collection_id))
107 if collection_put:
108 return
109
110 # Fake context to instantiate a Collection resource.
111 context = RouteFactory(request)
112 context.get_permission_object_id = lambda r, i: collection_uri
113
114 backup = request.matchdict
115 request.matchdict = dict(bucket_id=bucket_id,
116 id=collection_id,
117 **request.matchdict)
118 resource = Collection(request, context)
119 try:
120 collection = resource.model.create_record({'id': collection_id})
121 except storage_exceptions.UnicityError as e:
122 collection = e.record
123 finally:
124 already_created[collection_uri] = collection
125 request.matchdict = backup
126
127
128 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
129 @view_config(route_name='default_bucket_collection',
130 permission=NO_PERMISSION_REQUIRED)
131 def default_bucket(request):
132 if request.method.lower() == 'options':
133 path = request.path.replace('default', 'unknown')
134 subrequest = build_request(request, {
135 'method': 'OPTIONS',
136 'path': path
137 })
138 return request.invoke_subrequest(subrequest)
139
140 if getattr(request, 'prefixed_userid', None) is None:
141 # Pass through the forbidden_view_config
142 raise httpexceptions.HTTPForbidden()
143
144 settings = request.registry.settings
145
146 if asbool(settings['readonly']):
147 raise httpexceptions.HTTPMethodNotAllowed()
148
149 hmac_secret = settings['userid_hmac_secret']
150 # Build the user unguessable bucket_id UUID from its user_id
151 digest = hmac_digest(hmac_secret, request.prefixed_userid)
152 bucket_id = text_type(UUID(digest[:32]))
153 path = request.path.replace('/buckets/default', '/buckets/%s' % bucket_id)
154 querystring = request.url[(request.url.index(request.path) +
155 len(request.path)):]
156
157 # Make sure bucket exists
158 create_bucket(request, bucket_id)
159
160 # Make sure the collection exists
161 create_collection(request, bucket_id)
162
163 subrequest = build_request(request, {
164 'method': request.method,
165 'path': path + querystring,
166 'body': request.body
167 })
168 subrequest.bound_data = request.bound_data
169
170 try:
171 response = request.invoke_subrequest(subrequest)
172 except httpexceptions.HTTPException as error:
173 if error.content_type == 'application/json':
174 response = reapply_cors(subrequest, error)
175 else:
176 # Ask the upper level to format the error.
177 raise error
178 return response
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -83,8 +83,7 @@
bucket = resource.model.create_record({'id': bucket_id})
except storage_exceptions.UnicityError as e:
bucket = e.record
- finally:
- already_created[bucket_id] = bucket
+ already_created[bucket_id] = bucket
def create_collection(request, bucket_id):
@@ -120,8 +119,7 @@
collection = resource.model.create_record({'id': collection_id})
except storage_exceptions.UnicityError as e:
collection = e.record
- finally:
- already_created[collection_uri] = collection
+ already_created[collection_uri] = collection
request.matchdict = backup
| {"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -83,8 +83,7 @@\n bucket = resource.model.create_record({'id': bucket_id})\n except storage_exceptions.UnicityError as e:\n bucket = e.record\n- finally:\n- already_created[bucket_id] = bucket\n+ already_created[bucket_id] = bucket\n \n \n def create_collection(request, bucket_id):\n@@ -120,8 +119,7 @@\n collection = resource.model.create_record({'id': collection_id})\n except storage_exceptions.UnicityError as e:\n collection = e.record\n- finally:\n- already_created[collection_uri] = collection\n+ already_created[collection_uri] = collection\n request.matchdict = backup\n", "issue": "500 instead of 503 when storage backend fails with default bucket\nCurrently we do not catch a potential backend error when creating the implicit objects. And we use a reference to `bucket` and `collection` in the `finally` close...\n\n```\n File \"/home/travis/build/Kinto/kinto/kinto/views/buckets.py\", line 155, in default_bucket\n create_collection(request, bucket_id)\n File \"/home/travis/build/Kinto/kinto/kinto/views/buckets.py\", line 123, in create_collection\n already_created[collection_uri] = collection\nUnboundLocalError: local variable 'collection' referenced before assignment;\n```\n\n", "before_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid import httpexceptions\nfrom pyramid.settings import asbool\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request, reapply_cors\nfrom cliquet.storage import exceptions as storage_exceptions\n\nfrom kinto.authorization import RouteFactory\nfrom kinto.views import NameGenerator\nfrom kinto.views.collections import Collection\n\n\[email protected](name='bucket',\n collection_methods=('GET', 'POST'),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.model.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.model.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='group',\n parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.model.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n # Do nothing if current request will already create the bucket.\n if bucket_put:\n return\n\n # Do not 
intent to create multiple times per request (e.g. in batch).\n already_created = request.bound_data.setdefault('buckets', {})\n if bucket_id in already_created:\n return\n\n # Fake context to instantiate a Bucket resource.\n context = RouteFactory(request)\n context.get_permission_object_id = lambda r, i: '/buckets/%s' % bucket_id\n resource = Bucket(request, context)\n try:\n bucket = resource.model.create_record({'id': bucket_id})\n except storage_exceptions.UnicityError as e:\n bucket = e.record\n finally:\n already_created[bucket_id] = bucket\n\n\ndef create_collection(request, bucket_id):\n # Do nothing if current request does not involve a collection.\n subpath = request.matchdict.get('subpath')\n if not (subpath and subpath.startswith('collections/')):\n return\n\n collection_id = subpath.split('/')[1]\n collection_uri = '/buckets/%s/collections/%s' % (bucket_id, collection_id)\n\n # Do not intent to create multiple times per request (e.g. in batch).\n already_created = request.bound_data.setdefault('collections', {})\n if collection_uri in already_created:\n return\n\n # Do nothing if current request will already create the collection.\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if collection_put:\n return\n\n # Fake context to instantiate a Collection resource.\n context = RouteFactory(request)\n context.get_permission_object_id = lambda r, i: collection_uri\n\n backup = request.matchdict\n request.matchdict = dict(bucket_id=bucket_id,\n id=collection_id,\n **request.matchdict)\n resource = Collection(request, context)\n try:\n collection = resource.model.create_record({'id': collection_id})\n except storage_exceptions.UnicityError as e:\n collection = e.record\n finally:\n already_created[collection_uri] = collection\n request.matchdict = backup\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\n@view_config(route_name='default_bucket_collection',\n permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if request.method.lower() == 'options':\n path = request.path.replace('default', 'unknown')\n subrequest = build_request(request, {\n 'method': 'OPTIONS',\n 'path': path\n })\n return request.invoke_subrequest(subrequest)\n\n if getattr(request, 'prefixed_userid', None) is None:\n # Pass through the forbidden_view_config\n raise httpexceptions.HTTPForbidden()\n\n settings = request.registry.settings\n\n if asbool(settings['readonly']):\n raise httpexceptions.HTTPMethodNotAllowed()\n\n hmac_secret = settings['userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('/buckets/default', '/buckets/%s' % bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n subrequest.bound_data = request.bound_data\n\n try:\n response = request.invoke_subrequest(subrequest)\n except httpexceptions.HTTPException as error:\n if error.content_type == 'application/json':\n response = reapply_cors(subrequest, error)\n else:\n # Ask the upper level to format the error.\n raise error\n return response\n", "path": "kinto/views/buckets.py"}], 
"after_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid import httpexceptions\nfrom pyramid.settings import asbool\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request, reapply_cors\nfrom cliquet.storage import exceptions as storage_exceptions\n\nfrom kinto.authorization import RouteFactory\nfrom kinto.views import NameGenerator\nfrom kinto.views.collections import Collection\n\n\[email protected](name='bucket',\n collection_methods=('GET', 'POST'),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.model.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.model.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='group',\n parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.model.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n # Do nothing if current request will already create the bucket.\n if bucket_put:\n return\n\n # Do not intent to create multiple times per request (e.g. in batch).\n already_created = request.bound_data.setdefault('buckets', {})\n if bucket_id in already_created:\n return\n\n # Fake context to instantiate a Bucket resource.\n context = RouteFactory(request)\n context.get_permission_object_id = lambda r, i: '/buckets/%s' % bucket_id\n resource = Bucket(request, context)\n try:\n bucket = resource.model.create_record({'id': bucket_id})\n except storage_exceptions.UnicityError as e:\n bucket = e.record\n already_created[bucket_id] = bucket\n\n\ndef create_collection(request, bucket_id):\n # Do nothing if current request does not involve a collection.\n subpath = request.matchdict.get('subpath')\n if not (subpath and subpath.startswith('collections/')):\n return\n\n collection_id = subpath.split('/')[1]\n collection_uri = '/buckets/%s/collections/%s' % (bucket_id, collection_id)\n\n # Do not intent to create multiple times per request (e.g. 
in batch).\n already_created = request.bound_data.setdefault('collections', {})\n if collection_uri in already_created:\n return\n\n # Do nothing if current request will already create the collection.\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if collection_put:\n return\n\n # Fake context to instantiate a Collection resource.\n context = RouteFactory(request)\n context.get_permission_object_id = lambda r, i: collection_uri\n\n backup = request.matchdict\n request.matchdict = dict(bucket_id=bucket_id,\n id=collection_id,\n **request.matchdict)\n resource = Collection(request, context)\n try:\n collection = resource.model.create_record({'id': collection_id})\n except storage_exceptions.UnicityError as e:\n collection = e.record\n already_created[collection_uri] = collection\n request.matchdict = backup\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\n@view_config(route_name='default_bucket_collection',\n permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if request.method.lower() == 'options':\n path = request.path.replace('default', 'unknown')\n subrequest = build_request(request, {\n 'method': 'OPTIONS',\n 'path': path\n })\n return request.invoke_subrequest(subrequest)\n\n if getattr(request, 'prefixed_userid', None) is None:\n # Pass through the forbidden_view_config\n raise httpexceptions.HTTPForbidden()\n\n settings = request.registry.settings\n\n if asbool(settings['readonly']):\n raise httpexceptions.HTTPMethodNotAllowed()\n\n hmac_secret = settings['userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('/buckets/default', '/buckets/%s' % bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n subrequest.bound_data = request.bound_data\n\n try:\n response = request.invoke_subrequest(subrequest)\n except httpexceptions.HTTPException as error:\n if error.content_type == 'application/json':\n response = reapply_cors(subrequest, error)\n else:\n # Ask the upper level to format the error.\n raise error\n return response\n", "path": "kinto/views/buckets.py"}]} | 2,156 | 188 |
gh_patches_debug_9901 | rasdani/github-patches | git_diff | bridgecrewio__checkov-1062 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AZURE_12: Retention policy of 0 is not supported
**Describe the bug**
When setting the `retention_period.days` value on `azurerm_network_watcher_flow_log` to `0`, `CKV_AZURE_12` still fails. `0` is the value to signify indefinite/forever retention.
**To Reproduce**
Steps to reproduce the behavior:
1. Create an `azurerm_network_watcher_flow_log` resource
2. Set
```
retention_policy {
enabled = true
days = 0
}
```
3. Run `checkov`
4. Receive failure for `CKV_AZURE_12`
**Expected behavior**
`0` is an accepted value as documented at https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview#how-logging-works
**Desktop (please complete the following information):**
- OS: Linux
- Checkov Version: 2.0.26
**Additional context**
At https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py#L17 likely need to adjust the innermost `if` statement to something like:
```python
if retention_in_days is not None and (retention_days == 0 or retention_days >= 90):
```
Happy to open a pull request if this is an acceptable solution.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.common.util.type_forcers import force_int
3 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceCheck
4
5
6 class NetworkWatcherFlowLogPeriod(BaseResourceCheck):
7 def __init__(self):
8 name = "Ensure that Network Security Group Flow Log retention period is 'greater than 90 days'"
9 id = "CKV_AZURE_12"
10 supported_resources = ['azurerm_network_watcher_flow_log']
11 categories = [CheckCategories.LOGGING]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
13
14 def scan_resource_conf(self, conf):
15 if 'enabled' in conf and conf['enabled'][0]:
16 retention_block = conf['retention_policy'][0]
17 if retention_block['enabled'][0]:
18 retention_in_days = force_int(retention_block['days'][0])
19 if retention_in_days and retention_in_days >= 90:
20 return CheckResult.PASSED
21 return CheckResult.FAILED
22
23
24 check = NetworkWatcherFlowLogPeriod()
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py b/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py
--- a/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py
+++ b/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py
@@ -16,7 +16,7 @@
retention_block = conf['retention_policy'][0]
if retention_block['enabled'][0]:
retention_in_days = force_int(retention_block['days'][0])
- if retention_in_days and retention_in_days >= 90:
+ if retention_in_days is not None and (retention_in_days == 0 or retention_in_days >= 90):
return CheckResult.PASSED
return CheckResult.FAILED
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py b/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py\n--- a/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py\n+++ b/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py\n@@ -16,7 +16,7 @@\n retention_block = conf['retention_policy'][0]\n if retention_block['enabled'][0]:\n retention_in_days = force_int(retention_block['days'][0])\n- if retention_in_days and retention_in_days >= 90:\n+ if retention_in_days is not None and (retention_in_days == 0 or retention_in_days >= 90):\n return CheckResult.PASSED\n return CheckResult.FAILED\n", "issue": "CKV_AZURE_12: Retention policy of 0 is not supported\n**Describe the bug**\r\nWhen setting the `retention_period.days` value on `azurerm_network_watcher_flow_log` to `0`, `CKV_AZURE_12` still fails. `0` is the value to signify indefinite/forever retention.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Create an `azurerm_network_watcher_flow_log` resource\r\n2. Set\r\n```\r\n retention_policy {\r\n enabled = true\r\n days = 0\r\n }\r\n```\r\n3. Run `checkov`\r\n4. Receive failure for `CKV_AZURE_12`\r\n\r\n**Expected behavior**\r\n`0` is an accepted value as documented at https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview#how-logging-works\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Linux\r\n - Checkov Version: 2.0.26\r\n\r\n\r\n**Additional context**\r\nAt https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py#L17 likely need to adjust the innermost `if` statement to something like:\r\n\r\n```python\r\nif retention_in_days is not None and (retention_days == 0 or retention_days >= 90):\r\n```\r\n\r\nHappy to open a pull request if this is an acceptable solution.\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.util.type_forcers import force_int\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceCheck\n\n\nclass NetworkWatcherFlowLogPeriod(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure that Network Security Group Flow Log retention period is 'greater than 90 days'\"\n id = \"CKV_AZURE_12\"\n supported_resources = ['azurerm_network_watcher_flow_log']\n categories = [CheckCategories.LOGGING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'enabled' in conf and conf['enabled'][0]:\n retention_block = conf['retention_policy'][0]\n if retention_block['enabled'][0]:\n retention_in_days = force_int(retention_block['days'][0])\n if retention_in_days and retention_in_days >= 90:\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = NetworkWatcherFlowLogPeriod()\n", "path": "checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.util.type_forcers import force_int\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceCheck\n\n\nclass NetworkWatcherFlowLogPeriod(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure that Network Security Group Flow Log retention period is 'greater than 90 days'\"\n id = \"CKV_AZURE_12\"\n supported_resources = 
['azurerm_network_watcher_flow_log']\n categories = [CheckCategories.LOGGING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'enabled' in conf and conf['enabled'][0]:\n retention_block = conf['retention_policy'][0]\n if retention_block['enabled'][0]:\n retention_in_days = force_int(retention_block['days'][0])\n if retention_in_days is not None and (retention_in_days == 0 or retention_in_days >= 90):\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = NetworkWatcherFlowLogPeriod()\n", "path": "checkov/terraform/checks/resource/azure/NetworkWatcherFlowLogPeriod.py"}]} | 867 | 182 |
gh_patches_debug_30688 | rasdani/github-patches | git_diff | joke2k__faker-592 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reseed code broke in 0.8 unnecessarily
Commit ea4f189bbff1925d7a7e2d7cbc6e91e2e8a9a3f3 changed the name of the `random.Random()` object in `faker.generator` from `random` to `mod_random`, re-exposing the `random` module as the name `faker.generator.random`. This broke code like:
```py
from faker.generator import random
random.seed(1)
```
which is basically what my `pytest-randomly` plugin does. At first I thought this was #586 but then I saw that every run was broken. The failure is silent because `random` is still a name in `faker.generator`, it just now points at the global `random` module 😱
I suggest just doing some renaming in the module to fix this so old code continues to work.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/utils/distribution.py`
Content:
```
1 # coding=utf-8
2
3 import bisect
4 from faker.generator import mod_random
5
6 def random_sample(random=None):
7 if random is None:
8 random = mod_random
9 return random.uniform(0.0, 1.0)
10
11
12 def cumsum(it):
13 total = 0
14 for x in it:
15 total += x
16 yield total
17
18
19 def choice_distribution(a, p, random=None):
20 if random is None:
21 random = mod_random
22
23 assert len(a) == len(p)
24
25 if hasattr(random, 'choices'):
26 return random.choices(a, weights=p)[0]
27 else:
28 cdf = list(cumsum(p))
29 normal = cdf[-1]
30 cdf2 = [float(i) / float(normal) for i in cdf]
31 uniform_sample = random_sample(random=random)
32 idx = bisect.bisect_right(cdf2, uniform_sample)
33 return a[idx]
34
```
Path: `faker/generator.py`
Content:
```
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4
5 import re
6 import random
7
8
9 _re_token = re.compile(r'\{\{(\s?)(\w+)(\s?)\}\}')
10 mod_random = random.Random()
11
12
13 class Generator(object):
14
15 __config = {}
16
17 def __init__(self, **config):
18 self.providers = []
19 self.__config = dict(
20 list(self.__config.items()) + list(config.items()))
21 self.__random = mod_random
22
23 def add_provider(self, provider):
24
25 if type(provider) is type:
26 provider = provider(self)
27
28 self.providers.insert(0, provider)
29
30 for method_name in dir(provider):
31 # skip 'private' method
32 if method_name.startswith('_'):
33 continue
34
35 faker_function = getattr(provider, method_name)
36
37 if hasattr(faker_function, '__call__') or \
38 isinstance(faker_function, (classmethod, staticmethod)):
39 # add all faker method to generator
40 self.set_formatter(method_name, faker_function)
41
42 def provider(self, name):
43 try:
44 lst = [p for p in self.get_providers()
45 if p.__provider__ == name.lower()]
46 return lst[0]
47 except IndexError:
48 return None
49
50 def get_providers(self):
51 """Returns added providers."""
52 return self.providers
53
54 @property
55 def random(self):
56 return self.__random
57
58 def seed_instance(self, seed=None):
59 """Calls random.seed"""
60 if self.__random == mod_random:
61 # create per-instance random obj when first time seed_instance() is called
62 self.__random = random.Random()
63 self.__random.seed(seed)
64
65 @classmethod
66 def seed(cls, seed=None):
67 mod_random.seed(seed)
68
69 def format(self, formatter, *args, **kwargs):
70 """
71 This is a secure way to make a fake from another Provider.
72 """
73 # TODO: data export?
74 return self.get_formatter(formatter)(*args, **kwargs)
75
76 def get_formatter(self, formatter):
77 try:
78 return getattr(self, formatter)
79 except AttributeError:
80 if 'locale' in self.__config:
81 msg = 'Unknown formatter "{0}" with locale "{1}"'.format(
82 formatter, self.__config['locale']
83 )
84 else:
85 raise AttributeError('Unknown formatter "{0}"'.format(
86 formatter
87 ))
88 raise AttributeError(msg)
89
90 def set_formatter(self, name, method):
91 """
92 This method adds a provider method to generator.
93 Override this method to add some decoration or logging stuff.
94 """
95 setattr(self, name, method)
96
97 def parse(self, text):
98 """
99 Replaces tokens (like '{{ tokenName }}' or '{{tokenName}}')
100 with the result from the token method call.
101 """
102 return _re_token.sub(self.__format_token, text)
103
104 def __format_token(self, matches):
105 formatter = list(matches.groups())
106 formatter[1] = self.format(formatter[1])
107 return ''.join(formatter)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/generator.py b/faker/generator.py
--- a/faker/generator.py
+++ b/faker/generator.py
@@ -3,11 +3,12 @@
from __future__ import unicode_literals
import re
-import random
+import random as random_module
_re_token = re.compile(r'\{\{(\s?)(\w+)(\s?)\}\}')
-mod_random = random.Random()
+random = random_module.Random()
+mod_random = random # compat with name released in 0.8
class Generator(object):
@@ -18,7 +19,7 @@
self.providers = []
self.__config = dict(
list(self.__config.items()) + list(config.items()))
- self.__random = mod_random
+ self.__random = random
def add_provider(self, provider):
@@ -57,14 +58,14 @@
def seed_instance(self, seed=None):
"""Calls random.seed"""
- if self.__random == mod_random:
+ if self.__random == random:
# create per-instance random obj when first time seed_instance() is called
- self.__random = random.Random()
+ self.__random = random_module.Random()
self.__random.seed(seed)
@classmethod
def seed(cls, seed=None):
- mod_random.seed(seed)
+ random.seed(seed)
def format(self, formatter, *args, **kwargs):
"""
diff --git a/faker/utils/distribution.py b/faker/utils/distribution.py
--- a/faker/utils/distribution.py
+++ b/faker/utils/distribution.py
@@ -1,7 +1,8 @@
# coding=utf-8
import bisect
-from faker.generator import mod_random
+from faker.generator import random as mod_random
+
def random_sample(random=None):
if random is None:
| {"golden_diff": "diff --git a/faker/generator.py b/faker/generator.py\n--- a/faker/generator.py\n+++ b/faker/generator.py\n@@ -3,11 +3,12 @@\n from __future__ import unicode_literals\n \n import re\n-import random\n+import random as random_module\n \n \n _re_token = re.compile(r'\\{\\{(\\s?)(\\w+)(\\s?)\\}\\}')\n-mod_random = random.Random()\n+random = random_module.Random()\n+mod_random = random # compat with name released in 0.8\n \n \n class Generator(object):\n@@ -18,7 +19,7 @@\n self.providers = []\n self.__config = dict(\n list(self.__config.items()) + list(config.items()))\n- self.__random = mod_random\n+ self.__random = random\n \n def add_provider(self, provider):\n \n@@ -57,14 +58,14 @@\n \n def seed_instance(self, seed=None):\n \"\"\"Calls random.seed\"\"\"\n- if self.__random == mod_random:\n+ if self.__random == random:\n # create per-instance random obj when first time seed_instance() is called\n- self.__random = random.Random()\n+ self.__random = random_module.Random()\n self.__random.seed(seed)\n \n @classmethod\n def seed(cls, seed=None):\n- mod_random.seed(seed)\n+ random.seed(seed)\n \n def format(self, formatter, *args, **kwargs):\n \"\"\"\ndiff --git a/faker/utils/distribution.py b/faker/utils/distribution.py\n--- a/faker/utils/distribution.py\n+++ b/faker/utils/distribution.py\n@@ -1,7 +1,8 @@\n # coding=utf-8\n \n import bisect\n-from faker.generator import mod_random\n+from faker.generator import random as mod_random\n+\n \n def random_sample(random=None):\n if random is None:\n", "issue": "Reseed code broke in 0.8 unnecessarily\nCommit ea4f189bbff1925d7a7e2d7cbc6e91e2e8a9a3f3 changed the name of the `random.Random()` object in `faker.generator` from `random` to `mod_random`, re-exposing the `random` module as the name `faker.generator.random`. This broke code like:\r\n\r\n```py\r\nfrom faker.generator import random\r\nrandom.seed(1)\r\n``` \r\n\r\nwhich is basically what my `pytest-randomly` plugin does. At first I thought this was #586 but then I saw that every run was broken. 
The failure is silent because `random` is still a name in `faker.generator`, it just now points at the global `random` module \ud83d\ude31\r\n\r\nI suggest just doing some renaming in the module to fix this so old code continues to work.\n", "before_files": [{"content": "# coding=utf-8\n\nimport bisect\nfrom faker.generator import mod_random\n\ndef random_sample(random=None):\n if random is None:\n random = mod_random\n return random.uniform(0.0, 1.0)\n\n\ndef cumsum(it):\n total = 0\n for x in it:\n total += x\n yield total\n\n\ndef choice_distribution(a, p, random=None):\n if random is None:\n random = mod_random\n\n assert len(a) == len(p)\n\n if hasattr(random, 'choices'):\n return random.choices(a, weights=p)[0]\n else:\n cdf = list(cumsum(p))\n normal = cdf[-1]\n cdf2 = [float(i) / float(normal) for i in cdf]\n uniform_sample = random_sample(random=random)\n idx = bisect.bisect_right(cdf2, uniform_sample)\n return a[idx]\n", "path": "faker/utils/distribution.py"}, {"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nimport re\nimport random\n\n\n_re_token = re.compile(r'\\{\\{(\\s?)(\\w+)(\\s?)\\}\\}')\nmod_random = random.Random()\n\n\nclass Generator(object):\n\n __config = {}\n\n def __init__(self, **config):\n self.providers = []\n self.__config = dict(\n list(self.__config.items()) + list(config.items()))\n self.__random = mod_random\n\n def add_provider(self, provider):\n\n if type(provider) is type:\n provider = provider(self)\n\n self.providers.insert(0, provider)\n\n for method_name in dir(provider):\n # skip 'private' method\n if method_name.startswith('_'):\n continue\n\n faker_function = getattr(provider, method_name)\n\n if hasattr(faker_function, '__call__') or \\\n isinstance(faker_function, (classmethod, staticmethod)):\n # add all faker method to generator\n self.set_formatter(method_name, faker_function)\n\n def provider(self, name):\n try:\n lst = [p for p in self.get_providers()\n if p.__provider__ == name.lower()]\n return lst[0]\n except IndexError:\n return None\n\n def get_providers(self):\n \"\"\"Returns added providers.\"\"\"\n return self.providers\n\n @property\n def random(self):\n return self.__random\n\n def seed_instance(self, seed=None):\n \"\"\"Calls random.seed\"\"\"\n if self.__random == mod_random:\n # create per-instance random obj when first time seed_instance() is called\n self.__random = random.Random()\n self.__random.seed(seed)\n\n @classmethod\n def seed(cls, seed=None):\n mod_random.seed(seed)\n\n def format(self, formatter, *args, **kwargs):\n \"\"\"\n This is a secure way to make a fake from another Provider.\n \"\"\"\n # TODO: data export?\n return self.get_formatter(formatter)(*args, **kwargs)\n\n def get_formatter(self, formatter):\n try:\n return getattr(self, formatter)\n except AttributeError:\n if 'locale' in self.__config:\n msg = 'Unknown formatter \"{0}\" with locale \"{1}\"'.format(\n formatter, self.__config['locale']\n )\n else:\n raise AttributeError('Unknown formatter \"{0}\"'.format(\n formatter\n ))\n raise AttributeError(msg)\n\n def set_formatter(self, name, method):\n \"\"\"\n This method adds a provider method to generator.\n Override this method to add some decoration or logging stuff.\n \"\"\"\n setattr(self, name, method)\n\n def parse(self, text):\n \"\"\"\n Replaces tokens (like '{{ tokenName }}' or '{{tokenName}}')\n with the result from the token method call.\n \"\"\"\n return _re_token.sub(self.__format_token, text)\n\n def __format_token(self, matches):\n formatter = 
list(matches.groups())\n formatter[1] = self.format(formatter[1])\n return ''.join(formatter)\n", "path": "faker/generator.py"}], "after_files": [{"content": "# coding=utf-8\n\nimport bisect\nfrom faker.generator import random as mod_random\n\n\ndef random_sample(random=None):\n if random is None:\n random = mod_random\n return random.uniform(0.0, 1.0)\n\n\ndef cumsum(it):\n total = 0\n for x in it:\n total += x\n yield total\n\n\ndef choice_distribution(a, p, random=None):\n if random is None:\n random = mod_random\n\n assert len(a) == len(p)\n\n if hasattr(random, 'choices'):\n return random.choices(a, weights=p)[0]\n else:\n cdf = list(cumsum(p))\n normal = cdf[-1]\n cdf2 = [float(i) / float(normal) for i in cdf]\n uniform_sample = random_sample(random=random)\n idx = bisect.bisect_right(cdf2, uniform_sample)\n return a[idx]\n", "path": "faker/utils/distribution.py"}, {"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nimport re\nimport random as random_module\n\n\n_re_token = re.compile(r'\\{\\{(\\s?)(\\w+)(\\s?)\\}\\}')\nrandom = random_module.Random()\nmod_random = random # compat with name released in 0.8\n\n\nclass Generator(object):\n\n __config = {}\n\n def __init__(self, **config):\n self.providers = []\n self.__config = dict(\n list(self.__config.items()) + list(config.items()))\n self.__random = random\n\n def add_provider(self, provider):\n\n if type(provider) is type:\n provider = provider(self)\n\n self.providers.insert(0, provider)\n\n for method_name in dir(provider):\n # skip 'private' method\n if method_name.startswith('_'):\n continue\n\n faker_function = getattr(provider, method_name)\n\n if hasattr(faker_function, '__call__') or \\\n isinstance(faker_function, (classmethod, staticmethod)):\n # add all faker method to generator\n self.set_formatter(method_name, faker_function)\n\n def provider(self, name):\n try:\n lst = [p for p in self.get_providers()\n if p.__provider__ == name.lower()]\n return lst[0]\n except IndexError:\n return None\n\n def get_providers(self):\n \"\"\"Returns added providers.\"\"\"\n return self.providers\n\n @property\n def random(self):\n return self.__random\n\n def seed_instance(self, seed=None):\n \"\"\"Calls random.seed\"\"\"\n if self.__random == random:\n # create per-instance random obj when first time seed_instance() is called\n self.__random = random_module.Random()\n self.__random.seed(seed)\n\n @classmethod\n def seed(cls, seed=None):\n random.seed(seed)\n\n def format(self, formatter, *args, **kwargs):\n \"\"\"\n This is a secure way to make a fake from another Provider.\n \"\"\"\n # TODO: data export?\n return self.get_formatter(formatter)(*args, **kwargs)\n\n def get_formatter(self, formatter):\n try:\n return getattr(self, formatter)\n except AttributeError:\n if 'locale' in self.__config:\n msg = 'Unknown formatter \"{0}\" with locale \"{1}\"'.format(\n formatter, self.__config['locale']\n )\n else:\n raise AttributeError('Unknown formatter \"{0}\"'.format(\n formatter\n ))\n raise AttributeError(msg)\n\n def set_formatter(self, name, method):\n \"\"\"\n This method adds a provider method to generator.\n Override this method to add some decoration or logging stuff.\n \"\"\"\n setattr(self, name, method)\n\n def parse(self, text):\n \"\"\"\n Replaces tokens (like '{{ tokenName }}' or '{{tokenName}}')\n with the result from the token method call.\n \"\"\"\n return _re_token.sub(self.__format_token, text)\n\n def __format_token(self, matches):\n formatter = list(matches.groups())\n formatter[1] = 
self.format(formatter[1])\n return ''.join(formatter)\n", "path": "faker/generator.py"}]} | 1,596 | 407 |
gh_patches_debug_4387 | rasdani/github-patches | git_diff | falconry__falcon-1593 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Official support for CPython 3.8
CPython 3.8.0 was [released Oct 14, 2019](https://www.python.org/downloads/release/python-380/).
Make Falcon officially support CPython 3.8:
- [ ] README
- [ ] Docker images
- [ ] Docs
- [ ] Tox
- [ ] Travis
- [ ] setup.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import glob
2 import imp
3 import io
4 import os
5 from os import path
6 import re
7 import sys
8
9 from setuptools import Extension, find_packages, setup
10
11 MYDIR = path.abspath(os.path.dirname(__file__))
12
13 VERSION = imp.load_source('version', path.join('.', 'falcon', 'version.py'))
14 VERSION = VERSION.__version__
15
16 REQUIRES = []
17
18 try:
19 sys.pypy_version_info
20 PYPY = True
21 except AttributeError:
22 PYPY = False
23
24 if PYPY:
25 CYTHON = False
26 else:
27 try:
28 from Cython.Distutils import build_ext
29 CYTHON = True
30 except ImportError:
31 # TODO(kgriffs): pip now ignores all output, so the user
32 # may not see this message. See also:
33 #
34 # https://github.com/pypa/pip/issues/2732
35 #
36 print('\nNOTE: Cython not installed. '
37 'Falcon will still work fine, but may run '
38 'a bit slower.\n')
39 CYTHON = False
40
41 if CYTHON:
42 def list_modules(dirname):
43 filenames = glob.glob(path.join(dirname, '*.py'))
44
45 module_names = []
46 for name in filenames:
47 module, ext = path.splitext(path.basename(name))
48 if module != '__init__':
49 module_names.append(module)
50
51 return module_names
52
53 package_names = [
54 'falcon',
55 'falcon.media',
56 'falcon.routing',
57 'falcon.util',
58 'falcon.vendor.mimeparse',
59 ]
60 ext_modules = [
61 Extension(
62 package + '.' + module,
63 [path.join(*(package.split('.') + [module + '.py']))]
64 )
65 for package in package_names
66 for module in list_modules(path.join(MYDIR, *package.split('.')))
67 ]
68
69 cmdclass = {'build_ext': build_ext}
70
71 else:
72 cmdclass = {}
73 ext_modules = []
74
75
76 def load_description():
77 in_patron_list = False
78 in_patron_replacement = False
79 in_raw = False
80
81 description_lines = []
82
83 # NOTE(kgriffs): PyPI does not support the raw directive
84 for readme_line in io.open('README.rst', 'r', encoding='utf-8'):
85
86 # NOTE(vytas): The patron list largely builds upon raw sections
87 if readme_line.startswith('.. Patron list starts'):
88 in_patron_list = True
89 in_patron_replacement = True
90 continue
91 elif in_patron_list:
92 if not readme_line.strip():
93 in_patron_replacement = False
94 elif in_patron_replacement:
95 description_lines.append(readme_line.lstrip())
96 if readme_line.startswith('.. Patron list ends'):
97 in_patron_list = False
98 continue
99 elif readme_line.startswith('.. raw::'):
100 in_raw = True
101 elif in_raw:
102 if readme_line and not re.match(r'\s', readme_line):
103 in_raw = False
104
105 if not in_raw:
106 description_lines.append(readme_line)
107
108 return ''.join(description_lines)
109
110
111 setup(
112 name='falcon',
113 version=VERSION,
114 description='An unladen web framework for building APIs and app backends.',
115 long_description=load_description(),
116 long_description_content_type='text/x-rst',
117 classifiers=[
118 'Development Status :: 5 - Production/Stable',
119 'Environment :: Web Environment',
120 'Natural Language :: English',
121 'Intended Audience :: Developers',
122 'Intended Audience :: System Administrators',
123 'License :: OSI Approved :: Apache Software License',
124 'Operating System :: MacOS :: MacOS X',
125 'Operating System :: Microsoft :: Windows',
126 'Operating System :: POSIX',
127 'Topic :: Internet :: WWW/HTTP :: WSGI',
128 'Topic :: Software Development :: Libraries :: Application Frameworks',
129 'Programming Language :: Python',
130 'Programming Language :: Python :: Implementation :: CPython',
131 'Programming Language :: Python :: Implementation :: PyPy',
132 'Programming Language :: Python :: 3',
133 'Programming Language :: Python :: 3.5',
134 'Programming Language :: Python :: 3.6',
135 'Programming Language :: Python :: 3.7',
136 ],
137 keywords='wsgi web api framework rest http cloud',
138 author='Kurt Griffiths',
139 author_email='[email protected]',
140 url='https://falconframework.org',
141 license='Apache 2.0',
142 packages=find_packages(exclude=['tests']),
143 include_package_data=True,
144 zip_safe=False,
145 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
146 install_requires=REQUIRES,
147 cmdclass=cmdclass,
148 ext_modules=ext_modules,
149 tests_require=['testtools', 'requests', 'pyyaml', 'pytest', 'pytest-runner'],
150 entry_points={
151 'console_scripts': [
152 'falcon-bench = falcon.cmd.bench:main',
153 'falcon-print-routes = falcon.cmd.print_routes:main'
154 ]
155 }
156 )
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -133,6 +133,7 @@
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
],
keywords='wsgi web api framework rest http cloud',
author='Kurt Griffiths',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -133,6 +133,7 @@\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3.8',\n ],\n keywords='wsgi web api framework rest http cloud',\n author='Kurt Griffiths',\n", "issue": "Official support for CPython 3.8\nCPython 3.8.0 was [released Oct 14, 2019](https://www.python.org/downloads/release/python-380/).\r\n\r\nMake Falcon offically support CPython 3.8:\r\n- [ ] README\r\n- [ ] Docker images\r\n- [ ] Docs\r\n- [ ] Tox\r\n- [ ] Travis\r\n- [ ] setup.py\n", "before_files": [{"content": "import glob\nimport imp\nimport io\nimport os\nfrom os import path\nimport re\nimport sys\n\nfrom setuptools import Extension, find_packages, setup\n\nMYDIR = path.abspath(os.path.dirname(__file__))\n\nVERSION = imp.load_source('version', path.join('.', 'falcon', 'version.py'))\nVERSION = VERSION.__version__\n\nREQUIRES = []\n\ntry:\n sys.pypy_version_info\n PYPY = True\nexcept AttributeError:\n PYPY = False\n\nif PYPY:\n CYTHON = False\nelse:\n try:\n from Cython.Distutils import build_ext\n CYTHON = True\n except ImportError:\n # TODO(kgriffs): pip now ignores all output, so the user\n # may not see this message. See also:\n #\n # https://github.com/pypa/pip/issues/2732\n #\n print('\\nNOTE: Cython not installed. '\n 'Falcon will still work fine, but may run '\n 'a bit slower.\\n')\n CYTHON = False\n\nif CYTHON:\n def list_modules(dirname):\n filenames = glob.glob(path.join(dirname, '*.py'))\n\n module_names = []\n for name in filenames:\n module, ext = path.splitext(path.basename(name))\n if module != '__init__':\n module_names.append(module)\n\n return module_names\n\n package_names = [\n 'falcon',\n 'falcon.media',\n 'falcon.routing',\n 'falcon.util',\n 'falcon.vendor.mimeparse',\n ]\n ext_modules = [\n Extension(\n package + '.' + module,\n [path.join(*(package.split('.') + [module + '.py']))]\n )\n for package in package_names\n for module in list_modules(path.join(MYDIR, *package.split('.')))\n ]\n\n cmdclass = {'build_ext': build_ext}\n\nelse:\n cmdclass = {}\n ext_modules = []\n\n\ndef load_description():\n in_patron_list = False\n in_patron_replacement = False\n in_raw = False\n\n description_lines = []\n\n # NOTE(kgriffs): PyPI does not support the raw directive\n for readme_line in io.open('README.rst', 'r', encoding='utf-8'):\n\n # NOTE(vytas): The patron list largely builds upon raw sections\n if readme_line.startswith('.. Patron list starts'):\n in_patron_list = True\n in_patron_replacement = True\n continue\n elif in_patron_list:\n if not readme_line.strip():\n in_patron_replacement = False\n elif in_patron_replacement:\n description_lines.append(readme_line.lstrip())\n if readme_line.startswith('.. Patron list ends'):\n in_patron_list = False\n continue\n elif readme_line.startswith('.. 
raw::'):\n in_raw = True\n elif in_raw:\n if readme_line and not re.match(r'\\s', readme_line):\n in_raw = False\n\n if not in_raw:\n description_lines.append(readme_line)\n\n return ''.join(description_lines)\n\n\nsetup(\n name='falcon',\n version=VERSION,\n description='An unladen web framework for building APIs and app backends.',\n long_description=load_description(),\n long_description_content_type='text/x-rst',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Natural Language :: English',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n keywords='wsgi web api framework rest http cloud',\n author='Kurt Griffiths',\n author_email='[email protected]',\n url='https://falconframework.org',\n license='Apache 2.0',\n packages=find_packages(exclude=['tests']),\n include_package_data=True,\n zip_safe=False,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=REQUIRES,\n cmdclass=cmdclass,\n ext_modules=ext_modules,\n tests_require=['testtools', 'requests', 'pyyaml', 'pytest', 'pytest-runner'],\n entry_points={\n 'console_scripts': [\n 'falcon-bench = falcon.cmd.bench:main',\n 'falcon-print-routes = falcon.cmd.print_routes:main'\n ]\n }\n)\n", "path": "setup.py"}], "after_files": [{"content": "import glob\nimport imp\nimport io\nimport os\nfrom os import path\nimport re\nimport sys\n\nfrom setuptools import Extension, find_packages, setup\n\nMYDIR = path.abspath(os.path.dirname(__file__))\n\nVERSION = imp.load_source('version', path.join('.', 'falcon', 'version.py'))\nVERSION = VERSION.__version__\n\nREQUIRES = []\n\ntry:\n sys.pypy_version_info\n PYPY = True\nexcept AttributeError:\n PYPY = False\n\nif PYPY:\n CYTHON = False\nelse:\n try:\n from Cython.Distutils import build_ext\n CYTHON = True\n except ImportError:\n # TODO(kgriffs): pip now ignores all output, so the user\n # may not see this message. See also:\n #\n # https://github.com/pypa/pip/issues/2732\n #\n print('\\nNOTE: Cython not installed. '\n 'Falcon will still work fine, but may run '\n 'a bit slower.\\n')\n CYTHON = False\n\nif CYTHON:\n def list_modules(dirname):\n filenames = glob.glob(path.join(dirname, '*.py'))\n\n module_names = []\n for name in filenames:\n module, ext = path.splitext(path.basename(name))\n if module != '__init__':\n module_names.append(module)\n\n return module_names\n\n package_names = [\n 'falcon',\n 'falcon.media',\n 'falcon.routing',\n 'falcon.util',\n 'falcon.vendor.mimeparse',\n ]\n ext_modules = [\n Extension(\n package + '.' 
+ module,\n [path.join(*(package.split('.') + [module + '.py']))]\n )\n for package in package_names\n for module in list_modules(path.join(MYDIR, *package.split('.')))\n ]\n\n cmdclass = {'build_ext': build_ext}\n\nelse:\n cmdclass = {}\n ext_modules = []\n\n\ndef load_description():\n in_patron_list = False\n in_patron_replacement = False\n in_raw = False\n\n description_lines = []\n\n # NOTE(kgriffs): PyPI does not support the raw directive\n for readme_line in io.open('README.rst', 'r', encoding='utf-8'):\n\n # NOTE(vytas): The patron list largely builds upon raw sections\n if readme_line.startswith('.. Patron list starts'):\n in_patron_list = True\n in_patron_replacement = True\n continue\n elif in_patron_list:\n if not readme_line.strip():\n in_patron_replacement = False\n elif in_patron_replacement:\n description_lines.append(readme_line.lstrip())\n if readme_line.startswith('.. Patron list ends'):\n in_patron_list = False\n continue\n elif readme_line.startswith('.. raw::'):\n in_raw = True\n elif in_raw:\n if readme_line and not re.match(r'\\s', readme_line):\n in_raw = False\n\n if not in_raw:\n description_lines.append(readme_line)\n\n return ''.join(description_lines)\n\n\nsetup(\n name='falcon',\n version=VERSION,\n description='An unladen web framework for building APIs and app backends.',\n long_description=load_description(),\n long_description_content_type='text/x-rst',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Natural Language :: English',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n keywords='wsgi web api framework rest http cloud',\n author='Kurt Griffiths',\n author_email='[email protected]',\n url='https://falconframework.org',\n license='Apache 2.0',\n packages=find_packages(exclude=['tests']),\n include_package_data=True,\n zip_safe=False,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=REQUIRES,\n cmdclass=cmdclass,\n ext_modules=ext_modules,\n tests_require=['testtools', 'requests', 'pyyaml', 'pytest', 'pytest-runner'],\n entry_points={\n 'console_scripts': [\n 'falcon-bench = falcon.cmd.bench:main',\n 'falcon-print-routes = falcon.cmd.print_routes:main'\n ]\n }\n)\n", "path": "setup.py"}]} | 1,812 | 105 |
gh_patches_debug_19281 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-7251 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`TypeError: unsupported operand type(s) for |: 'KeyboardModifier' and 'Key'` with PySide 6.4.0.1 application
<!--
Welcome to the PyInstaller issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to PyInstaller
- For questions and general support, use the discussions forum.
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
3. When making a bug report, make sure you provide all required information. The easier it is for
maintainers to reproduce, the faster it'll be fixed.
-->
<!-- +++ ONLY TEXT +++ DO NOT POST IMAGES +++ -->
## Description of the issue
When running a frozen PySide6 application using PySide 6.4.0.1, the `|` operator between Qt.KeyboardModifier and Qt.Key stops working, for example:
```
Traceback (most recent call last):
File "script.py", line 28, in <module>
window = MainWindow()
File "script.py", line 18, in __init__
button_action.setShortcut(QKeySequence(Qt.AltModifier | Qt.Key_D))
TypeError: unsupported operand type(s) for |: 'KeyboardModifier' and 'Key'
[10109] Failed to execute script 'script' due to unhandled exception!
```
Note that the script used for this works just fine if invoked directly from a development environment, and it works both in a development and a frozen build if downgraded to 6.3.2.
Before filing this I made a repo to demonstrate the bug, but most of the info is included in this issue template already. https://github.com/twizmwazin/pyside-pyinstaller-unsupported-operand-bug
### Context information (for bug reports)
* Output of `pyinstaller --version`: ```5.6.2```
* Version of Python: Python 3.10.8
* Platform: Ubuntu 20.04 on WSL 2
* How you installed Python: Python 3.10.8 from source
* Did you also try this on another platform? Does it work there?
First discovered after upgrading to PySide 6.4.0.1 in Azure DevOps CI using Ubuntu 20.04 and MS's build of Python 3.10. Also reproduced on Windows 11 using Python 3.10.8 64-bit installed using the python.org installer.
* try the latest development version, using the following command:
```shell
pip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip
```
* follow *all* the instructions in our "If Things Go Wrong" Guide
(https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-Wrong) and
### Make sure [everything is packaged correctly](https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs#make-sure-everything-is-packaged-correctly)
* [x] start with clean installation
* [x] use the latest development version
* [x] Run your frozen program **from a command window (shell)** — instead of double-clicking on it
* [x] Package your program in **--onedir mode**
* [x] Package **without UPX**, say: use the option `--noupx` or set `upx=False` in your .spec-file
 * [x] Repackage your application in **verbose/debug mode**. For this, pass the option `--debug` to `pyi-makespec` or `pyinstaller` or use `EXE(..., debug=1, ...)` in your .spec file.
### A minimal example program which shows the error
```
#!/usr/bin/env python3

from PySide6.QtCore import Qt
from PySide6.QtGui import QAction, QKeySequence
from PySide6.QtWidgets import QApplication, QToolBar, QMainWindow


class MainWindow(QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()

        self.setWindowTitle("My Awesome App")

        toolbar = QToolBar("My main toolbar")
        self.addToolBar(toolbar)

        button_action = QAction("Your button", self)
        button_action.setShortcut(QKeySequence(Qt.AltModifier | Qt.Key_D))
        button_action.triggered.connect(self.onMyToolBarButtonClick)
        toolbar.addAction(button_action)

    def onMyToolBarButtonClick(self, s):
        print("click", s)


if __name__ == "__main__":
    app = QApplication()
    window = MainWindow()
    window.show()
    app.exec()
```
### Stacktrace / full error message
```
Traceback (most recent call last):
File "script.py", line 31, in <module>
window = MainWindow()
File "script.py", line 21, in __init__
button_action.setShortcut(QKeySequence(Qt.AltModifier | Qt.Key_D))
TypeError: unsupported operand type(s) for |: 'KeyboardModifier' and 'Key'
```
Please also see <https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs>
for more about what would help to solve the issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-PySide6.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2021-2022, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 from PyInstaller.utils.hooks.qt import get_qt_binaries, pyside6_library_info
13
14 # Only proceed if PySide6 can be imported.
15 if pyside6_library_info.version is not None:
16 hiddenimports = ['shiboken6', 'inspect']
17
18 # Collect required Qt binaries.
19 binaries = get_qt_binaries(pyside6_library_info)
20
```
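For context, a PyInstaller hook like the one above is an ordinary Python module whose module-level names (`hiddenimports`, `binaries`, `datas`, ...) tell PyInstaller about imports its static analysis cannot see. A minimal illustrative sketch, with a made-up package name rather than anything from this repository:

```python
# hook-mypackage.py -- illustrative sketch only (the package name is invented).
# PyInstaller reads these module-level variables when the analyzed application
# imports `mypackage`.
hiddenimports = ['mypackage._dynamic_backend']  # modules only loaded at run time
binaries = []                                   # extra shared libraries, if any
datas = []                                      # extra data files, if any
```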
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/hook-PySide6.py b/PyInstaller/hooks/hook-PySide6.py
--- a/PyInstaller/hooks/hook-PySide6.py
+++ b/PyInstaller/hooks/hook-PySide6.py
@@ -9,11 +9,17 @@
# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
#-----------------------------------------------------------------------------
+from PyInstaller.utils.hooks import is_module_satisfies
from PyInstaller.utils.hooks.qt import get_qt_binaries, pyside6_library_info
# Only proceed if PySide6 can be imported.
if pyside6_library_info.version is not None:
hiddenimports = ['shiboken6', 'inspect']
+ # Starting with PySide6 6.4.0, we need to collect PySide6.support.deprecated for | and & operators to work with
+ # Qt key and key modifiers enums. See #7249.
+ if is_module_satisfies("PySide6 >= 6.4.0"):
+ hiddenimports += ['PySide6.support.deprecated']
+
# Collect required Qt binaries.
binaries = get_qt_binaries(pyside6_library_info)
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-PySide6.py b/PyInstaller/hooks/hook-PySide6.py\n--- a/PyInstaller/hooks/hook-PySide6.py\n+++ b/PyInstaller/hooks/hook-PySide6.py\n@@ -9,11 +9,17 @@\n # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n #-----------------------------------------------------------------------------\n \n+from PyInstaller.utils.hooks import is_module_satisfies\n from PyInstaller.utils.hooks.qt import get_qt_binaries, pyside6_library_info\n \n # Only proceed if PySide6 can be imported.\n if pyside6_library_info.version is not None:\n hiddenimports = ['shiboken6', 'inspect']\n \n+ # Starting with PySide6 6.4.0, we need to collect PySide6.support.deprecated for | and & operators to work with\n+ # Qt key and key modifiers enums. See #7249.\n+ if is_module_satisfies(\"PySide6 >= 6.4.0\"):\n+ hiddenimports += ['PySide6.support.deprecated']\n+\n # Collect required Qt binaries.\n binaries = get_qt_binaries(pyside6_library_info)\n", "issue": "`TypeError: unsupported operand type(s) for |: 'KeyboardModifier' and 'Key'` with PySide 6.4.0.1 application\n<!--\r\nWelcome to the PyInstaller issue tracker! Before creating an issue, please heed the following:\r\n\r\n1. This tracker should only be used to report bugs and request features / enhancements to PyInstaller\r\n - For questions and general support, use the discussions forum.\r\n2. Use the search function before creating a new issue. Duplicates will be closed and directed to\r\n the original discussion.\r\n3. When making a bug report, make sure you provide all required information. The easier it is for\r\n maintainers to reproduce, the faster it'll be fixed.\r\n-->\r\n\r\n<!-- +++ ONLY TEXT +++ DO NOT POST IMAGES +++ -->\r\n\r\n## Description of the issue\r\n\r\nWhen running a frozen PySide6 application using PySide 6.4.0.1, the `|` operator between Qt.KeyboardModifier and Qt.Key stops working, for example:\r\n```\r\nTraceback (most recent call last):\r\n File \"script.py\", line 28, in <module>\r\n window = MainWindow()\r\n File \"script.py\", line 18, in __init__\r\n button_action.setShortcut(QKeySequence(Qt.AltModifier | Qt.Key_D))\r\nTypeError: unsupported operand type(s) for |: 'KeyboardModifier' and 'Key'\r\n[10109] Failed to execute script 'script' due to unhandled exception!\r\n```\r\n\r\nNote that the script used for this works just fine if invoked directly from a development environment, and it works both in a development and a frozen build if downgraded to 6.3.2.\r\n\r\nBefore filing this I made a repo to demonstrate the bug, but most of the info is included in this issue template already. https://github.com/twizmwazin/pyside-pyinstaller-unsupported-operand-bug\r\n\r\n### Context information (for bug reports)\r\n\r\n* Output of `pyinstaller --version`: ```5.6.2```\r\n* Version of Python: Python 3.10.8\r\n* Platform: Ubuntu 20.04 on WSL 2\r\n* How you installed Python: Python 3.10.8 from source\r\n* Did you also try this on another platform? Does it work there?\r\nFirst discovered after upgrading to PySide 6.4.0.1 in Azure DevOps CI using Ubuntu 20.04 and MS's build of Python 3.10. 
Also reproduced on Windows 11 using Python 3.10.8 64-bit installed using the python.org installer.\r\n\r\n\r\n* try the latest development version, using the following command:\r\n\r\n```shell\r\npip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip\r\n```\r\n\r\n* follow *all* the instructions in our \"If Things Go Wrong\" Guide\r\n (https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-Wrong) and\r\n\r\n### Make sure [everything is packaged correctly](https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs#make-sure-everything-is-packaged-correctly)\r\n\r\n * [x] start with clean installation\r\n * [x] use the latest development version\r\n * [x] Run your frozen program **from a command window (shell)** \u2014 instead of double-clicking on it\r\n * [x] Package your program in **--onedir mode**\r\n * [x] Package **without UPX**, say: use the option `--noupx` or set `upx=False` in your .spec-file\r\n * [x] Repackage you application in **verbose/debug mode**. For this, pass the option `--debug` to `pyi-makespec` or `pyinstaller` or use `EXE(..., debug=1, ...)` in your .spec file.\r\n\r\n\r\n### A minimal example program which shows the error\r\n\r\n```\r\n#!/usr/bin/env python3\r\n\r\nfrom PySide6.QtCore import Qt\r\nfrom PySide6.QtGui import QAction, QKeySequence\r\nfrom PySide6.QtWidgets import QApplication, QToolBar, QMainWindow\r\n\r\n\r\nclass MainWindow(QMainWindow):\r\n def __init__(self):\r\n super(MainWindow, self).__init__()\r\n\r\n self.setWindowTitle(\"My Awesome App\")\r\n\r\n toolbar = QToolBar(\"My main toolbar\")\r\n self.addToolBar(toolbar)\r\n\r\n button_action = QAction(\"Your button\", self)\r\n button_action.setShortcut(QKeySequence(Qt.AltModifier | Qt.Key_D))\r\n button_action.triggered.connect(self.onMyToolBarButtonClick)\r\n toolbar.addAction(button_action)\r\n\r\n def onMyToolBarButtonClick(self, s):\r\n print(\"click\", s)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app = QApplication()\r\n window = MainWindow()\r\n window.show()\r\n app.exec()\r\n```\r\n\r\n### Stacktrace / full error message\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"script.py\", line 31, in <module>\r\n window = MainWindow()\r\n File \"script.py\", line 21, in __init__\r\n button_action.setShortcut(QKeySequence(Qt.AltModifier | Qt.Key_D))\r\nTypeError: unsupported operand type(s) for |: 'KeyboardModifier' and 'Key'\r\n```\r\n\r\nPlease also see <https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs>\r\nfor more about what would use to solve the issue.\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2021-2022, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks.qt import get_qt_binaries, pyside6_library_info\n\n# Only proceed if PySide6 can be imported.\nif pyside6_library_info.version is not None:\n hiddenimports = ['shiboken6', 'inspect']\n\n # Collect required Qt binaries.\n binaries = get_qt_binaries(pyside6_library_info)\n", "path": "PyInstaller/hooks/hook-PySide6.py"}], "after_files": [{"content": 
"#-----------------------------------------------------------------------------\n# Copyright (c) 2021-2022, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks import is_module_satisfies\nfrom PyInstaller.utils.hooks.qt import get_qt_binaries, pyside6_library_info\n\n# Only proceed if PySide6 can be imported.\nif pyside6_library_info.version is not None:\n hiddenimports = ['shiboken6', 'inspect']\n\n # Starting with PySide6 6.4.0, we need to collect PySide6.support.deprecated for | and & operators to work with\n # Qt key and key modifiers enums. See #7249.\n if is_module_satisfies(\"PySide6 >= 6.4.0\"):\n hiddenimports += ['PySide6.support.deprecated']\n\n # Collect required Qt binaries.\n binaries = get_qt_binaries(pyside6_library_info)\n", "path": "PyInstaller/hooks/hook-PySide6.py"}]} | 1,600 | 269 |
gh_patches_debug_26508 | rasdani/github-patches | git_diff | gammapy__gammapy-1567 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
What's the correct value for errordef in iminuit?
see https://github.com/gammapy/gammapy/blob/master/gammapy/utils/fitting/iminuit.py#L88
Reminder:
our fit statistic is -2 log(likelihood)
https://github.com/gammapy/gammapy/blob/master/gammapy/stats/fit_statistics.py#L58
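In other words, because the statistic is a chi-square-like -2 log(likelihood), a 1-sigma interval corresponds to a change of 1 in the statistic, so `errordef=1` is the correct choice (it would be 0.5 only for a plain -log(likelihood)). A minimal sketch, assuming the iminuit 1.x keyword API used in the file below and a toy cost function:

```python
from iminuit import Minuit

def cost(x):
    # Toy stand-in for a -2 * log(likelihood) statistic with its minimum at x = 2
    return (x - 2.0) ** 2

m = Minuit(cost, x=0.0, error_x=1.0, errordef=1, pedantic=False)
m.migrad()
print(m.errors['x'])  # ~1.0, the 1-sigma error when errordef matches -2 log L
```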
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gammapy/utils/fitting/iminuit.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 """iminuit fitting functions.
3 """
4 from __future__ import absolute_import, division, print_function, unicode_literals
5 import numpy as np
6
7 __all__ = [
8 'fit_iminuit',
9 ]
10
11
12 def fit_iminuit(parameters, function, opts_minuit=None):
13 """iminuit optimization
14
15 Parameters
16 ----------
17 parameters : `~gammapy.utils.modeling.ParameterList`
18 Parameters with starting values
19 function : callable
20 Likelihood function
21 opts_minuit : dict (optional)
22 Options passed to `iminuit.Minuit` constructor
23
24 Returns
25 -------
26 parameters : `~gammapy.utils.modeling.ParameterList`
27 Parameters with best-fit values
28 minuit : `~iminuit.Minuit`
29 Minuit object
30 """
31 from iminuit import Minuit
32
33 minuit_func = MinuitFunction(function, parameters)
34
35 if opts_minuit is None:
36 opts_minuit = {}
37 opts_minuit.update(make_minuit_par_kwargs(parameters))
38
39 minuit = Minuit(minuit_func.fcn,
40 forced_parameters=parameters.names,
41 **opts_minuit)
42
43 minuit.migrad()
44 parameters.covariance = _get_covar(minuit)
45
46 return parameters, minuit
47
48
49 class MinuitFunction(object):
50 """Wrapper for iminuit
51
52 Parameters
53 ----------
54 parameters : `~gammapy.utils.modeling.ParameterList`
55 Parameters with starting values
56 function : callable
57 Likelihood function
58 """
59
60 def __init__(self, function, parameters):
61 self.function = function
62 self.parameters = parameters
63
64 def fcn(self, *values):
65 for value, parameter in zip(values, self.parameters.parameters):
66 parameter.value = value
67 return self.function(self.parameters)
68
69
70 def make_minuit_par_kwargs(parameters):
71 """Create *Parameter Keyword Arguments* for the `Minuit` constructor.
72
73 See: http://iminuit.readthedocs.io/en/latest/api.html#iminuit.Minuit
74 """
75 kwargs = {}
76 for par in parameters.parameters:
77 kwargs[par.name] = par.value
78 if par.frozen:
79 kwargs['fix_{}'.format(par.name)] = True
80 min_ = None if np.isnan(par.min) else par.min
81 max_ = None if np.isnan(par.max) else par.max
82 kwargs['limit_{}'.format(par.name)] = (min_, max_)
83
84 if parameters.covariance is None:
85 kwargs['error_{}'.format(par.name)] = 1
86 else:
87 kwargs['error_{}'.format(par.name)] = parameters.error(par.name)
88
89 # TODO: Check if we need 0.5 or 1
90 kwargs['errordef'] = 1
91
92 return kwargs
93
94
95 def _get_covar(minuit):
96 """Get full covar matrix as Numpy array.
97
98 This was added as `minuit.np_covariance` in `iminuit` in v1.3,
99 but we still want to support v1.2
100 """
101 n = len(minuit.parameters)
102 m = np.zeros((n, n))
103 for i1, k1 in enumerate(minuit.parameters):
104 for i2, k2 in enumerate(minuit.parameters):
105 if set([k1, k2]).issubset(minuit.list_of_vary_param()):
106 m[i1, i2] = minuit.covariance[(k1, k2)]
107 return m
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gammapy/utils/fitting/iminuit.py b/gammapy/utils/fitting/iminuit.py
--- a/gammapy/utils/fitting/iminuit.py
+++ b/gammapy/utils/fitting/iminuit.py
@@ -36,6 +36,10 @@
opts_minuit = {}
opts_minuit.update(make_minuit_par_kwargs(parameters))
+ # In Gammapy, we have the factor 2 in the likelihood function
+ # This means `errordef=1` in the Minuit interface is correct
+ opts_minuit.setdefault('errordef', 1)
+
minuit = Minuit(minuit_func.fcn,
forced_parameters=parameters.names,
**opts_minuit)
@@ -75,8 +79,7 @@
kwargs = {}
for par in parameters.parameters:
kwargs[par.name] = par.value
- if par.frozen:
- kwargs['fix_{}'.format(par.name)] = True
+
min_ = None if np.isnan(par.min) else par.min
max_ = None if np.isnan(par.max) else par.max
kwargs['limit_{}'.format(par.name)] = (min_, max_)
@@ -86,8 +89,8 @@
else:
kwargs['error_{}'.format(par.name)] = parameters.error(par.name)
- # TODO: Check if we need 0.5 or 1
- kwargs['errordef'] = 1
+ if par.frozen:
+ kwargs['fix_{}'.format(par.name)] = True
return kwargs
| {"golden_diff": "diff --git a/gammapy/utils/fitting/iminuit.py b/gammapy/utils/fitting/iminuit.py\n--- a/gammapy/utils/fitting/iminuit.py\n+++ b/gammapy/utils/fitting/iminuit.py\n@@ -36,6 +36,10 @@\n opts_minuit = {}\n opts_minuit.update(make_minuit_par_kwargs(parameters))\n \n+ # In Gammapy, we have the factor 2 in the likelihood function\n+ # This means `errordef=1` in the Minuit interface is correct\n+ opts_minuit.setdefault('errordef', 1)\n+\n minuit = Minuit(minuit_func.fcn,\n forced_parameters=parameters.names,\n **opts_minuit)\n@@ -75,8 +79,7 @@\n kwargs = {}\n for par in parameters.parameters:\n kwargs[par.name] = par.value\n- if par.frozen:\n- kwargs['fix_{}'.format(par.name)] = True\n+\n min_ = None if np.isnan(par.min) else par.min\n max_ = None if np.isnan(par.max) else par.max\n kwargs['limit_{}'.format(par.name)] = (min_, max_)\n@@ -86,8 +89,8 @@\n else:\n kwargs['error_{}'.format(par.name)] = parameters.error(par.name)\n \n- # TODO: Check if we need 0.5 or 1\n- kwargs['errordef'] = 1\n+ if par.frozen:\n+ kwargs['fix_{}'.format(par.name)] = True\n \n return kwargs\n", "issue": "What's the correct value for errordef in iminuit?\nsee https://github.com/gammapy/gammapy/blob/master/gammapy/utils/fitting/iminuit.py#L88\r\n\r\nReminder:\r\nout fit statistic is -2 log (likelihood)\r\nhttps://github.com/gammapy/gammapy/blob/master/gammapy/stats/fit_statistics.py#L58\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"iminuit fitting functions.\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\nimport numpy as np\n\n__all__ = [\n 'fit_iminuit',\n]\n\n\ndef fit_iminuit(parameters, function, opts_minuit=None):\n \"\"\"iminuit optimization\n\n Parameters\n ----------\n parameters : `~gammapy.utils.modeling.ParameterList`\n Parameters with starting values\n function : callable\n Likelihood function\n opts_minuit : dict (optional)\n Options passed to `iminuit.Minuit` constructor\n\n Returns\n -------\n parameters : `~gammapy.utils.modeling.ParameterList`\n Parameters with best-fit values\n minuit : `~iminuit.Minuit`\n Minuit object\n \"\"\"\n from iminuit import Minuit\n\n minuit_func = MinuitFunction(function, parameters)\n\n if opts_minuit is None:\n opts_minuit = {}\n opts_minuit.update(make_minuit_par_kwargs(parameters))\n\n minuit = Minuit(minuit_func.fcn,\n forced_parameters=parameters.names,\n **opts_minuit)\n\n minuit.migrad()\n parameters.covariance = _get_covar(minuit)\n\n return parameters, minuit\n\n\nclass MinuitFunction(object):\n \"\"\"Wrapper for iminuit\n\n Parameters\n ----------\n parameters : `~gammapy.utils.modeling.ParameterList`\n Parameters with starting values\n function : callable\n Likelihood function\n \"\"\"\n\n def __init__(self, function, parameters):\n self.function = function\n self.parameters = parameters\n\n def fcn(self, *values):\n for value, parameter in zip(values, self.parameters.parameters):\n parameter.value = value\n return self.function(self.parameters)\n\n\ndef make_minuit_par_kwargs(parameters):\n \"\"\"Create *Parameter Keyword Arguments* for the `Minuit` constructor.\n\n See: http://iminuit.readthedocs.io/en/latest/api.html#iminuit.Minuit\n \"\"\"\n kwargs = {}\n for par in parameters.parameters:\n kwargs[par.name] = par.value\n if par.frozen:\n kwargs['fix_{}'.format(par.name)] = True\n min_ = None if np.isnan(par.min) else par.min\n max_ = None if np.isnan(par.max) else par.max\n kwargs['limit_{}'.format(par.name)] = (min_, max_)\n\n if 
parameters.covariance is None:\n kwargs['error_{}'.format(par.name)] = 1\n else:\n kwargs['error_{}'.format(par.name)] = parameters.error(par.name)\n\n # TODO: Check if we need 0.5 or 1\n kwargs['errordef'] = 1\n\n return kwargs\n\n\ndef _get_covar(minuit):\n \"\"\"Get full covar matrix as Numpy array.\n\n This was added as `minuit.np_covariance` in `iminuit` in v1.3,\n but we still want to support v1.2\n \"\"\"\n n = len(minuit.parameters)\n m = np.zeros((n, n))\n for i1, k1 in enumerate(minuit.parameters):\n for i2, k2 in enumerate(minuit.parameters):\n if set([k1, k2]).issubset(minuit.list_of_vary_param()):\n m[i1, i2] = minuit.covariance[(k1, k2)]\n return m\n", "path": "gammapy/utils/fitting/iminuit.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"iminuit fitting functions.\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\nimport numpy as np\n\n__all__ = [\n 'fit_iminuit',\n]\n\n\ndef fit_iminuit(parameters, function, opts_minuit=None):\n \"\"\"iminuit optimization\n\n Parameters\n ----------\n parameters : `~gammapy.utils.modeling.ParameterList`\n Parameters with starting values\n function : callable\n Likelihood function\n opts_minuit : dict (optional)\n Options passed to `iminuit.Minuit` constructor\n\n Returns\n -------\n parameters : `~gammapy.utils.modeling.ParameterList`\n Parameters with best-fit values\n minuit : `~iminuit.Minuit`\n Minuit object\n \"\"\"\n from iminuit import Minuit\n\n minuit_func = MinuitFunction(function, parameters)\n\n if opts_minuit is None:\n opts_minuit = {}\n opts_minuit.update(make_minuit_par_kwargs(parameters))\n\n # In Gammapy, we have the factor 2 in the likelihood function\n # This means `errordef=1` in the Minuit interface is correct\n opts_minuit.setdefault('errordef', 1)\n\n minuit = Minuit(minuit_func.fcn,\n forced_parameters=parameters.names,\n **opts_minuit)\n\n minuit.migrad()\n parameters.covariance = _get_covar(minuit)\n\n return parameters, minuit\n\n\nclass MinuitFunction(object):\n \"\"\"Wrapper for iminuit\n\n Parameters\n ----------\n parameters : `~gammapy.utils.modeling.ParameterList`\n Parameters with starting values\n function : callable\n Likelihood function\n \"\"\"\n\n def __init__(self, function, parameters):\n self.function = function\n self.parameters = parameters\n\n def fcn(self, *values):\n for value, parameter in zip(values, self.parameters.parameters):\n parameter.value = value\n return self.function(self.parameters)\n\n\ndef make_minuit_par_kwargs(parameters):\n \"\"\"Create *Parameter Keyword Arguments* for the `Minuit` constructor.\n\n See: http://iminuit.readthedocs.io/en/latest/api.html#iminuit.Minuit\n \"\"\"\n kwargs = {}\n for par in parameters.parameters:\n kwargs[par.name] = par.value\n\n min_ = None if np.isnan(par.min) else par.min\n max_ = None if np.isnan(par.max) else par.max\n kwargs['limit_{}'.format(par.name)] = (min_, max_)\n\n if parameters.covariance is None:\n kwargs['error_{}'.format(par.name)] = 1\n else:\n kwargs['error_{}'.format(par.name)] = parameters.error(par.name)\n\n if par.frozen:\n kwargs['fix_{}'.format(par.name)] = True\n\n return kwargs\n\n\ndef _get_covar(minuit):\n \"\"\"Get full covar matrix as Numpy array.\n\n This was added as `minuit.np_covariance` in `iminuit` in v1.3,\n but we still want to support v1.2\n \"\"\"\n n = len(minuit.parameters)\n m = np.zeros((n, n))\n for i1, k1 in enumerate(minuit.parameters):\n for i2, k2 in enumerate(minuit.parameters):\n if set([k1, 
k2]).issubset(minuit.list_of_vary_param()):\n m[i1, i2] = minuit.covariance[(k1, k2)]\n return m\n", "path": "gammapy/utils/fitting/iminuit.py"}]} | 1,329 | 355 |
gh_patches_debug_10886 | rasdani/github-patches | git_diff | certbot__certbot-4243 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--cert-name option not honored on new SSL certificate issuance
Hello,
```shell
root@stretch:~# certbot-auto --version
certbot 0.11.1
```
```shell
# certbot-auto \
certonly \
--non-interactive \
--agree-tos --email <email> \
--staging \
--webroot --webroot-path <path> \
--allow-subset-of-names \
--domain unresolvable.domain.tld \
--domain resolvable.domain.tld \
--cert-name unresolvable.domain.tld
```
This results in:
```shell
No certificate found with name unresolvable.domain.tld (expected /etc/letsencrypt/renewal/unresolvable.domain.tld.conf).
```
But the help says (when creating a new certificate ...):
```
--cert-name CERTNAME Certificate name to apply. Only one certificate name
can be used per Certbot run. To see certificate names,
run 'certbot certificates'. When creating a new
certificate, specifies the new certificate's name.
(default: None)
```
Thank you.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `certbot/cert_manager.py`
Content:
```
1 """Tools for managing certificates."""
2 import datetime
3 import logging
4 import os
5 import pytz
6 import traceback
7 import zope.component
8
9 from certbot import errors
10 from certbot import interfaces
11 from certbot import ocsp
12 from certbot import storage
13 from certbot import util
14
15 from certbot.display import util as display_util
16
17 logger = logging.getLogger(__name__)
18
19 ###################
20 # Commands
21 ###################
22
23 def update_live_symlinks(config):
24 """Update the certificate file family symlinks to use archive_dir.
25
26 Use the information in the config file to make symlinks point to
27 the correct archive directory.
28
29 .. note:: This assumes that the installation is using a Reverter object.
30
31 :param config: Configuration.
32 :type config: :class:`certbot.configuration.NamespaceConfig`
33
34 """
35 for renewal_file in storage.renewal_conf_files(config):
36 storage.RenewableCert(renewal_file, config, update_symlinks=True)
37
38 def rename_lineage(config):
39 """Rename the specified lineage to the new name.
40
41 :param config: Configuration.
42 :type config: :class:`certbot.configuration.NamespaceConfig`
43
44 """
45 disp = zope.component.getUtility(interfaces.IDisplay)
46
47 certname = _get_certname(config, "rename")
48
49 new_certname = config.new_certname
50 if not new_certname:
51 code, new_certname = disp.input(
52 "Enter the new name for certificate {0}".format(certname),
53 flag="--updated-cert-name", force_interactive=True)
54 if code != display_util.OK or not new_certname:
55 raise errors.Error("User ended interaction.")
56
57 lineage = lineage_for_certname(config, certname)
58 if not lineage:
59 raise errors.ConfigurationError("No existing certificate with name "
60 "{0} found.".format(certname))
61 storage.rename_renewal_config(certname, new_certname, config)
62 disp.notification("Successfully renamed {0} to {1}."
63 .format(certname, new_certname), pause=False)
64
65 def certificates(config):
66 """Display information about certs configured with Certbot
67
68 :param config: Configuration.
69 :type config: :class:`certbot.configuration.NamespaceConfig`
70 """
71 parsed_certs = []
72 parse_failures = []
73 for renewal_file in storage.renewal_conf_files(config):
74 try:
75 renewal_candidate = storage.RenewableCert(renewal_file, config)
76 parsed_certs.append(renewal_candidate)
77 except Exception as e: # pylint: disable=broad-except
78 logger.warning("Renewal configuration file %s produced an "
79 "unexpected error: %s. Skipping.", renewal_file, e)
80 logger.debug("Traceback was:\n%s", traceback.format_exc())
81 parse_failures.append(renewal_file)
82
83 # Describe all the certs
84 _describe_certs(config, parsed_certs, parse_failures)
85
86 def delete(config):
87 """Delete Certbot files associated with a certificate lineage."""
88 certname = _get_certname(config, "delete")
89 storage.delete_files(config, certname)
90 disp = zope.component.getUtility(interfaces.IDisplay)
91 disp.notification("Deleted all files relating to certificate {0}."
92 .format(certname), pause=False)
93
94 ###################
95 # Public Helpers
96 ###################
97
98 def lineage_for_certname(cli_config, certname):
99 """Find a lineage object with name certname."""
100 configs_dir = cli_config.renewal_configs_dir
101 # Verify the directory is there
102 util.make_or_verify_dir(configs_dir, mode=0o755, uid=os.geteuid())
103 renewal_file = storage.renewal_file_for_certname(cli_config, certname)
104 try:
105 return storage.RenewableCert(renewal_file, cli_config)
106 except (errors.CertStorageError, IOError):
107 logger.debug("Renewal conf file %s is broken.", renewal_file)
108 logger.debug("Traceback was:\n%s", traceback.format_exc())
109 return None
110
111 def domains_for_certname(config, certname):
112 """Find the domains in the cert with name certname."""
113 lineage = lineage_for_certname(config, certname)
114 return lineage.names() if lineage else None
115
116 def find_duplicative_certs(config, domains):
117 """Find existing certs that duplicate the request."""
118 def update_certs_for_domain_matches(candidate_lineage, rv):
119 """Return cert as identical_names_cert if it matches,
120 or subset_names_cert if it matches as subset
121 """
122 # TODO: Handle these differently depending on whether they are
123 # expired or still valid?
124 identical_names_cert, subset_names_cert = rv
125 candidate_names = set(candidate_lineage.names())
126 if candidate_names == set(domains):
127 identical_names_cert = candidate_lineage
128 elif candidate_names.issubset(set(domains)):
129 # This logic finds and returns the largest subset-names cert
130 # in the case where there are several available.
131 if subset_names_cert is None:
132 subset_names_cert = candidate_lineage
133 elif len(candidate_names) > len(subset_names_cert.names()):
134 subset_names_cert = candidate_lineage
135 return (identical_names_cert, subset_names_cert)
136
137 return _search_lineages(config, update_certs_for_domain_matches, (None, None))
138
139
140 ###################
141 # Private Helpers
142 ###################
143
144 def _get_certname(config, verb):
145 """Get certname from flag, interactively, or error out.
146 """
147 certname = config.certname
148 if not certname:
149 disp = zope.component.getUtility(interfaces.IDisplay)
150 filenames = storage.renewal_conf_files(config)
151 choices = [storage.lineagename_for_filename(name) for name in filenames]
152 if not choices:
153 raise errors.Error("No existing certificates found.")
154 code, index = disp.menu("Which certificate would you like to {0}?".format(verb),
155 choices, ok_label="Select", flag="--cert-name",
156 force_interactive=True)
157 if code != display_util.OK or not index in range(0, len(choices)):
158 raise errors.Error("User ended interaction.")
159 certname = choices[index]
160 return certname
161
162 def _report_lines(msgs):
163 """Format a results report for a category of single-line renewal outcomes"""
164 return " " + "\n ".join(str(msg) for msg in msgs)
165
166 def _report_human_readable(config, parsed_certs):
167 """Format a results report for a parsed cert"""
168 certinfo = []
169 checker = ocsp.RevocationChecker()
170 for cert in parsed_certs:
171 if config.certname and cert.lineagename != config.certname:
172 continue
173 if config.domains and not set(config.domains).issubset(cert.names()):
174 continue
175 now = pytz.UTC.fromutc(datetime.datetime.utcnow())
176
177 reasons = []
178 if cert.is_test_cert:
179 reasons.append('TEST_CERT')
180 if cert.target_expiry <= now:
181 reasons.append('EXPIRED')
182 if checker.ocsp_revoked(cert.cert, cert.chain):
183 reasons.append('REVOKED')
184
185 if reasons:
186 status = "INVALID: " + ", ".join(reasons)
187 else:
188 diff = cert.target_expiry - now
189 if diff.days == 1:
190 status = "VALID: 1 day"
191 elif diff.days < 1:
192 status = "VALID: {0} hour(s)".format(diff.seconds // 3600)
193 else:
194 status = "VALID: {0} days".format(diff.days)
195
196 valid_string = "{0} ({1})".format(cert.target_expiry, status)
197 certinfo.append(" Certificate Name: {0}\n"
198 " Domains: {1}\n"
199 " Expiry Date: {2}\n"
200 " Certificate Path: {3}\n"
201 " Private Key Path: {4}".format(
202 cert.lineagename,
203 " ".join(cert.names()),
204 valid_string,
205 cert.fullchain,
206 cert.privkey))
207 return "\n".join(certinfo)
208
209 def _describe_certs(config, parsed_certs, parse_failures):
210 """Print information about the certs we know about"""
211 out = []
212
213 notify = out.append
214
215 if not parsed_certs and not parse_failures:
216 notify("No certs found.")
217 else:
218 if parsed_certs:
219 match = "matching " if config.certname or config.domains else ""
220 notify("Found the following {0}certs:".format(match))
221 notify(_report_human_readable(config, parsed_certs))
222 if parse_failures:
223 notify("\nThe following renewal configuration files "
224 "were invalid:")
225 notify(_report_lines(parse_failures))
226
227 disp = zope.component.getUtility(interfaces.IDisplay)
228 disp.notification("\n".join(out), pause=False, wrap=False)
229
230 def _search_lineages(cli_config, func, initial_rv):
231 """Iterate func over unbroken lineages, allowing custom return conditions.
232
233 Allows flexible customization of return values, including multiple
234 return values and complex checks.
235 """
236 configs_dir = cli_config.renewal_configs_dir
237 # Verify the directory is there
238 util.make_or_verify_dir(configs_dir, mode=0o755, uid=os.geteuid())
239
240 rv = initial_rv
241 for renewal_file in storage.renewal_conf_files(cli_config):
242 try:
243 candidate_lineage = storage.RenewableCert(renewal_file, cli_config)
244 except (errors.CertStorageError, IOError):
245 logger.debug("Renewal conf file %s is broken. Skipping.", renewal_file)
246 logger.debug("Traceback was:\n%s", traceback.format_exc())
247 continue
248 rv = func(candidate_lineage, rv)
249 return rv
250
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/certbot/cert_manager.py b/certbot/cert_manager.py
--- a/certbot/cert_manager.py
+++ b/certbot/cert_manager.py
@@ -100,7 +100,10 @@
configs_dir = cli_config.renewal_configs_dir
# Verify the directory is there
util.make_or_verify_dir(configs_dir, mode=0o755, uid=os.geteuid())
- renewal_file = storage.renewal_file_for_certname(cli_config, certname)
+ try:
+ renewal_file = storage.renewal_file_for_certname(cli_config, certname)
+ except errors.CertStorageError:
+ return None
try:
return storage.RenewableCert(renewal_file, cli_config)
except (errors.CertStorageError, IOError):
| {"golden_diff": "diff --git a/certbot/cert_manager.py b/certbot/cert_manager.py\n--- a/certbot/cert_manager.py\n+++ b/certbot/cert_manager.py\n@@ -100,7 +100,10 @@\n configs_dir = cli_config.renewal_configs_dir\n # Verify the directory is there\n util.make_or_verify_dir(configs_dir, mode=0o755, uid=os.geteuid())\n- renewal_file = storage.renewal_file_for_certname(cli_config, certname)\n+ try:\n+ renewal_file = storage.renewal_file_for_certname(cli_config, certname)\n+ except errors.CertStorageError:\n+ return None\n try:\n return storage.RenewableCert(renewal_file, cli_config)\n except (errors.CertStorageError, IOError):\n", "issue": "--cert-name option not honored on new SSL certificate issuance\nHello,\r\n\r\n```shell\r\nroot@stretch:~# certbot-auto --version\r\ncertbot 0.11.1\r\n```\r\n```shell\r\n# certbot-auto \\\r\ncertonly \\\r\n--non-interactive \\\r\n--agree-tos --email <email> \\\r\n--staging \\\r\n--webroot --webroot-path <path> \\\r\n--allow-subset-of-names \\\r\n--domain unresolvable.domain.tld \\\r\n--domain resolvable.domain.tld \\\r\n--cert-name unresolvable.domain.tld\r\n```\r\nResult in:\r\n```shell\r\nNo certificate found with name unresolvable.domain.tld (expected /etc/letsencrypt/renewal/unresolvable.domain.tld.conf).\r\n```\r\nBut help say (When creating a new certificate ...):\r\n```\r\n--cert-name CERTNAME Certificate name to apply. Only one certificate name\r\n can be used per Certbot run. To see certificate names,\r\n run 'certbot certificates'. When creating a new\r\n certificate, specifies the new certificate's name.\r\n (default: None)\r\n```\r\n\r\nThank you.\n", "before_files": [{"content": "\"\"\"Tools for managing certificates.\"\"\"\nimport datetime\nimport logging\nimport os\nimport pytz\nimport traceback\nimport zope.component\n\nfrom certbot import errors\nfrom certbot import interfaces\nfrom certbot import ocsp\nfrom certbot import storage\nfrom certbot import util\n\nfrom certbot.display import util as display_util\n\nlogger = logging.getLogger(__name__)\n\n###################\n# Commands\n###################\n\ndef update_live_symlinks(config):\n \"\"\"Update the certificate file family symlinks to use archive_dir.\n\n Use the information in the config file to make symlinks point to\n the correct archive directory.\n\n .. 
note:: This assumes that the installation is using a Reverter object.\n\n :param config: Configuration.\n :type config: :class:`certbot.configuration.NamespaceConfig`\n\n \"\"\"\n for renewal_file in storage.renewal_conf_files(config):\n storage.RenewableCert(renewal_file, config, update_symlinks=True)\n\ndef rename_lineage(config):\n \"\"\"Rename the specified lineage to the new name.\n\n :param config: Configuration.\n :type config: :class:`certbot.configuration.NamespaceConfig`\n\n \"\"\"\n disp = zope.component.getUtility(interfaces.IDisplay)\n\n certname = _get_certname(config, \"rename\")\n\n new_certname = config.new_certname\n if not new_certname:\n code, new_certname = disp.input(\n \"Enter the new name for certificate {0}\".format(certname),\n flag=\"--updated-cert-name\", force_interactive=True)\n if code != display_util.OK or not new_certname:\n raise errors.Error(\"User ended interaction.\")\n\n lineage = lineage_for_certname(config, certname)\n if not lineage:\n raise errors.ConfigurationError(\"No existing certificate with name \"\n \"{0} found.\".format(certname))\n storage.rename_renewal_config(certname, new_certname, config)\n disp.notification(\"Successfully renamed {0} to {1}.\"\n .format(certname, new_certname), pause=False)\n\ndef certificates(config):\n \"\"\"Display information about certs configured with Certbot\n\n :param config: Configuration.\n :type config: :class:`certbot.configuration.NamespaceConfig`\n \"\"\"\n parsed_certs = []\n parse_failures = []\n for renewal_file in storage.renewal_conf_files(config):\n try:\n renewal_candidate = storage.RenewableCert(renewal_file, config)\n parsed_certs.append(renewal_candidate)\n except Exception as e: # pylint: disable=broad-except\n logger.warning(\"Renewal configuration file %s produced an \"\n \"unexpected error: %s. 
Skipping.\", renewal_file, e)\n logger.debug(\"Traceback was:\\n%s\", traceback.format_exc())\n parse_failures.append(renewal_file)\n\n # Describe all the certs\n _describe_certs(config, parsed_certs, parse_failures)\n\ndef delete(config):\n \"\"\"Delete Certbot files associated with a certificate lineage.\"\"\"\n certname = _get_certname(config, \"delete\")\n storage.delete_files(config, certname)\n disp = zope.component.getUtility(interfaces.IDisplay)\n disp.notification(\"Deleted all files relating to certificate {0}.\"\n .format(certname), pause=False)\n\n###################\n# Public Helpers\n###################\n\ndef lineage_for_certname(cli_config, certname):\n \"\"\"Find a lineage object with name certname.\"\"\"\n configs_dir = cli_config.renewal_configs_dir\n # Verify the directory is there\n util.make_or_verify_dir(configs_dir, mode=0o755, uid=os.geteuid())\n renewal_file = storage.renewal_file_for_certname(cli_config, certname)\n try:\n return storage.RenewableCert(renewal_file, cli_config)\n except (errors.CertStorageError, IOError):\n logger.debug(\"Renewal conf file %s is broken.\", renewal_file)\n logger.debug(\"Traceback was:\\n%s\", traceback.format_exc())\n return None\n\ndef domains_for_certname(config, certname):\n \"\"\"Find the domains in the cert with name certname.\"\"\"\n lineage = lineage_for_certname(config, certname)\n return lineage.names() if lineage else None\n\ndef find_duplicative_certs(config, domains):\n \"\"\"Find existing certs that duplicate the request.\"\"\"\n def update_certs_for_domain_matches(candidate_lineage, rv):\n \"\"\"Return cert as identical_names_cert if it matches,\n or subset_names_cert if it matches as subset\n \"\"\"\n # TODO: Handle these differently depending on whether they are\n # expired or still valid?\n identical_names_cert, subset_names_cert = rv\n candidate_names = set(candidate_lineage.names())\n if candidate_names == set(domains):\n identical_names_cert = candidate_lineage\n elif candidate_names.issubset(set(domains)):\n # This logic finds and returns the largest subset-names cert\n # in the case where there are several available.\n if subset_names_cert is None:\n subset_names_cert = candidate_lineage\n elif len(candidate_names) > len(subset_names_cert.names()):\n subset_names_cert = candidate_lineage\n return (identical_names_cert, subset_names_cert)\n\n return _search_lineages(config, update_certs_for_domain_matches, (None, None))\n\n\n###################\n# Private Helpers\n###################\n\ndef _get_certname(config, verb):\n \"\"\"Get certname from flag, interactively, or error out.\n \"\"\"\n certname = config.certname\n if not certname:\n disp = zope.component.getUtility(interfaces.IDisplay)\n filenames = storage.renewal_conf_files(config)\n choices = [storage.lineagename_for_filename(name) for name in filenames]\n if not choices:\n raise errors.Error(\"No existing certificates found.\")\n code, index = disp.menu(\"Which certificate would you like to {0}?\".format(verb),\n choices, ok_label=\"Select\", flag=\"--cert-name\",\n force_interactive=True)\n if code != display_util.OK or not index in range(0, len(choices)):\n raise errors.Error(\"User ended interaction.\")\n certname = choices[index]\n return certname\n\ndef _report_lines(msgs):\n \"\"\"Format a results report for a category of single-line renewal outcomes\"\"\"\n return \" \" + \"\\n \".join(str(msg) for msg in msgs)\n\ndef _report_human_readable(config, parsed_certs):\n \"\"\"Format a results report for a parsed cert\"\"\"\n certinfo = []\n checker = 
ocsp.RevocationChecker()\n for cert in parsed_certs:\n if config.certname and cert.lineagename != config.certname:\n continue\n if config.domains and not set(config.domains).issubset(cert.names()):\n continue\n now = pytz.UTC.fromutc(datetime.datetime.utcnow())\n\n reasons = []\n if cert.is_test_cert:\n reasons.append('TEST_CERT')\n if cert.target_expiry <= now:\n reasons.append('EXPIRED')\n if checker.ocsp_revoked(cert.cert, cert.chain):\n reasons.append('REVOKED')\n\n if reasons:\n status = \"INVALID: \" + \", \".join(reasons)\n else:\n diff = cert.target_expiry - now\n if diff.days == 1:\n status = \"VALID: 1 day\"\n elif diff.days < 1:\n status = \"VALID: {0} hour(s)\".format(diff.seconds // 3600)\n else:\n status = \"VALID: {0} days\".format(diff.days)\n\n valid_string = \"{0} ({1})\".format(cert.target_expiry, status)\n certinfo.append(\" Certificate Name: {0}\\n\"\n \" Domains: {1}\\n\"\n \" Expiry Date: {2}\\n\"\n \" Certificate Path: {3}\\n\"\n \" Private Key Path: {4}\".format(\n cert.lineagename,\n \" \".join(cert.names()),\n valid_string,\n cert.fullchain,\n cert.privkey))\n return \"\\n\".join(certinfo)\n\ndef _describe_certs(config, parsed_certs, parse_failures):\n \"\"\"Print information about the certs we know about\"\"\"\n out = []\n\n notify = out.append\n\n if not parsed_certs and not parse_failures:\n notify(\"No certs found.\")\n else:\n if parsed_certs:\n match = \"matching \" if config.certname or config.domains else \"\"\n notify(\"Found the following {0}certs:\".format(match))\n notify(_report_human_readable(config, parsed_certs))\n if parse_failures:\n notify(\"\\nThe following renewal configuration files \"\n \"were invalid:\")\n notify(_report_lines(parse_failures))\n\n disp = zope.component.getUtility(interfaces.IDisplay)\n disp.notification(\"\\n\".join(out), pause=False, wrap=False)\n\ndef _search_lineages(cli_config, func, initial_rv):\n \"\"\"Iterate func over unbroken lineages, allowing custom return conditions.\n\n Allows flexible customization of return values, including multiple\n return values and complex checks.\n \"\"\"\n configs_dir = cli_config.renewal_configs_dir\n # Verify the directory is there\n util.make_or_verify_dir(configs_dir, mode=0o755, uid=os.geteuid())\n\n rv = initial_rv\n for renewal_file in storage.renewal_conf_files(cli_config):\n try:\n candidate_lineage = storage.RenewableCert(renewal_file, cli_config)\n except (errors.CertStorageError, IOError):\n logger.debug(\"Renewal conf file %s is broken. Skipping.\", renewal_file)\n logger.debug(\"Traceback was:\\n%s\", traceback.format_exc())\n continue\n rv = func(candidate_lineage, rv)\n return rv\n", "path": "certbot/cert_manager.py"}], "after_files": [{"content": "\"\"\"Tools for managing certificates.\"\"\"\nimport datetime\nimport logging\nimport os\nimport pytz\nimport traceback\nimport zope.component\n\nfrom certbot import errors\nfrom certbot import interfaces\nfrom certbot import ocsp\nfrom certbot import storage\nfrom certbot import util\n\nfrom certbot.display import util as display_util\n\nlogger = logging.getLogger(__name__)\n\n###################\n# Commands\n###################\n\ndef update_live_symlinks(config):\n \"\"\"Update the certificate file family symlinks to use archive_dir.\n\n Use the information in the config file to make symlinks point to\n the correct archive directory.\n\n .. 
note:: This assumes that the installation is using a Reverter object.\n\n :param config: Configuration.\n :type config: :class:`certbot.configuration.NamespaceConfig`\n\n \"\"\"\n for renewal_file in storage.renewal_conf_files(config):\n storage.RenewableCert(renewal_file, config, update_symlinks=True)\n\ndef rename_lineage(config):\n \"\"\"Rename the specified lineage to the new name.\n\n :param config: Configuration.\n :type config: :class:`certbot.configuration.NamespaceConfig`\n\n \"\"\"\n disp = zope.component.getUtility(interfaces.IDisplay)\n\n certname = _get_certname(config, \"rename\")\n\n new_certname = config.new_certname\n if not new_certname:\n code, new_certname = disp.input(\n \"Enter the new name for certificate {0}\".format(certname),\n flag=\"--updated-cert-name\", force_interactive=True)\n if code != display_util.OK or not new_certname:\n raise errors.Error(\"User ended interaction.\")\n\n lineage = lineage_for_certname(config, certname)\n if not lineage:\n raise errors.ConfigurationError(\"No existing certificate with name \"\n \"{0} found.\".format(certname))\n storage.rename_renewal_config(certname, new_certname, config)\n disp.notification(\"Successfully renamed {0} to {1}.\"\n .format(certname, new_certname), pause=False)\n\ndef certificates(config):\n \"\"\"Display information about certs configured with Certbot\n\n :param config: Configuration.\n :type config: :class:`certbot.configuration.NamespaceConfig`\n \"\"\"\n parsed_certs = []\n parse_failures = []\n for renewal_file in storage.renewal_conf_files(config):\n try:\n renewal_candidate = storage.RenewableCert(renewal_file, config)\n parsed_certs.append(renewal_candidate)\n except Exception as e: # pylint: disable=broad-except\n logger.warning(\"Renewal configuration file %s produced an \"\n \"unexpected error: %s. 
Skipping.\", renewal_file, e)\n logger.debug(\"Traceback was:\\n%s\", traceback.format_exc())\n parse_failures.append(renewal_file)\n\n # Describe all the certs\n _describe_certs(config, parsed_certs, parse_failures)\n\ndef delete(config):\n \"\"\"Delete Certbot files associated with a certificate lineage.\"\"\"\n certname = _get_certname(config, \"delete\")\n storage.delete_files(config, certname)\n disp = zope.component.getUtility(interfaces.IDisplay)\n disp.notification(\"Deleted all files relating to certificate {0}.\"\n .format(certname), pause=False)\n\n###################\n# Public Helpers\n###################\n\ndef lineage_for_certname(cli_config, certname):\n \"\"\"Find a lineage object with name certname.\"\"\"\n configs_dir = cli_config.renewal_configs_dir\n # Verify the directory is there\n util.make_or_verify_dir(configs_dir, mode=0o755, uid=os.geteuid())\n try:\n renewal_file = storage.renewal_file_for_certname(cli_config, certname)\n except errors.CertStorageError:\n return None\n try:\n return storage.RenewableCert(renewal_file, cli_config)\n except (errors.CertStorageError, IOError):\n logger.debug(\"Renewal conf file %s is broken.\", renewal_file)\n logger.debug(\"Traceback was:\\n%s\", traceback.format_exc())\n return None\n\ndef domains_for_certname(config, certname):\n \"\"\"Find the domains in the cert with name certname.\"\"\"\n lineage = lineage_for_certname(config, certname)\n return lineage.names() if lineage else None\n\ndef find_duplicative_certs(config, domains):\n \"\"\"Find existing certs that duplicate the request.\"\"\"\n def update_certs_for_domain_matches(candidate_lineage, rv):\n \"\"\"Return cert as identical_names_cert if it matches,\n or subset_names_cert if it matches as subset\n \"\"\"\n # TODO: Handle these differently depending on whether they are\n # expired or still valid?\n identical_names_cert, subset_names_cert = rv\n candidate_names = set(candidate_lineage.names())\n if candidate_names == set(domains):\n identical_names_cert = candidate_lineage\n elif candidate_names.issubset(set(domains)):\n # This logic finds and returns the largest subset-names cert\n # in the case where there are several available.\n if subset_names_cert is None:\n subset_names_cert = candidate_lineage\n elif len(candidate_names) > len(subset_names_cert.names()):\n subset_names_cert = candidate_lineage\n return (identical_names_cert, subset_names_cert)\n\n return _search_lineages(config, update_certs_for_domain_matches, (None, None))\n\n\n###################\n# Private Helpers\n###################\n\ndef _get_certname(config, verb):\n \"\"\"Get certname from flag, interactively, or error out.\n \"\"\"\n certname = config.certname\n if not certname:\n disp = zope.component.getUtility(interfaces.IDisplay)\n filenames = storage.renewal_conf_files(config)\n choices = [storage.lineagename_for_filename(name) for name in filenames]\n if not choices:\n raise errors.Error(\"No existing certificates found.\")\n code, index = disp.menu(\"Which certificate would you like to {0}?\".format(verb),\n choices, ok_label=\"Select\", flag=\"--cert-name\",\n force_interactive=True)\n if code != display_util.OK or not index in range(0, len(choices)):\n raise errors.Error(\"User ended interaction.\")\n certname = choices[index]\n return certname\n\ndef _report_lines(msgs):\n \"\"\"Format a results report for a category of single-line renewal outcomes\"\"\"\n return \" \" + \"\\n \".join(str(msg) for msg in msgs)\n\ndef _report_human_readable(config, parsed_certs):\n \"\"\"Format a results 
report for a parsed cert\"\"\"\n certinfo = []\n checker = ocsp.RevocationChecker()\n for cert in parsed_certs:\n if config.certname and cert.lineagename != config.certname:\n continue\n if config.domains and not set(config.domains).issubset(cert.names()):\n continue\n now = pytz.UTC.fromutc(datetime.datetime.utcnow())\n\n reasons = []\n if cert.is_test_cert:\n reasons.append('TEST_CERT')\n if cert.target_expiry <= now:\n reasons.append('EXPIRED')\n if checker.ocsp_revoked(cert.cert, cert.chain):\n reasons.append('REVOKED')\n\n if reasons:\n status = \"INVALID: \" + \", \".join(reasons)\n else:\n diff = cert.target_expiry - now\n if diff.days == 1:\n status = \"VALID: 1 day\"\n elif diff.days < 1:\n status = \"VALID: {0} hour(s)\".format(diff.seconds // 3600)\n else:\n status = \"VALID: {0} days\".format(diff.days)\n\n valid_string = \"{0} ({1})\".format(cert.target_expiry, status)\n certinfo.append(\" Certificate Name: {0}\\n\"\n \" Domains: {1}\\n\"\n \" Expiry Date: {2}\\n\"\n \" Certificate Path: {3}\\n\"\n \" Private Key Path: {4}\".format(\n cert.lineagename,\n \" \".join(cert.names()),\n valid_string,\n cert.fullchain,\n cert.privkey))\n return \"\\n\".join(certinfo)\n\ndef _describe_certs(config, parsed_certs, parse_failures):\n \"\"\"Print information about the certs we know about\"\"\"\n out = []\n\n notify = out.append\n\n if not parsed_certs and not parse_failures:\n notify(\"No certs found.\")\n else:\n if parsed_certs:\n match = \"matching \" if config.certname or config.domains else \"\"\n notify(\"Found the following {0}certs:\".format(match))\n notify(_report_human_readable(config, parsed_certs))\n if parse_failures:\n notify(\"\\nThe following renewal configuration files \"\n \"were invalid:\")\n notify(_report_lines(parse_failures))\n\n disp = zope.component.getUtility(interfaces.IDisplay)\n disp.notification(\"\\n\".join(out), pause=False, wrap=False)\n\ndef _search_lineages(cli_config, func, initial_rv):\n \"\"\"Iterate func over unbroken lineages, allowing custom return conditions.\n\n Allows flexible customization of return values, including multiple\n return values and complex checks.\n \"\"\"\n configs_dir = cli_config.renewal_configs_dir\n # Verify the directory is there\n util.make_or_verify_dir(configs_dir, mode=0o755, uid=os.geteuid())\n\n rv = initial_rv\n for renewal_file in storage.renewal_conf_files(cli_config):\n try:\n candidate_lineage = storage.RenewableCert(renewal_file, cli_config)\n except (errors.CertStorageError, IOError):\n logger.debug(\"Renewal conf file %s is broken. Skipping.\", renewal_file)\n logger.debug(\"Traceback was:\\n%s\", traceback.format_exc())\n continue\n rv = func(candidate_lineage, rv)\n return rv\n", "path": "certbot/cert_manager.py"}]} | 3,234 | 185 |
gh_patches_debug_42192 | rasdani/github-patches | git_diff | opsdroid__opsdroid-692 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change the way opsdroid prioritizes regex / NLU skills
# Description
Currently, opsdroid uses a score system to decide which skill has to run when more than one skill matches a message.
This score system is designed to prioritize NLU engines over regex (see #311).
The problem comes when you're working with an NLU engine and some regex skills at the same time. If the regex skill shares some words with an NLU intent, it will never be executed, even though the user writes the message exactly as in the regex.
## Steps to Reproduce
For example, you add a greetings intent to some NLU training with various sentences like 'hello', 'hello opsdroid', 'hi', 'hello there', 'whats up', 'good morning'...
And you have the min-score at 0.6.
Then you have a regex matcher with something like 'hello from (?P\w+)!'.
If a user writes "hello from Barcelona!", opsdroid will prioritize the NLU response, because it's almost certain that the NLU platform will detect the intent in the sentence.
## Expected Functionality
If a developer has the regex 'hello from (?P\w+)!' and it matches, it has to be prioritized over the NLU intent, because the regex is more specific.
## Proposed solution
My proposal is to redesign the priority system by specificity, so I would prioritize skills in this order:
_parse (#450) > regex > NLU engines_
And, to decide which skill to execute if more than one matches at the same level:
- In parse and regex, use the expression length: a longer expression is normally more specific.
- In NLU engines, use the engine score (as now); see the sketch below.
Any thought or discussion will be welcome :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/parsers/regex.py`
Content:
```
1 """A helper function for parsing and executing regex skills."""
2
3 import logging
4 import re
5
6 from opsdroid.const import REGEX_MAX_SCORE
7
8
9 _LOGGER = logging.getLogger(__name__)
10
11
12 async def calculate_score(regex):
13 """Calculate the score of a regex."""
14 # The score asymptotically approaches the max score
15 # based on the length of the expression.
16 return (1 - (1 / ((len(regex) + 1) ** 2))) * REGEX_MAX_SCORE
17
18
19 async def parse_regex(opsdroid, message):
20 """Parse a message against all regex skills."""
21 matched_skills = []
22 for skill in opsdroid.skills:
23 if "regex" in skill:
24 if skill["regex"]["case_sensitive"]:
25 regex = re.search(skill["regex"]["expression"],
26 message.text)
27 else:
28 regex = re.search(skill["regex"]["expression"],
29 message.text, re.IGNORECASE)
30 if regex:
31 message.regex = regex
32 matched_skills.append({
33 "score": await calculate_score(
34 skill["regex"]["expression"]),
35 "skill": skill["skill"],
36 "config": skill["config"],
37 "message": message
38 })
39 return matched_skills
40
```
Path: `opsdroid/const.py`
Content:
```
1 """Constants used by OpsDroid."""
2 import os
3 from appdirs import user_log_dir, user_config_dir, user_data_dir
4 from opsdroid import __version__ # noqa # pylint: disable=unused-import
5
6 NAME = 'opsdroid'
7 DEFAULT_GIT_URL = "https://github.com/opsdroid/"
8 MODULES_DIRECTORY = "opsdroid-modules"
9 DEFAULT_ROOT_PATH = user_data_dir(NAME)
10 DEFAULT_LOG_FILENAME = os.path.join(
11 user_log_dir(NAME, appauthor=False), 'output.log')
12 DEFAULT_MODULES_PATH = user_data_dir(NAME, MODULES_DIRECTORY)
13 DEFAULT_MODULE_DEPS_PATH = os.path.join(
14 user_data_dir(NAME, MODULES_DIRECTORY), "site-packages")
15 DEFAULT_CONFIG_PATH = os.path.join(
16 user_config_dir(NAME, appauthor=False), "configuration.yaml")
17 PRE_0_12_0_CONFIG_PATH = os.path.join(DEFAULT_ROOT_PATH, "configuration.yaml")
18 PRE_0_12_0_ROOT_PATH = os.path.expanduser("~/.opsdroid")
19 DEFAULT_MODULE_BRANCH = "master"
20 DEFAULT_LANGUAGE = 'en'
21 LOCALE_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'locale')
22 EXAMPLE_CONFIG_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
23 "configuration/example_configuration.yaml")
24 REGEX_MAX_SCORE = 0.6
25
26 RASANLU_DEFAULT_URL = "http://localhost:5000"
27 RASANLU_DEFAULT_PROJECT = "opsdroid"
28
29 LUISAI_DEFAULT_URL = "https://westus.api.cognitive.microsoft.com" \
30 "/luis/v2.0/apps/"
31
32 DIALOGFLOW_API_ENDPOINT = "https://api.dialogflow.com/v1/query"
33 DIALOGFLOW_API_VERSION = "20150910"
34
35 WITAI_DEFAULT_VERSION = "20170307"
36 WITAI_API_ENDPOINT = "https://api.wit.ai/message?"
37
38 RECASTAI_API_ENDPOINT = "https://api.recast.ai/v2/request"
39
```
Path: `opsdroid/matchers.py`
Content:
```
1 """Decorator functions to use when creating skill modules."""
2
3 import logging
4
5 from opsdroid.helper import get_opsdroid
6 from opsdroid.web import Web
7
8
9 _LOGGER = logging.getLogger(__name__)
10
11
12 def match_regex(regex, case_sensitive=True):
13 """Return regex match decorator."""
14 def matcher(func):
15 """Add decorated function to skills list for regex matching."""
16 opsdroid = get_opsdroid()
17 if opsdroid:
18 config = opsdroid.loader.current_import_config
19 regex_setup = {
20 "expression": regex,
21 "case_sensitive": case_sensitive
22 }
23 opsdroid.skills.append({"regex": regex_setup,
24 "skill": func,
25 "config": config})
26 return func
27 return matcher
28
29
30 def match_apiai_action(action):
31 """Return Dialogflow action match decorator."""
32 def matcher(func):
33 """Add decorated function to skills list for Dialogflow matching."""
34 opsdroid = get_opsdroid()
35 if opsdroid:
36 config = opsdroid.loader.current_import_config
37 opsdroid.skills.append({"dialogflow_action": action,
38 "skill": func,
39 "config": config})
40 return func
41 _LOGGER.warning(_("Api.ai is now called Dialogflow, this matcher "
42 "will stop working in the future. "
43 "Use match_dialogflow_action instead."))
44 return matcher
45
46
47 def match_apiai_intent(intent):
48 """Return Dialogflow intent match decorator."""
49 def matcher(func):
50 """Add decorated function to skills list for Dialogflow matching."""
51 opsdroid = get_opsdroid()
52 if opsdroid:
53 config = opsdroid.loader.current_import_config
54 opsdroid.skills.append({"dialogflow_intent": intent,
55 "skill": func,
56 "config": config})
57 return func
58 _LOGGER.warning(_("Api.ai is now called Dialogflow, this matcher "
59 "will stop working in the future. "
60 "Use match_dialogflow_intent instead."))
61 return matcher
62
63
64 def match_dialogflow_action(action):
65 """Return Dialogflowi action match decorator."""
66 def matcher(func):
67 """Add decorated function to skills list for Dialogflow matching."""
68 opsdroid = get_opsdroid()
69 if opsdroid:
70 config = opsdroid.loader.current_import_config
71 opsdroid.skills.append({"dialogflow_action": action,
72 "skill": func,
73 "config": config})
74 return func
75 return matcher
76
77
78 def match_dialogflow_intent(intent):
79 """Return Dialogflow intent match decorator."""
80 def matcher(func):
81 """Add decorated function to skills list for Dialogflow matching."""
82 opsdroid = get_opsdroid()
83 if opsdroid:
84 config = opsdroid.loader.current_import_config
85 opsdroid.skills.append({"dialogflow_intent": intent,
86 "skill": func,
87 "config": config})
88 return func
89 return matcher
90
91
92 def match_luisai_intent(intent):
93 """Return luisai intent match decorator."""
94 def matcher(func):
95 """Add decorated function to skills list for luisai matching."""
96 opsdroid = get_opsdroid()
97 if opsdroid:
98 config = opsdroid.loader.current_import_config
99 opsdroid.skills.append({"luisai_intent": intent,
100 "skill": func,
101 "config": config})
102 return func
103 return matcher
104
105
106 def match_rasanlu(intent):
107 """Return Rasa NLU intent match decorator."""
108 def matcher(func):
109 """Add decorated function to skills list for Rasa NLU matching."""
110 opsdroid = get_opsdroid()
111 if opsdroid:
112 config = opsdroid.loader.current_import_config
113 opsdroid.skills.append({"rasanlu_intent": intent,
114 "skill": func,
115 "config": config})
116 return func
117 return matcher
118
119
120 def match_recastai(intent):
121 """Return recastai intent match decorator."""
122 def matcher(func):
123 """Add decorated function to skills list for recastai matching."""
124 opsdroid = get_opsdroid()
125 if opsdroid:
126 config = opsdroid.loader.current_import_config
127 opsdroid.skills.append({"recastai_intent": intent,
128 "skill": func,
129 "config": config})
130 return func
131 return matcher
132
133
134 def match_witai(intent):
135 """Return witai intent match decorator."""
136 def matcher(func):
137 """Add decorated function to skills list for witai matching."""
138 opsdroid = get_opsdroid()
139 if opsdroid:
140 config = opsdroid.loader.current_import_config
141 opsdroid.skills.append({"witai_intent": intent,
142 "skill": func,
143 "config": config})
144 return func
145 return matcher
146
147
148 def match_crontab(crontab, timezone=None):
149 """Return crontab match decorator."""
150 def matcher(func):
151 """Add decorated function to skills list for crontab matching."""
152 opsdroid = get_opsdroid()
153 if opsdroid:
154 config = opsdroid.loader.current_import_config
155 opsdroid.skills.append({"crontab": crontab,
156 "skill": func,
157 "config": config,
158 "timezone": timezone})
159 return func
160 return matcher
161
162
163 def match_webhook(webhook):
164 """Return webhook match decorator."""
165 def matcher(func):
166 """Add decorated function to skills list for webhook matching."""
167 opsdroid = get_opsdroid()
168 if opsdroid:
169 config = opsdroid.loader.current_import_config
170 opsdroid.skills.append({"webhook": webhook,
171 "skill": func,
172 "config": config})
173
174 async def wrapper(req, opsdroid=opsdroid, config=config):
175 """Wrap up the aiohttp handler."""
176 _LOGGER.info("Running skill %s via webhook", webhook)
177 opsdroid.stats["webhooks_called"] = \
178 opsdroid.stats["webhooks_called"] + 1
179 await func(opsdroid, config, req)
180 return Web.build_response(200, {"called_skill": webhook})
181
182 opsdroid.web_server.web_app.router.add_post(
183 "/skill/{}/{}".format(config["name"], webhook), wrapper)
184 opsdroid.web_server.web_app.router.add_post(
185 "/skill/{}/{}/".format(config["name"], webhook), wrapper)
186
187 return func
188 return matcher
189
190
191 def match_always(func=None):
192 """Return always match decorator."""
193 def matcher(func):
194 """Add decorated function to skills list for always matching."""
195 opsdroid = get_opsdroid()
196 if opsdroid:
197 config = opsdroid.loader.current_import_config
198 opsdroid.skills.append({"always": True,
199 "skill": func,
200 "config": config})
201 return func
202
203 # Allow for decorator with or without parenthesis as there are no args.
204 if callable(func):
205 return matcher(func)
206 return matcher
207
```
--- END FILES ---
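As a point of reference, a minimal standalone sketch (not one of the repository files above) of how the scoring formula in `opsdroid/parsers/regex.py` behaves: it caps every regex match strictly below `REGEX_MAX_SCORE` (0.6), which is consistent with the issue's observation that an NLU match clearing a 0.6 min-score always wins.
```python
# Minimal sketch reusing the formula from opsdroid/parsers/regex.py above.
REGEX_MAX_SCORE = 0.6  # value taken from opsdroid/const.py above


def calculate_score(regex):
    # Asymptotically approaches the cap as the expression gets longer.
    return (1 - (1 / ((len(regex) + 1) ** 2))) * REGEX_MAX_SCORE


for expression in ("hi", "hello", "hello from (\\w+)!"):
    print(f"{expression!r}: {calculate_score(expression):.4f}")
# Every expression scores below 0.6, no matter how long it is.
```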
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opsdroid/const.py b/opsdroid/const.py
--- a/opsdroid/const.py
+++ b/opsdroid/const.py
@@ -21,7 +21,7 @@
LOCALE_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'locale')
EXAMPLE_CONFIG_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
"configuration/example_configuration.yaml")
-REGEX_MAX_SCORE = 0.6
+REGEX_SCORE_FACTOR = 0.6
RASANLU_DEFAULT_URL = "http://localhost:5000"
RASANLU_DEFAULT_PROJECT = "opsdroid"
diff --git a/opsdroid/matchers.py b/opsdroid/matchers.py
--- a/opsdroid/matchers.py
+++ b/opsdroid/matchers.py
@@ -2,6 +2,7 @@
import logging
+from opsdroid.const import REGEX_SCORE_FACTOR
from opsdroid.helper import get_opsdroid
from opsdroid.web import Web
@@ -9,7 +10,7 @@
_LOGGER = logging.getLogger(__name__)
-def match_regex(regex, case_sensitive=True):
+def match_regex(regex, case_sensitive=True, score_factor=None):
"""Return regex match decorator."""
def matcher(func):
"""Add decorated function to skills list for regex matching."""
@@ -18,7 +19,8 @@
config = opsdroid.loader.current_import_config
regex_setup = {
"expression": regex,
- "case_sensitive": case_sensitive
+ "case_sensitive": case_sensitive,
+ "score_factor": score_factor or REGEX_SCORE_FACTOR,
}
opsdroid.skills.append({"regex": regex_setup,
"skill": func,
diff --git a/opsdroid/parsers/regex.py b/opsdroid/parsers/regex.py
--- a/opsdroid/parsers/regex.py
+++ b/opsdroid/parsers/regex.py
@@ -3,17 +3,14 @@
import logging
import re
-from opsdroid.const import REGEX_MAX_SCORE
-
-
_LOGGER = logging.getLogger(__name__)
-async def calculate_score(regex):
+async def calculate_score(regex, score_factor):
"""Calculate the score of a regex."""
# The score asymptotically approaches the max score
# based on the length of the expression.
- return (1 - (1 / ((len(regex) + 1) ** 2))) * REGEX_MAX_SCORE
+ return (1 - (1 / ((len(regex) + 1) ** 2))) * score_factor
async def parse_regex(opsdroid, message):
@@ -21,17 +18,18 @@
matched_skills = []
for skill in opsdroid.skills:
if "regex" in skill:
- if skill["regex"]["case_sensitive"]:
- regex = re.search(skill["regex"]["expression"],
+ opts = skill["regex"]
+ if opts["case_sensitive"]:
+ regex = re.search(opts["expression"],
message.text)
else:
- regex = re.search(skill["regex"]["expression"],
+ regex = re.search(opts["expression"],
message.text, re.IGNORECASE)
if regex:
message.regex = regex
matched_skills.append({
"score": await calculate_score(
- skill["regex"]["expression"]),
+ opts["expression"], opts["score_factor"]),
"skill": skill["skill"],
"config": skill["config"],
"message": message
| {"golden_diff": "diff --git a/opsdroid/const.py b/opsdroid/const.py\n--- a/opsdroid/const.py\n+++ b/opsdroid/const.py\n@@ -21,7 +21,7 @@\n LOCALE_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'locale')\n EXAMPLE_CONFIG_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"configuration/example_configuration.yaml\")\n-REGEX_MAX_SCORE = 0.6\n+REGEX_SCORE_FACTOR = 0.6\n \n RASANLU_DEFAULT_URL = \"http://localhost:5000\"\n RASANLU_DEFAULT_PROJECT = \"opsdroid\"\ndiff --git a/opsdroid/matchers.py b/opsdroid/matchers.py\n--- a/opsdroid/matchers.py\n+++ b/opsdroid/matchers.py\n@@ -2,6 +2,7 @@\n \n import logging\n \n+from opsdroid.const import REGEX_SCORE_FACTOR\n from opsdroid.helper import get_opsdroid\n from opsdroid.web import Web\n \n@@ -9,7 +10,7 @@\n _LOGGER = logging.getLogger(__name__)\n \n \n-def match_regex(regex, case_sensitive=True):\n+def match_regex(regex, case_sensitive=True, score_factor=None):\n \"\"\"Return regex match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for regex matching.\"\"\"\n@@ -18,7 +19,8 @@\n config = opsdroid.loader.current_import_config\n regex_setup = {\n \"expression\": regex,\n- \"case_sensitive\": case_sensitive\n+ \"case_sensitive\": case_sensitive,\n+ \"score_factor\": score_factor or REGEX_SCORE_FACTOR,\n }\n opsdroid.skills.append({\"regex\": regex_setup,\n \"skill\": func,\ndiff --git a/opsdroid/parsers/regex.py b/opsdroid/parsers/regex.py\n--- a/opsdroid/parsers/regex.py\n+++ b/opsdroid/parsers/regex.py\n@@ -3,17 +3,14 @@\n import logging\n import re\n \n-from opsdroid.const import REGEX_MAX_SCORE\n-\n-\n _LOGGER = logging.getLogger(__name__)\n \n \n-async def calculate_score(regex):\n+async def calculate_score(regex, score_factor):\n \"\"\"Calculate the score of a regex.\"\"\"\n # The score asymptotically approaches the max score\n # based on the length of the expression.\n- return (1 - (1 / ((len(regex) + 1) ** 2))) * REGEX_MAX_SCORE\n+ return (1 - (1 / ((len(regex) + 1) ** 2))) * score_factor\n \n \n async def parse_regex(opsdroid, message):\n@@ -21,17 +18,18 @@\n matched_skills = []\n for skill in opsdroid.skills:\n if \"regex\" in skill:\n- if skill[\"regex\"][\"case_sensitive\"]:\n- regex = re.search(skill[\"regex\"][\"expression\"],\n+ opts = skill[\"regex\"]\n+ if opts[\"case_sensitive\"]:\n+ regex = re.search(opts[\"expression\"],\n message.text)\n else:\n- regex = re.search(skill[\"regex\"][\"expression\"],\n+ regex = re.search(opts[\"expression\"],\n message.text, re.IGNORECASE)\n if regex:\n message.regex = regex\n matched_skills.append({\n \"score\": await calculate_score(\n- skill[\"regex\"][\"expression\"]),\n+ opts[\"expression\"], opts[\"score_factor\"]),\n \"skill\": skill[\"skill\"],\n \"config\": skill[\"config\"],\n \"message\": message\n", "issue": "Change the way opsdroid prioritize regex / NLU skills\n# Description\r\nActually opsdroid use a score system to decide what skill have to run when more than one skills match a message.\r\nThis score system is designed to prioritize NLU engines over regex (see #311).\r\n\r\nThe problem comes when you're working at the same time with some NLU engine and some regex skills. 
If the regex skill have some words of some NLU intent, it will be never executed, even though you write it exactly as in the regex.\r\n\r\n## Steps to Reproduce\r\nFor example, you add a greetings intent to some NLU training with various sentences like 'hello', 'hello opsdroid', 'hi', 'hello there', 'whats up', 'good morning'...\r\nAnd you have the min-score at 0.6.\r\n\r\nThen you have a regex matcher with something like 'hello from (?P\\w+)!'.\r\n\r\nIf a user writes \"hello from Barcelona!\" opsdroid will prioritize the NLU response, because it's almost secure that the NLU platform will detect the intent into the sentence.\r\n\r\n## Expected Functionality\r\nIf a developer has the regex 'hello from (?P\\w+)!' and it matches, it has to be prioritized over the NLU intent, because regex it's more specific.\r\n\r\n## Proposed solution\r\nMy proposal is redesign the priority system by specificity. So I will prioritize skills in that order:\r\n_parse (#450) > regex > NLU engines_\r\n\r\nAnd, to decide what skill to execute if more than one match in the same level:\r\n- In parse and regex, use the expression length. Longer expression normally it's more specific.\r\n- In NLU engines, use the engine score (like now)\r\n\r\nAny thought or discussion will be welcome :)\n", "before_files": [{"content": "\"\"\"A helper function for parsing and executing regex skills.\"\"\"\n\nimport logging\nimport re\n\nfrom opsdroid.const import REGEX_MAX_SCORE\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def calculate_score(regex):\n \"\"\"Calculate the score of a regex.\"\"\"\n # The score asymptotically approaches the max score\n # based on the length of the expression.\n return (1 - (1 / ((len(regex) + 1) ** 2))) * REGEX_MAX_SCORE\n\n\nasync def parse_regex(opsdroid, message):\n \"\"\"Parse a message against all regex skills.\"\"\"\n matched_skills = []\n for skill in opsdroid.skills:\n if \"regex\" in skill:\n if skill[\"regex\"][\"case_sensitive\"]:\n regex = re.search(skill[\"regex\"][\"expression\"],\n message.text)\n else:\n regex = re.search(skill[\"regex\"][\"expression\"],\n message.text, re.IGNORECASE)\n if regex:\n message.regex = regex\n matched_skills.append({\n \"score\": await calculate_score(\n skill[\"regex\"][\"expression\"]),\n \"skill\": skill[\"skill\"],\n \"config\": skill[\"config\"],\n \"message\": message\n })\n return matched_skills\n", "path": "opsdroid/parsers/regex.py"}, {"content": "\"\"\"Constants used by OpsDroid.\"\"\"\nimport os\nfrom appdirs import user_log_dir, user_config_dir, user_data_dir\nfrom opsdroid import __version__ # noqa # pylint: disable=unused-import\n\nNAME = 'opsdroid'\nDEFAULT_GIT_URL = \"https://github.com/opsdroid/\"\nMODULES_DIRECTORY = \"opsdroid-modules\"\nDEFAULT_ROOT_PATH = user_data_dir(NAME)\nDEFAULT_LOG_FILENAME = os.path.join(\n user_log_dir(NAME, appauthor=False), 'output.log')\nDEFAULT_MODULES_PATH = user_data_dir(NAME, MODULES_DIRECTORY)\nDEFAULT_MODULE_DEPS_PATH = os.path.join(\n user_data_dir(NAME, MODULES_DIRECTORY), \"site-packages\")\nDEFAULT_CONFIG_PATH = os.path.join(\n user_config_dir(NAME, appauthor=False), \"configuration.yaml\")\nPRE_0_12_0_CONFIG_PATH = os.path.join(DEFAULT_ROOT_PATH, \"configuration.yaml\")\nPRE_0_12_0_ROOT_PATH = os.path.expanduser(\"~/.opsdroid\")\nDEFAULT_MODULE_BRANCH = \"master\"\nDEFAULT_LANGUAGE = 'en'\nLOCALE_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'locale')\nEXAMPLE_CONFIG_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n 
\"configuration/example_configuration.yaml\")\nREGEX_MAX_SCORE = 0.6\n\nRASANLU_DEFAULT_URL = \"http://localhost:5000\"\nRASANLU_DEFAULT_PROJECT = \"opsdroid\"\n\nLUISAI_DEFAULT_URL = \"https://westus.api.cognitive.microsoft.com\" \\\n \"/luis/v2.0/apps/\"\n\nDIALOGFLOW_API_ENDPOINT = \"https://api.dialogflow.com/v1/query\"\nDIALOGFLOW_API_VERSION = \"20150910\"\n\nWITAI_DEFAULT_VERSION = \"20170307\"\nWITAI_API_ENDPOINT = \"https://api.wit.ai/message?\"\n\nRECASTAI_API_ENDPOINT = \"https://api.recast.ai/v2/request\"\n", "path": "opsdroid/const.py"}, {"content": "\"\"\"Decorator functions to use when creating skill modules.\"\"\"\n\nimport logging\n\nfrom opsdroid.helper import get_opsdroid\nfrom opsdroid.web import Web\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\ndef match_regex(regex, case_sensitive=True):\n \"\"\"Return regex match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for regex matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n regex_setup = {\n \"expression\": regex,\n \"case_sensitive\": case_sensitive\n }\n opsdroid.skills.append({\"regex\": regex_setup,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_apiai_action(action):\n \"\"\"Return Dialogflow action match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Dialogflow matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"dialogflow_action\": action,\n \"skill\": func,\n \"config\": config})\n return func\n _LOGGER.warning(_(\"Api.ai is now called Dialogflow, this matcher \"\n \"will stop working in the future. \"\n \"Use match_dialogflow_action instead.\"))\n return matcher\n\n\ndef match_apiai_intent(intent):\n \"\"\"Return Dialogflow intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Dialogflow matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"dialogflow_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n _LOGGER.warning(_(\"Api.ai is now called Dialogflow, this matcher \"\n \"will stop working in the future. 
\"\n \"Use match_dialogflow_intent instead.\"))\n return matcher\n\n\ndef match_dialogflow_action(action):\n \"\"\"Return Dialogflowi action match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Dialogflow matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"dialogflow_action\": action,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_dialogflow_intent(intent):\n \"\"\"Return Dialogflow intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Dialogflow matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"dialogflow_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_luisai_intent(intent):\n \"\"\"Return luisai intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for luisai matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"luisai_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_rasanlu(intent):\n \"\"\"Return Rasa NLU intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Rasa NLU matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"rasanlu_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_recastai(intent):\n \"\"\"Return recastai intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for recastai matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"recastai_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_witai(intent):\n \"\"\"Return witai intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for witai matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"witai_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_crontab(crontab, timezone=None):\n \"\"\"Return crontab match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for crontab matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"crontab\": crontab,\n \"skill\": func,\n \"config\": config,\n \"timezone\": timezone})\n return func\n return matcher\n\n\ndef match_webhook(webhook):\n \"\"\"Return webhook match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for webhook matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"webhook\": webhook,\n \"skill\": func,\n \"config\": config})\n\n async def wrapper(req, opsdroid=opsdroid, config=config):\n \"\"\"Wrap up the aiohttp handler.\"\"\"\n _LOGGER.info(\"Running skill %s via webhook\", webhook)\n opsdroid.stats[\"webhooks_called\"] = \\\n opsdroid.stats[\"webhooks_called\"] + 1\n await func(opsdroid, 
config, req)\n return Web.build_response(200, {\"called_skill\": webhook})\n\n opsdroid.web_server.web_app.router.add_post(\n \"/skill/{}/{}\".format(config[\"name\"], webhook), wrapper)\n opsdroid.web_server.web_app.router.add_post(\n \"/skill/{}/{}/\".format(config[\"name\"], webhook), wrapper)\n\n return func\n return matcher\n\n\ndef match_always(func=None):\n \"\"\"Return always match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for always matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"always\": True,\n \"skill\": func,\n \"config\": config})\n return func\n\n # Allow for decorator with or without parenthesis as there are no args.\n if callable(func):\n return matcher(func)\n return matcher\n", "path": "opsdroid/matchers.py"}], "after_files": [{"content": "\"\"\"A helper function for parsing and executing regex skills.\"\"\"\n\nimport logging\nimport re\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def calculate_score(regex, score_factor):\n \"\"\"Calculate the score of a regex.\"\"\"\n # The score asymptotically approaches the max score\n # based on the length of the expression.\n return (1 - (1 / ((len(regex) + 1) ** 2))) * score_factor\n\n\nasync def parse_regex(opsdroid, message):\n \"\"\"Parse a message against all regex skills.\"\"\"\n matched_skills = []\n for skill in opsdroid.skills:\n if \"regex\" in skill:\n opts = skill[\"regex\"]\n if opts[\"case_sensitive\"]:\n regex = re.search(opts[\"expression\"],\n message.text)\n else:\n regex = re.search(opts[\"expression\"],\n message.text, re.IGNORECASE)\n if regex:\n message.regex = regex\n matched_skills.append({\n \"score\": await calculate_score(\n opts[\"expression\"], opts[\"score_factor\"]),\n \"skill\": skill[\"skill\"],\n \"config\": skill[\"config\"],\n \"message\": message\n })\n return matched_skills\n", "path": "opsdroid/parsers/regex.py"}, {"content": "\"\"\"Constants used by OpsDroid.\"\"\"\nimport os\nfrom appdirs import user_log_dir, user_config_dir, user_data_dir\nfrom opsdroid import __version__ # noqa # pylint: disable=unused-import\n\nNAME = 'opsdroid'\nDEFAULT_GIT_URL = \"https://github.com/opsdroid/\"\nMODULES_DIRECTORY = \"opsdroid-modules\"\nDEFAULT_ROOT_PATH = user_data_dir(NAME)\nDEFAULT_LOG_FILENAME = os.path.join(\n user_log_dir(NAME, appauthor=False), 'output.log')\nDEFAULT_MODULES_PATH = user_data_dir(NAME, MODULES_DIRECTORY)\nDEFAULT_MODULE_DEPS_PATH = os.path.join(\n user_data_dir(NAME, MODULES_DIRECTORY), \"site-packages\")\nDEFAULT_CONFIG_PATH = os.path.join(\n user_config_dir(NAME, appauthor=False), \"configuration.yaml\")\nPRE_0_12_0_CONFIG_PATH = os.path.join(DEFAULT_ROOT_PATH, \"configuration.yaml\")\nPRE_0_12_0_ROOT_PATH = os.path.expanduser(\"~/.opsdroid\")\nDEFAULT_MODULE_BRANCH = \"master\"\nDEFAULT_LANGUAGE = 'en'\nLOCALE_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'locale')\nEXAMPLE_CONFIG_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"configuration/example_configuration.yaml\")\nREGEX_SCORE_FACTOR = 0.6\n\nRASANLU_DEFAULT_URL = \"http://localhost:5000\"\nRASANLU_DEFAULT_PROJECT = \"opsdroid\"\n\nLUISAI_DEFAULT_URL = \"https://westus.api.cognitive.microsoft.com\" \\\n \"/luis/v2.0/apps/\"\n\nDIALOGFLOW_API_ENDPOINT = \"https://api.dialogflow.com/v1/query\"\nDIALOGFLOW_API_VERSION = \"20150910\"\n\nWITAI_DEFAULT_VERSION = \"20170307\"\nWITAI_API_ENDPOINT = \"https://api.wit.ai/message?\"\n\nRECASTAI_API_ENDPOINT = 
\"https://api.recast.ai/v2/request\"\n", "path": "opsdroid/const.py"}, {"content": "\"\"\"Decorator functions to use when creating skill modules.\"\"\"\n\nimport logging\n\nfrom opsdroid.const import REGEX_SCORE_FACTOR\nfrom opsdroid.helper import get_opsdroid\nfrom opsdroid.web import Web\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\ndef match_regex(regex, case_sensitive=True, score_factor=None):\n \"\"\"Return regex match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for regex matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n regex_setup = {\n \"expression\": regex,\n \"case_sensitive\": case_sensitive,\n \"score_factor\": score_factor or REGEX_SCORE_FACTOR,\n }\n opsdroid.skills.append({\"regex\": regex_setup,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_apiai_action(action):\n \"\"\"Return Dialogflow action match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Dialogflow matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"dialogflow_action\": action,\n \"skill\": func,\n \"config\": config})\n return func\n _LOGGER.warning(_(\"Api.ai is now called Dialogflow, this matcher \"\n \"will stop working in the future. \"\n \"Use match_dialogflow_action instead.\"))\n return matcher\n\n\ndef match_apiai_intent(intent):\n \"\"\"Return Dialogflow intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Dialogflow matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"dialogflow_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n _LOGGER.warning(_(\"Api.ai is now called Dialogflow, this matcher \"\n \"will stop working in the future. 
\"\n \"Use match_dialogflow_intent instead.\"))\n return matcher\n\n\ndef match_dialogflow_action(action):\n \"\"\"Return Dialogflowi action match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Dialogflow matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"dialogflow_action\": action,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_dialogflow_intent(intent):\n \"\"\"Return Dialogflow intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Dialogflow matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"dialogflow_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_luisai_intent(intent):\n \"\"\"Return luisai intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for luisai matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"luisai_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_rasanlu(intent):\n \"\"\"Return Rasa NLU intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for Rasa NLU matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"rasanlu_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_recastai(intent):\n \"\"\"Return recastai intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for recastai matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"recastai_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_witai(intent):\n \"\"\"Return witai intent match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for witai matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"witai_intent\": intent,\n \"skill\": func,\n \"config\": config})\n return func\n return matcher\n\n\ndef match_crontab(crontab, timezone=None):\n \"\"\"Return crontab match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for crontab matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"crontab\": crontab,\n \"skill\": func,\n \"config\": config,\n \"timezone\": timezone})\n return func\n return matcher\n\n\ndef match_webhook(webhook):\n \"\"\"Return webhook match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for webhook matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"webhook\": webhook,\n \"skill\": func,\n \"config\": config})\n\n async def wrapper(req, opsdroid=opsdroid, config=config):\n \"\"\"Wrap up the aiohttp handler.\"\"\"\n _LOGGER.info(\"Running skill %s via webhook\", webhook)\n opsdroid.stats[\"webhooks_called\"] = \\\n opsdroid.stats[\"webhooks_called\"] + 1\n await func(opsdroid, 
config, req)\n return Web.build_response(200, {\"called_skill\": webhook})\n\n opsdroid.web_server.web_app.router.add_post(\n \"/skill/{}/{}\".format(config[\"name\"], webhook), wrapper)\n opsdroid.web_server.web_app.router.add_post(\n \"/skill/{}/{}/\".format(config[\"name\"], webhook), wrapper)\n\n return func\n return matcher\n\n\ndef match_always(func=None):\n \"\"\"Return always match decorator.\"\"\"\n def matcher(func):\n \"\"\"Add decorated function to skills list for always matching.\"\"\"\n opsdroid = get_opsdroid()\n if opsdroid:\n config = opsdroid.loader.current_import_config\n opsdroid.skills.append({\"always\": True,\n \"skill\": func,\n \"config\": config})\n return func\n\n # Allow for decorator with or without parenthesis as there are no args.\n if callable(func):\n return matcher(func)\n return matcher\n", "path": "opsdroid/matchers.py"}]} | 3,478 | 774 |
gh_patches_debug_2560 | rasdani/github-patches | git_diff | python-poetry__poetry-1673 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`poetry shell` with fish does not echo in python REPL
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
- **OS version and name**: MacOS Catalina 10.15.1
- **Poetry version**: 1.0.0b8
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: N/A
## Issue
As described by [L0stLink](https://github.com/sdispater/poetry/issues/1593#issuecomment-555132468) in a now-closed issue, `fish` still appears to have some issues when using `poetry shell`. Specifically, in the Python REPL, "typed text [is] not visible, but pressing enter shows that the input was registered and whatever was typed, executes".
It appears that the fix in #1621 only addressed the `bash` case.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/utils/shell.py`
Content:
```
1 import os
2 import signal
3 import sys
4
5 import pexpect
6
7 from clikit.utils.terminal import Terminal
8 from shellingham import ShellDetectionFailure
9 from shellingham import detect_shell
10
11 from ._compat import WINDOWS
12 from .env import VirtualEnv
13
14
15 class Shell:
16 """
17 Represents the current shell.
18 """
19
20 _shell = None
21
22 def __init__(self, name, path): # type: (str, str) -> None
23 self._name = name
24 self._path = path
25
26 @property
27 def name(self): # type: () -> str
28 return self._name
29
30 @property
31 def path(self): # type: () -> str
32 return self._path
33
34 @classmethod
35 def get(cls): # type: () -> Shell
36 """
37 Retrieve the current shell.
38 """
39 if cls._shell is not None:
40 return cls._shell
41
42 try:
43 name, path = detect_shell(os.getpid())
44 except (RuntimeError, ShellDetectionFailure):
45 raise RuntimeError("Unable to detect the current shell.")
46
47 cls._shell = cls(name, path)
48
49 return cls._shell
50
51 def activate(self, env): # type: (VirtualEnv) -> None
52 if WINDOWS:
53 return env.execute(self.path)
54
55 terminal = Terminal()
56 with env.temp_environ():
57 c = pexpect.spawn(
58 self._path, ["-i"], dimensions=(terminal.height, terminal.width)
59 )
60
61 if not self._name == "bash":
62 c.setecho(False)
63
64 activate_script = self._get_activate_script()
65 bin_dir = "Scripts" if WINDOWS else "bin"
66 activate_path = env.path / bin_dir / activate_script
67 c.sendline("{} {}".format(self._get_source_command(), activate_path))
68
69 def resize(sig, data):
70 terminal = Terminal()
71 c.setwinsize(terminal.height, terminal.width)
72
73 signal.signal(signal.SIGWINCH, resize)
74
75 # Interact with the new shell.
76 c.interact(escape_character=None)
77 c.close()
78
79 sys.exit(c.exitstatus)
80
81 def _get_activate_script(self):
82 if "fish" == self._name:
83 suffix = ".fish"
84 elif "csh" == self._name:
85 suffix = ".csh"
86 else:
87 suffix = ""
88
89 return "activate" + suffix
90
91 def _get_source_command(self):
92 if "fish" == self._name:
93 return "source"
94 elif "csh" == self._name:
95 return "source"
96
97 return "."
98
99 def __repr__(self): # type: () -> str
100 return '{}("{}", "{}")'.format(self.__class__.__name__, self._name, self._path)
101
```
--- END FILES ---
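For background on the `setecho` call above, a rough standalone sketch (not one of the files above; it assumes a POSIX system with `/bin/sh` and the `pexpect` package installed, and exact behaviour varies by shell, which is the crux of this issue): clearing echo on the spawned pty is what hides typed input in programs that rely on the terminal to echo, such as the Python REPL.
```python
import pexpect

child = pexpect.spawn("/bin/sh", ["-i"])  # stand-in for the user's shell
child.setecho(False)       # same call activate() makes for non-bash shells above
child.sendline("echo hi")  # the typed line is typically no longer echoed back...
child.expect("hi")         # ...but the command still runs and prints its output
child.close()
```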
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/utils/shell.py b/poetry/utils/shell.py
--- a/poetry/utils/shell.py
+++ b/poetry/utils/shell.py
@@ -58,7 +58,7 @@
self._path, ["-i"], dimensions=(terminal.height, terminal.width)
)
- if not self._name == "bash":
+ if self._name == "zsh":
c.setecho(False)
activate_script = self._get_activate_script()
| {"golden_diff": "diff --git a/poetry/utils/shell.py b/poetry/utils/shell.py\n--- a/poetry/utils/shell.py\n+++ b/poetry/utils/shell.py\n@@ -58,7 +58,7 @@\n self._path, [\"-i\"], dimensions=(terminal.height, terminal.width)\n )\n \n- if not self._name == \"bash\":\n+ if self._name == \"zsh\":\n c.setecho(False)\n \n activate_script = self._get_activate_script()\n", "issue": "`poetry shell` with fish does not echo in python REPL\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n- **OS version and name**: MacOS Catalina 10.15.1\r\n- **Poetry version**: 1.0.0b8\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: N/A\r\n\r\n## Issue\r\nAs described by [L0stLink](https://github.com/sdispater/poetry/issues/1593#issuecomment-555132468) in a now-closed issue, `fish` still appears to have some issues when using `poetry shell`. Specifically, in the Python REPL \"typed text not visible, but pressing enter shows that the input was registered and whatever was typed, executes\".\r\n\r\nIt appears that the fix in #1621 only addressed the `bash` case.\n", "before_files": [{"content": "import os\nimport signal\nimport sys\n\nimport pexpect\n\nfrom clikit.utils.terminal import Terminal\nfrom shellingham import ShellDetectionFailure\nfrom shellingham import detect_shell\n\nfrom ._compat import WINDOWS\nfrom .env import VirtualEnv\n\n\nclass Shell:\n \"\"\"\n Represents the current shell.\n \"\"\"\n\n _shell = None\n\n def __init__(self, name, path): # type: (str, str) -> None\n self._name = name\n self._path = path\n\n @property\n def name(self): # type: () -> str\n return self._name\n\n @property\n def path(self): # type: () -> str\n return self._path\n\n @classmethod\n def get(cls): # type: () -> Shell\n \"\"\"\n Retrieve the current shell.\n \"\"\"\n if cls._shell is not None:\n return cls._shell\n\n try:\n name, path = detect_shell(os.getpid())\n except (RuntimeError, ShellDetectionFailure):\n raise RuntimeError(\"Unable to detect the current shell.\")\n\n cls._shell = cls(name, path)\n\n return cls._shell\n\n def activate(self, env): # type: (VirtualEnv) -> None\n if WINDOWS:\n return env.execute(self.path)\n\n terminal = Terminal()\n with env.temp_environ():\n c = pexpect.spawn(\n self._path, [\"-i\"], dimensions=(terminal.height, terminal.width)\n )\n\n if not self._name == \"bash\":\n c.setecho(False)\n\n activate_script = self._get_activate_script()\n bin_dir = \"Scripts\" if WINDOWS else \"bin\"\n activate_path = env.path / bin_dir / activate_script\n c.sendline(\"{} {}\".format(self._get_source_command(), activate_path))\n\n def resize(sig, data):\n terminal = Terminal()\n c.setwinsize(terminal.height, terminal.width)\n\n signal.signal(signal.SIGWINCH, resize)\n\n # Interact with the new shell.\n c.interact(escape_character=None)\n c.close()\n\n sys.exit(c.exitstatus)\n\n def _get_activate_script(self):\n if \"fish\" == self._name:\n suffix = \".fish\"\n elif \"csh\" == self._name:\n suffix = \".csh\"\n else:\n suffix = \"\"\n\n return \"activate\" + suffix\n\n def _get_source_command(self):\n if \"fish\" == self._name:\n return \"source\"\n elif \"csh\" == self._name:\n return \"source\"\n\n return \".\"\n\n def __repr__(self): # type: () -> str\n return 
'{}(\"{}\", \"{}\")'.format(self.__class__.__name__, self._name, self._path)\n", "path": "poetry/utils/shell.py"}], "after_files": [{"content": "import os\nimport signal\nimport sys\n\nimport pexpect\n\nfrom clikit.utils.terminal import Terminal\nfrom shellingham import ShellDetectionFailure\nfrom shellingham import detect_shell\n\nfrom ._compat import WINDOWS\nfrom .env import VirtualEnv\n\n\nclass Shell:\n \"\"\"\n Represents the current shell.\n \"\"\"\n\n _shell = None\n\n def __init__(self, name, path): # type: (str, str) -> None\n self._name = name\n self._path = path\n\n @property\n def name(self): # type: () -> str\n return self._name\n\n @property\n def path(self): # type: () -> str\n return self._path\n\n @classmethod\n def get(cls): # type: () -> Shell\n \"\"\"\n Retrieve the current shell.\n \"\"\"\n if cls._shell is not None:\n return cls._shell\n\n try:\n name, path = detect_shell(os.getpid())\n except (RuntimeError, ShellDetectionFailure):\n raise RuntimeError(\"Unable to detect the current shell.\")\n\n cls._shell = cls(name, path)\n\n return cls._shell\n\n def activate(self, env): # type: (VirtualEnv) -> None\n if WINDOWS:\n return env.execute(self.path)\n\n terminal = Terminal()\n with env.temp_environ():\n c = pexpect.spawn(\n self._path, [\"-i\"], dimensions=(terminal.height, terminal.width)\n )\n\n if self._name == \"zsh\":\n c.setecho(False)\n\n activate_script = self._get_activate_script()\n bin_dir = \"Scripts\" if WINDOWS else \"bin\"\n activate_path = env.path / bin_dir / activate_script\n c.sendline(\"{} {}\".format(self._get_source_command(), activate_path))\n\n def resize(sig, data):\n terminal = Terminal()\n c.setwinsize(terminal.height, terminal.width)\n\n signal.signal(signal.SIGWINCH, resize)\n\n # Interact with the new shell.\n c.interact(escape_character=None)\n c.close()\n\n sys.exit(c.exitstatus)\n\n def _get_activate_script(self):\n if \"fish\" == self._name:\n suffix = \".fish\"\n elif \"csh\" == self._name:\n suffix = \".csh\"\n else:\n suffix = \"\"\n\n return \"activate\" + suffix\n\n def _get_source_command(self):\n if \"fish\" == self._name:\n return \"source\"\n elif \"csh\" == self._name:\n return \"source\"\n\n return \".\"\n\n def __repr__(self): # type: () -> str\n return '{}(\"{}\", \"{}\")'.format(self.__class__.__name__, self._name, self._path)\n", "path": "poetry/utils/shell.py"}]} | 1,345 | 111 |
gh_patches_debug_24650 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-1690 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switch to using `pathlib` instead of `os.path`
In newer Python code, it's generally recommended to use [`pathlib`](https://docs.python.org/3/library/pathlib.html) instead of `os.path`. There are [a few places where we use `os.path`](https://github.com/PlasmaPy/PlasmaPy/search?q=os.path+-plasmapy_sphinx) where we should switch to `pathlib`.
There are also [places where we are using `pathlib`](https://github.com/PlasmaPy/PlasmaPy/search?q=pathlib&type=code) which could be used as examples.
We shouldn't make this change in `docs/plasmapy_sphinx` since that's being moved to another package (#1643).
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/plasma/sources/openpmd_hdf5.py`
Content:
```
1 __all__ = ["HDF5Reader"]
2
3 import astropy.units as u
4 import h5py
5 import numpy as np
6 import os
7
8 from packaging.version import Version
9
10 from plasmapy.plasma.exceptions import DataStandardError
11 from plasmapy.plasma.plasma_base import GenericPlasma
12
13 _OUTDATED_VERSION = "1.1.0"
14 _NEWER_VERSION = "2.0.0"
15
16 # This is the order what OpenPMD uses to store unit
17 # dimensions for a record.
18 _UNITS = (u.meter, u.kilogram, u.second, u.ampere, u.Kelvin, u.mol, u.candela)
19
20
21 def _fetch_units(openPMD_dims):
22 """Converts a collection of OpenPMD dimensions to astropy.units."""
23
24 units = u.dimensionless_unscaled
25 for factor, unit in zip(openPMD_dims, _UNITS):
26 units *= unit**factor
27 units, *_ = units.compose()
28 return units
29
30
31 def _valid_version(openPMD_version, outdated=_OUTDATED_VERSION, newer=_NEWER_VERSION):
32 """Checks if the passed version is supported or not."""
33
34 parsed_version = Version(openPMD_version)
35 outdated_version = Version(outdated)
36 newer_version = Version(newer)
37 return outdated_version <= parsed_version < newer_version
38
39
40 class HDF5Reader(GenericPlasma):
41 """
42 Core class for accessing various attributes on HDF5 files that
43 are based on OpenPMD_ standards.
44
45 Parameters
46 ----------
47 hdf5 : `str`
48 Path to HDF5 file.
49
50 **kwargs
51 Any keyword accepted by `~plasmapy.plasma.plasma_base.GenericPlasma`.
52
53 """
54
55 def __init__(self, hdf5, **kwargs):
56 super().__init__(**kwargs)
57
58 if not os.path.isfile(hdf5):
59 raise FileNotFoundError(f"Could not find file: '{hdf5}'")
60
61 h5 = h5py.File(hdf5, "r")
62 self.h5 = h5
63
64 self._check_valid_openpmd_version()
65
66 self.subname = tuple(self.h5["data"])[0]
67
68 def __enter__(self):
69 return self.h5
70
71 def close(self):
72 self.h5.close()
73
74 def __exit__(self):
75 self.h5.close()
76
77 def _check_valid_openpmd_version(self):
78 try:
79 openPMD_version = self.h5.attrs["openPMD"].decode("utf-8")
80 if _valid_version(openPMD_version):
81 return True
82 else:
83 raise DataStandardError(
84 f"We currently only support HDF5 versions"
85 f"starting from v{_OUTDATED_VERSION} and "
86 f"lower than v{_NEWER_VERSION}. You can "
87 f"however convert your HDF5 to a supported "
88 f"version. For more information; see "
89 f"https://github.com/openPMD/openPMD-updater"
90 )
91 except KeyError:
92 raise DataStandardError(
93 "Input HDF5 file does not go on with standards defined by OpenPMD"
94 )
95
96 @property
97 def electric_field(self):
98 """
99 An (x, y, z) array containing electric field data. (Returned as an astropy
100 `~astropy.units.Quantity`.)
101 """
102 path = f"data/{self.subname}/fields/E"
103 if path in self.h5:
104 units = _fetch_units(self.h5[path].attrs["unitDimension"])
105 axes = [self.h5[path][axis] for axis in self.h5[path]]
106 return np.array(axes) * units
107 else:
108 raise AttributeError("No electric field data available in HDF5 file")
109
110 @property
111 def charge_density(self):
112 """
113 An array containing charge density data. (Returned as an astropy
114 `~astropy.units.Quantity`.)
115 """
116 path = f"data/{self.subname}/fields/rho"
117 if path in self.h5:
118 units = _fetch_units(self.h5[path].attrs["unitDimension"])
119 return np.array(self.h5[path]) * units
120 else:
121 raise AttributeError("No charge density data available in HDF5 file")
122
123 @property
124 def magnetic_field(self):
125 path = f"data/{self.subname}/fields/B"
126 if path in self.h5:
127 units = _fetch_units(self.h5[path].attrs["unitDimension"])
128 axes = [self.h5[path][axis] for axis in self.h5[path]]
129 return np.array(axes) * units
130 else:
131 raise AttributeError("No magnetic field data available in HDF5 file")
132
133 @property
134 def electric_current(self):
135 path = f"data/{self.subname}/fields/J"
136 if path in self.h5:
137 units = _fetch_units(self.h5[path].attrs["unitDimension"])
138 axes = [self.h5[path][axis] for axis in self.h5[path]]
139 return np.array(axes) * units
140 else:
141 raise AttributeError("No electric current data available in HDF5 file")
142
143 @classmethod
144 def is_datasource_for(cls, **kwargs):
145 if "hdf5" not in kwargs:
146 return False
147
148 hdf5 = kwargs.get("hdf5")
149 openPMD = kwargs.get("openPMD")
150
151 isfile = os.path.isfile(hdf5)
152 if not isfile:
153 raise FileNotFoundError(f"Could not find file: '{hdf5}'")
154
155 if "openPMD" not in kwargs:
156
157 h5 = h5py.File(hdf5, "r")
158 try:
159 openPMD = h5.attrs["openPMD"]
160 except KeyError:
161 openPMD = False
162
163 return openPMD
164
```
--- END FILES ---
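For reference, the `os.path.isfile` calls above have a direct `pathlib` equivalent; a minimal standard-library-only comparison (`data.h5` is just a placeholder name, not an actual file):
```python
import os
from pathlib import Path

hdf5 = "data.h5"                  # placeholder path
old_style = os.path.isfile(hdf5)  # current openpmd_hdf5.py style
new_style = Path(hdf5).is_file()  # pathlib style the issue asks for
assert old_style == new_style     # both are False for a missing path, True for a regular file
```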
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plasmapy/plasma/sources/openpmd_hdf5.py b/plasmapy/plasma/sources/openpmd_hdf5.py
--- a/plasmapy/plasma/sources/openpmd_hdf5.py
+++ b/plasmapy/plasma/sources/openpmd_hdf5.py
@@ -3,9 +3,9 @@
import astropy.units as u
import h5py
import numpy as np
-import os
from packaging.version import Version
+from pathlib import Path
from plasmapy.plasma.exceptions import DataStandardError
from plasmapy.plasma.plasma_base import GenericPlasma
@@ -55,7 +55,7 @@
def __init__(self, hdf5, **kwargs):
super().__init__(**kwargs)
- if not os.path.isfile(hdf5):
+ if not Path(hdf5).is_file():
raise FileNotFoundError(f"Could not find file: '{hdf5}'")
h5 = h5py.File(hdf5, "r")
@@ -148,8 +148,7 @@
hdf5 = kwargs.get("hdf5")
openPMD = kwargs.get("openPMD")
- isfile = os.path.isfile(hdf5)
- if not isfile:
+ if not Path(hdf5).is_file():
raise FileNotFoundError(f"Could not find file: '{hdf5}'")
if "openPMD" not in kwargs:
| {"golden_diff": "diff --git a/plasmapy/plasma/sources/openpmd_hdf5.py b/plasmapy/plasma/sources/openpmd_hdf5.py\n--- a/plasmapy/plasma/sources/openpmd_hdf5.py\n+++ b/plasmapy/plasma/sources/openpmd_hdf5.py\n@@ -3,9 +3,9 @@\n import astropy.units as u\n import h5py\n import numpy as np\n-import os\n \n from packaging.version import Version\n+from pathlib import Path\n \n from plasmapy.plasma.exceptions import DataStandardError\n from plasmapy.plasma.plasma_base import GenericPlasma\n@@ -55,7 +55,7 @@\n def __init__(self, hdf5, **kwargs):\n super().__init__(**kwargs)\n \n- if not os.path.isfile(hdf5):\n+ if not Path(hdf5).is_file():\n raise FileNotFoundError(f\"Could not find file: '{hdf5}'\")\n \n h5 = h5py.File(hdf5, \"r\")\n@@ -148,8 +148,7 @@\n hdf5 = kwargs.get(\"hdf5\")\n openPMD = kwargs.get(\"openPMD\")\n \n- isfile = os.path.isfile(hdf5)\n- if not isfile:\n+ if not Path(hdf5).is_file():\n raise FileNotFoundError(f\"Could not find file: '{hdf5}'\")\n \n if \"openPMD\" not in kwargs:\n", "issue": "Switch to using `pathlib` instead of `os.path`\nIn newer Python code, it's generally recommended to use [`pathlib`](https://docs.python.org/3/library/pathlib.html) instead of `os.path`. There are [a few places where we use `os.path`](https://github.com/PlasmaPy/PlasmaPy/search?q=os.path+-plasmapy_sphinx) where we should switch to `pathlib`. \r\n\r\nThere are also [places where we are using `pathlib`](https://github.com/PlasmaPy/PlasmaPy/search?q=pathlib&type=code) which could be used as examples. \r\n\r\nWe shouldn't make this change in `docs/plasmapy_sphinx` since that's being moved to another package (#1643).\r\n\r\nThanks!\n", "before_files": [{"content": "__all__ = [\"HDF5Reader\"]\n\nimport astropy.units as u\nimport h5py\nimport numpy as np\nimport os\n\nfrom packaging.version import Version\n\nfrom plasmapy.plasma.exceptions import DataStandardError\nfrom plasmapy.plasma.plasma_base import GenericPlasma\n\n_OUTDATED_VERSION = \"1.1.0\"\n_NEWER_VERSION = \"2.0.0\"\n\n# This is the order what OpenPMD uses to store unit\n# dimensions for a record.\n_UNITS = (u.meter, u.kilogram, u.second, u.ampere, u.Kelvin, u.mol, u.candela)\n\n\ndef _fetch_units(openPMD_dims):\n \"\"\"Converts a collection of OpenPMD dimensions to astropy.units.\"\"\"\n\n units = u.dimensionless_unscaled\n for factor, unit in zip(openPMD_dims, _UNITS):\n units *= unit**factor\n units, *_ = units.compose()\n return units\n\n\ndef _valid_version(openPMD_version, outdated=_OUTDATED_VERSION, newer=_NEWER_VERSION):\n \"\"\"Checks if the passed version is supported or not.\"\"\"\n\n parsed_version = Version(openPMD_version)\n outdated_version = Version(outdated)\n newer_version = Version(newer)\n return outdated_version <= parsed_version < newer_version\n\n\nclass HDF5Reader(GenericPlasma):\n \"\"\"\n Core class for accessing various attributes on HDF5 files that\n are based on OpenPMD_ standards.\n\n Parameters\n ----------\n hdf5 : `str`\n Path to HDF5 file.\n\n **kwargs\n Any keyword accepted by `~plasmapy.plasma.plasma_base.GenericPlasma`.\n\n \"\"\"\n\n def __init__(self, hdf5, **kwargs):\n super().__init__(**kwargs)\n\n if not os.path.isfile(hdf5):\n raise FileNotFoundError(f\"Could not find file: '{hdf5}'\")\n\n h5 = h5py.File(hdf5, \"r\")\n self.h5 = h5\n\n self._check_valid_openpmd_version()\n\n self.subname = tuple(self.h5[\"data\"])[0]\n\n def __enter__(self):\n return self.h5\n\n def close(self):\n self.h5.close()\n\n def __exit__(self):\n self.h5.close()\n\n def _check_valid_openpmd_version(self):\n try:\n 
openPMD_version = self.h5.attrs[\"openPMD\"].decode(\"utf-8\")\n if _valid_version(openPMD_version):\n return True\n else:\n raise DataStandardError(\n f\"We currently only support HDF5 versions\"\n f\"starting from v{_OUTDATED_VERSION} and \"\n f\"lower than v{_NEWER_VERSION}. You can \"\n f\"however convert your HDF5 to a supported \"\n f\"version. For more information; see \"\n f\"https://github.com/openPMD/openPMD-updater\"\n )\n except KeyError:\n raise DataStandardError(\n \"Input HDF5 file does not go on with standards defined by OpenPMD\"\n )\n\n @property\n def electric_field(self):\n \"\"\"\n An (x, y, z) array containing electric field data. (Returned as an astropy\n `~astropy.units.Quantity`.)\n \"\"\"\n path = f\"data/{self.subname}/fields/E\"\n if path in self.h5:\n units = _fetch_units(self.h5[path].attrs[\"unitDimension\"])\n axes = [self.h5[path][axis] for axis in self.h5[path]]\n return np.array(axes) * units\n else:\n raise AttributeError(\"No electric field data available in HDF5 file\")\n\n @property\n def charge_density(self):\n \"\"\"\n An array containing charge density data. (Returned as an astropy\n `~astropy.units.Quantity`.)\n \"\"\"\n path = f\"data/{self.subname}/fields/rho\"\n if path in self.h5:\n units = _fetch_units(self.h5[path].attrs[\"unitDimension\"])\n return np.array(self.h5[path]) * units\n else:\n raise AttributeError(\"No charge density data available in HDF5 file\")\n\n @property\n def magnetic_field(self):\n path = f\"data/{self.subname}/fields/B\"\n if path in self.h5:\n units = _fetch_units(self.h5[path].attrs[\"unitDimension\"])\n axes = [self.h5[path][axis] for axis in self.h5[path]]\n return np.array(axes) * units\n else:\n raise AttributeError(\"No magnetic field data available in HDF5 file\")\n\n @property\n def electric_current(self):\n path = f\"data/{self.subname}/fields/J\"\n if path in self.h5:\n units = _fetch_units(self.h5[path].attrs[\"unitDimension\"])\n axes = [self.h5[path][axis] for axis in self.h5[path]]\n return np.array(axes) * units\n else:\n raise AttributeError(\"No electric current data available in HDF5 file\")\n\n @classmethod\n def is_datasource_for(cls, **kwargs):\n if \"hdf5\" not in kwargs:\n return False\n\n hdf5 = kwargs.get(\"hdf5\")\n openPMD = kwargs.get(\"openPMD\")\n\n isfile = os.path.isfile(hdf5)\n if not isfile:\n raise FileNotFoundError(f\"Could not find file: '{hdf5}'\")\n\n if \"openPMD\" not in kwargs:\n\n h5 = h5py.File(hdf5, \"r\")\n try:\n openPMD = h5.attrs[\"openPMD\"]\n except KeyError:\n openPMD = False\n\n return openPMD\n", "path": "plasmapy/plasma/sources/openpmd_hdf5.py"}], "after_files": [{"content": "__all__ = [\"HDF5Reader\"]\n\nimport astropy.units as u\nimport h5py\nimport numpy as np\n\nfrom packaging.version import Version\nfrom pathlib import Path\n\nfrom plasmapy.plasma.exceptions import DataStandardError\nfrom plasmapy.plasma.plasma_base import GenericPlasma\n\n_OUTDATED_VERSION = \"1.1.0\"\n_NEWER_VERSION = \"2.0.0\"\n\n# This is the order what OpenPMD uses to store unit\n# dimensions for a record.\n_UNITS = (u.meter, u.kilogram, u.second, u.ampere, u.Kelvin, u.mol, u.candela)\n\n\ndef _fetch_units(openPMD_dims):\n \"\"\"Converts a collection of OpenPMD dimensions to astropy.units.\"\"\"\n\n units = u.dimensionless_unscaled\n for factor, unit in zip(openPMD_dims, _UNITS):\n units *= unit**factor\n units, *_ = units.compose()\n return units\n\n\ndef _valid_version(openPMD_version, outdated=_OUTDATED_VERSION, newer=_NEWER_VERSION):\n \"\"\"Checks if the passed version is supported 
or not.\"\"\"\n\n parsed_version = Version(openPMD_version)\n outdated_version = Version(outdated)\n newer_version = Version(newer)\n return outdated_version <= parsed_version < newer_version\n\n\nclass HDF5Reader(GenericPlasma):\n \"\"\"\n Core class for accessing various attributes on HDF5 files that\n are based on OpenPMD_ standards.\n\n Parameters\n ----------\n hdf5 : `str`\n Path to HDF5 file.\n\n **kwargs\n Any keyword accepted by `~plasmapy.plasma.plasma_base.GenericPlasma`.\n\n \"\"\"\n\n def __init__(self, hdf5, **kwargs):\n super().__init__(**kwargs)\n\n if not Path(hdf5).is_file():\n raise FileNotFoundError(f\"Could not find file: '{hdf5}'\")\n\n h5 = h5py.File(hdf5, \"r\")\n self.h5 = h5\n\n self._check_valid_openpmd_version()\n\n self.subname = tuple(self.h5[\"data\"])[0]\n\n def __enter__(self):\n return self.h5\n\n def close(self):\n self.h5.close()\n\n def __exit__(self):\n self.h5.close()\n\n def _check_valid_openpmd_version(self):\n try:\n openPMD_version = self.h5.attrs[\"openPMD\"].decode(\"utf-8\")\n if _valid_version(openPMD_version):\n return True\n else:\n raise DataStandardError(\n f\"We currently only support HDF5 versions\"\n f\"starting from v{_OUTDATED_VERSION} and \"\n f\"lower than v{_NEWER_VERSION}. You can \"\n f\"however convert your HDF5 to a supported \"\n f\"version. For more information; see \"\n f\"https://github.com/openPMD/openPMD-updater\"\n )\n except KeyError:\n raise DataStandardError(\n \"Input HDF5 file does not go on with standards defined by OpenPMD\"\n )\n\n @property\n def electric_field(self):\n \"\"\"\n An (x, y, z) array containing electric field data. (Returned as an astropy\n `~astropy.units.Quantity`.)\n \"\"\"\n path = f\"data/{self.subname}/fields/E\"\n if path in self.h5:\n units = _fetch_units(self.h5[path].attrs[\"unitDimension\"])\n axes = [self.h5[path][axis] for axis in self.h5[path]]\n return np.array(axes) * units\n else:\n raise AttributeError(\"No electric field data available in HDF5 file\")\n\n @property\n def charge_density(self):\n \"\"\"\n An array containing charge density data. 
(Returned as an astropy\n `~astropy.units.Quantity`.)\n \"\"\"\n path = f\"data/{self.subname}/fields/rho\"\n if path in self.h5:\n units = _fetch_units(self.h5[path].attrs[\"unitDimension\"])\n return np.array(self.h5[path]) * units\n else:\n raise AttributeError(\"No charge density data available in HDF5 file\")\n\n @property\n def magnetic_field(self):\n path = f\"data/{self.subname}/fields/B\"\n if path in self.h5:\n units = _fetch_units(self.h5[path].attrs[\"unitDimension\"])\n axes = [self.h5[path][axis] for axis in self.h5[path]]\n return np.array(axes) * units\n else:\n raise AttributeError(\"No magnetic field data available in HDF5 file\")\n\n @property\n def electric_current(self):\n path = f\"data/{self.subname}/fields/J\"\n if path in self.h5:\n units = _fetch_units(self.h5[path].attrs[\"unitDimension\"])\n axes = [self.h5[path][axis] for axis in self.h5[path]]\n return np.array(axes) * units\n else:\n raise AttributeError(\"No electric current data available in HDF5 file\")\n\n @classmethod\n def is_datasource_for(cls, **kwargs):\n if \"hdf5\" not in kwargs:\n return False\n\n hdf5 = kwargs.get(\"hdf5\")\n openPMD = kwargs.get(\"openPMD\")\n\n if not Path(hdf5).is_file():\n raise FileNotFoundError(f\"Could not find file: '{hdf5}'\")\n\n if \"openPMD\" not in kwargs:\n\n h5 = h5py.File(hdf5, \"r\")\n try:\n openPMD = h5.attrs[\"openPMD\"]\n except KeyError:\n openPMD = False\n\n return openPMD\n", "path": "plasmapy/plasma/sources/openpmd_hdf5.py"}]} | 2,080 | 324 |
gh_patches_debug_34178 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-1127 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add check for mandatory functions in model files
Users are required to implement some functions (e.g., `loss()`) in the model file. We can add some checks for the model file to see whether these required functions are implemented correctly. Otherwise, if the functions are not implemented (correctly), this won't be discovered until the job starts to run on the k8s cluster. (A brief illustrative sketch follows this issue block.)
--- END ISSUE ---
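For illustration only, here is a minimal sketch of the kind of early validation the issue asks for. The helper name, the argument shape, and the list of required keys are assumptions made for this example and are not the project's actual API:
```python
# Hypothetical sketch: fail fast if the user's model file is missing any of
# the functions the job needs, instead of failing later on the k8s cluster.
REQUIRED_SPEC_KEYS = ["dataset_fn", "loss", "optimizer", "eval_metrics_fn"]


def check_required_spec_keys(module_dict, required=REQUIRED_SPEC_KEYS):
    """Raise if any mandatory function is missing from the loaded model module."""
    missing = [name for name in required if name not in module_dict]
    if missing:
        raise ValueError(
            "Model file is missing required function(s): " + ", ".join(missing)
        )
```
A check like this could run right after the model module is loaded, which is roughly what the golden patch below does by adding a `required=True` flag to `_get_spec_value` and raising when the key is missing.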
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/common/model_helper.py`
Content:
```
1 import importlib.util
2 import os
3
4 from elasticdl.python.common.log_util import default_logger as logger
5 from elasticdl.python.worker.prediction_outputs_processor import (
6 BasePredictionOutputsProcessor,
7 )
8
9
10 def load_module(module_file):
11 spec = importlib.util.spec_from_file_location(module_file, module_file)
12 module = importlib.util.module_from_spec(spec)
13 spec.loader.exec_module(module)
14 return module
15
16
17 # TODO: Discuss whether we need to support default model
18 # function/class names such as `custom_model()`
19 # or `CustomModel()`
20 def load_model_from_module(model_def, model_module, model_params):
21 model_def_name = model_def.split(".")[-1]
22 if model_def_name in model_module:
23 custom_model_name = model_def_name
24 else:
25 raise ValueError(
26 "Cannot find the custom model function/class "
27 "in model definition files"
28 )
29 if model_params:
30 kvs = model_params.split(",")
31 model_params_dict = {}
32 for kv in kvs:
33 k, v = kv.split("=")
34 model_params_dict[k] = eval(v)
35 return model_module[custom_model_name](**model_params_dict)
36 else:
37 return model_module[custom_model_name]()
38
39
40 def get_module_file_path(model_zoo, spec_key):
41 """Get the path to module file from model zoo and the spec string.
42
43 For example, if `model_zoo = "model_zoo"` and
44 `spec_key = "test_module.custom_model"`, the function returns
45 "model_zoo/test_module.py".
46 """
47 return os.path.join(model_zoo, "/".join(spec_key.split(".")[:-1]) + ".py")
48
49
50 def _get_spec_value(spec_key, model_zoo, default_module):
51 """Get the value to the given spec key.
52
53 Notes:
54
55 * If the dot-splitted spec key (e.g. "test_module.custom_model"
56 is splitted into "test_module" and "custom_model") is of length 1
57 (e.g. `spec_key` is "custom_model"), return the value in the
58 specified `default_module`.
59 * If the spec key does not exist in the module, return `None`.
60 """
61 spec_key_items = spec_key.split(".")
62 spec_key_base = spec_key_items[-1]
63 if len(spec_key_items) == 1:
64 spec_key_module = default_module
65 else:
66 spec_key_module = load_module(
67 get_module_file_path(model_zoo, spec_key)
68 ).__dict__
69 return (
70 spec_key_module[spec_key_base]
71 if spec_key_base in spec_key_module
72 else None
73 )
74
75
76 def get_model_spec(
77 model_zoo,
78 model_def,
79 model_params,
80 dataset_fn,
81 loss,
82 optimizer,
83 eval_metrics_fn,
84 prediction_outputs_processor,
85 ):
86 """Get the model spec items in a tuple.
87
88 The model spec tuple contains the following items in order:
89
90 * The model object instantiated with parameters specified
91 in `model_params`,
92 * The `dataset_fn`,
93 * The `loss`,
94 * The `optimizer`,
95 * The `eval_metrics_fn`,
96 * The `prediction_outputs_processor`. Note that it will print
97 warning if it's not inherited from `BasePredictionOutputsProcessor`.
98 """
99 model_def_module_file = get_module_file_path(model_zoo, model_def)
100 default_module = load_module(model_def_module_file).__dict__
101 model = load_model_from_module(model_def, default_module, model_params)
102 prediction_outputs_processor = _get_spec_value(
103 prediction_outputs_processor, model_zoo, default_module
104 )
105 if prediction_outputs_processor and not isinstance(
106 prediction_outputs_processor, BasePredictionOutputsProcessor
107 ):
108 logger.warning(
109 "prediction_outputs_processor is not "
110 "inherited from BasePredictionOutputsProcessor. "
111 "Prediction outputs may not be processed correctly."
112 )
113 return (
114 model,
115 _get_spec_value(dataset_fn, model_zoo, default_module),
116 _get_spec_value(loss, model_zoo, default_module),
117 _get_spec_value(optimizer, model_zoo, default_module),
118 _get_spec_value(eval_metrics_fn, model_zoo, default_module),
119 prediction_outputs_processor,
120 )
121
122
123 def save_checkpoint_to_file(pb_model, file_name):
124 encoded_model = pb_model.SerializeToString()
125 with open(file_name, "wb") as f:
126 f.write(encoded_model)
127
128
129 def load_from_checkpoint_file(file_name):
130 from elasticdl.proto import elasticdl_pb2
131
132 pb_model = elasticdl_pb2.Model()
133 with open(file_name, "rb") as f:
134 pb_model.ParseFromString(f.read())
135 return pb_model
136
137
138 def find_layer(model, layer_class):
139 """
140 Find all layers in model that are instances of layer_class
141 """
142 layers = []
143 for layer in model.layers:
144 if isinstance(layer, layer_class):
145 layers.append(layer)
146 elif hasattr(layer, "layers"):
147 # search in nested layers
148 layers += find_layer(layer, layer_class)
149 return layers
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/common/model_helper.py b/elasticdl/python/common/model_helper.py
--- a/elasticdl/python/common/model_helper.py
+++ b/elasticdl/python/common/model_helper.py
@@ -14,9 +14,6 @@
return module
-# TODO: Discuss whether we need to support default model
-# function/class names such as `custom_model()`
-# or `CustomModel()`
def load_model_from_module(model_def, model_module, model_params):
model_def_name = model_def.split(".")[-1]
if model_def_name in model_module:
@@ -47,7 +44,7 @@
return os.path.join(model_zoo, "/".join(spec_key.split(".")[:-1]) + ".py")
-def _get_spec_value(spec_key, model_zoo, default_module):
+def _get_spec_value(spec_key, model_zoo, default_module, required=False):
"""Get the value to the given spec key.
Notes:
@@ -66,11 +63,17 @@
spec_key_module = load_module(
get_module_file_path(model_zoo, spec_key)
).__dict__
- return (
+ spec_value = (
spec_key_module[spec_key_base]
if spec_key_base in spec_key_module
else None
)
+ if required and spec_value is None:
+ raise Exception(
+ "Missing required spec key %s in the module: %s"
+ % (spec_key_base, spec_key)
+ )
+ return spec_value
def get_model_spec(
@@ -112,10 +115,12 @@
)
return (
model,
- _get_spec_value(dataset_fn, model_zoo, default_module),
- _get_spec_value(loss, model_zoo, default_module),
- _get_spec_value(optimizer, model_zoo, default_module),
- _get_spec_value(eval_metrics_fn, model_zoo, default_module),
+ _get_spec_value(dataset_fn, model_zoo, default_module, required=True),
+ _get_spec_value(loss, model_zoo, default_module, required=True),
+ _get_spec_value(optimizer, model_zoo, default_module, required=True),
+ _get_spec_value(
+ eval_metrics_fn, model_zoo, default_module, required=True
+ ),
prediction_outputs_processor,
)
| {"golden_diff": "diff --git a/elasticdl/python/common/model_helper.py b/elasticdl/python/common/model_helper.py\n--- a/elasticdl/python/common/model_helper.py\n+++ b/elasticdl/python/common/model_helper.py\n@@ -14,9 +14,6 @@\n return module\n \n \n-# TODO: Discuss whether we need to support default model\n-# function/class names such as `custom_model()`\n-# or `CustomModel()`\n def load_model_from_module(model_def, model_module, model_params):\n model_def_name = model_def.split(\".\")[-1]\n if model_def_name in model_module:\n@@ -47,7 +44,7 @@\n return os.path.join(model_zoo, \"/\".join(spec_key.split(\".\")[:-1]) + \".py\")\n \n \n-def _get_spec_value(spec_key, model_zoo, default_module):\n+def _get_spec_value(spec_key, model_zoo, default_module, required=False):\n \"\"\"Get the value to the given spec key.\n \n Notes:\n@@ -66,11 +63,17 @@\n spec_key_module = load_module(\n get_module_file_path(model_zoo, spec_key)\n ).__dict__\n- return (\n+ spec_value = (\n spec_key_module[spec_key_base]\n if spec_key_base in spec_key_module\n else None\n )\n+ if required and spec_value is None:\n+ raise Exception(\n+ \"Missing required spec key %s in the module: %s\"\n+ % (spec_key_base, spec_key)\n+ )\n+ return spec_value\n \n \n def get_model_spec(\n@@ -112,10 +115,12 @@\n )\n return (\n model,\n- _get_spec_value(dataset_fn, model_zoo, default_module),\n- _get_spec_value(loss, model_zoo, default_module),\n- _get_spec_value(optimizer, model_zoo, default_module),\n- _get_spec_value(eval_metrics_fn, model_zoo, default_module),\n+ _get_spec_value(dataset_fn, model_zoo, default_module, required=True),\n+ _get_spec_value(loss, model_zoo, default_module, required=True),\n+ _get_spec_value(optimizer, model_zoo, default_module, required=True),\n+ _get_spec_value(\n+ eval_metrics_fn, model_zoo, default_module, required=True\n+ ),\n prediction_outputs_processor,\n )\n", "issue": "Add check for mandatory functions in model files\nUsers are required to implement some functions(e.g, `loss()`) in the model file. We can add some checks for the model file to see if these required functions are implemented correctly. 
Otherwise, if the functions are not implemented (correctly), it won't be found until the job starts to run on k8s cluster.\n", "before_files": [{"content": "import importlib.util\nimport os\n\nfrom elasticdl.python.common.log_util import default_logger as logger\nfrom elasticdl.python.worker.prediction_outputs_processor import (\n BasePredictionOutputsProcessor,\n)\n\n\ndef load_module(module_file):\n spec = importlib.util.spec_from_file_location(module_file, module_file)\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n return module\n\n\n# TODO: Discuss whether we need to support default model\n# function/class names such as `custom_model()`\n# or `CustomModel()`\ndef load_model_from_module(model_def, model_module, model_params):\n model_def_name = model_def.split(\".\")[-1]\n if model_def_name in model_module:\n custom_model_name = model_def_name\n else:\n raise ValueError(\n \"Cannot find the custom model function/class \"\n \"in model definition files\"\n )\n if model_params:\n kvs = model_params.split(\",\")\n model_params_dict = {}\n for kv in kvs:\n k, v = kv.split(\"=\")\n model_params_dict[k] = eval(v)\n return model_module[custom_model_name](**model_params_dict)\n else:\n return model_module[custom_model_name]()\n\n\ndef get_module_file_path(model_zoo, spec_key):\n \"\"\"Get the path to module file from model zoo and the spec string.\n\n For example, if `model_zoo = \"model_zoo\"` and\n `spec_key = \"test_module.custom_model\"`, the function returns\n \"model_zoo/test_module.py\".\n \"\"\"\n return os.path.join(model_zoo, \"/\".join(spec_key.split(\".\")[:-1]) + \".py\")\n\n\ndef _get_spec_value(spec_key, model_zoo, default_module):\n \"\"\"Get the value to the given spec key.\n\n Notes:\n\n * If the dot-splitted spec key (e.g. \"test_module.custom_model\"\n is splitted into \"test_module\" and \"custom_model\") is of length 1\n (e.g. `spec_key` is \"custom_model\"), return the value in the\n specified `default_module`.\n * If the spec key does not exist in the module, return `None`.\n \"\"\"\n spec_key_items = spec_key.split(\".\")\n spec_key_base = spec_key_items[-1]\n if len(spec_key_items) == 1:\n spec_key_module = default_module\n else:\n spec_key_module = load_module(\n get_module_file_path(model_zoo, spec_key)\n ).__dict__\n return (\n spec_key_module[spec_key_base]\n if spec_key_base in spec_key_module\n else None\n )\n\n\ndef get_model_spec(\n model_zoo,\n model_def,\n model_params,\n dataset_fn,\n loss,\n optimizer,\n eval_metrics_fn,\n prediction_outputs_processor,\n):\n \"\"\"Get the model spec items in a tuple.\n\n The model spec tuple contains the following items in order:\n\n * The model object instantiated with parameters specified\n in `model_params`,\n * The `dataset_fn`,\n * The `loss`,\n * The `optimizer`,\n * The `eval_metrics_fn`,\n * The `prediction_outputs_processor`. Note that it will print\n warning if it's not inherited from `BasePredictionOutputsProcessor`.\n \"\"\"\n model_def_module_file = get_module_file_path(model_zoo, model_def)\n default_module = load_module(model_def_module_file).__dict__\n model = load_model_from_module(model_def, default_module, model_params)\n prediction_outputs_processor = _get_spec_value(\n prediction_outputs_processor, model_zoo, default_module\n )\n if prediction_outputs_processor and not isinstance(\n prediction_outputs_processor, BasePredictionOutputsProcessor\n ):\n logger.warning(\n \"prediction_outputs_processor is not \"\n \"inherited from BasePredictionOutputsProcessor. 
\"\n \"Prediction outputs may not be processed correctly.\"\n )\n return (\n model,\n _get_spec_value(dataset_fn, model_zoo, default_module),\n _get_spec_value(loss, model_zoo, default_module),\n _get_spec_value(optimizer, model_zoo, default_module),\n _get_spec_value(eval_metrics_fn, model_zoo, default_module),\n prediction_outputs_processor,\n )\n\n\ndef save_checkpoint_to_file(pb_model, file_name):\n encoded_model = pb_model.SerializeToString()\n with open(file_name, \"wb\") as f:\n f.write(encoded_model)\n\n\ndef load_from_checkpoint_file(file_name):\n from elasticdl.proto import elasticdl_pb2\n\n pb_model = elasticdl_pb2.Model()\n with open(file_name, \"rb\") as f:\n pb_model.ParseFromString(f.read())\n return pb_model\n\n\ndef find_layer(model, layer_class):\n \"\"\"\n Find all layers in model that are instances of layer_class\n \"\"\"\n layers = []\n for layer in model.layers:\n if isinstance(layer, layer_class):\n layers.append(layer)\n elif hasattr(layer, \"layers\"):\n # search in nested layers\n layers += find_layer(layer, layer_class)\n return layers\n", "path": "elasticdl/python/common/model_helper.py"}], "after_files": [{"content": "import importlib.util\nimport os\n\nfrom elasticdl.python.common.log_util import default_logger as logger\nfrom elasticdl.python.worker.prediction_outputs_processor import (\n BasePredictionOutputsProcessor,\n)\n\n\ndef load_module(module_file):\n spec = importlib.util.spec_from_file_location(module_file, module_file)\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n return module\n\n\ndef load_model_from_module(model_def, model_module, model_params):\n model_def_name = model_def.split(\".\")[-1]\n if model_def_name in model_module:\n custom_model_name = model_def_name\n else:\n raise ValueError(\n \"Cannot find the custom model function/class \"\n \"in model definition files\"\n )\n if model_params:\n kvs = model_params.split(\",\")\n model_params_dict = {}\n for kv in kvs:\n k, v = kv.split(\"=\")\n model_params_dict[k] = eval(v)\n return model_module[custom_model_name](**model_params_dict)\n else:\n return model_module[custom_model_name]()\n\n\ndef get_module_file_path(model_zoo, spec_key):\n \"\"\"Get the path to module file from model zoo and the spec string.\n\n For example, if `model_zoo = \"model_zoo\"` and\n `spec_key = \"test_module.custom_model\"`, the function returns\n \"model_zoo/test_module.py\".\n \"\"\"\n return os.path.join(model_zoo, \"/\".join(spec_key.split(\".\")[:-1]) + \".py\")\n\n\ndef _get_spec_value(spec_key, model_zoo, default_module, required=False):\n \"\"\"Get the value to the given spec key.\n\n Notes:\n\n * If the dot-splitted spec key (e.g. \"test_module.custom_model\"\n is splitted into \"test_module\" and \"custom_model\") is of length 1\n (e.g. 
`spec_key` is \"custom_model\"), return the value in the\n specified `default_module`.\n * If the spec key does not exist in the module, return `None`.\n \"\"\"\n spec_key_items = spec_key.split(\".\")\n spec_key_base = spec_key_items[-1]\n if len(spec_key_items) == 1:\n spec_key_module = default_module\n else:\n spec_key_module = load_module(\n get_module_file_path(model_zoo, spec_key)\n ).__dict__\n spec_value = (\n spec_key_module[spec_key_base]\n if spec_key_base in spec_key_module\n else None\n )\n if required and spec_value is None:\n raise Exception(\n \"Missing required spec key %s in the module: %s\"\n % (spec_key_base, spec_key)\n )\n return spec_value\n\n\ndef get_model_spec(\n model_zoo,\n model_def,\n model_params,\n dataset_fn,\n loss,\n optimizer,\n eval_metrics_fn,\n prediction_outputs_processor,\n):\n \"\"\"Get the model spec items in a tuple.\n\n The model spec tuple contains the following items in order:\n\n * The model object instantiated with parameters specified\n in `model_params`,\n * The `dataset_fn`,\n * The `loss`,\n * The `optimizer`,\n * The `eval_metrics_fn`,\n * The `prediction_outputs_processor`. Note that it will print\n warning if it's not inherited from `BasePredictionOutputsProcessor`.\n \"\"\"\n model_def_module_file = get_module_file_path(model_zoo, model_def)\n default_module = load_module(model_def_module_file).__dict__\n model = load_model_from_module(model_def, default_module, model_params)\n prediction_outputs_processor = _get_spec_value(\n prediction_outputs_processor, model_zoo, default_module\n )\n if prediction_outputs_processor and not isinstance(\n prediction_outputs_processor, BasePredictionOutputsProcessor\n ):\n logger.warning(\n \"prediction_outputs_processor is not \"\n \"inherited from BasePredictionOutputsProcessor. \"\n \"Prediction outputs may not be processed correctly.\"\n )\n return (\n model,\n _get_spec_value(dataset_fn, model_zoo, default_module, required=True),\n _get_spec_value(loss, model_zoo, default_module, required=True),\n _get_spec_value(optimizer, model_zoo, default_module, required=True),\n _get_spec_value(\n eval_metrics_fn, model_zoo, default_module, required=True\n ),\n prediction_outputs_processor,\n )\n\n\ndef save_checkpoint_to_file(pb_model, file_name):\n encoded_model = pb_model.SerializeToString()\n with open(file_name, \"wb\") as f:\n f.write(encoded_model)\n\n\ndef load_from_checkpoint_file(file_name):\n from elasticdl.proto import elasticdl_pb2\n\n pb_model = elasticdl_pb2.Model()\n with open(file_name, \"rb\") as f:\n pb_model.ParseFromString(f.read())\n return pb_model\n\n\ndef find_layer(model, layer_class):\n \"\"\"\n Find all layers in model that are instances of layer_class\n \"\"\"\n layers = []\n for layer in model.layers:\n if isinstance(layer, layer_class):\n layers.append(layer)\n elif hasattr(layer, \"layers\"):\n # search in nested layers\n layers += find_layer(layer, layer_class)\n return layers\n", "path": "elasticdl/python/common/model_helper.py"}]} | 1,763 | 520 |
gh_patches_debug_34834 | rasdani/github-patches | git_diff | svthalia__concrexit-3407 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Canceled event registrations are marked as queued
### Describe the bug
After canceling a registration,
### How to reproduce
Steps to reproduce the behaviour:
1. Go to https://staging.thalia.nu/api/v2/admin/events/183/registrations/?limit=999999999&ordering=-date_cancelled&cancelled=true&queued=true
2. Go to https://staging.thalia.nu/api/v2/admin/events/183/registrations/?limit=999999999&ordering=-date_cancelled&cancelled=true&queued=false
3. Notice that the canceled registrations are only listed under `queued=true`, and that the queue position is 3
4. Go to https://staging.thalia.nu/api/v2/admin/events/183/registrations/?limit=999999999&ordering=-date_cancelled&cancelled=false&queued=true
5. Notice that queue position 3 already exists.
### Expected behaviour
Canceled registrations should show up as canceled and not as queued, since they are no longer in the queue.
### Screenshots
If applicable, add screenshots to help explain your problem.
### Additional context
I think the reason they are listed as queued is that the queue position is set. Ideally this field would be nulled or set to a special value like -1. (A rough sketch of this approach follows the issue block.)
--- END ISSUE ---
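As a rough illustration of the reporter's suggestion (resolving the queue position to NULL for cancelled registrations), the annotation could be wrapped in a conditional expression. This is a simplified sketch only — the inner count is abbreviated and it is not the exact annotation used by the model:
```python
# Simplified sketch: compute queue_position only when date_cancelled is NULL,
# so cancelled registrations resolve to None instead of a queue position.
from django.db.models import Case, Count, F, IntegerField, Q, Value, When

queue_position_annotation = Case(
    When(
        date_cancelled=None,
        then=Count(
            "event__eventregistration",
            filter=Q(event__eventregistration__date_cancelled=None),
        )
        - F("event__max_participants"),
    ),
    default=Value(None),
    output_field=IntegerField(),
)
```
The actual patch below keeps the original `Greatest`/`NullIf` computation and simply wraps it in a `Case`/`When` keyed on `date_cancelled`.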
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/events/models/event_registration.py`
Content:
```
1 from django.core import validators
2 from django.core.exceptions import ValidationError
3 from django.db import models
4 from django.db.models import Count, F, Q
5 from django.db.models.functions import Greatest, NullIf
6 from django.utils import timezone
7 from django.utils.translation import gettext_lazy as _
8
9 from queryable_properties.managers import QueryablePropertiesManager
10 from queryable_properties.properties import AnnotationProperty
11
12 from events import emails
13 from payments.models import PaymentAmountField
14
15 from .event import Event
16
17
18 def registration_member_choices_limit():
19 """Define queryset filters to only include current members."""
20 return Q(membership__until__isnull=True) | Q(
21 membership__until__gt=timezone.now().date()
22 )
23
24
25 class EventRegistration(models.Model):
26 """Describes a registration for an Event."""
27
28 objects = QueryablePropertiesManager()
29
30 event = models.ForeignKey(Event, models.CASCADE)
31
32 member = models.ForeignKey(
33 "members.Member",
34 models.CASCADE,
35 blank=True,
36 null=True,
37 )
38
39 name = models.CharField(
40 _("name"),
41 max_length=50,
42 help_text=_("Use this for non-members"),
43 null=True,
44 blank=True,
45 )
46
47 alt_email = models.EmailField(
48 _("email"),
49 help_text=_("Email address for non-members"),
50 max_length=254,
51 null=True,
52 blank=True,
53 )
54
55 alt_phone_number = models.CharField(
56 max_length=20,
57 verbose_name=_("Phone number"),
58 help_text=_("Phone number for non-members"),
59 validators=[
60 validators.RegexValidator(
61 regex=r"^\+?\d+$",
62 message=_("Please enter a valid phone number"),
63 )
64 ],
65 null=True,
66 blank=True,
67 )
68
69 date = models.DateTimeField(_("registration date"), default=timezone.now)
70 date_cancelled = models.DateTimeField(_("cancellation date"), null=True, blank=True)
71
72 present = models.BooleanField(
73 _("present"),
74 default=False,
75 )
76
77 special_price = PaymentAmountField(
78 verbose_name=_("special price"),
79 blank=True,
80 null=True,
81 validators=[validators.MinValueValidator(0)],
82 )
83
84 remarks = models.TextField(_("remarks"), null=True, blank=True)
85
86 payment = models.OneToOneField(
87 "payments.Payment",
88 related_name="events_registration",
89 on_delete=models.SET_NULL,
90 blank=True,
91 null=True,
92 )
93
94 @property
95 def phone_number(self):
96 if self.member:
97 return self.member.profile.phone_number
98 return self.alt_phone_number
99
100 @property
101 def email(self):
102 if self.member:
103 return self.member.email
104 return self.alt_email
105
106 @property
107 def information_fields(self):
108 fields = self.event.registrationinformationfield_set.all()
109 return [
110 {"field": field, "value": field.get_value_for(self)} for field in fields
111 ]
112
113 @property
114 def is_registered(self):
115 return self.date_cancelled is None
116
117 queue_position = AnnotationProperty(
118 # Get queue position by counting amount of registrations with lower date and in case of same date lower id
119 # Subsequently cast to None if this is 0 or lower, in which case it isn't in the queue
120 NullIf(
121 Greatest(
122 Count(
123 "event__eventregistration",
124 filter=Q(event__eventregistration__date_cancelled=None)
125 & (
126 Q(event__eventregistration__date__lt=F("date"))
127 | Q(event__eventregistration__id__lte=F("id"))
128 & Q(event__eventregistration__date__exact=F("date"))
129 ),
130 )
131 - F("event__max_participants"),
132 0,
133 ),
134 0,
135 )
136 )
137
138 @property
139 def is_invited(self):
140 return self.is_registered and not self.queue_position
141
142 def is_external(self):
143 return bool(self.name)
144
145 def is_late_cancellation(self):
146 # First check whether or not the user cancelled
147 # If the user cancelled then check if this was after the deadline
148 # And if there is a max participants number:
149 # do a complex check to calculate if this user was on
150 # the waiting list at the time of cancellation, since
151 # you shouldn't need to pay the costs of something
152 # you weren't even able to go to.
153 return (
154 self.date_cancelled
155 and self.event.cancel_deadline
156 and self.date_cancelled > self.event.cancel_deadline
157 and (
158 self.event.max_participants is None
159 or self.event.eventregistration_set.filter(
160 (
161 Q(date_cancelled__gte=self.date_cancelled)
162 | Q(date_cancelled=None)
163 )
164 & Q(date__lte=self.date)
165 ).count()
166 < self.event.max_participants
167 )
168 )
169
170 def is_paid(self):
171 return self.payment
172
173 @property
174 def payment_amount(self):
175 return self.event.price if not self.special_price else self.special_price
176
177 def would_cancel_after_deadline(self):
178 now = timezone.now()
179 if not self.event.registration_required:
180 return False
181 return not self.queue_position and now >= self.event.cancel_deadline
182
183 def clean(self):
184 errors = {}
185 if (self.member is None and not self.name) or (self.member and self.name):
186 errors.update(
187 {
188 "member": _("Either specify a member or a name"),
189 "name": _("Either specify a member or a name"),
190 }
191 )
192 if self.member and self.alt_email:
193 errors.update(
194 {"alt_email": _("Email should only be specified for non-members")}
195 )
196 if self.member and self.alt_phone_number:
197 errors.update(
198 {
199 "alt_phone_number": _(
200 "Phone number should only be specified for non-members"
201 )
202 }
203 )
204 if (
205 self.payment
206 and self.special_price
207 and self.special_price != self.payment.amount
208 ):
209 errors.update(
210 {
211 "special_price": _(
212 "Cannot change price of already paid registration"
213 ),
214 }
215 )
216
217 if errors:
218 raise ValidationError(errors)
219
220 def save(self, **kwargs):
221 self.full_clean()
222
223 created = self.pk is None
224 super().save(**kwargs)
225
226 if (
227 created
228 and self.is_registered
229 and self.email
230 and self.event.registration_required
231 ):
232 if (
233 self.member is not None
234 and not self.member.profile.receive_registration_confirmation
235 ):
236 return # Don't send email if the user doesn't want them.
237
238 emails.notify_registration(self)
239
240 def __str__(self):
241 if self.member:
242 return f"{self.member.get_full_name()}: {self.event}"
243 return f"{self.name}: {self.event}"
244
245 class Meta:
246 verbose_name = _("Registration")
247 verbose_name_plural = _("Registrations")
248 ordering = ("date",)
249 unique_together = (("member", "event"),)
250
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/events/models/event_registration.py b/website/events/models/event_registration.py
--- a/website/events/models/event_registration.py
+++ b/website/events/models/event_registration.py
@@ -1,7 +1,7 @@
from django.core import validators
from django.core.exceptions import ValidationError
from django.db import models
-from django.db.models import Count, F, Q
+from django.db.models import Case, Count, F, Q, When
from django.db.models.functions import Greatest, NullIf
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
@@ -115,23 +115,30 @@
return self.date_cancelled is None
queue_position = AnnotationProperty(
- # Get queue position by counting amount of registrations with lower date and in case of same date lower id
- # Subsequently cast to None if this is 0 or lower, in which case it isn't in the queue
- NullIf(
- Greatest(
- Count(
- "event__eventregistration",
- filter=Q(event__eventregistration__date_cancelled=None)
- & (
- Q(event__eventregistration__date__lt=F("date"))
- | Q(event__eventregistration__id__lte=F("id"))
- & Q(event__eventregistration__date__exact=F("date"))
+ Case(
+ # Get queue position by counting amount of registrations with lower date and in case of same date lower id
+ # Subsequently cast to None if this is 0 or lower, in which case it isn't in the queue
+ # If the current registration is cancelled, also force it to None.
+ When(
+ date_cancelled=None,
+ then=NullIf(
+ Greatest(
+ Count(
+ "event__eventregistration",
+ filter=Q(event__eventregistration__date_cancelled=None)
+ & (
+ Q(event__eventregistration__date__lt=F("date"))
+ | Q(event__eventregistration__id__lte=F("id"))
+ & Q(event__eventregistration__date__exact=F("date"))
+ ),
+ )
+ - F("event__max_participants"),
+ 0,
),
- )
- - F("event__max_participants"),
- 0,
+ 0,
+ ),
),
- 0,
+ default=None,
)
)
| {"golden_diff": "diff --git a/website/events/models/event_registration.py b/website/events/models/event_registration.py\n--- a/website/events/models/event_registration.py\n+++ b/website/events/models/event_registration.py\n@@ -1,7 +1,7 @@\n from django.core import validators\n from django.core.exceptions import ValidationError\n from django.db import models\n-from django.db.models import Count, F, Q\n+from django.db.models import Case, Count, F, Q, When\n from django.db.models.functions import Greatest, NullIf\n from django.utils import timezone\n from django.utils.translation import gettext_lazy as _\n@@ -115,23 +115,30 @@\n return self.date_cancelled is None\n \n queue_position = AnnotationProperty(\n- # Get queue position by counting amount of registrations with lower date and in case of same date lower id\n- # Subsequently cast to None if this is 0 or lower, in which case it isn't in the queue\n- NullIf(\n- Greatest(\n- Count(\n- \"event__eventregistration\",\n- filter=Q(event__eventregistration__date_cancelled=None)\n- & (\n- Q(event__eventregistration__date__lt=F(\"date\"))\n- | Q(event__eventregistration__id__lte=F(\"id\"))\n- & Q(event__eventregistration__date__exact=F(\"date\"))\n+ Case(\n+ # Get queue position by counting amount of registrations with lower date and in case of same date lower id\n+ # Subsequently cast to None if this is 0 or lower, in which case it isn't in the queue\n+ # If the current registration is cancelled, also force it to None.\n+ When(\n+ date_cancelled=None,\n+ then=NullIf(\n+ Greatest(\n+ Count(\n+ \"event__eventregistration\",\n+ filter=Q(event__eventregistration__date_cancelled=None)\n+ & (\n+ Q(event__eventregistration__date__lt=F(\"date\"))\n+ | Q(event__eventregistration__id__lte=F(\"id\"))\n+ & Q(event__eventregistration__date__exact=F(\"date\"))\n+ ),\n+ )\n+ - F(\"event__max_participants\"),\n+ 0,\n ),\n- )\n- - F(\"event__max_participants\"),\n- 0,\n+ 0,\n+ ),\n ),\n- 0,\n+ default=None,\n )\n )\n", "issue": "Canceled event registrations are marked as queued\n### Describe the bug\r\nAfter canceling a registration, \r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to https://staging.thalia.nu/api/v2/admin/events/183/registrations/?limit=999999999&ordering=-date_cancelled&cancelled=true&queued=true\r\n2. Go to https://staging.thalia.nu/api/v2/admin/events/183/registrations/?limit=999999999&ordering=-date_cancelled&cancelled=true&queued=false\r\n3. Notice that the canceled registrations are only listed under `queued=true`, and that the queue position is 3\r\n4. Go to https://staging.thalia.nu/api/v2/admin/events/183/registrations/?limit=999999999&ordering=-date_cancelled&cancelled=false&queued=true\r\n5. Notice that the queued position 3 already exists.\r\n\r\n### Expected behaviour\r\nCanceled registrations should show up as canceled and not in queue, as they are not in the queue\r\n\r\n### Screenshots\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n### Additional context\r\nI think the reason they are listed as queued is because the queue position is set. 
Ideally this field would be nulled or set to a special value like -1.\r\n\n", "before_files": [{"content": "from django.core import validators\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Count, F, Q\nfrom django.db.models.functions import Greatest, NullIf\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AnnotationProperty\n\nfrom events import emails\nfrom payments.models import PaymentAmountField\n\nfrom .event import Event\n\n\ndef registration_member_choices_limit():\n \"\"\"Define queryset filters to only include current members.\"\"\"\n return Q(membership__until__isnull=True) | Q(\n membership__until__gt=timezone.now().date()\n )\n\n\nclass EventRegistration(models.Model):\n \"\"\"Describes a registration for an Event.\"\"\"\n\n objects = QueryablePropertiesManager()\n\n event = models.ForeignKey(Event, models.CASCADE)\n\n member = models.ForeignKey(\n \"members.Member\",\n models.CASCADE,\n blank=True,\n null=True,\n )\n\n name = models.CharField(\n _(\"name\"),\n max_length=50,\n help_text=_(\"Use this for non-members\"),\n null=True,\n blank=True,\n )\n\n alt_email = models.EmailField(\n _(\"email\"),\n help_text=_(\"Email address for non-members\"),\n max_length=254,\n null=True,\n blank=True,\n )\n\n alt_phone_number = models.CharField(\n max_length=20,\n verbose_name=_(\"Phone number\"),\n help_text=_(\"Phone number for non-members\"),\n validators=[\n validators.RegexValidator(\n regex=r\"^\\+?\\d+$\",\n message=_(\"Please enter a valid phone number\"),\n )\n ],\n null=True,\n blank=True,\n )\n\n date = models.DateTimeField(_(\"registration date\"), default=timezone.now)\n date_cancelled = models.DateTimeField(_(\"cancellation date\"), null=True, blank=True)\n\n present = models.BooleanField(\n _(\"present\"),\n default=False,\n )\n\n special_price = PaymentAmountField(\n verbose_name=_(\"special price\"),\n blank=True,\n null=True,\n validators=[validators.MinValueValidator(0)],\n )\n\n remarks = models.TextField(_(\"remarks\"), null=True, blank=True)\n\n payment = models.OneToOneField(\n \"payments.Payment\",\n related_name=\"events_registration\",\n on_delete=models.SET_NULL,\n blank=True,\n null=True,\n )\n\n @property\n def phone_number(self):\n if self.member:\n return self.member.profile.phone_number\n return self.alt_phone_number\n\n @property\n def email(self):\n if self.member:\n return self.member.email\n return self.alt_email\n\n @property\n def information_fields(self):\n fields = self.event.registrationinformationfield_set.all()\n return [\n {\"field\": field, \"value\": field.get_value_for(self)} for field in fields\n ]\n\n @property\n def is_registered(self):\n return self.date_cancelled is None\n\n queue_position = AnnotationProperty(\n # Get queue position by counting amount of registrations with lower date and in case of same date lower id\n # Subsequently cast to None if this is 0 or lower, in which case it isn't in the queue\n NullIf(\n Greatest(\n Count(\n \"event__eventregistration\",\n filter=Q(event__eventregistration__date_cancelled=None)\n & (\n Q(event__eventregistration__date__lt=F(\"date\"))\n | Q(event__eventregistration__id__lte=F(\"id\"))\n & Q(event__eventregistration__date__exact=F(\"date\"))\n ),\n )\n - F(\"event__max_participants\"),\n 0,\n ),\n 0,\n )\n )\n\n @property\n def is_invited(self):\n return 
self.is_registered and not self.queue_position\n\n def is_external(self):\n return bool(self.name)\n\n def is_late_cancellation(self):\n # First check whether or not the user cancelled\n # If the user cancelled then check if this was after the deadline\n # And if there is a max participants number:\n # do a complex check to calculate if this user was on\n # the waiting list at the time of cancellation, since\n # you shouldn't need to pay the costs of something\n # you weren't even able to go to.\n return (\n self.date_cancelled\n and self.event.cancel_deadline\n and self.date_cancelled > self.event.cancel_deadline\n and (\n self.event.max_participants is None\n or self.event.eventregistration_set.filter(\n (\n Q(date_cancelled__gte=self.date_cancelled)\n | Q(date_cancelled=None)\n )\n & Q(date__lte=self.date)\n ).count()\n < self.event.max_participants\n )\n )\n\n def is_paid(self):\n return self.payment\n\n @property\n def payment_amount(self):\n return self.event.price if not self.special_price else self.special_price\n\n def would_cancel_after_deadline(self):\n now = timezone.now()\n if not self.event.registration_required:\n return False\n return not self.queue_position and now >= self.event.cancel_deadline\n\n def clean(self):\n errors = {}\n if (self.member is None and not self.name) or (self.member and self.name):\n errors.update(\n {\n \"member\": _(\"Either specify a member or a name\"),\n \"name\": _(\"Either specify a member or a name\"),\n }\n )\n if self.member and self.alt_email:\n errors.update(\n {\"alt_email\": _(\"Email should only be specified for non-members\")}\n )\n if self.member and self.alt_phone_number:\n errors.update(\n {\n \"alt_phone_number\": _(\n \"Phone number should only be specified for non-members\"\n )\n }\n )\n if (\n self.payment\n and self.special_price\n and self.special_price != self.payment.amount\n ):\n errors.update(\n {\n \"special_price\": _(\n \"Cannot change price of already paid registration\"\n ),\n }\n )\n\n if errors:\n raise ValidationError(errors)\n\n def save(self, **kwargs):\n self.full_clean()\n\n created = self.pk is None\n super().save(**kwargs)\n\n if (\n created\n and self.is_registered\n and self.email\n and self.event.registration_required\n ):\n if (\n self.member is not None\n and not self.member.profile.receive_registration_confirmation\n ):\n return # Don't send email if the user doesn't want them.\n\n emails.notify_registration(self)\n\n def __str__(self):\n if self.member:\n return f\"{self.member.get_full_name()}: {self.event}\"\n return f\"{self.name}: {self.event}\"\n\n class Meta:\n verbose_name = _(\"Registration\")\n verbose_name_plural = _(\"Registrations\")\n ordering = (\"date\",)\n unique_together = ((\"member\", \"event\"),)\n", "path": "website/events/models/event_registration.py"}], "after_files": [{"content": "from django.core import validators\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Case, Count, F, Q, When\nfrom django.db.models.functions import Greatest, NullIf\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AnnotationProperty\n\nfrom events import emails\nfrom payments.models import PaymentAmountField\n\nfrom .event import Event\n\n\ndef registration_member_choices_limit():\n \"\"\"Define queryset filters to only include current members.\"\"\"\n return 
Q(membership__until__isnull=True) | Q(\n membership__until__gt=timezone.now().date()\n )\n\n\nclass EventRegistration(models.Model):\n \"\"\"Describes a registration for an Event.\"\"\"\n\n objects = QueryablePropertiesManager()\n\n event = models.ForeignKey(Event, models.CASCADE)\n\n member = models.ForeignKey(\n \"members.Member\",\n models.CASCADE,\n blank=True,\n null=True,\n )\n\n name = models.CharField(\n _(\"name\"),\n max_length=50,\n help_text=_(\"Use this for non-members\"),\n null=True,\n blank=True,\n )\n\n alt_email = models.EmailField(\n _(\"email\"),\n help_text=_(\"Email address for non-members\"),\n max_length=254,\n null=True,\n blank=True,\n )\n\n alt_phone_number = models.CharField(\n max_length=20,\n verbose_name=_(\"Phone number\"),\n help_text=_(\"Phone number for non-members\"),\n validators=[\n validators.RegexValidator(\n regex=r\"^\\+?\\d+$\",\n message=_(\"Please enter a valid phone number\"),\n )\n ],\n null=True,\n blank=True,\n )\n\n date = models.DateTimeField(_(\"registration date\"), default=timezone.now)\n date_cancelled = models.DateTimeField(_(\"cancellation date\"), null=True, blank=True)\n\n present = models.BooleanField(\n _(\"present\"),\n default=False,\n )\n\n special_price = PaymentAmountField(\n verbose_name=_(\"special price\"),\n blank=True,\n null=True,\n validators=[validators.MinValueValidator(0)],\n )\n\n remarks = models.TextField(_(\"remarks\"), null=True, blank=True)\n\n payment = models.OneToOneField(\n \"payments.Payment\",\n related_name=\"events_registration\",\n on_delete=models.SET_NULL,\n blank=True,\n null=True,\n )\n\n @property\n def phone_number(self):\n if self.member:\n return self.member.profile.phone_number\n return self.alt_phone_number\n\n @property\n def email(self):\n if self.member:\n return self.member.email\n return self.alt_email\n\n @property\n def information_fields(self):\n fields = self.event.registrationinformationfield_set.all()\n return [\n {\"field\": field, \"value\": field.get_value_for(self)} for field in fields\n ]\n\n @property\n def is_registered(self):\n return self.date_cancelled is None\n\n queue_position = AnnotationProperty(\n Case(\n # Get queue position by counting amount of registrations with lower date and in case of same date lower id\n # Subsequently cast to None if this is 0 or lower, in which case it isn't in the queue\n # If the current registration is cancelled, also force it to None.\n When(\n date_cancelled=None,\n then=NullIf(\n Greatest(\n Count(\n \"event__eventregistration\",\n filter=Q(event__eventregistration__date_cancelled=None)\n & (\n Q(event__eventregistration__date__lt=F(\"date\"))\n | Q(event__eventregistration__id__lte=F(\"id\"))\n & Q(event__eventregistration__date__exact=F(\"date\"))\n ),\n )\n - F(\"event__max_participants\"),\n 0,\n ),\n 0,\n ),\n ),\n default=None,\n )\n )\n\n @property\n def is_invited(self):\n return self.is_registered and not self.queue_position\n\n def is_external(self):\n return bool(self.name)\n\n def is_late_cancellation(self):\n # First check whether or not the user cancelled\n # If the user cancelled then check if this was after the deadline\n # And if there is a max participants number:\n # do a complex check to calculate if this user was on\n # the waiting list at the time of cancellation, since\n # you shouldn't need to pay the costs of something\n # you weren't even able to go to.\n return (\n self.date_cancelled\n and self.event.cancel_deadline\n and self.date_cancelled > self.event.cancel_deadline\n and (\n 
self.event.max_participants is None\n or self.event.eventregistration_set.filter(\n (\n Q(date_cancelled__gte=self.date_cancelled)\n | Q(date_cancelled=None)\n )\n & Q(date__lte=self.date)\n ).count()\n < self.event.max_participants\n )\n )\n\n def is_paid(self):\n return self.payment\n\n @property\n def payment_amount(self):\n return self.event.price if not self.special_price else self.special_price\n\n def would_cancel_after_deadline(self):\n now = timezone.now()\n if not self.event.registration_required:\n return False\n return not self.queue_position and now >= self.event.cancel_deadline\n\n def clean(self):\n errors = {}\n if (self.member is None and not self.name) or (self.member and self.name):\n errors.update(\n {\n \"member\": _(\"Either specify a member or a name\"),\n \"name\": _(\"Either specify a member or a name\"),\n }\n )\n if self.member and self.alt_email:\n errors.update(\n {\"alt_email\": _(\"Email should only be specified for non-members\")}\n )\n if self.member and self.alt_phone_number:\n errors.update(\n {\n \"alt_phone_number\": _(\n \"Phone number should only be specified for non-members\"\n )\n }\n )\n if (\n self.payment\n and self.special_price\n and self.special_price != self.payment.amount\n ):\n errors.update(\n {\n \"special_price\": _(\n \"Cannot change price of already paid registration\"\n ),\n }\n )\n\n if errors:\n raise ValidationError(errors)\n\n def save(self, **kwargs):\n self.full_clean()\n\n created = self.pk is None\n super().save(**kwargs)\n\n if (\n created\n and self.is_registered\n and self.email\n and self.event.registration_required\n ):\n if (\n self.member is not None\n and not self.member.profile.receive_registration_confirmation\n ):\n return # Don't send email if the user doesn't want them.\n\n emails.notify_registration(self)\n\n def __str__(self):\n if self.member:\n return f\"{self.member.get_full_name()}: {self.event}\"\n return f\"{self.name}: {self.event}\"\n\n class Meta:\n verbose_name = _(\"Registration\")\n verbose_name_plural = _(\"Registrations\")\n ordering = (\"date\",)\n unique_together = ((\"member\", \"event\"),)\n", "path": "website/events/models/event_registration.py"}]} | 2,683 | 526 |
gh_patches_debug_39525 | rasdani/github-patches | git_diff | lnbits__lnbits-836 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Poetry does not gracefully shut down on CTRL+C
This is fairly critical: bad things can happen if LNbits is just killed without letting all of its routines finish. (A minimal sketch of a graceful-shutdown pattern follows this issue block.)
Poetry with `poetry run lnbits`:
<img width="700" alt="image" src="https://user-images.githubusercontent.com/93376500/182562297-6850567b-6fb3-4003-ac9c-317d92392b04.png">
Running lnbits using `./venv/bin/uvicorn lnbits.__main__:app`:
<img width="671" alt="image" src="https://user-images.githubusercontent.com/93376500/182562341-b4d56bfa-bf4f-4981-84e6-06678923439f.png">
--- END ISSUE ---
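For context, a minimal sketch of the pattern the golden patch below ends up using — building a `uvicorn.Config` and driving it through an explicit `uvicorn.Server`, so uvicorn's own signal handling can perform a clean shutdown on CTRL+C. Host and port values here are placeholders:
```python
# Minimal sketch: run the app through an explicit uvicorn Server so that
# uvicorn's signal handlers can drive a graceful shutdown on CTRL+C.
import uvicorn


def run() -> None:
    config = uvicorn.Config("lnbits.__main__:app", host="127.0.0.1", port=5000)
    server = uvicorn.Server(config)
    server.run()  # blocks until the server has finished shutting down


if __name__ == "__main__":
    run()
```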
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lnbits/server.py`
Content:
```
1 import click
2 import uvicorn
3
4
5 @click.command()
6 @click.option("--port", default="5000", help="Port to run LNBits on")
7 @click.option("--host", default="127.0.0.1", help="Host to run LNBits on")
8 def main(port, host):
9 """Launched with `poetry run lnbits` at root level"""
10 uvicorn.run("lnbits.__main__:app", port=port, host=host)
11
12
13 if __name__ == "__main__":
14 main()
15
16 # def main():
17 # """Launched with `poetry run start` at root level"""
18 # uvicorn.run("lnbits.__main__:app")
19
```
Path: `build.py`
Content:
```
1 import warnings
2 import subprocess
3 import glob
4 import os
5 from os import path
6 from typing import Any, List, NamedTuple, Optional
7 from pathlib import Path
8
9 LNBITS_PATH = path.dirname(path.realpath(__file__)) + "/lnbits"
10
11 def get_js_vendored(prefer_minified: bool = False) -> List[str]:
12 paths = get_vendored(".js", prefer_minified)
13
14 def sorter(key: str):
15 if "moment@" in key:
16 return 1
17 if "vue@" in key:
18 return 2
19 if "vue-router@" in key:
20 return 3
21 if "polyfills" in key:
22 return 4
23 return 9
24
25 return sorted(paths, key=sorter)
26
27
28 def get_css_vendored(prefer_minified: bool = False) -> List[str]:
29 paths = get_vendored(".css", prefer_minified)
30
31 def sorter(key: str):
32 if "quasar@" in key:
33 return 1
34 if "vue@" in key:
35 return 2
36 if "chart.js@" in key:
37 return 100
38 return 9
39
40 return sorted(paths, key=sorter)
41
42
43 def get_vendored(ext: str, prefer_minified: bool = False) -> List[str]:
44 paths: List[str] = []
45 for path in glob.glob(
46 os.path.join(LNBITS_PATH, "static/vendor/**"), recursive=True
47 ):
48 if path.endswith(".min" + ext):
49 # path is minified
50 unminified = path.replace(".min" + ext, ext)
51 if prefer_minified:
52 paths.append(path)
53 if unminified in paths:
54 paths.remove(unminified)
55 elif unminified not in paths:
56 paths.append(path)
57
58 elif path.endswith(ext):
59 # path is not minified
60 minified = path.replace(ext, ".min" + ext)
61 if not prefer_minified:
62 paths.append(path)
63 if minified in paths:
64 paths.remove(minified)
65 elif minified not in paths:
66 paths.append(path)
67
68 return sorted(paths)
69
70
71 def url_for_vendored(abspath: str) -> str:
72 return "/" + os.path.relpath(abspath, LNBITS_PATH)
73
74 def transpile_scss():
75 with warnings.catch_warnings():
76 warnings.simplefilter("ignore")
77 from scss.compiler import compile_string # type: ignore
78
79 with open(os.path.join(LNBITS_PATH, "static/scss/base.scss")) as scss:
80 with open(os.path.join(LNBITS_PATH, "static/css/base.css"), "w") as css:
81 css.write(compile_string(scss.read()))
82
83 def bundle_vendored():
84 for getfiles, outputpath in [
85 (get_js_vendored, os.path.join(LNBITS_PATH, "static/bundle.js")),
86 (get_css_vendored, os.path.join(LNBITS_PATH, "static/bundle.css")),
87 ]:
88 output = ""
89 for path in getfiles():
90 with open(path) as f:
91 output += "/* " + url_for_vendored(path) + " */\n" + f.read() + ";\n"
92 with open(outputpath, "w") as f:
93 f.write(output)
94
95
96 def build():
97 transpile_scss()
98 bundle_vendored()
99 # root = Path("lnbits/static/foo")
100 # root.mkdir(parents=True)
101 # root.joinpath("example.css").write_text("")
102
103 if __name__ == "__main__":
104 build()
105
106 #def build(setup_kwargs):
107 # """Build """
108 # transpile_scss()
109 # bundle_vendored()
110 # subprocess.run(["ls", "-la", "./lnbits/static"])
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/build.py b/build.py
--- a/build.py
+++ b/build.py
@@ -8,6 +8,7 @@
LNBITS_PATH = path.dirname(path.realpath(__file__)) + "/lnbits"
+
def get_js_vendored(prefer_minified: bool = False) -> List[str]:
paths = get_vendored(".js", prefer_minified)
@@ -71,6 +72,7 @@
def url_for_vendored(abspath: str) -> str:
return "/" + os.path.relpath(abspath, LNBITS_PATH)
+
def transpile_scss():
with warnings.catch_warnings():
warnings.simplefilter("ignore")
@@ -80,6 +82,7 @@
with open(os.path.join(LNBITS_PATH, "static/css/base.css"), "w") as css:
css.write(compile_string(scss.read()))
+
def bundle_vendored():
for getfiles, outputpath in [
(get_js_vendored, os.path.join(LNBITS_PATH, "static/bundle.js")),
@@ -96,15 +99,7 @@
def build():
transpile_scss()
bundle_vendored()
-# root = Path("lnbits/static/foo")
-# root.mkdir(parents=True)
-# root.joinpath("example.css").write_text("")
-if __name__ == "__main__":
- build()
-#def build(setup_kwargs):
-# """Build """
-# transpile_scss()
-# bundle_vendored()
-# subprocess.run(["ls", "-la", "./lnbits/static"])
+if __name__ == "__main__":
+ build()
diff --git a/lnbits/server.py b/lnbits/server.py
--- a/lnbits/server.py
+++ b/lnbits/server.py
@@ -1,18 +1,45 @@
+import time
+
import click
import uvicorn
+from lnbits.settings import HOST, PORT
+
[email protected]()
[email protected]("--port", default="5000", help="Port to run LNBits on")
[email protected]("--host", default="127.0.0.1", help="Host to run LNBits on")
-def main(port, host):
[email protected](
+ context_settings=dict(
+ ignore_unknown_options=True,
+ allow_extra_args=True,
+ )
+)
[email protected]("--port", default=PORT, help="Port to listen on")
[email protected]("--host", default=HOST, help="Host to run LNBits on")
[email protected]("--ssl-keyfile", default=None, help="Path to SSL keyfile")
[email protected]("--ssl-certfile", default=None, help="Path to SSL certificate")
[email protected]_context
+def main(ctx, port: int, host: str, ssl_keyfile: str, ssl_certfile: str):
"""Launched with `poetry run lnbits` at root level"""
- uvicorn.run("lnbits.__main__:app", port=port, host=host)
+ # this beautiful beast parses all command line arguments and passes them to the uvicorn server
+ d = dict(
+ [
+ (
+ item[0].strip("--").replace("-", "_"),
+ int(item[1]) if item[1].isdigit() else item[1],
+ )
+ for item in zip(*[iter(ctx.args)] * 2)
+ ]
+ )
+ config = uvicorn.Config(
+ "lnbits.__main__:app",
+ port=port,
+ host=host,
+ ssl_keyfile=ssl_keyfile,
+ ssl_certfile=ssl_certfile,
+ **d
+ )
+ server = uvicorn.Server(config)
+ server.run()
if __name__ == "__main__":
main()
-
-# def main():
-# """Launched with `poetry run start` at root level"""
-# uvicorn.run("lnbits.__main__:app")
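
The server-side change swaps the bare `uvicorn.run(...)` call for an explicit `uvicorn.Config`/`uvicorn.Server` pair, so uvicorn itself manages startup and shutdown (including CTRL+C) and extra CLI options can be forwarded to it. A minimal standalone sketch of that pattern, with placeholder defaults rather than LNbits' real settings:

```python
import uvicorn


def run_server(host: str = "127.0.0.1", port: int = 5000) -> None:
    # Build an explicit Config instead of calling uvicorn.run() directly.
    # Server.run() sets up the event loop, handles SIGINT/SIGTERM and
    # performs uvicorn's normal graceful-shutdown sequence on CTRL+C.
    config = uvicorn.Config("lnbits.__main__:app", host=host, port=port)
    server = uvicorn.Server(config)
    server.run()


if __name__ == "__main__":
    run_server()
```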
| {"golden_diff": "diff --git a/build.py b/build.py\n--- a/build.py\n+++ b/build.py\n@@ -8,6 +8,7 @@\n \n LNBITS_PATH = path.dirname(path.realpath(__file__)) + \"/lnbits\"\n \n+\n def get_js_vendored(prefer_minified: bool = False) -> List[str]:\n paths = get_vendored(\".js\", prefer_minified)\n \n@@ -71,6 +72,7 @@\n def url_for_vendored(abspath: str) -> str:\n return \"/\" + os.path.relpath(abspath, LNBITS_PATH)\n \n+\n def transpile_scss():\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n@@ -80,6 +82,7 @@\n with open(os.path.join(LNBITS_PATH, \"static/css/base.css\"), \"w\") as css:\n css.write(compile_string(scss.read()))\n \n+\n def bundle_vendored():\n for getfiles, outputpath in [\n (get_js_vendored, os.path.join(LNBITS_PATH, \"static/bundle.js\")),\n@@ -96,15 +99,7 @@\n def build():\n transpile_scss()\n bundle_vendored()\n-# root = Path(\"lnbits/static/foo\")\n-# root.mkdir(parents=True)\n-# root.joinpath(\"example.css\").write_text(\"\")\n \n-if __name__ == \"__main__\":\n- build()\n \n-#def build(setup_kwargs):\n-# \"\"\"Build \"\"\"\n-# transpile_scss()\n-# bundle_vendored()\n-# subprocess.run([\"ls\", \"-la\", \"./lnbits/static\"])\n+if __name__ == \"__main__\":\n+ build()\ndiff --git a/lnbits/server.py b/lnbits/server.py\n--- a/lnbits/server.py\n+++ b/lnbits/server.py\n@@ -1,18 +1,45 @@\n+import time\n+\n import click\n import uvicorn\n \n+from lnbits.settings import HOST, PORT\n+\n \[email protected]()\[email protected](\"--port\", default=\"5000\", help=\"Port to run LNBits on\")\[email protected](\"--host\", default=\"127.0.0.1\", help=\"Host to run LNBits on\")\n-def main(port, host):\[email protected](\n+ context_settings=dict(\n+ ignore_unknown_options=True,\n+ allow_extra_args=True,\n+ )\n+)\[email protected](\"--port\", default=PORT, help=\"Port to listen on\")\[email protected](\"--host\", default=HOST, help=\"Host to run LNBits on\")\[email protected](\"--ssl-keyfile\", default=None, help=\"Path to SSL keyfile\")\[email protected](\"--ssl-certfile\", default=None, help=\"Path to SSL certificate\")\[email protected]_context\n+def main(ctx, port: int, host: str, ssl_keyfile: str, ssl_certfile: str):\n \"\"\"Launched with `poetry run lnbits` at root level\"\"\"\n- uvicorn.run(\"lnbits.__main__:app\", port=port, host=host)\n+ # this beautiful beast parses all command line arguments and passes them to the uvicorn server\n+ d = dict(\n+ [\n+ (\n+ item[0].strip(\"--\").replace(\"-\", \"_\"),\n+ int(item[1]) if item[1].isdigit() else item[1],\n+ )\n+ for item in zip(*[iter(ctx.args)] * 2)\n+ ]\n+ )\n+ config = uvicorn.Config(\n+ \"lnbits.__main__:app\",\n+ port=port,\n+ host=host,\n+ ssl_keyfile=ssl_keyfile,\n+ ssl_certfile=ssl_certfile,\n+ **d\n+ )\n+ server = uvicorn.Server(config)\n+ server.run()\n \n \n if __name__ == \"__main__\":\n main()\n-\n-# def main():\n-# \"\"\"Launched with `poetry run start` at root level\"\"\"\n-# uvicorn.run(\"lnbits.__main__:app\")\n", "issue": "Poetry does not gracefully shutdown on CTRL+C\nThis is a bit critical, bad things can happen if LNbits is just killed without finishing off all routines.\r\n\r\nPoetry with `poetry run lnbits`:\r\n<img width=\"700\" alt=\"image\" src=\"https://user-images.githubusercontent.com/93376500/182562297-6850567b-6fb3-4003-ac9c-317d92392b04.png\">\r\n\r\nRunning lnbits using `./venv/bin/uvicorn lnbits.__main__:app`:\r\n<img width=\"671\" alt=\"image\" src=\"https://user-images.githubusercontent.com/93376500/182562341-b4d56bfa-bf4f-4981-84e6-06678923439f.png\">\r\n\n", "before_files": 
[{"content": "import click\nimport uvicorn\n\n\[email protected]()\[email protected](\"--port\", default=\"5000\", help=\"Port to run LNBits on\")\[email protected](\"--host\", default=\"127.0.0.1\", help=\"Host to run LNBits on\")\ndef main(port, host):\n \"\"\"Launched with `poetry run lnbits` at root level\"\"\"\n uvicorn.run(\"lnbits.__main__:app\", port=port, host=host)\n\n\nif __name__ == \"__main__\":\n main()\n\n# def main():\n# \"\"\"Launched with `poetry run start` at root level\"\"\"\n# uvicorn.run(\"lnbits.__main__:app\")\n", "path": "lnbits/server.py"}, {"content": "import warnings\nimport subprocess\nimport glob\nimport os\nfrom os import path\nfrom typing import Any, List, NamedTuple, Optional\nfrom pathlib import Path\n\nLNBITS_PATH = path.dirname(path.realpath(__file__)) + \"/lnbits\"\n\ndef get_js_vendored(prefer_minified: bool = False) -> List[str]:\n paths = get_vendored(\".js\", prefer_minified)\n\n def sorter(key: str):\n if \"moment@\" in key:\n return 1\n if \"vue@\" in key:\n return 2\n if \"vue-router@\" in key:\n return 3\n if \"polyfills\" in key:\n return 4\n return 9\n\n return sorted(paths, key=sorter)\n\n\ndef get_css_vendored(prefer_minified: bool = False) -> List[str]:\n paths = get_vendored(\".css\", prefer_minified)\n\n def sorter(key: str):\n if \"quasar@\" in key:\n return 1\n if \"vue@\" in key:\n return 2\n if \"chart.js@\" in key:\n return 100\n return 9\n\n return sorted(paths, key=sorter)\n\n\ndef get_vendored(ext: str, prefer_minified: bool = False) -> List[str]:\n paths: List[str] = []\n for path in glob.glob(\n os.path.join(LNBITS_PATH, \"static/vendor/**\"), recursive=True\n ):\n if path.endswith(\".min\" + ext):\n # path is minified\n unminified = path.replace(\".min\" + ext, ext)\n if prefer_minified:\n paths.append(path)\n if unminified in paths:\n paths.remove(unminified)\n elif unminified not in paths:\n paths.append(path)\n\n elif path.endswith(ext):\n # path is not minified\n minified = path.replace(ext, \".min\" + ext)\n if not prefer_minified:\n paths.append(path)\n if minified in paths:\n paths.remove(minified)\n elif minified not in paths:\n paths.append(path)\n\n return sorted(paths)\n\n\ndef url_for_vendored(abspath: str) -> str:\n return \"/\" + os.path.relpath(abspath, LNBITS_PATH)\n\ndef transpile_scss():\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n from scss.compiler import compile_string # type: ignore\n\n with open(os.path.join(LNBITS_PATH, \"static/scss/base.scss\")) as scss:\n with open(os.path.join(LNBITS_PATH, \"static/css/base.css\"), \"w\") as css:\n css.write(compile_string(scss.read()))\n\ndef bundle_vendored():\n for getfiles, outputpath in [\n (get_js_vendored, os.path.join(LNBITS_PATH, \"static/bundle.js\")),\n (get_css_vendored, os.path.join(LNBITS_PATH, \"static/bundle.css\")),\n ]:\n output = \"\"\n for path in getfiles():\n with open(path) as f:\n output += \"/* \" + url_for_vendored(path) + \" */\\n\" + f.read() + \";\\n\"\n with open(outputpath, \"w\") as f:\n f.write(output)\n\n\ndef build():\n transpile_scss()\n bundle_vendored()\n# root = Path(\"lnbits/static/foo\")\n# root.mkdir(parents=True)\n# root.joinpath(\"example.css\").write_text(\"\")\n\nif __name__ == \"__main__\":\n build()\n\n#def build(setup_kwargs):\n# \"\"\"Build \"\"\"\n# transpile_scss()\n# bundle_vendored()\n# subprocess.run([\"ls\", \"-la\", \"./lnbits/static\"])\n", "path": "build.py"}], "after_files": [{"content": "import time\n\nimport click\nimport uvicorn\n\nfrom lnbits.settings import HOST, 
PORT\n\n\[email protected](\n context_settings=dict(\n ignore_unknown_options=True,\n allow_extra_args=True,\n )\n)\[email protected](\"--port\", default=PORT, help=\"Port to listen on\")\[email protected](\"--host\", default=HOST, help=\"Host to run LNBits on\")\[email protected](\"--ssl-keyfile\", default=None, help=\"Path to SSL keyfile\")\[email protected](\"--ssl-certfile\", default=None, help=\"Path to SSL certificate\")\[email protected]_context\ndef main(ctx, port: int, host: str, ssl_keyfile: str, ssl_certfile: str):\n \"\"\"Launched with `poetry run lnbits` at root level\"\"\"\n # this beautiful beast parses all command line arguments and passes them to the uvicorn server\n d = dict(\n [\n (\n item[0].strip(\"--\").replace(\"-\", \"_\"),\n int(item[1]) if item[1].isdigit() else item[1],\n )\n for item in zip(*[iter(ctx.args)] * 2)\n ]\n )\n config = uvicorn.Config(\n \"lnbits.__main__:app\",\n port=port,\n host=host,\n ssl_keyfile=ssl_keyfile,\n ssl_certfile=ssl_certfile,\n **d\n )\n server = uvicorn.Server(config)\n server.run()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "lnbits/server.py"}, {"content": "import warnings\nimport subprocess\nimport glob\nimport os\nfrom os import path\nfrom typing import Any, List, NamedTuple, Optional\nfrom pathlib import Path\n\nLNBITS_PATH = path.dirname(path.realpath(__file__)) + \"/lnbits\"\n\n\ndef get_js_vendored(prefer_minified: bool = False) -> List[str]:\n paths = get_vendored(\".js\", prefer_minified)\n\n def sorter(key: str):\n if \"moment@\" in key:\n return 1\n if \"vue@\" in key:\n return 2\n if \"vue-router@\" in key:\n return 3\n if \"polyfills\" in key:\n return 4\n return 9\n\n return sorted(paths, key=sorter)\n\n\ndef get_css_vendored(prefer_minified: bool = False) -> List[str]:\n paths = get_vendored(\".css\", prefer_minified)\n\n def sorter(key: str):\n if \"quasar@\" in key:\n return 1\n if \"vue@\" in key:\n return 2\n if \"chart.js@\" in key:\n return 100\n return 9\n\n return sorted(paths, key=sorter)\n\n\ndef get_vendored(ext: str, prefer_minified: bool = False) -> List[str]:\n paths: List[str] = []\n for path in glob.glob(\n os.path.join(LNBITS_PATH, \"static/vendor/**\"), recursive=True\n ):\n if path.endswith(\".min\" + ext):\n # path is minified\n unminified = path.replace(\".min\" + ext, ext)\n if prefer_minified:\n paths.append(path)\n if unminified in paths:\n paths.remove(unminified)\n elif unminified not in paths:\n paths.append(path)\n\n elif path.endswith(ext):\n # path is not minified\n minified = path.replace(ext, \".min\" + ext)\n if not prefer_minified:\n paths.append(path)\n if minified in paths:\n paths.remove(minified)\n elif minified not in paths:\n paths.append(path)\n\n return sorted(paths)\n\n\ndef url_for_vendored(abspath: str) -> str:\n return \"/\" + os.path.relpath(abspath, LNBITS_PATH)\n\n\ndef transpile_scss():\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n from scss.compiler import compile_string # type: ignore\n\n with open(os.path.join(LNBITS_PATH, \"static/scss/base.scss\")) as scss:\n with open(os.path.join(LNBITS_PATH, \"static/css/base.css\"), \"w\") as css:\n css.write(compile_string(scss.read()))\n\n\ndef bundle_vendored():\n for getfiles, outputpath in [\n (get_js_vendored, os.path.join(LNBITS_PATH, \"static/bundle.js\")),\n (get_css_vendored, os.path.join(LNBITS_PATH, \"static/bundle.css\")),\n ]:\n output = \"\"\n for path in getfiles():\n with open(path) as f:\n output += \"/* \" + url_for_vendored(path) + \" */\\n\" + f.read() + \";\\n\"\n 
with open(outputpath, \"w\") as f:\n f.write(output)\n\n\ndef build():\n transpile_scss()\n bundle_vendored()\n\n\nif __name__ == \"__main__\":\n build()\n", "path": "build.py"}]} | 1,709 | 881 |
gh_patches_debug_29767 | rasdani/github-patches | git_diff | rotki__rotki-5054 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rotki 1.26.0 + windows: Unable to open database
## Problem Definition
Premium Windows users started getting this after upgrading to 1.26.0.
It's probably related to the changes I did here: https://github.com/rotki/rotki/pull/5043, but it manifests only on Windows, as no one else saw it when using/testing on other OSes.
## Logs
```
[28/10/2022 18:14:05 W. Europe Daylight Time] ERROR rotkehlchen.greenlets Greenlet with id 3094640951536: Upload data to server died with exception: unable to open database: C:\Users\isido\AppData\Local\Temp\tmpp2jyr7sq.db.
Exception Name: <class 'pysqlcipher3.dbapi2.OperationalError'>
Exception Info: unable to open database: C:\Users\isido\AppData\Local\Temp\tmpp2jyr7sq.db
Traceback:
File "src\\gevent\\greenlet.py", line 908, in gevent._gevent_cgreenlet.Greenlet.run
File "rotkehlchen\premium\sync.py", line 143, in maybe_upload_data_to_server
File "rotkehlchen\data_handler.py", line 198, in compress_and_encrypt_db
File "rotkehlchen\db\dbhandler.py", line 483, in export_unencrypted
File "rotkehlchen\db\drivers\gevent.py", line 264, in executescript
[28/10/2022 18:14:05 W. Europe Daylight Time] ERROR rotkehlchen.user_messages Greenlet with id 3094640951536: Upload data to server died with exception: unable to open database: C:\Users\isido\AppData\Local\Temp\tmpp2jyr7sq.db. Check the logs for more details
```
### System Description
Operating system: Windows
Rotki version: 1.26.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/data_handler.py`
Content:
```
1 import base64
2 import hashlib
3 import logging
4 import shutil
5 import tempfile
6 import zlib
7 from pathlib import Path
8 from typing import Dict, List, Optional, Tuple
9
10 from rotkehlchen.assets.asset import Asset
11 from rotkehlchen.crypto import decrypt, encrypt
12 from rotkehlchen.db.dbhandler import DBHandler
13 from rotkehlchen.db.settings import ModifiableDBSettings
14 from rotkehlchen.errors.api import AuthenticationError
15 from rotkehlchen.errors.misc import SystemPermissionError
16 from rotkehlchen.logging import RotkehlchenLogsAdapter
17 from rotkehlchen.types import B64EncodedBytes, B64EncodedString
18 from rotkehlchen.user_messages import MessagesAggregator
19 from rotkehlchen.utils.misc import timestamp_to_date, ts_now
20
21 logger = logging.getLogger(__name__)
22 log = RotkehlchenLogsAdapter(logger)
23
24 BUFFERSIZE = 64 * 1024
25
26
27 class DataHandler():
28
29 def __init__(
30 self,
31 data_directory: Path,
32 msg_aggregator: MessagesAggregator,
33 sql_vm_instructions_cb: int,
34 ):
35 self.logged_in = False
36 self.data_directory = data_directory
37 self.username = 'no_user'
38 self.password = ''
39 self.msg_aggregator = msg_aggregator
40 self.sql_vm_instructions_cb = sql_vm_instructions_cb
41
42 def logout(self) -> None:
43 if self.logged_in:
44 self.username = 'no_user'
45 self.password = ''
46 self.user_data_dir: Optional[Path] = None
47 db = getattr(self, 'db', None)
48 if db is not None:
49 with self.db.conn.read_ctx() as cursor:
50 self.db.update_owned_assets_in_globaldb(cursor)
51 self.db.logout()
52 self.logged_in = False
53
54 def change_password(self, new_password: str) -> bool:
55 success: bool = False
56
57 if self.logged_in:
58 success = self.db.change_password(new_password)
59 self.password = new_password
60
61 return success
62
63 def unlock(
64 self,
65 username: str,
66 password: str,
67 create_new: bool,
68 initial_settings: Optional[ModifiableDBSettings] = None,
69 ) -> Path:
70 """Unlocks a user, either logging them in or creating a new user
71
72 May raise:
73 - SystemPermissionError if there are permission errors when accessing the DB
74 or a directory in the user's filesystem
75 - AuthenticationError if the given user does not exist, or if
76 sqlcipher version problems are detected
77 - DBUpgradeError if the rotki DB version is newer than the software or
78 there is a DB upgrade and there is an error or if the version is older
79 than the one supported.
80 """
81 user_data_dir = self.data_directory / username
82 if create_new:
83 try:
84 if (user_data_dir / 'rotkehlchen.db').exists():
85 raise AuthenticationError(
86 f'User {username} already exists. User data dir: {user_data_dir}',
87 )
88
89 user_data_dir.mkdir(exist_ok=True)
90 except PermissionError as e:
91 raise SystemPermissionError(
92 f'Failed to create directory for user: {str(e)}',
93 ) from e
94
95 else:
96 try:
97 if not user_data_dir.exists():
98 raise AuthenticationError('User {} does not exist'.format(username))
99
100 if not (user_data_dir / 'rotkehlchen.db').exists():
101 raise PermissionError
102
103 except PermissionError as e:
104 # This is bad. User directory exists but database is missing.
105 # Or either DB or user directory can't be accessed due to permissions
106 # Make a backup of the directory that user should probably remove
107 # on their own. At the same time delete the directory so that a new
108 # user account can be created
109 shutil.move(
110 user_data_dir,
111 self.data_directory / f'auto_backup_{username}_{ts_now()}',
112 )
113
114 raise SystemPermissionError(
115 'User {} exists but DB is missing. Somehow must have been manually '
116 'deleted or is corrupt or access permissions do not allow reading. '
117 'Please recreate the user account. '
118 'A backup of the user directory was created.'.format(username),
119 ) from e
120
121 self.db: DBHandler = DBHandler(
122 user_data_dir=user_data_dir,
123 password=password,
124 msg_aggregator=self.msg_aggregator,
125 initial_settings=initial_settings,
126 sql_vm_instructions_cb=self.sql_vm_instructions_cb,
127 )
128 self.user_data_dir = user_data_dir
129 self.logged_in = True
130 self.username = username
131 self.password = password
132 return user_data_dir
133
134 def add_ignored_assets(self, assets: List[Asset]) -> Tuple[Optional[List[Asset]], str]:
135 """Adds ignored assets to the DB.
136
137 If any of the given assets is already in the DB the function does nothing
138 and returns an error message.
139 """
140 with self.db.conn.read_ctx() as cursor:
141 ignored_assets = self.db.get_ignored_assets(cursor)
142 for asset in assets:
143 if asset in ignored_assets:
144 msg = f'{asset.identifier} is already in ignored assets'
145 return None, msg
146
147 with self.db.user_write() as write_cursor:
148 for asset in assets:
149 self.db.add_to_ignored_assets(write_cursor=write_cursor, asset=asset)
150
151 return self.db.get_ignored_assets(cursor), ''
152
153 def remove_ignored_assets(self, assets: List[Asset]) -> Tuple[Optional[List[Asset]], str]:
154 """Removes ignored assets from the DB.
155
156 If any of the given assets is not in the DB the call function does nothing
157 and returns an error message.
158 """
159 with self.db.conn.read_ctx() as cursor:
160 ignored_assets = self.db.get_ignored_assets(cursor)
161 for asset in assets:
162 if asset not in ignored_assets:
163 msg = f'{asset.identifier} is not in ignored assets'
164 return None, msg
165
166 with self.db.user_write() as write_cursor:
167 for asset in assets:
168 self.db.remove_from_ignored_assets(write_cursor=write_cursor, asset=asset)
169
170 return self.db.get_ignored_assets(cursor), ''
171
172 def get_users(self) -> Dict[str, str]:
173 """Returns a dict with all users in the system.
174
175 Each key is a user's name and the value is denoting whether that
176 particular user is logged in or not
177 """
178 users = {}
179 for x in self.data_directory.iterdir():
180 try:
181 if x.is_dir() and (x / 'rotkehlchen.db').exists():
182 users[x.stem] = 'loggedin' if x.stem == self.username else 'loggedout'
183 except PermissionError:
184 # ignore directories that can't be accessed
185 continue
186
187 return users
188
189 def compress_and_encrypt_db(self, password: str) -> Tuple[B64EncodedBytes, str]:
190 """Decrypt the DB, dump in temporary plaintextdb, compress it,
191 and then re-encrypt it
192
193 Returns a b64 encoded binary blob"""
194 log.info('Compress and encrypt DB')
195 compressor = zlib.compressobj(level=9)
196 with tempfile.NamedTemporaryFile(delete=True, suffix='.db') as tempdbfile:
197 tempdb = Path(tempdbfile.name)
198 self.db.export_unencrypted(tempdb)
199 source_data = bytearray()
200 compressed_data = bytearray()
201 with open(tempdb, 'rb') as src_f:
202 block = src_f.read(BUFFERSIZE)
203 while block:
204 source_data += block
205 compressed_data += compressor.compress(block)
206 block = src_f.read(BUFFERSIZE)
207
208 compressed_data += compressor.flush()
209
210 original_data_hash = base64.b64encode(
211 hashlib.sha256(source_data).digest(),
212 ).decode()
213 encrypted_data = encrypt(password.encode(), bytes(compressed_data))
214 return B64EncodedBytes(encrypted_data.encode()), original_data_hash
215
216 def decompress_and_decrypt_db(self, password: str, encrypted_data: B64EncodedString) -> None:
217 """Decrypt and decompress the encrypted data we receive from the server
218
219 If successful then replace our local Database
220
221 May Raise:
222 - UnableToDecryptRemoteData due to decrypt()
223 - DBUpgradeError if the rotki DB version is newer than the software or
224 there is a DB upgrade and there is an error or if the version is older
225 than the one supported.
226 - SystemPermissionError if the DB file permissions are not correct
227 """
228 log.info('Decompress and decrypt DB')
229 # First make a backup of the DB we are about to replace
230 date = timestamp_to_date(ts=ts_now(), formatstr='%Y_%m_%d_%H_%M_%S', treat_as_local=True)
231 shutil.copyfile(
232 self.data_directory / self.username / 'rotkehlchen.db',
233 self.data_directory / self.username / f'rotkehlchen_db_{date}.backup',
234 )
235
236 decrypted_data = decrypt(password.encode(), encrypted_data)
237 decompressed_data = zlib.decompress(decrypted_data)
238 self.db.import_unencrypted(decompressed_data, password)
239
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rotkehlchen/data_handler.py b/rotkehlchen/data_handler.py
--- a/rotkehlchen/data_handler.py
+++ b/rotkehlchen/data_handler.py
@@ -193,12 +193,13 @@
Returns a b64 encoded binary blob"""
log.info('Compress and encrypt DB')
compressor = zlib.compressobj(level=9)
- with tempfile.NamedTemporaryFile(delete=True, suffix='.db') as tempdbfile:
- tempdb = Path(tempdbfile.name)
- self.db.export_unencrypted(tempdb)
+ with tempfile.NamedTemporaryFile(delete=False, suffix='.db') as tempdbfile:
+ tempdbpath = Path(tempdbfile.name)
+ tempdbfile.close() # close the file to allow re-opening by export_unencrypted in windows https://github.com/rotki/rotki/issues/5051 # noqa: E501
+ self.db.export_unencrypted(tempdbpath)
source_data = bytearray()
compressed_data = bytearray()
- with open(tempdb, 'rb') as src_f:
+ with open(tempdbpath, 'rb') as src_f:
block = src_f.read(BUFFERSIZE)
while block:
source_data += block
@@ -211,6 +212,8 @@
hashlib.sha256(source_data).digest(),
).decode()
encrypted_data = encrypt(password.encode(), bytes(compressed_data))
+ # cleanup temp file to avoid windows problem (https://github.com/rotki/rotki/issues/5051)
+ tempdbpath.unlink()
return B64EncodedBytes(encrypted_data.encode()), original_data_hash
def decompress_and_decrypt_db(self, password: str, encrypted_data: B64EncodedString) -> None:
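
The fix works around a documented `tempfile.NamedTemporaryFile` limitation: with `delete=True` the file is kept open, and on Windows a file opened this way cannot be opened again by name, which is what the SQLCipher connection needs to do for `export_unencrypted` (see the traceback in the issue). Creating the file with `delete=False`, closing the handle, and unlinking it manually sidesteps that. A minimal sketch of the pattern, with a placeholder `export()` standing in for the real DB export:

```python
import tempfile
from pathlib import Path


def export(path: Path) -> None:
    # Placeholder for any code that re-opens the path by name,
    # e.g. a database engine writing an unencrypted copy.
    path.write_bytes(b"example payload")


def export_via_tempfile() -> bytes:
    with tempfile.NamedTemporaryFile(delete=False, suffix=".db") as tmp:
        tmp_path = Path(tmp.name)
    # The handle is closed once the with-block exits, so Windows now
    # allows the path to be opened again by name.
    try:
        export(tmp_path)
        return tmp_path.read_bytes()
    finally:
        tmp_path.unlink()  # delete=False means we clean up ourselves
```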
| {"golden_diff": "diff --git a/rotkehlchen/data_handler.py b/rotkehlchen/data_handler.py\n--- a/rotkehlchen/data_handler.py\n+++ b/rotkehlchen/data_handler.py\n@@ -193,12 +193,13 @@\n Returns a b64 encoded binary blob\"\"\"\n log.info('Compress and encrypt DB')\n compressor = zlib.compressobj(level=9)\n- with tempfile.NamedTemporaryFile(delete=True, suffix='.db') as tempdbfile:\n- tempdb = Path(tempdbfile.name)\n- self.db.export_unencrypted(tempdb)\n+ with tempfile.NamedTemporaryFile(delete=False, suffix='.db') as tempdbfile:\n+ tempdbpath = Path(tempdbfile.name)\n+ tempdbfile.close() # close the file to allow re-opening by export_unencrypted in windows https://github.com/rotki/rotki/issues/5051 # noqa: E501\n+ self.db.export_unencrypted(tempdbpath)\n source_data = bytearray()\n compressed_data = bytearray()\n- with open(tempdb, 'rb') as src_f:\n+ with open(tempdbpath, 'rb') as src_f:\n block = src_f.read(BUFFERSIZE)\n while block:\n source_data += block\n@@ -211,6 +212,8 @@\n hashlib.sha256(source_data).digest(),\n ).decode()\n encrypted_data = encrypt(password.encode(), bytes(compressed_data))\n+ # cleanup temp file to avoid windows problem (https://github.com/rotki/rotki/issues/5051)\n+ tempdbpath.unlink()\n return B64EncodedBytes(encrypted_data.encode()), original_data_hash\n \n def decompress_and_decrypt_db(self, password: str, encrypted_data: B64EncodedString) -> None:\n", "issue": "rotki 1.26.0 + windows: Unable to open database\n## Problem Definition\r\n\r\nPremium windows users started getting this after upgrading to 1.26.0.\r\n\r\nIt's probably related to the changes I did here: https://github.com/rotki/rotki/pull/5043 but manifests only in Windows as none else saw it when using/testing in other OSes.\r\n\r\n## Logs\r\n\r\n```\r\n[28/10/2022 18:14:05 W. Europe Daylight Time] ERROR rotkehlchen.greenlets Greenlet with id 3094640951536: Upload data to server died with exception: unable to open database: C:\\Users\\isido\\AppData\\Local\\Temp\\tmpp2jyr7sq.db.\r\nException Name: <class 'pysqlcipher3.dbapi2.OperationalError'>\r\nException Info: unable to open database: C:\\Users\\isido\\AppData\\Local\\Temp\\tmpp2jyr7sq.db\r\nTraceback:\r\n File \"src\\\\gevent\\\\greenlet.py\", line 908, in gevent._gevent_cgreenlet.Greenlet.run\r\n File \"rotkehlchen\\premium\\sync.py\", line 143, in maybe_upload_data_to_server\r\n File \"rotkehlchen\\data_handler.py\", line 198, in compress_and_encrypt_db\r\n File \"rotkehlchen\\db\\dbhandler.py\", line 483, in export_unencrypted\r\n File \"rotkehlchen\\db\\drivers\\gevent.py\", line 264, in executescript\r\n\r\n[28/10/2022 18:14:05 W. Europe Daylight Time] ERROR rotkehlchen.user_messages Greenlet with id 3094640951536: Upload data to server died with exception: unable to open database: C:\\Users\\isido\\AppData\\Local\\Temp\\tmpp2jyr7sq.db. 
Check the logs for more details\r\n```\r\n\r\n\r\n\r\n### System Description\r\n\r\n\r\nOperating system: Windows\r\nRotki version: 1.26.0\r\n\n", "before_files": [{"content": "import base64\nimport hashlib\nimport logging\nimport shutil\nimport tempfile\nimport zlib\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Tuple\n\nfrom rotkehlchen.assets.asset import Asset\nfrom rotkehlchen.crypto import decrypt, encrypt\nfrom rotkehlchen.db.dbhandler import DBHandler\nfrom rotkehlchen.db.settings import ModifiableDBSettings\nfrom rotkehlchen.errors.api import AuthenticationError\nfrom rotkehlchen.errors.misc import SystemPermissionError\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.types import B64EncodedBytes, B64EncodedString\nfrom rotkehlchen.user_messages import MessagesAggregator\nfrom rotkehlchen.utils.misc import timestamp_to_date, ts_now\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\nBUFFERSIZE = 64 * 1024\n\n\nclass DataHandler():\n\n def __init__(\n self,\n data_directory: Path,\n msg_aggregator: MessagesAggregator,\n sql_vm_instructions_cb: int,\n ):\n self.logged_in = False\n self.data_directory = data_directory\n self.username = 'no_user'\n self.password = ''\n self.msg_aggregator = msg_aggregator\n self.sql_vm_instructions_cb = sql_vm_instructions_cb\n\n def logout(self) -> None:\n if self.logged_in:\n self.username = 'no_user'\n self.password = ''\n self.user_data_dir: Optional[Path] = None\n db = getattr(self, 'db', None)\n if db is not None:\n with self.db.conn.read_ctx() as cursor:\n self.db.update_owned_assets_in_globaldb(cursor)\n self.db.logout()\n self.logged_in = False\n\n def change_password(self, new_password: str) -> bool:\n success: bool = False\n\n if self.logged_in:\n success = self.db.change_password(new_password)\n self.password = new_password\n\n return success\n\n def unlock(\n self,\n username: str,\n password: str,\n create_new: bool,\n initial_settings: Optional[ModifiableDBSettings] = None,\n ) -> Path:\n \"\"\"Unlocks a user, either logging them in or creating a new user\n\n May raise:\n - SystemPermissionError if there are permission errors when accessing the DB\n or a directory in the user's filesystem\n - AuthenticationError if the given user does not exist, or if\n sqlcipher version problems are detected\n - DBUpgradeError if the rotki DB version is newer than the software or\n there is a DB upgrade and there is an error or if the version is older\n than the one supported.\n \"\"\"\n user_data_dir = self.data_directory / username\n if create_new:\n try:\n if (user_data_dir / 'rotkehlchen.db').exists():\n raise AuthenticationError(\n f'User {username} already exists. User data dir: {user_data_dir}',\n )\n\n user_data_dir.mkdir(exist_ok=True)\n except PermissionError as e:\n raise SystemPermissionError(\n f'Failed to create directory for user: {str(e)}',\n ) from e\n\n else:\n try:\n if not user_data_dir.exists():\n raise AuthenticationError('User {} does not exist'.format(username))\n\n if not (user_data_dir / 'rotkehlchen.db').exists():\n raise PermissionError\n\n except PermissionError as e:\n # This is bad. User directory exists but database is missing.\n # Or either DB or user directory can't be accessed due to permissions\n # Make a backup of the directory that user should probably remove\n # on their own. 
At the same time delete the directory so that a new\n # user account can be created\n shutil.move(\n user_data_dir,\n self.data_directory / f'auto_backup_{username}_{ts_now()}',\n )\n\n raise SystemPermissionError(\n 'User {} exists but DB is missing. Somehow must have been manually '\n 'deleted or is corrupt or access permissions do not allow reading. '\n 'Please recreate the user account. '\n 'A backup of the user directory was created.'.format(username),\n ) from e\n\n self.db: DBHandler = DBHandler(\n user_data_dir=user_data_dir,\n password=password,\n msg_aggregator=self.msg_aggregator,\n initial_settings=initial_settings,\n sql_vm_instructions_cb=self.sql_vm_instructions_cb,\n )\n self.user_data_dir = user_data_dir\n self.logged_in = True\n self.username = username\n self.password = password\n return user_data_dir\n\n def add_ignored_assets(self, assets: List[Asset]) -> Tuple[Optional[List[Asset]], str]:\n \"\"\"Adds ignored assets to the DB.\n\n If any of the given assets is already in the DB the function does nothing\n and returns an error message.\n \"\"\"\n with self.db.conn.read_ctx() as cursor:\n ignored_assets = self.db.get_ignored_assets(cursor)\n for asset in assets:\n if asset in ignored_assets:\n msg = f'{asset.identifier} is already in ignored assets'\n return None, msg\n\n with self.db.user_write() as write_cursor:\n for asset in assets:\n self.db.add_to_ignored_assets(write_cursor=write_cursor, asset=asset)\n\n return self.db.get_ignored_assets(cursor), ''\n\n def remove_ignored_assets(self, assets: List[Asset]) -> Tuple[Optional[List[Asset]], str]:\n \"\"\"Removes ignored assets from the DB.\n\n If any of the given assets is not in the DB the call function does nothing\n and returns an error message.\n \"\"\"\n with self.db.conn.read_ctx() as cursor:\n ignored_assets = self.db.get_ignored_assets(cursor)\n for asset in assets:\n if asset not in ignored_assets:\n msg = f'{asset.identifier} is not in ignored assets'\n return None, msg\n\n with self.db.user_write() as write_cursor:\n for asset in assets:\n self.db.remove_from_ignored_assets(write_cursor=write_cursor, asset=asset)\n\n return self.db.get_ignored_assets(cursor), ''\n\n def get_users(self) -> Dict[str, str]:\n \"\"\"Returns a dict with all users in the system.\n\n Each key is a user's name and the value is denoting whether that\n particular user is logged in or not\n \"\"\"\n users = {}\n for x in self.data_directory.iterdir():\n try:\n if x.is_dir() and (x / 'rotkehlchen.db').exists():\n users[x.stem] = 'loggedin' if x.stem == self.username else 'loggedout'\n except PermissionError:\n # ignore directories that can't be accessed\n continue\n\n return users\n\n def compress_and_encrypt_db(self, password: str) -> Tuple[B64EncodedBytes, str]:\n \"\"\"Decrypt the DB, dump in temporary plaintextdb, compress it,\n and then re-encrypt it\n\n Returns a b64 encoded binary blob\"\"\"\n log.info('Compress and encrypt DB')\n compressor = zlib.compressobj(level=9)\n with tempfile.NamedTemporaryFile(delete=True, suffix='.db') as tempdbfile:\n tempdb = Path(tempdbfile.name)\n self.db.export_unencrypted(tempdb)\n source_data = bytearray()\n compressed_data = bytearray()\n with open(tempdb, 'rb') as src_f:\n block = src_f.read(BUFFERSIZE)\n while block:\n source_data += block\n compressed_data += compressor.compress(block)\n block = src_f.read(BUFFERSIZE)\n\n compressed_data += compressor.flush()\n\n original_data_hash = base64.b64encode(\n hashlib.sha256(source_data).digest(),\n ).decode()\n encrypted_data = 
encrypt(password.encode(), bytes(compressed_data))\n return B64EncodedBytes(encrypted_data.encode()), original_data_hash\n\n def decompress_and_decrypt_db(self, password: str, encrypted_data: B64EncodedString) -> None:\n \"\"\"Decrypt and decompress the encrypted data we receive from the server\n\n If successful then replace our local Database\n\n May Raise:\n - UnableToDecryptRemoteData due to decrypt()\n - DBUpgradeError if the rotki DB version is newer than the software or\n there is a DB upgrade and there is an error or if the version is older\n than the one supported.\n - SystemPermissionError if the DB file permissions are not correct\n \"\"\"\n log.info('Decompress and decrypt DB')\n # First make a backup of the DB we are about to replace\n date = timestamp_to_date(ts=ts_now(), formatstr='%Y_%m_%d_%H_%M_%S', treat_as_local=True)\n shutil.copyfile(\n self.data_directory / self.username / 'rotkehlchen.db',\n self.data_directory / self.username / f'rotkehlchen_db_{date}.backup',\n )\n\n decrypted_data = decrypt(password.encode(), encrypted_data)\n decompressed_data = zlib.decompress(decrypted_data)\n self.db.import_unencrypted(decompressed_data, password)\n", "path": "rotkehlchen/data_handler.py"}], "after_files": [{"content": "import base64\nimport hashlib\nimport logging\nimport shutil\nimport tempfile\nimport zlib\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Tuple\n\nfrom rotkehlchen.assets.asset import Asset\nfrom rotkehlchen.crypto import decrypt, encrypt\nfrom rotkehlchen.db.dbhandler import DBHandler\nfrom rotkehlchen.db.settings import ModifiableDBSettings\nfrom rotkehlchen.errors.api import AuthenticationError\nfrom rotkehlchen.errors.misc import SystemPermissionError\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.types import B64EncodedBytes, B64EncodedString\nfrom rotkehlchen.user_messages import MessagesAggregator\nfrom rotkehlchen.utils.misc import timestamp_to_date, ts_now\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\nBUFFERSIZE = 64 * 1024\n\n\nclass DataHandler():\n\n def __init__(\n self,\n data_directory: Path,\n msg_aggregator: MessagesAggregator,\n sql_vm_instructions_cb: int,\n ):\n self.logged_in = False\n self.data_directory = data_directory\n self.username = 'no_user'\n self.password = ''\n self.msg_aggregator = msg_aggregator\n self.sql_vm_instructions_cb = sql_vm_instructions_cb\n\n def logout(self) -> None:\n if self.logged_in:\n self.username = 'no_user'\n self.password = ''\n self.user_data_dir: Optional[Path] = None\n db = getattr(self, 'db', None)\n if db is not None:\n with self.db.conn.read_ctx() as cursor:\n self.db.update_owned_assets_in_globaldb(cursor)\n self.db.logout()\n self.logged_in = False\n\n def change_password(self, new_password: str) -> bool:\n success: bool = False\n\n if self.logged_in:\n success = self.db.change_password(new_password)\n self.password = new_password\n\n return success\n\n def unlock(\n self,\n username: str,\n password: str,\n create_new: bool,\n initial_settings: Optional[ModifiableDBSettings] = None,\n ) -> Path:\n \"\"\"Unlocks a user, either logging them in or creating a new user\n\n May raise:\n - SystemPermissionError if there are permission errors when accessing the DB\n or a directory in the user's filesystem\n - AuthenticationError if the given user does not exist, or if\n sqlcipher version problems are detected\n - DBUpgradeError if the rotki DB version is newer than the software or\n there is a DB upgrade and there is an error 
or if the version is older\n than the one supported.\n \"\"\"\n user_data_dir = self.data_directory / username\n if create_new:\n try:\n if (user_data_dir / 'rotkehlchen.db').exists():\n raise AuthenticationError(\n f'User {username} already exists. User data dir: {user_data_dir}',\n )\n\n user_data_dir.mkdir(exist_ok=True)\n except PermissionError as e:\n raise SystemPermissionError(\n f'Failed to create directory for user: {str(e)}',\n ) from e\n\n else:\n try:\n if not user_data_dir.exists():\n raise AuthenticationError('User {} does not exist'.format(username))\n\n if not (user_data_dir / 'rotkehlchen.db').exists():\n raise PermissionError\n\n except PermissionError as e:\n # This is bad. User directory exists but database is missing.\n # Or either DB or user directory can't be accessed due to permissions\n # Make a backup of the directory that user should probably remove\n # on their own. At the same time delete the directory so that a new\n # user account can be created\n shutil.move(\n user_data_dir,\n self.data_directory / f'auto_backup_{username}_{ts_now()}',\n )\n\n raise SystemPermissionError(\n 'User {} exists but DB is missing. Somehow must have been manually '\n 'deleted or is corrupt or access permissions do not allow reading. '\n 'Please recreate the user account. '\n 'A backup of the user directory was created.'.format(username),\n ) from e\n\n self.db: DBHandler = DBHandler(\n user_data_dir=user_data_dir,\n password=password,\n msg_aggregator=self.msg_aggregator,\n initial_settings=initial_settings,\n sql_vm_instructions_cb=self.sql_vm_instructions_cb,\n )\n self.user_data_dir = user_data_dir\n self.logged_in = True\n self.username = username\n self.password = password\n return user_data_dir\n\n def add_ignored_assets(self, assets: List[Asset]) -> Tuple[Optional[List[Asset]], str]:\n \"\"\"Adds ignored assets to the DB.\n\n If any of the given assets is already in the DB the function does nothing\n and returns an error message.\n \"\"\"\n with self.db.conn.read_ctx() as cursor:\n ignored_assets = self.db.get_ignored_assets(cursor)\n for asset in assets:\n if asset in ignored_assets:\n msg = f'{asset.identifier} is already in ignored assets'\n return None, msg\n\n with self.db.user_write() as write_cursor:\n for asset in assets:\n self.db.add_to_ignored_assets(write_cursor=write_cursor, asset=asset)\n\n return self.db.get_ignored_assets(cursor), ''\n\n def remove_ignored_assets(self, assets: List[Asset]) -> Tuple[Optional[List[Asset]], str]:\n \"\"\"Removes ignored assets from the DB.\n\n If any of the given assets is not in the DB the call function does nothing\n and returns an error message.\n \"\"\"\n with self.db.conn.read_ctx() as cursor:\n ignored_assets = self.db.get_ignored_assets(cursor)\n for asset in assets:\n if asset not in ignored_assets:\n msg = f'{asset.identifier} is not in ignored assets'\n return None, msg\n\n with self.db.user_write() as write_cursor:\n for asset in assets:\n self.db.remove_from_ignored_assets(write_cursor=write_cursor, asset=asset)\n\n return self.db.get_ignored_assets(cursor), ''\n\n def get_users(self) -> Dict[str, str]:\n \"\"\"Returns a dict with all users in the system.\n\n Each key is a user's name and the value is denoting whether that\n particular user is logged in or not\n \"\"\"\n users = {}\n for x in self.data_directory.iterdir():\n try:\n if x.is_dir() and (x / 'rotkehlchen.db').exists():\n users[x.stem] = 'loggedin' if x.stem == self.username else 'loggedout'\n except PermissionError:\n # ignore directories that can't be 
accessed\n continue\n\n return users\n\n def compress_and_encrypt_db(self, password: str) -> Tuple[B64EncodedBytes, str]:\n \"\"\"Decrypt the DB, dump in temporary plaintextdb, compress it,\n and then re-encrypt it\n\n Returns a b64 encoded binary blob\"\"\"\n log.info('Compress and encrypt DB')\n compressor = zlib.compressobj(level=9)\n with tempfile.NamedTemporaryFile(delete=False, suffix='.db') as tempdbfile:\n tempdbpath = Path(tempdbfile.name)\n tempdbfile.close() # close the file to allow re-opening by export_unencrypted in windows https://github.com/rotki/rotki/issues/5051 # noqa: E501\n self.db.export_unencrypted(tempdbpath)\n source_data = bytearray()\n compressed_data = bytearray()\n with open(tempdbpath, 'rb') as src_f:\n block = src_f.read(BUFFERSIZE)\n while block:\n source_data += block\n compressed_data += compressor.compress(block)\n block = src_f.read(BUFFERSIZE)\n\n compressed_data += compressor.flush()\n\n original_data_hash = base64.b64encode(\n hashlib.sha256(source_data).digest(),\n ).decode()\n encrypted_data = encrypt(password.encode(), bytes(compressed_data))\n # cleanup temp file to avoid windows problem (https://github.com/rotki/rotki/issues/5051)\n tempdbpath.unlink()\n return B64EncodedBytes(encrypted_data.encode()), original_data_hash\n\n def decompress_and_decrypt_db(self, password: str, encrypted_data: B64EncodedString) -> None:\n \"\"\"Decrypt and decompress the encrypted data we receive from the server\n\n If successful then replace our local Database\n\n May Raise:\n - UnableToDecryptRemoteData due to decrypt()\n - DBUpgradeError if the rotki DB version is newer than the software or\n there is a DB upgrade and there is an error or if the version is older\n than the one supported.\n - SystemPermissionError if the DB file permissions are not correct\n \"\"\"\n log.info('Decompress and decrypt DB')\n # First make a backup of the DB we are about to replace\n date = timestamp_to_date(ts=ts_now(), formatstr='%Y_%m_%d_%H_%M_%S', treat_as_local=True)\n shutil.copyfile(\n self.data_directory / self.username / 'rotkehlchen.db',\n self.data_directory / self.username / f'rotkehlchen_db_{date}.backup',\n )\n\n decrypted_data = decrypt(password.encode(), encrypted_data)\n decompressed_data = zlib.decompress(decrypted_data)\n self.db.import_unencrypted(decompressed_data, password)\n", "path": "rotkehlchen/data_handler.py"}]} | 3,315 | 399 |
gh_patches_debug_17386 | rasdani/github-patches | git_diff | carpentries__amy-2593 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Integrity error when submitting instructor training application
If I enter a non-Eventbrite URL such as https://google.com, I get the warning that it must be an Eventbrite URL. If I try an Eventbrite URL such as one listed on our [calendar](https://carpentries.github.io/instructor-training/training_calendar.html), or if I leave the field blank, I get an Integrity Error.
Integrity error on `eventbrite_url` field when submitting Instructor Training form:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `amy/extforms/forms.py`
Content:
```
1 from datetime import date
2 from typing import Iterable, cast
3 from urllib.parse import urlparse
4
5 from captcha.fields import ReCaptchaField
6 from crispy_forms.layout import HTML, Div, Field, Layout
7 from django import forms
8 from django.core.exceptions import ValidationError
9 from django.db.models.fields import BLANK_CHOICE_DASH
10 from django.http import HttpRequest
11
12 from consents.forms import option_display_value
13 from consents.models import Term, TrainingRequestConsent
14 from extrequests.forms import (
15 SelfOrganisedSubmissionBaseForm,
16 WorkshopInquiryRequestBaseForm,
17 WorkshopRequestBaseForm,
18 )
19 from extrequests.utils import MemberCodeValidationError, member_code_valid_training
20 from workshops.fields import (
21 CheckboxSelectMultipleWithOthers,
22 RadioSelectWithOther,
23 Select2Widget,
24 )
25 from workshops.forms import BootstrapHelper
26 from workshops.models import TrainingRequest
27 from workshops.utils.feature_flags import feature_flag_enabled
28
29
30 class TrainingRequestForm(forms.ModelForm):
31 # agreement fields are moved to the model
32
33 captcha = ReCaptchaField()
34
35 helper = BootstrapHelper(wider_labels=True, add_cancel_button=False)
36
37 class Meta:
38 model = TrainingRequest
39 fields = (
40 "review_process",
41 "member_code",
42 "member_code_override",
43 "eventbrite_url",
44 "personal",
45 "family",
46 "email",
47 "secondary_email",
48 "github",
49 "occupation",
50 "occupation_other",
51 "affiliation",
52 "location",
53 "country",
54 "underresourced",
55 "domains",
56 "domains_other",
57 "underrepresented",
58 "underrepresented_details",
59 "nonprofit_teaching_experience",
60 "previous_involvement",
61 "previous_training",
62 "previous_training_other",
63 "previous_training_explanation",
64 "previous_experience",
65 "previous_experience_other",
66 "previous_experience_explanation",
67 "programming_language_usage_frequency",
68 "checkout_intent",
69 "teaching_intent",
70 "teaching_frequency_expectation",
71 "teaching_frequency_expectation_other",
72 "max_travelling_frequency",
73 "max_travelling_frequency_other",
74 "reason",
75 "user_notes",
76 # "data_privacy_agreement",
77 "code_of_conduct_agreement",
78 )
79 widgets = {
80 "review_process": forms.RadioSelect(),
81 "occupation": RadioSelectWithOther("occupation_other"),
82 "domains": CheckboxSelectMultipleWithOthers("domains_other"),
83 "underrepresented": forms.RadioSelect(),
84 "previous_involvement": forms.CheckboxSelectMultiple(),
85 "previous_training": RadioSelectWithOther("previous_training_other"),
86 "previous_experience": RadioSelectWithOther("previous_experience_other"),
87 "programming_language_usage_frequency": forms.RadioSelect(),
88 "checkout_intent": forms.RadioSelect(),
89 "teaching_intent": forms.RadioSelect(),
90 "teaching_frequency_expectation": RadioSelectWithOther(
91 "teaching_frequency_expectation_other"
92 ),
93 "max_travelling_frequency": RadioSelectWithOther(
94 "max_travelling_frequency_other"
95 ),
96 "country": Select2Widget,
97 }
98
99 def __init__(self, *args, **kwargs):
100 # request is required for ENFORCE_MEMBER_CODES flag
101 self.request_http = kwargs.pop("request", None)
102 super().__init__(*args, **kwargs)
103
104 # Only active and required terms.
105 self.terms = (
106 Term.objects.prefetch_active_options()
107 .filter(required_type=Term.PROFILE_REQUIRE_TYPE)
108 .order_by("slug")
109 )
110
111 self.set_consent_fields(self.terms)
112
113 # set up a layout object for the helper
114 self.helper.layout = self.helper.build_default_layout(self)
115
116 self.set_other_fields(self.helper.layout)
117 self.set_fake_required_fields()
118 self.set_accordion(self.helper.layout)
119 self.set_display_member_code_override(visible=False)
120 self.set_hr(self.helper.layout)
121
122 def set_other_field(self, field_name: str, layout: Layout) -> None:
123 """
124 Set up a field so that it can be displayed as a separate widget.
125 """
126 WidgetType = self._meta.widgets[field_name].__class__ # type: ignore
127 cast(WidgetType, self[field_name].field.widget).other_field = self[
128 f"{field_name}_other"
129 ]
130 layout.fields.remove(f"{field_name}_other")
131
132 def set_other_fields(self, layout: Layout) -> None:
133 """
134 Set fields that have "Other" counterpart as a separate widget.
135 """
136 # Set up "*WithOther" widgets so that they can display additional
137 # inline fields. The original "*other" fields are removed from the layout.
138 self.set_other_field("occupation", layout)
139 self.set_other_field("domains", layout)
140 self.set_other_field("previous_training", layout)
141 self.set_other_field("previous_experience", layout)
142 self.set_other_field("teaching_frequency_expectation", layout)
143 self.set_other_field("max_travelling_frequency", layout)
144
145 def set_fake_required_fields(self) -> None:
146 # fake requiredness of the registration code / group name
147 self["member_code"].field.widget.fake_required = True # type: ignore
148
149 def set_accordion(self, layout: Layout) -> None:
150 # special accordion display for the review process
151 self["review_process"].field.widget.subfields = { # type: ignore
152 "preapproved": [
153 self["member_code"],
154 self["member_code_override"],
155 self["eventbrite_url"],
156 ],
157 "open": [], # this option doesn't require any additional fields
158 }
159 self[
160 "review_process"
161 ].field.widget.notes = TrainingRequest.REVIEW_CHOICES_NOTES # type: ignore
162
163 # get current position of `review_process` field
164 pos_index = layout.fields.index("review_process")
165
166 layout.fields.remove("review_process")
167 layout.fields.remove("member_code")
168 layout.fields.remove("member_code_override")
169 layout.fields.remove("eventbrite_url")
170
171 # insert div+field at previously saved position
172 layout.insert(
173 pos_index,
174 Div(
175 Field(
176 "review_process", template="bootstrap4/layout/radio-accordion.html"
177 ),
178 css_class="form-group row",
179 ),
180 )
181
182 def set_display_member_code_override(self, *, visible: bool) -> None:
183 widget = forms.CheckboxInput() if visible else forms.HiddenInput()
184 self.fields["member_code_override"].widget = widget
185
186 def set_hr(self, layout: Layout) -> None:
187 # add <HR> around "underrepresented*" fields
188 index = layout.fields.index("underrepresented")
189 layout.insert(index, HTML(self.helper.hr()))
190
191 index = layout.fields.index("underrepresented_details")
192 layout.insert(index + 1, HTML(self.helper.hr()))
193
194 def set_consent_fields(self, terms: Iterable[Term]) -> None:
195 for term in terms:
196 self.fields[term.slug] = self.create_consent_field(term)
197
198 def create_consent_field(self, term: Term) -> forms.ChoiceField:
199 options = [(opt.pk, option_display_value(opt)) for opt in term.options]
200 label = term.training_request_content or term.content
201 required = term.required_type == Term.PROFILE_REQUIRE_TYPE
202 initial = None
203 attrs = {"class": "border border-warning"} if initial is None else {}
204
205 field = forms.ChoiceField(
206 choices=BLANK_CHOICE_DASH + options,
207 label=label,
208 required=required,
209 initial=initial,
210 help_text=term.help_text or "",
211 widget=forms.Select(attrs=attrs),
212 )
213 return field
214
215 @feature_flag_enabled("ENFORCE_MEMBER_CODES")
216 def validate_member_code(
217 self, request: HttpRequest
218 ) -> None | dict[str, ValidationError]:
219 errors = dict()
220 member_code = self.cleaned_data.get("member_code", "")
221 member_code_override = self.cleaned_data.get("member_code_override", False)
222 error_msg = (
223 "This code is invalid. "
224 "This may be due to a typo, an expired code, "
225 "a code that has not yet been activated, "
226 "or a code with no training seats remaining. "
227 "Please confirm that you have copied the code correctly, "
228 "or confirm the code with the Membership Contact for your group. "
229 "If the code seems to be correct, tick the checkbox below to ignore "
230 "this message."
231 )
232
233 if not member_code:
234 return None
235
236 # check code validity
237 # grace period: 90 days before and after
238 try:
239 member_code_is_valid = member_code_valid_training(
240 member_code, date.today(), grace_before=90, grace_after=90
241 )
242 if member_code_is_valid and member_code_override:
243 # case where a user corrects their code but ticks the box anyway
244 # checkbox doesn't need to be ticked, so correct it quietly and continue
245 self.cleaned_data["member_code_override"] = False
246 self.set_display_member_code_override(visible=False)
247 except MemberCodeValidationError:
248 self.set_display_member_code_override(visible=True)
249 if not member_code_override:
250 # user must either correct the code or tick the override
251 errors["member_code"] = ValidationError(error_msg)
252
253 return errors
254
255 def clean_eventbrite_url(self):
256 """Check that entered URL includes 'eventbrite' in the domain."""
257 eventbrite_url = self.cleaned_data.get("eventbrite_url", "")
258 if eventbrite_url and "eventbrite" not in urlparse(eventbrite_url).hostname:
259 raise ValidationError("Must be an Eventbrite URL.")
260
261 def clean(self):
262 super().clean()
263 errors = dict()
264
265 # 1: validate registration code
266 review_process = self.cleaned_data.get("review_process", "")
267 member_code = self.cleaned_data.get("member_code", "").split()
268
269 # it's required when review_process is 'preapproved', but not when
270 # 'open'
271 if review_process == "preapproved" and not member_code:
272 errors["review_process"] = ValidationError(
273 "Registration code is required for pre-approved training "
274 "review process."
275 )
276
277 # it's required to be empty when review_process is 'open'
278 if review_process == "open" and member_code:
279 errors["review_process"] = ValidationError(
280 "Registration code must be empty for open training review process."
281 )
282
283 # confirm that code is valid
284 membership_errors = self.validate_member_code(request=self.request_http)
285 if membership_errors:
286 errors.update(membership_errors)
287
288 if errors:
289 raise ValidationError(errors)
290
291 def save(self, *args, **kwargs) -> None:
292 training_request = super().save(*args, **kwargs)
293 new_consents: list[TrainingRequestConsent] = []
294 for term in self.terms:
295 option_id = self.cleaned_data.get(term.slug)
296 if not option_id:
297 continue
298 new_consents.append(
299 TrainingRequestConsent(
300 training_request=training_request,
301 term_option_id=option_id,
302 term_id=term.pk,
303 )
304 )
305 TrainingRequestConsent.objects.bulk_create(new_consents)
306 return training_request
307
308
309 class WorkshopRequestExternalForm(WorkshopRequestBaseForm):
310 captcha = ReCaptchaField()
311
312 class Meta(WorkshopRequestBaseForm.Meta):
313 fields = WorkshopRequestBaseForm.Meta.fields + ("captcha",)
314
315
316 class WorkshopInquiryRequestExternalForm(WorkshopInquiryRequestBaseForm):
317 captcha = ReCaptchaField()
318
319 class Meta(WorkshopInquiryRequestBaseForm.Meta):
320 fields = WorkshopInquiryRequestBaseForm.Meta.fields + ("captcha",)
321
322
323 class SelfOrganisedSubmissionExternalForm(SelfOrganisedSubmissionBaseForm):
324 captcha = ReCaptchaField()
325
326 class Meta(SelfOrganisedSubmissionBaseForm.Meta):
327 fields = SelfOrganisedSubmissionBaseForm.Meta.fields + ("captcha",)
328
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/amy/extforms/forms.py b/amy/extforms/forms.py
--- a/amy/extforms/forms.py
+++ b/amy/extforms/forms.py
@@ -34,6 +34,11 @@
helper = BootstrapHelper(wider_labels=True, add_cancel_button=False)
+ code_of_conduct_agreement = forms.BooleanField(
+ required=True,
+ label=TrainingRequest._meta.get_field("code_of_conduct_agreement").verbose_name,
+ )
+
class Meta:
model = TrainingRequest
fields = (
@@ -257,6 +262,7 @@
eventbrite_url = self.cleaned_data.get("eventbrite_url", "")
if eventbrite_url and "eventbrite" not in urlparse(eventbrite_url).hostname:
raise ValidationError("Must be an Eventbrite URL.")
+ return eventbrite_url
def clean(self):
super().clean()
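
The `return eventbrite_url` line is the key part of the fix: a Django `clean_<field>()` method must return the cleaned value, otherwise `cleaned_data["eventbrite_url"]` silently becomes `None` and the later save can violate a NOT NULL constraint on the model column, the likely source of the IntegrityError here. The other hunk simply redeclares `code_of_conduct_agreement` as an explicitly required `BooleanField`. A minimal sketch of the `clean_<field>()` convention, using an illustrative form rather than AMY's actual one:

```python
from django import forms


class ExampleRequestForm(forms.Form):
    # Illustrative field; the real form is a ModelForm over TrainingRequest.
    eventbrite_url = forms.URLField(required=False)

    def clean_eventbrite_url(self) -> str:
        url = self.cleaned_data.get("eventbrite_url", "")
        if url and "eventbrite" not in url:
            raise forms.ValidationError("Must be an Eventbrite URL.")
        return url  # without this, cleaned_data["eventbrite_url"] is None
```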
| {"golden_diff": "diff --git a/amy/extforms/forms.py b/amy/extforms/forms.py\n--- a/amy/extforms/forms.py\n+++ b/amy/extforms/forms.py\n@@ -34,6 +34,11 @@\n \n helper = BootstrapHelper(wider_labels=True, add_cancel_button=False)\n \n+ code_of_conduct_agreement = forms.BooleanField(\n+ required=True,\n+ label=TrainingRequest._meta.get_field(\"code_of_conduct_agreement\").verbose_name,\n+ )\n+\n class Meta:\n model = TrainingRequest\n fields = (\n@@ -257,6 +262,7 @@\n eventbrite_url = self.cleaned_data.get(\"eventbrite_url\", \"\")\n if eventbrite_url and \"eventbrite\" not in urlparse(eventbrite_url).hostname:\n raise ValidationError(\"Must be an Eventbrite URL.\")\n+ return eventbrite_url\n \n def clean(self):\n super().clean()\n", "issue": "Integrity error when submitting instructor training application\nIf I enter a non-Eventbrite URL such as https://google.com, I get the warning that it must be an Evenbrite URL. If I try an Eventbrite URL such as one listed on our [calendar](https://carpentries.github.io/instructor-training/training_calendar.html) or if I leave that blank, I get an Integrity Error. \r\nIntegrity error on `eventbrite_url` field when submitting Instructor Training form: \r\n\r\n\r\n\n", "before_files": [{"content": "from datetime import date\nfrom typing import Iterable, cast\nfrom urllib.parse import urlparse\n\nfrom captcha.fields import ReCaptchaField\nfrom crispy_forms.layout import HTML, Div, Field, Layout\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.db.models.fields import BLANK_CHOICE_DASH\nfrom django.http import HttpRequest\n\nfrom consents.forms import option_display_value\nfrom consents.models import Term, TrainingRequestConsent\nfrom extrequests.forms import (\n SelfOrganisedSubmissionBaseForm,\n WorkshopInquiryRequestBaseForm,\n WorkshopRequestBaseForm,\n)\nfrom extrequests.utils import MemberCodeValidationError, member_code_valid_training\nfrom workshops.fields import (\n CheckboxSelectMultipleWithOthers,\n RadioSelectWithOther,\n Select2Widget,\n)\nfrom workshops.forms import BootstrapHelper\nfrom workshops.models import TrainingRequest\nfrom workshops.utils.feature_flags import feature_flag_enabled\n\n\nclass TrainingRequestForm(forms.ModelForm):\n # agreement fields are moved to the model\n\n captcha = ReCaptchaField()\n\n helper = BootstrapHelper(wider_labels=True, add_cancel_button=False)\n\n class Meta:\n model = TrainingRequest\n fields = (\n \"review_process\",\n \"member_code\",\n \"member_code_override\",\n \"eventbrite_url\",\n \"personal\",\n \"family\",\n \"email\",\n \"secondary_email\",\n \"github\",\n \"occupation\",\n \"occupation_other\",\n \"affiliation\",\n \"location\",\n \"country\",\n \"underresourced\",\n \"domains\",\n \"domains_other\",\n \"underrepresented\",\n \"underrepresented_details\",\n \"nonprofit_teaching_experience\",\n \"previous_involvement\",\n \"previous_training\",\n \"previous_training_other\",\n \"previous_training_explanation\",\n \"previous_experience\",\n \"previous_experience_other\",\n \"previous_experience_explanation\",\n \"programming_language_usage_frequency\",\n \"checkout_intent\",\n \"teaching_intent\",\n \"teaching_frequency_expectation\",\n \"teaching_frequency_expectation_other\",\n \"max_travelling_frequency\",\n \"max_travelling_frequency_other\",\n \"reason\",\n \"user_notes\",\n # \"data_privacy_agreement\",\n \"code_of_conduct_agreement\",\n )\n widgets = {\n \"review_process\": forms.RadioSelect(),\n \"occupation\": 
RadioSelectWithOther(\"occupation_other\"),\n \"domains\": CheckboxSelectMultipleWithOthers(\"domains_other\"),\n \"underrepresented\": forms.RadioSelect(),\n \"previous_involvement\": forms.CheckboxSelectMultiple(),\n \"previous_training\": RadioSelectWithOther(\"previous_training_other\"),\n \"previous_experience\": RadioSelectWithOther(\"previous_experience_other\"),\n \"programming_language_usage_frequency\": forms.RadioSelect(),\n \"checkout_intent\": forms.RadioSelect(),\n \"teaching_intent\": forms.RadioSelect(),\n \"teaching_frequency_expectation\": RadioSelectWithOther(\n \"teaching_frequency_expectation_other\"\n ),\n \"max_travelling_frequency\": RadioSelectWithOther(\n \"max_travelling_frequency_other\"\n ),\n \"country\": Select2Widget,\n }\n\n def __init__(self, *args, **kwargs):\n # request is required for ENFORCE_MEMBER_CODES flag\n self.request_http = kwargs.pop(\"request\", None)\n super().__init__(*args, **kwargs)\n\n # Only active and required terms.\n self.terms = (\n Term.objects.prefetch_active_options()\n .filter(required_type=Term.PROFILE_REQUIRE_TYPE)\n .order_by(\"slug\")\n )\n\n self.set_consent_fields(self.terms)\n\n # set up a layout object for the helper\n self.helper.layout = self.helper.build_default_layout(self)\n\n self.set_other_fields(self.helper.layout)\n self.set_fake_required_fields()\n self.set_accordion(self.helper.layout)\n self.set_display_member_code_override(visible=False)\n self.set_hr(self.helper.layout)\n\n def set_other_field(self, field_name: str, layout: Layout) -> None:\n \"\"\"\n Set up a field so that it can be displayed as a separate widget.\n \"\"\"\n WidgetType = self._meta.widgets[field_name].__class__ # type: ignore\n cast(WidgetType, self[field_name].field.widget).other_field = self[\n f\"{field_name}_other\"\n ]\n layout.fields.remove(f\"{field_name}_other\")\n\n def set_other_fields(self, layout: Layout) -> None:\n \"\"\"\n Set fields that have \"Other\" counterpart as a separate widget.\n \"\"\"\n # Set up \"*WithOther\" widgets so that they can display additional\n # inline fields. 
The original \"*other\" fields are removed from the layout.\n self.set_other_field(\"occupation\", layout)\n self.set_other_field(\"domains\", layout)\n self.set_other_field(\"previous_training\", layout)\n self.set_other_field(\"previous_experience\", layout)\n self.set_other_field(\"teaching_frequency_expectation\", layout)\n self.set_other_field(\"max_travelling_frequency\", layout)\n\n def set_fake_required_fields(self) -> None:\n # fake requiredness of the registration code / group name\n self[\"member_code\"].field.widget.fake_required = True # type: ignore\n\n def set_accordion(self, layout: Layout) -> None:\n # special accordion display for the review process\n self[\"review_process\"].field.widget.subfields = { # type: ignore\n \"preapproved\": [\n self[\"member_code\"],\n self[\"member_code_override\"],\n self[\"eventbrite_url\"],\n ],\n \"open\": [], # this option doesn't require any additional fields\n }\n self[\n \"review_process\"\n ].field.widget.notes = TrainingRequest.REVIEW_CHOICES_NOTES # type: ignore\n\n # get current position of `review_process` field\n pos_index = layout.fields.index(\"review_process\")\n\n layout.fields.remove(\"review_process\")\n layout.fields.remove(\"member_code\")\n layout.fields.remove(\"member_code_override\")\n layout.fields.remove(\"eventbrite_url\")\n\n # insert div+field at previously saved position\n layout.insert(\n pos_index,\n Div(\n Field(\n \"review_process\", template=\"bootstrap4/layout/radio-accordion.html\"\n ),\n css_class=\"form-group row\",\n ),\n )\n\n def set_display_member_code_override(self, *, visible: bool) -> None:\n widget = forms.CheckboxInput() if visible else forms.HiddenInput()\n self.fields[\"member_code_override\"].widget = widget\n\n def set_hr(self, layout: Layout) -> None:\n # add <HR> around \"underrepresented*\" fields\n index = layout.fields.index(\"underrepresented\")\n layout.insert(index, HTML(self.helper.hr()))\n\n index = layout.fields.index(\"underrepresented_details\")\n layout.insert(index + 1, HTML(self.helper.hr()))\n\n def set_consent_fields(self, terms: Iterable[Term]) -> None:\n for term in terms:\n self.fields[term.slug] = self.create_consent_field(term)\n\n def create_consent_field(self, term: Term) -> forms.ChoiceField:\n options = [(opt.pk, option_display_value(opt)) for opt in term.options]\n label = term.training_request_content or term.content\n required = term.required_type == Term.PROFILE_REQUIRE_TYPE\n initial = None\n attrs = {\"class\": \"border border-warning\"} if initial is None else {}\n\n field = forms.ChoiceField(\n choices=BLANK_CHOICE_DASH + options,\n label=label,\n required=required,\n initial=initial,\n help_text=term.help_text or \"\",\n widget=forms.Select(attrs=attrs),\n )\n return field\n\n @feature_flag_enabled(\"ENFORCE_MEMBER_CODES\")\n def validate_member_code(\n self, request: HttpRequest\n ) -> None | dict[str, ValidationError]:\n errors = dict()\n member_code = self.cleaned_data.get(\"member_code\", \"\")\n member_code_override = self.cleaned_data.get(\"member_code_override\", False)\n error_msg = (\n \"This code is invalid. \"\n \"This may be due to a typo, an expired code, \"\n \"a code that has not yet been activated, \"\n \"or a code with no training seats remaining. \"\n \"Please confirm that you have copied the code correctly, \"\n \"or confirm the code with the Membership Contact for your group. 
\"\n \"If the code seems to be correct, tick the checkbox below to ignore \"\n \"this message.\"\n )\n\n if not member_code:\n return None\n\n # check code validity\n # grace period: 90 days before and after\n try:\n member_code_is_valid = member_code_valid_training(\n member_code, date.today(), grace_before=90, grace_after=90\n )\n if member_code_is_valid and member_code_override:\n # case where a user corrects their code but ticks the box anyway\n # checkbox doesn't need to be ticked, so correct it quietly and continue\n self.cleaned_data[\"member_code_override\"] = False\n self.set_display_member_code_override(visible=False)\n except MemberCodeValidationError:\n self.set_display_member_code_override(visible=True)\n if not member_code_override:\n # user must either correct the code or tick the override\n errors[\"member_code\"] = ValidationError(error_msg)\n\n return errors\n\n def clean_eventbrite_url(self):\n \"\"\"Check that entered URL includes 'eventbrite' in the domain.\"\"\"\n eventbrite_url = self.cleaned_data.get(\"eventbrite_url\", \"\")\n if eventbrite_url and \"eventbrite\" not in urlparse(eventbrite_url).hostname:\n raise ValidationError(\"Must be an Eventbrite URL.\")\n\n def clean(self):\n super().clean()\n errors = dict()\n\n # 1: validate registration code\n review_process = self.cleaned_data.get(\"review_process\", \"\")\n member_code = self.cleaned_data.get(\"member_code\", \"\").split()\n\n # it's required when review_process is 'preapproved', but not when\n # 'open'\n if review_process == \"preapproved\" and not member_code:\n errors[\"review_process\"] = ValidationError(\n \"Registration code is required for pre-approved training \"\n \"review process.\"\n )\n\n # it's required to be empty when review_process is 'open'\n if review_process == \"open\" and member_code:\n errors[\"review_process\"] = ValidationError(\n \"Registration code must be empty for open training review process.\"\n )\n\n # confirm that code is valid\n membership_errors = self.validate_member_code(request=self.request_http)\n if membership_errors:\n errors.update(membership_errors)\n\n if errors:\n raise ValidationError(errors)\n\n def save(self, *args, **kwargs) -> None:\n training_request = super().save(*args, **kwargs)\n new_consents: list[TrainingRequestConsent] = []\n for term in self.terms:\n option_id = self.cleaned_data.get(term.slug)\n if not option_id:\n continue\n new_consents.append(\n TrainingRequestConsent(\n training_request=training_request,\n term_option_id=option_id,\n term_id=term.pk,\n )\n )\n TrainingRequestConsent.objects.bulk_create(new_consents)\n return training_request\n\n\nclass WorkshopRequestExternalForm(WorkshopRequestBaseForm):\n captcha = ReCaptchaField()\n\n class Meta(WorkshopRequestBaseForm.Meta):\n fields = WorkshopRequestBaseForm.Meta.fields + (\"captcha\",)\n\n\nclass WorkshopInquiryRequestExternalForm(WorkshopInquiryRequestBaseForm):\n captcha = ReCaptchaField()\n\n class Meta(WorkshopInquiryRequestBaseForm.Meta):\n fields = WorkshopInquiryRequestBaseForm.Meta.fields + (\"captcha\",)\n\n\nclass SelfOrganisedSubmissionExternalForm(SelfOrganisedSubmissionBaseForm):\n captcha = ReCaptchaField()\n\n class Meta(SelfOrganisedSubmissionBaseForm.Meta):\n fields = SelfOrganisedSubmissionBaseForm.Meta.fields + (\"captcha\",)\n", "path": "amy/extforms/forms.py"}], "after_files": [{"content": "from datetime import date\nfrom typing import Iterable, cast\nfrom urllib.parse import urlparse\n\nfrom captcha.fields import ReCaptchaField\nfrom crispy_forms.layout import 
HTML, Div, Field, Layout\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.db.models.fields import BLANK_CHOICE_DASH\nfrom django.http import HttpRequest\n\nfrom consents.forms import option_display_value\nfrom consents.models import Term, TrainingRequestConsent\nfrom extrequests.forms import (\n SelfOrganisedSubmissionBaseForm,\n WorkshopInquiryRequestBaseForm,\n WorkshopRequestBaseForm,\n)\nfrom extrequests.utils import MemberCodeValidationError, member_code_valid_training\nfrom workshops.fields import (\n CheckboxSelectMultipleWithOthers,\n RadioSelectWithOther,\n Select2Widget,\n)\nfrom workshops.forms import BootstrapHelper\nfrom workshops.models import TrainingRequest\nfrom workshops.utils.feature_flags import feature_flag_enabled\n\n\nclass TrainingRequestForm(forms.ModelForm):\n # agreement fields are moved to the model\n\n captcha = ReCaptchaField()\n\n helper = BootstrapHelper(wider_labels=True, add_cancel_button=False)\n\n code_of_conduct_agreement = forms.BooleanField(\n required=True,\n label=TrainingRequest._meta.get_field(\"code_of_conduct_agreement\").verbose_name,\n )\n\n class Meta:\n model = TrainingRequest\n fields = (\n \"review_process\",\n \"member_code\",\n \"member_code_override\",\n \"eventbrite_url\",\n \"personal\",\n \"family\",\n \"email\",\n \"secondary_email\",\n \"github\",\n \"occupation\",\n \"occupation_other\",\n \"affiliation\",\n \"location\",\n \"country\",\n \"underresourced\",\n \"domains\",\n \"domains_other\",\n \"underrepresented\",\n \"underrepresented_details\",\n \"nonprofit_teaching_experience\",\n \"previous_involvement\",\n \"previous_training\",\n \"previous_training_other\",\n \"previous_training_explanation\",\n \"previous_experience\",\n \"previous_experience_other\",\n \"previous_experience_explanation\",\n \"programming_language_usage_frequency\",\n \"checkout_intent\",\n \"teaching_intent\",\n \"teaching_frequency_expectation\",\n \"teaching_frequency_expectation_other\",\n \"max_travelling_frequency\",\n \"max_travelling_frequency_other\",\n \"reason\",\n \"user_notes\",\n # \"data_privacy_agreement\",\n \"code_of_conduct_agreement\",\n )\n widgets = {\n \"review_process\": forms.RadioSelect(),\n \"occupation\": RadioSelectWithOther(\"occupation_other\"),\n \"domains\": CheckboxSelectMultipleWithOthers(\"domains_other\"),\n \"underrepresented\": forms.RadioSelect(),\n \"previous_involvement\": forms.CheckboxSelectMultiple(),\n \"previous_training\": RadioSelectWithOther(\"previous_training_other\"),\n \"previous_experience\": RadioSelectWithOther(\"previous_experience_other\"),\n \"programming_language_usage_frequency\": forms.RadioSelect(),\n \"checkout_intent\": forms.RadioSelect(),\n \"teaching_intent\": forms.RadioSelect(),\n \"teaching_frequency_expectation\": RadioSelectWithOther(\n \"teaching_frequency_expectation_other\"\n ),\n \"max_travelling_frequency\": RadioSelectWithOther(\n \"max_travelling_frequency_other\"\n ),\n \"country\": Select2Widget,\n }\n\n def __init__(self, *args, **kwargs):\n # request is required for ENFORCE_MEMBER_CODES flag\n self.request_http = kwargs.pop(\"request\", None)\n super().__init__(*args, **kwargs)\n\n # Only active and required terms.\n self.terms = (\n Term.objects.prefetch_active_options()\n .filter(required_type=Term.PROFILE_REQUIRE_TYPE)\n .order_by(\"slug\")\n )\n\n self.set_consent_fields(self.terms)\n\n # set up a layout object for the helper\n self.helper.layout = self.helper.build_default_layout(self)\n\n 
self.set_other_fields(self.helper.layout)\n self.set_fake_required_fields()\n self.set_accordion(self.helper.layout)\n self.set_display_member_code_override(visible=False)\n self.set_hr(self.helper.layout)\n\n def set_other_field(self, field_name: str, layout: Layout) -> None:\n \"\"\"\n Set up a field so that it can be displayed as a separate widget.\n \"\"\"\n WidgetType = self._meta.widgets[field_name].__class__ # type: ignore\n cast(WidgetType, self[field_name].field.widget).other_field = self[\n f\"{field_name}_other\"\n ]\n layout.fields.remove(f\"{field_name}_other\")\n\n def set_other_fields(self, layout: Layout) -> None:\n \"\"\"\n Set fields that have \"Other\" counterpart as a separate widget.\n \"\"\"\n # Set up \"*WithOther\" widgets so that they can display additional\n # inline fields. The original \"*other\" fields are removed from the layout.\n self.set_other_field(\"occupation\", layout)\n self.set_other_field(\"domains\", layout)\n self.set_other_field(\"previous_training\", layout)\n self.set_other_field(\"previous_experience\", layout)\n self.set_other_field(\"teaching_frequency_expectation\", layout)\n self.set_other_field(\"max_travelling_frequency\", layout)\n\n def set_fake_required_fields(self) -> None:\n # fake requiredness of the registration code / group name\n self[\"member_code\"].field.widget.fake_required = True # type: ignore\n\n def set_accordion(self, layout: Layout) -> None:\n # special accordion display for the review process\n self[\"review_process\"].field.widget.subfields = { # type: ignore\n \"preapproved\": [\n self[\"member_code\"],\n self[\"member_code_override\"],\n self[\"eventbrite_url\"],\n ],\n \"open\": [], # this option doesn't require any additional fields\n }\n self[\n \"review_process\"\n ].field.widget.notes = TrainingRequest.REVIEW_CHOICES_NOTES # type: ignore\n\n # get current position of `review_process` field\n pos_index = layout.fields.index(\"review_process\")\n\n layout.fields.remove(\"review_process\")\n layout.fields.remove(\"member_code\")\n layout.fields.remove(\"member_code_override\")\n layout.fields.remove(\"eventbrite_url\")\n\n # insert div+field at previously saved position\n layout.insert(\n pos_index,\n Div(\n Field(\n \"review_process\", template=\"bootstrap4/layout/radio-accordion.html\"\n ),\n css_class=\"form-group row\",\n ),\n )\n\n def set_display_member_code_override(self, *, visible: bool) -> None:\n widget = forms.CheckboxInput() if visible else forms.HiddenInput()\n self.fields[\"member_code_override\"].widget = widget\n\n def set_hr(self, layout: Layout) -> None:\n # add <HR> around \"underrepresented*\" fields\n index = layout.fields.index(\"underrepresented\")\n layout.insert(index, HTML(self.helper.hr()))\n\n index = layout.fields.index(\"underrepresented_details\")\n layout.insert(index + 1, HTML(self.helper.hr()))\n\n def set_consent_fields(self, terms: Iterable[Term]) -> None:\n for term in terms:\n self.fields[term.slug] = self.create_consent_field(term)\n\n def create_consent_field(self, term: Term) -> forms.ChoiceField:\n options = [(opt.pk, option_display_value(opt)) for opt in term.options]\n label = term.training_request_content or term.content\n required = term.required_type == Term.PROFILE_REQUIRE_TYPE\n initial = None\n attrs = {\"class\": \"border border-warning\"} if initial is None else {}\n\n field = forms.ChoiceField(\n choices=BLANK_CHOICE_DASH + options,\n label=label,\n required=required,\n initial=initial,\n help_text=term.help_text or \"\",\n widget=forms.Select(attrs=attrs),\n 
)\n return field\n\n @feature_flag_enabled(\"ENFORCE_MEMBER_CODES\")\n def validate_member_code(\n self, request: HttpRequest\n ) -> None | dict[str, ValidationError]:\n errors = dict()\n member_code = self.cleaned_data.get(\"member_code\", \"\")\n member_code_override = self.cleaned_data.get(\"member_code_override\", False)\n error_msg = (\n \"This code is invalid. \"\n \"This may be due to a typo, an expired code, \"\n \"a code that has not yet been activated, \"\n \"or a code with no training seats remaining. \"\n \"Please confirm that you have copied the code correctly, \"\n \"or confirm the code with the Membership Contact for your group. \"\n \"If the code seems to be correct, tick the checkbox below to ignore \"\n \"this message.\"\n )\n\n if not member_code:\n return None\n\n # check code validity\n # grace period: 90 days before and after\n try:\n member_code_is_valid = member_code_valid_training(\n member_code, date.today(), grace_before=90, grace_after=90\n )\n if member_code_is_valid and member_code_override:\n # case where a user corrects their code but ticks the box anyway\n # checkbox doesn't need to be ticked, so correct it quietly and continue\n self.cleaned_data[\"member_code_override\"] = False\n self.set_display_member_code_override(visible=False)\n except MemberCodeValidationError:\n self.set_display_member_code_override(visible=True)\n if not member_code_override:\n # user must either correct the code or tick the override\n errors[\"member_code\"] = ValidationError(error_msg)\n\n return errors\n\n def clean_eventbrite_url(self):\n \"\"\"Check that entered URL includes 'eventbrite' in the domain.\"\"\"\n eventbrite_url = self.cleaned_data.get(\"eventbrite_url\", \"\")\n if eventbrite_url and \"eventbrite\" not in urlparse(eventbrite_url).hostname:\n raise ValidationError(\"Must be an Eventbrite URL.\")\n return eventbrite_url\n\n def clean(self):\n super().clean()\n errors = dict()\n\n # 1: validate registration code\n review_process = self.cleaned_data.get(\"review_process\", \"\")\n member_code = self.cleaned_data.get(\"member_code\", \"\").split()\n\n # it's required when review_process is 'preapproved', but not when\n # 'open'\n if review_process == \"preapproved\" and not member_code:\n errors[\"review_process\"] = ValidationError(\n \"Registration code is required for pre-approved training \"\n \"review process.\"\n )\n\n # it's required to be empty when review_process is 'open'\n if review_process == \"open\" and member_code:\n errors[\"review_process\"] = ValidationError(\n \"Registration code must be empty for open training review process.\"\n )\n\n # confirm that code is valid\n membership_errors = self.validate_member_code(request=self.request_http)\n if membership_errors:\n errors.update(membership_errors)\n\n if errors:\n raise ValidationError(errors)\n\n def save(self, *args, **kwargs) -> None:\n training_request = super().save(*args, **kwargs)\n new_consents: list[TrainingRequestConsent] = []\n for term in self.terms:\n option_id = self.cleaned_data.get(term.slug)\n if not option_id:\n continue\n new_consents.append(\n TrainingRequestConsent(\n training_request=training_request,\n term_option_id=option_id,\n term_id=term.pk,\n )\n )\n TrainingRequestConsent.objects.bulk_create(new_consents)\n return training_request\n\n\nclass WorkshopRequestExternalForm(WorkshopRequestBaseForm):\n captcha = ReCaptchaField()\n\n class Meta(WorkshopRequestBaseForm.Meta):\n fields = WorkshopRequestBaseForm.Meta.fields + (\"captcha\",)\n\n\nclass 
WorkshopInquiryRequestExternalForm(WorkshopInquiryRequestBaseForm):\n captcha = ReCaptchaField()\n\n class Meta(WorkshopInquiryRequestBaseForm.Meta):\n fields = WorkshopInquiryRequestBaseForm.Meta.fields + (\"captcha\",)\n\n\nclass SelfOrganisedSubmissionExternalForm(SelfOrganisedSubmissionBaseForm):\n captcha = ReCaptchaField()\n\n class Meta(SelfOrganisedSubmissionBaseForm.Meta):\n fields = SelfOrganisedSubmissionBaseForm.Meta.fields + (\"captcha\",)\n", "path": "amy/extforms/forms.py"}]} | 3,855 | 205 |
gh_patches_debug_32972 | rasdani/github-patches | git_diff | cleanlab__cleanlab-980 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Revert version upper bound of termcolor dependency
## Background
In #905, an upper bound on the version of termcolor was added. This was a temporary fix to pass CI on existing PRs.
https://github.com/cleanlab/cleanlab/blob/4ac43c0a2654179732cafc704e807c3c5c9ec1f1/setup.py#L108
Without the upper bound, the following tests fail on Python 3.8+
https://github.com/cleanlab/cleanlab/blob/4ac43c0a2654179732cafc704e807c3c5c9ec1f1/tests/test_token_classification.py#L134-L169
Here's a screenshot of a CI run with these tests failing:
<img width="1142" alt="image" src="https://github.com/cleanlab/cleanlab/assets/18127060/2d371cf4-0d73-49e7-8392-df3384c46a71">
It looks like the `colored` variable no longer has the
## Task
The version limits need to be reverted to
```python
"termcolor>=2.0.0",
```
but the affected unit tests need to pass as well when the new release of termcolor ([version 2.4.0](https://github.com/termcolor/termcolor/releases/tag/2.4.0)) is installed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 from setuptools.command.egg_info import egg_info
3
4 # To use a consistent encoding
5 from codecs import open
6 from os import path
7
8
9 class egg_info_ex(egg_info):
10 """Includes license file into `.egg-info` folder."""
11
12 def run(self):
13 # don't duplicate license into `.egg-info` when building a distribution
14 if not self.distribution.have_run.get("install", True):
15 # `install` command is in progress, copy license
16 self.mkpath(self.egg_info)
17 self.copy_file("LICENSE", self.egg_info)
18
19 egg_info.run(self)
20
21
22 here = path.abspath(path.dirname(__file__))
23
24 # Get the long description from the README file
25 with open(path.join(here, "README.md"), encoding="utf-8") as f:
26 long_description = f.read()
27
28 # Get version number and store it in __version__
29 exec(open("cleanlab/version.py").read())
30
31 DATALAB_REQUIRE = [
32 # Mainly for Datalab's data storage class.
33 # Still some type hints that require datasets
34 "datasets>=2.7.0",
35 ]
36
37 IMAGE_REQUIRE = DATALAB_REQUIRE + ["cleanvision>=0.3.2"]
38
39 EXTRAS_REQUIRE = {
40 "datalab": DATALAB_REQUIRE,
41 "image": IMAGE_REQUIRE,
42 "all": ["matplotlib>=3.5.1"],
43 }
44 EXTRAS_REQUIRE["all"] = list(set(sum(EXTRAS_REQUIRE.values(), [])))
45
46 setup(
47 name="cleanlab",
48 version=__version__,
49 license="AGPLv3+",
50 long_description=long_description,
51 long_description_content_type="text/markdown",
52 description="The standard package for data-centric AI, machine learning with label errors, "
53 "and automatically finding and fixing dataset issues in Python.",
54 url="https://cleanlab.ai",
55 project_urls={
56 "Documentation": "https://docs.cleanlab.ai",
57 "Bug Tracker": "https://github.com/cleanlab/cleanlab/issues",
58 "Source Code": "https://github.com/cleanlab/cleanlab",
59 },
60 author="Cleanlab Inc.",
61 author_email="[email protected]",
62 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
63 classifiers=[
64 "Development Status :: 4 - Beta",
65 "Intended Audience :: Developers",
66 "Intended Audience :: Education",
67 "Intended Audience :: Science/Research",
68 "Intended Audience :: Information Technology",
69 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
70 "Natural Language :: English",
71 # We believe this package works will these versions, but we do not guarantee it!
72 "Programming Language :: Python :: 3",
73 "Programming Language :: Python :: 3.7",
74 "Programming Language :: Python :: 3.8",
75 "Programming Language :: Python :: 3.9",
76 "Programming Language :: Python :: 3.10",
77 "Programming Language :: Python",
78 "Topic :: Software Development",
79 "Topic :: Scientific/Engineering",
80 "Topic :: Scientific/Engineering :: Mathematics",
81 "Topic :: Scientific/Engineering :: Artificial Intelligence",
82 "Topic :: Software Development :: Libraries",
83 "Topic :: Software Development :: Libraries :: Python Modules",
84 ],
85 python_requires=">=3.7",
86 # What does your project relate to?
87 keywords="machine_learning data_cleaning confident_learning classification weak_supervision "
88 "learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric",
89 # You can just specify the packages manually here if your project is
90 # simple. Or you can use find_packages().
91 packages=find_packages(exclude=[]),
92 # Include cleanlab license file.
93 include_package_data=True,
94 package_data={
95 "": ["LICENSE"],
96 },
97 license_files=("LICENSE",),
98 cmdclass={"egg_info": egg_info_ex},
99 # List run-time dependencies here. These will be installed by pip when
100 # your project is installed. For an analysis of "install_requires" vs pip's
101 # requirements files see:
102 # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/
103 install_requires=[
104 "numpy>=1.20.0",
105 "scikit-learn>=1.0",
106 "tqdm>=4.53.0",
107 "pandas>=1.1.5",
108 "termcolor>=2.0.0,<2.4.0",
109 ],
110 extras_require=EXTRAS_REQUIRE,
111 )
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -70,10 +70,10 @@
"Natural Language :: English",
# We believe this package works will these versions, but we do not guarantee it!
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
"Programming Language :: Python",
"Topic :: Software Development",
"Topic :: Scientific/Engineering",
@@ -82,7 +82,7 @@
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
],
- python_requires=">=3.7",
+ python_requires=">=3.8",
# What does your project relate to?
keywords="machine_learning data_cleaning confident_learning classification weak_supervision "
"learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric",
@@ -101,11 +101,11 @@
# requirements files see:
# https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/
install_requires=[
- "numpy>=1.20.0",
- "scikit-learn>=1.0",
+ "numpy>=1.22.0",
+ "scikit-learn>=1.1",
"tqdm>=4.53.0",
- "pandas>=1.1.5",
- "termcolor>=2.0.0,<2.4.0",
+ "pandas>=1.4.0",
+ "termcolor>=2.4.0",
],
extras_require=EXTRAS_REQUIRE,
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -70,10 +70,10 @@\n \"Natural Language :: English\",\n # We believe this package works will these versions, but we do not guarantee it!\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n+ \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n \"Topic :: Scientific/Engineering\",\n@@ -82,7 +82,7 @@\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n- python_requires=\">=3.7\",\n+ python_requires=\">=3.8\",\n # What does your project relate to?\n keywords=\"machine_learning data_cleaning confident_learning classification weak_supervision \"\n \"learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric\",\n@@ -101,11 +101,11 @@\n # requirements files see:\n # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/\n install_requires=[\n- \"numpy>=1.20.0\",\n- \"scikit-learn>=1.0\",\n+ \"numpy>=1.22.0\",\n+ \"scikit-learn>=1.1\",\n \"tqdm>=4.53.0\",\n- \"pandas>=1.1.5\",\n- \"termcolor>=2.0.0,<2.4.0\",\n+ \"pandas>=1.4.0\",\n+ \"termcolor>=2.4.0\",\n ],\n extras_require=EXTRAS_REQUIRE,\n )\n", "issue": "Revert version upper bound of termcolor dependency\n## Background\r\n\r\nIn #905 , an upper bound to the version of termcolor was added. This was a temporary fix to pass CI on existing PRs.\r\n\r\nhttps://github.com/cleanlab/cleanlab/blob/4ac43c0a2654179732cafc704e807c3c5c9ec1f1/setup.py#L108\r\n\r\nWithout the upper bound, the following tests fail on Python 3.8+\r\nhttps://github.com/cleanlab/cleanlab/blob/4ac43c0a2654179732cafc704e807c3c5c9ec1f1/tests/test_token_classification.py#L134-L169\r\n\r\nHere's a screenshot of a CI run with these tests failing:\r\n\r\n<img width=\"1142\" alt=\"image\" src=\"https://github.com/cleanlab/cleanlab/assets/18127060/2d371cf4-0d73-49e7-8392-df3384c46a71\">\r\n\r\nIt looks like the `colored`\u00a0 variable no longer has the\r\n\r\n## Task\r\n\r\n\r\nThe version limits need to be reverted to\r\n\r\n```python\r\n \"termcolor>=2.0.0\", \r\n```\r\n\r\nbut the affected unit tests need to pass as well when the new release of termcolor ([version 2.4.0](https://github.com/termcolor/termcolor/releases/tag/2.4.0)) is installed.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom setuptools.command.egg_info import egg_info\n\n# To use a consistent encoding\nfrom codecs import open\nfrom os import path\n\n\nclass egg_info_ex(egg_info):\n \"\"\"Includes license file into `.egg-info` folder.\"\"\"\n\n def run(self):\n # don't duplicate license into `.egg-info` when building a distribution\n if not self.distribution.have_run.get(\"install\", True):\n # `install` command is in progress, copy license\n self.mkpath(self.egg_info)\n self.copy_file(\"LICENSE\", self.egg_info)\n\n egg_info.run(self)\n\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith open(path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\n# Get version number and store it in __version__\nexec(open(\"cleanlab/version.py\").read())\n\nDATALAB_REQUIRE = [\n # Mainly for Datalab's data storage class.\n # Still some type 
hints that require datasets\n \"datasets>=2.7.0\",\n]\n\nIMAGE_REQUIRE = DATALAB_REQUIRE + [\"cleanvision>=0.3.2\"]\n\nEXTRAS_REQUIRE = {\n \"datalab\": DATALAB_REQUIRE,\n \"image\": IMAGE_REQUIRE,\n \"all\": [\"matplotlib>=3.5.1\"],\n}\nEXTRAS_REQUIRE[\"all\"] = list(set(sum(EXTRAS_REQUIRE.values(), [])))\n\nsetup(\n name=\"cleanlab\",\n version=__version__,\n license=\"AGPLv3+\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n description=\"The standard package for data-centric AI, machine learning with label errors, \"\n \"and automatically finding and fixing dataset issues in Python.\",\n url=\"https://cleanlab.ai\",\n project_urls={\n \"Documentation\": \"https://docs.cleanlab.ai\",\n \"Bug Tracker\": \"https://github.com/cleanlab/cleanlab/issues\",\n \"Source Code\": \"https://github.com/cleanlab/cleanlab\",\n },\n author=\"Cleanlab Inc.\",\n author_email=\"[email protected]\",\n # See https://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Natural Language :: English\",\n # We believe this package works will these versions, but we do not guarantee it!\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n python_requires=\">=3.7\",\n # What does your project relate to?\n keywords=\"machine_learning data_cleaning confident_learning classification weak_supervision \"\n \"learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric\",\n # You can just specify the packages manually here if your project is\n # simple. Or you can use find_packages().\n packages=find_packages(exclude=[]),\n # Include cleanlab license file.\n include_package_data=True,\n package_data={\n \"\": [\"LICENSE\"],\n },\n license_files=(\"LICENSE\",),\n cmdclass={\"egg_info\": egg_info_ex},\n # List run-time dependencies here. These will be installed by pip when\n # your project is installed. 
For an analysis of \"install_requires\" vs pip's\n # requirements files see:\n # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/\n install_requires=[\n \"numpy>=1.20.0\",\n \"scikit-learn>=1.0\",\n \"tqdm>=4.53.0\",\n \"pandas>=1.1.5\",\n \"termcolor>=2.0.0,<2.4.0\",\n ],\n extras_require=EXTRAS_REQUIRE,\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nfrom setuptools.command.egg_info import egg_info\n\n# To use a consistent encoding\nfrom codecs import open\nfrom os import path\n\n\nclass egg_info_ex(egg_info):\n \"\"\"Includes license file into `.egg-info` folder.\"\"\"\n\n def run(self):\n # don't duplicate license into `.egg-info` when building a distribution\n if not self.distribution.have_run.get(\"install\", True):\n # `install` command is in progress, copy license\n self.mkpath(self.egg_info)\n self.copy_file(\"LICENSE\", self.egg_info)\n\n egg_info.run(self)\n\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith open(path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\n# Get version number and store it in __version__\nexec(open(\"cleanlab/version.py\").read())\n\nDATALAB_REQUIRE = [\n # Mainly for Datalab's data storage class.\n # Still some type hints that require datasets\n \"datasets>=2.7.0\",\n]\n\nIMAGE_REQUIRE = DATALAB_REQUIRE + [\"cleanvision>=0.3.2\"]\n\nEXTRAS_REQUIRE = {\n \"datalab\": DATALAB_REQUIRE,\n \"image\": IMAGE_REQUIRE,\n \"all\": [\"matplotlib>=3.5.1\"],\n}\nEXTRAS_REQUIRE[\"all\"] = list(set(sum(EXTRAS_REQUIRE.values(), [])))\n\nsetup(\n name=\"cleanlab\",\n version=__version__,\n license=\"AGPLv3+\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n description=\"The standard package for data-centric AI, machine learning with label errors, \"\n \"and automatically finding and fixing dataset issues in Python.\",\n url=\"https://cleanlab.ai\",\n project_urls={\n \"Documentation\": \"https://docs.cleanlab.ai\",\n \"Bug Tracker\": \"https://github.com/cleanlab/cleanlab/issues\",\n \"Source Code\": \"https://github.com/cleanlab/cleanlab\",\n },\n author=\"Cleanlab Inc.\",\n author_email=\"[email protected]\",\n # See https://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Natural Language :: English\",\n # We believe this package works will these versions, but we do not guarantee it!\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n python_requires=\">=3.8\",\n # What does your project relate to?\n keywords=\"machine_learning data_cleaning confident_learning classification weak_supervision \"\n 
\"learning_with_noisy_labels unsupervised_learning datacentric_ai, datacentric\",\n # You can just specify the packages manually here if your project is\n # simple. Or you can use find_packages().\n packages=find_packages(exclude=[]),\n # Include cleanlab license file.\n include_package_data=True,\n package_data={\n \"\": [\"LICENSE\"],\n },\n license_files=(\"LICENSE\",),\n cmdclass={\"egg_info\": egg_info_ex},\n # List run-time dependencies here. These will be installed by pip when\n # your project is installed. For an analysis of \"install_requires\" vs pip's\n # requirements files see:\n # https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/\n install_requires=[\n \"numpy>=1.22.0\",\n \"scikit-learn>=1.1\",\n \"tqdm>=4.53.0\",\n \"pandas>=1.4.0\",\n \"termcolor>=2.4.0\",\n ],\n extras_require=EXTRAS_REQUIRE,\n)\n", "path": "setup.py"}]} | 1,826 | 427 |
gh_patches_debug_646 | rasdani/github-patches | git_diff | pex-tool__pex-2034 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.120
On the docket:
+ [x] Support REPL command history #2019
+ [x] Using --complete-platform with --resolve-local-platforms should build sdists when local platform provides a subset of complete-platforms #2026
+ [x] A loose layout, venv-with-symlink PEX creates brittle symlinks #2023
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.119"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.119"
+__version__ = "2.1.120"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.119\"\n+__version__ = \"2.1.120\"\n", "issue": "Release 2.1.120\nOn the docket:\r\n+ [x] Support REPL command history #2019 \r\n+ [x] Using --complete-platform with --resolve-local-platforms should build sdists when local platform provides a subset of complete-platforms #2026\r\n+ [x] A loose layout, venv-with-symlink PEX creates brittle symlinks #2023\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.119\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.120\"\n", "path": "pex/version.py"}]} | 398 | 98 |
gh_patches_debug_5626 | rasdani/github-patches | git_diff | modin-project__modin-4553 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TEST: "build docs" CI action is broken with SphinxWarning: Invalid configuration value found: 'language = None'.
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS Monterey
- **Modin version** (`modin.__version__`): 0477c688972bee89250d75835ff543db27f2e0d5
- **Python version**: 3.9.12
- **Code we can use to reproduce**:
`cd docs && sphinx-build -T -E -W -b html . build`
### Describe the problem
Building docs fails with SphinxWarning: Invalid configuration value found: 'language = None'. It fails with the latest Sphinx version, 5.0.1, as well as the previous version, 5.0.0.
### Source code / logs
<details>
<summary>Stack trace </summary>
```
Running Sphinx v5.0.0
Traceback (most recent call last):
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/cmd/build.py", line 272, in build_main
app = Sphinx(args.sourcedir, args.confdir, args.outputdir,
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/application.py", line 202, in __init__
self.config = Config.read(self.confdir, confoverrides or {}, self.tags)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/config.py", line 172, in read
logger.warning(__("Invalid configuration value found: 'language = None'. "
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py", line 1812, in warning
self.log(WARNING, msg, *args, **kwargs)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/util/logging.py", line 122, in log
super().log(level, msg, *args, **kwargs)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py", line 1844, in log
self.logger.log(level, msg, *args, **kwargs)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py", line 1512, in log
self._log(level, msg, args, **kwargs)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py", line 1589, in _log
self.handle(record)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py", line 1599, in handle
self.callHandlers(record)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py", line 948, in handle
rv = self.filter(record)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py", line 806, in filter
result = f.filter(record)
File "/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/util/logging.py", line 425, in filter
raise exc
sphinx.errors.SphinxWarning: Invalid configuration value found: 'language = None'. Update your configuration to a valid langauge code. Falling back to 'en' (English).
Warning, treated as error:
Invalid configuration value found: 'language = None'. Update your configuration to a valid langauge code. Falling back to 'en' (English).
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/stable/config
8
9 # -- Project information -----------------------------------------------------
10 import sys
11 import os
12 import types
13
14 import ray
15
16 # stub ray.remote to be a no-op so it doesn't shadow docstrings
17 def noop_decorator(*args, **kwargs):
18 if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):
19 # This is the case where the decorator is just @ray.remote without parameters.
20 return args[0]
21 return lambda cls_or_func: cls_or_func
22
23
24 ray.remote = noop_decorator
25
26 # fake modules if they're missing
27 for mod_name in ("cudf", "cupy", "pyarrow.gandiva", "omniscidbe"):
28 try:
29 __import__(mod_name)
30 except ImportError:
31 sys.modules[mod_name] = types.ModuleType(
32 mod_name, f"fake {mod_name} for building docs"
33 )
34 if not hasattr(sys.modules["cudf"], "DataFrame"):
35 sys.modules["cudf"].DataFrame = type("DataFrame", (object,), {})
36 if not hasattr(sys.modules["cupy"], "ndarray"):
37 sys.modules["cupy"].ndarray = type("ndarray", (object,), {})
38 if not hasattr(sys.modules["omniscidbe"], "PyDbEngine"):
39 sys.modules["omniscidbe"].PyDbEngine = type("PyDbEngine", (object,), {})
40
41 sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
42 import modin
43
44 from modin.config.__main__ import export_config_help
45
46 configs_file_path = os.path.abspath(
47 os.path.join(os.path.dirname(__file__), "flow/modin/configs_help.csv")
48 )
49 # Export configs help to create configs table in the docs/flow/modin/config.rst
50 export_config_help(configs_file_path)
51
52 project = "Modin"
53 copyright = "2018-2022, Modin Developers."
54 author = "Modin contributors"
55
56 # The short X.Y version
57 version = "{}".format(modin.__version__)
58 # The full version, including alpha/beta/rc tags
59 release = version
60
61
62 # -- General configuration ---------------------------------------------------
63
64 # If your documentation needs a minimal Sphinx version, state it here.
65 #
66 # needs_sphinx = '1.0'
67
68 # Add any Sphinx extension module names here, as strings. They can be
69 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
70 # ones.
71 extensions = [
72 "sphinx.ext.autodoc",
73 "sphinx.ext.napoleon",
74 "sphinx.ext.intersphinx",
75 "sphinx.ext.todo",
76 "sphinx.ext.mathjax",
77 "sphinx.ext.githubpages",
78 "sphinx.ext.graphviz",
79 "sphinxcontrib.plantuml",
80 "sphinx_issues",
81 ]
82
83
84 # Add any paths that contain templates here, relative to this directory.
85 templates_path = ["_templates"]
86
87 # The suffix(es) of source filenames.
88 # You can specify multiple suffix as a list of string:
89 #
90 # source_suffix = ['.rst', '.md']
91 source_suffix = ".rst"
92
93 # The master toctree document.
94 master_doc = "index"
95
96 # The language for content autogenerated by Sphinx. Refer to documentation
97 # for a list of supported languages.
98 #
99 # This is also used if you do content translation via gettext catalogs.
100 # Usually you set "language" from the command line for these cases.
101 language = None
102
103 # List of patterns, relative to source directory, that match files and
104 # directories to ignore when looking for source files.
105 # This pattern also affects html_static_path and html_extra_path .
106 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
107
108 # The name of the Pygments (syntax highlighting) style to use.
109 pygments_style = "sphinx"
110
111
112 # -- Options for HTML output -------------------------------------------------
113
114 # Maps git branches to Sphinx themes
115 default_html_theme = "pydata_sphinx_theme"
116 current_branch = "nature"
117
118 # The theme to use for HTML and HTML Help pages. See the documentation for
119 # a list of builtin themes.
120 #
121 html_theme = "pydata_sphinx_theme"
122
123 html_favicon = "img/MODIN_ver2.ico"
124
125 html_logo = "img/MODIN_ver2.png"
126
127 # Theme options are theme-specific and customize the look and feel of a theme
128 # further. For a list of options available for each theme, see the
129 # documentation.
130 #
131 html_theme_options = {
132 "sidebarwidth": 270,
133 "collapse_navigation": False,
134 "navigation_depth": 4,
135 "show_toc_level": 2,
136 "github_url": "https://github.com/modin-project/modin",
137 "icon_links": [
138 {
139 "name": "PyPI",
140 "url": "https://pypi.org/project/modin",
141 "icon": "fab fa-python",
142 },
143 {
144 "name": "conda-forge",
145 "url": "https://anaconda.org/conda-forge/modin",
146 "icon": "fas fa-circle-notch",
147 },
148 {
149 "name": "Join the Slack",
150 "url": "https://modin.org/slack.html",
151 "icon": "fab fa-slack",
152 },
153 {
154 "name": "Discourse",
155 "url": "https://discuss.modin.org/",
156 "icon": "fab fa-discourse",
157 },
158 {
159 "name": "Mailing List",
160 "url": "https://groups.google.com/forum/#!forum/modin-dev",
161 "icon": "fas fa-envelope-square",
162 },
163 ],
164 }
165
166 # Custom sidebar templates, must be a dictionary that maps document names
167 # to template names.
168 #
169 # The default sidebars (for documents that don't match any pattern) are
170 # defined by theme itself. Builtin themes are using these templates by
171 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
172 # 'searchbox.html']``.
173 #
174 # The default pydata_sphinx_theme sidebar templates are
175 # sidebar-nav-bs.html and search-field.html.
176 html_sidebars = {}
177
178 issues_github_path = "modin-project/modin"
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -98,7 +98,7 @@
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
-language = None
+language = "en"
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -98,7 +98,7 @@\n #\n # This is also used if you do content translation via gettext catalogs.\n # Usually you set \"language\" from the command line for these cases.\n-language = None\n+language = \"en\"\n \n # List of patterns, relative to source directory, that match files and\n # directories to ignore when looking for source files.\n", "issue": "TEST: \"build docs\" CI action is broken with SphinxWarning: Invalid configuration value found: 'language = None'.\n### System information\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS Monterey\r\n- **Modin version** (`modin.__version__`): 0477c688972bee89250d75835ff543db27f2e0d5\r\n- **Python version**: 3.9.12\r\n- **Code we can use to reproduce**:\r\n`cd docs && sphinx-build -T -E -W -b html . build`\r\n\r\n\r\n\r\n### Describe the problem\r\nBuilding docs fails with SphinxWarning: Invalid configuration value found: 'language = None'. Fails with latest Sphinx version, 5.0.1, as well as previous version, 5.0.0.\r\n\r\n### Source code / logs\r\n\r\n<details>\r\n<summary>Stack trace </summary>\r\n\r\n```\r\nRunning Sphinx v5.0.0\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/cmd/build.py\", line 272, in build_main\r\n app = Sphinx(args.sourcedir, args.confdir, args.outputdir,\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/application.py\", line 202, in __init__\r\n self.config = Config.read(self.confdir, confoverrides or {}, self.tags)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/config.py\", line 172, in read\r\n logger.warning(__(\"Invalid configuration value found: 'language = None'. 
\"\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py\", line 1812, in warning\r\n self.log(WARNING, msg, *args, **kwargs)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/util/logging.py\", line 122, in log\r\n super().log(level, msg, *args, **kwargs)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py\", line 1844, in log\r\n self.logger.log(level, msg, *args, **kwargs)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py\", line 1512, in log\r\n self._log(level, msg, args, **kwargs)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py\", line 1589, in _log\r\n self.handle(record)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py\", line 1599, in handle\r\n self.callHandlers(record)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py\", line 1661, in callHandlers\r\n hdlr.handle(record)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py\", line 948, in handle\r\n rv = self.filter(record)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/logging/__init__.py\", line 806, in filter\r\n result = f.filter(record)\r\n File \"/Users/maheshvashishtha/opt/anaconda3/envs/modin-latest/lib/python3.9/site-packages/sphinx/util/logging.py\", line 425, in filter\r\n raise exc\r\nsphinx.errors.SphinxWarning: Invalid configuration value found: 'language = None'. Update your configuration to a valid langauge code. Falling back to 'en' (English).\r\n\r\nWarning, treated as error:\r\nInvalid configuration value found: 'language = None'. Update your configuration to a valid langauge code. Falling back to 'en' (English).\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. 
For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Project information -----------------------------------------------------\nimport sys\nimport os\nimport types\n\nimport ray\n\n# stub ray.remote to be a no-op so it doesn't shadow docstrings\ndef noop_decorator(*args, **kwargs):\n if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):\n # This is the case where the decorator is just @ray.remote without parameters.\n return args[0]\n return lambda cls_or_func: cls_or_func\n\n\nray.remote = noop_decorator\n\n# fake modules if they're missing\nfor mod_name in (\"cudf\", \"cupy\", \"pyarrow.gandiva\", \"omniscidbe\"):\n try:\n __import__(mod_name)\n except ImportError:\n sys.modules[mod_name] = types.ModuleType(\n mod_name, f\"fake {mod_name} for building docs\"\n )\nif not hasattr(sys.modules[\"cudf\"], \"DataFrame\"):\n sys.modules[\"cudf\"].DataFrame = type(\"DataFrame\", (object,), {})\nif not hasattr(sys.modules[\"cupy\"], \"ndarray\"):\n sys.modules[\"cupy\"].ndarray = type(\"ndarray\", (object,), {})\nif not hasattr(sys.modules[\"omniscidbe\"], \"PyDbEngine\"):\n sys.modules[\"omniscidbe\"].PyDbEngine = type(\"PyDbEngine\", (object,), {})\n\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\")))\nimport modin\n\nfrom modin.config.__main__ import export_config_help\n\nconfigs_file_path = os.path.abspath(\n os.path.join(os.path.dirname(__file__), \"flow/modin/configs_help.csv\")\n)\n# Export configs help to create configs table in the docs/flow/modin/config.rst\nexport_config_help(configs_file_path)\n\nproject = \"Modin\"\ncopyright = \"2018-2022, Modin Developers.\"\nauthor = \"Modin contributors\"\n\n# The short X.Y version\nversion = \"{}\".format(modin.__version__)\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.graphviz\",\n \"sphinxcontrib.plantuml\",\n \"sphinx_issues\",\n]\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# Maps git branches to Sphinx themes\ndefault_html_theme = \"pydata_sphinx_theme\"\ncurrent_branch = \"nature\"\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"pydata_sphinx_theme\"\n\nhtml_favicon = \"img/MODIN_ver2.ico\"\n\nhtml_logo = \"img/MODIN_ver2.png\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"sidebarwidth\": 270,\n \"collapse_navigation\": False,\n \"navigation_depth\": 4,\n \"show_toc_level\": 2,\n \"github_url\": \"https://github.com/modin-project/modin\",\n \"icon_links\": [\n {\n \"name\": \"PyPI\",\n \"url\": \"https://pypi.org/project/modin\",\n \"icon\": \"fab fa-python\",\n },\n {\n \"name\": \"conda-forge\",\n \"url\": \"https://anaconda.org/conda-forge/modin\",\n \"icon\": \"fas fa-circle-notch\",\n },\n {\n \"name\": \"Join the Slack\",\n \"url\": \"https://modin.org/slack.html\",\n \"icon\": \"fab fa-slack\",\n },\n {\n \"name\": \"Discourse\",\n \"url\": \"https://discuss.modin.org/\",\n \"icon\": \"fab fa-discourse\",\n },\n {\n \"name\": \"Mailing List\",\n \"url\": \"https://groups.google.com/forum/#!forum/modin-dev\",\n \"icon\": \"fas fa-envelope-square\",\n },\n ],\n}\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# The default pydata_sphinx_theme sidebar templates are\n# sidebar-nav-bs.html and search-field.html.\nhtml_sidebars = {}\n\nissues_github_path = \"modin-project/modin\"\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. 
For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Project information -----------------------------------------------------\nimport sys\nimport os\nimport types\n\nimport ray\n\n# stub ray.remote to be a no-op so it doesn't shadow docstrings\ndef noop_decorator(*args, **kwargs):\n if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):\n # This is the case where the decorator is just @ray.remote without parameters.\n return args[0]\n return lambda cls_or_func: cls_or_func\n\n\nray.remote = noop_decorator\n\n# fake modules if they're missing\nfor mod_name in (\"cudf\", \"cupy\", \"pyarrow.gandiva\", \"omniscidbe\"):\n try:\n __import__(mod_name)\n except ImportError:\n sys.modules[mod_name] = types.ModuleType(\n mod_name, f\"fake {mod_name} for building docs\"\n )\nif not hasattr(sys.modules[\"cudf\"], \"DataFrame\"):\n sys.modules[\"cudf\"].DataFrame = type(\"DataFrame\", (object,), {})\nif not hasattr(sys.modules[\"cupy\"], \"ndarray\"):\n sys.modules[\"cupy\"].ndarray = type(\"ndarray\", (object,), {})\nif not hasattr(sys.modules[\"omniscidbe\"], \"PyDbEngine\"):\n sys.modules[\"omniscidbe\"].PyDbEngine = type(\"PyDbEngine\", (object,), {})\n\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\")))\nimport modin\n\nfrom modin.config.__main__ import export_config_help\n\nconfigs_file_path = os.path.abspath(\n os.path.join(os.path.dirname(__file__), \"flow/modin/configs_help.csv\")\n)\n# Export configs help to create configs table in the docs/flow/modin/config.rst\nexport_config_help(configs_file_path)\n\nproject = \"Modin\"\ncopyright = \"2018-2022, Modin Developers.\"\nauthor = \"Modin contributors\"\n\n# The short X.Y version\nversion = \"{}\".format(modin.__version__)\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.graphviz\",\n \"sphinxcontrib.plantuml\",\n \"sphinx_issues\",\n]\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"en\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# Maps git branches to Sphinx themes\ndefault_html_theme = \"pydata_sphinx_theme\"\ncurrent_branch = \"nature\"\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"pydata_sphinx_theme\"\n\nhtml_favicon = \"img/MODIN_ver2.ico\"\n\nhtml_logo = \"img/MODIN_ver2.png\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"sidebarwidth\": 270,\n \"collapse_navigation\": False,\n \"navigation_depth\": 4,\n \"show_toc_level\": 2,\n \"github_url\": \"https://github.com/modin-project/modin\",\n \"icon_links\": [\n {\n \"name\": \"PyPI\",\n \"url\": \"https://pypi.org/project/modin\",\n \"icon\": \"fab fa-python\",\n },\n {\n \"name\": \"conda-forge\",\n \"url\": \"https://anaconda.org/conda-forge/modin\",\n \"icon\": \"fas fa-circle-notch\",\n },\n {\n \"name\": \"Join the Slack\",\n \"url\": \"https://modin.org/slack.html\",\n \"icon\": \"fab fa-slack\",\n },\n {\n \"name\": \"Discourse\",\n \"url\": \"https://discuss.modin.org/\",\n \"icon\": \"fab fa-discourse\",\n },\n {\n \"name\": \"Mailing List\",\n \"url\": \"https://groups.google.com/forum/#!forum/modin-dev\",\n \"icon\": \"fas fa-envelope-square\",\n },\n ],\n}\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# The default pydata_sphinx_theme sidebar templates are\n# sidebar-nav-bs.html and search-field.html.\nhtml_sidebars = {}\n\nissues_github_path = \"modin-project/modin\"\n", "path": "docs/conf.py"}]} | 3,086 | 102 |
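
The traceback in the row above ends with Sphinx rejecting `language = None` as a warning treated as an error, and the accompanying patch swaps it for an explicit language code. A minimal sketch of the relevant `docs/conf.py` change (excerpt only, not the full file):

```python
# docs/conf.py -- before: newer Sphinx warns (and fails under -W) on a None language
# language = None

# after: use an explicit language code, matching the 'en' fallback Sphinx reports
language = "en"
```
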
gh_patches_debug_12495 | rasdani/github-patches | git_diff | geopandas__geopandas-478 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG: coordinate indexer does not correctly handle slice start/stop of 0
When slicing a geoseries using the coordinate indexer, a slice start/stop of 0 is ignored.
```
>>> gs = gpd.GeoSeries([Point(x, x) for x in range(-3, 4)])
>>> print(gs)
0 POINT (-3 -3)
1 POINT (-2 -2)
2 POINT (-1 -1)
3 POINT (0 0)
4 POINT (1 1)
5 POINT (2 2)
6 POINT (3 3)
dtype: object
```
When attempting to select points located with x/y < 0, you get:
```
>>> print(gs.cx[:0, :0])
0 POINT (-3 -3)
1 POINT (-2 -2)
2 POINT (-1 -1)
3 POINT (0 0)
4 POINT (1 1)
5 POINT (2 2)
6 POINT (3 3)
dtype: object
```
whereas the expected behavior is:
```
>>> print(gs.cx[:0, :0])
0 POINT (-3 -3)
1 POINT (-2 -2)
2 POINT (-1 -1)
3 POINT (0 0)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/geoseries.py`
Content:
```
1 from functools import partial
2 import json
3 from warnings import warn
4
5 import numpy as np
6 from pandas import Series, DataFrame
7 from pandas.core.indexing import _NDFrameIndexer
8
9 import pyproj
10 from shapely.geometry import box, shape, Polygon, Point
11 from shapely.geometry.collection import GeometryCollection
12 from shapely.geometry.base import BaseGeometry
13 from shapely.ops import transform
14
15 from geopandas.plotting import plot_series
16 from geopandas.base import GeoPandasBase, _series_unary_op
17
18
19 def _is_empty(x):
20 try:
21 return x.is_empty
22 except:
23 return False
24
25
26 class _CoordinateIndexer(_NDFrameIndexer):
27 """ Indexing by coordinate slices """
28 def _getitem_tuple(self, tup):
29 obj = self.obj
30 xs, ys = tup
31 # handle numeric values as x and/or y coordinate index
32 if type(xs) is not slice:
33 xs = slice(xs, xs)
34 if type(ys) is not slice:
35 ys = slice(ys, ys)
36 # don't know how to handle step; should this raise?
37 if xs.step is not None or ys.step is not None:
38 warn("Ignoring step - full interval is used.")
39 xmin, ymin, xmax, ymax = obj.total_bounds
40 bbox = box(xs.start or xmin,
41 ys.start or ymin,
42 xs.stop or xmax,
43 ys.stop or ymax)
44 idx = obj.intersects(bbox)
45 return obj[idx]
46
47
48 class GeoSeries(GeoPandasBase, Series):
49 """A Series object designed to store shapely geometry objects."""
50 _metadata = ['name', 'crs']
51
52 def __new__(cls, *args, **kwargs):
53 kwargs.pop('crs', None)
54 arr = Series.__new__(cls)
55 if type(arr) is GeoSeries:
56 return arr
57 else:
58 return arr.view(GeoSeries)
59
60 def __init__(self, *args, **kwargs):
61 # fix problem for scalar geometries passed
62 if len(args) == 1 and isinstance(args[0], BaseGeometry):
63 args = ([args[0]],)
64
65 crs = kwargs.pop('crs', None)
66
67 super(GeoSeries, self).__init__(*args, **kwargs)
68 self.crs = crs
69 self._invalidate_sindex()
70
71 def append(self, *args, **kwargs):
72 return self._wrapped_pandas_method('append', *args, **kwargs)
73
74 @property
75 def geometry(self):
76 return self
77
78 @property
79 def x(self):
80 """Return the x location of point geometries in a GeoSeries"""
81 if (self.geom_type == "Point").all():
82 return _series_unary_op(self, 'x', null_value=np.nan)
83 else:
84 message = "x attribute access only provided for Point geometries"
85 raise ValueError(message)
86
87 @property
88 def y(self):
89 """Return the y location of point geometries in a GeoSeries"""
90 if (self.geom_type == "Point").all():
91 return _series_unary_op(self, 'y', null_value=np.nan)
92 else:
93 message = "y attribute access only provided for Point geometries"
94 raise ValueError(message)
95
96 @classmethod
97 def from_file(cls, filename, **kwargs):
98 """
99 Alternate constructor to create a GeoSeries from a file
100
101 Parameters
102 ----------
103
104 filename : str
105 File path or file handle to read from. Depending on which kwargs
106 are included, the content of filename may vary, see:
107 http://toblerity.github.io/fiona/README.html#usage
108 for usage details.
109 kwargs : key-word arguments
110 These arguments are passed to fiona.open, and can be used to
111 access multi-layer data, data stored within archives (zip files),
112 etc.
113
114 """
115 import fiona
116 geoms = []
117 with fiona.open(filename, **kwargs) as f:
118 crs = f.crs
119 for rec in f:
120 geoms.append(shape(rec['geometry']))
121 g = GeoSeries(geoms)
122 g.crs = crs
123 return g
124
125 @property
126 def __geo_interface__(self):
127 """Returns a GeoSeries as a python feature collection
128 """
129 from geopandas import GeoDataFrame
130 return GeoDataFrame({'geometry': self}).__geo_interface__
131
132 def to_file(self, filename, driver="ESRI Shapefile", **kwargs):
133 from geopandas import GeoDataFrame
134 data = GeoDataFrame({"geometry": self,
135 "id":self.index.values},
136 index=self.index)
137 data.crs = self.crs
138 data.to_file(filename, driver, **kwargs)
139
140 #
141 # Implement pandas methods
142 #
143
144 @property
145 def _constructor(self):
146 return GeoSeries
147
148 def _wrapped_pandas_method(self, mtd, *args, **kwargs):
149 """Wrap a generic pandas method to ensure it returns a GeoSeries"""
150 val = getattr(super(GeoSeries, self), mtd)(*args, **kwargs)
151 if type(val) == Series:
152 val.__class__ = GeoSeries
153 val.crs = self.crs
154 val._invalidate_sindex()
155 return val
156
157 def __getitem__(self, key):
158 return self._wrapped_pandas_method('__getitem__', key)
159
160 def sort_index(self, *args, **kwargs):
161 return self._wrapped_pandas_method('sort_index', *args, **kwargs)
162
163 def take(self, *args, **kwargs):
164 return self._wrapped_pandas_method('take', *args, **kwargs)
165
166 def select(self, *args, **kwargs):
167 return self._wrapped_pandas_method('select', *args, **kwargs)
168
169 @property
170 def _can_hold_na(self):
171 return False
172
173 def __finalize__(self, other, method=None, **kwargs):
174 """ propagate metadata from other to self """
175 # NOTE: backported from pandas master (upcoming v0.13)
176 for name in self._metadata:
177 object.__setattr__(self, name, getattr(other, name, None))
178 return self
179
180 def copy(self, order='C'):
181 """
182 Make a copy of this GeoSeries object
183
184 Parameters
185 ----------
186 deep : boolean, default True
187 Make a deep copy, i.e. also copy data
188
189 Returns
190 -------
191 copy : GeoSeries
192 """
193 # FIXME: this will likely be unnecessary in pandas >= 0.13
194 return GeoSeries(self.values.copy(order), index=self.index,
195 name=self.name).__finalize__(self)
196
197 def isnull(self):
198 """Null values in a GeoSeries are represented by empty geometric objects"""
199 non_geo_null = super(GeoSeries, self).isnull()
200 val = self.apply(_is_empty)
201 return np.logical_or(non_geo_null, val)
202
203 def fillna(self, value=None, method=None, inplace=False,
204 **kwargs):
205 """Fill NA/NaN values with a geometry (empty polygon by default).
206
207 "method" is currently not implemented for pandas <= 0.12.
208 """
209 if value is None:
210 value = Point()
211 return super(GeoSeries, self).fillna(value=value, method=method,
212 inplace=inplace, **kwargs)
213
214 def align(self, other, join='outer', level=None, copy=True,
215 fill_value=None, **kwargs):
216 if fill_value is None:
217 fill_value = Point()
218 left, right = super(GeoSeries, self).align(other, join=join,
219 level=level, copy=copy,
220 fill_value=fill_value,
221 **kwargs)
222 if isinstance(other, GeoSeries):
223 return GeoSeries(left), GeoSeries(right)
224 else: # It is probably a Series, let's keep it that way
225 return GeoSeries(left), right
226
227
228 def __contains__(self, other):
229 """Allow tests of the form "geom in s"
230
231 Tests whether a GeoSeries contains a geometry.
232
233 Note: This is not the same as the geometric method "contains".
234 """
235 if isinstance(other, BaseGeometry):
236 return np.any(self.geom_equals(other))
237 else:
238 return False
239
240 def plot(self, *args, **kwargs):
241 return plot_series(self, *args, **kwargs)
242
243 plot.__doc__ = plot_series.__doc__
244
245 #
246 # Additional methods
247 #
248
249 def to_crs(self, crs=None, epsg=None):
250 """Transform geometries to a new coordinate reference system
251
252 This method will transform all points in all objects. It has
253 no notion or projecting entire geometries. All segments
254 joining points are assumed to be lines in the current
255 projection, not geodesics. Objects crossing the dateline (or
256 other projection boundary) will have undesirable behavior.
257
258 `to_crs` passes the `crs` argument to the `Proj` function from the
259 `pyproj` library (with the option `preserve_units=True`). It can
260 therefore accept proj4 projections in any format
261 supported by `Proj`, including dictionaries, or proj4 strings.
262
263 """
264 from fiona.crs import from_epsg
265 if self.crs is None:
266 raise ValueError('Cannot transform naive geometries. '
267 'Please set a crs on the object first.')
268 if crs is None:
269 try:
270 crs = from_epsg(epsg)
271 except TypeError:
272 raise TypeError('Must set either crs or epsg for output.')
273 proj_in = pyproj.Proj(self.crs, preserve_units=True)
274 proj_out = pyproj.Proj(crs, preserve_units=True)
275 project = partial(pyproj.transform, proj_in, proj_out)
276 result = self.apply(lambda geom: transform(project, geom))
277 result.__class__ = GeoSeries
278 result.crs = crs
279 result._invalidate_sindex()
280 return result
281
282 def to_json(self, **kwargs):
283 """
284 Returns a GeoJSON string representation of the GeoSeries.
285
286 Parameters
287 ----------
288 *kwargs* that will be passed to json.dumps().
289 """
290 return json.dumps(self.__geo_interface__, **kwargs)
291
292 #
293 # Implement standard operators for GeoSeries
294 #
295
296 def __xor__(self, other):
297 """Implement ^ operator as for builtin set type"""
298 return self.symmetric_difference(other)
299
300 def __or__(self, other):
301 """Implement | operator as for builtin set type"""
302 return self.union(other)
303
304 def __and__(self, other):
305 """Implement & operator as for builtin set type"""
306 return self.intersection(other)
307
308 def __sub__(self, other):
309 """Implement - operator as for builtin set type"""
310 return self.difference(other)
311
312 GeoSeries._create_indexer('cx', _CoordinateIndexer)
313
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geopandas/geoseries.py b/geopandas/geoseries.py
--- a/geopandas/geoseries.py
+++ b/geopandas/geoseries.py
@@ -37,10 +37,10 @@
if xs.step is not None or ys.step is not None:
warn("Ignoring step - full interval is used.")
xmin, ymin, xmax, ymax = obj.total_bounds
- bbox = box(xs.start or xmin,
- ys.start or ymin,
- xs.stop or xmax,
- ys.stop or ymax)
+ bbox = box(xs.start if xs.start is not None else xmin,
+ ys.start if ys.start is not None else ymin,
+ xs.stop if xs.stop is not None else xmax,
+ ys.stop if ys.stop is not None else ymax)
idx = obj.intersects(bbox)
return obj[idx]
| {"golden_diff": "diff --git a/geopandas/geoseries.py b/geopandas/geoseries.py\n--- a/geopandas/geoseries.py\n+++ b/geopandas/geoseries.py\n@@ -37,10 +37,10 @@\n if xs.step is not None or ys.step is not None:\n warn(\"Ignoring step - full interval is used.\")\n xmin, ymin, xmax, ymax = obj.total_bounds\n- bbox = box(xs.start or xmin,\n- ys.start or ymin,\n- xs.stop or xmax,\n- ys.stop or ymax)\n+ bbox = box(xs.start if xs.start is not None else xmin,\n+ ys.start if ys.start is not None else ymin,\n+ xs.stop if xs.stop is not None else xmax,\n+ ys.stop if ys.stop is not None else ymax)\n idx = obj.intersects(bbox)\n return obj[idx]\n", "issue": "BUG: coordinate indexer does not correctly handle slice start/stop of 0\nWhen slicing a geoseries using the coordinate indexer, a slice start/stop of 0 is ignored.\r\n\r\n```\r\n>>> gs = gpd.GeoSeries([Point(x, x) for x in range(-3, 4)])\r\n>>> print(gs)\r\n0 POINT (-3 -3)\r\n1 POINT (-2 -2)\r\n2 POINT (-1 -1)\r\n3 POINT (0 0)\r\n4 POINT (1 1)\r\n5 POINT (2 2)\r\n6 POINT (3 3)\r\ndtype: object\r\n```\r\nWhen attempting to select points located with x/y < 0, you get:\r\n```\r\n>>> print(gs.cx[:0, :0])\r\n0 POINT (-3 -3)\r\n1 POINT (-2 -2)\r\n2 POINT (-1 -1)\r\n3 POINT (0 0)\r\n4 POINT (1 1)\r\n5 POINT (2 2)\r\n6 POINT (3 3)\r\ndtype: object\r\n```\r\nwhereas the expected behavior is:\r\n```\r\n>>> print(gs.cx[:0, :0])\r\n0 POINT (-3 -3)\r\n1 POINT (-2 -2)\r\n2 POINT (-1 -1)\r\n3 POINT (0 0)\r\n```\n", "before_files": [{"content": "from functools import partial\nimport json\nfrom warnings import warn\n\nimport numpy as np\nfrom pandas import Series, DataFrame\nfrom pandas.core.indexing import _NDFrameIndexer\n\nimport pyproj\nfrom shapely.geometry import box, shape, Polygon, Point\nfrom shapely.geometry.collection import GeometryCollection\nfrom shapely.geometry.base import BaseGeometry\nfrom shapely.ops import transform\n\nfrom geopandas.plotting import plot_series\nfrom geopandas.base import GeoPandasBase, _series_unary_op\n\n\ndef _is_empty(x):\n try:\n return x.is_empty\n except:\n return False\n\n\nclass _CoordinateIndexer(_NDFrameIndexer):\n \"\"\" Indexing by coordinate slices \"\"\"\n def _getitem_tuple(self, tup):\n obj = self.obj\n xs, ys = tup\n # handle numeric values as x and/or y coordinate index\n if type(xs) is not slice:\n xs = slice(xs, xs)\n if type(ys) is not slice:\n ys = slice(ys, ys)\n # don't know how to handle step; should this raise?\n if xs.step is not None or ys.step is not None:\n warn(\"Ignoring step - full interval is used.\")\n xmin, ymin, xmax, ymax = obj.total_bounds\n bbox = box(xs.start or xmin,\n ys.start or ymin,\n xs.stop or xmax,\n ys.stop or ymax)\n idx = obj.intersects(bbox)\n return obj[idx]\n\n\nclass GeoSeries(GeoPandasBase, Series):\n \"\"\"A Series object designed to store shapely geometry objects.\"\"\"\n _metadata = ['name', 'crs']\n\n def __new__(cls, *args, **kwargs):\n kwargs.pop('crs', None)\n arr = Series.__new__(cls)\n if type(arr) is GeoSeries:\n return arr\n else:\n return arr.view(GeoSeries)\n\n def __init__(self, *args, **kwargs):\n # fix problem for scalar geometries passed\n if len(args) == 1 and isinstance(args[0], BaseGeometry):\n args = ([args[0]],)\n\n crs = kwargs.pop('crs', None)\n\n super(GeoSeries, self).__init__(*args, **kwargs)\n self.crs = crs\n self._invalidate_sindex()\n\n def append(self, *args, **kwargs):\n return self._wrapped_pandas_method('append', *args, **kwargs)\n\n @property\n def geometry(self):\n return self\n\n @property\n def x(self):\n \"\"\"Return the x location 
of point geometries in a GeoSeries\"\"\"\n if (self.geom_type == \"Point\").all():\n return _series_unary_op(self, 'x', null_value=np.nan)\n else:\n message = \"x attribute access only provided for Point geometries\"\n raise ValueError(message)\n\n @property\n def y(self):\n \"\"\"Return the y location of point geometries in a GeoSeries\"\"\"\n if (self.geom_type == \"Point\").all():\n return _series_unary_op(self, 'y', null_value=np.nan)\n else:\n message = \"y attribute access only provided for Point geometries\"\n raise ValueError(message)\n\n @classmethod\n def from_file(cls, filename, **kwargs):\n \"\"\"\n Alternate constructor to create a GeoSeries from a file\n\n Parameters\n ----------\n\n filename : str\n File path or file handle to read from. Depending on which kwargs\n are included, the content of filename may vary, see:\n http://toblerity.github.io/fiona/README.html#usage\n for usage details.\n kwargs : key-word arguments\n These arguments are passed to fiona.open, and can be used to\n access multi-layer data, data stored within archives (zip files),\n etc.\n\n \"\"\"\n import fiona\n geoms = []\n with fiona.open(filename, **kwargs) as f:\n crs = f.crs\n for rec in f:\n geoms.append(shape(rec['geometry']))\n g = GeoSeries(geoms)\n g.crs = crs\n return g\n\n @property\n def __geo_interface__(self):\n \"\"\"Returns a GeoSeries as a python feature collection\n \"\"\"\n from geopandas import GeoDataFrame\n return GeoDataFrame({'geometry': self}).__geo_interface__\n\n def to_file(self, filename, driver=\"ESRI Shapefile\", **kwargs):\n from geopandas import GeoDataFrame\n data = GeoDataFrame({\"geometry\": self,\n \"id\":self.index.values},\n index=self.index)\n data.crs = self.crs\n data.to_file(filename, driver, **kwargs)\n\n #\n # Implement pandas methods\n #\n\n @property\n def _constructor(self):\n return GeoSeries\n\n def _wrapped_pandas_method(self, mtd, *args, **kwargs):\n \"\"\"Wrap a generic pandas method to ensure it returns a GeoSeries\"\"\"\n val = getattr(super(GeoSeries, self), mtd)(*args, **kwargs)\n if type(val) == Series:\n val.__class__ = GeoSeries\n val.crs = self.crs\n val._invalidate_sindex()\n return val\n\n def __getitem__(self, key):\n return self._wrapped_pandas_method('__getitem__', key)\n\n def sort_index(self, *args, **kwargs):\n return self._wrapped_pandas_method('sort_index', *args, **kwargs)\n\n def take(self, *args, **kwargs):\n return self._wrapped_pandas_method('take', *args, **kwargs)\n\n def select(self, *args, **kwargs):\n return self._wrapped_pandas_method('select', *args, **kwargs)\n\n @property\n def _can_hold_na(self):\n return False\n\n def __finalize__(self, other, method=None, **kwargs):\n \"\"\" propagate metadata from other to self \"\"\"\n # NOTE: backported from pandas master (upcoming v0.13)\n for name in self._metadata:\n object.__setattr__(self, name, getattr(other, name, None))\n return self\n\n def copy(self, order='C'):\n \"\"\"\n Make a copy of this GeoSeries object\n\n Parameters\n ----------\n deep : boolean, default True\n Make a deep copy, i.e. 
also copy data\n\n Returns\n -------\n copy : GeoSeries\n \"\"\"\n # FIXME: this will likely be unnecessary in pandas >= 0.13\n return GeoSeries(self.values.copy(order), index=self.index,\n name=self.name).__finalize__(self)\n\n def isnull(self):\n \"\"\"Null values in a GeoSeries are represented by empty geometric objects\"\"\"\n non_geo_null = super(GeoSeries, self).isnull()\n val = self.apply(_is_empty)\n return np.logical_or(non_geo_null, val)\n\n def fillna(self, value=None, method=None, inplace=False,\n **kwargs):\n \"\"\"Fill NA/NaN values with a geometry (empty polygon by default).\n\n \"method\" is currently not implemented for pandas <= 0.12.\n \"\"\"\n if value is None:\n value = Point()\n return super(GeoSeries, self).fillna(value=value, method=method,\n inplace=inplace, **kwargs)\n\n def align(self, other, join='outer', level=None, copy=True,\n fill_value=None, **kwargs):\n if fill_value is None:\n fill_value = Point()\n left, right = super(GeoSeries, self).align(other, join=join,\n level=level, copy=copy,\n fill_value=fill_value,\n **kwargs)\n if isinstance(other, GeoSeries):\n return GeoSeries(left), GeoSeries(right)\n else: # It is probably a Series, let's keep it that way\n return GeoSeries(left), right\n\n\n def __contains__(self, other):\n \"\"\"Allow tests of the form \"geom in s\"\n\n Tests whether a GeoSeries contains a geometry.\n\n Note: This is not the same as the geometric method \"contains\".\n \"\"\"\n if isinstance(other, BaseGeometry):\n return np.any(self.geom_equals(other))\n else:\n return False\n\n def plot(self, *args, **kwargs):\n return plot_series(self, *args, **kwargs)\n\n plot.__doc__ = plot_series.__doc__\n\n #\n # Additional methods\n #\n\n def to_crs(self, crs=None, epsg=None):\n \"\"\"Transform geometries to a new coordinate reference system\n\n This method will transform all points in all objects. It has\n no notion or projecting entire geometries. All segments\n joining points are assumed to be lines in the current\n projection, not geodesics. Objects crossing the dateline (or\n other projection boundary) will have undesirable behavior.\n\n `to_crs` passes the `crs` argument to the `Proj` function from the\n `pyproj` library (with the option `preserve_units=True`). It can\n therefore accept proj4 projections in any format\n supported by `Proj`, including dictionaries, or proj4 strings.\n\n \"\"\"\n from fiona.crs import from_epsg\n if self.crs is None:\n raise ValueError('Cannot transform naive geometries. 
'\n 'Please set a crs on the object first.')\n if crs is None:\n try:\n crs = from_epsg(epsg)\n except TypeError:\n raise TypeError('Must set either crs or epsg for output.')\n proj_in = pyproj.Proj(self.crs, preserve_units=True)\n proj_out = pyproj.Proj(crs, preserve_units=True)\n project = partial(pyproj.transform, proj_in, proj_out)\n result = self.apply(lambda geom: transform(project, geom))\n result.__class__ = GeoSeries\n result.crs = crs\n result._invalidate_sindex()\n return result\n\n def to_json(self, **kwargs):\n \"\"\"\n Returns a GeoJSON string representation of the GeoSeries.\n\n Parameters\n ----------\n *kwargs* that will be passed to json.dumps().\n \"\"\"\n return json.dumps(self.__geo_interface__, **kwargs)\n\n #\n # Implement standard operators for GeoSeries\n #\n\n def __xor__(self, other):\n \"\"\"Implement ^ operator as for builtin set type\"\"\"\n return self.symmetric_difference(other)\n\n def __or__(self, other):\n \"\"\"Implement | operator as for builtin set type\"\"\"\n return self.union(other)\n\n def __and__(self, other):\n \"\"\"Implement & operator as for builtin set type\"\"\"\n return self.intersection(other)\n\n def __sub__(self, other):\n \"\"\"Implement - operator as for builtin set type\"\"\"\n return self.difference(other)\n\nGeoSeries._create_indexer('cx', _CoordinateIndexer)\n", "path": "geopandas/geoseries.py"}], "after_files": [{"content": "from functools import partial\nimport json\nfrom warnings import warn\n\nimport numpy as np\nfrom pandas import Series, DataFrame\nfrom pandas.core.indexing import _NDFrameIndexer\n\nimport pyproj\nfrom shapely.geometry import box, shape, Polygon, Point\nfrom shapely.geometry.collection import GeometryCollection\nfrom shapely.geometry.base import BaseGeometry\nfrom shapely.ops import transform\n\nfrom geopandas.plotting import plot_series\nfrom geopandas.base import GeoPandasBase, _series_unary_op\n\n\ndef _is_empty(x):\n try:\n return x.is_empty\n except:\n return False\n\n\nclass _CoordinateIndexer(_NDFrameIndexer):\n \"\"\" Indexing by coordinate slices \"\"\"\n def _getitem_tuple(self, tup):\n obj = self.obj\n xs, ys = tup\n # handle numeric values as x and/or y coordinate index\n if type(xs) is not slice:\n xs = slice(xs, xs)\n if type(ys) is not slice:\n ys = slice(ys, ys)\n # don't know how to handle step; should this raise?\n if xs.step is not None or ys.step is not None:\n warn(\"Ignoring step - full interval is used.\")\n xmin, ymin, xmax, ymax = obj.total_bounds\n bbox = box(xs.start if xs.start is not None else xmin,\n ys.start if ys.start is not None else ymin,\n xs.stop if xs.stop is not None else xmax,\n ys.stop if ys.stop is not None else ymax)\n idx = obj.intersects(bbox)\n return obj[idx]\n\n\nclass GeoSeries(GeoPandasBase, Series):\n \"\"\"A Series object designed to store shapely geometry objects.\"\"\"\n _metadata = ['name', 'crs']\n\n def __new__(cls, *args, **kwargs):\n kwargs.pop('crs', None)\n arr = Series.__new__(cls)\n if type(arr) is GeoSeries:\n return arr\n else:\n return arr.view(GeoSeries)\n\n def __init__(self, *args, **kwargs):\n # fix problem for scalar geometries passed\n if len(args) == 1 and isinstance(args[0], BaseGeometry):\n args = ([args[0]],)\n\n crs = kwargs.pop('crs', None)\n\n super(GeoSeries, self).__init__(*args, **kwargs)\n self.crs = crs\n self._invalidate_sindex()\n\n def append(self, *args, **kwargs):\n return self._wrapped_pandas_method('append', *args, **kwargs)\n\n @property\n def geometry(self):\n return self\n\n @property\n def x(self):\n \"\"\"Return 
the x location of point geometries in a GeoSeries\"\"\"\n if (self.geom_type == \"Point\").all():\n return _series_unary_op(self, 'x', null_value=np.nan)\n else:\n message = \"x attribute access only provided for Point geometries\"\n raise ValueError(message)\n\n @property\n def y(self):\n \"\"\"Return the y location of point geometries in a GeoSeries\"\"\"\n if (self.geom_type == \"Point\").all():\n return _series_unary_op(self, 'y', null_value=np.nan)\n else:\n message = \"y attribute access only provided for Point geometries\"\n raise ValueError(message)\n\n @classmethod\n def from_file(cls, filename, **kwargs):\n \"\"\"\n Alternate constructor to create a GeoSeries from a file\n\n Parameters\n ----------\n\n filename : str\n File path or file handle to read from. Depending on which kwargs\n are included, the content of filename may vary, see:\n http://toblerity.github.io/fiona/README.html#usage\n for usage details.\n kwargs : key-word arguments\n These arguments are passed to fiona.open, and can be used to\n access multi-layer data, data stored within archives (zip files),\n etc.\n\n \"\"\"\n import fiona\n geoms = []\n with fiona.open(filename, **kwargs) as f:\n crs = f.crs\n for rec in f:\n geoms.append(shape(rec['geometry']))\n g = GeoSeries(geoms)\n g.crs = crs\n return g\n\n @property\n def __geo_interface__(self):\n \"\"\"Returns a GeoSeries as a python feature collection\n \"\"\"\n from geopandas import GeoDataFrame\n return GeoDataFrame({'geometry': self}).__geo_interface__\n\n def to_file(self, filename, driver=\"ESRI Shapefile\", **kwargs):\n from geopandas import GeoDataFrame\n data = GeoDataFrame({\"geometry\": self,\n \"id\":self.index.values},\n index=self.index)\n data.crs = self.crs\n data.to_file(filename, driver, **kwargs)\n\n #\n # Implement pandas methods\n #\n\n @property\n def _constructor(self):\n return GeoSeries\n\n def _wrapped_pandas_method(self, mtd, *args, **kwargs):\n \"\"\"Wrap a generic pandas method to ensure it returns a GeoSeries\"\"\"\n val = getattr(super(GeoSeries, self), mtd)(*args, **kwargs)\n if type(val) == Series:\n val.__class__ = GeoSeries\n val.crs = self.crs\n val._invalidate_sindex()\n return val\n\n def __getitem__(self, key):\n return self._wrapped_pandas_method('__getitem__', key)\n\n def sort_index(self, *args, **kwargs):\n return self._wrapped_pandas_method('sort_index', *args, **kwargs)\n\n def take(self, *args, **kwargs):\n return self._wrapped_pandas_method('take', *args, **kwargs)\n\n def select(self, *args, **kwargs):\n return self._wrapped_pandas_method('select', *args, **kwargs)\n\n @property\n def _can_hold_na(self):\n return False\n\n def __finalize__(self, other, method=None, **kwargs):\n \"\"\" propagate metadata from other to self \"\"\"\n # NOTE: backported from pandas master (upcoming v0.13)\n for name in self._metadata:\n object.__setattr__(self, name, getattr(other, name, None))\n return self\n\n def copy(self, order='C'):\n \"\"\"\n Make a copy of this GeoSeries object\n\n Parameters\n ----------\n deep : boolean, default True\n Make a deep copy, i.e. 
also copy data\n\n Returns\n -------\n copy : GeoSeries\n \"\"\"\n # FIXME: this will likely be unnecessary in pandas >= 0.13\n return GeoSeries(self.values.copy(order), index=self.index,\n name=self.name).__finalize__(self)\n\n def isnull(self):\n \"\"\"Null values in a GeoSeries are represented by empty geometric objects\"\"\"\n non_geo_null = super(GeoSeries, self).isnull()\n val = self.apply(_is_empty)\n return np.logical_or(non_geo_null, val)\n\n def fillna(self, value=None, method=None, inplace=False,\n **kwargs):\n \"\"\"Fill NA/NaN values with a geometry (empty polygon by default).\n\n \"method\" is currently not implemented for pandas <= 0.12.\n \"\"\"\n if value is None:\n value = Point()\n return super(GeoSeries, self).fillna(value=value, method=method,\n inplace=inplace, **kwargs)\n\n def align(self, other, join='outer', level=None, copy=True,\n fill_value=None, **kwargs):\n if fill_value is None:\n fill_value = Point()\n left, right = super(GeoSeries, self).align(other, join=join,\n level=level, copy=copy,\n fill_value=fill_value,\n **kwargs)\n if isinstance(other, GeoSeries):\n return GeoSeries(left), GeoSeries(right)\n else: # It is probably a Series, let's keep it that way\n return GeoSeries(left), right\n\n\n def __contains__(self, other):\n \"\"\"Allow tests of the form \"geom in s\"\n\n Tests whether a GeoSeries contains a geometry.\n\n Note: This is not the same as the geometric method \"contains\".\n \"\"\"\n if isinstance(other, BaseGeometry):\n return np.any(self.geom_equals(other))\n else:\n return False\n\n def plot(self, *args, **kwargs):\n return plot_series(self, *args, **kwargs)\n\n plot.__doc__ = plot_series.__doc__\n\n #\n # Additional methods\n #\n\n def to_crs(self, crs=None, epsg=None):\n \"\"\"Transform geometries to a new coordinate reference system\n\n This method will transform all points in all objects. It has\n no notion or projecting entire geometries. All segments\n joining points are assumed to be lines in the current\n projection, not geodesics. Objects crossing the dateline (or\n other projection boundary) will have undesirable behavior.\n\n `to_crs` passes the `crs` argument to the `Proj` function from the\n `pyproj` library (with the option `preserve_units=True`). It can\n therefore accept proj4 projections in any format\n supported by `Proj`, including dictionaries, or proj4 strings.\n\n \"\"\"\n from fiona.crs import from_epsg\n if self.crs is None:\n raise ValueError('Cannot transform naive geometries. 
'\n 'Please set a crs on the object first.')\n if crs is None:\n try:\n crs = from_epsg(epsg)\n except TypeError:\n raise TypeError('Must set either crs or epsg for output.')\n proj_in = pyproj.Proj(self.crs, preserve_units=True)\n proj_out = pyproj.Proj(crs, preserve_units=True)\n project = partial(pyproj.transform, proj_in, proj_out)\n result = self.apply(lambda geom: transform(project, geom))\n result.__class__ = GeoSeries\n result.crs = crs\n result._invalidate_sindex()\n return result\n\n def to_json(self, **kwargs):\n \"\"\"\n Returns a GeoJSON string representation of the GeoSeries.\n\n Parameters\n ----------\n *kwargs* that will be passed to json.dumps().\n \"\"\"\n return json.dumps(self.__geo_interface__, **kwargs)\n\n #\n # Implement standard operators for GeoSeries\n #\n\n def __xor__(self, other):\n \"\"\"Implement ^ operator as for builtin set type\"\"\"\n return self.symmetric_difference(other)\n\n def __or__(self, other):\n \"\"\"Implement | operator as for builtin set type\"\"\"\n return self.union(other)\n\n def __and__(self, other):\n \"\"\"Implement & operator as for builtin set type\"\"\"\n return self.intersection(other)\n\n def __sub__(self, other):\n \"\"\"Implement - operator as for builtin set type\"\"\"\n return self.difference(other)\n\nGeoSeries._create_indexer('cx', _CoordinateIndexer)\n", "path": "geopandas/geoseries.py"}]} | 3,731 | 191 |
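
The geopandas fix above replaces `xs.start or xmin` with an explicit `is not None` check because a slice bound of `0` is falsy and would otherwise be silently swapped for the total bounds. A minimal, standalone sketch of the difference (plain Python; the bounds values are placeholders mirroring the issue's example):

```python
xmin, xmax = -3.0, 3.0  # placeholder total bounds, as in the issue's GeoSeries

stop = 0  # the user asked for .cx[:0, :0]

# Original logic: 0 is falsy, so the bbox silently extends to xmax.
print(stop or xmax)                        # -> 3.0

# Patched logic: only a missing (None) bound falls back to the total bounds.
print(stop if stop is not None else xmax)  # -> 0
```
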
gh_patches_debug_6399 | rasdani/github-patches | git_diff | facebookresearch__hydra-277 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up --cfg
It will be cleaner for --cfg to always take one of job|hydra|all, and not have it default to job.
this will eliminate the problem that occurs when --cfg is not the last flag in the command line and some override is associated with --cfg.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/_internal/utils.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import inspect
3 import os
4 import sys
5 from .hydra import Hydra
6 import argparse
7
8
9 def run_hydra(args_parser, task_function, config_path, strict):
10 stack = inspect.stack()
11 frame = stack[2]
12
13 calling_file = None
14 calling__module = None
15 try:
16 calling_file = frame[0].f_locals["__file__"]
17 except KeyError:
18 pass
19 try:
20 module_envs = ["HYDRA_MAIN_MODULE", "FB_PAR_MAIN_MODULE", "FB_XAR_MAIN_MODULE"]
21 for module_env in module_envs:
22 if module_env in os.environ:
23 calling__module = os.environ[module_env]
24 break
25
26 if calling__module is None:
27 calling__module = frame[0].f_globals[frame[3]].__module__
28 except KeyError:
29 pass
30
31 hydra = Hydra(
32 calling_file=calling_file,
33 calling_module=calling__module,
34 config_path=config_path,
35 task_function=task_function,
36 strict=strict,
37 )
38
39 args = args_parser.parse_args()
40 if args.help:
41 hydra.app_help(args_parser=args_parser, args=args)
42 sys.exit(0)
43 if args.hydra_help:
44 hydra.hydra_help(args_parser=args_parser, args=args)
45 sys.exit(0)
46
47 has_show_cfg = args.cfg is not None
48 num_commands = args.run + has_show_cfg + args.multirun + args.shell_completion
49 if num_commands > 1:
50 raise ValueError(
51 "Only one of --run, --multirun, -cfg and --shell_completion can be specified"
52 )
53 if num_commands == 0:
54 args.run = True
55 if args.run:
56 hydra.run(overrides=args.overrides)
57 elif args.multirun:
58 hydra.multirun(overrides=args.overrides)
59 elif args.cfg:
60 hydra.show_cfg(overrides=args.overrides, cfg_type=args.cfg)
61 elif args.shell_completion:
62 hydra.shell_completion(overrides=args.overrides)
63 else:
64 print("Command not specified")
65 sys.exit(1)
66
67
68 def _get_exec_command():
69 if sys.argv[0].endswith(".py"):
70 return "python {}".format(sys.argv[0])
71 else:
72 # Running as an installed app (setuptools entry point)
73 executable = os.path.basename(sys.argv[0])
74 return executable
75
76
77 def get_args_parser():
78 from .. import __version__
79
80 parser = argparse.ArgumentParser(add_help=False, description="Hydra")
81 parser.add_argument("--help", "-h", action="store_true", help="Application's help")
82 parser.add_argument("--hydra-help", action="store_true", help="Hydra's help")
83 parser.add_argument(
84 "--version", action="version", version="Hydra {}".format(__version__)
85 )
86 parser.add_argument(
87 "overrides",
88 nargs="*",
89 help="Any key=value arguments to override config values (use dots for.nested=overrides)",
90 )
91
92 parser.add_argument(
93 "--cfg",
94 "-c",
95 const="job",
96 nargs="?",
97 choices=["job", "hydra", "all"],
98 help="Show config instead of running, optional value indicates which config to show (defaults to job)",
99 )
100
101 parser.add_argument("--run", "-r", action="store_true", help="Run a job")
102
103 parser.add_argument(
104 "--multirun",
105 "-m",
106 action="store_true",
107 help="Run multiple jobs with the configured launcher",
108 )
109
110 shell = "SHELL_NAME"
111 install_cmd = 'eval "$({} -sc install={})"'.format(_get_exec_command(), shell)
112 uninstall_cmd = 'eval "$({} -sc uninstall={})"'.format(_get_exec_command(), shell)
113 parser.add_argument(
114 "--shell_completion",
115 "-sc",
116 action="store_true",
117 help="""Install or Uninstall shell completion:
118 Install:
119 {}
120
121 Uninstall:
122 {}
123 """.format(
124 install_cmd, uninstall_cmd
125 ),
126 )
127 return parser
128
129
130 def get_args(args=None):
131 return get_args_parser().parse_args(args=args)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/_internal/utils.py b/hydra/_internal/utils.py
--- a/hydra/_internal/utils.py
+++ b/hydra/_internal/utils.py
@@ -92,10 +92,8 @@
parser.add_argument(
"--cfg",
"-c",
- const="job",
- nargs="?",
choices=["job", "hydra", "all"],
- help="Show config instead of running, optional value indicates which config to show (defaults to job)",
+ help="Show config instead of running [job|hydra|all]",
)
parser.add_argument("--run", "-r", action="store_true", help="Run a job")
| {"golden_diff": "diff --git a/hydra/_internal/utils.py b/hydra/_internal/utils.py\n--- a/hydra/_internal/utils.py\n+++ b/hydra/_internal/utils.py\n@@ -92,10 +92,8 @@\n parser.add_argument(\n \"--cfg\",\n \"-c\",\n- const=\"job\",\n- nargs=\"?\",\n choices=[\"job\", \"hydra\", \"all\"],\n- help=\"Show config instead of running, optional value indicates which config to show (defaults to job)\",\n+ help=\"Show config instead of running [job|hydra|all]\",\n )\n \n parser.add_argument(\"--run\", \"-r\", action=\"store_true\", help=\"Run a job\")\n", "issue": "Clean up --cfg\nIt will be cleaner for --cfg to always take one of job|hydra|all, and not have it default to job.\r\nthis will eliminate the problem that occures when --cfg is not the last flag in the command line and some override is associated with --cfg.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport inspect\nimport os\nimport sys\nfrom .hydra import Hydra\nimport argparse\n\n\ndef run_hydra(args_parser, task_function, config_path, strict):\n stack = inspect.stack()\n frame = stack[2]\n\n calling_file = None\n calling__module = None\n try:\n calling_file = frame[0].f_locals[\"__file__\"]\n except KeyError:\n pass\n try:\n module_envs = [\"HYDRA_MAIN_MODULE\", \"FB_PAR_MAIN_MODULE\", \"FB_XAR_MAIN_MODULE\"]\n for module_env in module_envs:\n if module_env in os.environ:\n calling__module = os.environ[module_env]\n break\n\n if calling__module is None:\n calling__module = frame[0].f_globals[frame[3]].__module__\n except KeyError:\n pass\n\n hydra = Hydra(\n calling_file=calling_file,\n calling_module=calling__module,\n config_path=config_path,\n task_function=task_function,\n strict=strict,\n )\n\n args = args_parser.parse_args()\n if args.help:\n hydra.app_help(args_parser=args_parser, args=args)\n sys.exit(0)\n if args.hydra_help:\n hydra.hydra_help(args_parser=args_parser, args=args)\n sys.exit(0)\n\n has_show_cfg = args.cfg is not None\n num_commands = args.run + has_show_cfg + args.multirun + args.shell_completion\n if num_commands > 1:\n raise ValueError(\n \"Only one of --run, --multirun, -cfg and --shell_completion can be specified\"\n )\n if num_commands == 0:\n args.run = True\n if args.run:\n hydra.run(overrides=args.overrides)\n elif args.multirun:\n hydra.multirun(overrides=args.overrides)\n elif args.cfg:\n hydra.show_cfg(overrides=args.overrides, cfg_type=args.cfg)\n elif args.shell_completion:\n hydra.shell_completion(overrides=args.overrides)\n else:\n print(\"Command not specified\")\n sys.exit(1)\n\n\ndef _get_exec_command():\n if sys.argv[0].endswith(\".py\"):\n return \"python {}\".format(sys.argv[0])\n else:\n # Running as an installed app (setuptools entry point)\n executable = os.path.basename(sys.argv[0])\n return executable\n\n\ndef get_args_parser():\n from .. 
import __version__\n\n parser = argparse.ArgumentParser(add_help=False, description=\"Hydra\")\n parser.add_argument(\"--help\", \"-h\", action=\"store_true\", help=\"Application's help\")\n parser.add_argument(\"--hydra-help\", action=\"store_true\", help=\"Hydra's help\")\n parser.add_argument(\n \"--version\", action=\"version\", version=\"Hydra {}\".format(__version__)\n )\n parser.add_argument(\n \"overrides\",\n nargs=\"*\",\n help=\"Any key=value arguments to override config values (use dots for.nested=overrides)\",\n )\n\n parser.add_argument(\n \"--cfg\",\n \"-c\",\n const=\"job\",\n nargs=\"?\",\n choices=[\"job\", \"hydra\", \"all\"],\n help=\"Show config instead of running, optional value indicates which config to show (defaults to job)\",\n )\n\n parser.add_argument(\"--run\", \"-r\", action=\"store_true\", help=\"Run a job\")\n\n parser.add_argument(\n \"--multirun\",\n \"-m\",\n action=\"store_true\",\n help=\"Run multiple jobs with the configured launcher\",\n )\n\n shell = \"SHELL_NAME\"\n install_cmd = 'eval \"$({} -sc install={})\"'.format(_get_exec_command(), shell)\n uninstall_cmd = 'eval \"$({} -sc uninstall={})\"'.format(_get_exec_command(), shell)\n parser.add_argument(\n \"--shell_completion\",\n \"-sc\",\n action=\"store_true\",\n help=\"\"\"Install or Uninstall shell completion:\n Install:\n {}\n\n Uninstall:\n {}\n\"\"\".format(\n install_cmd, uninstall_cmd\n ),\n )\n return parser\n\n\ndef get_args(args=None):\n return get_args_parser().parse_args(args=args)\n", "path": "hydra/_internal/utils.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport inspect\nimport os\nimport sys\nfrom .hydra import Hydra\nimport argparse\n\n\ndef run_hydra(args_parser, task_function, config_path, strict):\n stack = inspect.stack()\n frame = stack[2]\n\n calling_file = None\n calling__module = None\n try:\n calling_file = frame[0].f_locals[\"__file__\"]\n except KeyError:\n pass\n try:\n module_envs = [\"HYDRA_MAIN_MODULE\", \"FB_PAR_MAIN_MODULE\", \"FB_XAR_MAIN_MODULE\"]\n for module_env in module_envs:\n if module_env in os.environ:\n calling__module = os.environ[module_env]\n break\n\n if calling__module is None:\n calling__module = frame[0].f_globals[frame[3]].__module__\n except KeyError:\n pass\n\n hydra = Hydra(\n calling_file=calling_file,\n calling_module=calling__module,\n config_path=config_path,\n task_function=task_function,\n strict=strict,\n )\n\n args = args_parser.parse_args()\n if args.help:\n hydra.app_help(args_parser=args_parser, args=args)\n sys.exit(0)\n if args.hydra_help:\n hydra.hydra_help(args_parser=args_parser, args=args)\n sys.exit(0)\n\n has_show_cfg = args.cfg is not None\n num_commands = args.run + has_show_cfg + args.multirun + args.shell_completion\n if num_commands > 1:\n raise ValueError(\n \"Only one of --run, --multirun, -cfg and --shell_completion can be specified\"\n )\n if num_commands == 0:\n args.run = True\n if args.run:\n hydra.run(overrides=args.overrides)\n elif args.multirun:\n hydra.multirun(overrides=args.overrides)\n elif args.cfg:\n hydra.show_cfg(overrides=args.overrides, cfg_type=args.cfg)\n elif args.shell_completion:\n hydra.shell_completion(overrides=args.overrides)\n else:\n print(\"Command not specified\")\n sys.exit(1)\n\n\ndef _get_exec_command():\n if sys.argv[0].endswith(\".py\"):\n return \"python {}\".format(sys.argv[0])\n else:\n # Running as an installed app (setuptools entry point)\n executable = os.path.basename(sys.argv[0])\n return executable\n\n\ndef 
get_args_parser():\n from .. import __version__\n\n parser = argparse.ArgumentParser(add_help=False, description=\"Hydra\")\n parser.add_argument(\"--help\", \"-h\", action=\"store_true\", help=\"Application's help\")\n parser.add_argument(\"--hydra-help\", action=\"store_true\", help=\"Hydra's help\")\n parser.add_argument(\n \"--version\", action=\"version\", version=\"Hydra {}\".format(__version__)\n )\n parser.add_argument(\n \"overrides\",\n nargs=\"*\",\n help=\"Any key=value arguments to override config values (use dots for.nested=overrides)\",\n )\n\n parser.add_argument(\n \"--cfg\",\n \"-c\",\n choices=[\"job\", \"hydra\", \"all\"],\n help=\"Show config instead of running [job|hydra|all]\",\n )\n\n parser.add_argument(\"--run\", \"-r\", action=\"store_true\", help=\"Run a job\")\n\n parser.add_argument(\n \"--multirun\",\n \"-m\",\n action=\"store_true\",\n help=\"Run multiple jobs with the configured launcher\",\n )\n\n shell = \"SHELL_NAME\"\n install_cmd = 'eval \"$({} -sc install={})\"'.format(_get_exec_command(), shell)\n uninstall_cmd = 'eval \"$({} -sc uninstall={})\"'.format(_get_exec_command(), shell)\n parser.add_argument(\n \"--shell_completion\",\n \"-sc\",\n action=\"store_true\",\n help=\"\"\"Install or Uninstall shell completion:\n Install:\n {}\n\n Uninstall:\n {}\n\"\"\".format(\n install_cmd, uninstall_cmd\n ),\n )\n return parser\n\n\ndef get_args(args=None):\n return get_args_parser().parse_args(args=args)\n", "path": "hydra/_internal/utils.py"}]} | 1,526 | 152 |
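
The Hydra patch above removes `const="job"` and `nargs="?"` so `--cfg` always requires an explicit `job|hydra|all` value, which stops a trailing override from being swallowed as the flag's value. A minimal argparse sketch of the parsing difference (a standalone toy parser, not Hydra's real one):

```python
import argparse

def build(cfg_optional_value: bool) -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    if cfg_optional_value:
        # old definition: value is optional, defaults to "job" when omitted
        parser.add_argument("--cfg", "-c", const="job", nargs="?",
                            choices=["job", "hydra", "all"])
    else:
        # new definition: an explicit job|hydra|all value is required
        parser.add_argument("--cfg", "-c", choices=["job", "hydra", "all"])
    parser.add_argument("overrides", nargs="*")
    return parser

# Old behavior: "--cfg db=mysql" consumes the override as --cfg's value and errors
# out ("invalid choice: 'db=mysql'"), because nargs="?" grabs the next non-flag token.
# New behavior: the value must be stated, and overrides parse cleanly afterwards.
print(build(False).parse_args(["--cfg", "job", "db=mysql"]))
# -> Namespace(cfg='job', overrides=['db=mysql'])
```
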
gh_patches_debug_33836 | rasdani/github-patches | git_diff | bridgecrewio__checkov-1166 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to see any CKV2 checks in the list
**Describe the bug**
I posted this on Slack first and it was confirmed to be a bug. I can't see any of the CKV2 checks when running `checkov -l`.
**Expected behavior**
CKV2 / graph checks should be present and working.
**Screenshots**

**Desktop (please complete the following information):**
 - OS: OS X
- Checkov Version 2.0.107
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import logging
3 import os
4 from importlib import util
5 from os import path
6
7 import setuptools
8 from setuptools import setup
9
10 # read the contents of your README file
11 this_directory = path.abspath(path.dirname(__file__))
12 with open(path.join(this_directory, "README.md"), encoding="utf-8") as f:
13 long_description = f.read()
14
15 logger = logging.getLogger(__name__)
16 spec = util.spec_from_file_location(
17 "checkov.version", os.path.join("checkov", "version.py")
18 )
19 # noinspection PyUnresolvedReferences
20 mod = util.module_from_spec(spec)
21 spec.loader.exec_module(mod) # type: ignore
22 version = mod.version # type: ignore
23
24 setup(
25 extras_require={
26 "dev": [
27 "pytest==5.3.1",
28 "coverage",
29 "coverage-badge",
30 "GitPython==3.1.7",
31 "bandit"
32 ]
33 },
34 install_requires=[
35 "bc-python-hcl2>=0.3.18",
36 "cloudsplaining>=0.4.1",
37 "deep_merge",
38 "tabulate",
39 "colorama",
40 "termcolor",
41 "junit-xml",
42 "dpath>=1.5.0,<2",
43 "pyyaml>=5.4.1",
44 "boto3==1.17.27",
45 "GitPython",
46 "six==1.15.0",
47 "jmespath",
48 "tqdm",
49 "update_checker",
50 "semantic_version",
51 "packaging",
52 "networkx",
53 "dockerfile-parse",
54 "docker"
55 ],
56 license="Apache License 2.0",
57 name="checkov",
58 version=version,
59 python_requires=">=3.7",
60 description="Infrastructure as code static analysis",
61 author="bridgecrew",
62 author_email="[email protected]",
63 url="https://github.com/nimrodkor/checkov",
64 packages=setuptools.find_packages(exclude=["tests*","integration_tests*"]),
65 scripts=["bin/checkov", "bin/checkov.cmd"],
66 long_description=long_description,
67 long_description_content_type="text/markdown",
68 classifiers=[
69 'Environment :: Console',
70 'Intended Audience :: Developers',
71 'Intended Audience :: System Administrators',
72 'Programming Language :: Python :: 3.7',
73 'Programming Language :: Python :: 3.8',
74 'Programming Language :: Python :: 3.9',
75 'Topic :: Security',
76 'Topic :: Software Development :: Build Tools'
77 ]
78 )
79
```
Path: `checkov/terraform/checks_infra/registry.py`
Content:
```
1 import json
2 import logging
3 import os
4
5 import yaml
6
7 from checkov.common.graph.checks_infra.base_parser import BaseGraphCheckParser
8 from checkov.common.graph.checks_infra.registry import BaseRegistry
9 from checkov.terraform.checks_infra.resources_types import resources_types
10
11 CHECKS_POSSIBLE_ENDING = [".yaml", ".yml"]
12
13
14 class Registry(BaseRegistry):
15 def __init__(self, parser=BaseGraphCheckParser(), checks_dir=None):
16 super().__init__(parser)
17 self.checks = []
18 self.parser = parser
19 self.checks_dir = checks_dir if checks_dir else \
20 os.path.join(os.path.dirname(os.path.dirname(__file__)), "checks", "graph_checks")
21 self.logger = logging.getLogger(__name__)
22
23 def load_checks(self):
24 self._load_checks_from_dir(self.checks_dir)
25
26 def _load_checks_from_dir(self, directory: str):
27 dir = os.path.expanduser(directory)
28 self.logger.debug("Loading external checks from {}".format(dir))
29 for root, d_names, f_names in os.walk(dir):
30 for file in f_names:
31 file_ending = os.path.splitext(file)[1]
32 if file_ending in CHECKS_POSSIBLE_ENDING:
33 with open(f'{root}/{file}', "r") as f:
34 if dir != self.checks_dir:
35 # This is a custom check, log its loading
36 logging.info(f"loading {file}")
37 check_yaml = yaml.safe_load(f)
38 check_json = json.loads(json.dumps(check_yaml))
39 check = self.parser.parse_raw_check(check_json, resources_types=self._get_resource_types(check_json))
40 if not any([c for c in self.checks if check.id == c.id]):
41 self.checks.append(check)
42
43 def load_external_checks(self, dir: str):
44 self._load_checks_from_dir(dir)
45
46 @staticmethod
47 def _get_resource_types(check_json):
48 provider = check_json.get("scope", {}).get("provider", "").lower()
49 return resources_types.get(provider)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks_infra/registry.py b/checkov/terraform/checks_infra/registry.py
--- a/checkov/terraform/checks_infra/registry.py
+++ b/checkov/terraform/checks_infra/registry.py
@@ -25,15 +25,20 @@
def _load_checks_from_dir(self, directory: str):
dir = os.path.expanduser(directory)
- self.logger.debug("Loading external checks from {}".format(dir))
+
+ checks_dir_content = os.listdir(os.path.dirname(dir))
+ self.logger.info(f'Checks dir contents: {checks_dir_content}')
+
+ self.logger.info("Loading external checks from {}".format(dir))
for root, d_names, f_names in os.walk(dir):
+ self.logger.info(f'Searching through {d_names} and {f_names}')
for file in f_names:
file_ending = os.path.splitext(file)[1]
if file_ending in CHECKS_POSSIBLE_ENDING:
with open(f'{root}/{file}', "r") as f:
- if dir != self.checks_dir:
+ # if dir != self.checks_dir:
# This is a custom check, log its loading
- logging.info(f"loading {file}")
+ self.logger.info(f"loading {file}")
check_yaml = yaml.safe_load(f)
check_json = json.loads(json.dumps(check_yaml))
check = self.parser.parse_raw_check(check_json, resources_types=self._get_resource_types(check_json))
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -62,6 +62,8 @@
author_email="[email protected]",
url="https://github.com/nimrodkor/checkov",
packages=setuptools.find_packages(exclude=["tests*","integration_tests*"]),
+ include_package_data=True,
+ package_data = {'': ['*.yaml', '*.yml']},
scripts=["bin/checkov", "bin/checkov.cmd"],
long_description=long_description,
long_description_content_type="text/markdown",
| {"golden_diff": "diff --git a/checkov/terraform/checks_infra/registry.py b/checkov/terraform/checks_infra/registry.py\n--- a/checkov/terraform/checks_infra/registry.py\n+++ b/checkov/terraform/checks_infra/registry.py\n@@ -25,15 +25,20 @@\n \n def _load_checks_from_dir(self, directory: str):\n dir = os.path.expanduser(directory)\n- self.logger.debug(\"Loading external checks from {}\".format(dir))\n+\n+ checks_dir_content = os.listdir(os.path.dirname(dir))\n+ self.logger.info(f'Checks dir contents: {checks_dir_content}')\n+\n+ self.logger.info(\"Loading external checks from {}\".format(dir))\n for root, d_names, f_names in os.walk(dir):\n+ self.logger.info(f'Searching through {d_names} and {f_names}')\n for file in f_names:\n file_ending = os.path.splitext(file)[1]\n if file_ending in CHECKS_POSSIBLE_ENDING:\n with open(f'{root}/{file}', \"r\") as f:\n- if dir != self.checks_dir:\n+ # if dir != self.checks_dir:\n # This is a custom check, log its loading\n- logging.info(f\"loading {file}\")\n+ self.logger.info(f\"loading {file}\")\n check_yaml = yaml.safe_load(f)\n check_json = json.loads(json.dumps(check_yaml))\n check = self.parser.parse_raw_check(check_json, resources_types=self._get_resource_types(check_json))\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -62,6 +62,8 @@\n author_email=\"[email protected]\",\n url=\"https://github.com/nimrodkor/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\",\"integration_tests*\"]),\n+ include_package_data=True,\n+ package_data = {'': ['*.yaml', '*.yml']},\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n", "issue": "Unable to see any CKV2 checks in the list\n**Describe the bug**\r\nI posted this on Slack first and was confirmed it's a bug. 
I can't see any of the CKV2 checks when running `checkov -l`\r\n\r\n**Expected behavior**\r\nCKV2 / graph checks should be present working.\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: os X\r\n - Checkov Version 2.0.107\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\"\n ]\n },\n install_requires=[\n \"bc-python-hcl2>=0.3.18\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.27\",\n \"GitPython\",\n \"six==1.15.0\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\"\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/nimrodkor/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\",\"integration_tests*\"]),\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Security',\n 'Topic :: Software Development :: Build Tools'\n ]\n)\n", "path": "setup.py"}, {"content": "import json\nimport logging\nimport os\n\nimport yaml\n\nfrom checkov.common.graph.checks_infra.base_parser import BaseGraphCheckParser\nfrom checkov.common.graph.checks_infra.registry import BaseRegistry\nfrom checkov.terraform.checks_infra.resources_types import resources_types\n\nCHECKS_POSSIBLE_ENDING = [\".yaml\", \".yml\"]\n\n\nclass Registry(BaseRegistry):\n def __init__(self, parser=BaseGraphCheckParser(), checks_dir=None):\n super().__init__(parser)\n self.checks = []\n self.parser = parser\n self.checks_dir = checks_dir if checks_dir else \\\n os.path.join(os.path.dirname(os.path.dirname(__file__)), \"checks\", \"graph_checks\")\n self.logger = logging.getLogger(__name__)\n\n def load_checks(self):\n self._load_checks_from_dir(self.checks_dir)\n\n def _load_checks_from_dir(self, directory: str):\n dir = os.path.expanduser(directory)\n self.logger.debug(\"Loading external checks from {}\".format(dir))\n for root, d_names, f_names in os.walk(dir):\n for file in f_names:\n file_ending = os.path.splitext(file)[1]\n if file_ending in CHECKS_POSSIBLE_ENDING:\n with open(f'{root}/{file}', \"r\") as f:\n if dir != 
self.checks_dir:\n # This is a custom check, log its loading\n logging.info(f\"loading {file}\")\n check_yaml = yaml.safe_load(f)\n check_json = json.loads(json.dumps(check_yaml))\n check = self.parser.parse_raw_check(check_json, resources_types=self._get_resource_types(check_json))\n if not any([c for c in self.checks if check.id == c.id]):\n self.checks.append(check)\n\n def load_external_checks(self, dir: str):\n self._load_checks_from_dir(dir)\n\n @staticmethod\n def _get_resource_types(check_json):\n provider = check_json.get(\"scope\", {}).get(\"provider\", \"\").lower()\n return resources_types.get(provider)\n", "path": "checkov/terraform/checks_infra/registry.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\"\n ]\n },\n install_requires=[\n \"bc-python-hcl2>=0.3.18\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.27\",\n \"GitPython\",\n \"six==1.15.0\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\"\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/nimrodkor/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\",\"integration_tests*\"]),\n include_package_data=True,\n package_data = {'': ['*.yaml', '*.yml']},\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Security',\n 'Topic :: Software Development :: Build Tools'\n ]\n)\n", "path": "setup.py"}, {"content": "import json\nimport logging\nimport os\n\nimport yaml\n\nfrom checkov.common.graph.checks_infra.base_parser import BaseGraphCheckParser\nfrom checkov.common.graph.checks_infra.registry import BaseRegistry\nfrom checkov.terraform.checks_infra.resources_types import resources_types\n\nCHECKS_POSSIBLE_ENDING = [\".yaml\", \".yml\"]\n\n\nclass Registry(BaseRegistry):\n def __init__(self, parser=BaseGraphCheckParser(), checks_dir=None):\n super().__init__(parser)\n self.checks = []\n self.parser = parser\n self.checks_dir = checks_dir if checks_dir else \\\n os.path.join(os.path.dirname(os.path.dirname(__file__)), \"checks\", \"graph_checks\")\n self.logger = 
logging.getLogger(__name__)\n\n def load_checks(self):\n self._load_checks_from_dir(self.checks_dir)\n\n def _load_checks_from_dir(self, directory: str):\n dir = os.path.expanduser(directory)\n\n checks_dir_content = os.listdir(os.path.dirname(dir))\n self.logger.info(f'Checks dir contents: {checks_dir_content}')\n\n self.logger.info(\"Loading external checks from {}\".format(dir))\n for root, d_names, f_names in os.walk(dir):\n self.logger.info(f'Searching through {d_names} and {f_names}')\n for file in f_names:\n file_ending = os.path.splitext(file)[1]\n if file_ending in CHECKS_POSSIBLE_ENDING:\n with open(f'{root}/{file}', \"r\") as f:\n # if dir != self.checks_dir:\n # This is a custom check, log its loading\n self.logger.info(f\"loading {file}\")\n check_yaml = yaml.safe_load(f)\n check_json = json.loads(json.dumps(check_yaml))\n check = self.parser.parse_raw_check(check_json, resources_types=self._get_resource_types(check_json))\n if not any([c for c in self.checks if check.id == c.id]):\n self.checks.append(check)\n\n def load_external_checks(self, dir: str):\n self._load_checks_from_dir(dir)\n\n @staticmethod\n def _get_resource_types(check_json):\n provider = check_json.get(\"scope\", {}).get(\"provider\", \"\").lower()\n return resources_types.get(provider)\n", "path": "checkov/terraform/checks_infra/registry.py"}]} | 1,673 | 453 |
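The golden diff pairs extra logging with a packaging fix, and the packaging half is what resolves the report: the CKV2 graph checks are plain `.yaml`/`.yml` files inside the `checkov` package, and setuptools does not include non-Python files in a wheel unless asked, so published builds shipped without them and the registry's `os.walk()` found nothing to load. A minimal sketch of just the relevant `setup()` arguments (the real file keeps all the metadata shown above):

```python
# Minimal sketch of the packaging change, assuming the graph checks live as
# *.yaml / *.yml data files inside the installed checkov package.
from setuptools import find_packages, setup

setup(
    name="checkov",
    packages=find_packages(exclude=["tests*", "integration_tests*"]),
    # Without these two arguments setuptools only ships .py files, so the
    # CKV2 graph-check YAML definitions never reach the installed wheel.
    include_package_data=True,
    package_data={"": ["*.yaml", "*.yml"]},
)
```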
gh_patches_debug_801 | rasdani/github-patches | git_diff | google__flax-2407 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Outdated `rich` dependency version
The version of `rich` is currently limited to `rich~=11.1`, causing problems with `pip` dependency resolution when installing with other packages.
https://github.com/google/flax/blob/cda7a4c85bbce744e412ab82e298ddf76d4770d2/setup.py#L33
Should be a trivial fix since `flax.linen.summary` doesn't seem to need any changes, I'll open a PR.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """setup.py for Flax."""
16
17 import os
18 from setuptools import find_packages
19 from setuptools import setup
20
21 here = os.path.abspath(os.path.dirname(__file__))
22 try:
23 README = open(os.path.join(here, "README.md"), encoding="utf-8").read()
24 except OSError:
25 README = ""
26
27 install_requires = [
28 "numpy>=1.12",
29 "jax>=0.3.16",
30 "matplotlib", # only needed for tensorboard export
31 "msgpack",
32 "optax",
33 "rich~=11.1",
34 "typing_extensions>=4.1.1",
35 "PyYAML>=5.4.1",
36 ]
37
38 tests_require = [
39 "atari-py==0.2.5", # Last version does not have the ROMs we test on pre-packaged
40 "clu", # All examples.
41 "gym==0.18.3",
42 "jaxlib",
43 "jraph>=0.0.6dev0",
44 "ml-collections",
45 "opencv-python",
46 "pytest",
47 "pytest-cov",
48 "pytest-custom_exit_code",
49 "pytest-xdist==1.34.0", # upgrading to 2.0 broke tests, need to investigate
50 "pytype",
51 "sentencepiece", # WMT example.
52 "svn",
53 "tensorflow_text>=2.4.0", # WMT example.
54 "tensorflow_datasets",
55 "tensorflow",
56 "torch",
57 ]
58
59 __version__ = None
60
61 with open("flax/version.py") as f:
62 exec(f.read(), globals())
63
64 setup(
65 name="flax",
66 version=__version__,
67 description="Flax: A neural network library for JAX designed for flexibility",
68 long_description="\n\n".join([README]),
69 long_description_content_type="text/markdown",
70 classifiers=[
71 "Development Status :: 3 - Alpha",
72 "Intended Audience :: Developers",
73 "Intended Audience :: Science/Research",
74 "License :: OSI Approved :: Apache Software License",
75 "Programming Language :: Python :: 3.7",
76 "Topic :: Scientific/Engineering :: Artificial Intelligence",
77 ],
78 keywords="",
79 author="Flax team",
80 author_email="[email protected]",
81 url="https://github.com/google/flax",
82 packages=find_packages(),
83 package_data={"flax": ["py.typed"]},
84 zip_safe=False,
85 install_requires=install_requires,
86 extras_require={
87 "testing": tests_require,
88 },
89 )
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,7 @@
"matplotlib", # only needed for tensorboard export
"msgpack",
"optax",
- "rich~=11.1",
+ "rich>=11.1",
"typing_extensions>=4.1.1",
"PyYAML>=5.4.1",
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,7 +30,7 @@\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n- \"rich~=11.1\",\n+ \"rich>=11.1\",\n \"typing_extensions>=4.1.1\",\n \"PyYAML>=5.4.1\",\n ]\n", "issue": "Outdated `rich` dependency version\nThe version of `rich` is currently limited to `rich~=11.1`, causing problems with `pip` dependency resolution when installing with other packages.\r\n\r\nhttps://github.com/google/flax/blob/cda7a4c85bbce744e412ab82e298ddf76d4770d2/setup.py#L33\r\n\r\nShould be a trivial fix since `flax.linen.summary` doesn't seem to need any changes, I'll open a PR.\r\n\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding=\"utf-8\").read()\nexcept OSError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.3.16\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n \"rich~=11.1\",\n \"typing_extensions>=4.1.1\",\n \"PyYAML>=5.4.1\",\n]\n\ntests_require = [\n \"atari-py==0.2.5\", # Last version does not have the ROMs we test on pre-packaged\n \"clu\", # All examples.\n \"gym==0.18.3\",\n \"jaxlib\",\n \"jraph>=0.0.6dev0\",\n \"ml-collections\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-custom_exit_code\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"pytype\",\n \"sentencepiece\", # WMT example.\n \"svn\",\n \"tensorflow_text>=2.4.0\", # WMT example.\n \"tensorflow_datasets\",\n \"tensorflow\",\n \"torch\",\n]\n\n__version__ = None\n\nwith open(\"flax/version.py\") as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n package_data={\"flax\": [\"py.typed\"]},\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy 
of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding=\"utf-8\").read()\nexcept OSError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.3.16\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n \"rich>=11.1\",\n \"typing_extensions>=4.1.1\",\n \"PyYAML>=5.4.1\",\n]\n\ntests_require = [\n \"atari-py==0.2.5\", # Last version does not have the ROMs we test on pre-packaged\n \"clu\", # All examples.\n \"gym==0.18.3\",\n \"jaxlib\",\n \"jraph>=0.0.6dev0\",\n \"ml-collections\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-custom_exit_code\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"pytype\",\n \"sentencepiece\", # WMT example.\n \"svn\",\n \"tensorflow_text>=2.4.0\", # WMT example.\n \"tensorflow_datasets\",\n \"tensorflow\",\n \"torch\",\n]\n\n__version__ = None\n\nwith open(\"flax/version.py\") as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n package_data={\"flax\": [\"py.typed\"]},\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}]} | 1,229 | 99 |
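The one-character change matters because `~=` is the compatible-release operator: `rich~=11.1` pins `>=11.1, <12`, so any neighbouring requirement on `rich` 12+ deadlocks pip's resolver, while `rich>=11.1` only sets a floor. A small illustration using the `packaging` library (assumed here purely for the demo; flax itself does not need this code):

```python
# Illustration only, assuming the `packaging` library is installed.
from packaging.specifiers import SpecifierSet

compatible = SpecifierSet("~=11.1")   # equivalent to >=11.1, <12
lower_bound = SpecifierSet(">=11.1")

print("12.2" in compatible)    # False -> resolver conflict with rich 12.x
print("12.2" in lower_bound)   # True  -> conflict resolved by the patch
```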
gh_patches_debug_7456 | rasdani/github-patches | git_diff | encode__httpx-421 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HTTPError should be importable frop the top-level httpx package
From #365:
> `HTTPError` is not available at the top level like the other exceptions and like it was in requests. This is a somewhat common exception to catch raise_for_status, so having to add another import statement for it is a bit odd if intentional.
Put differently, `HTTPError` is missing from `httpx/__init__.py`. Adding it would allow us to do:
```python
import httpx
try:
r = httpx.get("https://example.org")
r.raise_for_status()
except httpx.HTTPError:
...
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/__init__.py`
Content:
```
1 from .__version__ import __description__, __title__, __version__
2 from .api import delete, get, head, options, patch, post, put, request
3 from .client import AsyncClient, Client
4 from .concurrency.asyncio import AsyncioBackend
5 from .concurrency.base import (
6 BaseBackgroundManager,
7 BasePoolSemaphore,
8 BaseTCPStream,
9 ConcurrencyBackend,
10 )
11 from .config import (
12 USER_AGENT,
13 CertTypes,
14 HTTPVersionConfig,
15 HTTPVersionTypes,
16 PoolLimits,
17 SSLConfig,
18 TimeoutConfig,
19 TimeoutTypes,
20 VerifyTypes,
21 )
22 from .dispatch.base import AsyncDispatcher, Dispatcher
23 from .dispatch.connection import HTTPConnection
24 from .dispatch.connection_pool import ConnectionPool
25 from .dispatch.proxy_http import HTTPProxy, HTTPProxyMode
26 from .exceptions import (
27 ConnectTimeout,
28 CookieConflict,
29 DecodingError,
30 InvalidURL,
31 NotRedirectResponse,
32 PoolTimeout,
33 ProtocolError,
34 ProxyError,
35 ReadTimeout,
36 RedirectBodyUnavailable,
37 RedirectLoop,
38 ResponseClosed,
39 ResponseNotRead,
40 StreamConsumed,
41 Timeout,
42 TooManyRedirects,
43 WriteTimeout,
44 )
45 from .middleware.digest_auth import DigestAuth
46 from .models import (
47 URL,
48 AsyncRequest,
49 AsyncRequestData,
50 AsyncResponse,
51 AsyncResponseContent,
52 AuthTypes,
53 Cookies,
54 CookieTypes,
55 Headers,
56 HeaderTypes,
57 Origin,
58 QueryParams,
59 QueryParamTypes,
60 Request,
61 RequestData,
62 RequestFiles,
63 Response,
64 ResponseContent,
65 URLTypes,
66 )
67 from .status_codes import StatusCode, codes
68
69 __all__ = [
70 "__description__",
71 "__title__",
72 "__version__",
73 "delete",
74 "get",
75 "head",
76 "options",
77 "patch",
78 "post",
79 "patch",
80 "put",
81 "request",
82 "AsyncClient",
83 "Client",
84 "AsyncioBackend",
85 "USER_AGENT",
86 "CertTypes",
87 "PoolLimits",
88 "SSLConfig",
89 "TimeoutConfig",
90 "VerifyTypes",
91 "HTTPConnection",
92 "BasePoolSemaphore",
93 "BaseBackgroundManager",
94 "ConnectionPool",
95 "HTTPProxy",
96 "HTTPProxyMode",
97 "ConnectTimeout",
98 "CookieConflict",
99 "DecodingError",
100 "InvalidURL",
101 "NotRedirectResponse",
102 "PoolTimeout",
103 "ProtocolError",
104 "ReadTimeout",
105 "RedirectBodyUnavailable",
106 "RedirectLoop",
107 "ResponseClosed",
108 "ResponseNotRead",
109 "StreamConsumed",
110 "ProxyError",
111 "Timeout",
112 "TooManyRedirects",
113 "WriteTimeout",
114 "AsyncDispatcher",
115 "BaseTCPStream",
116 "ConcurrencyBackend",
117 "Dispatcher",
118 "URL",
119 "URLTypes",
120 "StatusCode",
121 "codes",
122 "TimeoutTypes",
123 "HTTPVersionTypes",
124 "HTTPVersionConfig",
125 "AsyncRequest",
126 "AsyncRequestData",
127 "AsyncResponse",
128 "AsyncResponseContent",
129 "AuthTypes",
130 "Cookies",
131 "CookieTypes",
132 "Headers",
133 "HeaderTypes",
134 "Origin",
135 "QueryParams",
136 "QueryParamTypes",
137 "Request",
138 "RequestData",
139 "Response",
140 "ResponseContent",
141 "RequestFiles",
142 "DigestAuth",
143 ]
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/httpx/__init__.py b/httpx/__init__.py
--- a/httpx/__init__.py
+++ b/httpx/__init__.py
@@ -27,6 +27,7 @@
ConnectTimeout,
CookieConflict,
DecodingError,
+ HTTPError,
InvalidURL,
NotRedirectResponse,
PoolTimeout,
@@ -97,6 +98,7 @@
"ConnectTimeout",
"CookieConflict",
"DecodingError",
+ "HTTPError",
"InvalidURL",
"NotRedirectResponse",
"PoolTimeout",
| {"golden_diff": "diff --git a/httpx/__init__.py b/httpx/__init__.py\n--- a/httpx/__init__.py\n+++ b/httpx/__init__.py\n@@ -27,6 +27,7 @@\n ConnectTimeout,\n CookieConflict,\n DecodingError,\n+ HTTPError,\n InvalidURL,\n NotRedirectResponse,\n PoolTimeout,\n@@ -97,6 +98,7 @@\n \"ConnectTimeout\",\n \"CookieConflict\",\n \"DecodingError\",\n+ \"HTTPError\",\n \"InvalidURL\",\n \"NotRedirectResponse\",\n \"PoolTimeout\",\n", "issue": "HTTPError should be importable frop the top-level httpx package\nFrom #365:\r\n\r\n> `HTTPError` is not available at the top level like the other exceptions and like it was in requests. This is a somewhat common exception to catch raise_for_status, so having to add another import statement for it is a bit odd if intentional.\r\n\r\nPut differently, `HTTPError` is missing from `httpx/__init__.py`. Adding it would allow us to do:\r\n\r\n```python\r\nimport httpx\r\n\r\ntry:\r\n r = httpx.get(\"https://example.org\")\r\n r.raise_for_status()\r\nexcept httpx.HTTPError:\r\n ...\r\n```\n", "before_files": [{"content": "from .__version__ import __description__, __title__, __version__\nfrom .api import delete, get, head, options, patch, post, put, request\nfrom .client import AsyncClient, Client\nfrom .concurrency.asyncio import AsyncioBackend\nfrom .concurrency.base import (\n BaseBackgroundManager,\n BasePoolSemaphore,\n BaseTCPStream,\n ConcurrencyBackend,\n)\nfrom .config import (\n USER_AGENT,\n CertTypes,\n HTTPVersionConfig,\n HTTPVersionTypes,\n PoolLimits,\n SSLConfig,\n TimeoutConfig,\n TimeoutTypes,\n VerifyTypes,\n)\nfrom .dispatch.base import AsyncDispatcher, Dispatcher\nfrom .dispatch.connection import HTTPConnection\nfrom .dispatch.connection_pool import ConnectionPool\nfrom .dispatch.proxy_http import HTTPProxy, HTTPProxyMode\nfrom .exceptions import (\n ConnectTimeout,\n CookieConflict,\n DecodingError,\n InvalidURL,\n NotRedirectResponse,\n PoolTimeout,\n ProtocolError,\n ProxyError,\n ReadTimeout,\n RedirectBodyUnavailable,\n RedirectLoop,\n ResponseClosed,\n ResponseNotRead,\n StreamConsumed,\n Timeout,\n TooManyRedirects,\n WriteTimeout,\n)\nfrom .middleware.digest_auth import DigestAuth\nfrom .models import (\n URL,\n AsyncRequest,\n AsyncRequestData,\n AsyncResponse,\n AsyncResponseContent,\n AuthTypes,\n Cookies,\n CookieTypes,\n Headers,\n HeaderTypes,\n Origin,\n QueryParams,\n QueryParamTypes,\n Request,\n RequestData,\n RequestFiles,\n Response,\n ResponseContent,\n URLTypes,\n)\nfrom .status_codes import StatusCode, codes\n\n__all__ = [\n \"__description__\",\n \"__title__\",\n \"__version__\",\n \"delete\",\n \"get\",\n \"head\",\n \"options\",\n \"patch\",\n \"post\",\n \"patch\",\n \"put\",\n \"request\",\n \"AsyncClient\",\n \"Client\",\n \"AsyncioBackend\",\n \"USER_AGENT\",\n \"CertTypes\",\n \"PoolLimits\",\n \"SSLConfig\",\n \"TimeoutConfig\",\n \"VerifyTypes\",\n \"HTTPConnection\",\n \"BasePoolSemaphore\",\n \"BaseBackgroundManager\",\n \"ConnectionPool\",\n \"HTTPProxy\",\n \"HTTPProxyMode\",\n \"ConnectTimeout\",\n \"CookieConflict\",\n \"DecodingError\",\n \"InvalidURL\",\n \"NotRedirectResponse\",\n \"PoolTimeout\",\n \"ProtocolError\",\n \"ReadTimeout\",\n \"RedirectBodyUnavailable\",\n \"RedirectLoop\",\n \"ResponseClosed\",\n \"ResponseNotRead\",\n \"StreamConsumed\",\n \"ProxyError\",\n \"Timeout\",\n \"TooManyRedirects\",\n \"WriteTimeout\",\n \"AsyncDispatcher\",\n \"BaseTCPStream\",\n \"ConcurrencyBackend\",\n \"Dispatcher\",\n \"URL\",\n \"URLTypes\",\n \"StatusCode\",\n \"codes\",\n \"TimeoutTypes\",\n 
\"HTTPVersionTypes\",\n \"HTTPVersionConfig\",\n \"AsyncRequest\",\n \"AsyncRequestData\",\n \"AsyncResponse\",\n \"AsyncResponseContent\",\n \"AuthTypes\",\n \"Cookies\",\n \"CookieTypes\",\n \"Headers\",\n \"HeaderTypes\",\n \"Origin\",\n \"QueryParams\",\n \"QueryParamTypes\",\n \"Request\",\n \"RequestData\",\n \"Response\",\n \"ResponseContent\",\n \"RequestFiles\",\n \"DigestAuth\",\n]\n", "path": "httpx/__init__.py"}], "after_files": [{"content": "from .__version__ import __description__, __title__, __version__\nfrom .api import delete, get, head, options, patch, post, put, request\nfrom .client import AsyncClient, Client\nfrom .concurrency.asyncio import AsyncioBackend\nfrom .concurrency.base import (\n BaseBackgroundManager,\n BasePoolSemaphore,\n BaseTCPStream,\n ConcurrencyBackend,\n)\nfrom .config import (\n USER_AGENT,\n CertTypes,\n HTTPVersionConfig,\n HTTPVersionTypes,\n PoolLimits,\n SSLConfig,\n TimeoutConfig,\n TimeoutTypes,\n VerifyTypes,\n)\nfrom .dispatch.base import AsyncDispatcher, Dispatcher\nfrom .dispatch.connection import HTTPConnection\nfrom .dispatch.connection_pool import ConnectionPool\nfrom .dispatch.proxy_http import HTTPProxy, HTTPProxyMode\nfrom .exceptions import (\n ConnectTimeout,\n CookieConflict,\n DecodingError,\n HTTPError,\n InvalidURL,\n NotRedirectResponse,\n PoolTimeout,\n ProtocolError,\n ProxyError,\n ReadTimeout,\n RedirectBodyUnavailable,\n RedirectLoop,\n ResponseClosed,\n ResponseNotRead,\n StreamConsumed,\n Timeout,\n TooManyRedirects,\n WriteTimeout,\n)\nfrom .middleware.digest_auth import DigestAuth\nfrom .models import (\n URL,\n AsyncRequest,\n AsyncRequestData,\n AsyncResponse,\n AsyncResponseContent,\n AuthTypes,\n Cookies,\n CookieTypes,\n Headers,\n HeaderTypes,\n Origin,\n QueryParams,\n QueryParamTypes,\n Request,\n RequestData,\n RequestFiles,\n Response,\n ResponseContent,\n URLTypes,\n)\nfrom .status_codes import StatusCode, codes\n\n__all__ = [\n \"__description__\",\n \"__title__\",\n \"__version__\",\n \"delete\",\n \"get\",\n \"head\",\n \"options\",\n \"patch\",\n \"post\",\n \"patch\",\n \"put\",\n \"request\",\n \"AsyncClient\",\n \"Client\",\n \"AsyncioBackend\",\n \"USER_AGENT\",\n \"CertTypes\",\n \"PoolLimits\",\n \"SSLConfig\",\n \"TimeoutConfig\",\n \"VerifyTypes\",\n \"HTTPConnection\",\n \"BasePoolSemaphore\",\n \"BaseBackgroundManager\",\n \"ConnectionPool\",\n \"HTTPProxy\",\n \"HTTPProxyMode\",\n \"ConnectTimeout\",\n \"CookieConflict\",\n \"DecodingError\",\n \"HTTPError\",\n \"InvalidURL\",\n \"NotRedirectResponse\",\n \"PoolTimeout\",\n \"ProtocolError\",\n \"ReadTimeout\",\n \"RedirectBodyUnavailable\",\n \"RedirectLoop\",\n \"ResponseClosed\",\n \"ResponseNotRead\",\n \"StreamConsumed\",\n \"ProxyError\",\n \"Timeout\",\n \"TooManyRedirects\",\n \"WriteTimeout\",\n \"AsyncDispatcher\",\n \"BaseTCPStream\",\n \"ConcurrencyBackend\",\n \"Dispatcher\",\n \"URL\",\n \"URLTypes\",\n \"StatusCode\",\n \"codes\",\n \"TimeoutTypes\",\n \"HTTPVersionTypes\",\n \"HTTPVersionConfig\",\n \"AsyncRequest\",\n \"AsyncRequestData\",\n \"AsyncResponse\",\n \"AsyncResponseContent\",\n \"AuthTypes\",\n \"Cookies\",\n \"CookieTypes\",\n \"Headers\",\n \"HeaderTypes\",\n \"Origin\",\n \"QueryParams\",\n \"QueryParamTypes\",\n \"Request\",\n \"RequestData\",\n \"Response\",\n \"ResponseContent\",\n \"RequestFiles\",\n \"DigestAuth\",\n]\n", "path": "httpx/__init__.py"}]} | 1,431 | 129 |
gh_patches_debug_6982 | rasdani/github-patches | git_diff | psf__black-3247 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Suggestion: Mention contextlib.ExitStack in "Use backslashes for with statements"?
**Is this related to a problem? Please describe.**
On [Black's future style doc](https://black.readthedocs.io/en/stable/the_black_code_style/future_style.html#using-backslashes-for-with-statements) it talks about `black` handling a `with` with multiple context managers in a few ways (Python-version-dependent).
**Describe the solution you'd like**
Black should suggest to the user (in documentation only) that if they don't like Black's current or future behavior, that constructing a `contextlib.ExitStack` can help the issue while maintaining 95%-ish of the code readability and has minimal overhead.
As an example (feel free to steal), and using the existing code on that page
```python
with contextlib.ExitStack() as exit_stack:
cm1 = exit_stack.enter_context(make_context_manager(1))
cm2 = exit_stack.enter_context(make_context_manager(2))
cm3 = exit_stack.enter_context(make_context_manager(3))
cm4 = exit_stack.enter_context(make_context_manager(4))
...
```
**Describe alternatives you've considered**
N/A
**Additional context**
:heart: black
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/stable/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15
16 import os
17 import string
18 from pathlib import Path
19
20 from pkg_resources import get_distribution
21
22 CURRENT_DIR = Path(__file__).parent
23
24
25 def make_pypi_svg(version: str) -> None:
26 template: Path = CURRENT_DIR / "_static" / "pypi_template.svg"
27 target: Path = CURRENT_DIR / "_static" / "pypi.svg"
28 with open(str(template), "r", encoding="utf8") as f:
29 svg: str = string.Template(f.read()).substitute(version=version)
30 with open(str(target), "w", encoding="utf8") as f:
31 f.write(svg)
32
33
34 # Necessary so Click doesn't hit an encode error when called by
35 # sphinxcontrib-programoutput on Windows.
36 os.putenv("pythonioencoding", "utf-8")
37
38 # -- Project information -----------------------------------------------------
39
40 project = "Black"
41 copyright = "2018-Present, Łukasz Langa and contributors to Black"
42 author = "Łukasz Langa and contributors to Black"
43
44 # Autopopulate version
45 # The version, including alpha/beta/rc tags, but not commit hash and datestamps
46 release = get_distribution("black").version.split("+")[0]
47 # The short X.Y version.
48 version = release
49 for sp in "abcfr":
50 version = version.split(sp)[0]
51
52 make_pypi_svg(release)
53
54
55 # -- General configuration ---------------------------------------------------
56
57 # If your documentation needs a minimal Sphinx version, state it here.
58 needs_sphinx = "3.0"
59
60 # Add any Sphinx extension module names here, as strings. They can be
61 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
62 # ones.
63 extensions = [
64 "sphinx.ext.autodoc",
65 "sphinx.ext.intersphinx",
66 "sphinx.ext.napoleon",
67 "myst_parser",
68 "sphinxcontrib.programoutput",
69 "sphinx_copybutton",
70 ]
71
72 # If you need extensions of a certain version or higher, list them here.
73 needs_extensions = {"myst_parser": "0.13.7"}
74
75 # Add any paths that contain templates here, relative to this directory.
76 templates_path = ["_templates"]
77
78 # The suffix(es) of source filenames.
79 # You can specify multiple suffix as a list of string:
80 source_suffix = [".rst", ".md"]
81
82 # The master toctree document.
83 master_doc = "index"
84
85 # The language for content autogenerated by Sphinx. Refer to documentation
86 # for a list of supported languages.
87 #
88 # This is also used if you do content translation via gettext catalogs.
89 # Usually you set "language" from the command line for these cases.
90 language = "en"
91
92 # List of patterns, relative to source directory, that match files and
93 # directories to ignore when looking for source files.
94 # This pattern also affects html_static_path and html_extra_path .
95
96 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
97
98 # The name of the Pygments (syntax highlighting) style to use.
99 pygments_style = "sphinx"
100
101 # We need headers to be linkable to so ask MyST-Parser to autogenerate anchor IDs for
102 # headers up to and including level 3.
103 myst_heading_anchors = 3
104
105 # Prettier support formatting some MyST syntax but not all, so let's disable the
106 # unsupported yet still enabled by default ones.
107 myst_disable_syntax = [
108 "colon_fence",
109 "myst_block_break",
110 "myst_line_comment",
111 "math_block",
112 ]
113
114 # Optional MyST Syntaxes
115 myst_enable_extensions = []
116
117 # -- Options for HTML output -------------------------------------------------
118
119 # The theme to use for HTML and HTML Help pages. See the documentation for
120 # a list of builtin themes.
121 #
122 html_theme = "furo"
123 html_logo = "_static/logo2-readme.png"
124
125 # Add any paths that contain custom static files (such as style sheets) here,
126 # relative to this directory. They are copied after the builtin static files,
127 # so a file named "default.css" will overwrite the builtin "default.css".
128 html_static_path = ["_static"]
129
130 # Custom sidebar templates, must be a dictionary that maps document names
131 # to template names.
132 #
133 # The default sidebars (for documents that don't match any pattern) are
134 # defined by theme itself. Builtin themes are using these templates by
135 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
136 # 'searchbox.html']``.
137 #
138 # html_sidebars = {}
139
140
141 # -- Options for HTMLHelp output ---------------------------------------------
142
143 # Output file base name for HTML help builder.
144 htmlhelp_basename = "blackdoc"
145
146
147 # -- Options for LaTeX output ------------------------------------------------
148
149 # Grouping the document tree into LaTeX files. List of tuples
150 # (source start file, target name, title,
151 # author, documentclass [howto, manual, or own class]).
152 latex_documents = [
153 (
154 master_doc,
155 "black.tex",
156 "Documentation for Black",
157 "Łukasz Langa and contributors to Black",
158 "manual",
159 )
160 ]
161
162
163 # -- Options for manual page output ------------------------------------------
164
165 # One entry per manual page. List of tuples
166 # (source start file, name, description, authors, manual section).
167 man_pages = [(master_doc, "black", "Documentation for Black", [author], 1)]
168
169
170 # -- Options for Texinfo output ----------------------------------------------
171
172 # Grouping the document tree into Texinfo files. List of tuples
173 # (source start file, target name, title, author,
174 # dir menu entry, description, category)
175 texinfo_documents = [
176 (
177 master_doc,
178 "Black",
179 "Documentation for Black",
180 author,
181 "Black",
182 "The uncompromising Python code formatter",
183 "Miscellaneous",
184 )
185 ]
186
187
188 # -- Options for Epub output -------------------------------------------------
189
190 # Bibliographic Dublin Core info.
191 epub_title = project
192 epub_author = author
193 epub_publisher = author
194 epub_copyright = copyright
195
196 # The unique identifier of the text. This can be a ISBN number
197 # or the project homepage.
198 #
199 # epub_identifier = ''
200
201 # A unique identification for the text.
202 #
203 # epub_uid = ''
204
205 # A list of files that should not be packed into the epub file.
206 epub_exclude_files = ["search.html"]
207
208
209 # -- Extension configuration -------------------------------------------------
210
211 autodoc_member_order = "bysource"
212
213 # -- Options for intersphinx extension ---------------------------------------
214
215 # Example configuration for intersphinx: refer to the Python standard library.
216 intersphinx_mapping = {"https://docs.python.org/3/": None}
217
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -55,7 +55,7 @@
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
-needs_sphinx = "3.0"
+needs_sphinx = "4.4"
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -55,7 +55,7 @@\n # -- General configuration ---------------------------------------------------\n \n # If your documentation needs a minimal Sphinx version, state it here.\n-needs_sphinx = \"3.0\"\n+needs_sphinx = \"4.4\"\n \n # Add any Sphinx extension module names here, as strings. They can be\n # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n", "issue": "Suggestion: Mention contextlib.ExitStack in \"Use backslashes for with statements\"?\n**Is this related to a problem? Please describe.**\r\n\r\nOn [Black's future style doc](https://black.readthedocs.io/en/stable/the_black_code_style/future_style.html#using-backslashes-for-with-statements) it talks about `black` handling a `with` with multiple context managers in a few ways (Python-version-dependent).\r\n\r\n**Describe the solution you'd like**\r\n\r\nBlack should suggest to the user (in documentation only) that if they don't like Black's current or future behavior, that constructing a `contextlib.ExitStack` can help the issue while maintaining 95%-ish of the code readability and has minimal overhead.\r\n\r\nAs an example (feel free to steal), and using the existing code on that page\r\n\r\n```python\r\nwith contextlib.ExitStack() as exit_stack:\r\n cm1 = exit_stack.enter_context(make_context_manager(1))\r\n cm2 = exit_stack.enter_context(make_context_manager(2))\r\n cm3 = exit_stack.enter_context(make_context_manager(3))\r\n cm4 = exit_stack.enter_context(make_context_manager(4))\r\n ...\r\n```\r\n\r\n**Describe alternatives you've considered**\r\n\r\nN/A\r\n\r\n**Additional context**\r\n\r\n:heart: black\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. 
If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport string\nfrom pathlib import Path\n\nfrom pkg_resources import get_distribution\n\nCURRENT_DIR = Path(__file__).parent\n\n\ndef make_pypi_svg(version: str) -> None:\n template: Path = CURRENT_DIR / \"_static\" / \"pypi_template.svg\"\n target: Path = CURRENT_DIR / \"_static\" / \"pypi.svg\"\n with open(str(template), \"r\", encoding=\"utf8\") as f:\n svg: str = string.Template(f.read()).substitute(version=version)\n with open(str(target), \"w\", encoding=\"utf8\") as f:\n f.write(svg)\n\n\n# Necessary so Click doesn't hit an encode error when called by\n# sphinxcontrib-programoutput on Windows.\nos.putenv(\"pythonioencoding\", \"utf-8\")\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Black\"\ncopyright = \"2018-Present, \u0141ukasz Langa and contributors to Black\"\nauthor = \"\u0141ukasz Langa and contributors to Black\"\n\n# Autopopulate version\n# The version, including alpha/beta/rc tags, but not commit hash and datestamps\nrelease = get_distribution(\"black\").version.split(\"+\")[0]\n# The short X.Y version.\nversion = release\nfor sp in \"abcfr\":\n version = version.split(sp)[0]\n\nmake_pypi_svg(release)\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = \"3.0\"\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"myst_parser\",\n \"sphinxcontrib.programoutput\",\n \"sphinx_copybutton\",\n]\n\n# If you need extensions of a certain version or higher, list them here.\nneeds_extensions = {\"myst_parser\": \"0.13.7\"}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"en\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\n\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# We need headers to be linkable to so ask MyST-Parser to autogenerate anchor IDs for\n# headers up to and including level 3.\nmyst_heading_anchors = 3\n\n# Prettier support formatting some MyST syntax but not all, so let's disable the\n# unsupported yet still enabled by default ones.\nmyst_disable_syntax = [\n \"colon_fence\",\n \"myst_block_break\",\n \"myst_line_comment\",\n \"math_block\",\n]\n\n# Optional MyST Syntaxes\nmyst_enable_extensions = []\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"furo\"\nhtml_logo = \"_static/logo2-readme.png\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"blackdoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"black.tex\",\n \"Documentation for Black\",\n \"\u0141ukasz Langa and contributors to Black\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"black\", \"Documentation for Black\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Black\",\n \"Documentation for Black\",\n author,\n \"Black\",\n \"The uncompromising Python code formatter\",\n \"Miscellaneous\",\n )\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n\n\n# -- Extension configuration -------------------------------------------------\n\nautodoc_member_order = \"bysource\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/3/\": None}\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. 
If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport string\nfrom pathlib import Path\n\nfrom pkg_resources import get_distribution\n\nCURRENT_DIR = Path(__file__).parent\n\n\ndef make_pypi_svg(version: str) -> None:\n template: Path = CURRENT_DIR / \"_static\" / \"pypi_template.svg\"\n target: Path = CURRENT_DIR / \"_static\" / \"pypi.svg\"\n with open(str(template), \"r\", encoding=\"utf8\") as f:\n svg: str = string.Template(f.read()).substitute(version=version)\n with open(str(target), \"w\", encoding=\"utf8\") as f:\n f.write(svg)\n\n\n# Necessary so Click doesn't hit an encode error when called by\n# sphinxcontrib-programoutput on Windows.\nos.putenv(\"pythonioencoding\", \"utf-8\")\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Black\"\ncopyright = \"2018-Present, \u0141ukasz Langa and contributors to Black\"\nauthor = \"\u0141ukasz Langa and contributors to Black\"\n\n# Autopopulate version\n# The version, including alpha/beta/rc tags, but not commit hash and datestamps\nrelease = get_distribution(\"black\").version.split(\"+\")[0]\n# The short X.Y version.\nversion = release\nfor sp in \"abcfr\":\n version = version.split(sp)[0]\n\nmake_pypi_svg(release)\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = \"4.4\"\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"myst_parser\",\n \"sphinxcontrib.programoutput\",\n \"sphinx_copybutton\",\n]\n\n# If you need extensions of a certain version or higher, list them here.\nneeds_extensions = {\"myst_parser\": \"0.13.7\"}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"en\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\n\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# We need headers to be linkable to so ask MyST-Parser to autogenerate anchor IDs for\n# headers up to and including level 3.\nmyst_heading_anchors = 3\n\n# Prettier support formatting some MyST syntax but not all, so let's disable the\n# unsupported yet still enabled by default ones.\nmyst_disable_syntax = [\n \"colon_fence\",\n \"myst_block_break\",\n \"myst_line_comment\",\n \"math_block\",\n]\n\n# Optional MyST Syntaxes\nmyst_enable_extensions = []\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"furo\"\nhtml_logo = \"_static/logo2-readme.png\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"blackdoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"black.tex\",\n \"Documentation for Black\",\n \"\u0141ukasz Langa and contributors to Black\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"black\", \"Documentation for Black\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Black\",\n \"Documentation for Black\",\n author,\n \"Black\",\n \"The uncompromising Python code formatter\",\n \"Miscellaneous\",\n )\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n\n\n# -- Extension configuration -------------------------------------------------\n\nautodoc_member_order = \"bysource\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/3/\": None}\n", "path": "docs/conf.py"}]} | 2,599 | 111 |
gh_patches_debug_16770 | rasdani/github-patches | git_diff | nautobot__nautobot-5736 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Family column to Device Types table
### Environment
* Nautobot version (Docker tag too if applicable): 2.2.3
* Python version: 3.11
* Database platform, version: postgres 15
* Middleware(s):
### Steps to Reproduce
1. Go to Device Types list https://demo.nautobot.com/dcim/device-types/
2. Try to add Family in Table Configuration
3. Family is not in the list of available columns
#3559 #5040
https://github.com/nautobot/nautobot/blob/42440ebd9b381534ad89d62420ebea00d703d64e/nautobot/dcim/tables/devicetypes.py#L95
### Expected Behavior
Family column can be selected in Device Types list
### Observed Behavior
Family column is not in the list of available columns
Add Family column to Device Types table
### Environment
* Nautobot version (Docker tag too if applicable): 2.2.3
* Python version: 3.11
* Database platform, version: postgres 15
* Middleware(s):
### Steps to Reproduce
1. Go to Device Types list https://demo.nautobot.com/dcim/device-types/
2. Try to add Family in Table Configuration
3. Family is not in the list of available columns
#3559 #5040
https://github.com/nautobot/nautobot/blob/42440ebd9b381534ad89d62420ebea00d703d64e/nautobot/dcim/tables/devicetypes.py#L95
### Expected Behavior
Family column can be selected in Device Types list
### Observed Behavior
Family column is not in the list of available columns
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/dcim/tables/devicetypes.py`
Content:
```
1 import django_tables2 as tables
2
3 from nautobot.core.tables import (
4 BaseTable,
5 BooleanColumn,
6 ButtonsColumn,
7 LinkedCountColumn,
8 TagColumn,
9 ToggleColumn,
10 )
11 from nautobot.dcim.models import (
12 ConsolePortTemplate,
13 ConsoleServerPortTemplate,
14 DeviceBayTemplate,
15 DeviceFamily,
16 DeviceType,
17 FrontPortTemplate,
18 InterfaceTemplate,
19 Manufacturer,
20 PowerOutletTemplate,
21 PowerPortTemplate,
22 RearPortTemplate,
23 )
24
25 __all__ = (
26 "ConsolePortTemplateTable",
27 "ConsoleServerPortTemplateTable",
28 "DeviceBayTemplateTable",
29 "DeviceFamilyTable",
30 "DeviceTypeTable",
31 "FrontPortTemplateTable",
32 "InterfaceTemplateTable",
33 "ManufacturerTable",
34 "PowerOutletTemplateTable",
35 "PowerPortTemplateTable",
36 "RearPortTemplateTable",
37 )
38
39
40 #
41 # Manufacturers
42 #
43
44
45 class ManufacturerTable(BaseTable):
46 pk = ToggleColumn()
47 name = tables.LinkColumn()
48 device_type_count = LinkedCountColumn(
49 viewname="dcim:devicetype_list", url_params={"manufacturer": "name"}, verbose_name="Device Types"
50 )
51 inventory_item_count = LinkedCountColumn(
52 viewname="dcim:inventoryitem_list", url_params={"manufacturer": "name"}, verbose_name="Inventory Items"
53 )
54 platform_count = LinkedCountColumn(
55 viewname="dcim:platform_list", url_params={"manufacturer": "name"}, verbose_name="Platforms"
56 )
57 actions = ButtonsColumn(Manufacturer)
58
59 class Meta(BaseTable.Meta):
60 model = Manufacturer
61 fields = (
62 "pk",
63 "name",
64 "device_type_count",
65 "inventory_item_count",
66 "platform_count",
67 "description",
68 "actions",
69 )
70
71
72 #
73 # Device Family
74 #
75
76
77 class DeviceFamilyTable(BaseTable):
78 pk = ToggleColumn()
79 name = tables.Column(linkify=True)
80 device_type_count = LinkedCountColumn(
81 viewname="dcim:devicetype_list", url_params={"device_family": "name"}, verbose_name="Device Types"
82 )
83 actions = ButtonsColumn(DeviceFamily)
84 tags = TagColumn(url_name="dcim:devicefamily_list")
85
86 class Meta(BaseTable.Meta):
87 model = DeviceFamily
88 fields = (
89 "pk",
90 "name",
91 "device_type_count",
92 "description",
93 "actions",
94 "tags",
95 )
96
97
98 #
99 # Device types
100 #
101
102
103 class DeviceTypeTable(BaseTable):
104 pk = ToggleColumn()
105 model = tables.Column(linkify=True, verbose_name="Device Type")
106 is_full_depth = BooleanColumn(verbose_name="Full Depth")
107 device_count = LinkedCountColumn(
108 viewname="dcim:device_list",
109 url_params={"device_type": "pk"},
110 verbose_name="Devices",
111 )
112 tags = TagColumn(url_name="dcim:devicetype_list")
113
114 class Meta(BaseTable.Meta):
115 model = DeviceType
116 fields = (
117 "pk",
118 "model",
119 "manufacturer",
120 "part_number",
121 "u_height",
122 "is_full_depth",
123 "subdevice_role",
124 "device_count",
125 "tags",
126 )
127 default_columns = (
128 "pk",
129 "model",
130 "manufacturer",
131 "part_number",
132 "u_height",
133 "is_full_depth",
134 "device_count",
135 )
136
137
138 #
139 # Device type components
140 #
141
142
143 class ComponentTemplateTable(BaseTable):
144 pk = ToggleColumn()
145 name = tables.Column(order_by=("_name",))
146
147
148 class ConsolePortTemplateTable(ComponentTemplateTable):
149 actions = ButtonsColumn(
150 model=ConsolePortTemplate,
151 buttons=("edit", "delete"),
152 return_url_extra="%23tab_consoleports",
153 )
154
155 class Meta(BaseTable.Meta):
156 model = ConsolePortTemplate
157 fields = ("pk", "name", "label", "type", "description", "actions")
158 empty_text = "None"
159
160
161 class ConsoleServerPortTemplateTable(ComponentTemplateTable):
162 actions = ButtonsColumn(
163 model=ConsoleServerPortTemplate,
164 buttons=("edit", "delete"),
165 return_url_extra="%23tab_consoleserverports",
166 )
167
168 class Meta(BaseTable.Meta):
169 model = ConsoleServerPortTemplate
170 fields = ("pk", "name", "label", "type", "description", "actions")
171 empty_text = "None"
172
173
174 class PowerPortTemplateTable(ComponentTemplateTable):
175 actions = ButtonsColumn(
176 model=PowerPortTemplate,
177 buttons=("edit", "delete"),
178 return_url_extra="%23tab_powerports",
179 )
180
181 class Meta(BaseTable.Meta):
182 model = PowerPortTemplate
183 fields = (
184 "pk",
185 "name",
186 "label",
187 "type",
188 "maximum_draw",
189 "allocated_draw",
190 "description",
191 "actions",
192 )
193 empty_text = "None"
194
195
196 class PowerOutletTemplateTable(ComponentTemplateTable):
197 actions = ButtonsColumn(
198 model=PowerOutletTemplate,
199 buttons=("edit", "delete"),
200 return_url_extra="%23tab_poweroutlets",
201 )
202
203 class Meta(BaseTable.Meta):
204 model = PowerOutletTemplate
205 fields = (
206 "pk",
207 "name",
208 "label",
209 "type",
210 "power_port",
211 "feed_leg",
212 "description",
213 "actions",
214 )
215 empty_text = "None"
216
217
218 class InterfaceTemplateTable(ComponentTemplateTable):
219 mgmt_only = BooleanColumn(verbose_name="Management Only")
220 actions = ButtonsColumn(
221 model=InterfaceTemplate,
222 buttons=("edit", "delete"),
223 return_url_extra="%23tab_interfaces",
224 )
225
226 class Meta(BaseTable.Meta):
227 model = InterfaceTemplate
228 fields = ("pk", "name", "label", "mgmt_only", "type", "description", "actions")
229 empty_text = "None"
230
231
232 class FrontPortTemplateTable(ComponentTemplateTable):
233 rear_port_position = tables.Column(verbose_name="Position")
234 actions = ButtonsColumn(
235 model=FrontPortTemplate,
236 buttons=("edit", "delete"),
237 return_url_extra="%23tab_frontports",
238 )
239
240 class Meta(BaseTable.Meta):
241 model = FrontPortTemplate
242 fields = (
243 "pk",
244 "name",
245 "label",
246 "type",
247 "rear_port",
248 "rear_port_position",
249 "description",
250 "actions",
251 )
252 empty_text = "None"
253
254
255 class RearPortTemplateTable(ComponentTemplateTable):
256 actions = ButtonsColumn(
257 model=RearPortTemplate,
258 buttons=("edit", "delete"),
259 return_url_extra="%23tab_rearports",
260 )
261
262 class Meta(BaseTable.Meta):
263 model = RearPortTemplate
264 fields = ("pk", "name", "label", "type", "positions", "description", "actions")
265 empty_text = "None"
266
267
268 class DeviceBayTemplateTable(ComponentTemplateTable):
269 actions = ButtonsColumn(
270 model=DeviceBayTemplate,
271 buttons=("edit", "delete"),
272 return_url_extra="%23tab_devicebays",
273 )
274
275 class Meta(BaseTable.Meta):
276 model = DeviceBayTemplate
277 fields = ("pk", "name", "label", "description", "actions")
278 empty_text = "None"
279
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/dcim/tables/devicetypes.py b/nautobot/dcim/tables/devicetypes.py
--- a/nautobot/dcim/tables/devicetypes.py
+++ b/nautobot/dcim/tables/devicetypes.py
@@ -103,6 +103,8 @@
class DeviceTypeTable(BaseTable):
pk = ToggleColumn()
model = tables.Column(linkify=True, verbose_name="Device Type")
+ manufacturer = tables.Column(linkify=True)
+ device_family = tables.Column(linkify=True)
is_full_depth = BooleanColumn(verbose_name="Full Depth")
device_count = LinkedCountColumn(
viewname="dcim:device_list",
@@ -117,6 +119,7 @@
"pk",
"model",
"manufacturer",
+ "device_family",
"part_number",
"u_height",
"is_full_depth",
| {"golden_diff": "diff --git a/nautobot/dcim/tables/devicetypes.py b/nautobot/dcim/tables/devicetypes.py\n--- a/nautobot/dcim/tables/devicetypes.py\n+++ b/nautobot/dcim/tables/devicetypes.py\n@@ -103,6 +103,8 @@\n class DeviceTypeTable(BaseTable):\n pk = ToggleColumn()\n model = tables.Column(linkify=True, verbose_name=\"Device Type\")\n+ manufacturer = tables.Column(linkify=True)\n+ device_family = tables.Column(linkify=True)\n is_full_depth = BooleanColumn(verbose_name=\"Full Depth\")\n device_count = LinkedCountColumn(\n viewname=\"dcim:device_list\",\n@@ -117,6 +119,7 @@\n \"pk\",\n \"model\",\n \"manufacturer\",\n+ \"device_family\",\n \"part_number\",\n \"u_height\",\n \"is_full_depth\",\n", "issue": "Add Family column to Device Types table\n### Environment\r\n* Nautobot version (Docker tag too if applicable): 2.2.3\r\n* Python version: 3.11\r\n* Database platform, version: postgres 15\r\n* Middleware(s):\r\n\r\n### Steps to Reproduce\r\n1. Go to Device Types list https://demo.nautobot.com/dcim/device-types/\r\n2. Try to add Family in Table Configuration\r\n3. Family is not in the list of available columns\r\n\r\n#3559 #5040 \r\nhttps://github.com/nautobot/nautobot/blob/42440ebd9b381534ad89d62420ebea00d703d64e/nautobot/dcim/tables/devicetypes.py#L95\r\n\r\n### Expected Behavior\r\nFamily column can be selected in Device Types list\r\n\r\n### Observed Behavior\r\nFamily column is not in the list of available columns\nAdd Family column to Device Types table\n### Environment\r\n* Nautobot version (Docker tag too if applicable): 2.2.3\r\n* Python version: 3.11\r\n* Database platform, version: postgres 15\r\n* Middleware(s):\r\n\r\n### Steps to Reproduce\r\n1. Go to Device Types list https://demo.nautobot.com/dcim/device-types/\r\n2. Try to add Family in Table Configuration\r\n3. 
Family is not in the list of available columns\r\n\r\n#3559 #5040 \r\nhttps://github.com/nautobot/nautobot/blob/42440ebd9b381534ad89d62420ebea00d703d64e/nautobot/dcim/tables/devicetypes.py#L95\r\n\r\n### Expected Behavior\r\nFamily column can be selected in Device Types list\r\n\r\n### Observed Behavior\r\nFamily column is not in the list of available columns\n", "before_files": [{"content": "import django_tables2 as tables\n\nfrom nautobot.core.tables import (\n BaseTable,\n BooleanColumn,\n ButtonsColumn,\n LinkedCountColumn,\n TagColumn,\n ToggleColumn,\n)\nfrom nautobot.dcim.models import (\n ConsolePortTemplate,\n ConsoleServerPortTemplate,\n DeviceBayTemplate,\n DeviceFamily,\n DeviceType,\n FrontPortTemplate,\n InterfaceTemplate,\n Manufacturer,\n PowerOutletTemplate,\n PowerPortTemplate,\n RearPortTemplate,\n)\n\n__all__ = (\n \"ConsolePortTemplateTable\",\n \"ConsoleServerPortTemplateTable\",\n \"DeviceBayTemplateTable\",\n \"DeviceFamilyTable\",\n \"DeviceTypeTable\",\n \"FrontPortTemplateTable\",\n \"InterfaceTemplateTable\",\n \"ManufacturerTable\",\n \"PowerOutletTemplateTable\",\n \"PowerPortTemplateTable\",\n \"RearPortTemplateTable\",\n)\n\n\n#\n# Manufacturers\n#\n\n\nclass ManufacturerTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn()\n device_type_count = LinkedCountColumn(\n viewname=\"dcim:devicetype_list\", url_params={\"manufacturer\": \"name\"}, verbose_name=\"Device Types\"\n )\n inventory_item_count = LinkedCountColumn(\n viewname=\"dcim:inventoryitem_list\", url_params={\"manufacturer\": \"name\"}, verbose_name=\"Inventory Items\"\n )\n platform_count = LinkedCountColumn(\n viewname=\"dcim:platform_list\", url_params={\"manufacturer\": \"name\"}, verbose_name=\"Platforms\"\n )\n actions = ButtonsColumn(Manufacturer)\n\n class Meta(BaseTable.Meta):\n model = Manufacturer\n fields = (\n \"pk\",\n \"name\",\n \"device_type_count\",\n \"inventory_item_count\",\n \"platform_count\",\n \"description\",\n \"actions\",\n )\n\n\n#\n# Device Family\n#\n\n\nclass DeviceFamilyTable(BaseTable):\n pk = ToggleColumn()\n name = tables.Column(linkify=True)\n device_type_count = LinkedCountColumn(\n viewname=\"dcim:devicetype_list\", url_params={\"device_family\": \"name\"}, verbose_name=\"Device Types\"\n )\n actions = ButtonsColumn(DeviceFamily)\n tags = TagColumn(url_name=\"dcim:devicefamily_list\")\n\n class Meta(BaseTable.Meta):\n model = DeviceFamily\n fields = (\n \"pk\",\n \"name\",\n \"device_type_count\",\n \"description\",\n \"actions\",\n \"tags\",\n )\n\n\n#\n# Device types\n#\n\n\nclass DeviceTypeTable(BaseTable):\n pk = ToggleColumn()\n model = tables.Column(linkify=True, verbose_name=\"Device Type\")\n is_full_depth = BooleanColumn(verbose_name=\"Full Depth\")\n device_count = LinkedCountColumn(\n viewname=\"dcim:device_list\",\n url_params={\"device_type\": \"pk\"},\n verbose_name=\"Devices\",\n )\n tags = TagColumn(url_name=\"dcim:devicetype_list\")\n\n class Meta(BaseTable.Meta):\n model = DeviceType\n fields = (\n \"pk\",\n \"model\",\n \"manufacturer\",\n \"part_number\",\n \"u_height\",\n \"is_full_depth\",\n \"subdevice_role\",\n \"device_count\",\n \"tags\",\n )\n default_columns = (\n \"pk\",\n \"model\",\n \"manufacturer\",\n \"part_number\",\n \"u_height\",\n \"is_full_depth\",\n \"device_count\",\n )\n\n\n#\n# Device type components\n#\n\n\nclass ComponentTemplateTable(BaseTable):\n pk = ToggleColumn()\n name = tables.Column(order_by=(\"_name\",))\n\n\nclass ConsolePortTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n 
model=ConsolePortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_consoleports\",\n )\n\n class Meta(BaseTable.Meta):\n model = ConsolePortTemplate\n fields = (\"pk\", \"name\", \"label\", \"type\", \"description\", \"actions\")\n empty_text = \"None\"\n\n\nclass ConsoleServerPortTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=ConsoleServerPortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_consoleserverports\",\n )\n\n class Meta(BaseTable.Meta):\n model = ConsoleServerPortTemplate\n fields = (\"pk\", \"name\", \"label\", \"type\", \"description\", \"actions\")\n empty_text = \"None\"\n\n\nclass PowerPortTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=PowerPortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_powerports\",\n )\n\n class Meta(BaseTable.Meta):\n model = PowerPortTemplate\n fields = (\n \"pk\",\n \"name\",\n \"label\",\n \"type\",\n \"maximum_draw\",\n \"allocated_draw\",\n \"description\",\n \"actions\",\n )\n empty_text = \"None\"\n\n\nclass PowerOutletTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=PowerOutletTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_poweroutlets\",\n )\n\n class Meta(BaseTable.Meta):\n model = PowerOutletTemplate\n fields = (\n \"pk\",\n \"name\",\n \"label\",\n \"type\",\n \"power_port\",\n \"feed_leg\",\n \"description\",\n \"actions\",\n )\n empty_text = \"None\"\n\n\nclass InterfaceTemplateTable(ComponentTemplateTable):\n mgmt_only = BooleanColumn(verbose_name=\"Management Only\")\n actions = ButtonsColumn(\n model=InterfaceTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_interfaces\",\n )\n\n class Meta(BaseTable.Meta):\n model = InterfaceTemplate\n fields = (\"pk\", \"name\", \"label\", \"mgmt_only\", \"type\", \"description\", \"actions\")\n empty_text = \"None\"\n\n\nclass FrontPortTemplateTable(ComponentTemplateTable):\n rear_port_position = tables.Column(verbose_name=\"Position\")\n actions = ButtonsColumn(\n model=FrontPortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_frontports\",\n )\n\n class Meta(BaseTable.Meta):\n model = FrontPortTemplate\n fields = (\n \"pk\",\n \"name\",\n \"label\",\n \"type\",\n \"rear_port\",\n \"rear_port_position\",\n \"description\",\n \"actions\",\n )\n empty_text = \"None\"\n\n\nclass RearPortTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=RearPortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_rearports\",\n )\n\n class Meta(BaseTable.Meta):\n model = RearPortTemplate\n fields = (\"pk\", \"name\", \"label\", \"type\", \"positions\", \"description\", \"actions\")\n empty_text = \"None\"\n\n\nclass DeviceBayTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=DeviceBayTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_devicebays\",\n )\n\n class Meta(BaseTable.Meta):\n model = DeviceBayTemplate\n fields = (\"pk\", \"name\", \"label\", \"description\", \"actions\")\n empty_text = \"None\"\n", "path": "nautobot/dcim/tables/devicetypes.py"}], "after_files": [{"content": "import django_tables2 as tables\n\nfrom nautobot.core.tables import (\n BaseTable,\n BooleanColumn,\n ButtonsColumn,\n LinkedCountColumn,\n TagColumn,\n ToggleColumn,\n)\nfrom nautobot.dcim.models import (\n ConsolePortTemplate,\n ConsoleServerPortTemplate,\n DeviceBayTemplate,\n DeviceFamily,\n DeviceType,\n FrontPortTemplate,\n 
InterfaceTemplate,\n Manufacturer,\n PowerOutletTemplate,\n PowerPortTemplate,\n RearPortTemplate,\n)\n\n__all__ = (\n \"ConsolePortTemplateTable\",\n \"ConsoleServerPortTemplateTable\",\n \"DeviceBayTemplateTable\",\n \"DeviceFamilyTable\",\n \"DeviceTypeTable\",\n \"FrontPortTemplateTable\",\n \"InterfaceTemplateTable\",\n \"ManufacturerTable\",\n \"PowerOutletTemplateTable\",\n \"PowerPortTemplateTable\",\n \"RearPortTemplateTable\",\n)\n\n\n#\n# Manufacturers\n#\n\n\nclass ManufacturerTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn()\n device_type_count = LinkedCountColumn(\n viewname=\"dcim:devicetype_list\", url_params={\"manufacturer\": \"name\"}, verbose_name=\"Device Types\"\n )\n inventory_item_count = LinkedCountColumn(\n viewname=\"dcim:inventoryitem_list\", url_params={\"manufacturer\": \"name\"}, verbose_name=\"Inventory Items\"\n )\n platform_count = LinkedCountColumn(\n viewname=\"dcim:platform_list\", url_params={\"manufacturer\": \"name\"}, verbose_name=\"Platforms\"\n )\n actions = ButtonsColumn(Manufacturer)\n\n class Meta(BaseTable.Meta):\n model = Manufacturer\n fields = (\n \"pk\",\n \"name\",\n \"device_type_count\",\n \"inventory_item_count\",\n \"platform_count\",\n \"description\",\n \"actions\",\n )\n\n\n#\n# Device Family\n#\n\n\nclass DeviceFamilyTable(BaseTable):\n pk = ToggleColumn()\n name = tables.Column(linkify=True)\n device_type_count = LinkedCountColumn(\n viewname=\"dcim:devicetype_list\", url_params={\"device_family\": \"name\"}, verbose_name=\"Device Types\"\n )\n actions = ButtonsColumn(DeviceFamily)\n tags = TagColumn(url_name=\"dcim:devicefamily_list\")\n\n class Meta(BaseTable.Meta):\n model = DeviceFamily\n fields = (\n \"pk\",\n \"name\",\n \"device_type_count\",\n \"description\",\n \"actions\",\n \"tags\",\n )\n\n\n#\n# Device types\n#\n\n\nclass DeviceTypeTable(BaseTable):\n pk = ToggleColumn()\n model = tables.Column(linkify=True, verbose_name=\"Device Type\")\n manufacturer = tables.Column(linkify=True)\n device_family = tables.Column(linkify=True)\n is_full_depth = BooleanColumn(verbose_name=\"Full Depth\")\n device_count = LinkedCountColumn(\n viewname=\"dcim:device_list\",\n url_params={\"device_type\": \"pk\"},\n verbose_name=\"Devices\",\n )\n tags = TagColumn(url_name=\"dcim:devicetype_list\")\n\n class Meta(BaseTable.Meta):\n model = DeviceType\n fields = (\n \"pk\",\n \"model\",\n \"manufacturer\",\n \"device_family\",\n \"part_number\",\n \"u_height\",\n \"is_full_depth\",\n \"subdevice_role\",\n \"device_count\",\n \"tags\",\n )\n default_columns = (\n \"pk\",\n \"model\",\n \"manufacturer\",\n \"part_number\",\n \"u_height\",\n \"is_full_depth\",\n \"device_count\",\n )\n\n\n#\n# Device type components\n#\n\n\nclass ComponentTemplateTable(BaseTable):\n pk = ToggleColumn()\n name = tables.Column(order_by=(\"_name\",))\n\n\nclass ConsolePortTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=ConsolePortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_consoleports\",\n )\n\n class Meta(BaseTable.Meta):\n model = ConsolePortTemplate\n fields = (\"pk\", \"name\", \"label\", \"type\", \"description\", \"actions\")\n empty_text = \"None\"\n\n\nclass ConsoleServerPortTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=ConsoleServerPortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_consoleserverports\",\n )\n\n class Meta(BaseTable.Meta):\n model = ConsoleServerPortTemplate\n fields = (\"pk\", \"name\", \"label\", \"type\", 
\"description\", \"actions\")\n empty_text = \"None\"\n\n\nclass PowerPortTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=PowerPortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_powerports\",\n )\n\n class Meta(BaseTable.Meta):\n model = PowerPortTemplate\n fields = (\n \"pk\",\n \"name\",\n \"label\",\n \"type\",\n \"maximum_draw\",\n \"allocated_draw\",\n \"description\",\n \"actions\",\n )\n empty_text = \"None\"\n\n\nclass PowerOutletTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=PowerOutletTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_poweroutlets\",\n )\n\n class Meta(BaseTable.Meta):\n model = PowerOutletTemplate\n fields = (\n \"pk\",\n \"name\",\n \"label\",\n \"type\",\n \"power_port\",\n \"feed_leg\",\n \"description\",\n \"actions\",\n )\n empty_text = \"None\"\n\n\nclass InterfaceTemplateTable(ComponentTemplateTable):\n mgmt_only = BooleanColumn(verbose_name=\"Management Only\")\n actions = ButtonsColumn(\n model=InterfaceTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_interfaces\",\n )\n\n class Meta(BaseTable.Meta):\n model = InterfaceTemplate\n fields = (\"pk\", \"name\", \"label\", \"mgmt_only\", \"type\", \"description\", \"actions\")\n empty_text = \"None\"\n\n\nclass FrontPortTemplateTable(ComponentTemplateTable):\n rear_port_position = tables.Column(verbose_name=\"Position\")\n actions = ButtonsColumn(\n model=FrontPortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_frontports\",\n )\n\n class Meta(BaseTable.Meta):\n model = FrontPortTemplate\n fields = (\n \"pk\",\n \"name\",\n \"label\",\n \"type\",\n \"rear_port\",\n \"rear_port_position\",\n \"description\",\n \"actions\",\n )\n empty_text = \"None\"\n\n\nclass RearPortTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=RearPortTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_rearports\",\n )\n\n class Meta(BaseTable.Meta):\n model = RearPortTemplate\n fields = (\"pk\", \"name\", \"label\", \"type\", \"positions\", \"description\", \"actions\")\n empty_text = \"None\"\n\n\nclass DeviceBayTemplateTable(ComponentTemplateTable):\n actions = ButtonsColumn(\n model=DeviceBayTemplate,\n buttons=(\"edit\", \"delete\"),\n return_url_extra=\"%23tab_devicebays\",\n )\n\n class Meta(BaseTable.Meta):\n model = DeviceBayTemplate\n fields = (\"pk\", \"name\", \"label\", \"description\", \"actions\")\n empty_text = \"None\"\n", "path": "nautobot/dcim/tables/devicetypes.py"}]} | 2,967 | 198 |
gh_patches_debug_10189 | rasdani/github-patches | git_diff | beetbox__beets-1980 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
hook: Crash when using non-ASCII paths in command template
Using the config
```
hook:
hooks:
- event: album_imported
command: beet convert -ay path:{album.path}
```
and this command
```
beet import -a -L path:/path/to/lib/Ария
```
I get an UnicodeDecodeError from `hook.py` line 50.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/hook.py`
Content:
```
1 # This file is part of beets.
2 # Copyright 2015, Adrian Sampson.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Allows custom commands to be run when an event is emitted by beets"""
16 from __future__ import division, absolute_import, print_function
17
18 import shlex
19 import subprocess
20
21 from beets.plugins import BeetsPlugin
22 from beets.ui import _arg_encoding
23
24
25 class HookPlugin(BeetsPlugin):
26 """Allows custom commands to be run when an event is emitted by beets"""
27 def __init__(self):
28 super(HookPlugin, self).__init__()
29
30 self.config.add({
31 'hooks': []
32 })
33
34 hooks = self.config['hooks'].get(list)
35
36 for hook_index in range(len(hooks)):
37 hook = self.config['hooks'][hook_index]
38
39 hook_event = hook['event'].get()
40 hook_command = hook['command'].get()
41
42 self.create_and_register_hook(hook_event, hook_command)
43
44 def create_and_register_hook(self, event, command):
45 def hook_function(**kwargs):
46 if command is None or len(command) == 0:
47 self._log.error('invalid command "{0}"', command)
48 return
49
50 formatted_command = command.format(event=event, **kwargs)
51 encoded_command = formatted_command.decode(_arg_encoding())
52 command_pieces = shlex.split(encoded_command)
53
54 self._log.debug('Running command "{0}" for event {1}',
55 encoded_command, event)
56
57 try:
58 subprocess.Popen(command_pieces).wait()
59 except OSError as exc:
60 self._log.error('hook for {0} failed: {1}', event, exc)
61
62 self.register_listener(event, hook_function)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/beetsplug/hook.py b/beetsplug/hook.py
--- a/beetsplug/hook.py
+++ b/beetsplug/hook.py
@@ -47,7 +47,9 @@
self._log.error('invalid command "{0}"', command)
return
- formatted_command = command.format(event=event, **kwargs)
+ unicode_command = command.decode('utf-8')
+ formatted_command = unicode_command.format(event=event,
+ **kwargs)
encoded_command = formatted_command.decode(_arg_encoding())
command_pieces = shlex.split(encoded_command)
| {"golden_diff": "diff --git a/beetsplug/hook.py b/beetsplug/hook.py\n--- a/beetsplug/hook.py\n+++ b/beetsplug/hook.py\n@@ -47,7 +47,9 @@\n self._log.error('invalid command \"{0}\"', command)\n return\n \n- formatted_command = command.format(event=event, **kwargs)\n+ unicode_command = command.decode('utf-8')\n+ formatted_command = unicode_command.format(event=event,\n+ **kwargs)\n encoded_command = formatted_command.decode(_arg_encoding())\n command_pieces = shlex.split(encoded_command)\n", "issue": "hook: Crash when using non-ASCII paths in command template\nUsing the config\n\n```\nhook:\n hooks:\n - event: album_imported\n command: beet convert -ay path:{album.path}\n```\n\nand this command\n\n```\nbeet import -a -L path:/path/to/lib/\u0410\u0440\u0438\u044f\n```\n\nI get an UnicodeDecodeError from `hook.py` line 50.\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Allows custom commands to be run when an event is emitted by beets\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nimport shlex\nimport subprocess\n\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import _arg_encoding\n\n\nclass HookPlugin(BeetsPlugin):\n \"\"\"Allows custom commands to be run when an event is emitted by beets\"\"\"\n def __init__(self):\n super(HookPlugin, self).__init__()\n\n self.config.add({\n 'hooks': []\n })\n\n hooks = self.config['hooks'].get(list)\n\n for hook_index in range(len(hooks)):\n hook = self.config['hooks'][hook_index]\n\n hook_event = hook['event'].get()\n hook_command = hook['command'].get()\n\n self.create_and_register_hook(hook_event, hook_command)\n\n def create_and_register_hook(self, event, command):\n def hook_function(**kwargs):\n if command is None or len(command) == 0:\n self._log.error('invalid command \"{0}\"', command)\n return\n\n formatted_command = command.format(event=event, **kwargs)\n encoded_command = formatted_command.decode(_arg_encoding())\n command_pieces = shlex.split(encoded_command)\n\n self._log.debug('Running command \"{0}\" for event {1}',\n encoded_command, event)\n\n try:\n subprocess.Popen(command_pieces).wait()\n except OSError as exc:\n self._log.error('hook for {0} failed: {1}', event, exc)\n\n self.register_listener(event, hook_function)\n", "path": "beetsplug/hook.py"}], "after_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice 
shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Allows custom commands to be run when an event is emitted by beets\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nimport shlex\nimport subprocess\n\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import _arg_encoding\n\n\nclass HookPlugin(BeetsPlugin):\n \"\"\"Allows custom commands to be run when an event is emitted by beets\"\"\"\n def __init__(self):\n super(HookPlugin, self).__init__()\n\n self.config.add({\n 'hooks': []\n })\n\n hooks = self.config['hooks'].get(list)\n\n for hook_index in range(len(hooks)):\n hook = self.config['hooks'][hook_index]\n\n hook_event = hook['event'].get()\n hook_command = hook['command'].get()\n\n self.create_and_register_hook(hook_event, hook_command)\n\n def create_and_register_hook(self, event, command):\n def hook_function(**kwargs):\n if command is None or len(command) == 0:\n self._log.error('invalid command \"{0}\"', command)\n return\n\n unicode_command = command.decode('utf-8')\n formatted_command = unicode_command.format(event=event,\n **kwargs)\n encoded_command = formatted_command.decode(_arg_encoding())\n command_pieces = shlex.split(encoded_command)\n\n self._log.debug('Running command \"{0}\" for event {1}',\n encoded_command, event)\n\n try:\n subprocess.Popen(command_pieces).wait()\n except OSError as exc:\n self._log.error('hook for {0} failed: {1}', event, exc)\n\n self.register_listener(event, hook_function)\n", "path": "beetsplug/hook.py"}]} | 943 | 127 |
gh_patches_debug_19350 | rasdani/github-patches | git_diff | instadeepai__Mava-606 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[TEST] Jax Model Updating
### What do you want to test?
Jax model updating components
### Outline of test structure
* Unit tests
* Test components and hooks
### Definition of done
Passing checks, cover all hooks, edge cases considered
### Mandatory checklist before making a PR
* [ ] The success criteria laid down in “Definition of done” are met.
* [ ] Test code is documented - docstrings for methods and classes, static types for arguments.
* [ ] Documentation is updated - README, CONTRIBUTING, or other documentation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mava/components/jax/training/model_updating.py`
Content:
```
1 # python3
2 # Copyright 2021 InstaDeep Ltd. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """Trainer components for system updating."""
17
18 import abc
19 from dataclasses import dataclass
20 from typing import Any, Callable, Dict, Optional, Tuple
21
22 import jax
23 import jax.numpy as jnp
24 import optax
25 from acme.jax import networks as networks_lib
26 from jax.random import KeyArray
27 from optax._src import base as optax_base
28
29 from mava.components.jax.training import Batch, Utility
30 from mava.core_jax import SystemTrainer
31
32
33 class MinibatchUpdate(Utility):
34 @abc.abstractmethod
35 def __init__(self, config: Any) -> None:
36 """_summary_
37
38 Args:
39 config : _description_.
40 """
41 self.config = config
42
43 @staticmethod
44 def name() -> str:
45 """_summary_
46
47 Returns:
48 _description_
49 """
50 return "minibatch_update"
51
52
53 @dataclass
54 class MAPGMinibatchUpdateConfig:
55 learning_rate: float = 1e-3
56 adam_epsilon: float = 1e-5
57 max_gradient_norm: float = 0.5
58 optimizer: Optional[optax_base.GradientTransformation] = (None,)
59
60
61 class MAPGMinibatchUpdate(MinibatchUpdate):
62 def __init__(
63 self,
64 config: MAPGMinibatchUpdateConfig = MAPGMinibatchUpdateConfig(),
65 ):
66 """_summary_
67
68 Args:
69 config : _description_.
70 """
71 self.config = config
72
73 def on_training_utility_fns(self, trainer: SystemTrainer) -> None:
74 """_summary_"""
75
76 if not self.config.optimizer:
77 trainer.store.optimizer = optax.chain(
78 optax.clip_by_global_norm(self.config.max_gradient_norm),
79 optax.scale_by_adam(eps=self.config.adam_epsilon),
80 optax.scale(-self.config.learning_rate),
81 )
82 else:
83 trainer.store.optimizer = self.config.optimizer
84
85 # Initialize optimizers.
86 trainer.store.opt_states = {}
87 for net_key in trainer.store.networks["networks"].keys():
88 trainer.store.opt_states[net_key] = trainer.store.optimizer.init(
89 trainer.store.networks["networks"][net_key].params
90 ) # pytype: disable=attribute-error
91
92 def model_update_minibatch(
93 carry: Tuple[networks_lib.Params, optax.OptState], minibatch: Batch
94 ) -> Tuple[Tuple[Any, optax.OptState], Dict[str, Any]]:
95 """Performs model update for a single minibatch."""
96 params, opt_states = carry
97
98 # Normalize advantages at the minibatch level before using them.
99 advantages = jax.tree_map(
100 lambda x: (x - jnp.mean(x, axis=0)) / (jnp.std(x, axis=0) + 1e-8),
101 minibatch.advantages,
102 )
103
104 # Calculate the gradients and agent metrics.
105 gradients, agent_metrics = trainer.store.grad_fn(
106 params,
107 minibatch.observations,
108 minibatch.actions,
109 minibatch.behavior_log_probs,
110 minibatch.target_values,
111 advantages,
112 minibatch.behavior_values,
113 )
114
115 # Update the networks and optimizors.
116 metrics = {}
117 for agent_key in trainer.store.trainer_agents:
118 agent_net_key = trainer.store.trainer_agent_net_keys[agent_key]
119 # Apply updates
120 # TODO (dries): Use one optimizer per network type here and not
121 # just one.
122 updates, opt_states[agent_net_key] = trainer.store.optimizer.update(
123 gradients[agent_key], opt_states[agent_net_key]
124 )
125 params[agent_net_key] = optax.apply_updates(
126 params[agent_net_key], updates
127 )
128
129 agent_metrics[agent_key]["norm_grad"] = optax.global_norm(
130 gradients[agent_key]
131 )
132 agent_metrics[agent_key]["norm_updates"] = optax.global_norm(updates)
133 metrics[agent_key] = agent_metrics
134 return (params, opt_states), metrics
135
136 trainer.store.minibatch_update_fn = model_update_minibatch
137
138 @staticmethod
139 def config_class() -> Optional[Callable]:
140 """Config class used for component.
141
142 Returns:
143 config class/dataclass for component.
144 """
145 return MAPGMinibatchUpdateConfig
146
147
148 class EpochUpdate(Utility):
149 @abc.abstractmethod
150 def __init__(self, config: Any) -> None:
151 """_summary_
152
153 Args:
154 config : _description_.
155 """
156 self.config = config
157
158 @staticmethod
159 def name() -> str:
160 """_summary_
161
162 Returns:
163 _description_
164 """
165 return "epoch_update"
166
167
168 @dataclass
169 class MAPGEpochUpdateConfig:
170 num_epochs: int = 4
171 num_minibatches: int = 1
172
173
174 class MAPGEpochUpdate(EpochUpdate):
175 def __init__(
176 self,
177 config: MAPGEpochUpdateConfig = MAPGEpochUpdateConfig(),
178 ):
179 """_summary_
180
181 Args:
182 config : _description_.
183 """
184 self.config = config
185
186 def on_training_utility_fns(self, trainer: SystemTrainer) -> None:
187 """_summary_"""
188 trainer.store.num_epochs = self.config.num_epochs
189 trainer.store.num_minibatches = self.config.num_minibatches
190
191 def model_update_epoch(
192 carry: Tuple[KeyArray, Any, optax.OptState, Batch],
193 unused_t: Tuple[()],
194 ) -> Tuple[
195 Tuple[KeyArray, Any, optax.OptState, Batch],
196 Dict[str, jnp.ndarray],
197 ]:
198 """Performs model updates based on one epoch of data."""
199 key, params, opt_states, batch = carry
200
201 new_key, subkey = jax.random.split(key)
202
203 # TODO (dries): This assert is ugly. Is there a better way to do this check?
204 # Maybe using a tree map of some sort?
205 # shapes = jax.tree_map(
206 # lambda x: x.shape[0]==trainer.store.full_batch_size, batch
207 # )
208 # assert ...
209 assert (
210 list(batch.observations.values())[0].observation.shape[0]
211 == trainer.store.full_batch_size
212 )
213
214 permutation = jax.random.permutation(subkey, trainer.store.full_batch_size)
215
216 shuffled_batch = jax.tree_map(
217 lambda x: jnp.take(x, permutation, axis=0), batch
218 )
219 minibatches = jax.tree_map(
220 lambda x: jnp.reshape(
221 x, [self.config.num_minibatches, -1] + list(x.shape[1:])
222 ),
223 shuffled_batch,
224 )
225
226 (new_params, new_opt_states), metrics = jax.lax.scan(
227 trainer.store.minibatch_update_fn,
228 (params, opt_states),
229 minibatches,
230 length=self.config.num_minibatches,
231 )
232
233 return (new_key, new_params, new_opt_states, batch), metrics
234
235 trainer.store.epoch_update_fn = model_update_epoch
236
237 @staticmethod
238 def config_class() -> Optional[Callable]:
239 """Config class used for component.
240
241 Returns:
242 config class/dataclass for component.
243 """
244 return MAPGEpochUpdateConfig
245
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mava/components/jax/training/model_updating.py b/mava/components/jax/training/model_updating.py
--- a/mava/components/jax/training/model_updating.py
+++ b/mava/components/jax/training/model_updating.py
@@ -113,7 +113,6 @@
)
# Update the networks and optimizors.
- metrics = {}
for agent_key in trainer.store.trainer_agents:
agent_net_key = trainer.store.trainer_agent_net_keys[agent_key]
# Apply updates
@@ -130,8 +129,7 @@
gradients[agent_key]
)
agent_metrics[agent_key]["norm_updates"] = optax.global_norm(updates)
- metrics[agent_key] = agent_metrics
- return (params, opt_states), metrics
+ return (params, opt_states), agent_metrics
trainer.store.minibatch_update_fn = model_update_minibatch
| {"golden_diff": "diff --git a/mava/components/jax/training/model_updating.py b/mava/components/jax/training/model_updating.py\n--- a/mava/components/jax/training/model_updating.py\n+++ b/mava/components/jax/training/model_updating.py\n@@ -113,7 +113,6 @@\n )\n \n # Update the networks and optimizors.\n- metrics = {}\n for agent_key in trainer.store.trainer_agents:\n agent_net_key = trainer.store.trainer_agent_net_keys[agent_key]\n # Apply updates\n@@ -130,8 +129,7 @@\n gradients[agent_key]\n )\n agent_metrics[agent_key][\"norm_updates\"] = optax.global_norm(updates)\n- metrics[agent_key] = agent_metrics\n- return (params, opt_states), metrics\n+ return (params, opt_states), agent_metrics\n \n trainer.store.minibatch_update_fn = model_update_minibatch\n", "issue": "[TEST] Jax Model Updating\n### What do you want to test?\r\nJax model updating components\r\n\r\n### Outline of test structure\r\n* Unit tests\r\n* Test components and hooks\r\n\r\n### Definition of done\r\nPassing checks, cover all hooks, edge cases considered\r\n\r\n### Mandatory checklist before making a PR\r\n* [ ] The success criteria laid down in \u201cDefinition of done\u201d are met.\r\n* [ ] Test code is documented - docstrings for methods and classes, static types for arguments.\r\n* [ ] Documentation is updated - README, CONTRIBUTING, or other documentation.\n", "before_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Trainer components for system updating.\"\"\"\n\nimport abc\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Dict, Optional, Tuple\n\nimport jax\nimport jax.numpy as jnp\nimport optax\nfrom acme.jax import networks as networks_lib\nfrom jax.random import KeyArray\nfrom optax._src import base as optax_base\n\nfrom mava.components.jax.training import Batch, Utility\nfrom mava.core_jax import SystemTrainer\n\n\nclass MinibatchUpdate(Utility):\n @abc.abstractmethod\n def __init__(self, config: Any) -> None:\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n @staticmethod\n def name() -> str:\n \"\"\"_summary_\n\n Returns:\n _description_\n \"\"\"\n return \"minibatch_update\"\n\n\n@dataclass\nclass MAPGMinibatchUpdateConfig:\n learning_rate: float = 1e-3\n adam_epsilon: float = 1e-5\n max_gradient_norm: float = 0.5\n optimizer: Optional[optax_base.GradientTransformation] = (None,)\n\n\nclass MAPGMinibatchUpdate(MinibatchUpdate):\n def __init__(\n self,\n config: MAPGMinibatchUpdateConfig = MAPGMinibatchUpdateConfig(),\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n\n if not self.config.optimizer:\n trainer.store.optimizer = optax.chain(\n optax.clip_by_global_norm(self.config.max_gradient_norm),\n optax.scale_by_adam(eps=self.config.adam_epsilon),\n optax.scale(-self.config.learning_rate),\n )\n else:\n 
trainer.store.optimizer = self.config.optimizer\n\n # Initialize optimizers.\n trainer.store.opt_states = {}\n for net_key in trainer.store.networks[\"networks\"].keys():\n trainer.store.opt_states[net_key] = trainer.store.optimizer.init(\n trainer.store.networks[\"networks\"][net_key].params\n ) # pytype: disable=attribute-error\n\n def model_update_minibatch(\n carry: Tuple[networks_lib.Params, optax.OptState], minibatch: Batch\n ) -> Tuple[Tuple[Any, optax.OptState], Dict[str, Any]]:\n \"\"\"Performs model update for a single minibatch.\"\"\"\n params, opt_states = carry\n\n # Normalize advantages at the minibatch level before using them.\n advantages = jax.tree_map(\n lambda x: (x - jnp.mean(x, axis=0)) / (jnp.std(x, axis=0) + 1e-8),\n minibatch.advantages,\n )\n\n # Calculate the gradients and agent metrics.\n gradients, agent_metrics = trainer.store.grad_fn(\n params,\n minibatch.observations,\n minibatch.actions,\n minibatch.behavior_log_probs,\n minibatch.target_values,\n advantages,\n minibatch.behavior_values,\n )\n\n # Update the networks and optimizors.\n metrics = {}\n for agent_key in trainer.store.trainer_agents:\n agent_net_key = trainer.store.trainer_agent_net_keys[agent_key]\n # Apply updates\n # TODO (dries): Use one optimizer per network type here and not\n # just one.\n updates, opt_states[agent_net_key] = trainer.store.optimizer.update(\n gradients[agent_key], opt_states[agent_net_key]\n )\n params[agent_net_key] = optax.apply_updates(\n params[agent_net_key], updates\n )\n\n agent_metrics[agent_key][\"norm_grad\"] = optax.global_norm(\n gradients[agent_key]\n )\n agent_metrics[agent_key][\"norm_updates\"] = optax.global_norm(updates)\n metrics[agent_key] = agent_metrics\n return (params, opt_states), metrics\n\n trainer.store.minibatch_update_fn = model_update_minibatch\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return MAPGMinibatchUpdateConfig\n\n\nclass EpochUpdate(Utility):\n @abc.abstractmethod\n def __init__(self, config: Any) -> None:\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n @staticmethod\n def name() -> str:\n \"\"\"_summary_\n\n Returns:\n _description_\n \"\"\"\n return \"epoch_update\"\n\n\n@dataclass\nclass MAPGEpochUpdateConfig:\n num_epochs: int = 4\n num_minibatches: int = 1\n\n\nclass MAPGEpochUpdate(EpochUpdate):\n def __init__(\n self,\n config: MAPGEpochUpdateConfig = MAPGEpochUpdateConfig(),\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n trainer.store.num_epochs = self.config.num_epochs\n trainer.store.num_minibatches = self.config.num_minibatches\n\n def model_update_epoch(\n carry: Tuple[KeyArray, Any, optax.OptState, Batch],\n unused_t: Tuple[()],\n ) -> Tuple[\n Tuple[KeyArray, Any, optax.OptState, Batch],\n Dict[str, jnp.ndarray],\n ]:\n \"\"\"Performs model updates based on one epoch of data.\"\"\"\n key, params, opt_states, batch = carry\n\n new_key, subkey = jax.random.split(key)\n\n # TODO (dries): This assert is ugly. 
Is there a better way to do this check?\n # Maybe using a tree map of some sort?\n # shapes = jax.tree_map(\n # lambda x: x.shape[0]==trainer.store.full_batch_size, batch\n # )\n # assert ...\n assert (\n list(batch.observations.values())[0].observation.shape[0]\n == trainer.store.full_batch_size\n )\n\n permutation = jax.random.permutation(subkey, trainer.store.full_batch_size)\n\n shuffled_batch = jax.tree_map(\n lambda x: jnp.take(x, permutation, axis=0), batch\n )\n minibatches = jax.tree_map(\n lambda x: jnp.reshape(\n x, [self.config.num_minibatches, -1] + list(x.shape[1:])\n ),\n shuffled_batch,\n )\n\n (new_params, new_opt_states), metrics = jax.lax.scan(\n trainer.store.minibatch_update_fn,\n (params, opt_states),\n minibatches,\n length=self.config.num_minibatches,\n )\n\n return (new_key, new_params, new_opt_states, batch), metrics\n\n trainer.store.epoch_update_fn = model_update_epoch\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return MAPGEpochUpdateConfig\n", "path": "mava/components/jax/training/model_updating.py"}], "after_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Trainer components for system updating.\"\"\"\n\nimport abc\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Dict, Optional, Tuple\n\nimport jax\nimport jax.numpy as jnp\nimport optax\nfrom acme.jax import networks as networks_lib\nfrom jax.random import KeyArray\nfrom optax._src import base as optax_base\n\nfrom mava.components.jax.training import Batch, Utility\nfrom mava.core_jax import SystemTrainer\n\n\nclass MinibatchUpdate(Utility):\n @abc.abstractmethod\n def __init__(self, config: Any) -> None:\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n @staticmethod\n def name() -> str:\n \"\"\"_summary_\n\n Returns:\n _description_\n \"\"\"\n return \"minibatch_update\"\n\n\n@dataclass\nclass MAPGMinibatchUpdateConfig:\n learning_rate: float = 1e-3\n adam_epsilon: float = 1e-5\n max_gradient_norm: float = 0.5\n optimizer: Optional[optax_base.GradientTransformation] = (None,)\n\n\nclass MAPGMinibatchUpdate(MinibatchUpdate):\n def __init__(\n self,\n config: MAPGMinibatchUpdateConfig = MAPGMinibatchUpdateConfig(),\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n\n if not self.config.optimizer:\n trainer.store.optimizer = optax.chain(\n optax.clip_by_global_norm(self.config.max_gradient_norm),\n optax.scale_by_adam(eps=self.config.adam_epsilon),\n optax.scale(-self.config.learning_rate),\n )\n else:\n trainer.store.optimizer = self.config.optimizer\n\n # Initialize optimizers.\n trainer.store.opt_states = {}\n for net_key in trainer.store.networks[\"networks\"].keys():\n 
trainer.store.opt_states[net_key] = trainer.store.optimizer.init(\n trainer.store.networks[\"networks\"][net_key].params\n ) # pytype: disable=attribute-error\n\n def model_update_minibatch(\n carry: Tuple[networks_lib.Params, optax.OptState], minibatch: Batch\n ) -> Tuple[Tuple[Any, optax.OptState], Dict[str, Any]]:\n \"\"\"Performs model update for a single minibatch.\"\"\"\n params, opt_states = carry\n\n # Normalize advantages at the minibatch level before using them.\n advantages = jax.tree_map(\n lambda x: (x - jnp.mean(x, axis=0)) / (jnp.std(x, axis=0) + 1e-8),\n minibatch.advantages,\n )\n\n # Calculate the gradients and agent metrics.\n gradients, agent_metrics = trainer.store.grad_fn(\n params,\n minibatch.observations,\n minibatch.actions,\n minibatch.behavior_log_probs,\n minibatch.target_values,\n advantages,\n minibatch.behavior_values,\n )\n\n # Update the networks and optimizors.\n for agent_key in trainer.store.trainer_agents:\n agent_net_key = trainer.store.trainer_agent_net_keys[agent_key]\n # Apply updates\n # TODO (dries): Use one optimizer per network type here and not\n # just one.\n updates, opt_states[agent_net_key] = trainer.store.optimizer.update(\n gradients[agent_key], opt_states[agent_net_key]\n )\n params[agent_net_key] = optax.apply_updates(\n params[agent_net_key], updates\n )\n\n agent_metrics[agent_key][\"norm_grad\"] = optax.global_norm(\n gradients[agent_key]\n )\n agent_metrics[agent_key][\"norm_updates\"] = optax.global_norm(updates)\n return (params, opt_states), agent_metrics\n\n trainer.store.minibatch_update_fn = model_update_minibatch\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return MAPGMinibatchUpdateConfig\n\n\nclass EpochUpdate(Utility):\n @abc.abstractmethod\n def __init__(self, config: Any) -> None:\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n @staticmethod\n def name() -> str:\n \"\"\"_summary_\n\n Returns:\n _description_\n \"\"\"\n return \"epoch_update\"\n\n\n@dataclass\nclass MAPGEpochUpdateConfig:\n num_epochs: int = 4\n num_minibatches: int = 1\n\n\nclass MAPGEpochUpdate(EpochUpdate):\n def __init__(\n self,\n config: MAPGEpochUpdateConfig = MAPGEpochUpdateConfig(),\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n trainer.store.num_epochs = self.config.num_epochs\n trainer.store.num_minibatches = self.config.num_minibatches\n\n def model_update_epoch(\n carry: Tuple[KeyArray, Any, optax.OptState, Batch],\n unused_t: Tuple[()],\n ) -> Tuple[\n Tuple[KeyArray, Any, optax.OptState, Batch],\n Dict[str, jnp.ndarray],\n ]:\n \"\"\"Performs model updates based on one epoch of data.\"\"\"\n key, params, opt_states, batch = carry\n\n new_key, subkey = jax.random.split(key)\n\n # TODO (dries): This assert is ugly. 
Is there a better way to do this check?\n # Maybe using a tree map of some sort?\n # shapes = jax.tree_map(\n # lambda x: x.shape[0]==trainer.store.full_batch_size, batch\n # )\n # assert ...\n assert (\n list(batch.observations.values())[0].observation.shape[0]\n == trainer.store.full_batch_size\n )\n\n permutation = jax.random.permutation(subkey, trainer.store.full_batch_size)\n\n shuffled_batch = jax.tree_map(\n lambda x: jnp.take(x, permutation, axis=0), batch\n )\n minibatches = jax.tree_map(\n lambda x: jnp.reshape(\n x, [self.config.num_minibatches, -1] + list(x.shape[1:])\n ),\n shuffled_batch,\n )\n\n (new_params, new_opt_states), metrics = jax.lax.scan(\n trainer.store.minibatch_update_fn,\n (params, opt_states),\n minibatches,\n length=self.config.num_minibatches,\n )\n\n return (new_key, new_params, new_opt_states, batch), metrics\n\n trainer.store.epoch_update_fn = model_update_epoch\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return MAPGEpochUpdateConfig\n", "path": "mava/components/jax/training/model_updating.py"}]} | 2,737 | 209 |