problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64 271-2.05k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_29710 | rasdani/github-patches | git_diff | keras-team__autokeras-166 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Model evaluation on test set yields different results on the same model
<!---
**If you are reporting a bug:**
* Verify that your issue is not being currently addressed by other issues or pull requests.
* Please note that Auto-Keras is only compatible with **Python 3.6**.
* Tag the issue with the `bug report` tag.
-->
### Bug Description
Test set evaluation yields different results if re-run.
### Reproducing Steps
Steps to reproduce the behavior:
re-run multiple times:
```python
y_pred = clf.evaluate(X_test, y_test)
print(y_pred)
```
will yield different results:
```
0.0992
0.1032
0.101
0.0989
```
Further, using manual evaluation:
```python
y_prediction = clf.predict(x_test=X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_pred=y_prediction, y_true=y_test)
```
leads to different results as well. It looks like the model either uses some random function (AFAIK all: ```random_states=42``` in the source code) or there is some major error in the pipeline that causes different predictions of the test set all the time.
### Expected Behavior
Final evaluation on a test set should not lead to different results using the same model on the same data.
### Setup Details
Include the details about the versions of:
- OS type and version: Linux
- Python: 3.6.5
- autokeras: 0.2.11
- scikit-learn:0.19.1
- numpy:1.14.5
- keras: 2.2.2
- scipy:1.1.0
- tensorflow: 1.10.0
- pytorch:0.4.1
### Additional context
This error is verified on MNIST and Fashion-MNIST.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `autokeras/preprocessor.py`
Content:
```
1 import torch
2
3 import numpy as np
4 from torch.utils.data import Dataset, DataLoader
5 from torchvision.transforms import ToPILImage, RandomCrop, RandomHorizontalFlip, ToTensor, Normalize, Compose
6
7 from autokeras.constant import Constant
8
9
10 class OneHotEncoder:
11 """A class that can format data.
12
13 This class provides ways to transform data's classification label into vector.
14
15 Attributes:
16 data: The input data
17 n_classes: The number of classes in the classification problem.
18 labels: The number of labels.
19 label_to_vec: Mapping from label to vector.
20 int_to_label: Mapping from int to label.
21 """
22
23 def __init__(self):
24 """Initialize a OneHotEncoder"""
25 self.data = None
26 self.n_classes = 0
27 self.labels = None
28 self.label_to_vec = {}
29 self.int_to_label = {}
30
31 def fit(self, data):
32 """Create mapping from label to vector, and vector to label."""
33 data = np.array(data).flatten()
34 self.labels = set(data)
35 self.n_classes = len(self.labels)
36 for index, label in enumerate(self.labels):
37 vec = np.array([0] * self.n_classes)
38 vec[index] = 1
39 self.label_to_vec[label] = vec
40 self.int_to_label[index] = label
41
42 def transform(self, data):
43 """Get vector for every element in the data array."""
44 data = np.array(data)
45 if len(data.shape) > 1:
46 data = data.flatten()
47 return np.array(list(map(lambda x: self.label_to_vec[x], data)))
48
49 def inverse_transform(self, data):
50 """Get label for every element in data."""
51 return np.array(list(map(lambda x: self.int_to_label[x], np.argmax(np.array(data), axis=1))))
52
53
54 class DataTransformer:
55 def __init__(self, data, augment=Constant.DATA_AUGMENTATION):
56 self.max_val = data.max()
57 data = data / self.max_val
58 self.mean = np.mean(data, axis=(0, 1, 2), keepdims=True).flatten()
59 self.std = np.std(data, axis=(0, 1, 2), keepdims=True).flatten()
60 self.augment = augment
61
62 def transform_train(self, data, targets=None, batch_size=None):
63 if not self.augment:
64 augment_list = []
65 else:
66 augment_list = [ToPILImage(),
67 RandomCrop(data.shape[1:3], padding=4),
68 RandomHorizontalFlip(),
69 ToTensor()
70 ]
71 common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]
72 compose_list = augment_list + common_list
73
74 return self._transform(batch_size, compose_list, data, targets)
75
76 def transform_test(self, data, targets=None, batch_size=None):
77 common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]
78 compose_list = common_list
79
80 return self._transform(batch_size, compose_list, data, targets)
81
82 def _transform(self, batch_size, compose_list, data, targets):
83 if batch_size is None:
84 batch_size = Constant.MAX_BATCH_SIZE
85 batch_size = min(len(data), batch_size)
86 data = data / self.max_val
87 data = torch.Tensor(data.transpose(0, 3, 1, 2))
88 data_transforms = Compose(compose_list)
89 dataset = MultiTransformDataset(data, targets, data_transforms)
90 return DataLoader(dataset, batch_size=batch_size, shuffle=True)
91
92
93 class MultiTransformDataset(Dataset):
94 def __init__(self, dataset, target, compose):
95 self.dataset = dataset
96 self.target = target
97 self.compose = compose
98
99 def __getitem__(self, index):
100 feature = self.dataset[index]
101 if self.target is None:
102 return self.compose(feature)
103 return self.compose(feature), self.target[index]
104
105 def __len__(self):
106 return len(self.dataset)
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/autokeras/preprocessor.py b/autokeras/preprocessor.py
--- a/autokeras/preprocessor.py
+++ b/autokeras/preprocessor.py
@@ -71,23 +71,31 @@
common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]
compose_list = augment_list + common_list
- return self._transform(batch_size, compose_list, data, targets)
+ dataset = self._transform(compose_list, data, targets)
+
+ if batch_size is None:
+ batch_size = Constant.MAX_BATCH_SIZE
+ batch_size = min(len(data), batch_size)
+
+ return DataLoader(dataset, batch_size=batch_size, shuffle=True)
def transform_test(self, data, targets=None, batch_size=None):
common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]
compose_list = common_list
- return self._transform(batch_size, compose_list, data, targets)
+ dataset = self._transform(compose_list, data, targets)
- def _transform(self, batch_size, compose_list, data, targets):
if batch_size is None:
batch_size = Constant.MAX_BATCH_SIZE
batch_size = min(len(data), batch_size)
+
+ return DataLoader(dataset, batch_size=batch_size, shuffle=False)
+
+ def _transform(self, compose_list, data, targets):
data = data / self.max_val
data = torch.Tensor(data.transpose(0, 3, 1, 2))
data_transforms = Compose(compose_list)
- dataset = MultiTransformDataset(data, targets, data_transforms)
- return DataLoader(dataset, batch_size=batch_size, shuffle=True)
+ return MultiTransformDataset(data, targets, data_transforms)
class MultiTransformDataset(Dataset):
| {"golden_diff": "diff --git a/autokeras/preprocessor.py b/autokeras/preprocessor.py\n--- a/autokeras/preprocessor.py\n+++ b/autokeras/preprocessor.py\n@@ -71,23 +71,31 @@\n common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]\n compose_list = augment_list + common_list\n \n- return self._transform(batch_size, compose_list, data, targets)\n+ dataset = self._transform(compose_list, data, targets)\n+\n+ if batch_size is None:\n+ batch_size = Constant.MAX_BATCH_SIZE\n+ batch_size = min(len(data), batch_size)\n+\n+ return DataLoader(dataset, batch_size=batch_size, shuffle=True)\n \n def transform_test(self, data, targets=None, batch_size=None):\n common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]\n compose_list = common_list\n \n- return self._transform(batch_size, compose_list, data, targets)\n+ dataset = self._transform(compose_list, data, targets)\n \n- def _transform(self, batch_size, compose_list, data, targets):\n if batch_size is None:\n batch_size = Constant.MAX_BATCH_SIZE\n batch_size = min(len(data), batch_size)\n+\n+ return DataLoader(dataset, batch_size=batch_size, shuffle=False)\n+\n+ def _transform(self, compose_list, data, targets):\n data = data / self.max_val\n data = torch.Tensor(data.transpose(0, 3, 1, 2))\n data_transforms = Compose(compose_list)\n- dataset = MultiTransformDataset(data, targets, data_transforms)\n- return DataLoader(dataset, batch_size=batch_size, shuffle=True)\n+ return MultiTransformDataset(data, targets, data_transforms)\n \n \n class MultiTransformDataset(Dataset):\n", "issue": "Model evaluation on test set yields different results on the same model\n<!---\r\n**If you are reporting a bug:**\r\n* Verify that your issue is not being currently addressed by other issues or pull requests.\r\n* Please note that Auto-Keras is only compatible with **Python 3.6**.\r\n* Tag the issue with the `bug report` tag.\r\n-->\r\n\r\n### Bug Description\r\nTest set evaluation yields different results if re-run.\r\n\r\n### Reproducing Steps\r\nSteps to reproduce the behavior:\r\nre-run multiple times:\r\n```python\r\ny_pred = clf.evaluate(X_test, y_test)\r\nprint(y_pred)\r\n```\r\nwill yield different results:\r\n```\r\n0.0992\r\n0.1032\r\n0.101\r\n0.0989\r\n```\r\n\r\nFurther, using manual evaluation:\r\n```python\r\ny_prediction = clf.predict(x_test=X_test)\r\nfrom sklearn.metrics import accuracy_score\r\naccuracy_score(y_pred=y_prediction, y_true=y_test)\r\n```\r\nleads to different results as well. 
It looks like the model either uses some random function (AFAIK all: ```random_states=42``` in the source code) or there is some major error in the pipeline that causes different predictions of the test set all the time.\r\n\r\n### Expected Behavior\r\nFinal evaluation on a test set should not lead to different results using the same model on the same data.\r\n\r\n### Setup Details\r\nInclude the details about the versions of:\r\n - OS type and version: Linux\r\n - Python: 3.6.5\r\n - autokeras: 0.2.11\r\n - scikit-learn:0.19.1\r\n - numpy:1.14.5\r\n - keras: 2.2.2\r\n - scipy:1.1.0\r\n - tensorflow: 1.10.0\r\n - pytorch:0.4.1\r\n\r\n### Additional context\r\nThis error is verified on MNIST and Fashion-MNIST.\n", "before_files": [{"content": "import torch\n\nimport numpy as np\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision.transforms import ToPILImage, RandomCrop, RandomHorizontalFlip, ToTensor, Normalize, Compose\n\nfrom autokeras.constant import Constant\n\n\nclass OneHotEncoder:\n \"\"\"A class that can format data.\n\n This class provides ways to transform data's classification label into vector.\n\n Attributes:\n data: The input data\n n_classes: The number of classes in the classification problem.\n labels: The number of labels.\n label_to_vec: Mapping from label to vector.\n int_to_label: Mapping from int to label.\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize a OneHotEncoder\"\"\"\n self.data = None\n self.n_classes = 0\n self.labels = None\n self.label_to_vec = {}\n self.int_to_label = {}\n\n def fit(self, data):\n \"\"\"Create mapping from label to vector, and vector to label.\"\"\"\n data = np.array(data).flatten()\n self.labels = set(data)\n self.n_classes = len(self.labels)\n for index, label in enumerate(self.labels):\n vec = np.array([0] * self.n_classes)\n vec[index] = 1\n self.label_to_vec[label] = vec\n self.int_to_label[index] = label\n\n def transform(self, data):\n \"\"\"Get vector for every element in the data array.\"\"\"\n data = np.array(data)\n if len(data.shape) > 1:\n data = data.flatten()\n return np.array(list(map(lambda x: self.label_to_vec[x], data)))\n\n def inverse_transform(self, data):\n \"\"\"Get label for every element in data.\"\"\"\n return np.array(list(map(lambda x: self.int_to_label[x], np.argmax(np.array(data), axis=1))))\n\n\nclass DataTransformer:\n def __init__(self, data, augment=Constant.DATA_AUGMENTATION):\n self.max_val = data.max()\n data = data / self.max_val\n self.mean = np.mean(data, axis=(0, 1, 2), keepdims=True).flatten()\n self.std = np.std(data, axis=(0, 1, 2), keepdims=True).flatten()\n self.augment = augment\n\n def transform_train(self, data, targets=None, batch_size=None):\n if not self.augment:\n augment_list = []\n else:\n augment_list = [ToPILImage(),\n RandomCrop(data.shape[1:3], padding=4),\n RandomHorizontalFlip(),\n ToTensor()\n ]\n common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]\n compose_list = augment_list + common_list\n\n return self._transform(batch_size, compose_list, data, targets)\n\n def transform_test(self, data, targets=None, batch_size=None):\n common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]\n compose_list = common_list\n\n return self._transform(batch_size, compose_list, data, targets)\n\n def _transform(self, batch_size, compose_list, data, targets):\n if batch_size is None:\n batch_size = Constant.MAX_BATCH_SIZE\n batch_size = min(len(data), batch_size)\n data = data / self.max_val\n data = torch.Tensor(data.transpose(0, 3, 
1, 2))\n data_transforms = Compose(compose_list)\n dataset = MultiTransformDataset(data, targets, data_transforms)\n return DataLoader(dataset, batch_size=batch_size, shuffle=True)\n\n\nclass MultiTransformDataset(Dataset):\n def __init__(self, dataset, target, compose):\n self.dataset = dataset\n self.target = target\n self.compose = compose\n\n def __getitem__(self, index):\n feature = self.dataset[index]\n if self.target is None:\n return self.compose(feature)\n return self.compose(feature), self.target[index]\n\n def __len__(self):\n return len(self.dataset)\n", "path": "autokeras/preprocessor.py"}], "after_files": [{"content": "import torch\n\nimport numpy as np\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision.transforms import ToPILImage, RandomCrop, RandomHorizontalFlip, ToTensor, Normalize, Compose\n\nfrom autokeras.constant import Constant\n\n\nclass OneHotEncoder:\n \"\"\"A class that can format data.\n\n This class provides ways to transform data's classification label into vector.\n\n Attributes:\n data: The input data\n n_classes: The number of classes in the classification problem.\n labels: The number of labels.\n label_to_vec: Mapping from label to vector.\n int_to_label: Mapping from int to label.\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize a OneHotEncoder\"\"\"\n self.data = None\n self.n_classes = 0\n self.labels = None\n self.label_to_vec = {}\n self.int_to_label = {}\n\n def fit(self, data):\n \"\"\"Create mapping from label to vector, and vector to label.\"\"\"\n data = np.array(data).flatten()\n self.labels = set(data)\n self.n_classes = len(self.labels)\n for index, label in enumerate(self.labels):\n vec = np.array([0] * self.n_classes)\n vec[index] = 1\n self.label_to_vec[label] = vec\n self.int_to_label[index] = label\n\n def transform(self, data):\n \"\"\"Get vector for every element in the data array.\"\"\"\n data = np.array(data)\n if len(data.shape) > 1:\n data = data.flatten()\n return np.array(list(map(lambda x: self.label_to_vec[x], data)))\n\n def inverse_transform(self, data):\n \"\"\"Get label for every element in data.\"\"\"\n return np.array(list(map(lambda x: self.int_to_label[x], np.argmax(np.array(data), axis=1))))\n\n\nclass DataTransformer:\n def __init__(self, data, augment=Constant.DATA_AUGMENTATION):\n self.max_val = data.max()\n data = data / self.max_val\n self.mean = np.mean(data, axis=(0, 1, 2), keepdims=True).flatten()\n self.std = np.std(data, axis=(0, 1, 2), keepdims=True).flatten()\n self.augment = augment\n\n def transform_train(self, data, targets=None, batch_size=None):\n if not self.augment:\n augment_list = []\n else:\n augment_list = [ToPILImage(),\n RandomCrop(data.shape[1:3], padding=4),\n RandomHorizontalFlip(),\n ToTensor()\n ]\n common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]\n compose_list = augment_list + common_list\n\n dataset = self._transform(compose_list, data, targets)\n\n if batch_size is None:\n batch_size = Constant.MAX_BATCH_SIZE\n batch_size = min(len(data), batch_size)\n\n return DataLoader(dataset, batch_size=batch_size, shuffle=True)\n\n def transform_test(self, data, targets=None, batch_size=None):\n common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]\n compose_list = common_list\n\n dataset = self._transform(compose_list, data, targets)\n\n if batch_size is None:\n batch_size = Constant.MAX_BATCH_SIZE\n batch_size = min(len(data), batch_size)\n\n return DataLoader(dataset, batch_size=batch_size, shuffle=False)\n\n def 
_transform(self, compose_list, data, targets):\n data = data / self.max_val\n data = torch.Tensor(data.transpose(0, 3, 1, 2))\n data_transforms = Compose(compose_list)\n return MultiTransformDataset(data, targets, data_transforms)\n\n\nclass MultiTransformDataset(Dataset):\n def __init__(self, dataset, target, compose):\n self.dataset = dataset\n self.target = target\n self.compose = compose\n\n def __getitem__(self, index):\n feature = self.dataset[index]\n if self.target is None:\n return self.compose(feature)\n return self.compose(feature), self.target[index]\n\n def __len__(self):\n return len(self.dataset)\n", "path": "autokeras/preprocessor.py"}]} | 1,736 | 392 |
gh_patches_debug_9194 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2129 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show results framework for private projects
## Test plan
GIVEN the 'My results' tab in MyRSR
WHEN searching for a private project
THEN the project should appear in the list
WHEN the private project is opened in 'My results'
THEN the user should have full access to the results framework, just like a regular public project
## Issue description
Currently, we have disabled the results framework in MyRSR for private projects. However, it should be possible to enter results data for private projects as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/views/typeahead.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4 See more details in the license.txt file located at the root folder of the
5 Akvo RSR module. For additional details on the GNU license please
6 see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 from akvo.rest.serializers import (TypeaheadCountrySerializer,
10 TypeaheadOrganisationSerializer,
11 TypeaheadProjectSerializer,
12 TypeaheadProjectUpdateSerializer)
13 from akvo.rsr.models import Country, Organisation, Project, ProjectUpdate
14
15 from rest_framework.decorators import api_view
16 from rest_framework.response import Response
17
18
19 def rejig(queryset, serializer):
20 """Rearrange & add queryset count to the response data."""
21 return {
22 'count': queryset.count(),
23 'results': serializer.data
24 }
25
26
27 @api_view(['GET'])
28 def typeahead_country(request):
29 countries = Country.objects.all()
30 return Response(
31 rejig(countries, TypeaheadCountrySerializer(countries, many=True))
32 )
33
34
35 @api_view(['GET'])
36 def typeahead_organisation(request):
37 organisations = Organisation.objects.all()
38 return Response(
39 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
40 many=True))
41 )
42
43
44 @api_view(['GET'])
45 def typeahead_user_organisations(request):
46 user = request.user
47 is_admin = user.is_active and (user.is_superuser or user.is_admin)
48 organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()
49 return Response(
50 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
51 many=True))
52 )
53
54
55 @api_view(['GET'])
56 def typeahead_project(request):
57 projects = Project.objects.all().exclude(title='')
58 return Response(
59 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
60 )
61
62
63 @api_view(['GET'])
64 def typeahead_user_projects(request):
65 user = request.user
66 is_admin = user.is_active and (user.is_superuser or user.is_admin)
67 if is_admin:
68 projects = Project.objects.all()
69 else:
70 projects = user.approved_organisations().all_projects()
71 projects = projects.exclude(title='')
72 return Response(
73 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
74 )
75
76
77 @api_view(['GET'])
78 def typeahead_impact_projects(request):
79 user = request.user
80 projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()
81 projects = projects.published().filter(is_impact_project=True, is_public=True).order_by('title')
82
83 return Response(
84 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
85 )
86
87
88 @api_view(['GET'])
89 def typeahead_projectupdate(request):
90 updates = ProjectUpdate.objects.all()
91 return Response(
92 rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))
93 )
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py
--- a/akvo/rest/views/typeahead.py
+++ b/akvo/rest/views/typeahead.py
@@ -78,7 +78,7 @@
def typeahead_impact_projects(request):
user = request.user
projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()
- projects = projects.published().filter(is_impact_project=True, is_public=True).order_by('title')
+ projects = projects.published().filter(is_impact_project=True).order_by('title')
return Response(
rejig(projects, TypeaheadProjectSerializer(projects, many=True))
| {"golden_diff": "diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py\n--- a/akvo/rest/views/typeahead.py\n+++ b/akvo/rest/views/typeahead.py\n@@ -78,7 +78,7 @@\n def typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n- projects = projects.published().filter(is_impact_project=True, is_public=True).order_by('title')\n+ projects = projects.published().filter(is_impact_project=True).order_by('title')\n \n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n", "issue": "Show results framework for private projects\n## Test plan\n\nGIVEN the 'My results' tab in MyRSR\nWHEN searching for a private project\nTHEN the project should appear in the list\n\nWHEN the private project is opened in 'My results'\nTHEN the user should have full access to the results framework, just like a regular public project\n## Issue description\n\nCurrently, we have disabled the results framework in MyRSR for private projects. However, it should be possible to enter results data for private projects as well.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rest.serializers import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\nfrom akvo.rsr.models import Country, Organisation, Project, ProjectUpdate\n\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\n\ndef rejig(queryset, serializer):\n \"\"\"Rearrange & add queryset count to the response data.\"\"\"\n return {\n 'count': queryset.count(),\n 'results': serializer.data\n }\n\n\n@api_view(['GET'])\ndef typeahead_country(request):\n countries = Country.objects.all()\n return Response(\n rejig(countries, TypeaheadCountrySerializer(countries, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_organisation(request):\n organisations = Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_organisations(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_project(request):\n projects = Project.objects.all().exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_projects(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if is_admin:\n projects = Project.objects.all()\n else:\n projects = user.approved_organisations().all_projects()\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n projects 
= projects.published().filter(is_impact_project=True, is_public=True).order_by('title')\n\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_projectupdate(request):\n updates = ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "path": "akvo/rest/views/typeahead.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rest.serializers import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\nfrom akvo.rsr.models import Country, Organisation, Project, ProjectUpdate\n\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\n\ndef rejig(queryset, serializer):\n \"\"\"Rearrange & add queryset count to the response data.\"\"\"\n return {\n 'count': queryset.count(),\n 'results': serializer.data\n }\n\n\n@api_view(['GET'])\ndef typeahead_country(request):\n countries = Country.objects.all()\n return Response(\n rejig(countries, TypeaheadCountrySerializer(countries, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_organisation(request):\n organisations = Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_organisations(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_project(request):\n projects = Project.objects.all().exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_projects(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if is_admin:\n projects = Project.objects.all()\n else:\n projects = user.approved_organisations().all_projects()\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n projects = projects.published().filter(is_impact_project=True).order_by('title')\n\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_projectupdate(request):\n updates = ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "path": "akvo/rest/views/typeahead.py"}]} | 1,178 | 153 |
gh_patches_debug_24995 | rasdani/github-patches | git_diff | e-valuation__EvaP-1853 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Investigate Github Action caching
One can cache certain directories in github actions. For Python, caching the entire installation is used (including all the `site-packages`), so that wheels don't need to be rebuild. Additionally, the download could be faster. We should investigate how much there is to gain.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/management/commands/scss.py`
Content:
```
1 import os
2 import subprocess # nosec
3
4 from django.conf import settings
5 from django.core.management.base import BaseCommand
6
7
8 class Command(BaseCommand):
9 def add_arguments(self, parser):
10 parser.add_argument(
11 "--watch",
12 action="store_true",
13 help="Watch stylesheets and recompile when they change.",
14 )
15 parser.add_argument(
16 "--production",
17 action="store_true",
18 help="Compress output stylesheet and do not generate source maps."
19 " Intended to use in production deployment.",
20 )
21
22 def handle(self, *args, **options):
23 static_directory = settings.STATICFILES_DIRS[0]
24 command = [
25 "npx",
26 "sass",
27 os.path.join(static_directory, "scss", "evap.scss"),
28 os.path.join(static_directory, "css", "evap.css"),
29 ]
30
31 if options["watch"]:
32 command += ["--watch", "--poll"]
33
34 if options["production"]:
35 command += ["--style", "compressed", "--no-source-map"]
36
37 try:
38 subprocess.run(command, check=True) # nosec
39 except FileNotFoundError:
40 print("Could not find sass command", file=self.stderr)
41 except KeyboardInterrupt:
42 pass
43
```
Path: `evap/evaluation/management/commands/ts.py`
Content:
```
1 import argparse
2 import os
3 import subprocess # nosec
4 import unittest
5
6 from django.conf import settings
7 from django.core.management import call_command
8 from django.core.management.base import BaseCommand, CommandError
9 from django.test.runner import DiscoverRunner
10
11
12 class RenderPagesRunner(DiscoverRunner):
13 """Test runner which only includes `render_pages.*` methods.
14 The actual logic of the page rendering is implemented in the `@render_pages` decorator."""
15
16 test_loader = unittest.TestLoader()
17
18 def __init__(self, **kwargs):
19 super().__init__(**kwargs)
20 self.test_loader.testMethodPrefix = "render_pages"
21
22
23 class Command(BaseCommand):
24 def add_arguments(self, parser: argparse.ArgumentParser):
25 subparsers = parser.add_subparsers(dest="command", required=True)
26 compile_parser = subparsers.add_parser("compile")
27 compile_parser.add_argument(
28 "--watch",
29 action="store_true",
30 help="Watch scripts and recompile when they change.",
31 )
32 self.add_fresh_argument(compile_parser)
33 test_parser = subparsers.add_parser("test")
34 self.add_fresh_argument(test_parser)
35 subparsers.add_parser("render_pages")
36
37 @staticmethod
38 def add_fresh_argument(parser: argparse.ArgumentParser):
39 parser.add_argument(
40 "--fresh",
41 action="store_true",
42 help="Delete .tsbuildinfo.json before compilation to force a fresh compilation."
43 "This is useful when incremental compilation does not yield the expected output.",
44 )
45
46 def handle(self, *args, **options):
47 if options["command"] == "compile":
48 self.compile(**options)
49 elif options["command"] == "test":
50 self.test(**options)
51 elif options["command"] == "render_pages":
52 self.render_pages(**options)
53
54 def run_command(self, command):
55 try:
56 subprocess.run(command, check=True) # nosec
57 except FileNotFoundError:
58 print(f"Could not find {command[0]} command", file=self.stderr)
59 except KeyboardInterrupt:
60 pass
61 except subprocess.CalledProcessError as e:
62 raise CommandError("Error during command execution", returncode=e.returncode) from e
63
64 def compile(self, watch=False, fresh=False, **_options):
65 static_directory = settings.STATICFILES_DIRS[0]
66 command = [
67 "npx",
68 "tsc",
69 "--project",
70 os.path.join(static_directory, "ts", "tsconfig.compile.json"),
71 ]
72
73 if watch:
74 command += ["--watch"]
75
76 if fresh:
77 try:
78 os.remove(os.path.join(static_directory, "ts", ".tsbuildinfo.json"))
79 except FileNotFoundError:
80 pass
81
82 self.run_command(command)
83
84 def test(self, **options):
85 call_command("scss")
86 self.compile(**options)
87 self.render_pages()
88 self.run_command(["npx", "jest"])
89
90 @staticmethod
91 def render_pages(**_options):
92 # Enable debug mode as otherwise a collectstatic beforehand would be necessary,
93 # as missing static files would result into an error.
94 test_runner = RenderPagesRunner(debug_mode=True)
95 failed_tests = test_runner.run_tests([])
96 if failed_tests > 0:
97 raise CommandError("Failures during render_pages")
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/evaluation/management/commands/scss.py b/evap/evaluation/management/commands/scss.py
--- a/evap/evaluation/management/commands/scss.py
+++ b/evap/evaluation/management/commands/scss.py
@@ -2,7 +2,7 @@
import subprocess # nosec
from django.conf import settings
-from django.core.management.base import BaseCommand
+from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
@@ -36,7 +36,7 @@
try:
subprocess.run(command, check=True) # nosec
- except FileNotFoundError:
- print("Could not find sass command", file=self.stderr)
+ except FileNotFoundError as e:
+ raise CommandError("Could not find sass command") from e
except KeyboardInterrupt:
pass
diff --git a/evap/evaluation/management/commands/ts.py b/evap/evaluation/management/commands/ts.py
--- a/evap/evaluation/management/commands/ts.py
+++ b/evap/evaluation/management/commands/ts.py
@@ -54,8 +54,8 @@
def run_command(self, command):
try:
subprocess.run(command, check=True) # nosec
- except FileNotFoundError:
- print(f"Could not find {command[0]} command", file=self.stderr)
+ except FileNotFoundError as e:
+ raise CommandError(f"Could not find {command[0]} command") from e
except KeyboardInterrupt:
pass
except subprocess.CalledProcessError as e:
| {"golden_diff": "diff --git a/evap/evaluation/management/commands/scss.py b/evap/evaluation/management/commands/scss.py\n--- a/evap/evaluation/management/commands/scss.py\n+++ b/evap/evaluation/management/commands/scss.py\n@@ -2,7 +2,7 @@\n import subprocess # nosec\n \n from django.conf import settings\n-from django.core.management.base import BaseCommand\n+from django.core.management.base import BaseCommand, CommandError\n \n \n class Command(BaseCommand):\n@@ -36,7 +36,7 @@\n \n try:\n subprocess.run(command, check=True) # nosec\n- except FileNotFoundError:\n- print(\"Could not find sass command\", file=self.stderr)\n+ except FileNotFoundError as e:\n+ raise CommandError(\"Could not find sass command\") from e\n except KeyboardInterrupt:\n pass\ndiff --git a/evap/evaluation/management/commands/ts.py b/evap/evaluation/management/commands/ts.py\n--- a/evap/evaluation/management/commands/ts.py\n+++ b/evap/evaluation/management/commands/ts.py\n@@ -54,8 +54,8 @@\n def run_command(self, command):\n try:\n subprocess.run(command, check=True) # nosec\n- except FileNotFoundError:\n- print(f\"Could not find {command[0]} command\", file=self.stderr)\n+ except FileNotFoundError as e:\n+ raise CommandError(f\"Could not find {command[0]} command\") from e\n except KeyboardInterrupt:\n pass\n except subprocess.CalledProcessError as e:\n", "issue": "Investigate Github Action caching\nOne can cache certain directories in github actions. For Python, caching the entire installation is used (including all the `site-packages`), so that wheels don't need to be rebuild. Additionally, the download could be faster. We should investigate how much there is to gain.\n", "before_files": [{"content": "import os\nimport subprocess # nosec\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\n\n\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"--watch\",\n action=\"store_true\",\n help=\"Watch stylesheets and recompile when they change.\",\n )\n parser.add_argument(\n \"--production\",\n action=\"store_true\",\n help=\"Compress output stylesheet and do not generate source maps.\"\n \" Intended to use in production deployment.\",\n )\n\n def handle(self, *args, **options):\n static_directory = settings.STATICFILES_DIRS[0]\n command = [\n \"npx\",\n \"sass\",\n os.path.join(static_directory, \"scss\", \"evap.scss\"),\n os.path.join(static_directory, \"css\", \"evap.css\"),\n ]\n\n if options[\"watch\"]:\n command += [\"--watch\", \"--poll\"]\n\n if options[\"production\"]:\n command += [\"--style\", \"compressed\", \"--no-source-map\"]\n\n try:\n subprocess.run(command, check=True) # nosec\n except FileNotFoundError:\n print(\"Could not find sass command\", file=self.stderr)\n except KeyboardInterrupt:\n pass\n", "path": "evap/evaluation/management/commands/scss.py"}, {"content": "import argparse\nimport os\nimport subprocess # nosec\nimport unittest\n\nfrom django.conf import settings\nfrom django.core.management import call_command\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.test.runner import DiscoverRunner\n\n\nclass RenderPagesRunner(DiscoverRunner):\n \"\"\"Test runner which only includes `render_pages.*` methods.\n The actual logic of the page rendering is implemented in the `@render_pages` decorator.\"\"\"\n\n test_loader = unittest.TestLoader()\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.test_loader.testMethodPrefix = \"render_pages\"\n\n\nclass Command(BaseCommand):\n 
def add_arguments(self, parser: argparse.ArgumentParser):\n subparsers = parser.add_subparsers(dest=\"command\", required=True)\n compile_parser = subparsers.add_parser(\"compile\")\n compile_parser.add_argument(\n \"--watch\",\n action=\"store_true\",\n help=\"Watch scripts and recompile when they change.\",\n )\n self.add_fresh_argument(compile_parser)\n test_parser = subparsers.add_parser(\"test\")\n self.add_fresh_argument(test_parser)\n subparsers.add_parser(\"render_pages\")\n\n @staticmethod\n def add_fresh_argument(parser: argparse.ArgumentParser):\n parser.add_argument(\n \"--fresh\",\n action=\"store_true\",\n help=\"Delete .tsbuildinfo.json before compilation to force a fresh compilation.\"\n \"This is useful when incremental compilation does not yield the expected output.\",\n )\n\n def handle(self, *args, **options):\n if options[\"command\"] == \"compile\":\n self.compile(**options)\n elif options[\"command\"] == \"test\":\n self.test(**options)\n elif options[\"command\"] == \"render_pages\":\n self.render_pages(**options)\n\n def run_command(self, command):\n try:\n subprocess.run(command, check=True) # nosec\n except FileNotFoundError:\n print(f\"Could not find {command[0]} command\", file=self.stderr)\n except KeyboardInterrupt:\n pass\n except subprocess.CalledProcessError as e:\n raise CommandError(\"Error during command execution\", returncode=e.returncode) from e\n\n def compile(self, watch=False, fresh=False, **_options):\n static_directory = settings.STATICFILES_DIRS[0]\n command = [\n \"npx\",\n \"tsc\",\n \"--project\",\n os.path.join(static_directory, \"ts\", \"tsconfig.compile.json\"),\n ]\n\n if watch:\n command += [\"--watch\"]\n\n if fresh:\n try:\n os.remove(os.path.join(static_directory, \"ts\", \".tsbuildinfo.json\"))\n except FileNotFoundError:\n pass\n\n self.run_command(command)\n\n def test(self, **options):\n call_command(\"scss\")\n self.compile(**options)\n self.render_pages()\n self.run_command([\"npx\", \"jest\"])\n\n @staticmethod\n def render_pages(**_options):\n # Enable debug mode as otherwise a collectstatic beforehand would be necessary,\n # as missing static files would result into an error.\n test_runner = RenderPagesRunner(debug_mode=True)\n failed_tests = test_runner.run_tests([])\n if failed_tests > 0:\n raise CommandError(\"Failures during render_pages\")\n", "path": "evap/evaluation/management/commands/ts.py"}], "after_files": [{"content": "import os\nimport subprocess # nosec\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand, CommandError\n\n\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"--watch\",\n action=\"store_true\",\n help=\"Watch stylesheets and recompile when they change.\",\n )\n parser.add_argument(\n \"--production\",\n action=\"store_true\",\n help=\"Compress output stylesheet and do not generate source maps.\"\n \" Intended to use in production deployment.\",\n )\n\n def handle(self, *args, **options):\n static_directory = settings.STATICFILES_DIRS[0]\n command = [\n \"npx\",\n \"sass\",\n os.path.join(static_directory, \"scss\", \"evap.scss\"),\n os.path.join(static_directory, \"css\", \"evap.css\"),\n ]\n\n if options[\"watch\"]:\n command += [\"--watch\", \"--poll\"]\n\n if options[\"production\"]:\n command += [\"--style\", \"compressed\", \"--no-source-map\"]\n\n try:\n subprocess.run(command, check=True) # nosec\n except FileNotFoundError as e:\n raise CommandError(\"Could not find sass command\") from e\n except 
KeyboardInterrupt:\n pass\n", "path": "evap/evaluation/management/commands/scss.py"}, {"content": "import argparse\nimport os\nimport subprocess # nosec\nimport unittest\n\nfrom django.conf import settings\nfrom django.core.management import call_command\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.test.runner import DiscoverRunner\n\n\nclass RenderPagesRunner(DiscoverRunner):\n \"\"\"Test runner which only includes `render_pages.*` methods.\n The actual logic of the page rendering is implemented in the `@render_pages` decorator.\"\"\"\n\n test_loader = unittest.TestLoader()\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.test_loader.testMethodPrefix = \"render_pages\"\n\n\nclass Command(BaseCommand):\n def add_arguments(self, parser: argparse.ArgumentParser):\n subparsers = parser.add_subparsers(dest=\"command\", required=True)\n compile_parser = subparsers.add_parser(\"compile\")\n compile_parser.add_argument(\n \"--watch\",\n action=\"store_true\",\n help=\"Watch scripts and recompile when they change.\",\n )\n self.add_fresh_argument(compile_parser)\n test_parser = subparsers.add_parser(\"test\")\n self.add_fresh_argument(test_parser)\n subparsers.add_parser(\"render_pages\")\n\n @staticmethod\n def add_fresh_argument(parser: argparse.ArgumentParser):\n parser.add_argument(\n \"--fresh\",\n action=\"store_true\",\n help=\"Delete .tsbuildinfo.json before compilation to force a fresh compilation.\"\n \"This is useful when incremental compilation does not yield the expected output.\",\n )\n\n def handle(self, *args, **options):\n if options[\"command\"] == \"compile\":\n self.compile(**options)\n elif options[\"command\"] == \"test\":\n self.test(**options)\n elif options[\"command\"] == \"render_pages\":\n self.render_pages(**options)\n\n def run_command(self, command):\n try:\n subprocess.run(command, check=True) # nosec\n except FileNotFoundError as e:\n raise CommandError(f\"Could not find {command[0]} command\") from e\n except KeyboardInterrupt:\n pass\n except subprocess.CalledProcessError as e:\n raise CommandError(\"Error during command execution\", returncode=e.returncode) from e\n\n def compile(self, watch=False, fresh=False, **_options):\n static_directory = settings.STATICFILES_DIRS[0]\n command = [\n \"npx\",\n \"tsc\",\n \"--project\",\n os.path.join(static_directory, \"ts\", \"tsconfig.compile.json\"),\n ]\n\n if watch:\n command += [\"--watch\"]\n\n if fresh:\n try:\n os.remove(os.path.join(static_directory, \"ts\", \".tsbuildinfo.json\"))\n except FileNotFoundError:\n pass\n\n self.run_command(command)\n\n def test(self, **options):\n call_command(\"scss\")\n self.compile(**options)\n self.render_pages()\n self.run_command([\"npx\", \"jest\"])\n\n @staticmethod\n def render_pages(**_options):\n # Enable debug mode as otherwise a collectstatic beforehand would be necessary,\n # as missing static files would result into an error.\n test_runner = RenderPagesRunner(debug_mode=True)\n failed_tests = test_runner.run_tests([])\n if failed_tests > 0:\n raise CommandError(\"Failures during render_pages\")\n", "path": "evap/evaluation/management/commands/ts.py"}]} | 1,561 | 348 |
gh_patches_debug_34206 | rasdani/github-patches | git_diff | kymatio__kymatio-244 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RELEASE requirements?
Hi,
When doing:
```
conda install pytorch torchvision -c pytorch
pip install -i https://test.pypi.org/simple/ kymatio
```
then cloning the corresponding release, and doing `pytest`, I have tons of errors that are evidences that none of the requirements.txt packages are installed. Actually, this makes me think we should have a pip for the GPU version(like `pip install kymatio-gpu`)?
https://github.com/kymatio/kymatio/blob/master/requirements.txt
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import csv
3 import importlib
4 import os
5 import shutil
6 import sys
7 from setuptools import setup, find_packages
8
9 # Constants
10 DISTNAME = 'kymatio'
11 DESCRIPTION = 'Wavelet scattering transforms in Python with GPU acceleration'
12 URL = 'https://kymatio.github.io'
13 LICENSE = 'BSD-3-Clause'
14
15
16 # Parse description
17 with open('README.md') as f:
18 README = f.read().split('\n')
19 LONG_DESCRIPTION = '\n'.join([x for x in README if not x[:3]=='[!['])
20
21
22 # Parse version.py
23 kymatio_version_spec = importlib.util.spec_from_file_location(
24 'kymatio_version', 'kymatio/version.py')
25 kymatio_version_module = importlib.util.module_from_spec(kymatio_version_spec)
26 kymatio_version_spec.loader.exec_module(kymatio_version_module)
27 VERSION = kymatio_version_module.version
28
29
30 # Parse requirements.txt
31 with open('requirements.txt', 'r') as f:
32 REQUIREMENTS = f.read().split('\n')
33
34
35 setup_info = dict(
36 # Metadata
37 name=DISTNAME,
38 version=VERSION,
39 author=('Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, '
40 'Michael Eickenberg, Mathieu Andreux, Georgios Exarchakis, '
41 'Louis Thiry, Vincent Lostanlen, Joakim Andén, '
42 'Tomás Angles, Gabriel Huang, Roberto Leonarduzzi'),
43 author_email=('[email protected], [email protected], '
44 '[email protected], [email protected], '
45 '[email protected], [email protected], '
46 '[email protected], [email protected], [email protected], '
47 '[email protected], [email protected], [email protected]'),
48 url=URL,
49 download_url='https://github.com/kymatio/kymatio/releases',
50 classifiers=['Intended Audience :: Education',
51 'Intended Audience :: Science/Research',
52 'License :: OSI Approved :: BSD License',
53 'Natural Language :: English',
54 'Operating System :: MacOS',
55 'Operating System :: Microsoft :: Windows',
56 'Operating System :: POSIX :: Linux',
57 'Programming Language :: Python :: 3.4',
58 'Programming Language :: Python :: 3.5',
59 'Programming Language :: Python :: 3.6',
60 'Programming Language :: Python :: 3.7',
61 'Programming Language :: Python :: 3.8',
62 'Topic :: Multimedia :: Graphics :: 3D Modeling',
63 'Topic :: Multimedia :: Sound/Audio :: Analysis',
64 'Topic :: Scientific/Engineering :: Artificial Intelligence',
65 'Topic :: Scientific/Engineering :: Chemistry',
66 'Topic :: Scientific/Engineering :: Image Recognition',
67 'Topic :: Scientific/Engineering :: Information Analysis',
68 'Topic :: Scientific/Engineering :: Mathematics',
69 'Topic :: Scientific/Engineering :: Physics',
70 'Topic :: Software Development :: Libraries :: Python Modules',
71 ],
72 description=DESCRIPTION,
73 long_description=LONG_DESCRIPTION,
74 long_description_content_type='text/markdown',
75 license=LICENSE,
76 packages=find_packages(exclude=('test',)),
77 install_requires=REQUIREMENTS,
78 zip_safe=True,
79 )
80
81 setup(**setup_info)
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
# Constants
DISTNAME = 'kymatio'
DESCRIPTION = 'Wavelet scattering transforms in Python with GPU acceleration'
-URL = 'https://kymatio.github.io'
+URL = 'https://www.kymat.io'
LICENSE = 'BSD-3-Clause'
@@ -47,15 +47,18 @@
'[email protected], [email protected], [email protected]'),
url=URL,
download_url='https://github.com/kymatio/kymatio/releases',
+ project_urls={
+ 'Documentation': 'https://www.kymat.io/codereference.html',
+ 'Source': 'https://github.com/kymatio/kymatio/',
+ 'Tracker': 'https://github.com/kymatio/kymatio/issues',
+ 'Authors': 'https://github.com/kymatio/kymatio/blob/master/AUTHORS.md'
+ },
classifiers=['Intended Audience :: Education',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
'Operating System :: MacOS',
- 'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX :: Linux',
- 'Programming Language :: Python :: 3.4',
- 'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
@@ -72,6 +75,7 @@
description=DESCRIPTION,
long_description=LONG_DESCRIPTION,
long_description_content_type='text/markdown',
+ python_requires='>=3.6',
license=LICENSE,
packages=find_packages(exclude=('test',)),
install_requires=REQUIREMENTS,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n # Constants\n DISTNAME = 'kymatio'\n DESCRIPTION = 'Wavelet scattering transforms in Python with GPU acceleration'\n-URL = 'https://kymatio.github.io'\n+URL = 'https://www.kymat.io'\n LICENSE = 'BSD-3-Clause'\n \n \n@@ -47,15 +47,18 @@\n '[email protected], [email protected], [email protected]'),\n url=URL,\n download_url='https://github.com/kymatio/kymatio/releases',\n+ project_urls={\n+ 'Documentation': 'https://www.kymat.io/codereference.html',\n+ 'Source': 'https://github.com/kymatio/kymatio/',\n+ 'Tracker': 'https://github.com/kymatio/kymatio/issues',\n+ 'Authors': 'https://github.com/kymatio/kymatio/blob/master/AUTHORS.md'\n+ },\n classifiers=['Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: MacOS',\n- 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n- 'Programming Language :: Python :: 3.4',\n- 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n@@ -72,6 +75,7 @@\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n+ python_requires='>=3.6',\n license=LICENSE,\n packages=find_packages(exclude=('test',)),\n install_requires=REQUIREMENTS,\n", "issue": "RELEASE requirements?\nHi,\r\n\r\nWhen doing:\r\n\r\n```\r\nconda install pytorch torchvision -c pytorch\r\npip install -i https://test.pypi.org/simple/ kymatio\r\n```\r\n\r\nthen cloning the corresponding release, and doing `pytest`, I have tons of errors that are evidences that none of the requirements.txt packages are installed. 
Actually, this makes me think we should have a pip for the GPU version(like `pip install kymatio-gpu`)?\r\n\r\nhttps://github.com/kymatio/kymatio/blob/master/requirements.txt\n", "before_files": [{"content": "#!/usr/bin/env python\nimport csv\nimport importlib\nimport os\nimport shutil\nimport sys\nfrom setuptools import setup, find_packages\n\n# Constants\nDISTNAME = 'kymatio'\nDESCRIPTION = 'Wavelet scattering transforms in Python with GPU acceleration'\nURL = 'https://kymatio.github.io'\nLICENSE = 'BSD-3-Clause'\n\n\n# Parse description\nwith open('README.md') as f:\n README = f.read().split('\\n')\n LONG_DESCRIPTION = '\\n'.join([x for x in README if not x[:3]=='[!['])\n\n\n# Parse version.py\nkymatio_version_spec = importlib.util.spec_from_file_location(\n 'kymatio_version', 'kymatio/version.py')\nkymatio_version_module = importlib.util.module_from_spec(kymatio_version_spec)\nkymatio_version_spec.loader.exec_module(kymatio_version_module)\nVERSION = kymatio_version_module.version\n\n\n# Parse requirements.txt\nwith open('requirements.txt', 'r') as f:\n REQUIREMENTS = f.read().split('\\n')\n\n\nsetup_info = dict(\n # Metadata\n name=DISTNAME,\n version=VERSION,\n author=('Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, '\n 'Michael Eickenberg, Mathieu Andreux, Georgios Exarchakis, '\n 'Louis Thiry, Vincent Lostanlen, Joakim And\u00e9n, '\n 'Tom\u00e1s Angles, Gabriel Huang, Roberto Leonarduzzi'),\n author_email=('[email protected], [email protected], '\n '[email protected], [email protected], '\n '[email protected], [email protected], '\n '[email protected], [email protected], [email protected], '\n '[email protected], [email protected], [email protected]'),\n url=URL,\n download_url='https://github.com/kymatio/kymatio/releases',\n classifiers=['Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: MacOS',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Topic :: Multimedia :: Graphics :: 3D Modeling',\n 'Topic :: Multimedia :: Sound/Audio :: Analysis',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Chemistry',\n 'Topic :: Scientific/Engineering :: Image Recognition',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Scientific/Engineering :: Physics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n license=LICENSE,\n packages=find_packages(exclude=('test',)),\n install_requires=REQUIREMENTS,\n zip_safe=True,\n)\n\nsetup(**setup_info)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport csv\nimport importlib\nimport os\nimport shutil\nimport sys\nfrom setuptools import setup, find_packages\n\n# Constants\nDISTNAME = 'kymatio'\nDESCRIPTION = 'Wavelet scattering transforms in Python with GPU acceleration'\nURL = 'https://www.kymat.io'\nLICENSE = 'BSD-3-Clause'\n\n\n# Parse description\nwith open('README.md') as f:\n README = f.read().split('\\n')\n LONG_DESCRIPTION = '\\n'.join([x for x in README if not 
x[:3]=='[!['])\n\n\n# Parse version.py\nkymatio_version_spec = importlib.util.spec_from_file_location(\n 'kymatio_version', 'kymatio/version.py')\nkymatio_version_module = importlib.util.module_from_spec(kymatio_version_spec)\nkymatio_version_spec.loader.exec_module(kymatio_version_module)\nVERSION = kymatio_version_module.version\n\n\n# Parse requirements.txt\nwith open('requirements.txt', 'r') as f:\n REQUIREMENTS = f.read().split('\\n')\n\n\nsetup_info = dict(\n # Metadata\n name=DISTNAME,\n version=VERSION,\n author=('Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, '\n 'Michael Eickenberg, Mathieu Andreux, Georgios Exarchakis, '\n 'Louis Thiry, Vincent Lostanlen, Joakim And\u00e9n, '\n 'Tom\u00e1s Angles, Gabriel Huang, Roberto Leonarduzzi'),\n author_email=('[email protected], [email protected], '\n '[email protected], [email protected], '\n '[email protected], [email protected], '\n '[email protected], [email protected], [email protected], '\n '[email protected], [email protected], [email protected]'),\n url=URL,\n download_url='https://github.com/kymatio/kymatio/releases',\n project_urls={\n 'Documentation': 'https://www.kymat.io/codereference.html',\n 'Source': 'https://github.com/kymatio/kymatio/',\n 'Tracker': 'https://github.com/kymatio/kymatio/issues',\n 'Authors': 'https://github.com/kymatio/kymatio/blob/master/AUTHORS.md'\n },\n classifiers=['Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: MacOS',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Topic :: Multimedia :: Graphics :: 3D Modeling',\n 'Topic :: Multimedia :: Sound/Audio :: Analysis',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Chemistry',\n 'Topic :: Scientific/Engineering :: Image Recognition',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Scientific/Engineering :: Physics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n python_requires='>=3.6',\n license=LICENSE,\n packages=find_packages(exclude=('test',)),\n install_requires=REQUIREMENTS,\n zip_safe=True,\n)\n\nsetup(**setup_info)\n", "path": "setup.py"}]} | 1,283 | 427 |
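An illustrative aside on the kymatio row above (not part of the record): the patch adds `python_requires='>=3.6'` while keeping the `requirements.txt` parsing, and the reported symptom was that none of those requirements got installed. A quick, hedged way to check whether a built distribution actually declares its dependencies is to read the installed metadata; the package name comes from the issue, everything else is standard library.

```python
from importlib.metadata import PackageNotFoundError, requires, version

# Inspect what the installed kymatio distribution declares as dependencies.
# An empty or missing list would match the symptom in the issue text:
# install_requires never made it into the published metadata.
try:
    print("kymatio", version("kymatio"))
    for req in requires("kymatio") or []:
        print("  requires:", req)
except PackageNotFoundError:
    print("kymatio is not installed in this environment")
```

A separate, common explanation for the symptom is that TestPyPI does not mirror dependencies such as numpy or scipy, so installs from that index usually need `--extra-index-url https://pypi.org/simple/` for the declared requirements to resolve.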
gh_patches_debug_27447 | rasdani/github-patches | git_diff | bridgecrewio__checkov-1310 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AZURE_103 not accepting vsts_configuration as valid Git repository
**Describe the bug**
The rule CKV_AZURE_103 only accepts the Github configuration for Git source control for an Azure Data Factory instance. However, it is possible to configure a Git source control using the `vsts_configuration`.
**To Reproduce**
Steps to reproduce the behavior:
Create the following resource
```hcl
resource "azurerm_data_factory" "main" {
# General
name = "dummy-name"
resource_group_name = azurerm_resource_group.primary.name
location = azurerm_resource_group.primary.location
# Azure DevOps
vsts_configuration {
account_name = var.account_name
branch_name = var.branch_name
project_name = var.project_name
repository_name = var.repository_name
root_folder = var.root_folder
tenant_id = data.azurerm_client_config.current.tenant_id
}
}
```
**Expected behavior**
Expected to accept both vsts_configuration and github_configuration as valid Git source control configurations.
**Desktop (please complete the following information):**
- OS: Ubuntu 20.04 LTS
- Checkov Version 2.0.86
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py`
Content:
```
1 from checkov.common.models.consts import ANY_VALUE
2 from checkov.common.models.enums import CheckCategories
3 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
4
5
6 class DataFactoryUsesGitRepository(BaseResourceValueCheck):
7 def __init__(self):
8 name = "Ensure that Azure Data Factory uses Git repository for source control"
9 id = "CKV_AZURE_103"
10 supported_resources = ['azurerm_data_factory']
11 categories = [CheckCategories.GENERAL_SECURITY]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
13
14 def get_inspected_key(self):
15 return "github_configuration/[0]/repository_name"
16
17 def get_expected_value(self):
18 return ANY_VALUE
19
20
21 check = DataFactoryUsesGitRepository()
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py b/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py
--- a/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py
+++ b/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py
@@ -1,21 +1,25 @@
-from checkov.common.models.consts import ANY_VALUE
-from checkov.common.models.enums import CheckCategories
-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
+from typing import Dict, List, Any
+from checkov.common.models.enums import CheckCategories, CheckResult
+from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
-class DataFactoryUsesGitRepository(BaseResourceValueCheck):
- def __init__(self):
+
+class DataFactoryUsesGitRepository(BaseResourceCheck):
+ def __init__(self) -> None:
name = "Ensure that Azure Data Factory uses Git repository for source control"
id = "CKV_AZURE_103"
- supported_resources = ['azurerm_data_factory']
+ supported_resources = ["azurerm_data_factory"]
categories = [CheckCategories.GENERAL_SECURITY]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
- def get_inspected_key(self):
- return "github_configuration/[0]/repository_name"
-
- def get_expected_value(self):
- return ANY_VALUE
+ def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
+ github = conf.get("github_configuration", [{}])[0]
+ if github.get("repository_name"):
+ return CheckResult.PASSED
+ vsts = conf.get("vsts_configuration", [{}])[0]
+ if vsts.get("repository_name"):
+ return CheckResult.PASSED
+ return CheckResult.FAILED
check = DataFactoryUsesGitRepository()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py b/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py\n--- a/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py\n+++ b/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py\n@@ -1,21 +1,25 @@\n-from checkov.common.models.consts import ANY_VALUE\n-from checkov.common.models.enums import CheckCategories\n-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n+from typing import Dict, List, Any\n \n+from checkov.common.models.enums import CheckCategories, CheckResult\n+from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n-class DataFactoryUsesGitRepository(BaseResourceValueCheck):\n- def __init__(self):\n+\n+class DataFactoryUsesGitRepository(BaseResourceCheck):\n+ def __init__(self) -> None:\n name = \"Ensure that Azure Data Factory uses Git repository for source control\"\n id = \"CKV_AZURE_103\"\n- supported_resources = ['azurerm_data_factory']\n+ supported_resources = [\"azurerm_data_factory\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n- def get_inspected_key(self):\n- return \"github_configuration/[0]/repository_name\"\n-\n- def get_expected_value(self):\n- return ANY_VALUE\n+ def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n+ github = conf.get(\"github_configuration\", [{}])[0]\n+ if github.get(\"repository_name\"):\n+ return CheckResult.PASSED\n+ vsts = conf.get(\"vsts_configuration\", [{}])[0]\n+ if vsts.get(\"repository_name\"):\n+ return CheckResult.PASSED\n+ return CheckResult.FAILED\n \n \n check = DataFactoryUsesGitRepository()\n", "issue": "CKV_AZURE_103 not accepting vsts_configuration as valid Git repository\n**Describe the bug**\r\nThe rule CKV_AZURE_103 only accepts the Github configuration for Git source control for an Azure Data Factory instance. However, it is possible to configure a Git source control using the `vsts_configuration`. 
\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\nCreate the following resource\r\n```hcl\r\nresource \"azurerm_data_factory\" \"main\" {\r\n\r\n # General\r\n name = \"dummy-name\"\r\n resource_group_name = azurerm_resource_group.primary.name\r\n location = azurerm_resource_group.primary.location\r\n\r\n # Azure DevOps\r\n vsts_configuration {\r\n account_name = var.account_name\r\n branch_name = var.branch_name\r\n project_name = var.project_name\r\n repository_name = var.repository_name\r\n root_folder = var.root_folder\r\n tenant_id = data.azurerm_client_config.current.tenant_id\r\n }\r\n\r\n}\r\n```\r\n\r\n**Expected behavior**\r\nExpected to accept both vsts_configuration and github_configuration as valid Git source control configurations.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Ubuntu 20.04 LTS\r\n - Checkov Version 2.0.86\n", "before_files": [{"content": "from checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass DataFactoryUsesGitRepository(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that Azure Data Factory uses Git repository for source control\"\n id = \"CKV_AZURE_103\"\n supported_resources = ['azurerm_data_factory']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"github_configuration/[0]/repository_name\"\n\n def get_expected_value(self):\n return ANY_VALUE\n\n\ncheck = DataFactoryUsesGitRepository()\n", "path": "checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py"}], "after_files": [{"content": "from typing import Dict, List, Any\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass DataFactoryUsesGitRepository(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure that Azure Data Factory uses Git repository for source control\"\n id = \"CKV_AZURE_103\"\n supported_resources = [\"azurerm_data_factory\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n github = conf.get(\"github_configuration\", [{}])[0]\n if github.get(\"repository_name\"):\n return CheckResult.PASSED\n vsts = conf.get(\"vsts_configuration\", [{}])[0]\n if vsts.get(\"repository_name\"):\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = DataFactoryUsesGitRepository()\n", "path": "checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py"}]} | 750 | 436 |
gh_patches_debug_37684 | rasdani/github-patches | git_diff | pulp__pulpcore-4335 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need a way to directly migrate pulp_labels from 3.21 to 3.28
**Is your feature request related to a problem? Please describe.**
As RHUI developers, we are heavy users of pulp_labels on repository objects. As the current 4.5 release is on pulpcore 3.21, we would like our next release (4.6) to be based on pulpcore 3.28, and we need to ensure a smooth upgrade path for our customers. As it is currently laid out, the 3.21 to 3.28 migration path requires an intermediate step (<3.25) where `pulpcore-manager datarepair-labels` command is available.
**Describe the solution you'd like**
The ideal solution would be to make Django migrations take care of that automatically, without requiring an intermediate step/release. Since we live in non-ideal world, a solution where an additional command like `datarepair-labels` is available would also suffice.
**Describe alternatives you've considered**
We have considered an intermediate release of RHUI 4.6 with pulpcore 3.22 followed by RHUI 4.7 built on 3.28, but it would create a lot of migration issues for our customers, with many of them still running RHUI 4.0.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/migrations/0104_delete_label.py`
Content:
```
1 # Generated by Django 3.2.18 on 2023-05-04 07:06
2
3 from django.db import migrations
4
5
6 def check_no_existing_labels(apps, schema_editor):
7 Label = apps.get_model("core", "Label")
8 if Label.objects.exists():
9 raise RuntimeError(
10 "There are remaining labels. Please revert to pulpcore<3.25 and make sure all labels are properly mirgated or deleted."
11 )
12
13
14 class Migration(migrations.Migration):
15
16 dependencies = [
17 ('core', '0103_alter_export_task'),
18 ]
19
20 operations = [
21 migrations.RunPython(
22 code=check_no_existing_labels,
23 reverse_code=migrations.RunPython.noop,
24 elidable=True,
25 ),
26 migrations.DeleteModel(
27 name='Label',
28 ),
29 ]
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/migrations/0104_delete_label.py b/pulpcore/app/migrations/0104_delete_label.py
--- a/pulpcore/app/migrations/0104_delete_label.py
+++ b/pulpcore/app/migrations/0104_delete_label.py
@@ -1,6 +1,45 @@
# Generated by Django 3.2.18 on 2023-05-04 07:06
from django.db import migrations
+from django.db.models.expressions import OuterRef, RawSQL
+
+
+def migrate_remaining_labels(apps, schema_editor):
+ """
+ This data migration handles the "but what about plugins" problem noted in the issue [0], with only two caveats:
+
+ Case 1: If there were to exist a plugin containing a Model whose model-name ended in (for example) "Repository",
+ that was NOT a detail-model of a Repository master-model, AND that plugin allowed Labels for such a model - then,
+ upon running this migration, those Labels would be lost.
+
+ Case 2: If there were to exist a plugin containing a Model that was a Detail of (for example) Repository,
+ but named something like "PluginRepositoryButWhy", and that plugin allowed Labels,
+ and instances of such a Model had Labels associated with them - then this migration would fail,
+ because the Labels would not be found, migrated, and deleted, and the old-Label table would not be able to be dropped.
+
+ And the plugins described above would have to have existed and been in use with pulpcore/3.21,only -
+ if they appeared with core/3.22, they'd be using new-Labels and all would be (already) well.
+
+ No such plugins/Models exist, to the best of our knowledge.
+
+ [0] https://github.com/pulp/pulpcore/issues/4319
+ """
+ Label = apps.get_model("core", "Label")
+ Repository = apps.get_model("core", "Repository")
+ Remote = apps.get_model("core", "Remote")
+ Distribution = apps.get_model("core", "Distribution")
+ ContentType = apps.get_model("contenttypes", "ContentType")
+
+ for master_model, model_name in [(Repository, "repository"), (Remote, "remote"), (Distribution, "distribution")]:
+ detail_ctypes = ContentType.objects.filter(app_label__ne="core", model__endswith=model_name)
+ affected_ids = Label.objects.filter(content_type__in=detail_ctypes).values("object_id").distinct()
+ label_subq = Label.objects.filter(
+ content_type__in=detail_ctypes, object_id=OuterRef("pulp_id")
+ ).annotate(
+ label_data=RawSQL("hstore(array_agg(key), array_agg(value))", [])
+ ).values("label_data")
+ master_model.objects.filter(pulp_id__in=affected_ids).update(pulp_labels=label_subq)
+ Label.objects.filter(content_type__in=detail_ctypes).delete()
def check_no_existing_labels(apps, schema_editor):
@@ -18,11 +57,20 @@
]
operations = [
+ migrations.RunPython(
+ code=migrate_remaining_labels,
+ reverse_code=migrations.RunPython.noop,
+ elidable=True,
+ ),
migrations.RunPython(
code=check_no_existing_labels,
reverse_code=migrations.RunPython.noop,
elidable=True,
),
+ migrations.RunSQL(
+ sql="SET CONSTRAINTS ALL IMMEDIATE;",
+ reverse_sql="",
+ ),
migrations.DeleteModel(
name='Label',
),
| {"golden_diff": "diff --git a/pulpcore/app/migrations/0104_delete_label.py b/pulpcore/app/migrations/0104_delete_label.py\n--- a/pulpcore/app/migrations/0104_delete_label.py\n+++ b/pulpcore/app/migrations/0104_delete_label.py\n@@ -1,6 +1,45 @@\n # Generated by Django 3.2.18 on 2023-05-04 07:06\n \n from django.db import migrations\n+from django.db.models.expressions import OuterRef, RawSQL\n+\n+\n+def migrate_remaining_labels(apps, schema_editor):\n+ \"\"\"\n+ This data migration handles the \"but what about plugins\" problem noted in the issue [0], with only two caveats:\n+\n+ Case 1: If there were to exist a plugin containing a Model whose model-name ended in (for example) \"Repository\",\n+ that was NOT a detail-model of a Repository master-model, AND that plugin allowed Labels for such a model - then,\n+ upon running this migration, those Labels would be lost.\n+\n+ Case 2: If there were to exist a plugin containing a Model that was a Detail of (for example) Repository,\n+ but named something like \"PluginRepositoryButWhy\", and that plugin allowed Labels,\n+ and instances of such a Model had Labels associated with them - then this migration would fail,\n+ because the Labels would not be found, migrated, and deleted, and the old-Label table would not be able to be dropped.\n+\n+ And the plugins described above would have to have existed and been in use with pulpcore/3.21,only -\n+ if they appeared with core/3.22, they'd be using new-Labels and all would be (already) well.\n+\n+ No such plugins/Models exist, to the best of our knowledge.\n+\n+ [0] https://github.com/pulp/pulpcore/issues/4319\n+ \"\"\"\n+ Label = apps.get_model(\"core\", \"Label\")\n+ Repository = apps.get_model(\"core\", \"Repository\")\n+ Remote = apps.get_model(\"core\", \"Remote\")\n+ Distribution = apps.get_model(\"core\", \"Distribution\")\n+ ContentType = apps.get_model(\"contenttypes\", \"ContentType\")\n+\n+ for master_model, model_name in [(Repository, \"repository\"), (Remote, \"remote\"), (Distribution, \"distribution\")]:\n+ detail_ctypes = ContentType.objects.filter(app_label__ne=\"core\", model__endswith=model_name)\n+ affected_ids = Label.objects.filter(content_type__in=detail_ctypes).values(\"object_id\").distinct()\n+ label_subq = Label.objects.filter(\n+ content_type__in=detail_ctypes, object_id=OuterRef(\"pulp_id\")\n+ ).annotate(\n+ label_data=RawSQL(\"hstore(array_agg(key), array_agg(value))\", [])\n+ ).values(\"label_data\")\n+ master_model.objects.filter(pulp_id__in=affected_ids).update(pulp_labels=label_subq)\n+ Label.objects.filter(content_type__in=detail_ctypes).delete()\n \n \n def check_no_existing_labels(apps, schema_editor):\n@@ -18,11 +57,20 @@\n ]\n \n operations = [\n+ migrations.RunPython(\n+ code=migrate_remaining_labels,\n+ reverse_code=migrations.RunPython.noop,\n+ elidable=True,\n+ ),\n migrations.RunPython(\n code=check_no_existing_labels,\n reverse_code=migrations.RunPython.noop,\n elidable=True,\n ),\n+ migrations.RunSQL(\n+ sql=\"SET CONSTRAINTS ALL IMMEDIATE;\",\n+ reverse_sql=\"\",\n+ ),\n migrations.DeleteModel(\n name='Label',\n ),\n", "issue": "Need a way to directly migrate pulp_labels from 3.21 to 3.28\n**Is your feature request related to a problem? Please describe.**\r\nAs RHUI developers, we are heavy users of pulp_labels on repository objects. As the current 4.5 release is on pulpcore 3.21, we would like our next release (4.6) to be based on pulpcore 3.28, and we need to ensure a smooth upgrade path for our customers. 
As it is currently laid out, the 3.21 to 3.28 migration path requires an intermediate step (<3.25) where `pulpcore-manager datarepair-labels` command is available.\r\n\r\n**Describe the solution you'd like**\r\nThe ideal solution would be to make Django migrations take care of that automatically, without requiring an intermediate step/release. Since we live in non-ideal world, a solution where an additional command like `datarepair-labels` is available would also suffice.\r\n\r\n**Describe alternatives you've considered**\r\nWe have considered an intermediate release of RHUI 4.6 with pulpcore 3.22 followed by RHUI 4.7 built on 3.28, but it would create a lot of migration issues for our customers, with many of them still running RHUI 4.0.\r\n\r\n\n", "before_files": [{"content": "# Generated by Django 3.2.18 on 2023-05-04 07:06\n\nfrom django.db import migrations\n\n\ndef check_no_existing_labels(apps, schema_editor):\n Label = apps.get_model(\"core\", \"Label\")\n if Label.objects.exists():\n raise RuntimeError(\n \"There are remaining labels. Please revert to pulpcore<3.25 and make sure all labels are properly mirgated or deleted.\"\n )\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0103_alter_export_task'),\n ]\n\n operations = [\n migrations.RunPython(\n code=check_no_existing_labels,\n reverse_code=migrations.RunPython.noop,\n elidable=True,\n ),\n migrations.DeleteModel(\n name='Label',\n ),\n ]\n", "path": "pulpcore/app/migrations/0104_delete_label.py"}], "after_files": [{"content": "# Generated by Django 3.2.18 on 2023-05-04 07:06\n\nfrom django.db import migrations\nfrom django.db.models.expressions import OuterRef, RawSQL\n\n\ndef migrate_remaining_labels(apps, schema_editor):\n \"\"\"\n This data migration handles the \"but what about plugins\" problem noted in the issue [0], with only two caveats:\n\n Case 1: If there were to exist a plugin containing a Model whose model-name ended in (for example) \"Repository\",\n that was NOT a detail-model of a Repository master-model, AND that plugin allowed Labels for such a model - then,\n upon running this migration, those Labels would be lost.\n\n Case 2: If there were to exist a plugin containing a Model that was a Detail of (for example) Repository,\n but named something like \"PluginRepositoryButWhy\", and that plugin allowed Labels,\n and instances of such a Model had Labels associated with them - then this migration would fail,\n because the Labels would not be found, migrated, and deleted, and the old-Label table would not be able to be dropped.\n\n And the plugins described above would have to have existed and been in use with pulpcore/3.21,only -\n if they appeared with core/3.22, they'd be using new-Labels and all would be (already) well.\n\n No such plugins/Models exist, to the best of our knowledge.\n\n [0] https://github.com/pulp/pulpcore/issues/4319\n \"\"\"\n Label = apps.get_model(\"core\", \"Label\")\n Repository = apps.get_model(\"core\", \"Repository\")\n Remote = apps.get_model(\"core\", \"Remote\")\n Distribution = apps.get_model(\"core\", \"Distribution\")\n ContentType = apps.get_model(\"contenttypes\", \"ContentType\")\n\n for master_model, model_name in [(Repository, \"repository\"), (Remote, \"remote\"), (Distribution, \"distribution\")]:\n detail_ctypes = ContentType.objects.filter(app_label__ne=\"core\", model__endswith=model_name)\n affected_ids = Label.objects.filter(content_type__in=detail_ctypes).values(\"object_id\").distinct()\n label_subq = Label.objects.filter(\n 
content_type__in=detail_ctypes, object_id=OuterRef(\"pulp_id\")\n ).annotate(\n label_data=RawSQL(\"hstore(array_agg(key), array_agg(value))\", [])\n ).values(\"label_data\")\n master_model.objects.filter(pulp_id__in=affected_ids).update(pulp_labels=label_subq)\n Label.objects.filter(content_type__in=detail_ctypes).delete()\n\n\ndef check_no_existing_labels(apps, schema_editor):\n Label = apps.get_model(\"core\", \"Label\")\n if Label.objects.exists():\n raise RuntimeError(\n \"There are remaining labels. Please revert to pulpcore<3.25 and make sure all labels are properly mirgated or deleted.\"\n )\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0103_alter_export_task'),\n ]\n\n operations = [\n migrations.RunPython(\n code=migrate_remaining_labels,\n reverse_code=migrations.RunPython.noop,\n elidable=True,\n ),\n migrations.RunPython(\n code=check_no_existing_labels,\n reverse_code=migrations.RunPython.noop,\n elidable=True,\n ),\n migrations.RunSQL(\n sql=\"SET CONSTRAINTS ALL IMMEDIATE;\",\n reverse_sql=\"\",\n ),\n migrations.DeleteModel(\n name='Label',\n ),\n ]\n", "path": "pulpcore/app/migrations/0104_delete_label.py"}]} | 786 | 814 |
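The least obvious part of the pulpcore migration above is the correlated `RawSQL("hstore(array_agg(key), array_agg(value))")` subquery, which folds the old generic `Label` rows into a single key/value mapping per labelled object before writing it to `pulp_labels`. The sketch below restates that transformation in plain Python with made-up rows; it is illustrative only, not equivalent SQL.

```python
from collections import defaultdict

# (object_id, key, value) rows as they would sit in the old core_label table.
old_labels = [
    ("repo-1", "env", "prod"),
    ("repo-1", "team", "rhui"),
    ("repo-2", "env", "dev"),
]

# Group per labelled object, mirroring array_agg(key)/array_agg(value) + hstore.
pulp_labels = defaultdict(dict)
for object_id, key, value in old_labels:
    pulp_labels[object_id][key] = value

# pulp_labels["repo-1"] == {"env": "prod", "team": "rhui"} is what the update
# writes into Repository/Remote/Distribution.pulp_labels for that object.
print(dict(pulp_labels))
```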
gh_patches_debug_11491 | rasdani/github-patches | git_diff | scikit-hep__pyhf-444 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pyhf commandline tools requires uproot extra
# Description
just issuing `pyhf --help` requires the uproot extra since `commandline.py` imports `writexml`.
# Expected Behavior
I don't need uproot if I don't want to use json2xml or xml2json
# Actual Behavior
I can't use `pyhf` without installing uproot.
# Steps to Reproduce
install master and run pyhf
# Checklist
- [x] Run `git fetch` to get the most up to date version of `master`
- [x] Searched through existing Issues to confirm this is not a duplicate issue
- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyhf/commandline.py`
Content:
```
1 import logging
2
3 import click
4 import json
5 import os
6
7 from . import writexml
8 from .utils import hypotest
9 from .pdf import Workspace
10 from .version import __version__
11
12 logging.basicConfig()
13 log = logging.getLogger(__name__)
14
15 # This is only needed for Python 2/3 compatibility
16 def ensure_dirs(path):
17 try:
18 os.makedirs(path, exist_ok=True)
19 except TypeError:
20 if not os.path.exists(path):
21 os.makedirs(path)
22
23
24 @click.group(context_settings=dict(help_option_names=['-h', '--help']))
25 @click.version_option(version=__version__)
26 def pyhf():
27 pass
28
29
30 @pyhf.command()
31 @click.argument('entrypoint-xml', type=click.Path(exists=True))
32 @click.option(
33 '--basedir',
34 help='The base directory for the XML files to point relative to.',
35 type=click.Path(exists=True),
36 default=os.getcwd(),
37 )
38 @click.option(
39 '--output-file',
40 help='The location of the output json file. If not specified, prints to screen.',
41 default=None,
42 )
43 @click.option('--track-progress/--hide-progress', default=True)
44 def xml2json(entrypoint_xml, basedir, output_file, track_progress):
45 """ Entrypoint XML: The top-level XML file for the PDF definition. """
46 try:
47 import uproot
48
49 assert uproot
50 except ImportError:
51 log.error(
52 "xml2json requires uproot, please install pyhf using the "
53 "xmlio extra: pip install pyhf[xmlio] or install uproot "
54 "manually: pip install uproot"
55 )
56 from . import readxml
57
58 spec = readxml.parse(entrypoint_xml, basedir, track_progress=track_progress)
59 if output_file is None:
60 print(json.dumps(spec, indent=4, sort_keys=True))
61 else:
62 with open(output_file, 'w+') as out_file:
63 json.dump(spec, out_file, indent=4, sort_keys=True)
64 log.debug("Written to {0:s}".format(output_file))
65
66
67 @pyhf.command()
68 @click.argument('workspace', default='-')
69 @click.option('--output-dir', type=click.Path(exists=True), default='.')
70 @click.option('--specroot', default='config')
71 @click.option('--dataroot', default='data')
72 @click.option('--resultprefix', default='FitConfig')
73 def json2xml(workspace, output_dir, specroot, dataroot, resultprefix):
74 try:
75 import uproot
76
77 assert uproot
78 except ImportError:
79 log.error(
80 "json2xml requires uproot, please install pyhf using the "
81 "xmlio extra: pip install pyhf[xmlio] or install uproot "
82 "manually: pip install uproot"
83 )
84
85 ensure_dirs(output_dir)
86 with click.open_file(workspace, 'r') as specstream:
87 d = json.load(specstream)
88 ensure_dirs(os.path.join(output_dir, specroot))
89 ensure_dirs(os.path.join(output_dir, dataroot))
90 with click.open_file(
91 os.path.join(output_dir, '{0:s}.xml'.format(resultprefix)), 'w'
92 ) as outstream:
93 outstream.write(
94 writexml.writexml(
95 d,
96 os.path.join(output_dir, specroot),
97 os.path.join(output_dir, dataroot),
98 resultprefix,
99 ).decode('utf-8')
100 )
101
102
103 @pyhf.command()
104 @click.argument('workspace', default='-')
105 @click.option(
106 '--output-file',
107 help='The location of the output json file. If not specified, prints to screen.',
108 default=None,
109 )
110 @click.option('--measurement', default=None)
111 @click.option('-p', '--patch', multiple=True)
112 @click.option('--testpoi', default=1.0)
113 def cls(workspace, output_file, measurement, patch, testpoi):
114 with click.open_file(workspace, 'r') as specstream:
115 wspec = json.load(specstream)
116
117 w = Workspace(wspec)
118
119 patches = [json.loads(click.open_file(pfile, 'r').read()) for pfile in patch]
120 p = w.model(measurement_name=measurement, patches=patches)
121 result = hypotest(testpoi, w.data(p), p, return_expected_set=True)
122 result = {'CLs_obs': result[0].tolist()[0], 'CLs_exp': result[-1].ravel().tolist()}
123 if output_file is None:
124 print(json.dumps(result, indent=4, sort_keys=True))
125 else:
126 with open(output_file, 'w+') as out_file:
127 json.dump(result, out_file, indent=4, sort_keys=True)
128 log.debug("Written to {0:s}".format(output_file))
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyhf/commandline.py b/pyhf/commandline.py
--- a/pyhf/commandline.py
+++ b/pyhf/commandline.py
@@ -4,7 +4,6 @@
import json
import os
-from . import writexml
from .utils import hypotest
from .pdf import Workspace
from .version import __version__
@@ -81,6 +80,7 @@
"xmlio extra: pip install pyhf[xmlio] or install uproot "
"manually: pip install uproot"
)
+ from . import writexml
ensure_dirs(output_dir)
with click.open_file(workspace, 'r') as specstream:
| {"golden_diff": "diff --git a/pyhf/commandline.py b/pyhf/commandline.py\n--- a/pyhf/commandline.py\n+++ b/pyhf/commandline.py\n@@ -4,7 +4,6 @@\n import json\n import os\n \n-from . import writexml\n from .utils import hypotest\n from .pdf import Workspace\n from .version import __version__\n@@ -81,6 +80,7 @@\n \"xmlio extra: pip install pyhf[xmlio] or install uproot \"\n \"manually: pip install uproot\"\n )\n+ from . import writexml\n \n ensure_dirs(output_dir)\n with click.open_file(workspace, 'r') as specstream:\n", "issue": "pyhf commandline tools requires uproot extra\n# Description\r\n\r\njust issuing `pyhf --help` requires the uproot extra since `commandline.py` imports `writexml`.\r\n\r\n# Expected Behavior\r\n\r\nI don't need uproot if I don't want to use json2xml or xml2json\r\n\r\n# Actual Behavior\r\n\r\nI can't use `pyhf` without installing uproot.\r\n\r\n# Steps to Reproduce\r\n\r\ninstall master and run pyhf\r\n\r\n# Checklist\r\n\r\n- [x] Run `git fetch` to get the most up to date version of `master`\r\n- [x] Searched through existing Issues to confirm this is not a duplicate issue\r\n- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue\r\n\n", "before_files": [{"content": "import logging\n\nimport click\nimport json\nimport os\n\nfrom . import writexml\nfrom .utils import hypotest\nfrom .pdf import Workspace\nfrom .version import __version__\n\nlogging.basicConfig()\nlog = logging.getLogger(__name__)\n\n# This is only needed for Python 2/3 compatibility\ndef ensure_dirs(path):\n try:\n os.makedirs(path, exist_ok=True)\n except TypeError:\n if not os.path.exists(path):\n os.makedirs(path)\n\n\[email protected](context_settings=dict(help_option_names=['-h', '--help']))\[email protected]_option(version=__version__)\ndef pyhf():\n pass\n\n\[email protected]()\[email protected]('entrypoint-xml', type=click.Path(exists=True))\[email protected](\n '--basedir',\n help='The base directory for the XML files to point relative to.',\n type=click.Path(exists=True),\n default=os.getcwd(),\n)\[email protected](\n '--output-file',\n help='The location of the output json file. If not specified, prints to screen.',\n default=None,\n)\[email protected]('--track-progress/--hide-progress', default=True)\ndef xml2json(entrypoint_xml, basedir, output_file, track_progress):\n \"\"\" Entrypoint XML: The top-level XML file for the PDF definition. \"\"\"\n try:\n import uproot\n\n assert uproot\n except ImportError:\n log.error(\n \"xml2json requires uproot, please install pyhf using the \"\n \"xmlio extra: pip install pyhf[xmlio] or install uproot \"\n \"manually: pip install uproot\"\n )\n from . 
import readxml\n\n spec = readxml.parse(entrypoint_xml, basedir, track_progress=track_progress)\n if output_file is None:\n print(json.dumps(spec, indent=4, sort_keys=True))\n else:\n with open(output_file, 'w+') as out_file:\n json.dump(spec, out_file, indent=4, sort_keys=True)\n log.debug(\"Written to {0:s}\".format(output_file))\n\n\[email protected]()\[email protected]('workspace', default='-')\[email protected]('--output-dir', type=click.Path(exists=True), default='.')\[email protected]('--specroot', default='config')\[email protected]('--dataroot', default='data')\[email protected]('--resultprefix', default='FitConfig')\ndef json2xml(workspace, output_dir, specroot, dataroot, resultprefix):\n try:\n import uproot\n\n assert uproot\n except ImportError:\n log.error(\n \"json2xml requires uproot, please install pyhf using the \"\n \"xmlio extra: pip install pyhf[xmlio] or install uproot \"\n \"manually: pip install uproot\"\n )\n\n ensure_dirs(output_dir)\n with click.open_file(workspace, 'r') as specstream:\n d = json.load(specstream)\n ensure_dirs(os.path.join(output_dir, specroot))\n ensure_dirs(os.path.join(output_dir, dataroot))\n with click.open_file(\n os.path.join(output_dir, '{0:s}.xml'.format(resultprefix)), 'w'\n ) as outstream:\n outstream.write(\n writexml.writexml(\n d,\n os.path.join(output_dir, specroot),\n os.path.join(output_dir, dataroot),\n resultprefix,\n ).decode('utf-8')\n )\n\n\[email protected]()\[email protected]('workspace', default='-')\[email protected](\n '--output-file',\n help='The location of the output json file. If not specified, prints to screen.',\n default=None,\n)\[email protected]('--measurement', default=None)\[email protected]('-p', '--patch', multiple=True)\[email protected]('--testpoi', default=1.0)\ndef cls(workspace, output_file, measurement, patch, testpoi):\n with click.open_file(workspace, 'r') as specstream:\n wspec = json.load(specstream)\n\n w = Workspace(wspec)\n\n patches = [json.loads(click.open_file(pfile, 'r').read()) for pfile in patch]\n p = w.model(measurement_name=measurement, patches=patches)\n result = hypotest(testpoi, w.data(p), p, return_expected_set=True)\n result = {'CLs_obs': result[0].tolist()[0], 'CLs_exp': result[-1].ravel().tolist()}\n if output_file is None:\n print(json.dumps(result, indent=4, sort_keys=True))\n else:\n with open(output_file, 'w+') as out_file:\n json.dump(result, out_file, indent=4, sort_keys=True)\n log.debug(\"Written to {0:s}\".format(output_file))\n", "path": "pyhf/commandline.py"}], "after_files": [{"content": "import logging\n\nimport click\nimport json\nimport os\n\nfrom .utils import hypotest\nfrom .pdf import Workspace\nfrom .version import __version__\n\nlogging.basicConfig()\nlog = logging.getLogger(__name__)\n\n# This is only needed for Python 2/3 compatibility\ndef ensure_dirs(path):\n try:\n os.makedirs(path, exist_ok=True)\n except TypeError:\n if not os.path.exists(path):\n os.makedirs(path)\n\n\[email protected](context_settings=dict(help_option_names=['-h', '--help']))\[email protected]_option(version=__version__)\ndef pyhf():\n pass\n\n\[email protected]()\[email protected]('entrypoint-xml', type=click.Path(exists=True))\[email protected](\n '--basedir',\n help='The base directory for the XML files to point relative to.',\n type=click.Path(exists=True),\n default=os.getcwd(),\n)\[email protected](\n '--output-file',\n help='The location of the output json file. 
If not specified, prints to screen.',\n default=None,\n)\[email protected]('--track-progress/--hide-progress', default=True)\ndef xml2json(entrypoint_xml, basedir, output_file, track_progress):\n \"\"\" Entrypoint XML: The top-level XML file for the PDF definition. \"\"\"\n try:\n import uproot\n\n assert uproot\n except ImportError:\n log.error(\n \"xml2json requires uproot, please install pyhf using the \"\n \"xmlio extra: pip install pyhf[xmlio] or install uproot \"\n \"manually: pip install uproot\"\n )\n from . import readxml\n\n spec = readxml.parse(entrypoint_xml, basedir, track_progress=track_progress)\n if output_file is None:\n print(json.dumps(spec, indent=4, sort_keys=True))\n else:\n with open(output_file, 'w+') as out_file:\n json.dump(spec, out_file, indent=4, sort_keys=True)\n log.debug(\"Written to {0:s}\".format(output_file))\n\n\[email protected]()\[email protected]('workspace', default='-')\[email protected]('--output-dir', type=click.Path(exists=True), default='.')\[email protected]('--specroot', default='config')\[email protected]('--dataroot', default='data')\[email protected]('--resultprefix', default='FitConfig')\ndef json2xml(workspace, output_dir, specroot, dataroot, resultprefix):\n try:\n import uproot\n\n assert uproot\n except ImportError:\n log.error(\n \"json2xml requires uproot, please install pyhf using the \"\n \"xmlio extra: pip install pyhf[xmlio] or install uproot \"\n \"manually: pip install uproot\"\n )\n from . import writexml\n\n ensure_dirs(output_dir)\n with click.open_file(workspace, 'r') as specstream:\n d = json.load(specstream)\n ensure_dirs(os.path.join(output_dir, specroot))\n ensure_dirs(os.path.join(output_dir, dataroot))\n with click.open_file(\n os.path.join(output_dir, '{0:s}.xml'.format(resultprefix)), 'w'\n ) as outstream:\n outstream.write(\n writexml.writexml(\n d,\n os.path.join(output_dir, specroot),\n os.path.join(output_dir, dataroot),\n resultprefix,\n ).decode('utf-8')\n )\n\n\[email protected]()\[email protected]('workspace', default='-')\[email protected](\n '--output-file',\n help='The location of the output json file. If not specified, prints to screen.',\n default=None,\n)\[email protected]('--measurement', default=None)\[email protected]('-p', '--patch', multiple=True)\[email protected]('--testpoi', default=1.0)\ndef cls(workspace, output_file, measurement, patch, testpoi):\n with click.open_file(workspace, 'r') as specstream:\n wspec = json.load(specstream)\n\n w = Workspace(wspec)\n\n patches = [json.loads(click.open_file(pfile, 'r').read()) for pfile in patch]\n p = w.model(measurement_name=measurement, patches=patches)\n result = hypotest(testpoi, w.data(p), p, return_expected_set=True)\n result = {'CLs_obs': result[0].tolist()[0], 'CLs_exp': result[-1].ravel().tolist()}\n if output_file is None:\n print(json.dumps(result, indent=4, sort_keys=True))\n else:\n with open(output_file, 'w+') as out_file:\n json.dump(result, out_file, indent=4, sort_keys=True)\n log.debug(\"Written to {0:s}\".format(output_file))\n", "path": "pyhf/commandline.py"}]} | 1,734 | 148 |
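The behavioural point of the pyhf patch above is that `from . import writexml` (and with it uproot) no longer runs at module import time, so plain `pyhf --help` works without the xmlio extra. Below is a generic, self-contained sketch of that deferred-import pattern with click; the group and command names are placeholders, not pyhf's actual CLI.

```python
import click


@click.group()
def cli():
    """Top-level CLI; `--help` works even when optional extras are missing."""


@cli.command()
def export():
    # Deferred imports: evaluated only when this subcommand runs, mirroring
    # how the patch moves the writexml import inside json2xml.
    try:
        import uproot  # stands in for the optional heavy dependency
    except ImportError as exc:
        raise click.ClickException(f"export needs an optional extra: {exc}")
    click.echo(f"uproot {uproot.__version__} is available")


if __name__ == "__main__":
    cli()
```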
gh_patches_debug_1273 | rasdani/github-patches | git_diff | ivy-llc__ivy-17162 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
is_integer
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/attribute.py`
Content:
```
1 # global
2 import ivy
3 from ivy.functional.frontends.paddle.func_wrapper import (
4 to_ivy_arrays_and_back,
5 )
6
7
8 @to_ivy_arrays_and_back
9 def is_complex(x):
10 return ivy.is_complex_dtype(x)
11
12
13 @to_ivy_arrays_and_back
14 def is_floating_point(x):
15 return ivy.is_float_dtype(x)
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/attribute.py b/ivy/functional/frontends/paddle/tensor/attribute.py
--- a/ivy/functional/frontends/paddle/tensor/attribute.py
+++ b/ivy/functional/frontends/paddle/tensor/attribute.py
@@ -10,6 +10,11 @@
return ivy.is_complex_dtype(x)
+@to_ivy_arrays_and_back
+def is_integer(x):
+ return ivy.is_int_dtype(x)
+
+
@to_ivy_arrays_and_back
def is_floating_point(x):
return ivy.is_float_dtype(x)
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/attribute.py b/ivy/functional/frontends/paddle/tensor/attribute.py\n--- a/ivy/functional/frontends/paddle/tensor/attribute.py\n+++ b/ivy/functional/frontends/paddle/tensor/attribute.py\n@@ -10,6 +10,11 @@\n return ivy.is_complex_dtype(x)\n \n \n+@to_ivy_arrays_and_back\n+def is_integer(x):\n+ return ivy.is_int_dtype(x)\n+\n+\n @to_ivy_arrays_and_back\n def is_floating_point(x):\n return ivy.is_float_dtype(x)\n", "issue": "is_integer\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@to_ivy_arrays_and_back\ndef is_complex(x):\n return ivy.is_complex_dtype(x)\n\n\n@to_ivy_arrays_and_back\ndef is_floating_point(x):\n return ivy.is_float_dtype(x)\n", "path": "ivy/functional/frontends/paddle/tensor/attribute.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@to_ivy_arrays_and_back\ndef is_complex(x):\n return ivy.is_complex_dtype(x)\n\n\n@to_ivy_arrays_and_back\ndef is_integer(x):\n return ivy.is_int_dtype(x)\n\n\n@to_ivy_arrays_and_back\ndef is_floating_point(x):\n return ivy.is_float_dtype(x)\n", "path": "ivy/functional/frontends/paddle/tensor/attribute.py"}]} | 370 | 138 |
gh_patches_debug_28376 | rasdani/github-patches | git_diff | iterative__dvc-4075 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement `--no-exec` option for `import-url` command
`dvc import-url` creates new `.dvc` file, just as `dvc run`. Sometimes files which would be imported are already present locally and it's quite inconvenient that they should be downloaded again in order to create a pipeline step.
Because of that it would be great to add `--no-exec` option: we create pipeline step, then use `dvc commit` to update its md5 with already downloaded file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/repo/imp_url.py`
Content:
```
1 import os
2
3 from dvc.repo.scm_context import scm_context
4 from dvc.utils import relpath, resolve_output, resolve_paths
5 from dvc.utils.fs import path_isin
6
7 from ..exceptions import OutputDuplicationError
8 from . import locked
9
10
11 @locked
12 @scm_context
13 def imp_url(self, url, out=None, fname=None, erepo=None, frozen=True):
14 from dvc.dvcfile import Dvcfile
15 from dvc.stage import Stage, create_stage
16
17 out = resolve_output(url, out)
18 path, wdir, out = resolve_paths(self, out)
19
20 # NOTE: when user is importing something from within their own repository
21 if (
22 erepo is None
23 and os.path.exists(url)
24 and path_isin(os.path.abspath(url), self.root_dir)
25 ):
26 url = relpath(url, wdir)
27
28 stage = create_stage(
29 Stage,
30 self,
31 fname or path,
32 wdir=wdir,
33 deps=[url],
34 outs=[out],
35 erepo=erepo,
36 )
37
38 if stage is None:
39 return None
40
41 dvcfile = Dvcfile(self, stage.path)
42 dvcfile.remove()
43
44 try:
45 self.check_modified_graph([stage])
46 except OutputDuplicationError as exc:
47 raise OutputDuplicationError(exc.output, set(exc.stages) - {stage})
48
49 stage.run()
50
51 stage.frozen = frozen
52
53 dvcfile.dump(stage)
54
55 return stage
56
```
Path: `dvc/command/imp_url.py`
Content:
```
1 import argparse
2 import logging
3
4 from dvc.command import completion
5 from dvc.command.base import CmdBase, append_doc_link
6 from dvc.exceptions import DvcException
7
8 logger = logging.getLogger(__name__)
9
10
11 class CmdImportUrl(CmdBase):
12 def run(self):
13 try:
14 self.repo.imp_url(
15 self.args.url, out=self.args.out, fname=self.args.file
16 )
17 except DvcException:
18 logger.exception(
19 "failed to import {}. You could also try downloading "
20 "it manually, and adding it with `dvc add`.".format(
21 self.args.url
22 )
23 )
24 return 1
25 return 0
26
27
28 def add_parser(subparsers, parent_parser):
29 IMPORT_HELP = (
30 "Download or copy file from URL and take it under DVC control."
31 )
32
33 import_parser = subparsers.add_parser(
34 "import-url",
35 parents=[parent_parser],
36 description=append_doc_link(IMPORT_HELP, "import-url"),
37 help=IMPORT_HELP,
38 formatter_class=argparse.RawTextHelpFormatter,
39 )
40 import_parser.add_argument(
41 "url",
42 help="Supported urls:\n"
43 "/path/to/file\n"
44 "/path/to/directory\n"
45 "C:\\\\path\\to\\file\n"
46 "C:\\\\path\\to\\directory\n"
47 "https://example.com/path/to/file\n"
48 "s3://bucket/path/to/file\n"
49 "s3://bucket/path/to/directory\n"
50 "gs://bucket/path/to/file\n"
51 "gs://bucket/path/to/directory\n"
52 "hdfs://example.com/path/to/file\n"
53 "ssh://example.com:/path/to/file\n"
54 "ssh://example.com:/path/to/directory\n"
55 "remote://myremote/path/to/file (see `dvc remote`)",
56 )
57 import_parser.add_argument(
58 "out",
59 nargs="?",
60 help="Destination path to put files to.",
61 choices=completion.Optional.DIR,
62 )
63 import_parser.add_argument(
64 "--file",
65 help="Specify name of the DVC-file this command will generate.",
66 metavar="<filename>",
67 choices=completion.Optional.DIR,
68 )
69 import_parser.set_defaults(func=CmdImportUrl)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/command/imp_url.py b/dvc/command/imp_url.py
--- a/dvc/command/imp_url.py
+++ b/dvc/command/imp_url.py
@@ -12,7 +12,10 @@
def run(self):
try:
self.repo.imp_url(
- self.args.url, out=self.args.out, fname=self.args.file
+ self.args.url,
+ out=self.args.out,
+ fname=self.args.file,
+ no_exec=self.args.no_exec,
)
except DvcException:
logger.exception(
@@ -66,4 +69,10 @@
metavar="<filename>",
choices=completion.Optional.DIR,
)
+ import_parser.add_argument(
+ "--no-exec",
+ action="store_true",
+ default=False,
+ help="Only create stage file without actually download it.",
+ )
import_parser.set_defaults(func=CmdImportUrl)
diff --git a/dvc/repo/imp_url.py b/dvc/repo/imp_url.py
--- a/dvc/repo/imp_url.py
+++ b/dvc/repo/imp_url.py
@@ -10,7 +10,9 @@
@locked
@scm_context
-def imp_url(self, url, out=None, fname=None, erepo=None, frozen=True):
+def imp_url(
+ self, url, out=None, fname=None, erepo=None, frozen=True, no_exec=False
+):
from dvc.dvcfile import Dvcfile
from dvc.stage import Stage, create_stage
@@ -46,7 +48,10 @@
except OutputDuplicationError as exc:
raise OutputDuplicationError(exc.output, set(exc.stages) - {stage})
- stage.run()
+ if no_exec:
+ stage.ignore_outs()
+ else:
+ stage.run()
stage.frozen = frozen
| {"golden_diff": "diff --git a/dvc/command/imp_url.py b/dvc/command/imp_url.py\n--- a/dvc/command/imp_url.py\n+++ b/dvc/command/imp_url.py\n@@ -12,7 +12,10 @@\n def run(self):\n try:\n self.repo.imp_url(\n- self.args.url, out=self.args.out, fname=self.args.file\n+ self.args.url,\n+ out=self.args.out,\n+ fname=self.args.file,\n+ no_exec=self.args.no_exec,\n )\n except DvcException:\n logger.exception(\n@@ -66,4 +69,10 @@\n metavar=\"<filename>\",\n choices=completion.Optional.DIR,\n )\n+ import_parser.add_argument(\n+ \"--no-exec\",\n+ action=\"store_true\",\n+ default=False,\n+ help=\"Only create stage file without actually download it.\",\n+ )\n import_parser.set_defaults(func=CmdImportUrl)\ndiff --git a/dvc/repo/imp_url.py b/dvc/repo/imp_url.py\n--- a/dvc/repo/imp_url.py\n+++ b/dvc/repo/imp_url.py\n@@ -10,7 +10,9 @@\n \n @locked\n @scm_context\n-def imp_url(self, url, out=None, fname=None, erepo=None, frozen=True):\n+def imp_url(\n+ self, url, out=None, fname=None, erepo=None, frozen=True, no_exec=False\n+):\n from dvc.dvcfile import Dvcfile\n from dvc.stage import Stage, create_stage\n \n@@ -46,7 +48,10 @@\n except OutputDuplicationError as exc:\n raise OutputDuplicationError(exc.output, set(exc.stages) - {stage})\n \n- stage.run()\n+ if no_exec:\n+ stage.ignore_outs()\n+ else:\n+ stage.run()\n \n stage.frozen = frozen\n", "issue": "Implement `--no-exec` option for `import-url` command\n`dvc import-url` creates new `.dvc` file, just as `dvc run`. Sometimes files which would be imported are already present locally and it's quite inconvenient that they should be downloaded again in order to create a pipeline step.\r\n\r\nBecause of that it would be great to add `--no-exec` option: we create pipeline step, then use `dvc commit` to update its md5 with already downloaded file.\n", "before_files": [{"content": "import os\n\nfrom dvc.repo.scm_context import scm_context\nfrom dvc.utils import relpath, resolve_output, resolve_paths\nfrom dvc.utils.fs import path_isin\n\nfrom ..exceptions import OutputDuplicationError\nfrom . import locked\n\n\n@locked\n@scm_context\ndef imp_url(self, url, out=None, fname=None, erepo=None, frozen=True):\n from dvc.dvcfile import Dvcfile\n from dvc.stage import Stage, create_stage\n\n out = resolve_output(url, out)\n path, wdir, out = resolve_paths(self, out)\n\n # NOTE: when user is importing something from within their own repository\n if (\n erepo is None\n and os.path.exists(url)\n and path_isin(os.path.abspath(url), self.root_dir)\n ):\n url = relpath(url, wdir)\n\n stage = create_stage(\n Stage,\n self,\n fname or path,\n wdir=wdir,\n deps=[url],\n outs=[out],\n erepo=erepo,\n )\n\n if stage is None:\n return None\n\n dvcfile = Dvcfile(self, stage.path)\n dvcfile.remove()\n\n try:\n self.check_modified_graph([stage])\n except OutputDuplicationError as exc:\n raise OutputDuplicationError(exc.output, set(exc.stages) - {stage})\n\n stage.run()\n\n stage.frozen = frozen\n\n dvcfile.dump(stage)\n\n return stage\n", "path": "dvc/repo/imp_url.py"}, {"content": "import argparse\nimport logging\n\nfrom dvc.command import completion\nfrom dvc.command.base import CmdBase, append_doc_link\nfrom dvc.exceptions import DvcException\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdImportUrl(CmdBase):\n def run(self):\n try:\n self.repo.imp_url(\n self.args.url, out=self.args.out, fname=self.args.file\n )\n except DvcException:\n logger.exception(\n \"failed to import {}. 
You could also try downloading \"\n \"it manually, and adding it with `dvc add`.\".format(\n self.args.url\n )\n )\n return 1\n return 0\n\n\ndef add_parser(subparsers, parent_parser):\n IMPORT_HELP = (\n \"Download or copy file from URL and take it under DVC control.\"\n )\n\n import_parser = subparsers.add_parser(\n \"import-url\",\n parents=[parent_parser],\n description=append_doc_link(IMPORT_HELP, \"import-url\"),\n help=IMPORT_HELP,\n formatter_class=argparse.RawTextHelpFormatter,\n )\n import_parser.add_argument(\n \"url\",\n help=\"Supported urls:\\n\"\n \"/path/to/file\\n\"\n \"/path/to/directory\\n\"\n \"C:\\\\\\\\path\\\\to\\\\file\\n\"\n \"C:\\\\\\\\path\\\\to\\\\directory\\n\"\n \"https://example.com/path/to/file\\n\"\n \"s3://bucket/path/to/file\\n\"\n \"s3://bucket/path/to/directory\\n\"\n \"gs://bucket/path/to/file\\n\"\n \"gs://bucket/path/to/directory\\n\"\n \"hdfs://example.com/path/to/file\\n\"\n \"ssh://example.com:/path/to/file\\n\"\n \"ssh://example.com:/path/to/directory\\n\"\n \"remote://myremote/path/to/file (see `dvc remote`)\",\n )\n import_parser.add_argument(\n \"out\",\n nargs=\"?\",\n help=\"Destination path to put files to.\",\n choices=completion.Optional.DIR,\n )\n import_parser.add_argument(\n \"--file\",\n help=\"Specify name of the DVC-file this command will generate.\",\n metavar=\"<filename>\",\n choices=completion.Optional.DIR,\n )\n import_parser.set_defaults(func=CmdImportUrl)\n", "path": "dvc/command/imp_url.py"}], "after_files": [{"content": "import os\n\nfrom dvc.repo.scm_context import scm_context\nfrom dvc.utils import relpath, resolve_output, resolve_paths\nfrom dvc.utils.fs import path_isin\n\nfrom ..exceptions import OutputDuplicationError\nfrom . import locked\n\n\n@locked\n@scm_context\ndef imp_url(\n self, url, out=None, fname=None, erepo=None, frozen=True, no_exec=False\n):\n from dvc.dvcfile import Dvcfile\n from dvc.stage import Stage, create_stage\n\n out = resolve_output(url, out)\n path, wdir, out = resolve_paths(self, out)\n\n # NOTE: when user is importing something from within their own repository\n if (\n erepo is None\n and os.path.exists(url)\n and path_isin(os.path.abspath(url), self.root_dir)\n ):\n url = relpath(url, wdir)\n\n stage = create_stage(\n Stage,\n self,\n fname or path,\n wdir=wdir,\n deps=[url],\n outs=[out],\n erepo=erepo,\n )\n\n if stage is None:\n return None\n\n dvcfile = Dvcfile(self, stage.path)\n dvcfile.remove()\n\n try:\n self.check_modified_graph([stage])\n except OutputDuplicationError as exc:\n raise OutputDuplicationError(exc.output, set(exc.stages) - {stage})\n\n if no_exec:\n stage.ignore_outs()\n else:\n stage.run()\n\n stage.frozen = frozen\n\n dvcfile.dump(stage)\n\n return stage\n", "path": "dvc/repo/imp_url.py"}, {"content": "import argparse\nimport logging\n\nfrom dvc.command import completion\nfrom dvc.command.base import CmdBase, append_doc_link\nfrom dvc.exceptions import DvcException\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdImportUrl(CmdBase):\n def run(self):\n try:\n self.repo.imp_url(\n self.args.url,\n out=self.args.out,\n fname=self.args.file,\n no_exec=self.args.no_exec,\n )\n except DvcException:\n logger.exception(\n \"failed to import {}. 
You could also try downloading \"\n \"it manually, and adding it with `dvc add`.\".format(\n self.args.url\n )\n )\n return 1\n return 0\n\n\ndef add_parser(subparsers, parent_parser):\n IMPORT_HELP = (\n \"Download or copy file from URL and take it under DVC control.\"\n )\n\n import_parser = subparsers.add_parser(\n \"import-url\",\n parents=[parent_parser],\n description=append_doc_link(IMPORT_HELP, \"import-url\"),\n help=IMPORT_HELP,\n formatter_class=argparse.RawTextHelpFormatter,\n )\n import_parser.add_argument(\n \"url\",\n help=\"Supported urls:\\n\"\n \"/path/to/file\\n\"\n \"/path/to/directory\\n\"\n \"C:\\\\\\\\path\\\\to\\\\file\\n\"\n \"C:\\\\\\\\path\\\\to\\\\directory\\n\"\n \"https://example.com/path/to/file\\n\"\n \"s3://bucket/path/to/file\\n\"\n \"s3://bucket/path/to/directory\\n\"\n \"gs://bucket/path/to/file\\n\"\n \"gs://bucket/path/to/directory\\n\"\n \"hdfs://example.com/path/to/file\\n\"\n \"ssh://example.com:/path/to/file\\n\"\n \"ssh://example.com:/path/to/directory\\n\"\n \"remote://myremote/path/to/file (see `dvc remote`)\",\n )\n import_parser.add_argument(\n \"out\",\n nargs=\"?\",\n help=\"Destination path to put files to.\",\n choices=completion.Optional.DIR,\n )\n import_parser.add_argument(\n \"--file\",\n help=\"Specify name of the DVC-file this command will generate.\",\n metavar=\"<filename>\",\n choices=completion.Optional.DIR,\n )\n import_parser.add_argument(\n \"--no-exec\",\n action=\"store_true\",\n default=False,\n help=\"Only create stage file without actually download it.\",\n )\n import_parser.set_defaults(func=CmdImportUrl)\n", "path": "dvc/command/imp_url.py"}]} | 1,428 | 418 |
gh_patches_debug_50783 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-3282 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in NL API, 'Sentiment' object has no attribute 'score'
Upon executing the following code both on GCE and locally, I get a `'score' not found` error locally, while it works on GCE.
```
def analyze_text(text):
language_client = language.Client()
document = language_client.document_from_text(text)
annotations = document.annotate_text(include_sentiment=True)
print(annotations.sentiment.score)
print (annotations.sentiment.magnitude)
```
The only difference I can find is that the gcloud version on GCE differs from the local one:
**GCE gcloud**
gcloud --version
Google Cloud SDK 145.0.0
alpha 2017.02.21
app-engine-python 1.9.50
beta 2017.02.21
bq 2.0.24
bq-nix 2.0.24
core 2017.02.21
core-nix 2017.02.21
gcloud
gcloud-deps 2017.02.21
gcloud-deps-linux-x86_64 2017.02.21
gsutil 4.22
gsutil-nix 4.22
**Local gcloud**
gcloud --version
Google Cloud SDK 148.0.1
bq 2.0.24
bq-nix 2.0.24
core 2017.03.24
core-nix 2016.11.07
gcloud
gcloud-deps 2017.03.17
gcloud-deps-darwin-x86_64 2017.02.21
gsutil 4.23
gsutil-nix 4.19
Debugging locally, the response from the NL API contains a `polarity` property rather than `score`.
<img width="761" alt="image" src="https://cloud.githubusercontent.com/assets/1888535/24331449/62cbdbee-1252-11e7-9d29-8699df078fcf.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `language/google/cloud/language/sentiment.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Definition for Google Cloud Natural Language API sentiment.
16
17 Sentiment is the response to an ``analyzeSentiment`` request.
18 """
19
20
21 class Sentiment(object):
22 """A Google Cloud Natural Language API sentiment object.
23
24 .. _Sentiment message: https://cloud.google.com/natural-language/\
25 reference/rest/v1/Sentiment
26 .. _Sentiment basics: https://cloud.google.com/natural-language/\
27 docs/basics#sentiment-analysis-values
28
29 See `Sentiment message`_ and `Sentiment basics`_.
30
31 :type score: float
32 :param score: Score of the sentiment in the ``[-1.0, 1.0]`` range.
33 Larger numbers represent more positive sentiments.
34
35 :type magnitude: float
36 :param magnitude: A non-negative number in the ``[0, +inf)`` range, which
37 represents the absolute magnitude of sentiment
38 regardless of score (positive or negative).
39 """
40
41 def __init__(self, score, magnitude):
42 self.score = score
43 self.magnitude = magnitude
44
45 @classmethod
46 def from_api_repr(cls, payload):
47 """Convert a Sentiment from the JSON API into a :class:`Sentiment`.
48
49 :param payload: dict
50 :type payload: The value from the backend.
51
52 :rtype: :class:`Sentiment`
53 :returns: The sentiment parsed from the API representation.
54 """
55 score = payload['score']
56 magnitude = payload['magnitude']
57 return cls(score, magnitude)
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/language/google/cloud/language/sentiment.py b/language/google/cloud/language/sentiment.py
--- a/language/google/cloud/language/sentiment.py
+++ b/language/google/cloud/language/sentiment.py
@@ -52,6 +52,6 @@
:rtype: :class:`Sentiment`
:returns: The sentiment parsed from the API representation.
"""
- score = payload['score']
+ score = payload.get('score', None)
magnitude = payload['magnitude']
return cls(score, magnitude)
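As a quick illustration of the change above, here is a minimal, self-contained sketch (a simplified stand-in for the library class, not the library code itself): with `payload.get('score', None)` the parser no longer raises `KeyError` when an older API response carries `polarity` instead of `score`.

```python
# Simplified stand-in class for illustration only (assumption: not the real library code).
class Sentiment:
    def __init__(self, score, magnitude):
        self.score = score
        self.magnitude = magnitude

    @classmethod
    def from_api_repr(cls, payload):
        # .get() tolerates responses that lack 'score' (e.g. older 'polarity' payloads),
        # leaving score as None instead of raising KeyError.
        return cls(payload.get('score', None), payload['magnitude'])

print(Sentiment.from_api_repr({'score': 0.8, 'magnitude': 0.9}).score)   # 0.8
print(Sentiment.from_api_repr({'polarity': 1, 'magnitude': 0.9}).score)  # None
```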
| {"golden_diff": "diff --git a/language/google/cloud/language/sentiment.py b/language/google/cloud/language/sentiment.py\n--- a/language/google/cloud/language/sentiment.py\n+++ b/language/google/cloud/language/sentiment.py\n@@ -52,6 +52,6 @@\n :rtype: :class:`Sentiment`\n :returns: The sentiment parsed from the API representation.\n \"\"\"\n- score = payload['score']\n+ score = payload.get('score', None)\n magnitude = payload['magnitude']\n return cls(score, magnitude)\n", "issue": "Error in NL API, 'Sentiment' object has no attribute 'score'\nUpon executing the following code on GCE & locally I'm getting 'score' not found error locally while it works on GCE.\r\n\r\n```\r\ndef analyze_text(text):\r\n language_client = language.Client()\r\n document = language_client.document_from_text(text)\r\n annotations = document.annotate_text(include_sentiment=True)\r\n print(annotations.sentiment.score)\r\n print (annotations.sentiment.magnitude)\r\n```\r\n\r\nOnly difference I can find is on GCE, the gcloud version is different than the local\r\n**GCE gcloud**\r\ngcloud --version\r\nGoogle Cloud SDK 145.0.0\r\nalpha 2017.02.21\r\napp-engine-python 1.9.50\r\nbeta 2017.02.21\r\nbq 2.0.24\r\nbq-nix 2.0.24\r\ncore 2017.02.21\r\ncore-nix 2017.02.21\r\ngcloud \r\ngcloud-deps 2017.02.21\r\ngcloud-deps-linux-x86_64 2017.02.21\r\ngsutil 4.22\r\ngsutil-nix 4.22\r\n\r\n**Local gcloud**\r\ngcloud --version\r\nGoogle Cloud SDK 148.0.1\r\nbq 2.0.24\r\nbq-nix 2.0.24\r\ncore 2017.03.24\r\ncore-nix 2016.11.07\r\ngcloud \r\ngcloud-deps 2017.03.17\r\ngcloud-deps-darwin-x86_64 2017.02.21\r\ngsutil 4.23\r\ngsutil-nix 4.19\r\n\r\nDebugging locally, the response from NL API consist of polarity property and not score.\r\n\r\n<img width=\"761\" alt=\"image\" src=\"https://cloud.githubusercontent.com/assets/1888535/24331449/62cbdbee-1252-11e7-9d29-8699df078fcf.png\">\r\n\r\n\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Definition for Google Cloud Natural Language API sentiment.\n\nSentiment is the response to an ``analyzeSentiment`` request.\n\"\"\"\n\n\nclass Sentiment(object):\n \"\"\"A Google Cloud Natural Language API sentiment object.\n\n .. _Sentiment message: https://cloud.google.com/natural-language/\\\n reference/rest/v1/Sentiment\n .. 
_Sentiment basics: https://cloud.google.com/natural-language/\\\n docs/basics#sentiment-analysis-values\n\n See `Sentiment message`_ and `Sentiment basics`_.\n\n :type score: float\n :param score: Score of the sentiment in the ``[-1.0, 1.0]`` range.\n Larger numbers represent more positive sentiments.\n\n :type magnitude: float\n :param magnitude: A non-negative number in the ``[0, +inf)`` range, which\n represents the absolute magnitude of sentiment\n regardless of score (positive or negative).\n \"\"\"\n\n def __init__(self, score, magnitude):\n self.score = score\n self.magnitude = magnitude\n\n @classmethod\n def from_api_repr(cls, payload):\n \"\"\"Convert a Sentiment from the JSON API into a :class:`Sentiment`.\n\n :param payload: dict\n :type payload: The value from the backend.\n\n :rtype: :class:`Sentiment`\n :returns: The sentiment parsed from the API representation.\n \"\"\"\n score = payload['score']\n magnitude = payload['magnitude']\n return cls(score, magnitude)\n", "path": "language/google/cloud/language/sentiment.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Definition for Google Cloud Natural Language API sentiment.\n\nSentiment is the response to an ``analyzeSentiment`` request.\n\"\"\"\n\n\nclass Sentiment(object):\n \"\"\"A Google Cloud Natural Language API sentiment object.\n\n .. _Sentiment message: https://cloud.google.com/natural-language/\\\n reference/rest/v1/Sentiment\n .. _Sentiment basics: https://cloud.google.com/natural-language/\\\n docs/basics#sentiment-analysis-values\n\n See `Sentiment message`_ and `Sentiment basics`_.\n\n :type score: float\n :param score: Score of the sentiment in the ``[-1.0, 1.0]`` range.\n Larger numbers represent more positive sentiments.\n\n :type magnitude: float\n :param magnitude: A non-negative number in the ``[0, +inf)`` range, which\n represents the absolute magnitude of sentiment\n regardless of score (positive or negative).\n \"\"\"\n\n def __init__(self, score, magnitude):\n self.score = score\n self.magnitude = magnitude\n\n @classmethod\n def from_api_repr(cls, payload):\n \"\"\"Convert a Sentiment from the JSON API into a :class:`Sentiment`.\n\n :param payload: dict\n :type payload: The value from the backend.\n\n :rtype: :class:`Sentiment`\n :returns: The sentiment parsed from the API representation.\n \"\"\"\n score = payload.get('score', None)\n magnitude = payload['magnitude']\n return cls(score, magnitude)\n", "path": "language/google/cloud/language/sentiment.py"}]} | 1,335 | 113 |
gh_patches_debug_7619 | rasdani/github-patches | git_diff | pyca__cryptography-2682 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Any interest for an override parameter for _MAX_CLOCK_SKEW?
It seems like a narrow assumption to assume that the actors communicating via Fernet-encrypted messages will always be able to have somewhat synchronized clocks. In our case, we have no way to amend the clocks of the systems upon which our software runs and they often differ by at least minutes, if not hours or days.
Our current solution is to just stomp on `cryptography.fernet._MAX_CLOCK_SKEW` and set it to some `LARGE_VALUE`. Not ideal, but it gets the job done for now.
I can think of several options to improve the situation here, but most of them involve changes to this library's code.
1) Add an optional parameter to Fernet's `__init__()` function or `decrypt()` function to disable the clock skew check altogether.
2) Add an optional parameter to Fernet's `__init__()` function or `decrypt()` that defaults to `_MAX_CLOCK_SKEW` and use that value for the clock skew check.
3) Add ability to set an offset on the Fernet object (or pass it in to `decrypt()`) that would be applied to the extracted timestamp prior to being checked against `_MAX_CLOCK_SKEW`. It would be the library user's responsibility for determining the appropriate offset value for each external actor.
4) I could subclass Fernet and implement some appropriate behavior in my new class (would essentially involve copying the whole `decrypt()` function into my codebase just to tweak this specific portion's behavior)
Would any of options 1-3 be worth pursuing to the point of a pull-request? I can probably make a pretty good effort on each of them, but I don't want to spend the time if the maintainers don't think any are worth attempting.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/fernet.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import base64
8 import binascii
9 import os
10 import struct
11 import time
12
13 import six
14
15 from cryptography.exceptions import InvalidSignature
16 from cryptography.hazmat.backends import default_backend
17 from cryptography.hazmat.primitives import hashes, padding
18 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
19 from cryptography.hazmat.primitives.hmac import HMAC
20
21
22 class InvalidToken(Exception):
23 pass
24
25
26 _MAX_CLOCK_SKEW = 60
27
28
29 class Fernet(object):
30 def __init__(self, key, backend=None):
31 if backend is None:
32 backend = default_backend()
33
34 key = base64.urlsafe_b64decode(key)
35 if len(key) != 32:
36 raise ValueError(
37 "Fernet key must be 32 url-safe base64-encoded bytes."
38 )
39
40 self._signing_key = key[:16]
41 self._encryption_key = key[16:]
42 self._backend = backend
43
44 @classmethod
45 def generate_key(cls):
46 return base64.urlsafe_b64encode(os.urandom(32))
47
48 def encrypt(self, data):
49 current_time = int(time.time())
50 iv = os.urandom(16)
51 return self._encrypt_from_parts(data, current_time, iv)
52
53 def _encrypt_from_parts(self, data, current_time, iv):
54 if not isinstance(data, bytes):
55 raise TypeError("data must be bytes.")
56
57 padder = padding.PKCS7(algorithms.AES.block_size).padder()
58 padded_data = padder.update(data) + padder.finalize()
59 encryptor = Cipher(
60 algorithms.AES(self._encryption_key), modes.CBC(iv), self._backend
61 ).encryptor()
62 ciphertext = encryptor.update(padded_data) + encryptor.finalize()
63
64 basic_parts = (
65 b"\x80" + struct.pack(">Q", current_time) + iv + ciphertext
66 )
67
68 h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)
69 h.update(basic_parts)
70 hmac = h.finalize()
71 return base64.urlsafe_b64encode(basic_parts + hmac)
72
73 def decrypt(self, token, ttl=None):
74 if not isinstance(token, bytes):
75 raise TypeError("token must be bytes.")
76
77 current_time = int(time.time())
78
79 try:
80 data = base64.urlsafe_b64decode(token)
81 except (TypeError, binascii.Error):
82 raise InvalidToken
83
84 if not data or six.indexbytes(data, 0) != 0x80:
85 raise InvalidToken
86
87 try:
88 timestamp, = struct.unpack(">Q", data[1:9])
89 except struct.error:
90 raise InvalidToken
91 if ttl is not None:
92 if timestamp + ttl < current_time:
93 raise InvalidToken
94 if current_time + _MAX_CLOCK_SKEW < timestamp:
95 raise InvalidToken
96 h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)
97 h.update(data[:-32])
98 try:
99 h.verify(data[-32:])
100 except InvalidSignature:
101 raise InvalidToken
102
103 iv = data[9:25]
104 ciphertext = data[25:-32]
105 decryptor = Cipher(
106 algorithms.AES(self._encryption_key), modes.CBC(iv), self._backend
107 ).decryptor()
108 plaintext_padded = decryptor.update(ciphertext)
109 try:
110 plaintext_padded += decryptor.finalize()
111 except ValueError:
112 raise InvalidToken
113 unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()
114
115 unpadded = unpadder.update(plaintext_padded)
116 try:
117 unpadded += unpadder.finalize()
118 except ValueError:
119 raise InvalidToken
120 return unpadded
121
122
123 class MultiFernet(object):
124 def __init__(self, fernets):
125 fernets = list(fernets)
126 if not fernets:
127 raise ValueError(
128 "MultiFernet requires at least one Fernet instance"
129 )
130 self._fernets = fernets
131
132 def encrypt(self, msg):
133 return self._fernets[0].encrypt(msg)
134
135 def decrypt(self, msg, ttl=None):
136 for f in self._fernets:
137 try:
138 return f.decrypt(msg, ttl)
139 except InvalidToken:
140 pass
141 raise InvalidToken
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/fernet.py b/src/cryptography/fernet.py
--- a/src/cryptography/fernet.py
+++ b/src/cryptography/fernet.py
@@ -91,8 +91,10 @@
if ttl is not None:
if timestamp + ttl < current_time:
raise InvalidToken
- if current_time + _MAX_CLOCK_SKEW < timestamp:
- raise InvalidToken
+
+ if current_time + _MAX_CLOCK_SKEW < timestamp:
+ raise InvalidToken
+
h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)
h.update(data[:-32])
try:
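To make the effect of moving the skew check concrete, here is a small standalone sketch (illustrative only; it is not the Fernet implementation): once the check is un-nested from the `ttl` branch, a token stamped too far in the future is rejected even when no `ttl` is supplied.

```python
# Standalone illustration of the patched control flow (assumption: simplified logic, not the library).
_MAX_CLOCK_SKEW = 60

def check_token_age(timestamp, current_time, ttl=None):
    if ttl is not None and timestamp + ttl < current_time:
        raise ValueError("token expired")
    # After the patch this runs unconditionally, not only when ttl is given.
    if current_time + _MAX_CLOCK_SKEW < timestamp:
        raise ValueError("token timestamp too far in the future")

for ts, now in [(1000, 2000), (5000, 1000)]:
    try:
        check_token_age(ts, now)
        print(ts, "accepted")
    except ValueError as exc:
        print(ts, "rejected:", exc)
```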
| {"golden_diff": "diff --git a/src/cryptography/fernet.py b/src/cryptography/fernet.py\n--- a/src/cryptography/fernet.py\n+++ b/src/cryptography/fernet.py\n@@ -91,8 +91,10 @@\n if ttl is not None:\n if timestamp + ttl < current_time:\n raise InvalidToken\n- if current_time + _MAX_CLOCK_SKEW < timestamp:\n- raise InvalidToken\n+\n+ if current_time + _MAX_CLOCK_SKEW < timestamp:\n+ raise InvalidToken\n+\n h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)\n h.update(data[:-32])\n try:\n", "issue": "Any interest for an override parameter for _MAX_CLOCK_SKEW?\nIt seems like a narrow assumption to assume that the actors communicating via Fernet-encrypted messages will always be able to have somewhat synchronized clocks. In our case, we have no way to amend the clocks of the systems upon which our software runs and they often differ by at least minutes, if not hours or days.\n\nOur current solution is to just stomp on `cryptography.fernet._MAX_CLOCK_SKEW` and set it to some `LARGE_VALUE`. Not ideal, but it gets the job done for now.\n\nI can think of several options to improve the situation here, but most of them involve changes to this library's code.\n\n1) Add an optional parameter to Fernet's `__init__()` function or `decrypt()` function to disable the clock skew check altogether.\n2) Add an optional parameter to Fernet's `__init__()` function or `decrypt()` that defaults to `_MAX_CLOCK_SKEW` and use that value for the clock skew check.\n3) Add ability to set an offset on the Fernet object (or pass it in to `decrypt()`) that would be applied to the extracted timestamp prior to being checked against `_MAX_CLOCK_SKEW`. It would be the library user's responsibility for determining the appropriate offset value for each external actor.\n4) I could subclass Fernet and implement some appropriate behavior in my new class (would essentially involve copying the whole `decrypt()` function into my codebase just to tweak this specific portion's behavior)\n\nWould any of options 1-3 be worth pursuing to the point of a pull-request? I can probably make a pretty good effort on each of them, but I don't want to spend the time if the maintainers don't think any are worth attempting.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport base64\nimport binascii\nimport os\nimport struct\nimport time\n\nimport six\n\nfrom cryptography.exceptions import InvalidSignature\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives import hashes, padding\nfrom cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes\nfrom cryptography.hazmat.primitives.hmac import HMAC\n\n\nclass InvalidToken(Exception):\n pass\n\n\n_MAX_CLOCK_SKEW = 60\n\n\nclass Fernet(object):\n def __init__(self, key, backend=None):\n if backend is None:\n backend = default_backend()\n\n key = base64.urlsafe_b64decode(key)\n if len(key) != 32:\n raise ValueError(\n \"Fernet key must be 32 url-safe base64-encoded bytes.\"\n )\n\n self._signing_key = key[:16]\n self._encryption_key = key[16:]\n self._backend = backend\n\n @classmethod\n def generate_key(cls):\n return base64.urlsafe_b64encode(os.urandom(32))\n\n def encrypt(self, data):\n current_time = int(time.time())\n iv = os.urandom(16)\n return self._encrypt_from_parts(data, current_time, iv)\n\n def _encrypt_from_parts(self, data, current_time, iv):\n if not isinstance(data, bytes):\n raise TypeError(\"data must be bytes.\")\n\n padder = padding.PKCS7(algorithms.AES.block_size).padder()\n padded_data = padder.update(data) + padder.finalize()\n encryptor = Cipher(\n algorithms.AES(self._encryption_key), modes.CBC(iv), self._backend\n ).encryptor()\n ciphertext = encryptor.update(padded_data) + encryptor.finalize()\n\n basic_parts = (\n b\"\\x80\" + struct.pack(\">Q\", current_time) + iv + ciphertext\n )\n\n h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)\n h.update(basic_parts)\n hmac = h.finalize()\n return base64.urlsafe_b64encode(basic_parts + hmac)\n\n def decrypt(self, token, ttl=None):\n if not isinstance(token, bytes):\n raise TypeError(\"token must be bytes.\")\n\n current_time = int(time.time())\n\n try:\n data = base64.urlsafe_b64decode(token)\n except (TypeError, binascii.Error):\n raise InvalidToken\n\n if not data or six.indexbytes(data, 0) != 0x80:\n raise InvalidToken\n\n try:\n timestamp, = struct.unpack(\">Q\", data[1:9])\n except struct.error:\n raise InvalidToken\n if ttl is not None:\n if timestamp + ttl < current_time:\n raise InvalidToken\n if current_time + _MAX_CLOCK_SKEW < timestamp:\n raise InvalidToken\n h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)\n h.update(data[:-32])\n try:\n h.verify(data[-32:])\n except InvalidSignature:\n raise InvalidToken\n\n iv = data[9:25]\n ciphertext = data[25:-32]\n decryptor = Cipher(\n algorithms.AES(self._encryption_key), modes.CBC(iv), self._backend\n ).decryptor()\n plaintext_padded = decryptor.update(ciphertext)\n try:\n plaintext_padded += decryptor.finalize()\n except ValueError:\n raise InvalidToken\n unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()\n\n unpadded = unpadder.update(plaintext_padded)\n try:\n unpadded += unpadder.finalize()\n except ValueError:\n raise InvalidToken\n return unpadded\n\n\nclass MultiFernet(object):\n def __init__(self, fernets):\n fernets = list(fernets)\n if not fernets:\n raise ValueError(\n \"MultiFernet requires at least one Fernet instance\"\n )\n self._fernets = fernets\n\n def encrypt(self, msg):\n return self._fernets[0].encrypt(msg)\n\n def decrypt(self, msg, ttl=None):\n for f in self._fernets:\n try:\n return f.decrypt(msg, ttl)\n except 
InvalidToken:\n pass\n raise InvalidToken\n", "path": "src/cryptography/fernet.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport base64\nimport binascii\nimport os\nimport struct\nimport time\n\nimport six\n\nfrom cryptography.exceptions import InvalidSignature\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives import hashes, padding\nfrom cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes\nfrom cryptography.hazmat.primitives.hmac import HMAC\n\n\nclass InvalidToken(Exception):\n pass\n\n\n_MAX_CLOCK_SKEW = 60\n\n\nclass Fernet(object):\n def __init__(self, key, backend=None):\n if backend is None:\n backend = default_backend()\n\n key = base64.urlsafe_b64decode(key)\n if len(key) != 32:\n raise ValueError(\n \"Fernet key must be 32 url-safe base64-encoded bytes.\"\n )\n\n self._signing_key = key[:16]\n self._encryption_key = key[16:]\n self._backend = backend\n\n @classmethod\n def generate_key(cls):\n return base64.urlsafe_b64encode(os.urandom(32))\n\n def encrypt(self, data):\n current_time = int(time.time())\n iv = os.urandom(16)\n return self._encrypt_from_parts(data, current_time, iv)\n\n def _encrypt_from_parts(self, data, current_time, iv):\n if not isinstance(data, bytes):\n raise TypeError(\"data must be bytes.\")\n\n padder = padding.PKCS7(algorithms.AES.block_size).padder()\n padded_data = padder.update(data) + padder.finalize()\n encryptor = Cipher(\n algorithms.AES(self._encryption_key), modes.CBC(iv), self._backend\n ).encryptor()\n ciphertext = encryptor.update(padded_data) + encryptor.finalize()\n\n basic_parts = (\n b\"\\x80\" + struct.pack(\">Q\", current_time) + iv + ciphertext\n )\n\n h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)\n h.update(basic_parts)\n hmac = h.finalize()\n return base64.urlsafe_b64encode(basic_parts + hmac)\n\n def decrypt(self, token, ttl=None):\n if not isinstance(token, bytes):\n raise TypeError(\"token must be bytes.\")\n\n current_time = int(time.time())\n\n try:\n data = base64.urlsafe_b64decode(token)\n except (TypeError, binascii.Error):\n raise InvalidToken\n\n if not data or six.indexbytes(data, 0) != 0x80:\n raise InvalidToken\n\n try:\n timestamp, = struct.unpack(\">Q\", data[1:9])\n except struct.error:\n raise InvalidToken\n if ttl is not None:\n if timestamp + ttl < current_time:\n raise InvalidToken\n\n if current_time + _MAX_CLOCK_SKEW < timestamp:\n raise InvalidToken\n\n h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)\n h.update(data[:-32])\n try:\n h.verify(data[-32:])\n except InvalidSignature:\n raise InvalidToken\n\n iv = data[9:25]\n ciphertext = data[25:-32]\n decryptor = Cipher(\n algorithms.AES(self._encryption_key), modes.CBC(iv), self._backend\n ).decryptor()\n plaintext_padded = decryptor.update(ciphertext)\n try:\n plaintext_padded += decryptor.finalize()\n except ValueError:\n raise InvalidToken\n unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()\n\n unpadded = unpadder.update(plaintext_padded)\n try:\n unpadded += unpadder.finalize()\n except ValueError:\n raise InvalidToken\n return unpadded\n\n\nclass MultiFernet(object):\n def __init__(self, fernets):\n fernets = list(fernets)\n if not fernets:\n raise ValueError(\n \"MultiFernet requires at least one 
Fernet instance\"\n )\n self._fernets = fernets\n\n def encrypt(self, msg):\n return self._fernets[0].encrypt(msg)\n\n def decrypt(self, msg, ttl=None):\n for f in self._fernets:\n try:\n return f.decrypt(msg, ttl)\n except InvalidToken:\n pass\n raise InvalidToken\n", "path": "src/cryptography/fernet.py"}]} | 1,968 | 151 |
gh_patches_debug_35745 | rasdani/github-patches | git_diff | uclapi__uclapi-977 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Webhooks
They haven't been delivered for a _long_ time now, so we finally need to dig deep and see what is going wrong. The room data is definitely up to date, however.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/uclapi/roombookings/management/commands/trigger_webhooks.py`
Content:
```
1 from django.core.management.base import BaseCommand
2 from roombookings.models import Lock, BookingA, BookingB
3 from roombookings.helpers import _serialize_bookings
4 from dashboard.models import Webhook, WebhookTriggerHistory
5 from datetime import datetime
6 from deepdiff import DeepDiff
7 import grequests
8 from django.utils import timezone
9
10
11 class Command(BaseCommand):
12
13 help = 'Diff roombooking result sets and notify relevant webhooks'
14
15 def handle(self, *args, **options):
16 self.stdout.write("Triggering webhooks")
17
18 # currently locked table is the old one, more recent one is not locked
19 lock = Lock.objects.all()[0] # there is only ever one lock
20
21 if lock.bookingA:
22 old_booking_table = BookingA
23 new_booking_table = BookingB
24 else:
25 old_booking_table = BookingB
26 new_booking_table = BookingA
27
28 now = datetime.now()
29
30 old_bookings = _serialize_bookings(
31 old_booking_table.objects.filter(
32 startdatetime__gt=now
33 )
34 )
35 new_bookings = _serialize_bookings(
36 new_booking_table.objects.filter(
37 startdatetime__gt=now
38 )
39 )
40
41 ddiff = DeepDiff(old_bookings, new_bookings, ignore_order=True)
42
43 webhooks = Webhook.objects.all()
44 # assumption: list of webhooks will be longer than ddiff
45
46 num_bookings_added = 0
47 num_bookings_removed = 0
48 if "iterable_item_added" in ddiff:
49 num_bookings_added = len(
50 ddiff["iterable_item_added"].values()
51 )
52
53 if "iterable_item_removed" in ddiff:
54 num_bookings_removed = len(
55 ddiff["iterable_item_removed"].values()
56 )
57
58 self.stdout.write(
59 "{} bookings added\n{} bookings removed.".format(
60 num_bookings_added,
61 num_bookings_removed
62 )
63 )
64
65 def webhook_map(webhook):
66 def webhook_filter(booking):
67 return (
68 (
69 webhook.siteid == '' or
70 booking["siteid"] == webhook.siteid
71 ) and
72 (
73 webhook.roomid == '' or
74 booking["roomid"] == webhook.roomid
75 ) and
76 (
77 webhook.contact == '' or
78 # mimick SQL 'like'
79 webhook.contact in str(booking["contact"])
80 )
81 )
82 output = {
83 "webhook_in_db": webhook,
84 "url": webhook.url,
85 "verification_secret": webhook.verification_secret
86 }
87 if "iterable_item_added" in ddiff:
88 bookings_added = list(filter(
89 webhook_filter, ddiff["iterable_item_added"].values()
90 ))
91 if bookings_added != []:
92 output["bookings_added"] = bookings_added
93 if "iterable_item_removed" in ddiff:
94 bookings_removed = list(filter(
95 webhook_filter, ddiff["iterable_item_removed"].values()
96 ))
97 if bookings_removed != []:
98 output["bookings_removed"] = bookings_removed
99
100 return output
101
102 webhooks_to_enact = list(map(webhook_map, webhooks))
103
104 unsent_requests = []
105 for idx, webhook in enumerate(webhooks_to_enact):
106 payload = {
107 "service": "roombookings",
108 "name": "bookings_changed",
109 "verification_secret": webhook["verification_secret"],
110 "content": {}
111 }
112
113 if "bookings_added" in webhook:
114 payload["content"]["bookings_added"] = (
115 webhook["bookings_added"]
116 )
117 if "bookings_removed" in webhook:
118 payload["content"]["bookings_removed"] = (
119 webhook["bookings_removed"]
120 )
121
122 webhooks_to_enact[idx]["payload"] = payload
123
124 if payload["content"] != {}:
125 unsent_requests.append(
126 grequests.post(
127 webhook["url"], json=payload, headers={
128 "User-Agent": "uclapi-bot/1"
129 }
130 )
131 )
132 self.stdout.write(
133 "Triggering {} webhooks.".format(len(unsent_requests))
134 )
135 grequests.map(unsent_requests)
136
137 for webhook in webhooks_to_enact:
138 if webhook["payload"]["content"] != {}:
139 webhook_in_db = webhook["webhook_in_db"]
140 webhook_in_db.last_fired = timezone.now()
141 webhook_in_db.save()
142
143 new_webhook_history_entry = WebhookTriggerHistory(
144 webhook=webhook_in_db,
145 payload=webhook["payload"]
146 )
147 new_webhook_history_entry.save()
148
149 self.stdout.write("Webhooks triggered.")
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/uclapi/roombookings/management/commands/trigger_webhooks.py b/backend/uclapi/roombookings/management/commands/trigger_webhooks.py
--- a/backend/uclapi/roombookings/management/commands/trigger_webhooks.py
+++ b/backend/uclapi/roombookings/management/commands/trigger_webhooks.py
@@ -4,16 +4,25 @@
from dashboard.models import Webhook, WebhookTriggerHistory
from datetime import datetime
from deepdiff import DeepDiff
-import grequests
from django.utils import timezone
+from requests_futures.sessions import FuturesSession
class Command(BaseCommand):
help = 'Diff roombooking result sets and notify relevant webhooks'
+ def add_arguments(self, parser):
+ parser.add_argument(
+ '--debug',
+ action='store_true',
+ dest='debug',
+ help='Print webhook responses',
+ )
+
def handle(self, *args, **options):
self.stdout.write("Triggering webhooks")
+ session = FuturesSession()
# currently locked table is the old one, more recent one is not locked
lock = Lock.objects.all()[0] # there is only ever one lock
@@ -121,10 +130,11 @@
webhooks_to_enact[idx]["payload"] = payload
- if payload["content"] != {}:
+ if payload["content"] != {} and webhook["url"] != "":
unsent_requests.append(
- grequests.post(
- webhook["url"], json=payload, headers={
+ session.post(
+ webhook["url"], json=payload,
+ headers={
"User-Agent": "uclapi-bot/1"
}
)
@@ -132,7 +142,11 @@
self.stdout.write(
"Triggering {} webhooks.".format(len(unsent_requests))
)
- grequests.map(unsent_requests)
+ if("debug" in options):
+ for i in unsent_requests:
+ self.stdout.write(
+ 'response status {0}'.format(i.result().status_code)
+ )
for webhook in webhooks_to_enact:
if webhook["payload"]["content"] != {}:
| {"golden_diff": "diff --git a/backend/uclapi/roombookings/management/commands/trigger_webhooks.py b/backend/uclapi/roombookings/management/commands/trigger_webhooks.py\n--- a/backend/uclapi/roombookings/management/commands/trigger_webhooks.py\n+++ b/backend/uclapi/roombookings/management/commands/trigger_webhooks.py\n@@ -4,16 +4,25 @@\n from dashboard.models import Webhook, WebhookTriggerHistory\n from datetime import datetime\n from deepdiff import DeepDiff\n-import grequests\n from django.utils import timezone\n+from requests_futures.sessions import FuturesSession\n \n \n class Command(BaseCommand):\n \n help = 'Diff roombooking result sets and notify relevant webhooks'\n \n+ def add_arguments(self, parser):\n+ parser.add_argument(\n+ '--debug',\n+ action='store_true',\n+ dest='debug',\n+ help='Print webhook responses',\n+ )\n+\n def handle(self, *args, **options):\n self.stdout.write(\"Triggering webhooks\")\n+ session = FuturesSession()\n \n # currently locked table is the old one, more recent one is not locked\n lock = Lock.objects.all()[0] # there is only ever one lock\n@@ -121,10 +130,11 @@\n \n webhooks_to_enact[idx][\"payload\"] = payload\n \n- if payload[\"content\"] != {}:\n+ if payload[\"content\"] != {} and webhook[\"url\"] != \"\":\n unsent_requests.append(\n- grequests.post(\n- webhook[\"url\"], json=payload, headers={\n+ session.post(\n+ webhook[\"url\"], json=payload,\n+ headers={\n \"User-Agent\": \"uclapi-bot/1\"\n }\n )\n@@ -132,7 +142,11 @@\n self.stdout.write(\n \"Triggering {} webhooks.\".format(len(unsent_requests))\n )\n- grequests.map(unsent_requests)\n+ if(\"debug\" in options):\n+ for i in unsent_requests:\n+ self.stdout.write(\n+ 'response status {0}'.format(i.result().status_code)\n+ )\n \n for webhook in webhooks_to_enact:\n if webhook[\"payload\"][\"content\"] != {}:\n", "issue": "Webhooks\nThey haven't been delivered for a _long_ time now, so we finally need to dig deep and see what is going wrong. 
The room data is definitely up to date, however.\n", "before_files": [{"content": "from django.core.management.base import BaseCommand\nfrom roombookings.models import Lock, BookingA, BookingB\nfrom roombookings.helpers import _serialize_bookings\nfrom dashboard.models import Webhook, WebhookTriggerHistory\nfrom datetime import datetime\nfrom deepdiff import DeepDiff\nimport grequests\nfrom django.utils import timezone\n\n\nclass Command(BaseCommand):\n\n help = 'Diff roombooking result sets and notify relevant webhooks'\n\n def handle(self, *args, **options):\n self.stdout.write(\"Triggering webhooks\")\n\n # currently locked table is the old one, more recent one is not locked\n lock = Lock.objects.all()[0] # there is only ever one lock\n\n if lock.bookingA:\n old_booking_table = BookingA\n new_booking_table = BookingB\n else:\n old_booking_table = BookingB\n new_booking_table = BookingA\n\n now = datetime.now()\n\n old_bookings = _serialize_bookings(\n old_booking_table.objects.filter(\n startdatetime__gt=now\n )\n )\n new_bookings = _serialize_bookings(\n new_booking_table.objects.filter(\n startdatetime__gt=now\n )\n )\n\n ddiff = DeepDiff(old_bookings, new_bookings, ignore_order=True)\n\n webhooks = Webhook.objects.all()\n # assumption: list of webhooks will be longer than ddiff\n\n num_bookings_added = 0\n num_bookings_removed = 0\n if \"iterable_item_added\" in ddiff:\n num_bookings_added = len(\n ddiff[\"iterable_item_added\"].values()\n )\n\n if \"iterable_item_removed\" in ddiff:\n num_bookings_removed = len(\n ddiff[\"iterable_item_removed\"].values()\n )\n\n self.stdout.write(\n \"{} bookings added\\n{} bookings removed.\".format(\n num_bookings_added,\n num_bookings_removed\n )\n )\n\n def webhook_map(webhook):\n def webhook_filter(booking):\n return (\n (\n webhook.siteid == '' or\n booking[\"siteid\"] == webhook.siteid\n ) and\n (\n webhook.roomid == '' or\n booking[\"roomid\"] == webhook.roomid\n ) and\n (\n webhook.contact == '' or\n # mimick SQL 'like'\n webhook.contact in str(booking[\"contact\"])\n )\n )\n output = {\n \"webhook_in_db\": webhook,\n \"url\": webhook.url,\n \"verification_secret\": webhook.verification_secret\n }\n if \"iterable_item_added\" in ddiff:\n bookings_added = list(filter(\n webhook_filter, ddiff[\"iterable_item_added\"].values()\n ))\n if bookings_added != []:\n output[\"bookings_added\"] = bookings_added\n if \"iterable_item_removed\" in ddiff:\n bookings_removed = list(filter(\n webhook_filter, ddiff[\"iterable_item_removed\"].values()\n ))\n if bookings_removed != []:\n output[\"bookings_removed\"] = bookings_removed\n\n return output\n\n webhooks_to_enact = list(map(webhook_map, webhooks))\n\n unsent_requests = []\n for idx, webhook in enumerate(webhooks_to_enact):\n payload = {\n \"service\": \"roombookings\",\n \"name\": \"bookings_changed\",\n \"verification_secret\": webhook[\"verification_secret\"],\n \"content\": {}\n }\n\n if \"bookings_added\" in webhook:\n payload[\"content\"][\"bookings_added\"] = (\n webhook[\"bookings_added\"]\n )\n if \"bookings_removed\" in webhook:\n payload[\"content\"][\"bookings_removed\"] = (\n webhook[\"bookings_removed\"]\n )\n\n webhooks_to_enact[idx][\"payload\"] = payload\n\n if payload[\"content\"] != {}:\n unsent_requests.append(\n grequests.post(\n webhook[\"url\"], json=payload, headers={\n \"User-Agent\": \"uclapi-bot/1\"\n }\n )\n )\n self.stdout.write(\n \"Triggering {} webhooks.\".format(len(unsent_requests))\n )\n grequests.map(unsent_requests)\n\n for webhook in webhooks_to_enact:\n if 
webhook[\"payload\"][\"content\"] != {}:\n webhook_in_db = webhook[\"webhook_in_db\"]\n webhook_in_db.last_fired = timezone.now()\n webhook_in_db.save()\n\n new_webhook_history_entry = WebhookTriggerHistory(\n webhook=webhook_in_db,\n payload=webhook[\"payload\"]\n )\n new_webhook_history_entry.save()\n\n self.stdout.write(\"Webhooks triggered.\")\n", "path": "backend/uclapi/roombookings/management/commands/trigger_webhooks.py"}], "after_files": [{"content": "from django.core.management.base import BaseCommand\nfrom roombookings.models import Lock, BookingA, BookingB\nfrom roombookings.helpers import _serialize_bookings\nfrom dashboard.models import Webhook, WebhookTriggerHistory\nfrom datetime import datetime\nfrom deepdiff import DeepDiff\nfrom django.utils import timezone\nfrom requests_futures.sessions import FuturesSession\n\n\nclass Command(BaseCommand):\n\n help = 'Diff roombooking result sets and notify relevant webhooks'\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--debug',\n action='store_true',\n dest='debug',\n help='Print webhook responses',\n )\n\n def handle(self, *args, **options):\n self.stdout.write(\"Triggering webhooks\")\n session = FuturesSession()\n\n # currently locked table is the old one, more recent one is not locked\n lock = Lock.objects.all()[0] # there is only ever one lock\n\n if lock.bookingA:\n old_booking_table = BookingA\n new_booking_table = BookingB\n else:\n old_booking_table = BookingB\n new_booking_table = BookingA\n\n now = datetime.now()\n\n old_bookings = _serialize_bookings(\n old_booking_table.objects.filter(\n startdatetime__gt=now\n )\n )\n new_bookings = _serialize_bookings(\n new_booking_table.objects.filter(\n startdatetime__gt=now\n )\n )\n\n ddiff = DeepDiff(old_bookings, new_bookings, ignore_order=True)\n\n webhooks = Webhook.objects.all()\n # assumption: list of webhooks will be longer than ddiff\n\n num_bookings_added = 0\n num_bookings_removed = 0\n if \"iterable_item_added\" in ddiff:\n num_bookings_added = len(\n ddiff[\"iterable_item_added\"].values()\n )\n\n if \"iterable_item_removed\" in ddiff:\n num_bookings_removed = len(\n ddiff[\"iterable_item_removed\"].values()\n )\n\n self.stdout.write(\n \"{} bookings added\\n{} bookings removed.\".format(\n num_bookings_added,\n num_bookings_removed\n )\n )\n\n def webhook_map(webhook):\n def webhook_filter(booking):\n return (\n (\n webhook.siteid == '' or\n booking[\"siteid\"] == webhook.siteid\n ) and\n (\n webhook.roomid == '' or\n booking[\"roomid\"] == webhook.roomid\n ) and\n (\n webhook.contact == '' or\n # mimick SQL 'like'\n webhook.contact in str(booking[\"contact\"])\n )\n )\n output = {\n \"webhook_in_db\": webhook,\n \"url\": webhook.url,\n \"verification_secret\": webhook.verification_secret\n }\n if \"iterable_item_added\" in ddiff:\n bookings_added = list(filter(\n webhook_filter, ddiff[\"iterable_item_added\"].values()\n ))\n if bookings_added != []:\n output[\"bookings_added\"] = bookings_added\n if \"iterable_item_removed\" in ddiff:\n bookings_removed = list(filter(\n webhook_filter, ddiff[\"iterable_item_removed\"].values()\n ))\n if bookings_removed != []:\n output[\"bookings_removed\"] = bookings_removed\n\n return output\n\n webhooks_to_enact = list(map(webhook_map, webhooks))\n\n unsent_requests = []\n for idx, webhook in enumerate(webhooks_to_enact):\n payload = {\n \"service\": \"roombookings\",\n \"name\": \"bookings_changed\",\n \"verification_secret\": webhook[\"verification_secret\"],\n \"content\": {}\n }\n\n if \"bookings_added\" in 
webhook:\n payload[\"content\"][\"bookings_added\"] = (\n webhook[\"bookings_added\"]\n )\n if \"bookings_removed\" in webhook:\n payload[\"content\"][\"bookings_removed\"] = (\n webhook[\"bookings_removed\"]\n )\n\n webhooks_to_enact[idx][\"payload\"] = payload\n\n if payload[\"content\"] != {} and webhook[\"url\"] != \"\":\n unsent_requests.append(\n session.post(\n webhook[\"url\"], json=payload,\n headers={\n \"User-Agent\": \"uclapi-bot/1\"\n }\n )\n )\n self.stdout.write(\n \"Triggering {} webhooks.\".format(len(unsent_requests))\n )\n if(\"debug\" in options):\n for i in unsent_requests:\n self.stdout.write(\n 'response status {0}'.format(i.result().status_code)\n )\n\n for webhook in webhooks_to_enact:\n if webhook[\"payload\"][\"content\"] != {}:\n webhook_in_db = webhook[\"webhook_in_db\"]\n webhook_in_db.last_fired = timezone.now()\n webhook_in_db.save()\n\n new_webhook_history_entry = WebhookTriggerHistory(\n webhook=webhook_in_db,\n payload=webhook[\"payload\"]\n )\n new_webhook_history_entry.save()\n\n self.stdout.write(\"Webhooks triggered.\")\n", "path": "backend/uclapi/roombookings/management/commands/trigger_webhooks.py"}]} | 1,648 | 495 |
gh_patches_debug_34858 | rasdani/github-patches | git_diff | ansible-collections__amazon.aws-430 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aws_service_ip_ranges support for ipv6
### Summary
We are using the amazon.aws collection and noticed that the `aws_service_ip_ranges` lookup does not have an option to return IPv6 ranges.
### Issue Type
Feature Idea
### Component Name
`{ lookup('aws_service_ip_ranges', region='us-west-2', service='ROUTE53_HEALTHCHECKS', ipv6_prefix=True, wantlist=True) }`
Should return a list of IPv6 addresses that correspond to the Route53 health check.
### Pull Request
#430
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```
vars:
rt53_ranges: "{{ lookup('aws_service_ip_ranges', region='us-west-2', service='ROUTE53_HEALTHCHECKS', ipv6_prefix=True, wantlist=True) }}"
tasks:
- name: "use list return option and iterate as a loop"
debug: msg="{% for x in rt53_ranges %}{{ x }} {% endfor %}"
# ###"2600:1f14:7ff:f800::/56,2600:1f14:fff:f800::/56"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/lookup/aws_service_ip_ranges.py`
Content:
```
1 # (c) 2016 James Turner <[email protected]>
2 # (c) 2017 Ansible Project
3 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
4 from __future__ import (absolute_import, division, print_function)
5 __metaclass__ = type
6
7 DOCUMENTATION = '''
8 lookup: aws_service_ip_ranges
9 author:
10 - James Turner <[email protected]>
11 requirements:
12 - must have public internet connectivity
13 short_description: Look up the IP ranges for services provided in AWS such as EC2 and S3.
14 description:
15 - AWS publishes IP ranges used on the public internet by EC2, S3, CloudFront, CodeBuild, Route53, and Route53 Health Checking.
16 - This module produces a list of all the ranges (by default) or can narrow down the list to the specified region or service.
17 options:
18 service:
19 description: 'The service to filter ranges by. Options: EC2, S3, CLOUDFRONT, CODEbUILD, ROUTE53, ROUTE53_HEALTHCHECKS'
20 region:
21 description: 'The AWS region to narrow the ranges to. Examples: us-east-1, eu-west-2, ap-southeast-1'
22 '''
23
24 EXAMPLES = """
25 vars:
26 ec2_ranges: "{{ lookup('aws_service_ip_ranges', region='ap-southeast-2', service='EC2', wantlist=True) }}"
27 tasks:
28
29 - name: "use list return option and iterate as a loop"
30 debug: msg="{% for cidr in ec2_ranges %}{{ cidr }} {% endfor %}"
31 # "52.62.0.0/15 52.64.0.0/17 52.64.128.0/17 52.65.0.0/16 52.95.241.0/24 52.95.255.16/28 54.66.0.0/16 "
32
33 - name: "Pull S3 IP ranges, and print the default return style"
34 debug: msg="{{ lookup('aws_service_ip_ranges', region='us-east-1', service='S3') }}"
35 # "52.92.16.0/20,52.216.0.0/15,54.231.0.0/17"
36 """
37
38 RETURN = """
39 _raw:
40 description: comma-separated list of CIDR ranges
41 """
42
43
44 import json
45
46 from ansible.errors import AnsibleError
47 from ansible.module_utils.six.moves.urllib.error import HTTPError
48 from ansible.module_utils.six.moves.urllib.error import URLError
49 from ansible.module_utils._text import to_native
50 from ansible.module_utils.urls import ConnectionError
51 from ansible.module_utils.urls import open_url
52 from ansible.module_utils.urls import SSLValidationError
53 from ansible.plugins.lookup import LookupBase
54
55
56 class LookupModule(LookupBase):
57 def run(self, terms, variables, **kwargs):
58 try:
59 resp = open_url('https://ip-ranges.amazonaws.com/ip-ranges.json')
60 amazon_response = json.load(resp)['prefixes']
61 except getattr(json.decoder, 'JSONDecodeError', ValueError) as e:
62 # on Python 3+, json.decoder.JSONDecodeError is raised for bad
63 # JSON. On 2.x it's a ValueError
64 raise AnsibleError("Could not decode AWS IP ranges: %s" % to_native(e))
65 except HTTPError as e:
66 raise AnsibleError("Received HTTP error while pulling IP ranges: %s" % to_native(e))
67 except SSLValidationError as e:
68 raise AnsibleError("Error validating the server's certificate for: %s" % to_native(e))
69 except URLError as e:
70 raise AnsibleError("Failed look up IP range service: %s" % to_native(e))
71 except ConnectionError as e:
72 raise AnsibleError("Error connecting to IP range service: %s" % to_native(e))
73
74 if 'region' in kwargs:
75 region = kwargs['region']
76 amazon_response = (item for item in amazon_response if item['region'] == region)
77 if 'service' in kwargs:
78 service = str.upper(kwargs['service'])
79 amazon_response = (item for item in amazon_response if item['service'] == service)
80
81 return [item['ip_prefix'] for item in amazon_response]
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/lookup/aws_service_ip_ranges.py b/plugins/lookup/aws_service_ip_ranges.py
--- a/plugins/lookup/aws_service_ip_ranges.py
+++ b/plugins/lookup/aws_service_ip_ranges.py
@@ -19,6 +19,9 @@
description: 'The service to filter ranges by. Options: EC2, S3, CLOUDFRONT, CODEbUILD, ROUTE53, ROUTE53_HEALTHCHECKS'
region:
description: 'The AWS region to narrow the ranges to. Examples: us-east-1, eu-west-2, ap-southeast-1'
+ ipv6_prefixes:
+ description: 'When I(ipv6_prefixes=True) the lookup will return ipv6 addresses instead of ipv4 addresses'
+ version_added: 2.1.0
'''
EXAMPLES = """
@@ -40,7 +43,6 @@
description: comma-separated list of CIDR ranges
"""
-
import json
from ansible.errors import AnsibleError
@@ -55,9 +57,16 @@
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
+ if "ipv6_prefixes" in kwargs and kwargs["ipv6_prefixes"]:
+ prefixes_label = "ipv6_prefixes"
+ ip_prefix_label = "ipv6_prefix"
+ else:
+ prefixes_label = "prefixes"
+ ip_prefix_label = "ip_prefix"
+
try:
resp = open_url('https://ip-ranges.amazonaws.com/ip-ranges.json')
- amazon_response = json.load(resp)['prefixes']
+ amazon_response = json.load(resp)[prefixes_label]
except getattr(json.decoder, 'JSONDecodeError', ValueError) as e:
# on Python 3+, json.decoder.JSONDecodeError is raised for bad
# JSON. On 2.x it's a ValueError
@@ -77,5 +86,5 @@
if 'service' in kwargs:
service = str.upper(kwargs['service'])
amazon_response = (item for item in amazon_response if item['service'] == service)
-
- return [item['ip_prefix'] for item in amazon_response]
+ iprange = [item[ip_prefix_label] for item in amazon_response]
+ return iprange
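As a usage-level illustration of the new behaviour, here is a standalone sketch (not the Ansible plugin itself) that applies the same selection logic to AWS's published ranges file: when IPv6 is requested it reads the `ipv6_prefixes` block and the `ipv6_prefix` key instead of the IPv4 equivalents.

```python
# Standalone sketch of the lookup's selection logic; requires internet access to fetch ip-ranges.json.
import json
from urllib.request import urlopen

def service_ranges(region, service, ipv6=False):
    prefixes_label = "ipv6_prefixes" if ipv6 else "prefixes"
    ip_label = "ipv6_prefix" if ipv6 else "ip_prefix"
    data = json.load(urlopen("https://ip-ranges.amazonaws.com/ip-ranges.json"))
    return [p[ip_label] for p in data[prefixes_label]
            if p["region"] == region and p["service"] == service]

print(service_ranges("us-west-2", "ROUTE53_HEALTHCHECKS", ipv6=True))
```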
| {"golden_diff": "diff --git a/plugins/lookup/aws_service_ip_ranges.py b/plugins/lookup/aws_service_ip_ranges.py\n--- a/plugins/lookup/aws_service_ip_ranges.py\n+++ b/plugins/lookup/aws_service_ip_ranges.py\n@@ -19,6 +19,9 @@\n description: 'The service to filter ranges by. Options: EC2, S3, CLOUDFRONT, CODEbUILD, ROUTE53, ROUTE53_HEALTHCHECKS'\n region:\n description: 'The AWS region to narrow the ranges to. Examples: us-east-1, eu-west-2, ap-southeast-1'\n+ ipv6_prefixes:\n+ description: 'When I(ipv6_prefixes=True) the lookup will return ipv6 addresses instead of ipv4 addresses'\n+ version_added: 2.1.0\n '''\n \n EXAMPLES = \"\"\"\n@@ -40,7 +43,6 @@\n description: comma-separated list of CIDR ranges\n \"\"\"\n \n-\n import json\n \n from ansible.errors import AnsibleError\n@@ -55,9 +57,16 @@\n \n class LookupModule(LookupBase):\n def run(self, terms, variables, **kwargs):\n+ if \"ipv6_prefixes\" in kwargs and kwargs[\"ipv6_prefixes\"]:\n+ prefixes_label = \"ipv6_prefixes\"\n+ ip_prefix_label = \"ipv6_prefix\"\n+ else:\n+ prefixes_label = \"prefixes\"\n+ ip_prefix_label = \"ip_prefix\"\n+\n try:\n resp = open_url('https://ip-ranges.amazonaws.com/ip-ranges.json')\n- amazon_response = json.load(resp)['prefixes']\n+ amazon_response = json.load(resp)[prefixes_label]\n except getattr(json.decoder, 'JSONDecodeError', ValueError) as e:\n # on Python 3+, json.decoder.JSONDecodeError is raised for bad\n # JSON. On 2.x it's a ValueError\n@@ -77,5 +86,5 @@\n if 'service' in kwargs:\n service = str.upper(kwargs['service'])\n amazon_response = (item for item in amazon_response if item['service'] == service)\n-\n- return [item['ip_prefix'] for item in amazon_response]\n+ iprange = [item[ip_prefix_label] for item in amazon_response]\n+ return iprange\n", "issue": "aws_service_ip_ranges suppport for ipv6\n### Summary\r\n\r\nWe are using amazon.aws collection and we noticed that the aws_service_ip_ranges does not have an option to return IPv6 ranges.\r\n\r\n### Issue Type\r\n\r\nFeature Idea\r\n\r\n### Component Name\r\n\r\n`{ lookup('aws_service_ip_ranges', region='us-west-2', service='ROUTE53_HEALTHCHECKS', ipv6_prefix=True, wantlist=True) }`\r\nShould return a list of IPv6 addresses that correspond to the Route53 health check. 
\r\n\r\n\r\n### Pull Request\r\n#430\r\n\r\n### Additional Information\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```\r\nvars:\r\n rt53_ranges: \"{{ lookup('aws_service_ip_ranges', region='us-west-2', service='ROUTE53_HEALTHCHECKS', ipv6_prefix=True, wantlist=True) }}\"\r\ntasks:\r\n\r\n- name: \"use list return option and iterate as a loop\"\r\n debug: msg=\"{% for x in rt53_ranges %}{{ x }} {% endfor %}\"\r\n# ###\"2600:1f14:7ff:f800::/56,2600:1f14:fff:f800::/56\"\r\n```\r\n\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# (c) 2016 James Turner <[email protected]>\n# (c) 2017 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = '''\nlookup: aws_service_ip_ranges\nauthor:\n - James Turner <[email protected]>\nrequirements:\n - must have public internet connectivity\nshort_description: Look up the IP ranges for services provided in AWS such as EC2 and S3.\ndescription:\n - AWS publishes IP ranges used on the public internet by EC2, S3, CloudFront, CodeBuild, Route53, and Route53 Health Checking.\n - This module produces a list of all the ranges (by default) or can narrow down the list to the specified region or service.\noptions:\n service:\n description: 'The service to filter ranges by. Options: EC2, S3, CLOUDFRONT, CODEbUILD, ROUTE53, ROUTE53_HEALTHCHECKS'\n region:\n description: 'The AWS region to narrow the ranges to. Examples: us-east-1, eu-west-2, ap-southeast-1'\n'''\n\nEXAMPLES = \"\"\"\nvars:\n ec2_ranges: \"{{ lookup('aws_service_ip_ranges', region='ap-southeast-2', service='EC2', wantlist=True) }}\"\ntasks:\n\n- name: \"use list return option and iterate as a loop\"\n debug: msg=\"{% for cidr in ec2_ranges %}{{ cidr }} {% endfor %}\"\n# \"52.62.0.0/15 52.64.0.0/17 52.64.128.0/17 52.65.0.0/16 52.95.241.0/24 52.95.255.16/28 54.66.0.0/16 \"\n\n- name: \"Pull S3 IP ranges, and print the default return style\"\n debug: msg=\"{{ lookup('aws_service_ip_ranges', region='us-east-1', service='S3') }}\"\n# \"52.92.16.0/20,52.216.0.0/15,54.231.0.0/17\"\n\"\"\"\n\nRETURN = \"\"\"\n_raw:\n description: comma-separated list of CIDR ranges\n\"\"\"\n\n\nimport json\n\nfrom ansible.errors import AnsibleError\nfrom ansible.module_utils.six.moves.urllib.error import HTTPError\nfrom ansible.module_utils.six.moves.urllib.error import URLError\nfrom ansible.module_utils._text import to_native\nfrom ansible.module_utils.urls import ConnectionError\nfrom ansible.module_utils.urls import open_url\nfrom ansible.module_utils.urls import SSLValidationError\nfrom ansible.plugins.lookup import LookupBase\n\n\nclass LookupModule(LookupBase):\n def run(self, terms, variables, **kwargs):\n try:\n resp = open_url('https://ip-ranges.amazonaws.com/ip-ranges.json')\n amazon_response = json.load(resp)['prefixes']\n except getattr(json.decoder, 'JSONDecodeError', ValueError) as e:\n # on Python 3+, json.decoder.JSONDecodeError is raised for bad\n # JSON. 
On 2.x it's a ValueError\n raise AnsibleError(\"Could not decode AWS IP ranges: %s\" % to_native(e))\n except HTTPError as e:\n raise AnsibleError(\"Received HTTP error while pulling IP ranges: %s\" % to_native(e))\n except SSLValidationError as e:\n raise AnsibleError(\"Error validating the server's certificate for: %s\" % to_native(e))\n except URLError as e:\n raise AnsibleError(\"Failed look up IP range service: %s\" % to_native(e))\n except ConnectionError as e:\n raise AnsibleError(\"Error connecting to IP range service: %s\" % to_native(e))\n\n if 'region' in kwargs:\n region = kwargs['region']\n amazon_response = (item for item in amazon_response if item['region'] == region)\n if 'service' in kwargs:\n service = str.upper(kwargs['service'])\n amazon_response = (item for item in amazon_response if item['service'] == service)\n\n return [item['ip_prefix'] for item in amazon_response]\n", "path": "plugins/lookup/aws_service_ip_ranges.py"}], "after_files": [{"content": "# (c) 2016 James Turner <[email protected]>\n# (c) 2017 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = '''\nlookup: aws_service_ip_ranges\nauthor:\n - James Turner <[email protected]>\nrequirements:\n - must have public internet connectivity\nshort_description: Look up the IP ranges for services provided in AWS such as EC2 and S3.\ndescription:\n - AWS publishes IP ranges used on the public internet by EC2, S3, CloudFront, CodeBuild, Route53, and Route53 Health Checking.\n - This module produces a list of all the ranges (by default) or can narrow down the list to the specified region or service.\noptions:\n service:\n description: 'The service to filter ranges by. Options: EC2, S3, CLOUDFRONT, CODEbUILD, ROUTE53, ROUTE53_HEALTHCHECKS'\n region:\n description: 'The AWS region to narrow the ranges to. 
Examples: us-east-1, eu-west-2, ap-southeast-1'\n ipv6_prefixes:\n description: 'When I(ipv6_prefixes=True) the lookup will return ipv6 addresses instead of ipv4 addresses'\n version_added: 2.1.0\n'''\n\nEXAMPLES = \"\"\"\nvars:\n ec2_ranges: \"{{ lookup('aws_service_ip_ranges', region='ap-southeast-2', service='EC2', wantlist=True) }}\"\ntasks:\n\n- name: \"use list return option and iterate as a loop\"\n debug: msg=\"{% for cidr in ec2_ranges %}{{ cidr }} {% endfor %}\"\n# \"52.62.0.0/15 52.64.0.0/17 52.64.128.0/17 52.65.0.0/16 52.95.241.0/24 52.95.255.16/28 54.66.0.0/16 \"\n\n- name: \"Pull S3 IP ranges, and print the default return style\"\n debug: msg=\"{{ lookup('aws_service_ip_ranges', region='us-east-1', service='S3') }}\"\n# \"52.92.16.0/20,52.216.0.0/15,54.231.0.0/17\"\n\"\"\"\n\nRETURN = \"\"\"\n_raw:\n description: comma-separated list of CIDR ranges\n\"\"\"\n\nimport json\n\nfrom ansible.errors import AnsibleError\nfrom ansible.module_utils.six.moves.urllib.error import HTTPError\nfrom ansible.module_utils.six.moves.urllib.error import URLError\nfrom ansible.module_utils._text import to_native\nfrom ansible.module_utils.urls import ConnectionError\nfrom ansible.module_utils.urls import open_url\nfrom ansible.module_utils.urls import SSLValidationError\nfrom ansible.plugins.lookup import LookupBase\n\n\nclass LookupModule(LookupBase):\n def run(self, terms, variables, **kwargs):\n if \"ipv6_prefixes\" in kwargs and kwargs[\"ipv6_prefixes\"]:\n prefixes_label = \"ipv6_prefixes\"\n ip_prefix_label = \"ipv6_prefix\"\n else:\n prefixes_label = \"prefixes\"\n ip_prefix_label = \"ip_prefix\"\n\n try:\n resp = open_url('https://ip-ranges.amazonaws.com/ip-ranges.json')\n amazon_response = json.load(resp)[prefixes_label]\n except getattr(json.decoder, 'JSONDecodeError', ValueError) as e:\n # on Python 3+, json.decoder.JSONDecodeError is raised for bad\n # JSON. On 2.x it's a ValueError\n raise AnsibleError(\"Could not decode AWS IP ranges: %s\" % to_native(e))\n except HTTPError as e:\n raise AnsibleError(\"Received HTTP error while pulling IP ranges: %s\" % to_native(e))\n except SSLValidationError as e:\n raise AnsibleError(\"Error validating the server's certificate for: %s\" % to_native(e))\n except URLError as e:\n raise AnsibleError(\"Failed look up IP range service: %s\" % to_native(e))\n except ConnectionError as e:\n raise AnsibleError(\"Error connecting to IP range service: %s\" % to_native(e))\n\n if 'region' in kwargs:\n region = kwargs['region']\n amazon_response = (item for item in amazon_response if item['region'] == region)\n if 'service' in kwargs:\n service = str.upper(kwargs['service'])\n amazon_response = (item for item in amazon_response if item['service'] == service)\n iprange = [item[ip_prefix_label] for item in amazon_response]\n return iprange\n", "path": "plugins/lookup/aws_service_ip_ranges.py"}]} | 1,679 | 499 |
gh_patches_debug_57019 | rasdani/github-patches | git_diff | fidals__shopelectro-415 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove a menu item
Remove the mp3 speakers from the menu. We won't be carrying them anymore.
I went into the admin panel and disabled the category. I hope that's right :)
http://prntscr.com/k553lt
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/templatetags/se_extras.py`
Content:
```
1 import datetime
2 import math
3
4 from django import template
5 from django.conf import settings
6 from django.contrib.humanize.templatetags.humanize import intcomma
7 from django.template.defaultfilters import floatformat
8 from django.urls import reverse
9
10 from images.models import ImageMixin
11 from pages.models import Page
12
13 from shopelectro import config
14 from shopelectro.models import Category
15
16 register = template.Library()
17
18
19 # TODO - move it in catalog. Inspired by lp_electric
20 @register.simple_tag
21 def roots():
22 return sorted(
23 Category.objects
24 .select_related('page')
25 .get_cached_trees(), # https://goo.gl/rFKiku
26 key=lambda x: x.page.position
27 )
28
29
30 @register.simple_tag
31 def footer_links():
32 return config.FOOTER_LINKS
33
34
35 # TODO - move in pages. Inspired by LP electric
36 @register.filter
37 def class_name(model):
38 """Return Model name."""
39 return type(model).__name__
40
41
42 @register.simple_tag
43 def time_to_call():
44 def is_weekend(t):
45 return t.weekday() > 4
46
47 def is_friday(t):
48 return t.weekday() == 4
49
50 def not_yet_opened(t):
51 current_time = (t.hour, t.minute)
52 open_time = (10, 00)
53 return current_time < open_time and not is_weekend(t)
54
55 def is_closed(t):
56 current_time = (t.hour, t.minute)
57 closing_time = (16, 30) if is_friday(t) else (17, 30)
58 return current_time > closing_time
59
60 when_we_call = {
61 lambda now: is_weekend(now) or (is_friday(now) and is_closed(now)): 'В понедельник в 10:30',
62 lambda now: not_yet_opened(now): 'Сегодня в 10:30',
63 lambda now: is_closed(now) and not (is_friday(now) or is_weekend(now)): 'Завтра в 10:30',
64 lambda _: True: 'В течение 30 минут'
65 }
66
67 time_ = datetime.datetime.now()
68 call = ' позвонит менеджер и обсудит детали доставки.'
69 for condition, time in when_we_call.items():
70 if condition(time_):
71 return time + call
72
73
74 # TODO - move it in pages.
75 @register.simple_tag
76 def full_url(url_name, *args):
77 return settings.BASE_URL + reverse(url_name, args=args)
78
79
80 @register.filter
81 def humanize_price(price):
82 return intcomma(floatformat(price, 0))
83
84
85 # Not good code, but duker at 06/10/2016 don't know how to fix it.
86 # It makes Image model very complex.
87 @register.simple_tag
88 def get_img_alt(entity: ImageMixin):
89 product_alt = 'Фотография {}'
90 logo_alt = 'Логотип компании Shopelectro'
91
92 if not isinstance(entity, Page):
93 return logo_alt
94
95 # try one of this attributes to get pages name
96 name_attrs = ['h1', 'title', 'name']
97 entity_name = next(
98 getattr(entity, attr)
99 for attr in name_attrs
100 if getattr(entity, attr)
101 )
102 return product_alt.format(entity_name)
103
104
105 @register.simple_tag
106 def main_image_or_logo(page: Page):
107 """Used for microdata."""
108 if hasattr(page, 'main_image') and page.main_image:
109 return page.main_image.url
110 else:
111 return settings.STATIC_URL + 'images/logo.png'
112
113
114 @register.inclusion_tag('catalog/product_feedbacks_icons.html')
115 def icon_stars(rating=0):
116 """Render set of rating icons based on 1 through 5 rating values."""
117 full_icons = int(math.floor(rating))
118 half_icons = 0 if rating == int(rating) else 1
119 empty_icons = 5 - full_icons - half_icons
120
121 return {
122 'full_icons': range(full_icons),
123 'half_icons': range(half_icons),
124 'empty_icons': range(empty_icons),
125 }
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/templatetags/se_extras.py b/shopelectro/templatetags/se_extras.py
--- a/shopelectro/templatetags/se_extras.py
+++ b/shopelectro/templatetags/se_extras.py
@@ -20,10 +20,13 @@
@register.simple_tag
def roots():
return sorted(
- Category.objects
- .select_related('page')
- .get_cached_trees(), # https://goo.gl/rFKiku
- key=lambda x: x.page.position
+ filter(
+ lambda x: x.page.is_active,
+ Category.objects # https://goo.gl/rFKiku
+ .select_related('page')
+ .get_cached_trees()
+ ),
+ key=lambda x: x.page.position,
)
| {"golden_diff": "diff --git a/shopelectro/templatetags/se_extras.py b/shopelectro/templatetags/se_extras.py\n--- a/shopelectro/templatetags/se_extras.py\n+++ b/shopelectro/templatetags/se_extras.py\n@@ -20,10 +20,13 @@\n @register.simple_tag\n def roots():\n return sorted(\n- Category.objects\n- .select_related('page')\n- .get_cached_trees(), # https://goo.gl/rFKiku\n- key=lambda x: x.page.position\n+ filter(\n+ lambda x: x.page.is_active,\n+ Category.objects # https://goo.gl/rFKiku\n+ .select_related('page')\n+ .get_cached_trees()\n+ ),\n+ key=lambda x: x.page.position,\n )\n", "issue": "\u0423\u0431\u0435\u0440\u0438 \u043f\u0443\u043d\u043a\u0442 \u043c\u0435\u043d\u044e\n\u0423\u0431\u0435\u0440\u0438 mp3 \u043a\u043e\u043b\u043e\u043d\u043a\u0438 \u0438\u0437 \u043c\u0435\u043d\u044e. \u0418\u0445 \u0431\u043e\u043b\u044c\u0448\u0435 \u043d\u0435 \u0431\u0443\u0434\u0435\u0442.\r\n\u042f \u0437\u0430\u0448\u0435\u043b \u0432 \u0430\u0434\u043c\u0438\u043d\u043a\u0443 \u0438 \u0432\u044b\u043a\u043b\u044e\u0447\u0438\u043b \u043a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u044e. \u041d\u0430\u0434\u0435\u044e\u0441\u044c \u043f\u0440\u0430\u0432\u0438\u043b\u044c\u043d\u043e ) \r\nhttp://prntscr.com/k553lt\n", "before_files": [{"content": "import datetime\nimport math\n\nfrom django import template\nfrom django.conf import settings\nfrom django.contrib.humanize.templatetags.humanize import intcomma\nfrom django.template.defaultfilters import floatformat\nfrom django.urls import reverse\n\nfrom images.models import ImageMixin\nfrom pages.models import Page\n\nfrom shopelectro import config\nfrom shopelectro.models import Category\n\nregister = template.Library()\n\n\n# TODO - move it in catalog. Inspired by lp_electric\[email protected]_tag\ndef roots():\n return sorted(\n Category.objects\n .select_related('page')\n .get_cached_trees(), # https://goo.gl/rFKiku\n key=lambda x: x.page.position\n )\n\n\[email protected]_tag\ndef footer_links():\n return config.FOOTER_LINKS\n\n\n# TODO - move in pages. 
Inspired by LP electric\[email protected]\ndef class_name(model):\n \"\"\"Return Model name.\"\"\"\n return type(model).__name__\n\n\[email protected]_tag\ndef time_to_call():\n def is_weekend(t):\n return t.weekday() > 4\n\n def is_friday(t):\n return t.weekday() == 4\n\n def not_yet_opened(t):\n current_time = (t.hour, t.minute)\n open_time = (10, 00)\n return current_time < open_time and not is_weekend(t)\n\n def is_closed(t):\n current_time = (t.hour, t.minute)\n closing_time = (16, 30) if is_friday(t) else (17, 30)\n return current_time > closing_time\n\n when_we_call = {\n lambda now: is_weekend(now) or (is_friday(now) and is_closed(now)): '\u0412 \u043f\u043e\u043d\u0435\u0434\u0435\u043b\u044c\u043d\u0438\u043a \u0432 10:30',\n lambda now: not_yet_opened(now): '\u0421\u0435\u0433\u043e\u0434\u043d\u044f \u0432 10:30',\n lambda now: is_closed(now) and not (is_friday(now) or is_weekend(now)): '\u0417\u0430\u0432\u0442\u0440\u0430 \u0432 10:30',\n lambda _: True: '\u0412 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 30 \u043c\u0438\u043d\u0443\u0442'\n }\n\n time_ = datetime.datetime.now()\n call = ' \u043f\u043e\u0437\u0432\u043e\u043d\u0438\u0442 \u043c\u0435\u043d\u0435\u0434\u0436\u0435\u0440 \u0438 \u043e\u0431\u0441\u0443\u0434\u0438\u0442 \u0434\u0435\u0442\u0430\u043b\u0438 \u0434\u043e\u0441\u0442\u0430\u0432\u043a\u0438.'\n for condition, time in when_we_call.items():\n if condition(time_):\n return time + call\n\n\n# TODO - move it in pages.\[email protected]_tag\ndef full_url(url_name, *args):\n return settings.BASE_URL + reverse(url_name, args=args)\n\n\[email protected]\ndef humanize_price(price):\n return intcomma(floatformat(price, 0))\n\n\n# Not good code, but duker at 06/10/2016 don't know how to fix it.\n# It makes Image model very complex.\[email protected]_tag\ndef get_img_alt(entity: ImageMixin):\n product_alt = '\u0424\u043e\u0442\u043e\u0433\u0440\u0430\u0444\u0438\u044f {}'\n logo_alt = '\u041b\u043e\u0433\u043e\u0442\u0438\u043f \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0438 Shopelectro'\n\n if not isinstance(entity, Page):\n return logo_alt\n\n # try one of this attributes to get pages name\n name_attrs = ['h1', 'title', 'name']\n entity_name = next(\n getattr(entity, attr)\n for attr in name_attrs\n if getattr(entity, attr)\n )\n return product_alt.format(entity_name)\n\n\[email protected]_tag\ndef main_image_or_logo(page: Page):\n \"\"\"Used for microdata.\"\"\"\n if hasattr(page, 'main_image') and page.main_image:\n return page.main_image.url\n else:\n return settings.STATIC_URL + 'images/logo.png'\n\n\[email protected]_tag('catalog/product_feedbacks_icons.html')\ndef icon_stars(rating=0):\n \"\"\"Render set of rating icons based on 1 through 5 rating values.\"\"\"\n full_icons = int(math.floor(rating))\n half_icons = 0 if rating == int(rating) else 1\n empty_icons = 5 - full_icons - half_icons\n\n return {\n 'full_icons': range(full_icons),\n 'half_icons': range(half_icons),\n 'empty_icons': range(empty_icons),\n }\n", "path": "shopelectro/templatetags/se_extras.py"}], "after_files": [{"content": "import datetime\nimport math\n\nfrom django import template\nfrom django.conf import settings\nfrom django.contrib.humanize.templatetags.humanize import intcomma\nfrom django.template.defaultfilters import floatformat\nfrom django.urls import reverse\n\nfrom images.models import ImageMixin\nfrom pages.models import Page\n\nfrom shopelectro import config\nfrom shopelectro.models import Category\n\nregister = template.Library()\n\n\n# TODO - move it in catalog. 
Inspired by lp_electric\[email protected]_tag\ndef roots():\n return sorted(\n filter(\n lambda x: x.page.is_active,\n Category.objects # https://goo.gl/rFKiku\n .select_related('page')\n .get_cached_trees()\n ),\n key=lambda x: x.page.position,\n )\n\n\[email protected]_tag\ndef footer_links():\n return config.FOOTER_LINKS\n\n\n# TODO - move in pages. Inspired by LP electric\[email protected]\ndef class_name(model):\n \"\"\"Return Model name.\"\"\"\n return type(model).__name__\n\n\[email protected]_tag\ndef time_to_call():\n def is_weekend(t):\n return t.weekday() > 4\n\n def is_friday(t):\n return t.weekday() == 4\n\n def not_yet_opened(t):\n current_time = (t.hour, t.minute)\n open_time = (10, 00)\n return current_time < open_time and not is_weekend(t)\n\n def is_closed(t):\n current_time = (t.hour, t.minute)\n closing_time = (16, 30) if is_friday(t) else (17, 30)\n return current_time > closing_time\n\n when_we_call = {\n lambda now: is_weekend(now) or (is_friday(now) and is_closed(now)): '\u0412 \u043f\u043e\u043d\u0435\u0434\u0435\u043b\u044c\u043d\u0438\u043a \u0432 10:30',\n lambda now: not_yet_opened(now): '\u0421\u0435\u0433\u043e\u0434\u043d\u044f \u0432 10:30',\n lambda now: is_closed(now) and not (is_friday(now) or is_weekend(now)): '\u0417\u0430\u0432\u0442\u0440\u0430 \u0432 10:30',\n lambda _: True: '\u0412 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 30 \u043c\u0438\u043d\u0443\u0442'\n }\n\n time_ = datetime.datetime.now()\n call = ' \u043f\u043e\u0437\u0432\u043e\u043d\u0438\u0442 \u043c\u0435\u043d\u0435\u0434\u0436\u0435\u0440 \u0438 \u043e\u0431\u0441\u0443\u0434\u0438\u0442 \u0434\u0435\u0442\u0430\u043b\u0438 \u0434\u043e\u0441\u0442\u0430\u0432\u043a\u0438.'\n for condition, time in when_we_call.items():\n if condition(time_):\n return time + call\n\n\n# TODO - move it in pages.\[email protected]_tag\ndef full_url(url_name, *args):\n return settings.BASE_URL + reverse(url_name, args=args)\n\n\[email protected]\ndef humanize_price(price):\n return intcomma(floatformat(price, 0))\n\n\n# Not good code, but duker at 06/10/2016 don't know how to fix it.\n# It makes Image model very complex.\[email protected]_tag\ndef get_img_alt(entity: ImageMixin):\n product_alt = '\u0424\u043e\u0442\u043e\u0433\u0440\u0430\u0444\u0438\u044f {}'\n logo_alt = '\u041b\u043e\u0433\u043e\u0442\u0438\u043f \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0438 Shopelectro'\n\n if not isinstance(entity, Page):\n return logo_alt\n\n # try one of this attributes to get pages name\n name_attrs = ['h1', 'title', 'name']\n entity_name = next(\n getattr(entity, attr)\n for attr in name_attrs\n if getattr(entity, attr)\n )\n return product_alt.format(entity_name)\n\n\[email protected]_tag\ndef main_image_or_logo(page: Page):\n \"\"\"Used for microdata.\"\"\"\n if hasattr(page, 'main_image') and page.main_image:\n return page.main_image.url\n else:\n return settings.STATIC_URL + 'images/logo.png'\n\n\[email protected]_tag('catalog/product_feedbacks_icons.html')\ndef icon_stars(rating=0):\n \"\"\"Render set of rating icons based on 1 through 5 rating values.\"\"\"\n full_icons = int(math.floor(rating))\n half_icons = 0 if rating == int(rating) else 1\n empty_icons = 5 - full_icons - half_icons\n\n return {\n 'full_icons': range(full_icons),\n 'half_icons': range(half_icons),\n 'empty_icons': range(empty_icons),\n }\n", "path": "shopelectro/templatetags/se_extras.py"}]} | 1,508 | 188 |
gh_patches_debug_9311 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-3578 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PySide2 app looks different from when running as a script.
Hi. I'm using the latest development version.
As you can see when I run my test script:
```
from PySide2 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
self.gridLayout.setObjectName("gridLayout")
self.tabWidget = QtWidgets.QTabWidget(self.centralwidget)
self.tabWidget.setObjectName("tabWidget")
self.tab = QtWidgets.QWidget()
self.tab.setObjectName("tab")
self.pushButton = QtWidgets.QPushButton(self.tab)
self.pushButton.setGeometry(QtCore.QRect(150, 90, 97, 34))
self.pushButton.setObjectName("pushButton")
self.tableWidget = QtWidgets.QTableWidget(self.tab)
self.tableWidget.setGeometry(QtCore.QRect(140, 150, 256, 192))
self.tableWidget.setObjectName("tableWidget")
self.tableWidget.setColumnCount(3)
self.tableWidget.setRowCount(0)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(0, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(1, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(2, item)
self.tabWidget.addTab(self.tab, "")
self.tab_2 = QtWidgets.QWidget()
self.tab_2.setObjectName("tab_2")
self.tabWidget.addTab(self.tab_2, "")
self.gridLayout.addWidget(self.tabWidget, 0, 0, 1, 1)
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 30))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.pushButton.setText(_translate("MainWindow", "PushButton"))
item = self.tableWidget.horizontalHeaderItem(0)
item.setText(_translate("MainWindow", "New Column"))
item = self.tableWidget.horizontalHeaderItem(1)
item.setText(_translate("MainWindow", "New Column"))
item = self.tableWidget.horizontalHeaderItem(2)
item.setText(_translate("MainWindow", "New Column"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab), _translate("MainWindow", "Tab 1"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_2), _translate("MainWindow", "Tab 2"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
```
from console:

and as the binary pyinstaller creates:

It looks very different (ignore the blue line; that's something that started happening recently on this computer for some reason, but the problem existed before this). Perhaps this has something to do with the themes in qt5_plugins? Anyway, how can I fix this? Ideally it would be possible to bundle my theme "breeze dark" with the application so the user has the ability to use it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-PySide2.QtGui.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2013-2018, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9
10 from PyInstaller.utils.hooks import qt_plugins_binaries
11 from PyInstaller.compat import is_linux
12
13 hiddenimports = ['PySide2.QtCore']
14
15 binaries = []
16 binaries.extend(qt_plugins_binaries('accessible', namespace='PySide2'))
17 binaries.extend(qt_plugins_binaries('iconengines', namespace='PySide2'))
18 binaries.extend(qt_plugins_binaries('imageformats', namespace='PySide2'))
19 binaries.extend(qt_plugins_binaries('inputmethods', namespace='PySide2'))
20 binaries.extend(qt_plugins_binaries('graphicssystems', namespace='PySide2'))
21 binaries.extend(qt_plugins_binaries('platforms', namespace='PySide2'))
22
23 if is_linux:
24 binaries.extend(qt_plugins_binaries('platformthemes', namespace='PySide2'))
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/hook-PySide2.QtGui.py b/PyInstaller/hooks/hook-PySide2.QtGui.py
--- a/PyInstaller/hooks/hook-PySide2.QtGui.py
+++ b/PyInstaller/hooks/hook-PySide2.QtGui.py
@@ -19,6 +19,7 @@
binaries.extend(qt_plugins_binaries('inputmethods', namespace='PySide2'))
binaries.extend(qt_plugins_binaries('graphicssystems', namespace='PySide2'))
binaries.extend(qt_plugins_binaries('platforms', namespace='PySide2'))
+binaries.extend(qt_plugins_binaries('styles', namespace='PySide2'))
if is_linux:
binaries.extend(qt_plugins_binaries('platformthemes', namespace='PySide2'))
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-PySide2.QtGui.py b/PyInstaller/hooks/hook-PySide2.QtGui.py\n--- a/PyInstaller/hooks/hook-PySide2.QtGui.py\n+++ b/PyInstaller/hooks/hook-PySide2.QtGui.py\n@@ -19,6 +19,7 @@\n binaries.extend(qt_plugins_binaries('inputmethods', namespace='PySide2'))\n binaries.extend(qt_plugins_binaries('graphicssystems', namespace='PySide2'))\n binaries.extend(qt_plugins_binaries('platforms', namespace='PySide2'))\n+binaries.extend(qt_plugins_binaries('styles', namespace='PySide2'))\n \n if is_linux:\n binaries.extend(qt_plugins_binaries('platformthemes', namespace='PySide2'))\n", "issue": "Pyside2 looking different from when running as script.\nHi. Im using latest development version.\r\n\r\nAs you can see when i run my test script:\r\n```\r\nfrom PySide2 import QtCore, QtGui, QtWidgets\r\n\r\nclass Ui_MainWindow(object):\r\n def setupUi(self, MainWindow):\r\n MainWindow.setObjectName(\"MainWindow\")\r\n MainWindow.resize(800, 600)\r\n self.centralwidget = QtWidgets.QWidget(MainWindow)\r\n self.centralwidget.setObjectName(\"centralwidget\")\r\n self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)\r\n self.gridLayout.setObjectName(\"gridLayout\")\r\n self.tabWidget = QtWidgets.QTabWidget(self.centralwidget)\r\n self.tabWidget.setObjectName(\"tabWidget\")\r\n self.tab = QtWidgets.QWidget()\r\n self.tab.setObjectName(\"tab\")\r\n self.pushButton = QtWidgets.QPushButton(self.tab)\r\n self.pushButton.setGeometry(QtCore.QRect(150, 90, 97, 34))\r\n self.pushButton.setObjectName(\"pushButton\")\r\n self.tableWidget = QtWidgets.QTableWidget(self.tab)\r\n self.tableWidget.setGeometry(QtCore.QRect(140, 150, 256, 192))\r\n self.tableWidget.setObjectName(\"tableWidget\")\r\n self.tableWidget.setColumnCount(3)\r\n self.tableWidget.setRowCount(0)\r\n item = QtWidgets.QTableWidgetItem()\r\n self.tableWidget.setHorizontalHeaderItem(0, item)\r\n item = QtWidgets.QTableWidgetItem()\r\n self.tableWidget.setHorizontalHeaderItem(1, item)\r\n item = QtWidgets.QTableWidgetItem()\r\n self.tableWidget.setHorizontalHeaderItem(2, item)\r\n self.tabWidget.addTab(self.tab, \"\")\r\n self.tab_2 = QtWidgets.QWidget()\r\n self.tab_2.setObjectName(\"tab_2\")\r\n self.tabWidget.addTab(self.tab_2, \"\")\r\n self.gridLayout.addWidget(self.tabWidget, 0, 0, 1, 1)\r\n MainWindow.setCentralWidget(self.centralwidget)\r\n self.menubar = QtWidgets.QMenuBar(MainWindow)\r\n self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 30))\r\n self.menubar.setObjectName(\"menubar\")\r\n MainWindow.setMenuBar(self.menubar)\r\n self.statusbar = QtWidgets.QStatusBar(MainWindow)\r\n self.statusbar.setObjectName(\"statusbar\")\r\n MainWindow.setStatusBar(self.statusbar)\r\n\r\n self.retranslateUi(MainWindow)\r\n QtCore.QMetaObject.connectSlotsByName(MainWindow)\r\n\r\n def retranslateUi(self, MainWindow):\r\n _translate = QtCore.QCoreApplication.translate\r\n MainWindow.setWindowTitle(_translate(\"MainWindow\", \"MainWindow\"))\r\n self.pushButton.setText(_translate(\"MainWindow\", \"PushButton\"))\r\n item = self.tableWidget.horizontalHeaderItem(0)\r\n item.setText(_translate(\"MainWindow\", \"New Column\"))\r\n item = self.tableWidget.horizontalHeaderItem(1)\r\n item.setText(_translate(\"MainWindow\", \"New Column\"))\r\n item = self.tableWidget.horizontalHeaderItem(2)\r\n item.setText(_translate(\"MainWindow\", \"New Column\"))\r\n self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab), _translate(\"MainWindow\", \"Tab 1\"))\r\n self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_2), 
_translate(\"MainWindow\", \"Tab 2\"))\r\n\r\n\r\nif __name__ == \"__main__\":\r\n import sys\r\n app = QtWidgets.QApplication(sys.argv)\r\n MainWindow = QtWidgets.QMainWindow()\r\n ui = Ui_MainWindow()\r\n ui.setupUi(MainWindow)\r\n MainWindow.show()\r\n sys.exit(app.exec_())\r\n```\r\n\r\nfrom console:\r\n\r\nand as the binary pyinstaller creates:\r\n\r\n\r\nIt looks very different(ignore the blue line, that's something that happened recently on this comp for some reason but the problem existed previous to this). Perhaps this has something to do with the themes in qt5_plugins? Anyway, how can i fix this? Ideally it would be possible to bundle my theme \"breeze dark\" with the application so the user has the ability to use that.\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2013-2018, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks import qt_plugins_binaries\nfrom PyInstaller.compat import is_linux\n\nhiddenimports = ['PySide2.QtCore']\n\nbinaries = []\nbinaries.extend(qt_plugins_binaries('accessible', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('iconengines', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('imageformats', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('inputmethods', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('graphicssystems', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('platforms', namespace='PySide2'))\n\nif is_linux:\n binaries.extend(qt_plugins_binaries('platformthemes', namespace='PySide2'))\n", "path": "PyInstaller/hooks/hook-PySide2.QtGui.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2013-2018, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks import qt_plugins_binaries\nfrom PyInstaller.compat import is_linux\n\nhiddenimports = ['PySide2.QtCore']\n\nbinaries = []\nbinaries.extend(qt_plugins_binaries('accessible', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('iconengines', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('imageformats', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('inputmethods', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('graphicssystems', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('platforms', namespace='PySide2'))\nbinaries.extend(qt_plugins_binaries('styles', namespace='PySide2'))\n\nif is_linux:\n binaries.extend(qt_plugins_binaries('platformthemes', namespace='PySide2'))\n", "path": "PyInstaller/hooks/hook-PySide2.QtGui.py"}]} | 1,470 | 169 |
gh_patches_debug_5669 | rasdani/github-patches | git_diff | lutris__lutris-3705 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Store banners in .cache
They have no place to be in .local/share
Store banners in .cache
They have no place to be in .local/share
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/settings.py`
Content:
```
1 """Internal settings."""
2 import os
3 import sys
4 from gettext import gettext as _
5
6 from gi.repository import GLib
7
8 from lutris import __version__
9 from lutris.util.settings import SettingsIO
10
11 PROJECT = "Lutris"
12 VERSION = __version__
13 COPYRIGHT = _("(c) 2010-2021 Lutris Team")
14 AUTHORS = [_("The Lutris team")]
15
16 # Paths
17 CONFIG_DIR = os.path.join(GLib.get_user_config_dir(), "lutris")
18 CONFIG_FILE = os.path.join(CONFIG_DIR, "lutris.conf")
19 DATA_DIR = os.path.join(GLib.get_user_data_dir(), "lutris")
20 RUNNER_DIR = os.path.join(DATA_DIR, "runners")
21 RUNTIME_DIR = os.path.join(DATA_DIR, "runtime")
22 CACHE_DIR = os.path.join(GLib.get_user_cache_dir(), "lutris")
23 GAME_CONFIG_DIR = os.path.join(CONFIG_DIR, "games")
24
25 TMP_PATH = os.path.join(CACHE_DIR, "tmp")
26 BANNER_PATH = os.path.join(DATA_DIR, "banners")
27 COVERART_PATH = os.path.join(DATA_DIR, "coverart")
28 ICON_PATH = os.path.join(GLib.get_user_data_dir(), "icons", "hicolor", "128x128", "apps")
29
30 sio = SettingsIO(CONFIG_FILE)
31 if "nosetests" in sys.argv[0] or "pytest" in sys.argv[0]:
32 PGA_DB = "/tmp/pga.db"
33 else:
34 PGA_DB = sio.read_setting("pga_path") or os.path.join(DATA_DIR, "pga.db")
35
36 SITE_URL = sio.read_setting("website") or "https://lutris.net"
37
38 DRIVER_HOWTO_URL = "https://github.com/lutris/docs/blob/master/InstallingDrivers.md"
39 INSTALLER_URL = SITE_URL + "/api/installers/%s"
40 # XXX change this, should query on the installer, not the game.
41 INSTALLER_REVISION_URL = SITE_URL + "/api/installers/games/%s/revisions/%s"
42 GAME_URL = SITE_URL + "/games/%s/"
43 RUNTIME_URL = SITE_URL + "/api/runtimes"
44
45 STEAM_API_KEY = sio.read_setting("steam_api_key") or "34C9698CEB394AB4401D65927C6B3752"
46 DISCORD_CLIENT_ID = sio.read_setting("discord_client_id") or "618290412402114570"
47
48
49 read_setting = sio.read_setting
50 write_setting = sio.write_setting
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lutris/settings.py b/lutris/settings.py
--- a/lutris/settings.py
+++ b/lutris/settings.py
@@ -23,7 +23,7 @@
GAME_CONFIG_DIR = os.path.join(CONFIG_DIR, "games")
TMP_PATH = os.path.join(CACHE_DIR, "tmp")
-BANNER_PATH = os.path.join(DATA_DIR, "banners")
+BANNER_PATH = os.path.join(CACHE_DIR, "banners")
COVERART_PATH = os.path.join(DATA_DIR, "coverart")
ICON_PATH = os.path.join(GLib.get_user_data_dir(), "icons", "hicolor", "128x128", "apps")
| {"golden_diff": "diff --git a/lutris/settings.py b/lutris/settings.py\n--- a/lutris/settings.py\n+++ b/lutris/settings.py\n@@ -23,7 +23,7 @@\n GAME_CONFIG_DIR = os.path.join(CONFIG_DIR, \"games\")\n \n TMP_PATH = os.path.join(CACHE_DIR, \"tmp\")\n-BANNER_PATH = os.path.join(DATA_DIR, \"banners\")\n+BANNER_PATH = os.path.join(CACHE_DIR, \"banners\")\n COVERART_PATH = os.path.join(DATA_DIR, \"coverart\")\n ICON_PATH = os.path.join(GLib.get_user_data_dir(), \"icons\", \"hicolor\", \"128x128\", \"apps\")\n", "issue": "Store banners in .cache\nThey have no place to be in .local/share\nStore banners in .cache\nThey have no place to be in .local/share\n", "before_files": [{"content": "\"\"\"Internal settings.\"\"\"\nimport os\nimport sys\nfrom gettext import gettext as _\n\nfrom gi.repository import GLib\n\nfrom lutris import __version__\nfrom lutris.util.settings import SettingsIO\n\nPROJECT = \"Lutris\"\nVERSION = __version__\nCOPYRIGHT = _(\"(c) 2010-2021 Lutris Team\")\nAUTHORS = [_(\"The Lutris team\")]\n\n# Paths\nCONFIG_DIR = os.path.join(GLib.get_user_config_dir(), \"lutris\")\nCONFIG_FILE = os.path.join(CONFIG_DIR, \"lutris.conf\")\nDATA_DIR = os.path.join(GLib.get_user_data_dir(), \"lutris\")\nRUNNER_DIR = os.path.join(DATA_DIR, \"runners\")\nRUNTIME_DIR = os.path.join(DATA_DIR, \"runtime\")\nCACHE_DIR = os.path.join(GLib.get_user_cache_dir(), \"lutris\")\nGAME_CONFIG_DIR = os.path.join(CONFIG_DIR, \"games\")\n\nTMP_PATH = os.path.join(CACHE_DIR, \"tmp\")\nBANNER_PATH = os.path.join(DATA_DIR, \"banners\")\nCOVERART_PATH = os.path.join(DATA_DIR, \"coverart\")\nICON_PATH = os.path.join(GLib.get_user_data_dir(), \"icons\", \"hicolor\", \"128x128\", \"apps\")\n\nsio = SettingsIO(CONFIG_FILE)\nif \"nosetests\" in sys.argv[0] or \"pytest\" in sys.argv[0]:\n PGA_DB = \"/tmp/pga.db\"\nelse:\n PGA_DB = sio.read_setting(\"pga_path\") or os.path.join(DATA_DIR, \"pga.db\")\n\nSITE_URL = sio.read_setting(\"website\") or \"https://lutris.net\"\n\nDRIVER_HOWTO_URL = \"https://github.com/lutris/docs/blob/master/InstallingDrivers.md\"\nINSTALLER_URL = SITE_URL + \"/api/installers/%s\"\n# XXX change this, should query on the installer, not the game.\nINSTALLER_REVISION_URL = SITE_URL + \"/api/installers/games/%s/revisions/%s\"\nGAME_URL = SITE_URL + \"/games/%s/\"\nRUNTIME_URL = SITE_URL + \"/api/runtimes\"\n\nSTEAM_API_KEY = sio.read_setting(\"steam_api_key\") or \"34C9698CEB394AB4401D65927C6B3752\"\nDISCORD_CLIENT_ID = sio.read_setting(\"discord_client_id\") or \"618290412402114570\"\n\n\nread_setting = sio.read_setting\nwrite_setting = sio.write_setting\n", "path": "lutris/settings.py"}], "after_files": [{"content": "\"\"\"Internal settings.\"\"\"\nimport os\nimport sys\nfrom gettext import gettext as _\n\nfrom gi.repository import GLib\n\nfrom lutris import __version__\nfrom lutris.util.settings import SettingsIO\n\nPROJECT = \"Lutris\"\nVERSION = __version__\nCOPYRIGHT = _(\"(c) 2010-2021 Lutris Team\")\nAUTHORS = [_(\"The Lutris team\")]\n\n# Paths\nCONFIG_DIR = os.path.join(GLib.get_user_config_dir(), \"lutris\")\nCONFIG_FILE = os.path.join(CONFIG_DIR, \"lutris.conf\")\nDATA_DIR = os.path.join(GLib.get_user_data_dir(), \"lutris\")\nRUNNER_DIR = os.path.join(DATA_DIR, \"runners\")\nRUNTIME_DIR = os.path.join(DATA_DIR, \"runtime\")\nCACHE_DIR = os.path.join(GLib.get_user_cache_dir(), \"lutris\")\nGAME_CONFIG_DIR = os.path.join(CONFIG_DIR, \"games\")\n\nTMP_PATH = os.path.join(CACHE_DIR, \"tmp\")\nBANNER_PATH = os.path.join(CACHE_DIR, \"banners\")\nCOVERART_PATH = os.path.join(DATA_DIR, 
\"coverart\")\nICON_PATH = os.path.join(GLib.get_user_data_dir(), \"icons\", \"hicolor\", \"128x128\", \"apps\")\n\nsio = SettingsIO(CONFIG_FILE)\nif \"nosetests\" in sys.argv[0] or \"pytest\" in sys.argv[0]:\n PGA_DB = \"/tmp/pga.db\"\nelse:\n PGA_DB = sio.read_setting(\"pga_path\") or os.path.join(DATA_DIR, \"pga.db\")\n\nSITE_URL = sio.read_setting(\"website\") or \"https://lutris.net\"\n\nDRIVER_HOWTO_URL = \"https://github.com/lutris/docs/blob/master/InstallingDrivers.md\"\nINSTALLER_URL = SITE_URL + \"/api/installers/%s\"\n# XXX change this, should query on the installer, not the game.\nINSTALLER_REVISION_URL = SITE_URL + \"/api/installers/games/%s/revisions/%s\"\nGAME_URL = SITE_URL + \"/games/%s/\"\nRUNTIME_URL = SITE_URL + \"/api/runtimes\"\n\nSTEAM_API_KEY = sio.read_setting(\"steam_api_key\") or \"34C9698CEB394AB4401D65927C6B3752\"\nDISCORD_CLIENT_ID = sio.read_setting(\"discord_client_id\") or \"618290412402114570\"\n\n\nread_setting = sio.read_setting\nwrite_setting = sio.write_setting\n", "path": "lutris/settings.py"}]} | 936 | 146 |
gh_patches_debug_804 | rasdani/github-patches | git_diff | scikit-hep__awkward-1830 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`ak.fill_none(axis=None)` does nothing
### Version of Awkward Array
main
### Description and code to reproduce
The `apply` function for this case does not return a value (or recurse).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/awkward/operations/ak_fill_none.py`
Content:
```
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2
3 import numbers
4
5 import awkward as ak
6
7 np = ak.nplikes.NumpyMetadata.instance()
8
9
10 def fill_none(array, value, axis=-1, highlevel=True, behavior=None):
11 """
12 Args:
13 array: Data in which to replace None with a given value.
14 value: Data with which to replace None.
15 axis (None or int): If None, replace all None values in the array
16 with the given value; if an int, The dimension at which this
17 operation is applied. The outermost dimension is `0`, followed
18 by `1`, etc., and negative values count backward from the
19 innermost: `-1` is the innermost dimension, `-2` is the next
20 level up, etc.
21 highlevel (bool): If True, return an #ak.Array; otherwise, return
22 a low-level #ak.contents.Content subclass.
23 behavior (None or dict): Custom #ak.behavior for the output array, if
24 high-level.
25
26 Replaces missing values (None) with a given `value`.
27
28 For example, in the following `array`,
29
30 ak.Array([[1.1, None, 2.2], [], [None, 3.3, 4.4]])
31
32 The None values could be replaced with `0` by
33
34 >>> ak.fill_none(array, 0)
35 <Array [[1.1, 0, 2.2], [], [0, 3.3, 4.4]] type='3 * var * float64'>
36
37 The replacement value doesn't strictly need the same type as the
38 surrounding data. For example, the None values could also be replaced
39 by a string.
40
41 >>> ak.fill_none(array, "hi")
42 <Array [[1.1, 'hi', 2.2], ... ['hi', 3.3, 4.4]] type='3 * var * union[float64, s...'>
43
44 The list content now has a union type:
45
46 >>> ak.type(ak.fill_none(array, "hi"))
47 3 * var * union[float64, string]
48
49 The values could be floating-point numbers or strings.
50 """
51 with ak._errors.OperationErrorContext(
52 "ak.fill_none",
53 dict(
54 array=array, value=value, axis=axis, highlevel=highlevel, behavior=behavior
55 ),
56 ):
57 return _impl(array, value, axis, highlevel, behavior)
58
59
60 def _impl(array, value, axis, highlevel, behavior):
61 arraylayout = ak.operations.to_layout(array, allow_record=True, allow_other=False)
62 nplike = ak.nplikes.nplike_of(arraylayout)
63
64 # Convert value type to appropriate layout
65 if (
66 isinstance(value, np.ndarray)
67 and issubclass(value.dtype.type, (np.bool_, np.number))
68 and len(value.shape) != 0
69 ):
70 valuelayout = ak.operations.to_layout(
71 nplike.asarray(value)[np.newaxis], allow_record=False, allow_other=False
72 )
73 elif isinstance(value, (bool, numbers.Number, np.bool_, np.number)) or (
74 isinstance(value, np.ndarray)
75 and issubclass(value.dtype.type, (np.bool_, np.number))
76 ):
77 valuelayout = ak.operations.to_layout(
78 nplike.asarray(value), allow_record=False, allow_other=False
79 )
80 elif (
81 ak._util.is_sized_iterable(value)
82 and not (isinstance(value, (str, bytes)))
83 or isinstance(value, (ak.highlevel.Record, ak.record.Record))
84 ):
85 valuelayout = ak.operations.to_layout(
86 value, allow_record=True, allow_other=False
87 )
88 if isinstance(valuelayout, ak.record.Record):
89 valuelayout = valuelayout.array[valuelayout.at : valuelayout.at + 1]
90 elif len(valuelayout) == 0:
91 offsets = ak.index.Index64(
92 nplike.array([0, 0], dtype=np.int64), nplike=nplike
93 )
94 valuelayout = ak.contents.ListOffsetArray(offsets, valuelayout)
95 else:
96 valuelayout = ak.contents.RegularArray(valuelayout, len(valuelayout), 1)
97 else:
98 valuelayout = ak.operations.to_layout(
99 [value], allow_record=False, allow_other=False
100 )
101
102 def maybe_fillna(layout):
103 if layout.is_OptionType:
104 return layout.fill_none(valuelayout)
105 else:
106 return layout
107
108 if axis is None:
109
110 def action(layout, depth, depth_context, **kwargs):
111 layout = maybe_fillna(layout)
112
113 else:
114
115 def action(layout, depth, depth_context, **kwargs):
116 posaxis = layout.axis_wrap_if_negative(depth_context["posaxis"])
117 depth_context["posaxis"] = posaxis
118 if posaxis + 1 < depth:
119 return layout
120 elif posaxis + 1 == depth:
121 return maybe_fillna(layout)
122
123 depth_context = {"posaxis": axis}
124 out = arraylayout.recursively_apply(action, behavior, depth_context=depth_context)
125
126 return ak._util.wrap(out, ak._util.behavior_of(array, behavior=behavior), highlevel)
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/awkward/operations/ak_fill_none.py b/src/awkward/operations/ak_fill_none.py
--- a/src/awkward/operations/ak_fill_none.py
+++ b/src/awkward/operations/ak_fill_none.py
@@ -107,8 +107,8 @@
if axis is None:
- def action(layout, depth, depth_context, **kwargs):
- layout = maybe_fillna(layout)
+ def action(layout, continuation, **kwargs):
+ return maybe_fillna(continuation())
else:
| {"golden_diff": "diff --git a/src/awkward/operations/ak_fill_none.py b/src/awkward/operations/ak_fill_none.py\n--- a/src/awkward/operations/ak_fill_none.py\n+++ b/src/awkward/operations/ak_fill_none.py\n@@ -107,8 +107,8 @@\n \n if axis is None:\n \n- def action(layout, depth, depth_context, **kwargs):\n- layout = maybe_fillna(layout)\n+ def action(layout, continuation, **kwargs):\n+ return maybe_fillna(continuation())\n \n else:\n", "issue": "`ak.fill_none(axis=None)` does nothing\n### Version of Awkward Array\r\n\r\nmain\r\n\r\n### Description and code to reproduce\r\n\r\nThe `apply` function for this case does not return (or recurse)\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nimport numbers\n\nimport awkward as ak\n\nnp = ak.nplikes.NumpyMetadata.instance()\n\n\ndef fill_none(array, value, axis=-1, highlevel=True, behavior=None):\n \"\"\"\n Args:\n array: Data in which to replace None with a given value.\n value: Data with which to replace None.\n axis (None or int): If None, replace all None values in the array\n with the given value; if an int, The dimension at which this\n operation is applied. The outermost dimension is `0`, followed\n by `1`, etc., and negative values count backward from the\n innermost: `-1` is the innermost dimension, `-2` is the next\n level up, etc.\n highlevel (bool): If True, return an #ak.Array; otherwise, return\n a low-level #ak.contents.Content subclass.\n behavior (None or dict): Custom #ak.behavior for the output array, if\n high-level.\n\n Replaces missing values (None) with a given `value`.\n\n For example, in the following `array`,\n\n ak.Array([[1.1, None, 2.2], [], [None, 3.3, 4.4]])\n\n The None values could be replaced with `0` by\n\n >>> ak.fill_none(array, 0)\n <Array [[1.1, 0, 2.2], [], [0, 3.3, 4.4]] type='3 * var * float64'>\n\n The replacement value doesn't strictly need the same type as the\n surrounding data. For example, the None values could also be replaced\n by a string.\n\n >>> ak.fill_none(array, \"hi\")\n <Array [[1.1, 'hi', 2.2], ... 
['hi', 3.3, 4.4]] type='3 * var * union[float64, s...'>\n\n The list content now has a union type:\n\n >>> ak.type(ak.fill_none(array, \"hi\"))\n 3 * var * union[float64, string]\n\n The values could be floating-point numbers or strings.\n \"\"\"\n with ak._errors.OperationErrorContext(\n \"ak.fill_none\",\n dict(\n array=array, value=value, axis=axis, highlevel=highlevel, behavior=behavior\n ),\n ):\n return _impl(array, value, axis, highlevel, behavior)\n\n\ndef _impl(array, value, axis, highlevel, behavior):\n arraylayout = ak.operations.to_layout(array, allow_record=True, allow_other=False)\n nplike = ak.nplikes.nplike_of(arraylayout)\n\n # Convert value type to appropriate layout\n if (\n isinstance(value, np.ndarray)\n and issubclass(value.dtype.type, (np.bool_, np.number))\n and len(value.shape) != 0\n ):\n valuelayout = ak.operations.to_layout(\n nplike.asarray(value)[np.newaxis], allow_record=False, allow_other=False\n )\n elif isinstance(value, (bool, numbers.Number, np.bool_, np.number)) or (\n isinstance(value, np.ndarray)\n and issubclass(value.dtype.type, (np.bool_, np.number))\n ):\n valuelayout = ak.operations.to_layout(\n nplike.asarray(value), allow_record=False, allow_other=False\n )\n elif (\n ak._util.is_sized_iterable(value)\n and not (isinstance(value, (str, bytes)))\n or isinstance(value, (ak.highlevel.Record, ak.record.Record))\n ):\n valuelayout = ak.operations.to_layout(\n value, allow_record=True, allow_other=False\n )\n if isinstance(valuelayout, ak.record.Record):\n valuelayout = valuelayout.array[valuelayout.at : valuelayout.at + 1]\n elif len(valuelayout) == 0:\n offsets = ak.index.Index64(\n nplike.array([0, 0], dtype=np.int64), nplike=nplike\n )\n valuelayout = ak.contents.ListOffsetArray(offsets, valuelayout)\n else:\n valuelayout = ak.contents.RegularArray(valuelayout, len(valuelayout), 1)\n else:\n valuelayout = ak.operations.to_layout(\n [value], allow_record=False, allow_other=False\n )\n\n def maybe_fillna(layout):\n if layout.is_OptionType:\n return layout.fill_none(valuelayout)\n else:\n return layout\n\n if axis is None:\n\n def action(layout, depth, depth_context, **kwargs):\n layout = maybe_fillna(layout)\n\n else:\n\n def action(layout, depth, depth_context, **kwargs):\n posaxis = layout.axis_wrap_if_negative(depth_context[\"posaxis\"])\n depth_context[\"posaxis\"] = posaxis\n if posaxis + 1 < depth:\n return layout\n elif posaxis + 1 == depth:\n return maybe_fillna(layout)\n\n depth_context = {\"posaxis\": axis}\n out = arraylayout.recursively_apply(action, behavior, depth_context=depth_context)\n\n return ak._util.wrap(out, ak._util.behavior_of(array, behavior=behavior), highlevel)\n", "path": "src/awkward/operations/ak_fill_none.py"}], "after_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nimport numbers\n\nimport awkward as ak\n\nnp = ak.nplikes.NumpyMetadata.instance()\n\n\ndef fill_none(array, value, axis=-1, highlevel=True, behavior=None):\n \"\"\"\n Args:\n array: Data in which to replace None with a given value.\n value: Data with which to replace None.\n axis (None or int): If None, replace all None values in the array\n with the given value; if an int, The dimension at which this\n operation is applied. 
The outermost dimension is `0`, followed\n by `1`, etc., and negative values count backward from the\n innermost: `-1` is the innermost dimension, `-2` is the next\n level up, etc.\n highlevel (bool): If True, return an #ak.Array; otherwise, return\n a low-level #ak.contents.Content subclass.\n behavior (None or dict): Custom #ak.behavior for the output array, if\n high-level.\n\n Replaces missing values (None) with a given `value`.\n\n For example, in the following `array`,\n\n ak.Array([[1.1, None, 2.2], [], [None, 3.3, 4.4]])\n\n The None values could be replaced with `0` by\n\n >>> ak.fill_none(array, 0)\n <Array [[1.1, 0, 2.2], [], [0, 3.3, 4.4]] type='3 * var * float64'>\n\n The replacement value doesn't strictly need the same type as the\n surrounding data. For example, the None values could also be replaced\n by a string.\n\n >>> ak.fill_none(array, \"hi\")\n <Array [[1.1, 'hi', 2.2], ... ['hi', 3.3, 4.4]] type='3 * var * union[float64, s...'>\n\n The list content now has a union type:\n\n >>> ak.type(ak.fill_none(array, \"hi\"))\n 3 * var * union[float64, string]\n\n The values could be floating-point numbers or strings.\n \"\"\"\n with ak._errors.OperationErrorContext(\n \"ak.fill_none\",\n dict(\n array=array, value=value, axis=axis, highlevel=highlevel, behavior=behavior\n ),\n ):\n return _impl(array, value, axis, highlevel, behavior)\n\n\ndef _impl(array, value, axis, highlevel, behavior):\n arraylayout = ak.operations.to_layout(array, allow_record=True, allow_other=False)\n nplike = ak.nplikes.nplike_of(arraylayout)\n\n # Convert value type to appropriate layout\n if (\n isinstance(value, np.ndarray)\n and issubclass(value.dtype.type, (np.bool_, np.number))\n and len(value.shape) != 0\n ):\n valuelayout = ak.operations.to_layout(\n nplike.asarray(value)[np.newaxis], allow_record=False, allow_other=False\n )\n elif isinstance(value, (bool, numbers.Number, np.bool_, np.number)) or (\n isinstance(value, np.ndarray)\n and issubclass(value.dtype.type, (np.bool_, np.number))\n ):\n valuelayout = ak.operations.to_layout(\n nplike.asarray(value), allow_record=False, allow_other=False\n )\n elif (\n ak._util.is_sized_iterable(value)\n and not (isinstance(value, (str, bytes)))\n or isinstance(value, (ak.highlevel.Record, ak.record.Record))\n ):\n valuelayout = ak.operations.to_layout(\n value, allow_record=True, allow_other=False\n )\n if isinstance(valuelayout, ak.record.Record):\n valuelayout = valuelayout.array[valuelayout.at : valuelayout.at + 1]\n elif len(valuelayout) == 0:\n offsets = ak.index.Index64(\n nplike.array([0, 0], dtype=np.int64), nplike=nplike\n )\n valuelayout = ak.contents.ListOffsetArray(offsets, valuelayout)\n else:\n valuelayout = ak.contents.RegularArray(valuelayout, len(valuelayout), 1)\n else:\n valuelayout = ak.operations.to_layout(\n [value], allow_record=False, allow_other=False\n )\n\n def maybe_fillna(layout):\n if layout.is_OptionType:\n return layout.fill_none(valuelayout)\n else:\n return layout\n\n if axis is None:\n\n def action(layout, continuation, **kwargs):\n return maybe_fillna(continuation())\n\n else:\n\n def action(layout, depth, depth_context, **kwargs):\n posaxis = layout.axis_wrap_if_negative(depth_context[\"posaxis\"])\n depth_context[\"posaxis\"] = posaxis\n if posaxis + 1 < depth:\n return layout\n elif posaxis + 1 == depth:\n return maybe_fillna(layout)\n\n depth_context = {\"posaxis\": axis}\n out = arraylayout.recursively_apply(action, behavior, depth_context=depth_context)\n\n return ak._util.wrap(out, ak._util.behavior_of(array, 
behavior=behavior), highlevel)\n", "path": "src/awkward/operations/ak_fill_none.py"}]} | 1,742 | 126 |
gh_patches_debug_36171 | rasdani/github-patches | git_diff | pandas-dev__pandas-28230 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ASV Benchmark for read_excel
Right now we don't benchmark the performance of read_excel across the various engines (xlrd, openpyxl and odfpy), so we would gladly accept contributions to improve benchmark coverage of those engines.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `asv_bench/benchmarks/io/excel.py`
Content:
```
1 from io import BytesIO
2
3 import numpy as np
4
5 from pandas import DataFrame, ExcelWriter, date_range, read_excel
6 import pandas.util.testing as tm
7
8
9 class Excel:
10
11 params = ["openpyxl", "xlsxwriter", "xlwt"]
12 param_names = ["engine"]
13
14 def setup(self, engine):
15 N = 2000
16 C = 5
17 self.df = DataFrame(
18 np.random.randn(N, C),
19 columns=["float{}".format(i) for i in range(C)],
20 index=date_range("20000101", periods=N, freq="H"),
21 )
22 self.df["object"] = tm.makeStringIndex(N)
23 self.bio_read = BytesIO()
24 self.writer_read = ExcelWriter(self.bio_read, engine=engine)
25 self.df.to_excel(self.writer_read, sheet_name="Sheet1")
26 self.writer_read.save()
27 self.bio_read.seek(0)
28
29 def time_read_excel(self, engine):
30 read_excel(self.bio_read)
31
32 def time_write_excel(self, engine):
33 bio_write = BytesIO()
34 bio_write.seek(0)
35 writer_write = ExcelWriter(bio_write, engine=engine)
36 self.df.to_excel(writer_write, sheet_name="Sheet1")
37 writer_write.save()
38
39
40 from ..pandas_vb_common import setup # noqa: F401 isort:skip
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/asv_bench/benchmarks/io/excel.py b/asv_bench/benchmarks/io/excel.py
--- a/asv_bench/benchmarks/io/excel.py
+++ b/asv_bench/benchmarks/io/excel.py
@@ -1,40 +1,72 @@
from io import BytesIO
import numpy as np
+from odf.opendocument import OpenDocumentSpreadsheet
+from odf.table import Table, TableCell, TableRow
+from odf.text import P
from pandas import DataFrame, ExcelWriter, date_range, read_excel
import pandas.util.testing as tm
-class Excel:
+def _generate_dataframe():
+ N = 2000
+ C = 5
+ df = DataFrame(
+ np.random.randn(N, C),
+ columns=["float{}".format(i) for i in range(C)],
+ index=date_range("20000101", periods=N, freq="H"),
+ )
+ df["object"] = tm.makeStringIndex(N)
+ return df
+
+
+class WriteExcel:
params = ["openpyxl", "xlsxwriter", "xlwt"]
param_names = ["engine"]
def setup(self, engine):
- N = 2000
- C = 5
- self.df = DataFrame(
- np.random.randn(N, C),
- columns=["float{}".format(i) for i in range(C)],
- index=date_range("20000101", periods=N, freq="H"),
- )
- self.df["object"] = tm.makeStringIndex(N)
- self.bio_read = BytesIO()
- self.writer_read = ExcelWriter(self.bio_read, engine=engine)
- self.df.to_excel(self.writer_read, sheet_name="Sheet1")
- self.writer_read.save()
- self.bio_read.seek(0)
-
- def time_read_excel(self, engine):
- read_excel(self.bio_read)
+ self.df = _generate_dataframe()
def time_write_excel(self, engine):
- bio_write = BytesIO()
- bio_write.seek(0)
- writer_write = ExcelWriter(bio_write, engine=engine)
- self.df.to_excel(writer_write, sheet_name="Sheet1")
- writer_write.save()
+ bio = BytesIO()
+ bio.seek(0)
+ writer = ExcelWriter(bio, engine=engine)
+ self.df.to_excel(writer, sheet_name="Sheet1")
+ writer.save()
+
+
+class ReadExcel:
+
+ params = ["xlrd", "openpyxl", "odf"]
+ param_names = ["engine"]
+ fname_excel = "spreadsheet.xlsx"
+ fname_odf = "spreadsheet.ods"
+
+ def _create_odf(self):
+ doc = OpenDocumentSpreadsheet()
+ table = Table(name="Table1")
+ for row in self.df.values:
+ tr = TableRow()
+ for val in row:
+ tc = TableCell(valuetype="string")
+ tc.addElement(P(text=val))
+ tr.addElement(tc)
+ table.addElement(tr)
+
+ doc.spreadsheet.addElement(table)
+ doc.save(self.fname_odf)
+
+ def setup_cache(self):
+ self.df = _generate_dataframe()
+
+ self.df.to_excel(self.fname_excel, sheet_name="Sheet1")
+ self._create_odf()
+
+ def time_read_excel(self, engine):
+ fname = self.fname_odf if engine == "odf" else self.fname_excel
+ read_excel(fname, engine=engine)
from ..pandas_vb_common import setup # noqa: F401 isort:skip
| {"golden_diff": "diff --git a/asv_bench/benchmarks/io/excel.py b/asv_bench/benchmarks/io/excel.py\n--- a/asv_bench/benchmarks/io/excel.py\n+++ b/asv_bench/benchmarks/io/excel.py\n@@ -1,40 +1,72 @@\n from io import BytesIO\n \n import numpy as np\n+from odf.opendocument import OpenDocumentSpreadsheet\n+from odf.table import Table, TableCell, TableRow\n+from odf.text import P\n \n from pandas import DataFrame, ExcelWriter, date_range, read_excel\n import pandas.util.testing as tm\n \n \n-class Excel:\n+def _generate_dataframe():\n+ N = 2000\n+ C = 5\n+ df = DataFrame(\n+ np.random.randn(N, C),\n+ columns=[\"float{}\".format(i) for i in range(C)],\n+ index=date_range(\"20000101\", periods=N, freq=\"H\"),\n+ )\n+ df[\"object\"] = tm.makeStringIndex(N)\n+ return df\n+\n+\n+class WriteExcel:\n \n params = [\"openpyxl\", \"xlsxwriter\", \"xlwt\"]\n param_names = [\"engine\"]\n \n def setup(self, engine):\n- N = 2000\n- C = 5\n- self.df = DataFrame(\n- np.random.randn(N, C),\n- columns=[\"float{}\".format(i) for i in range(C)],\n- index=date_range(\"20000101\", periods=N, freq=\"H\"),\n- )\n- self.df[\"object\"] = tm.makeStringIndex(N)\n- self.bio_read = BytesIO()\n- self.writer_read = ExcelWriter(self.bio_read, engine=engine)\n- self.df.to_excel(self.writer_read, sheet_name=\"Sheet1\")\n- self.writer_read.save()\n- self.bio_read.seek(0)\n-\n- def time_read_excel(self, engine):\n- read_excel(self.bio_read)\n+ self.df = _generate_dataframe()\n \n def time_write_excel(self, engine):\n- bio_write = BytesIO()\n- bio_write.seek(0)\n- writer_write = ExcelWriter(bio_write, engine=engine)\n- self.df.to_excel(writer_write, sheet_name=\"Sheet1\")\n- writer_write.save()\n+ bio = BytesIO()\n+ bio.seek(0)\n+ writer = ExcelWriter(bio, engine=engine)\n+ self.df.to_excel(writer, sheet_name=\"Sheet1\")\n+ writer.save()\n+\n+\n+class ReadExcel:\n+\n+ params = [\"xlrd\", \"openpyxl\", \"odf\"]\n+ param_names = [\"engine\"]\n+ fname_excel = \"spreadsheet.xlsx\"\n+ fname_odf = \"spreadsheet.ods\"\n+\n+ def _create_odf(self):\n+ doc = OpenDocumentSpreadsheet()\n+ table = Table(name=\"Table1\")\n+ for row in self.df.values:\n+ tr = TableRow()\n+ for val in row:\n+ tc = TableCell(valuetype=\"string\")\n+ tc.addElement(P(text=val))\n+ tr.addElement(tc)\n+ table.addElement(tr)\n+\n+ doc.spreadsheet.addElement(table)\n+ doc.save(self.fname_odf)\n+\n+ def setup_cache(self):\n+ self.df = _generate_dataframe()\n+\n+ self.df.to_excel(self.fname_excel, sheet_name=\"Sheet1\")\n+ self._create_odf()\n+\n+ def time_read_excel(self, engine):\n+ fname = self.fname_odf if engine == \"odf\" else self.fname_excel\n+ read_excel(fname, engine=engine)\n \n \n from ..pandas_vb_common import setup # noqa: F401 isort:skip\n", "issue": "ASV Benchmark for read_excel\nRight now we don't benchmark performance of read_excel across the various engines (xlrd, openpyxl and odfpy) so would gladly accept contributions to improve benchmark coverage on those\n", "before_files": [{"content": "from io import BytesIO\n\nimport numpy as np\n\nfrom pandas import DataFrame, ExcelWriter, date_range, read_excel\nimport pandas.util.testing as tm\n\n\nclass Excel:\n\n params = [\"openpyxl\", \"xlsxwriter\", \"xlwt\"]\n param_names = [\"engine\"]\n\n def setup(self, engine):\n N = 2000\n C = 5\n self.df = DataFrame(\n np.random.randn(N, C),\n columns=[\"float{}\".format(i) for i in range(C)],\n index=date_range(\"20000101\", periods=N, freq=\"H\"),\n )\n self.df[\"object\"] = tm.makeStringIndex(N)\n self.bio_read = BytesIO()\n self.writer_read = 
ExcelWriter(self.bio_read, engine=engine)\n self.df.to_excel(self.writer_read, sheet_name=\"Sheet1\")\n self.writer_read.save()\n self.bio_read.seek(0)\n\n def time_read_excel(self, engine):\n read_excel(self.bio_read)\n\n def time_write_excel(self, engine):\n bio_write = BytesIO()\n bio_write.seek(0)\n writer_write = ExcelWriter(bio_write, engine=engine)\n self.df.to_excel(writer_write, sheet_name=\"Sheet1\")\n writer_write.save()\n\n\nfrom ..pandas_vb_common import setup # noqa: F401 isort:skip\n", "path": "asv_bench/benchmarks/io/excel.py"}], "after_files": [{"content": "from io import BytesIO\n\nimport numpy as np\nfrom odf.opendocument import OpenDocumentSpreadsheet\nfrom odf.table import Table, TableCell, TableRow\nfrom odf.text import P\n\nfrom pandas import DataFrame, ExcelWriter, date_range, read_excel\nimport pandas.util.testing as tm\n\n\ndef _generate_dataframe():\n N = 2000\n C = 5\n df = DataFrame(\n np.random.randn(N, C),\n columns=[\"float{}\".format(i) for i in range(C)],\n index=date_range(\"20000101\", periods=N, freq=\"H\"),\n )\n df[\"object\"] = tm.makeStringIndex(N)\n return df\n\n\nclass WriteExcel:\n\n params = [\"openpyxl\", \"xlsxwriter\", \"xlwt\"]\n param_names = [\"engine\"]\n\n def setup(self, engine):\n self.df = _generate_dataframe()\n\n def time_write_excel(self, engine):\n bio = BytesIO()\n bio.seek(0)\n writer = ExcelWriter(bio, engine=engine)\n self.df.to_excel(writer, sheet_name=\"Sheet1\")\n writer.save()\n\n\nclass ReadExcel:\n\n params = [\"xlrd\", \"openpyxl\", \"odf\"]\n param_names = [\"engine\"]\n fname_excel = \"spreadsheet.xlsx\"\n fname_odf = \"spreadsheet.ods\"\n\n def _create_odf(self):\n doc = OpenDocumentSpreadsheet()\n table = Table(name=\"Table1\")\n for row in self.df.values:\n tr = TableRow()\n for val in row:\n tc = TableCell(valuetype=\"string\")\n tc.addElement(P(text=val))\n tr.addElement(tc)\n table.addElement(tr)\n\n doc.spreadsheet.addElement(table)\n doc.save(self.fname_odf)\n\n def setup_cache(self):\n self.df = _generate_dataframe()\n\n self.df.to_excel(self.fname_excel, sheet_name=\"Sheet1\")\n self._create_odf()\n\n def time_read_excel(self, engine):\n fname = self.fname_odf if engine == \"odf\" else self.fname_excel\n read_excel(fname, engine=engine)\n\n\nfrom ..pandas_vb_common import setup # noqa: F401 isort:skip\n", "path": "asv_bench/benchmarks/io/excel.py"}]} | 683 | 812 |
gh_patches_debug_27454 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4530 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CloudFormation EC2Credentials.py does not evaluate whether the UserData is base64 encoded
**Describe the issue**
The check EC2Credentials.py, which scans the UserData of a CloudFormation template for secrets, does not work when the user data is already base64 encoded.
The check does not return the correct result, and the processing time is very slow.
**Examples**
```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
Resource0:
Type: AWS::EC2::Instance
Properties:
ImageId: ami-04169656fea786776
UserData: IyEgL2Jpbi9iYXNoCnN1ZG8gYXB0LWdldCB1cGRhdGUKc3VkbyBhcHQtZ2V0IGluc3RhbGwgLXkgYXBhY2hlMgpzdWRvIHN5c3RlbWN0bCBzdGFydCBhcGFjaGUyCnN1ZG8gc3lzdGVtY3RsIGVuYWJsZSBhcGFjaGUyCmV4cG9ydCBBV1NfQUNDRVNTX0tFWV9JRApleHBvcnQgQVdTX0FDQ0VTU19LRVlfSUQ9QUtJQUlPU0ZPRE5ON0VYQU1QTEUKZXhwb3J0IEFXU19TRUNSRVRfQUNDRVNTX0tFWT13SmFsclhVdG5GRU1JL0s3TURFTkcvYlB4UmZpQ1lFWEFNUExFS0VZCmV4cG9ydCBBV1NfREVGQVVMVF9SRUdJT049dXMtd2VzdC0yCmVjaG8gIjxoMT5EZXBsb3llZCB2aWEgVGVycmFmb3JtPC9oMT4iIHwgc3VkbyB0ZWUgL3Zhci93d3cvaHRtbC9pbmRleC5odG1s
```
**Version (please complete the following information):**
- Checkov Version [e.g. 22]
- >2.0.0
**Additional context**
Fix implemented here by trying to decode the base64 string.
https://github.com/j2clerck/checkov/commit/af9abd724520ec21ec5510dfc5db2ef83fd9e6dc
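
The core of that commit is roughly the following pattern (a sketch based on the description above; the helper name is illustrative):

```python
from base64 import b64decode


def user_data_to_text(user_data):
    # If UserData is already base64 encoded, decode it first so the secret
    # scan runs over readable text instead of a long base64 blob.
    try:
        return b64decode(user_data).decode()
    except Exception:
        # Fall back to the raw value; YAML/JSON templates may pass the
        # user data through as a plain string or a nested object.
        return str(user_data)
```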
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/cloudformation/checks/resource/aws/EC2Credentials.py`
Content:
```
1 from typing import List
2
3 from checkov.common.models.enums import CheckResult, CheckCategories
4 from checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck
5 from checkov.common.util.secrets import get_secrets_from_string
6
7
8 class EC2Credentials(BaseResourceCheck):
9 def __init__(self):
10 name = "Ensure no hard-coded secrets exist in EC2 user data"
11 id = "CKV_AWS_46"
12 supported_resources = ['AWS::EC2::Instance']
13 categories = [CheckCategories.SECRETS]
14 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
15
16 def scan_resource_conf(self, conf):
17 if 'Properties' in conf.keys():
18 if 'UserData' in conf['Properties'].keys():
19 user_data = conf['Properties']['UserData']
20 # Cast to string as user data object can look slightly different depending
21 # on Yaml or JSON CF Templates and how the B64 conversion is done.
22 user_data_str = str(user_data)
23 if isinstance(user_data_str, str):
24 secrets = get_secrets_from_string(str(user_data_str))
25 if secrets:
26 for idx, secret in enumerate(secrets):
27 conf[f'{self.id}_secret_{idx}'] = secret
28 return CheckResult.FAILED
29 return CheckResult.PASSED
30
31 def get_evaluated_keys(self) -> List[str]:
32 return ["Properties/UserData"]
33
34
35 check = EC2Credentials()
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/cloudformation/checks/resource/aws/EC2Credentials.py b/checkov/cloudformation/checks/resource/aws/EC2Credentials.py
--- a/checkov/cloudformation/checks/resource/aws/EC2Credentials.py
+++ b/checkov/cloudformation/checks/resource/aws/EC2Credentials.py
@@ -1,4 +1,5 @@
from typing import List
+from base64 import b64decode
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck
@@ -17,9 +18,15 @@
if 'Properties' in conf.keys():
if 'UserData' in conf['Properties'].keys():
user_data = conf['Properties']['UserData']
- # Cast to string as user data object can look slightly different depending
- # on Yaml or JSON CF Templates and how the B64 conversion is done.
- user_data_str = str(user_data)
+ # In some case, the UserData might be a base64 encoded string which will slow down (3 minutes)
+ # the get_secrets_from_string function.
+ try:
+ user_data_str = b64decode(user_data).decode()
+ except Exception:
+ # Cast to string as user data object can look slightly different depending
+ # on Yaml or JSON CF Templates and how the B64 conversion is done.
+ user_data_str = str(user_data)
+
if isinstance(user_data_str, str):
secrets = get_secrets_from_string(str(user_data_str))
if secrets:
| {"golden_diff": "diff --git a/checkov/cloudformation/checks/resource/aws/EC2Credentials.py b/checkov/cloudformation/checks/resource/aws/EC2Credentials.py\n--- a/checkov/cloudformation/checks/resource/aws/EC2Credentials.py\n+++ b/checkov/cloudformation/checks/resource/aws/EC2Credentials.py\n@@ -1,4 +1,5 @@\n from typing import List\n+from base64 import b64decode\n \n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck\n@@ -17,9 +18,15 @@\n if 'Properties' in conf.keys():\n if 'UserData' in conf['Properties'].keys():\n user_data = conf['Properties']['UserData']\n- # Cast to string as user data object can look slightly different depending\n- # on Yaml or JSON CF Templates and how the B64 conversion is done.\n- user_data_str = str(user_data)\n+ # In some case, the UserData might be a base64 encoded string which will slow down (3 minutes)\n+ # the get_secrets_from_string function.\n+ try:\n+ user_data_str = b64decode(user_data).decode()\n+ except Exception:\n+ # Cast to string as user data object can look slightly different depending\n+ # on Yaml or JSON CF Templates and how the B64 conversion is done.\n+ user_data_str = str(user_data)\n+\n if isinstance(user_data_str, str):\n secrets = get_secrets_from_string(str(user_data_str))\n if secrets:\n", "issue": "CloudFormation EC2Credentials.py does evaluate if the UserData is base64 encoded\n**Describe the issue**\r\nThe check EC2Credentials.py which checks the UserData of a CloudFormation template against secrets does not work if the userdata are already encoded in base64. \r\nThe check is not returning the correct result and the processing time is very slow.\r\n\r\n**Examples**\r\n```yaml\r\nAWSTemplateFormatVersion: \"2010-09-09\"\r\nResources:\r\n Resource0:\r\n Type: AWS::EC2::Instance\r\n Properties:\r\n ImageId: ami-04169656fea786776\r\n UserData: IyEgL2Jpbi9iYXNoCnN1ZG8gYXB0LWdldCB1cGRhdGUKc3VkbyBhcHQtZ2V0IGluc3RhbGwgLXkgYXBhY2hlMgpzdWRvIHN5c3RlbWN0bCBzdGFydCBhcGFjaGUyCnN1ZG8gc3lzdGVtY3RsIGVuYWJsZSBhcGFjaGUyCmV4cG9ydCBBV1NfQUNDRVNTX0tFWV9JRApleHBvcnQgQVdTX0FDQ0VTU19LRVlfSUQ9QUtJQUlPU0ZPRE5ON0VYQU1QTEUKZXhwb3J0IEFXU19TRUNSRVRfQUNDRVNTX0tFWT13SmFsclhVdG5GRU1JL0s3TURFTkcvYlB4UmZpQ1lFWEFNUExFS0VZCmV4cG9ydCBBV1NfREVGQVVMVF9SRUdJT049dXMtd2VzdC0yCmVjaG8gIjxoMT5EZXBsb3llZCB2aWEgVGVycmFmb3JtPC9oMT4iIHwgc3VkbyB0ZWUgL3Zhci93d3cvaHRtbC9pbmRleC5odG1s\r\n\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version [e.g. 
22]\r\n - >2.0.0\r\n\r\n**Additional context**\r\nFix implemented here by trying to decode the base64 string.\r\nhttps://github.com/j2clerck/checkov/commit/af9abd724520ec21ec5510dfc5db2ef83fd9e6dc\r\n\n", "before_files": [{"content": "from typing import List\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.util.secrets import get_secrets_from_string\n\n\nclass EC2Credentials(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure no hard-coded secrets exist in EC2 user data\"\n id = \"CKV_AWS_46\"\n supported_resources = ['AWS::EC2::Instance']\n categories = [CheckCategories.SECRETS]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'Properties' in conf.keys():\n if 'UserData' in conf['Properties'].keys():\n user_data = conf['Properties']['UserData']\n # Cast to string as user data object can look slightly different depending\n # on Yaml or JSON CF Templates and how the B64 conversion is done.\n user_data_str = str(user_data)\n if isinstance(user_data_str, str):\n secrets = get_secrets_from_string(str(user_data_str))\n if secrets:\n for idx, secret in enumerate(secrets):\n conf[f'{self.id}_secret_{idx}'] = secret\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n def get_evaluated_keys(self) -> List[str]:\n return [\"Properties/UserData\"]\n\n\ncheck = EC2Credentials()\n", "path": "checkov/cloudformation/checks/resource/aws/EC2Credentials.py"}], "after_files": [{"content": "from typing import List\nfrom base64 import b64decode\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.util.secrets import get_secrets_from_string\n\n\nclass EC2Credentials(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure no hard-coded secrets exist in EC2 user data\"\n id = \"CKV_AWS_46\"\n supported_resources = ['AWS::EC2::Instance']\n categories = [CheckCategories.SECRETS]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'Properties' in conf.keys():\n if 'UserData' in conf['Properties'].keys():\n user_data = conf['Properties']['UserData']\n # In some case, the UserData might be a base64 encoded string which will slow down (3 minutes)\n # the get_secrets_from_string function.\n try:\n user_data_str = b64decode(user_data).decode()\n except Exception:\n # Cast to string as user data object can look slightly different depending\n # on Yaml or JSON CF Templates and how the B64 conversion is done.\n user_data_str = str(user_data)\n\n if isinstance(user_data_str, str):\n secrets = get_secrets_from_string(str(user_data_str))\n if secrets:\n for idx, secret in enumerate(secrets):\n conf[f'{self.id}_secret_{idx}'] = secret\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n def get_evaluated_keys(self) -> List[str]:\n return [\"Properties/UserData\"]\n\n\ncheck = EC2Credentials()\n", "path": "checkov/cloudformation/checks/resource/aws/EC2Credentials.py"}]} | 1,242 | 342 |
gh_patches_debug_17541 | rasdani/github-patches | git_diff | liqd__a4-opin-284 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Filter draft projects from all listings
Projects that are in draft mode should be removed from most listings (a minimal queryset sketch follows the list):
- [x] latest projects for all users
- [x] other projects for all users
- [x] organisation page for non-initiators
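
A minimal sketch of the queryset change this implies (Django-style; the helper name and signature are assumptions, while `project_set`, `initiators` and `is_draft` come from the existing models):

```python
def visible_projects(organisation, user):
    # Initiators still see their drafts; everyone else only sees
    # published projects.
    projects = organisation.project_set.all()
    if user not in organisation.initiators.all():
        projects = projects.filter(is_draft=False)
    return projects
```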
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/organisations/views.py`
Content:
```
1 from django.views import generic
2
3 from . import models
4
5
6 class OrganisationDetailView(generic.DetailView):
7 model = models.Organisation
8
9
10 class OrganisationListView(generic.ListView):
11 model = models.Organisation
12 paginate_by = 10
13
```
Path: `euth/projects/models.py`
Content:
```
1 from django.conf import settings
2 from django.db import models
3 from django.utils import functional, timezone
4
5 from contrib.transforms import html_transforms
6 from euth.contrib import base_models, validators
7 from euth.organisations import models as org_models
8
9
10 class ProjectManager(models.Manager):
11
12 def get_by_natural_key(self, name):
13 return self.get(name=name)
14
15 def featured(self):
16 return self.filter(is_draft=False).order_by('-created')[:8]
17
18
19 class Project(base_models.TimeStampedModel):
20 slug = models.SlugField(max_length=512, unique=True)
21 name = models.CharField(max_length=512)
22 organisation = models.ForeignKey(
23 org_models.Organisation, on_delete=models.CASCADE)
24 description = models.CharField(max_length=1024)
25 information = models.TextField()
26 is_public = models.BooleanField(default=True)
27 is_draft = models.BooleanField(default=True)
28 image = models.ImageField(
29 upload_to='projects/backgrounds',
30 blank=True,
31 validators=[validators.validate_hero_image])
32 participants = models.ManyToManyField(
33 settings.AUTH_USER_MODEL,
34 related_name='project_participant',
35 blank=True,
36 )
37 moderators = models.ManyToManyField(
38 settings.AUTH_USER_MODEL,
39 related_name='project_moderator'
40 )
41
42 objects = ProjectManager()
43
44 def __str__(self):
45 return self.name
46
47 def save(self, *args, **kwargs):
48 self.information = html_transforms.clean_html_field(
49 self.information)
50 super(Project, self).save(*args, **kwargs)
51
52 def get_absolute_url(self):
53 from django.core.urlresolvers import reverse
54 return reverse('project-detail', args=[str(self.slug)])
55
56 def has_member(self, user):
57 """
58 Everybody is member of all public projects and private projects can
59 be joined as moderator or participant.
60 """
61 return (
62 (user.is_authenticated() and self.is_public)
63 or (user in self.participants.all())
64 or (user in self.moderators.all())
65 )
66
67 @functional.cached_property
68 def other_projects(self):
69 other_projects = self.organisation.project_set.all().exclude(
70 slug=self.slug)
71 return other_projects
72
73 @functional.cached_property
74 def is_private(self):
75 return not self.is_public
76
77 @functional.cached_property
78 def active_phase(self):
79 from euth.phases import models as phase_models
80 return phase_models.Phase.objects\
81 .filter(module__project=self)\
82 .active_phases()\
83 .first()
84
85 @property
86 def days_left(self):
87 if self.active_phase:
88 today = timezone.now().replace(hour=0, minute=0, second=0)
89 time_delta = self.active_phase.end_date - today
90 return time_delta.days
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/organisations/views.py b/euth/organisations/views.py
--- a/euth/organisations/views.py
+++ b/euth/organisations/views.py
@@ -6,6 +6,12 @@
class OrganisationDetailView(generic.DetailView):
model = models.Organisation
+ def visible_projects(self):
+ if self.request.user in self.object.initiators.all():
+ return self.object.project_set.all()
+ else:
+ return self.object.project_set.filter(is_draft=False)
+
class OrganisationListView(generic.ListView):
model = models.Organisation
diff --git a/euth/projects/models.py b/euth/projects/models.py
--- a/euth/projects/models.py
+++ b/euth/projects/models.py
@@ -66,8 +66,8 @@
@functional.cached_property
def other_projects(self):
- other_projects = self.organisation.project_set.all().exclude(
- slug=self.slug)
+ other_projects = self.organisation.project_set\
+ .filter(is_draft=False).exclude(slug=self.slug)
return other_projects
@functional.cached_property
| {"golden_diff": "diff --git a/euth/organisations/views.py b/euth/organisations/views.py\n--- a/euth/organisations/views.py\n+++ b/euth/organisations/views.py\n@@ -6,6 +6,12 @@\n class OrganisationDetailView(generic.DetailView):\n model = models.Organisation\n \n+ def visible_projects(self):\n+ if self.request.user in self.object.initiators.all():\n+ return self.object.project_set.all()\n+ else:\n+ return self.object.project_set.filter(is_draft=False)\n+\n \n class OrganisationListView(generic.ListView):\n model = models.Organisation\ndiff --git a/euth/projects/models.py b/euth/projects/models.py\n--- a/euth/projects/models.py\n+++ b/euth/projects/models.py\n@@ -66,8 +66,8 @@\n \n @functional.cached_property\n def other_projects(self):\n- other_projects = self.organisation.project_set.all().exclude(\n- slug=self.slug)\n+ other_projects = self.organisation.project_set\\\n+ .filter(is_draft=False).exclude(slug=self.slug)\n return other_projects\n \n @functional.cached_property\n", "issue": "Filter draft projects from all listings\nProjects that are in draft mode should be removed from most listings:\n- [x] latest projects for all users\n- [x] other projects for all users\n- [x] organisation page for non-initiators\n\n", "before_files": [{"content": "from django.views import generic\n\nfrom . import models\n\n\nclass OrganisationDetailView(generic.DetailView):\n model = models.Organisation\n\n\nclass OrganisationListView(generic.ListView):\n model = models.Organisation\n paginate_by = 10\n", "path": "euth/organisations/views.py"}, {"content": "from django.conf import settings\nfrom django.db import models\nfrom django.utils import functional, timezone\n\nfrom contrib.transforms import html_transforms\nfrom euth.contrib import base_models, validators\nfrom euth.organisations import models as org_models\n\n\nclass ProjectManager(models.Manager):\n\n def get_by_natural_key(self, name):\n return self.get(name=name)\n\n def featured(self):\n return self.filter(is_draft=False).order_by('-created')[:8]\n\n\nclass Project(base_models.TimeStampedModel):\n slug = models.SlugField(max_length=512, unique=True)\n name = models.CharField(max_length=512)\n organisation = models.ForeignKey(\n org_models.Organisation, on_delete=models.CASCADE)\n description = models.CharField(max_length=1024)\n information = models.TextField()\n is_public = models.BooleanField(default=True)\n is_draft = models.BooleanField(default=True)\n image = models.ImageField(\n upload_to='projects/backgrounds',\n blank=True,\n validators=[validators.validate_hero_image])\n participants = models.ManyToManyField(\n settings.AUTH_USER_MODEL,\n related_name='project_participant',\n blank=True,\n )\n moderators = models.ManyToManyField(\n settings.AUTH_USER_MODEL,\n related_name='project_moderator'\n )\n\n objects = ProjectManager()\n\n def __str__(self):\n return self.name\n\n def save(self, *args, **kwargs):\n self.information = html_transforms.clean_html_field(\n self.information)\n super(Project, self).save(*args, **kwargs)\n\n def get_absolute_url(self):\n from django.core.urlresolvers import reverse\n return reverse('project-detail', args=[str(self.slug)])\n\n def has_member(self, user):\n \"\"\"\n Everybody is member of all public projects and private projects can\n be joined as moderator or participant.\n \"\"\"\n return (\n (user.is_authenticated() and self.is_public)\n or (user in self.participants.all())\n or (user in self.moderators.all())\n )\n\n @functional.cached_property\n def other_projects(self):\n other_projects = 
self.organisation.project_set.all().exclude(\n slug=self.slug)\n return other_projects\n\n @functional.cached_property\n def is_private(self):\n return not self.is_public\n\n @functional.cached_property\n def active_phase(self):\n from euth.phases import models as phase_models\n return phase_models.Phase.objects\\\n .filter(module__project=self)\\\n .active_phases()\\\n .first()\n\n @property\n def days_left(self):\n if self.active_phase:\n today = timezone.now().replace(hour=0, minute=0, second=0)\n time_delta = self.active_phase.end_date - today\n return time_delta.days\n", "path": "euth/projects/models.py"}], "after_files": [{"content": "from django.views import generic\n\nfrom . import models\n\n\nclass OrganisationDetailView(generic.DetailView):\n model = models.Organisation\n\n def visible_projects(self):\n if self.request.user in self.object.initiators.all():\n return self.object.project_set.all()\n else:\n return self.object.project_set.filter(is_draft=False)\n\n\nclass OrganisationListView(generic.ListView):\n model = models.Organisation\n paginate_by = 10\n", "path": "euth/organisations/views.py"}, {"content": "from django.conf import settings\nfrom django.db import models\nfrom django.utils import functional, timezone\n\nfrom contrib.transforms import html_transforms\nfrom euth.contrib import base_models, validators\nfrom euth.organisations import models as org_models\n\n\nclass ProjectManager(models.Manager):\n\n def get_by_natural_key(self, name):\n return self.get(name=name)\n\n def featured(self):\n return self.filter(is_draft=False).order_by('-created')[:8]\n\n\nclass Project(base_models.TimeStampedModel):\n slug = models.SlugField(max_length=512, unique=True)\n name = models.CharField(max_length=512)\n organisation = models.ForeignKey(\n org_models.Organisation, on_delete=models.CASCADE)\n description = models.CharField(max_length=1024)\n information = models.TextField()\n is_public = models.BooleanField(default=True)\n is_draft = models.BooleanField(default=True)\n image = models.ImageField(\n upload_to='projects/backgrounds',\n blank=True,\n validators=[validators.validate_hero_image])\n participants = models.ManyToManyField(\n settings.AUTH_USER_MODEL,\n related_name='project_participant',\n blank=True,\n )\n moderators = models.ManyToManyField(\n settings.AUTH_USER_MODEL,\n related_name='project_moderator'\n )\n\n objects = ProjectManager()\n\n def __str__(self):\n return self.name\n\n def save(self, *args, **kwargs):\n self.information = html_transforms.clean_html_field(\n self.information)\n super(Project, self).save(*args, **kwargs)\n\n def get_absolute_url(self):\n from django.core.urlresolvers import reverse\n return reverse('project-detail', args=[str(self.slug)])\n\n def has_member(self, user):\n \"\"\"\n Everybody is member of all public projects and private projects can\n be joined as moderator or participant.\n \"\"\"\n return (\n (user.is_authenticated() and self.is_public)\n or (user in self.participants.all())\n or (user in self.moderators.all())\n )\n\n @functional.cached_property\n def other_projects(self):\n other_projects = self.organisation.project_set\\\n .filter(is_draft=False).exclude(slug=self.slug)\n return other_projects\n\n @functional.cached_property\n def is_private(self):\n return not self.is_public\n\n @functional.cached_property\n def active_phase(self):\n from euth.phases import models as phase_models\n return phase_models.Phase.objects\\\n .filter(module__project=self)\\\n .active_phases()\\\n .first()\n\n @property\n def 
days_left(self):\n if self.active_phase:\n today = timezone.now().replace(hour=0, minute=0, second=0)\n time_delta = self.active_phase.end_date - today\n return time_delta.days\n", "path": "euth/projects/models.py"}]} | 1,167 | 243 |
gh_patches_debug_28597 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-1354 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NH: legislator scrape is failing
State: NH
Legislator scraper needs some attention
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/nh/legislators.py`
Content:
```
1 import re
2
3 from billy.scrape.legislators import LegislatorScraper, Legislator
4 from openstates.utils import LXMLMixin
5
6
7 class NHLegislatorScraper(LegislatorScraper, LXMLMixin):
8 jurisdiction = 'nh'
9 latest_only = True
10 members_url = 'http://www.gencourt.state.nh.us/downloads/Members.txt'
11
12 chamber_map = {'H': 'lower', 'S': 'upper'}
13 party_map = {
14 'D': 'Democratic',
15 'R': 'Republican',
16 'I': 'Independent',
17 'L': 'Libertarian',
18 }
19
20 def _get_photo(self, url, chamber):
21 """Attempts to find a portrait in the given legislator profile."""
22 doc = self.lxmlize(url)
23
24 if chamber == 'upper':
25 src = doc.xpath('//div[@id="page_content"]//img[contains(@src, '
26 '"images/senators") or contains(@src, "Senator")]/@src')
27 elif chamber == 'lower':
28 src = doc.xpath('//img[contains(@src, "images/memberpics")]/@src')
29
30 if src and 'nophoto' not in src[0]:
31 photo_url = src[0]
32 else:
33 photo_url = ''
34
35 return photo_url
36
37 def _parse_legislator(self, row, chamber, term):
38 # Capture legislator vitals.
39 first_name = row['FirstName']
40 middle_name = row['MiddleName']
41 last_name = row['lastname']
42 full_name = '{} {} {}'.format(first_name, middle_name, last_name)
43 full_name = re.sub(r'[\s]{2,}', ' ', full_name)
44
45 district = '{} {}'.format(row['county'], int(row['District'])).strip()
46 party = self.party_map[row['party']]
47 email = row['EMailAddress1']
48
49 legislator = Legislator(term, chamber, district, full_name,
50 first_name=first_name, last_name=last_name,
51 middle_name=middle_name, party=party,
52 email=email)
53
54 # Capture legislator office contact information.
55 district_address = '{}\n{}\n{}, {} {}'.format(row['street'],
56 row['address2'], row['city'], row['state'], row['zipcode']).strip()
57
58 legislator.add_office('district', 'Home Address',
59 address=district_address)
60
61 # Retrieve legislator portrait.
62 #profile_url = None
63 #if chamber == 'upper':
64 # profile_url = 'http://www.gencourt.state.nh.us/Senate/members/webpages/district{:02d}.aspx'.format(row['District'])
65 #elif chamber == 'lower':
66 # profile_url = 'http://www.gencourt.state.nh.us/house/members/member.aspx?member={}'.format(row['employee_no'])
67
68 #if profile_url:
69 # legislator['photo_url'] = self._get_photo(profile_url, chamber)
70 # legislator.add_source(profile_url)
71
72 return legislator
73
74 def _parse_members_txt(self):
75 lines = self.get(self.members_url).text.splitlines()
76
77 header = lines[0].split('\t')
78
79 for line in lines[1:]:
80 yield dict(zip(header, line.split('\t')))
81
82 def scrape(self, chamber, term):
83 for row in self._parse_members_txt():
84 if self.chamber_map[row['LegislativeBody']] == chamber:
85 leg = self._parse_legislator(row, chamber, term)
86 leg.add_source(self.members_url)
87 self.save_legislator(leg)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/nh/legislators.py b/openstates/nh/legislators.py
--- a/openstates/nh/legislators.py
+++ b/openstates/nh/legislators.py
@@ -38,13 +38,13 @@
# Capture legislator vitals.
first_name = row['FirstName']
middle_name = row['MiddleName']
- last_name = row['lastname']
+ last_name = row['LastName']
full_name = '{} {} {}'.format(first_name, middle_name, last_name)
full_name = re.sub(r'[\s]{2,}', ' ', full_name)
- district = '{} {}'.format(row['county'], int(row['District'])).strip()
- party = self.party_map[row['party']]
- email = row['EMailAddress1']
+ district = '{} {}'.format(row['County'], int(row['District'])).strip()
+ party = self.party_map[row['party'].upper()]
+ email = row['WorkEmail']
legislator = Legislator(term, chamber, district, full_name,
first_name=first_name, last_name=last_name,
@@ -52,8 +52,8 @@
email=email)
# Capture legislator office contact information.
- district_address = '{}\n{}\n{}, {} {}'.format(row['street'],
- row['address2'], row['city'], row['state'], row['zipcode']).strip()
+ district_address = '{}\n{}\n{}, {} {}'.format(row['Address'],
+ row['address2'], row['city'], row['State'], row['Zipcode']).strip()
legislator.add_office('district', 'Home Address',
address=district_address)
| {"golden_diff": "diff --git a/openstates/nh/legislators.py b/openstates/nh/legislators.py\n--- a/openstates/nh/legislators.py\n+++ b/openstates/nh/legislators.py\n@@ -38,13 +38,13 @@\n # Capture legislator vitals.\n first_name = row['FirstName']\n middle_name = row['MiddleName']\n- last_name = row['lastname']\n+ last_name = row['LastName']\n full_name = '{} {} {}'.format(first_name, middle_name, last_name)\n full_name = re.sub(r'[\\s]{2,}', ' ', full_name)\n \n- district = '{} {}'.format(row['county'], int(row['District'])).strip()\n- party = self.party_map[row['party']]\n- email = row['EMailAddress1']\n+ district = '{} {}'.format(row['County'], int(row['District'])).strip()\n+ party = self.party_map[row['party'].upper()]\n+ email = row['WorkEmail']\n \n legislator = Legislator(term, chamber, district, full_name,\n first_name=first_name, last_name=last_name,\n@@ -52,8 +52,8 @@\n email=email)\n \n # Capture legislator office contact information.\n- district_address = '{}\\n{}\\n{}, {} {}'.format(row['street'],\n- row['address2'], row['city'], row['state'], row['zipcode']).strip()\n+ district_address = '{}\\n{}\\n{}, {} {}'.format(row['Address'],\n+ row['address2'], row['city'], row['State'], row['Zipcode']).strip()\n \n legislator.add_office('district', 'Home Address',\n address=district_address)\n", "issue": "NH: legislators scrape is failing\nState: NH\r\n\r\nLegislator scraper needs some attention\n", "before_files": [{"content": "import re\n\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\nfrom openstates.utils import LXMLMixin\n\n\nclass NHLegislatorScraper(LegislatorScraper, LXMLMixin):\n jurisdiction = 'nh'\n latest_only = True\n members_url = 'http://www.gencourt.state.nh.us/downloads/Members.txt'\n\n chamber_map = {'H': 'lower', 'S': 'upper'}\n party_map = {\n 'D': 'Democratic',\n 'R': 'Republican',\n 'I': 'Independent',\n 'L': 'Libertarian',\n }\n\n def _get_photo(self, url, chamber):\n \"\"\"Attempts to find a portrait in the given legislator profile.\"\"\"\n doc = self.lxmlize(url)\n\n if chamber == 'upper':\n src = doc.xpath('//div[@id=\"page_content\"]//img[contains(@src, '\n '\"images/senators\") or contains(@src, \"Senator\")]/@src')\n elif chamber == 'lower':\n src = doc.xpath('//img[contains(@src, \"images/memberpics\")]/@src')\n\n if src and 'nophoto' not in src[0]:\n photo_url = src[0]\n else:\n photo_url = ''\n\n return photo_url\n\n def _parse_legislator(self, row, chamber, term):\n # Capture legislator vitals.\n first_name = row['FirstName']\n middle_name = row['MiddleName']\n last_name = row['lastname']\n full_name = '{} {} {}'.format(first_name, middle_name, last_name)\n full_name = re.sub(r'[\\s]{2,}', ' ', full_name)\n\n district = '{} {}'.format(row['county'], int(row['District'])).strip()\n party = self.party_map[row['party']]\n email = row['EMailAddress1']\n\n legislator = Legislator(term, chamber, district, full_name,\n first_name=first_name, last_name=last_name,\n middle_name=middle_name, party=party,\n email=email)\n\n # Capture legislator office contact information.\n district_address = '{}\\n{}\\n{}, {} {}'.format(row['street'],\n row['address2'], row['city'], row['state'], row['zipcode']).strip()\n\n legislator.add_office('district', 'Home Address',\n address=district_address)\n\n # Retrieve legislator portrait.\n #profile_url = None\n #if chamber == 'upper':\n # profile_url = 'http://www.gencourt.state.nh.us/Senate/members/webpages/district{:02d}.aspx'.format(row['District'])\n #elif chamber == 'lower':\n # profile_url = 
'http://www.gencourt.state.nh.us/house/members/member.aspx?member={}'.format(row['employee_no'])\n\n #if profile_url:\n # legislator['photo_url'] = self._get_photo(profile_url, chamber)\n # legislator.add_source(profile_url)\n\n return legislator\n\n def _parse_members_txt(self):\n lines = self.get(self.members_url).text.splitlines()\n\n header = lines[0].split('\\t')\n\n for line in lines[1:]:\n yield dict(zip(header, line.split('\\t')))\n\n def scrape(self, chamber, term):\n for row in self._parse_members_txt():\n if self.chamber_map[row['LegislativeBody']] == chamber:\n leg = self._parse_legislator(row, chamber, term)\n leg.add_source(self.members_url)\n self.save_legislator(leg)\n", "path": "openstates/nh/legislators.py"}], "after_files": [{"content": "import re\n\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\nfrom openstates.utils import LXMLMixin\n\n\nclass NHLegislatorScraper(LegislatorScraper, LXMLMixin):\n jurisdiction = 'nh'\n latest_only = True\n members_url = 'http://www.gencourt.state.nh.us/downloads/Members.txt'\n\n chamber_map = {'H': 'lower', 'S': 'upper'}\n party_map = {\n 'D': 'Democratic',\n 'R': 'Republican',\n 'I': 'Independent',\n 'L': 'Libertarian',\n }\n\n def _get_photo(self, url, chamber):\n \"\"\"Attempts to find a portrait in the given legislator profile.\"\"\"\n doc = self.lxmlize(url)\n\n if chamber == 'upper':\n src = doc.xpath('//div[@id=\"page_content\"]//img[contains(@src, '\n '\"images/senators\") or contains(@src, \"Senator\")]/@src')\n elif chamber == 'lower':\n src = doc.xpath('//img[contains(@src, \"images/memberpics\")]/@src')\n\n if src and 'nophoto' not in src[0]:\n photo_url = src[0]\n else:\n photo_url = ''\n\n return photo_url\n\n def _parse_legislator(self, row, chamber, term):\n # Capture legislator vitals.\n first_name = row['FirstName']\n middle_name = row['MiddleName']\n last_name = row['LastName']\n full_name = '{} {} {}'.format(first_name, middle_name, last_name)\n full_name = re.sub(r'[\\s]{2,}', ' ', full_name)\n\n district = '{} {}'.format(row['County'], int(row['District'])).strip()\n party = self.party_map[row['party'].upper()]\n email = row['WorkEmail']\n\n legislator = Legislator(term, chamber, district, full_name,\n first_name=first_name, last_name=last_name,\n middle_name=middle_name, party=party,\n email=email)\n\n # Capture legislator office contact information.\n district_address = '{}\\n{}\\n{}, {} {}'.format(row['Address'],\n row['address2'], row['city'], row['State'], row['Zipcode']).strip()\n\n legislator.add_office('district', 'Home Address',\n address=district_address)\n\n # Retrieve legislator portrait.\n #profile_url = None\n #if chamber == 'upper':\n # profile_url = 'http://www.gencourt.state.nh.us/Senate/members/webpages/district{:02d}.aspx'.format(row['District'])\n #elif chamber == 'lower':\n # profile_url = 'http://www.gencourt.state.nh.us/house/members/member.aspx?member={}'.format(row['employee_no'])\n\n #if profile_url:\n # legislator['photo_url'] = self._get_photo(profile_url, chamber)\n # legislator.add_source(profile_url)\n\n return legislator\n\n def _parse_members_txt(self):\n lines = self.get(self.members_url).text.splitlines()\n\n header = lines[0].split('\\t')\n\n for line in lines[1:]:\n yield dict(zip(header, line.split('\\t')))\n\n def scrape(self, chamber, term):\n for row in self._parse_members_txt():\n if self.chamber_map[row['LegislativeBody']] == chamber:\n leg = self._parse_legislator(row, chamber, term)\n leg.add_source(self.members_url)\n self.save_legislator(leg)\n", 
"path": "openstates/nh/legislators.py"}]} | 1,234 | 378 |
gh_patches_debug_2764 | rasdani/github-patches | git_diff | ivy-llc__ivy-18208 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
expand
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/manipulation.py`
Content:
```
1 # global
2 import ivy
3 from ivy.functional.frontends.paddle.func_wrapper import (
4 to_ivy_arrays_and_back,
5 )
6 from ivy.func_wrapper import (
7 with_unsupported_dtypes,
8 with_supported_dtypes,
9 )
10
11
12 @to_ivy_arrays_and_back
13 def reshape(x, shape):
14 return ivy.reshape(x, shape)
15
16
17 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
18 @to_ivy_arrays_and_back
19 def abs(x, name=None):
20 return ivy.abs(x)
21
22
23 absolute = abs
24
25
26 @to_ivy_arrays_and_back
27 def stack(x, axis=0, name=None):
28 return ivy.stack(x, axis=axis)
29
30
31 @with_unsupported_dtypes({"2.5.0 and below": ("int8", "int16")}, "paddle")
32 @to_ivy_arrays_and_back
33 def concat(x, axis, name=None):
34 return ivy.concat(x, axis=axis)
35
36
37 @with_unsupported_dtypes(
38 {"2.5.0 and below": ("int8", "uint8", "int16", "float16")},
39 "paddle",
40 )
41 @to_ivy_arrays_and_back
42 def tile(x, repeat_times, name=None):
43 return ivy.tile(x, repeats=repeat_times)
44
45
46 @with_unsupported_dtypes(
47 {"2.5.0 and below": ("int16", "complex64", "complex128")},
48 "paddle",
49 )
50 @to_ivy_arrays_and_back
51 def split(x, num_or_sections, axis=0, name=None):
52 return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)
53
54
55 @with_unsupported_dtypes(
56 {"2.5.0 and below": ("float16", "bfloat16", "int8", "int16")},
57 "paddle",
58 )
59 @to_ivy_arrays_and_back
60 def squeeze(x, axis=None, name=None):
61 return ivy.squeeze(x, axis=axis)
62
63
64 @with_supported_dtypes(
65 {
66 "2.5.0 and below": (
67 "bool",
68 "float16",
69 "float32",
70 "float64",
71 "int32",
72 "int64",
73 "uint8",
74 )
75 },
76 "paddle",
77 )
78 @to_ivy_arrays_and_back
79 def cast(x, dtype):
80 return ivy.astype(x, dtype)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/manipulation.py b/ivy/functional/frontends/paddle/tensor/manipulation.py
--- a/ivy/functional/frontends/paddle/tensor/manipulation.py
+++ b/ivy/functional/frontends/paddle/tensor/manipulation.py
@@ -61,6 +61,15 @@
return ivy.squeeze(x, axis=axis)
+@with_supported_dtypes(
+ {"2.5.0 and below": ("bool", "float32", "float64", "int32", "int64")},
+ "paddle",
+)
+@to_ivy_arrays_and_back
+def expand(x, shape, name=None):
+ return ivy.expand(x, shape)
+
+
@with_supported_dtypes(
{
"2.5.0 and below": (
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/manipulation.py b/ivy/functional/frontends/paddle/tensor/manipulation.py\n--- a/ivy/functional/frontends/paddle/tensor/manipulation.py\n+++ b/ivy/functional/frontends/paddle/tensor/manipulation.py\n@@ -61,6 +61,15 @@\n return ivy.squeeze(x, axis=axis)\n \n \n+@with_supported_dtypes(\n+ {\"2.5.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n+ \"paddle\",\n+)\n+@to_ivy_arrays_and_back\n+def expand(x, shape, name=None):\n+ return ivy.expand(x, shape)\n+\n+\n @with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n", "issue": "expand\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\nfrom ivy.func_wrapper import (\n with_unsupported_dtypes,\n with_supported_dtypes,\n)\n\n\n@to_ivy_arrays_and_back\ndef reshape(x, shape):\n return ivy.reshape(x, shape)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\nabsolute = abs\n\n\n@to_ivy_arrays_and_back\ndef stack(x, axis=0, name=None):\n return ivy.stack(x, axis=axis)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef concat(x, axis, name=None):\n return ivy.concat(x, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef tile(x, repeat_times, name=None):\n return ivy.tile(x, repeats=repeat_times)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int16\", \"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef split(x, num_or_sections, axis=0, name=None):\n return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"float16\", \"bfloat16\", \"int8\", \"int16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef squeeze(x, axis=None, name=None):\n return ivy.squeeze(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n \"bool\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"uint8\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cast(x, dtype):\n return ivy.astype(x, dtype)\n", "path": "ivy/functional/frontends/paddle/tensor/manipulation.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\nfrom ivy.func_wrapper import (\n with_unsupported_dtypes,\n with_supported_dtypes,\n)\n\n\n@to_ivy_arrays_and_back\ndef reshape(x, shape):\n return ivy.reshape(x, shape)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\nabsolute = abs\n\n\n@to_ivy_arrays_and_back\ndef stack(x, axis=0, name=None):\n return ivy.stack(x, axis=axis)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef concat(x, axis, name=None):\n return ivy.concat(x, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef tile(x, repeat_times, name=None):\n return ivy.tile(x, repeats=repeat_times)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int16\", \"complex64\", \"complex128\")},\n 
\"paddle\",\n)\n@to_ivy_arrays_and_back\ndef split(x, num_or_sections, axis=0, name=None):\n return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"float16\", \"bfloat16\", \"int8\", \"int16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef squeeze(x, axis=None, name=None):\n return ivy.squeeze(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef expand(x, shape, name=None):\n return ivy.expand(x, shape)\n\n\n@with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n \"bool\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"uint8\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cast(x, dtype):\n return ivy.astype(x, dtype)\n", "path": "ivy/functional/frontends/paddle/tensor/manipulation.py"}]} | 971 | 192 |
gh_patches_debug_27372 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-4250 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Different mean values of image normalization for HRNet
I find that different HRNet experiment configs use different mean values for image normalization.
For example,
[fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py](https://github.com/open-mmlab/mmdetection/blob/master/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py) uses the `[102.9801, 115.9465, 122.7717]` as the mean value,
while [fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py](https://github.com/open-mmlab/mmdetection/blob/master/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py) uses the `[103.530, 116.280, 123.675]` as the mean value.
Which one is correct?
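
For reference (not an authoritative answer): `[102.9801, 115.9465, 122.7717]` is the Detectron/Caffe2 pixel mean, while `[103.530, 116.280, 123.675]` is the Caffe-style ImageNet BGR mean used by most other configs, so the two files are following different conventions rather than one being obviously wrong. A unified setting consistent with the values used in the accepted patch for this issue would look like:

```python
# Illustrative snippet; std and to_rgb must match the convention of the
# pretrained HRNet weights that are actually loaded.
img_norm_cfg = dict(
    mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False)
```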
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py`
Content:
```
1 _base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py'
2 img_norm_cfg = dict(
3 mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
4 train_pipeline = [
5 dict(type='LoadImageFromFile'),
6 dict(type='LoadAnnotations', with_bbox=True),
7 dict(
8 type='Resize',
9 img_scale=[(1333, 640), (1333, 800)],
10 multiscale_mode='value',
11 keep_ratio=True),
12 dict(type='RandomFlip', flip_ratio=0.5),
13 dict(type='Normalize', **img_norm_cfg),
14 dict(type='Pad', size_divisor=32),
15 dict(type='DefaultFormatBundle'),
16 dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
17 ]
18 test_pipeline = [
19 dict(type='LoadImageFromFile'),
20 dict(
21 type='MultiScaleFlipAug',
22 img_scale=(1333, 800),
23 flip=False,
24 transforms=[
25 dict(type='Resize', keep_ratio=True),
26 dict(type='RandomFlip'),
27 dict(type='Normalize', **img_norm_cfg),
28 dict(type='Pad', size_divisor=32),
29 dict(type='ImageToTensor', keys=['img']),
30 dict(type='Collect', keys=['img']),
31 ])
32 ]
33 data = dict(
34 train=dict(pipeline=train_pipeline),
35 val=dict(pipeline=test_pipeline),
36 test=dict(pipeline=test_pipeline))
37 # learning policy
38 lr_config = dict(step=[16, 22])
39 total_epochs = 24
40
```
Path: `configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py`
Content:
```
1 _base_ = '../fcos/fcos_r50_caffe_fpn_gn-head_4x4_1x_coco.py'
2 model = dict(
3 pretrained='open-mmlab://msra/hrnetv2_w32',
4 backbone=dict(
5 _delete_=True,
6 type='HRNet',
7 extra=dict(
8 stage1=dict(
9 num_modules=1,
10 num_branches=1,
11 block='BOTTLENECK',
12 num_blocks=(4, ),
13 num_channels=(64, )),
14 stage2=dict(
15 num_modules=1,
16 num_branches=2,
17 block='BASIC',
18 num_blocks=(4, 4),
19 num_channels=(32, 64)),
20 stage3=dict(
21 num_modules=4,
22 num_branches=3,
23 block='BASIC',
24 num_blocks=(4, 4, 4),
25 num_channels=(32, 64, 128)),
26 stage4=dict(
27 num_modules=3,
28 num_branches=4,
29 block='BASIC',
30 num_blocks=(4, 4, 4, 4),
31 num_channels=(32, 64, 128, 256)))),
32 neck=dict(
33 _delete_=True,
34 type='HRFPN',
35 in_channels=[32, 64, 128, 256],
36 out_channels=256,
37 stride=2,
38 num_outs=5))
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py b/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py
--- a/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py
+++ b/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py
@@ -36,3 +36,34 @@
out_channels=256,
stride=2,
num_outs=5))
+img_norm_cfg = dict(
+ mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False)
+train_pipeline = [
+ dict(type='LoadImageFromFile'),
+ dict(type='LoadAnnotations', with_bbox=True),
+ dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
+ dict(type='RandomFlip', flip_ratio=0.5),
+ dict(type='Normalize', **img_norm_cfg),
+ dict(type='Pad', size_divisor=32),
+ dict(type='DefaultFormatBundle'),
+ dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
+]
+test_pipeline = [
+ dict(type='LoadImageFromFile'),
+ dict(
+ type='MultiScaleFlipAug',
+ img_scale=(1333, 800),
+ flip=False,
+ transforms=[
+ dict(type='Resize', keep_ratio=True),
+ dict(type='RandomFlip'),
+ dict(type='Normalize', **img_norm_cfg),
+ dict(type='Pad', size_divisor=32),
+ dict(type='ImageToTensor', keys=['img']),
+ dict(type='Collect', keys=['img']),
+ ])
+]
+data = dict(
+ train=dict(pipeline=train_pipeline),
+ val=dict(pipeline=test_pipeline),
+ test=dict(pipeline=test_pipeline))
diff --git a/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py b/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py
--- a/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py
+++ b/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py
@@ -1,6 +1,6 @@
_base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py'
img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
+ mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
| {"golden_diff": "diff --git a/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py b/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py\n--- a/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py\n+++ b/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py\n@@ -36,3 +36,34 @@\n out_channels=256,\n stride=2,\n num_outs=5))\n+img_norm_cfg = dict(\n+ mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False)\n+train_pipeline = [\n+ dict(type='LoadImageFromFile'),\n+ dict(type='LoadAnnotations', with_bbox=True),\n+ dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),\n+ dict(type='RandomFlip', flip_ratio=0.5),\n+ dict(type='Normalize', **img_norm_cfg),\n+ dict(type='Pad', size_divisor=32),\n+ dict(type='DefaultFormatBundle'),\n+ dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n+]\n+test_pipeline = [\n+ dict(type='LoadImageFromFile'),\n+ dict(\n+ type='MultiScaleFlipAug',\n+ img_scale=(1333, 800),\n+ flip=False,\n+ transforms=[\n+ dict(type='Resize', keep_ratio=True),\n+ dict(type='RandomFlip'),\n+ dict(type='Normalize', **img_norm_cfg),\n+ dict(type='Pad', size_divisor=32),\n+ dict(type='ImageToTensor', keys=['img']),\n+ dict(type='Collect', keys=['img']),\n+ ])\n+]\n+data = dict(\n+ train=dict(pipeline=train_pipeline),\n+ val=dict(pipeline=test_pipeline),\n+ test=dict(pipeline=test_pipeline))\ndiff --git a/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py b/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py\n--- a/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py\n+++ b/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py\n@@ -1,6 +1,6 @@\n _base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py'\n img_norm_cfg = dict(\n- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)\n+ mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False)\n train_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True),\n", "issue": "Different mean values of image normalization for HRNet\nI find the different configs of HRNet experiments use different mean values for image normalization.\r\n\r\nFor example,\r\n[fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py](https://github.com/open-mmlab/mmdetection/blob/master/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py) uses the `[102.9801, 115.9465, 122.7717]` as the mean value,\r\nwhile [fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py](https://github.com/open-mmlab/mmdetection/blob/master/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py) uses the `[103.530, 116.280, 123.675]` as the mean value.\r\n\r\nWhich one is correct?\n", "before_files": [{"content": "_base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py'\nimg_norm_cfg = dict(\n mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='Resize',\n img_scale=[(1333, 640), (1333, 800)],\n multiscale_mode='value',\n keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n 
dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\ndata = dict(\n train=dict(pipeline=train_pipeline),\n val=dict(pipeline=test_pipeline),\n test=dict(pipeline=test_pipeline))\n# learning policy\nlr_config = dict(step=[16, 22])\ntotal_epochs = 24\n", "path": "configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py"}, {"content": "_base_ = '../fcos/fcos_r50_caffe_fpn_gn-head_4x4_1x_coco.py'\nmodel = dict(\n pretrained='open-mmlab://msra/hrnetv2_w32',\n backbone=dict(\n _delete_=True,\n type='HRNet',\n extra=dict(\n stage1=dict(\n num_modules=1,\n num_branches=1,\n block='BOTTLENECK',\n num_blocks=(4, ),\n num_channels=(64, )),\n stage2=dict(\n num_modules=1,\n num_branches=2,\n block='BASIC',\n num_blocks=(4, 4),\n num_channels=(32, 64)),\n stage3=dict(\n num_modules=4,\n num_branches=3,\n block='BASIC',\n num_blocks=(4, 4, 4),\n num_channels=(32, 64, 128)),\n stage4=dict(\n num_modules=3,\n num_branches=4,\n block='BASIC',\n num_blocks=(4, 4, 4, 4),\n num_channels=(32, 64, 128, 256)))),\n neck=dict(\n _delete_=True,\n type='HRFPN',\n in_channels=[32, 64, 128, 256],\n out_channels=256,\n stride=2,\n num_outs=5))\n", "path": "configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py"}], "after_files": [{"content": "_base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py'\nimg_norm_cfg = dict(\n mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='Resize',\n img_scale=[(1333, 640), (1333, 800)],\n multiscale_mode='value',\n keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\ndata = dict(\n train=dict(pipeline=train_pipeline),\n val=dict(pipeline=test_pipeline),\n test=dict(pipeline=test_pipeline))\n# learning policy\nlr_config = dict(step=[16, 22])\ntotal_epochs = 24\n", "path": "configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py"}, {"content": "_base_ = '../fcos/fcos_r50_caffe_fpn_gn-head_4x4_1x_coco.py'\nmodel = dict(\n pretrained='open-mmlab://msra/hrnetv2_w32',\n backbone=dict(\n _delete_=True,\n type='HRNet',\n extra=dict(\n stage1=dict(\n num_modules=1,\n num_branches=1,\n block='BOTTLENECK',\n num_blocks=(4, ),\n num_channels=(64, )),\n stage2=dict(\n num_modules=1,\n num_branches=2,\n block='BASIC',\n num_blocks=(4, 4),\n num_channels=(32, 64)),\n stage3=dict(\n num_modules=4,\n num_branches=3,\n block='BASIC',\n num_blocks=(4, 4, 4),\n num_channels=(32, 64, 128)),\n stage4=dict(\n num_modules=3,\n num_branches=4,\n block='BASIC',\n num_blocks=(4, 4, 4, 4),\n num_channels=(32, 64, 128, 256)))),\n neck=dict(\n _delete_=True,\n type='HRFPN',\n in_channels=[32, 64, 128, 256],\n out_channels=256,\n stride=2,\n num_outs=5))\nimg_norm_cfg = dict(\n mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False)\ntrain_pipeline = [\n 
dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\ndata = dict(\n train=dict(pipeline=train_pipeline),\n val=dict(pipeline=test_pipeline),\n test=dict(pipeline=test_pipeline))\n", "path": "configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py"}]} | 1,438 | 829 |
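
An editorial aside on why the mean/std values above matter: mmdetection's `Normalize` pipeline step is a plain per-channel affine transform, so whichever `img_norm_cfg` a config inherits determines the pixel statistics fed to the pretrained HRNet weights. A minimal sketch of that arithmetic, using the values from the corrected config (numpy only; the function name is illustrative, not part of the patch):

```python
import numpy as np

# Per-channel values from the corrected img_norm_cfg (BGR order, to_rgb=False).
MEAN = np.array([103.53, 116.28, 123.675], dtype=np.float32)
STD = np.array([57.375, 57.12, 58.395], dtype=np.float32)

def normalize_bgr(img: np.ndarray) -> np.ndarray:
    """Apply the same (img - mean) / std step the 'Normalize' stage performs."""
    return (img.astype(np.float32) - MEAN) / STD
```
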
gh_patches_debug_45268 | rasdani/github-patches | git_diff | dj-stripe__dj-stripe-1001 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Webhook missing: invoice.payment_action_required
I can't find any reference to ```invoice.payment_action_required``` anywhere; however, this is a fairly critical piece of the off-session SCA workflow. Is it simply a case of adding it to the signals list?
https://stripe.com/docs/api/events/types#event_types-invoice.payment_action_required
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `djstripe/signals.py`
Content:
```
1 """
2 signals are sent for each event Stripe sends to the app
3
4 Stripe docs for Webhooks: https://stripe.com/docs/webhooks
5 """
6 from django.db.models.signals import pre_delete
7 from django.dispatch import Signal, receiver
8
9 from . import settings as djstripe_settings
10
11 webhook_processing_error = Signal(providing_args=["data", "exception"])
12
13 # A signal for each Event type. See https://stripe.com/docs/api/events/types
14
15 WEBHOOK_SIGNALS = dict(
16 [
17 (hook, Signal(providing_args=["event"]))
18 for hook in [
19 "account.updated",
20 "account.application.authorized",
21 "account.application.deauthorized",
22 "account.external_account.created",
23 "account.external_account.deleted",
24 "account.external_account.updated",
25 "application_fee.created",
26 "application_fee.refunded",
27 "application_fee.refund.updated",
28 "balance.available",
29 "charge.captured",
30 "charge.expired",
31 "charge.failed",
32 "charge.pending",
33 "charge.refunded",
34 "charge.succeeded",
35 "charge.updated",
36 "charge.dispute.closed",
37 "charge.dispute.created",
38 "charge.dispute.funds_reinstated",
39 "charge.dispute.funds_withdrawn",
40 "charge.dispute.updated",
41 "charge.refund.updated",
42 "checkout.session.completed",
43 "coupon.created",
44 "coupon.deleted",
45 "coupon.updated",
46 "customer.created",
47 "customer.deleted",
48 "customer.updated",
49 "customer.discount.created",
50 "customer.discount.deleted",
51 "customer.discount.updated",
52 "customer.source.created",
53 "customer.source.deleted",
54 "customer.source.expiring",
55 "customer.source.updated",
56 "customer.subscription.created",
57 "customer.subscription.deleted",
58 "customer.subscription.trial_will_end",
59 "customer.subscription.updated",
60 "file.created",
61 "invoice.created",
62 "invoice.deleted",
63 "invoice.finalized",
64 "invoice.marked_uncollectible",
65 "invoice.payment_failed",
66 "invoice.payment_succeeded",
67 "invoice.sent",
68 "invoice.upcoming",
69 "invoice.updated",
70 "invoice.voided",
71 "invoiceitem.created",
72 "invoiceitem.deleted",
73 "invoiceitem.updated",
74 "issuing_authorization.created",
75 "issuing_authorization.request",
76 "issuing_authorization.updated",
77 "issuing_card.created",
78 "issuing_card.updated",
79 "issuing_cardholder.created",
80 "issuing_cardholder.updated",
81 "issuing_dispute.created",
82 "issuing_dispute.updated",
83 "issuing_settlement.created",
84 "issuing_settlement.updated",
85 "issuing_transaction.created",
86 "issuing_transaction.updated",
87 "order.created",
88 "order.payment_failed",
89 "order.payment_succeeded",
90 "order.updated",
91 "order_return.created",
92 "payment_intent.amount_capturable_updated",
93 "payment_intent.created",
94 "payment_intent.payment_failed",
95 "payment_intent.succeeded",
96 "payment_method.attached",
97 "payment_method.card_automatically_updated",
98 "payment_method.detached",
99 "payment_method.updated",
100 "payout.canceled",
101 "payout.created",
102 "payout.failed",
103 "payout.paid",
104 "payout.updated",
105 "plan.created",
106 "plan.deleted",
107 "plan.updated",
108 "product.created",
109 "product.deleted",
110 "product.updated",
111 "recipient.created",
112 "recipient.deleted",
113 "recipient.updated",
114 "reporting.report_run.failed",
115 "reporting.report_run.succeeded",
116 "reporting.report_type.updated",
117 "review.closed",
118 "review.opened",
119 "setup_intent.created",
120 "setup_intent.setup_failed",
121 "setup_intent.succeeded",
122 "sigma.scheduled_query_run.created",
123 "sku.created",
124 "sku.deleted",
125 "sku.updated",
126 "source.canceled",
127 "source.chargeable",
128 "source.failed",
129 "source.mandate_notification",
130 "source.refund_attributes_required",
131 "source.transaction.created",
132 "source.transaction.updated",
133 "topup.canceled",
134 "topup.created",
135 "topup.failed",
136 "topup.reversed",
137 "topup.succeeded",
138 "transfer.created",
139 "transfer.reversed",
140 "transfer.updated",
141 # deprecated (no longer in events_types list) - TODO can be deleted?
142 "checkout_beta.session_succeeded",
143 "issuer_fraud_record.created",
144 "payment_intent.requires_capture",
145 "subscription_schedule.canceled",
146 "subscription_schedule.completed",
147 "subscription_schedule.created",
148 "subscription_schedule.released",
149 "subscription_schedule.updated",
150 # special case? - TODO can be deleted?
151 "ping",
152 ]
153 ]
154 )
155
156
157 @receiver(pre_delete, sender=djstripe_settings.get_subscriber_model_string())
158 def on_delete_subscriber_purge_customer(instance=None, **kwargs):
159 """ Purge associated customers when the subscriber is deleted. """
160 for customer in instance.djstripe_customers.all():
161 customer.purge()
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/djstripe/signals.py b/djstripe/signals.py
--- a/djstripe/signals.py
+++ b/djstripe/signals.py
@@ -16,36 +16,41 @@
[
(hook, Signal(providing_args=["event"]))
for hook in [
- "account.updated",
+ # Update this by copy-pasting the "enabled_events" enum values from
+ # https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json
"account.application.authorized",
"account.application.deauthorized",
"account.external_account.created",
"account.external_account.deleted",
"account.external_account.updated",
+ "account.updated",
"application_fee.created",
- "application_fee.refunded",
"application_fee.refund.updated",
+ "application_fee.refunded",
"balance.available",
+ "capability.updated",
"charge.captured",
- "charge.expired",
- "charge.failed",
- "charge.pending",
- "charge.refunded",
- "charge.succeeded",
- "charge.updated",
"charge.dispute.closed",
"charge.dispute.created",
"charge.dispute.funds_reinstated",
"charge.dispute.funds_withdrawn",
"charge.dispute.updated",
+ "charge.expired",
+ "charge.failed",
+ "charge.pending",
"charge.refund.updated",
+ "charge.refunded",
+ "charge.succeeded",
+ "charge.updated",
"checkout.session.completed",
"coupon.created",
"coupon.deleted",
"coupon.updated",
+ "credit_note.created",
+ "credit_note.updated",
+ "credit_note.voided",
"customer.created",
"customer.deleted",
- "customer.updated",
"customer.discount.created",
"customer.discount.deleted",
"customer.discount.updated",
@@ -57,11 +62,16 @@
"customer.subscription.deleted",
"customer.subscription.trial_will_end",
"customer.subscription.updated",
+ "customer.tax_id.created",
+ "customer.tax_id.deleted",
+ "customer.tax_id.updated",
+ "customer.updated",
"file.created",
"invoice.created",
"invoice.deleted",
"invoice.finalized",
"invoice.marked_uncollectible",
+ "invoice.payment_action_required",
"invoice.payment_failed",
"invoice.payment_succeeded",
"invoice.sent",
@@ -102,12 +112,17 @@
"payout.failed",
"payout.paid",
"payout.updated",
+ "person.created",
+ "person.deleted",
+ "person.updated",
"plan.created",
"plan.deleted",
"plan.updated",
"product.created",
"product.deleted",
"product.updated",
+ "radar.early_fraud_warning.created",
+ "radar.early_fraud_warning.updated",
"recipient.created",
"recipient.deleted",
"recipient.updated",
@@ -130,12 +145,23 @@
"source.refund_attributes_required",
"source.transaction.created",
"source.transaction.updated",
+ "subscription_schedule.aborted",
+ "subscription_schedule.canceled",
+ "subscription_schedule.completed",
+ "subscription_schedule.created",
+ "subscription_schedule.expiring",
+ "subscription_schedule.released",
+ "subscription_schedule.updated",
+ "tax_rate.created",
+ "tax_rate.updated",
"topup.canceled",
"topup.created",
"topup.failed",
"topup.reversed",
"topup.succeeded",
"transfer.created",
+ "transfer.failed",
+ "transfer.paid",
"transfer.reversed",
"transfer.updated",
# deprecated (no longer in events_types list) - TODO can be deleted?
| {"golden_diff": "diff --git a/djstripe/signals.py b/djstripe/signals.py\n--- a/djstripe/signals.py\n+++ b/djstripe/signals.py\n@@ -16,36 +16,41 @@\n [\n (hook, Signal(providing_args=[\"event\"]))\n for hook in [\n- \"account.updated\",\n+ # Update this by copy-pasting the \"enabled_events\" enum values from\n+ # https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json\n \"account.application.authorized\",\n \"account.application.deauthorized\",\n \"account.external_account.created\",\n \"account.external_account.deleted\",\n \"account.external_account.updated\",\n+ \"account.updated\",\n \"application_fee.created\",\n- \"application_fee.refunded\",\n \"application_fee.refund.updated\",\n+ \"application_fee.refunded\",\n \"balance.available\",\n+ \"capability.updated\",\n \"charge.captured\",\n- \"charge.expired\",\n- \"charge.failed\",\n- \"charge.pending\",\n- \"charge.refunded\",\n- \"charge.succeeded\",\n- \"charge.updated\",\n \"charge.dispute.closed\",\n \"charge.dispute.created\",\n \"charge.dispute.funds_reinstated\",\n \"charge.dispute.funds_withdrawn\",\n \"charge.dispute.updated\",\n+ \"charge.expired\",\n+ \"charge.failed\",\n+ \"charge.pending\",\n \"charge.refund.updated\",\n+ \"charge.refunded\",\n+ \"charge.succeeded\",\n+ \"charge.updated\",\n \"checkout.session.completed\",\n \"coupon.created\",\n \"coupon.deleted\",\n \"coupon.updated\",\n+ \"credit_note.created\",\n+ \"credit_note.updated\",\n+ \"credit_note.voided\",\n \"customer.created\",\n \"customer.deleted\",\n- \"customer.updated\",\n \"customer.discount.created\",\n \"customer.discount.deleted\",\n \"customer.discount.updated\",\n@@ -57,11 +62,16 @@\n \"customer.subscription.deleted\",\n \"customer.subscription.trial_will_end\",\n \"customer.subscription.updated\",\n+ \"customer.tax_id.created\",\n+ \"customer.tax_id.deleted\",\n+ \"customer.tax_id.updated\",\n+ \"customer.updated\",\n \"file.created\",\n \"invoice.created\",\n \"invoice.deleted\",\n \"invoice.finalized\",\n \"invoice.marked_uncollectible\",\n+ \"invoice.payment_action_required\",\n \"invoice.payment_failed\",\n \"invoice.payment_succeeded\",\n \"invoice.sent\",\n@@ -102,12 +112,17 @@\n \"payout.failed\",\n \"payout.paid\",\n \"payout.updated\",\n+ \"person.created\",\n+ \"person.deleted\",\n+ \"person.updated\",\n \"plan.created\",\n \"plan.deleted\",\n \"plan.updated\",\n \"product.created\",\n \"product.deleted\",\n \"product.updated\",\n+ \"radar.early_fraud_warning.created\",\n+ \"radar.early_fraud_warning.updated\",\n \"recipient.created\",\n \"recipient.deleted\",\n \"recipient.updated\",\n@@ -130,12 +145,23 @@\n \"source.refund_attributes_required\",\n \"source.transaction.created\",\n \"source.transaction.updated\",\n+ \"subscription_schedule.aborted\",\n+ \"subscription_schedule.canceled\",\n+ \"subscription_schedule.completed\",\n+ \"subscription_schedule.created\",\n+ \"subscription_schedule.expiring\",\n+ \"subscription_schedule.released\",\n+ \"subscription_schedule.updated\",\n+ \"tax_rate.created\",\n+ \"tax_rate.updated\",\n \"topup.canceled\",\n \"topup.created\",\n \"topup.failed\",\n \"topup.reversed\",\n \"topup.succeeded\",\n \"transfer.created\",\n+ \"transfer.failed\",\n+ \"transfer.paid\",\n \"transfer.reversed\",\n \"transfer.updated\",\n # deprecated (no longer in events_types list) - TODO can be deleted?\n", "issue": "Webhook missing: invoice.payment_action_required\nI can't find any reference to ```invoice.payment_action_required``` anywhere, however this is a fairly critical piece of off-session 
SCA workflow. Is it simply a case of adding it to the signals list?\r\n\r\nhttps://stripe.com/docs/api/events/types#event_types-invoice.payment_action_required\n", "before_files": [{"content": "\"\"\"\nsignals are sent for each event Stripe sends to the app\n\nStripe docs for Webhooks: https://stripe.com/docs/webhooks\n\"\"\"\nfrom django.db.models.signals import pre_delete\nfrom django.dispatch import Signal, receiver\n\nfrom . import settings as djstripe_settings\n\nwebhook_processing_error = Signal(providing_args=[\"data\", \"exception\"])\n\n# A signal for each Event type. See https://stripe.com/docs/api/events/types\n\nWEBHOOK_SIGNALS = dict(\n [\n (hook, Signal(providing_args=[\"event\"]))\n for hook in [\n \"account.updated\",\n \"account.application.authorized\",\n \"account.application.deauthorized\",\n \"account.external_account.created\",\n \"account.external_account.deleted\",\n \"account.external_account.updated\",\n \"application_fee.created\",\n \"application_fee.refunded\",\n \"application_fee.refund.updated\",\n \"balance.available\",\n \"charge.captured\",\n \"charge.expired\",\n \"charge.failed\",\n \"charge.pending\",\n \"charge.refunded\",\n \"charge.succeeded\",\n \"charge.updated\",\n \"charge.dispute.closed\",\n \"charge.dispute.created\",\n \"charge.dispute.funds_reinstated\",\n \"charge.dispute.funds_withdrawn\",\n \"charge.dispute.updated\",\n \"charge.refund.updated\",\n \"checkout.session.completed\",\n \"coupon.created\",\n \"coupon.deleted\",\n \"coupon.updated\",\n \"customer.created\",\n \"customer.deleted\",\n \"customer.updated\",\n \"customer.discount.created\",\n \"customer.discount.deleted\",\n \"customer.discount.updated\",\n \"customer.source.created\",\n \"customer.source.deleted\",\n \"customer.source.expiring\",\n \"customer.source.updated\",\n \"customer.subscription.created\",\n \"customer.subscription.deleted\",\n \"customer.subscription.trial_will_end\",\n \"customer.subscription.updated\",\n \"file.created\",\n \"invoice.created\",\n \"invoice.deleted\",\n \"invoice.finalized\",\n \"invoice.marked_uncollectible\",\n \"invoice.payment_failed\",\n \"invoice.payment_succeeded\",\n \"invoice.sent\",\n \"invoice.upcoming\",\n \"invoice.updated\",\n \"invoice.voided\",\n \"invoiceitem.created\",\n \"invoiceitem.deleted\",\n \"invoiceitem.updated\",\n \"issuing_authorization.created\",\n \"issuing_authorization.request\",\n \"issuing_authorization.updated\",\n \"issuing_card.created\",\n \"issuing_card.updated\",\n \"issuing_cardholder.created\",\n \"issuing_cardholder.updated\",\n \"issuing_dispute.created\",\n \"issuing_dispute.updated\",\n \"issuing_settlement.created\",\n \"issuing_settlement.updated\",\n \"issuing_transaction.created\",\n \"issuing_transaction.updated\",\n \"order.created\",\n \"order.payment_failed\",\n \"order.payment_succeeded\",\n \"order.updated\",\n \"order_return.created\",\n \"payment_intent.amount_capturable_updated\",\n \"payment_intent.created\",\n \"payment_intent.payment_failed\",\n \"payment_intent.succeeded\",\n \"payment_method.attached\",\n \"payment_method.card_automatically_updated\",\n \"payment_method.detached\",\n \"payment_method.updated\",\n \"payout.canceled\",\n \"payout.created\",\n \"payout.failed\",\n \"payout.paid\",\n \"payout.updated\",\n \"plan.created\",\n \"plan.deleted\",\n \"plan.updated\",\n \"product.created\",\n \"product.deleted\",\n \"product.updated\",\n \"recipient.created\",\n \"recipient.deleted\",\n \"recipient.updated\",\n \"reporting.report_run.failed\",\n 
\"reporting.report_run.succeeded\",\n \"reporting.report_type.updated\",\n \"review.closed\",\n \"review.opened\",\n \"setup_intent.created\",\n \"setup_intent.setup_failed\",\n \"setup_intent.succeeded\",\n \"sigma.scheduled_query_run.created\",\n \"sku.created\",\n \"sku.deleted\",\n \"sku.updated\",\n \"source.canceled\",\n \"source.chargeable\",\n \"source.failed\",\n \"source.mandate_notification\",\n \"source.refund_attributes_required\",\n \"source.transaction.created\",\n \"source.transaction.updated\",\n \"topup.canceled\",\n \"topup.created\",\n \"topup.failed\",\n \"topup.reversed\",\n \"topup.succeeded\",\n \"transfer.created\",\n \"transfer.reversed\",\n \"transfer.updated\",\n # deprecated (no longer in events_types list) - TODO can be deleted?\n \"checkout_beta.session_succeeded\",\n \"issuer_fraud_record.created\",\n \"payment_intent.requires_capture\",\n \"subscription_schedule.canceled\",\n \"subscription_schedule.completed\",\n \"subscription_schedule.created\",\n \"subscription_schedule.released\",\n \"subscription_schedule.updated\",\n # special case? - TODO can be deleted?\n \"ping\",\n ]\n ]\n)\n\n\n@receiver(pre_delete, sender=djstripe_settings.get_subscriber_model_string())\ndef on_delete_subscriber_purge_customer(instance=None, **kwargs):\n \"\"\" Purge associated customers when the subscriber is deleted. \"\"\"\n for customer in instance.djstripe_customers.all():\n customer.purge()\n", "path": "djstripe/signals.py"}], "after_files": [{"content": "\"\"\"\nsignals are sent for each event Stripe sends to the app\n\nStripe docs for Webhooks: https://stripe.com/docs/webhooks\n\"\"\"\nfrom django.db.models.signals import pre_delete\nfrom django.dispatch import Signal, receiver\n\nfrom . import settings as djstripe_settings\n\nwebhook_processing_error = Signal(providing_args=[\"data\", \"exception\"])\n\n# A signal for each Event type. 
See https://stripe.com/docs/api/events/types\n\nWEBHOOK_SIGNALS = dict(\n [\n (hook, Signal(providing_args=[\"event\"]))\n for hook in [\n # Update this by copy-pasting the \"enabled_events\" enum values from\n # https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json\n \"account.application.authorized\",\n \"account.application.deauthorized\",\n \"account.external_account.created\",\n \"account.external_account.deleted\",\n \"account.external_account.updated\",\n \"account.updated\",\n \"application_fee.created\",\n \"application_fee.refund.updated\",\n \"application_fee.refunded\",\n \"balance.available\",\n \"capability.updated\",\n \"charge.captured\",\n \"charge.dispute.closed\",\n \"charge.dispute.created\",\n \"charge.dispute.funds_reinstated\",\n \"charge.dispute.funds_withdrawn\",\n \"charge.dispute.updated\",\n \"charge.expired\",\n \"charge.failed\",\n \"charge.pending\",\n \"charge.refund.updated\",\n \"charge.refunded\",\n \"charge.succeeded\",\n \"charge.updated\",\n \"checkout.session.completed\",\n \"coupon.created\",\n \"coupon.deleted\",\n \"coupon.updated\",\n \"credit_note.created\",\n \"credit_note.updated\",\n \"credit_note.voided\",\n \"customer.created\",\n \"customer.deleted\",\n \"customer.discount.created\",\n \"customer.discount.deleted\",\n \"customer.discount.updated\",\n \"customer.source.created\",\n \"customer.source.deleted\",\n \"customer.source.expiring\",\n \"customer.source.updated\",\n \"customer.subscription.created\",\n \"customer.subscription.deleted\",\n \"customer.subscription.trial_will_end\",\n \"customer.subscription.updated\",\n \"customer.tax_id.created\",\n \"customer.tax_id.deleted\",\n \"customer.tax_id.updated\",\n \"customer.updated\",\n \"file.created\",\n \"invoice.created\",\n \"invoice.deleted\",\n \"invoice.finalized\",\n \"invoice.marked_uncollectible\",\n \"invoice.payment_action_required\",\n \"invoice.payment_failed\",\n \"invoice.payment_succeeded\",\n \"invoice.sent\",\n \"invoice.upcoming\",\n \"invoice.updated\",\n \"invoice.voided\",\n \"invoiceitem.created\",\n \"invoiceitem.deleted\",\n \"invoiceitem.updated\",\n \"issuing_authorization.created\",\n \"issuing_authorization.request\",\n \"issuing_authorization.updated\",\n \"issuing_card.created\",\n \"issuing_card.updated\",\n \"issuing_cardholder.created\",\n \"issuing_cardholder.updated\",\n \"issuing_dispute.created\",\n \"issuing_dispute.updated\",\n \"issuing_settlement.created\",\n \"issuing_settlement.updated\",\n \"issuing_transaction.created\",\n \"issuing_transaction.updated\",\n \"order.created\",\n \"order.payment_failed\",\n \"order.payment_succeeded\",\n \"order.updated\",\n \"order_return.created\",\n \"payment_intent.amount_capturable_updated\",\n \"payment_intent.created\",\n \"payment_intent.payment_failed\",\n \"payment_intent.succeeded\",\n \"payment_method.attached\",\n \"payment_method.card_automatically_updated\",\n \"payment_method.detached\",\n \"payment_method.updated\",\n \"payout.canceled\",\n \"payout.created\",\n \"payout.failed\",\n \"payout.paid\",\n \"payout.updated\",\n \"person.created\",\n \"person.deleted\",\n \"person.updated\",\n \"plan.created\",\n \"plan.deleted\",\n \"plan.updated\",\n \"product.created\",\n \"product.deleted\",\n \"product.updated\",\n \"radar.early_fraud_warning.created\",\n \"radar.early_fraud_warning.updated\",\n \"recipient.created\",\n \"recipient.deleted\",\n \"recipient.updated\",\n \"reporting.report_run.failed\",\n \"reporting.report_run.succeeded\",\n 
\"reporting.report_type.updated\",\n \"review.closed\",\n \"review.opened\",\n \"setup_intent.created\",\n \"setup_intent.setup_failed\",\n \"setup_intent.succeeded\",\n \"sigma.scheduled_query_run.created\",\n \"sku.created\",\n \"sku.deleted\",\n \"sku.updated\",\n \"source.canceled\",\n \"source.chargeable\",\n \"source.failed\",\n \"source.mandate_notification\",\n \"source.refund_attributes_required\",\n \"source.transaction.created\",\n \"source.transaction.updated\",\n \"subscription_schedule.aborted\",\n \"subscription_schedule.canceled\",\n \"subscription_schedule.completed\",\n \"subscription_schedule.created\",\n \"subscription_schedule.expiring\",\n \"subscription_schedule.released\",\n \"subscription_schedule.updated\",\n \"tax_rate.created\",\n \"tax_rate.updated\",\n \"topup.canceled\",\n \"topup.created\",\n \"topup.failed\",\n \"topup.reversed\",\n \"topup.succeeded\",\n \"transfer.created\",\n \"transfer.failed\",\n \"transfer.paid\",\n \"transfer.reversed\",\n \"transfer.updated\",\n # deprecated (no longer in events_types list) - TODO can be deleted?\n \"checkout_beta.session_succeeded\",\n \"issuer_fraud_record.created\",\n \"payment_intent.requires_capture\",\n \"subscription_schedule.canceled\",\n \"subscription_schedule.completed\",\n \"subscription_schedule.created\",\n \"subscription_schedule.released\",\n \"subscription_schedule.updated\",\n # special case? - TODO can be deleted?\n \"ping\",\n ]\n ]\n)\n\n\n@receiver(pre_delete, sender=djstripe_settings.get_subscriber_model_string())\ndef on_delete_subscriber_purge_customer(instance=None, **kwargs):\n \"\"\" Purge associated customers when the subscriber is deleted. \"\"\"\n for customer in instance.djstripe_customers.all():\n customer.purge()\n", "path": "djstripe/signals.py"}]} | 1,765 | 832 |
gh_patches_debug_4877 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2042 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
updating progress doesn't work
Warning: I don't think this bug is tied to a particular instance, because I saw it both on my own instance and on bookwyrm.social.
When I go to the book view, I have a block with two progress entries, but without any display of the pages I've read.

If I want to edit the progress to add the number of pages read, I have the choice between two dates and the number input. So I fill in the form, with only the page number filled.

Then I go back to the book view, but there are 3 entries instead of two. The page number hasn't been saved/displayed, and a new progress entry was added instead of editing the existing one.

One problem is you can't save a progress update without a comment, but even with that fixed, I think progress should indicate the pages read and the pages left to read. A more "graphic" layout could be very cool ;)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/forms/forms.py`
Content:
```
1 """ using django model forms """
2 from django import forms
3 from django.forms import widgets
4 from django.utils.translation import gettext_lazy as _
5
6 from bookwyrm import models
7 from bookwyrm.models.user import FeedFilterChoices
8 from .custom_form import CustomForm
9
10
11 # pylint: disable=missing-class-docstring
12 class FeedStatusTypesForm(CustomForm):
13 class Meta:
14 model = models.User
15 fields = ["feed_status_types"]
16 help_texts = {f: None for f in fields}
17 widgets = {
18 "feed_status_types": widgets.CheckboxSelectMultiple(
19 choices=FeedFilterChoices,
20 ),
21 }
22
23
24 class ImportForm(forms.Form):
25 csv_file = forms.FileField()
26
27
28 class ShelfForm(CustomForm):
29 class Meta:
30 model = models.Shelf
31 fields = ["user", "name", "privacy", "description"]
32
33
34 class GoalForm(CustomForm):
35 class Meta:
36 model = models.AnnualGoal
37 fields = ["user", "year", "goal", "privacy"]
38
39
40 class ReportForm(CustomForm):
41 class Meta:
42 model = models.Report
43 fields = ["user", "reporter", "status", "links", "note"]
44
45
46 class ReadThroughForm(CustomForm):
47 def clean(self):
48 """make sure the email isn't in use by a registered user"""
49 cleaned_data = super().clean()
50 start_date = cleaned_data.get("start_date")
51 finish_date = cleaned_data.get("finish_date")
52 if start_date and finish_date and start_date > finish_date:
53 self.add_error(
54 "finish_date", _("Reading finish date cannot be before start date.")
55 )
56
57 class Meta:
58 model = models.ReadThrough
59 fields = ["user", "book", "start_date", "finish_date"]
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/forms/forms.py b/bookwyrm/forms/forms.py
--- a/bookwyrm/forms/forms.py
+++ b/bookwyrm/forms/forms.py
@@ -45,7 +45,7 @@
class ReadThroughForm(CustomForm):
def clean(self):
- """make sure the email isn't in use by a registered user"""
+ """don't let readthroughs end before they start"""
cleaned_data = super().clean()
start_date = cleaned_data.get("start_date")
finish_date = cleaned_data.get("finish_date")
| {"golden_diff": "diff --git a/bookwyrm/forms/forms.py b/bookwyrm/forms/forms.py\n--- a/bookwyrm/forms/forms.py\n+++ b/bookwyrm/forms/forms.py\n@@ -45,7 +45,7 @@\n \n class ReadThroughForm(CustomForm):\n def clean(self):\n- \"\"\"make sure the email isn't in use by a registered user\"\"\"\n+ \"\"\"don't let readthroughs end before they start\"\"\"\n cleaned_data = super().clean()\n start_date = cleaned_data.get(\"start_date\")\n finish_date = cleaned_data.get(\"finish_date\")\n", "issue": "updating progress doesn't work\nWarning : i see this bug as unlinked with instance, because i saw it on my instance and on bookwyrm.social. \r\n\r\nWhen i go to the book view, i have a block with two progress, but without any print of the pages i've read. \r\n\r\n\r\nIf i want to edit the progress to add pages number read, i have the choice between two dates and the number input. So i fill in the form. with only page number filled.\r\n\r\n\r\nThen i'm going back to the book view, but with 3 entries instead of two. page number hasn't been saved/printed and it added a new progress instead of editing it.\r\n\r\n\r\nOne problem is you can't save a progress without a comment, but even with that fix, i think progress should indicate the pages read and the pages left to be read. a more \"graphic' layout could be very cool ;)\n", "before_files": [{"content": "\"\"\" using django model forms \"\"\"\nfrom django import forms\nfrom django.forms import widgets\nfrom django.utils.translation import gettext_lazy as _\n\nfrom bookwyrm import models\nfrom bookwyrm.models.user import FeedFilterChoices\nfrom .custom_form import CustomForm\n\n\n# pylint: disable=missing-class-docstring\nclass FeedStatusTypesForm(CustomForm):\n class Meta:\n model = models.User\n fields = [\"feed_status_types\"]\n help_texts = {f: None for f in fields}\n widgets = {\n \"feed_status_types\": widgets.CheckboxSelectMultiple(\n choices=FeedFilterChoices,\n ),\n }\n\n\nclass ImportForm(forms.Form):\n csv_file = forms.FileField()\n\n\nclass ShelfForm(CustomForm):\n class Meta:\n model = models.Shelf\n fields = [\"user\", \"name\", \"privacy\", \"description\"]\n\n\nclass GoalForm(CustomForm):\n class Meta:\n model = models.AnnualGoal\n fields = [\"user\", \"year\", \"goal\", \"privacy\"]\n\n\nclass ReportForm(CustomForm):\n class Meta:\n model = models.Report\n fields = [\"user\", \"reporter\", \"status\", \"links\", \"note\"]\n\n\nclass ReadThroughForm(CustomForm):\n def clean(self):\n \"\"\"make sure the email isn't in use by a registered user\"\"\"\n cleaned_data = super().clean()\n start_date = cleaned_data.get(\"start_date\")\n finish_date = cleaned_data.get(\"finish_date\")\n if start_date and finish_date and start_date > finish_date:\n self.add_error(\n \"finish_date\", _(\"Reading finish date cannot be before start date.\")\n )\n\n class Meta:\n model = models.ReadThrough\n fields = [\"user\", \"book\", \"start_date\", \"finish_date\"]\n", "path": "bookwyrm/forms/forms.py"}], "after_files": [{"content": "\"\"\" using django model forms \"\"\"\nfrom django import forms\nfrom django.forms import widgets\nfrom django.utils.translation import gettext_lazy as _\n\nfrom bookwyrm import models\nfrom bookwyrm.models.user import FeedFilterChoices\nfrom .custom_form import CustomForm\n\n\n# pylint: disable=missing-class-docstring\nclass FeedStatusTypesForm(CustomForm):\n class Meta:\n model = models.User\n fields = [\"feed_status_types\"]\n help_texts = {f: None for f in fields}\n widgets = {\n \"feed_status_types\": 
widgets.CheckboxSelectMultiple(\n choices=FeedFilterChoices,\n ),\n }\n\n\nclass ImportForm(forms.Form):\n csv_file = forms.FileField()\n\n\nclass ShelfForm(CustomForm):\n class Meta:\n model = models.Shelf\n fields = [\"user\", \"name\", \"privacy\", \"description\"]\n\n\nclass GoalForm(CustomForm):\n class Meta:\n model = models.AnnualGoal\n fields = [\"user\", \"year\", \"goal\", \"privacy\"]\n\n\nclass ReportForm(CustomForm):\n class Meta:\n model = models.Report\n fields = [\"user\", \"reporter\", \"status\", \"links\", \"note\"]\n\n\nclass ReadThroughForm(CustomForm):\n def clean(self):\n \"\"\"don't let readthroughs end before they start\"\"\"\n cleaned_data = super().clean()\n start_date = cleaned_data.get(\"start_date\")\n finish_date = cleaned_data.get(\"finish_date\")\n if start_date and finish_date and start_date > finish_date:\n self.add_error(\n \"finish_date\", _(\"Reading finish date cannot be before start date.\")\n )\n\n class Meta:\n model = models.ReadThrough\n fields = [\"user\", \"book\", \"start_date\", \"finish_date\"]\n", "path": "bookwyrm/forms/forms.py"}]} | 1,129 | 122 |
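
As a side note, the start/finish date check that sits next to the corrected docstring can be exercised like any other Django form; a rough usage sketch (the `user`/`book` values are placeholders for existing records):

```python
form = ReadThroughForm(data={
    "user": user.pk,
    "book": book.pk,
    "start_date": "2022-04-10",
    "finish_date": "2022-04-01",  # earlier than start_date
})
assert not form.is_valid()
assert "finish_date" in form.errors  # "Reading finish date cannot be before start date."
```
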
gh_patches_debug_11674 | rasdani/github-patches | git_diff | sopel-irc__sopel-1270 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[wikipedia] Error on nonexistent article link
I productively (? :laughing:) found another issue in the `wikipedia` module while testing my fix for #1255. It turns out that, if someone sends a link to a Wikipedia article that doesn't exist, the bot spits out a nice `KeyError`. Better to spit out a clean error (or, perhaps, nothing).
As with #1255, I'm testing a fix for this on my own Sopel instance. Once I'm happy with it (and the wording of whatever error message I add), there will be a PR for feedback before merging.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/wikipedia.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2013 Elsie Powell - embolalia.com
3 # Licensed under the Eiffel Forum License 2.
4 from __future__ import unicode_literals, absolute_import, print_function, division
5 from sopel import web, tools
6 from sopel.config.types import StaticSection, ValidatedAttribute
7 from sopel.module import NOLIMIT, commands, example, rule
8 import json
9 import re
10
11 import sys
12 if sys.version_info.major < 3:
13 from urlparse import unquote as _unquote
14 unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')
15 else:
16 from urllib.parse import unquote
17
18 REDIRECT = re.compile(r'^REDIRECT (.*)')
19
20
21 class WikipediaSection(StaticSection):
22 default_lang = ValidatedAttribute('default_lang', default='en')
23 """The default language to find articles from."""
24 lang_per_channel = ValidatedAttribute('lang_per_channel')
25
26
27 def setup(bot):
28 bot.config.define_section('wikipedia', WikipediaSection)
29
30 regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')
31 if not bot.memory.contains('url_callbacks'):
32 bot.memory['url_callbacks'] = tools.SopelMemory()
33 bot.memory['url_callbacks'][regex] = mw_info
34
35
36 def configure(config):
37 config.define_section('wikipedia', WikipediaSection)
38 config.wikipedia.configure_setting(
39 'default_lang',
40 "Enter the default language to find articles from."
41 )
42
43
44 def mw_search(server, query, num):
45 """
46 Searches the specified MediaWiki server for the given query, and returns
47 the specified number of results.
48 """
49 search_url = ('http://%s/w/api.php?format=json&action=query'
50 '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'
51 '&srsearch=') % (server, num)
52 search_url += query
53 query = json.loads(web.get(search_url))
54 if 'query' in query:
55 query = query['query']['search']
56 return [r['title'] for r in query]
57 else:
58 return None
59
60
61 def say_snippet(bot, server, query, show_url=True):
62 page_name = query.replace('_', ' ')
63 query = query.replace(' ', '_')
64 snippet = mw_snippet(server, query)
65 msg = '[WIKIPEDIA] {} | "{}"'.format(page_name, snippet)
66 if show_url:
67 msg = msg + ' | https://{}/wiki/{}'.format(server, query)
68 bot.say(msg)
69
70
71 def mw_snippet(server, query):
72 """
73 Retrives a snippet of the specified length from the given page on the given
74 server.
75 """
76 snippet_url = ('https://' + server + '/w/api.php?format=json'
77 '&action=query&prop=extracts&exintro&explaintext'
78 '&exchars=300&redirects&titles=')
79 snippet_url += query
80 snippet = json.loads(web.get(snippet_url))
81 snippet = snippet['query']['pages']
82
83 # For some reason, the API gives the page *number* as the key, so we just
84 # grab the first page number in the results.
85 snippet = snippet[list(snippet.keys())[0]]
86
87 return snippet['extract']
88
89
90 @rule('.*\/([a-z]+\.wikipedia.org)\/wiki\/((?!File\:)[^ ]+).*')
91 def mw_info(bot, trigger, found_match=None):
92 """
93 Retrives a snippet of the specified length from the given page on the given
94 server.
95 """
96 match = found_match or trigger
97 say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)
98
99
100 @commands('w', 'wiki', 'wik')
101 @example('.w San Francisco')
102 def wikipedia(bot, trigger):
103 lang = bot.config.wikipedia.default_lang
104
105 #change lang if channel has custom language set
106 if (trigger.sender and not trigger.sender.is_nick() and
107 bot.config.wikipedia.lang_per_channel):
108 customlang = re.search('(' + trigger.sender + '):(\w+)',
109 bot.config.wikipedia.lang_per_channel)
110 if customlang is not None:
111 lang = customlang.group(2)
112
113 if trigger.group(2) is None:
114 bot.reply("What do you want me to look up?")
115 return NOLIMIT
116
117 query = trigger.group(2)
118 args = re.search(r'^-([a-z]{2,12})\s(.*)', query)
119 if args is not None:
120 lang = args.group(1)
121 query = args.group(2)
122
123 if not query:
124 bot.reply('What do you want me to look up?')
125 return NOLIMIT
126 server = lang + '.wikipedia.org'
127 query = mw_search(server, query, 1)
128 if not query:
129 bot.reply("I can't find any results for that.")
130 return NOLIMIT
131 else:
132 query = query[0]
133 say_snippet(bot, server, query)
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/wikipedia.py b/sopel/modules/wikipedia.py
--- a/sopel/modules/wikipedia.py
+++ b/sopel/modules/wikipedia.py
@@ -61,7 +61,12 @@
def say_snippet(bot, server, query, show_url=True):
page_name = query.replace('_', ' ')
query = query.replace(' ', '_')
- snippet = mw_snippet(server, query)
+ try:
+ snippet = mw_snippet(server, query)
+ except KeyError:
+ if show_url:
+ bot.say("[WIKIPEDIA] Error fetching snippet for \"{}\".".format(page_name))
+ return
msg = '[WIKIPEDIA] {} | "{}"'.format(page_name, snippet)
if show_url:
msg = msg + ' | https://{}/wiki/{}'.format(server, query)
| {"golden_diff": "diff --git a/sopel/modules/wikipedia.py b/sopel/modules/wikipedia.py\n--- a/sopel/modules/wikipedia.py\n+++ b/sopel/modules/wikipedia.py\n@@ -61,7 +61,12 @@\n def say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = query.replace(' ', '_')\n- snippet = mw_snippet(server, query)\n+ try:\n+ snippet = mw_snippet(server, query)\n+ except KeyError:\n+ if show_url:\n+ bot.say(\"[WIKIPEDIA] Error fetching snippet for \\\"{}\\\".\".format(page_name))\n+ return\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n", "issue": "[wikipedia] Error on nonexistent article link\nI productively (? :laughing:) found another issue in the `wikipedia` module while testing my fix for #1255. It turns out that, if someone sends a link to a Wikipedia article that doesn't exist, the bot spits out a nice `KeyError`. Better to spit out a clean error (or, perhaps, nothing).\r\n\r\nAs with #1255, I'm testing a fix for this on my own Sopel instance. Once I'm happy with it (and the wording of whatever error message I add), there will be a PR for feedback before merging.\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2013 Elsie Powell - embolalia.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ValidatedAttribute\nfrom sopel.module import NOLIMIT, commands, example, rule\nimport json\nimport re\n\nimport sys\nif sys.version_info.major < 3:\n from urlparse import unquote as _unquote\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\nelse:\n from urllib.parse import unquote\n\nREDIRECT = re.compile(r'^REDIRECT (.*)')\n\n\nclass WikipediaSection(StaticSection):\n default_lang = ValidatedAttribute('default_lang', default='en')\n \"\"\"The default language to find articles from.\"\"\"\n lang_per_channel = ValidatedAttribute('lang_per_channel')\n\n\ndef setup(bot):\n bot.config.define_section('wikipedia', WikipediaSection)\n\n regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n bot.memory['url_callbacks'][regex] = mw_info\n\n\ndef configure(config):\n config.define_section('wikipedia', WikipediaSection)\n config.wikipedia.configure_setting(\n 'default_lang',\n \"Enter the default language to find articles from.\"\n )\n\n\ndef mw_search(server, query, num):\n \"\"\"\n Searches the specified MediaWiki server for the given query, and returns\n the specified number of results.\n \"\"\"\n search_url = ('http://%s/w/api.php?format=json&action=query'\n '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'\n '&srsearch=') % (server, num)\n search_url += query\n query = json.loads(web.get(search_url))\n if 'query' in query:\n query = query['query']['search']\n return [r['title'] for r in query]\n else:\n return None\n\n\ndef say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = query.replace(' ', '_')\n snippet = mw_snippet(server, query)\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n bot.say(msg)\n\n\ndef mw_snippet(server, query):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n snippet_url = ('https://' + 
server + '/w/api.php?format=json'\n '&action=query&prop=extracts&exintro&explaintext'\n '&exchars=300&redirects&titles=')\n snippet_url += query\n snippet = json.loads(web.get(snippet_url))\n snippet = snippet['query']['pages']\n\n # For some reason, the API gives the page *number* as the key, so we just\n # grab the first page number in the results.\n snippet = snippet[list(snippet.keys())[0]]\n\n return snippet['extract']\n\n\n@rule('.*\\/([a-z]+\\.wikipedia.org)\\/wiki\\/((?!File\\:)[^ ]+).*')\ndef mw_info(bot, trigger, found_match=None):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n match = found_match or trigger\n say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)\n\n\n@commands('w', 'wiki', 'wik')\n@example('.w San Francisco')\ndef wikipedia(bot, trigger):\n lang = bot.config.wikipedia.default_lang\n\n #change lang if channel has custom language set\n if (trigger.sender and not trigger.sender.is_nick() and\n bot.config.wikipedia.lang_per_channel):\n customlang = re.search('(' + trigger.sender + '):(\\w+)',\n bot.config.wikipedia.lang_per_channel)\n if customlang is not None:\n lang = customlang.group(2)\n\n if trigger.group(2) is None:\n bot.reply(\"What do you want me to look up?\")\n return NOLIMIT\n\n query = trigger.group(2)\n args = re.search(r'^-([a-z]{2,12})\\s(.*)', query)\n if args is not None:\n lang = args.group(1)\n query = args.group(2)\n\n if not query:\n bot.reply('What do you want me to look up?')\n return NOLIMIT\n server = lang + '.wikipedia.org'\n query = mw_search(server, query, 1)\n if not query:\n bot.reply(\"I can't find any results for that.\")\n return NOLIMIT\n else:\n query = query[0]\n say_snippet(bot, server, query)\n", "path": "sopel/modules/wikipedia.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2013 Elsie Powell - embolalia.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ValidatedAttribute\nfrom sopel.module import NOLIMIT, commands, example, rule\nimport json\nimport re\n\nimport sys\nif sys.version_info.major < 3:\n from urlparse import unquote as _unquote\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\nelse:\n from urllib.parse import unquote\n\nREDIRECT = re.compile(r'^REDIRECT (.*)')\n\n\nclass WikipediaSection(StaticSection):\n default_lang = ValidatedAttribute('default_lang', default='en')\n \"\"\"The default language to find articles from.\"\"\"\n lang_per_channel = ValidatedAttribute('lang_per_channel')\n\n\ndef setup(bot):\n bot.config.define_section('wikipedia', WikipediaSection)\n\n regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n bot.memory['url_callbacks'][regex] = mw_info\n\n\ndef configure(config):\n config.define_section('wikipedia', WikipediaSection)\n config.wikipedia.configure_setting(\n 'default_lang',\n \"Enter the default language to find articles from.\"\n )\n\n\ndef mw_search(server, query, num):\n \"\"\"\n Searches the specified MediaWiki server for the given query, and returns\n the specified number of results.\n \"\"\"\n search_url = ('http://%s/w/api.php?format=json&action=query'\n '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'\n '&srsearch=') % (server, num)\n search_url += query\n query = json.loads(web.get(search_url))\n if 
'query' in query:\n query = query['query']['search']\n return [r['title'] for r in query]\n else:\n return None\n\n\ndef say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = query.replace(' ', '_')\n try:\n snippet = mw_snippet(server, query)\n except KeyError:\n if show_url:\n bot.say(\"[WIKIPEDIA] Error fetching snippet for \\\"{}\\\".\".format(page_name))\n return\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n bot.say(msg)\n\n\ndef mw_snippet(server, query):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n snippet_url = ('https://' + server + '/w/api.php?format=json'\n '&action=query&prop=extracts&exintro&explaintext'\n '&exchars=300&redirects&titles=')\n snippet_url += query\n snippet = json.loads(web.get(snippet_url))\n snippet = snippet['query']['pages']\n\n # For some reason, the API gives the page *number* as the key, so we just\n # grab the first page number in the results.\n snippet = snippet[list(snippet.keys())[0]]\n\n return snippet['extract']\n\n\n@rule('.*\\/([a-z]+\\.wikipedia.org)\\/wiki\\/((?!File\\:)[^ ]+).*')\ndef mw_info(bot, trigger, found_match=None):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n match = found_match or trigger\n say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)\n\n\n@commands('w', 'wiki', 'wik')\n@example('.w San Francisco')\ndef wikipedia(bot, trigger):\n lang = bot.config.wikipedia.default_lang\n\n #change lang if channel has custom language set\n if (trigger.sender and not trigger.sender.is_nick() and\n bot.config.wikipedia.lang_per_channel):\n customlang = re.search('(' + trigger.sender + '):(\\w+)',\n bot.config.wikipedia.lang_per_channel)\n if customlang is not None:\n lang = customlang.group(2)\n\n if trigger.group(2) is None:\n bot.reply(\"What do you want me to look up?\")\n return NOLIMIT\n\n query = trigger.group(2)\n args = re.search(r'^-([a-z]{2,12})\\s(.*)', query)\n if args is not None:\n lang = args.group(1)\n query = args.group(2)\n\n if not query:\n bot.reply('What do you want me to look up?')\n return NOLIMIT\n server = lang + '.wikipedia.org'\n query = mw_search(server, query, 1)\n if not query:\n bot.reply(\"I can't find any results for that.\")\n return NOLIMIT\n else:\n query = query[0]\n say_snippet(bot, server, query)\n", "path": "sopel/modules/wikipedia.py"}]} | 1,792 | 191 |
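
For context on why the original `mw_snippet()` raised `KeyError`: when the requested title does not exist, the MediaWiki extracts endpoint still returns a page entry, but it is flagged as missing and carries no `extract` field, so the final dictionary lookup fails. A rough illustration with an abridged payload (representative, not copied from a live response):

```python
# Abridged shape of the extracts API response for a nonexistent article.
payload = {"query": {"pages": {"-1": {"ns": 0, "title": "No_such_page", "missing": ""}}}}

pages = payload["query"]["pages"]
page = pages[list(pages.keys())[0]]
page["extract"]  # KeyError -- the case the patched say_snippet() now catches
```
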
gh_patches_debug_28952 | rasdani/github-patches | git_diff | nautobot__nautobot-3943 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Protect against Git-repo jobs clobbering sys.modules
### As ...
Patti - Platform Admin
### I want ...
To add Jobs via Git repositories without worrying about them impacting the overall stability of the platform.
After #3840, there is a risk that a misnamed (or maliciously-named) Git repository could clobber existing Python code. For example, creating a repository and assigning it the slug `nautobot` causes all sorts of havoc as it results in Nautobot unloading itself and then attempting to reimport all of Nautobot's code from the Git repository by that name.
### So that ...
The application is resilient to user error as well as mischievous/malicious actions by authenticated users.
### I know this is done when...
- Creating a GitRepository whose `slug` matches any currently loaded Python module in the Nautobot environment is rejected with an appropriate error message.
### Optional - Feature groups this request pertains to.
- [X] Automation
- [ ] Circuits
- [ ] DCIM
- [ ] IPAM
- [ ] Misc (including Data Sources)
- [ ] Organization
- [ ] Plugins (and other Extensibility)
- [X] Security (Secrets, etc)
- [ ] Image Management
- [ ] UI/UX
- [ ] Documentation
- [ ] Other (not directly a platform feature)
### Database Changes
None
### External Dependencies
None
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/extras/models/datasources.py`
Content:
```
1 """Models for representing external data sources."""
2 import os
3
4 from django.conf import settings
5 from django.core.exceptions import ValidationError
6 from django.core.serializers.json import DjangoJSONEncoder
7 from django.core.validators import URLValidator
8 from django.db import models
9
10 from nautobot.core.models.fields import AutoSlugField, slugify_dashes_to_underscores
11 from nautobot.core.models.generics import PrimaryModel
12 from nautobot.extras.utils import extras_features, check_if_key_is_graphql_safe
13
14
15 @extras_features(
16 "config_context_owners",
17 "export_template_owners",
18 "job_results",
19 "webhooks",
20 )
21 class GitRepository(PrimaryModel):
22 """Representation of a Git repository used as an external data source."""
23
24 name = models.CharField(
25 max_length=100,
26 unique=True,
27 )
28 slug = AutoSlugField(
29 populate_from="name",
30 help_text="Internal field name. Please use underscores rather than dashes in this key.",
31 slugify_function=slugify_dashes_to_underscores,
32 )
33
34 remote_url = models.URLField(
35 max_length=255,
36 # For the moment we don't support ssh:// and git:// URLs
37 help_text="Only HTTP and HTTPS URLs are presently supported",
38 validators=[URLValidator(schemes=["http", "https"])],
39 )
40 branch = models.CharField(
41 max_length=64,
42 default="main",
43 )
44
45 current_head = models.CharField(
46 help_text="Commit hash of the most recent fetch from the selected branch. Used for syncing between workers.",
47 max_length=48,
48 default="",
49 blank=True,
50 )
51
52 secrets_group = models.ForeignKey(
53 to="extras.SecretsGroup",
54 on_delete=models.SET_NULL,
55 default=None,
56 blank=True,
57 null=True,
58 related_name="git_repositories",
59 )
60
61 # Data content types that this repo is a source of. Valid options are dynamically generated based on
62 # the data types registered in registry['datasource_contents'].
63 provided_contents = models.JSONField(encoder=DjangoJSONEncoder, default=list, blank=True)
64
65 clone_fields = ["remote_url", "secrets_group", "provided_contents"]
66
67 class Meta:
68 ordering = ["name"]
69 verbose_name = "Git repository"
70 verbose_name_plural = "Git repositories"
71
72 def __init__(self, *args, **kwargs):
73 super().__init__(*args, **kwargs)
74
75 # Store the initial repo slug so we can check for changes on save().
76 self.__initial_slug = self.slug
77
78 def __str__(self):
79 return self.name
80
81 def clean(self):
82 super().clean()
83
84 if self.slug != "":
85 check_if_key_is_graphql_safe(self.__class__.__name__, self.slug, "slug")
86
87 if self.present_in_database and self.slug != self.__initial_slug:
88 raise ValidationError(
89 f"Slug cannot be changed once set. Current slug is {self.__initial_slug}, "
90 f"requested slug is {self.slug}"
91 )
92
93 def get_latest_sync(self):
94 """
95 Return a `JobResult` for the latest sync operation.
96
97 Returns:
98 JobResult
99 """
100 from nautobot.extras.models import JobResult
101
102 # This will match all "GitRepository" jobs (pull/refresh, dry-run, etc.)
103 prefix = "nautobot.core.jobs.GitRepository"
104 return JobResult.objects.filter(task_name__startswith=prefix, task_kwargs__repository=self.pk).latest()
105
106 def to_csv(self):
107 return (
108 self.name,
109 self.slug,
110 self.remote_url,
111 self.branch,
112 self.secrets_group.name if self.secrets_group else None,
113 self.provided_contents,
114 )
115
116 @property
117 def filesystem_path(self):
118 return os.path.join(settings.GIT_ROOT, self.slug)
119
120 def sync(self, user, dry_run=False):
121 """
122 Enqueue a Job to pull the Git repository from the remote and return the sync result.
123
124 Args:
125 user (User): The User that will perform the sync.
126 dry_run (bool): If set, dry-run the Git sync.
127
128 Returns:
129 JobResult
130 """
131 from nautobot.extras.datasources import (
132 enqueue_pull_git_repository_and_refresh_data,
133 enqueue_git_repository_diff_origin_and_local,
134 )
135
136 if dry_run:
137 return enqueue_git_repository_diff_origin_and_local(self, user)
138 return enqueue_pull_git_repository_and_refresh_data(self, user)
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/extras/models/datasources.py b/nautobot/extras/models/datasources.py
--- a/nautobot/extras/models/datasources.py
+++ b/nautobot/extras/models/datasources.py
@@ -1,4 +1,5 @@
"""Models for representing external data sources."""
+from importlib.util import find_spec
import os
from django.conf import settings
@@ -81,13 +82,24 @@
def clean(self):
super().clean()
- if self.slug != "":
- check_if_key_is_graphql_safe(self.__class__.__name__, self.slug, "slug")
+ # Autogenerate slug now, rather than in pre_save(), if not set already, as we need to check it below.
+ if self.slug == "":
+ self._meta.get_field("slug").create_slug(self, add=(not self.present_in_database))
+
+ if self.present_in_database and self.slug != self.__initial_slug:
+ raise ValidationError(
+ f"Slug cannot be changed once set. Current slug is {self.__initial_slug}, "
+ f"requested slug is {self.slug}"
+ )
- if self.present_in_database and self.slug != self.__initial_slug:
+ if not self.present_in_database:
+ check_if_key_is_graphql_safe(self.__class__.__name__, self.slug, "slug")
+ # Check on create whether the proposed slug conflicts with a module name already in the Python environment.
+ # Because we add GIT_ROOT to the end of sys.path, trying to import this repository will instead
+ # import the earlier-found Python module in its place, which would be undesirable.
+ if find_spec(self.slug) is not None:
raise ValidationError(
- f"Slug cannot be changed once set. Current slug is {self.__initial_slug}, "
- f"requested slug is {self.slug}"
+ f'Please choose a different slug, as "{self.slug}" is an installed Python package or module.'
)
def get_latest_sync(self):
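Note: the fix above hinges on `importlib.util.find_spec`, which returns `None` when no importable module matches the proposed slug. The snippet below is only an illustration of that validation idea, using a hypothetical `validate_slug` helper rather than the actual Nautobot model code.

```python
from importlib.util import find_spec


def validate_slug(slug):
    """Reject a slug that would shadow an importable Python package or module."""
    if find_spec(slug) is not None:
        raise ValueError(
            f'Please choose a different slug, as "{slug}" is an installed Python package or module.'
        )


validate_slug("my_repo_jobs")  # passes, assuming no module by that name is installed
try:
    validate_slug("nautobot")  # raises whenever Nautobot itself is importable
except ValueError as exc:
    print(exc)
```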
| {"golden_diff": "diff --git a/nautobot/extras/models/datasources.py b/nautobot/extras/models/datasources.py\n--- a/nautobot/extras/models/datasources.py\n+++ b/nautobot/extras/models/datasources.py\n@@ -1,4 +1,5 @@\n \"\"\"Models for representing external data sources.\"\"\"\n+from importlib.util import find_spec\n import os\n \n from django.conf import settings\n@@ -81,13 +82,24 @@\n def clean(self):\n super().clean()\n \n- if self.slug != \"\":\n- check_if_key_is_graphql_safe(self.__class__.__name__, self.slug, \"slug\")\n+ # Autogenerate slug now, rather than in pre_save(), if not set already, as we need to check it below.\n+ if self.slug == \"\":\n+ self._meta.get_field(\"slug\").create_slug(self, add=(not self.present_in_database))\n+\n+ if self.present_in_database and self.slug != self.__initial_slug:\n+ raise ValidationError(\n+ f\"Slug cannot be changed once set. Current slug is {self.__initial_slug}, \"\n+ f\"requested slug is {self.slug}\"\n+ )\n \n- if self.present_in_database and self.slug != self.__initial_slug:\n+ if not self.present_in_database:\n+ check_if_key_is_graphql_safe(self.__class__.__name__, self.slug, \"slug\")\n+ # Check on create whether the proposed slug conflicts with a module name already in the Python environment.\n+ # Because we add GIT_ROOT to the end of sys.path, trying to import this repository will instead\n+ # import the earlier-found Python module in its place, which would be undesirable.\n+ if find_spec(self.slug) is not None:\n raise ValidationError(\n- f\"Slug cannot be changed once set. Current slug is {self.__initial_slug}, \"\n- f\"requested slug is {self.slug}\"\n+ f'Please choose a different slug, as \"{self.slug}\" is an installed Python package or module.'\n )\n \n def get_latest_sync(self):\n", "issue": "Protect against Git-repo jobs clobbering sys.modules\n### As ...\n\nPatti - Platform Admin\n\n### I want ...\n\nTo add Jobs via Git repositories without worrying about them impacting the overall stability of the platform.\r\n\r\nAfter #3840 , there is a risk that a misnamed (or maliciously-named) Git repository could potentially clobber existing Python code. 
For example, creating a repository and assigning it the slug `nautobot` causes all sorts of havoc as it results in Nautobot unloading itself and then attempting to reimport all of Nautobot's code from the Git repository by that name.\n\n### So that ...\n\nThe application is resilient to user error as well as mischievous/malicious actions by authenticated users.\n\n### I know this is done when...\n\n- Creating a GitRepository whose `slug` matches any currently loaded Python module in the Nautobot environment is rejected with an appropriate error message.\n\n### Optional - Feature groups this request pertains to.\n\n- [X] Automation\n- [ ] Circuits\n- [ ] DCIM\n- [ ] IPAM\n- [ ] Misc (including Data Sources)\n- [ ] Organization\n- [ ] Plugins (and other Extensibility)\n- [X] Security (Secrets, etc)\n- [ ] Image Management\n- [ ] UI/UX\n- [ ] Documentation\n- [ ] Other (not directly a platform feature)\n\n### Database Changes\n\nNone\n\n### External Dependencies\n\nNone\n", "before_files": [{"content": "\"\"\"Models for representing external data sources.\"\"\"\nimport os\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.serializers.json import DjangoJSONEncoder\nfrom django.core.validators import URLValidator\nfrom django.db import models\n\nfrom nautobot.core.models.fields import AutoSlugField, slugify_dashes_to_underscores\nfrom nautobot.core.models.generics import PrimaryModel\nfrom nautobot.extras.utils import extras_features, check_if_key_is_graphql_safe\n\n\n@extras_features(\n \"config_context_owners\",\n \"export_template_owners\",\n \"job_results\",\n \"webhooks\",\n)\nclass GitRepository(PrimaryModel):\n \"\"\"Representation of a Git repository used as an external data source.\"\"\"\n\n name = models.CharField(\n max_length=100,\n unique=True,\n )\n slug = AutoSlugField(\n populate_from=\"name\",\n help_text=\"Internal field name. Please use underscores rather than dashes in this key.\",\n slugify_function=slugify_dashes_to_underscores,\n )\n\n remote_url = models.URLField(\n max_length=255,\n # For the moment we don't support ssh:// and git:// URLs\n help_text=\"Only HTTP and HTTPS URLs are presently supported\",\n validators=[URLValidator(schemes=[\"http\", \"https\"])],\n )\n branch = models.CharField(\n max_length=64,\n default=\"main\",\n )\n\n current_head = models.CharField(\n help_text=\"Commit hash of the most recent fetch from the selected branch. Used for syncing between workers.\",\n max_length=48,\n default=\"\",\n blank=True,\n )\n\n secrets_group = models.ForeignKey(\n to=\"extras.SecretsGroup\",\n on_delete=models.SET_NULL,\n default=None,\n blank=True,\n null=True,\n related_name=\"git_repositories\",\n )\n\n # Data content types that this repo is a source of. 
Valid options are dynamically generated based on\n # the data types registered in registry['datasource_contents'].\n provided_contents = models.JSONField(encoder=DjangoJSONEncoder, default=list, blank=True)\n\n clone_fields = [\"remote_url\", \"secrets_group\", \"provided_contents\"]\n\n class Meta:\n ordering = [\"name\"]\n verbose_name = \"Git repository\"\n verbose_name_plural = \"Git repositories\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # Store the initial repo slug so we can check for changes on save().\n self.__initial_slug = self.slug\n\n def __str__(self):\n return self.name\n\n def clean(self):\n super().clean()\n\n if self.slug != \"\":\n check_if_key_is_graphql_safe(self.__class__.__name__, self.slug, \"slug\")\n\n if self.present_in_database and self.slug != self.__initial_slug:\n raise ValidationError(\n f\"Slug cannot be changed once set. Current slug is {self.__initial_slug}, \"\n f\"requested slug is {self.slug}\"\n )\n\n def get_latest_sync(self):\n \"\"\"\n Return a `JobResult` for the latest sync operation.\n\n Returns:\n JobResult\n \"\"\"\n from nautobot.extras.models import JobResult\n\n # This will match all \"GitRepository\" jobs (pull/refresh, dry-run, etc.)\n prefix = \"nautobot.core.jobs.GitRepository\"\n return JobResult.objects.filter(task_name__startswith=prefix, task_kwargs__repository=self.pk).latest()\n\n def to_csv(self):\n return (\n self.name,\n self.slug,\n self.remote_url,\n self.branch,\n self.secrets_group.name if self.secrets_group else None,\n self.provided_contents,\n )\n\n @property\n def filesystem_path(self):\n return os.path.join(settings.GIT_ROOT, self.slug)\n\n def sync(self, user, dry_run=False):\n \"\"\"\n Enqueue a Job to pull the Git repository from the remote and return the sync result.\n\n Args:\n user (User): The User that will perform the sync.\n dry_run (bool): If set, dry-run the Git sync.\n\n Returns:\n JobResult\n \"\"\"\n from nautobot.extras.datasources import (\n enqueue_pull_git_repository_and_refresh_data,\n enqueue_git_repository_diff_origin_and_local,\n )\n\n if dry_run:\n return enqueue_git_repository_diff_origin_and_local(self, user)\n return enqueue_pull_git_repository_and_refresh_data(self, user)\n", "path": "nautobot/extras/models/datasources.py"}], "after_files": [{"content": "\"\"\"Models for representing external data sources.\"\"\"\nfrom importlib.util import find_spec\nimport os\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.serializers.json import DjangoJSONEncoder\nfrom django.core.validators import URLValidator\nfrom django.db import models\n\nfrom nautobot.core.models.fields import AutoSlugField, slugify_dashes_to_underscores\nfrom nautobot.core.models.generics import PrimaryModel\nfrom nautobot.extras.utils import extras_features, check_if_key_is_graphql_safe\n\n\n@extras_features(\n \"config_context_owners\",\n \"export_template_owners\",\n \"job_results\",\n \"webhooks\",\n)\nclass GitRepository(PrimaryModel):\n \"\"\"Representation of a Git repository used as an external data source.\"\"\"\n\n name = models.CharField(\n max_length=100,\n unique=True,\n )\n slug = AutoSlugField(\n populate_from=\"name\",\n help_text=\"Internal field name. 
Please use underscores rather than dashes in this key.\",\n slugify_function=slugify_dashes_to_underscores,\n )\n\n remote_url = models.URLField(\n max_length=255,\n # For the moment we don't support ssh:// and git:// URLs\n help_text=\"Only HTTP and HTTPS URLs are presently supported\",\n validators=[URLValidator(schemes=[\"http\", \"https\"])],\n )\n branch = models.CharField(\n max_length=64,\n default=\"main\",\n )\n\n current_head = models.CharField(\n help_text=\"Commit hash of the most recent fetch from the selected branch. Used for syncing between workers.\",\n max_length=48,\n default=\"\",\n blank=True,\n )\n\n secrets_group = models.ForeignKey(\n to=\"extras.SecretsGroup\",\n on_delete=models.SET_NULL,\n default=None,\n blank=True,\n null=True,\n related_name=\"git_repositories\",\n )\n\n # Data content types that this repo is a source of. Valid options are dynamically generated based on\n # the data types registered in registry['datasource_contents'].\n provided_contents = models.JSONField(encoder=DjangoJSONEncoder, default=list, blank=True)\n\n clone_fields = [\"remote_url\", \"secrets_group\", \"provided_contents\"]\n\n class Meta:\n ordering = [\"name\"]\n verbose_name = \"Git repository\"\n verbose_name_plural = \"Git repositories\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # Store the initial repo slug so we can check for changes on save().\n self.__initial_slug = self.slug\n\n def __str__(self):\n return self.name\n\n def clean(self):\n super().clean()\n\n # Autogenerate slug now, rather than in pre_save(), if not set already, as we need to check it below.\n if self.slug == \"\":\n self._meta.get_field(\"slug\").create_slug(self, add=(not self.present_in_database))\n\n if self.present_in_database and self.slug != self.__initial_slug:\n raise ValidationError(\n f\"Slug cannot be changed once set. 
Current slug is {self.__initial_slug}, \"\n f\"requested slug is {self.slug}\"\n )\n\n if not self.present_in_database:\n check_if_key_is_graphql_safe(self.__class__.__name__, self.slug, \"slug\")\n # Check on create whether the proposed slug conflicts with a module name already in the Python environment.\n # Because we add GIT_ROOT to the end of sys.path, trying to import this repository will instead\n # import the earlier-found Python module in its place, which would be undesirable.\n if find_spec(self.slug) is not None:\n raise ValidationError(\n f'Please choose a different slug, as \"{self.slug}\" is an installed Python package or module.'\n )\n\n def get_latest_sync(self):\n \"\"\"\n Return a `JobResult` for the latest sync operation.\n\n Returns:\n JobResult\n \"\"\"\n from nautobot.extras.models import JobResult\n\n # This will match all \"GitRepository\" jobs (pull/refresh, dry-run, etc.)\n prefix = \"nautobot.core.jobs.GitRepository\"\n return JobResult.objects.filter(task_name__startswith=prefix, task_kwargs__repository=self.pk).latest()\n\n def to_csv(self):\n return (\n self.name,\n self.slug,\n self.remote_url,\n self.branch,\n self.secrets_group.name if self.secrets_group else None,\n self.provided_contents,\n )\n\n @property\n def filesystem_path(self):\n return os.path.join(settings.GIT_ROOT, self.slug)\n\n def sync(self, user, dry_run=False):\n \"\"\"\n Enqueue a Job to pull the Git repository from the remote and return the sync result.\n\n Args:\n user (User): The User that will perform the sync.\n dry_run (bool): If set, dry-run the Git sync.\n\n Returns:\n JobResult\n \"\"\"\n from nautobot.extras.datasources import (\n enqueue_pull_git_repository_and_refresh_data,\n enqueue_git_repository_diff_origin_and_local,\n )\n\n if dry_run:\n return enqueue_git_repository_diff_origin_and_local(self, user)\n return enqueue_pull_git_repository_and_refresh_data(self, user)\n", "path": "nautobot/extras/models/datasources.py"}]} | 1,838 | 445 |
gh_patches_debug_160 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-609 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop distutils support
I feel it's about time to drop **distutils** support as we rely on [extras_require of setuptools](https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies) to handle **yaml** dependencies.
What's your opinion on this matter?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4 import sys
5
6 try:
7 from setuptools import setup
8 except ImportError:
9 from distutils.core import setup
10
11 version = "1.3.0"
12
13 if sys.argv[-1] == 'publish':
14 os.system('python setup.py sdist upload')
15 os.system('python setup.py bdist_wheel upload')
16 sys.exit()
17
18 if sys.argv[-1] == 'tag':
19 os.system("git tag -a %s -m 'version %s'" % (version, version))
20 os.system("git push --tags")
21 sys.exit()
22
23 with open('README.rst') as readme_file:
24 readme = readme_file.read()
25
26 with open('HISTORY.rst') as history_file:
27 history = history_file.read().replace('.. :changelog:', '')
28
29 requirements = [
30 'future>=0.15.2',
31 'binaryornot>=0.2.0',
32 'jinja2>=2.7',
33 'click>=5.0',
34 'whichcraft>=0.1.1'
35 ]
36
37 long_description = readme + '\n\n' + history
38
39 if sys.argv[-1] == 'readme':
40 print(long_description)
41 sys.exit()
42
43
44 setup(
45 name='cookiecutter',
46 version=version,
47 description=('A command-line utility that creates projects from project '
48 'templates, e.g. creating a Python package project from a '
49 'Python package project template.'),
50 long_description=long_description,
51 author='Audrey Roy',
52 author_email='[email protected]',
53 url='https://github.com/audreyr/cookiecutter',
54 packages=[
55 'cookiecutter',
56 ],
57 package_dir={'cookiecutter': 'cookiecutter'},
58 entry_points={
59 'console_scripts': [
60 'cookiecutter = cookiecutter.cli:main',
61 ]
62 },
63 include_package_data=True,
64 install_requires=requirements,
65 extras_require={
66 ':sys_platform=="win32" and python_version=="2.7"': [
67 'PyYAML>=3.10'
68 ],
69 ':sys_platform!="win32" or python_version!="2.7"': [
70 'ruamel.yaml>=0.10.12'
71 ]
72 },
73 license='BSD',
74 zip_safe=False,
75 classifiers=[
76 'Development Status :: 5 - Production/Stable',
77 'Environment :: Console',
78 'Intended Audience :: Developers',
79 'Natural Language :: English',
80 'License :: OSI Approved :: BSD License',
81 'Programming Language :: Python',
82 'Programming Language :: Python :: 2',
83 'Programming Language :: Python :: 2.7',
84 'Programming Language :: Python :: 3',
85 'Programming Language :: Python :: 3.3',
86 'Programming Language :: Python :: 3.4',
87 'Programming Language :: Python :: 3.5',
88 'Programming Language :: Python :: Implementation :: CPython',
89 'Programming Language :: Python :: Implementation :: PyPy',
90 'Topic :: Software Development',
91 ],
92 keywords=(
93 'cookiecutter, Python, projects, project templates, Jinja2, '
94 'skeleton, scaffolding, project directory, setup.py, package, '
95 'packaging'
96 ),
97 )
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -3,10 +3,7 @@
import os
import sys
-try:
- from setuptools import setup
-except ImportError:
- from distutils.core import setup
+from setuptools import setup
version = "1.3.0"
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -3,10 +3,7 @@\n import os\n import sys\n \n-try:\n- from setuptools import setup\n-except ImportError:\n- from distutils.core import setup\n+from setuptools import setup\n \n version = \"1.3.0\"\n", "issue": "Drop distutils support\nI feel it's about time to drop **distutils** support as we rely on [extras_require of setuptools](https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies) to handle **yaml** dependencies..\n\nWhat's your opinion on this matter?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport sys\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nversion = \"1.3.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith open('README.rst') as readme_file:\n readme = readme_file.read()\n\nwith open('HISTORY.rst') as history_file:\n history = history_file.read().replace('.. :changelog:', '')\n\nrequirements = [\n 'future>=0.15.2',\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=5.0',\n 'whichcraft>=0.1.1'\n]\n\nlong_description = readme + '\\n\\n' + history\n\nif sys.argv[-1] == 'readme':\n print(long_description)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'),\n long_description=long_description,\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/audreyr/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.cli:main',\n ]\n },\n include_package_data=True,\n install_requires=requirements,\n extras_require={\n ':sys_platform==\"win32\" and python_version==\"2.7\"': [\n 'PyYAML>=3.10'\n ],\n ':sys_platform!=\"win32\" or python_version!=\"2.7\"': [\n 'ruamel.yaml>=0.10.12'\n ]\n },\n license='BSD',\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development',\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.3.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version 
%s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith open('README.rst') as readme_file:\n readme = readme_file.read()\n\nwith open('HISTORY.rst') as history_file:\n history = history_file.read().replace('.. :changelog:', '')\n\nrequirements = [\n 'future>=0.15.2',\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=5.0',\n 'whichcraft>=0.1.1'\n]\n\nlong_description = readme + '\\n\\n' + history\n\nif sys.argv[-1] == 'readme':\n print(long_description)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'),\n long_description=long_description,\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/audreyr/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.cli:main',\n ]\n },\n include_package_data=True,\n install_requires=requirements,\n extras_require={\n ':sys_platform==\"win32\" and python_version==\"2.7\"': [\n 'PyYAML>=3.10'\n ],\n ':sys_platform!=\"win32\" or python_version!=\"2.7\"': [\n 'ruamel.yaml>=0.10.12'\n ]\n },\n license='BSD',\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development',\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}]} | 1,217 | 75 |
gh_patches_debug_23437 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-845 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rhn_channel is using deprecated methods
Issue Type:
Bug Report
Ansible Version:
1.7.2
Environment:
OS X 10.9 / RHEL 6/7
Summary:
When adding a child channel to an RHN Satellite server, rhn_channel fails with a 500 error.
Steps To Reproduce:
Attempt to subscribe to a child channel with rhn_channel against an RHN Satellite server 5.4+.
Expected Results:
Server gets subscribed to child channel.
Actual Results:
xmlrpclib.ProtocolError: <ProtocolError for redhat.example.com/rpc/api: 500 Internal Server Error>
channel.software.setSystemChannels is deprecated.
https://access.redhat.com/documentation/en-US/Red_Hat_Network_Satellite/5.5/html/API_Overview/files/html/handlers/ChannelSoftwareHandler.html#setSystemChannels
PR to follow
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packaging/os/rhn_channel.py`
Content:
```
1 #!/usr/bin/python
2
3 # (c) Vincent Van de Kussen
4 #
5 # This file is part of Ansible
6 #
7 # Ansible is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # Ansible is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
19
20 DOCUMENTATION = '''
21 ---
22 module: rhn_channel
23 short_description: Adds or removes Red Hat software channels
24 description:
25 - Adds or removes Red Hat software channels
26 version_added: "1.1"
27 author: Vincent Van der Kussen
28 notes:
29 - this module fetches the system id from RHN.
30 requirements:
31 - none
32 options:
33 name:
34 description:
35 - name of the software channel
36 required: true
37 default: null
38 sysname:
39 description:
40 - name of the system as it is known in RHN/Satellite
41 required: true
42 default: null
43 state:
44 description:
45 - whether the channel should be present or not
46 required: false
47 default: present
48 url:
49 description:
50 - The full url to the RHN/Satellite api
51 required: true
52 user:
53 description:
54 - RHN/Satellite user
55 required: true
56 password:
57 description:
58 - "the user's password"
59 required: true
60 '''
61
62 EXAMPLES = '''
63 - rhn_channel: name=rhel-x86_64-server-v2vwin-6 sysname=server01 url=https://rhn.redhat.com/rpc/api user=rhnuser password=guessme
64 '''
65
66 import xmlrpclib
67 from operator import itemgetter
68 import re
69
70
71 # ------------------------------------------------------- #
72
73 def get_systemid(client, session, sysname):
74 systems = client.system.listUserSystems(session)
75 for system in systems:
76 if system.get('name') == sysname:
77 idres = system.get('id')
78 idd = int(idres)
79 return idd
80
81 # ------------------------------------------------------- #
82
83 # unused:
84 #
85 #def get_localsystemid():
86 # f = open("/etc/sysconfig/rhn/systemid", "r")
87 # content = f.read()
88 # loc_id = re.search(r'\b(ID-)(\d{10})' ,content)
89 # return loc_id.group(2)
90
91 # ------------------------------------------------------- #
92
93 def subscribe_channels(channels, client, session, sysname, sys_id):
94 c = base_channels(client, session, sys_id)
95 c.append(channels)
96 return client.channel.software.setSystemChannels(session, sys_id, c)
97
98 # ------------------------------------------------------- #
99
100 def unsubscribe_channels(channels, client, session, sysname, sys_id):
101 c = base_channels(client, session, sys_id)
102 c.remove(channels)
103 return client.channel.software.setSystemChannels(session, sys_id, c)
104
105 # ------------------------------------------------------- #
106
107 def base_channels(client, session, sys_id):
108 basechan = client.channel.software.listSystemChannels(session, sys_id)
109 try:
110 chans = [item['label'] for item in basechan]
111 except KeyError:
112 chans = [item['channel_label'] for item in basechan]
113 return chans
114
115 # ------------------------------------------------------- #
116
117
118 def main():
119
120 module = AnsibleModule(
121 argument_spec = dict(
122 state = dict(default='present', choices=['present', 'absent']),
123 name = dict(required=True),
124 sysname = dict(required=True),
125 url = dict(required=True),
126 user = dict(required=True),
127 password = dict(required=True, aliases=['pwd']),
128 )
129 # supports_check_mode=True
130 )
131
132 state = module.params['state']
133 channelname = module.params['name']
134 systname = module.params['sysname']
135 saturl = module.params['url']
136 user = module.params['user']
137 password = module.params['password']
138
139 #initialize connection
140 client = xmlrpclib.Server(saturl, verbose=0)
141 session = client.auth.login(user, password)
142
143 # get systemid
144 sys_id = get_systemid(client, session, systname)
145
146 # get channels for system
147 chans = base_channels(client, session, sys_id)
148
149
150 if state == 'present':
151 if channelname in chans:
152 module.exit_json(changed=False, msg="Channel %s already exists" % channelname)
153 else:
154 subscribe_channels(channelname, client, session, systname, sys_id)
155 module.exit_json(changed=True, msg="Channel %s added" % channelname)
156
157 if state == 'absent':
158 if not channelname in chans:
159 module.exit_json(changed=False, msg="Not subscribed to channel %s." % channelname)
160 else:
161 unsubscribe_channels(channelname, client, session, systname, sys_id)
162 module.exit_json(changed=True, msg="Channel %s removed" % channelname)
163
164 client.auth.logout(session)
165
166
167 # import module snippets
168 from ansible.module_utils.basic import *
169 main()
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/packaging/os/rhn_channel.py b/packaging/os/rhn_channel.py
--- a/packaging/os/rhn_channel.py
+++ b/packaging/os/rhn_channel.py
@@ -90,17 +90,17 @@
# ------------------------------------------------------- #
-def subscribe_channels(channels, client, session, sysname, sys_id):
- c = base_channels(client, session, sys_id)
- c.append(channels)
- return client.channel.software.setSystemChannels(session, sys_id, c)
+def subscribe_channels(channelname, client, session, sysname, sys_id):
+ channels = base_channels(client, session, sys_id)
+ channels.append(channelname)
+ return client.system.setChildChannels(session, sys_id, channels)
# ------------------------------------------------------- #
-def unsubscribe_channels(channels, client, session, sysname, sys_id):
- c = base_channels(client, session, sys_id)
- c.remove(channels)
- return client.channel.software.setSystemChannels(session, sys_id, c)
+def unsubscribe_channels(channelname, client, session, sysname, sys_id):
+ channels = base_channels(client, session, sys_id)
+ channels.remove(channelname)
+ return client.system.setChildChannels(session, sys_id, channels)
# ------------------------------------------------------- #
@@ -167,3 +167,4 @@
# import module snippets
from ansible.module_utils.basic import *
main()
+
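Note: the patch swaps the deprecated `channel.software.setSystemChannels` call for `system.setChildChannels`, which takes the session key, the system id, and a list of channel labels. A standalone sketch of that call follows; the Satellite URL, credentials, system id, and channel label are all placeholders, and `xmlrpclib` matches the Python 2 import used by the module.

```python
import xmlrpclib

client = xmlrpclib.Server("https://satellite.example.com/rpc/api", verbose=0)
session = client.auth.login("rhnuser", "guessme")

sys_id = 1000010000                       # e.g. taken from client.system.listUserSystems(session)
channels = ["rhel-x86_64-server-v2vwin-6"]

# Replacement for the deprecated channel.software.setSystemChannels call.
client.system.setChildChannels(session, sys_id, channels)
client.auth.logout(session)
```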
| {"golden_diff": "diff --git a/packaging/os/rhn_channel.py b/packaging/os/rhn_channel.py\n--- a/packaging/os/rhn_channel.py\n+++ b/packaging/os/rhn_channel.py\n@@ -90,17 +90,17 @@\n \n # ------------------------------------------------------- #\n \n-def subscribe_channels(channels, client, session, sysname, sys_id):\n- c = base_channels(client, session, sys_id)\n- c.append(channels)\n- return client.channel.software.setSystemChannels(session, sys_id, c)\n+def subscribe_channels(channelname, client, session, sysname, sys_id):\n+ channels = base_channels(client, session, sys_id)\n+ channels.append(channelname)\n+ return client.system.setChildChannels(session, sys_id, channels)\n \n # ------------------------------------------------------- #\n \n-def unsubscribe_channels(channels, client, session, sysname, sys_id):\n- c = base_channels(client, session, sys_id)\n- c.remove(channels)\n- return client.channel.software.setSystemChannels(session, sys_id, c)\n+def unsubscribe_channels(channelname, client, session, sysname, sys_id):\n+ channels = base_channels(client, session, sys_id)\n+ channels.remove(channelname)\n+ return client.system.setChildChannels(session, sys_id, channels)\n \n # ------------------------------------------------------- #\n \n@@ -167,3 +167,4 @@\n # import module snippets\n from ansible.module_utils.basic import *\n main()\n+\n", "issue": "rhn_channel is using depreciated methods\nIssue Type:\nBug Report\n\nAnsible Version: \n1.7.2\n\nEnvironment: \nOS X 10.9 / RHEL 6/7\n\nSummary:\nWhen adding a child channel to rhn sat server rhn_channel fails with 500 error. \n\nSteps To Reproduce:\nattempt to subscribe to a child channel with rhn_channel to sat server 5.4+\n\nExpected Results:\nServer gets subscribed to child channel.\n\nActual Results:\nxmlrpclib.ProtocolError: <ProtocolError for redhat.example.com/rpc/api: 500 Internal Server Error>\n\nchannel.software.setSystemChannels is depreciated. \nhttps://access.redhat.com/documentation/en-US/Red_Hat_Network_Satellite/5.5/html/API_Overview/files/html/handlers/ChannelSoftwareHandler.html#setSystemChannels\n\nPR to follow\n\n", "before_files": [{"content": "#!/usr/bin/python\n\n# (c) Vincent Van de Kussen\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: rhn_channel\nshort_description: Adds or removes Red Hat software channels\ndescription:\n - Adds or removes Red Hat software channels\nversion_added: \"1.1\"\nauthor: Vincent Van der Kussen\nnotes:\n - this module fetches the system id from RHN. 
\nrequirements:\n - none\noptions:\n name:\n description:\n - name of the software channel\n required: true\n default: null\n sysname:\n description:\n - name of the system as it is known in RHN/Satellite\n required: true\n default: null\n state:\n description:\n - whether the channel should be present or not\n required: false\n default: present\n url:\n description: \n - The full url to the RHN/Satellite api\n required: true\n user:\n description:\n - RHN/Satellite user\n required: true\n password:\n description:\n - \"the user's password\"\n required: true\n'''\n\nEXAMPLES = '''\n- rhn_channel: name=rhel-x86_64-server-v2vwin-6 sysname=server01 url=https://rhn.redhat.com/rpc/api user=rhnuser password=guessme\n'''\n\nimport xmlrpclib\nfrom operator import itemgetter\nimport re\n\n\n# ------------------------------------------------------- #\n\ndef get_systemid(client, session, sysname):\n systems = client.system.listUserSystems(session)\n for system in systems:\n if system.get('name') == sysname:\n idres = system.get('id')\n idd = int(idres)\n return idd\n\n# ------------------------------------------------------- #\n\n# unused:\n#\n#def get_localsystemid():\n# f = open(\"/etc/sysconfig/rhn/systemid\", \"r\")\n# content = f.read()\n# loc_id = re.search(r'\\b(ID-)(\\d{10})' ,content)\n# return loc_id.group(2)\n\n# ------------------------------------------------------- #\n\ndef subscribe_channels(channels, client, session, sysname, sys_id):\n c = base_channels(client, session, sys_id)\n c.append(channels)\n return client.channel.software.setSystemChannels(session, sys_id, c)\n\n# ------------------------------------------------------- #\n\ndef unsubscribe_channels(channels, client, session, sysname, sys_id):\n c = base_channels(client, session, sys_id)\n c.remove(channels)\n return client.channel.software.setSystemChannels(session, sys_id, c)\n\n# ------------------------------------------------------- #\n\ndef base_channels(client, session, sys_id):\n basechan = client.channel.software.listSystemChannels(session, sys_id)\n try:\n chans = [item['label'] for item in basechan]\n except KeyError:\n chans = [item['channel_label'] for item in basechan]\n return chans\n\n# ------------------------------------------------------- #\n\n\ndef main():\n\n module = AnsibleModule(\n argument_spec = dict(\n state = dict(default='present', choices=['present', 'absent']),\n name = dict(required=True),\n sysname = dict(required=True),\n url = dict(required=True),\n user = dict(required=True),\n password = dict(required=True, aliases=['pwd']),\n )\n# supports_check_mode=True\n )\n\n state = module.params['state']\n channelname = module.params['name']\n systname = module.params['sysname']\n saturl = module.params['url']\n user = module.params['user']\n password = module.params['password']\n \n #initialize connection\n client = xmlrpclib.Server(saturl, verbose=0)\n session = client.auth.login(user, password)\n \n # get systemid\n sys_id = get_systemid(client, session, systname)\n\n # get channels for system\n chans = base_channels(client, session, sys_id)\n \n \n if state == 'present':\n if channelname in chans:\n module.exit_json(changed=False, msg=\"Channel %s already exists\" % channelname)\n else:\n subscribe_channels(channelname, client, session, systname, sys_id)\n module.exit_json(changed=True, msg=\"Channel %s added\" % channelname)\n\n if state == 'absent':\n if not channelname in chans:\n module.exit_json(changed=False, msg=\"Not subscribed to channel %s.\" % channelname)\n else:\n 
unsubscribe_channels(channelname, client, session, systname, sys_id)\n module.exit_json(changed=True, msg=\"Channel %s removed\" % channelname)\n\n client.auth.logout(session)\n\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nmain()\n", "path": "packaging/os/rhn_channel.py"}], "after_files": [{"content": "#!/usr/bin/python\n\n# (c) Vincent Van de Kussen\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: rhn_channel\nshort_description: Adds or removes Red Hat software channels\ndescription:\n - Adds or removes Red Hat software channels\nversion_added: \"1.1\"\nauthor: Vincent Van der Kussen\nnotes:\n - this module fetches the system id from RHN. \nrequirements:\n - none\noptions:\n name:\n description:\n - name of the software channel\n required: true\n default: null\n sysname:\n description:\n - name of the system as it is known in RHN/Satellite\n required: true\n default: null\n state:\n description:\n - whether the channel should be present or not\n required: false\n default: present\n url:\n description: \n - The full url to the RHN/Satellite api\n required: true\n user:\n description:\n - RHN/Satellite user\n required: true\n password:\n description:\n - \"the user's password\"\n required: true\n'''\n\nEXAMPLES = '''\n- rhn_channel: name=rhel-x86_64-server-v2vwin-6 sysname=server01 url=https://rhn.redhat.com/rpc/api user=rhnuser password=guessme\n'''\n\nimport xmlrpclib\nfrom operator import itemgetter\nimport re\n\n\n# ------------------------------------------------------- #\n\ndef get_systemid(client, session, sysname):\n systems = client.system.listUserSystems(session)\n for system in systems:\n if system.get('name') == sysname:\n idres = system.get('id')\n idd = int(idres)\n return idd\n\n# ------------------------------------------------------- #\n\n# unused:\n#\n#def get_localsystemid():\n# f = open(\"/etc/sysconfig/rhn/systemid\", \"r\")\n# content = f.read()\n# loc_id = re.search(r'\\b(ID-)(\\d{10})' ,content)\n# return loc_id.group(2)\n\n# ------------------------------------------------------- #\n\ndef subscribe_channels(channelname, client, session, sysname, sys_id):\n channels = base_channels(client, session, sys_id)\n channels.append(channelname)\n return client.system.setChildChannels(session, sys_id, channels)\n\n# ------------------------------------------------------- #\n\ndef unsubscribe_channels(channelname, client, session, sysname, sys_id):\n channels = base_channels(client, session, sys_id)\n channels.remove(channelname)\n return client.system.setChildChannels(session, sys_id, channels)\n\n# ------------------------------------------------------- #\n\ndef base_channels(client, session, sys_id):\n basechan = client.channel.software.listSystemChannels(session, sys_id)\n try:\n chans = [item['label'] for item in basechan]\n except KeyError:\n chans = [item['channel_label'] for item in basechan]\n return 
chans\n\n# ------------------------------------------------------- #\n\n\ndef main():\n\n module = AnsibleModule(\n argument_spec = dict(\n state = dict(default='present', choices=['present', 'absent']),\n name = dict(required=True),\n sysname = dict(required=True),\n url = dict(required=True),\n user = dict(required=True),\n password = dict(required=True, aliases=['pwd']),\n )\n# supports_check_mode=True\n )\n\n state = module.params['state']\n channelname = module.params['name']\n systname = module.params['sysname']\n saturl = module.params['url']\n user = module.params['user']\n password = module.params['password']\n \n #initialize connection\n client = xmlrpclib.Server(saturl, verbose=0)\n session = client.auth.login(user, password)\n \n # get systemid\n sys_id = get_systemid(client, session, systname)\n\n # get channels for system\n chans = base_channels(client, session, sys_id)\n \n \n if state == 'present':\n if channelname in chans:\n module.exit_json(changed=False, msg=\"Channel %s already exists\" % channelname)\n else:\n subscribe_channels(channelname, client, session, systname, sys_id)\n module.exit_json(changed=True, msg=\"Channel %s added\" % channelname)\n\n if state == 'absent':\n if not channelname in chans:\n module.exit_json(changed=False, msg=\"Not subscribed to channel %s.\" % channelname)\n else:\n unsubscribe_channels(channelname, client, session, systname, sys_id)\n module.exit_json(changed=True, msg=\"Channel %s removed\" % channelname)\n\n client.auth.logout(session)\n\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nmain()\n\n", "path": "packaging/os/rhn_channel.py"}]} | 2,034 | 315 |
gh_patches_debug_3973 | rasdani/github-patches | git_diff | encode__uvicorn-513 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--reload not working on Windows but --loop asyncio is
Duplicate of https://github.com/encode/uvicorn/issues/477, but opening a new topic to track the issue as requested. Especially useful since `--loop asyncio` does seem to work just fine.
@tomchristie suggests defaulting to asyncio on all platforms. I can make a PR for that. Do you still want to keep the optional uvloop dependency? (It's incompatible with Windows in any case, so perhaps it should be removed from the requirements file at least.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/loops/auto.py`
Content:
```
1 import sys
2
3
4 def auto_loop_setup():
5 try:
6 import uvloop
7 except ImportError as exc: # pragma: no cover
8 if sys.platform == "win32":
9 from uvicorn.loops.iocp import iocp_setup as loop_setup
10 else:
11 from uvicorn.loops.asyncio import asyncio_setup as loop_setup
12
13 loop_setup()
14 else:
15 from uvicorn.loops.uvloop import uvloop_setup
16
17 uvloop_setup()
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/uvicorn/loops/auto.py b/uvicorn/loops/auto.py
--- a/uvicorn/loops/auto.py
+++ b/uvicorn/loops/auto.py
@@ -5,10 +5,7 @@
try:
import uvloop
except ImportError as exc: # pragma: no cover
- if sys.platform == "win32":
- from uvicorn.loops.iocp import iocp_setup as loop_setup
- else:
- from uvicorn.loops.asyncio import asyncio_setup as loop_setup
+ from uvicorn.loops.asyncio import asyncio_setup as loop_setup
loop_setup()
else:
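Note: with the `win32`-specific branch removed, every non-uvloop environment goes through `asyncio_setup`. That helper is not shown in the excerpt; a minimal stand-in along the lines of the sketch below would be enough, though uvicorn's real implementation may differ.

```python
import asyncio


def asyncio_setup():
    # Install a fresh plain-asyncio event loop as the current loop.
    asyncio.set_event_loop(asyncio.new_event_loop())
```

The issue's workaround `--loop asyncio` selects this path explicitly; the patch simply makes it the default whenever uvloop is not importable.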
| {"golden_diff": "diff --git a/uvicorn/loops/auto.py b/uvicorn/loops/auto.py\n--- a/uvicorn/loops/auto.py\n+++ b/uvicorn/loops/auto.py\n@@ -5,10 +5,7 @@\n try:\n import uvloop\n except ImportError as exc: # pragma: no cover\n- if sys.platform == \"win32\":\n- from uvicorn.loops.iocp import iocp_setup as loop_setup\n- else:\n- from uvicorn.loops.asyncio import asyncio_setup as loop_setup\n+ from uvicorn.loops.asyncio import asyncio_setup as loop_setup\n \n loop_setup()\n else:\n", "issue": "--reload not working on Windows but --loop asyncio is\nDuplicate of https://github.com/encode/uvicorn/issues/477 but per request a new topic to track the issue. Especially useful since `--loop asyncio` does seem to work just fine.\r\n\r\n@tomchristie suggests to default to asyncio on all platforms. I can make a PR for that. Do you still want to keep the optional uvloop dependency? (It's incompatible with Windows in any case, so perhaps it should be removed from the requirements file at least.)\n", "before_files": [{"content": "import sys\n\n\ndef auto_loop_setup():\n try:\n import uvloop\n except ImportError as exc: # pragma: no cover\n if sys.platform == \"win32\":\n from uvicorn.loops.iocp import iocp_setup as loop_setup\n else:\n from uvicorn.loops.asyncio import asyncio_setup as loop_setup\n\n loop_setup()\n else:\n from uvicorn.loops.uvloop import uvloop_setup\n\n uvloop_setup()\n", "path": "uvicorn/loops/auto.py"}], "after_files": [{"content": "import sys\n\n\ndef auto_loop_setup():\n try:\n import uvloop\n except ImportError as exc: # pragma: no cover\n from uvicorn.loops.asyncio import asyncio_setup as loop_setup\n\n loop_setup()\n else:\n from uvicorn.loops.uvloop import uvloop_setup\n\n uvloop_setup()\n", "path": "uvicorn/loops/auto.py"}]} | 502 | 145 |
gh_patches_debug_195 | rasdani/github-patches | git_diff | conda__conda-3524 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Progress bar broken

```
C:\Users\Korijn\dev\myproject>conda info
Current conda install:
platform : win-64
conda version : 4.2.7
conda is private : False
conda-env version : 4.2.7
conda-build version : 2.0.1
python version : 3.5.1.final.0
requests version : 2.9.1
root environment : C:\Users\Korijn\Miniconda3 (writable)
default environment : C:\Users\Korijn\Miniconda3
envs directories : C:\Users\Korijn\Miniconda3\envs
package cache : C:\Users\Korijn\Miniconda3\pkgs
channel URLs : https://repo.continuum.io/pkgs/free/win-64/
https://repo.continuum.io/pkgs/free/noarch/
https://repo.continuum.io/pkgs/pro/win-64/
https://repo.continuum.io/pkgs/pro/noarch/
https://repo.continuum.io/pkgs/msys2/win-64/
https://repo.continuum.io/pkgs/msys2/noarch/
config file : C:\Users\Korijn\.condarc
offline mode : False
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda/base/constants.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 This file should hold almost all string literals and magic numbers used throughout the code base.
4 The exception is if a literal is specifically meant to be private to and isolated within a module.
5 """
6 from __future__ import absolute_import, division, print_function
7
8 import os
9 import sys
10 from logging import getLogger
11 from platform import machine
12
13 from enum import Enum
14
15 from conda._vendor.auxlib.collection import frozendict
16
17 log = getLogger(__name__)
18
19
20 class Arch(Enum):
21 x86 = 'x86'
22 x86_64 = 'x86_64'
23 armv6l = 'armv6l'
24 armv7l = 'armv7l'
25 ppc64le = 'ppc64le'
26
27 @classmethod
28 def from_sys(cls):
29 return cls[machine()]
30
31
32 class Platform(Enum):
33 linux = 'linux'
34 win = 'win32'
35 openbsd = 'openbsd5'
36 osx = 'darwin'
37
38 @classmethod
39 def from_sys(cls):
40 p = sys.platform
41 if p.startswith('linux'):
42 # Changed in version 2.7.3: Since lots of code check for sys.platform == 'linux2',
43 # and there is no essential change between Linux 2.x and 3.x, sys.platform is always
44 # set to 'linux2', even on Linux 3.x. In Python 3.3 and later, the value will always
45 # be set to 'linux'
46 p = 'linux'
47 return cls(p)
48
49 machine_bits = 8 * tuple.__itemsize__
50
51 # UID = os.getuid()
52 PWD = os.getcwd()
53 CONDA = 'CONDA'
54 CONDA_ = 'CONDA_'
55 conda = 'conda'
56
57 SEARCH_PATH = (
58 '/etc/conda/condarc',
59 '/etc/conda/condarc.d/',
60 '/var/lib/conda/condarc',
61 '/var/lib/conda/condarc.d/',
62 '$CONDA_ROOT/condarc',
63 '$CONDA_ROOT/.condarc',
64 '$CONDA_ROOT/condarc.d/',
65 '~/.conda/condarc',
66 '~/.conda/condarc.d/',
67 '~/.condarc',
68 '$CONDA_PREFIX/.condarc',
69 '$CONDA_PREFIX/condarc.d/',
70 '$CONDARC',
71 )
72
73 DEFAULT_CHANNEL_ALIAS = 'https://conda.anaconda.org/'
74
75 PLATFORM_DIRECTORIES = ("linux-64", "linux-32",
76 "win-64", "win-32",
77 "osx-64", "noarch")
78
79 RECOGNIZED_URL_SCHEMES = ('http', 'https', 'ftp', 's3', 'file')
80
81
82 if Platform.from_sys() is Platform.win:
83 DEFAULT_CHANNELS = ('https://repo.continuum.io/pkgs/free',
84 'https://repo.continuum.io/pkgs/pro',
85 'https://repo.continuum.io/pkgs/msys2',
86 )
87 else:
88 DEFAULT_CHANNELS = ('https://repo.continuum.io/pkgs/free',
89 'https://repo.continuum.io/pkgs/pro',
90 )
91
92 ROOT_ENV_NAME = 'root'
93
94 EMPTY_LIST = ()
95 EMPTY_MAP = frozendict()
96
97
98 class _Null(object):
99 def __nonzero__(self):
100 return False
101
102 NULL = _Null()
103
104 UTF8 = 'UTF-8'
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda/base/constants.py b/conda/base/constants.py
--- a/conda/base/constants.py
+++ b/conda/base/constants.py
@@ -99,6 +99,9 @@
def __nonzero__(self):
return False
+ def __bool__(self):
+ return False
+
NULL = _Null()
UTF8 = 'UTF-8'
| {"golden_diff": "diff --git a/conda/base/constants.py b/conda/base/constants.py\n--- a/conda/base/constants.py\n+++ b/conda/base/constants.py\n@@ -99,6 +99,9 @@\n def __nonzero__(self):\n return False\n \n+ def __bool__(self):\n+ return False\n+\n NULL = _Null()\n \n UTF8 = 'UTF-8'\n", "issue": "Progress bar broken\n\n\n```\nC:\\Users\\Korijn\\dev\\myproject>conda info\nCurrent conda install:\n\n platform : win-64\n conda version : 4.2.7\n conda is private : False\n conda-env version : 4.2.7\n conda-build version : 2.0.1\n python version : 3.5.1.final.0\n requests version : 2.9.1\n root environment : C:\\Users\\Korijn\\Miniconda3 (writable)\n default environment : C:\\Users\\Korijn\\Miniconda3\n envs directories : C:\\Users\\Korijn\\Miniconda3\\envs\n package cache : C:\\Users\\Korijn\\Miniconda3\\pkgs\n channel URLs : https://repo.continuum.io/pkgs/free/win-64/\n https://repo.continuum.io/pkgs/free/noarch/\n https://repo.continuum.io/pkgs/pro/win-64/\n https://repo.continuum.io/pkgs/pro/noarch/\n https://repo.continuum.io/pkgs/msys2/win-64/\n https://repo.continuum.io/pkgs/msys2/noarch/\n config file : C:\\Users\\Korijn\\.condarc\n offline mode : False\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nThis file should hold almost all string literals and magic numbers used throughout the code base.\nThe exception is if a literal is specifically meant to be private to and isolated within a module.\n\"\"\"\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\nfrom logging import getLogger\nfrom platform import machine\n\nfrom enum import Enum\n\nfrom conda._vendor.auxlib.collection import frozendict\n\nlog = getLogger(__name__)\n\n\nclass Arch(Enum):\n x86 = 'x86'\n x86_64 = 'x86_64'\n armv6l = 'armv6l'\n armv7l = 'armv7l'\n ppc64le = 'ppc64le'\n\n @classmethod\n def from_sys(cls):\n return cls[machine()]\n\n\nclass Platform(Enum):\n linux = 'linux'\n win = 'win32'\n openbsd = 'openbsd5'\n osx = 'darwin'\n\n @classmethod\n def from_sys(cls):\n p = sys.platform\n if p.startswith('linux'):\n # Changed in version 2.7.3: Since lots of code check for sys.platform == 'linux2',\n # and there is no essential change between Linux 2.x and 3.x, sys.platform is always\n # set to 'linux2', even on Linux 3.x. 
In Python 3.3 and later, the value will always\n # be set to 'linux'\n p = 'linux'\n return cls(p)\n\nmachine_bits = 8 * tuple.__itemsize__\n\n# UID = os.getuid()\nPWD = os.getcwd()\nCONDA = 'CONDA'\nCONDA_ = 'CONDA_'\nconda = 'conda'\n\nSEARCH_PATH = (\n '/etc/conda/condarc',\n '/etc/conda/condarc.d/',\n '/var/lib/conda/condarc',\n '/var/lib/conda/condarc.d/',\n '$CONDA_ROOT/condarc',\n '$CONDA_ROOT/.condarc',\n '$CONDA_ROOT/condarc.d/',\n '~/.conda/condarc',\n '~/.conda/condarc.d/',\n '~/.condarc',\n '$CONDA_PREFIX/.condarc',\n '$CONDA_PREFIX/condarc.d/',\n '$CONDARC',\n)\n\nDEFAULT_CHANNEL_ALIAS = 'https://conda.anaconda.org/'\n\nPLATFORM_DIRECTORIES = (\"linux-64\", \"linux-32\",\n \"win-64\", \"win-32\",\n \"osx-64\", \"noarch\")\n\nRECOGNIZED_URL_SCHEMES = ('http', 'https', 'ftp', 's3', 'file')\n\n\nif Platform.from_sys() is Platform.win:\n DEFAULT_CHANNELS = ('https://repo.continuum.io/pkgs/free',\n 'https://repo.continuum.io/pkgs/pro',\n 'https://repo.continuum.io/pkgs/msys2',\n )\nelse:\n DEFAULT_CHANNELS = ('https://repo.continuum.io/pkgs/free',\n 'https://repo.continuum.io/pkgs/pro',\n )\n\nROOT_ENV_NAME = 'root'\n\nEMPTY_LIST = ()\nEMPTY_MAP = frozendict()\n\n\nclass _Null(object):\n def __nonzero__(self):\n return False\n\nNULL = _Null()\n\nUTF8 = 'UTF-8'\n", "path": "conda/base/constants.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nThis file should hold almost all string literals and magic numbers used throughout the code base.\nThe exception is if a literal is specifically meant to be private to and isolated within a module.\n\"\"\"\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\nfrom logging import getLogger\nfrom platform import machine\n\nfrom enum import Enum\n\nfrom conda._vendor.auxlib.collection import frozendict\n\nlog = getLogger(__name__)\n\n\nclass Arch(Enum):\n x86 = 'x86'\n x86_64 = 'x86_64'\n armv6l = 'armv6l'\n armv7l = 'armv7l'\n ppc64le = 'ppc64le'\n\n @classmethod\n def from_sys(cls):\n return cls[machine()]\n\n\nclass Platform(Enum):\n linux = 'linux'\n win = 'win32'\n openbsd = 'openbsd5'\n osx = 'darwin'\n\n @classmethod\n def from_sys(cls):\n p = sys.platform\n if p.startswith('linux'):\n # Changed in version 2.7.3: Since lots of code check for sys.platform == 'linux2',\n # and there is no essential change between Linux 2.x and 3.x, sys.platform is always\n # set to 'linux2', even on Linux 3.x. 
In Python 3.3 and later, the value will always\n # be set to 'linux'\n p = 'linux'\n return cls(p)\n\nmachine_bits = 8 * tuple.__itemsize__\n\n# UID = os.getuid()\nPWD = os.getcwd()\nCONDA = 'CONDA'\nCONDA_ = 'CONDA_'\nconda = 'conda'\n\nSEARCH_PATH = (\n '/etc/conda/condarc',\n '/etc/conda/condarc.d/',\n '/var/lib/conda/condarc',\n '/var/lib/conda/condarc.d/',\n '$CONDA_ROOT/condarc',\n '$CONDA_ROOT/.condarc',\n '$CONDA_ROOT/condarc.d/',\n '~/.conda/condarc',\n '~/.conda/condarc.d/',\n '~/.condarc',\n '$CONDA_PREFIX/.condarc',\n '$CONDA_PREFIX/condarc.d/',\n '$CONDARC',\n)\n\nDEFAULT_CHANNEL_ALIAS = 'https://conda.anaconda.org/'\n\nPLATFORM_DIRECTORIES = (\"linux-64\", \"linux-32\",\n \"win-64\", \"win-32\",\n \"osx-64\", \"noarch\")\n\nRECOGNIZED_URL_SCHEMES = ('http', 'https', 'ftp', 's3', 'file')\n\n\nif Platform.from_sys() is Platform.win:\n DEFAULT_CHANNELS = ('https://repo.continuum.io/pkgs/free',\n 'https://repo.continuum.io/pkgs/pro',\n 'https://repo.continuum.io/pkgs/msys2',\n )\nelse:\n DEFAULT_CHANNELS = ('https://repo.continuum.io/pkgs/free',\n 'https://repo.continuum.io/pkgs/pro',\n )\n\nROOT_ENV_NAME = 'root'\n\nEMPTY_LIST = ()\nEMPTY_MAP = frozendict()\n\n\nclass _Null(object):\n def __nonzero__(self):\n return False\n\n def __bool__(self):\n return False\n\nNULL = _Null()\n\nUTF8 = 'UTF-8'\n", "path": "conda/base/constants.py"}]} | 1,586 | 84 |
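Illustrative aside on the record above: the patch adds `__bool__` next to `__nonzero__` on the `_Null` sentinel. Python 3 only consults `__bool__`, so without it the sentinel evaluates truthy and any `if value:` style check treats NULL like a real value. A self-contained sketch of the difference:

```python
# Why the one-line fix matters: __nonzero__ is the Python 2 truthiness hook
# and is ignored by Python 3, which looks for __bool__ instead.
class NullPy2Only(object):
    def __nonzero__(self):
        return False


class NullPortable(object):
    def __nonzero__(self):  # Python 2
        return False

    def __bool__(self):     # Python 3
        return False


print(bool(NullPy2Only()))   # True on Python 3 -- the buggy behaviour
print(bool(NullPortable()))  # False on both Python 2 and Python 3
```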
gh_patches_debug_14579 | rasdani/github-patches | git_diff | boto__boto-1543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support additional regions in CloudSearch
It appears that boto currently only supports two regions. It would be nice to support the more recently added ones.
Current list of CloudSearch regions is here: http://aws.amazon.com/cloudsearch/faqs/#Which_AWS_regions_is_CloudSearch_available_in?
I think it just needs to be added to `boto/cloudsearch/__init__.py`:
    return [RegionInfo(name='us-east-1',
                       endpoint='cloudsearch.us-east-1.amazonaws.com',
                       connection_cls=boto.cloudsearch.layer1.Layer1),
            RegionInfo(name='eu-west-1',
                       endpoint='cloudsearch.eu-west-1.amazonaws.com',
                       connection_cls=boto.cloudsearch.layer1.Layer1),
And yes, you can use another region by changing the default field in .boto
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `boto/cloudsearch/__init__.py`
Content:
```
1 # Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
2 # Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
3 # All Rights Reserved
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining a
6 # copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish, dis-
9 # tribute, sublicense, and/or sell copies of the Software, and to permit
10 # persons to whom the Software is furnished to do so, subject to the fol-
11 # lowing conditions:
12 #
13 # The above copyright notice and this permission notice shall be included
14 # in all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
17 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
18 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
19 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
20 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
22 # IN THE SOFTWARE.
23 #
24 from boto.ec2.regioninfo import RegionInfo
25
26
27 def regions():
28 """
29 Get all available regions for the Amazon CloudSearch service.
30
31 :rtype: list
32 :return: A list of :class:`boto.regioninfo.RegionInfo`
33 """
34 import boto.cloudsearch.layer1
35 return [RegionInfo(name='us-east-1',
36 endpoint='cloudsearch.us-east-1.amazonaws.com',
37 connection_cls=boto.cloudsearch.layer1.Layer1),
38 RegionInfo(name='eu-west-1',
39 endpoint='cloudsearch.eu-west-1.amazonaws.com',
40 connection_cls=boto.cloudsearch.layer1.Layer1),
41 ]
42
43
44 def connect_to_region(region_name, **kw_params):
45 for region in regions():
46 if region.name == region_name:
47 return region.connect(**kw_params)
48 return None
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/boto/cloudsearch/__init__.py b/boto/cloudsearch/__init__.py
--- a/boto/cloudsearch/__init__.py
+++ b/boto/cloudsearch/__init__.py
@@ -38,6 +38,16 @@
RegionInfo(name='eu-west-1',
endpoint='cloudsearch.eu-west-1.amazonaws.com',
connection_cls=boto.cloudsearch.layer1.Layer1),
+ RegionInfo(name='us-west-1',
+ endpoint='cloudsearch.us-west-1.amazonaws.com',
+ connection_cls=boto.cloudsearch.layer1.Layer1),
+ RegionInfo(name='us-west-2',
+ endpoint='cloudsearch.us-west-2.amazonaws.com',
+ connection_cls=boto.cloudsearch.layer1.Layer1),
+ RegionInfo(name='ap-southeast-1',
+ endpoint='cloudsearch.ap-southeast-1.amazonaws.com',
+ connection_cls=boto.cloudsearch.layer1.Layer1),
+
]
| {"golden_diff": "diff --git a/boto/cloudsearch/__init__.py b/boto/cloudsearch/__init__.py\n--- a/boto/cloudsearch/__init__.py\n+++ b/boto/cloudsearch/__init__.py\n@@ -38,6 +38,16 @@\n RegionInfo(name='eu-west-1',\n endpoint='cloudsearch.eu-west-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n+ RegionInfo(name='us-west-1',\n+ endpoint='cloudsearch.us-west-1.amazonaws.com',\n+ connection_cls=boto.cloudsearch.layer1.Layer1),\n+ RegionInfo(name='us-west-2',\n+ endpoint='cloudsearch.us-west-2.amazonaws.com',\n+ connection_cls=boto.cloudsearch.layer1.Layer1),\n+ RegionInfo(name='ap-southeast-1',\n+ endpoint='cloudsearch.ap-southeast-1.amazonaws.com',\n+ connection_cls=boto.cloudsearch.layer1.Layer1),\n+\n ]\n", "issue": "Support addtional regions in CloudSearch \nIt appears that boto currently only supports two regions. It would be nice to support the more recently added ones. \n\nCurrent list of CloudSearch regions is here: http://aws.amazon.com/cloudsearch/faqs/#Which_AWS_regions_is_CloudSearch_available_in?\n\nI think it just needs to be added to boto/cloudsearch/**init**.py:\n\nreturn [RegionInfo(name='us-east-1',\n endpoint='cloudsearch.us-east-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n RegionInfo(name='eu-west-1',\n endpoint='cloudsearch.eu-west-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n\nAnd yes, you can use another region by changing the default field in .boto \n\n", "before_files": [{"content": "# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.\n# All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n#\nfrom boto.ec2.regioninfo import RegionInfo\n\n\ndef regions():\n \"\"\"\n Get all available regions for the Amazon CloudSearch service.\n\n :rtype: list\n :return: A list of :class:`boto.regioninfo.RegionInfo`\n \"\"\"\n import boto.cloudsearch.layer1\n return [RegionInfo(name='us-east-1',\n endpoint='cloudsearch.us-east-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n RegionInfo(name='eu-west-1',\n endpoint='cloudsearch.eu-west-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n ]\n\n\ndef connect_to_region(region_name, **kw_params):\n for region in regions():\n if region.name == region_name:\n return region.connect(**kw_params)\n return None\n", "path": "boto/cloudsearch/__init__.py"}], "after_files": [{"content": "# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.\n# All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n#\nfrom boto.ec2.regioninfo import RegionInfo\n\n\ndef regions():\n \"\"\"\n Get all available regions for the Amazon CloudSearch service.\n\n :rtype: list\n :return: A list of :class:`boto.regioninfo.RegionInfo`\n \"\"\"\n import boto.cloudsearch.layer1\n return [RegionInfo(name='us-east-1',\n endpoint='cloudsearch.us-east-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n RegionInfo(name='eu-west-1',\n endpoint='cloudsearch.eu-west-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n RegionInfo(name='us-west-1',\n endpoint='cloudsearch.us-west-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n RegionInfo(name='us-west-2',\n endpoint='cloudsearch.us-west-2.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n RegionInfo(name='ap-southeast-1',\n endpoint='cloudsearch.ap-southeast-1.amazonaws.com',\n connection_cls=boto.cloudsearch.layer1.Layer1),\n\n ]\n\n\ndef connect_to_region(region_name, **kw_params):\n for region in regions():\n if region.name == region_name:\n return region.connect(**kw_params)\n return None\n", "path": "boto/cloudsearch/__init__.py"}]} | 964 | 209 |
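Illustrative aside on the record above: the patch only extends the list that `connect_to_region` walks, so previously unsupported region names stop falling through to `None`. A simplified stand-alone sketch of that lookup (plain namedtuples as stand-ins, not boto's real classes):

```python
# Simplified model of boto.cloudsearch.regions()/connect_to_region();
# the endpoints follow the pattern in the diff, everything else is a stand-in.
from collections import namedtuple

RegionInfo = namedtuple('RegionInfo', ['name', 'endpoint'])


def regions():
    return [
        RegionInfo('us-east-1', 'cloudsearch.us-east-1.amazonaws.com'),
        RegionInfo('eu-west-1', 'cloudsearch.eu-west-1.amazonaws.com'),
        RegionInfo('us-west-2', 'cloudsearch.us-west-2.amazonaws.com'),  # added by the patch
    ]


def connect_to_region(region_name):
    for region in regions():
        if region.name == region_name:
            return region
    return None


print(connect_to_region('us-west-2'))  # resolved after the patch
print(connect_to_region('sa-east-1'))  # still None -> unsupported
```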
gh_patches_debug_23513 | rasdani/github-patches | git_diff | pyca__cryptography-2385 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3.6 incompatibility because of getargspec
Hi, on testing pywikibot on 3.6 I noticed that your package uses `inspect.getargspec`, which has been removed in 3.6 after being deprecated in 3.5 (although the documentation says it has been deprecated since Python 3).
I know that Python 3.6 hasn't been released yet but I just wanted to let you know so you have enough time to prevent issues before it is released.
Here the Travis build: https://travis-ci.org/xZise/pywikibot-core/jobs/82446934
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/utils.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8 import binascii
9 import inspect
10 import struct
11 import sys
12 import warnings
13
14
15 DeprecatedIn09 = DeprecationWarning
16 DeprecatedIn10 = PendingDeprecationWarning
17
18
19 def read_only_property(name):
20 return property(lambda self: getattr(self, name))
21
22
23 def register_interface(iface):
24 def register_decorator(klass):
25 verify_interface(iface, klass)
26 iface.register(klass)
27 return klass
28 return register_decorator
29
30
31 if hasattr(int, "from_bytes"):
32 int_from_bytes = int.from_bytes
33 else:
34 def int_from_bytes(data, byteorder, signed=False):
35 assert byteorder == 'big'
36 assert not signed
37
38 if len(data) % 4 != 0:
39 data = (b'\x00' * (4 - (len(data) % 4))) + data
40
41 result = 0
42
43 while len(data) > 0:
44 digit, = struct.unpack('>I', data[:4])
45 result = (result << 32) + digit
46 data = data[4:]
47
48 return result
49
50
51 def int_to_bytes(integer):
52 hex_string = '%x' % integer
53 n = len(hex_string)
54 return binascii.unhexlify(hex_string.zfill(n + (n & 1)))
55
56
57 class InterfaceNotImplemented(Exception):
58 pass
59
60
61 def verify_interface(iface, klass):
62 for method in iface.__abstractmethods__:
63 if not hasattr(klass, method):
64 raise InterfaceNotImplemented(
65 "{0} is missing a {1!r} method".format(klass, method)
66 )
67 if isinstance(getattr(iface, method), abc.abstractproperty):
68 # Can't properly verify these yet.
69 continue
70 spec = inspect.getargspec(getattr(iface, method))
71 actual = inspect.getargspec(getattr(klass, method))
72 if spec != actual:
73 raise InterfaceNotImplemented(
74 "{0}.{1}'s signature differs from the expected. Expected: "
75 "{2!r}. Received: {3!r}".format(
76 klass, method, spec, actual
77 )
78 )
79
80
81 if sys.version_info >= (2, 7):
82 def bit_length(x):
83 return x.bit_length()
84 else:
85 def bit_length(x):
86 return len(bin(x)) - (2 + (x <= 0))
87
88
89 class _DeprecatedValue(object):
90 def __init__(self, value, message, warning_class):
91 self.value = value
92 self.message = message
93 self.warning_class = warning_class
94
95
96 class _ModuleWithDeprecations(object):
97 def __init__(self, module):
98 self.__dict__["_module"] = module
99
100 def __getattr__(self, attr):
101 obj = getattr(self._module, attr)
102 if isinstance(obj, _DeprecatedValue):
103 warnings.warn(obj.message, obj.warning_class, stacklevel=2)
104 obj = obj.value
105 return obj
106
107 def __setattr__(self, attr, value):
108 setattr(self._module, attr, value)
109
110 def __dir__(self):
111 return ["_module"] + dir(self._module)
112
113
114 def deprecated(value, module_name, message, warning_class):
115 module = sys.modules[module_name]
116 if not isinstance(module, _ModuleWithDeprecations):
117 sys.modules[module_name] = module = _ModuleWithDeprecations(module)
118 return _DeprecatedValue(value, message, warning_class)
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py
--- a/src/cryptography/utils.py
+++ b/src/cryptography/utils.py
@@ -58,6 +58,12 @@
pass
+if hasattr(inspect, "signature"):
+ signature = inspect.signature
+else:
+ signature = inspect.getargspec
+
+
def verify_interface(iface, klass):
for method in iface.__abstractmethods__:
if not hasattr(klass, method):
@@ -67,13 +73,13 @@
if isinstance(getattr(iface, method), abc.abstractproperty):
# Can't properly verify these yet.
continue
- spec = inspect.getargspec(getattr(iface, method))
- actual = inspect.getargspec(getattr(klass, method))
- if spec != actual:
+ sig = signature(getattr(iface, method))
+ actual = signature(getattr(klass, method))
+ if sig != actual:
raise InterfaceNotImplemented(
"{0}.{1}'s signature differs from the expected. Expected: "
"{2!r}. Received: {3!r}".format(
- klass, method, spec, actual
+ klass, method, sig, actual
)
)
| {"golden_diff": "diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py\n--- a/src/cryptography/utils.py\n+++ b/src/cryptography/utils.py\n@@ -58,6 +58,12 @@\n pass\n \n \n+if hasattr(inspect, \"signature\"):\n+ signature = inspect.signature\n+else:\n+ signature = inspect.getargspec\n+\n+\n def verify_interface(iface, klass):\n for method in iface.__abstractmethods__:\n if not hasattr(klass, method):\n@@ -67,13 +73,13 @@\n if isinstance(getattr(iface, method), abc.abstractproperty):\n # Can't properly verify these yet.\n continue\n- spec = inspect.getargspec(getattr(iface, method))\n- actual = inspect.getargspec(getattr(klass, method))\n- if spec != actual:\n+ sig = signature(getattr(iface, method))\n+ actual = signature(getattr(klass, method))\n+ if sig != actual:\n raise InterfaceNotImplemented(\n \"{0}.{1}'s signature differs from the expected. Expected: \"\n \"{2!r}. Received: {3!r}\".format(\n- klass, method, spec, actual\n+ klass, method, sig, actual\n )\n )\n", "issue": "Python 3.6 incompatibility because of getargspec\nHi, on testing pywikibot on 3.6 I noticed that your package uses `inspect.getargspec` which has been removed in 3.6 after it has been deprecated in 3.5 (although the documentation mentions it has been deprecated since Python 3).\n\nI know that Python 3.6 hasn't been released yet but I just wanted to let you know so you have enough time to prevent issues before it is released.\n\nHere the Travis build: https://travis-ci.org/xZise/pywikibot-core/jobs/82446934\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nimport binascii\nimport inspect\nimport struct\nimport sys\nimport warnings\n\n\nDeprecatedIn09 = DeprecationWarning\nDeprecatedIn10 = PendingDeprecationWarning\n\n\ndef read_only_property(name):\n return property(lambda self: getattr(self, name))\n\n\ndef register_interface(iface):\n def register_decorator(klass):\n verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n\n\nif hasattr(int, \"from_bytes\"):\n int_from_bytes = int.from_bytes\nelse:\n def int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n data = data[4:]\n\n return result\n\n\ndef int_to_bytes(integer):\n hex_string = '%x' % integer\n n = len(hex_string)\n return binascii.unhexlify(hex_string.zfill(n + (n & 1)))\n\n\nclass InterfaceNotImplemented(Exception):\n pass\n\n\ndef verify_interface(iface, klass):\n for method in iface.__abstractmethods__:\n if not hasattr(klass, method):\n raise InterfaceNotImplemented(\n \"{0} is missing a {1!r} method\".format(klass, method)\n )\n if isinstance(getattr(iface, method), abc.abstractproperty):\n # Can't properly verify these yet.\n continue\n spec = inspect.getargspec(getattr(iface, method))\n actual = inspect.getargspec(getattr(klass, method))\n if spec != actual:\n raise InterfaceNotImplemented(\n \"{0}.{1}'s signature differs from the expected. Expected: \"\n \"{2!r}. 
Received: {3!r}\".format(\n klass, method, spec, actual\n )\n )\n\n\nif sys.version_info >= (2, 7):\n def bit_length(x):\n return x.bit_length()\nelse:\n def bit_length(x):\n return len(bin(x)) - (2 + (x <= 0))\n\n\nclass _DeprecatedValue(object):\n def __init__(self, value, message, warning_class):\n self.value = value\n self.message = message\n self.warning_class = warning_class\n\n\nclass _ModuleWithDeprecations(object):\n def __init__(self, module):\n self.__dict__[\"_module\"] = module\n\n def __getattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n obj = obj.value\n return obj\n\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n\n\ndef deprecated(value, module_name, message, warning_class):\n module = sys.modules[module_name]\n if not isinstance(module, _ModuleWithDeprecations):\n sys.modules[module_name] = module = _ModuleWithDeprecations(module)\n return _DeprecatedValue(value, message, warning_class)\n", "path": "src/cryptography/utils.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nimport binascii\nimport inspect\nimport struct\nimport sys\nimport warnings\n\n\nDeprecatedIn09 = DeprecationWarning\nDeprecatedIn10 = PendingDeprecationWarning\n\n\ndef read_only_property(name):\n return property(lambda self: getattr(self, name))\n\n\ndef register_interface(iface):\n def register_decorator(klass):\n verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n\n\nif hasattr(int, \"from_bytes\"):\n int_from_bytes = int.from_bytes\nelse:\n def int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n data = data[4:]\n\n return result\n\n\ndef int_to_bytes(integer):\n hex_string = '%x' % integer\n n = len(hex_string)\n return binascii.unhexlify(hex_string.zfill(n + (n & 1)))\n\n\nclass InterfaceNotImplemented(Exception):\n pass\n\n\nif hasattr(inspect, \"signature\"):\n signature = inspect.signature\nelse:\n signature = inspect.getargspec\n\n\ndef verify_interface(iface, klass):\n for method in iface.__abstractmethods__:\n if not hasattr(klass, method):\n raise InterfaceNotImplemented(\n \"{0} is missing a {1!r} method\".format(klass, method)\n )\n if isinstance(getattr(iface, method), abc.abstractproperty):\n # Can't properly verify these yet.\n continue\n sig = signature(getattr(iface, method))\n actual = signature(getattr(klass, method))\n if sig != actual:\n raise InterfaceNotImplemented(\n \"{0}.{1}'s signature differs from the expected. Expected: \"\n \"{2!r}. 
Received: {3!r}\".format(\n klass, method, sig, actual\n )\n )\n\n\nif sys.version_info >= (2, 7):\n def bit_length(x):\n return x.bit_length()\nelse:\n def bit_length(x):\n return len(bin(x)) - (2 + (x <= 0))\n\n\nclass _DeprecatedValue(object):\n def __init__(self, value, message, warning_class):\n self.value = value\n self.message = message\n self.warning_class = warning_class\n\n\nclass _ModuleWithDeprecations(object):\n def __init__(self, module):\n self.__dict__[\"_module\"] = module\n\n def __getattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n obj = obj.value\n return obj\n\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n\n\ndef deprecated(value, module_name, message, warning_class):\n module = sys.modules[module_name]\n if not isinstance(module, _ModuleWithDeprecations):\n sys.modules[module_name] = module = _ModuleWithDeprecations(module)\n return _DeprecatedValue(value, message, warning_class)\n", "path": "src/cryptography/utils.py"}]} | 1,443 | 272 |
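Illustrative aside on the record above: the patch swaps the direct `getargspec` calls for a small shim that prefers `inspect.signature` when it exists. A self-contained sketch of that shim (the two example functions are made-up stand-ins for an interface method and its implementation):

```python
# Prefer inspect.signature (Python 3.3+); only very old interpreters fall
# back to getargspec, so Python 3.6 never touches the removed API.
import inspect

if hasattr(inspect, "signature"):
    signature = inspect.signature
else:  # legacy Python only
    signature = inspect.getargspec


def iface_method(self, data, extra=None):
    """Stand-in for an abstract interface method."""


def impl_method(self, data, extra=None):
    """Stand-in for a concrete implementation."""


# Signature objects compare equal when the parameter lists match.
print(signature(iface_method) == signature(impl_method))  # True
```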
gh_patches_debug_2763 | rasdani/github-patches | git_diff | google__timesketch-949 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Yeti analyzer crash
From the celery log:
```
[2019-07-17 09:11:37,463: ERROR/ForkPoolWorker-1] Task timesketch.lib.tasks.run_sketch_analyzer[46d24990-12df-4c88-a79b-a3b5f1c04b01] raised unexpected: TypeError("unsupported operand type(s) for +: 'NoneType' and 'unicode'",)
Traceback (most recent call last):
File "/opt/timesketch/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/__init__.py", line 181, in __call__
return TaskBase.__call__(self, *args, **kwargs)
File "/opt/timesketch/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/tasks.py", line 339, in run_sketch_analyzer
result = analyzer.run_wrapper()
File "/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/analyzers/interface.py", line 37, in wrapper
func_return = func(self, *args, **kwargs)
File "/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/analyzers/interface.py", line 403, in run_wrapper
result = self.run()
File "/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/analyzers/yetiindicators.py", line 92, in run
self.get_intrusion_sets()
File "/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/analyzers/yetiindicators.py", line 75, in get_intrusion_sets
self.yeti_api_root + '/entities/filter/',
TypeError: unsupported operand type(s) for +: 'NoneType' and 'unicode'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `timesketch/lib/analyzers/yetiindicators.py`
Content:
```
1 """Index analyzer plugin for Yeti indicators."""
2 from __future__ import unicode_literals
3
4 from flask import current_app
5 import requests
6
7 from timesketch.lib.analyzers import interface
8 from timesketch.lib.analyzers import manager
9 from timesketch.lib import emojis
10
11
12 def build_query_for_indicators(indicators):
13 """Builds an Elasticsearch query for Yeti indicator patterns.
14
15 Prepends and appends .* to the regex to be able to search within a field.
16
17 Returns:
18 The resulting ES query string.
19 """
20 query = []
21 for domain in indicators:
22 query.append('domain:/.*{0:s}.*/'.format(domain['pattern']))
23 return ' OR '.join(query)
24
25
26 class YetiIndicators(interface.BaseSketchAnalyzer):
27 """Index analyzer for Yeti threat intel indicators."""
28
29 NAME = 'yetiindicators'
30 DEPENDENCIES = frozenset(['domain'])
31
32 def __init__(self, index_name, sketch_id):
33 """Initialize the Index Analyzer.
34
35 Args:
36 index_name: Elasticsearch index name
37 """
38 super(YetiIndicators, self).__init__(index_name, sketch_id)
39 self.intel = {}
40 self.yeti_api_root = current_app.config.get('YETI_API_ROOT')
41 self.yeti_api_key = current_app.config.get('YETI_API_KEY')
42 self.yeti_indicator_labels = current_app.config.get(
43 'YETI_INDICATOR_LABELS', [])
44
45 def get_bad_domain_indicators(self, entity_id):
46 """Retrieves a list of indicators associated to a given entity.
47
48 Args:
49 entity_id (str): STIX ID of the entity to get associated inticators
50 from. (typically an Intrusion Set)
51
52 Returns:
53 A list of JSON objects describing a Yeti Indicator.
54 """
55 results = requests.post(
56 self.yeti_api_root + '/entities/{0:s}/neighbors/'.format(entity_id),
57 headers={'X-Yeti-API': self.yeti_api_key},
58 )
59 if results.status_code != 200:
60 return []
61 domain_indicators = []
62 for neighbor in results.json().get('vertices', {}).values():
63 if neighbor['type'] == 'x-regex' and \
64 set(self.yeti_indicator_labels) <= set(neighbor['labels']):
65 domain_indicators.append(neighbor)
66
67 return domain_indicators
68
69 def get_intrusion_sets(self):
70 """Populates the intel attribute with data from Yeti.
71
72 Retrieved intel consists of Intrusion sets and associated Indicators.
73 """
74 results = requests.post(
75 self.yeti_api_root + '/entities/filter/',
76 json={'name': '', 'type': 'intrusion-set'},
77 headers={'X-Yeti-API': self.yeti_api_key},
78 )
79 if results.status_code != 200:
80 return
81 self.intel = {item['id']: item for item in results.json()}
82 for _id in self.intel:
83 self.intel[_id]['indicators'] = self.get_bad_domain_indicators(_id)
84
85 def run(self):
86 """Entry point for the analyzer.
87
88 Returns:
89 String with summary of the analyzer result
90 """
91
92 self.get_intrusion_sets()
93 actors_found = []
94 for intrusion_set in self.intel.values():
95 if not intrusion_set['indicators']:
96 continue
97
98 found = False
99
100 for indicator in intrusion_set['indicators']:
101 query = build_query_for_indicators([indicator])
102
103 events = self.event_stream(query_string=query,
104 return_fields=[])
105
106 name = intrusion_set['name']
107 for event in events:
108 found = True
109 event.add_emojis([emojis.get_emoji('SKULL')])
110 event.add_tags([name])
111 event.commit()
112 event.add_comment(
113 'Indicator "{0:s}" found for actor "{1:s}"'.format(
114 indicator['name'], name))
115
116 if found:
117 actors_found.append(name)
118 self.sketch.add_view(
119 'Domain activity for actor {0:s}'.format(name),
120 self.NAME,
121 query_string=query)
122
123 if actors_found:
124 return '{0:d} actors were found! [{1:s}]'.format(
125 len(actors_found), ', '.join(actors_found))
126 return 'No indicators were found in the timeline.'
127
128
129 manager.AnalysisManager.register_analyzer(YetiIndicators)
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/timesketch/lib/analyzers/yetiindicators.py b/timesketch/lib/analyzers/yetiindicators.py
--- a/timesketch/lib/analyzers/yetiindicators.py
+++ b/timesketch/lib/analyzers/yetiindicators.py
@@ -88,6 +88,8 @@
Returns:
String with summary of the analyzer result
"""
+ if not self.yeti_api_root or not self.yeti_api_key:
+ return 'No Yeti configuration settings found, aborting.'
self.get_intrusion_sets()
actors_found = []
| {"golden_diff": "diff --git a/timesketch/lib/analyzers/yetiindicators.py b/timesketch/lib/analyzers/yetiindicators.py\n--- a/timesketch/lib/analyzers/yetiindicators.py\n+++ b/timesketch/lib/analyzers/yetiindicators.py\n@@ -88,6 +88,8 @@\n Returns:\n String with summary of the analyzer result\n \"\"\"\n+ if not self.yeti_api_root or not self.yeti_api_key:\n+ return 'No Yeti configuration settings found, aborting.'\n \n self.get_intrusion_sets()\n actors_found = []\n", "issue": "Yeti analyzer crash\nFrom the celery log:\r\n\r\n```\r\n[2019-07-17 09:11:37,463: ERROR/ForkPoolWorker-1] Task timesketch.lib.tasks.run_sketch_analyzer[46d24990-12df-4c88-a79b-a3b5f1c04b01] raised unexpected: TypeError(\"unsupported operand type(s) for +: 'NoneType' and 'unicode'\",)\r\nTraceback (most recent call last):\r\n File \"/opt/timesketch/env/local/lib/python2.7/site-packages/celery/app/trace.py\", line 374, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n File \"/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/__init__.py\", line 181, in __call__\r\n return TaskBase.__call__(self, *args, **kwargs)\r\n File \"/opt/timesketch/env/local/lib/python2.7/site-packages/celery/app/trace.py\", line 629, in __protected_call__\r\n return self.run(*args, **kwargs)\r\n File \"/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/tasks.py\", line 339, in run_sketch_analyzer\r\n result = analyzer.run_wrapper()\r\n File \"/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/analyzers/interface.py\", line 37, in wrapper\r\n func_return = func(self, *args, **kwargs)\r\n File \"/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/analyzers/interface.py\", line 403, in run_wrapper\r\n result = self.run()\r\n File \"/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/analyzers/yetiindicators.py\", line 92, in run\r\n self.get_intrusion_sets()\r\n File \"/opt/timesketch/env/local/lib/python2.7/site-packages/timesketch/lib/analyzers/yetiindicators.py\", line 75, in get_intrusion_sets\r\n self.yeti_api_root + '/entities/filter/',\r\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'unicode'\r\n```\n", "before_files": [{"content": "\"\"\"Index analyzer plugin for Yeti indicators.\"\"\"\nfrom __future__ import unicode_literals\n\nfrom flask import current_app\nimport requests\n\nfrom timesketch.lib.analyzers import interface\nfrom timesketch.lib.analyzers import manager\nfrom timesketch.lib import emojis\n\n\ndef build_query_for_indicators(indicators):\n \"\"\"Builds an Elasticsearch query for Yeti indicator patterns.\n\n Prepends and appends .* to the regex to be able to search within a field.\n\n Returns:\n The resulting ES query string.\n \"\"\"\n query = []\n for domain in indicators:\n query.append('domain:/.*{0:s}.*/'.format(domain['pattern']))\n return ' OR '.join(query)\n\n\nclass YetiIndicators(interface.BaseSketchAnalyzer):\n \"\"\"Index analyzer for Yeti threat intel indicators.\"\"\"\n\n NAME = 'yetiindicators'\n DEPENDENCIES = frozenset(['domain'])\n\n def __init__(self, index_name, sketch_id):\n \"\"\"Initialize the Index Analyzer.\n\n Args:\n index_name: Elasticsearch index name\n \"\"\"\n super(YetiIndicators, self).__init__(index_name, sketch_id)\n self.intel = {}\n self.yeti_api_root = current_app.config.get('YETI_API_ROOT')\n self.yeti_api_key = current_app.config.get('YETI_API_KEY')\n self.yeti_indicator_labels = current_app.config.get(\n 'YETI_INDICATOR_LABELS', [])\n\n def get_bad_domain_indicators(self, 
entity_id):\n \"\"\"Retrieves a list of indicators associated to a given entity.\n\n Args:\n entity_id (str): STIX ID of the entity to get associated inticators\n from. (typically an Intrusion Set)\n\n Returns:\n A list of JSON objects describing a Yeti Indicator.\n \"\"\"\n results = requests.post(\n self.yeti_api_root + '/entities/{0:s}/neighbors/'.format(entity_id),\n headers={'X-Yeti-API': self.yeti_api_key},\n )\n if results.status_code != 200:\n return []\n domain_indicators = []\n for neighbor in results.json().get('vertices', {}).values():\n if neighbor['type'] == 'x-regex' and \\\n set(self.yeti_indicator_labels) <= set(neighbor['labels']):\n domain_indicators.append(neighbor)\n\n return domain_indicators\n\n def get_intrusion_sets(self):\n \"\"\"Populates the intel attribute with data from Yeti.\n\n Retrieved intel consists of Intrusion sets and associated Indicators.\n \"\"\"\n results = requests.post(\n self.yeti_api_root + '/entities/filter/',\n json={'name': '', 'type': 'intrusion-set'},\n headers={'X-Yeti-API': self.yeti_api_key},\n )\n if results.status_code != 200:\n return\n self.intel = {item['id']: item for item in results.json()}\n for _id in self.intel:\n self.intel[_id]['indicators'] = self.get_bad_domain_indicators(_id)\n\n def run(self):\n \"\"\"Entry point for the analyzer.\n\n Returns:\n String with summary of the analyzer result\n \"\"\"\n\n self.get_intrusion_sets()\n actors_found = []\n for intrusion_set in self.intel.values():\n if not intrusion_set['indicators']:\n continue\n\n found = False\n\n for indicator in intrusion_set['indicators']:\n query = build_query_for_indicators([indicator])\n\n events = self.event_stream(query_string=query,\n return_fields=[])\n\n name = intrusion_set['name']\n for event in events:\n found = True\n event.add_emojis([emojis.get_emoji('SKULL')])\n event.add_tags([name])\n event.commit()\n event.add_comment(\n 'Indicator \"{0:s}\" found for actor \"{1:s}\"'.format(\n indicator['name'], name))\n\n if found:\n actors_found.append(name)\n self.sketch.add_view(\n 'Domain activity for actor {0:s}'.format(name),\n self.NAME,\n query_string=query)\n\n if actors_found:\n return '{0:d} actors were found! 
[{1:s}]'.format(\n len(actors_found), ', '.join(actors_found))\n return 'No indicators were found in the timeline.'\n\n\nmanager.AnalysisManager.register_analyzer(YetiIndicators)\n", "path": "timesketch/lib/analyzers/yetiindicators.py"}], "after_files": [{"content": "\"\"\"Index analyzer plugin for Yeti indicators.\"\"\"\nfrom __future__ import unicode_literals\n\nfrom flask import current_app\nimport requests\n\nfrom timesketch.lib.analyzers import interface\nfrom timesketch.lib.analyzers import manager\nfrom timesketch.lib import emojis\n\n\ndef build_query_for_indicators(indicators):\n \"\"\"Builds an Elasticsearch query for Yeti indicator patterns.\n\n Prepends and appends .* to the regex to be able to search within a field.\n\n Returns:\n The resulting ES query string.\n \"\"\"\n query = []\n for domain in indicators:\n query.append('domain:/.*{0:s}.*/'.format(domain['pattern']))\n return ' OR '.join(query)\n\n\nclass YetiIndicators(interface.BaseSketchAnalyzer):\n \"\"\"Index analyzer for Yeti threat intel indicators.\"\"\"\n\n NAME = 'yetiindicators'\n DEPENDENCIES = frozenset(['domain'])\n\n def __init__(self, index_name, sketch_id):\n \"\"\"Initialize the Index Analyzer.\n\n Args:\n index_name: Elasticsearch index name\n \"\"\"\n super(YetiIndicators, self).__init__(index_name, sketch_id)\n self.intel = {}\n self.yeti_api_root = current_app.config.get('YETI_API_ROOT')\n self.yeti_api_key = current_app.config.get('YETI_API_KEY')\n self.yeti_indicator_labels = current_app.config.get(\n 'YETI_INDICATOR_LABELS', [])\n\n def get_bad_domain_indicators(self, entity_id):\n \"\"\"Retrieves a list of indicators associated to a given entity.\n\n Args:\n entity_id (str): STIX ID of the entity to get associated inticators\n from. (typically an Intrusion Set)\n\n Returns:\n A list of JSON objects describing a Yeti Indicator.\n \"\"\"\n results = requests.post(\n self.yeti_api_root + '/entities/{0:s}/neighbors/'.format(entity_id),\n headers={'X-Yeti-API': self.yeti_api_key},\n )\n if results.status_code != 200:\n return []\n domain_indicators = []\n for neighbor in results.json().get('vertices', {}).values():\n if neighbor['type'] == 'x-regex' and \\\n set(self.yeti_indicator_labels) <= set(neighbor['labels']):\n domain_indicators.append(neighbor)\n\n return domain_indicators\n\n def get_intrusion_sets(self):\n \"\"\"Populates the intel attribute with data from Yeti.\n\n Retrieved intel consists of Intrusion sets and associated Indicators.\n \"\"\"\n results = requests.post(\n self.yeti_api_root + '/entities/filter/',\n json={'name': '', 'type': 'intrusion-set'},\n headers={'X-Yeti-API': self.yeti_api_key},\n )\n if results.status_code != 200:\n return\n self.intel = {item['id']: item for item in results.json()}\n for _id in self.intel:\n self.intel[_id]['indicators'] = self.get_bad_domain_indicators(_id)\n\n def run(self):\n \"\"\"Entry point for the analyzer.\n\n Returns:\n String with summary of the analyzer result\n \"\"\"\n if not self.yeti_api_root or not self.yeti_api_key:\n return 'No Yeti configuration settings found, aborting.'\n\n self.get_intrusion_sets()\n actors_found = []\n for intrusion_set in self.intel.values():\n if not intrusion_set['indicators']:\n continue\n\n found = False\n\n for indicator in intrusion_set['indicators']:\n query = build_query_for_indicators([indicator])\n\n events = self.event_stream(query_string=query,\n return_fields=[])\n\n name = intrusion_set['name']\n for event in events:\n found = True\n event.add_emojis([emojis.get_emoji('SKULL')])\n 
event.add_tags([name])\n event.commit()\n event.add_comment(\n 'Indicator \"{0:s}\" found for actor \"{1:s}\"'.format(\n indicator['name'], name))\n\n if found:\n actors_found.append(name)\n self.sketch.add_view(\n 'Domain activity for actor {0:s}'.format(name),\n self.NAME,\n query_string=query)\n\n if actors_found:\n return '{0:d} actors were found! [{1:s}]'.format(\n len(actors_found), ', '.join(actors_found))\n return 'No indicators were found in the timeline.'\n\n\nmanager.AnalysisManager.register_analyzer(YetiIndicators)\n", "path": "timesketch/lib/analyzers/yetiindicators.py"}]} | 2,013 | 134 |
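Illustrative aside on the record above: the crash is a plain `None + str` because the analyzer assumed `YETI_API_ROOT`/`YETI_API_KEY` were configured; the patch turns that into an early, readable abort. A minimal stand-alone sketch of the guard-clause pattern (not Timesketch's real analyzer class):

```python
# Fail fast when required configuration is missing instead of letting
# `None + '/entities/filter/'` raise a TypeError deep inside run().
def run_analyzer(api_root, api_key):
    if not api_root or not api_key:
        return 'No Yeti configuration settings found, aborting.'
    endpoint = api_root + '/entities/filter/'
    return 'Would query {0:s}'.format(endpoint)


print(run_analyzer(None, None))                       # graceful message
print(run_analyzer('https://yeti.example/api', 'k'))  # normal path
```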
gh_patches_debug_20930 | rasdani/github-patches | git_diff | wright-group__WrightTools-543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
multiplication doesn't seem to work in d.transform
`d.transform('w2+w2+w2', 'w1', 'd2')` works.
`d.transform('3*w2', 'w1', 'd2')` does not work (even with varying spacing around the multiplication operator)
Part of the error that is raised:
```
File "/home/darien/source/WrightTools/WrightTools/data/_data.py", line 1306, in transform
axis = current.get(expression, Axis(self, expression))
File "/home/darien/source/WrightTools/WrightTools/data/_axis.py", line 53, in __init__
self.units = self.variables[0].units
File "/home/darien/source/WrightTools/WrightTools/data/_axis.py", line 152, in variables
return self._variables
AttributeError: 'Axis' object has no attribute '_variables'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `WrightTools/data/_axis.py`
Content:
```
1 """Axis class and associated."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import re
8 import numexpr
9 import operator
10 import functools
11
12 import numpy as np
13
14 from .. import exceptions as wt_exceptions
15 from .. import kit as wt_kit
16 from .. import units as wt_units
17
18
19 # --- define --------------------------------------------------------------------------------------
20
21
22 operator_to_identifier = {}
23 operator_to_identifier['/'] = '__d__'
24 operator_to_identifier['='] = '__e__'
25 operator_to_identifier['-'] = '__m__'
26 operator_to_identifier['+'] = '__p__'
27 operator_to_identifier['*'] = '__t__'
28 identifier_to_operator = {value: key for key, value in operator_to_identifier.items()}
29 operators = ''.join(operator_to_identifier.keys())
30
31
32 # --- class ---------------------------------------------------------------------------------------
33
34
35 class Axis(object):
36 """Axis class."""
37
38 def __init__(self, parent, expression, units=None):
39 """Data axis.
40
41 Parameters
42 ----------
43 parent : WrightTools.Data
44 Parent data object.
45 expression : string
46 Axis expression.
47 units : string (optional)
48 Axis units. Default is None.
49 """
50 self.parent = parent
51 self.expression = expression
52 if units is None:
53 self.units = self.variables[0].units
54 else:
55 self.units = units
56
57 def __getitem__(self, index):
58 vs = {}
59 for variable in self.variables:
60 arr = variable[index]
61 vs[variable.natural_name] = wt_units.converter(arr, variable.units, self.units)
62 return numexpr.evaluate(self.expression.split('=')[0], local_dict=vs)
63
64 def __repr__(self):
65 return '<WrightTools.Axis {0} ({1}) at {2}>'.format(self.expression, str(self.units),
66 id(self))
67
68 @property
69 def _leaf(self):
70 out = self.expression
71 if self.units is not None:
72 out += ' ({0}) {1}'.format(self.units, self.shape)
73 return out
74
75 @property
76 def full(self):
77 arr = self[:]
78 for i in range(arr.ndim):
79 if arr.shape[i] == 1:
80 arr = np.repeat(arr, self.parent.shape[i], axis=i)
81 return arr
82
83 @property
84 def identity(self):
85 return self.natural_name + ' {%s}' % self.units
86
87 @property
88 def label(self):
89 symbol = wt_units.get_symbol(self.units)
90 label = r'$\mathsf{' + self.expression
91 for v in self.variables:
92 label = label.replace(v.natural_name, '%s_{%s}' % (symbol, v.label))
93 if self.units_kind:
94 units_dictionary = getattr(wt_units, self.units_kind)
95 label += r'\,'
96 label += r'\left('
97 label += units_dictionary[self.units][2]
98 label += r'\right)'
99 else:
100 pass
101 label += r'}$'
102 return label
103
104 @property
105 def natural_name(self):
106 name = self.expression.strip()
107 for op in operators:
108 name = name.replace(op, operator_to_identifier[op])
109 return name
110
111 @property
112 def ndim(self):
113 """Get number of dimensions."""
114 try:
115 assert self._ndim is not None
116 except (AssertionError, AttributeError):
117 self._ndim = self.variables[0].ndim
118 finally:
119 return self._ndim
120
121 @property
122 def points(self):
123 """Squeezed array."""
124 return np.squeeze(self[:])
125
126 @property
127 def shape(self):
128 """Shape."""
129 return wt_kit.joint_shape(*self.variables)
130
131 @property
132 def size(self):
133 """Size."""
134 return functools.reduce(operator.mul, self.shape)
135
136 @property
137 def units_kind(self):
138 """Units kind."""
139 return wt_units.kind(self.units)
140
141 @property
142 def variables(self):
143 """Variables."""
144 try:
145 assert self._variables is not None
146 except (AssertionError, AttributeError):
147 pattern = '|'.join(map(re.escape, operators))
148 keys = re.split(pattern, self.expression)
149 indices = [self.parent.variable_names.index(key) for key in keys]
150 self._variables = [self.parent.variables[i] for i in indices]
151 finally:
152 return self._variables
153
154 def convert(self, destination_units, *, convert_variables=False):
155 """Convert axis to destination_units.
156
157 Parameters
158 ----------
159 destination_units : string
160 Destination units.
161 convert_variables : boolean (optional)
162 Toggle conversion of stored arrays. Default is False.
163 """
164 if not wt_units.is_valid_conversion(self.units, destination_units):
165 kind = wt_units.kind(self.units)
166 valid = list(wt_units.dicts[kind].keys())
167 raise wt_exceptions.UnitsError(valid, destination_units)
168 if convert_variables:
169 for v in self.variables:
170 v.convert(destination_units)
171 self.units = destination_units
172
173 def max(self):
174 """Axis max."""
175 return np.max(self[:])
176
177 def min(self):
178 """Axis min."""
179 return np.min(self[:])
180
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/WrightTools/data/_axis.py b/WrightTools/data/_axis.py
--- a/WrightTools/data/_axis.py
+++ b/WrightTools/data/_axis.py
@@ -106,7 +106,7 @@
name = self.expression.strip()
for op in operators:
name = name.replace(op, operator_to_identifier[op])
- return name
+ return wt_kit.string2identifier(name)
@property
def ndim(self):
@@ -146,7 +146,10 @@
except (AssertionError, AttributeError):
pattern = '|'.join(map(re.escape, operators))
keys = re.split(pattern, self.expression)
- indices = [self.parent.variable_names.index(key) for key in keys]
+ indices = []
+ for key in keys:
+ if key in self.parent.variable_names:
+ indices.append(self.parent.variable_names.index(key))
self._variables = [self.parent.variables[i] for i in indices]
finally:
return self._variables
| {"golden_diff": "diff --git a/WrightTools/data/_axis.py b/WrightTools/data/_axis.py\n--- a/WrightTools/data/_axis.py\n+++ b/WrightTools/data/_axis.py\n@@ -106,7 +106,7 @@\n name = self.expression.strip()\n for op in operators:\n name = name.replace(op, operator_to_identifier[op])\n- return name\n+ return wt_kit.string2identifier(name)\n \n @property\n def ndim(self):\n@@ -146,7 +146,10 @@\n except (AssertionError, AttributeError):\n pattern = '|'.join(map(re.escape, operators))\n keys = re.split(pattern, self.expression)\n- indices = [self.parent.variable_names.index(key) for key in keys]\n+ indices = []\n+ for key in keys:\n+ if key in self.parent.variable_names:\n+ indices.append(self.parent.variable_names.index(key))\n self._variables = [self.parent.variables[i] for i in indices]\n finally:\n return self._variables\n", "issue": "multiplication doesn't seem to work in d.transform\n`d.transform('w2+w2+w2', 'w1', 'd2')` works.\r\n\r\n`d.transform('3*w2', 'w1', 'd2')` does not work (even with varying spacing around the multiplication operator)\r\nPart of the error that is raised:\r\n```\r\n File \"/home/darien/source/WrightTools/WrightTools/data/_data.py\", line 1306, in transform\r\n axis = current.get(expression, Axis(self, expression))\r\n\r\n File \"/home/darien/source/WrightTools/WrightTools/data/_axis.py\", line 53, in __init__\r\n self.units = self.variables[0].units\r\n\r\n File \"/home/darien/source/WrightTools/WrightTools/data/_axis.py\", line 152, in variables\r\n return self._variables\r\n\r\nAttributeError: 'Axis' object has no attribute '_variables'\r\n```\n", "before_files": [{"content": "\"\"\"Axis class and associated.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport re\nimport numexpr\nimport operator\nimport functools\n\nimport numpy as np\n\nfrom .. import exceptions as wt_exceptions\nfrom .. import kit as wt_kit\nfrom .. import units as wt_units\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\noperator_to_identifier = {}\noperator_to_identifier['/'] = '__d__'\noperator_to_identifier['='] = '__e__'\noperator_to_identifier['-'] = '__m__'\noperator_to_identifier['+'] = '__p__'\noperator_to_identifier['*'] = '__t__'\nidentifier_to_operator = {value: key for key, value in operator_to_identifier.items()}\noperators = ''.join(operator_to_identifier.keys())\n\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass Axis(object):\n \"\"\"Axis class.\"\"\"\n\n def __init__(self, parent, expression, units=None):\n \"\"\"Data axis.\n\n Parameters\n ----------\n parent : WrightTools.Data\n Parent data object.\n expression : string\n Axis expression.\n units : string (optional)\n Axis units. 
Default is None.\n \"\"\"\n self.parent = parent\n self.expression = expression\n if units is None:\n self.units = self.variables[0].units\n else:\n self.units = units\n\n def __getitem__(self, index):\n vs = {}\n for variable in self.variables:\n arr = variable[index]\n vs[variable.natural_name] = wt_units.converter(arr, variable.units, self.units)\n return numexpr.evaluate(self.expression.split('=')[0], local_dict=vs)\n\n def __repr__(self):\n return '<WrightTools.Axis {0} ({1}) at {2}>'.format(self.expression, str(self.units),\n id(self))\n\n @property\n def _leaf(self):\n out = self.expression\n if self.units is not None:\n out += ' ({0}) {1}'.format(self.units, self.shape)\n return out\n\n @property\n def full(self):\n arr = self[:]\n for i in range(arr.ndim):\n if arr.shape[i] == 1:\n arr = np.repeat(arr, self.parent.shape[i], axis=i)\n return arr\n\n @property\n def identity(self):\n return self.natural_name + ' {%s}' % self.units\n\n @property\n def label(self):\n symbol = wt_units.get_symbol(self.units)\n label = r'$\\mathsf{' + self.expression\n for v in self.variables:\n label = label.replace(v.natural_name, '%s_{%s}' % (symbol, v.label))\n if self.units_kind:\n units_dictionary = getattr(wt_units, self.units_kind)\n label += r'\\,'\n label += r'\\left('\n label += units_dictionary[self.units][2]\n label += r'\\right)'\n else:\n pass\n label += r'}$'\n return label\n\n @property\n def natural_name(self):\n name = self.expression.strip()\n for op in operators:\n name = name.replace(op, operator_to_identifier[op])\n return name\n\n @property\n def ndim(self):\n \"\"\"Get number of dimensions.\"\"\"\n try:\n assert self._ndim is not None\n except (AssertionError, AttributeError):\n self._ndim = self.variables[0].ndim\n finally:\n return self._ndim\n\n @property\n def points(self):\n \"\"\"Squeezed array.\"\"\"\n return np.squeeze(self[:])\n\n @property\n def shape(self):\n \"\"\"Shape.\"\"\"\n return wt_kit.joint_shape(*self.variables)\n\n @property\n def size(self):\n \"\"\"Size.\"\"\"\n return functools.reduce(operator.mul, self.shape)\n\n @property\n def units_kind(self):\n \"\"\"Units kind.\"\"\"\n return wt_units.kind(self.units)\n\n @property\n def variables(self):\n \"\"\"Variables.\"\"\"\n try:\n assert self._variables is not None\n except (AssertionError, AttributeError):\n pattern = '|'.join(map(re.escape, operators))\n keys = re.split(pattern, self.expression)\n indices = [self.parent.variable_names.index(key) for key in keys]\n self._variables = [self.parent.variables[i] for i in indices]\n finally:\n return self._variables\n\n def convert(self, destination_units, *, convert_variables=False):\n \"\"\"Convert axis to destination_units.\n\n Parameters\n ----------\n destination_units : string\n Destination units.\n convert_variables : boolean (optional)\n Toggle conversion of stored arrays. 
Default is False.\n \"\"\"\n if not wt_units.is_valid_conversion(self.units, destination_units):\n kind = wt_units.kind(self.units)\n valid = list(wt_units.dicts[kind].keys())\n raise wt_exceptions.UnitsError(valid, destination_units)\n if convert_variables:\n for v in self.variables:\n v.convert(destination_units)\n self.units = destination_units\n\n def max(self):\n \"\"\"Axis max.\"\"\"\n return np.max(self[:])\n\n def min(self):\n \"\"\"Axis min.\"\"\"\n return np.min(self[:])\n", "path": "WrightTools/data/_axis.py"}], "after_files": [{"content": "\"\"\"Axis class and associated.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport re\nimport numexpr\nimport operator\nimport functools\n\nimport numpy as np\n\nfrom .. import exceptions as wt_exceptions\nfrom .. import kit as wt_kit\nfrom .. import units as wt_units\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\noperator_to_identifier = {}\noperator_to_identifier['/'] = '__d__'\noperator_to_identifier['='] = '__e__'\noperator_to_identifier['-'] = '__m__'\noperator_to_identifier['+'] = '__p__'\noperator_to_identifier['*'] = '__t__'\nidentifier_to_operator = {value: key for key, value in operator_to_identifier.items()}\noperators = ''.join(operator_to_identifier.keys())\n\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass Axis(object):\n \"\"\"Axis class.\"\"\"\n\n def __init__(self, parent, expression, units=None):\n \"\"\"Data axis.\n\n Parameters\n ----------\n parent : WrightTools.Data\n Parent data object.\n expression : string\n Axis expression.\n units : string (optional)\n Axis units. Default is None.\n \"\"\"\n self.parent = parent\n self.expression = expression\n if units is None:\n self.units = self.variables[0].units\n else:\n self.units = units\n\n def __getitem__(self, index):\n vs = {}\n for variable in self.variables:\n arr = variable[index]\n vs[variable.natural_name] = wt_units.converter(arr, variable.units, self.units)\n return numexpr.evaluate(self.expression.split('=')[0], local_dict=vs)\n\n def __repr__(self):\n return '<WrightTools.Axis {0} ({1}) at {2}>'.format(self.expression, str(self.units),\n id(self))\n\n @property\n def _leaf(self):\n out = self.expression\n if self.units is not None:\n out += ' ({0}) {1}'.format(self.units, self.shape)\n return out\n\n @property\n def full(self):\n arr = self[:]\n for i in range(arr.ndim):\n if arr.shape[i] == 1:\n arr = np.repeat(arr, self.parent.shape[i], axis=i)\n return arr\n\n @property\n def identity(self):\n return self.natural_name + ' {%s}' % self.units\n\n @property\n def label(self):\n symbol = wt_units.get_symbol(self.units)\n label = r'$\\mathsf{' + self.expression\n for v in self.variables:\n label = label.replace(v.natural_name, '%s_{%s}' % (symbol, v.label))\n if self.units_kind:\n units_dictionary = getattr(wt_units, self.units_kind)\n label += r'\\,'\n label += r'\\left('\n label += units_dictionary[self.units][2]\n label += r'\\right)'\n else:\n pass\n label += r'}$'\n return label\n\n @property\n def natural_name(self):\n name = self.expression.strip()\n for op in operators:\n name = name.replace(op, operator_to_identifier[op])\n return wt_kit.string2identifier(name)\n\n @property\n def ndim(self):\n \"\"\"Get number of dimensions.\"\"\"\n try:\n assert self._ndim is not None\n except (AssertionError, AttributeError):\n self._ndim = self.variables[0].ndim\n 
finally:\n return self._ndim\n\n @property\n def points(self):\n \"\"\"Squeezed array.\"\"\"\n return np.squeeze(self[:])\n\n @property\n def shape(self):\n \"\"\"Shape.\"\"\"\n return wt_kit.joint_shape(*self.variables)\n\n @property\n def size(self):\n \"\"\"Size.\"\"\"\n return functools.reduce(operator.mul, self.shape)\n\n @property\n def units_kind(self):\n \"\"\"Units kind.\"\"\"\n return wt_units.kind(self.units)\n\n @property\n def variables(self):\n \"\"\"Variables.\"\"\"\n try:\n assert self._variables is not None\n except (AssertionError, AttributeError):\n pattern = '|'.join(map(re.escape, operators))\n keys = re.split(pattern, self.expression)\n indices = []\n for key in keys:\n if key in self.parent.variable_names:\n indices.append(self.parent.variable_names.index(key))\n self._variables = [self.parent.variables[i] for i in indices]\n finally:\n return self._variables\n\n def convert(self, destination_units, *, convert_variables=False):\n \"\"\"Convert axis to destination_units.\n\n Parameters\n ----------\n destination_units : string\n Destination units.\n convert_variables : boolean (optional)\n Toggle conversion of stored arrays. Default is False.\n \"\"\"\n if not wt_units.is_valid_conversion(self.units, destination_units):\n kind = wt_units.kind(self.units)\n valid = list(wt_units.dicts[kind].keys())\n raise wt_exceptions.UnitsError(valid, destination_units)\n if convert_variables:\n for v in self.variables:\n v.convert(destination_units)\n self.units = destination_units\n\n def max(self):\n \"\"\"Axis max.\"\"\"\n return np.max(self[:])\n\n def min(self):\n \"\"\"Axis min.\"\"\"\n return np.min(self[:])\n", "path": "WrightTools/data/_axis.py"}]} | 1,995 | 226 |
gh_patches_debug_30023 | rasdani/github-patches | git_diff | scalableminds__webknossos-libs-1083 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Webknossos CLI download annotation
## Context
- Affected library: webknossos
When using the webknossos CLI to download an annotation, the download command is overloaded to handle both dataset downloads and annotation downloads. An annotation should be downloaded when no dataset is found; that fallback is triggered by an AssertionError. Currently, however, no AssertionError is thrown -- an UnexpectedStatusError is raised instead, so the fallback never runs.
## Expected Behavior
When a URL to an Annotation is given, the Annotation should be downloaded.
--- END ISSUE ---
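
(Editor's note: a minimal, self-contained sketch of the URL-based dispatch the issue is asking for. The regex patterns, function name, and return values below are illustrative placeholders, not the actual webknossos client API.)

```python
import re

# Stand-in patterns: the real client defines its own dataset/annotation URL regexes.
DATASET_URL_PATTERN = re.compile(r"https://[^/]+/datasets/[^/]+/[^/]+")
ANNOTATION_URL_PATTERN = re.compile(r"https://[^/]+/annotations/[^/]+")


def dispatch_download(url: str) -> str:
    """Decide up front whether a URL refers to a dataset or an annotation,
    instead of attempting a dataset download and reacting to the exception."""
    if DATASET_URL_PATTERN.match(url):
        return "dataset"      # here one would call Dataset.download(...)
    if ANNOTATION_URL_PATTERN.match(url):
        return "annotation"   # here one would call Annotation.download(...).save(...)
    raise RuntimeError("The provided URL does not lead to a dataset or annotation.")


print(dispatch_download("https://webknossos.org/annotations/12345"))  # -> annotation
```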
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `webknossos/webknossos/cli/download.py`
Content:
```
1 """This module takes care of downloading WEBKNOSSOS datasets."""
2
3 from typing import Any, List, Optional
4
5 import typer
6 from typing_extensions import Annotated
7
8 from ..annotation import Annotation
9 from ..client import webknossos_context
10 from ..dataset import Dataset
11 from ..geometry import BoundingBox, Mag
12 from ._utils import parse_bbox, parse_mag, parse_path
13
14
15 def main(
16 *,
17 target: Annotated[
18 Any,
19 typer.Argument(
20 show_default=False,
21 help="Path to save your WEBKNOSSOS dataset.",
22 parser=parse_path,
23 ),
24 ],
25 url: Annotated[
26 str,
27 typer.Option(
28 help="URL of your dataset or your annotation.",
29 ),
30 ],
31 token: Annotated[
32 Optional[str],
33 typer.Option(
34 help="Authentication token for WEBKNOSSOS instance "
35 "(https://webknossos.org/auth/token).",
36 rich_help_panel="WEBKNOSSOS context",
37 envvar="WK_TOKEN",
38 ),
39 ] = None,
40 bbox: Annotated[
41 Optional[BoundingBox],
42 typer.Option(
43 rich_help_panel="Partial download",
44 help="Bounding box that should be downloaded. "
45 "The input format is x,y,z,width,height,depth. "
46 "Should be a comma separated string (e.g. 0,0,0,10,10,10).",
47 parser=parse_bbox,
48 metavar="BBOX",
49 ),
50 ] = None,
51 layer: Annotated[
52 Optional[List[str]],
53 typer.Option(
54 rich_help_panel="Partial download",
55 help="Layers that should be downloaded. "
56 "For multiple layers type: --layer color --layer segmentation",
57 ),
58 ] = None,
59 mag: Annotated[
60 Optional[List[Mag]],
61 typer.Option(
62 rich_help_panel="Partial download",
63 help="Mags that should be downloaded. "
64 "Should be number or minus separated string (e.g. 2 or 2-2-2). "
65 "For multiple mags type: --mag 1 --mag 2",
66 parser=parse_mag,
67 metavar="MAG",
68 ),
69 ] = None,
70 ) -> None:
71 """Download a dataset from a WEBKNOSSOS server."""
72
73 layers = layer if layer else None
74 mags = mag if mag else None
75
76 with webknossos_context(token=token):
77 try:
78 Dataset.download(
79 dataset_name_or_url=url,
80 path=target,
81 bbox=bbox,
82 layers=layers,
83 mags=mags,
84 )
85 except AssertionError:
86 Annotation.download(annotation_id_or_url=url).save(target)
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/webknossos/webknossos/cli/download.py b/webknossos/webknossos/cli/download.py
--- a/webknossos/webknossos/cli/download.py
+++ b/webknossos/webknossos/cli/download.py
@@ -1,13 +1,15 @@
"""This module takes care of downloading WEBKNOSSOS datasets."""
+import re
from typing import Any, List, Optional
import typer
from typing_extensions import Annotated
-from ..annotation import Annotation
+from ..annotation.annotation import _ANNOTATION_URL_REGEX, Annotation
from ..client import webknossos_context
-from ..dataset import Dataset
+from ..client._resolve_short_link import resolve_short_link
+from ..dataset.dataset import _DATASET_URL_REGEX, Dataset
from ..geometry import BoundingBox, Mag
from ._utils import parse_bbox, parse_mag, parse_path
@@ -72,9 +74,10 @@
layers = layer if layer else None
mags = mag if mag else None
+ url = resolve_short_link(url)
with webknossos_context(token=token):
- try:
+ if re.match(_DATASET_URL_REGEX, url):
Dataset.download(
dataset_name_or_url=url,
path=target,
@@ -82,5 +85,13 @@
layers=layers,
mags=mags,
)
- except AssertionError:
+ elif re.match(_ANNOTATION_URL_REGEX, url):
Annotation.download(annotation_id_or_url=url).save(target)
+ else:
+ raise RuntimeError(
+ "The provided URL does not lead to a dataset or annotation."
+ )
+
+
+if __name__ == "__main__":
+ typer.run(main)
| {"golden_diff": "diff --git a/webknossos/webknossos/cli/download.py b/webknossos/webknossos/cli/download.py\n--- a/webknossos/webknossos/cli/download.py\n+++ b/webknossos/webknossos/cli/download.py\n@@ -1,13 +1,15 @@\n \"\"\"This module takes care of downloading WEBKNOSSOS datasets.\"\"\"\n \n+import re\n from typing import Any, List, Optional\n \n import typer\n from typing_extensions import Annotated\n \n-from ..annotation import Annotation\n+from ..annotation.annotation import _ANNOTATION_URL_REGEX, Annotation\n from ..client import webknossos_context\n-from ..dataset import Dataset\n+from ..client._resolve_short_link import resolve_short_link\n+from ..dataset.dataset import _DATASET_URL_REGEX, Dataset\n from ..geometry import BoundingBox, Mag\n from ._utils import parse_bbox, parse_mag, parse_path\n \n@@ -72,9 +74,10 @@\n \n layers = layer if layer else None\n mags = mag if mag else None\n+ url = resolve_short_link(url)\n \n with webknossos_context(token=token):\n- try:\n+ if re.match(_DATASET_URL_REGEX, url):\n Dataset.download(\n dataset_name_or_url=url,\n path=target,\n@@ -82,5 +85,13 @@\n layers=layers,\n mags=mags,\n )\n- except AssertionError:\n+ elif re.match(_ANNOTATION_URL_REGEX, url):\n Annotation.download(annotation_id_or_url=url).save(target)\n+ else:\n+ raise RuntimeError(\n+ \"The provided URL does not lead to a dataset or annotation.\"\n+ )\n+\n+\n+if __name__ == \"__main__\":\n+ typer.run(main)\n", "issue": "Webknossos CLI download annotation\n## Context\r\n- Affected library: webknossos\r\nWhen using the webknossos CLI to download an annotation, the download command is overloaded to take care of dataset downloads and annotation downloads. An annotation should be downloaded when no dataset is found. This is triggered by an AssertionError. Currently no assertion is thrown. Instead, an UnexpectedStatusError is raised.\r\n\r\n## Expected Behavior\r\nWhen a URL to an Annotation is given, the Annotation should be downloaded.\r\n\r\n\n", "before_files": [{"content": "\"\"\"This module takes care of downloading WEBKNOSSOS datasets.\"\"\"\n\nfrom typing import Any, List, Optional\n\nimport typer\nfrom typing_extensions import Annotated\n\nfrom ..annotation import Annotation\nfrom ..client import webknossos_context\nfrom ..dataset import Dataset\nfrom ..geometry import BoundingBox, Mag\nfrom ._utils import parse_bbox, parse_mag, parse_path\n\n\ndef main(\n *,\n target: Annotated[\n Any,\n typer.Argument(\n show_default=False,\n help=\"Path to save your WEBKNOSSOS dataset.\",\n parser=parse_path,\n ),\n ],\n url: Annotated[\n str,\n typer.Option(\n help=\"URL of your dataset or your annotation.\",\n ),\n ],\n token: Annotated[\n Optional[str],\n typer.Option(\n help=\"Authentication token for WEBKNOSSOS instance \"\n \"(https://webknossos.org/auth/token).\",\n rich_help_panel=\"WEBKNOSSOS context\",\n envvar=\"WK_TOKEN\",\n ),\n ] = None,\n bbox: Annotated[\n Optional[BoundingBox],\n typer.Option(\n rich_help_panel=\"Partial download\",\n help=\"Bounding box that should be downloaded. \"\n \"The input format is x,y,z,width,height,depth. \"\n \"Should be a comma separated string (e.g. 0,0,0,10,10,10).\",\n parser=parse_bbox,\n metavar=\"BBOX\",\n ),\n ] = None,\n layer: Annotated[\n Optional[List[str]],\n typer.Option(\n rich_help_panel=\"Partial download\",\n help=\"Layers that should be downloaded. 
\"\n \"For multiple layers type: --layer color --layer segmentation\",\n ),\n ] = None,\n mag: Annotated[\n Optional[List[Mag]],\n typer.Option(\n rich_help_panel=\"Partial download\",\n help=\"Mags that should be downloaded. \"\n \"Should be number or minus separated string (e.g. 2 or 2-2-2). \"\n \"For multiple mags type: --mag 1 --mag 2\",\n parser=parse_mag,\n metavar=\"MAG\",\n ),\n ] = None,\n) -> None:\n \"\"\"Download a dataset from a WEBKNOSSOS server.\"\"\"\n\n layers = layer if layer else None\n mags = mag if mag else None\n\n with webknossos_context(token=token):\n try:\n Dataset.download(\n dataset_name_or_url=url,\n path=target,\n bbox=bbox,\n layers=layers,\n mags=mags,\n )\n except AssertionError:\n Annotation.download(annotation_id_or_url=url).save(target)\n", "path": "webknossos/webknossos/cli/download.py"}], "after_files": [{"content": "\"\"\"This module takes care of downloading WEBKNOSSOS datasets.\"\"\"\n\nimport re\nfrom typing import Any, List, Optional\n\nimport typer\nfrom typing_extensions import Annotated\n\nfrom ..annotation.annotation import _ANNOTATION_URL_REGEX, Annotation\nfrom ..client import webknossos_context\nfrom ..client._resolve_short_link import resolve_short_link\nfrom ..dataset.dataset import _DATASET_URL_REGEX, Dataset\nfrom ..geometry import BoundingBox, Mag\nfrom ._utils import parse_bbox, parse_mag, parse_path\n\n\ndef main(\n *,\n target: Annotated[\n Any,\n typer.Argument(\n show_default=False,\n help=\"Path to save your WEBKNOSSOS dataset.\",\n parser=parse_path,\n ),\n ],\n url: Annotated[\n str,\n typer.Option(\n help=\"URL of your dataset or your annotation.\",\n ),\n ],\n token: Annotated[\n Optional[str],\n typer.Option(\n help=\"Authentication token for WEBKNOSSOS instance \"\n \"(https://webknossos.org/auth/token).\",\n rich_help_panel=\"WEBKNOSSOS context\",\n envvar=\"WK_TOKEN\",\n ),\n ] = None,\n bbox: Annotated[\n Optional[BoundingBox],\n typer.Option(\n rich_help_panel=\"Partial download\",\n help=\"Bounding box that should be downloaded. \"\n \"The input format is x,y,z,width,height,depth. \"\n \"Should be a comma separated string (e.g. 0,0,0,10,10,10).\",\n parser=parse_bbox,\n metavar=\"BBOX\",\n ),\n ] = None,\n layer: Annotated[\n Optional[List[str]],\n typer.Option(\n rich_help_panel=\"Partial download\",\n help=\"Layers that should be downloaded. \"\n \"For multiple layers type: --layer color --layer segmentation\",\n ),\n ] = None,\n mag: Annotated[\n Optional[List[Mag]],\n typer.Option(\n rich_help_panel=\"Partial download\",\n help=\"Mags that should be downloaded. \"\n \"Should be number or minus separated string (e.g. 2 or 2-2-2). \"\n \"For multiple mags type: --mag 1 --mag 2\",\n parser=parse_mag,\n metavar=\"MAG\",\n ),\n ] = None,\n) -> None:\n \"\"\"Download a dataset from a WEBKNOSSOS server.\"\"\"\n\n layers = layer if layer else None\n mags = mag if mag else None\n url = resolve_short_link(url)\n\n with webknossos_context(token=token):\n if re.match(_DATASET_URL_REGEX, url):\n Dataset.download(\n dataset_name_or_url=url,\n path=target,\n bbox=bbox,\n layers=layers,\n mags=mags,\n )\n elif re.match(_ANNOTATION_URL_REGEX, url):\n Annotation.download(annotation_id_or_url=url).save(target)\n else:\n raise RuntimeError(\n \"The provided URL does not lead to a dataset or annotation.\"\n )\n\n\nif __name__ == \"__main__\":\n typer.run(main)\n", "path": "webknossos/webknossos/cli/download.py"}]} | 1,111 | 379 |
gh_patches_debug_14944 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-1788 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move getting started guide to opentelemetry.io
For GA there is a request to add at least the getting-started documentation to the opentelemetry.io website -- for Python this is tracked in: https://github.com/open-telemetry/opentelemetry.io/issues/285. Maintaining the documentation in two places is not ideal, so the proposal is to move the getting-started documentation there. A similar migration already happened for Java.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/getting_started/otlpcollector_example.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # otcollector.py
16 import time
17
18 from opentelemetry import trace
19 from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
20 OTLPSpanExporter,
21 )
22 from opentelemetry.sdk.trace import TracerProvider
23 from opentelemetry.sdk.trace.export import BatchSpanProcessor
24
25 span_exporter = OTLPSpanExporter(
26 # optional
27 # endpoint:="myCollectorURL:55678",
28 # credentials=ChannelCredentials(credentials),
29 # headers=(("metadata", "metadata")),
30 )
31 tracer_provider = TracerProvider()
32 trace.set_tracer_provider(tracer_provider)
33 span_processor = BatchSpanProcessor(span_exporter)
34 tracer_provider.add_span_processor(span_processor)
35
36 # Configure the tracer to use the collector exporter
37 tracer = trace.get_tracer_provider().get_tracer(__name__)
38
39 with tracer.start_as_current_span("foo"):
40 print("Hello world!")
41
```
Path: `docs/getting_started/tracing_example.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # tracing.py
16 from opentelemetry import trace
17 from opentelemetry.sdk.trace import TracerProvider
18 from opentelemetry.sdk.trace.export import (
19 ConsoleSpanExporter,
20 SimpleSpanProcessor,
21 )
22
23 trace.set_tracer_provider(TracerProvider())
24 trace.get_tracer_provider().add_span_processor(
25 SimpleSpanProcessor(ConsoleSpanExporter())
26 )
27
28 tracer = trace.get_tracer(__name__)
29
30 with tracer.start_as_current_span("foo"):
31 with tracer.start_as_current_span("bar"):
32 with tracer.start_as_current_span("baz"):
33 print("Hello world from OpenTelemetry Python!")
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/getting_started/otlpcollector_example.py b/docs/getting_started/otlpcollector_example.py
--- a/docs/getting_started/otlpcollector_example.py
+++ b/docs/getting_started/otlpcollector_example.py
@@ -24,7 +24,7 @@
span_exporter = OTLPSpanExporter(
# optional
- # endpoint:="myCollectorURL:55678",
+ # endpoint:="myCollectorURL:4317",
# credentials=ChannelCredentials(credentials),
# headers=(("metadata", "metadata")),
)
diff --git a/docs/getting_started/tracing_example.py b/docs/getting_started/tracing_example.py
--- a/docs/getting_started/tracing_example.py
+++ b/docs/getting_started/tracing_example.py
@@ -20,10 +20,11 @@
SimpleSpanProcessor,
)
-trace.set_tracer_provider(TracerProvider())
-trace.get_tracer_provider().add_span_processor(
- SimpleSpanProcessor(ConsoleSpanExporter())
-)
+provider = TracerProvider()
+processor = SimpleSpanProcessor(ConsoleSpanExporter())
+provider.add_span_processor(processor)
+trace.set_tracer_provider(provider)
+
tracer = trace.get_tracer(__name__)
| {"golden_diff": "diff --git a/docs/getting_started/otlpcollector_example.py b/docs/getting_started/otlpcollector_example.py\n--- a/docs/getting_started/otlpcollector_example.py\n+++ b/docs/getting_started/otlpcollector_example.py\n@@ -24,7 +24,7 @@\n \n span_exporter = OTLPSpanExporter(\n # optional\n- # endpoint:=\"myCollectorURL:55678\",\n+ # endpoint:=\"myCollectorURL:4317\",\n # credentials=ChannelCredentials(credentials),\n # headers=((\"metadata\", \"metadata\")),\n )\ndiff --git a/docs/getting_started/tracing_example.py b/docs/getting_started/tracing_example.py\n--- a/docs/getting_started/tracing_example.py\n+++ b/docs/getting_started/tracing_example.py\n@@ -20,10 +20,11 @@\n SimpleSpanProcessor,\n )\n \n-trace.set_tracer_provider(TracerProvider())\n-trace.get_tracer_provider().add_span_processor(\n- SimpleSpanProcessor(ConsoleSpanExporter())\n-)\n+provider = TracerProvider()\n+processor = SimpleSpanProcessor(ConsoleSpanExporter())\n+provider.add_span_processor(processor)\n+trace.set_tracer_provider(provider)\n+\n \n tracer = trace.get_tracer(__name__)\n", "issue": "Move getting started guide to opentelemetry.io\nFor GA there is an ask to add at least all getting started documentation to the opentelemetry.io website -- for Python this is tracked in: https://github.com/open-telemetry/opentelemetry.io/issues/285. Maintaining the documentation in two places is not ideal so proposal is to move getting started documentation. A similar activity already happened for Java.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# otcollector.py\nimport time\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (\n OTLPSpanExporter,\n)\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nspan_exporter = OTLPSpanExporter(\n # optional\n # endpoint:=\"myCollectorURL:55678\",\n # credentials=ChannelCredentials(credentials),\n # headers=((\"metadata\", \"metadata\")),\n)\ntracer_provider = TracerProvider()\ntrace.set_tracer_provider(tracer_provider)\nspan_processor = BatchSpanProcessor(span_exporter)\ntracer_provider.add_span_processor(span_processor)\n\n# Configure the tracer to use the collector exporter\ntracer = trace.get_tracer_provider().get_tracer(__name__)\n\nwith tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n", "path": "docs/getting_started/otlpcollector_example.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the 
specific language governing permissions and\n# limitations under the License.\n\n# tracing.py\nfrom opentelemetry import trace\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import (\n ConsoleSpanExporter,\n SimpleSpanProcessor,\n)\n\ntrace.set_tracer_provider(TracerProvider())\ntrace.get_tracer_provider().add_span_processor(\n SimpleSpanProcessor(ConsoleSpanExporter())\n)\n\ntracer = trace.get_tracer(__name__)\n\nwith tracer.start_as_current_span(\"foo\"):\n with tracer.start_as_current_span(\"bar\"):\n with tracer.start_as_current_span(\"baz\"):\n print(\"Hello world from OpenTelemetry Python!\")\n", "path": "docs/getting_started/tracing_example.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# otcollector.py\nimport time\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (\n OTLPSpanExporter,\n)\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nspan_exporter = OTLPSpanExporter(\n # optional\n # endpoint:=\"myCollectorURL:4317\",\n # credentials=ChannelCredentials(credentials),\n # headers=((\"metadata\", \"metadata\")),\n)\ntracer_provider = TracerProvider()\ntrace.set_tracer_provider(tracer_provider)\nspan_processor = BatchSpanProcessor(span_exporter)\ntracer_provider.add_span_processor(span_processor)\n\n# Configure the tracer to use the collector exporter\ntracer = trace.get_tracer_provider().get_tracer(__name__)\n\nwith tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n", "path": "docs/getting_started/otlpcollector_example.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# tracing.py\nfrom opentelemetry import trace\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import (\n ConsoleSpanExporter,\n SimpleSpanProcessor,\n)\n\nprovider = TracerProvider()\nprocessor = SimpleSpanProcessor(ConsoleSpanExporter())\nprovider.add_span_processor(processor)\ntrace.set_tracer_provider(provider)\n\n\ntracer = trace.get_tracer(__name__)\n\nwith tracer.start_as_current_span(\"foo\"):\n with tracer.start_as_current_span(\"bar\"):\n with tracer.start_as_current_span(\"baz\"):\n print(\"Hello world from OpenTelemetry Python!\")\n", "path": "docs/getting_started/tracing_example.py"}]} | 1,065 | 271 |
gh_patches_debug_7326 | rasdani/github-patches | git_diff | microsoft__ptvsd-619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Launching a Python module using `-m` with `ptvsd.enable_attach` in __init__ fails
## Environment data
- PTVSD version: Master
- OS and version: Mac (may not be OS specific)
- Python version (& distribution if applicable, e.g. Anaconda): 3.6
- Using VS Code or Visual Studio: N/A
## Actual behavior
```
Traceback (most recent call last):
File "/Users/donjayamanne/Desktop/Development/vscode/ptvsd/tests/resources/system_tests/test_exceptions/mymod_attach1/__init__.py", line 4, in <module>
ptvsd.enable_attach((sys.argv[1], sys.argv[2]))
File "/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/attach_server.py", line 71, in enable_attach
redirect_output=redirect_output,
File "/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/_remote.py", line 27, in enable_attach
**kwargs
File "/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/pydevd_hooks.py", line 95, in install
if __main__ is not pydevd and __main__.__file__ == pydevd.__file__:
AttributeError: module '__main__' has no attribute '__file__'
```
## Expected behavior
Should not error out
## Steps to reproduce:
* Create a python module
* Add the following code in the `__init__.py`
```python
import sys
import ptvsd
ptvsd.enable_attach(('localhost', 9876))
ptvsd.wait_for_attach()
try:
raise ArithmeticError('Hello')
except Exception:
pass
sys.stdout.write('end')
```
* Start the module `python -m xyz`
That's when it goes kaboom
--- END ISSUE ---
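
(Editor's note: the failure comes down to `__main__` lacking a `__file__` attribute when a package is started with `python -m`. The snippet below is a small, standalone illustration of the defensive lookup; `some_module_file` is a placeholder, not ptvsd code.)

```python
import sys

__main__ = sys.modules["__main__"]
some_module_file = "/path/to/pydevd.py"  # placeholder for pydevd.__file__

# Fragile: raises AttributeError when __main__ has no __file__ (python -m pkg):
#     if __main__.__file__ == some_module_file: ...

# Robust: fall back to None when the attribute is missing.
if getattr(__main__, "__file__", None) == some_module_file:
    print("__main__ is the module itself; patch its start_server/start_client")
else:
    print("__file__ missing or different; leave __main__ alone")
```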
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/pydevd_hooks.py`
Content:
```
1 import sys
2
3 from _pydevd_bundle import pydevd_comm
4
5 from ptvsd.socket import Address
6 from ptvsd.daemon import Daemon, DaemonStoppedError, DaemonClosedError
7 from ptvsd._util import debug, new_hidden_thread
8
9
10 def start_server(daemon, host, port, **kwargs):
11 """Return a socket to a (new) local pydevd-handling daemon.
12
13 The daemon supports the pydevd client wire protocol, sending
14 requests and handling responses (and events).
15
16 This is a replacement for _pydevd_bundle.pydevd_comm.start_server.
17 """
18 sock, next_session = daemon.start_server((host, port))
19
20 def handle_next():
21 try:
22 session = next_session(**kwargs)
23 debug('done waiting')
24 return session
25 except (DaemonClosedError, DaemonStoppedError):
26 # Typically won't happen.
27 debug('stopped')
28 raise
29 except Exception as exc:
30 # TODO: log this?
31 debug('failed:', exc, tb=True)
32 return None
33
34 while True:
35 debug('waiting on initial connection')
36 handle_next()
37 break
38
39 def serve_forever():
40 while True:
41 debug('waiting on next connection')
42 try:
43 handle_next()
44 except (DaemonClosedError, DaemonStoppedError):
45 break
46 debug('done')
47
48 t = new_hidden_thread(
49 target=serve_forever,
50 name='sessions',
51 )
52 t.start()
53 return sock
54
55
56 def start_client(daemon, host, port, **kwargs):
57 """Return a socket to an existing "remote" pydevd-handling daemon.
58
59 The daemon supports the pydevd client wire protocol, sending
60 requests and handling responses (and events).
61
62 This is a replacement for _pydevd_bundle.pydevd_comm.start_client.
63 """
64 sock, start_session = daemon.start_client((host, port))
65 start_session(**kwargs)
66 return sock
67
68
69 def install(pydevd, address,
70 start_server=start_server, start_client=start_client,
71 **kwargs):
72 """Configure pydevd to use our wrapper.
73
74 This is a bit of a hack to allow us to run our VSC debug adapter
75 in the same process as pydevd. Note that, as with most hacks,
76 this is somewhat fragile (since the monkeypatching sites may
77 change).
78 """
79 addr = Address.from_raw(address)
80 daemon = Daemon(**kwargs)
81
82 _start_server = (lambda p: start_server(daemon, addr.host, p))
83 _start_server.orig = start_server
84 _start_client = (lambda h, p: start_client(daemon, h, p))
85 _start_client.orig = start_client
86
87 # These are the functions pydevd invokes to get a socket to the client.
88 pydevd_comm.start_server = _start_server
89 pydevd_comm.start_client = _start_client
90
91 # Ensure that pydevd is using our functions.
92 pydevd.start_server = _start_server
93 pydevd.start_client = _start_client
94 __main__ = sys.modules['__main__']
95 if __main__ is not pydevd and __main__.__file__ == pydevd.__file__:
96 __main__.start_server = _start_server
97 __main__.start_client = _start_client
98 return daemon
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ptvsd/pydevd_hooks.py b/ptvsd/pydevd_hooks.py
--- a/ptvsd/pydevd_hooks.py
+++ b/ptvsd/pydevd_hooks.py
@@ -92,7 +92,8 @@
pydevd.start_server = _start_server
pydevd.start_client = _start_client
__main__ = sys.modules['__main__']
- if __main__ is not pydevd and __main__.__file__ == pydevd.__file__:
- __main__.start_server = _start_server
- __main__.start_client = _start_client
+ if __main__ is not pydevd:
+ if getattr(__main__, '__file__', None) == pydevd.__file__:
+ __main__.start_server = _start_server
+ __main__.start_client = _start_client
return daemon
| {"golden_diff": "diff --git a/ptvsd/pydevd_hooks.py b/ptvsd/pydevd_hooks.py\n--- a/ptvsd/pydevd_hooks.py\n+++ b/ptvsd/pydevd_hooks.py\n@@ -92,7 +92,8 @@\n pydevd.start_server = _start_server\n pydevd.start_client = _start_client\n __main__ = sys.modules['__main__']\n- if __main__ is not pydevd and __main__.__file__ == pydevd.__file__:\n- __main__.start_server = _start_server\n- __main__.start_client = _start_client\n+ if __main__ is not pydevd:\n+ if getattr(__main__, '__file__', None) == pydevd.__file__:\n+ __main__.start_server = _start_server\n+ __main__.start_client = _start_client\n return daemon\n", "issue": "Launching a Python module using `-m` with `ptvsd.enable_attach` in __init__ fails\n## Environment data\r\n\r\n- PTVSD version: Master\r\n- OS and version: Mac (may not be OS specific)\r\n- Python version (& distribution if applicable, e.g. Anaconda): 3.6\r\n- Using VS Code or Visual Studio:N/A\r\n\r\n## Actual behavior\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/donjayamanne/Desktop/Development/vscode/ptvsd/tests/resources/system_tests/test_exceptions/mymod_attach1/__init__.py\", line 4, in <module>\r\n ptvsd.enable_attach((sys.argv[1], sys.argv[2]))\r\n File \"/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/attach_server.py\", line 71, in enable_attach\r\n redirect_output=redirect_output,\r\n File \"/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/_remote.py\", line 27, in enable_attach\r\n **kwargs\r\n File \"/Users/donjayamanne/Desktop/Development/vscode/ptvsd/ptvsd/pydevd_hooks.py\", line 95, in install\r\n if __main__ is not pydevd and __main__.__file__ == pydevd.__file__:\r\nAttributeError: module '__main__' has no attribute '__file__'\r\n```\r\n\r\n## Expected behavior\r\n\r\nShould not error out\r\n\r\n## Steps to reproduce:\r\n\r\n* Create a python module\r\n* Add the following code in the `__init__.py`\r\n```python\r\nimport sys\r\nimport ptvsd\r\nptvsd.enable_attach(('localhost', 9876))\r\nptvsd.wait_for_attach()\r\n\r\ntry:\r\n raise ArithmeticError('Hello')\r\nexcept Exception:\r\n pass\r\nsys.stdout.write('end')\r\n```\r\n* Start the module `python -m xyz`\r\nThat's when it goes kaboom\n", "before_files": [{"content": "import sys\n\nfrom _pydevd_bundle import pydevd_comm\n\nfrom ptvsd.socket import Address\nfrom ptvsd.daemon import Daemon, DaemonStoppedError, DaemonClosedError\nfrom ptvsd._util import debug, new_hidden_thread\n\n\ndef start_server(daemon, host, port, **kwargs):\n \"\"\"Return a socket to a (new) local pydevd-handling daemon.\n\n The daemon supports the pydevd client wire protocol, sending\n requests and handling responses (and events).\n\n This is a replacement for _pydevd_bundle.pydevd_comm.start_server.\n \"\"\"\n sock, next_session = daemon.start_server((host, port))\n\n def handle_next():\n try:\n session = next_session(**kwargs)\n debug('done waiting')\n return session\n except (DaemonClosedError, DaemonStoppedError):\n # Typically won't happen.\n debug('stopped')\n raise\n except Exception as exc:\n # TODO: log this?\n debug('failed:', exc, tb=True)\n return None\n\n while True:\n debug('waiting on initial connection')\n handle_next()\n break\n\n def serve_forever():\n while True:\n debug('waiting on next connection')\n try:\n handle_next()\n except (DaemonClosedError, DaemonStoppedError):\n break\n debug('done')\n\n t = new_hidden_thread(\n target=serve_forever,\n name='sessions',\n )\n t.start()\n return sock\n\n\ndef start_client(daemon, host, port, **kwargs):\n \"\"\"Return a socket to an 
existing \"remote\" pydevd-handling daemon.\n\n The daemon supports the pydevd client wire protocol, sending\n requests and handling responses (and events).\n\n This is a replacement for _pydevd_bundle.pydevd_comm.start_client.\n \"\"\"\n sock, start_session = daemon.start_client((host, port))\n start_session(**kwargs)\n return sock\n\n\ndef install(pydevd, address,\n start_server=start_server, start_client=start_client,\n **kwargs):\n \"\"\"Configure pydevd to use our wrapper.\n\n This is a bit of a hack to allow us to run our VSC debug adapter\n in the same process as pydevd. Note that, as with most hacks,\n this is somewhat fragile (since the monkeypatching sites may\n change).\n \"\"\"\n addr = Address.from_raw(address)\n daemon = Daemon(**kwargs)\n\n _start_server = (lambda p: start_server(daemon, addr.host, p))\n _start_server.orig = start_server\n _start_client = (lambda h, p: start_client(daemon, h, p))\n _start_client.orig = start_client\n\n # These are the functions pydevd invokes to get a socket to the client.\n pydevd_comm.start_server = _start_server\n pydevd_comm.start_client = _start_client\n\n # Ensure that pydevd is using our functions.\n pydevd.start_server = _start_server\n pydevd.start_client = _start_client\n __main__ = sys.modules['__main__']\n if __main__ is not pydevd and __main__.__file__ == pydevd.__file__:\n __main__.start_server = _start_server\n __main__.start_client = _start_client\n return daemon\n", "path": "ptvsd/pydevd_hooks.py"}], "after_files": [{"content": "import sys\n\nfrom _pydevd_bundle import pydevd_comm\n\nfrom ptvsd.socket import Address\nfrom ptvsd.daemon import Daemon, DaemonStoppedError, DaemonClosedError\nfrom ptvsd._util import debug, new_hidden_thread\n\n\ndef start_server(daemon, host, port, **kwargs):\n \"\"\"Return a socket to a (new) local pydevd-handling daemon.\n\n The daemon supports the pydevd client wire protocol, sending\n requests and handling responses (and events).\n\n This is a replacement for _pydevd_bundle.pydevd_comm.start_server.\n \"\"\"\n sock, next_session = daemon.start_server((host, port))\n\n def handle_next():\n try:\n session = next_session(**kwargs)\n debug('done waiting')\n return session\n except (DaemonClosedError, DaemonStoppedError):\n # Typically won't happen.\n debug('stopped')\n raise\n except Exception as exc:\n # TODO: log this?\n debug('failed:', exc, tb=True)\n return None\n\n while True:\n debug('waiting on initial connection')\n handle_next()\n break\n\n def serve_forever():\n while True:\n debug('waiting on next connection')\n try:\n handle_next()\n except (DaemonClosedError, DaemonStoppedError):\n break\n debug('done')\n\n t = new_hidden_thread(\n target=serve_forever,\n name='sessions',\n )\n t.start()\n return sock\n\n\ndef start_client(daemon, host, port, **kwargs):\n \"\"\"Return a socket to an existing \"remote\" pydevd-handling daemon.\n\n The daemon supports the pydevd client wire protocol, sending\n requests and handling responses (and events).\n\n This is a replacement for _pydevd_bundle.pydevd_comm.start_client.\n \"\"\"\n sock, start_session = daemon.start_client((host, port))\n start_session(**kwargs)\n return sock\n\n\ndef install(pydevd, address,\n start_server=start_server, start_client=start_client,\n **kwargs):\n \"\"\"Configure pydevd to use our wrapper.\n\n This is a bit of a hack to allow us to run our VSC debug adapter\n in the same process as pydevd. 
Note that, as with most hacks,\n this is somewhat fragile (since the monkeypatching sites may\n change).\n \"\"\"\n addr = Address.from_raw(address)\n daemon = Daemon(**kwargs)\n\n _start_server = (lambda p: start_server(daemon, addr.host, p))\n _start_server.orig = start_server\n _start_client = (lambda h, p: start_client(daemon, h, p))\n _start_client.orig = start_client\n\n # These are the functions pydevd invokes to get a socket to the client.\n pydevd_comm.start_server = _start_server\n pydevd_comm.start_client = _start_client\n\n # Ensure that pydevd is using our functions.\n pydevd.start_server = _start_server\n pydevd.start_client = _start_client\n __main__ = sys.modules['__main__']\n if __main__ is not pydevd:\n if getattr(__main__, '__file__', None) == pydevd.__file__:\n __main__.start_server = _start_server\n __main__.start_client = _start_client\n return daemon\n", "path": "ptvsd/pydevd_hooks.py"}]} | 1,600 | 202 |
gh_patches_debug_26783 | rasdani/github-patches | git_diff | bridgecrewio__checkov-748 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive for check 'MSKClusterEncryption' (CKV_AWS_81)
**Describe the bug**
CKV_AWS_81 is reporting MSK clusters as unencrypted at rest while they are encrypted at rest.
**To Reproduce**
Example Terraform code producing a cluster with encryption (it will use the default KMS key as documented in https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/msk_cluster#encryption_at_rest_kms_key_arn)
(slightly adapted from example on the msk_cluster resource docs)
<details>
<summary>Code</summary>
```
resource "aws_vpc" "vpc" {
cidr_block = "192.168.0.0/22"
}
data "aws_availability_zones" "azs" {
state = "available"
}
resource "aws_subnet" "subnet_az1" {
availability_zone = data.aws_availability_zones.azs.names[0]
cidr_block = "192.168.0.0/24"
vpc_id = aws_vpc.vpc.id
}
resource "aws_subnet" "subnet_az2" {
availability_zone = data.aws_availability_zones.azs.names[1]
cidr_block = "192.168.1.0/24"
vpc_id = aws_vpc.vpc.id
}
resource "aws_subnet" "subnet_az3" {
availability_zone = data.aws_availability_zones.azs.names[2]
cidr_block = "192.168.2.0/24"
vpc_id = aws_vpc.vpc.id
}
resource "aws_security_group" "sg" {
vpc_id = aws_vpc.vpc.id
}
resource "aws_kms_key" "kms" {
description = "example"
}
resource "aws_cloudwatch_log_group" "test" {
name = "msk_broker_logs"
}
resource "aws_s3_bucket" "bucket" {
bucket = "msk-broker-logs-bucket"
acl = "private"
}
resource "aws_iam_role" "firehose_role" {
name = "firehose_test_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
name = "terraform-kinesis-firehose-msk-broker-logs-stream"
destination = "s3"
s3_configuration {
role_arn = aws_iam_role.firehose_role.arn
bucket_arn = aws_s3_bucket.bucket.arn
}
tags = {
LogDeliveryEnabled = "placeholder"
}
lifecycle {
ignore_changes = [
tags["LogDeliveryEnabled"],
]
}
}
resource "aws_msk_cluster" "example" {
cluster_name = "example"
kafka_version = "2.4.1"
number_of_broker_nodes = 3
broker_node_group_info {
instance_type = "kafka.m5.large"
ebs_volume_size = 1000
client_subnets = [
aws_subnet.subnet_az1.id,
aws_subnet.subnet_az2.id,
aws_subnet.subnet_az3.id,
]
security_groups = [aws_security_group.sg.id]
}
encryption_info {
encryption_in_transit {
client_broker = "TLS"
in_cluster = true
}
}
open_monitoring {
prometheus {
jmx_exporter {
enabled_in_broker = true
}
node_exporter {
enabled_in_broker = true
}
}
}
logging_info {
broker_logs {
cloudwatch_logs {
enabled = true
log_group = aws_cloudwatch_log_group.test.name
}
firehose {
enabled = true
delivery_stream = aws_kinesis_firehose_delivery_stream.test_stream.name
}
s3 {
enabled = true
bucket = aws_s3_bucket.bucket.id
prefix = "logs/msk-"
}
}
}
tags = {
foo = "bar"
}
}
output "zookeeper_connect_string" {
value = aws_msk_cluster.example.zookeeper_connect_string
}
output "bootstrap_brokers_tls" {
description = "TLS connection host:port pairs"
value = aws_msk_cluster.example.bootstrap_brokers_tls
}
```
</details>
**Expected behavior**
No error is thrown as long as an encryption_info block is defined
Checkov Version 1.0.669
--- END ISSUE ---
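
(Editor's note: a standalone sketch of the evaluation the reporter expects -- any `encryption_info` block implies encryption at rest, with or without `encryption_at_rest_kms_key_arn`. It mimics checkov's parsed-config shape but is not its actual implementation; the `PASSED`/`FAILED` strings stand in for `CheckResult`.)

```python
def scan_msk_encryption(conf: dict) -> str:
    # An encryption_info block means the cluster is encrypted at rest even when
    # encryption_at_rest_kms_key_arn is omitted (AWS then uses its default KMS key).
    if "encryption_info" not in conf:
        return "FAILED"
    encryption = conf["encryption_info"][0]
    if "encryption_in_transit" in encryption:
        transit = encryption["encryption_in_transit"][0]
        if transit.get("client_broker", ["TLS"])[0] != "TLS" \
                or transit.get("in_cluster", [True])[0] is False:
            return "FAILED"
    return "PASSED"


example_conf = {
    "encryption_info": [
        {"encryption_in_transit": [{"client_broker": ["TLS"], "in_cluster": [True]}]}
    ]
}
print(scan_msk_encryption(example_conf))  # PASSED -- no KMS key ARN required
```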
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/MSKClusterEncryption.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3
4
5 class MSKClusterEncryption(BaseResourceCheck):
6 def __init__(self):
7 name = "Ensure MSK Cluster encryption in rest and transit is enabled"
8 id = "CKV_AWS_81"
9 supported_resources = ['aws_msk_cluster']
10 categories = [CheckCategories.ENCRYPTION]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def scan_resource_conf(self, conf):
14 if 'encryption_info' in conf.keys():
15 encryption = conf['encryption_info'][0]
16 if 'encryption_at_rest_kms_key_arn' in encryption:
17 if 'encryption_in_transit' in encryption:
18 transit = encryption['encryption_in_transit'][0]
19 if 'client_broker' in transit and transit['client_broker'][0] != 'TLS' or \
20 'in_cluster' in transit and transit['in_cluster'][0] is False:
21 return CheckResult.FAILED
22 return CheckResult.PASSED
23 return CheckResult.PASSED
24 return CheckResult.FAILED
25
26
27 check = MSKClusterEncryption()
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py b/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py
--- a/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py
+++ b/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py
@@ -11,16 +11,18 @@
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf):
+ # Note: As long as the 'encryption_info' block is specified, the cluster
+ # will be encrypted at rest even if 'encryption_at_rest_kms_key_arn' is not specified
+ # See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/msk_cluster#encryption_at_rest_kms_key_arn
if 'encryption_info' in conf.keys():
encryption = conf['encryption_info'][0]
- if 'encryption_at_rest_kms_key_arn' in encryption:
- if 'encryption_in_transit' in encryption:
- transit = encryption['encryption_in_transit'][0]
- if 'client_broker' in transit and transit['client_broker'][0] != 'TLS' or \
- 'in_cluster' in transit and transit['in_cluster'][0] is False:
- return CheckResult.FAILED
- return CheckResult.PASSED
+ if 'encryption_in_transit' in encryption:
+ transit = encryption['encryption_in_transit'][0]
+ if 'client_broker' in transit and transit['client_broker'][0] != 'TLS' or \
+ 'in_cluster' in transit and transit['in_cluster'][0] is False:
+ return CheckResult.FAILED
return CheckResult.PASSED
+ return CheckResult.PASSED
return CheckResult.FAILED
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py b/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py\n--- a/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py\n+++ b/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py\n@@ -11,16 +11,18 @@\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def scan_resource_conf(self, conf):\n+ # Note: As long as the 'encryption_info' block is specified, the cluster\n+ # will be encrypted at rest even if 'encryption_at_rest_kms_key_arn' is not specified\n+ # See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/msk_cluster#encryption_at_rest_kms_key_arn\n if 'encryption_info' in conf.keys():\n encryption = conf['encryption_info'][0]\n- if 'encryption_at_rest_kms_key_arn' in encryption:\n- if 'encryption_in_transit' in encryption:\n- transit = encryption['encryption_in_transit'][0]\n- if 'client_broker' in transit and transit['client_broker'][0] != 'TLS' or \\\n- 'in_cluster' in transit and transit['in_cluster'][0] is False:\n- return CheckResult.FAILED\n- return CheckResult.PASSED\n+ if 'encryption_in_transit' in encryption:\n+ transit = encryption['encryption_in_transit'][0]\n+ if 'client_broker' in transit and transit['client_broker'][0] != 'TLS' or \\\n+ 'in_cluster' in transit and transit['in_cluster'][0] is False:\n+ return CheckResult.FAILED\n return CheckResult.PASSED\n+ return CheckResult.PASSED\n return CheckResult.FAILED\n", "issue": "False positive for check 'MSKClusterEncryption' (CKV_AWS_81)\n**Describe the bug**\r\nCKV_AWS_81 is reporting MSK clusters as unencrypted at rest while they are encrypted at rest.\r\n\r\n**To Reproduce**\r\nExample Terraform code producing a cluster with encryption (it will use the default KMS key as documented in https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/msk_cluster#encryption_at_rest_kms_key_arn)\r\n\r\n(slightly adapted from example on the msk_cluster resource docs)\r\n<details>\r\n <summary>Code</summary>\r\n\r\n```\r\nresource \"aws_vpc\" \"vpc\" {\r\n cidr_block = \"192.168.0.0/22\"\r\n}\r\n\r\ndata \"aws_availability_zones\" \"azs\" {\r\n state = \"available\"\r\n}\r\n\r\nresource \"aws_subnet\" \"subnet_az1\" {\r\n availability_zone = data.aws_availability_zones.azs.names[0]\r\n cidr_block = \"192.168.0.0/24\"\r\n vpc_id = aws_vpc.vpc.id\r\n}\r\n\r\nresource \"aws_subnet\" \"subnet_az2\" {\r\n availability_zone = data.aws_availability_zones.azs.names[1]\r\n cidr_block = \"192.168.1.0/24\"\r\n vpc_id = aws_vpc.vpc.id\r\n}\r\n\r\nresource \"aws_subnet\" \"subnet_az3\" {\r\n availability_zone = data.aws_availability_zones.azs.names[2]\r\n cidr_block = \"192.168.2.0/24\"\r\n vpc_id = aws_vpc.vpc.id\r\n}\r\n\r\nresource \"aws_security_group\" \"sg\" {\r\n vpc_id = aws_vpc.vpc.id\r\n}\r\n\r\nresource \"aws_kms_key\" \"kms\" {\r\n description = \"example\"\r\n}\r\n\r\nresource \"aws_cloudwatch_log_group\" \"test\" {\r\n name = \"msk_broker_logs\"\r\n}\r\n\r\nresource \"aws_s3_bucket\" \"bucket\" {\r\n bucket = \"msk-broker-logs-bucket\"\r\n acl = \"private\"\r\n}\r\n\r\nresource \"aws_iam_role\" \"firehose_role\" {\r\n name = \"firehose_test_role\"\r\n\r\n assume_role_policy = <<EOF\r\n{\r\n\"Version\": \"2012-10-17\",\r\n\"Statement\": [\r\n {\r\n \"Action\": \"sts:AssumeRole\",\r\n \"Principal\": {\r\n \"Service\": \"firehose.amazonaws.com\"\r\n },\r\n \"Effect\": \"Allow\",\r\n \"Sid\": \"\"\r\n }\r\n 
]\r\n}\r\nEOF\r\n}\r\n\r\nresource \"aws_kinesis_firehose_delivery_stream\" \"test_stream\" {\r\n name = \"terraform-kinesis-firehose-msk-broker-logs-stream\"\r\n destination = \"s3\"\r\n\r\n s3_configuration {\r\n role_arn = aws_iam_role.firehose_role.arn\r\n bucket_arn = aws_s3_bucket.bucket.arn\r\n }\r\n\r\n tags = {\r\n LogDeliveryEnabled = \"placeholder\"\r\n }\r\n\r\n lifecycle {\r\n ignore_changes = [\r\n tags[\"LogDeliveryEnabled\"],\r\n ]\r\n }\r\n}\r\n\r\nresource \"aws_msk_cluster\" \"example\" {\r\n cluster_name = \"example\"\r\n kafka_version = \"2.4.1\"\r\n number_of_broker_nodes = 3\r\n\r\n broker_node_group_info {\r\n instance_type = \"kafka.m5.large\"\r\n ebs_volume_size = 1000\r\n client_subnets = [\r\n aws_subnet.subnet_az1.id,\r\n aws_subnet.subnet_az2.id,\r\n aws_subnet.subnet_az3.id,\r\n ]\r\n security_groups = [aws_security_group.sg.id]\r\n }\r\n\r\n encryption_info {\r\n encryption_in_transit { \r\n client_broker = \"TLS\"\r\n in_cluster = true \r\n }\r\n }\r\n\r\n open_monitoring {\r\n prometheus {\r\n jmx_exporter {\r\n enabled_in_broker = true\r\n }\r\n node_exporter {\r\n enabled_in_broker = true\r\n }\r\n }\r\n }\r\n\r\n logging_info {\r\n broker_logs {\r\n cloudwatch_logs {\r\n enabled = true\r\n log_group = aws_cloudwatch_log_group.test.name\r\n }\r\n firehose {\r\n enabled = true\r\n delivery_stream = aws_kinesis_firehose_delivery_stream.test_stream.name\r\n }\r\n s3 {\r\n enabled = true\r\n bucket = aws_s3_bucket.bucket.id\r\n prefix = \"logs/msk-\"\r\n }\r\n }\r\n }\r\n\r\n tags = {\r\n foo = \"bar\"\r\n }\r\n}\r\n\r\noutput \"zookeeper_connect_string\" {\r\n value = aws_msk_cluster.example.zookeeper_connect_string\r\n}\r\n\r\noutput \"bootstrap_brokers_tls\" {\r\n description = \"TLS connection host:port pairs\"\r\n value = aws_msk_cluster.example.bootstrap_brokers_tls\r\n}\r\n```\r\n\r\n</details>\r\n\r\n**Expected behavior**\r\nNo error is thrown as long as an encryption_info block is defined\r\n\r\n\r\n Checkov Version 1.0.669\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass MSKClusterEncryption(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure MSK Cluster encryption in rest and transit is enabled\"\n id = \"CKV_AWS_81\"\n supported_resources = ['aws_msk_cluster']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'encryption_info' in conf.keys():\n encryption = conf['encryption_info'][0]\n if 'encryption_at_rest_kms_key_arn' in encryption:\n if 'encryption_in_transit' in encryption:\n transit = encryption['encryption_in_transit'][0]\n if 'client_broker' in transit and transit['client_broker'][0] != 'TLS' or \\\n 'in_cluster' in transit and transit['in_cluster'][0] is False:\n return CheckResult.FAILED\n return CheckResult.PASSED\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = MSKClusterEncryption()\n", "path": "checkov/terraform/checks/resource/aws/MSKClusterEncryption.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass MSKClusterEncryption(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure MSK Cluster encryption in rest and transit is enabled\"\n id = \"CKV_AWS_81\"\n 
supported_resources = ['aws_msk_cluster']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n # Note: As long as the 'encryption_info' block is specified, the cluster\n # will be encrypted at rest even if 'encryption_at_rest_kms_key_arn' is not specified\n # See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/msk_cluster#encryption_at_rest_kms_key_arn\n if 'encryption_info' in conf.keys():\n encryption = conf['encryption_info'][0]\n if 'encryption_in_transit' in encryption:\n transit = encryption['encryption_in_transit'][0]\n if 'client_broker' in transit and transit['client_broker'][0] != 'TLS' or \\\n 'in_cluster' in transit and transit['in_cluster'][0] is False:\n return CheckResult.FAILED\n return CheckResult.PASSED\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = MSKClusterEncryption()\n", "path": "checkov/terraform/checks/resource/aws/MSKClusterEncryption.py"}]} | 1,664 | 414 |
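To make the row above easier to follow, here is a minimal, self-contained sketch of the relaxed MSK check that the golden diff describes. It deliberately avoids importing checkov, so the `CheckResult` enum below is a stand-in for `checkov.common.models.enums.CheckResult`, and the sample `conf` dicts are assumed inputs shaped like checkov's list-wrapped HCL parse rather than data taken from the row.

```python
from enum import Enum


class CheckResult(Enum):
    # Stand-in for checkov.common.models.enums.CheckResult
    PASSED = "PASSED"
    FAILED = "FAILED"


def scan_msk_encryption(conf):
    """Mirrors the patched scan_resource_conf logic from the diff above."""
    if 'encryption_info' in conf:
        encryption = conf['encryption_info'][0]
        if 'encryption_in_transit' in encryption:
            transit = encryption['encryption_in_transit'][0]
            if 'client_broker' in transit and transit['client_broker'][0] != 'TLS' or \
                    'in_cluster' in transit and transit['in_cluster'][0] is False:
                return CheckResult.FAILED
            return CheckResult.PASSED
        return CheckResult.PASSED
    return CheckResult.FAILED


# Illustrative configs; checkov wraps every parsed HCL value in a list.
cluster_without_kms_key = {
    'encryption_info': [{'encryption_in_transit': [{'client_broker': ['TLS'], 'in_cluster': [True]}]}]
}
cluster_with_plaintext_broker = {
    'encryption_info': [{'encryption_in_transit': [{'client_broker': ['PLAINTEXT']}]}]
}

print(scan_msk_encryption(cluster_without_kms_key))        # PASSED: default KMS key still means encrypted at rest
print(scan_msk_encryption(cluster_with_plaintext_broker))  # FAILED: transit encryption is not TLS
```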
gh_patches_debug_19960 | rasdani/github-patches | git_diff | microsoft__Qcodes-565 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
visa_handle.clear on instantiation causes some instruments to freeze
The use of visa_handle.clear on the instantiation of a VISA instrument causes some instruments to lock up. In particular I've noticed for SERIAL instruments, this command spews some garbage on the serial port that requires the instrument to be reset before it is possible to use it again.
### Steps to reproduce
1. Plug in a buggy instrument and instantiate an instance of it.
### Actual behaviour
The instrument freezes
### System
Win 7
Using the Harvard DecaDAC
2b6d72b
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qcodes/instrument/visa.py`
Content:
```
1 """Visa instrument driver based on pyvisa."""
2 import visa
3
4 from .base import Instrument
5 import qcodes.utils.validators as vals
6
7
8 class VisaInstrument(Instrument):
9
10 """
11 Base class for all instruments using visa connections.
12
13 Args:
14 name (str): What this instrument is called locally.
15
16 address (str): The visa resource name to use to connect.
17 Optionally includes '@<backend>' at the end. For example,
18 'ASRL2' will open COM2 with the default NI backend, but
19 'ASRL2@py' will open COM2 using pyvisa-py. Note that qcodes
20 does not install (or even require) ANY backends, it is up to
21 the user to do that. see eg:
22 http://pyvisa.readthedocs.org/en/stable/names.html
23
24 timeout (number): seconds to allow for responses. Default 5.
25
26 terminator: Read termination character(s) to look for. Default ''.
27
28 metadata (Optional[Dict]): additional static metadata to add to this
29 instrument's JSON snapshot.
30
31 See help for ``qcodes.Instrument`` for additional information on writing
32 instrument subclasses.
33
34 Attributes:
35 visa_handle (pyvisa.resources.Resource): The communication channel.
36 """
37
38 def __init__(self, name, address=None, timeout=5, terminator='', **kwargs):
39 super().__init__(name, **kwargs)
40
41 self.add_parameter('timeout',
42 get_cmd=self._get_visa_timeout,
43 set_cmd=self._set_visa_timeout,
44 unit='s',
45 vals=vals.MultiType(vals.Numbers(min_value=0),
46 vals.Enum(None)))
47
48 self.set_address(address)
49 self.set_terminator(terminator)
50 self.timeout.set(timeout)
51
52 def set_address(self, address):
53 """
54 Change the address for this instrument.
55
56 Args:
57 address: The visa resource name to use to connect.
58 Optionally includes '@<backend>' at the end. For example,
59 'ASRL2' will open COM2 with the default NI backend, but
60 'ASRL2@py' will open COM2 using pyvisa-py. Note that qcodes
61 does not install (or even require) ANY backends, it is up to
62 the user to do that.
63 see eg: http://pyvisa.readthedocs.org/en/stable/names.html
64 """
65 # in case we're changing the address - close the old handle first
66 if getattr(self, 'visa_handle', None):
67 self.visa_handle.close()
68
69 if address and '@' in address:
70 address, visa_library = address.split('@')
71 resource_manager = visa.ResourceManager('@' + visa_library)
72 else:
73 resource_manager = visa.ResourceManager()
74
75 self.visa_handle = resource_manager.open_resource(address)
76
77 self.visa_handle.clear()
78 self._address = address
79
80 def set_terminator(self, terminator):
81 r"""
82 Change the read terminator to use.
83
84 Args:
85 terminator (str): Character(s) to look for at the end of a read.
86 eg. '\r\n'.
87 """
88 self.visa_handle.read_termination = terminator
89 self._terminator = terminator
90
91 def _set_visa_timeout(self, timeout):
92 if timeout is None:
93 self.visa_handle.timeout = None
94 else:
95 # pyvisa uses milliseconds but we use seconds
96 self.visa_handle.timeout = timeout * 1000.0
97
98 def _get_visa_timeout(self):
99 timeout_ms = self.visa_handle.timeout
100 if timeout_ms is None:
101 return None
102 else:
103 # pyvisa uses milliseconds but we use seconds
104 return timeout_ms / 1000
105
106 def close(self):
107 """Disconnect and irreversibly tear down the instrument."""
108 if getattr(self, 'visa_handle', None):
109 self.visa_handle.close()
110 super().close()
111
112 def check_error(self, ret_code):
113 """
114 Default error checking, raises an error if return code !=0.
115
116 Does not differentiate between warnings or specific error messages.
117 Override this function in your driver if you want to add specific
118 error messages.
119
120 Args:
121 ret_code (int): A Visa error code. See eg:
122 https://github.com/hgrecco/pyvisa/blob/master/pyvisa/errors.py
123
124 Raises:
125 visa.VisaIOError: if ``ret_code`` indicates a communication
126 problem.
127 """
128 if ret_code != 0:
129 raise visa.VisaIOError(ret_code)
130
131 def write_raw(self, cmd):
132 """
133 Low-level interface to ``visa_handle.write``.
134
135 Args:
136 cmd (str): The command to send to the instrument.
137 """
138 nr_bytes_written, ret_code = self.visa_handle.write(cmd)
139 self.check_error(ret_code)
140
141 def ask_raw(self, cmd):
142 """
143 Low-level interface to ``visa_handle.ask``.
144
145 Args:
146 cmd (str): The command to send to the instrument.
147
148 Returns:
149 str: The instrument's response.
150 """
151 return self.visa_handle.ask(cmd)
152
153 def snapshot_base(self, update=False):
154 """
155 State of the instrument as a JSON-compatible dict.
156
157 Args:
158 update (bool): If True, update the state by querying the
159 instrument. If False, just use the latest values in memory.
160
161 Returns:
162 dict: base snapshot
163 """
164 snap = super().snapshot_base(update=update)
165
166 snap['address'] = self._address
167 snap['terminator'] = self._terminator
168 snap['timeout'] = self.timeout.get()
169
170 return snap
171
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qcodes/instrument/visa.py b/qcodes/instrument/visa.py
--- a/qcodes/instrument/visa.py
+++ b/qcodes/instrument/visa.py
@@ -1,10 +1,11 @@
"""Visa instrument driver based on pyvisa."""
import visa
+import pyvisa.constants as vi_const
+import pyvisa.resources
from .base import Instrument
import qcodes.utils.validators as vals
-
class VisaInstrument(Instrument):
"""
@@ -74,7 +75,13 @@
self.visa_handle = resource_manager.open_resource(address)
- self.visa_handle.clear()
+ # Serial instruments have a separate flush method to clear their buffers
+ # which behaves differently to clear. This is particularly important
+ # for instruments which do not support SCPI commands.
+ if isinstance(self.visa_handle, pyvisa.resources.SerialInstrument):
+ self.visa_handle.flush(vi_const.VI_READ_BUF_DISCARD | vi_const.VI_WRITE_BUF_DISCARD)
+ else:
+ self.visa_handle.clear()
self._address = address
def set_terminator(self, terminator):
| {"golden_diff": "diff --git a/qcodes/instrument/visa.py b/qcodes/instrument/visa.py\n--- a/qcodes/instrument/visa.py\n+++ b/qcodes/instrument/visa.py\n@@ -1,10 +1,11 @@\n \"\"\"Visa instrument driver based on pyvisa.\"\"\"\n import visa\n+import pyvisa.constants as vi_const\n+import pyvisa.resources\n \n from .base import Instrument\n import qcodes.utils.validators as vals\n \n-\n class VisaInstrument(Instrument):\n \n \"\"\"\n@@ -74,7 +75,13 @@\n \n self.visa_handle = resource_manager.open_resource(address)\n \n- self.visa_handle.clear()\n+ # Serial instruments have a separate flush method to clear their buffers\n+ # which behaves differently to clear. This is particularly important\n+ # for instruments which do not support SCPI commands.\n+ if isinstance(self.visa_handle, pyvisa.resources.SerialInstrument):\n+ self.visa_handle.flush(vi_const.VI_READ_BUF_DISCARD | vi_const.VI_WRITE_BUF_DISCARD)\n+ else:\n+ self.visa_handle.clear()\n self._address = address\n \n def set_terminator(self, terminator):\n", "issue": "visa_handle.clear on instantiation causes some instruments to freeze\nThe use of visa_handle.clear on the instantiation of a VISA instrument causes some instruments to lock up. In particular I've noticed for SERIAL instruments, this command spews some garbage on the serial port that requires the instrument to be reset before it is possible to use it again.\r\n\r\n### Steps to reproduce\r\n1. Plug in a buggy instrument and instantiate an instance of it.\r\n\r\n### Actual behaviour\r\nThe instrument freezes\r\n\r\n### System\r\nWin 7\r\nUsing the Harvard DecaDAC\r\n2b6d72b\n", "before_files": [{"content": "\"\"\"Visa instrument driver based on pyvisa.\"\"\"\nimport visa\n\nfrom .base import Instrument\nimport qcodes.utils.validators as vals\n\n\nclass VisaInstrument(Instrument):\n\n \"\"\"\n Base class for all instruments using visa connections.\n\n Args:\n name (str): What this instrument is called locally.\n\n address (str): The visa resource name to use to connect.\n Optionally includes '@<backend>' at the end. For example,\n 'ASRL2' will open COM2 with the default NI backend, but\n 'ASRL2@py' will open COM2 using pyvisa-py. Note that qcodes\n does not install (or even require) ANY backends, it is up to\n the user to do that. see eg:\n http://pyvisa.readthedocs.org/en/stable/names.html\n\n timeout (number): seconds to allow for responses. Default 5.\n\n terminator: Read termination character(s) to look for. Default ''.\n\n metadata (Optional[Dict]): additional static metadata to add to this\n instrument's JSON snapshot.\n\n See help for ``qcodes.Instrument`` for additional information on writing\n instrument subclasses.\n\n Attributes:\n visa_handle (pyvisa.resources.Resource): The communication channel.\n \"\"\"\n\n def __init__(self, name, address=None, timeout=5, terminator='', **kwargs):\n super().__init__(name, **kwargs)\n\n self.add_parameter('timeout',\n get_cmd=self._get_visa_timeout,\n set_cmd=self._set_visa_timeout,\n unit='s',\n vals=vals.MultiType(vals.Numbers(min_value=0),\n vals.Enum(None)))\n\n self.set_address(address)\n self.set_terminator(terminator)\n self.timeout.set(timeout)\n\n def set_address(self, address):\n \"\"\"\n Change the address for this instrument.\n\n Args:\n address: The visa resource name to use to connect.\n Optionally includes '@<backend>' at the end. For example,\n 'ASRL2' will open COM2 with the default NI backend, but\n 'ASRL2@py' will open COM2 using pyvisa-py. 
Note that qcodes\n does not install (or even require) ANY backends, it is up to\n the user to do that.\n see eg: http://pyvisa.readthedocs.org/en/stable/names.html\n \"\"\"\n # in case we're changing the address - close the old handle first\n if getattr(self, 'visa_handle', None):\n self.visa_handle.close()\n\n if address and '@' in address:\n address, visa_library = address.split('@')\n resource_manager = visa.ResourceManager('@' + visa_library)\n else:\n resource_manager = visa.ResourceManager()\n\n self.visa_handle = resource_manager.open_resource(address)\n\n self.visa_handle.clear()\n self._address = address\n\n def set_terminator(self, terminator):\n r\"\"\"\n Change the read terminator to use.\n\n Args:\n terminator (str): Character(s) to look for at the end of a read.\n eg. '\\r\\n'.\n \"\"\"\n self.visa_handle.read_termination = terminator\n self._terminator = terminator\n\n def _set_visa_timeout(self, timeout):\n if timeout is None:\n self.visa_handle.timeout = None\n else:\n # pyvisa uses milliseconds but we use seconds\n self.visa_handle.timeout = timeout * 1000.0\n\n def _get_visa_timeout(self):\n timeout_ms = self.visa_handle.timeout\n if timeout_ms is None:\n return None\n else:\n # pyvisa uses milliseconds but we use seconds\n return timeout_ms / 1000\n\n def close(self):\n \"\"\"Disconnect and irreversibly tear down the instrument.\"\"\"\n if getattr(self, 'visa_handle', None):\n self.visa_handle.close()\n super().close()\n\n def check_error(self, ret_code):\n \"\"\"\n Default error checking, raises an error if return code !=0.\n\n Does not differentiate between warnings or specific error messages.\n Override this function in your driver if you want to add specific\n error messages.\n\n Args:\n ret_code (int): A Visa error code. See eg:\n https://github.com/hgrecco/pyvisa/blob/master/pyvisa/errors.py\n\n Raises:\n visa.VisaIOError: if ``ret_code`` indicates a communication\n problem.\n \"\"\"\n if ret_code != 0:\n raise visa.VisaIOError(ret_code)\n\n def write_raw(self, cmd):\n \"\"\"\n Low-level interface to ``visa_handle.write``.\n\n Args:\n cmd (str): The command to send to the instrument.\n \"\"\"\n nr_bytes_written, ret_code = self.visa_handle.write(cmd)\n self.check_error(ret_code)\n\n def ask_raw(self, cmd):\n \"\"\"\n Low-level interface to ``visa_handle.ask``.\n\n Args:\n cmd (str): The command to send to the instrument.\n\n Returns:\n str: The instrument's response.\n \"\"\"\n return self.visa_handle.ask(cmd)\n\n def snapshot_base(self, update=False):\n \"\"\"\n State of the instrument as a JSON-compatible dict.\n\n Args:\n update (bool): If True, update the state by querying the\n instrument. If False, just use the latest values in memory.\n\n Returns:\n dict: base snapshot\n \"\"\"\n snap = super().snapshot_base(update=update)\n\n snap['address'] = self._address\n snap['terminator'] = self._terminator\n snap['timeout'] = self.timeout.get()\n\n return snap\n", "path": "qcodes/instrument/visa.py"}], "after_files": [{"content": "\"\"\"Visa instrument driver based on pyvisa.\"\"\"\nimport visa\nimport pyvisa.constants as vi_const\nimport pyvisa.resources\n\nfrom .base import Instrument\nimport qcodes.utils.validators as vals\n\nclass VisaInstrument(Instrument):\n\n \"\"\"\n Base class for all instruments using visa connections.\n\n Args:\n name (str): What this instrument is called locally.\n\n address (str): The visa resource name to use to connect.\n Optionally includes '@<backend>' at the end. 
For example,\n 'ASRL2' will open COM2 with the default NI backend, but\n 'ASRL2@py' will open COM2 using pyvisa-py. Note that qcodes\n does not install (or even require) ANY backends, it is up to\n the user to do that. see eg:\n http://pyvisa.readthedocs.org/en/stable/names.html\n\n timeout (number): seconds to allow for responses. Default 5.\n\n terminator: Read termination character(s) to look for. Default ''.\n\n metadata (Optional[Dict]): additional static metadata to add to this\n instrument's JSON snapshot.\n\n See help for ``qcodes.Instrument`` for additional information on writing\n instrument subclasses.\n\n Attributes:\n visa_handle (pyvisa.resources.Resource): The communication channel.\n \"\"\"\n\n def __init__(self, name, address=None, timeout=5, terminator='', **kwargs):\n super().__init__(name, **kwargs)\n\n self.add_parameter('timeout',\n get_cmd=self._get_visa_timeout,\n set_cmd=self._set_visa_timeout,\n unit='s',\n vals=vals.MultiType(vals.Numbers(min_value=0),\n vals.Enum(None)))\n\n self.set_address(address)\n self.set_terminator(terminator)\n self.timeout.set(timeout)\n\n def set_address(self, address):\n \"\"\"\n Change the address for this instrument.\n\n Args:\n address: The visa resource name to use to connect.\n Optionally includes '@<backend>' at the end. For example,\n 'ASRL2' will open COM2 with the default NI backend, but\n 'ASRL2@py' will open COM2 using pyvisa-py. Note that qcodes\n does not install (or even require) ANY backends, it is up to\n the user to do that.\n see eg: http://pyvisa.readthedocs.org/en/stable/names.html\n \"\"\"\n # in case we're changing the address - close the old handle first\n if getattr(self, 'visa_handle', None):\n self.visa_handle.close()\n\n if address and '@' in address:\n address, visa_library = address.split('@')\n resource_manager = visa.ResourceManager('@' + visa_library)\n else:\n resource_manager = visa.ResourceManager()\n\n self.visa_handle = resource_manager.open_resource(address)\n\n # Serial instruments have a separate flush method to clear their buffers\n # which behaves differently to clear. This is particularly important\n # for instruments which do not support SCPI commands.\n if isinstance(self.visa_handle, pyvisa.resources.SerialInstrument):\n self.visa_handle.flush(vi_const.VI_READ_BUF_DISCARD | vi_const.VI_WRITE_BUF_DISCARD)\n else:\n self.visa_handle.clear()\n self._address = address\n\n def set_terminator(self, terminator):\n r\"\"\"\n Change the read terminator to use.\n\n Args:\n terminator (str): Character(s) to look for at the end of a read.\n eg. 
'\\r\\n'.\n \"\"\"\n self.visa_handle.read_termination = terminator\n self._terminator = terminator\n\n def _set_visa_timeout(self, timeout):\n if timeout is None:\n self.visa_handle.timeout = None\n else:\n # pyvisa uses milliseconds but we use seconds\n self.visa_handle.timeout = timeout * 1000.0\n\n def _get_visa_timeout(self):\n timeout_ms = self.visa_handle.timeout\n if timeout_ms is None:\n return None\n else:\n # pyvisa uses milliseconds but we use seconds\n return timeout_ms / 1000\n\n def close(self):\n \"\"\"Disconnect and irreversibly tear down the instrument.\"\"\"\n if getattr(self, 'visa_handle', None):\n self.visa_handle.close()\n super().close()\n\n def check_error(self, ret_code):\n \"\"\"\n Default error checking, raises an error if return code !=0.\n\n Does not differentiate between warnings or specific error messages.\n Override this function in your driver if you want to add specific\n error messages.\n\n Args:\n ret_code (int): A Visa error code. See eg:\n https://github.com/hgrecco/pyvisa/blob/master/pyvisa/errors.py\n\n Raises:\n visa.VisaIOError: if ``ret_code`` indicates a communication\n problem.\n \"\"\"\n if ret_code != 0:\n raise visa.VisaIOError(ret_code)\n\n def write_raw(self, cmd):\n \"\"\"\n Low-level interface to ``visa_handle.write``.\n\n Args:\n cmd (str): The command to send to the instrument.\n \"\"\"\n nr_bytes_written, ret_code = self.visa_handle.write(cmd)\n self.check_error(ret_code)\n\n def ask_raw(self, cmd):\n \"\"\"\n Low-level interface to ``visa_handle.ask``.\n\n Args:\n cmd (str): The command to send to the instrument.\n\n Returns:\n str: The instrument's response.\n \"\"\"\n return self.visa_handle.ask(cmd)\n\n def snapshot_base(self, update=False):\n \"\"\"\n State of the instrument as a JSON-compatible dict.\n\n Args:\n update (bool): If True, update the state by querying the\n instrument. If False, just use the latest values in memory.\n\n Returns:\n dict: base snapshot\n \"\"\"\n snap = super().snapshot_base(update=update)\n\n snap['address'] = self._address\n snap['terminator'] = self._terminator\n snap['timeout'] = self.timeout.get()\n\n return snap\n", "path": "qcodes/instrument/visa.py"}]} | 2,009 | 251 |
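The qcodes row above replaces `visa_handle.clear()` with a buffer flush for serial resources. The sketch below isolates that branch, assuming pyvisa is installed with some VISA backend; the `ASRL1::INSTR` address in the usage guard is a placeholder and must be changed to whatever instrument is actually attached.

```python
import pyvisa
import pyvisa.constants as vi_const
import pyvisa.resources


def clear_buffers(handle):
    """Clear an open VISA handle the way the patched set_address() does."""
    if isinstance(handle, pyvisa.resources.SerialInstrument):
        # Serial ports: discard pending read/write buffers without writing
        # anything on the wire, so nothing reaches the instrument.
        handle.flush(vi_const.VI_READ_BUF_DISCARD | vi_const.VI_WRITE_BUF_DISCARD)
    else:
        # GPIB/USB/TCPIP resources keep the original device clear.
        handle.clear()


if __name__ == "__main__":
    rm = pyvisa.ResourceManager()              # requires an installed VISA backend
    handle = rm.open_resource("ASRL1::INSTR")  # placeholder serial address
    clear_buffers(handle)
    handle.close()
```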
gh_patches_debug_25982 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-1241 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Language Tree isn't handled correctly by cache in Page Tree
### Describe the Bug
So far, our Table rows aren't consistent after we delete a language tree node and recreate it. We should probably empty the cacheops cache after we delete a language tree node.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `integreat_cms/cms/forms/language_tree/language_tree_node_form.py`
Content:
```
1 import logging
2
3 from django import forms
4 from django.utils.text import capfirst
5 from django.utils.translation import ugettext_lazy as _
6
7 from ..custom_model_form import CustomModelForm
8 from ..custom_tree_node_form import CustomTreeNodeForm
9 from ...models import Language, LanguageTreeNode
10
11
12 logger = logging.getLogger(__name__)
13
14
15 class LanguageTreeNodeForm(CustomModelForm, CustomTreeNodeForm):
16 """
17 Form for creating and modifying language tree node objects
18 """
19
20 parent = forms.ModelChoiceField(
21 queryset=LanguageTreeNode.objects.all(),
22 required=False,
23 label=capfirst(LanguageTreeNode._meta.get_field("parent").verbose_name),
24 )
25
26 class Meta:
27 """
28 This class contains additional meta configuration of the form class, see the :class:`django.forms.ModelForm`
29 for more information.
30 """
31
32 #: The model of this :class:`django.forms.ModelForm`
33 model = LanguageTreeNode
34 #: The fields of the model which should be handled by this form
35 fields = ["language", "visible", "active"]
36
37 def __init__(self, **kwargs):
38 r"""
39 Initialize language tree node form
40
41 :param \**kwargs: The supplied keyword arguments
42 :type \**kwargs: dict
43 """
44
45 if "data" in kwargs:
46 # Copy QueryDict because it is immutable
47 data = kwargs.pop("data").copy()
48 # Use the parent node as value for the ref node
49 data["_ref_node_id"] = data["parent"]
50 data["_position"] = "first-child"
51 # Set the kwargs to updated POST data again
52 kwargs["data"] = data
53
54 # Instantiate CustomModelForm
55 super().__init__(**kwargs)
56
57 parent_queryset = self.instance.region.language_tree_nodes
58
59 if self.instance.id:
60 descendant_ids = [
61 descendant.id
62 for descendant in self.instance.get_cached_descendants(
63 include_self=True
64 )
65 ]
66 parent_queryset = parent_queryset.exclude(id__in=descendant_ids)
67 self.fields["parent"].initial = self.instance.parent_id
68 excluded_languages = [
69 language.id
70 for language in self.instance.region.languages
71 if language != self.instance.language
72 ]
73 else:
74 excluded_languages = [
75 language.id for language in self.instance.region.languages
76 ]
77
78 # limit possible parents to nodes of current region
79 self.fields["parent"].queryset = parent_queryset
80 self.fields["_ref_node_id"].choices = self.fields["parent"].choices
81 # limit possible languages to those which are not yet included in the tree
82 self.fields["language"].queryset = Language.objects.exclude(
83 id__in=excluded_languages
84 )
85
86 def clean(self):
87 """
88 Validate form fields which depend on each other, see :meth:`django.forms.Form.clean`:
89 Don't allow multiple root nodes for one region:
90 If self is a root node and the region already has a default language, raise a
91 :class:`~django.core.exceptions.ValidationError`.
92
93 :return: The cleaned form data
94 :rtype: dict
95 """
96 cleaned_data = super().clean()
97 default_language = self.instance.region.default_language
98 # There are two cases in which this error is thrown.
99 # Both cases include that the parent field is None.
100 # 1. The instance does exist:
101 # - The default language is different from the instance language
102 # 2. The instance does not exist:
103 # - The default language exists
104 if not cleaned_data.get("parent") and (
105 (self.instance.id and default_language != self.instance.language)
106 or (not self.instance.id and default_language)
107 ):
108 self.add_error(
109 "parent",
110 forms.ValidationError(
111 _(
112 "This region has already a default language."
113 "Please specify a source language for this language."
114 ),
115 code="invalid",
116 ),
117 )
118 logger.debug(
119 "LanguageTreeNodeForm validated [2] with cleaned data %r", cleaned_data
120 )
121 return cleaned_data
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/integreat_cms/cms/forms/language_tree/language_tree_node_form.py b/integreat_cms/cms/forms/language_tree/language_tree_node_form.py
--- a/integreat_cms/cms/forms/language_tree/language_tree_node_form.py
+++ b/integreat_cms/cms/forms/language_tree/language_tree_node_form.py
@@ -4,6 +4,8 @@
from django.utils.text import capfirst
from django.utils.translation import ugettext_lazy as _
+from cacheops import invalidate_obj
+
from ..custom_model_form import CustomModelForm
from ..custom_tree_node_form import CustomTreeNodeForm
from ...models import Language, LanguageTreeNode
@@ -119,3 +121,25 @@
"LanguageTreeNodeForm validated [2] with cleaned data %r", cleaned_data
)
return cleaned_data
+
+ def save(self, commit=True):
+ """
+ This method extends the default ``save()``-method of the base :class:`~django.forms.ModelForm` to flush
+ the cache after commiting.
+
+ :param commit: Whether or not the changes should be written to the database
+ :type commit: bool
+
+ :return: The saved page translation object
+ :rtype: ~integreat_cms.cms.models.pages.page_translation.PageTranslation
+ """
+ # Save CustomModelForm and flush Cache
+ result = super().save(commit=commit)
+
+ for page in self.instance.region.pages.all():
+ invalidate_obj(page)
+ for poi in self.instance.region.pois.all():
+ invalidate_obj(poi)
+ for event in self.instance.region.events.all():
+ invalidate_obj(event)
+ return result
| {"golden_diff": "diff --git a/integreat_cms/cms/forms/language_tree/language_tree_node_form.py b/integreat_cms/cms/forms/language_tree/language_tree_node_form.py\n--- a/integreat_cms/cms/forms/language_tree/language_tree_node_form.py\n+++ b/integreat_cms/cms/forms/language_tree/language_tree_node_form.py\n@@ -4,6 +4,8 @@\n from django.utils.text import capfirst\n from django.utils.translation import ugettext_lazy as _\n \n+from cacheops import invalidate_obj\n+\n from ..custom_model_form import CustomModelForm\n from ..custom_tree_node_form import CustomTreeNodeForm\n from ...models import Language, LanguageTreeNode\n@@ -119,3 +121,25 @@\n \"LanguageTreeNodeForm validated [2] with cleaned data %r\", cleaned_data\n )\n return cleaned_data\n+\n+ def save(self, commit=True):\n+ \"\"\"\n+ This method extends the default ``save()``-method of the base :class:`~django.forms.ModelForm` to flush\n+ the cache after commiting.\n+\n+ :param commit: Whether or not the changes should be written to the database\n+ :type commit: bool\n+\n+ :return: The saved page translation object\n+ :rtype: ~integreat_cms.cms.models.pages.page_translation.PageTranslation\n+ \"\"\"\n+ # Save CustomModelForm and flush Cache\n+ result = super().save(commit=commit)\n+\n+ for page in self.instance.region.pages.all():\n+ invalidate_obj(page)\n+ for poi in self.instance.region.pois.all():\n+ invalidate_obj(poi)\n+ for event in self.instance.region.events.all():\n+ invalidate_obj(event)\n+ return result\n", "issue": "Language Tree isn't handled correctly by cache in Page Tree\n### Describe the Bug\r\nSo far, our Table rows aren't consistent after we delete a language tree node and recreate it. We should probably empty the cacheops cache after we delete a language tree node.\r\n\n", "before_files": [{"content": "import logging\n\nfrom django import forms\nfrom django.utils.text import capfirst\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..custom_model_form import CustomModelForm\nfrom ..custom_tree_node_form import CustomTreeNodeForm\nfrom ...models import Language, LanguageTreeNode\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass LanguageTreeNodeForm(CustomModelForm, CustomTreeNodeForm):\n \"\"\"\n Form for creating and modifying language tree node objects\n \"\"\"\n\n parent = forms.ModelChoiceField(\n queryset=LanguageTreeNode.objects.all(),\n required=False,\n label=capfirst(LanguageTreeNode._meta.get_field(\"parent\").verbose_name),\n )\n\n class Meta:\n \"\"\"\n This class contains additional meta configuration of the form class, see the :class:`django.forms.ModelForm`\n for more information.\n \"\"\"\n\n #: The model of this :class:`django.forms.ModelForm`\n model = LanguageTreeNode\n #: The fields of the model which should be handled by this form\n fields = [\"language\", \"visible\", \"active\"]\n\n def __init__(self, **kwargs):\n r\"\"\"\n Initialize language tree node form\n\n :param \\**kwargs: The supplied keyword arguments\n :type \\**kwargs: dict\n \"\"\"\n\n if \"data\" in kwargs:\n # Copy QueryDict because it is immutable\n data = kwargs.pop(\"data\").copy()\n # Use the parent node as value for the ref node\n data[\"_ref_node_id\"] = data[\"parent\"]\n data[\"_position\"] = \"first-child\"\n # Set the kwargs to updated POST data again\n kwargs[\"data\"] = data\n\n # Instantiate CustomModelForm\n super().__init__(**kwargs)\n\n parent_queryset = self.instance.region.language_tree_nodes\n\n if self.instance.id:\n descendant_ids = [\n descendant.id\n for descendant in 
self.instance.get_cached_descendants(\n include_self=True\n )\n ]\n parent_queryset = parent_queryset.exclude(id__in=descendant_ids)\n self.fields[\"parent\"].initial = self.instance.parent_id\n excluded_languages = [\n language.id\n for language in self.instance.region.languages\n if language != self.instance.language\n ]\n else:\n excluded_languages = [\n language.id for language in self.instance.region.languages\n ]\n\n # limit possible parents to nodes of current region\n self.fields[\"parent\"].queryset = parent_queryset\n self.fields[\"_ref_node_id\"].choices = self.fields[\"parent\"].choices\n # limit possible languages to those which are not yet included in the tree\n self.fields[\"language\"].queryset = Language.objects.exclude(\n id__in=excluded_languages\n )\n\n def clean(self):\n \"\"\"\n Validate form fields which depend on each other, see :meth:`django.forms.Form.clean`:\n Don't allow multiple root nodes for one region:\n If self is a root node and the region already has a default language, raise a\n :class:`~django.core.exceptions.ValidationError`.\n\n :return: The cleaned form data\n :rtype: dict\n \"\"\"\n cleaned_data = super().clean()\n default_language = self.instance.region.default_language\n # There are two cases in which this error is thrown.\n # Both cases include that the parent field is None.\n # 1. The instance does exist:\n # - The default language is different from the instance language\n # 2. The instance does not exist:\n # - The default language exists\n if not cleaned_data.get(\"parent\") and (\n (self.instance.id and default_language != self.instance.language)\n or (not self.instance.id and default_language)\n ):\n self.add_error(\n \"parent\",\n forms.ValidationError(\n _(\n \"This region has already a default language.\"\n \"Please specify a source language for this language.\"\n ),\n code=\"invalid\",\n ),\n )\n logger.debug(\n \"LanguageTreeNodeForm validated [2] with cleaned data %r\", cleaned_data\n )\n return cleaned_data\n", "path": "integreat_cms/cms/forms/language_tree/language_tree_node_form.py"}], "after_files": [{"content": "import logging\n\nfrom django import forms\nfrom django.utils.text import capfirst\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom cacheops import invalidate_obj\n\nfrom ..custom_model_form import CustomModelForm\nfrom ..custom_tree_node_form import CustomTreeNodeForm\nfrom ...models import Language, LanguageTreeNode\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass LanguageTreeNodeForm(CustomModelForm, CustomTreeNodeForm):\n \"\"\"\n Form for creating and modifying language tree node objects\n \"\"\"\n\n parent = forms.ModelChoiceField(\n queryset=LanguageTreeNode.objects.all(),\n required=False,\n label=capfirst(LanguageTreeNode._meta.get_field(\"parent\").verbose_name),\n )\n\n class Meta:\n \"\"\"\n This class contains additional meta configuration of the form class, see the :class:`django.forms.ModelForm`\n for more information.\n \"\"\"\n\n #: The model of this :class:`django.forms.ModelForm`\n model = LanguageTreeNode\n #: The fields of the model which should be handled by this form\n fields = [\"language\", \"visible\", \"active\"]\n\n def __init__(self, **kwargs):\n r\"\"\"\n Initialize language tree node form\n\n :param \\**kwargs: The supplied keyword arguments\n :type \\**kwargs: dict\n \"\"\"\n\n if \"data\" in kwargs:\n # Copy QueryDict because it is immutable\n data = kwargs.pop(\"data\").copy()\n # Use the parent node as value for the ref node\n data[\"_ref_node_id\"] = 
data[\"parent\"]\n data[\"_position\"] = \"first-child\"\n # Set the kwargs to updated POST data again\n kwargs[\"data\"] = data\n\n # Instantiate CustomModelForm\n super().__init__(**kwargs)\n\n parent_queryset = self.instance.region.language_tree_nodes\n\n if self.instance.id:\n descendant_ids = [\n descendant.id\n for descendant in self.instance.get_cached_descendants(\n include_self=True\n )\n ]\n parent_queryset = parent_queryset.exclude(id__in=descendant_ids)\n self.fields[\"parent\"].initial = self.instance.parent_id\n excluded_languages = [\n language.id\n for language in self.instance.region.languages\n if language != self.instance.language\n ]\n else:\n excluded_languages = [\n language.id for language in self.instance.region.languages\n ]\n\n # limit possible parents to nodes of current region\n self.fields[\"parent\"].queryset = parent_queryset\n self.fields[\"_ref_node_id\"].choices = self.fields[\"parent\"].choices\n # limit possible languages to those which are not yet included in the tree\n self.fields[\"language\"].queryset = Language.objects.exclude(\n id__in=excluded_languages\n )\n\n def clean(self):\n \"\"\"\n Validate form fields which depend on each other, see :meth:`django.forms.Form.clean`:\n Don't allow multiple root nodes for one region:\n If self is a root node and the region already has a default language, raise a\n :class:`~django.core.exceptions.ValidationError`.\n\n :return: The cleaned form data\n :rtype: dict\n \"\"\"\n cleaned_data = super().clean()\n default_language = self.instance.region.default_language\n # There are two cases in which this error is thrown.\n # Both cases include that the parent field is None.\n # 1. The instance does exist:\n # - The default language is different from the instance language\n # 2. The instance does not exist:\n # - The default language exists\n if not cleaned_data.get(\"parent\") and (\n (self.instance.id and default_language != self.instance.language)\n or (not self.instance.id and default_language)\n ):\n self.add_error(\n \"parent\",\n forms.ValidationError(\n _(\n \"This region has already a default language.\"\n \"Please specify a source language for this language.\"\n ),\n code=\"invalid\",\n ),\n )\n logger.debug(\n \"LanguageTreeNodeForm validated [2] with cleaned data %r\", cleaned_data\n )\n return cleaned_data\n\n def save(self, commit=True):\n \"\"\"\n This method extends the default ``save()``-method of the base :class:`~django.forms.ModelForm` to flush\n the cache after commiting.\n\n :param commit: Whether or not the changes should be written to the database\n :type commit: bool\n\n :return: The saved page translation object\n :rtype: ~integreat_cms.cms.models.pages.page_translation.PageTranslation\n \"\"\"\n # Save CustomModelForm and flush Cache\n result = super().save(commit=commit)\n\n for page in self.instance.region.pages.all():\n invalidate_obj(page)\n for poi in self.instance.region.pois.all():\n invalidate_obj(poi)\n for event in self.instance.region.events.all():\n invalidate_obj(event)\n return result\n", "path": "integreat_cms/cms/forms/language_tree/language_tree_node_form.py"}]} | 1,436 | 365 |
gh_patches_debug_11755 | rasdani/github-patches | git_diff | kserve__kserve-705 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KFServing Transformer does not work with onnxruntime and trtis model server
/kind bug
**What steps did you take and what happened:**
Currently kfserving transformer does not work with onnxruntime and trtis model server due to following validation because their data plane do not support `instances` key in JSON.
https://github.com/kubeflow/kfserving/blob/master/python/kfserving/kfserving/handlers/http.py#L24
**What did you expect to happen:**
Before we get to V2 data plane with agreed protocol across model servers, we should relax this validation.
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Istio Version:
- Knative Version:
- KFServing Version: 0.2.2
- Kubeflow version:
- Minikube version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/kfserving/kfserving/handlers/http.py`
Content:
```
1 import tornado.web
2 import json
3 from typing import Dict
4 from http import HTTPStatus
5 from kfserving.kfmodel import KFModel
6
7
8 class HTTPHandler(tornado.web.RequestHandler):
9 def initialize(self, models: Dict[str, KFModel]):
10 self.models = models # pylint:disable=attribute-defined-outside-init
11
12 def get_model(self, name: str):
13 if name not in self.models:
14 raise tornado.web.HTTPError(
15 status_code=HTTPStatus.NOT_FOUND,
16 reason="Model with name %s does not exist." % name
17 )
18 model = self.models[name]
19 if not model.ready:
20 model.load()
21 return model
22
23 def validate(self, request):
24 if "instances" not in request:
25 raise tornado.web.HTTPError(
26 status_code=HTTPStatus.BAD_REQUEST,
27 reason="Expected key \"instances\" in request body"
28 )
29
30 if not isinstance(request["instances"], list):
31 raise tornado.web.HTTPError(
32 status_code=HTTPStatus.BAD_REQUEST,
33 reason="Expected \"instances\" to be a list"
34 )
35 return request
36
37
38 class PredictHandler(HTTPHandler):
39 def post(self, name: str):
40 model = self.get_model(name)
41 try:
42 body = json.loads(self.request.body)
43 except json.decoder.JSONDecodeError as e:
44 raise tornado.web.HTTPError(
45 status_code=HTTPStatus.BAD_REQUEST,
46 reason="Unrecognized request format: %s" % e
47 )
48 request = model.preprocess(body)
49 request = self.validate(request)
50 response = model.predict(request)
51 response = model.postprocess(response)
52 self.write(response)
53
54
55 class ExplainHandler(HTTPHandler):
56 def post(self, name: str):
57 model = self.get_model(name)
58 try:
59 body = json.loads(self.request.body)
60 except json.decoder.JSONDecodeError as e:
61 raise tornado.web.HTTPError(
62 status_code=HTTPStatus.BAD_REQUEST,
63 reason="Unrecognized request format: %s" % e
64 )
65 request = model.preprocess(body)
66 request = self.validate(request)
67 response = model.explain(request)
68 response = model.postprocess(response)
69 self.write(response)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/kfserving/kfserving/handlers/http.py b/python/kfserving/kfserving/handlers/http.py
--- a/python/kfserving/kfserving/handlers/http.py
+++ b/python/kfserving/kfserving/handlers/http.py
@@ -21,13 +21,7 @@
return model
def validate(self, request):
- if "instances" not in request:
- raise tornado.web.HTTPError(
- status_code=HTTPStatus.BAD_REQUEST,
- reason="Expected key \"instances\" in request body"
- )
-
- if not isinstance(request["instances"], list):
+ if "instances" in request and not isinstance(request["instances"], list):
raise tornado.web.HTTPError(
status_code=HTTPStatus.BAD_REQUEST,
reason="Expected \"instances\" to be a list"
| {"golden_diff": "diff --git a/python/kfserving/kfserving/handlers/http.py b/python/kfserving/kfserving/handlers/http.py\n--- a/python/kfserving/kfserving/handlers/http.py\n+++ b/python/kfserving/kfserving/handlers/http.py\n@@ -21,13 +21,7 @@\n return model\n \n def validate(self, request):\n- if \"instances\" not in request:\n- raise tornado.web.HTTPError(\n- status_code=HTTPStatus.BAD_REQUEST,\n- reason=\"Expected key \\\"instances\\\" in request body\"\n- )\n-\n- if not isinstance(request[\"instances\"], list):\n+ if \"instances\" in request and not isinstance(request[\"instances\"], list):\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.BAD_REQUEST,\n reason=\"Expected \\\"instances\\\" to be a list\"\n", "issue": "KFServing Transformer does not work with onnxruntime and trtis model server\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\nCurrently kfserving transformer does not work with onnxruntime and trtis model server due to following validation because their data plane do not support `instances` key in JSON.\r\nhttps://github.com/kubeflow/kfserving/blob/master/python/kfserving/kfserving/handlers/http.py#L24 \r\n\r\n\r\n\r\n**What did you expect to happen:**\r\nBefore we get to V2 data plane with agreed protocol across model servers, we should relax this validation.\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version: 0.2.2\r\n- Kubeflow version:\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. from `/etc/os-release`):\r\n\n", "before_files": [{"content": "import tornado.web\nimport json\nfrom typing import Dict\nfrom http import HTTPStatus\nfrom kfserving.kfmodel import KFModel\n\n\nclass HTTPHandler(tornado.web.RequestHandler):\n def initialize(self, models: Dict[str, KFModel]):\n self.models = models # pylint:disable=attribute-defined-outside-init\n\n def get_model(self, name: str):\n if name not in self.models:\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.NOT_FOUND,\n reason=\"Model with name %s does not exist.\" % name\n )\n model = self.models[name]\n if not model.ready:\n model.load()\n return model\n\n def validate(self, request):\n if \"instances\" not in request:\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.BAD_REQUEST,\n reason=\"Expected key \\\"instances\\\" in request body\"\n )\n\n if not isinstance(request[\"instances\"], list):\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.BAD_REQUEST,\n reason=\"Expected \\\"instances\\\" to be a list\"\n )\n return request\n\n\nclass PredictHandler(HTTPHandler):\n def post(self, name: str):\n model = self.get_model(name)\n try:\n body = json.loads(self.request.body)\n except json.decoder.JSONDecodeError as e:\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.BAD_REQUEST,\n reason=\"Unrecognized request format: %s\" % e\n )\n request = model.preprocess(body)\n request = self.validate(request)\n response = model.predict(request)\n response = model.postprocess(response)\n self.write(response)\n\n\nclass ExplainHandler(HTTPHandler):\n def post(self, name: str):\n model = self.get_model(name)\n try:\n body = json.loads(self.request.body)\n except json.decoder.JSONDecodeError as e:\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.BAD_REQUEST,\n reason=\"Unrecognized request format: %s\" % e\n )\n request = model.preprocess(body)\n request = self.validate(request)\n 
response = model.explain(request)\n response = model.postprocess(response)\n self.write(response)\n", "path": "python/kfserving/kfserving/handlers/http.py"}], "after_files": [{"content": "import tornado.web\nimport json\nfrom typing import Dict\nfrom http import HTTPStatus\nfrom kfserving.kfmodel import KFModel\n\n\nclass HTTPHandler(tornado.web.RequestHandler):\n def initialize(self, models: Dict[str, KFModel]):\n self.models = models # pylint:disable=attribute-defined-outside-init\n\n def get_model(self, name: str):\n if name not in self.models:\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.NOT_FOUND,\n reason=\"Model with name %s does not exist.\" % name\n )\n model = self.models[name]\n if not model.ready:\n model.load()\n return model\n\n def validate(self, request):\n if \"instances\" in request and not isinstance(request[\"instances\"], list):\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.BAD_REQUEST,\n reason=\"Expected \\\"instances\\\" to be a list\"\n )\n return request\n\n\nclass PredictHandler(HTTPHandler):\n def post(self, name: str):\n model = self.get_model(name)\n try:\n body = json.loads(self.request.body)\n except json.decoder.JSONDecodeError as e:\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.BAD_REQUEST,\n reason=\"Unrecognized request format: %s\" % e\n )\n request = model.preprocess(body)\n request = self.validate(request)\n response = model.predict(request)\n response = model.postprocess(response)\n self.write(response)\n\n\nclass ExplainHandler(HTTPHandler):\n def post(self, name: str):\n model = self.get_model(name)\n try:\n body = json.loads(self.request.body)\n except json.decoder.JSONDecodeError as e:\n raise tornado.web.HTTPError(\n status_code=HTTPStatus.BAD_REQUEST,\n reason=\"Unrecognized request format: %s\" % e\n )\n request = model.preprocess(body)\n request = self.validate(request)\n response = model.explain(request)\n response = model.postprocess(response)\n self.write(response)\n", "path": "python/kfserving/kfserving/handlers/http.py"}]} | 1,079 | 185 |
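The kfserving patch above only rejects a request when `instances` is present but not a list. A standalone sketch of that relaxed rule is below; it raises a plain `ValueError` instead of `tornado.web.HTTPError` purely so the example runs without tornado, and the sample payloads are made up to contrast the tensorflow-style and trtis/onnx-style request shapes.

```python
def validate(request: dict) -> dict:
    """Mirror of the relaxed validation from the diff above, minus tornado."""
    if "instances" in request and not isinstance(request["instances"], list):
        raise ValueError('Expected "instances" to be a list')
    return request


validate({"instances": [[1.0, 2.0]]})                 # tensorflow-style payload: still accepted
validate({"inputs": [{"name": "x", "data": [1.0]}]})  # trtis/onnx-style payload: no longer rejected

try:
    validate({"instances": 5})                        # malformed "instances": still fails
except ValueError as err:
    print(err)
```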
gh_patches_debug_7329 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-8842 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-3213] [Bug] nitpick: incorrect docstring
should be `BaseAdapter.capabilities()` not `BaseAdapter.has_feature()`
https://github.com/dbt-labs/dbt-core/blob/1baebb423c82a9c645e59b390fc3a69089623600/core/dbt/adapters/capability.py#L7
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/adapters/capability.py`
Content:
```
1 from dataclasses import dataclass
2 from enum import Enum
3 from typing import Optional, DefaultDict, Mapping
4
5
6 class Capability(str, Enum):
7 """Enumeration of optional adapter features which can be probed using BaseAdapter.has_feature()"""
8
9 SchemaMetadataByRelations = "SchemaMetadataByRelations"
10 """Indicates efficient support for retrieving schema metadata for a list of relations, rather than always retrieving
11 all the relations in a schema."""
12
13 TableLastModifiedMetadata = "TableLastModifiedMetadata"
14 """Indicates support for determining the time of the last table modification by querying database metadata."""
15
16
17 class Support(str, Enum):
18 Unknown = "Unknown"
19 """The adapter has not declared whether this capability is a feature of the underlying DBMS."""
20
21 Unsupported = "Unsupported"
22 """This capability is not possible with the underlying DBMS, so the adapter does not implement related macros."""
23
24 NotImplemented = "NotImplemented"
25 """This capability is available in the underlying DBMS, but support has not yet been implemented in the adapter."""
26
27 Versioned = "Versioned"
28 """Some versions of the DBMS supported by the adapter support this capability and the adapter has implemented any
29 macros needed to use it."""
30
31 Full = "Full"
32 """All versions of the DBMS supported by the adapter support this capability and the adapter has implemented any
33 macros needed to use it."""
34
35
36 @dataclass
37 class CapabilitySupport:
38 support: Support
39 first_version: Optional[str] = None
40
41 def __bool__(self):
42 return self.support == Support.Versioned or self.support == Support.Full
43
44
45 class CapabilityDict(DefaultDict[Capability, CapabilitySupport]):
46 def __init__(self, vals: Mapping[Capability, CapabilitySupport]):
47 super().__init__(self._default)
48 self.update(vals)
49
50 @staticmethod
51 def _default():
52 return CapabilitySupport(support=Support.Unknown)
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/dbt/adapters/capability.py b/core/dbt/adapters/capability.py
--- a/core/dbt/adapters/capability.py
+++ b/core/dbt/adapters/capability.py
@@ -4,7 +4,7 @@
class Capability(str, Enum):
- """Enumeration of optional adapter features which can be probed using BaseAdapter.has_feature()"""
+ """Enumeration of optional adapter features which can be probed using BaseAdapter.capabilities()"""
SchemaMetadataByRelations = "SchemaMetadataByRelations"
"""Indicates efficient support for retrieving schema metadata for a list of relations, rather than always retrieving
| {"golden_diff": "diff --git a/core/dbt/adapters/capability.py b/core/dbt/adapters/capability.py\n--- a/core/dbt/adapters/capability.py\n+++ b/core/dbt/adapters/capability.py\n@@ -4,7 +4,7 @@\n \n \n class Capability(str, Enum):\n- \"\"\"Enumeration of optional adapter features which can be probed using BaseAdapter.has_feature()\"\"\"\n+ \"\"\"Enumeration of optional adapter features which can be probed using BaseAdapter.capabilities()\"\"\"\n \n SchemaMetadataByRelations = \"SchemaMetadataByRelations\"\n \"\"\"Indicates efficient support for retrieving schema metadata for a list of relations, rather than always retrieving\n", "issue": "[CT-3213] [Bug] nitpick: incorrect docstring\nshould be `BaseAdapter.capabilities()` not `BaseAdapter.has_feature()`\r\n\r\nhttps://github.com/dbt-labs/dbt-core/blob/1baebb423c82a9c645e59b390fc3a69089623600/core/dbt/adapters/capability.py#L7\n", "before_files": [{"content": "from dataclasses import dataclass\nfrom enum import Enum\nfrom typing import Optional, DefaultDict, Mapping\n\n\nclass Capability(str, Enum):\n \"\"\"Enumeration of optional adapter features which can be probed using BaseAdapter.has_feature()\"\"\"\n\n SchemaMetadataByRelations = \"SchemaMetadataByRelations\"\n \"\"\"Indicates efficient support for retrieving schema metadata for a list of relations, rather than always retrieving\n all the relations in a schema.\"\"\"\n\n TableLastModifiedMetadata = \"TableLastModifiedMetadata\"\n \"\"\"Indicates support for determining the time of the last table modification by querying database metadata.\"\"\"\n\n\nclass Support(str, Enum):\n Unknown = \"Unknown\"\n \"\"\"The adapter has not declared whether this capability is a feature of the underlying DBMS.\"\"\"\n\n Unsupported = \"Unsupported\"\n \"\"\"This capability is not possible with the underlying DBMS, so the adapter does not implement related macros.\"\"\"\n\n NotImplemented = \"NotImplemented\"\n \"\"\"This capability is available in the underlying DBMS, but support has not yet been implemented in the adapter.\"\"\"\n\n Versioned = \"Versioned\"\n \"\"\"Some versions of the DBMS supported by the adapter support this capability and the adapter has implemented any\n macros needed to use it.\"\"\"\n\n Full = \"Full\"\n \"\"\"All versions of the DBMS supported by the adapter support this capability and the adapter has implemented any\n macros needed to use it.\"\"\"\n\n\n@dataclass\nclass CapabilitySupport:\n support: Support\n first_version: Optional[str] = None\n\n def __bool__(self):\n return self.support == Support.Versioned or self.support == Support.Full\n\n\nclass CapabilityDict(DefaultDict[Capability, CapabilitySupport]):\n def __init__(self, vals: Mapping[Capability, CapabilitySupport]):\n super().__init__(self._default)\n self.update(vals)\n\n @staticmethod\n def _default():\n return CapabilitySupport(support=Support.Unknown)\n", "path": "core/dbt/adapters/capability.py"}], "after_files": [{"content": "from dataclasses import dataclass\nfrom enum import Enum\nfrom typing import Optional, DefaultDict, Mapping\n\n\nclass Capability(str, Enum):\n \"\"\"Enumeration of optional adapter features which can be probed using BaseAdapter.capabilities()\"\"\"\n\n SchemaMetadataByRelations = \"SchemaMetadataByRelations\"\n \"\"\"Indicates efficient support for retrieving schema metadata for a list of relations, rather than always retrieving\n all the relations in a schema.\"\"\"\n\n TableLastModifiedMetadata = \"TableLastModifiedMetadata\"\n \"\"\"Indicates support for determining the time of 
the last table modification by querying database metadata.\"\"\"\n\n\nclass Support(str, Enum):\n Unknown = \"Unknown\"\n \"\"\"The adapter has not declared whether this capability is a feature of the underlying DBMS.\"\"\"\n\n Unsupported = \"Unsupported\"\n \"\"\"This capability is not possible with the underlying DBMS, so the adapter does not implement related macros.\"\"\"\n\n NotImplemented = \"NotImplemented\"\n \"\"\"This capability is available in the underlying DBMS, but support has not yet been implemented in the adapter.\"\"\"\n\n Versioned = \"Versioned\"\n \"\"\"Some versions of the DBMS supported by the adapter support this capability and the adapter has implemented any\n macros needed to use it.\"\"\"\n\n Full = \"Full\"\n \"\"\"All versions of the DBMS supported by the adapter support this capability and the adapter has implemented any\n macros needed to use it.\"\"\"\n\n\n@dataclass\nclass CapabilitySupport:\n support: Support\n first_version: Optional[str] = None\n\n def __bool__(self):\n return self.support == Support.Versioned or self.support == Support.Full\n\n\nclass CapabilityDict(DefaultDict[Capability, CapabilitySupport]):\n def __init__(self, vals: Mapping[Capability, CapabilitySupport]):\n super().__init__(self._default)\n self.update(vals)\n\n @staticmethod\n def _default():\n return CapabilitySupport(support=Support.Unknown)\n", "path": "core/dbt/adapters/capability.py"}]} | 857 | 139 |
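Since the dbt row above only corrects a docstring, a short usage sketch may help show what the corrected reference to `BaseAdapter.capabilities()` points at: that method returns a `CapabilityDict`, which can be probed as below. The import path assumes a dbt-core release that ships the `dbt.adapters.capability` module shown in the row.

```python
from dbt.adapters.capability import (
    Capability,
    CapabilityDict,
    CapabilitySupport,
    Support,
)

# Roughly what an adapter's BaseAdapter.capabilities() would hand back.
caps = CapabilityDict({
    Capability.TableLastModifiedMetadata: CapabilitySupport(support=Support.Full),
})

print(bool(caps[Capability.TableLastModifiedMetadata]))  # True: Full support
print(bool(caps[Capability.SchemaMetadataByRelations]))  # False: defaults to Unknown
```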
gh_patches_debug_2937 | rasdani/github-patches | git_diff | openai__gym-1708 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in PixelObservationWrapper
Error log
```
env = PixelObservationWrapper(env, pixels_only=True)
File "/home/tsan/Desktop/gym/gym/wrappers/pixel_observation.py", line 89, in __init__
pixels = self.env.render(**render_kwargs)
File "/home/tsan/Desktop/gym/gym/core.py", line 233, in render
return self.env.render(mode, **kwargs)
TypeError: render() got an unexpected keyword argument 'pixels'
```
Can be reproduced by running
```
import gym
from gym.wrappers.pixel_observation import PixelObservationWrapper # pylint: disable=E0401
env = gym.make('Acrobot-v1')
env.reset()
env = PixelObservationWrapper(env, pixels_only=True)
env.step(0)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gym/wrappers/pixel_observation.py`
Content:
```
1 """An observation wrapper that augments observations by pixel values."""
2
3 import collections
4 import copy
5
6 import numpy as np
7
8 from gym import spaces
9 from gym import ObservationWrapper
10
11 STATE_KEY = 'state'
12
13
14 class PixelObservationWrapper(ObservationWrapper):
15 """Augment observations by pixel values."""
16
17 def __init__(self,
18 env,
19 pixels_only=True,
20 render_kwargs=None,
21 pixel_keys=('pixels', )):
22 """Initializes a new pixel Wrapper.
23
24 Args:
25 env: The environment to wrap.
26 pixels_only: If `True` (default), the original observation returned
27 by the wrapped environment will be discarded, and a dictionary
28 observation will only include pixels. If `False`, the
29 observation dictionary will contain both the original
30 observations and the pixel observations.
31 render_kwargs: Optional `dict` containing keyword arguments passed
32 to the `self.render` method.
33 pixel_keys: Optional custom string specifying the pixel
34 observation's key in the `OrderedDict` of observations.
35 Defaults to 'pixels'.
36
37 Raises:
38 ValueError: If `env`'s observation spec is not compatible with the
39 wrapper. Supported formats are a single array, or a dict of
40 arrays.
41 ValueError: If `env`'s observation already contains any of the
42 specified `pixel_keys`.
43 """
44
45 super(PixelObservationWrapper, self).__init__(env)
46
47 if render_kwargs is None:
48 render_kwargs = {}
49
50 for key in pixel_keys:
51 render_kwargs.setdefault(key, {})
52
53 render_mode = render_kwargs[key].pop('mode', 'rgb_array')
54 assert render_mode == 'rgb_array', render_mode
55 render_kwargs[key]['mode'] = 'rgb_array'
56
57 wrapped_observation_space = env.observation_space
58
59 if isinstance(wrapped_observation_space, spaces.Box):
60 self._observation_is_dict = False
61 invalid_keys = set([STATE_KEY])
62 elif isinstance(wrapped_observation_space,
63 (spaces.Dict, collections.MutableMapping)):
64 self._observation_is_dict = True
65 invalid_keys = set(wrapped_observation_space.spaces.keys())
66 else:
67 raise ValueError("Unsupported observation space structure.")
68
69 if not pixels_only:
70 # Make sure that now keys in the `pixel_keys` overlap with
71 # `observation_keys`
72 overlapping_keys = set(pixel_keys) & set(invalid_keys)
73 if overlapping_keys:
74 raise ValueError("Duplicate or reserved pixel keys {!r}."
75 .format(overlapping_keys))
76
77 if pixels_only:
78 self.observation_space = spaces.Dict()
79 elif self._observation_is_dict:
80 self.observation_space = copy.deepcopy(wrapped_observation_space)
81 else:
82 self.observation_space = spaces.Dict()
83 self.observation_space.spaces[STATE_KEY] = wrapped_observation_space
84
85 # Extend observation space with pixels.
86
87 pixels_spaces = {}
88 for pixel_key in pixel_keys:
89 pixels = self.env.render(**render_kwargs)
90
91 if np.issubdtype(pixels.dtype, np.integer):
92 low, high = (0, 255)
93 elif np.issubdtype(pixels.dtype, np.float):
94 low, high = (-float('inf'), float('inf'))
95 else:
96 raise TypeError(pixels.dtype)
97
98 pixels_space = spaces.Box(
99 shape=pixels.shape, low=low, high=high, dtype=pixels.dtype)
100 pixels_spaces[pixel_key] = pixels_space
101
102 self.observation_space.spaces.update(pixels_spaces)
103
104 self._env = env
105 self._pixels_only = pixels_only
106 self._render_kwargs = render_kwargs
107 self._pixel_keys = pixel_keys
108
109 def observation(self, observation):
110 pixel_observation = self._add_pixel_observation(observation)
111 return pixel_observation
112
113 def _add_pixel_observation(self, observation):
114 if self._pixels_only:
115 observation = collections.OrderedDict()
116 elif self._observation_is_dict:
117 observation = type(observation)(observation)
118 else:
119 observation = collections.OrderedDict()
120 observation[STATE_KEY] = observation
121
122 pixel_observations = {
123 pixel_key: self.env.render(**self._render_kwargs[pixel_key])
124 for pixel_key in self._pixel_keys
125 }
126
127 observation.update(pixel_observations)
128
129 return observation
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gym/wrappers/pixel_observation.py b/gym/wrappers/pixel_observation.py
--- a/gym/wrappers/pixel_observation.py
+++ b/gym/wrappers/pixel_observation.py
@@ -86,7 +86,7 @@
pixels_spaces = {}
for pixel_key in pixel_keys:
- pixels = self.env.render(**render_kwargs)
+ pixels = self.env.render(**render_kwargs[pixel_key])
if np.issubdtype(pixels.dtype, np.integer):
low, high = (0, 255)
| {"golden_diff": "diff --git a/gym/wrappers/pixel_observation.py b/gym/wrappers/pixel_observation.py\n--- a/gym/wrappers/pixel_observation.py\n+++ b/gym/wrappers/pixel_observation.py\n@@ -86,7 +86,7 @@\n \n pixels_spaces = {}\n for pixel_key in pixel_keys:\n- pixels = self.env.render(**render_kwargs)\n+ pixels = self.env.render(**render_kwargs[pixel_key])\n \n if np.issubdtype(pixels.dtype, np.integer):\n low, high = (0, 255)\n", "issue": "Bug in PixelObservationWrapper \nError log\r\n```\r\n env = PixelObservationWrapper(env, pixels_only=True)\r\n File \"/home/tsan/Desktop/gym/gym/wrappers/pixel_observation.py\", line 89, in __init__\r\n pixels = self.env.render(**render_kwargs)\r\n File \"/home/tsan/Desktop/gym/gym/core.py\", line 233, in render\r\n return self.env.render(mode, **kwargs)\r\nTypeError: render() got an unexpected keyword argument 'pixels'\r\n```\r\n\r\nCan be reproduced by running\r\n```\r\nimport gym\r\nfrom gym.wrappers.pixel_observation import PixelObservationWrapper # pylint: disable=E0401\r\n\r\nenv = gym.make('Acrobot-v1')\r\nenv.reset()\r\nenv = PixelObservationWrapper(env, pixels_only=True)\r\nenv.step(0)\r\n```\n", "before_files": [{"content": "\"\"\"An observation wrapper that augments observations by pixel values.\"\"\"\n\nimport collections\nimport copy\n\nimport numpy as np\n\nfrom gym import spaces\nfrom gym import ObservationWrapper\n\nSTATE_KEY = 'state'\n\n\nclass PixelObservationWrapper(ObservationWrapper):\n \"\"\"Augment observations by pixel values.\"\"\"\n\n def __init__(self,\n env,\n pixels_only=True,\n render_kwargs=None,\n pixel_keys=('pixels', )):\n \"\"\"Initializes a new pixel Wrapper.\n\n Args:\n env: The environment to wrap.\n pixels_only: If `True` (default), the original observation returned\n by the wrapped environment will be discarded, and a dictionary\n observation will only include pixels. If `False`, the\n observation dictionary will contain both the original\n observations and the pixel observations.\n render_kwargs: Optional `dict` containing keyword arguments passed\n to the `self.render` method.\n pixel_keys: Optional custom string specifying the pixel\n observation's key in the `OrderedDict` of observations.\n Defaults to 'pixels'.\n\n Raises:\n ValueError: If `env`'s observation spec is not compatible with the\n wrapper. 
Supported formats are a single array, or a dict of\n arrays.\n ValueError: If `env`'s observation already contains any of the\n specified `pixel_keys`.\n \"\"\"\n\n super(PixelObservationWrapper, self).__init__(env)\n\n if render_kwargs is None:\n render_kwargs = {}\n\n for key in pixel_keys:\n render_kwargs.setdefault(key, {})\n\n render_mode = render_kwargs[key].pop('mode', 'rgb_array')\n assert render_mode == 'rgb_array', render_mode\n render_kwargs[key]['mode'] = 'rgb_array'\n\n wrapped_observation_space = env.observation_space\n\n if isinstance(wrapped_observation_space, spaces.Box):\n self._observation_is_dict = False\n invalid_keys = set([STATE_KEY])\n elif isinstance(wrapped_observation_space,\n (spaces.Dict, collections.MutableMapping)):\n self._observation_is_dict = True\n invalid_keys = set(wrapped_observation_space.spaces.keys())\n else:\n raise ValueError(\"Unsupported observation space structure.\")\n\n if not pixels_only:\n # Make sure that now keys in the `pixel_keys` overlap with\n # `observation_keys`\n overlapping_keys = set(pixel_keys) & set(invalid_keys)\n if overlapping_keys:\n raise ValueError(\"Duplicate or reserved pixel keys {!r}.\"\n .format(overlapping_keys))\n\n if pixels_only:\n self.observation_space = spaces.Dict()\n elif self._observation_is_dict:\n self.observation_space = copy.deepcopy(wrapped_observation_space)\n else:\n self.observation_space = spaces.Dict()\n self.observation_space.spaces[STATE_KEY] = wrapped_observation_space\n\n # Extend observation space with pixels.\n\n pixels_spaces = {}\n for pixel_key in pixel_keys:\n pixels = self.env.render(**render_kwargs)\n\n if np.issubdtype(pixels.dtype, np.integer):\n low, high = (0, 255)\n elif np.issubdtype(pixels.dtype, np.float):\n low, high = (-float('inf'), float('inf'))\n else:\n raise TypeError(pixels.dtype)\n\n pixels_space = spaces.Box(\n shape=pixels.shape, low=low, high=high, dtype=pixels.dtype)\n pixels_spaces[pixel_key] = pixels_space\n\n self.observation_space.spaces.update(pixels_spaces)\n\n self._env = env\n self._pixels_only = pixels_only\n self._render_kwargs = render_kwargs\n self._pixel_keys = pixel_keys\n\n def observation(self, observation):\n pixel_observation = self._add_pixel_observation(observation)\n return pixel_observation\n\n def _add_pixel_observation(self, observation):\n if self._pixels_only:\n observation = collections.OrderedDict()\n elif self._observation_is_dict:\n observation = type(observation)(observation)\n else:\n observation = collections.OrderedDict()\n observation[STATE_KEY] = observation\n\n pixel_observations = {\n pixel_key: self.env.render(**self._render_kwargs[pixel_key])\n for pixel_key in self._pixel_keys\n }\n\n observation.update(pixel_observations)\n\n return observation\n", "path": "gym/wrappers/pixel_observation.py"}], "after_files": [{"content": "\"\"\"An observation wrapper that augments observations by pixel values.\"\"\"\n\nimport collections\nimport copy\n\nimport numpy as np\n\nfrom gym import spaces\nfrom gym import ObservationWrapper\n\nSTATE_KEY = 'state'\n\n\nclass PixelObservationWrapper(ObservationWrapper):\n \"\"\"Augment observations by pixel values.\"\"\"\n\n def __init__(self,\n env,\n pixels_only=True,\n render_kwargs=None,\n pixel_keys=('pixels', )):\n \"\"\"Initializes a new pixel Wrapper.\n\n Args:\n env: The environment to wrap.\n pixels_only: If `True` (default), the original observation returned\n by the wrapped environment will be discarded, and a dictionary\n observation will only include pixels. 
If `False`, the\n observation dictionary will contain both the original\n observations and the pixel observations.\n render_kwargs: Optional `dict` containing keyword arguments passed\n to the `self.render` method.\n pixel_keys: Optional custom string specifying the pixel\n observation's key in the `OrderedDict` of observations.\n Defaults to 'pixels'.\n\n Raises:\n ValueError: If `env`'s observation spec is not compatible with the\n wrapper. Supported formats are a single array, or a dict of\n arrays.\n ValueError: If `env`'s observation already contains any of the\n specified `pixel_keys`.\n \"\"\"\n\n super(PixelObservationWrapper, self).__init__(env)\n\n if render_kwargs is None:\n render_kwargs = {}\n\n for key in pixel_keys:\n render_kwargs.setdefault(key, {})\n\n render_mode = render_kwargs[key].pop('mode', 'rgb_array')\n assert render_mode == 'rgb_array', render_mode\n render_kwargs[key]['mode'] = 'rgb_array'\n\n wrapped_observation_space = env.observation_space\n\n if isinstance(wrapped_observation_space, spaces.Box):\n self._observation_is_dict = False\n invalid_keys = set([STATE_KEY])\n elif isinstance(wrapped_observation_space,\n (spaces.Dict, collections.MutableMapping)):\n self._observation_is_dict = True\n invalid_keys = set(wrapped_observation_space.spaces.keys())\n else:\n raise ValueError(\"Unsupported observation space structure.\")\n\n if not pixels_only:\n # Make sure that now keys in the `pixel_keys` overlap with\n # `observation_keys`\n overlapping_keys = set(pixel_keys) & set(invalid_keys)\n if overlapping_keys:\n raise ValueError(\"Duplicate or reserved pixel keys {!r}.\"\n .format(overlapping_keys))\n\n if pixels_only:\n self.observation_space = spaces.Dict()\n elif self._observation_is_dict:\n self.observation_space = copy.deepcopy(wrapped_observation_space)\n else:\n self.observation_space = spaces.Dict()\n self.observation_space.spaces[STATE_KEY] = wrapped_observation_space\n\n # Extend observation space with pixels.\n\n pixels_spaces = {}\n for pixel_key in pixel_keys:\n pixels = self.env.render(**render_kwargs[pixel_key])\n\n if np.issubdtype(pixels.dtype, np.integer):\n low, high = (0, 255)\n elif np.issubdtype(pixels.dtype, np.float):\n low, high = (-float('inf'), float('inf'))\n else:\n raise TypeError(pixels.dtype)\n\n pixels_space = spaces.Box(\n shape=pixels.shape, low=low, high=high, dtype=pixels.dtype)\n pixels_spaces[pixel_key] = pixels_space\n\n self.observation_space.spaces.update(pixels_spaces)\n\n self._env = env\n self._pixels_only = pixels_only\n self._render_kwargs = render_kwargs\n self._pixel_keys = pixel_keys\n\n def observation(self, observation):\n pixel_observation = self._add_pixel_observation(observation)\n return pixel_observation\n\n def _add_pixel_observation(self, observation):\n if self._pixels_only:\n observation = collections.OrderedDict()\n elif self._observation_is_dict:\n observation = type(observation)(observation)\n else:\n observation = collections.OrderedDict()\n observation[STATE_KEY] = observation\n\n pixel_observations = {\n pixel_key: self.env.render(**self._render_kwargs[pixel_key])\n for pixel_key in self._pixel_keys\n }\n\n observation.update(pixel_observations)\n\n return observation\n", "path": "gym/wrappers/pixel_observation.py"}]} | 1,651 | 131 |
gh_patches_debug_908 | rasdani/github-patches | git_diff | mlflow__mlflow-9827 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC-FIX] Doc for Run.inputs erroneously refers to Run.data
### Willingness to contribute
No. I cannot contribute a documentation fix at this time.
### URL(s) with the issue
https://www.mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run
### Description of proposal (what needs changing)
In the Run doc page, the doc for Run.inputs refers to Run.data instead of Run.input.
property inputs
The run inputs, including dataset inputs
Return type
mlflow.entities.RunData
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/entities/run.py`
Content:
```
1 from typing import Any, Dict, Optional
2
3 from mlflow.entities._mlflow_object import _MLflowObject
4 from mlflow.entities.run_data import RunData
5 from mlflow.entities.run_info import RunInfo
6 from mlflow.entities.run_inputs import RunInputs
7 from mlflow.exceptions import MlflowException
8 from mlflow.protos.service_pb2 import Run as ProtoRun
9
10
11 class Run(_MLflowObject):
12 """
13 Run object.
14 """
15
16 def __init__(
17 self, run_info: RunInfo, run_data: RunData, run_inputs: Optional[RunInputs] = None
18 ) -> None:
19 if run_info is None:
20 raise MlflowException("run_info cannot be None")
21 self._info = run_info
22 self._data = run_data
23 self._inputs = run_inputs
24
25 @property
26 def info(self) -> RunInfo:
27 """
28 The run metadata, such as the run id, start time, and status.
29
30 :rtype: :py:class:`mlflow.entities.RunInfo`
31 """
32 return self._info
33
34 @property
35 def data(self) -> RunData:
36 """
37 The run data, including metrics, parameters, and tags.
38
39 :rtype: :py:class:`mlflow.entities.RunData`
40 """
41 return self._data
42
43 @property
44 def inputs(self) -> RunInputs:
45 """
46 The run inputs, including dataset inputs
47
48 :rtype: :py:class:`mlflow.entities.RunData`
49 """
50 return self._inputs
51
52 def to_proto(self):
53 run = ProtoRun()
54 run.info.MergeFrom(self.info.to_proto())
55 if self.data:
56 run.data.MergeFrom(self.data.to_proto())
57 if self.inputs:
58 run.inputs.MergeFrom(self.inputs.to_proto())
59 return run
60
61 @classmethod
62 def from_proto(cls, proto):
63 return cls(
64 RunInfo.from_proto(proto.info),
65 RunData.from_proto(proto.data),
66 RunInputs.from_proto(proto.inputs),
67 )
68
69 def to_dictionary(self) -> Dict[Any, Any]:
70 run_dict = {
71 "info": dict(self.info),
72 }
73 if self.data:
74 run_dict["data"] = self.data.to_dictionary()
75 if self.inputs:
76 run_dict["inputs"] = self.inputs.to_dictionary()
77 return run_dict
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mlflow/entities/run.py b/mlflow/entities/run.py
--- a/mlflow/entities/run.py
+++ b/mlflow/entities/run.py
@@ -45,7 +45,7 @@
"""
The run inputs, including dataset inputs
- :rtype: :py:class:`mlflow.entities.RunData`
+ :rtype: :py:class:`mlflow.entities.RunInputs`
"""
return self._inputs
| {"golden_diff": "diff --git a/mlflow/entities/run.py b/mlflow/entities/run.py\n--- a/mlflow/entities/run.py\n+++ b/mlflow/entities/run.py\n@@ -45,7 +45,7 @@\n \"\"\"\n The run inputs, including dataset inputs\n \n- :rtype: :py:class:`mlflow.entities.RunData`\n+ :rtype: :py:class:`mlflow.entities.RunInputs`\n \"\"\"\n return self._inputs\n", "issue": "[DOC-FIX] Doc for Run.inputs erroneously refers to Run.data\n### Willingness to contribute\n\nNo. I cannot contribute a documentation fix at this time.\n\n### URL(s) with the issue\n\nhttps://www.mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run\n\n### Description of proposal (what needs changing)\n\nIn the Run doc page, the doc for Run.inputs refers to Run.data instead of Run.input.\r\n\r\n\r\nproperty inputs\r\nThe run inputs, including dataset inputs\r\n\r\nReturn type\r\nmlflow.entities.RunData\r\n\r\n\n", "before_files": [{"content": "from typing import Any, Dict, Optional\n\nfrom mlflow.entities._mlflow_object import _MLflowObject\nfrom mlflow.entities.run_data import RunData\nfrom mlflow.entities.run_info import RunInfo\nfrom mlflow.entities.run_inputs import RunInputs\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.protos.service_pb2 import Run as ProtoRun\n\n\nclass Run(_MLflowObject):\n \"\"\"\n Run object.\n \"\"\"\n\n def __init__(\n self, run_info: RunInfo, run_data: RunData, run_inputs: Optional[RunInputs] = None\n ) -> None:\n if run_info is None:\n raise MlflowException(\"run_info cannot be None\")\n self._info = run_info\n self._data = run_data\n self._inputs = run_inputs\n\n @property\n def info(self) -> RunInfo:\n \"\"\"\n The run metadata, such as the run id, start time, and status.\n\n :rtype: :py:class:`mlflow.entities.RunInfo`\n \"\"\"\n return self._info\n\n @property\n def data(self) -> RunData:\n \"\"\"\n The run data, including metrics, parameters, and tags.\n\n :rtype: :py:class:`mlflow.entities.RunData`\n \"\"\"\n return self._data\n\n @property\n def inputs(self) -> RunInputs:\n \"\"\"\n The run inputs, including dataset inputs\n\n :rtype: :py:class:`mlflow.entities.RunData`\n \"\"\"\n return self._inputs\n\n def to_proto(self):\n run = ProtoRun()\n run.info.MergeFrom(self.info.to_proto())\n if self.data:\n run.data.MergeFrom(self.data.to_proto())\n if self.inputs:\n run.inputs.MergeFrom(self.inputs.to_proto())\n return run\n\n @classmethod\n def from_proto(cls, proto):\n return cls(\n RunInfo.from_proto(proto.info),\n RunData.from_proto(proto.data),\n RunInputs.from_proto(proto.inputs),\n )\n\n def to_dictionary(self) -> Dict[Any, Any]:\n run_dict = {\n \"info\": dict(self.info),\n }\n if self.data:\n run_dict[\"data\"] = self.data.to_dictionary()\n if self.inputs:\n run_dict[\"inputs\"] = self.inputs.to_dictionary()\n return run_dict\n", "path": "mlflow/entities/run.py"}], "after_files": [{"content": "from typing import Any, Dict, Optional\n\nfrom mlflow.entities._mlflow_object import _MLflowObject\nfrom mlflow.entities.run_data import RunData\nfrom mlflow.entities.run_info import RunInfo\nfrom mlflow.entities.run_inputs import RunInputs\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.protos.service_pb2 import Run as ProtoRun\n\n\nclass Run(_MLflowObject):\n \"\"\"\n Run object.\n \"\"\"\n\n def __init__(\n self, run_info: RunInfo, run_data: RunData, run_inputs: Optional[RunInputs] = None\n ) -> None:\n if run_info is None:\n raise MlflowException(\"run_info cannot be None\")\n self._info = run_info\n self._data = run_data\n self._inputs = run_inputs\n\n 
@property\n def info(self) -> RunInfo:\n \"\"\"\n The run metadata, such as the run id, start time, and status.\n\n :rtype: :py:class:`mlflow.entities.RunInfo`\n \"\"\"\n return self._info\n\n @property\n def data(self) -> RunData:\n \"\"\"\n The run data, including metrics, parameters, and tags.\n\n :rtype: :py:class:`mlflow.entities.RunData`\n \"\"\"\n return self._data\n\n @property\n def inputs(self) -> RunInputs:\n \"\"\"\n The run inputs, including dataset inputs\n\n :rtype: :py:class:`mlflow.entities.RunInputs`\n \"\"\"\n return self._inputs\n\n def to_proto(self):\n run = ProtoRun()\n run.info.MergeFrom(self.info.to_proto())\n if self.data:\n run.data.MergeFrom(self.data.to_proto())\n if self.inputs:\n run.inputs.MergeFrom(self.inputs.to_proto())\n return run\n\n @classmethod\n def from_proto(cls, proto):\n return cls(\n RunInfo.from_proto(proto.info),\n RunData.from_proto(proto.data),\n RunInputs.from_proto(proto.inputs),\n )\n\n def to_dictionary(self) -> Dict[Any, Any]:\n run_dict = {\n \"info\": dict(self.info),\n }\n if self.data:\n run_dict[\"data\"] = self.data.to_dictionary()\n if self.inputs:\n run_dict[\"inputs\"] = self.inputs.to_dictionary()\n return run_dict\n", "path": "mlflow/entities/run.py"}]} | 1,020 | 93 |
gh_patches_debug_23900 | rasdani/github-patches | git_diff | CTFd__CTFd-1823 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Submissions should link directly to the user that submitted
Submissions don't link directly to the user in team mode which means you need to search to see what user submitted for a given team.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/constants/config.py`
Content:
```
1 import json
2
3 from flask import url_for
4
5 from CTFd.constants import JinjaEnum, RawEnum
6 from CTFd.utils import get_config
7
8
9 class ConfigTypes(str, RawEnum):
10 CHALLENGE_VISIBILITY = "challenge_visibility"
11 SCORE_VISIBILITY = "score_visibility"
12 ACCOUNT_VISIBILITY = "account_visibility"
13 REGISTRATION_VISIBILITY = "registration_visibility"
14
15
16 @JinjaEnum
17 class ChallengeVisibilityTypes(str, RawEnum):
18 PUBLIC = "public"
19 PRIVATE = "private"
20 ADMINS = "admins"
21
22
23 @JinjaEnum
24 class ScoreVisibilityTypes(str, RawEnum):
25 PUBLIC = "public"
26 PRIVATE = "private"
27 HIDDEN = "hidden"
28 ADMINS = "admins"
29
30
31 @JinjaEnum
32 class AccountVisibilityTypes(str, RawEnum):
33 PUBLIC = "public"
34 PRIVATE = "private"
35 ADMINS = "admins"
36
37
38 @JinjaEnum
39 class RegistrationVisibilityTypes(str, RawEnum):
40 PUBLIC = "public"
41 PRIVATE = "private"
42
43
44 class _ConfigsWrapper:
45 def __getattr__(self, attr):
46 return get_config(attr)
47
48 @property
49 def ctf_name(self):
50 return get_config("ctf_name", default="CTFd")
51
52 @property
53 def ctf_small_icon(self):
54 icon = get_config("ctf_small_icon")
55 if icon:
56 return url_for("views.files", path=icon)
57 return url_for("views.themes", path="img/favicon.ico")
58
59 @property
60 def theme_header(self):
61 from CTFd.utils.helpers import markup
62
63 return markup(get_config("theme_header", default=""))
64
65 @property
66 def theme_footer(self):
67 from CTFd.utils.helpers import markup
68
69 return markup(get_config("theme_footer", default=""))
70
71 @property
72 def theme_settings(self):
73 return json.loads(get_config("theme_settings", default="null"))
74
75 @property
76 def tos_or_privacy(self):
77 tos = bool(get_config("tos_url") or get_config("tos_text"))
78 privacy = bool(get_config("privacy_url") or get_config("privacy_text"))
79 return tos or privacy
80
81 @property
82 def tos_link(self):
83 return get_config("tos_url", default=url_for("views.tos"))
84
85 @property
86 def privacy_link(self):
87 return get_config("privacy_url", default=url_for("views.privacy"))
88
89
90 Configs = _ConfigsWrapper()
91
```
Path: `CTFd/utils/modes/__init__.py`
Content:
```
1 from flask import url_for
2
3 from CTFd.models import Teams, Users
4 from CTFd.utils import get_config
5
6 USERS_MODE = "users"
7 TEAMS_MODE = "teams"
8
9
10 def generate_account_url(account_id, admin=False):
11 if get_config("user_mode") == USERS_MODE:
12 if admin:
13 return url_for("admin.users_detail", user_id=account_id)
14 else:
15 return url_for("users.public", user_id=account_id)
16 elif get_config("user_mode") == TEAMS_MODE:
17 if admin:
18 return url_for("admin.teams_detail", team_id=account_id)
19 else:
20 return url_for("teams.public", team_id=account_id)
21
22
23 def get_model():
24 if get_config("user_mode") == USERS_MODE:
25 return Users
26 elif get_config("user_mode") == TEAMS_MODE:
27 return Teams
28
29
30 def get_mode_as_word(plural=False, capitalize=False):
31 if get_config("user_mode") == USERS_MODE:
32 word = "user"
33 else:
34 word = "team"
35
36 if plural:
37 word += "s"
38 if capitalize:
39 word = word.title()
40 return word
41
```
Path: `CTFd/admin/submissions.py`
Content:
```
1 from flask import render_template, request, url_for
2
3 from CTFd.admin import admin
4 from CTFd.models import Challenges, Submissions
5 from CTFd.utils.decorators import admins_only
6 from CTFd.utils.helpers.models import build_model_filters
7 from CTFd.utils.modes import get_model
8
9
10 @admin.route("/admin/submissions", defaults={"submission_type": None})
11 @admin.route("/admin/submissions/<submission_type>")
12 @admins_only
13 def submissions_listing(submission_type):
14 filters_by = {}
15 if submission_type:
16 filters_by["type"] = submission_type
17 filters = []
18
19 q = request.args.get("q")
20 field = request.args.get("field")
21 page = abs(request.args.get("page", 1, type=int))
22
23 filters = build_model_filters(
24 model=Submissions,
25 query=q,
26 field=field,
27 extra_columns={
28 "challenge_name": Challenges.name,
29 "account_id": Submissions.account_id,
30 },
31 )
32
33 Model = get_model()
34
35 submissions = (
36 Submissions.query.add_columns(
37 Submissions.id,
38 Submissions.type,
39 Submissions.challenge_id,
40 Submissions.provided,
41 Submissions.account_id,
42 Submissions.date,
43 Challenges.name.label("challenge_name"),
44 Model.name.label("account_name"),
45 )
46 .filter_by(**filters_by)
47 .filter(*filters)
48 .join(Challenges)
49 .join(Model)
50 .order_by(Submissions.date.desc())
51 .paginate(page=page, per_page=50)
52 )
53
54 args = dict(request.args)
55 args.pop("page", 1)
56
57 return render_template(
58 "admin/submissions.html",
59 submissions=submissions,
60 prev_page=url_for(
61 request.endpoint,
62 submission_type=submission_type,
63 page=submissions.prev_num,
64 **args
65 ),
66 next_page=url_for(
67 request.endpoint,
68 submission_type=submission_type,
69 page=submissions.next_num,
70 **args
71 ),
72 type=submission_type,
73 q=q,
74 field=field,
75 )
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/admin/submissions.py b/CTFd/admin/submissions.py
--- a/CTFd/admin/submissions.py
+++ b/CTFd/admin/submissions.py
@@ -33,17 +33,7 @@
Model = get_model()
submissions = (
- Submissions.query.add_columns(
- Submissions.id,
- Submissions.type,
- Submissions.challenge_id,
- Submissions.provided,
- Submissions.account_id,
- Submissions.date,
- Challenges.name.label("challenge_name"),
- Model.name.label("account_name"),
- )
- .filter_by(**filters_by)
+ Submissions.query.filter_by(**filters_by)
.filter(*filters)
.join(Challenges)
.join(Model)
diff --git a/CTFd/constants/config.py b/CTFd/constants/config.py
--- a/CTFd/constants/config.py
+++ b/CTFd/constants/config.py
@@ -13,6 +13,12 @@
REGISTRATION_VISIBILITY = "registration_visibility"
+@JinjaEnum
+class UserModeTypes(str, RawEnum):
+ USERS = "users"
+ TEAMS = "teams"
+
+
@JinjaEnum
class ChallengeVisibilityTypes(str, RawEnum):
PUBLIC = "public"
diff --git a/CTFd/utils/modes/__init__.py b/CTFd/utils/modes/__init__.py
--- a/CTFd/utils/modes/__init__.py
+++ b/CTFd/utils/modes/__init__.py
@@ -3,6 +3,7 @@
from CTFd.models import Teams, Users
from CTFd.utils import get_config
+# TODO: Replace these constants with the UserModeTypes enum
USERS_MODE = "users"
TEAMS_MODE = "teams"
| {"golden_diff": "diff --git a/CTFd/admin/submissions.py b/CTFd/admin/submissions.py\n--- a/CTFd/admin/submissions.py\n+++ b/CTFd/admin/submissions.py\n@@ -33,17 +33,7 @@\n Model = get_model()\n \n submissions = (\n- Submissions.query.add_columns(\n- Submissions.id,\n- Submissions.type,\n- Submissions.challenge_id,\n- Submissions.provided,\n- Submissions.account_id,\n- Submissions.date,\n- Challenges.name.label(\"challenge_name\"),\n- Model.name.label(\"account_name\"),\n- )\n- .filter_by(**filters_by)\n+ Submissions.query.filter_by(**filters_by)\n .filter(*filters)\n .join(Challenges)\n .join(Model)\ndiff --git a/CTFd/constants/config.py b/CTFd/constants/config.py\n--- a/CTFd/constants/config.py\n+++ b/CTFd/constants/config.py\n@@ -13,6 +13,12 @@\n REGISTRATION_VISIBILITY = \"registration_visibility\"\n \n \n+@JinjaEnum\n+class UserModeTypes(str, RawEnum):\n+ USERS = \"users\"\n+ TEAMS = \"teams\"\n+\n+\n @JinjaEnum\n class ChallengeVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\ndiff --git a/CTFd/utils/modes/__init__.py b/CTFd/utils/modes/__init__.py\n--- a/CTFd/utils/modes/__init__.py\n+++ b/CTFd/utils/modes/__init__.py\n@@ -3,6 +3,7 @@\n from CTFd.models import Teams, Users\n from CTFd.utils import get_config\n \n+# TODO: Replace these constants with the UserModeTypes enum\n USERS_MODE = \"users\"\n TEAMS_MODE = \"teams\"\n", "issue": "Submissions should link directly to the user that submitted\nSubmissions don't link directly to the user in team mode which means you need to search to see what user submitted for a given team.\r\n\r\n\n", "before_files": [{"content": "import json\n\nfrom flask import url_for\n\nfrom CTFd.constants import JinjaEnum, RawEnum\nfrom CTFd.utils import get_config\n\n\nclass ConfigTypes(str, RawEnum):\n CHALLENGE_VISIBILITY = \"challenge_visibility\"\n SCORE_VISIBILITY = \"score_visibility\"\n ACCOUNT_VISIBILITY = \"account_visibility\"\n REGISTRATION_VISIBILITY = \"registration_visibility\"\n\n\n@JinjaEnum\nclass ChallengeVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass ScoreVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n HIDDEN = \"hidden\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass AccountVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass RegistrationVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n\n\nclass _ConfigsWrapper:\n def __getattr__(self, attr):\n return get_config(attr)\n\n @property\n def ctf_name(self):\n return get_config(\"ctf_name\", default=\"CTFd\")\n\n @property\n def ctf_small_icon(self):\n icon = get_config(\"ctf_small_icon\")\n if icon:\n return url_for(\"views.files\", path=icon)\n return url_for(\"views.themes\", path=\"img/favicon.ico\")\n\n @property\n def theme_header(self):\n from CTFd.utils.helpers import markup\n\n return markup(get_config(\"theme_header\", default=\"\"))\n\n @property\n def theme_footer(self):\n from CTFd.utils.helpers import markup\n\n return markup(get_config(\"theme_footer\", default=\"\"))\n\n @property\n def theme_settings(self):\n return json.loads(get_config(\"theme_settings\", default=\"null\"))\n\n @property\n def tos_or_privacy(self):\n tos = bool(get_config(\"tos_url\") or get_config(\"tos_text\"))\n privacy = bool(get_config(\"privacy_url\") or get_config(\"privacy_text\"))\n return tos or privacy\n\n @property\n def tos_link(self):\n return get_config(\"tos_url\", 
default=url_for(\"views.tos\"))\n\n @property\n def privacy_link(self):\n return get_config(\"privacy_url\", default=url_for(\"views.privacy\"))\n\n\nConfigs = _ConfigsWrapper()\n", "path": "CTFd/constants/config.py"}, {"content": "from flask import url_for\n\nfrom CTFd.models import Teams, Users\nfrom CTFd.utils import get_config\n\nUSERS_MODE = \"users\"\nTEAMS_MODE = \"teams\"\n\n\ndef generate_account_url(account_id, admin=False):\n if get_config(\"user_mode\") == USERS_MODE:\n if admin:\n return url_for(\"admin.users_detail\", user_id=account_id)\n else:\n return url_for(\"users.public\", user_id=account_id)\n elif get_config(\"user_mode\") == TEAMS_MODE:\n if admin:\n return url_for(\"admin.teams_detail\", team_id=account_id)\n else:\n return url_for(\"teams.public\", team_id=account_id)\n\n\ndef get_model():\n if get_config(\"user_mode\") == USERS_MODE:\n return Users\n elif get_config(\"user_mode\") == TEAMS_MODE:\n return Teams\n\n\ndef get_mode_as_word(plural=False, capitalize=False):\n if get_config(\"user_mode\") == USERS_MODE:\n word = \"user\"\n else:\n word = \"team\"\n\n if plural:\n word += \"s\"\n if capitalize:\n word = word.title()\n return word\n", "path": "CTFd/utils/modes/__init__.py"}, {"content": "from flask import render_template, request, url_for\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Challenges, Submissions\nfrom CTFd.utils.decorators import admins_only\nfrom CTFd.utils.helpers.models import build_model_filters\nfrom CTFd.utils.modes import get_model\n\n\[email protected](\"/admin/submissions\", defaults={\"submission_type\": None})\[email protected](\"/admin/submissions/<submission_type>\")\n@admins_only\ndef submissions_listing(submission_type):\n filters_by = {}\n if submission_type:\n filters_by[\"type\"] = submission_type\n filters = []\n\n q = request.args.get(\"q\")\n field = request.args.get(\"field\")\n page = abs(request.args.get(\"page\", 1, type=int))\n\n filters = build_model_filters(\n model=Submissions,\n query=q,\n field=field,\n extra_columns={\n \"challenge_name\": Challenges.name,\n \"account_id\": Submissions.account_id,\n },\n )\n\n Model = get_model()\n\n submissions = (\n Submissions.query.add_columns(\n Submissions.id,\n Submissions.type,\n Submissions.challenge_id,\n Submissions.provided,\n Submissions.account_id,\n Submissions.date,\n Challenges.name.label(\"challenge_name\"),\n Model.name.label(\"account_name\"),\n )\n .filter_by(**filters_by)\n .filter(*filters)\n .join(Challenges)\n .join(Model)\n .order_by(Submissions.date.desc())\n .paginate(page=page, per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"admin/submissions.html\",\n submissions=submissions,\n prev_page=url_for(\n request.endpoint,\n submission_type=submission_type,\n page=submissions.prev_num,\n **args\n ),\n next_page=url_for(\n request.endpoint,\n submission_type=submission_type,\n page=submissions.next_num,\n **args\n ),\n type=submission_type,\n q=q,\n field=field,\n )\n", "path": "CTFd/admin/submissions.py"}], "after_files": [{"content": "import json\n\nfrom flask import url_for\n\nfrom CTFd.constants import JinjaEnum, RawEnum\nfrom CTFd.utils import get_config\n\n\nclass ConfigTypes(str, RawEnum):\n CHALLENGE_VISIBILITY = \"challenge_visibility\"\n SCORE_VISIBILITY = \"score_visibility\"\n ACCOUNT_VISIBILITY = \"account_visibility\"\n REGISTRATION_VISIBILITY = \"registration_visibility\"\n\n\n@JinjaEnum\nclass UserModeTypes(str, RawEnum):\n USERS = \"users\"\n TEAMS = 
\"teams\"\n\n\n@JinjaEnum\nclass ChallengeVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass ScoreVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n HIDDEN = \"hidden\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass AccountVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n ADMINS = \"admins\"\n\n\n@JinjaEnum\nclass RegistrationVisibilityTypes(str, RawEnum):\n PUBLIC = \"public\"\n PRIVATE = \"private\"\n\n\nclass _ConfigsWrapper:\n def __getattr__(self, attr):\n return get_config(attr)\n\n @property\n def ctf_name(self):\n return get_config(\"ctf_name\", default=\"CTFd\")\n\n @property\n def ctf_small_icon(self):\n icon = get_config(\"ctf_small_icon\")\n if icon:\n return url_for(\"views.files\", path=icon)\n return url_for(\"views.themes\", path=\"img/favicon.ico\")\n\n @property\n def theme_header(self):\n from CTFd.utils.helpers import markup\n\n return markup(get_config(\"theme_header\", default=\"\"))\n\n @property\n def theme_footer(self):\n from CTFd.utils.helpers import markup\n\n return markup(get_config(\"theme_footer\", default=\"\"))\n\n @property\n def theme_settings(self):\n return json.loads(get_config(\"theme_settings\", default=\"null\"))\n\n @property\n def tos_or_privacy(self):\n tos = bool(get_config(\"tos_url\") or get_config(\"tos_text\"))\n privacy = bool(get_config(\"privacy_url\") or get_config(\"privacy_text\"))\n return tos or privacy\n\n @property\n def tos_link(self):\n return get_config(\"tos_url\", default=url_for(\"views.tos\"))\n\n @property\n def privacy_link(self):\n return get_config(\"privacy_url\", default=url_for(\"views.privacy\"))\n\n\nConfigs = _ConfigsWrapper()\n", "path": "CTFd/constants/config.py"}, {"content": "from flask import url_for\n\nfrom CTFd.models import Teams, Users\nfrom CTFd.utils import get_config\n\n# TODO: Replace these constants with the UserModeTypes enum\nUSERS_MODE = \"users\"\nTEAMS_MODE = \"teams\"\n\n\ndef generate_account_url(account_id, admin=False):\n if get_config(\"user_mode\") == USERS_MODE:\n if admin:\n return url_for(\"admin.users_detail\", user_id=account_id)\n else:\n return url_for(\"users.public\", user_id=account_id)\n elif get_config(\"user_mode\") == TEAMS_MODE:\n if admin:\n return url_for(\"admin.teams_detail\", team_id=account_id)\n else:\n return url_for(\"teams.public\", team_id=account_id)\n\n\ndef get_model():\n if get_config(\"user_mode\") == USERS_MODE:\n return Users\n elif get_config(\"user_mode\") == TEAMS_MODE:\n return Teams\n\n\ndef get_mode_as_word(plural=False, capitalize=False):\n if get_config(\"user_mode\") == USERS_MODE:\n word = \"user\"\n else:\n word = \"team\"\n\n if plural:\n word += \"s\"\n if capitalize:\n word = word.title()\n return word\n", "path": "CTFd/utils/modes/__init__.py"}, {"content": "from flask import render_template, request, url_for\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Challenges, Submissions\nfrom CTFd.utils.decorators import admins_only\nfrom CTFd.utils.helpers.models import build_model_filters\nfrom CTFd.utils.modes import get_model\n\n\[email protected](\"/admin/submissions\", defaults={\"submission_type\": None})\[email protected](\"/admin/submissions/<submission_type>\")\n@admins_only\ndef submissions_listing(submission_type):\n filters_by = {}\n if submission_type:\n filters_by[\"type\"] = submission_type\n filters = []\n\n q = request.args.get(\"q\")\n field = request.args.get(\"field\")\n page = 
abs(request.args.get(\"page\", 1, type=int))\n\n filters = build_model_filters(\n model=Submissions,\n query=q,\n field=field,\n extra_columns={\n \"challenge_name\": Challenges.name,\n \"account_id\": Submissions.account_id,\n },\n )\n\n Model = get_model()\n\n submissions = (\n Submissions.query.filter_by(**filters_by)\n .filter(*filters)\n .join(Challenges)\n .join(Model)\n .order_by(Submissions.date.desc())\n .paginate(page=page, per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"admin/submissions.html\",\n submissions=submissions,\n prev_page=url_for(\n request.endpoint,\n submission_type=submission_type,\n page=submissions.prev_num,\n **args\n ),\n next_page=url_for(\n request.endpoint,\n submission_type=submission_type,\n page=submissions.next_num,\n **args\n ),\n type=submission_type,\n q=q,\n field=field,\n )\n", "path": "CTFd/admin/submissions.py"}]} | 1,963 | 391 |
gh_patches_debug_12551 | rasdani/github-patches | git_diff | quantumlib__Cirq-5211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cirq-web doesn't support LineQubit
```python
import cirq
import cirq_web
cirq_circuit = cirq.Circuit(cirq.H(cirq.LineQubit(0)))
cirq_web.Circuit3D(cirq_circuit).generate_html_file(
file_name="circuit_viewer.html",
open_in_browser=True,
)
```
results in
```
AttributeError: 'LineQubit' object has no attribute 'row'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq-web/cirq_web/circuits/circuit.py`
Content:
```
1 # Copyright 2021 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Iterable
15 import cirq
16 from cirq_web import widget
17 from cirq_web.circuits.symbols import (
18 Operation3DSymbol,
19 SymbolResolver,
20 resolve_operation,
21 DEFAULT_SYMBOL_RESOLVERS,
22 )
23
24
25 class Circuit3D(widget.Widget):
26 """Takes cirq.Circuit objects and displays them in 3D."""
27
28 def __init__(
29 self,
30 circuit: cirq.Circuit,
31 resolvers: Iterable[SymbolResolver] = DEFAULT_SYMBOL_RESOLVERS,
32 padding_factor: float = 1,
33 ):
34 """Initializes a Circuit instance.
35
36 Args:
37 circuit: The `cirq.Circuit` to be represented in 3D.
38 resolvers: The symbol resolve for how to show symbols in 3D.
39 padding_factor: The distance between meshes.
40 """
41 super().__init__()
42 self.circuit = circuit
43 self._resolvers = resolvers
44 self.padding_factor = padding_factor
45
46 def get_client_code(self) -> str:
47 # Remove hyphens from the id so that we can use
48 # it as the variable name in TS.
49 # It's important that we assign the circuit to a variable
50 # for animation purposes. Alternatively, there may be ways
51 # to select/manipulate elements on the screen from three.js
52 stripped_id = self.id.replace('-', '')
53 moments = len(self.circuit.moments)
54 self.serialized_circuit = self._serialize_circuit()
55
56 return f"""
57 <button id="camera-reset">Reset Camera</button>
58 <button id="camera-toggle">Toggle Camera Type</button>
59 <script>
60 let viz_{stripped_id} = createGridCircuit({self.serialized_circuit}, {moments}, "{self.id}", {self.padding_factor});
61
62 document.getElementById("camera-reset").addEventListener('click', () => {{
63 viz_{stripped_id}.scene.setCameraAndControls(viz_{stripped_id}.circuit);
64 }});
65
66 document.getElementById("camera-toggle").addEventListener('click', () => {{
67 viz_{stripped_id}.scene.toggleCamera(viz_{stripped_id}.circuit);
68 }});
69 </script>
70 """
71
72 def get_widget_bundle_name(self) -> str:
73 return 'circuit.bundle.js'
74
75 def _serialize_circuit(self) -> str:
76 args = []
77 moments = self.circuit.moments
78 for moment_id, moment in enumerate(moments):
79 for item in moment:
80 symbol = self._build_3D_symbol(item, moment_id)
81 args.append(symbol.to_typescript())
82
83 argument_str = ','.join(str(item) for item in args)
84 return f'[{argument_str}]'
85
86 def _build_3D_symbol(self, operation, moment) -> Operation3DSymbol:
87 symbol_info = resolve_operation(operation, self._resolvers)
88 location_info = []
89 for qubit in operation.qubits:
90 location_info.append({'row': qubit.row, 'col': qubit.col})
91 return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cirq-web/cirq_web/circuits/circuit.py b/cirq-web/cirq_web/circuits/circuit.py
--- a/cirq-web/cirq_web/circuits/circuit.py
+++ b/cirq-web/cirq_web/circuits/circuit.py
@@ -87,5 +87,10 @@
symbol_info = resolve_operation(operation, self._resolvers)
location_info = []
for qubit in operation.qubits:
- location_info.append({'row': qubit.row, 'col': qubit.col})
+ if isinstance(qubit, cirq.GridQubit):
+ location_info.append({'row': qubit.row, 'col': qubit.col})
+ elif isinstance(qubit, cirq.LineQubit):
+ location_info.append({'row': qubit.x, 'col': 0})
+ else:
+ raise ValueError('Unsupported qubit type')
return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)
| {"golden_diff": "diff --git a/cirq-web/cirq_web/circuits/circuit.py b/cirq-web/cirq_web/circuits/circuit.py\n--- a/cirq-web/cirq_web/circuits/circuit.py\n+++ b/cirq-web/cirq_web/circuits/circuit.py\n@@ -87,5 +87,10 @@\n symbol_info = resolve_operation(operation, self._resolvers)\n location_info = []\n for qubit in operation.qubits:\n- location_info.append({'row': qubit.row, 'col': qubit.col})\n+ if isinstance(qubit, cirq.GridQubit):\n+ location_info.append({'row': qubit.row, 'col': qubit.col})\n+ elif isinstance(qubit, cirq.LineQubit):\n+ location_info.append({'row': qubit.x, 'col': 0})\n+ else:\n+ raise ValueError('Unsupported qubit type')\n return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)\n", "issue": "cirq-web doesn't support LineQubit\n```python\r\nimport cirq\r\nimport cirq_web\r\n\r\ncirq_circuit = cirq.Circuit(cirq.H(cirq.LineQubit(0)))\r\ncirq_web.Circuit3D(cirq_circuit).generate_html_file(\r\n file_name=\"circuit_viewer.html\",\r\n open_in_browser=True,\r\n)\r\n```\r\n\r\nresults in\r\n\r\n```\r\nAttributeError: 'LineQubit' object has no attribute 'row'\r\n```\n", "before_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Iterable\nimport cirq\nfrom cirq_web import widget\nfrom cirq_web.circuits.symbols import (\n Operation3DSymbol,\n SymbolResolver,\n resolve_operation,\n DEFAULT_SYMBOL_RESOLVERS,\n)\n\n\nclass Circuit3D(widget.Widget):\n \"\"\"Takes cirq.Circuit objects and displays them in 3D.\"\"\"\n\n def __init__(\n self,\n circuit: cirq.Circuit,\n resolvers: Iterable[SymbolResolver] = DEFAULT_SYMBOL_RESOLVERS,\n padding_factor: float = 1,\n ):\n \"\"\"Initializes a Circuit instance.\n\n Args:\n circuit: The `cirq.Circuit` to be represented in 3D.\n resolvers: The symbol resolve for how to show symbols in 3D.\n padding_factor: The distance between meshes.\n \"\"\"\n super().__init__()\n self.circuit = circuit\n self._resolvers = resolvers\n self.padding_factor = padding_factor\n\n def get_client_code(self) -> str:\n # Remove hyphens from the id so that we can use\n # it as the variable name in TS.\n # It's important that we assign the circuit to a variable\n # for animation purposes. 
Alternatively, there may be ways\n # to select/manipulate elements on the screen from three.js\n stripped_id = self.id.replace('-', '')\n moments = len(self.circuit.moments)\n self.serialized_circuit = self._serialize_circuit()\n\n return f\"\"\"\n <button id=\"camera-reset\">Reset Camera</button>\n <button id=\"camera-toggle\">Toggle Camera Type</button>\n <script>\n let viz_{stripped_id} = createGridCircuit({self.serialized_circuit}, {moments}, \"{self.id}\", {self.padding_factor});\n\n document.getElementById(\"camera-reset\").addEventListener('click', () => {{\n viz_{stripped_id}.scene.setCameraAndControls(viz_{stripped_id}.circuit);\n }});\n\n document.getElementById(\"camera-toggle\").addEventListener('click', () => {{\n viz_{stripped_id}.scene.toggleCamera(viz_{stripped_id}.circuit);\n }});\n </script>\n \"\"\"\n\n def get_widget_bundle_name(self) -> str:\n return 'circuit.bundle.js'\n\n def _serialize_circuit(self) -> str:\n args = []\n moments = self.circuit.moments\n for moment_id, moment in enumerate(moments):\n for item in moment:\n symbol = self._build_3D_symbol(item, moment_id)\n args.append(symbol.to_typescript())\n\n argument_str = ','.join(str(item) for item in args)\n return f'[{argument_str}]'\n\n def _build_3D_symbol(self, operation, moment) -> Operation3DSymbol:\n symbol_info = resolve_operation(operation, self._resolvers)\n location_info = []\n for qubit in operation.qubits:\n location_info.append({'row': qubit.row, 'col': qubit.col})\n return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)\n", "path": "cirq-web/cirq_web/circuits/circuit.py"}], "after_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Iterable\nimport cirq\nfrom cirq_web import widget\nfrom cirq_web.circuits.symbols import (\n Operation3DSymbol,\n SymbolResolver,\n resolve_operation,\n DEFAULT_SYMBOL_RESOLVERS,\n)\n\n\nclass Circuit3D(widget.Widget):\n \"\"\"Takes cirq.Circuit objects and displays them in 3D.\"\"\"\n\n def __init__(\n self,\n circuit: cirq.Circuit,\n resolvers: Iterable[SymbolResolver] = DEFAULT_SYMBOL_RESOLVERS,\n padding_factor: float = 1,\n ):\n \"\"\"Initializes a Circuit instance.\n\n Args:\n circuit: The `cirq.Circuit` to be represented in 3D.\n resolvers: The symbol resolve for how to show symbols in 3D.\n padding_factor: The distance between meshes.\n \"\"\"\n super().__init__()\n self.circuit = circuit\n self._resolvers = resolvers\n self.padding_factor = padding_factor\n\n def get_client_code(self) -> str:\n # Remove hyphens from the id so that we can use\n # it as the variable name in TS.\n # It's important that we assign the circuit to a variable\n # for animation purposes. 
Alternatively, there may be ways\n # to select/manipulate elements on the screen from three.js\n stripped_id = self.id.replace('-', '')\n moments = len(self.circuit.moments)\n self.serialized_circuit = self._serialize_circuit()\n\n return f\"\"\"\n <button id=\"camera-reset\">Reset Camera</button>\n <button id=\"camera-toggle\">Toggle Camera Type</button>\n <script>\n let viz_{stripped_id} = createGridCircuit({self.serialized_circuit}, {moments}, \"{self.id}\", {self.padding_factor});\n\n document.getElementById(\"camera-reset\").addEventListener('click', () => {{\n viz_{stripped_id}.scene.setCameraAndControls(viz_{stripped_id}.circuit);\n }});\n\n document.getElementById(\"camera-toggle\").addEventListener('click', () => {{\n viz_{stripped_id}.scene.toggleCamera(viz_{stripped_id}.circuit);\n }});\n </script>\n \"\"\"\n\n def get_widget_bundle_name(self) -> str:\n return 'circuit.bundle.js'\n\n def _serialize_circuit(self) -> str:\n args = []\n moments = self.circuit.moments\n for moment_id, moment in enumerate(moments):\n for item in moment:\n symbol = self._build_3D_symbol(item, moment_id)\n args.append(symbol.to_typescript())\n\n argument_str = ','.join(str(item) for item in args)\n return f'[{argument_str}]'\n\n def _build_3D_symbol(self, operation, moment) -> Operation3DSymbol:\n symbol_info = resolve_operation(operation, self._resolvers)\n location_info = []\n for qubit in operation.qubits:\n if isinstance(qubit, cirq.GridQubit):\n location_info.append({'row': qubit.row, 'col': qubit.col})\n elif isinstance(qubit, cirq.LineQubit):\n location_info.append({'row': qubit.x, 'col': 0})\n else:\n raise ValueError('Unsupported qubit type')\n return Operation3DSymbol(symbol_info.labels, location_info, symbol_info.colors, moment)\n", "path": "cirq-web/cirq_web/circuits/circuit.py"}]} | 1,339 | 215 |
gh_patches_debug_16494 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-1083 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add exception handling to docs
Hi :)
I was reading the [docs](https://github.com/googleapis/google-api-python-client/blob/master/docs/start.md) looking for an example to handle exceptions from when request.execute() goes wrong e.g. a 403 due to
exceeding quota limits.
I would like for the docs to be updated with a try: and except: like this
``` python
try:
response = request.execute()
except HttpError as e:
logger.error('Error response status code %d, reason %s:', e.resp.status, e.content)
    return {'error': 403, 'body': 'YouTube API Data v3 quota limit exceeded'}
```
or something else in the `except` block
If you're happy with this I'd like to contribute this as a first timer to open source?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `googleapiclient/errors.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Errors for the library.
16
17 All exceptions defined by the library
18 should be defined in this file.
19 """
20 from __future__ import absolute_import
21
22 __author__ = "[email protected] (Joe Gregorio)"
23
24 import json
25
26 from googleapiclient import _helpers as util
27
28
29 class Error(Exception):
30 """Base error for this module."""
31
32 pass
33
34
35 class HttpError(Error):
36 """HTTP data was invalid or unexpected."""
37
38 @util.positional(3)
39 def __init__(self, resp, content, uri=None):
40 self.resp = resp
41 if not isinstance(content, bytes):
42 raise TypeError("HTTP content should be bytes")
43 self.content = content
44 self.uri = uri
45 self.error_details = ""
46
47 def _get_reason(self):
48 """Calculate the reason for the error from the response content."""
49 reason = self.resp.reason
50 try:
51 data = json.loads(self.content.decode("utf-8"))
52 if isinstance(data, dict):
53 reason = data["error"]["message"]
54 if "details" in data["error"]:
55 self.error_details = data["error"]["details"]
56 elif "detail" in data["error"]:
57 self.error_details = data["error"]["detail"]
58 elif isinstance(data, list) and len(data) > 0:
59 first_error = data[0]
60 reason = first_error["error"]["message"]
61 if "details" in first_error["error"]:
62 self.error_details = first_error["error"]["details"]
63 except (ValueError, KeyError, TypeError):
64 pass
65 if reason is None:
66 reason = ""
67 return reason
68
69 def __repr__(self):
70 reason = self._get_reason()
71 if self.error_details:
72 return '<HttpError %s when requesting %s returned "%s". Details: "%s">' % (
73 self.resp.status,
74 self.uri,
75 reason.strip(),
76 self.error_details,
77 )
78 elif self.uri:
79 return '<HttpError %s when requesting %s returned "%s">' % (
80 self.resp.status,
81 self.uri,
82 self._get_reason().strip(),
83 )
84 else:
85 return '<HttpError %s "%s">' % (self.resp.status, self._get_reason())
86
87 __str__ = __repr__
88
89
90 class InvalidJsonError(Error):
91 """The JSON returned could not be parsed."""
92
93 pass
94
95
96 class UnknownFileType(Error):
97 """File type unknown or unexpected."""
98
99 pass
100
101
102 class UnknownLinkType(Error):
103 """Link type unknown or unexpected."""
104
105 pass
106
107
108 class UnknownApiNameOrVersion(Error):
109 """No API with that name and version exists."""
110
111 pass
112
113
114 class UnacceptableMimeTypeError(Error):
115 """That is an unacceptable mimetype for this operation."""
116
117 pass
118
119
120 class MediaUploadSizeError(Error):
121 """Media is larger than the method can accept."""
122
123 pass
124
125
126 class ResumableUploadError(HttpError):
127 """Error occurred during resumable upload."""
128
129 pass
130
131
132 class InvalidChunkSizeError(Error):
133 """The given chunksize is not valid."""
134
135 pass
136
137
138 class InvalidNotificationError(Error):
139 """The channel Notification is invalid."""
140
141 pass
142
143
144 class BatchError(HttpError):
145 """Error occurred during batch operations."""
146
147 @util.positional(2)
148 def __init__(self, reason, resp=None, content=None):
149 self.resp = resp
150 self.content = content
151 self.reason = reason
152
153 def __repr__(self):
154 if getattr(self.resp, "status", None) is None:
155 return '<BatchError "%s">' % (self.reason)
156 else:
157 return '<BatchError %s "%s">' % (self.resp.status, self.reason)
158
159 __str__ = __repr__
160
161
162 class UnexpectedMethodError(Error):
163 """Exception raised by RequestMockBuilder on unexpected calls."""
164
165 @util.positional(1)
166 def __init__(self, methodId=None):
167 """Constructor for an UnexpectedMethodError."""
168 super(UnexpectedMethodError, self).__init__(
169 "Received unexpected call %s" % methodId
170 )
171
172
173 class UnexpectedBodyError(Error):
174 """Exception raised by RequestMockBuilder on unexpected bodies."""
175
176 def __init__(self, expected, provided):
177 """Constructor for an UnexpectedMethodError."""
178 super(UnexpectedBodyError, self).__init__(
179 "Expected: [%s] - Provided: [%s]" % (expected, provided)
180 )
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py
--- a/googleapiclient/errors.py
+++ b/googleapiclient/errors.py
@@ -51,10 +51,9 @@
data = json.loads(self.content.decode("utf-8"))
if isinstance(data, dict):
reason = data["error"]["message"]
- if "details" in data["error"]:
- self.error_details = data["error"]["details"]
- elif "detail" in data["error"]:
- self.error_details = data["error"]["detail"]
+ error_detail_keyword = next((kw for kw in ["detail", "details", "message"] if kw in data["error"]), "")
+ if error_detail_keyword:
+ self.error_details = data["error"][error_detail_keyword]
elif isinstance(data, list) and len(data) > 0:
first_error = data[0]
reason = first_error["error"]["message"]
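For context, the patched `_get_reason` now scans the error object for the first key present in a fixed priority order (`detail`, then `details`, then `message`) instead of only checking `details`/`detail`, so `error_details` also gets filled when the payload carries a `message` but no detail fields. A minimal sketch of that lookup on made-up payloads (the payload shapes below are illustrative, not taken from a real API response):

```python
# Hypothetical error payloads covering the shapes the patched lookup handles.
payloads = [
    {"error": {"message": "quotaExceeded", "details": [{"reason": "rateLimitExceeded"}]}},
    {"error": {"message": "forbidden", "detail": "Daily limit exceeded"}},
    {"error": {"message": "backendError"}},  # neither "detail" nor "details" present
]

for data in payloads:
    # Same priority order as the patch: "detail" wins over "details" over "message".
    keyword = next((kw for kw in ["detail", "details", "message"] if kw in data["error"]), "")
    error_details = data["error"][keyword] if keyword else ""
    print(keyword, "->", error_details)
```

Callers would still wrap `request.execute()` in `try/except HttpError` as the issue suggests; this change only makes the raised error carry more useful details.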
| {"golden_diff": "diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py\n--- a/googleapiclient/errors.py\n+++ b/googleapiclient/errors.py\n@@ -51,10 +51,9 @@\n data = json.loads(self.content.decode(\"utf-8\"))\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n- if \"details\" in data[\"error\"]:\n- self.error_details = data[\"error\"][\"details\"]\n- elif \"detail\" in data[\"error\"]:\n- self.error_details = data[\"error\"][\"detail\"]\n+ error_detail_keyword = next((kw for kw in [\"detail\", \"details\", \"message\"] if kw in data[\"error\"]), \"\")\n+ if error_detail_keyword:\n+ self.error_details = data[\"error\"][error_detail_keyword]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n", "issue": "Add exception handling to docs\nHi :) \r\n\r\n\r\nI was reading the [docs](https://github.com/googleapis/google-api-python-client/blob/master/docs/start.md) looking for an example to handle exceptions from when request.execute() goes wrong e.g. a 403 due to \r\nExceeding qouta limits.\r\n\r\n\r\nI would like for the docs to be updated with a try: and except: like this\r\n``` python\r\n try:\r\n response = request.execute()\r\n except HttpError as e:\r\n logger.error('Error response status code %d, reason %s:', e.resp.status, e.content)\r\n return {'error': 403, 'body' : 'YouTube API Data v3 qouta limit exceeded'}\r\n```\r\nor something else in the `except` block\r\n \r\nIf you're happy with this I'd like to contribute this as a first timer to open source?\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = \"[email protected] (Joe Gregorio)\"\n\nimport json\n\nfrom googleapiclient import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n self.error_details = \"\"\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n data = json.loads(self.content.decode(\"utf-8\"))\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n if \"details\" in data[\"error\"]:\n self.error_details = data[\"error\"][\"details\"]\n elif \"detail\" in data[\"error\"]:\n self.error_details = data[\"error\"][\"detail\"]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n if \"details\" in first_error[\"error\"]:\n self.error_details = 
first_error[\"error\"][\"details\"]\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n reason = \"\"\n return reason\n\n def __repr__(self):\n reason = self._get_reason()\n if self.error_details:\n return '<HttpError %s when requesting %s returned \"%s\". Details: \"%s\">' % (\n self.resp.status,\n self.uri,\n reason.strip(),\n self.error_details,\n )\n elif self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status,\n self.uri,\n self._get_reason().strip(),\n )\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occurred during resumable upload.\"\"\"\n\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n\n pass\n\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n\n pass\n\n\nclass BatchError(HttpError):\n \"\"\"Error occurred during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n if getattr(self.resp, \"status\", None) is None:\n return '<BatchError \"%s\">' % (self.reason)\n else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n \"Received unexpected call %s\" % methodId\n )\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n \"Expected: [%s] - Provided: [%s]\" % (expected, provided)\n )\n", "path": "googleapiclient/errors.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = \"[email protected] (Joe Gregorio)\"\n\nimport json\n\nfrom googleapiclient import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n self.error_details = \"\"\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n data = json.loads(self.content.decode(\"utf-8\"))\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n error_detail_keyword = next((kw for kw in [\"detail\", \"details\", \"message\"] if kw in data[\"error\"]), \"\")\n if error_detail_keyword:\n self.error_details = data[\"error\"][error_detail_keyword]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n if \"details\" in first_error[\"error\"]:\n self.error_details = first_error[\"error\"][\"details\"]\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n reason = \"\"\n return reason\n\n def __repr__(self):\n reason = self._get_reason()\n if self.error_details:\n return '<HttpError %s when requesting %s returned \"%s\". 
Details: \"%s\">' % (\n self.resp.status,\n self.uri,\n reason.strip(),\n self.error_details,\n )\n elif self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status,\n self.uri,\n self._get_reason().strip(),\n )\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occurred during resumable upload.\"\"\"\n\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n\n pass\n\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n\n pass\n\n\nclass BatchError(HttpError):\n \"\"\"Error occurred during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n if getattr(self.resp, \"status\", None) is None:\n return '<BatchError \"%s\">' % (self.reason)\n else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n \"Received unexpected call %s\" % methodId\n )\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n \"Expected: [%s] - Provided: [%s]\" % (expected, provided)\n )\n", "path": "googleapiclient/errors.py"}]} | 1,977 | 207 |
gh_patches_debug_14573 | rasdani/github-patches | git_diff | ethereum__web3.py-996 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add python 3.7 to CI tests
### What was wrong?
python 3.7 is out, and we should include it in our testing.
### How can it be fixed?
add python 3.7 to our tox.ini & circleci config
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from setuptools import (
4 find_packages,
5 setup,
6 )
7
8
9 setup(
10 name='web3',
11 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
12 version='4.5.0',
13 description="""Web3.py""",
14 long_description_markdown_filename='README.md',
15 author='Piper Merriam',
16 author_email='[email protected]',
17 url='https://github.com/ethereum/web3.py',
18 include_package_data=True,
19 install_requires=[
20 "toolz>=0.9.0,<1.0.0;implementation_name=='pypy'",
21 "cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'",
22 "eth-abi>=1.1.1,<2",
23 "eth-account>=0.2.1,<0.4.0",
24 "eth-utils>=1.0.1,<2.0.0",
25 "hexbytes>=0.1.0,<1.0.0",
26 "lru-dict>=1.1.6,<2.0.0",
27 "eth-hash[pycryptodome]",
28 "requests>=2.16.0,<3.0.0",
29 "websockets>=5.0.1,<6.0.0",
30 "pypiwin32>=223;platform_system=='Windows'",
31 ],
32 setup_requires=['setuptools-markdown'],
33 python_requires='>=3.5, <4',
34 extras_require={
35 'tester': [
36 "eth-tester[py-evm]==0.1.0-beta.30",
37 "py-geth>=2.0.1,<3.0.0",
38 ],
39 'testrpc': ["eth-testrpc>=1.3.3,<2.0.0"],
40 'linter': [
41 "flake8==3.4.1",
42 "isort>=4.2.15,<5",
43 ],
44 },
45 py_modules=['web3', 'ens'],
46 license="MIT",
47 zip_safe=False,
48 keywords='ethereum',
49 packages=find_packages(exclude=["tests", "tests.*"]),
50 classifiers=[
51 'Development Status :: 5 - Production/Stable',
52 'Intended Audience :: Developers',
53 'License :: OSI Approved :: MIT License',
54 'Natural Language :: English',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.5',
57 'Programming Language :: Python :: 3.6',
58 ],
59 )
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,14 +26,14 @@
"lru-dict>=1.1.6,<2.0.0",
"eth-hash[pycryptodome]",
"requests>=2.16.0,<3.0.0",
- "websockets>=5.0.1,<6.0.0",
+ "websockets>=6.0.0,<7.0.0",
"pypiwin32>=223;platform_system=='Windows'",
],
setup_requires=['setuptools-markdown'],
python_requires='>=3.5, <4',
extras_require={
'tester': [
- "eth-tester[py-evm]==0.1.0-beta.30",
+ "eth-tester[py-evm]==0.1.0-beta.31",
"py-geth>=2.0.1,<3.0.0",
],
'testrpc': ["eth-testrpc>=1.3.3,<2.0.0"],
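Note that the golden diff only moves two dependency pins — `websockets` onto the 6.x line and `eth-tester` to a newer beta — presumably because those releases are the ones that run under Python 3.7; the tox.ini and CircleCI edits the issue asks for live outside `setup.py`. A rough sketch for checking an installed environment against the new pins (assumes the `packaging` library plus Python 3.8+ for `importlib.metadata`; on 3.7 the `importlib_metadata` backport would be needed):

```python
from importlib.metadata import version  # stdlib on Python 3.8+

from packaging.specifiers import SpecifierSet

# Pins copied from the patched install_requires / extras_require.
pins = {
    "websockets": SpecifierSet(">=6.0.0,<7.0.0"),
    "eth-tester": SpecifierSet("==0.1.0b31"),  # normalized form of 0.1.0-beta.31
}

for name, spec in pins.items():
    installed = version(name)  # raises PackageNotFoundError if the package is absent
    status = "ok" if installed in spec else "outside pinned range"
    print(f"{name} {installed}: {status}")
```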
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,14 +26,14 @@\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]\",\n \"requests>=2.16.0,<3.0.0\",\n- \"websockets>=5.0.1,<6.0.0\",\n+ \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require={\n 'tester': [\n- \"eth-tester[py-evm]==0.1.0-beta.30\",\n+ \"eth-tester[py-evm]==0.1.0-beta.31\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n", "issue": "Add python 3.7 to CI tests\n### What was wrong?\r\npython 3.7 is out, and we should include it in our testing.\r\n\r\n\r\n### How can it be fixed?\r\n\r\nadd python 3.7 to our tox.ini & circleci config\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.5.0',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.1.1,<2\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.0.1,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=5.0.1,<6.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require={\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.30\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n },\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version='4.5.0',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.1.1,<2\",\n \"eth-account>=0.2.1,<0.4.0\",\n \"eth-utils>=1.0.1,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=6.0.0,<7.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require={\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.31\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n },\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}]} | 989 | 249 |
gh_patches_debug_16110 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-339 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] [recycleapp_be] UnboundLocalError: local variable 'streetId' referenced before assignment
Hello,
I have a problem with the recycleapp_be source.
I see the following error in the log:
```
Logger: waste_collection_schedule.scraper
Source: custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py:143
Integration: waste_collection_schedule ([documentation](https://github.com/mampfes/hacs_waste_collection_schedule#readme))
First occurred: 15:24:43 (1 occurrences)
Last logged: 15:24:43
fetch failed for source Recycle!: Traceback (most recent call last):
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py", line 141, in fetch
    entries = self._source.fetch()
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py", line 66, in fetch
    if streetId is None:
UnboundLocalError: local variable 'streetId' referenced before assignment
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py`
Content:
```
1 import logging
2 from datetime import datetime, timedelta
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6
7 TITLE = "Recycle!"
8 DESCRIPTION = "Source for RecycleApp.be"
9 URL = "https://www.recycleapp.be"
10 TEST_CASES = {
11 "1140 Evere, Bazellaan 1": {
12 "postcode": 1140,
13 "street": "Bazellaan",
14 "house_number": 1,
15 },
16 "3001, Waversebaan 276 with events": {
17 "postcode": 3001,
18 "street": "Waversebaan",
19 "house_number": 276,
20 },
21 "3001, Waversebaan 276 without events": {
22 "postcode": 3001,
23 "street": "Waversebaan",
24 "house_number": 276,
25 "add_events": False,
26 },
27 }
28
29 _LOGGER = logging.getLogger(__name__)
30
31
32 class Source:
33 def __init__(self, postcode, street, house_number, add_events=True):
34 self._postcode = postcode
35 self._street = street
36 self._house_number = house_number
37 self._add_events = add_events
38
39 def fetch(self):
40 url = "https://api.recycleapp.be/api/app/v1"
41 headers = {
42 "x-secret": "Crgja3EGWe8jdapyr4EEoMBgZACYYjRRcRpaMQrLDW9HJBvmgkfGQyYqLgeXPavAGvnJqkV87PBB2b8zx43q46sUgzqio4yRZbABhtKeagkVKypTEDjKfPgGycjLyJTtLHYpzwJgp4YmmCuJZN9ZmJY8CGEoFs8MKfdJpU9RjkEVfngmmk2LYD4QzFegLNKUbcCeAdEW",
43 "x-consumer": "recycleapp.be",
44 "User-Agent": "",
45 "Authorization": "",
46 }
47 r = requests.get(f"{url}/access-token", headers=headers)
48 headers["Authorization"] = r.json()["accessToken"]
49
50 params = {"q": self._postcode}
51 r = requests.get(f"{url}/zipcodes", params=params, headers=headers)
52 if r.status_code != 200:
53 _LOGGER.error("Get zip code failed")
54 return []
55 zipcodeId = r.json()["items"][0]["id"]
56
57 params = {"q": self._street, "zipcodes": zipcodeId}
58 r = requests.get(f"{url}/streets", params=params, headers=headers)
59 if r.status_code != 200:
60 _LOGGER.error("Get street id failed")
61 return []
62
63 for item in r.json()["items"]:
64 if item["name"] == self._street:
65 streetId = item["id"]
66 if streetId is None:
67 streetId = r.json()["items"][0]["id"]
68
69 now = datetime.now()
70 fromDate = now.strftime("%Y-%m-%d")
71 untilDate = (now + timedelta(days=365)).strftime("%Y-%m-%d")
72 params = {
73 "zipcodeId": zipcodeId,
74 "streetId": streetId,
75 "houseNumber": self._house_number,
76 "fromDate": fromDate,
77 "untilDate": untilDate,
78 # "size":100,
79 }
80 r = requests.get(f"{url}/collections", params=params, headers=headers)
81 if r.status_code != 200:
82 _LOGGER.error("Get data failed")
83 return []
84
85 entries = []
86 for item in r.json()["items"]:
87 if "exception" in item and "replacedBy" in item["exception"]:
88 continue
89
90 date = datetime.strptime(item["timestamp"], "%Y-%m-%dT%H:%M:%S.000Z").date()
91 if item["type"] == "collection":
92 entries.append(Collection(date, item["fraction"]["name"]["en"]))
93 elif item["type"] == "event" and self._add_events:
94 entries.append(Collection(date, item["event"]["title"]["en"]))
95
96 return entries
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py
@@ -24,6 +24,12 @@
"house_number": 276,
"add_events": False,
},
+ "1400, Rue de namur 1 with events": {
+ "postcode": 1400,
+ "street": "Rue de namur",
+ "house_number": 1,
+ "add_events": True,
+ },
}
_LOGGER = logging.getLogger(__name__)
@@ -60,6 +66,7 @@
_LOGGER.error("Get street id failed")
return []
+ streetId = None
for item in r.json()["items"]:
if item["name"] == self._street:
streetId = item["id"]
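The one-line fix simply makes `streetId` exist before the loop runs, so the later `if streetId is None:` check can fall back to the first returned street instead of raising. A stripped-down sketch of the failure mode with made-up street data (names and ids are invented for illustration):

```python
items = [{"name": "Rue de Namur", "id": 42}]  # pretend API response

def pick_street_id_buggy(street):
    for item in items:
        if item["name"] == street:
            streetId = item["id"]
    if streetId is None:  # UnboundLocalError when no name matched exactly
        streetId = items[0]["id"]
    return streetId

def pick_street_id_fixed(street):
    streetId = None  # the line added by the patch
    for item in items:
        if item["name"] == street:
            streetId = item["id"]
    if streetId is None:
        streetId = items[0]["id"]
    return streetId

# An exact match works either way; a case mismatch only survives with the fix.
print(pick_street_id_fixed("rue de namur"))   # -> 42, falls back to the first item
# pick_street_id_buggy("rue de namur") would raise UnboundLocalError here.
```

The new `1400, Rue de namur 1` test case added at the top of the diff presumably exercises exactly this path, since the queried street name differs in capitalization from what the API returns.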
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\n@@ -24,6 +24,12 @@\n \"house_number\": 276,\n \"add_events\": False,\n },\n+ \"1400, Rue de namur 1 with events\": {\n+ \"postcode\": 1400,\n+ \"street\": \"Rue de namur\",\n+ \"house_number\": 1,\n+ \"add_events\": True,\n+ },\n }\n \n _LOGGER = logging.getLogger(__name__)\n@@ -60,6 +66,7 @@\n _LOGGER.error(\"Get street id failed\")\n return []\n \n+ streetId = None\n for item in r.json()[\"items\"]:\n if item[\"name\"] == self._street:\n streetId = item[\"id\"]\n", "issue": "[bug] [recycleapp_be] UnboundLocalError: local variable 'streetId' referenced before assignment\nHello, \r\n\r\nI have a problem with the recycleapp_be source.\r\n\r\nI have the error into the log : \r\n\r\n```\r\nLogger: waste_collection_schedule.scraper\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py:143\r\nIntegration: waste_collection_schedule ([documentation](https://github.com/mampfes/hacs_waste_collection_schedule#readme))\r\nFirst occurred: 15:24:43 (1 occurrences)\r\nLast logged: 15:24:43\r\n\r\nfetch failed for source Recycle!: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py\", line 141, in fetch entries = self._source.fetch() File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py\", line 66, in fetch if streetId is None: UnboundLocalError: local variable 'streetId' referenced before assignment\r\n` ``\r\n\n", "before_files": [{"content": "import logging\nfrom datetime import datetime, timedelta\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Recycle!\"\nDESCRIPTION = \"Source for RecycleApp.be\"\nURL = \"https://www.recycleapp.be\"\nTEST_CASES = {\n \"1140 Evere, Bazellaan 1\": {\n \"postcode\": 1140,\n \"street\": \"Bazellaan\",\n \"house_number\": 1,\n },\n \"3001, Waversebaan 276 with events\": {\n \"postcode\": 3001,\n \"street\": \"Waversebaan\",\n \"house_number\": 276,\n },\n \"3001, Waversebaan 276 without events\": {\n \"postcode\": 3001,\n \"street\": \"Waversebaan\",\n \"house_number\": 276,\n \"add_events\": False,\n },\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, postcode, street, house_number, add_events=True):\n self._postcode = postcode\n self._street = street\n self._house_number = house_number\n self._add_events = add_events\n\n def fetch(self):\n url = \"https://api.recycleapp.be/api/app/v1\"\n headers = {\n \"x-secret\": \"Crgja3EGWe8jdapyr4EEoMBgZACYYjRRcRpaMQrLDW9HJBvmgkfGQyYqLgeXPavAGvnJqkV87PBB2b8zx43q46sUgzqio4yRZbABhtKeagkVKypTEDjKfPgGycjLyJTtLHYpzwJgp4YmmCuJZN9ZmJY8CGEoFs8MKfdJpU9RjkEVfngmmk2LYD4QzFegLNKUbcCeAdEW\",\n \"x-consumer\": \"recycleapp.be\",\n \"User-Agent\": \"\",\n \"Authorization\": \"\",\n }\n r = requests.get(f\"{url}/access-token\", headers=headers)\n headers[\"Authorization\"] = r.json()[\"accessToken\"]\n\n params = {\"q\": self._postcode}\n r = requests.get(f\"{url}/zipcodes\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get zip code 
failed\")\n return []\n zipcodeId = r.json()[\"items\"][0][\"id\"]\n\n params = {\"q\": self._street, \"zipcodes\": zipcodeId}\n r = requests.get(f\"{url}/streets\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get street id failed\")\n return []\n\n for item in r.json()[\"items\"]:\n if item[\"name\"] == self._street:\n streetId = item[\"id\"]\n if streetId is None:\n streetId = r.json()[\"items\"][0][\"id\"]\n\n now = datetime.now()\n fromDate = now.strftime(\"%Y-%m-%d\")\n untilDate = (now + timedelta(days=365)).strftime(\"%Y-%m-%d\")\n params = {\n \"zipcodeId\": zipcodeId,\n \"streetId\": streetId,\n \"houseNumber\": self._house_number,\n \"fromDate\": fromDate,\n \"untilDate\": untilDate,\n # \"size\":100,\n }\n r = requests.get(f\"{url}/collections\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get data failed\")\n return []\n\n entries = []\n for item in r.json()[\"items\"]:\n if \"exception\" in item and \"replacedBy\" in item[\"exception\"]:\n continue\n\n date = datetime.strptime(item[\"timestamp\"], \"%Y-%m-%dT%H:%M:%S.000Z\").date()\n if item[\"type\"] == \"collection\":\n entries.append(Collection(date, item[\"fraction\"][\"name\"][\"en\"]))\n elif item[\"type\"] == \"event\" and self._add_events:\n entries.append(Collection(date, item[\"event\"][\"title\"][\"en\"]))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py"}], "after_files": [{"content": "import logging\nfrom datetime import datetime, timedelta\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Recycle!\"\nDESCRIPTION = \"Source for RecycleApp.be\"\nURL = \"https://www.recycleapp.be\"\nTEST_CASES = {\n \"1140 Evere, Bazellaan 1\": {\n \"postcode\": 1140,\n \"street\": \"Bazellaan\",\n \"house_number\": 1,\n },\n \"3001, Waversebaan 276 with events\": {\n \"postcode\": 3001,\n \"street\": \"Waversebaan\",\n \"house_number\": 276,\n },\n \"3001, Waversebaan 276 without events\": {\n \"postcode\": 3001,\n \"street\": \"Waversebaan\",\n \"house_number\": 276,\n \"add_events\": False,\n },\n \"1400, Rue de namur 1 with events\": {\n \"postcode\": 1400,\n \"street\": \"Rue de namur\",\n \"house_number\": 1,\n \"add_events\": True,\n },\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, postcode, street, house_number, add_events=True):\n self._postcode = postcode\n self._street = street\n self._house_number = house_number\n self._add_events = add_events\n\n def fetch(self):\n url = \"https://api.recycleapp.be/api/app/v1\"\n headers = {\n \"x-secret\": \"Crgja3EGWe8jdapyr4EEoMBgZACYYjRRcRpaMQrLDW9HJBvmgkfGQyYqLgeXPavAGvnJqkV87PBB2b8zx43q46sUgzqio4yRZbABhtKeagkVKypTEDjKfPgGycjLyJTtLHYpzwJgp4YmmCuJZN9ZmJY8CGEoFs8MKfdJpU9RjkEVfngmmk2LYD4QzFegLNKUbcCeAdEW\",\n \"x-consumer\": \"recycleapp.be\",\n \"User-Agent\": \"\",\n \"Authorization\": \"\",\n }\n r = requests.get(f\"{url}/access-token\", headers=headers)\n headers[\"Authorization\"] = r.json()[\"accessToken\"]\n\n params = {\"q\": self._postcode}\n r = requests.get(f\"{url}/zipcodes\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get zip code failed\")\n return []\n zipcodeId = r.json()[\"items\"][0][\"id\"]\n\n params = {\"q\": self._street, \"zipcodes\": zipcodeId}\n r = requests.get(f\"{url}/streets\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get street id failed\")\n return 
[]\n\n streetId = None\n for item in r.json()[\"items\"]:\n if item[\"name\"] == self._street:\n streetId = item[\"id\"]\n if streetId is None:\n streetId = r.json()[\"items\"][0][\"id\"]\n\n now = datetime.now()\n fromDate = now.strftime(\"%Y-%m-%d\")\n untilDate = (now + timedelta(days=365)).strftime(\"%Y-%m-%d\")\n params = {\n \"zipcodeId\": zipcodeId,\n \"streetId\": streetId,\n \"houseNumber\": self._house_number,\n \"fromDate\": fromDate,\n \"untilDate\": untilDate,\n # \"size\":100,\n }\n r = requests.get(f\"{url}/collections\", params=params, headers=headers)\n if r.status_code != 200:\n _LOGGER.error(\"Get data failed\")\n return []\n\n entries = []\n for item in r.json()[\"items\"]:\n if \"exception\" in item and \"replacedBy\" in item[\"exception\"]:\n continue\n\n date = datetime.strptime(item[\"timestamp\"], \"%Y-%m-%dT%H:%M:%S.000Z\").date()\n if item[\"type\"] == \"collection\":\n entries.append(Collection(date, item[\"fraction\"][\"name\"][\"en\"]))\n elif item[\"type\"] == \"event\" and self._add_events:\n entries.append(Collection(date, item[\"event\"][\"title\"][\"en\"]))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/recycleapp_be.py"}]} | 1,627 | 240 |
gh_patches_debug_58681 | rasdani/github-patches | git_diff | lightly-ai__lightly-1009 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Loss stuck
Hi, I am trying to run the tutorial posted here
https://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_moco_memory_bank.html
But my loss is stuck at 8.32 after 100 epochs
python 3.9
pytorch-lightning 1.8.1
lightly 1.2.38
Any suggestions on how I should troubleshoot this?
Thanks in advance!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightly/loss/memory_bank.py`
Content:
```
1 """ Memory Bank Wrapper """
2
3 # Copyright (c) 2020. Lightly AG and its affiliates.
4 # All Rights Reserved
5
6 import torch
7 import functools
8
9 class MemoryBankModule(torch.nn.Module):
10 """Memory bank implementation
11
12 This is a parent class to all loss functions implemented by the lightly
13 Python package. This way, any loss can be used with a memory bank if
14 desired.
15
16 Attributes:
17 size:
18 Number of keys the memory bank can store. If set to 0,
19 memory bank is not used.
20
21 Examples:
22 >>> class MyLossFunction(MemoryBankModule):
23 >>>
24 >>> def __init__(self, memory_bank_size: int = 2 ** 16):
25 >>> super(MyLossFunction, self).__init__(memory_bank_size)
26 >>>
27 >>> def forward(self, output: torch.Tensor,
28 >>> labels: torch.Tensor = None):
29 >>>
30 >>> output, negatives = super(
31 >>> MyLossFunction, self).forward(output)
32 >>>
33 >>> if negatives is not None:
34 >>> # evaluate loss with negative samples
35 >>> else:
36 >>> # evaluate loss without negative samples
37
38 """
39
40 def __init__(self, size: int = 2 ** 16):
41
42 super(MemoryBankModule, self).__init__()
43
44 if size < 0:
45 msg = f'Illegal memory bank size {size}, must be non-negative.'
46 raise ValueError(msg)
47
48 self.size = size
49 self.register_buffer("bank", tensor=torch.empty(0, dtype=torch.float), persistent=False)
50 self.register_buffer("bank_ptr", tensor=torch.empty(0, dtype=torch.long), persistent=False)
51
52 @torch.no_grad()
53 def _init_memory_bank(self, dim: int):
54 """Initialize the memory bank if it's empty
55
56 Args:
57 dim:
58 The dimension of the which are stored in the bank.
59
60 """
61 # create memory bank
62 # we could use register buffers like in the moco repo
63 # https://github.com/facebookresearch/moco but we don't
64 # want to pollute our checkpoints
65 self.bank = torch.randn(dim, self.size).type_as(self.bank)
66 torch.nn.functional.normalize(self.bank, dim=0)
67 self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)
68
69 @torch.no_grad()
70 def _dequeue_and_enqueue(self, batch: torch.Tensor):
71 """Dequeue the oldest batch and add the latest one
72
73 Args:
74 batch:
75 The latest batch of keys to add to the memory bank.
76
77 """
78 batch_size = batch.shape[0]
79 ptr = int(self.bank_ptr)
80
81 if ptr + batch_size >= self.size:
82 self.bank[:, ptr:] = batch[:self.size - ptr].T.detach()
83 self.bank_ptr[0] = 0
84 else:
85 self.bank[:, ptr:ptr + batch_size] = batch.T.detach()
86 self.bank_ptr[0] = ptr + batch_size
87
88 def forward(self,
89 output: torch.Tensor,
90 labels: torch.Tensor = None,
91 update: bool = False):
92 """Query memory bank for additional negative samples
93
94 Args:
95 output:
96 The output of the model.
97 labels:
98 Should always be None, will be ignored.
99
100 Returns:
101 The output if the memory bank is of size 0, otherwise the output
102 and the entries from the memory bank.
103
104 """
105
106 # no memory bank, return the output
107 if self.size == 0:
108 return output, None
109
110 _, dim = output.shape
111
112 # initialize the memory bank if it is not already done
113 if self.bank.nelement() == 0:
114 self._init_memory_bank(dim)
115
116 # query and update memory bank
117 bank = self.bank.clone().detach()
118
119 # only update memory bank if we later do backward pass (gradient)
120 if update:
121 self._dequeue_and_enqueue(output)
122
123 return output, bank
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightly/loss/memory_bank.py b/lightly/loss/memory_bank.py
--- a/lightly/loss/memory_bank.py
+++ b/lightly/loss/memory_bank.py
@@ -63,7 +63,7 @@
# https://github.com/facebookresearch/moco but we don't
# want to pollute our checkpoints
self.bank = torch.randn(dim, self.size).type_as(self.bank)
- torch.nn.functional.normalize(self.bank, dim=0)
+ self.bank = torch.nn.functional.normalize(self.bank, dim=0)
self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)
@torch.no_grad()
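The root cause is that `torch.nn.functional.normalize` is not an in-place operation: the original line computed a normalized copy of the bank and discarded it, so the memory bank keeps its raw random values instead of the unit-norm negatives the loss expects. The plateau at 8.32 is, for what it's worth, about ln(4096) ≈ 8.3 — chance level if the tutorial's bank holds 4096 negatives. A tiny check of the behaviour:

```python
import torch

bank = torch.randn(128, 4096)  # (dim, size), mirroring _init_memory_bank

# normalize() returns a new tensor; without the assignment the bank is untouched.
torch.nn.functional.normalize(bank, dim=0)
print(bank.norm(dim=0)[:3])    # arbitrary column norms -> badly scaled negatives

bank = torch.nn.functional.normalize(bank, dim=0)
print(bank.norm(dim=0)[:3])    # tensor([1., 1., 1.]), as the loss expects
```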
| {"golden_diff": "diff --git a/lightly/loss/memory_bank.py b/lightly/loss/memory_bank.py\n--- a/lightly/loss/memory_bank.py\n+++ b/lightly/loss/memory_bank.py\n@@ -63,7 +63,7 @@\n # https://github.com/facebookresearch/moco but we don't\n # want to pollute our checkpoints\n self.bank = torch.randn(dim, self.size).type_as(self.bank)\n- torch.nn.functional.normalize(self.bank, dim=0)\n+ self.bank = torch.nn.functional.normalize(self.bank, dim=0)\n self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)\n \n @torch.no_grad()\n", "issue": "Loss stuck\nHi, I am trying to run the tutorial posted here \r\nhttps://docs.lightly.ai/self-supervised-learning/tutorials/package/tutorial_moco_memory_bank.html\r\nBut my loss is stuck at 8.32 after 100 epochs\r\npython 3.9\r\npytorch-lightning 1.8.1 \r\nlightly 1.2.38\r\n\r\nAny suggestions on how I should troubleshoot this?\r\nThanks in advance!\n", "before_files": [{"content": "\"\"\" Memory Bank Wrapper \"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport torch\nimport functools\n\nclass MemoryBankModule(torch.nn.Module):\n \"\"\"Memory bank implementation\n\n This is a parent class to all loss functions implemented by the lightly\n Python package. This way, any loss can be used with a memory bank if \n desired.\n\n Attributes:\n size:\n Number of keys the memory bank can store. If set to 0,\n memory bank is not used.\n\n Examples:\n >>> class MyLossFunction(MemoryBankModule):\n >>>\n >>> def __init__(self, memory_bank_size: int = 2 ** 16):\n >>> super(MyLossFunction, self).__init__(memory_bank_size)\n >>>\n >>> def forward(self, output: torch.Tensor,\n >>> labels: torch.Tensor = None):\n >>>\n >>> output, negatives = super(\n >>> MyLossFunction, self).forward(output)\n >>>\n >>> if negatives is not None:\n >>> # evaluate loss with negative samples\n >>> else:\n >>> # evaluate loss without negative samples\n\n \"\"\"\n\n def __init__(self, size: int = 2 ** 16):\n\n super(MemoryBankModule, self).__init__()\n\n if size < 0:\n msg = f'Illegal memory bank size {size}, must be non-negative.'\n raise ValueError(msg)\n\n self.size = size\n self.register_buffer(\"bank\", tensor=torch.empty(0, dtype=torch.float), persistent=False)\n self.register_buffer(\"bank_ptr\", tensor=torch.empty(0, dtype=torch.long), persistent=False)\n\n @torch.no_grad()\n def _init_memory_bank(self, dim: int):\n \"\"\"Initialize the memory bank if it's empty\n\n Args:\n dim:\n The dimension of the which are stored in the bank.\n\n \"\"\"\n # create memory bank\n # we could use register buffers like in the moco repo\n # https://github.com/facebookresearch/moco but we don't\n # want to pollute our checkpoints\n self.bank = torch.randn(dim, self.size).type_as(self.bank)\n torch.nn.functional.normalize(self.bank, dim=0)\n self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)\n\n @torch.no_grad()\n def _dequeue_and_enqueue(self, batch: torch.Tensor):\n \"\"\"Dequeue the oldest batch and add the latest one\n\n Args:\n batch:\n The latest batch of keys to add to the memory bank.\n\n \"\"\"\n batch_size = batch.shape[0]\n ptr = int(self.bank_ptr)\n\n if ptr + batch_size >= self.size:\n self.bank[:, ptr:] = batch[:self.size - ptr].T.detach()\n self.bank_ptr[0] = 0\n else:\n self.bank[:, ptr:ptr + batch_size] = batch.T.detach()\n self.bank_ptr[0] = ptr + batch_size\n\n def forward(self,\n output: torch.Tensor,\n labels: torch.Tensor = None,\n update: bool = False):\n \"\"\"Query memory bank for additional negative samples\n\n Args:\n output:\n The output 
of the model.\n labels:\n Should always be None, will be ignored.\n\n Returns:\n The output if the memory bank is of size 0, otherwise the output\n and the entries from the memory bank.\n\n \"\"\"\n\n # no memory bank, return the output\n if self.size == 0:\n return output, None\n\n _, dim = output.shape\n\n # initialize the memory bank if it is not already done\n if self.bank.nelement() == 0:\n self._init_memory_bank(dim)\n\n # query and update memory bank\n bank = self.bank.clone().detach()\n\n # only update memory bank if we later do backward pass (gradient)\n if update:\n self._dequeue_and_enqueue(output)\n\n return output, bank\n", "path": "lightly/loss/memory_bank.py"}], "after_files": [{"content": "\"\"\" Memory Bank Wrapper \"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport torch\nimport functools\n\nclass MemoryBankModule(torch.nn.Module):\n \"\"\"Memory bank implementation\n\n This is a parent class to all loss functions implemented by the lightly\n Python package. This way, any loss can be used with a memory bank if \n desired.\n\n Attributes:\n size:\n Number of keys the memory bank can store. If set to 0,\n memory bank is not used.\n\n Examples:\n >>> class MyLossFunction(MemoryBankModule):\n >>>\n >>> def __init__(self, memory_bank_size: int = 2 ** 16):\n >>> super(MyLossFunction, self).__init__(memory_bank_size)\n >>>\n >>> def forward(self, output: torch.Tensor,\n >>> labels: torch.Tensor = None):\n >>>\n >>> output, negatives = super(\n >>> MyLossFunction, self).forward(output)\n >>>\n >>> if negatives is not None:\n >>> # evaluate loss with negative samples\n >>> else:\n >>> # evaluate loss without negative samples\n\n \"\"\"\n\n def __init__(self, size: int = 2 ** 16):\n\n super(MemoryBankModule, self).__init__()\n\n if size < 0:\n msg = f'Illegal memory bank size {size}, must be non-negative.'\n raise ValueError(msg)\n\n self.size = size\n self.register_buffer(\"bank\", tensor=torch.empty(0, dtype=torch.float), persistent=False)\n self.register_buffer(\"bank_ptr\", tensor=torch.empty(0, dtype=torch.long), persistent=False)\n\n @torch.no_grad()\n def _init_memory_bank(self, dim: int):\n \"\"\"Initialize the memory bank if it's empty\n\n Args:\n dim:\n The dimension of the which are stored in the bank.\n\n \"\"\"\n # create memory bank\n # we could use register buffers like in the moco repo\n # https://github.com/facebookresearch/moco but we don't\n # want to pollute our checkpoints\n self.bank = torch.randn(dim, self.size).type_as(self.bank)\n self.bank = torch.nn.functional.normalize(self.bank, dim=0)\n self.bank_ptr = torch.zeros(1).type_as(self.bank_ptr)\n\n @torch.no_grad()\n def _dequeue_and_enqueue(self, batch: torch.Tensor):\n \"\"\"Dequeue the oldest batch and add the latest one\n\n Args:\n batch:\n The latest batch of keys to add to the memory bank.\n\n \"\"\"\n batch_size = batch.shape[0]\n ptr = int(self.bank_ptr)\n\n if ptr + batch_size >= self.size:\n self.bank[:, ptr:] = batch[:self.size - ptr].T.detach()\n self.bank_ptr[0] = 0\n else:\n self.bank[:, ptr:ptr + batch_size] = batch.T.detach()\n self.bank_ptr[0] = ptr + batch_size\n\n def forward(self,\n output: torch.Tensor,\n labels: torch.Tensor = None,\n update: bool = False):\n \"\"\"Query memory bank for additional negative samples\n\n Args:\n output:\n The output of the model.\n labels:\n Should always be None, will be ignored.\n\n Returns:\n The output if the memory bank is of size 0, otherwise the output\n and the entries from the memory bank.\n\n 
\"\"\"\n\n # no memory bank, return the output\n if self.size == 0:\n return output, None\n\n _, dim = output.shape\n\n # initialize the memory bank if it is not already done\n if self.bank.nelement() == 0:\n self._init_memory_bank(dim)\n\n # query and update memory bank\n bank = self.bank.clone().detach()\n\n # only update memory bank if we later do backward pass (gradient)\n if update:\n self._dequeue_and_enqueue(output)\n\n return output, bank\n", "path": "lightly/loss/memory_bank.py"}]} | 1,491 | 144 |
gh_patches_debug_23108 | rasdani/github-patches | git_diff | keras-team__autokeras-568 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Evaluation criteria for MLP
Are there any evaluation criteria for the MLP module in Autokeras?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `autokeras/net_module.py`
Content:
```
1 from functools import reduce
2
3 import torch
4 import numpy as np
5
6 import os
7 import time
8
9 from autokeras.constant import Constant
10 from autokeras.search import BayesianSearcher, train
11
12 from autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir
13 from autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator
14
15
16 class NetworkModule:
17 """ Class to create a network module.
18
19 Attributes:
20 loss: A function taking two parameters, the predictions and the ground truth.
21 metric: An instance of the Metric subclasses.
22 searcher_args: A dictionary containing the parameters for the searcher's __init__ function.
23 searcher: An instance of the Searcher class.
24 path: A string. The path to the directory to save the searcher.
25 verbose: A boolean. Setting it to true prints to stdout.
26 generators: A list of instances of the NetworkGenerator class or its subclasses.
27 search_type: A constant denoting the type of hyperparameter search algorithm that must be used.
28 """
29
30 def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False, search_type=BayesianSearcher):
31 self.searcher_args = searcher_args if searcher_args is not None else {}
32 self.searcher = None
33 self.path = path if path is not None else rand_temp_folder_generator()
34 ensure_dir(self.path)
35 if verbose:
36 print('Saving Directory:', self.path)
37 self.verbose = verbose
38 self.loss = loss
39 self.metric = metric
40 self.generators = []
41 self.search_type = search_type
42
43 def fit(self, n_output_node, input_shape, train_data, test_data, time_limit=24 * 60 * 60):
44 """ Search the best network.
45
46 Args:
47 n_output_node: A integer value represent the number of output node in the final layer.
48 input_shape: A tuple to express the shape of every train entry. For example,
49 MNIST dataset would be (28,28,1).
50 train_data: A PyTorch DataLoader instance representing the training data.
51 test_data: A PyTorch DataLoader instance representing the testing data.
52 time_limit: A integer value represents the time limit on searching for models.
53 """
54 # Create the searcher and save on disk
55
56 if not self.searcher:
57 input_shape = input_shape[1:]
58 self.searcher_args['n_output_node'] = n_output_node
59 self.searcher_args['input_shape'] = input_shape
60 self.searcher_args['path'] = self.path
61 self.searcher_args['metric'] = self.metric
62 self.searcher_args['loss'] = self.loss
63 self.searcher_args['generators'] = self.generators
64 self.searcher_args['verbose'] = self.verbose
65 pickle_to_file(self, os.path.join(self.path, 'module'))
66 self.searcher = self.search_type(**self.searcher_args)
67
68 start_time = time.time()
69 time_remain = time_limit
70 try:
71 while time_remain > 0:
72 self.searcher.search(train_data, test_data, int(time_remain))
73 pickle_to_file(self, os.path.join(self.path, 'module'))
74 if len(self.searcher.history) >= Constant.MAX_MODEL_NUM:
75 break
76 time_elapsed = time.time() - start_time
77 time_remain = time_limit - time_elapsed
78 # if no search executed during the time_limit, then raise an error
79 if time_remain <= 0:
80 raise TimeoutError
81 except TimeoutError:
82 if len(self.searcher.history) == 0:
83 raise TimeoutError("Search Time too short. No model was found during the search time.")
84 elif self.verbose:
85 print('Time is out.')
86
87 def final_fit(self, train_data, test_data, trainer_args=None, retrain=False):
88 """Final training after found the best architecture.
89
90 Args:
91 train_data: A DataLoader instance representing the training data.
92 test_data: A DataLoader instance representing the testing data.
93 trainer_args: A dictionary containing the parameters of the ModelTrainer constructor.
94 retrain: A boolean of whether reinitialize the weights of the model.
95 """
96 graph = self.searcher.load_best_model()
97
98 if retrain:
99 graph.weighted = False
100 _, _1, graph = train(None, graph,
101 train_data,
102 test_data,
103 trainer_args,
104 self.metric,
105 self.loss,
106 self.verbose,
107 self.path)
108 self.searcher.replace_model(graph, self.searcher.get_best_model_id())
109 pickle_to_file(self, os.path.join(self.path, 'module'))
110
111 @property
112 def best_model(self):
113 return self.searcher.load_best_model()
114
115 def predict(self, test_loader):
116 model = self.best_model.produce_model()
117 model.eval()
118
119 outputs = []
120 with torch.no_grad():
121 for index, inputs in enumerate(test_loader):
122 outputs.append(model(inputs).numpy())
123 output = reduce(lambda x, y: np.concatenate((x, y)), outputs)
124 return output
125
126
127 class CnnModule(NetworkModule):
128 """ Class to create a CNN module."""
129
130 def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False,
131 search_type=BayesianSearcher):
132 super(CnnModule, self).__init__(loss, metric, searcher_args, path, verbose, search_type)
133 self.generators.append(CnnGenerator)
134 self.generators.append(ResNetGenerator)
135 self.generators.append(DenseNetGenerator)
136
137
138 class MlpModule(NetworkModule):
139 """ Class to create an MLP module."""
140
141 def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False):
142 super(MlpModule, self).__init__(loss, metric, searcher_args, path, verbose)
143 self.generators.extend([MlpGenerator] * 2)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/autokeras/net_module.py b/autokeras/net_module.py
--- a/autokeras/net_module.py
+++ b/autokeras/net_module.py
@@ -11,6 +11,7 @@
from autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir
from autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator
+from autokeras.utils import get_device
class NetworkModule:
@@ -123,6 +124,24 @@
output = reduce(lambda x, y: np.concatenate((x, y)), outputs)
return output
+ def evaluate(self, test_data):
+ """Evaluate the performance of the best architecture in terms of the loss.
+
+ Args:
+ test_data: A DataLoader instance representing the testing data.
+ """
+ model = self.best_model.produce_model()
+ model.eval()
+ device = get_device()
+ target, prediction = [], []
+
+ with torch.no_grad():
+ for _, (x, y) in enumerate(test_data):
+ x, y = x.to(device), y.to(device)
+ prediction.append(model(x))
+ target.append(y)
+ return self.metric().compute(prediction, target)
+
class CnnModule(NetworkModule):
""" Class to create a CNN module."""
| {"golden_diff": "diff --git a/autokeras/net_module.py b/autokeras/net_module.py\n--- a/autokeras/net_module.py\n+++ b/autokeras/net_module.py\n@@ -11,6 +11,7 @@\n \n from autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir\n from autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator\n+from autokeras.utils import get_device\n \n \n class NetworkModule:\n@@ -123,6 +124,24 @@\n output = reduce(lambda x, y: np.concatenate((x, y)), outputs)\n return output\n \n+ def evaluate(self, test_data):\n+ \"\"\"Evaluate the performance of the best architecture in terms of the loss.\n+\n+ Args:\n+ test_data: A DataLoader instance representing the testing data.\n+ \"\"\"\n+ model = self.best_model.produce_model()\n+ model.eval()\n+ device = get_device()\n+ target, prediction = [], []\n+\n+ with torch.no_grad():\n+ for _, (x, y) in enumerate(test_data):\n+ x, y = x.to(device), y.to(device)\n+ prediction.append(model(x))\n+ target.append(y)\n+ return self.metric().compute(prediction, target)\n+\n \n class CnnModule(NetworkModule):\n \"\"\" Class to create a CNN module.\"\"\"\n", "issue": "Evaluation criteria for MLP\nAre there any evaluation criterias for MLP module in Autokeras? \r\n\n", "before_files": [{"content": "from functools import reduce\n\nimport torch\nimport numpy as np\n\nimport os\nimport time\n\nfrom autokeras.constant import Constant\nfrom autokeras.search import BayesianSearcher, train\n\nfrom autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir\nfrom autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator\n\n\nclass NetworkModule:\n \"\"\" Class to create a network module.\n\n Attributes:\n loss: A function taking two parameters, the predictions and the ground truth.\n metric: An instance of the Metric subclasses.\n searcher_args: A dictionary containing the parameters for the searcher's __init__ function.\n searcher: An instance of the Searcher class.\n path: A string. The path to the directory to save the searcher.\n verbose: A boolean. Setting it to true prints to stdout.\n generators: A list of instances of the NetworkGenerator class or its subclasses.\n search_type: A constant denoting the type of hyperparameter search algorithm that must be used.\n \"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False, search_type=BayesianSearcher):\n self.searcher_args = searcher_args if searcher_args is not None else {}\n self.searcher = None\n self.path = path if path is not None else rand_temp_folder_generator()\n ensure_dir(self.path)\n if verbose:\n print('Saving Directory:', self.path)\n self.verbose = verbose\n self.loss = loss\n self.metric = metric\n self.generators = []\n self.search_type = search_type\n\n def fit(self, n_output_node, input_shape, train_data, test_data, time_limit=24 * 60 * 60):\n \"\"\" Search the best network.\n\n Args:\n n_output_node: A integer value represent the number of output node in the final layer.\n input_shape: A tuple to express the shape of every train entry. 
For example,\n MNIST dataset would be (28,28,1).\n train_data: A PyTorch DataLoader instance representing the training data.\n test_data: A PyTorch DataLoader instance representing the testing data.\n time_limit: A integer value represents the time limit on searching for models.\n \"\"\"\n # Create the searcher and save on disk\n\n if not self.searcher:\n input_shape = input_shape[1:]\n self.searcher_args['n_output_node'] = n_output_node\n self.searcher_args['input_shape'] = input_shape\n self.searcher_args['path'] = self.path\n self.searcher_args['metric'] = self.metric\n self.searcher_args['loss'] = self.loss\n self.searcher_args['generators'] = self.generators\n self.searcher_args['verbose'] = self.verbose\n pickle_to_file(self, os.path.join(self.path, 'module'))\n self.searcher = self.search_type(**self.searcher_args)\n\n start_time = time.time()\n time_remain = time_limit\n try:\n while time_remain > 0:\n self.searcher.search(train_data, test_data, int(time_remain))\n pickle_to_file(self, os.path.join(self.path, 'module'))\n if len(self.searcher.history) >= Constant.MAX_MODEL_NUM:\n break\n time_elapsed = time.time() - start_time\n time_remain = time_limit - time_elapsed\n # if no search executed during the time_limit, then raise an error\n if time_remain <= 0:\n raise TimeoutError\n except TimeoutError:\n if len(self.searcher.history) == 0:\n raise TimeoutError(\"Search Time too short. No model was found during the search time.\")\n elif self.verbose:\n print('Time is out.')\n\n def final_fit(self, train_data, test_data, trainer_args=None, retrain=False):\n \"\"\"Final training after found the best architecture.\n\n Args:\n train_data: A DataLoader instance representing the training data.\n test_data: A DataLoader instance representing the testing data.\n trainer_args: A dictionary containing the parameters of the ModelTrainer constructor.\n retrain: A boolean of whether reinitialize the weights of the model.\n \"\"\"\n graph = self.searcher.load_best_model()\n\n if retrain:\n graph.weighted = False\n _, _1, graph = train(None, graph,\n train_data,\n test_data,\n trainer_args,\n self.metric,\n self.loss,\n self.verbose,\n self.path)\n self.searcher.replace_model(graph, self.searcher.get_best_model_id())\n pickle_to_file(self, os.path.join(self.path, 'module'))\n\n @property\n def best_model(self):\n return self.searcher.load_best_model()\n\n def predict(self, test_loader):\n model = self.best_model.produce_model()\n model.eval()\n\n outputs = []\n with torch.no_grad():\n for index, inputs in enumerate(test_loader):\n outputs.append(model(inputs).numpy())\n output = reduce(lambda x, y: np.concatenate((x, y)), outputs)\n return output\n\n\nclass CnnModule(NetworkModule):\n \"\"\" Class to create a CNN module.\"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False,\n search_type=BayesianSearcher):\n super(CnnModule, self).__init__(loss, metric, searcher_args, path, verbose, search_type)\n self.generators.append(CnnGenerator)\n self.generators.append(ResNetGenerator)\n self.generators.append(DenseNetGenerator)\n\n\nclass MlpModule(NetworkModule):\n \"\"\" Class to create an MLP module.\"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False):\n super(MlpModule, self).__init__(loss, metric, searcher_args, path, verbose)\n self.generators.extend([MlpGenerator] * 2)\n", "path": "autokeras/net_module.py"}], "after_files": [{"content": "from functools import reduce\n\nimport torch\nimport numpy as np\n\nimport os\nimport 
time\n\nfrom autokeras.constant import Constant\nfrom autokeras.search import BayesianSearcher, train\n\nfrom autokeras.utils import pickle_to_file, rand_temp_folder_generator, ensure_dir\nfrom autokeras.nn.generator import CnnGenerator, MlpGenerator, ResNetGenerator, DenseNetGenerator\nfrom autokeras.utils import get_device\n\n\nclass NetworkModule:\n \"\"\" Class to create a network module.\n\n Attributes:\n loss: A function taking two parameters, the predictions and the ground truth.\n metric: An instance of the Metric subclasses.\n searcher_args: A dictionary containing the parameters for the searcher's __init__ function.\n searcher: An instance of the Searcher class.\n path: A string. The path to the directory to save the searcher.\n verbose: A boolean. Setting it to true prints to stdout.\n generators: A list of instances of the NetworkGenerator class or its subclasses.\n search_type: A constant denoting the type of hyperparameter search algorithm that must be used.\n \"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False, search_type=BayesianSearcher):\n self.searcher_args = searcher_args if searcher_args is not None else {}\n self.searcher = None\n self.path = path if path is not None else rand_temp_folder_generator()\n ensure_dir(self.path)\n if verbose:\n print('Saving Directory:', self.path)\n self.verbose = verbose\n self.loss = loss\n self.metric = metric\n self.generators = []\n self.search_type = search_type\n\n def fit(self, n_output_node, input_shape, train_data, test_data, time_limit=24 * 60 * 60):\n \"\"\" Search the best network.\n\n Args:\n n_output_node: A integer value represent the number of output node in the final layer.\n input_shape: A tuple to express the shape of every train entry. For example,\n MNIST dataset would be (28,28,1).\n train_data: A PyTorch DataLoader instance representing the training data.\n test_data: A PyTorch DataLoader instance representing the testing data.\n time_limit: A integer value represents the time limit on searching for models.\n \"\"\"\n # Create the searcher and save on disk\n\n if not self.searcher:\n input_shape = input_shape[1:]\n self.searcher_args['n_output_node'] = n_output_node\n self.searcher_args['input_shape'] = input_shape\n self.searcher_args['path'] = self.path\n self.searcher_args['metric'] = self.metric\n self.searcher_args['loss'] = self.loss\n self.searcher_args['generators'] = self.generators\n self.searcher_args['verbose'] = self.verbose\n pickle_to_file(self, os.path.join(self.path, 'module'))\n self.searcher = self.search_type(**self.searcher_args)\n\n start_time = time.time()\n time_remain = time_limit\n try:\n while time_remain > 0:\n self.searcher.search(train_data, test_data, int(time_remain))\n pickle_to_file(self, os.path.join(self.path, 'module'))\n if len(self.searcher.history) >= Constant.MAX_MODEL_NUM:\n break\n time_elapsed = time.time() - start_time\n time_remain = time_limit - time_elapsed\n # if no search executed during the time_limit, then raise an error\n if time_remain <= 0:\n raise TimeoutError\n except TimeoutError:\n if len(self.searcher.history) == 0:\n raise TimeoutError(\"Search Time too short. 
No model was found during the search time.\")\n elif self.verbose:\n print('Time is out.')\n\n def final_fit(self, train_data, test_data, trainer_args=None, retrain=False):\n \"\"\"Final training after found the best architecture.\n\n Args:\n train_data: A DataLoader instance representing the training data.\n test_data: A DataLoader instance representing the testing data.\n trainer_args: A dictionary containing the parameters of the ModelTrainer constructor.\n retrain: A boolean of whether reinitialize the weights of the model.\n \"\"\"\n graph = self.searcher.load_best_model()\n\n if retrain:\n graph.weighted = False\n _, _1, graph = train(None, graph,\n train_data,\n test_data,\n trainer_args,\n self.metric,\n self.loss,\n self.verbose,\n self.path)\n self.searcher.replace_model(graph, self.searcher.get_best_model_id())\n pickle_to_file(self, os.path.join(self.path, 'module'))\n\n @property\n def best_model(self):\n return self.searcher.load_best_model()\n\n def predict(self, test_loader):\n model = self.best_model.produce_model()\n model.eval()\n\n outputs = []\n with torch.no_grad():\n for index, inputs in enumerate(test_loader):\n outputs.append(model(inputs).numpy())\n output = reduce(lambda x, y: np.concatenate((x, y)), outputs)\n return output\n\n def evaluate(self, test_data):\n \"\"\"Evaluate the performance of the best architecture in terms of the loss.\n\n Args:\n test_data: A DataLoader instance representing the testing data.\n \"\"\"\n model = self.best_model.produce_model()\n model.eval()\n device = get_device()\n target, prediction = [], []\n\n with torch.no_grad():\n for _, (x, y) in enumerate(test_data):\n x, y = x.to(device), y.to(device)\n prediction.append(model(x))\n target.append(y)\n return self.metric().compute(prediction, target)\n\n\nclass CnnModule(NetworkModule):\n \"\"\" Class to create a CNN module.\"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False,\n search_type=BayesianSearcher):\n super(CnnModule, self).__init__(loss, metric, searcher_args, path, verbose, search_type)\n self.generators.append(CnnGenerator)\n self.generators.append(ResNetGenerator)\n self.generators.append(DenseNetGenerator)\n\n\nclass MlpModule(NetworkModule):\n \"\"\" Class to create an MLP module.\"\"\"\n\n def __init__(self, loss, metric, searcher_args=None, path=None, verbose=False):\n super(MlpModule, self).__init__(loss, metric, searcher_args, path, verbose)\n self.generators.extend([MlpGenerator] * 2)\n", "path": "autokeras/net_module.py"}]} | 1,870 | 301 |
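
For readers who want to see the evaluation pattern from the patch above in isolation: the sketch below reproduces the same loop (eval mode, `torch.no_grad()`, accumulate predictions and targets, compute one metric at the end) using plain PyTorch. The model, data and accuracy metric here are stand-ins chosen for illustration, not autokeras classes.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


def evaluate(model: nn.Module, test_data: DataLoader, device: torch.device) -> float:
    # Same pattern as the patch: switch to eval mode, disable gradients,
    # and accumulate predictions/targets before computing a single metric.
    model.eval()
    predictions, targets = [], []
    with torch.no_grad():
        for x, y in test_data:
            x, y = x.to(device), y.to(device)
            predictions.append(model(x).argmax(dim=1))
            targets.append(y)
    predictions = torch.cat(predictions)
    targets = torch.cat(targets)
    # Accuracy as a stand-in for the metric object used by the patch.
    return (predictions == targets).float().mean().item()


if __name__ == "__main__":
    device = torch.device("cpu")
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.randn(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))
    loader = DataLoader(TensorDataset(x, y), batch_size=16)
    print(evaluate(model, loader, device))
```

Evaluating the whole test set in one pass like this avoids the run-to-run variation described in the issue, since no shuffling or retraining happens between calls.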
gh_patches_debug_11990 | rasdani/github-patches | git_diff | kivy__python-for-android-1513 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Didn't find any valid dependency graphs. - Flask and websocket-client
In my app I use both flask and websocket-client. However, when I try to add both of these dependencies to my app, p4a fails; when I build the app with only `flask` or only `websocket-client`, p4a works correctly.
```
p4a apk --private /home/user/sample/ --package=samplepackage --name="Sample app" --version 0.1 --bootstrap=sdl2 --requirements=python2,websocket-client,flask
[ERROR]: Didn't find any valid dependency graphs.
[ERROR]: This means that some of your requirements pull in conflicting dependencies.
[ERROR]: Exiting.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/websocket-client/__init__.py`
Content:
```
1 from pythonforandroid.toolchain import Recipe
2
3 # if android app crashes on start with "ImportError: No module named websocket"
4 #
5 # copy the 'websocket' directory into your app directory to force inclusion.
6 #
7 # see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe
8 #
9 # If you see errors relating to 'SSL not available' ensure you have the package backports.ssl-match-hostname
10 # in the buildozer requirements, since Kivy targets python 2.7.x
11 #
12 # You may also need sslopt={"cert_reqs": ssl.CERT_NONE} as a parameter to ws.run_forever() if you get an error relating to
13 # host verification
14
15
16 class WebSocketClient(Recipe):
17
18 url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'
19
20 version = '0.40.0'
21 # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'
22
23 # patches = ['websocket.patch'] # Paths relative to the recipe dir
24
25 depends = ['kivy', 'python2', 'android', 'pyjnius',
26 'cryptography', 'pyasn1', 'pyopenssl']
27
28
29 recipe = WebSocketClient()
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/recipes/websocket-client/__init__.py b/pythonforandroid/recipes/websocket-client/__init__.py
--- a/pythonforandroid/recipes/websocket-client/__init__.py
+++ b/pythonforandroid/recipes/websocket-client/__init__.py
@@ -15,15 +15,13 @@
class WebSocketClient(Recipe):
- url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'
+ url = 'https://github.com/websocket-client/websocket-client/archive/v{version}.tar.gz'
version = '0.40.0'
- # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'
# patches = ['websocket.patch'] # Paths relative to the recipe dir
- depends = ['kivy', 'python2', 'android', 'pyjnius',
- 'cryptography', 'pyasn1', 'pyopenssl']
+ depends = ['python2', 'android', 'pyjnius', 'cryptography', 'pyasn1', 'pyopenssl']
recipe = WebSocketClient()
| {"golden_diff": "diff --git a/pythonforandroid/recipes/websocket-client/__init__.py b/pythonforandroid/recipes/websocket-client/__init__.py\n--- a/pythonforandroid/recipes/websocket-client/__init__.py\n+++ b/pythonforandroid/recipes/websocket-client/__init__.py\n@@ -15,15 +15,13 @@\n \n class WebSocketClient(Recipe):\n \n- url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'\n+ url = 'https://github.com/websocket-client/websocket-client/archive/v{version}.tar.gz'\n \n version = '0.40.0'\n- # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'\n \n # patches = ['websocket.patch'] # Paths relative to the recipe dir\n \n- depends = ['kivy', 'python2', 'android', 'pyjnius',\n- 'cryptography', 'pyasn1', 'pyopenssl']\n+ depends = ['python2', 'android', 'pyjnius', 'cryptography', 'pyasn1', 'pyopenssl']\n \n \n recipe = WebSocketClient()\n", "issue": "Didn't find any valid dependency graphs. - Flask and websocket-client\nIn my app I use both flask and websocket-client. However, when i try to add both of these dependencies to my app, p4a fails. However, when I build my app only with `flask`, or only with `websocket-client` p4a works correctly.\r\n```\r\np4a apk --private /home/user/sample/ --package=samplepackage --name=\"Sample app\" --version 0.1 --bootstrap=sdl2 --requirements=python2,websocket-client,flask\r\n[ERROR]: Didn't find any valid dependency graphs.\r\n[ERROR]: This means that some of your requirements pull in conflicting dependencies.\r\n[ERROR]: Exiting.```\r\n\r\n\n", "before_files": [{"content": "from pythonforandroid.toolchain import Recipe\n\n# if android app crashes on start with \"ImportError: No module named websocket\"\n#\n# copy the 'websocket' directory into your app directory to force inclusion.\n#\n# see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe\n#\n# If you see errors relating to 'SSL not available' ensure you have the package backports.ssl-match-hostname\n# in the buildozer requirements, since Kivy targets python 2.7.x\n#\n# You may also need sslopt={\"cert_reqs\": ssl.CERT_NONE} as a parameter to ws.run_forever() if you get an error relating to\n# host verification\n\n\nclass WebSocketClient(Recipe):\n\n url = 'https://github.com/debauchery1st/websocket-client/raw/master/websocket_client-0.40.0.tar.gz'\n\n version = '0.40.0'\n # md5sum = 'f1cf4cc7869ef97a98e5f4be25c30986'\n\n # patches = ['websocket.patch'] # Paths relative to the recipe dir\n\n depends = ['kivy', 'python2', 'android', 'pyjnius',\n 'cryptography', 'pyasn1', 'pyopenssl']\n\n\nrecipe = WebSocketClient()\n", "path": "pythonforandroid/recipes/websocket-client/__init__.py"}], "after_files": [{"content": "from pythonforandroid.toolchain import Recipe\n\n# if android app crashes on start with \"ImportError: No module named websocket\"\n#\n# copy the 'websocket' directory into your app directory to force inclusion.\n#\n# see my example at https://github.com/debauchery1st/example_kivy_websocket-recipe\n#\n# If you see errors relating to 'SSL not available' ensure you have the package backports.ssl-match-hostname\n# in the buildozer requirements, since Kivy targets python 2.7.x\n#\n# You may also need sslopt={\"cert_reqs\": ssl.CERT_NONE} as a parameter to ws.run_forever() if you get an error relating to\n# host verification\n\n\nclass WebSocketClient(Recipe):\n\n url = 'https://github.com/websocket-client/websocket-client/archive/v{version}.tar.gz'\n\n version = '0.40.0'\n\n # patches = ['websocket.patch'] # Paths relative to the recipe 
dir\n\n depends = ['python2', 'android', 'pyjnius', 'cryptography', 'pyasn1', 'pyopenssl']\n\n\nrecipe = WebSocketClient()\n", "path": "pythonforandroid/recipes/websocket-client/__init__.py"}]} | 773 | 275 |
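
As a reduced illustration of the recipe layout the patch above moves to, the sketch below keeps only the attributes visible in this record (`url` with a `{version}` placeholder, `version`, `depends`). The class name and the trimmed dependency list are illustrative assumptions, and the import path mirrors the file shown above, so this is a sketch rather than a drop-in recipe.

```python
from pythonforandroid.toolchain import Recipe


class ExampleRecipe(Recipe):
    # The '{version}' placeholder is expanded by p4a when the source
    # archive is downloaded, so bumping 'version' is enough to fetch
    # a new release.
    version = '0.40.0'
    url = 'https://github.com/websocket-client/websocket-client/archive/v{version}.tar.gz'
    # Keeping the dependency list minimal is what resolves the
    # "conflicting dependencies" error from the issue (no hard 'kivy' pin).
    depends = ['python2', 'android']


recipe = ExampleRecipe()
```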
gh_patches_debug_35836 | rasdani/github-patches | git_diff | pyca__cryptography-1532 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for loading DSA OpenSSH public keys
Should be straightforward to add support to the existing code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/primitives/serialization.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import base64
8 import struct
9 import warnings
10
11 from cryptography import utils
12 from cryptography.exceptions import UnsupportedAlgorithm
13 from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers
14
15
16 def load_pem_traditional_openssl_private_key(data, password, backend):
17 warnings.warn(
18 "load_pem_traditional_openssl_private_key is deprecated and will be "
19 "removed in a future version, use load_pem_private_key instead.",
20 utils.DeprecatedIn06,
21 stacklevel=2
22 )
23
24 return backend.load_traditional_openssl_pem_private_key(
25 data, password
26 )
27
28
29 def load_pem_pkcs8_private_key(data, password, backend):
30 warnings.warn(
31 "load_pem_pkcs8_private_key is deprecated and will be removed in a "
32 "future version, use load_pem_private_key instead.",
33 utils.DeprecatedIn06,
34 stacklevel=2
35 )
36
37 return backend.load_pkcs8_pem_private_key(data, password)
38
39
40 def load_pem_private_key(data, password, backend):
41 return backend.load_pem_private_key(data, password)
42
43
44 def load_pem_public_key(data, backend):
45 return backend.load_pem_public_key(data)
46
47
48 def load_ssh_public_key(data, backend):
49 key_parts = data.split(b' ')
50
51 if len(key_parts) != 2 and len(key_parts) != 3:
52 raise ValueError(
53 'Key is not in the proper format or contains extra data.')
54
55 key_type = key_parts[0]
56 key_body = key_parts[1]
57
58 if not key_type.startswith(b'ssh-'):
59 raise ValueError('SSH-formatted keys must begin with \'ssh-\'.')
60
61 if not key_type.startswith(b'ssh-rsa'):
62 raise UnsupportedAlgorithm('Only RSA keys are currently supported.')
63
64 return _load_ssh_rsa_public_key(key_body, backend)
65
66
67 def _load_ssh_rsa_public_key(key_body, backend):
68 data = base64.b64decode(key_body)
69
70 key_type, rest = _read_next_string(data)
71 e, rest = _read_next_mpint(rest)
72 n, rest = _read_next_mpint(rest)
73
74 if key_type != b'ssh-rsa':
75 raise ValueError(
76 'Key header and key body contain different key type values.')
77
78 if rest:
79 raise ValueError('Key body contains extra bytes.')
80
81 return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))
82
83
84 def _read_next_string(data):
85 """Retrieves the next RFC 4251 string value from the data."""
86 str_len, = struct.unpack('>I', data[:4])
87 return data[4:4 + str_len], data[4 + str_len:]
88
89
90 def _read_next_mpint(data):
91 """
92 Reads the next mpint from the data.
93
94 Currently, all mpints are interpreted as unsigned.
95 """
96 mpint_data, rest = _read_next_string(data)
97
98 return _int_from_bytes(mpint_data, byteorder='big', signed=False), rest
99
100
101 if hasattr(int, "from_bytes"):
102 _int_from_bytes = int.from_bytes
103 else:
104 def _int_from_bytes(data, byteorder, signed=False):
105 assert byteorder == 'big'
106 assert not signed
107
108 if len(data) % 4 != 0:
109 data = (b'\x00' * (4 - (len(data) % 4))) + data
110
111 result = 0
112
113 while len(data) > 0:
114 digit, = struct.unpack('>I', data[:4])
115 result = (result << 32) + digit
116 data = data[4:]
117
118 return result
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/primitives/serialization.py b/src/cryptography/hazmat/primitives/serialization.py
--- a/src/cryptography/hazmat/primitives/serialization.py
+++ b/src/cryptography/hazmat/primitives/serialization.py
@@ -10,6 +10,9 @@
from cryptography import utils
from cryptography.exceptions import UnsupportedAlgorithm
+from cryptography.hazmat.primitives.asymmetric.dsa import (
+ DSAParameterNumbers, DSAPublicNumbers
+)
from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers
@@ -55,19 +58,23 @@
key_type = key_parts[0]
key_body = key_parts[1]
- if not key_type.startswith(b'ssh-'):
- raise ValueError('SSH-formatted keys must begin with \'ssh-\'.')
+ try:
+ decoded_data = base64.b64decode(key_body)
+ except TypeError:
+ raise ValueError('Key is not in the proper format.')
- if not key_type.startswith(b'ssh-rsa'):
- raise UnsupportedAlgorithm('Only RSA keys are currently supported.')
+ if key_type == b'ssh-rsa':
+ return _load_ssh_rsa_public_key(decoded_data, backend)
+ elif key_type == b'ssh-dss':
+ return _load_ssh_dss_public_key(decoded_data, backend)
+ else:
+ raise UnsupportedAlgorithm(
+ 'Only RSA and DSA keys are currently supported.'
+ )
- return _load_ssh_rsa_public_key(key_body, backend)
-
-def _load_ssh_rsa_public_key(key_body, backend):
- data = base64.b64decode(key_body)
-
- key_type, rest = _read_next_string(data)
+def _load_ssh_rsa_public_key(decoded_data, backend):
+ key_type, rest = _read_next_string(decoded_data)
e, rest = _read_next_mpint(rest)
n, rest = _read_next_mpint(rest)
@@ -81,6 +88,26 @@
return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))
+def _load_ssh_dss_public_key(decoded_data, backend):
+ key_type, rest = _read_next_string(decoded_data)
+ p, rest = _read_next_mpint(rest)
+ q, rest = _read_next_mpint(rest)
+ g, rest = _read_next_mpint(rest)
+ y, rest = _read_next_mpint(rest)
+
+ if key_type != b'ssh-dss':
+ raise ValueError(
+ 'Key header and key body contain different key type values.')
+
+ if rest:
+ raise ValueError('Key body contains extra bytes.')
+
+ parameter_numbers = DSAParameterNumbers(p, q, g)
+ public_numbers = DSAPublicNumbers(y, parameter_numbers)
+
+ return backend.load_dsa_public_numbers(public_numbers)
+
+
def _read_next_string(data):
"""Retrieves the next RFC 4251 string value from the data."""
str_len, = struct.unpack('>I', data[:4])
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/serialization.py b/src/cryptography/hazmat/primitives/serialization.py\n--- a/src/cryptography/hazmat/primitives/serialization.py\n+++ b/src/cryptography/hazmat/primitives/serialization.py\n@@ -10,6 +10,9 @@\n \n from cryptography import utils\n from cryptography.exceptions import UnsupportedAlgorithm\n+from cryptography.hazmat.primitives.asymmetric.dsa import (\n+ DSAParameterNumbers, DSAPublicNumbers\n+)\n from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers\n \n \n@@ -55,19 +58,23 @@\n key_type = key_parts[0]\n key_body = key_parts[1]\n \n- if not key_type.startswith(b'ssh-'):\n- raise ValueError('SSH-formatted keys must begin with \\'ssh-\\'.')\n+ try:\n+ decoded_data = base64.b64decode(key_body)\n+ except TypeError:\n+ raise ValueError('Key is not in the proper format.')\n \n- if not key_type.startswith(b'ssh-rsa'):\n- raise UnsupportedAlgorithm('Only RSA keys are currently supported.')\n+ if key_type == b'ssh-rsa':\n+ return _load_ssh_rsa_public_key(decoded_data, backend)\n+ elif key_type == b'ssh-dss':\n+ return _load_ssh_dss_public_key(decoded_data, backend)\n+ else:\n+ raise UnsupportedAlgorithm(\n+ 'Only RSA and DSA keys are currently supported.'\n+ )\n \n- return _load_ssh_rsa_public_key(key_body, backend)\n \n-\n-def _load_ssh_rsa_public_key(key_body, backend):\n- data = base64.b64decode(key_body)\n-\n- key_type, rest = _read_next_string(data)\n+def _load_ssh_rsa_public_key(decoded_data, backend):\n+ key_type, rest = _read_next_string(decoded_data)\n e, rest = _read_next_mpint(rest)\n n, rest = _read_next_mpint(rest)\n \n@@ -81,6 +88,26 @@\n return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))\n \n \n+def _load_ssh_dss_public_key(decoded_data, backend):\n+ key_type, rest = _read_next_string(decoded_data)\n+ p, rest = _read_next_mpint(rest)\n+ q, rest = _read_next_mpint(rest)\n+ g, rest = _read_next_mpint(rest)\n+ y, rest = _read_next_mpint(rest)\n+\n+ if key_type != b'ssh-dss':\n+ raise ValueError(\n+ 'Key header and key body contain different key type values.')\n+\n+ if rest:\n+ raise ValueError('Key body contains extra bytes.')\n+\n+ parameter_numbers = DSAParameterNumbers(p, q, g)\n+ public_numbers = DSAPublicNumbers(y, parameter_numbers)\n+\n+ return backend.load_dsa_public_numbers(public_numbers)\n+\n+\n def _read_next_string(data):\n \"\"\"Retrieves the next RFC 4251 string value from the data.\"\"\"\n str_len, = struct.unpack('>I', data[:4])\n", "issue": "Add support for loading DSA OpenSSH public keys\nShould be straightforward to add support to the existing code.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport base64\nimport struct\nimport warnings\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm\nfrom cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers\n\n\ndef load_pem_traditional_openssl_private_key(data, password, backend):\n warnings.warn(\n \"load_pem_traditional_openssl_private_key is deprecated and will be \"\n \"removed in a future version, use load_pem_private_key instead.\",\n utils.DeprecatedIn06,\n stacklevel=2\n )\n\n return backend.load_traditional_openssl_pem_private_key(\n data, password\n )\n\n\ndef load_pem_pkcs8_private_key(data, password, backend):\n warnings.warn(\n \"load_pem_pkcs8_private_key is deprecated and will be removed in a \"\n \"future version, use load_pem_private_key instead.\",\n utils.DeprecatedIn06,\n stacklevel=2\n )\n\n return backend.load_pkcs8_pem_private_key(data, password)\n\n\ndef load_pem_private_key(data, password, backend):\n return backend.load_pem_private_key(data, password)\n\n\ndef load_pem_public_key(data, backend):\n return backend.load_pem_public_key(data)\n\n\ndef load_ssh_public_key(data, backend):\n key_parts = data.split(b' ')\n\n if len(key_parts) != 2 and len(key_parts) != 3:\n raise ValueError(\n 'Key is not in the proper format or contains extra data.')\n\n key_type = key_parts[0]\n key_body = key_parts[1]\n\n if not key_type.startswith(b'ssh-'):\n raise ValueError('SSH-formatted keys must begin with \\'ssh-\\'.')\n\n if not key_type.startswith(b'ssh-rsa'):\n raise UnsupportedAlgorithm('Only RSA keys are currently supported.')\n\n return _load_ssh_rsa_public_key(key_body, backend)\n\n\ndef _load_ssh_rsa_public_key(key_body, backend):\n data = base64.b64decode(key_body)\n\n key_type, rest = _read_next_string(data)\n e, rest = _read_next_mpint(rest)\n n, rest = _read_next_mpint(rest)\n\n if key_type != b'ssh-rsa':\n raise ValueError(\n 'Key header and key body contain different key type values.')\n\n if rest:\n raise ValueError('Key body contains extra bytes.')\n\n return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))\n\n\ndef _read_next_string(data):\n \"\"\"Retrieves the next RFC 4251 string value from the data.\"\"\"\n str_len, = struct.unpack('>I', data[:4])\n return data[4:4 + str_len], data[4 + str_len:]\n\n\ndef _read_next_mpint(data):\n \"\"\"\n Reads the next mpint from the data.\n\n Currently, all mpints are interpreted as unsigned.\n \"\"\"\n mpint_data, rest = _read_next_string(data)\n\n return _int_from_bytes(mpint_data, byteorder='big', signed=False), rest\n\n\nif hasattr(int, \"from_bytes\"):\n _int_from_bytes = int.from_bytes\nelse:\n def _int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n data = data[4:]\n\n return result\n", "path": "src/cryptography/hazmat/primitives/serialization.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport base64\nimport struct\nimport warnings\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm\nfrom cryptography.hazmat.primitives.asymmetric.dsa import (\n DSAParameterNumbers, DSAPublicNumbers\n)\nfrom cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers\n\n\ndef load_pem_traditional_openssl_private_key(data, password, backend):\n warnings.warn(\n \"load_pem_traditional_openssl_private_key is deprecated and will be \"\n \"removed in a future version, use load_pem_private_key instead.\",\n utils.DeprecatedIn06,\n stacklevel=2\n )\n\n return backend.load_traditional_openssl_pem_private_key(\n data, password\n )\n\n\ndef load_pem_pkcs8_private_key(data, password, backend):\n warnings.warn(\n \"load_pem_pkcs8_private_key is deprecated and will be removed in a \"\n \"future version, use load_pem_private_key instead.\",\n utils.DeprecatedIn06,\n stacklevel=2\n )\n\n return backend.load_pkcs8_pem_private_key(data, password)\n\n\ndef load_pem_private_key(data, password, backend):\n return backend.load_pem_private_key(data, password)\n\n\ndef load_pem_public_key(data, backend):\n return backend.load_pem_public_key(data)\n\n\ndef load_ssh_public_key(data, backend):\n key_parts = data.split(b' ')\n\n if len(key_parts) != 2 and len(key_parts) != 3:\n raise ValueError(\n 'Key is not in the proper format or contains extra data.')\n\n key_type = key_parts[0]\n key_body = key_parts[1]\n\n try:\n decoded_data = base64.b64decode(key_body)\n except TypeError:\n raise ValueError('Key is not in the proper format.')\n\n if key_type == b'ssh-rsa':\n return _load_ssh_rsa_public_key(decoded_data, backend)\n elif key_type == b'ssh-dss':\n return _load_ssh_dss_public_key(decoded_data, backend)\n else:\n raise UnsupportedAlgorithm(\n 'Only RSA and DSA keys are currently supported.'\n )\n\n\ndef _load_ssh_rsa_public_key(decoded_data, backend):\n key_type, rest = _read_next_string(decoded_data)\n e, rest = _read_next_mpint(rest)\n n, rest = _read_next_mpint(rest)\n\n if key_type != b'ssh-rsa':\n raise ValueError(\n 'Key header and key body contain different key type values.')\n\n if rest:\n raise ValueError('Key body contains extra bytes.')\n\n return backend.load_rsa_public_numbers(RSAPublicNumbers(e, n))\n\n\ndef _load_ssh_dss_public_key(decoded_data, backend):\n key_type, rest = _read_next_string(decoded_data)\n p, rest = _read_next_mpint(rest)\n q, rest = _read_next_mpint(rest)\n g, rest = _read_next_mpint(rest)\n y, rest = _read_next_mpint(rest)\n\n if key_type != b'ssh-dss':\n raise ValueError(\n 'Key header and key body contain different key type values.')\n\n if rest:\n raise ValueError('Key body contains extra bytes.')\n\n parameter_numbers = DSAParameterNumbers(p, q, g)\n public_numbers = DSAPublicNumbers(y, parameter_numbers)\n\n return backend.load_dsa_public_numbers(public_numbers)\n\n\ndef _read_next_string(data):\n \"\"\"Retrieves the next RFC 4251 string value from the data.\"\"\"\n str_len, = struct.unpack('>I', data[:4])\n return data[4:4 + str_len], data[4 + str_len:]\n\n\ndef _read_next_mpint(data):\n \"\"\"\n Reads the next mpint from the data.\n\n Currently, all mpints are interpreted as unsigned.\n \"\"\"\n mpint_data, rest = _read_next_string(data)\n\n return _int_from_bytes(mpint_data, byteorder='big', signed=False), rest\n\n\nif hasattr(int, \"from_bytes\"):\n _int_from_bytes = 
int.from_bytes\nelse:\n def _int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n if len(data) % 4 != 0:\n data = (b'\\x00' * (4 - (len(data) % 4))) + data\n\n result = 0\n\n while len(data) > 0:\n digit, = struct.unpack('>I', data[:4])\n result = (result << 32) + digit\n data = data[4:]\n\n return result\n", "path": "src/cryptography/hazmat/primitives/serialization.py"}]} | 1,406 | 685 |
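
The patch above reuses the RFC 4251 helpers already present in the module. The stand-alone sketch below round-trips the same length-prefixed string/mpint framing on synthetic numbers (not a real DSA key); like the module's unsigned reader, it deliberately ignores mpint sign-byte handling.

```python
import base64
import struct


def read_string(data: bytes):
    # RFC 4251 string: 4-byte big-endian length prefix, then the payload.
    (length,) = struct.unpack('>I', data[:4])
    return data[4:4 + length], data[4 + length:]


def read_mpint(data: bytes):
    raw, rest = read_string(data)
    return int.from_bytes(raw, 'big'), rest


def pack_string(value: bytes) -> bytes:
    return struct.pack('>I', len(value)) + value


def pack_mpint(value: int) -> bytes:
    raw = value.to_bytes((value.bit_length() + 7) // 8 or 1, 'big')
    return pack_string(raw)


# Build a tiny synthetic 'ssh-dss' body; the numbers are made up.
p, q, g, y = 23, 11, 4, 8
body = pack_string(b'ssh-dss') + b''.join(pack_mpint(v) for v in (p, q, g, y))
encoded = b'ssh-dss ' + base64.b64encode(body)

key_type, blob = encoded.split(b' ', 1)
decoded = base64.b64decode(blob)
inner_type, rest = read_string(decoded)
parsed = []
for _ in range(4):
    value, rest = read_mpint(rest)
    parsed.append(value)
assert inner_type == key_type and parsed == [p, q, g, y] and rest == b''
print(parsed)
```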
gh_patches_debug_23749 | rasdani/github-patches | git_diff | SeldonIO__MLServer-301 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tempo example fails when parallel inference is enabled
When parallel inference is enabled, the [outlier example using the Tempo runtime](https://tempo.readthedocs.io/en/latest/examples/outlier/README.html) seems to fail. In particular, it seems that either the `cifar10-service` or the `outlier` container blocks the request path and never returns a response.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlserver/parallel.py`
Content:
```
1 import asyncio
2
3 from functools import wraps
4 from concurrent.futures import ProcessPoolExecutor
5 from typing import Any, Coroutine, Callable, Optional
6
7 from .errors import MLServerError
8 from .settings import ModelSettings
9 from .model import MLModel
10 from .types import InferenceRequest, InferenceResponse
11
12 _InferencePoolAttr = "__inference_pool__"
13
14 # NOTE: Workaround for mypy
15 _mp_model: MLModel
16
17
18 class InvalidParallelMethod(MLServerError):
19 def __init__(self, method_name: str, reason: Optional[str] = None):
20 msg = f"Method {method_name} can't be parallelised"
21 if reason:
22 msg += f": {reason}"
23
24 super().__init__(msg)
25
26
27 def _mp_load(model_settings: ModelSettings):
28 """
29 This method is meant to run internally in the multiprocessing workers.
30 The loading needs to run synchronously, since the initializer argument
31 doesn't support coroutines.
32 """
33 # NOTE: The global `_mp_model` variable is shared with the `_mp_predict`
34 # method.
35 # This global variable should only be used within the inference
36 # multiprocessing workers.
37 global _mp_model
38
39 model_class = model_settings.implementation
40 _mp_model = model_class(model_settings) # type: ignore
41 return asyncio.run(_mp_model.load())
42
43
44 def _mp_predict(payload: InferenceRequest) -> InferenceResponse:
45 """
46 This method is meant to run internally in the multiprocessing workers.
47 The prediction needs to run synchronously, since multiprocessing
48 doesn't know how to serialise coroutines.
49 """
50 # NOTE: `_mp_model` is a global variable initialised in the `_mp_load`
51 # method.
52 # This global variable is only to be used within the inference worker
53 # context.
54 global _mp_model
55
56 return asyncio.run(_mp_model.predict(payload))
57
58
59 class InferencePool:
60 """
61 The InferencePool class represents a pool of workers where we can run
62 inference on.
63
64 Under the hood, it's responsible for managing a pool of multiprocessing
65 workers, where the model is loaded.
66 This approach lets MLServer work around the GIL to make sure that inference
67 can occur in parallel across multiple models or instances of a model.
68 """
69
70 def __init__(self, model: MLModel):
71 parallel_workers = model.settings.parallel_workers
72 self._executor = ProcessPoolExecutor(
73 max_workers=parallel_workers,
74 initializer=_mp_load,
75 initargs=(model.settings,),
76 )
77
78 async def predict(self, payload: InferenceRequest) -> InferenceResponse:
79 # What if we serialise payload?
80 loop = asyncio.get_running_loop()
81 return await loop.run_in_executor(self._executor, _mp_predict, payload)
82
83 def __del__(self):
84 self._executor.shutdown(wait=True)
85
86
87 def parallel(f: Callable[[InferenceRequest], Coroutine[Any, Any, InferenceResponse]]):
88 """
89 Decorator to attach to model's methods so that they run in parallel.
90 By default, this will get attached to every model's "inference" method.
91
92 NOTE: At the moment, this method only works with `predict()`.
93 """
94 # TODO: Extend to multiple methods
95 @wraps(f)
96 async def _inner(payload: InferenceRequest) -> InferenceResponse:
97 if not hasattr(f, "__self__"):
98 raise InvalidParallelMethod(f.__name__, reason="method is not bound")
99
100 model = getattr(f, "__self__")
101 if not hasattr(model, _InferencePoolAttr):
102 raise InvalidParallelMethod(
103 f.__name__, reason="inference pool has not been loaded"
104 )
105
106 pool = getattr(model, _InferencePoolAttr)
107 return await pool.predict(payload)
108
109 return _inner
110
111
112 async def load_inference_pool(model: MLModel):
113 if model.settings.parallel_workers == 0:
114 # When parallel workers is set to 0, disable parallel inference
115 return model
116
117 pool = InferencePool(model)
118 setattr(model, _InferencePoolAttr, pool)
119
120 # Decorate predict method
121 setattr(model, "predict", parallel(model.predict))
122
123 return model
124
125
126 async def unload_inference_pool(model: MLModel):
127 pool = getattr(model, _InferencePoolAttr)
128 if not pool:
129 return
130
131 pool.__del__()
132 delattr(model, _InferencePoolAttr)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mlserver/parallel.py b/mlserver/parallel.py
--- a/mlserver/parallel.py
+++ b/mlserver/parallel.py
@@ -1,4 +1,5 @@
import asyncio
+import multiprocessing as mp
from functools import wraps
from concurrent.futures import ProcessPoolExecutor
@@ -52,7 +53,6 @@
# This global variable is only to be used within the inference worker
# context.
global _mp_model
-
return asyncio.run(_mp_model.predict(payload))
@@ -69,8 +69,14 @@
def __init__(self, model: MLModel):
parallel_workers = model.settings.parallel_workers
+
+ # Use 'spawn' instead of 'fork' to ensure that models are loaded in a
+ # clean environment (e.g. to avoid issues like
+ # https://github.com/tensorflow/tensorflow/issues/8220)
+ ctx = mp.get_context("spawn")
self._executor = ProcessPoolExecutor(
max_workers=parallel_workers,
+ mp_context=ctx,
initializer=_mp_load,
initargs=(model.settings,),
)
| {"golden_diff": "diff --git a/mlserver/parallel.py b/mlserver/parallel.py\n--- a/mlserver/parallel.py\n+++ b/mlserver/parallel.py\n@@ -1,4 +1,5 @@\n import asyncio\n+import multiprocessing as mp\n \n from functools import wraps\n from concurrent.futures import ProcessPoolExecutor\n@@ -52,7 +53,6 @@\n # This global variable is only to be used within the inference worker\n # context.\n global _mp_model\n-\n return asyncio.run(_mp_model.predict(payload))\n \n \n@@ -69,8 +69,14 @@\n \n def __init__(self, model: MLModel):\n parallel_workers = model.settings.parallel_workers\n+\n+ # Use 'spawn' instead of 'fork' to ensure that models are loaded in a\n+ # clean environment (e.g. to avoid issues like\n+ # https://github.com/tensorflow/tensorflow/issues/8220)\n+ ctx = mp.get_context(\"spawn\")\n self._executor = ProcessPoolExecutor(\n max_workers=parallel_workers,\n+ mp_context=ctx,\n initializer=_mp_load,\n initargs=(model.settings,),\n )\n", "issue": "Tempo example fails when parallel inference is enabled\nWhen parallel inference is enabled, the [outlier example using the Tempo runtime](https://tempo.readthedocs.io/en/latest/examples/outlier/README.html)seems to fail. In particular, it seems that either the `cifar10-service` or the `outlier` containers block the request path and never return a response.\n", "before_files": [{"content": "import asyncio\n\nfrom functools import wraps\nfrom concurrent.futures import ProcessPoolExecutor\nfrom typing import Any, Coroutine, Callable, Optional\n\nfrom .errors import MLServerError\nfrom .settings import ModelSettings\nfrom .model import MLModel\nfrom .types import InferenceRequest, InferenceResponse\n\n_InferencePoolAttr = \"__inference_pool__\"\n\n# NOTE: Workaround for mypy\n_mp_model: MLModel\n\n\nclass InvalidParallelMethod(MLServerError):\n def __init__(self, method_name: str, reason: Optional[str] = None):\n msg = f\"Method {method_name} can't be parallelised\"\n if reason:\n msg += f\": {reason}\"\n\n super().__init__(msg)\n\n\ndef _mp_load(model_settings: ModelSettings):\n \"\"\"\n This method is meant to run internally in the multiprocessing workers.\n The loading needs to run synchronously, since the initializer argument\n doesn't support coroutines.\n \"\"\"\n # NOTE: The global `_mp_model` variable is shared with the `_mp_predict`\n # method.\n # This global variable should only be used within the inference\n # multiprocessing workers.\n global _mp_model\n\n model_class = model_settings.implementation\n _mp_model = model_class(model_settings) # type: ignore\n return asyncio.run(_mp_model.load())\n\n\ndef _mp_predict(payload: InferenceRequest) -> InferenceResponse:\n \"\"\"\n This method is meant to run internally in the multiprocessing workers.\n The prediction needs to run synchronously, since multiprocessing\n doesn't know how to serialise coroutines.\n \"\"\"\n # NOTE: `_mp_model` is a global variable initialised in the `_mp_load`\n # method.\n # This global variable is only to be used within the inference worker\n # context.\n global _mp_model\n\n return asyncio.run(_mp_model.predict(payload))\n\n\nclass InferencePool:\n \"\"\"\n The InferencePool class represents a pool of workers where we can run\n inference on.\n\n Under the hood, it's responsible for managing a pool of multiprocessing\n workers, where the model is loaded.\n This approach lets MLServer work around the GIL to make sure that inference\n can occur in parallel across multiple models or instances of a model.\n \"\"\"\n\n def __init__(self, model: MLModel):\n 
parallel_workers = model.settings.parallel_workers\n self._executor = ProcessPoolExecutor(\n max_workers=parallel_workers,\n initializer=_mp_load,\n initargs=(model.settings,),\n )\n\n async def predict(self, payload: InferenceRequest) -> InferenceResponse:\n # What if we serialise payload?\n loop = asyncio.get_running_loop()\n return await loop.run_in_executor(self._executor, _mp_predict, payload)\n\n def __del__(self):\n self._executor.shutdown(wait=True)\n\n\ndef parallel(f: Callable[[InferenceRequest], Coroutine[Any, Any, InferenceResponse]]):\n \"\"\"\n Decorator to attach to model's methods so that they run in parallel.\n By default, this will get attached to every model's \"inference\" method.\n\n NOTE: At the moment, this method only works with `predict()`.\n \"\"\"\n # TODO: Extend to multiple methods\n @wraps(f)\n async def _inner(payload: InferenceRequest) -> InferenceResponse:\n if not hasattr(f, \"__self__\"):\n raise InvalidParallelMethod(f.__name__, reason=\"method is not bound\")\n\n model = getattr(f, \"__self__\")\n if not hasattr(model, _InferencePoolAttr):\n raise InvalidParallelMethod(\n f.__name__, reason=\"inference pool has not been loaded\"\n )\n\n pool = getattr(model, _InferencePoolAttr)\n return await pool.predict(payload)\n\n return _inner\n\n\nasync def load_inference_pool(model: MLModel):\n if model.settings.parallel_workers == 0:\n # When parallel workers is set to 0, disable parallel inference\n return model\n\n pool = InferencePool(model)\n setattr(model, _InferencePoolAttr, pool)\n\n # Decorate predict method\n setattr(model, \"predict\", parallel(model.predict))\n\n return model\n\n\nasync def unload_inference_pool(model: MLModel):\n pool = getattr(model, _InferencePoolAttr)\n if not pool:\n return\n\n pool.__del__()\n delattr(model, _InferencePoolAttr)\n", "path": "mlserver/parallel.py"}], "after_files": [{"content": "import asyncio\nimport multiprocessing as mp\n\nfrom functools import wraps\nfrom concurrent.futures import ProcessPoolExecutor\nfrom typing import Any, Coroutine, Callable, Optional\n\nfrom .errors import MLServerError\nfrom .settings import ModelSettings\nfrom .model import MLModel\nfrom .types import InferenceRequest, InferenceResponse\n\n_InferencePoolAttr = \"__inference_pool__\"\n\n# NOTE: Workaround for mypy\n_mp_model: MLModel\n\n\nclass InvalidParallelMethod(MLServerError):\n def __init__(self, method_name: str, reason: Optional[str] = None):\n msg = f\"Method {method_name} can't be parallelised\"\n if reason:\n msg += f\": {reason}\"\n\n super().__init__(msg)\n\n\ndef _mp_load(model_settings: ModelSettings):\n \"\"\"\n This method is meant to run internally in the multiprocessing workers.\n The loading needs to run synchronously, since the initializer argument\n doesn't support coroutines.\n \"\"\"\n # NOTE: The global `_mp_model` variable is shared with the `_mp_predict`\n # method.\n # This global variable should only be used within the inference\n # multiprocessing workers.\n global _mp_model\n\n model_class = model_settings.implementation\n _mp_model = model_class(model_settings) # type: ignore\n return asyncio.run(_mp_model.load())\n\n\ndef _mp_predict(payload: InferenceRequest) -> InferenceResponse:\n \"\"\"\n This method is meant to run internally in the multiprocessing workers.\n The prediction needs to run synchronously, since multiprocessing\n doesn't know how to serialise coroutines.\n \"\"\"\n # NOTE: `_mp_model` is a global variable initialised in the `_mp_load`\n # method.\n # This global variable is only to be used 
within the inference worker\n # context.\n global _mp_model\n return asyncio.run(_mp_model.predict(payload))\n\n\nclass InferencePool:\n \"\"\"\n The InferencePool class represents a pool of workers where we can run\n inference on.\n\n Under the hood, it's responsible for managing a pool of multiprocessing\n workers, where the model is loaded.\n This approach lets MLServer work around the GIL to make sure that inference\n can occur in parallel across multiple models or instances of a model.\n \"\"\"\n\n def __init__(self, model: MLModel):\n parallel_workers = model.settings.parallel_workers\n\n # Use 'spawn' instead of 'fork' to ensure that models are loaded in a\n # clean environment (e.g. to avoid issues like\n # https://github.com/tensorflow/tensorflow/issues/8220)\n ctx = mp.get_context(\"spawn\")\n self._executor = ProcessPoolExecutor(\n max_workers=parallel_workers,\n mp_context=ctx,\n initializer=_mp_load,\n initargs=(model.settings,),\n )\n\n async def predict(self, payload: InferenceRequest) -> InferenceResponse:\n # What if we serialise payload?\n loop = asyncio.get_running_loop()\n return await loop.run_in_executor(self._executor, _mp_predict, payload)\n\n def __del__(self):\n self._executor.shutdown(wait=True)\n\n\ndef parallel(f: Callable[[InferenceRequest], Coroutine[Any, Any, InferenceResponse]]):\n \"\"\"\n Decorator to attach to model's methods so that they run in parallel.\n By default, this will get attached to every model's \"inference\" method.\n\n NOTE: At the moment, this method only works with `predict()`.\n \"\"\"\n # TODO: Extend to multiple methods\n @wraps(f)\n async def _inner(payload: InferenceRequest) -> InferenceResponse:\n if not hasattr(f, \"__self__\"):\n raise InvalidParallelMethod(f.__name__, reason=\"method is not bound\")\n\n model = getattr(f, \"__self__\")\n if not hasattr(model, _InferencePoolAttr):\n raise InvalidParallelMethod(\n f.__name__, reason=\"inference pool has not been loaded\"\n )\n\n pool = getattr(model, _InferencePoolAttr)\n return await pool.predict(payload)\n\n return _inner\n\n\nasync def load_inference_pool(model: MLModel):\n if model.settings.parallel_workers == 0:\n # When parallel workers is set to 0, disable parallel inference\n return model\n\n pool = InferencePool(model)\n setattr(model, _InferencePoolAttr, pool)\n\n # Decorate predict method\n setattr(model, \"predict\", parallel(model.predict))\n\n return model\n\n\nasync def unload_inference_pool(model: MLModel):\n pool = getattr(model, _InferencePoolAttr)\n if not pool:\n return\n\n pool.__del__()\n delattr(model, _InferencePoolAttr)\n", "path": "mlserver/parallel.py"}]} | 1,588 | 253 |
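
The core of the fix above is constructing the `ProcessPoolExecutor` with an explicit `spawn` context instead of the default `fork`. The minimal sketch below shows that wiring with placeholder worker functions; nothing here is MLServer code, it only demonstrates the `mp_context`/`initializer` pattern on the standard library.

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

_state = {}


def _init(name: str) -> None:
    # Runs once per worker process; with the 'spawn' start method each
    # worker starts from a clean interpreter instead of a forked copy,
    # which avoids inheriting library state from the parent.
    _state["name"] = name


def _work(x: int) -> str:
    return f"{_state['name']}: {x * x}"


if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx,
                             initializer=_init, initargs=("worker",)) as pool:
        print(list(pool.map(_work, range(4))))
```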
gh_patches_debug_32691 | rasdani/github-patches | git_diff | mindsdb__lightwood-619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`Img2VecEncoder` calls `self.prepare` twice
## Your Environment
* Python version: Python 3.8.10
* Operating system: Ubuntu 20.04.3 LTS
* Lightwood version: 1.3.0
## Describe your issue
`Img2VecEncoder` crashes when `.encode(images)` is called.
Internally there seems to be some confusion about what the `.prepare` method actually does.
For some reason it is called to convert images to tensors, while in reality it should be used to initialize the model and (maybe) perform some initial training.
## Fixing the issue
Implement a method to convert images to torch tensors.
--- END ISSUE ---
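
One possible shape for the missing image-to-tensor step is sketched below with the torchvision transforms that the encoder's constructor already sets up; the helper name, the RGB conversion and the ImageNet normalization constants are assumptions for illustration rather than the project's final implementation.

```python
from PIL import Image
import torch
import torchvision.transforms as transforms

# Resize -> ToTensor -> Normalize mirrors the usual preprocessing for
# ImageNet-style backbones; adjust the constants if the model differs.
to_tensor = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def images_to_tensors(paths):
    # Hypothetical helper: load each image from disk and preprocess it.
    return [to_tensor(Image.open(p).convert('RGB')) for p in paths]


if __name__ == "__main__":
    Image.new('RGB', (64, 64), color='red').save('example.png')
    batch = torch.stack(images_to_tensors(['example.png']))
    print(batch.shape)  # torch.Size([1, 3, 224, 224])
```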
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightwood/encoder/image/img_2_vec.py`
Content:
```
1 import logging
2 import torch
3 import torchvision.transforms as transforms
4 from lightwood.encoder.image.helpers.img_to_vec import Img2Vec
5 from lightwood.encoder.base import BaseEncoder
6
7
8 class Img2VecEncoder(BaseEncoder):
9
10 def __init__(self, is_target: bool = False):
11 super().__init__(is_target)
12 self.model = None
13 # I think we should make this an enum, something like: speed, balance, accuracy
14 self.aim = aim
15 self._prepared = False
16
17 self._scaler = transforms.Scale((224, 224))
18 self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
19 self._to_tensor = transforms.ToTensor()
20
21 pil_logger = logging.getLogger('PIL')
22 pil_logger.setLevel(logging.ERROR)
23
24 def prepare(self, priming_data):
25 if self._prepared:
26 raise Exception('You can only call "prepare" once for a given encoder.')
27
28 if self.model is None:
29 self.model = Img2Vec(model='resnext-50-small')
30 self._prepared = True
31
32 def encode(self, images):
33 """
34 Encode list of images
35
36 :images : list of images, each image is a path to a file or a url
37 :return: a torch.floatTensor
38 """
39 if not self._prepared:
40 raise Exception('You need to call "prepare" before calling "encode" or "decode".')
41
42 img_tensors = self.prepare(images)
43 vec_arr = []
44 self.model.eval()
45 with torch.no_grad():
46 for img_tensor in img_tensors:
47 vec = self.model(img_tensor.unsqueeze(0), batch=False)
48 vec_arr.append(vec)
49 return torch.stack(vec_arr)
50
51 def decode(self, encoded_values_tensor):
52 raise Exception('This encoder is not bi-directional')
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightwood/encoder/image/img_2_vec.py b/lightwood/encoder/image/img_2_vec.py
--- a/lightwood/encoder/image/img_2_vec.py
+++ b/lightwood/encoder/image/img_2_vec.py
@@ -3,6 +3,7 @@
import torchvision.transforms as transforms
from lightwood.encoder.image.helpers.img_to_vec import Img2Vec
from lightwood.encoder.base import BaseEncoder
+from PIL import Image
class Img2VecEncoder(BaseEncoder):
@@ -10,13 +11,18 @@
def __init__(self, is_target: bool = False):
super().__init__(is_target)
self.model = None
- # I think we should make this an enum, something like: speed, balance, accuracy
- self.aim = aim
+ # # I think we should make this an enum, something like: speed, balance, accuracy
+ # self.aim = aim
self._prepared = False
- self._scaler = transforms.Scale((224, 224))
+ self._scaler = transforms.Resize((224, 224))
self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
self._to_tensor = transforms.ToTensor()
+ self._img_to_tensor = transforms.Compose([
+ self._scaler,
+ self._to_tensor,
+ self._normalize
+ ])
pil_logger = logging.getLogger('PIL')
pil_logger.setLevel(logging.ERROR)
@@ -39,8 +45,11 @@
if not self._prepared:
raise Exception('You need to call "prepare" before calling "encode" or "decode".')
- img_tensors = self.prepare(images)
+ img_tensors = [self._img_to_tensor(
+ Image.open(img_path)
+ ) for img_path in images]
vec_arr = []
+
self.model.eval()
with torch.no_grad():
for img_tensor in img_tensors:
| {"golden_diff": "diff --git a/lightwood/encoder/image/img_2_vec.py b/lightwood/encoder/image/img_2_vec.py\n--- a/lightwood/encoder/image/img_2_vec.py\n+++ b/lightwood/encoder/image/img_2_vec.py\n@@ -3,6 +3,7 @@\n import torchvision.transforms as transforms\n from lightwood.encoder.image.helpers.img_to_vec import Img2Vec\n from lightwood.encoder.base import BaseEncoder\n+from PIL import Image\n \n \n class Img2VecEncoder(BaseEncoder):\n@@ -10,13 +11,18 @@\n def __init__(self, is_target: bool = False):\n super().__init__(is_target)\n self.model = None\n- # I think we should make this an enum, something like: speed, balance, accuracy\n- self.aim = aim\n+ # # I think we should make this an enum, something like: speed, balance, accuracy\n+ # self.aim = aim\n self._prepared = False\n \n- self._scaler = transforms.Scale((224, 224))\n+ self._scaler = transforms.Resize((224, 224))\n self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n self._to_tensor = transforms.ToTensor()\n+ self._img_to_tensor = transforms.Compose([\n+ self._scaler,\n+ self._to_tensor,\n+ self._normalize\n+ ])\n \n pil_logger = logging.getLogger('PIL')\n pil_logger.setLevel(logging.ERROR)\n@@ -39,8 +45,11 @@\n if not self._prepared:\n raise Exception('You need to call \"prepare\" before calling \"encode\" or \"decode\".')\n \n- img_tensors = self.prepare(images)\n+ img_tensors = [self._img_to_tensor(\n+ Image.open(img_path)\n+ ) for img_path in images]\n vec_arr = []\n+\n self.model.eval()\n with torch.no_grad():\n for img_tensor in img_tensors:\n", "issue": "`Img2VecEncoder` calls `self.prepare` twice\n## Your Environment\r\n* Python version: Python 3.8.10\r\n* Operating system: Ubuntu 20.04.3 LTS\r\n* Lightwood version: 1.3.0\r\n\r\n## Describe your issue\r\n`Img2VecEncoder` crashes when `.encode(images)` is called.\r\n\r\nInternally there seems to be some confusion on what the `.prepare` method actually does.\r\nFor some reason it is called to convert images to tensors, while in reality it should be used to initialize the model and (maybe) perform some initial training.\r\n\r\n## Fixing the issue\r\n\r\nImplement a method to convert images to torch tensors.\r\n\n", "before_files": [{"content": "import logging\nimport torch\nimport torchvision.transforms as transforms\nfrom lightwood.encoder.image.helpers.img_to_vec import Img2Vec\nfrom lightwood.encoder.base import BaseEncoder\n\n\nclass Img2VecEncoder(BaseEncoder):\n\n def __init__(self, is_target: bool = False):\n super().__init__(is_target)\n self.model = None\n # I think we should make this an enum, something like: speed, balance, accuracy\n self.aim = aim\n self._prepared = False\n\n self._scaler = transforms.Scale((224, 224))\n self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n self._to_tensor = transforms.ToTensor()\n\n pil_logger = logging.getLogger('PIL')\n pil_logger.setLevel(logging.ERROR)\n\n def prepare(self, priming_data):\n if self._prepared:\n raise Exception('You can only call \"prepare\" once for a given encoder.')\n\n if self.model is None:\n self.model = Img2Vec(model='resnext-50-small')\n self._prepared = True\n\n def encode(self, images):\n \"\"\"\n Encode list of images\n\n :images : list of images, each image is a path to a file or a url\n :return: a torch.floatTensor\n \"\"\"\n if not self._prepared:\n raise Exception('You need to call \"prepare\" before calling \"encode\" or \"decode\".')\n\n img_tensors = self.prepare(images)\n vec_arr = []\n 
self.model.eval()\n with torch.no_grad():\n for img_tensor in img_tensors:\n vec = self.model(img_tensor.unsqueeze(0), batch=False)\n vec_arr.append(vec)\n return torch.stack(vec_arr)\n\n def decode(self, encoded_values_tensor):\n raise Exception('This encoder is not bi-directional')\n", "path": "lightwood/encoder/image/img_2_vec.py"}], "after_files": [{"content": "import logging\nimport torch\nimport torchvision.transforms as transforms\nfrom lightwood.encoder.image.helpers.img_to_vec import Img2Vec\nfrom lightwood.encoder.base import BaseEncoder\nfrom PIL import Image\n\n\nclass Img2VecEncoder(BaseEncoder):\n\n def __init__(self, is_target: bool = False):\n super().__init__(is_target)\n self.model = None\n # # I think we should make this an enum, something like: speed, balance, accuracy\n # self.aim = aim\n self._prepared = False\n\n self._scaler = transforms.Resize((224, 224))\n self._normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n self._to_tensor = transforms.ToTensor()\n self._img_to_tensor = transforms.Compose([\n self._scaler,\n self._to_tensor,\n self._normalize\n ])\n\n pil_logger = logging.getLogger('PIL')\n pil_logger.setLevel(logging.ERROR)\n\n def prepare(self, priming_data):\n if self._prepared:\n raise Exception('You can only call \"prepare\" once for a given encoder.')\n\n if self.model is None:\n self.model = Img2Vec(model='resnext-50-small')\n self._prepared = True\n\n def encode(self, images):\n \"\"\"\n Encode list of images\n\n :images : list of images, each image is a path to a file or a url\n :return: a torch.floatTensor\n \"\"\"\n if not self._prepared:\n raise Exception('You need to call \"prepare\" before calling \"encode\" or \"decode\".')\n\n img_tensors = [self._img_to_tensor(\n Image.open(img_path)\n ) for img_path in images]\n vec_arr = []\n\n self.model.eval()\n with torch.no_grad():\n for img_tensor in img_tensors:\n vec = self.model(img_tensor.unsqueeze(0), batch=False)\n vec_arr.append(vec)\n return torch.stack(vec_arr)\n\n def decode(self, encoded_values_tensor):\n raise Exception('This encoder is not bi-directional')\n", "path": "lightwood/encoder/image/img_2_vec.py"}]} | 934 | 470 |
gh_patches_debug_16989 | rasdani/github-patches | git_diff | gpodder__mygpo-493 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API: Device Synchronization API - Start / Stop Sync returning HTTP status 500
During my work on PR https://github.com/gpodder/mygpo/pull/122 I was testing the Device Synchronization API - Start / Stop Sync (https://gpoddernet.readthedocs.io/en/latest/api/reference/sync.html#post--api-2-sync-devices-(username).json)
I sent the following request
```json
{
"synchronize": [
[
"my-desktop", "cellphone"
]
]
}
```
and it is returning HTTP 500
```html
<html>
<head>
<title>500 Internal server error (gpodder.net)</title>
<link rel="stylesheet" type="text/css" href="/static/css/fail.css" />
</head>
<body>
<div id="c">
<div id="fail">
<h1>500 - Internal server error.</h1>
<p>
The service is currently overloaded.
Please try again later or contact us.
</p>
</div>
</div>
<img id="icon" src="/static/failpodder.png">
</body>
</html>
```
as a reference, a previous call to https://gpoddernet.readthedocs.io/en/latest/api/reference/sync.html#get--api-2-sync-devices-(username).json was returning:
```json
{
"synchronized": [],
"not-synchronized": [
"cellphone",
"my-desktop"
]
}
```
I'm able to sync these devices on the web UI though.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mygpo/api/advanced/sync.py`
Content:
```
1 from django.http import HttpResponseBadRequest, HttpResponseNotFound
2 from django.views.decorators.csrf import csrf_exempt
3 from django.views.decorators.cache import never_cache
4
5 from mygpo.decorators import allowed_methods, cors_origin
6 from mygpo.utils import parse_request_body
7 from mygpo.api.basic_auth import require_valid_user, check_username
8 from mygpo.api.httpresponse import JsonResponse
9 from mygpo.users.models import Client, UserProxy
10 from mygpo.users.tasks import sync_user
11
12
13 @csrf_exempt
14 @require_valid_user
15 @check_username
16 @never_cache
17 @allowed_methods(["GET", "POST"])
18 @cors_origin()
19 def main(request, username):
20 """ API Endpoint for Device Synchronisation """
21
22 if request.method == "GET":
23 return JsonResponse(get_sync_status(request.user))
24
25 else:
26 try:
27 actions = parse_request_body(request)
28 except ValueError as e:
29 return HttpResponseBadRequest(str(e))
30
31 synclist = actions.get("synchronize", [])
32 stopsync = actions.get("stop-synchronize", [])
33
34 try:
35 update_sync_status(request.user, synclist, stopsync)
36 except ValueError as e:
37 return HttpResponseBadRequest(str(e))
38 except Client.DoesNotExist as e:
39 return HttpResponseNotFound(str(e))
40
41 return JsonResponse(get_sync_status(user))
42
43
44 def get_sync_status(user):
45 """ Returns the current Device Sync status """
46
47 sync_groups = []
48 unsynced = []
49
50 user = UserProxy.objects.from_user(user)
51 for group in user.get_grouped_devices():
52 uids = [device.uid for device in group.devices]
53
54 if group.is_synced:
55 sync_groups.append(uids)
56
57 else:
58 unsynced = uids
59
60 return {"synchronized": sync_groups, "not-synchronized": unsynced}
61
62
63 def update_sync_status(user, synclist, stopsync):
64 """Updates the current Device Sync status
65
66 Synchronisation between devices can be set up and stopped. Devices are
67 identified by their UIDs. Unknown UIDs cause errors, no new devices are
68 created."""
69
70 for devlist in synclist:
71
72 if len(devlist) <= 1:
73 raise ValueError("at least two devices are needed to sync")
74
75 # Setup all devices to sync with the first in the list
76 uid = devlist[0]
77 dev = user.client_set.get(uid=uid)
78
79 for other_uid in devlist[1:]:
80 other = user.get_device_by_uid(other_uid)
81 dev.sync_with(other)
82
83 for uid in stopsync:
84 dev = user.get_device_by_uid(uid)
85 try:
86 dev.stop_sync()
87 except ValueError:
88 # if all devices of a sync-group are un-synced,
89 # the last one will raise a ValueError, because it is no longer
90 # being synced -- we just ignore it
91 pass
92
93 user.save()
94
95 sync_user.delay(user)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mygpo/api/advanced/sync.py b/mygpo/api/advanced/sync.py
--- a/mygpo/api/advanced/sync.py
+++ b/mygpo/api/advanced/sync.py
@@ -38,7 +38,7 @@
except Client.DoesNotExist as e:
return HttpResponseNotFound(str(e))
- return JsonResponse(get_sync_status(user))
+ return JsonResponse(get_sync_status(request.user))
def get_sync_status(user):
@@ -77,11 +77,11 @@
dev = user.client_set.get(uid=uid)
for other_uid in devlist[1:]:
- other = user.get_device_by_uid(other_uid)
+ other = user.client_set.get(uid=other_uid)
dev.sync_with(other)
for uid in stopsync:
- dev = user.get_device_by_uid(uid)
+ dev = user.client_set.get(uid=uid)
try:
dev.stop_sync()
except ValueError:
| {"golden_diff": "diff --git a/mygpo/api/advanced/sync.py b/mygpo/api/advanced/sync.py\n--- a/mygpo/api/advanced/sync.py\n+++ b/mygpo/api/advanced/sync.py\n@@ -38,7 +38,7 @@\n except Client.DoesNotExist as e:\n return HttpResponseNotFound(str(e))\n \n- return JsonResponse(get_sync_status(user))\n+ return JsonResponse(get_sync_status(request.user))\n \n \n def get_sync_status(user):\n@@ -77,11 +77,11 @@\n dev = user.client_set.get(uid=uid)\n \n for other_uid in devlist[1:]:\n- other = user.get_device_by_uid(other_uid)\n+ other = user.client_set.get(uid=other_uid)\n dev.sync_with(other)\n \n for uid in stopsync:\n- dev = user.get_device_by_uid(uid)\n+ dev = user.client_set.get(uid=uid)\n try:\n dev.stop_sync()\n except ValueError:\n", "issue": "API: Device Synchronization API - Start / Stop Sync returning HTTP status 500\nDuring my work on PR https://github.com/gpodder/mygpo/pull/122 is was testing the Device Synchronization API - Start / Stop Sync (https://gpoddernet.readthedocs.io/en/latest/api/reference/sync.html#post--api-2-sync-devices-(username).json)\r\n\r\nI sent the following request\r\n```json\r\n{\r\n \"synchronize\": [\r\n [\r\n \"my-desktop\", \"cellphone\"\r\n ]\r\n ]\r\n}\r\n```\r\n\r\nand it is returning HTTP 500\r\n```html\r\n<html>\r\n <head>\r\n <title>500 Internal server error (gpodder.net)</title>\r\n <link rel=\"stylesheet\" type=\"text/css\" href=\"/static/css/fail.css\" />\r\n </head>\r\n <body>\r\n <div id=\"c\">\r\n <div id=\"fail\">\r\n <h1>500 - Internal server error.</h1>\r\n <p>\r\n The service is currently overloaded.\r\n Please try again later or contact us.\r\n </p>\r\n </div>\r\n </div>\r\n <img id=\"icon\" src=\"/static/failpodder.png\">\r\n </body>\r\n</html>\r\n```\r\n\r\nas a reference, a previous call to https://gpoddernet.readthedocs.io/en/latest/api/reference/sync.html#get--api-2-sync-devices-(username).json was returning:\r\n```json\r\n{\r\n \"synchronized\": [],\r\n \"not-synchronized\": [\r\n \"cellphone\",\r\n \"my-desktop\"\r\n ]\r\n}\r\n```\r\n\r\nI'm able ot sync this devices on the web ui though.\n", "before_files": [{"content": "from django.http import HttpResponseBadRequest, HttpResponseNotFound\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.cache import never_cache\n\nfrom mygpo.decorators import allowed_methods, cors_origin\nfrom mygpo.utils import parse_request_body\nfrom mygpo.api.basic_auth import require_valid_user, check_username\nfrom mygpo.api.httpresponse import JsonResponse\nfrom mygpo.users.models import Client, UserProxy\nfrom mygpo.users.tasks import sync_user\n\n\n@csrf_exempt\n@require_valid_user\n@check_username\n@never_cache\n@allowed_methods([\"GET\", \"POST\"])\n@cors_origin()\ndef main(request, username):\n \"\"\" API Endpoint for Device Synchronisation \"\"\"\n\n if request.method == \"GET\":\n return JsonResponse(get_sync_status(request.user))\n\n else:\n try:\n actions = parse_request_body(request)\n except ValueError as e:\n return HttpResponseBadRequest(str(e))\n\n synclist = actions.get(\"synchronize\", [])\n stopsync = actions.get(\"stop-synchronize\", [])\n\n try:\n update_sync_status(request.user, synclist, stopsync)\n except ValueError as e:\n return HttpResponseBadRequest(str(e))\n except Client.DoesNotExist as e:\n return HttpResponseNotFound(str(e))\n\n return JsonResponse(get_sync_status(user))\n\n\ndef get_sync_status(user):\n \"\"\" Returns the current Device Sync status \"\"\"\n\n sync_groups = []\n unsynced = []\n\n user = UserProxy.objects.from_user(user)\n for 
group in user.get_grouped_devices():\n uids = [device.uid for device in group.devices]\n\n if group.is_synced:\n sync_groups.append(uids)\n\n else:\n unsynced = uids\n\n return {\"synchronized\": sync_groups, \"not-synchronized\": unsynced}\n\n\ndef update_sync_status(user, synclist, stopsync):\n \"\"\"Updates the current Device Sync status\n\n Synchronisation between devices can be set up and stopped. Devices are\n identified by their UIDs. Unknown UIDs cause errors, no new devices are\n created.\"\"\"\n\n for devlist in synclist:\n\n if len(devlist) <= 1:\n raise ValueError(\"at least two devices are needed to sync\")\n\n # Setup all devices to sync with the first in the list\n uid = devlist[0]\n dev = user.client_set.get(uid=uid)\n\n for other_uid in devlist[1:]:\n other = user.get_device_by_uid(other_uid)\n dev.sync_with(other)\n\n for uid in stopsync:\n dev = user.get_device_by_uid(uid)\n try:\n dev.stop_sync()\n except ValueError:\n # if all devices of a sync-group are un-synced,\n # the last one will raise a ValueError, because it is no longer\n # being synced -- we just ignore it\n pass\n\n user.save()\n\n sync_user.delay(user)\n", "path": "mygpo/api/advanced/sync.py"}], "after_files": [{"content": "from django.http import HttpResponseBadRequest, HttpResponseNotFound\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.cache import never_cache\n\nfrom mygpo.decorators import allowed_methods, cors_origin\nfrom mygpo.utils import parse_request_body\nfrom mygpo.api.basic_auth import require_valid_user, check_username\nfrom mygpo.api.httpresponse import JsonResponse\nfrom mygpo.users.models import Client, UserProxy\nfrom mygpo.users.tasks import sync_user\n\n\n@csrf_exempt\n@require_valid_user\n@check_username\n@never_cache\n@allowed_methods([\"GET\", \"POST\"])\n@cors_origin()\ndef main(request, username):\n \"\"\" API Endpoint for Device Synchronisation \"\"\"\n\n if request.method == \"GET\":\n return JsonResponse(get_sync_status(request.user))\n\n else:\n try:\n actions = parse_request_body(request)\n except ValueError as e:\n return HttpResponseBadRequest(str(e))\n\n synclist = actions.get(\"synchronize\", [])\n stopsync = actions.get(\"stop-synchronize\", [])\n\n try:\n update_sync_status(request.user, synclist, stopsync)\n except ValueError as e:\n return HttpResponseBadRequest(str(e))\n except Client.DoesNotExist as e:\n return HttpResponseNotFound(str(e))\n\n return JsonResponse(get_sync_status(request.user))\n\n\ndef get_sync_status(user):\n \"\"\" Returns the current Device Sync status \"\"\"\n\n sync_groups = []\n unsynced = []\n\n user = UserProxy.objects.from_user(user)\n for group in user.get_grouped_devices():\n uids = [device.uid for device in group.devices]\n\n if group.is_synced:\n sync_groups.append(uids)\n\n else:\n unsynced = uids\n\n return {\"synchronized\": sync_groups, \"not-synchronized\": unsynced}\n\n\ndef update_sync_status(user, synclist, stopsync):\n \"\"\"Updates the current Device Sync status\n\n Synchronisation between devices can be set up and stopped. Devices are\n identified by their UIDs. 
Unknown UIDs cause errors, no new devices are\n created.\"\"\"\n\n for devlist in synclist:\n\n if len(devlist) <= 1:\n raise ValueError(\"at least two devices are needed to sync\")\n\n # Setup all devices to sync with the first in the list\n uid = devlist[0]\n dev = user.client_set.get(uid=uid)\n\n for other_uid in devlist[1:]:\n other = user.client_set.get(uid=other_uid)\n dev.sync_with(other)\n\n for uid in stopsync:\n dev = user.client_set.get(uid=uid)\n try:\n dev.stop_sync()\n except ValueError:\n # if all devices of a sync-group are un-synced,\n # the last one will raise a ValueError, because it is no longer\n # being synced -- we just ignore it\n pass\n\n user.save()\n\n sync_user.delay(user)\n", "path": "mygpo/api/advanced/sync.py"}]} | 1,421 | 211 |
gh_patches_debug_67497 | rasdani/github-patches | git_diff | vllm-project__vllm-2887 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v0.3.1] Release Tracker
**ETA**: Feb 14-16 th
## Major changes
TBD
## PRs to be merged before the release
- [x] #2855
- [x] #2845
- [x] ~~#2514~~
- [x] Ensure memory release when `LLM` class is deleted. #2882
- [x] #2875 #2880
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/__init__.py`
Content:
```
1 """vLLM: a high-throughput and memory-efficient inference engine for LLMs"""
2
3 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
4 from vllm.engine.async_llm_engine import AsyncLLMEngine
5 from vllm.engine.llm_engine import LLMEngine
6 from vllm.engine.ray_utils import initialize_cluster
7 from vllm.entrypoints.llm import LLM
8 from vllm.outputs import CompletionOutput, RequestOutput
9 from vllm.sampling_params import SamplingParams
10
11 __version__ = "0.3.0"
12
13 __all__ = [
14 "LLM",
15 "SamplingParams",
16 "RequestOutput",
17 "CompletionOutput",
18 "LLMEngine",
19 "EngineArgs",
20 "AsyncLLMEngine",
21 "AsyncEngineArgs",
22 "initialize_cluster",
23 ]
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vllm/__init__.py b/vllm/__init__.py
--- a/vllm/__init__.py
+++ b/vllm/__init__.py
@@ -8,7 +8,7 @@
from vllm.outputs import CompletionOutput, RequestOutput
from vllm.sampling_params import SamplingParams
-__version__ = "0.3.0"
+__version__ = "0.3.1"
__all__ = [
"LLM",
| {"golden_diff": "diff --git a/vllm/__init__.py b/vllm/__init__.py\n--- a/vllm/__init__.py\n+++ b/vllm/__init__.py\n@@ -8,7 +8,7 @@\n from vllm.outputs import CompletionOutput, RequestOutput\n from vllm.sampling_params import SamplingParams\n \n-__version__ = \"0.3.0\"\n+__version__ = \"0.3.1\"\n \n __all__ = [\n \"LLM\",\n", "issue": "[v0.3.1] Release Tracker\n**ETA**: Feb 14-16 th\r\n\r\n## Major changes\r\n\r\nTBD\r\n\r\n## PRs to be merged before the release\r\n\r\n- [x] #2855 \r\n- [x] #2845 \r\n- [x] ~~#2514~~\r\n- [x] Ensure memory release when `LLM` class is deleted. #2882 \r\n- [x] #2875 #2880\n", "before_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.3.0\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}], "after_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.3.1\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}]} | 586 | 108 |
gh_patches_debug_6598 | rasdani/github-patches | git_diff | holoviz__panel-2883 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
panel examples gives UnboundLocalError
#### ALL software version info
panel 0.12.4
#### Description of expected behavior and the observed behavior
`$ panel examples` doesn't raise an error
#### Complete, minimal, self-contained example code that reproduces the issue
Was taking a look at https://panel.holoviz.org/#id1
```
panel examples
```
#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action
<img width="846" alt="Screen Shot 2021-11-04 at 9 23 56 PM" src="https://user-images.githubusercontent.com/17162724/140442696-82e6c5c2-4cd6-40f6-821d-47c87f5e1541.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `panel/command/__init__.py`
Content:
```
1 """
2 Commandline interface to Panel
3 """
4 import sys
5 import argparse
6
7 from bokeh.__main__ import main as bokeh_entry_point
8 from bokeh.command.subcommands.serve import Serve as BkServe
9 from bokeh.command.util import die
10 from bokeh.util.string import nice_join
11
12 from .. import __version__
13 from .serve import Serve
14 from .oauth_secret import OAuthSecret
15
16
17 def transform_cmds(argv):
18 """
19 Allows usage with anaconda-project by remapping the argv list provided
20 into arguments accepted by Bokeh 0.12.7 or later.
21 """
22 replacements = {
23 '--anaconda-project-host':'--allow-websocket-origin',
24 '--anaconda-project-port': '--port',
25 '--anaconda-project-address': '--address'
26 }
27 transformed = []
28 skip = False
29 for arg in argv:
30 if skip:
31 skip = False
32 continue
33 if arg in replacements.keys():
34 transformed.append(replacements[arg])
35 elif arg == '--anaconda-project-iframe-hosts':
36 skip = True
37 continue
38 elif arg.startswith('--anaconda-project'):
39 continue
40 else:
41 transformed.append(arg)
42 return transformed
43
44
45 def main(args=None):
46 """Merges commands offered by pyct and bokeh and provides help for both"""
47 from bokeh.command.subcommands import all as bokeh_commands
48 bokeh_commands = bokeh_commands + [OAuthSecret]
49
50 try:
51 import pyct.cmd
52 pyct_commands = ['copy-examples', 'examples']
53 except Exception:
54 pass
55
56 parser = argparse.ArgumentParser(
57 prog="panel", epilog="See '<command> --help' to read about a specific subcommand."
58 )
59
60 parser.add_argument('-v', '--version', action='version', version=__version__)
61
62 subs = parser.add_subparsers(help="Sub-commands")
63
64 for cmd in pyct_commands:
65 cmd = cmd.replace('-', '_')
66 fn = getattr(pyct.cmd, cmd)
67 subs.add_parser(cmd, help=fn.__doc__)
68
69 for cls in bokeh_commands:
70 if cls is BkServe:
71 subparser = subs.add_parser(Serve.name, help=Serve.help)
72 subcommand = Serve(parser=subparser)
73 subparser.set_defaults(invoke=subcommand.invoke)
74 else:
75 subs.add_parser(cls.name, help=cls.help)
76
77 if len(sys.argv) == 1:
78 all_commands = sorted([c.name for c in bokeh_commands] + pyct_commands)
79 die("ERROR: Must specify subcommand, one of: %s" % nice_join(all_commands))
80
81 if sys.argv[1] in ('--help', '-h'):
82 args = parser.parse_args(sys.argv[1:])
83 args.invoke(args)
84 sys.exit()
85
86 if len(sys.argv) > 1 and any(sys.argv[1] == c.name for c in bokeh_commands):
87 sys.argv = transform_cmds(sys.argv)
88 if sys.argv[1] == 'serve':
89 args = parser.parse_args(sys.argv[1:])
90 try:
91 ret = args.invoke(args)
92 except Exception as e:
93 die("ERROR: " + str(e))
94 elif sys.argv[1] == 'oauth-secret':
95 ret = OAuthSecret(parser).invoke(args)
96 else:
97 ret = bokeh_entry_point()
98 elif sys.argv[1] in pyct_commands:
99 try:
100 import pyct.cmd
101 except ImportError:
102 print("install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)")
103 sys.exit(1)
104 pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)
105 else:
106 parser.parse_args(sys.argv[1:])
107 sys.exit(1)
108
109 if ret is False:
110 sys.exit(1)
111 elif ret is not True and isinstance(ret, int) and ret != 0:
112 sys.exit(ret)
113
114
115
116 if __name__ == "__main__":
117 main()
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/panel/command/__init__.py b/panel/command/__init__.py
--- a/panel/command/__init__.py
+++ b/panel/command/__init__.py
@@ -102,6 +102,7 @@
print("install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)")
sys.exit(1)
pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)
+ sys.exit()
else:
parser.parse_args(sys.argv[1:])
sys.exit(1)
| {"golden_diff": "diff --git a/panel/command/__init__.py b/panel/command/__init__.py\n--- a/panel/command/__init__.py\n+++ b/panel/command/__init__.py\n@@ -102,6 +102,7 @@\n print(\"install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)\")\n sys.exit(1)\n pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)\n+ sys.exit()\n else:\n parser.parse_args(sys.argv[1:])\n sys.exit(1)\n", "issue": "panel examples gives UnboundLocalError\n#### ALL software version info\r\npanel 0.12.4\r\n\r\n#### Description of expected behavior and the observed behavior\r\n`$ panel examples` doesn't raise an error\r\n\r\n#### Complete, minimal, self-contained example code that reproduces the issue\r\n\r\nWas taking a look at https://panel.holoviz.org/#id1\r\n\r\n```\r\npanel examples\r\n```\r\n\r\n#### Stack traceback and/or browser JavaScript console output\r\n\r\n#### Screenshots or screencasts of the bug in action\r\n\r\n<img width=\"846\" alt=\"Screen Shot 2021-11-04 at 9 23 56 PM\" src=\"https://user-images.githubusercontent.com/17162724/140442696-82e6c5c2-4cd6-40f6-821d-47c87f5e1541.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCommandline interface to Panel\n\"\"\"\nimport sys\nimport argparse\n\nfrom bokeh.__main__ import main as bokeh_entry_point\nfrom bokeh.command.subcommands.serve import Serve as BkServe\nfrom bokeh.command.util import die\nfrom bokeh.util.string import nice_join\n\nfrom .. import __version__\nfrom .serve import Serve\nfrom .oauth_secret import OAuthSecret\n\n\ndef transform_cmds(argv):\n \"\"\"\n Allows usage with anaconda-project by remapping the argv list provided\n into arguments accepted by Bokeh 0.12.7 or later.\n \"\"\"\n replacements = {\n '--anaconda-project-host':'--allow-websocket-origin',\n '--anaconda-project-port': '--port',\n '--anaconda-project-address': '--address'\n }\n transformed = []\n skip = False\n for arg in argv:\n if skip:\n skip = False\n continue\n if arg in replacements.keys():\n transformed.append(replacements[arg])\n elif arg == '--anaconda-project-iframe-hosts':\n skip = True\n continue\n elif arg.startswith('--anaconda-project'):\n continue\n else:\n transformed.append(arg)\n return transformed\n\n\ndef main(args=None):\n \"\"\"Merges commands offered by pyct and bokeh and provides help for both\"\"\"\n from bokeh.command.subcommands import all as bokeh_commands\n bokeh_commands = bokeh_commands + [OAuthSecret]\n\n try:\n import pyct.cmd\n pyct_commands = ['copy-examples', 'examples']\n except Exception:\n pass\n\n parser = argparse.ArgumentParser(\n prog=\"panel\", epilog=\"See '<command> --help' to read about a specific subcommand.\"\n )\n\n parser.add_argument('-v', '--version', action='version', version=__version__)\n\n subs = parser.add_subparsers(help=\"Sub-commands\")\n\n for cmd in pyct_commands:\n cmd = cmd.replace('-', '_')\n fn = getattr(pyct.cmd, cmd)\n subs.add_parser(cmd, help=fn.__doc__)\n\n for cls in bokeh_commands:\n if cls is BkServe:\n subparser = subs.add_parser(Serve.name, help=Serve.help)\n subcommand = Serve(parser=subparser)\n subparser.set_defaults(invoke=subcommand.invoke)\n else:\n subs.add_parser(cls.name, help=cls.help)\n\n if len(sys.argv) == 1:\n all_commands = sorted([c.name for c in bokeh_commands] + pyct_commands)\n die(\"ERROR: Must specify subcommand, one of: %s\" % nice_join(all_commands))\n\n if sys.argv[1] in ('--help', '-h'):\n args = parser.parse_args(sys.argv[1:])\n args.invoke(args)\n sys.exit()\n\n if len(sys.argv) > 1 and any(sys.argv[1] == 
c.name for c in bokeh_commands):\n sys.argv = transform_cmds(sys.argv)\n if sys.argv[1] == 'serve':\n args = parser.parse_args(sys.argv[1:])\n try:\n ret = args.invoke(args)\n except Exception as e:\n die(\"ERROR: \" + str(e))\n elif sys.argv[1] == 'oauth-secret':\n ret = OAuthSecret(parser).invoke(args)\n else:\n ret = bokeh_entry_point()\n elif sys.argv[1] in pyct_commands:\n try:\n import pyct.cmd\n except ImportError:\n print(\"install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)\")\n sys.exit(1)\n pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)\n else:\n parser.parse_args(sys.argv[1:])\n sys.exit(1)\n\n if ret is False:\n sys.exit(1)\n elif ret is not True and isinstance(ret, int) and ret != 0:\n sys.exit(ret)\n\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "panel/command/__init__.py"}], "after_files": [{"content": "\"\"\"\nCommandline interface to Panel\n\"\"\"\nimport sys\nimport argparse\n\nfrom bokeh.__main__ import main as bokeh_entry_point\nfrom bokeh.command.subcommands.serve import Serve as BkServe\nfrom bokeh.command.util import die\nfrom bokeh.util.string import nice_join\n\nfrom .. import __version__\nfrom .serve import Serve\nfrom .oauth_secret import OAuthSecret\n\n\ndef transform_cmds(argv):\n \"\"\"\n Allows usage with anaconda-project by remapping the argv list provided\n into arguments accepted by Bokeh 0.12.7 or later.\n \"\"\"\n replacements = {\n '--anaconda-project-host':'--allow-websocket-origin',\n '--anaconda-project-port': '--port',\n '--anaconda-project-address': '--address'\n }\n transformed = []\n skip = False\n for arg in argv:\n if skip:\n skip = False\n continue\n if arg in replacements.keys():\n transformed.append(replacements[arg])\n elif arg == '--anaconda-project-iframe-hosts':\n skip = True\n continue\n elif arg.startswith('--anaconda-project'):\n continue\n else:\n transformed.append(arg)\n return transformed\n\n\ndef main(args=None):\n \"\"\"Merges commands offered by pyct and bokeh and provides help for both\"\"\"\n from bokeh.command.subcommands import all as bokeh_commands\n bokeh_commands = bokeh_commands + [OAuthSecret]\n\n try:\n import pyct.cmd\n pyct_commands = ['copy-examples', 'examples']\n except Exception:\n pass\n\n parser = argparse.ArgumentParser(\n prog=\"panel\", epilog=\"See '<command> --help' to read about a specific subcommand.\"\n )\n\n parser.add_argument('-v', '--version', action='version', version=__version__)\n\n subs = parser.add_subparsers(help=\"Sub-commands\")\n\n for cmd in pyct_commands:\n cmd = cmd.replace('-', '_')\n fn = getattr(pyct.cmd, cmd)\n subs.add_parser(cmd, help=fn.__doc__)\n\n for cls in bokeh_commands:\n if cls is BkServe:\n subparser = subs.add_parser(Serve.name, help=Serve.help)\n subcommand = Serve(parser=subparser)\n subparser.set_defaults(invoke=subcommand.invoke)\n else:\n subs.add_parser(cls.name, help=cls.help)\n\n if len(sys.argv) == 1:\n all_commands = sorted([c.name for c in bokeh_commands] + pyct_commands)\n die(\"ERROR: Must specify subcommand, one of: %s\" % nice_join(all_commands))\n\n if sys.argv[1] in ('--help', '-h'):\n args = parser.parse_args(sys.argv[1:])\n args.invoke(args)\n sys.exit()\n\n if len(sys.argv) > 1 and any(sys.argv[1] == c.name for c in bokeh_commands):\n sys.argv = transform_cmds(sys.argv)\n if sys.argv[1] == 'serve':\n args = parser.parse_args(sys.argv[1:])\n try:\n ret = args.invoke(args)\n except Exception as e:\n die(\"ERROR: \" + str(e))\n elif sys.argv[1] == 'oauth-secret':\n ret = 
OAuthSecret(parser).invoke(args)\n else:\n ret = bokeh_entry_point()\n elif sys.argv[1] in pyct_commands:\n try:\n import pyct.cmd\n except ImportError:\n print(\"install pyct to enable this command (e.g. `conda install -c pyviz pyct` or `pip install pyct[cmd]`)\")\n sys.exit(1)\n pyct.cmd.substitute_main('panel', cmds=pyct_commands, args=args)\n sys.exit()\n else:\n parser.parse_args(sys.argv[1:])\n sys.exit(1)\n\n if ret is False:\n sys.exit(1)\n elif ret is not True and isinstance(ret, int) and ret != 0:\n sys.exit(ret)\n\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "panel/command/__init__.py"}]} | 1,576 | 137 |
gh_patches_debug_3210 | rasdani/github-patches | git_diff | ray-project__ray-10443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] _get_torch_exploration_action doesn't support tuple action dist
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### System information
* **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Mac OS 10.15.4
* **Ray installed from (source or binary)**: binary (via pip)
* **Ray version**: 0.8.6., but nothing seems to have changed on master
* **Python version**: 3.7
### What is the problem?
When using tuple action distributions (as advised in #6372) and exploration is disabled, the line:
https://github.com/ray-project/ray/blob/a462ae2747afbeb9047e443cd51e67e3fe0b49e6/rllib/utils/exploration/stochastic_sampling.py#L75
from `_get_torch_exploration_action` raises the following exception:
```
AttributeError: 'tuple' object has no attribute 'size'
```
A simple fix that supports any type of distribution would be:
```python
logp = torch.zeros_like(action_dist.sampled_action_logp())
```
I can submit a PR if it helps.
### Reproduction (REQUIRED)
Exact command to reproduce: python `rllib_cartpole.py` for the following file
```python
import gym.envs.classic_control
from gym.spaces import Tuple, Discrete
import ray
from ray import tune
class CustomCartpole(gym.envs.classic_control.CartPoleEnv):
"""Add a dimension to the cartpole action space that is ignored."""
def __init__(self, env_config):
super().__init__()
# if override_actions is false this is just the Cartpole environment
self.override_actions = env_config['override_actions']
if self.override_actions:
# 2 is the environment's normal action space
# 4 is just a dummy number to give it an extra dimension
self.original_action_space = self.action_space
self.action_space = Tuple([Discrete(2), Discrete(4)])
self.tuple_action_space = self.action_space
def step(self, action):
# call the cartpole environment with the original action
if self.override_actions:
self.action_space = self.original_action_space
return super().step(action[0])
else:
return super().step(action)
def main():
ray.init()
tune.run(
"PPO",
stop={"episode_reward_mean": 50},
config={
"env": CustomCartpole,
"env_config": {'override_actions': True},
"num_gpus": 0,
"num_workers": 1,
"eager": False,
"evaluation_interval": 1,
"evaluation_config": {
"explore": False,
},
"framework": "torch",
},
)
if __name__ == '__main__':
main()
```
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rllib/utils/exploration/stochastic_sampling.py`
Content:
```
1 import tree
2 from typing import Union
3
4 from ray.rllib.models.action_dist import ActionDistribution
5 from ray.rllib.models.modelv2 import ModelV2
6 from ray.rllib.utils.annotations import override
7 from ray.rllib.utils.exploration.exploration import Exploration
8 from ray.rllib.utils.framework import try_import_tf, try_import_torch, \
9 TensorType
10
11 tf1, tf, tfv = try_import_tf()
12 torch, _ = try_import_torch()
13
14
15 class StochasticSampling(Exploration):
16 """An exploration that simply samples from a distribution.
17
18 The sampling can be made deterministic by passing explore=False into
19 the call to `get_exploration_action`.
20 Also allows for scheduled parameters for the distributions, such as
21 lowering stddev, temperature, etc.. over time.
22 """
23
24 def __init__(self, action_space, *, framework: str, model: ModelV2,
25 **kwargs):
26 """Initializes a StochasticSampling Exploration object.
27
28 Args:
29 action_space (Space): The gym action space used by the environment.
30 framework (str): One of None, "tf", "torch".
31 """
32 assert framework is not None
33 super().__init__(
34 action_space, model=model, framework=framework, **kwargs)
35
36 @override(Exploration)
37 def get_exploration_action(self,
38 *,
39 action_distribution: ActionDistribution,
40 timestep: Union[int, TensorType],
41 explore: bool = True):
42 if self.framework == "torch":
43 return self._get_torch_exploration_action(action_distribution,
44 explore)
45 else:
46 return self._get_tf_exploration_action_op(action_distribution,
47 explore)
48
49 def _get_tf_exploration_action_op(self, action_dist, explore):
50 sample = action_dist.sample()
51 deterministic_sample = action_dist.deterministic_sample()
52 action = tf.cond(
53 tf.constant(explore) if isinstance(explore, bool) else explore,
54 true_fn=lambda: sample,
55 false_fn=lambda: deterministic_sample)
56
57 def logp_false_fn():
58 batch_size = tf.shape(tree.flatten(action)[0])[0]
59 return tf.zeros(shape=(batch_size, ), dtype=tf.float32)
60
61 logp = tf.cond(
62 tf.constant(explore) if isinstance(explore, bool) else explore,
63 true_fn=lambda: action_dist.sampled_action_logp(),
64 false_fn=logp_false_fn)
65
66 return action, logp
67
68 @staticmethod
69 def _get_torch_exploration_action(action_dist, explore):
70 if explore:
71 action = action_dist.sample()
72 logp = action_dist.sampled_action_logp()
73 else:
74 action = action_dist.deterministic_sample()
75 logp = torch.zeros((action.size()[0], ), dtype=torch.float32)
76 return action, logp
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rllib/utils/exploration/stochastic_sampling.py b/rllib/utils/exploration/stochastic_sampling.py
--- a/rllib/utils/exploration/stochastic_sampling.py
+++ b/rllib/utils/exploration/stochastic_sampling.py
@@ -72,5 +72,5 @@
logp = action_dist.sampled_action_logp()
else:
action = action_dist.deterministic_sample()
- logp = torch.zeros((action.size()[0], ), dtype=torch.float32)
+ logp = torch.zeros_like(action_dist.sampled_action_logp())
return action, logp
| {"golden_diff": "diff --git a/rllib/utils/exploration/stochastic_sampling.py b/rllib/utils/exploration/stochastic_sampling.py\n--- a/rllib/utils/exploration/stochastic_sampling.py\n+++ b/rllib/utils/exploration/stochastic_sampling.py\n@@ -72,5 +72,5 @@\n logp = action_dist.sampled_action_logp()\n else:\n action = action_dist.deterministic_sample()\n- logp = torch.zeros((action.size()[0], ), dtype=torch.float32)\n+ logp = torch.zeros_like(action_dist.sampled_action_logp())\n return action, logp\n", "issue": "[rllib] _get_torch_exploration_action doesn't support tuple action dist\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n### System information\r\n\r\n* **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Mac OS 10.15.4\r\n* **Ray installed from (source or binary)**: binary (via pip)\r\n* **Ray version**: 0.8.6., but nothing seems to have changed on master\r\n* **Python version**: 3.7\r\n\r\n### What is the problem?\r\n\r\nWhen using tuple action distributions (as advised in #6372) and exploration is disabled, the line:\r\n\r\nhttps://github.com/ray-project/ray/blob/a462ae2747afbeb9047e443cd51e67e3fe0b49e6/rllib/utils/exploration/stochastic_sampling.py#L75\r\n\r\nfrom `_get_torch_exploration_action` raises the following exception:\r\n\r\n```\r\nAttributeError: 'tuple' object has no attribute 'size'\r\n```\r\n\r\nA simple fix that supports any type of distribution would be:\r\n```python\r\nlogp = torch.zeros_like(action_dist.sampled_action_logp())\r\n```\r\n\r\nI can submit a PR if it helps. \r\n\r\n### Reproduction (REQUIRED)\r\n\r\nExact command to reproduce: python `rllib_cartpole.py` for the following file\r\n\r\n```python\r\nimport gym.envs.classic_control\r\nfrom gym.spaces import Tuple, Discrete\r\n\r\nimport ray\r\nfrom ray import tune\r\n\r\n\r\nclass CustomCartpole(gym.envs.classic_control.CartPoleEnv):\r\n \"\"\"Add a dimension to the cartpole action space that is ignored.\"\"\"\r\n\r\n def __init__(self, env_config):\r\n super().__init__()\r\n # if override_actions is false this is just the Cartpole environment\r\n self.override_actions = env_config['override_actions']\r\n if self.override_actions:\r\n # 2 is the environment's normal action space\r\n # 4 is just a dummy number to give it an extra dimension\r\n self.original_action_space = self.action_space\r\n self.action_space = Tuple([Discrete(2), Discrete(4)])\r\n self.tuple_action_space = self.action_space\r\n\r\n def step(self, action):\r\n # call the cartpole environment with the original action\r\n if self.override_actions:\r\n self.action_space = self.original_action_space\r\n return super().step(action[0])\r\n else:\r\n return super().step(action)\r\n\r\n\r\ndef main():\r\n ray.init()\r\n tune.run(\r\n \"PPO\",\r\n stop={\"episode_reward_mean\": 50},\r\n config={\r\n \"env\": CustomCartpole,\r\n \"env_config\": {'override_actions': True},\r\n \"num_gpus\": 0,\r\n \"num_workers\": 1,\r\n \"eager\": False,\r\n \"evaluation_interval\": 1,\r\n \"evaluation_config\": {\r\n \"explore\": False,\r\n },\r\n \"framework\": \"torch\",\r\n },\r\n )\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "import tree\nfrom typing import Union\n\nfrom ray.rllib.models.action_dist import ActionDistribution\nfrom 
ray.rllib.models.modelv2 import ModelV2\nfrom ray.rllib.utils.annotations import override\nfrom ray.rllib.utils.exploration.exploration import Exploration\nfrom ray.rllib.utils.framework import try_import_tf, try_import_torch, \\\n TensorType\n\ntf1, tf, tfv = try_import_tf()\ntorch, _ = try_import_torch()\n\n\nclass StochasticSampling(Exploration):\n \"\"\"An exploration that simply samples from a distribution.\n\n The sampling can be made deterministic by passing explore=False into\n the call to `get_exploration_action`.\n Also allows for scheduled parameters for the distributions, such as\n lowering stddev, temperature, etc.. over time.\n \"\"\"\n\n def __init__(self, action_space, *, framework: str, model: ModelV2,\n **kwargs):\n \"\"\"Initializes a StochasticSampling Exploration object.\n\n Args:\n action_space (Space): The gym action space used by the environment.\n framework (str): One of None, \"tf\", \"torch\".\n \"\"\"\n assert framework is not None\n super().__init__(\n action_space, model=model, framework=framework, **kwargs)\n\n @override(Exploration)\n def get_exploration_action(self,\n *,\n action_distribution: ActionDistribution,\n timestep: Union[int, TensorType],\n explore: bool = True):\n if self.framework == \"torch\":\n return self._get_torch_exploration_action(action_distribution,\n explore)\n else:\n return self._get_tf_exploration_action_op(action_distribution,\n explore)\n\n def _get_tf_exploration_action_op(self, action_dist, explore):\n sample = action_dist.sample()\n deterministic_sample = action_dist.deterministic_sample()\n action = tf.cond(\n tf.constant(explore) if isinstance(explore, bool) else explore,\n true_fn=lambda: sample,\n false_fn=lambda: deterministic_sample)\n\n def logp_false_fn():\n batch_size = tf.shape(tree.flatten(action)[0])[0]\n return tf.zeros(shape=(batch_size, ), dtype=tf.float32)\n\n logp = tf.cond(\n tf.constant(explore) if isinstance(explore, bool) else explore,\n true_fn=lambda: action_dist.sampled_action_logp(),\n false_fn=logp_false_fn)\n\n return action, logp\n\n @staticmethod\n def _get_torch_exploration_action(action_dist, explore):\n if explore:\n action = action_dist.sample()\n logp = action_dist.sampled_action_logp()\n else:\n action = action_dist.deterministic_sample()\n logp = torch.zeros((action.size()[0], ), dtype=torch.float32)\n return action, logp\n", "path": "rllib/utils/exploration/stochastic_sampling.py"}], "after_files": [{"content": "import tree\nfrom typing import Union\n\nfrom ray.rllib.models.action_dist import ActionDistribution\nfrom ray.rllib.models.modelv2 import ModelV2\nfrom ray.rllib.utils.annotations import override\nfrom ray.rllib.utils.exploration.exploration import Exploration\nfrom ray.rllib.utils.framework import try_import_tf, try_import_torch, \\\n TensorType\n\ntf1, tf, tfv = try_import_tf()\ntorch, _ = try_import_torch()\n\n\nclass StochasticSampling(Exploration):\n \"\"\"An exploration that simply samples from a distribution.\n\n The sampling can be made deterministic by passing explore=False into\n the call to `get_exploration_action`.\n Also allows for scheduled parameters for the distributions, such as\n lowering stddev, temperature, etc.. 
over time.\n \"\"\"\n\n def __init__(self, action_space, *, framework: str, model: ModelV2,\n **kwargs):\n \"\"\"Initializes a StochasticSampling Exploration object.\n\n Args:\n action_space (Space): The gym action space used by the environment.\n framework (str): One of None, \"tf\", \"torch\".\n \"\"\"\n assert framework is not None\n super().__init__(\n action_space, model=model, framework=framework, **kwargs)\n\n @override(Exploration)\n def get_exploration_action(self,\n *,\n action_distribution: ActionDistribution,\n timestep: Union[int, TensorType],\n explore: bool = True):\n if self.framework == \"torch\":\n return self._get_torch_exploration_action(action_distribution,\n explore)\n else:\n return self._get_tf_exploration_action_op(action_distribution,\n explore)\n\n def _get_tf_exploration_action_op(self, action_dist, explore):\n sample = action_dist.sample()\n deterministic_sample = action_dist.deterministic_sample()\n action = tf.cond(\n tf.constant(explore) if isinstance(explore, bool) else explore,\n true_fn=lambda: sample,\n false_fn=lambda: deterministic_sample)\n\n def logp_false_fn():\n batch_size = tf.shape(tree.flatten(action)[0])[0]\n return tf.zeros(shape=(batch_size, ), dtype=tf.float32)\n\n logp = tf.cond(\n tf.constant(explore) if isinstance(explore, bool) else explore,\n true_fn=lambda: action_dist.sampled_action_logp(),\n false_fn=logp_false_fn)\n\n return action, logp\n\n @staticmethod\n def _get_torch_exploration_action(action_dist, explore):\n if explore:\n action = action_dist.sample()\n logp = action_dist.sampled_action_logp()\n else:\n action = action_dist.deterministic_sample()\n logp = torch.zeros_like(action_dist.sampled_action_logp())\n return action, logp\n", "path": "rllib/utils/exploration/stochastic_sampling.py"}]} | 1,728 | 132 |
gh_patches_debug_14983 | rasdani/github-patches | git_diff | saleor__saleor-5302 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The clear database command should be runnable with debug disabled
We should be able to run `cleardb` when `DEBUG=False` but we should have a `--force` flag to actually allow that action when the debug mode is turned off as it is a dangerous command.
Definition of done:
- Prints an error to stderr when `DEBUG=False` and `--force` is not passed (flagged)
- Exits with 1 (raises `SystemExit` which allows Django to handle it and cleanup the opened connections, such as the database)
- User can clear the database when debug mode is turned off and only when `--force` was passed
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/core/management/commands/cleardb.py`
Content:
```
1 """Clear the database preserving shop's configuration.
2
3 This command clears the database from data such as orders, products or customer
4 accounts. It doesn't remove shop's configuration, such as: staff accounts, service
5 accounts, plugin configurations, site settings or navigation menus.
6 """
7
8 from django.conf import settings
9 from django.core.management.base import BaseCommand, CommandError
10 from django.db.models import Q
11
12 from ....account.models import User
13 from ....checkout.models import Checkout
14 from ....discount.models import Sale, Voucher
15 from ....giftcard.models import GiftCard
16 from ....order.models import Order
17 from ....page.models import Page
18 from ....payment.models import Payment, Transaction
19 from ....product.models import Attribute, Category, Collection, Product, ProductType
20 from ....shipping.models import ShippingMethod, ShippingZone
21 from ....warehouse.models import Warehouse
22 from ....webhook.models import Webhook
23
24
25 class Command(BaseCommand):
26 help = "Removes data from the database preserving shop configuration."
27
28 def add_arguments(self, parser):
29 parser.add_argument(
30 "--delete-staff",
31 action="store_true",
32 help="Delete staff user accounts (doesn't delete superuser accounts).",
33 )
34
35 def handle(self, **options):
36 if not settings.DEBUG:
37 raise CommandError("Cannot clear the database in DEBUG=True mode.")
38
39 Checkout.objects.all().delete()
40 self.stdout.write("Removed checkouts")
41
42 Transaction.objects.all().delete()
43 self.stdout.write("Removed transactions")
44
45 Payment.objects.all().delete()
46 self.stdout.write("Removed payments")
47
48 Order.objects.all().delete()
49 self.stdout.write("Removed orders")
50
51 Product.objects.all().delete()
52 self.stdout.write("Removed products")
53
54 ProductType.objects.all().delete()
55 self.stdout.write("Removed product types")
56
57 Attribute.objects.all().delete()
58 self.stdout.write("Removed attributes")
59
60 Category.objects.all().delete()
61 self.stdout.write("Removed categories")
62
63 Collection.objects.all().delete()
64 self.stdout.write("Removed collections")
65
66 Sale.objects.all().delete()
67 self.stdout.write("Removed sales")
68
69 ShippingMethod.objects.all().delete()
70 self.stdout.write("Removed shipping methods")
71
72 ShippingZone.objects.all().delete()
73 self.stdout.write("Removed shipping zones")
74
75 Voucher.objects.all().delete()
76 self.stdout.write("Removed vouchers")
77
78 GiftCard.objects.all().delete()
79 self.stdout.write("Removed gift cards")
80
81 self.stdout.write("Removed warehouses")
82 Warehouse.objects.all().delete()
83
84 Page.objects.all().delete()
85 self.stdout.write("Removed pages")
86
87 Webhook.objects.all().delete()
88 self.stdout.write("Removed webhooks")
89
90 # Delete all users except for staff members.
91 staff = User.objects.filter(Q(is_staff=True) | Q(is_superuser=True))
92 User.objects.exclude(pk__in=staff).delete()
93 self.stdout.write("Removed customers")
94
95 should_delete_staff = options.get("delete_staff")
96 if should_delete_staff:
97 staff = staff.exclude(is_superuser=True)
98 staff.delete()
99 self.stdout.write("Removed staff users")
100
101 # Remove addresses of staff members. Used to clear saved addresses of staff
102 # accounts used on demo for testing checkout.
103 for user in staff:
104 user.addresses.all().delete()
105 self.stdout.write("Removed staff addresses")
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/core/management/commands/cleardb.py b/saleor/core/management/commands/cleardb.py
--- a/saleor/core/management/commands/cleardb.py
+++ b/saleor/core/management/commands/cleardb.py
@@ -31,10 +31,16 @@
action="store_true",
help="Delete staff user accounts (doesn't delete superuser accounts).",
)
+ parser.add_argument(
+ "--force",
+ action="store_true",
+ help="Allows running the cleardb command in DEBUG=False mode.",
+ )
def handle(self, **options):
- if not settings.DEBUG:
- raise CommandError("Cannot clear the database in DEBUG=True mode.")
+ force = options.get("force", False)
+ if not settings.DEBUG and not force:
+ raise CommandError("Cannot clear the database in DEBUG=False mode.")
Checkout.objects.all().delete()
self.stdout.write("Removed checkouts")
| {"golden_diff": "diff --git a/saleor/core/management/commands/cleardb.py b/saleor/core/management/commands/cleardb.py\n--- a/saleor/core/management/commands/cleardb.py\n+++ b/saleor/core/management/commands/cleardb.py\n@@ -31,10 +31,16 @@\n action=\"store_true\",\n help=\"Delete staff user accounts (doesn't delete superuser accounts).\",\n )\n+ parser.add_argument(\n+ \"--force\",\n+ action=\"store_true\",\n+ help=\"Allows running the cleardb command in DEBUG=False mode.\",\n+ )\n \n def handle(self, **options):\n- if not settings.DEBUG:\n- raise CommandError(\"Cannot clear the database in DEBUG=True mode.\")\n+ force = options.get(\"force\", False)\n+ if not settings.DEBUG and not force:\n+ raise CommandError(\"Cannot clear the database in DEBUG=False mode.\")\n \n Checkout.objects.all().delete()\n self.stdout.write(\"Removed checkouts\")\n", "issue": "The clear database command should be runnable with debug disabled\nWe should be able to run `cleardb` when `DEBUG=False` but we should have a `--force` flag to actually allow that action when the debug mode is turned off as it is a dangerous command.\r\n\r\nDefinition of done:\r\n- Prints an error to stderr when `DEBUG=False` and `--force` is not passed (flagged)\r\n- Exits with 1 (raises `SystemExit` which allows Django to handle it and cleanup the opened connections, such as the database)\r\n- User can clear the database when debug mode is turned off and only when `--force` was passed\n", "before_files": [{"content": "\"\"\"Clear the database preserving shop's configuration.\n\nThis command clears the database from data such as orders, products or customer\naccounts. It doesn't remove shop's configuration, such as: staff accounts, service\naccounts, plugin configurations, site settings or navigation menus.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.db.models import Q\n\nfrom ....account.models import User\nfrom ....checkout.models import Checkout\nfrom ....discount.models import Sale, Voucher\nfrom ....giftcard.models import GiftCard\nfrom ....order.models import Order\nfrom ....page.models import Page\nfrom ....payment.models import Payment, Transaction\nfrom ....product.models import Attribute, Category, Collection, Product, ProductType\nfrom ....shipping.models import ShippingMethod, ShippingZone\nfrom ....warehouse.models import Warehouse\nfrom ....webhook.models import Webhook\n\n\nclass Command(BaseCommand):\n help = \"Removes data from the database preserving shop configuration.\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"--delete-staff\",\n action=\"store_true\",\n help=\"Delete staff user accounts (doesn't delete superuser accounts).\",\n )\n\n def handle(self, **options):\n if not settings.DEBUG:\n raise CommandError(\"Cannot clear the database in DEBUG=True mode.\")\n\n Checkout.objects.all().delete()\n self.stdout.write(\"Removed checkouts\")\n\n Transaction.objects.all().delete()\n self.stdout.write(\"Removed transactions\")\n\n Payment.objects.all().delete()\n self.stdout.write(\"Removed payments\")\n\n Order.objects.all().delete()\n self.stdout.write(\"Removed orders\")\n\n Product.objects.all().delete()\n self.stdout.write(\"Removed products\")\n\n ProductType.objects.all().delete()\n self.stdout.write(\"Removed product types\")\n\n Attribute.objects.all().delete()\n self.stdout.write(\"Removed attributes\")\n\n Category.objects.all().delete()\n self.stdout.write(\"Removed categories\")\n\n 
Collection.objects.all().delete()\n self.stdout.write(\"Removed collections\")\n\n Sale.objects.all().delete()\n self.stdout.write(\"Removed sales\")\n\n ShippingMethod.objects.all().delete()\n self.stdout.write(\"Removed shipping methods\")\n\n ShippingZone.objects.all().delete()\n self.stdout.write(\"Removed shipping zones\")\n\n Voucher.objects.all().delete()\n self.stdout.write(\"Removed vouchers\")\n\n GiftCard.objects.all().delete()\n self.stdout.write(\"Removed gift cards\")\n\n self.stdout.write(\"Removed warehouses\")\n Warehouse.objects.all().delete()\n\n Page.objects.all().delete()\n self.stdout.write(\"Removed pages\")\n\n Webhook.objects.all().delete()\n self.stdout.write(\"Removed webhooks\")\n\n # Delete all users except for staff members.\n staff = User.objects.filter(Q(is_staff=True) | Q(is_superuser=True))\n User.objects.exclude(pk__in=staff).delete()\n self.stdout.write(\"Removed customers\")\n\n should_delete_staff = options.get(\"delete_staff\")\n if should_delete_staff:\n staff = staff.exclude(is_superuser=True)\n staff.delete()\n self.stdout.write(\"Removed staff users\")\n\n # Remove addresses of staff members. Used to clear saved addresses of staff\n # accounts used on demo for testing checkout.\n for user in staff:\n user.addresses.all().delete()\n self.stdout.write(\"Removed staff addresses\")\n", "path": "saleor/core/management/commands/cleardb.py"}], "after_files": [{"content": "\"\"\"Clear the database preserving shop's configuration.\n\nThis command clears the database from data such as orders, products or customer\naccounts. It doesn't remove shop's configuration, such as: staff accounts, service\naccounts, plugin configurations, site settings or navigation menus.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.db.models import Q\n\nfrom ....account.models import User\nfrom ....checkout.models import Checkout\nfrom ....discount.models import Sale, Voucher\nfrom ....giftcard.models import GiftCard\nfrom ....order.models import Order\nfrom ....page.models import Page\nfrom ....payment.models import Payment, Transaction\nfrom ....product.models import Attribute, Category, Collection, Product, ProductType\nfrom ....shipping.models import ShippingMethod, ShippingZone\nfrom ....warehouse.models import Warehouse\nfrom ....webhook.models import Webhook\n\n\nclass Command(BaseCommand):\n help = \"Removes data from the database preserving shop configuration.\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"--delete-staff\",\n action=\"store_true\",\n help=\"Delete staff user accounts (doesn't delete superuser accounts).\",\n )\n parser.add_argument(\n \"--force\",\n action=\"store_true\",\n help=\"Allows running the cleardb command in DEBUG=False mode.\",\n )\n\n def handle(self, **options):\n force = options.get(\"force\", False)\n if not settings.DEBUG and not force:\n raise CommandError(\"Cannot clear the database in DEBUG=False mode.\")\n\n Checkout.objects.all().delete()\n self.stdout.write(\"Removed checkouts\")\n\n Transaction.objects.all().delete()\n self.stdout.write(\"Removed transactions\")\n\n Payment.objects.all().delete()\n self.stdout.write(\"Removed payments\")\n\n Order.objects.all().delete()\n self.stdout.write(\"Removed orders\")\n\n Product.objects.all().delete()\n self.stdout.write(\"Removed products\")\n\n ProductType.objects.all().delete()\n self.stdout.write(\"Removed product types\")\n\n Attribute.objects.all().delete()\n self.stdout.write(\"Removed 
attributes\")\n\n Category.objects.all().delete()\n self.stdout.write(\"Removed categories\")\n\n Collection.objects.all().delete()\n self.stdout.write(\"Removed collections\")\n\n Sale.objects.all().delete()\n self.stdout.write(\"Removed sales\")\n\n ShippingMethod.objects.all().delete()\n self.stdout.write(\"Removed shipping methods\")\n\n ShippingZone.objects.all().delete()\n self.stdout.write(\"Removed shipping zones\")\n\n Voucher.objects.all().delete()\n self.stdout.write(\"Removed vouchers\")\n\n GiftCard.objects.all().delete()\n self.stdout.write(\"Removed gift cards\")\n\n self.stdout.write(\"Removed warehouses\")\n Warehouse.objects.all().delete()\n\n Page.objects.all().delete()\n self.stdout.write(\"Removed pages\")\n\n Webhook.objects.all().delete()\n self.stdout.write(\"Removed webhooks\")\n\n # Delete all users except for staff members.\n staff = User.objects.filter(Q(is_staff=True) | Q(is_superuser=True))\n User.objects.exclude(pk__in=staff).delete()\n self.stdout.write(\"Removed customers\")\n\n should_delete_staff = options.get(\"delete_staff\")\n if should_delete_staff:\n staff = staff.exclude(is_superuser=True)\n staff.delete()\n self.stdout.write(\"Removed staff users\")\n\n # Remove addresses of staff members. Used to clear saved addresses of staff\n # accounts used on demo for testing checkout.\n for user in staff:\n user.addresses.all().delete()\n self.stdout.write(\"Removed staff addresses\")\n", "path": "saleor/core/management/commands/cleardb.py"}]} | 1,307 | 223 |
gh_patches_debug_6825 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1579 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"no match response" button present in qna maker dialog when active learning is disabled
Python tracking issue for repo code-owners
See original issue for details: microsoft/botframework-sdk#6146
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from typing import List
5 from botbuilder.core import CardFactory
6 from botbuilder.schema import Activity, ActivityTypes, CardAction, HeroCard
7
8 from ..models import QueryResult
9
10
11 class QnACardBuilder:
12 """
13 Message activity card builder for QnAMaker dialogs.
14 """
15
16 @staticmethod
17 def get_suggestions_card(
18 suggestions: List[str], card_title: str, card_no_match: str
19 ) -> Activity:
20 """
21 Get active learning suggestions card.
22 """
23
24 if not suggestions:
25 raise TypeError("suggestions list is required")
26
27 if not card_title:
28 raise TypeError("card_title is required")
29
30 if not card_no_match:
31 raise TypeError("card_no_match is required")
32
33 # Add all suggestions
34 button_list = [
35 CardAction(value=suggestion, type="imBack", title=suggestion)
36 for suggestion in suggestions
37 ]
38
39 # Add No match text
40 button_list.append(
41 CardAction(value=card_no_match, type="imBack", title=card_no_match)
42 )
43
44 attachment = CardFactory.hero_card(HeroCard(buttons=button_list))
45
46 return Activity(
47 type=ActivityTypes.message, text=card_title, attachments=[attachment]
48 )
49
50 @staticmethod
51 def get_qna_prompts_card(result: QueryResult, card_no_match_text: str) -> Activity:
52 """
53 Get active learning suggestions card.
54 """
55
56 if not result:
57 raise TypeError("result is required")
58
59 if not card_no_match_text:
60 raise TypeError("card_no_match_text is required")
61
62 # Add all prompts
63 button_list = [
64 CardAction(
65 value=prompt.display_text, type="imBack", title=prompt.display_text,
66 )
67 for prompt in result.context.prompts
68 ]
69
70 # Add No match text
71 button_list.append(
72 CardAction(
73 value=card_no_match_text, type="imBack", title=card_no_match_text,
74 )
75 )
76
77 attachment = CardFactory.hero_card(HeroCard(buttons=button_list))
78
79 return Activity(
80 type=ActivityTypes.message, text=result.answer, attachments=[attachment]
81 )
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py b/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py
--- a/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py
+++ b/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py
@@ -67,13 +67,6 @@
for prompt in result.context.prompts
]
- # Add No match text
- button_list.append(
- CardAction(
- value=card_no_match_text, type="imBack", title=card_no_match_text,
- )
- )
-
attachment = CardFactory.hero_card(HeroCard(buttons=button_list))
return Activity(
| {"golden_diff": "diff --git a/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py b/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py\n--- a/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py\n+++ b/libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py\n@@ -67,13 +67,6 @@\n for prompt in result.context.prompts\r\n ]\r\n \r\n- # Add No match text\r\n- button_list.append(\r\n- CardAction(\r\n- value=card_no_match_text, type=\"imBack\", title=card_no_match_text,\r\n- )\r\n- )\r\n-\r\n attachment = CardFactory.hero_card(HeroCard(buttons=button_list))\r\n \r\n return Activity(\n", "issue": "\"no match response\" button present in qna maker dialog when active learning is disabled\nPython tracking issue for repo code-owners\r\n\r\nSee original issue for details: microsoft/botframework-sdk#6146\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nfrom typing import List\r\nfrom botbuilder.core import CardFactory\r\nfrom botbuilder.schema import Activity, ActivityTypes, CardAction, HeroCard\r\n\r\nfrom ..models import QueryResult\r\n\r\n\r\nclass QnACardBuilder:\r\n \"\"\"\r\n Message activity card builder for QnAMaker dialogs.\r\n \"\"\"\r\n\r\n @staticmethod\r\n def get_suggestions_card(\r\n suggestions: List[str], card_title: str, card_no_match: str\r\n ) -> Activity:\r\n \"\"\"\r\n Get active learning suggestions card.\r\n \"\"\"\r\n\r\n if not suggestions:\r\n raise TypeError(\"suggestions list is required\")\r\n\r\n if not card_title:\r\n raise TypeError(\"card_title is required\")\r\n\r\n if not card_no_match:\r\n raise TypeError(\"card_no_match is required\")\r\n\r\n # Add all suggestions\r\n button_list = [\r\n CardAction(value=suggestion, type=\"imBack\", title=suggestion)\r\n for suggestion in suggestions\r\n ]\r\n\r\n # Add No match text\r\n button_list.append(\r\n CardAction(value=card_no_match, type=\"imBack\", title=card_no_match)\r\n )\r\n\r\n attachment = CardFactory.hero_card(HeroCard(buttons=button_list))\r\n\r\n return Activity(\r\n type=ActivityTypes.message, text=card_title, attachments=[attachment]\r\n )\r\n\r\n @staticmethod\r\n def get_qna_prompts_card(result: QueryResult, card_no_match_text: str) -> Activity:\r\n \"\"\"\r\n Get active learning suggestions card.\r\n \"\"\"\r\n\r\n if not result:\r\n raise TypeError(\"result is required\")\r\n\r\n if not card_no_match_text:\r\n raise TypeError(\"card_no_match_text is required\")\r\n\r\n # Add all prompts\r\n button_list = [\r\n CardAction(\r\n value=prompt.display_text, type=\"imBack\", title=prompt.display_text,\r\n )\r\n for prompt in result.context.prompts\r\n ]\r\n\r\n # Add No match text\r\n button_list.append(\r\n CardAction(\r\n value=card_no_match_text, type=\"imBack\", title=card_no_match_text,\r\n )\r\n )\r\n\r\n attachment = CardFactory.hero_card(HeroCard(buttons=button_list))\r\n\r\n return Activity(\r\n type=ActivityTypes.message, text=result.answer, attachments=[attachment]\r\n )\r\n", "path": "libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\r\n# Licensed under the MIT License.\r\n\r\nfrom typing import List\r\nfrom botbuilder.core import CardFactory\r\nfrom botbuilder.schema import Activity, ActivityTypes, CardAction, HeroCard\r\n\r\nfrom ..models import QueryResult\r\n\r\n\r\nclass QnACardBuilder:\r\n \"\"\"\r\n Message activity card builder for QnAMaker dialogs.\r\n \"\"\"\r\n\r\n @staticmethod\r\n def get_suggestions_card(\r\n suggestions: List[str], card_title: str, card_no_match: str\r\n ) -> Activity:\r\n \"\"\"\r\n Get active learning suggestions card.\r\n \"\"\"\r\n\r\n if not suggestions:\r\n raise TypeError(\"suggestions list is required\")\r\n\r\n if not card_title:\r\n raise TypeError(\"card_title is required\")\r\n\r\n if not card_no_match:\r\n raise TypeError(\"card_no_match is required\")\r\n\r\n # Add all suggestions\r\n button_list = [\r\n CardAction(value=suggestion, type=\"imBack\", title=suggestion)\r\n for suggestion in suggestions\r\n ]\r\n\r\n # Add No match text\r\n button_list.append(\r\n CardAction(value=card_no_match, type=\"imBack\", title=card_no_match)\r\n )\r\n\r\n attachment = CardFactory.hero_card(HeroCard(buttons=button_list))\r\n\r\n return Activity(\r\n type=ActivityTypes.message, text=card_title, attachments=[attachment]\r\n )\r\n\r\n @staticmethod\r\n def get_qna_prompts_card(result: QueryResult, card_no_match_text: str) -> Activity:\r\n \"\"\"\r\n Get active learning suggestions card.\r\n \"\"\"\r\n\r\n if not result:\r\n raise TypeError(\"result is required\")\r\n\r\n if not card_no_match_text:\r\n raise TypeError(\"card_no_match_text is required\")\r\n\r\n # Add all prompts\r\n button_list = [\r\n CardAction(\r\n value=prompt.display_text, type=\"imBack\", title=prompt.display_text,\r\n )\r\n for prompt in result.context.prompts\r\n ]\r\n\r\n attachment = CardFactory.hero_card(HeroCard(buttons=button_list))\r\n\r\n return Activity(\r\n type=ActivityTypes.message, text=result.answer, attachments=[attachment]\r\n )\r\n", "path": "libraries/botbuilder-ai/botbuilder/ai/qna/utils/qna_card_builder.py"}]} | 960 | 185 |
gh_patches_debug_38316 | rasdani/github-patches | git_diff | streamlink__streamlink-5946 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.turkuvaz: no data on minikacocuk.com.tr
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
6.7.2
### Description
### Debug log
```text
Not working Python 3!!!! "Minikacocuk" channel ,please help.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/turkuvaz.py`
Content:
```
1 """
2 $description Turkish live TV channels from Turkuvaz Media Group, including Ahaber, ATV, Minika COCUK and MinikaGO.
3 $url a2tv.com.tr
4 $url ahaber.com.tr
5 $url anews.com.tr
6 $url apara.com.tr
7 $url aspor.com.tr
8 $url atv.com.tr
9 $url atvavrupa.tv
10 $url minikacocuk.com.tr
11 $url minikago.com.tr
12 $url vavtv.com.tr
13 $type live, vod
14 $metadata id
15 $metadata title
16 $region various
17 """
18
19 import logging
20 import re
21
22 from streamlink.plugin import Plugin, pluginmatcher
23 from streamlink.plugin.api import validate
24 from streamlink.stream.hls import HLSStream
25
26
27 log = logging.getLogger(__name__)
28
29
30 @pluginmatcher(re.compile(r"""
31 https?://(?:www\.)?
32 (?:
33 atvavrupa\.tv
34 |
35 (?:a2tv|ahaber|anews|apara|aspor|atv|minikacocuk|minikago|vavtv)\.com\.tr
36 )
37 """, re.VERBOSE))
38 class Turkuvaz(Plugin):
39 def _get_streams(self):
40 _find_and_get_attrs = validate.Schema(
41 validate.xml_find(".//div[@data-videoid][@data-websiteid]"),
42 validate.union_get("data-videoid", "data-websiteid"),
43 )
44
45 id_data = self.session.http.get(
46 self.url,
47 schema=validate.Schema(
48 validate.parse_html(),
49 validate.any(
50 _find_and_get_attrs,
51 validate.all(
52 validate.xml_xpath_string(
53 ".//script[contains(text(),'data-videoid') and contains(text(),'data-websiteid')]/text()",
54 ),
55 validate.none_or_all(
56 str,
57 validate.regex(re.compile(r"""var\s+tmdPlayer\s*=\s*(?P<q>["'])(.*?)(?P=q)""")),
58 validate.get(0),
59 validate.parse_html(),
60 _find_and_get_attrs,
61 ),
62 ),
63 ),
64 ),
65 )
66
67 if not id_data:
68 return
69
70 video_id, website_id = id_data
71 log.debug(f"video_id={video_id}")
72 log.debug(f"website_id={website_id}")
73
74 self.id, self.title, hls_url = self.session.http.get(
75 f"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}",
76 schema=validate.Schema(
77 validate.parse_json(),
78 {
79 "success": True,
80 "video": {
81 "VideoId": str,
82 "Title": str,
83 "VideoSmilUrl": validate.url(),
84 },
85 },
86 validate.get("video"),
87 validate.union_get("VideoId", "Title", "VideoSmilUrl"),
88 ),
89 )
90 log.debug(f"hls_url={hls_url}")
91
92 secure_hls_url = self.session.http.get(
93 "https://securevideotoken.tmgrup.com.tr/webtv/secure",
94 params=f"url={hls_url}",
95 headers={"Referer": self.url},
96 schema=validate.Schema(
97 validate.parse_json(),
98 {
99 "Success": True,
100 "Url": validate.url(),
101 },
102 validate.get("Url"),
103 ),
104 )
105 log.debug(f"secure_hls_url={secure_hls_url}")
106
107 if secure_hls_url:
108 return HLSStream.parse_variant_playlist(self.session, secure_hls_url)
109
110
111 __plugin__ = Turkuvaz
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/turkuvaz.py b/src/streamlink/plugins/turkuvaz.py
--- a/src/streamlink/plugins/turkuvaz.py
+++ b/src/streamlink/plugins/turkuvaz.py
@@ -36,8 +36,19 @@
)
""", re.VERBOSE))
class Turkuvaz(Plugin):
+ _VIDEOID_LIVE = "00000000-0000-0000-0000-000000000000"
+
+ # hardcoded in https://i.tmgrup.com.tr/videojs/js/tmdplayersetup.js?v=651
+ # (via https://www.minikacocuk.com.tr/webtv/canli-yayin)
+ _MAPPING_WEBSITEID_HLSURL = {
+ "9BBE055A-4CF6-4BC3-A675-D40E89B55B91": "https://trkvz.daioncdn.net/aspor/aspor.m3u8?ce=3&app=45f847c4-04e8-419a-a561-2ebf87084765",
+ "0C1BC8FF-C3B1-45BE-A95B-F7BB9C8B03ED": "https://trkvz.daioncdn.net/a2tv/a2tv.m3u8?ce=3&app=59363a60-be96-4f73-9eff-355d0ff2c758",
+ "AAE2E325-4EAE-45B7-B017-26FD7DDB6CE4": "https://trkvz.daioncdn.net/minikago/minikago.m3u8?app=web&ce=3",
+ "01ED59F2-4067-4945-8204-45F6C6DB4045": "https://trkvz.daioncdn.net/minikago_cocuk/minikago_cocuk.m3u8?app=web&ce=3",
+ }
+
def _get_streams(self):
- _find_and_get_attrs = validate.Schema(
+ _find_and_get_attrs = validate.all(
validate.xml_find(".//div[@data-videoid][@data-websiteid]"),
validate.union_get("data-videoid", "data-websiteid"),
)
@@ -68,8 +79,8 @@
return
video_id, website_id = id_data
- log.debug(f"video_id={video_id}")
- log.debug(f"website_id={website_id}")
+ log.debug(f"{video_id=}")
+ log.debug(f"{website_id=}")
self.id, self.title, hls_url = self.session.http.get(
f"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}",
@@ -87,11 +98,14 @@
validate.union_get("VideoId", "Title", "VideoSmilUrl"),
),
)
- log.debug(f"hls_url={hls_url}")
+
+ if video_id == self._VIDEOID_LIVE:
+ hls_url = self._MAPPING_WEBSITEID_HLSURL.get(website_id.upper(), hls_url)
+ log.debug(f"{hls_url=}")
secure_hls_url = self.session.http.get(
"https://securevideotoken.tmgrup.com.tr/webtv/secure",
- params=f"url={hls_url}",
+ params={"url": hls_url},
headers={"Referer": self.url},
schema=validate.Schema(
validate.parse_json(),
@@ -102,7 +116,7 @@
validate.get("Url"),
),
)
- log.debug(f"secure_hls_url={secure_hls_url}")
+ log.debug(f"{secure_hls_url=}")
if secure_hls_url:
return HLSStream.parse_variant_playlist(self.session, secure_hls_url)
| {"golden_diff": "diff --git a/src/streamlink/plugins/turkuvaz.py b/src/streamlink/plugins/turkuvaz.py\n--- a/src/streamlink/plugins/turkuvaz.py\n+++ b/src/streamlink/plugins/turkuvaz.py\n@@ -36,8 +36,19 @@\n )\n \"\"\", re.VERBOSE))\n class Turkuvaz(Plugin):\n+ _VIDEOID_LIVE = \"00000000-0000-0000-0000-000000000000\"\n+\n+ # hardcoded in https://i.tmgrup.com.tr/videojs/js/tmdplayersetup.js?v=651\n+ # (via https://www.minikacocuk.com.tr/webtv/canli-yayin)\n+ _MAPPING_WEBSITEID_HLSURL = {\n+ \"9BBE055A-4CF6-4BC3-A675-D40E89B55B91\": \"https://trkvz.daioncdn.net/aspor/aspor.m3u8?ce=3&app=45f847c4-04e8-419a-a561-2ebf87084765\",\n+ \"0C1BC8FF-C3B1-45BE-A95B-F7BB9C8B03ED\": \"https://trkvz.daioncdn.net/a2tv/a2tv.m3u8?ce=3&app=59363a60-be96-4f73-9eff-355d0ff2c758\",\n+ \"AAE2E325-4EAE-45B7-B017-26FD7DDB6CE4\": \"https://trkvz.daioncdn.net/minikago/minikago.m3u8?app=web&ce=3\",\n+ \"01ED59F2-4067-4945-8204-45F6C6DB4045\": \"https://trkvz.daioncdn.net/minikago_cocuk/minikago_cocuk.m3u8?app=web&ce=3\",\n+ }\n+\n def _get_streams(self):\n- _find_and_get_attrs = validate.Schema(\n+ _find_and_get_attrs = validate.all(\n validate.xml_find(\".//div[@data-videoid][@data-websiteid]\"),\n validate.union_get(\"data-videoid\", \"data-websiteid\"),\n )\n@@ -68,8 +79,8 @@\n return\n \n video_id, website_id = id_data\n- log.debug(f\"video_id={video_id}\")\n- log.debug(f\"website_id={website_id}\")\n+ log.debug(f\"{video_id=}\")\n+ log.debug(f\"{website_id=}\")\n \n self.id, self.title, hls_url = self.session.http.get(\n f\"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}\",\n@@ -87,11 +98,14 @@\n validate.union_get(\"VideoId\", \"Title\", \"VideoSmilUrl\"),\n ),\n )\n- log.debug(f\"hls_url={hls_url}\")\n+\n+ if video_id == self._VIDEOID_LIVE:\n+ hls_url = self._MAPPING_WEBSITEID_HLSURL.get(website_id.upper(), hls_url)\n+ log.debug(f\"{hls_url=}\")\n \n secure_hls_url = self.session.http.get(\n \"https://securevideotoken.tmgrup.com.tr/webtv/secure\",\n- params=f\"url={hls_url}\",\n+ params={\"url\": hls_url},\n headers={\"Referer\": self.url},\n schema=validate.Schema(\n validate.parse_json(),\n@@ -102,7 +116,7 @@\n validate.get(\"Url\"),\n ),\n )\n- log.debug(f\"secure_hls_url={secure_hls_url}\")\n+ log.debug(f\"{secure_hls_url=}\")\n \n if secure_hls_url:\n return HLSStream.parse_variant_playlist(self.session, secure_hls_url)\n", "issue": "plugins.turkuvaz: no data on minikacocuk.com.tr\n### Checklist\r\n\r\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\n6.7.2\r\n\r\n### Description\r\n\r\n### Debug log\r\n\r\n```text\r\nNot working Python 3!!!! 
\"Minikacocuk\" channel ,please help.\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\n$description Turkish live TV channels from Turkuvaz Media Group, including Ahaber, ATV, Minika COCUK and MinikaGO.\n$url a2tv.com.tr\n$url ahaber.com.tr\n$url anews.com.tr\n$url apara.com.tr\n$url aspor.com.tr\n$url atv.com.tr\n$url atvavrupa.tv\n$url minikacocuk.com.tr\n$url minikago.com.tr\n$url vavtv.com.tr\n$type live, vod\n$metadata id\n$metadata title\n$region various\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?\n (?:\n atvavrupa\\.tv\n |\n (?:a2tv|ahaber|anews|apara|aspor|atv|minikacocuk|minikago|vavtv)\\.com\\.tr\n )\n\"\"\", re.VERBOSE))\nclass Turkuvaz(Plugin):\n def _get_streams(self):\n _find_and_get_attrs = validate.Schema(\n validate.xml_find(\".//div[@data-videoid][@data-websiteid]\"),\n validate.union_get(\"data-videoid\", \"data-websiteid\"),\n )\n\n id_data = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.any(\n _find_and_get_attrs,\n validate.all(\n validate.xml_xpath_string(\n \".//script[contains(text(),'data-videoid') and contains(text(),'data-websiteid')]/text()\",\n ),\n validate.none_or_all(\n str,\n validate.regex(re.compile(r\"\"\"var\\s+tmdPlayer\\s*=\\s*(?P<q>[\"'])(.*?)(?P=q)\"\"\")),\n validate.get(0),\n validate.parse_html(),\n _find_and_get_attrs,\n ),\n ),\n ),\n ),\n )\n\n if not id_data:\n return\n\n video_id, website_id = id_data\n log.debug(f\"video_id={video_id}\")\n log.debug(f\"website_id={website_id}\")\n\n self.id, self.title, hls_url = self.session.http.get(\n f\"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}\",\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"success\": True,\n \"video\": {\n \"VideoId\": str,\n \"Title\": str,\n \"VideoSmilUrl\": validate.url(),\n },\n },\n validate.get(\"video\"),\n validate.union_get(\"VideoId\", \"Title\", \"VideoSmilUrl\"),\n ),\n )\n log.debug(f\"hls_url={hls_url}\")\n\n secure_hls_url = self.session.http.get(\n \"https://securevideotoken.tmgrup.com.tr/webtv/secure\",\n params=f\"url={hls_url}\",\n headers={\"Referer\": self.url},\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"Success\": True,\n \"Url\": validate.url(),\n },\n validate.get(\"Url\"),\n ),\n )\n log.debug(f\"secure_hls_url={secure_hls_url}\")\n\n if secure_hls_url:\n return HLSStream.parse_variant_playlist(self.session, secure_hls_url)\n\n\n__plugin__ = Turkuvaz\n", "path": "src/streamlink/plugins/turkuvaz.py"}], "after_files": [{"content": "\"\"\"\n$description Turkish live TV channels from Turkuvaz Media Group, including Ahaber, ATV, Minika COCUK and MinikaGO.\n$url a2tv.com.tr\n$url ahaber.com.tr\n$url anews.com.tr\n$url apara.com.tr\n$url aspor.com.tr\n$url atv.com.tr\n$url atvavrupa.tv\n$url minikacocuk.com.tr\n$url minikago.com.tr\n$url vavtv.com.tr\n$type live, vod\n$metadata id\n$metadata title\n$region various\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?\n (?:\n atvavrupa\\.tv\n |\n (?:a2tv|ahaber|anews|apara|aspor|atv|minikacocuk|minikago|vavtv)\\.com\\.tr\n )\n\"\"\", re.VERBOSE))\nclass 
Turkuvaz(Plugin):\n _VIDEOID_LIVE = \"00000000-0000-0000-0000-000000000000\"\n\n # hardcoded in https://i.tmgrup.com.tr/videojs/js/tmdplayersetup.js?v=651\n # (via https://www.minikacocuk.com.tr/webtv/canli-yayin)\n _MAPPING_WEBSITEID_HLSURL = {\n \"9BBE055A-4CF6-4BC3-A675-D40E89B55B91\": \"https://trkvz.daioncdn.net/aspor/aspor.m3u8?ce=3&app=45f847c4-04e8-419a-a561-2ebf87084765\",\n \"0C1BC8FF-C3B1-45BE-A95B-F7BB9C8B03ED\": \"https://trkvz.daioncdn.net/a2tv/a2tv.m3u8?ce=3&app=59363a60-be96-4f73-9eff-355d0ff2c758\",\n \"AAE2E325-4EAE-45B7-B017-26FD7DDB6CE4\": \"https://trkvz.daioncdn.net/minikago/minikago.m3u8?app=web&ce=3\",\n \"01ED59F2-4067-4945-8204-45F6C6DB4045\": \"https://trkvz.daioncdn.net/minikago_cocuk/minikago_cocuk.m3u8?app=web&ce=3\",\n }\n\n def _get_streams(self):\n _find_and_get_attrs = validate.all(\n validate.xml_find(\".//div[@data-videoid][@data-websiteid]\"),\n validate.union_get(\"data-videoid\", \"data-websiteid\"),\n )\n\n id_data = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.any(\n _find_and_get_attrs,\n validate.all(\n validate.xml_xpath_string(\n \".//script[contains(text(),'data-videoid') and contains(text(),'data-websiteid')]/text()\",\n ),\n validate.none_or_all(\n str,\n validate.regex(re.compile(r\"\"\"var\\s+tmdPlayer\\s*=\\s*(?P<q>[\"'])(.*?)(?P=q)\"\"\")),\n validate.get(0),\n validate.parse_html(),\n _find_and_get_attrs,\n ),\n ),\n ),\n ),\n )\n\n if not id_data:\n return\n\n video_id, website_id = id_data\n log.debug(f\"{video_id=}\")\n log.debug(f\"{website_id=}\")\n\n self.id, self.title, hls_url = self.session.http.get(\n f\"https://videojs.tmgrup.com.tr/getvideo/{website_id}/{video_id}\",\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"success\": True,\n \"video\": {\n \"VideoId\": str,\n \"Title\": str,\n \"VideoSmilUrl\": validate.url(),\n },\n },\n validate.get(\"video\"),\n validate.union_get(\"VideoId\", \"Title\", \"VideoSmilUrl\"),\n ),\n )\n\n if video_id == self._VIDEOID_LIVE:\n hls_url = self._MAPPING_WEBSITEID_HLSURL.get(website_id.upper(), hls_url)\n log.debug(f\"{hls_url=}\")\n\n secure_hls_url = self.session.http.get(\n \"https://securevideotoken.tmgrup.com.tr/webtv/secure\",\n params={\"url\": hls_url},\n headers={\"Referer\": self.url},\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"Success\": True,\n \"Url\": validate.url(),\n },\n validate.get(\"Url\"),\n ),\n )\n log.debug(f\"{secure_hls_url=}\")\n\n if secure_hls_url:\n return HLSStream.parse_variant_playlist(self.session, secure_hls_url)\n\n\n__plugin__ = Turkuvaz\n", "path": "src/streamlink/plugins/turkuvaz.py"}]} | 1,474 | 935 |
gh_patches_debug_34583 | rasdani/github-patches | git_diff | scikit-image__scikit-image-1526 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error installing skimage-
Hi!
I've installed skimage on macOS 10.10, as described in the documentation:
pip install -U scikit-image
and it says that it needs scipy to work, so I had to install scipy to make it work.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2
3 descr = """Image Processing SciKit
4
5 Image processing algorithms for SciPy, including IO, morphology, filtering,
6 warping, color manipulation, object detection, etc.
7
8 Please refer to the online documentation at
9 http://scikit-image.org/
10 """
11
12 DISTNAME = 'scikit-image'
13 DESCRIPTION = 'Image processing routines for SciPy'
14 LONG_DESCRIPTION = descr
15 MAINTAINER = 'Stefan van der Walt'
16 MAINTAINER_EMAIL = '[email protected]'
17 URL = 'http://scikit-image.org'
18 LICENSE = 'Modified BSD'
19 DOWNLOAD_URL = 'http://github.com/scikit-image/scikit-image'
20
21 import os
22 import sys
23
24 import setuptools
25 from distutils.command.build_py import build_py
26
27
28 with open('skimage/__init__.py') as fid:
29 for line in fid:
30 if line.startswith('__version__'):
31 VERSION = line.strip().split()[-1][1:-1]
32 break
33
34 with open('requirements.txt') as fid:
35 INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]
36
37 # development versions do not have the cythonized files
38 if VERSION.endswith('dev'):
39 SETUP_REQUIRES = [r for r in INSTALL_REQUIRES if r.startswith('cython')]
40 else:
41 INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES
42 if not r.startswith('cython')]
43 SETUP_REQUIRES = []
44
45
46 # list requirements for PyPI
47 REQUIRES = [r.replace('>=', ' (>= ') + ')'
48 for r in INSTALL_REQUIRES + SETUP_REQUIRES]
49 REQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]
50
51
52 # do not attempt to install numpy and scipy until they have eggs available
53 INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES
54 if not r.startswith(('scipy', 'numpy'))]
55
56
57 def configuration(parent_package='', top_path=None):
58 if os.path.exists('MANIFEST'): os.remove('MANIFEST')
59
60 from numpy.distutils.misc_util import Configuration
61 config = Configuration(None, parent_package, top_path)
62
63 config.set_options(
64 ignore_setup_xxx_py=True,
65 assume_default_configuration=True,
66 delegate_options_to_subpackages=True,
67 quiet=True)
68
69 config.add_subpackage('skimage')
70 config.add_data_dir('skimage/data')
71
72 return config
73
74
75 if __name__ == "__main__":
76 # purposely fail loudly if numpy or scipy are not available
77 from numpy.distutils.core import setup
78 import scipy
79
80 setup(
81 name=DISTNAME,
82 description=DESCRIPTION,
83 long_description=LONG_DESCRIPTION,
84 maintainer=MAINTAINER,
85 maintainer_email=MAINTAINER_EMAIL,
86 url=URL,
87 license=LICENSE,
88 download_url=DOWNLOAD_URL,
89 version=VERSION,
90
91 classifiers=[
92 'Development Status :: 4 - Beta',
93 'Environment :: Console',
94 'Intended Audience :: Developers',
95 'Intended Audience :: Science/Research',
96 'License :: OSI Approved :: BSD License',
97 'Programming Language :: C',
98 'Programming Language :: Python',
99 'Programming Language :: Python :: 3',
100 'Topic :: Scientific/Engineering',
101 'Operating System :: Microsoft :: Windows',
102 'Operating System :: POSIX',
103 'Operating System :: Unix',
104 'Operating System :: MacOS',
105 ],
106
107 configuration=configuration,
108 setup_requires=SETUP_REQUIRES,
109 install_requires=INSTALL_REQUIRES,
110 requires=REQUIRES,
111 packages=setuptools.find_packages(exclude=['doc']),
112 include_package_data=True,
113 zip_safe=False, # the package can run out of an .egg file
114
115 entry_points={
116 'console_scripts': ['skivi = skimage.scripts.skivi:main'],
117 },
118
119 cmdclass={'build_py': build_py},
120 )
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -34,24 +34,10 @@
with open('requirements.txt') as fid:
INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]
-# development versions do not have the cythonized files
-if VERSION.endswith('dev'):
- SETUP_REQUIRES = [r for r in INSTALL_REQUIRES if r.startswith('cython')]
-else:
- INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES
- if not r.startswith('cython')]
- SETUP_REQUIRES = []
-
-
-# list requirements for PyPI
-REQUIRES = [r.replace('>=', ' (>= ') + ')'
- for r in INSTALL_REQUIRES + SETUP_REQUIRES]
+# requirements for those browsing PyPI
+REQUIRES = [r.replace('>=', ' (>= ') + ')' for r in INSTALL_REQUIRES]
REQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]
-
-
-# do not attempt to install numpy and scipy until they have eggs available
-INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES
- if not r.startswith(('scipy', 'numpy'))]
+REQUIRES = [r.replace('[array]', '') for r in REQUIRES]
def configuration(parent_package='', top_path=None):
@@ -73,9 +59,17 @@
if __name__ == "__main__":
- # purposely fail loudly if numpy or scipy are not available
- from numpy.distutils.core import setup
- import scipy
+ # purposely fail if numpy is not available
+ # other dependecies will be resolved by pip (install_requires)
+ try:
+ from numpy.distutils.core import setup
+ except ImportError:
+ print('To install scikit-image from source, you will need numpy.\n' +
+ 'Install numpy with pip:\n' +
+ 'pip install numpy\n'
+ 'Or use your operating system package manager. For more\n' +
+ 'details, see http://scikit-image.org/docs/stable/install.html')
+ sys.exit(1)
setup(
name=DISTNAME,
@@ -105,8 +99,9 @@
],
configuration=configuration,
- setup_requires=SETUP_REQUIRES,
install_requires=INSTALL_REQUIRES,
+ # install cython when running setup.py (source install)
+ setup_requires=['cython>=0.21'],
requires=REQUIRES,
packages=setuptools.find_packages(exclude=['doc']),
include_package_data=True,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,24 +34,10 @@\n with open('requirements.txt') as fid:\n INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]\n \n-# development versions do not have the cythonized files\n-if VERSION.endswith('dev'):\n- SETUP_REQUIRES = [r for r in INSTALL_REQUIRES if r.startswith('cython')]\n-else:\n- INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES\n- if not r.startswith('cython')]\n- SETUP_REQUIRES = []\n-\n-\n-# list requirements for PyPI\n-REQUIRES = [r.replace('>=', ' (>= ') + ')'\n- for r in INSTALL_REQUIRES + SETUP_REQUIRES]\n+# requirements for those browsing PyPI\n+REQUIRES = [r.replace('>=', ' (>= ') + ')' for r in INSTALL_REQUIRES]\n REQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]\n-\n-\n-# do not attempt to install numpy and scipy until they have eggs available\n-INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES\n- if not r.startswith(('scipy', 'numpy'))]\n+REQUIRES = [r.replace('[array]', '') for r in REQUIRES]\n \n \n def configuration(parent_package='', top_path=None):\n@@ -73,9 +59,17 @@\n \n \n if __name__ == \"__main__\":\n- # purposely fail loudly if numpy or scipy are not available\n- from numpy.distutils.core import setup\n- import scipy\n+ # purposely fail if numpy is not available\n+ # other dependecies will be resolved by pip (install_requires)\n+ try:\n+ from numpy.distutils.core import setup\n+ except ImportError:\n+ print('To install scikit-image from source, you will need numpy.\\n' +\n+ 'Install numpy with pip:\\n' +\n+ 'pip install numpy\\n'\n+ 'Or use your operating system package manager. For more\\n' +\n+ 'details, see http://scikit-image.org/docs/stable/install.html')\n+ sys.exit(1)\n \n setup(\n name=DISTNAME,\n@@ -105,8 +99,9 @@\n ],\n \n configuration=configuration,\n- setup_requires=SETUP_REQUIRES,\n install_requires=INSTALL_REQUIRES,\n+ # install cython when running setup.py (source install)\n+ setup_requires=['cython>=0.21'],\n requires=REQUIRES,\n packages=setuptools.find_packages(exclude=['doc']),\n include_package_data=True,\n", "issue": "Error installing skimage-\nHi!\nI've install skimage on a MacOs 10.10, like it's said in the documentation:\npip install -U scikit-image\nand it say that need scipy to work, I had to install it to work.\n\n", "before_files": [{"content": "#! 
/usr/bin/env python\n\ndescr = \"\"\"Image Processing SciKit\n\nImage processing algorithms for SciPy, including IO, morphology, filtering,\nwarping, color manipulation, object detection, etc.\n\nPlease refer to the online documentation at\nhttp://scikit-image.org/\n\"\"\"\n\nDISTNAME = 'scikit-image'\nDESCRIPTION = 'Image processing routines for SciPy'\nLONG_DESCRIPTION = descr\nMAINTAINER = 'Stefan van der Walt'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'http://scikit-image.org'\nLICENSE = 'Modified BSD'\nDOWNLOAD_URL = 'http://github.com/scikit-image/scikit-image'\n\nimport os\nimport sys\n\nimport setuptools\nfrom distutils.command.build_py import build_py\n\n\nwith open('skimage/__init__.py') as fid:\n for line in fid:\n if line.startswith('__version__'):\n VERSION = line.strip().split()[-1][1:-1]\n break\n\nwith open('requirements.txt') as fid:\n INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]\n\n# development versions do not have the cythonized files\nif VERSION.endswith('dev'):\n SETUP_REQUIRES = [r for r in INSTALL_REQUIRES if r.startswith('cython')]\nelse:\n INSTALL_REQUIRES = [r for r in INSTALL_REQUIRES\n if not r.startswith('cython')]\n SETUP_REQUIRES = []\n\n\n# list requirements for PyPI\nREQUIRES = [r.replace('>=', ' (>= ') + ')'\n for r in INSTALL_REQUIRES + SETUP_REQUIRES]\nREQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]\n\n\n# do not attempt to install numpy and scipy until they have eggs available\nINSTALL_REQUIRES = [r for r in INSTALL_REQUIRES\n if not r.startswith(('scipy', 'numpy'))]\n\n\ndef configuration(parent_package='', top_path=None):\n if os.path.exists('MANIFEST'): os.remove('MANIFEST')\n\n from numpy.distutils.misc_util import Configuration\n config = Configuration(None, parent_package, top_path)\n\n config.set_options(\n ignore_setup_xxx_py=True,\n assume_default_configuration=True,\n delegate_options_to_subpackages=True,\n quiet=True)\n\n config.add_subpackage('skimage')\n config.add_data_dir('skimage/data')\n\n return config\n\n\nif __name__ == \"__main__\":\n # purposely fail loudly if numpy or scipy are not available\n from numpy.distutils.core import setup\n import scipy\n\n setup(\n name=DISTNAME,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n url=URL,\n license=LICENSE,\n download_url=DOWNLOAD_URL,\n version=VERSION,\n\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: C',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n ],\n\n configuration=configuration,\n setup_requires=SETUP_REQUIRES,\n install_requires=INSTALL_REQUIRES,\n requires=REQUIRES,\n packages=setuptools.find_packages(exclude=['doc']),\n include_package_data=True,\n zip_safe=False, # the package can run out of an .egg file\n\n entry_points={\n 'console_scripts': ['skivi = skimage.scripts.skivi:main'],\n },\n\n cmdclass={'build_py': build_py},\n )\n", "path": "setup.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\n\ndescr = \"\"\"Image Processing SciKit\n\nImage processing algorithms for SciPy, including IO, morphology, filtering,\nwarping, color manipulation, object detection, etc.\n\nPlease refer to the online documentation at\nhttp://scikit-image.org/\n\"\"\"\n\nDISTNAME = 'scikit-image'\nDESCRIPTION = 'Image processing routines for SciPy'\nLONG_DESCRIPTION = descr\nMAINTAINER = 'Stefan van der Walt'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'http://scikit-image.org'\nLICENSE = 'Modified BSD'\nDOWNLOAD_URL = 'http://github.com/scikit-image/scikit-image'\n\nimport os\nimport sys\n\nimport setuptools\nfrom distutils.command.build_py import build_py\n\n\nwith open('skimage/__init__.py') as fid:\n for line in fid:\n if line.startswith('__version__'):\n VERSION = line.strip().split()[-1][1:-1]\n break\n\nwith open('requirements.txt') as fid:\n INSTALL_REQUIRES = [l.strip() for l in fid.readlines() if l]\n\n# requirements for those browsing PyPI\nREQUIRES = [r.replace('>=', ' (>= ') + ')' for r in INSTALL_REQUIRES]\nREQUIRES = [r.replace('==', ' (== ') for r in REQUIRES]\nREQUIRES = [r.replace('[array]', '') for r in REQUIRES]\n\n\ndef configuration(parent_package='', top_path=None):\n if os.path.exists('MANIFEST'): os.remove('MANIFEST')\n\n from numpy.distutils.misc_util import Configuration\n config = Configuration(None, parent_package, top_path)\n\n config.set_options(\n ignore_setup_xxx_py=True,\n assume_default_configuration=True,\n delegate_options_to_subpackages=True,\n quiet=True)\n\n config.add_subpackage('skimage')\n config.add_data_dir('skimage/data')\n\n return config\n\n\nif __name__ == \"__main__\":\n # purposely fail if numpy is not available\n # other dependecies will be resolved by pip (install_requires)\n try:\n from numpy.distutils.core import setup\n except ImportError:\n print('To install scikit-image from source, you will need numpy.\\n' +\n 'Install numpy with pip:\\n' +\n 'pip install numpy\\n'\n 'Or use your operating system package manager. For more\\n' +\n 'details, see http://scikit-image.org/docs/stable/install.html')\n sys.exit(1)\n\n setup(\n name=DISTNAME,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n url=URL,\n license=LICENSE,\n download_url=DOWNLOAD_URL,\n version=VERSION,\n\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: C',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n ],\n\n configuration=configuration,\n install_requires=INSTALL_REQUIRES,\n # install cython when running setup.py (source install)\n setup_requires=['cython>=0.21'],\n requires=REQUIRES,\n packages=setuptools.find_packages(exclude=['doc']),\n include_package_data=True,\n zip_safe=False, # the package can run out of an .egg file\n\n entry_points={\n 'console_scripts': ['skivi = skimage.scripts.skivi:main'],\n },\n\n cmdclass={'build_py': build_py},\n )\n", "path": "setup.py"}]} | 1,396 | 576 |
gh_patches_debug_1542 | rasdani/github-patches | git_diff | pyodide__pyodide-987 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add PYODIDE_MINIMAL build option
From the added documentation,
> Minimal pyodide build can be enabled by setting the `PYODIDE_MINIMAL`
environment variable. For instance,
> ```
> PYODIDE_MINIMAL=true PYODIDE_PACKAGES="micropip" make
> ```
>
> This will,
> - not include freetype and libpng libraries (it won't be possible to build matplotlib)
> - not include the jedi library, disabling auto-completion in iodide
>
> As a result, the size of the core pyodide binaries will be ~15% smaller.
Addresses two points from https://github.com/iodide-project/pyodide/issues/646
Before (master),
```
6,6M pyodide.asm.data
310K pyodide.asm.data.js
2,8M pyodide.asm.js
11M pyodide.asm.wasm
16K pyodide.js
16K pyodide_dev.js
Total: 20.7 MB
```
after (this PR with PYODIDE_MINIMAL=true)
```
5,1M build/pyodide.asm.data
124K build/pyodide.asm.data.js
2,6M build/pyodide.asm.js
9,9M build/pyodide.asm.wasm
16K build/pyodide.js
16K build/pyodide_dev.js
Total: 17.7 MB
```
so it's not that different (14% less), but it's a start.
Draft PR for now, as I think I need to go into a bit more detail on the tests that are run in the minimal build CI job.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyodide-py/pyodide/console.py`
Content:
```
1 from typing import List, Optional
2
3
4 def get_completions(
5 code: str, cursor: Optional[int] = None, namespaces: Optional[List] = None
6 ) -> List[str]:
7 """
8 Get code autocompletion candidates
9
10 Note that this function requires to have the jedi module loaded.
11
12 Parameters
13 ----------
14 code
15 the Python code to complete.
16 cursor
17 optional position in the code at which to autocomplete
18 namespaces
19 a list of namespaces
20
21 Returns
22 -------
23 a list of autocompleted modules
24 """
25 import jedi
26 import __main__
27
28 if namespaces is None:
29 namespaces = [__main__.__dict__]
30
31 if cursor is None:
32 cursor = len(code)
33 code = code[:cursor]
34 interp = jedi.Interpreter(code, namespaces)
35 completions = interp.completions()
36
37 return [x.name for x in completions]
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pyodide-py/pyodide/console.py b/src/pyodide-py/pyodide/console.py
--- a/src/pyodide-py/pyodide/console.py
+++ b/src/pyodide-py/pyodide/console.py
@@ -32,6 +32,6 @@
cursor = len(code)
code = code[:cursor]
interp = jedi.Interpreter(code, namespaces)
- completions = interp.completions()
+ completions = interp.complete()
return [x.name for x in completions]
| {"golden_diff": "diff --git a/src/pyodide-py/pyodide/console.py b/src/pyodide-py/pyodide/console.py\n--- a/src/pyodide-py/pyodide/console.py\n+++ b/src/pyodide-py/pyodide/console.py\n@@ -32,6 +32,6 @@\n cursor = len(code)\n code = code[:cursor]\n interp = jedi.Interpreter(code, namespaces)\n- completions = interp.completions()\n+ completions = interp.complete()\n \n return [x.name for x in completions]\n", "issue": "Add PYODIDE_MINIMAL build option\nFrom the added documentation,\r\n\r\n> Minimal pyodide build can be enabled by setting the `PYODIDE_MINIMAL`\r\nenvironment variable. For instance,\r\n> ``` \r\n> PYODIDE_MINIMAL=true PYODIDE_PACKAGES=\"micropip\" make\r\n> ``` \r\n> \r\n> This will,\r\n> - not include freetype and libpng libraries (it won't be possible to build matplotlib)\r\n> - not include the jedi library, disabling auto-completion in iodide\r\n> \r\n> As as a result the size will of the core pyodide binaries will be ~15% smaller.\r\n\r\nAddresses two points from https://github.com/iodide-project/pyodide/issues/646\r\n\r\nBefore (master),\r\n```\r\n6,6M pyodide.asm.data\r\n310K pyodide.asm.data.js\r\n2,8M pyodide.asm.js\r\n 11M pyodide.asm.wasm\r\n 16K pyodide.js\r\n 16K pyodide_dev.js\r\n\r\nTotal: 20.7 MB\r\n```\r\nafter (this PR with PYODIDE_MINIMAL=true)\r\n```\r\n5,1M build/pyodide.asm.data\r\n124K build/pyodide.asm.data.js\r\n2,6M build/pyodide.asm.js\r\n9,9M build/pyodide.asm.wasm\r\n 16K build/pyodide.js\r\n 16K build/pyodide_dev.js\r\n\r\nTotal: 17.7 MB\r\n```\r\n\r\nso it's not that different (14% less), but it's start. \r\n\r\nDraft PR for now, as I think I need to go in a bit more details through tests that are run in the minimal build CI job.\n", "before_files": [{"content": "from typing import List, Optional\n\n\ndef get_completions(\n code: str, cursor: Optional[int] = None, namespaces: Optional[List] = None\n) -> List[str]:\n \"\"\"\n Get code autocompletion candidates\n\n Note that this function requires to have the jedi module loaded.\n\n Parameters\n ----------\n code\n the Python code to complete.\n cursor\n optional position in the code at which to autocomplete\n namespaces\n a list of namespaces\n\n Returns\n -------\n a list of autocompleted modules\n \"\"\"\n import jedi\n import __main__\n\n if namespaces is None:\n namespaces = [__main__.__dict__]\n\n if cursor is None:\n cursor = len(code)\n code = code[:cursor]\n interp = jedi.Interpreter(code, namespaces)\n completions = interp.completions()\n\n return [x.name for x in completions]\n", "path": "src/pyodide-py/pyodide/console.py"}], "after_files": [{"content": "from typing import List, Optional\n\n\ndef get_completions(\n code: str, cursor: Optional[int] = None, namespaces: Optional[List] = None\n) -> List[str]:\n \"\"\"\n Get code autocompletion candidates\n\n Note that this function requires to have the jedi module loaded.\n\n Parameters\n ----------\n code\n the Python code to complete.\n cursor\n optional position in the code at which to autocomplete\n namespaces\n a list of namespaces\n\n Returns\n -------\n a list of autocompleted modules\n \"\"\"\n import jedi\n import __main__\n\n if namespaces is None:\n namespaces = [__main__.__dict__]\n\n if cursor is None:\n cursor = len(code)\n code = code[:cursor]\n interp = jedi.Interpreter(code, namespaces)\n completions = interp.complete()\n\n return [x.name for x in completions]\n", "path": "src/pyodide-py/pyodide/console.py"}]} | 918 | 122 |
gh_patches_debug_14695 | rasdani/github-patches | git_diff | Netflix__lemur-142 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SubCA autogenerated descriptions for their certs are incorrect
If you create a root CA, and look up the certificate for that CA its description is:
This is the ROOT certificate for the $CN certificate authority.
If you create a subCA off of that rootCA, and look up the certificate for that SubCA its description is:
This is the ROOT certificate for the $CN certificate authority
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lemur/authorities/service.py`
Content:
```
1 """
2 .. module: lemur.authorities.service
3 :platform: Unix
4 :synopsis: This module contains all of the services level functions used to
5 administer authorities in Lemur
6 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
7 :license: Apache, see LICENSE for more details.
8 .. moduleauthor:: Kevin Glisson <[email protected]>
9
10 """
11 from flask import g
12 from flask import current_app
13
14 from lemur import database
15 from lemur.authorities.models import Authority
16 from lemur.roles import service as role_service
17 from lemur.notifications import service as notification_service
18
19 from lemur.roles.models import Role
20 from lemur.certificates.models import Certificate
21
22 from lemur.plugins.base import plugins
23
24
25 def update(authority_id, description=None, owner=None, active=None, roles=None):
26 """
27 Update a an authority with new values.
28
29 :param authority_id:
30 :param roles: roles that are allowed to use this authority
31 :rtype : Authority
32 :return:
33 """
34 authority = get(authority_id)
35 if roles:
36 authority = database.update_list(authority, 'roles', Role, roles)
37
38 if active:
39 authority.active = active
40
41 authority.description = description
42 authority.owner = owner
43 return database.update(authority)
44
45
46 def create(kwargs):
47 """
48 Create a new authority.
49
50 :rtype : Authority
51 :return:
52 """
53
54 issuer = plugins.get(kwargs.get('pluginName'))
55
56 kwargs['creator'] = g.current_user.email
57 cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)
58
59 cert = Certificate(cert_body, chain=intermediate)
60 cert.owner = kwargs['ownerEmail']
61 cert.description = "This is the ROOT certificate for the {0} certificate authority".format(kwargs.get('caName'))
62 cert.user = g.current_user
63
64 cert.notifications = notification_service.create_default_expiration_notifications(
65 'DEFAULT_SECURITY',
66 current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')
67 )
68
69 # we create and attach any roles that the issuer gives us
70 role_objs = []
71 for r in issuer_roles:
72
73 role = role_service.create(
74 r['name'],
75 password=r['password'],
76 description="{0} auto generated role".format(kwargs.get('pluginName')),
77 username=r['username'])
78
79 # the user creating the authority should be able to administer it
80 if role.username == 'admin':
81 g.current_user.roles.append(role)
82
83 role_objs.append(role)
84
85 authority = Authority(
86 kwargs.get('caName'),
87 kwargs['ownerEmail'],
88 kwargs['pluginName'],
89 cert_body,
90 description=kwargs['caDescription'],
91 chain=intermediate,
92 roles=role_objs
93 )
94
95 database.update(cert)
96 authority = database.create(authority)
97
98 g.current_user.authorities.append(authority)
99
100 return authority
101
102
103 def get_all():
104 """
105 Get all authorities that are currently in Lemur.
106
107 :rtype : List
108 :return:
109 """
110 query = database.session_query(Authority)
111 return database.find_all(query, Authority, {}).all()
112
113
114 def get(authority_id):
115 """
116 Retrieves an authority given it's ID
117
118 :rtype : Authority
119 :param authority_id:
120 :return:
121 """
122 return database.get(Authority, authority_id)
123
124
125 def get_by_name(authority_name):
126 """
127 Retrieves an authority given it's name.
128
129 :param authority_name:
130 :rtype : Authority
131 :return:
132 """
133 return database.get(Authority, authority_name, field='name')
134
135
136 def get_authority_role(ca_name):
137 """
138 Attempts to get the authority role for a given ca uses current_user
139 as a basis for accomplishing that.
140
141 :param ca_name:
142 """
143 if g.current_user.is_admin:
144 authority = get_by_name(ca_name)
145 # TODO we should pick admin ca roles for admin
146 return authority.roles[0]
147 else:
148 for role in g.current_user.roles:
149 if role.authority:
150 if role.authority.name == ca_name:
151 return role
152
153
154 def render(args):
155 """
156 Helper that helps us render the REST Api responses.
157 :param args:
158 :return:
159 """
160 query = database.session_query(Authority)
161 sort_by = args.pop('sort_by')
162 sort_dir = args.pop('sort_dir')
163 page = args.pop('page')
164 count = args.pop('count')
165 filt = args.pop('filter')
166
167 if filt:
168 terms = filt.split(';')
169 if 'active' in filt: # this is really weird but strcmp seems to not work here??
170 query = query.filter(Authority.active == terms[1])
171 else:
172 query = database.filter(query, Authority, terms)
173
174 # we make sure that a user can only use an authority they either own are are a member of - admins can see all
175 if not g.current_user.is_admin:
176 authority_ids = []
177 for role in g.current_user.roles:
178 if role.authority:
179 authority_ids.append(role.authority.id)
180 query = query.filter(Authority.id.in_(authority_ids))
181
182 query = database.find_all(query, Authority, args)
183
184 if sort_by and sort_dir:
185 query = database.sort(query, Authority, sort_by, sort_dir)
186
187 return database.paginate(query, page, count)
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lemur/authorities/service.py b/lemur/authorities/service.py
--- a/lemur/authorities/service.py
+++ b/lemur/authorities/service.py
@@ -58,7 +58,15 @@
cert = Certificate(cert_body, chain=intermediate)
cert.owner = kwargs['ownerEmail']
- cert.description = "This is the ROOT certificate for the {0} certificate authority".format(kwargs.get('caName'))
+
+ if kwargs['caType'] == 'subca':
+ cert.description = "This is the ROOT certificate for the {0} sub certificate authority the parent \
+ authority is {1}.".format(kwargs.get('caName'), kwargs.get('caParent'))
+ else:
+ cert.description = "This is the ROOT certificate for the {0} certificate authority.".format(
+ kwargs.get('caName')
+ )
+
cert.user = g.current_user
cert.notifications = notification_service.create_default_expiration_notifications(
| {"golden_diff": "diff --git a/lemur/authorities/service.py b/lemur/authorities/service.py\n--- a/lemur/authorities/service.py\n+++ b/lemur/authorities/service.py\n@@ -58,7 +58,15 @@\n \n cert = Certificate(cert_body, chain=intermediate)\n cert.owner = kwargs['ownerEmail']\n- cert.description = \"This is the ROOT certificate for the {0} certificate authority\".format(kwargs.get('caName'))\n+\n+ if kwargs['caType'] == 'subca':\n+ cert.description = \"This is the ROOT certificate for the {0} sub certificate authority the parent \\\n+ authority is {1}.\".format(kwargs.get('caName'), kwargs.get('caParent'))\n+ else:\n+ cert.description = \"This is the ROOT certificate for the {0} certificate authority.\".format(\n+ kwargs.get('caName')\n+ )\n+\n cert.user = g.current_user\n \n cert.notifications = notification_service.create_default_expiration_notifications(\n", "issue": "SubCA autogenerated descriptions for their certs are incorrect\nIf you create a root CA, and look up the certificate for that CA its description is: \nThis is the ROOT certificate for the $CN certificate authority.\nIf you create a subCA off of that rootCA, and look up the certificate for that SubCA its description is: \nThis is the ROOT certificate for the $CN certificate authority\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.authorities.service\n :platform: Unix\n :synopsis: This module contains all of the services level functions used to\n administer authorities in Lemur\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\n\"\"\"\nfrom flask import g\nfrom flask import current_app\n\nfrom lemur import database\nfrom lemur.authorities.models import Authority\nfrom lemur.roles import service as role_service\nfrom lemur.notifications import service as notification_service\n\nfrom lemur.roles.models import Role\nfrom lemur.certificates.models import Certificate\n\nfrom lemur.plugins.base import plugins\n\n\ndef update(authority_id, description=None, owner=None, active=None, roles=None):\n \"\"\"\n Update a an authority with new values.\n\n :param authority_id:\n :param roles: roles that are allowed to use this authority\n :rtype : Authority\n :return:\n \"\"\"\n authority = get(authority_id)\n if roles:\n authority = database.update_list(authority, 'roles', Role, roles)\n\n if active:\n authority.active = active\n\n authority.description = description\n authority.owner = owner\n return database.update(authority)\n\n\ndef create(kwargs):\n \"\"\"\n Create a new authority.\n\n :rtype : Authority\n :return:\n \"\"\"\n\n issuer = plugins.get(kwargs.get('pluginName'))\n\n kwargs['creator'] = g.current_user.email\n cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)\n\n cert = Certificate(cert_body, chain=intermediate)\n cert.owner = kwargs['ownerEmail']\n cert.description = \"This is the ROOT certificate for the {0} certificate authority\".format(kwargs.get('caName'))\n cert.user = g.current_user\n\n cert.notifications = notification_service.create_default_expiration_notifications(\n 'DEFAULT_SECURITY',\n current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')\n )\n\n # we create and attach any roles that the issuer gives us\n role_objs = []\n for r in issuer_roles:\n\n role = role_service.create(\n r['name'],\n password=r['password'],\n description=\"{0} auto generated role\".format(kwargs.get('pluginName')),\n username=r['username'])\n\n # the user creating the authority should be able 
to administer it\n if role.username == 'admin':\n g.current_user.roles.append(role)\n\n role_objs.append(role)\n\n authority = Authority(\n kwargs.get('caName'),\n kwargs['ownerEmail'],\n kwargs['pluginName'],\n cert_body,\n description=kwargs['caDescription'],\n chain=intermediate,\n roles=role_objs\n )\n\n database.update(cert)\n authority = database.create(authority)\n\n g.current_user.authorities.append(authority)\n\n return authority\n\n\ndef get_all():\n \"\"\"\n Get all authorities that are currently in Lemur.\n\n :rtype : List\n :return:\n \"\"\"\n query = database.session_query(Authority)\n return database.find_all(query, Authority, {}).all()\n\n\ndef get(authority_id):\n \"\"\"\n Retrieves an authority given it's ID\n\n :rtype : Authority\n :param authority_id:\n :return:\n \"\"\"\n return database.get(Authority, authority_id)\n\n\ndef get_by_name(authority_name):\n \"\"\"\n Retrieves an authority given it's name.\n\n :param authority_name:\n :rtype : Authority\n :return:\n \"\"\"\n return database.get(Authority, authority_name, field='name')\n\n\ndef get_authority_role(ca_name):\n \"\"\"\n Attempts to get the authority role for a given ca uses current_user\n as a basis for accomplishing that.\n\n :param ca_name:\n \"\"\"\n if g.current_user.is_admin:\n authority = get_by_name(ca_name)\n # TODO we should pick admin ca roles for admin\n return authority.roles[0]\n else:\n for role in g.current_user.roles:\n if role.authority:\n if role.authority.name == ca_name:\n return role\n\n\ndef render(args):\n \"\"\"\n Helper that helps us render the REST Api responses.\n :param args:\n :return:\n \"\"\"\n query = database.session_query(Authority)\n sort_by = args.pop('sort_by')\n sort_dir = args.pop('sort_dir')\n page = args.pop('page')\n count = args.pop('count')\n filt = args.pop('filter')\n\n if filt:\n terms = filt.split(';')\n if 'active' in filt: # this is really weird but strcmp seems to not work here??\n query = query.filter(Authority.active == terms[1])\n else:\n query = database.filter(query, Authority, terms)\n\n # we make sure that a user can only use an authority they either own are are a member of - admins can see all\n if not g.current_user.is_admin:\n authority_ids = []\n for role in g.current_user.roles:\n if role.authority:\n authority_ids.append(role.authority.id)\n query = query.filter(Authority.id.in_(authority_ids))\n\n query = database.find_all(query, Authority, args)\n\n if sort_by and sort_dir:\n query = database.sort(query, Authority, sort_by, sort_dir)\n\n return database.paginate(query, page, count)\n", "path": "lemur/authorities/service.py"}], "after_files": [{"content": "\"\"\"\n.. module: lemur.authorities.service\n :platform: Unix\n :synopsis: This module contains all of the services level functions used to\n administer authorities in Lemur\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. 
moduleauthor:: Kevin Glisson <[email protected]>\n\n\"\"\"\nfrom flask import g\nfrom flask import current_app\n\nfrom lemur import database\nfrom lemur.authorities.models import Authority\nfrom lemur.roles import service as role_service\nfrom lemur.notifications import service as notification_service\n\nfrom lemur.roles.models import Role\nfrom lemur.certificates.models import Certificate\n\nfrom lemur.plugins.base import plugins\n\n\ndef update(authority_id, description=None, owner=None, active=None, roles=None):\n \"\"\"\n Update a an authority with new values.\n\n :param authority_id:\n :param roles: roles that are allowed to use this authority\n :rtype : Authority\n :return:\n \"\"\"\n authority = get(authority_id)\n if roles:\n authority = database.update_list(authority, 'roles', Role, roles)\n\n if active:\n authority.active = active\n\n authority.description = description\n authority.owner = owner\n return database.update(authority)\n\n\ndef create(kwargs):\n \"\"\"\n Create a new authority.\n\n :rtype : Authority\n :return:\n \"\"\"\n\n issuer = plugins.get(kwargs.get('pluginName'))\n\n kwargs['creator'] = g.current_user.email\n cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)\n\n cert = Certificate(cert_body, chain=intermediate)\n cert.owner = kwargs['ownerEmail']\n\n if kwargs['caType'] == 'subca':\n cert.description = \"This is the ROOT certificate for the {0} sub certificate authority the parent \\\n authority is {1}.\".format(kwargs.get('caName'), kwargs.get('caParent'))\n else:\n cert.description = \"This is the ROOT certificate for the {0} certificate authority.\".format(\n kwargs.get('caName')\n )\n\n cert.user = g.current_user\n\n cert.notifications = notification_service.create_default_expiration_notifications(\n 'DEFAULT_SECURITY',\n current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')\n )\n\n # we create and attach any roles that the issuer gives us\n role_objs = []\n for r in issuer_roles:\n\n role = role_service.create(\n r['name'],\n password=r['password'],\n description=\"{0} auto generated role\".format(kwargs.get('pluginName')),\n username=r['username'])\n\n # the user creating the authority should be able to administer it\n if role.username == 'admin':\n g.current_user.roles.append(role)\n\n role_objs.append(role)\n\n authority = Authority(\n kwargs.get('caName'),\n kwargs['ownerEmail'],\n kwargs['pluginName'],\n cert_body,\n description=kwargs['caDescription'],\n chain=intermediate,\n roles=role_objs\n )\n\n database.update(cert)\n authority = database.create(authority)\n\n g.current_user.authorities.append(authority)\n\n return authority\n\n\ndef get_all():\n \"\"\"\n Get all authorities that are currently in Lemur.\n\n :rtype : List\n :return:\n \"\"\"\n query = database.session_query(Authority)\n return database.find_all(query, Authority, {}).all()\n\n\ndef get(authority_id):\n \"\"\"\n Retrieves an authority given it's ID\n\n :rtype : Authority\n :param authority_id:\n :return:\n \"\"\"\n return database.get(Authority, authority_id)\n\n\ndef get_by_name(authority_name):\n \"\"\"\n Retrieves an authority given it's name.\n\n :param authority_name:\n :rtype : Authority\n :return:\n \"\"\"\n return database.get(Authority, authority_name, field='name')\n\n\ndef get_authority_role(ca_name):\n \"\"\"\n Attempts to get the authority role for a given ca uses current_user\n as a basis for accomplishing that.\n\n :param ca_name:\n \"\"\"\n if g.current_user.is_admin:\n authority = get_by_name(ca_name)\n # TODO we should pick admin ca roles for 
admin\n return authority.roles[0]\n else:\n for role in g.current_user.roles:\n if role.authority:\n if role.authority.name == ca_name:\n return role\n\n\ndef render(args):\n \"\"\"\n Helper that helps us render the REST Api responses.\n :param args:\n :return:\n \"\"\"\n query = database.session_query(Authority)\n sort_by = args.pop('sort_by')\n sort_dir = args.pop('sort_dir')\n page = args.pop('page')\n count = args.pop('count')\n filt = args.pop('filter')\n\n if filt:\n terms = filt.split(';')\n if 'active' in filt: # this is really weird but strcmp seems to not work here??\n query = query.filter(Authority.active == terms[1])\n else:\n query = database.filter(query, Authority, terms)\n\n # we make sure that a user can only use an authority they either own are are a member of - admins can see all\n if not g.current_user.is_admin:\n authority_ids = []\n for role in g.current_user.roles:\n if role.authority:\n authority_ids.append(role.authority.id)\n query = query.filter(Authority.id.in_(authority_ids))\n\n query = database.find_all(query, Authority, args)\n\n if sort_by and sort_dir:\n query = database.sort(query, Authority, sort_by, sort_dir)\n\n return database.paginate(query, page, count)\n", "path": "lemur/authorities/service.py"}]} | 1,972 | 219 |
gh_patches_debug_66174 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1985 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Provide documentation about user_groups and permission changes
### Issue description
If we edit permissions for User Groups, such as to make different items viewable in the Django admin, this requires a very specific set of instructions that is only documented inside the user group migration files. We should reference this documentation somewhere else in our developer readme so it's clear what needs to be done if you ever change permissions in the future.
We had a case where permissions were changed and it wasn't clear what needed to be done (make a manual migration). The developers lost time trying to figure out why permissions didn't show up and why running makemigrations changed nothing. I suggest adding an inline code comment around where permissions are set in user_groups, too, that points to documentation in the developer readme. This may save future developers' time.
### Acceptance criteria
- [ ] update developer documentation (inline and md) about user_group/ permission changes
### Additional context
The migration files that have documentation about this are all ones that say _create_groups_v (followed by a number), such as 0075_create_groups_v08.py. See those for the current documentation, and note that this was hard for developers to find.
[Slack thread](https://cisa-corp.slack.com/archives/C05BGB4L5NF/p1709939481415349)
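For illustration, the manual migration those files describe is essentially a `RunPython` wrapper around the group-creation helpers; a sketch (hypothetical file name and dependency — the real migrations in the repository may differ) could look like:
```python
# registrar/migrations/00XX_create_groups_vNN.py  (hypothetical name)
from django.db import migrations

from registrar.models.user_group import UserGroup


class Migration(migrations.Migration):
    dependencies = [
        # Whatever the latest groups migration is, e.g. the one mentioned above.
        ("registrar", "0075_create_groups_v08"),
    ]

    operations = [
        # Re-run the permission setup so edited permissions actually take effect.
        migrations.RunPython(
            UserGroup.create_cisa_analyst_group, migrations.RunPython.noop
        ),
        migrations.RunPython(
            UserGroup.create_full_access_group, migrations.RunPython.noop
        ),
    ]
```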
### Links to other issues
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/models/user_group.py`
Content:
```
1 from django.contrib.auth.models import Group
2 import logging
3
4 logger = logging.getLogger(__name__)
5
6
7 class UserGroup(Group):
8 class Meta:
9 verbose_name = "User group"
10 verbose_name_plural = "User groups"
11
12 def create_cisa_analyst_group(apps, schema_editor):
13 """This method gets run from a data migration."""
14
15 # Hard to pass self to these methods as the calls from migrations
16 # are only expecting apps and schema_editor, so we'll just define
17 # apps, schema_editor in the local scope instead
18 CISA_ANALYST_GROUP_PERMISSIONS = [
19 {
20 "app_label": "auditlog",
21 "model": "logentry",
22 "permissions": ["view_logentry"],
23 },
24 {
25 "app_label": "registrar",
26 "model": "contact",
27 "permissions": ["change_contact"],
28 },
29 {
30 "app_label": "registrar",
31 "model": "domainrequest",
32 "permissions": ["change_domainrequest"],
33 },
34 {
35 "app_label": "registrar",
36 "model": "domain",
37 "permissions": ["view_domain"],
38 },
39 {
40 "app_label": "registrar",
41 "model": "draftdomain",
42 "permissions": ["change_draftdomain"],
43 },
44 {
45 "app_label": "registrar",
46 "model": "user",
47 "permissions": ["analyst_access_permission", "change_user"],
48 },
49 {
50 "app_label": "registrar",
51 "model": "domaininvitation",
52 "permissions": ["add_domaininvitation", "view_domaininvitation"],
53 },
54 {
55 "app_label": "registrar",
56 "model": "website",
57 "permissions": ["change_website"],
58 },
59 {
60 "app_label": "registrar",
61 "model": "userdomainrole",
62 "permissions": ["view_userdomainrole", "delete_userdomainrole"],
63 },
64 {
65 "app_label": "registrar",
66 "model": "verifiedbystaff",
67 "permissions": ["add_verifiedbystaff", "change_verifiedbystaff", "delete_verifiedbystaff"],
68 },
69 {
70 "app_label": "registrar",
71 "model": "federalagency",
72 "permissions": ["add_federalagency", "change_federalagency", "delete_federalagency"],
73 },
74 ]
75
76 # Avoid error: You can't execute queries until the end
77 # of the 'atomic' block.
78 # From django docs:
79 # https://docs.djangoproject.com/en/4.2/topics/migrations/#data-migrations
80 # We can’t import the Person model directly as it may be a newer
81 # version than this migration expects. We use the historical version.
82 ContentType = apps.get_model("contenttypes", "ContentType")
83 Permission = apps.get_model("auth", "Permission")
84 UserGroup = apps.get_model("registrar", "UserGroup")
85
86 logger.info("Going to create the Analyst Group")
87 try:
88 cisa_analysts_group, _ = UserGroup.objects.get_or_create(
89 name="cisa_analysts_group",
90 )
91
92 cisa_analysts_group.permissions.clear()
93
94 for permission in CISA_ANALYST_GROUP_PERMISSIONS:
95 app_label = permission["app_label"]
96 model_name = permission["model"]
97 permissions = permission["permissions"]
98
99 # Retrieve the content type for the app and model
100 content_type = ContentType.objects.get(app_label=app_label, model=model_name)
101
102 # Retrieve the permissions based on their codenames
103 permissions = Permission.objects.filter(content_type=content_type, codename__in=permissions)
104
105 # Assign the permissions to the group
106 cisa_analysts_group.permissions.add(*permissions)
107
108 # Convert the permissions QuerySet to a list of codenames
109 permission_list = list(permissions.values_list("codename", flat=True))
110
111 logger.debug(
112 app_label
113 + " | "
114 + model_name
115 + " | "
116 + ", ".join(permission_list)
117 + " added to group "
118 + cisa_analysts_group.name
119 )
120
121 cisa_analysts_group.save()
122 logger.debug("CISA Analyst permissions added to group " + cisa_analysts_group.name)
123 except Exception as e:
124 logger.error(f"Error creating analyst permissions group: {e}")
125
126 def create_full_access_group(apps, schema_editor):
127 """This method gets run from a data migration."""
128
129 Permission = apps.get_model("auth", "Permission")
130 UserGroup = apps.get_model("registrar", "UserGroup")
131
132 logger.info("Going to create the Full Access Group")
133 try:
134 full_access_group, _ = UserGroup.objects.get_or_create(
135 name="full_access_group",
136 )
137 # Get all available permissions
138 all_permissions = Permission.objects.all()
139
140 # Assign all permissions to the group
141 full_access_group.permissions.add(*all_permissions)
142
143 full_access_group.save()
144 logger.debug("All permissions added to group " + full_access_group.name)
145 except Exception as e:
146 logger.error(f"Error creating full access group: {e}")
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/registrar/models/user_group.py b/src/registrar/models/user_group.py
--- a/src/registrar/models/user_group.py
+++ b/src/registrar/models/user_group.py
@@ -5,6 +5,11 @@
class UserGroup(Group):
+ """
+ UserGroup sets read and write permissions for superusers (who have full access)
+ and analysts. For more details, see the dev docs for user-permissions.
+ """
+
class Meta:
verbose_name = "User group"
verbose_name_plural = "User groups"
| {"golden_diff": "diff --git a/src/registrar/models/user_group.py b/src/registrar/models/user_group.py\n--- a/src/registrar/models/user_group.py\n+++ b/src/registrar/models/user_group.py\n@@ -5,6 +5,11 @@\n \n \n class UserGroup(Group):\n+ \"\"\"\n+ UserGroup sets read and write permissions for superusers (who have full access)\n+ and analysts. For more details, see the dev docs for user-permissions.\n+ \"\"\"\n+\n class Meta:\n verbose_name = \"User group\"\n verbose_name_plural = \"User groups\"\n", "issue": "Provide documentation about user_groups and permission changes\n### Issue description\r\n\r\nIf we edit permissions for User Groups, such as to make different items viewable on django admin, this requires a very specific set of instructions that is only documentated inside user group migration files. We should reference this documentation somewhere else in our developer readme so it's clear what is needed to be done if you ever change permissions in the future.\r\n\r\n We had a case where permissions were changed and it wasn't clear what needed to be done (make a manual migration). The developers lost time trying to figure out why permissions didn't show and why running makemigrations changed nothing. I suggest adding an inline code comment around where permissions are set in user_groups too that points to documentation in developer readme. This may save future developer's time.\r\n\r\n### Acceptance criteria\r\n\r\n- [ ] update developer documentation (inline and md) about user_group/ permission changes\r\n\r\n### Additional context\r\nthe migration files that have documentation about this are all ones that say _create_groups_v (followed by a number), such as 0075_create_groups_v08.py. See those for the current documentation and note that this was hard for developers to find.\r\n[Slack thread](https://cisa-corp.slack.com/archives/C05BGB4L5NF/p1709939481415349)\r\n\r\n### Links to other issues\r\n\r\n_No response_\n", "before_files": [{"content": "from django.contrib.auth.models import Group\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass UserGroup(Group):\n class Meta:\n verbose_name = \"User group\"\n verbose_name_plural = \"User groups\"\n\n def create_cisa_analyst_group(apps, schema_editor):\n \"\"\"This method gets run from a data migration.\"\"\"\n\n # Hard to pass self to these methods as the calls from migrations\n # are only expecting apps and schema_editor, so we'll just define\n # apps, schema_editor in the local scope instead\n CISA_ANALYST_GROUP_PERMISSIONS = [\n {\n \"app_label\": \"auditlog\",\n \"model\": \"logentry\",\n \"permissions\": [\"view_logentry\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"contact\",\n \"permissions\": [\"change_contact\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domainrequest\",\n \"permissions\": [\"change_domainrequest\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domain\",\n \"permissions\": [\"view_domain\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"draftdomain\",\n \"permissions\": [\"change_draftdomain\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"user\",\n \"permissions\": [\"analyst_access_permission\", \"change_user\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domaininvitation\",\n \"permissions\": [\"add_domaininvitation\", \"view_domaininvitation\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"website\",\n \"permissions\": [\"change_website\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": 
\"userdomainrole\",\n \"permissions\": [\"view_userdomainrole\", \"delete_userdomainrole\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"verifiedbystaff\",\n \"permissions\": [\"add_verifiedbystaff\", \"change_verifiedbystaff\", \"delete_verifiedbystaff\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"federalagency\",\n \"permissions\": [\"add_federalagency\", \"change_federalagency\", \"delete_federalagency\"],\n },\n ]\n\n # Avoid error: You can't execute queries until the end\n # of the 'atomic' block.\n # From django docs:\n # https://docs.djangoproject.com/en/4.2/topics/migrations/#data-migrations\n # We can\u2019t import the Person model directly as it may be a newer\n # version than this migration expects. We use the historical version.\n ContentType = apps.get_model(\"contenttypes\", \"ContentType\")\n Permission = apps.get_model(\"auth\", \"Permission\")\n UserGroup = apps.get_model(\"registrar\", \"UserGroup\")\n\n logger.info(\"Going to create the Analyst Group\")\n try:\n cisa_analysts_group, _ = UserGroup.objects.get_or_create(\n name=\"cisa_analysts_group\",\n )\n\n cisa_analysts_group.permissions.clear()\n\n for permission in CISA_ANALYST_GROUP_PERMISSIONS:\n app_label = permission[\"app_label\"]\n model_name = permission[\"model\"]\n permissions = permission[\"permissions\"]\n\n # Retrieve the content type for the app and model\n content_type = ContentType.objects.get(app_label=app_label, model=model_name)\n\n # Retrieve the permissions based on their codenames\n permissions = Permission.objects.filter(content_type=content_type, codename__in=permissions)\n\n # Assign the permissions to the group\n cisa_analysts_group.permissions.add(*permissions)\n\n # Convert the permissions QuerySet to a list of codenames\n permission_list = list(permissions.values_list(\"codename\", flat=True))\n\n logger.debug(\n app_label\n + \" | \"\n + model_name\n + \" | \"\n + \", \".join(permission_list)\n + \" added to group \"\n + cisa_analysts_group.name\n )\n\n cisa_analysts_group.save()\n logger.debug(\"CISA Analyst permissions added to group \" + cisa_analysts_group.name)\n except Exception as e:\n logger.error(f\"Error creating analyst permissions group: {e}\")\n\n def create_full_access_group(apps, schema_editor):\n \"\"\"This method gets run from a data migration.\"\"\"\n\n Permission = apps.get_model(\"auth\", \"Permission\")\n UserGroup = apps.get_model(\"registrar\", \"UserGroup\")\n\n logger.info(\"Going to create the Full Access Group\")\n try:\n full_access_group, _ = UserGroup.objects.get_or_create(\n name=\"full_access_group\",\n )\n # Get all available permissions\n all_permissions = Permission.objects.all()\n\n # Assign all permissions to the group\n full_access_group.permissions.add(*all_permissions)\n\n full_access_group.save()\n logger.debug(\"All permissions added to group \" + full_access_group.name)\n except Exception as e:\n logger.error(f\"Error creating full access group: {e}\")\n", "path": "src/registrar/models/user_group.py"}], "after_files": [{"content": "from django.contrib.auth.models import Group\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass UserGroup(Group):\n \"\"\"\n UserGroup sets read and write permissions for superusers (who have full access)\n and analysts. 
For more details, see the dev docs for user-permissions.\n \"\"\"\n\n class Meta:\n verbose_name = \"User group\"\n verbose_name_plural = \"User groups\"\n\n def create_cisa_analyst_group(apps, schema_editor):\n \"\"\"This method gets run from a data migration.\"\"\"\n\n # Hard to pass self to these methods as the calls from migrations\n # are only expecting apps and schema_editor, so we'll just define\n # apps, schema_editor in the local scope instead\n CISA_ANALYST_GROUP_PERMISSIONS = [\n {\n \"app_label\": \"auditlog\",\n \"model\": \"logentry\",\n \"permissions\": [\"view_logentry\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"contact\",\n \"permissions\": [\"change_contact\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domaininformation\",\n \"permissions\": [\"change_domaininformation\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domainrequest\",\n \"permissions\": [\"change_domainrequest\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domain\",\n \"permissions\": [\"view_domain\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"draftdomain\",\n \"permissions\": [\"change_draftdomain\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"user\",\n \"permissions\": [\"analyst_access_permission\", \"change_user\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"domaininvitation\",\n \"permissions\": [\"add_domaininvitation\", \"view_domaininvitation\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"website\",\n \"permissions\": [\"change_website\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"userdomainrole\",\n \"permissions\": [\"view_userdomainrole\", \"delete_userdomainrole\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"verifiedbystaff\",\n \"permissions\": [\"add_verifiedbystaff\", \"change_verifiedbystaff\", \"delete_verifiedbystaff\"],\n },\n {\n \"app_label\": \"registrar\",\n \"model\": \"federalagency\",\n \"permissions\": [\"add_federalagency\", \"change_federalagency\", \"delete_federalagency\"],\n },\n ]\n\n # Avoid error: You can't execute queries until the end\n # of the 'atomic' block.\n # From django docs:\n # https://docs.djangoproject.com/en/4.2/topics/migrations/#data-migrations\n # We can\u2019t import the Person model directly as it may be a newer\n # version than this migration expects. 
We use the historical version.\n ContentType = apps.get_model(\"contenttypes\", \"ContentType\")\n Permission = apps.get_model(\"auth\", \"Permission\")\n UserGroup = apps.get_model(\"registrar\", \"UserGroup\")\n\n logger.info(\"Going to create the Analyst Group\")\n try:\n cisa_analysts_group, _ = UserGroup.objects.get_or_create(\n name=\"cisa_analysts_group\",\n )\n\n cisa_analysts_group.permissions.clear()\n\n for permission in CISA_ANALYST_GROUP_PERMISSIONS:\n app_label = permission[\"app_label\"]\n model_name = permission[\"model\"]\n permissions = permission[\"permissions\"]\n\n # Retrieve the content type for the app and model\n content_type = ContentType.objects.get(app_label=app_label, model=model_name)\n\n # Retrieve the permissions based on their codenames\n permissions = Permission.objects.filter(content_type=content_type, codename__in=permissions)\n\n # Assign the permissions to the group\n cisa_analysts_group.permissions.add(*permissions)\n\n # Convert the permissions QuerySet to a list of codenames\n permission_list = list(permissions.values_list(\"codename\", flat=True))\n\n logger.debug(\n app_label\n + \" | \"\n + model_name\n + \" | \"\n + \", \".join(permission_list)\n + \" added to group \"\n + cisa_analysts_group.name\n )\n\n cisa_analysts_group.save()\n logger.debug(\"CISA Analyst permissions added to group \" + cisa_analysts_group.name)\n except Exception as e:\n logger.error(f\"Error creating analyst permissions group: {e}\")\n\n def create_full_access_group(apps, schema_editor):\n \"\"\"This method gets run from a data migration.\"\"\"\n\n Permission = apps.get_model(\"auth\", \"Permission\")\n UserGroup = apps.get_model(\"registrar\", \"UserGroup\")\n\n logger.info(\"Going to create the Full Access Group\")\n try:\n full_access_group, _ = UserGroup.objects.get_or_create(\n name=\"full_access_group\",\n )\n # Get all available permissions\n all_permissions = Permission.objects.all()\n\n # Assign all permissions to the group\n full_access_group.permissions.add(*all_permissions)\n\n full_access_group.save()\n logger.debug(\"All permissions added to group \" + full_access_group.name)\n except Exception as e:\n logger.error(f\"Error creating full access group: {e}\")\n", "path": "src/registrar/models/user_group.py"}]} | 1,989 | 121 |
gh_patches_debug_9130 | rasdani/github-patches | git_diff | opsdroid__opsdroid-615 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Python 3.7 support
We need to update opsdroid to be fully supported in 3.7.
- [x] Test against Python 3.7.
- [x] Travis
- [x] AppVeyor
- [x] Fix any bugs highlighted.
- [x] Add 3.7 to supported versions in `setup.py`.
- [ ] ~Update docker base image to be latest supported version~.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2 import os
3 from setuptools import setup, find_packages
4 from setuptools.command.build_py import build_py
5 from setuptools.command.sdist import sdist
6 from setuptools.command.develop import develop
7 from opsdroid import __version__
8
9 PACKAGE_NAME = 'opsdroid'
10 HERE = os.path.abspath(os.path.dirname(__file__))
11 README = open(os.path.join(HERE, 'README.md'), encoding="utf8").read()
12
13 PACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',
14 'modules.*', 'docs', 'docs.*'])
15
16
17 # For now we simply define the install_requires based on the contents
18 # of requirements.txt. In the future, install_requires may become much
19 # looser than the (automatically) resolved requirements.txt.
20 with open(os.path.join(HERE, 'requirements.txt'), 'r') as fh:
21 REQUIRES = [line.strip() for line in fh]
22
23
24 class Develop(develop):
25 """Custom `develop` command to always build mo files on install -e."""
26
27 def run(self):
28 self.run_command('compile_catalog')
29 develop.run(self) # old style class
30
31
32 class BuildPy(build_py):
33 """Custom `build_py` command to always build mo files for wheels."""
34
35 def run(self):
36 self.run_command('compile_catalog')
37 build_py.run(self) # old style class
38
39
40 class Sdist(sdist):
41 """Custom `sdist` command to ensure that mo files are always created."""
42
43 def run(self):
44 self.run_command('compile_catalog')
45 sdist.run(self) # old style class
46
47
48 setup(
49 name=PACKAGE_NAME,
50 version=__version__,
51 license='Apache License 2.0',
52 url='https://opsdroid.github.io/',
53 download_url='https://github.com/opsdroid/opsdroid/releases',
54 author='Jacob Tomlinson',
55 author_email='[email protected]',
56 description='An open source ChatOps bot framework.',
57 long_description=README,
58 packages=PACKAGES,
59 include_package_data=True,
60 zip_safe=False,
61 platforms='any',
62 classifiers=[
63 'Development Status :: 4 - Beta',
64 'Environment :: Console',
65 'Framework :: AsyncIO',
66 'Intended Audience :: Developers',
67 'Intended Audience :: System Administrators',
68 'Intended Audience :: Information Technology',
69 'License :: OSI Approved :: Apache Software License',
70 'Programming Language :: Python',
71 'Programming Language :: Python :: 3',
72 'Programming Language :: Python :: 3 :: Only',
73 'Programming Language :: Python :: 3.5',
74 'Programming Language :: Python :: 3.6',
75 'Topic :: Communications :: Chat',
76 'Topic :: Scientific/Engineering :: Artificial Intelligence',
77 'Topic :: Software Development :: Libraries :: Python Modules'
78 ],
79 install_requires=REQUIRES,
80 test_suite='tests',
81 keywords=[
82 'bot',
83 'bot-framework',
84 'opsdroid',
85 'botkit',
86 'python3',
87 'asyncio',
88 'chatops',
89 'devops',
90 'nlu'
91 ],
92 setup_requires=['Babel'],
93 cmdclass={'sdist': Sdist, 'build_py': BuildPy, 'develop': Develop},
94 entry_points={
95 'console_scripts': [
96 'opsdroid = opsdroid.__main__:main'
97 ]
98 },
99 )
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,6 +72,7 @@
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
+ 'Programming Language :: Python :: 3.7',
'Topic :: Communications :: Chat',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'Topic :: Software Development :: Libraries :: Python Modules'
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,6 +72,7 @@\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n+ 'Programming Language :: Python :: 3.7',\n 'Topic :: Communications :: Chat',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development :: Libraries :: Python Modules'\n", "issue": "Add Python 3.7 support\nWe need to update opsdroid to be fully supported in 3.7.\r\n\r\n- [x] Test against Python 3.7.\r\n - [x] Travis\r\n - [x] AppVeyor\r\n- [x] Fix any bugs highlighted.\r\n- [x] Add 3.7 to supported versions in `setup.py`.\r\n- [ ] ~Update docker base image to be latest supported version~.\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.build_py import build_py\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.develop import develop\nfrom opsdroid import __version__\n\nPACKAGE_NAME = 'opsdroid'\nHERE = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(HERE, 'README.md'), encoding=\"utf8\").read()\n\nPACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',\n 'modules.*', 'docs', 'docs.*'])\n\n\n# For now we simply define the install_requires based on the contents\n# of requirements.txt. In the future, install_requires may become much\n# looser than the (automatically) resolved requirements.txt.\nwith open(os.path.join(HERE, 'requirements.txt'), 'r') as fh:\n REQUIRES = [line.strip() for line in fh]\n\n\nclass Develop(develop):\n \"\"\"Custom `develop` command to always build mo files on install -e.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n develop.run(self) # old style class\n\n\nclass BuildPy(build_py):\n \"\"\"Custom `build_py` command to always build mo files for wheels.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n build_py.run(self) # old style class\n\n\nclass Sdist(sdist):\n \"\"\"Custom `sdist` command to ensure that mo files are always created.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n sdist.run(self) # old style class\n\n\nsetup(\n name=PACKAGE_NAME,\n version=__version__,\n license='Apache License 2.0',\n url='https://opsdroid.github.io/',\n download_url='https://github.com/opsdroid/opsdroid/releases',\n author='Jacob Tomlinson',\n author_email='[email protected]',\n description='An open source ChatOps bot framework.',\n long_description=README,\n packages=PACKAGES,\n include_package_data=True,\n zip_safe=False,\n platforms='any',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Framework :: AsyncIO',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Intended Audience :: Information Technology',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Communications :: Chat',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development :: Libraries :: Python Modules'\n ],\n install_requires=REQUIRES,\n test_suite='tests',\n keywords=[\n 'bot',\n 'bot-framework',\n 'opsdroid',\n 'botkit',\n 'python3',\n 'asyncio',\n 'chatops',\n 'devops',\n 'nlu'\n ],\n setup_requires=['Babel'],\n 
cmdclass={'sdist': Sdist, 'build_py': BuildPy, 'develop': Develop},\n entry_points={\n 'console_scripts': [\n 'opsdroid = opsdroid.__main__:main'\n ]\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.build_py import build_py\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.develop import develop\nfrom opsdroid import __version__\n\nPACKAGE_NAME = 'opsdroid'\nHERE = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(HERE, 'README.md'), encoding=\"utf8\").read()\n\nPACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',\n 'modules.*', 'docs', 'docs.*'])\n\n\n# For now we simply define the install_requires based on the contents\n# of requirements.txt. In the future, install_requires may become much\n# looser than the (automatically) resolved requirements.txt.\nwith open(os.path.join(HERE, 'requirements.txt'), 'r') as fh:\n REQUIRES = [line.strip() for line in fh]\n\n\nclass Develop(develop):\n \"\"\"Custom `develop` command to always build mo files on install -e.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n develop.run(self) # old style class\n\n\nclass BuildPy(build_py):\n \"\"\"Custom `build_py` command to always build mo files for wheels.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n build_py.run(self) # old style class\n\n\nclass Sdist(sdist):\n \"\"\"Custom `sdist` command to ensure that mo files are always created.\"\"\"\n\n def run(self):\n self.run_command('compile_catalog')\n sdist.run(self) # old style class\n\n\nsetup(\n name=PACKAGE_NAME,\n version=__version__,\n license='Apache License 2.0',\n url='https://opsdroid.github.io/',\n download_url='https://github.com/opsdroid/opsdroid/releases',\n author='Jacob Tomlinson',\n author_email='[email protected]',\n description='An open source ChatOps bot framework.',\n long_description=README,\n packages=PACKAGES,\n include_package_data=True,\n zip_safe=False,\n platforms='any',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Framework :: AsyncIO',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Intended Audience :: Information Technology',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Communications :: Chat',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development :: Libraries :: Python Modules'\n ],\n install_requires=REQUIRES,\n test_suite='tests',\n keywords=[\n 'bot',\n 'bot-framework',\n 'opsdroid',\n 'botkit',\n 'python3',\n 'asyncio',\n 'chatops',\n 'devops',\n 'nlu'\n ],\n setup_requires=['Babel'],\n cmdclass={'sdist': Sdist, 'build_py': BuildPy, 'develop': Develop},\n entry_points={\n 'console_scripts': [\n 'opsdroid = opsdroid.__main__:main'\n ]\n },\n)\n", "path": "setup.py"}]} | 1,276 | 112 |
gh_patches_debug_43223 | rasdani/github-patches | git_diff | ephios-dev__ephios-80 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Edit users
As a manager, I want to edit a user. I want to have the same options as when creating the user. The list of users should have a corresponding button for each user. The user should be notified of any changes by email.
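A minimal sketch of the kind of view this asks for (hypothetical names, mirroring the existing `UserProfileCreateView` in `views.py` below and reusing its imports; `send_account_update_info` is an assumed helper, not an existing one):
```python
# urls.py: path("users/<int:pk>/edit", views.UserProfileUpdateView.as_view(), name="userprofile_edit")
class UserProfileUpdateView(PermissionRequiredMixin, UpdateView):
    model = UserProfile
    form_class = UserProfileForm
    template_name = "user_management/userprofile_form.html"
    permission_required = "user_management.change_userprofile"

    def get_success_url(self):
        messages.success(self.request, _("User updated successfully."))
        return reverse("user_management:user_list")

    def form_valid(self, form):
        response = super().form_valid(form)
        # Notify the user about the changes by email (assumed helper in mail.py).
        mail.send_account_update_info(self.object)
        return response
```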
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `user_management/mail.py`
Content:
```
1 from django.contrib.auth.tokens import default_token_generator
2 from django.core import mail
3 from django.core.mail import EmailMultiAlternatives
4 from django.template.loader import render_to_string
5 from django.urls import reverse
6 from django.utils.encoding import force_bytes
7 from django.utils.http import urlsafe_base64_encode
8 from django.utils.translation import gettext as _
9
10 from jep.settings import SITE_URL
11
12
13 def send_account_creation_info(userprofile):
14 messages = []
15 subject = _("Welcome to JEP!")
16 uid = urlsafe_base64_encode(force_bytes(userprofile.id))
17 token = default_token_generator.make_token(userprofile)
18 reset_link = reverse("password_reset_confirm", kwargs={"uidb64": uid, "token": token})
19 text_content = _(
20 "You're receiving this email because a new account has been created for you at JEP.\n"
21 "Please go to the following page and choose a password: {url}{reset_link}\n"
22 "Your username is your email address: {email}\n"
23 ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)
24
25 html_content = render_to_string(
26 "user_management/new_account_email.html",
27 {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email,},
28 )
29 message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
30 message.attach_alternative(html_content, "text/html")
31 messages.append(message)
32
33 mail.get_connection().send_messages(messages)
34
```
Path: `user_management/urls.py`
Content:
```
1 from django.urls import path
2
3 from user_management import views
4
5 app_name = "user_management"
6 urlpatterns = [
7 path("profile/", views.ProfileView.as_view(), name="profile"),
8 path("groups/", views.GroupListView.as_view(), name="group_list"),
9 path("groups/<int:pk>/edit", views.GroupUpdateView.as_view(), name="group_edit"),
10 path("groups/<int:pk>/delete", views.GroupDeleteView.as_view(), name="group_delete"),
11 path("groups/create", views.GroupCreateView.as_view(), name="group_add"),
12 path("users/", views.UserProfileListView.as_view(), name="user_list",),
13 path("users/create/", views.UserProfileCreateView.as_view(), name="userprofile_create",),
14 ]
15
```
Path: `user_management/views.py`
Content:
```
1 from django.contrib import messages
2 from django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin
3 from django.contrib.auth.models import Group
4 from django.urls import reverse
5 from django.views.generic import DetailView, ListView, UpdateView, CreateView, DeleteView
6 from guardian.shortcuts import get_objects_for_group
7
8 from user_management import mail
9 from user_management.forms import GroupForm, UserProfileForm
10 from django.utils.translation import gettext as _
11
12 from user_management.models import UserProfile
13
14
15 class ProfileView(LoginRequiredMixin, DetailView):
16 def get_object(self, queryset=None):
17 return self.request.user
18
19
20 class UserProfileListView(PermissionRequiredMixin, ListView):
21 model = UserProfile
22 permission_required = "user_management.view_userprofile"
23
24
25 class UserProfileCreateView(PermissionRequiredMixin, CreateView):
26 template_name = "user_management/userprofile_form.html"
27 permission_required = "user_management.add_userprofile"
28 model = UserProfile
29 form_class = UserProfileForm
30
31 def get_success_url(self):
32 messages.success(self.request, _("User added successfully."))
33 return reverse("user_management:user_list")
34
35 def form_valid(self, form):
36 response = super().form_valid(form)
37 userprofile = self.object
38 if userprofile.is_active:
39 mail.send_account_creation_info(userprofile)
40 return response
41
42
43 class GroupListView(PermissionRequiredMixin, ListView):
44 model = Group
45 permission_required = "auth.view_group"
46 template_name = "user_management/group_list.html"
47
48
49 class GroupCreateView(PermissionRequiredMixin, CreateView):
50 model = Group
51 permission_required = "auth.add_group"
52 template_name = "user_management/group_form.html"
53 form_class = GroupForm
54
55 def get_form_kwargs(self):
56 kwargs = super().get_form_kwargs()
57 kwargs["initial"] = {
58 "users": UserProfile.objects.none(),
59 "can_add_event": False,
60 "publish_event_for_group": Group.objects.none(),
61 }
62 return kwargs
63
64 def get_success_url(self):
65 messages.success(self.request, _("Group created successfully."))
66 return reverse("user_management:group_list")
67
68
69 class GroupUpdateView(PermissionRequiredMixin, UpdateView):
70 model = Group
71 permission_required = "auth.change_group"
72 template_name = "user_management/group_form.html"
73 form_class = GroupForm
74
75 def get_form_kwargs(self):
76 kwargs = super().get_form_kwargs()
77 kwargs["initial"] = {
78 "users": self.object.user_set.all(),
79 "can_view_past_event": self.object.permissions.filter(
80 codename="view_past_event"
81 ).exists(),
82 "can_add_event": self.object.permissions.filter(codename="add_event").exists(),
83 "publish_event_for_group": get_objects_for_group(
84 self.object, "publish_event_for_group", klass=Group
85 ),
86 }
87 return kwargs
88
89 def get_success_url(self):
90 return reverse("user_management:group_list")
91
92
93 class GroupDeleteView(PermissionRequiredMixin, DeleteView):
94 model = Group
95 permission_required = "auth.delete_group"
96 template_name = "user_management/group_confirm_delete.html"
97
98 def get_success_url(self):
99 return reverse("user_management:group_list")
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/user_management/mail.py b/user_management/mail.py
--- a/user_management/mail.py
+++ b/user_management/mail.py
@@ -11,7 +11,6 @@
def send_account_creation_info(userprofile):
- messages = []
subject = _("Welcome to JEP!")
uid = urlsafe_base64_encode(force_bytes(userprofile.id))
token = default_token_generator.make_token(userprofile)
@@ -24,10 +23,26 @@
html_content = render_to_string(
"user_management/new_account_email.html",
- {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email,},
+ {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email},
)
message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
message.attach_alternative(html_content, "text/html")
- messages.append(message)
+ message.send()
- mail.get_connection().send_messages(messages)
+
+def send_account_update_info(userprofile):
+ subject = _("JEP account updated")
+ url = reverse("user_management:profile")
+ text_content = _(
+ "You're receiving this email because your account at JEP has been updated.\n"
+ "You can see the changes in your profile: {site_url}{url}\n"
+ "Your username is your email address: {email}\n"
+ ).format(site_url=SITE_URL, url=url, email=userprofile.email)
+
+ html_content = render_to_string(
+ "user_management/account_updated_email.html",
+ {"site_url": SITE_URL, "url": url, "email": userprofile.email},
+ )
+ message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
+ message.attach_alternative(html_content, "text/html")
+ message.send()
diff --git a/user_management/urls.py b/user_management/urls.py
--- a/user_management/urls.py
+++ b/user_management/urls.py
@@ -9,6 +9,7 @@
path("groups/<int:pk>/edit", views.GroupUpdateView.as_view(), name="group_edit"),
path("groups/<int:pk>/delete", views.GroupDeleteView.as_view(), name="group_delete"),
path("groups/create", views.GroupCreateView.as_view(), name="group_add"),
- path("users/", views.UserProfileListView.as_view(), name="user_list",),
+ path("users/", views.UserProfileListView.as_view(), name="userprofile_list",),
+ path("users/<int:pk>/edit", views.UserProfileUpdateView.as_view(), name="userprofile_edit",),
path("users/create/", views.UserProfileCreateView.as_view(), name="userprofile_create",),
]
diff --git a/user_management/views.py b/user_management/views.py
--- a/user_management/views.py
+++ b/user_management/views.py
@@ -30,7 +30,7 @@
def get_success_url(self):
messages.success(self.request, _("User added successfully."))
- return reverse("user_management:user_list")
+ return reverse("user_management:userprofile_list")
def form_valid(self, form):
response = super().form_valid(form)
@@ -40,6 +40,31 @@
return response
+class UserProfileUpdateView(PermissionRequiredMixin, UpdateView):
+ model = UserProfile
+ permission_required = "user_management.change_userprofile"
+ template_name = "user_management/userprofile_form.html"
+ form_class = UserProfileForm
+
+ def get_success_url(self):
+ messages.success(self.request, _("User updated successfully."))
+ return reverse("user_management:userprofile_list")
+
+ def form_valid(self, form):
+ response = super().form_valid(form)
+ userprofile = self.object
+ if userprofile.is_active:
+ mail.send_account_update_info(userprofile)
+ return response
+
+ def get_form_kwargs(self):
+ kwargs = super().get_form_kwargs()
+ kwargs["initial"] = {
+ "groups": self.object.groups.all(),
+ }
+ return kwargs
+
+
class GroupListView(PermissionRequiredMixin, ListView):
model = Group
permission_required = "auth.view_group"
@@ -87,6 +112,7 @@
return kwargs
def get_success_url(self):
+ messages.success(self.request, _("Group updated successfully."))
return reverse("user_management:group_list")
| {"golden_diff": "diff --git a/user_management/mail.py b/user_management/mail.py\n--- a/user_management/mail.py\n+++ b/user_management/mail.py\n@@ -11,7 +11,6 @@\n \n \n def send_account_creation_info(userprofile):\n- messages = []\n subject = _(\"Welcome to JEP!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n@@ -24,10 +23,26 @@\n \n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n- {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email,},\n+ {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n- messages.append(message)\n+ message.send()\n \n- mail.get_connection().send_messages(messages)\n+\n+def send_account_update_info(userprofile):\n+ subject = _(\"JEP account updated\")\n+ url = reverse(\"user_management:profile\")\n+ text_content = _(\n+ \"You're receiving this email because your account at JEP has been updated.\\n\"\n+ \"You can see the changes in your profile: {site_url}{url}\\n\"\n+ \"Your username is your email address: {email}\\n\"\n+ ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n+\n+ html_content = render_to_string(\n+ \"user_management/account_updated_email.html\",\n+ {\"site_url\": SITE_URL, \"url\": url, \"email\": userprofile.email},\n+ )\n+ message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n+ message.attach_alternative(html_content, \"text/html\")\n+ message.send()\ndiff --git a/user_management/urls.py b/user_management/urls.py\n--- a/user_management/urls.py\n+++ b/user_management/urls.py\n@@ -9,6 +9,7 @@\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n- path(\"users/\", views.UserProfileListView.as_view(), name=\"user_list\",),\n+ path(\"users/\", views.UserProfileListView.as_view(), name=\"userprofile_list\",),\n+ path(\"users/<int:pk>/edit\", views.UserProfileUpdateView.as_view(), name=\"userprofile_edit\",),\n path(\"users/create/\", views.UserProfileCreateView.as_view(), name=\"userprofile_create\",),\n ]\ndiff --git a/user_management/views.py b/user_management/views.py\n--- a/user_management/views.py\n+++ b/user_management/views.py\n@@ -30,7 +30,7 @@\n \n def get_success_url(self):\n messages.success(self.request, _(\"User added successfully.\"))\n- return reverse(\"user_management:user_list\")\n+ return reverse(\"user_management:userprofile_list\")\n \n def form_valid(self, form):\n response = super().form_valid(form)\n@@ -40,6 +40,31 @@\n return response\n \n \n+class UserProfileUpdateView(PermissionRequiredMixin, UpdateView):\n+ model = UserProfile\n+ permission_required = \"user_management.change_userprofile\"\n+ template_name = \"user_management/userprofile_form.html\"\n+ form_class = UserProfileForm\n+\n+ def get_success_url(self):\n+ messages.success(self.request, _(\"User updated successfully.\"))\n+ return reverse(\"user_management:userprofile_list\")\n+\n+ def form_valid(self, form):\n+ response = super().form_valid(form)\n+ userprofile = self.object\n+ if userprofile.is_active:\n+ mail.send_account_update_info(userprofile)\n+ return response\n+\n+ def 
get_form_kwargs(self):\n+ kwargs = super().get_form_kwargs()\n+ kwargs[\"initial\"] = {\n+ \"groups\": self.object.groups.all(),\n+ }\n+ return kwargs\n+\n+\n class GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n@@ -87,6 +112,7 @@\n return kwargs\n \n def get_success_url(self):\n+ messages.success(self.request, _(\"Group updated successfully.\"))\n return reverse(\"user_management:group_list\")\n", "issue": "Benutzer bearbeiten\nAls Manager m\u00f6chte ich einen Nutzer bearbeiten. Dabei m\u00f6chte ich die selben Optionen haben wie beim Anlegen des Nutzers. In der Liste der Nutzer soll es f\u00fcr jeden Nutzer eine entsprechende Schaltfl\u00e4che geben. Der Nutzer soll \u00fcber \u00c4nderungen per Mail informiert werden.\n", "before_files": [{"content": "from django.contrib.auth.tokens import default_token_generator\nfrom django.core import mail\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\nfrom django.utils.encoding import force_bytes\nfrom django.utils.http import urlsafe_base64_encode\nfrom django.utils.translation import gettext as _\n\nfrom jep.settings import SITE_URL\n\n\ndef send_account_creation_info(userprofile):\n messages = []\n subject = _(\"Welcome to JEP!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at JEP.\\n\"\n \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email,},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n mail.get_connection().send_messages(messages)\n", "path": "user_management/mail.py"}, {"content": "from django.urls import path\n\nfrom user_management import views\n\napp_name = \"user_management\"\nurlpatterns = [\n path(\"profile/\", views.ProfileView.as_view(), name=\"profile\"),\n path(\"groups/\", views.GroupListView.as_view(), name=\"group_list\"),\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n path(\"users/\", views.UserProfileListView.as_view(), name=\"user_list\",),\n path(\"users/create/\", views.UserProfileCreateView.as_view(), name=\"userprofile_create\",),\n]\n", "path": "user_management/urls.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.urls import reverse\nfrom django.views.generic import DetailView, ListView, UpdateView, CreateView, DeleteView\nfrom guardian.shortcuts import get_objects_for_group\n\nfrom user_management import mail\nfrom user_management.forms import GroupForm, UserProfileForm\nfrom 
django.utils.translation import gettext as _\n\nfrom user_management.models import UserProfile\n\n\nclass ProfileView(LoginRequiredMixin, DetailView):\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass UserProfileListView(PermissionRequiredMixin, ListView):\n model = UserProfile\n permission_required = \"user_management.view_userprofile\"\n\n\nclass UserProfileCreateView(PermissionRequiredMixin, CreateView):\n template_name = \"user_management/userprofile_form.html\"\n permission_required = \"user_management.add_userprofile\"\n model = UserProfile\n form_class = UserProfileForm\n\n def get_success_url(self):\n messages.success(self.request, _(\"User added successfully.\"))\n return reverse(\"user_management:user_list\")\n\n def form_valid(self, form):\n response = super().form_valid(form)\n userprofile = self.object\n if userprofile.is_active:\n mail.send_account_creation_info(userprofile)\n return response\n\n\nclass GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n template_name = \"user_management/group_list.html\"\n\n\nclass GroupCreateView(PermissionRequiredMixin, CreateView):\n model = Group\n permission_required = \"auth.add_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": UserProfile.objects.none(),\n \"can_add_event\": False,\n \"publish_event_for_group\": Group.objects.none(),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(self.request, _(\"Group created successfully.\"))\n return reverse(\"user_management:group_list\")\n\n\nclass GroupUpdateView(PermissionRequiredMixin, UpdateView):\n model = Group\n permission_required = \"auth.change_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": self.object.user_set.all(),\n \"can_view_past_event\": self.object.permissions.filter(\n codename=\"view_past_event\"\n ).exists(),\n \"can_add_event\": self.object.permissions.filter(codename=\"add_event\").exists(),\n \"publish_event_for_group\": get_objects_for_group(\n self.object, \"publish_event_for_group\", klass=Group\n ),\n }\n return kwargs\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n\n\nclass GroupDeleteView(PermissionRequiredMixin, DeleteView):\n model = Group\n permission_required = \"auth.delete_group\"\n template_name = \"user_management/group_confirm_delete.html\"\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n", "path": "user_management/views.py"}], "after_files": [{"content": "from django.contrib.auth.tokens import default_token_generator\nfrom django.core import mail\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\nfrom django.utils.encoding import force_bytes\nfrom django.utils.http import urlsafe_base64_encode\nfrom django.utils.translation import gettext as _\n\nfrom jep.settings import SITE_URL\n\n\ndef send_account_creation_info(userprofile):\n subject = _(\"Welcome to JEP!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're 
receiving this email because a new account has been created for you at JEP.\\n\"\n \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n\n\ndef send_account_update_info(userprofile):\n subject = _(\"JEP account updated\")\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at JEP has been updated.\\n\"\n \"You can see the changes in your profile: {site_url}{url}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n {\"site_url\": SITE_URL, \"url\": url, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n", "path": "user_management/mail.py"}, {"content": "from django.urls import path\n\nfrom user_management import views\n\napp_name = \"user_management\"\nurlpatterns = [\n path(\"profile/\", views.ProfileView.as_view(), name=\"profile\"),\n path(\"groups/\", views.GroupListView.as_view(), name=\"group_list\"),\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n path(\"users/\", views.UserProfileListView.as_view(), name=\"userprofile_list\",),\n path(\"users/<int:pk>/edit\", views.UserProfileUpdateView.as_view(), name=\"userprofile_edit\",),\n path(\"users/create/\", views.UserProfileCreateView.as_view(), name=\"userprofile_create\",),\n]\n", "path": "user_management/urls.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.urls import reverse\nfrom django.views.generic import DetailView, ListView, UpdateView, CreateView, DeleteView\nfrom guardian.shortcuts import get_objects_for_group\n\nfrom user_management import mail\nfrom user_management.forms import GroupForm, UserProfileForm\nfrom django.utils.translation import gettext as _\n\nfrom user_management.models import UserProfile\n\n\nclass ProfileView(LoginRequiredMixin, DetailView):\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass UserProfileListView(PermissionRequiredMixin, ListView):\n model = UserProfile\n permission_required = \"user_management.view_userprofile\"\n\n\nclass UserProfileCreateView(PermissionRequiredMixin, CreateView):\n template_name = \"user_management/userprofile_form.html\"\n permission_required = \"user_management.add_userprofile\"\n model = UserProfile\n form_class = UserProfileForm\n\n def get_success_url(self):\n messages.success(self.request, _(\"User added successfully.\"))\n return reverse(\"user_management:userprofile_list\")\n\n def form_valid(self, form):\n response = 
super().form_valid(form)\n userprofile = self.object\n if userprofile.is_active:\n mail.send_account_creation_info(userprofile)\n return response\n\n\nclass UserProfileUpdateView(PermissionRequiredMixin, UpdateView):\n model = UserProfile\n permission_required = \"user_management.change_userprofile\"\n template_name = \"user_management/userprofile_form.html\"\n form_class = UserProfileForm\n\n def get_success_url(self):\n messages.success(self.request, _(\"User updated successfully.\"))\n return reverse(\"user_management:userprofile_list\")\n\n def form_valid(self, form):\n response = super().form_valid(form)\n userprofile = self.object\n if userprofile.is_active:\n mail.send_account_update_info(userprofile)\n return response\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"groups\": self.object.groups.all(),\n }\n return kwargs\n\n\nclass GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n template_name = \"user_management/group_list.html\"\n\n\nclass GroupCreateView(PermissionRequiredMixin, CreateView):\n model = Group\n permission_required = \"auth.add_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": UserProfile.objects.none(),\n \"can_add_event\": False,\n \"publish_event_for_group\": Group.objects.none(),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(self.request, _(\"Group created successfully.\"))\n return reverse(\"user_management:group_list\")\n\n\nclass GroupUpdateView(PermissionRequiredMixin, UpdateView):\n model = Group\n permission_required = \"auth.change_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": self.object.user_set.all(),\n \"can_view_past_event\": self.object.permissions.filter(\n codename=\"view_past_event\"\n ).exists(),\n \"can_add_event\": self.object.permissions.filter(codename=\"add_event\").exists(),\n \"publish_event_for_group\": get_objects_for_group(\n self.object, \"publish_event_for_group\", klass=Group\n ),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(self.request, _(\"Group updated successfully.\"))\n return reverse(\"user_management:group_list\")\n\n\nclass GroupDeleteView(PermissionRequiredMixin, DeleteView):\n model = Group\n permission_required = \"auth.delete_group\"\n template_name = \"user_management/group_confirm_delete.html\"\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n", "path": "user_management/views.py"}]} | 1,779 | 985 |
gh_patches_debug_10891 | rasdani/github-patches | git_diff | openfun__marsha-2578 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
check_live_state pop from empty list
## Bug Report
**Problematic Behavior**
The management command `check_live_state` has a recurring error: it tries to pop a value from an empty list. This list is built from CloudWatch log events: https://github.com/openfun/marsha/blob/29e1f78ed6e288f7bba3c198bb7b7179e7af4fe0/src/backend/marsha/core/management/commands/check_live_state.py#L100
**Expected behavior/code**
This error seems to occur when a live stream no longer has any activity. The command compares the `SET` alerts against the `CLEARED` ones to determine whether they are still active.
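A minimal sketch of the matching logic (taken from `check_live_state.py`, shown below) with a hypothetical event list illustrates the failure: a `CLEARED` event whose matching `SET` event falls outside the queried time window hits an empty queue.

```python
# Sketch of the alert-matching loop from check_live_state.py.
# The single event below is hypothetical: its SET counterpart happened
# before the startTime passed to filter_log_events, so the queue is empty.
pipelines_queue = {"0": [], "1": []}

events = [{"detail": {"alarm_state": "CLEARED", "pipeline": "0"}}]

for message in events:
    if message["detail"]["alarm_state"] == "SET":
        pipelines_queue[message["detail"]["pipeline"]].append(message)
    else:
        pipelines_queue[message["detail"]["pipeline"]].pop()  # IndexError: pop from empty list
```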
**Steps to Reproduce**
1. Start a webinar
2. Once started, stop all activity
3. Run the management command `check_live_state`
4. And then the bug happens!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/backend/marsha/core/management/commands/check_live_state.py`
Content:
```
1 """Check live state management command."""
2
3 from datetime import datetime, timedelta, timezone
4 import json
5 import re
6
7 from django.conf import settings
8 from django.core.management.base import BaseCommand
9
10 import boto3
11 from dateutil.parser import isoparse
12
13 from marsha.core.defaults import RUNNING, STOPPING
14 from marsha.core.models import Video
15 from marsha.core.utils.medialive_utils import stop_live_channel
16
17
18 aws_credentials = {
19 "aws_access_key_id": settings.AWS_ACCESS_KEY_ID,
20 "aws_secret_access_key": settings.AWS_SECRET_ACCESS_KEY,
21 "region_name": settings.AWS_S3_REGION_NAME,
22 }
23
24 # Configure medialive client
25 medialive_client = boto3.client("medialive", **aws_credentials)
26
27 # Configure cloudwatch logs client
28 logs_client = boto3.client("logs", **aws_credentials)
29
30
31 def parse_iso_date(iso_date):
32 """Parse an iso 8601 date and return a datetime object."""
33 return isoparse(iso_date)
34
35
36 def generate_expired_date():
37 """Generate a datetime object 25 minutes in the past."""
38 return datetime.now(tz=timezone.utc) - timedelta(minutes=25)
39
40
41 # pylint: disable=too-many-locals
42 class Command(BaseCommand):
43 """Check every live streaming running state on AWS."""
44
45 help = (
46 "Check activity on AWS for every live streaming running"
47 "and close them if there is not."
48 )
49
50 def handle(self, *args, **options):
51 """Execute management command."""
52 extract_message_pattern = (
53 r"^(?P<ingestion_time>.*)\t"
54 r"(?P<request_id>.*)\t"
55 r"(?P<level>.*)\t"
56 r"Received event:(?P<message>.*)$"
57 )
58 extract_message_regex = re.compile(extract_message_pattern)
59
60 videos = Video.objects.filter(live_state=RUNNING)
61 for video in videos:
62 # For each running live video, we query cloudwatch on the current live
63 # to search messages having detail.alert_type set to `RTMP Has No Audio/Video`.
64 # This alert tell us there is no stream and the live can be stopped if the message is
65 # older than 25 minutes.
66 self.stdout.write(f"Checking video {video.id}")
67 live_info = video.live_info
68 logs = logs_client.filter_log_events(
69 logGroupName=live_info["cloudwatch"]["logGroupName"],
70 startTime=int(int(video.live_info.get("started_at")) * 1000),
71 filterPattern=(
72 "{"
73 '($.detail-type = "MediaLive Channel Alert") && '
74 f"($.resources[0] = \"{live_info['medialive']['channel']['arn']}\") &&"
75 '($.detail.alert_type = "RTMP Has No Audio/Video")'
76 "}"
77 ),
78 )
79
80 pipelines_queue = {"0": [], "1": []}
81
82 for event in logs["events"]:
83 # All events must be parsed to extract the JSON message. When an alert is added,
84 # the `alarm_state` property value is `SET` and when the alert is removed,
85 # the `alarm_state` property value is `CLEARED`.
86 # We have 2 pipelines, a live is over when the 2 pipeline have `SET` value
87 # in `alarm_state`.
88 # Alarm state act like a list with all the event history. It means a `CLEARED`
89 # event is related to a `SET` one. So we have to look over all events, put in
90 # a list all `SET` events and remove it if a `CLEARED` event is here. At the
91 # end if we have 2 `SET` events, the live has no activity and we have to check
92 # the time of the last `SET` event. If this time is older than 25 minutes we
93 # stop the channel.
94 log = extract_message_regex.match(event["message"])
95 message = json.loads(log.group("message"))
96
97 if message["detail"]["alarm_state"] == "SET":
98 pipelines_queue[message["detail"]["pipeline"]].append(message)
99 else:
100 pipelines_queue[message["detail"]["pipeline"]].pop()
101
102 if len(pipelines_queue["0"]) == 1 and len(pipelines_queue["1"]) == 1:
103 # Both pipelines receive no stream, we have to check the more recent one
104 # and if the time is older than 25 minutes we stop the channel.
105 datetime_pipeline0 = parse_iso_date(pipelines_queue["0"][0]["time"])
106 datetime_pipeline1 = parse_iso_date(pipelines_queue["1"][0]["time"])
107 expired_date = generate_expired_date()
108
109 if (
110 datetime_pipeline0 < expired_date
111 or datetime_pipeline1 < expired_date
112 ):
113 # Stop this channel
114 self.stdout.write(
115 f"Stopping channel with id {live_info['medialive']['channel']['id']}"
116 )
117 stop_live_channel(live_info["medialive"]["channel"]["id"])
118
119 video.live_state = STOPPING
120 video.save()
121 self.stdout.write("Channel stopped")
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/backend/marsha/core/management/commands/check_live_state.py b/src/backend/marsha/core/management/commands/check_live_state.py
--- a/src/backend/marsha/core/management/commands/check_live_state.py
+++ b/src/backend/marsha/core/management/commands/check_live_state.py
@@ -67,7 +67,7 @@
live_info = video.live_info
logs = logs_client.filter_log_events(
logGroupName=live_info["cloudwatch"]["logGroupName"],
- startTime=int(int(video.live_info.get("started_at")) * 1000),
+ startTime=int((int(video.live_info.get("started_at")) - 60) * 1000),
filterPattern=(
"{"
'($.detail-type = "MediaLive Channel Alert") && '
| {"golden_diff": "diff --git a/src/backend/marsha/core/management/commands/check_live_state.py b/src/backend/marsha/core/management/commands/check_live_state.py\n--- a/src/backend/marsha/core/management/commands/check_live_state.py\n+++ b/src/backend/marsha/core/management/commands/check_live_state.py\n@@ -67,7 +67,7 @@\n live_info = video.live_info\n logs = logs_client.filter_log_events(\n logGroupName=live_info[\"cloudwatch\"][\"logGroupName\"],\n- startTime=int(int(video.live_info.get(\"started_at\")) * 1000),\n+ startTime=int((int(video.live_info.get(\"started_at\")) - 60) * 1000),\n filterPattern=(\n \"{\"\n '($.detail-type = \"MediaLive Channel Alert\") && '\n", "issue": "check_live_state pop from empty list\n## Bug Report\r\n\r\n**Problematic Behavior**\r\n\r\nThe management command check_live_state has a recurring error, it tries to pop a value from an empty list. This list comes from cloudwatch service : https://github.com/openfun/marsha/blob/29e1f78ed6e288f7bba3c198bb7b7179e7af4fe0/src/backend/marsha/core/management/commands/check_live_state.py#L100\r\n\r\n**Expected behavior/code**\r\n\r\nThis error seems to occur when a live has no activity anymore. We try to compare alerts set and the clear to determine if they are still active.\r\n\r\n\r\n**Steps to Reproduce**\r\n1. Start a webinar\r\n2. Once started, stop all activity\r\n3. Run the management command `check_live_state`\r\n4. And then the bug happens!\r\n\r\n\n", "before_files": [{"content": "\"\"\"Check live state management command.\"\"\"\n\nfrom datetime import datetime, timedelta, timezone\nimport json\nimport re\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\n\nimport boto3\nfrom dateutil.parser import isoparse\n\nfrom marsha.core.defaults import RUNNING, STOPPING\nfrom marsha.core.models import Video\nfrom marsha.core.utils.medialive_utils import stop_live_channel\n\n\naws_credentials = {\n \"aws_access_key_id\": settings.AWS_ACCESS_KEY_ID,\n \"aws_secret_access_key\": settings.AWS_SECRET_ACCESS_KEY,\n \"region_name\": settings.AWS_S3_REGION_NAME,\n}\n\n# Configure medialive client\nmedialive_client = boto3.client(\"medialive\", **aws_credentials)\n\n# Configure cloudwatch logs client\nlogs_client = boto3.client(\"logs\", **aws_credentials)\n\n\ndef parse_iso_date(iso_date):\n \"\"\"Parse an iso 8601 date and return a datetime object.\"\"\"\n return isoparse(iso_date)\n\n\ndef generate_expired_date():\n \"\"\"Generate a datetime object 25 minutes in the past.\"\"\"\n return datetime.now(tz=timezone.utc) - timedelta(minutes=25)\n\n\n# pylint: disable=too-many-locals\nclass Command(BaseCommand):\n \"\"\"Check every live streaming running state on AWS.\"\"\"\n\n help = (\n \"Check activity on AWS for every live streaming running\"\n \"and close them if there is not.\"\n )\n\n def handle(self, *args, **options):\n \"\"\"Execute management command.\"\"\"\n extract_message_pattern = (\n r\"^(?P<ingestion_time>.*)\\t\"\n r\"(?P<request_id>.*)\\t\"\n r\"(?P<level>.*)\\t\"\n r\"Received event:(?P<message>.*)$\"\n )\n extract_message_regex = re.compile(extract_message_pattern)\n\n videos = Video.objects.filter(live_state=RUNNING)\n for video in videos:\n # For each running live video, we query cloudwatch on the current live\n # to search messages having detail.alert_type set to `RTMP Has No Audio/Video`.\n # This alert tell us there is no stream and the live can be stopped if the message is\n # older than 25 minutes.\n self.stdout.write(f\"Checking video {video.id}\")\n live_info = 
video.live_info\n logs = logs_client.filter_log_events(\n logGroupName=live_info[\"cloudwatch\"][\"logGroupName\"],\n startTime=int(int(video.live_info.get(\"started_at\")) * 1000),\n filterPattern=(\n \"{\"\n '($.detail-type = \"MediaLive Channel Alert\") && '\n f\"($.resources[0] = \\\"{live_info['medialive']['channel']['arn']}\\\") &&\"\n '($.detail.alert_type = \"RTMP Has No Audio/Video\")'\n \"}\"\n ),\n )\n\n pipelines_queue = {\"0\": [], \"1\": []}\n\n for event in logs[\"events\"]:\n # All events must be parsed to extract the JSON message. When an alert is added,\n # the `alarm_state` property value is `SET` and when the alert is removed,\n # the `alarm_state` property value is `CLEARED`.\n # We have 2 pipelines, a live is over when the 2 pipeline have `SET` value\n # in `alarm_state`.\n # Alarm state act like a list with all the event history. It means a `CLEARED`\n # event is related to a `SET` one. So we have to look over all events, put in\n # a list all `SET` events and remove it if a `CLEARED` event is here. At the\n # end if we have 2 `SET` events, the live has no activity and we have to check\n # the time of the last `SET` event. If this time is older than 25 minutes we\n # stop the channel.\n log = extract_message_regex.match(event[\"message\"])\n message = json.loads(log.group(\"message\"))\n\n if message[\"detail\"][\"alarm_state\"] == \"SET\":\n pipelines_queue[message[\"detail\"][\"pipeline\"]].append(message)\n else:\n pipelines_queue[message[\"detail\"][\"pipeline\"]].pop()\n\n if len(pipelines_queue[\"0\"]) == 1 and len(pipelines_queue[\"1\"]) == 1:\n # Both pipelines receive no stream, we have to check the more recent one\n # and if the time is older than 25 minutes we stop the channel.\n datetime_pipeline0 = parse_iso_date(pipelines_queue[\"0\"][0][\"time\"])\n datetime_pipeline1 = parse_iso_date(pipelines_queue[\"1\"][0][\"time\"])\n expired_date = generate_expired_date()\n\n if (\n datetime_pipeline0 < expired_date\n or datetime_pipeline1 < expired_date\n ):\n # Stop this channel\n self.stdout.write(\n f\"Stopping channel with id {live_info['medialive']['channel']['id']}\"\n )\n stop_live_channel(live_info[\"medialive\"][\"channel\"][\"id\"])\n\n video.live_state = STOPPING\n video.save()\n self.stdout.write(\"Channel stopped\")\n", "path": "src/backend/marsha/core/management/commands/check_live_state.py"}], "after_files": [{"content": "\"\"\"Check live state management command.\"\"\"\n\nfrom datetime import datetime, timedelta, timezone\nimport json\nimport re\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\n\nimport boto3\nfrom dateutil.parser import isoparse\n\nfrom marsha.core.defaults import RUNNING, STOPPING\nfrom marsha.core.models import Video\nfrom marsha.core.utils.medialive_utils import stop_live_channel\n\n\naws_credentials = {\n \"aws_access_key_id\": settings.AWS_ACCESS_KEY_ID,\n \"aws_secret_access_key\": settings.AWS_SECRET_ACCESS_KEY,\n \"region_name\": settings.AWS_S3_REGION_NAME,\n}\n\n# Configure medialive client\nmedialive_client = boto3.client(\"medialive\", **aws_credentials)\n\n# Configure cloudwatch logs client\nlogs_client = boto3.client(\"logs\", **aws_credentials)\n\n\ndef parse_iso_date(iso_date):\n \"\"\"Parse an iso 8601 date and return a datetime object.\"\"\"\n return isoparse(iso_date)\n\n\ndef generate_expired_date():\n \"\"\"Generate a datetime object 25 minutes in the past.\"\"\"\n return datetime.now(tz=timezone.utc) - timedelta(minutes=25)\n\n\n# pylint: 
disable=too-many-locals\nclass Command(BaseCommand):\n \"\"\"Check every live streaming running state on AWS.\"\"\"\n\n help = (\n \"Check activity on AWS for every live streaming running\"\n \"and close them if there is not.\"\n )\n\n def handle(self, *args, **options):\n \"\"\"Execute management command.\"\"\"\n extract_message_pattern = (\n r\"^(?P<ingestion_time>.*)\\t\"\n r\"(?P<request_id>.*)\\t\"\n r\"(?P<level>.*)\\t\"\n r\"Received event:(?P<message>.*)$\"\n )\n extract_message_regex = re.compile(extract_message_pattern)\n\n videos = Video.objects.filter(live_state=RUNNING)\n for video in videos:\n # For each running live video, we query cloudwatch on the current live\n # to search messages having detail.alert_type set to `RTMP Has No Audio/Video`.\n # This alert tell us there is no stream and the live can be stopped if the message is\n # older than 25 minutes.\n self.stdout.write(f\"Checking video {video.id}\")\n live_info = video.live_info\n logs = logs_client.filter_log_events(\n logGroupName=live_info[\"cloudwatch\"][\"logGroupName\"],\n startTime=int((int(video.live_info.get(\"started_at\")) - 60) * 1000),\n filterPattern=(\n \"{\"\n '($.detail-type = \"MediaLive Channel Alert\") && '\n f\"($.resources[0] = \\\"{live_info['medialive']['channel']['arn']}\\\") &&\"\n '($.detail.alert_type = \"RTMP Has No Audio/Video\")'\n \"}\"\n ),\n )\n\n pipelines_queue = {\"0\": [], \"1\": []}\n\n for event in logs[\"events\"]:\n # All events must be parsed to extract the JSON message. When an alert is added,\n # the `alarm_state` property value is `SET` and when the alert is removed,\n # the `alarm_state` property value is `CLEARED`.\n # We have 2 pipelines, a live is over when the 2 pipeline have `SET` value\n # in `alarm_state`.\n # Alarm state act like a list with all the event history. It means a `CLEARED`\n # event is related to a `SET` one. So we have to look over all events, put in\n # a list all `SET` events and remove it if a `CLEARED` event is here. At the\n # end if we have 2 `SET` events, the live has no activity and we have to check\n # the time of the last `SET` event. If this time is older than 25 minutes we\n # stop the channel.\n log = extract_message_regex.match(event[\"message\"])\n message = json.loads(log.group(\"message\"))\n\n if message[\"detail\"][\"alarm_state\"] == \"SET\":\n pipelines_queue[message[\"detail\"][\"pipeline\"]].append(message)\n else:\n pipelines_queue[message[\"detail\"][\"pipeline\"]].pop()\n\n if len(pipelines_queue[\"0\"]) == 1 and len(pipelines_queue[\"1\"]) == 1:\n # Both pipelines receive no stream, we have to check the more recent one\n # and if the time is older than 25 minutes we stop the channel.\n datetime_pipeline0 = parse_iso_date(pipelines_queue[\"0\"][0][\"time\"])\n datetime_pipeline1 = parse_iso_date(pipelines_queue[\"1\"][0][\"time\"])\n expired_date = generate_expired_date()\n\n if (\n datetime_pipeline0 < expired_date\n or datetime_pipeline1 < expired_date\n ):\n # Stop this channel\n self.stdout.write(\n f\"Stopping channel with id {live_info['medialive']['channel']['id']}\"\n )\n stop_live_channel(live_info[\"medialive\"][\"channel\"][\"id\"])\n\n video.live_state = STOPPING\n video.save()\n self.stdout.write(\"Channel stopped\")\n", "path": "src/backend/marsha/core/management/commands/check_live_state.py"}]} | 1,827 | 177 |
gh_patches_debug_23423 | rasdani/github-patches | git_diff | biolab__orange3-4389 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Transpose remembers old, nonexistent data
I have a table file that I overwrite with different values (rows and columns stay the same). When using a certain version of the file with a certain workflow, the workflow remembers old data that was previously overwritten (and no longer exists on my disk). I could not replicate this in a new workflow made from scratch or with Orange data sets. However, it occurs even when I reopen Orange or when I copy the workflow to a new file.
Below are the workflow, the data, and an image of what is happening.

[transpose_remembering.zip](https://github.com/biolab/orange3/files/4102632/transpose_remembering.zip)
Orange: Last master.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Orange/widgets/data/owtranspose.py`
Content:
```
1 from Orange.data import Table, ContinuousVariable, StringVariable
2 from Orange.widgets.settings import (Setting, ContextSetting,
3 DomainContextHandler)
4 from Orange.widgets.utils.itemmodels import DomainModel
5 from Orange.widgets.utils.widgetpreview import WidgetPreview
6 from Orange.widgets.widget import OWWidget, Msg
7 from Orange.widgets import gui
8 from Orange.widgets.widget import Input, Output
9
10
11 class OWTranspose(OWWidget):
12 name = "Transpose"
13 description = "Transpose data table."
14 icon = "icons/Transpose.svg"
15 priority = 2000
16 keywords = []
17
18 class Inputs:
19 data = Input("Data", Table)
20
21 class Outputs:
22 data = Output("Data", Table, dynamic=False)
23
24 GENERIC, FROM_VAR = range(2)
25
26 resizing_enabled = False
27 want_main_area = False
28
29 DEFAULT_PREFIX = "Feature"
30
31 settingsHandler = DomainContextHandler()
32 feature_type = ContextSetting(GENERIC)
33 feature_name = ContextSetting("")
34 feature_names_column = ContextSetting(None)
35 auto_apply = Setting(True)
36
37 class Warning(OWWidget.Warning):
38 duplicate_names = Msg("Values are not unique.\nTo avoid multiple "
39 "features with the same name, values \nof "
40 "'{}' have been augmented with indices.")
41
42 class Error(OWWidget.Error):
43 value_error = Msg("{}")
44
45 def __init__(self):
46 super().__init__()
47 self.data = None
48
49 # self.apply is changed later, pylint: disable=unnecessary-lambda
50 box = gui.radioButtons(
51 self.controlArea, self, "feature_type", box="Feature names",
52 callback=lambda: self.apply())
53
54 button = gui.appendRadioButton(box, "Generic")
55 edit = gui.lineEdit(
56 gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,
57 "feature_name",
58 placeholderText="Type a prefix ...", toolTip="Custom feature name")
59 edit.editingFinished.connect(self._apply_editing)
60
61 self.meta_button = gui.appendRadioButton(box, "From variable:")
62 self.feature_model = DomainModel(
63 valid_types=(ContinuousVariable, StringVariable),
64 alphabetical=False)
65 self.feature_combo = gui.comboBox(
66 gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,
67 "feature_names_column", contentsLength=12,
68 callback=self._feature_combo_changed, model=self.feature_model)
69
70 self.apply_button = gui.auto_apply(self.controlArea, self, box=False, commit=self.apply)
71 self.apply_button.button.setAutoDefault(False)
72
73 self.info.set_output_summary(self.info.NoInput)
74 self.info.set_input_summary(self.info.NoInput)
75
76 self.set_controls()
77
78 def _apply_editing(self):
79 self.feature_type = self.GENERIC
80 self.feature_name = self.feature_name.strip()
81 self.apply()
82
83 def _feature_combo_changed(self):
84 self.feature_type = self.FROM_VAR
85 self.apply()
86
87 @Inputs.data
88 def set_data(self, data):
89 # Skip the context if the combo is empty: a context with
90 # feature_model == None would then match all domains
91 if self.feature_model:
92 self.closeContext()
93 self.data = data
94 if data:
95 self.info.set_input_summary(len(data))
96 else:
97 self.info.set_input_summary(self.info.NoInput)
98 self.set_controls()
99 if self.feature_model:
100 self.openContext(data)
101 self.unconditional_apply()
102
103 def set_controls(self):
104 self.feature_model.set_domain(self.data and self.data.domain)
105 self.meta_button.setEnabled(bool(self.feature_model))
106 if self.feature_model:
107 self.feature_names_column = self.feature_model[0]
108 self.feature_type = self.FROM_VAR
109 else:
110 self.feature_names_column = None
111
112 def apply(self):
113 self.clear_messages()
114 transposed = None
115 if self.data:
116 try:
117 variable = self.feature_type == self.FROM_VAR and \
118 self.feature_names_column
119 transposed = Table.transpose(
120 self.data, variable,
121 feature_name=self.feature_name or self.DEFAULT_PREFIX)
122 if variable:
123 names = self.data.get_column_view(variable)[0]
124 if len(names) != len(set(names)):
125 self.Warning.duplicate_names(variable)
126 self.info.set_output_summary(len(transposed))
127 except ValueError as e:
128 self.Error.value_error(e)
129 else:
130 self.info.set_output_summary(self.info.NoInput)
131 self.Outputs.data.send(transposed)
132
133 def send_report(self):
134 if self.feature_type == self.GENERIC:
135 names = self.feature_name or self.DEFAULT_PREFIX
136 else:
137 names = "from variable"
138 if self.feature_names_column:
139 names += " '{}'".format(self.feature_names_column.name)
140 self.report_items("", [("Feature names", names)])
141 if self.data:
142 self.report_data("Data", self.data)
143
144
145 if __name__ == "__main__": # pragma: no cover
146 WidgetPreview(OWTranspose).run(Table("iris"))
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Orange/widgets/data/owtranspose.py b/Orange/widgets/data/owtranspose.py
--- a/Orange/widgets/data/owtranspose.py
+++ b/Orange/widgets/data/owtranspose.py
@@ -38,6 +38,7 @@
duplicate_names = Msg("Values are not unique.\nTo avoid multiple "
"features with the same name, values \nof "
"'{}' have been augmented with indices.")
+ discrete_attrs = Msg("Categorical features have been encoded as numbers.")
class Error(OWWidget.Error):
value_error = Msg("{}")
@@ -123,6 +124,8 @@
names = self.data.get_column_view(variable)[0]
if len(names) != len(set(names)):
self.Warning.duplicate_names(variable)
+ if self.data.domain.has_discrete_attributes():
+ self.Warning.discrete_attrs()
self.info.set_output_summary(len(transposed))
except ValueError as e:
self.Error.value_error(e)
| {"golden_diff": "diff --git a/Orange/widgets/data/owtranspose.py b/Orange/widgets/data/owtranspose.py\n--- a/Orange/widgets/data/owtranspose.py\n+++ b/Orange/widgets/data/owtranspose.py\n@@ -38,6 +38,7 @@\n duplicate_names = Msg(\"Values are not unique.\\nTo avoid multiple \"\n \"features with the same name, values \\nof \"\n \"'{}' have been augmented with indices.\")\n+ discrete_attrs = Msg(\"Categorical features have been encoded as numbers.\")\n \n class Error(OWWidget.Error):\n value_error = Msg(\"{}\")\n@@ -123,6 +124,8 @@\n names = self.data.get_column_view(variable)[0]\n if len(names) != len(set(names)):\n self.Warning.duplicate_names(variable)\n+ if self.data.domain.has_discrete_attributes():\n+ self.Warning.discrete_attrs()\n self.info.set_output_summary(len(transposed))\n except ValueError as e:\n self.Error.value_error(e)\n", "issue": "Transpose remembers old unexisting data\nI have a table file that I overwrite with different values (rows and columns stay the same). When using a certain version of the file with certain workflow the workflow remembers old data, that was previously overwritten (and does not exist on my disk anymore). I could not replicate this in a new workflow made from scratch or with Orange data sets. However, it occurs even when I reopen Orange or when I copy the workflow to a new file. \r\nBelow are the workflow and the data and an image of what is happening. \r\n\r\n\r\n\r\n[transpose_remembering.zip](https://github.com/biolab/orange3/files/4102632/transpose_remembering.zip)\r\n\r\nOrange: Last master.\r\n\n", "before_files": [{"content": "from Orange.data import Table, ContinuousVariable, StringVariable\nfrom Orange.widgets.settings import (Setting, ContextSetting,\n DomainContextHandler)\nfrom Orange.widgets.utils.itemmodels import DomainModel\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import OWWidget, Msg\nfrom Orange.widgets import gui\nfrom Orange.widgets.widget import Input, Output\n\n\nclass OWTranspose(OWWidget):\n name = \"Transpose\"\n description = \"Transpose data table.\"\n icon = \"icons/Transpose.svg\"\n priority = 2000\n keywords = []\n\n class Inputs:\n data = Input(\"Data\", Table)\n\n class Outputs:\n data = Output(\"Data\", Table, dynamic=False)\n\n GENERIC, FROM_VAR = range(2)\n\n resizing_enabled = False\n want_main_area = False\n\n DEFAULT_PREFIX = \"Feature\"\n\n settingsHandler = DomainContextHandler()\n feature_type = ContextSetting(GENERIC)\n feature_name = ContextSetting(\"\")\n feature_names_column = ContextSetting(None)\n auto_apply = Setting(True)\n\n class Warning(OWWidget.Warning):\n duplicate_names = Msg(\"Values are not unique.\\nTo avoid multiple \"\n \"features with the same name, values \\nof \"\n \"'{}' have been augmented with indices.\")\n\n class Error(OWWidget.Error):\n value_error = Msg(\"{}\")\n\n def __init__(self):\n super().__init__()\n self.data = None\n\n # self.apply is changed later, pylint: disable=unnecessary-lambda\n box = gui.radioButtons(\n self.controlArea, self, \"feature_type\", box=\"Feature names\",\n callback=lambda: self.apply())\n\n button = gui.appendRadioButton(box, \"Generic\")\n edit = gui.lineEdit(\n gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,\n \"feature_name\",\n placeholderText=\"Type a prefix ...\", toolTip=\"Custom feature name\")\n edit.editingFinished.connect(self._apply_editing)\n\n self.meta_button = gui.appendRadioButton(box, \"From variable:\")\n self.feature_model = DomainModel(\n 
valid_types=(ContinuousVariable, StringVariable),\n alphabetical=False)\n self.feature_combo = gui.comboBox(\n gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,\n \"feature_names_column\", contentsLength=12,\n callback=self._feature_combo_changed, model=self.feature_model)\n\n self.apply_button = gui.auto_apply(self.controlArea, self, box=False, commit=self.apply)\n self.apply_button.button.setAutoDefault(False)\n\n self.info.set_output_summary(self.info.NoInput)\n self.info.set_input_summary(self.info.NoInput)\n\n self.set_controls()\n\n def _apply_editing(self):\n self.feature_type = self.GENERIC\n self.feature_name = self.feature_name.strip()\n self.apply()\n\n def _feature_combo_changed(self):\n self.feature_type = self.FROM_VAR\n self.apply()\n\n @Inputs.data\n def set_data(self, data):\n # Skip the context if the combo is empty: a context with\n # feature_model == None would then match all domains\n if self.feature_model:\n self.closeContext()\n self.data = data\n if data:\n self.info.set_input_summary(len(data))\n else:\n self.info.set_input_summary(self.info.NoInput)\n self.set_controls()\n if self.feature_model:\n self.openContext(data)\n self.unconditional_apply()\n\n def set_controls(self):\n self.feature_model.set_domain(self.data and self.data.domain)\n self.meta_button.setEnabled(bool(self.feature_model))\n if self.feature_model:\n self.feature_names_column = self.feature_model[0]\n self.feature_type = self.FROM_VAR\n else:\n self.feature_names_column = None\n\n def apply(self):\n self.clear_messages()\n transposed = None\n if self.data:\n try:\n variable = self.feature_type == self.FROM_VAR and \\\n self.feature_names_column\n transposed = Table.transpose(\n self.data, variable,\n feature_name=self.feature_name or self.DEFAULT_PREFIX)\n if variable:\n names = self.data.get_column_view(variable)[0]\n if len(names) != len(set(names)):\n self.Warning.duplicate_names(variable)\n self.info.set_output_summary(len(transposed))\n except ValueError as e:\n self.Error.value_error(e)\n else:\n self.info.set_output_summary(self.info.NoInput)\n self.Outputs.data.send(transposed)\n\n def send_report(self):\n if self.feature_type == self.GENERIC:\n names = self.feature_name or self.DEFAULT_PREFIX\n else:\n names = \"from variable\"\n if self.feature_names_column:\n names += \" '{}'\".format(self.feature_names_column.name)\n self.report_items(\"\", [(\"Feature names\", names)])\n if self.data:\n self.report_data(\"Data\", self.data)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWTranspose).run(Table(\"iris\"))\n", "path": "Orange/widgets/data/owtranspose.py"}], "after_files": [{"content": "from Orange.data import Table, ContinuousVariable, StringVariable\nfrom Orange.widgets.settings import (Setting, ContextSetting,\n DomainContextHandler)\nfrom Orange.widgets.utils.itemmodels import DomainModel\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import OWWidget, Msg\nfrom Orange.widgets import gui\nfrom Orange.widgets.widget import Input, Output\n\n\nclass OWTranspose(OWWidget):\n name = \"Transpose\"\n description = \"Transpose data table.\"\n icon = \"icons/Transpose.svg\"\n priority = 2000\n keywords = []\n\n class Inputs:\n data = Input(\"Data\", Table)\n\n class Outputs:\n data = Output(\"Data\", Table, dynamic=False)\n\n GENERIC, FROM_VAR = range(2)\n\n resizing_enabled = False\n want_main_area = False\n\n DEFAULT_PREFIX = \"Feature\"\n\n settingsHandler = DomainContextHandler()\n feature_type = 
ContextSetting(GENERIC)\n feature_name = ContextSetting(\"\")\n feature_names_column = ContextSetting(None)\n auto_apply = Setting(True)\n\n class Warning(OWWidget.Warning):\n duplicate_names = Msg(\"Values are not unique.\\nTo avoid multiple \"\n \"features with the same name, values \\nof \"\n \"'{}' have been augmented with indices.\")\n discrete_attrs = Msg(\"Categorical features have been encoded as numbers.\")\n\n class Error(OWWidget.Error):\n value_error = Msg(\"{}\")\n\n def __init__(self):\n super().__init__()\n self.data = None\n\n # self.apply is changed later, pylint: disable=unnecessary-lambda\n box = gui.radioButtons(\n self.controlArea, self, \"feature_type\", box=\"Feature names\",\n callback=lambda: self.apply())\n\n button = gui.appendRadioButton(box, \"Generic\")\n edit = gui.lineEdit(\n gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,\n \"feature_name\",\n placeholderText=\"Type a prefix ...\", toolTip=\"Custom feature name\")\n edit.editingFinished.connect(self._apply_editing)\n\n self.meta_button = gui.appendRadioButton(box, \"From variable:\")\n self.feature_model = DomainModel(\n valid_types=(ContinuousVariable, StringVariable),\n alphabetical=False)\n self.feature_combo = gui.comboBox(\n gui.indentedBox(box, gui.checkButtonOffsetHint(button)), self,\n \"feature_names_column\", contentsLength=12,\n callback=self._feature_combo_changed, model=self.feature_model)\n\n self.apply_button = gui.auto_apply(self.controlArea, self, box=False, commit=self.apply)\n self.apply_button.button.setAutoDefault(False)\n\n self.info.set_output_summary(self.info.NoInput)\n self.info.set_input_summary(self.info.NoInput)\n\n self.set_controls()\n\n def _apply_editing(self):\n self.feature_type = self.GENERIC\n self.feature_name = self.feature_name.strip()\n self.apply()\n\n def _feature_combo_changed(self):\n self.feature_type = self.FROM_VAR\n self.apply()\n\n @Inputs.data\n def set_data(self, data):\n # Skip the context if the combo is empty: a context with\n # feature_model == None would then match all domains\n if self.feature_model:\n self.closeContext()\n self.data = data\n if data:\n self.info.set_input_summary(len(data))\n else:\n self.info.set_input_summary(self.info.NoInput)\n self.set_controls()\n if self.feature_model:\n self.openContext(data)\n self.unconditional_apply()\n\n def set_controls(self):\n self.feature_model.set_domain(self.data and self.data.domain)\n self.meta_button.setEnabled(bool(self.feature_model))\n if self.feature_model:\n self.feature_names_column = self.feature_model[0]\n self.feature_type = self.FROM_VAR\n else:\n self.feature_names_column = None\n\n def apply(self):\n self.clear_messages()\n transposed = None\n if self.data:\n try:\n variable = self.feature_type == self.FROM_VAR and \\\n self.feature_names_column\n transposed = Table.transpose(\n self.data, variable,\n feature_name=self.feature_name or self.DEFAULT_PREFIX)\n if variable:\n names = self.data.get_column_view(variable)[0]\n if len(names) != len(set(names)):\n self.Warning.duplicate_names(variable)\n if self.data.domain.has_discrete_attributes():\n self.Warning.discrete_attrs()\n self.info.set_output_summary(len(transposed))\n except ValueError as e:\n self.Error.value_error(e)\n else:\n self.info.set_output_summary(self.info.NoInput)\n self.Outputs.data.send(transposed)\n\n def send_report(self):\n if self.feature_type == self.GENERIC:\n names = self.feature_name or self.DEFAULT_PREFIX\n else:\n names = \"from variable\"\n if self.feature_names_column:\n names += \" 
'{}'\".format(self.feature_names_column.name)\n self.report_items(\"\", [(\"Feature names\", names)])\n if self.data:\n self.report_data(\"Data\", self.data)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWTranspose).run(Table(\"iris\"))\n", "path": "Orange/widgets/data/owtranspose.py"}]} | 1,895 | 212 |
gh_patches_debug_38 | rasdani/github-patches | git_diff | ipython__ipython-5701 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move ssh out of external and into lib
This module does not belong in external - it cannot be replaced by an external system module.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/external/ssh/__init__.py`
Content:
```
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/IPython/external/ssh/__init__.py b/IPython/external/ssh/__init__.py
--- a/IPython/external/ssh/__init__.py
+++ b/IPython/external/ssh/__init__.py
@@ -0,0 +1,7 @@
+"""This is a copy of zmq.ssh"""
+
+try:
+ from zmq.ssh import *
+except ImportError:
+ from . import tunnel
+ from .tunnel import *
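The fix turns the package into a re-export shim: use the implementation from `zmq.ssh` when pyzmq provides it, otherwise fall back to the vendored `tunnel` module. The runnable sketch below shows the same fallback-import pattern with stand-in modules (`ujson`/`json` instead of `zmq.ssh`/`.tunnel`) so it executes even without pyzmq installed:

```python
# Same "prefer the external backend, fall back to the bundled one" pattern as
# the patch above, demonstrated with stand-in modules so it runs anywhere.
try:
    import ujson as json  # preferred third-party implementation, if installed
except ImportError:
    import json           # fallback with a compatible public API

print(json.dumps({"shim": "ok"}))
```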
| {"golden_diff": "diff --git a/IPython/external/ssh/__init__.py b/IPython/external/ssh/__init__.py\n--- a/IPython/external/ssh/__init__.py\n+++ b/IPython/external/ssh/__init__.py\n@@ -0,0 +1,7 @@\n+\"\"\"This is a copy of zmq.ssh\"\"\"\n+\n+try:\n+ from zmq.ssh import *\n+except ImportError:\n+ from . import tunnel\n+ from .tunnel import *\n", "issue": "Move ssh out of external and into lib\nThis module does not belong in external - it cannot be replaced by an external system module.\n\n", "before_files": [{"content": "", "path": "IPython/external/ssh/__init__.py"}], "after_files": [{"content": "\"\"\"This is a copy of zmq.ssh\"\"\"\n\ntry:\n from zmq.ssh import *\nexcept ImportError:\n from . import tunnel\n from .tunnel import *\n", "path": "IPython/external/ssh/__init__.py"}]} | 290 | 104 |
gh_patches_debug_19799 | rasdani/github-patches | git_diff | mirumee__ariadne-266 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change `make_executable_schema` API to accept multiple bindables args
Currently, the second argument to `make_executable_schema` is list of `SchemaBindlables` or single bindable:
```python
# Single bindable:
schema = make_executable_schema(type_defs, query_type, debug=True)
# Multiple bindables:
schema = make_executable_schema(type_defs, [query_type, mutation_type], debug=True)
```
Looking at Ariadne uses in the wild, a pattern is starting to emerge where developers create dedicated modules/packages in their project for `scalars`, `mutations` or `types`, that use their `__init__.py`'s to gather all bindables into single lists:
```
from .scalars import scalars
from .types import types
from .mutations import mutations
```
Those are then combined into single list and passed to `make_executable_schema`:
```
schema = make_executable_schema(type_defs, scalars + types + mutations, debug=True)
```
This looks ugly, but things get uglier when there's bindable involved:
```
schema = make_executable_schema(type_defs, scalars + types + mutations + [fallback_resolvers], debug=True)
```
We can simplify this by changing bindables to `*bindables`:
```
schema = make_executable_schema(type_defs, scalars, types, mutations, fallback_resolvers, debug=True)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/executable_schema.py`
Content:
```
1 from typing import Dict, List, Type, Union
2
3 from graphql import (
4 DocumentNode,
5 GraphQLSchema,
6 assert_valid_schema,
7 build_ast_schema,
8 extend_schema,
9 parse,
10 validate_schema,
11 )
12
13 from .enums import set_default_enum_values_on_schema
14 from .schema_visitor import SchemaDirectiveVisitor
15 from .types import SchemaBindable
16
17
18 def make_executable_schema(
19 type_defs: Union[str, List[str]],
20 bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,
21 *,
22 directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,
23 ) -> GraphQLSchema:
24 if isinstance(type_defs, list):
25 type_defs = join_type_defs(type_defs)
26
27 ast_document = parse(type_defs)
28 schema = build_and_extend_schema(ast_document)
29 validate_schema(schema)
30
31 if isinstance(bindables, list):
32 for obj in bindables:
33 obj.bind_to_schema(schema)
34 elif bindables:
35 bindables.bind_to_schema(schema)
36
37 set_default_enum_values_on_schema(schema)
38
39 if directives:
40 SchemaDirectiveVisitor.visit_schema_directives(schema, directives)
41
42 assert_valid_schema(schema)
43
44 return schema
45
46
47 def join_type_defs(type_defs: List[str]) -> str:
48 return "\n\n".join(t.strip() for t in type_defs)
49
50
51 def build_and_extend_schema(ast: DocumentNode) -> GraphQLSchema:
52 schema = build_ast_schema(ast)
53 extension_ast = extract_extensions(ast)
54
55 if extension_ast.definitions:
56 schema = extend_schema(schema, extension_ast)
57
58 return schema
59
60
61 EXTENSION_KINDS = [
62 "scalar_type_extension",
63 "object_type_extension",
64 "interface_type_extension",
65 "union_type_extension",
66 "enum_type_extension",
67 "input_object_type_extension",
68 ]
69
70
71 def extract_extensions(ast: DocumentNode) -> DocumentNode:
72 extensions = [node for node in ast.definitions if node.kind in EXTENSION_KINDS]
73 return DocumentNode(definitions=extensions)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py
--- a/ariadne/executable_schema.py
+++ b/ariadne/executable_schema.py
@@ -17,8 +17,7 @@
def make_executable_schema(
type_defs: Union[str, List[str]],
- bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,
- *,
+ *bindables: Union[SchemaBindable, List[SchemaBindable]],
directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,
) -> GraphQLSchema:
if isinstance(type_defs, list):
@@ -28,11 +27,12 @@
schema = build_and_extend_schema(ast_document)
validate_schema(schema)
- if isinstance(bindables, list):
- for obj in bindables:
- obj.bind_to_schema(schema)
- elif bindables:
- bindables.bind_to_schema(schema)
+ for bindable in bindables:
+ if isinstance(bindable, list):
+ for obj in bindable:
+ obj.bind_to_schema(schema)
+ else:
+ bindable.bind_to_schema(schema)
set_default_enum_values_on_schema(schema)
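With the signature changed to `*bindables`, call sites can pass each bindable (or list of bindables) as its own positional argument instead of concatenating lists first. A small runnable sketch of the resulting usage, assuming Ariadne's standard `QueryType` and `fallback_resolvers` helpers and a toy schema:

```python
import ariadne

type_defs = """
    type Query {
        hello: String!
    }
"""

query = ariadne.QueryType()


@query.field("hello")
def resolve_hello(*_):
    return "Hello!"


# Before the change: bindables had to be merged into a single list, e.g.
#   make_executable_schema(type_defs, [query, ariadne.fallback_resolvers])
# After the change: each bindable (or list of bindables) is a separate argument.
schema = ariadne.make_executable_schema(type_defs, query, ariadne.fallback_resolvers)
```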
| {"golden_diff": "diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py\n--- a/ariadne/executable_schema.py\n+++ b/ariadne/executable_schema.py\n@@ -17,8 +17,7 @@\n \n def make_executable_schema(\n type_defs: Union[str, List[str]],\n- bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,\n- *,\n+ *bindables: Union[SchemaBindable, List[SchemaBindable]],\n directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,\n ) -> GraphQLSchema:\n if isinstance(type_defs, list):\n@@ -28,11 +27,12 @@\n schema = build_and_extend_schema(ast_document)\n validate_schema(schema)\n \n- if isinstance(bindables, list):\n- for obj in bindables:\n- obj.bind_to_schema(schema)\n- elif bindables:\n- bindables.bind_to_schema(schema)\n+ for bindable in bindables:\n+ if isinstance(bindable, list):\n+ for obj in bindable:\n+ obj.bind_to_schema(schema)\n+ else:\n+ bindable.bind_to_schema(schema)\n \n set_default_enum_values_on_schema(schema)\n", "issue": "Change `make_executable_schema` API to accept multiple bindables args\nCurrently, the second argument to `make_executable_schema` is list of `SchemaBindlables` or single bindable:\r\n\r\n```python\r\n# Single bindable:\r\nschema = make_executable_schema(type_defs, query_type, debug=True)\r\n\r\n# Multiple bindables:\r\nschema = make_executable_schema(type_defs, [query_type, mutation_type], debug=True)\r\n```\r\n\r\nLooking at Ariadne uses in the wild, a pattern is starting to emerge where developers create dedicated modules/packages in their project for `scalars`, `mutations` or `types`, that use their `__init__.py`'s to gather all bindables into single lists:\r\n\r\n```\r\nfrom .scalars import scalars\r\nfrom .types import types\r\nfrom .mutations import mutations\r\n```\r\n\r\nThose are then combined into single list and passed to `make_executable_schema`:\r\n\r\n```\r\nschema = make_executable_schema(type_defs, scalars + types + mutations, debug=True)\r\n```\r\n\r\nThis looks ugly, but things get uglier when there's bindable involved:\r\n\r\n```\r\nschema = make_executable_schema(type_defs, scalars + types + mutations + [fallback_resolvers], debug=True)\r\n```\r\n\r\nWe can simplify this by changing bindables to `*bindables`:\r\n\r\n```\r\nschema = make_executable_schema(type_defs, scalars, types, mutations, fallback_resolvers, debug=True)\r\n```\n", "before_files": [{"content": "from typing import Dict, List, Type, Union\n\nfrom graphql import (\n DocumentNode,\n GraphQLSchema,\n assert_valid_schema,\n build_ast_schema,\n extend_schema,\n parse,\n validate_schema,\n)\n\nfrom .enums import set_default_enum_values_on_schema\nfrom .schema_visitor import SchemaDirectiveVisitor\nfrom .types import SchemaBindable\n\n\ndef make_executable_schema(\n type_defs: Union[str, List[str]],\n bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,\n *,\n directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,\n) -> GraphQLSchema:\n if isinstance(type_defs, list):\n type_defs = join_type_defs(type_defs)\n\n ast_document = parse(type_defs)\n schema = build_and_extend_schema(ast_document)\n validate_schema(schema)\n\n if isinstance(bindables, list):\n for obj in bindables:\n obj.bind_to_schema(schema)\n elif bindables:\n bindables.bind_to_schema(schema)\n\n set_default_enum_values_on_schema(schema)\n\n if directives:\n SchemaDirectiveVisitor.visit_schema_directives(schema, directives)\n\n assert_valid_schema(schema)\n\n return schema\n\n\ndef join_type_defs(type_defs: List[str]) -> str:\n return \"\\n\\n\".join(t.strip() for t in 
type_defs)\n\n\ndef build_and_extend_schema(ast: DocumentNode) -> GraphQLSchema:\n schema = build_ast_schema(ast)\n extension_ast = extract_extensions(ast)\n\n if extension_ast.definitions:\n schema = extend_schema(schema, extension_ast)\n\n return schema\n\n\nEXTENSION_KINDS = [\n \"scalar_type_extension\",\n \"object_type_extension\",\n \"interface_type_extension\",\n \"union_type_extension\",\n \"enum_type_extension\",\n \"input_object_type_extension\",\n]\n\n\ndef extract_extensions(ast: DocumentNode) -> DocumentNode:\n extensions = [node for node in ast.definitions if node.kind in EXTENSION_KINDS]\n return DocumentNode(definitions=extensions)\n", "path": "ariadne/executable_schema.py"}], "after_files": [{"content": "from typing import Dict, List, Type, Union\n\nfrom graphql import (\n DocumentNode,\n GraphQLSchema,\n assert_valid_schema,\n build_ast_schema,\n extend_schema,\n parse,\n validate_schema,\n)\n\nfrom .enums import set_default_enum_values_on_schema\nfrom .schema_visitor import SchemaDirectiveVisitor\nfrom .types import SchemaBindable\n\n\ndef make_executable_schema(\n type_defs: Union[str, List[str]],\n *bindables: Union[SchemaBindable, List[SchemaBindable]],\n directives: Dict[str, Type[SchemaDirectiveVisitor]] = None,\n) -> GraphQLSchema:\n if isinstance(type_defs, list):\n type_defs = join_type_defs(type_defs)\n\n ast_document = parse(type_defs)\n schema = build_and_extend_schema(ast_document)\n validate_schema(schema)\n\n for bindable in bindables:\n if isinstance(bindable, list):\n for obj in bindable:\n obj.bind_to_schema(schema)\n else:\n bindable.bind_to_schema(schema)\n\n set_default_enum_values_on_schema(schema)\n\n if directives:\n SchemaDirectiveVisitor.visit_schema_directives(schema, directives)\n\n assert_valid_schema(schema)\n\n return schema\n\n\ndef join_type_defs(type_defs: List[str]) -> str:\n return \"\\n\\n\".join(t.strip() for t in type_defs)\n\n\ndef build_and_extend_schema(ast: DocumentNode) -> GraphQLSchema:\n schema = build_ast_schema(ast)\n extension_ast = extract_extensions(ast)\n\n if extension_ast.definitions:\n schema = extend_schema(schema, extension_ast)\n\n return schema\n\n\nEXTENSION_KINDS = [\n \"scalar_type_extension\",\n \"object_type_extension\",\n \"interface_type_extension\",\n \"union_type_extension\",\n \"enum_type_extension\",\n \"input_object_type_extension\",\n]\n\n\ndef extract_extensions(ast: DocumentNode) -> DocumentNode:\n extensions = [node for node in ast.definitions if node.kind in EXTENSION_KINDS]\n return DocumentNode(definitions=extensions)\n", "path": "ariadne/executable_schema.py"}]} | 1,122 | 267 |
gh_patches_debug_17397 | rasdani/github-patches | git_diff | benoitc__gunicorn-2570 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
threading.Thread.setDaemon has been deprecated in favor of setting daemon attribute directly in Python 3.10
Ref : python/cpython#25174
https://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/gunicorn/reloader.py#L20
https://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/gunicorn/reloader.py#L77
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gunicorn/reloader.py`
Content:
```
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5 # pylint: disable=no-else-continue
6
7 import os
8 import os.path
9 import re
10 import sys
11 import time
12 import threading
13
14 COMPILED_EXT_RE = re.compile(r'py[co]$')
15
16
17 class Reloader(threading.Thread):
18 def __init__(self, extra_files=None, interval=1, callback=None):
19 super().__init__()
20 self.setDaemon(True)
21 self._extra_files = set(extra_files or ())
22 self._interval = interval
23 self._callback = callback
24
25 def add_extra_file(self, filename):
26 self._extra_files.add(filename)
27
28 def get_files(self):
29 fnames = [
30 COMPILED_EXT_RE.sub('py', module.__file__)
31 for module in tuple(sys.modules.values())
32 if getattr(module, '__file__', None)
33 ]
34
35 fnames.extend(self._extra_files)
36
37 return fnames
38
39 def run(self):
40 mtimes = {}
41 while True:
42 for filename in self.get_files():
43 try:
44 mtime = os.stat(filename).st_mtime
45 except OSError:
46 continue
47 old_time = mtimes.get(filename)
48 if old_time is None:
49 mtimes[filename] = mtime
50 continue
51 elif mtime > old_time:
52 if self._callback:
53 self._callback(filename)
54 time.sleep(self._interval)
55
56
57 has_inotify = False
58 if sys.platform.startswith('linux'):
59 try:
60 from inotify.adapters import Inotify
61 import inotify.constants
62 has_inotify = True
63 except ImportError:
64 pass
65
66
67 if has_inotify:
68
69 class InotifyReloader(threading.Thread):
70 event_mask = (inotify.constants.IN_CREATE | inotify.constants.IN_DELETE
71 | inotify.constants.IN_DELETE_SELF | inotify.constants.IN_MODIFY
72 | inotify.constants.IN_MOVE_SELF | inotify.constants.IN_MOVED_FROM
73 | inotify.constants.IN_MOVED_TO)
74
75 def __init__(self, extra_files=None, callback=None):
76 super().__init__()
77 self.setDaemon(True)
78 self._callback = callback
79 self._dirs = set()
80 self._watcher = Inotify()
81
82 for extra_file in extra_files:
83 self.add_extra_file(extra_file)
84
85 def add_extra_file(self, filename):
86 dirname = os.path.dirname(filename)
87
88 if dirname in self._dirs:
89 return
90
91 self._watcher.add_watch(dirname, mask=self.event_mask)
92 self._dirs.add(dirname)
93
94 def get_dirs(self):
95 fnames = [
96 os.path.dirname(os.path.abspath(COMPILED_EXT_RE.sub('py', module.__file__)))
97 for module in tuple(sys.modules.values())
98 if getattr(module, '__file__', None)
99 ]
100
101 return set(fnames)
102
103 def run(self):
104 self._dirs = self.get_dirs()
105
106 for dirname in self._dirs:
107 if os.path.isdir(dirname):
108 self._watcher.add_watch(dirname, mask=self.event_mask)
109
110 for event in self._watcher.event_gen():
111 if event is None:
112 continue
113
114 filename = event[3]
115
116 self._callback(filename)
117
118 else:
119
120 class InotifyReloader(object):
121 def __init__(self, callback=None):
122 raise ImportError('You must have the inotify module installed to '
123 'use the inotify reloader')
124
125
126 preferred_reloader = InotifyReloader if has_inotify else Reloader
127
128 reloader_engines = {
129 'auto': preferred_reloader,
130 'poll': Reloader,
131 'inotify': InotifyReloader,
132 }
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gunicorn/reloader.py b/gunicorn/reloader.py
--- a/gunicorn/reloader.py
+++ b/gunicorn/reloader.py
@@ -17,7 +17,7 @@
class Reloader(threading.Thread):
def __init__(self, extra_files=None, interval=1, callback=None):
super().__init__()
- self.setDaemon(True)
+ self.daemon = True
self._extra_files = set(extra_files or ())
self._interval = interval
self._callback = callback
@@ -74,7 +74,7 @@
def __init__(self, extra_files=None, callback=None):
super().__init__()
- self.setDaemon(True)
+ self.daemon = True
self._callback = callback
self._dirs = set()
self._watcher = Inotify()
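On Python 3.10+ `Thread.setDaemon()` emits a DeprecationWarning, and assigning the `daemon` attribute (or passing `daemon=True` to the constructor) is the supported replacement, which is exactly what the two changed lines do. A self-contained sketch of the pattern, unrelated to gunicorn's actual reloader logic:

```python
import threading
import time


class Worker(threading.Thread):
    def __init__(self):
        super().__init__()
        self.daemon = True  # preferred; self.setDaemon(True) warns on Python 3.10+

    def run(self):
        while True:
            time.sleep(1)


w = Worker()
w.start()
print(w.daemon)  # True; the interpreter will not wait for this thread on exit
```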
| {"golden_diff": "diff --git a/gunicorn/reloader.py b/gunicorn/reloader.py\n--- a/gunicorn/reloader.py\n+++ b/gunicorn/reloader.py\n@@ -17,7 +17,7 @@\n class Reloader(threading.Thread):\n def __init__(self, extra_files=None, interval=1, callback=None):\n super().__init__()\n- self.setDaemon(True)\n+ self.daemon = True\n self._extra_files = set(extra_files or ())\n self._interval = interval\n self._callback = callback\n@@ -74,7 +74,7 @@\n \n def __init__(self, extra_files=None, callback=None):\n super().__init__()\n- self.setDaemon(True)\n+ self.daemon = True\n self._callback = callback\n self._dirs = set()\n self._watcher = Inotify()\n", "issue": "threading.Thread.setDaemon has been deprecated in favor of setting daemon attribute directly in Python 3.10\nRef : python/cpython#25174\r\n\r\nhttps://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/gunicorn/reloader.py#L20\r\n\r\nhttps://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/gunicorn/reloader.py#L77\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n# pylint: disable=no-else-continue\n\nimport os\nimport os.path\nimport re\nimport sys\nimport time\nimport threading\n\nCOMPILED_EXT_RE = re.compile(r'py[co]$')\n\n\nclass Reloader(threading.Thread):\n def __init__(self, extra_files=None, interval=1, callback=None):\n super().__init__()\n self.setDaemon(True)\n self._extra_files = set(extra_files or ())\n self._interval = interval\n self._callback = callback\n\n def add_extra_file(self, filename):\n self._extra_files.add(filename)\n\n def get_files(self):\n fnames = [\n COMPILED_EXT_RE.sub('py', module.__file__)\n for module in tuple(sys.modules.values())\n if getattr(module, '__file__', None)\n ]\n\n fnames.extend(self._extra_files)\n\n return fnames\n\n def run(self):\n mtimes = {}\n while True:\n for filename in self.get_files():\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n if self._callback:\n self._callback(filename)\n time.sleep(self._interval)\n\n\nhas_inotify = False\nif sys.platform.startswith('linux'):\n try:\n from inotify.adapters import Inotify\n import inotify.constants\n has_inotify = True\n except ImportError:\n pass\n\n\nif has_inotify:\n\n class InotifyReloader(threading.Thread):\n event_mask = (inotify.constants.IN_CREATE | inotify.constants.IN_DELETE\n | inotify.constants.IN_DELETE_SELF | inotify.constants.IN_MODIFY\n | inotify.constants.IN_MOVE_SELF | inotify.constants.IN_MOVED_FROM\n | inotify.constants.IN_MOVED_TO)\n\n def __init__(self, extra_files=None, callback=None):\n super().__init__()\n self.setDaemon(True)\n self._callback = callback\n self._dirs = set()\n self._watcher = Inotify()\n\n for extra_file in extra_files:\n self.add_extra_file(extra_file)\n\n def add_extra_file(self, filename):\n dirname = os.path.dirname(filename)\n\n if dirname in self._dirs:\n return\n\n self._watcher.add_watch(dirname, mask=self.event_mask)\n self._dirs.add(dirname)\n\n def get_dirs(self):\n fnames = [\n os.path.dirname(os.path.abspath(COMPILED_EXT_RE.sub('py', module.__file__)))\n for module in tuple(sys.modules.values())\n if getattr(module, '__file__', None)\n ]\n\n return set(fnames)\n\n def run(self):\n self._dirs = self.get_dirs()\n\n for dirname in self._dirs:\n if os.path.isdir(dirname):\n 
self._watcher.add_watch(dirname, mask=self.event_mask)\n\n for event in self._watcher.event_gen():\n if event is None:\n continue\n\n filename = event[3]\n\n self._callback(filename)\n\nelse:\n\n class InotifyReloader(object):\n def __init__(self, callback=None):\n raise ImportError('You must have the inotify module installed to '\n 'use the inotify reloader')\n\n\npreferred_reloader = InotifyReloader if has_inotify else Reloader\n\nreloader_engines = {\n 'auto': preferred_reloader,\n 'poll': Reloader,\n 'inotify': InotifyReloader,\n}\n", "path": "gunicorn/reloader.py"}], "after_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n# pylint: disable=no-else-continue\n\nimport os\nimport os.path\nimport re\nimport sys\nimport time\nimport threading\n\nCOMPILED_EXT_RE = re.compile(r'py[co]$')\n\n\nclass Reloader(threading.Thread):\n def __init__(self, extra_files=None, interval=1, callback=None):\n super().__init__()\n self.daemon = True\n self._extra_files = set(extra_files or ())\n self._interval = interval\n self._callback = callback\n\n def add_extra_file(self, filename):\n self._extra_files.add(filename)\n\n def get_files(self):\n fnames = [\n COMPILED_EXT_RE.sub('py', module.__file__)\n for module in tuple(sys.modules.values())\n if getattr(module, '__file__', None)\n ]\n\n fnames.extend(self._extra_files)\n\n return fnames\n\n def run(self):\n mtimes = {}\n while True:\n for filename in self.get_files():\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n if self._callback:\n self._callback(filename)\n time.sleep(self._interval)\n\n\nhas_inotify = False\nif sys.platform.startswith('linux'):\n try:\n from inotify.adapters import Inotify\n import inotify.constants\n has_inotify = True\n except ImportError:\n pass\n\n\nif has_inotify:\n\n class InotifyReloader(threading.Thread):\n event_mask = (inotify.constants.IN_CREATE | inotify.constants.IN_DELETE\n | inotify.constants.IN_DELETE_SELF | inotify.constants.IN_MODIFY\n | inotify.constants.IN_MOVE_SELF | inotify.constants.IN_MOVED_FROM\n | inotify.constants.IN_MOVED_TO)\n\n def __init__(self, extra_files=None, callback=None):\n super().__init__()\n self.daemon = True\n self._callback = callback\n self._dirs = set()\n self._watcher = Inotify()\n\n for extra_file in extra_files:\n self.add_extra_file(extra_file)\n\n def add_extra_file(self, filename):\n dirname = os.path.dirname(filename)\n\n if dirname in self._dirs:\n return\n\n self._watcher.add_watch(dirname, mask=self.event_mask)\n self._dirs.add(dirname)\n\n def get_dirs(self):\n fnames = [\n os.path.dirname(os.path.abspath(COMPILED_EXT_RE.sub('py', module.__file__)))\n for module in tuple(sys.modules.values())\n if getattr(module, '__file__', None)\n ]\n\n return set(fnames)\n\n def run(self):\n self._dirs = self.get_dirs()\n\n for dirname in self._dirs:\n if os.path.isdir(dirname):\n self._watcher.add_watch(dirname, mask=self.event_mask)\n\n for event in self._watcher.event_gen():\n if event is None:\n continue\n\n filename = event[3]\n\n self._callback(filename)\n\nelse:\n\n class InotifyReloader(object):\n def __init__(self, callback=None):\n raise ImportError('You must have the inotify module installed to '\n 'use the inotify reloader')\n\n\npreferred_reloader = InotifyReloader if has_inotify else Reloader\n\nreloader_engines = {\n 'auto': 
preferred_reloader,\n 'poll': Reloader,\n 'inotify': InotifyReloader,\n}\n", "path": "gunicorn/reloader.py"}]} | 1,497 | 184 |
gh_patches_debug_6470 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2306 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix enum_value handling for inputs
A clean and easy solution for fixing the broken enum_value handling for inputs
Closes https://github.com/strawberry-graphql/strawberry/issues/2305
Closes https://github.com/strawberry-graphql/strawberry/pull/2203
Closes https://github.com/strawberry-graphql/strawberry/pull/2185
Closes https://github.com/strawberry-graphql/strawberry/pull/2306
@patrick91 sorry for stealing your release file and tests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/enum.py`
Content:
```
1 import dataclasses
2 from enum import EnumMeta
3 from typing import (
4 Any,
5 Callable,
6 Iterable,
7 List,
8 Mapping,
9 Optional,
10 TypeVar,
11 Union,
12 overload,
13 )
14
15 from strawberry.type import StrawberryType
16
17 from .exceptions import ObjectIsNotAnEnumError
18
19
20 @dataclasses.dataclass
21 class EnumValue:
22 name: str
23 value: Any
24 deprecation_reason: Optional[str] = None
25 directives: Iterable[object] = ()
26 description: Optional[str] = None
27
28
29 @dataclasses.dataclass
30 class EnumDefinition(StrawberryType):
31 wrapped_cls: EnumMeta
32 name: str
33 values: List[EnumValue]
34 description: Optional[str]
35 directives: Iterable[object] = ()
36
37 def __hash__(self) -> int:
38 # TODO: Is this enough for unique-ness?
39 return hash(self.name)
40
41 def copy_with(
42 self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]
43 ) -> Union[StrawberryType, type]:
44 return super().copy_with(type_var_map)
45
46 @property
47 def is_generic(self) -> bool:
48 return False
49
50
51 # TODO: remove duplication of EnumValueDefinition and EnumValue
52 @dataclasses.dataclass
53 class EnumValueDefinition:
54 value: Any
55 deprecation_reason: Optional[str] = None
56 directives: Iterable[object] = ()
57 description: Optional[str] = None
58
59
60 def enum_value(
61 value: Any,
62 deprecation_reason: Optional[str] = None,
63 directives: Iterable[object] = (),
64 description: Optional[str] = None,
65 ) -> EnumValueDefinition:
66 return EnumValueDefinition(
67 value=value,
68 deprecation_reason=deprecation_reason,
69 directives=directives,
70 description=description,
71 )
72
73
74 EnumType = TypeVar("EnumType", bound=EnumMeta)
75
76
77 def _process_enum(
78 cls: EnumType,
79 name: Optional[str] = None,
80 description: Optional[str] = None,
81 directives: Iterable[object] = (),
82 ) -> EnumType:
83 if not isinstance(cls, EnumMeta):
84 raise ObjectIsNotAnEnumError(cls)
85
86 if not name:
87 name = cls.__name__
88
89 description = description
90
91 values = []
92 for item in cls: # type: ignore
93 item_value = item.value
94 item_name = item.name
95 deprecation_reason = None
96 item_directives: Iterable[object] = ()
97 enum_value_description = None
98
99 if isinstance(item_value, EnumValueDefinition):
100 item_directives = item_value.directives
101 enum_value_description = item_value.description
102 deprecation_reason = item_value.deprecation_reason
103 item_value = item_value.value
104
105 value = EnumValue(
106 item_name,
107 item_value,
108 deprecation_reason=deprecation_reason,
109 directives=item_directives,
110 description=enum_value_description,
111 )
112 values.append(value)
113
114 cls._enum_definition = EnumDefinition( # type: ignore
115 wrapped_cls=cls,
116 name=name,
117 values=values,
118 description=description,
119 directives=directives,
120 )
121
122 return cls
123
124
125 @overload
126 def enum(
127 _cls: EnumType,
128 *,
129 name: Optional[str] = None,
130 description: Optional[str] = None,
131 directives: Iterable[object] = ()
132 ) -> EnumType:
133 ...
134
135
136 @overload
137 def enum(
138 _cls: None = None,
139 *,
140 name: Optional[str] = None,
141 description: Optional[str] = None,
142 directives: Iterable[object] = ()
143 ) -> Callable[[EnumType], EnumType]:
144 ...
145
146
147 def enum(
148 _cls: Optional[EnumType] = None,
149 *,
150 name: Optional[str] = None,
151 description: Optional[str] = None,
152 directives: Iterable[object] = ()
153 ) -> Union[EnumType, Callable[[EnumType], EnumType]]:
154 """Registers the enum in the GraphQL type system.
155
156 If name is passed, the name of the GraphQL type will be
157 the value passed of name instead of the Enum class name.
158 """
159
160 def wrap(cls: EnumType) -> EnumType:
161 return _process_enum(cls, name, description, directives=directives)
162
163 if not _cls:
164 return wrap
165
166 return wrap(_cls)
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/enum.py b/strawberry/enum.py
--- a/strawberry/enum.py
+++ b/strawberry/enum.py
@@ -102,6 +102,11 @@
deprecation_reason = item_value.deprecation_reason
item_value = item_value.value
+ # update _value2member_map_ so that doing `MyEnum.MY_VALUE` and
+ # `MyEnum['MY_VALUE']` both work
+ cls._value2member_map_[item_value] = item
+ cls._member_map_[item_name]._value_ = item_value
+
value = EnumValue(
item_name,
item_value,
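The two added assignments keep `Enum`'s internal lookup tables consistent with the unwrapped value, so attribute access and lookup by value both reflect the `enum_value()` override. A short sketch of the behaviour this enables, assuming a Strawberry release that includes the fix:

```python
import enum

import strawberry


@strawberry.enum
class IceCreamFlavour(enum.Enum):
    VANILLA = strawberry.enum_value("vanilla")
    CHOCOLATE = "chocolate"


# After _process_enum updates _value2member_map_ and _member_map_:
assert IceCreamFlavour.VANILLA.value == "vanilla"
assert IceCreamFlavour("vanilla") is IceCreamFlavour.VANILLA
```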
| {"golden_diff": "diff --git a/strawberry/enum.py b/strawberry/enum.py\n--- a/strawberry/enum.py\n+++ b/strawberry/enum.py\n@@ -102,6 +102,11 @@\n deprecation_reason = item_value.deprecation_reason\n item_value = item_value.value\n \n+ # update _value2member_map_ so that doing `MyEnum.MY_VALUE` and\n+ # `MyEnum['MY_VALUE']` both work\n+ cls._value2member_map_[item_value] = item\n+ cls._member_map_[item_name]._value_ = item_value\n+\n value = EnumValue(\n item_name,\n item_value,\n", "issue": "Fix enum_value handling for inputs\nA clean and easy solution for fixing the broken enum_value handling for inputs\r\n\r\nCloses https://github.com/strawberry-graphql/strawberry/issues/2305\r\nCloses https://github.com/strawberry-graphql/strawberry/pull/2203\r\nCloses https://github.com/strawberry-graphql/strawberry/pull/2185\r\nCloses https://github.com/strawberry-graphql/strawberry/pull/2306\r\n\r\n@patrick91 sorry for stealing your release file and tests\n", "before_files": [{"content": "import dataclasses\nfrom enum import EnumMeta\nfrom typing import (\n Any,\n Callable,\n Iterable,\n List,\n Mapping,\n Optional,\n TypeVar,\n Union,\n overload,\n)\n\nfrom strawberry.type import StrawberryType\n\nfrom .exceptions import ObjectIsNotAnEnumError\n\n\[email protected]\nclass EnumValue:\n name: str\n value: Any\n deprecation_reason: Optional[str] = None\n directives: Iterable[object] = ()\n description: Optional[str] = None\n\n\[email protected]\nclass EnumDefinition(StrawberryType):\n wrapped_cls: EnumMeta\n name: str\n values: List[EnumValue]\n description: Optional[str]\n directives: Iterable[object] = ()\n\n def __hash__(self) -> int:\n # TODO: Is this enough for unique-ness?\n return hash(self.name)\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> Union[StrawberryType, type]:\n return super().copy_with(type_var_map)\n\n @property\n def is_generic(self) -> bool:\n return False\n\n\n# TODO: remove duplication of EnumValueDefinition and EnumValue\[email protected]\nclass EnumValueDefinition:\n value: Any\n deprecation_reason: Optional[str] = None\n directives: Iterable[object] = ()\n description: Optional[str] = None\n\n\ndef enum_value(\n value: Any,\n deprecation_reason: Optional[str] = None,\n directives: Iterable[object] = (),\n description: Optional[str] = None,\n) -> EnumValueDefinition:\n return EnumValueDefinition(\n value=value,\n deprecation_reason=deprecation_reason,\n directives=directives,\n description=description,\n )\n\n\nEnumType = TypeVar(\"EnumType\", bound=EnumMeta)\n\n\ndef _process_enum(\n cls: EnumType,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = (),\n) -> EnumType:\n if not isinstance(cls, EnumMeta):\n raise ObjectIsNotAnEnumError(cls)\n\n if not name:\n name = cls.__name__\n\n description = description\n\n values = []\n for item in cls: # type: ignore\n item_value = item.value\n item_name = item.name\n deprecation_reason = None\n item_directives: Iterable[object] = ()\n enum_value_description = None\n\n if isinstance(item_value, EnumValueDefinition):\n item_directives = item_value.directives\n enum_value_description = item_value.description\n deprecation_reason = item_value.deprecation_reason\n item_value = item_value.value\n\n value = EnumValue(\n item_name,\n item_value,\n deprecation_reason=deprecation_reason,\n directives=item_directives,\n description=enum_value_description,\n )\n values.append(value)\n\n cls._enum_definition = EnumDefinition( # type: ignore\n 
wrapped_cls=cls,\n name=name,\n values=values,\n description=description,\n directives=directives,\n )\n\n return cls\n\n\n@overload\ndef enum(\n _cls: EnumType,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> EnumType:\n ...\n\n\n@overload\ndef enum(\n _cls: None = None,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> Callable[[EnumType], EnumType]:\n ...\n\n\ndef enum(\n _cls: Optional[EnumType] = None,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> Union[EnumType, Callable[[EnumType], EnumType]]:\n \"\"\"Registers the enum in the GraphQL type system.\n\n If name is passed, the name of the GraphQL type will be\n the value passed of name instead of the Enum class name.\n \"\"\"\n\n def wrap(cls: EnumType) -> EnumType:\n return _process_enum(cls, name, description, directives=directives)\n\n if not _cls:\n return wrap\n\n return wrap(_cls)\n", "path": "strawberry/enum.py"}], "after_files": [{"content": "import dataclasses\nfrom enum import EnumMeta\nfrom typing import (\n Any,\n Callable,\n Iterable,\n List,\n Mapping,\n Optional,\n TypeVar,\n Union,\n overload,\n)\n\nfrom strawberry.type import StrawberryType\n\nfrom .exceptions import ObjectIsNotAnEnumError\n\n\[email protected]\nclass EnumValue:\n name: str\n value: Any\n deprecation_reason: Optional[str] = None\n directives: Iterable[object] = ()\n description: Optional[str] = None\n\n\[email protected]\nclass EnumDefinition(StrawberryType):\n wrapped_cls: EnumMeta\n name: str\n values: List[EnumValue]\n description: Optional[str]\n directives: Iterable[object] = ()\n\n def __hash__(self) -> int:\n # TODO: Is this enough for unique-ness?\n return hash(self.name)\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> Union[StrawberryType, type]:\n return super().copy_with(type_var_map)\n\n @property\n def is_generic(self) -> bool:\n return False\n\n\n# TODO: remove duplication of EnumValueDefinition and EnumValue\[email protected]\nclass EnumValueDefinition:\n value: Any\n deprecation_reason: Optional[str] = None\n directives: Iterable[object] = ()\n description: Optional[str] = None\n\n\ndef enum_value(\n value: Any,\n deprecation_reason: Optional[str] = None,\n directives: Iterable[object] = (),\n description: Optional[str] = None,\n) -> EnumValueDefinition:\n return EnumValueDefinition(\n value=value,\n deprecation_reason=deprecation_reason,\n directives=directives,\n description=description,\n )\n\n\nEnumType = TypeVar(\"EnumType\", bound=EnumMeta)\n\n\ndef _process_enum(\n cls: EnumType,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = (),\n) -> EnumType:\n if not isinstance(cls, EnumMeta):\n raise ObjectIsNotAnEnumError(cls)\n\n if not name:\n name = cls.__name__\n\n description = description\n\n values = []\n for item in cls: # type: ignore\n item_value = item.value\n item_name = item.name\n deprecation_reason = None\n item_directives: Iterable[object] = ()\n enum_value_description = None\n\n if isinstance(item_value, EnumValueDefinition):\n item_directives = item_value.directives\n enum_value_description = item_value.description\n deprecation_reason = item_value.deprecation_reason\n item_value = item_value.value\n\n # update _value2member_map_ so that doing `MyEnum.MY_VALUE` and\n # `MyEnum['MY_VALUE']` both work\n 
cls._value2member_map_[item_value] = item\n cls._member_map_[item_name]._value_ = item_value\n\n value = EnumValue(\n item_name,\n item_value,\n deprecation_reason=deprecation_reason,\n directives=item_directives,\n description=enum_value_description,\n )\n values.append(value)\n\n cls._enum_definition = EnumDefinition( # type: ignore\n wrapped_cls=cls,\n name=name,\n values=values,\n description=description,\n directives=directives,\n )\n\n return cls\n\n\n@overload\ndef enum(\n _cls: EnumType,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> EnumType:\n ...\n\n\n@overload\ndef enum(\n _cls: None = None,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> Callable[[EnumType], EnumType]:\n ...\n\n\ndef enum(\n _cls: Optional[EnumType] = None,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n directives: Iterable[object] = ()\n) -> Union[EnumType, Callable[[EnumType], EnumType]]:\n \"\"\"Registers the enum in the GraphQL type system.\n\n If name is passed, the name of the GraphQL type will be\n the value passed of name instead of the Enum class name.\n \"\"\"\n\n def wrap(cls: EnumType) -> EnumType:\n return _process_enum(cls, name, description, directives=directives)\n\n if not _cls:\n return wrap\n\n return wrap(_cls)\n", "path": "strawberry/enum.py"}]} | 1,739 | 156 |
gh_patches_debug_5348 | rasdani/github-patches | git_diff | localstack__localstack-536 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix Java Lambda Handler Static Imports
By mistake, the auto-import functionality imported the wrong static methods. This fixes the issues reported in #534.
It is unrelated to JavaFX, which was also imported by mistake. I will prepare another PR with a bit more tests; I don't understand how it could have passed the CI/CD build with the wrong imports.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `localstack/constants.py`
Content:
```
1 import os
2 import localstack_client.config
3
4 # LocalStack version
5 VERSION = '0.8.4'
6
7 # default AWS region
8 if 'DEFAULT_REGION' not in os.environ:
9 os.environ['DEFAULT_REGION'] = 'us-east-1'
10 DEFAULT_REGION = os.environ['DEFAULT_REGION']
11
12 # constant to represent the "local" region, i.e., local machine
13 REGION_LOCAL = 'local'
14
15 # dev environment
16 ENV_DEV = 'dev'
17
18 # backend service ports, for services that are behind a proxy (counting down from 4566)
19 DEFAULT_PORT_APIGATEWAY_BACKEND = 4566
20 DEFAULT_PORT_KINESIS_BACKEND = 4565
21 DEFAULT_PORT_DYNAMODB_BACKEND = 4564
22 DEFAULT_PORT_S3_BACKEND = 4563
23 DEFAULT_PORT_SNS_BACKEND = 4562
24 DEFAULT_PORT_SQS_BACKEND = 4561
25 DEFAULT_PORT_ELASTICSEARCH_BACKEND = 4560
26 DEFAULT_PORT_CLOUDFORMATION_BACKEND = 4559
27
28 DEFAULT_PORT_WEB_UI = 8080
29
30 LOCALHOST = 'localhost'
31
32 # version of the Maven dependency with Java utility code
33 LOCALSTACK_MAVEN_VERSION = '0.1.9'
34
35 # map of default service APIs and ports to be spun up (fetch map from localstack_client)
36 DEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()
37
38 # host to bind to when starting the services
39 BIND_HOST = '0.0.0.0'
40
41 # AWS user account ID used for tests
42 TEST_AWS_ACCOUNT_ID = '000000000000'
43 os.environ['TEST_AWS_ACCOUNT_ID'] = TEST_AWS_ACCOUNT_ID
44
45 # root code folder
46 LOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))
47
48 # virtualenv folder
49 LOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, '.venv')
50 if not os.path.isdir(LOCALSTACK_VENV_FOLDER):
51 # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/
52 LOCALSTACK_VENV_FOLDER = os.path.realpath(os.path.join(LOCALSTACK_ROOT_FOLDER, '..', '..', '..'))
53
54 # API Gateway path to indicate a user request sent to the gateway
55 PATH_USER_REQUEST = '_user_request_'
56
57 # name of LocalStack Docker image
58 DOCKER_IMAGE_NAME = 'localstack/localstack'
59
60 # environment variable name to tag local test runs
61 ENV_INTERNAL_TEST_RUN = 'LOCALSTACK_INTERNAL_TEST_RUN'
62
63 # content types
64 APPLICATION_AMZ_JSON_1_0 = 'application/x-amz-json-1.0'
65 APPLICATION_AMZ_JSON_1_1 = 'application/x-amz-json-1.1'
66 APPLICATION_JSON = 'application/json'
67
68 # Lambda defaults
69 LAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID
70
71 # installation constants
72 ELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'
73 DYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'
74 ELASTICMQ_JAR_URL = 'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'
75 STS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'
76
77 # API endpoint for analytics events
78 API_ENDPOINT = 'https://api.localstack.cloud/v1'
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/localstack/constants.py b/localstack/constants.py
--- a/localstack/constants.py
+++ b/localstack/constants.py
@@ -30,7 +30,7 @@
LOCALHOST = 'localhost'
# version of the Maven dependency with Java utility code
-LOCALSTACK_MAVEN_VERSION = '0.1.9'
+LOCALSTACK_MAVEN_VERSION = '0.1.10'
# map of default service APIs and ports to be spun up (fetch map from localstack_client)
DEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()
| {"golden_diff": "diff --git a/localstack/constants.py b/localstack/constants.py\n--- a/localstack/constants.py\n+++ b/localstack/constants.py\n@@ -30,7 +30,7 @@\n LOCALHOST = 'localhost'\n \n # version of the Maven dependency with Java utility code\n-LOCALSTACK_MAVEN_VERSION = '0.1.9'\n+LOCALSTACK_MAVEN_VERSION = '0.1.10'\n \n # map of default service APIs and ports to be spun up (fetch map from localstack_client)\n DEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()\n", "issue": "Fix Java Lambda Handler Static Imports\nBy mistake - autoimport functionality wrong static methods has been imported. This fix the issues reported by in #534 \r\n\r\nIt is unrelated to the JavaFX as that has been imported by mistake. Will prepare another PR with a bit more tests don't understand how it could have passed the CI / CD build with the wrong imports.\r\n\n", "before_files": [{"content": "import os\nimport localstack_client.config\n\n# LocalStack version\nVERSION = '0.8.4'\n\n# default AWS region\nif 'DEFAULT_REGION' not in os.environ:\n os.environ['DEFAULT_REGION'] = 'us-east-1'\nDEFAULT_REGION = os.environ['DEFAULT_REGION']\n\n# constant to represent the \"local\" region, i.e., local machine\nREGION_LOCAL = 'local'\n\n# dev environment\nENV_DEV = 'dev'\n\n# backend service ports, for services that are behind a proxy (counting down from 4566)\nDEFAULT_PORT_APIGATEWAY_BACKEND = 4566\nDEFAULT_PORT_KINESIS_BACKEND = 4565\nDEFAULT_PORT_DYNAMODB_BACKEND = 4564\nDEFAULT_PORT_S3_BACKEND = 4563\nDEFAULT_PORT_SNS_BACKEND = 4562\nDEFAULT_PORT_SQS_BACKEND = 4561\nDEFAULT_PORT_ELASTICSEARCH_BACKEND = 4560\nDEFAULT_PORT_CLOUDFORMATION_BACKEND = 4559\n\nDEFAULT_PORT_WEB_UI = 8080\n\nLOCALHOST = 'localhost'\n\n# version of the Maven dependency with Java utility code\nLOCALSTACK_MAVEN_VERSION = '0.1.9'\n\n# map of default service APIs and ports to be spun up (fetch map from localstack_client)\nDEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()\n\n# host to bind to when starting the services\nBIND_HOST = '0.0.0.0'\n\n# AWS user account ID used for tests\nTEST_AWS_ACCOUNT_ID = '000000000000'\nos.environ['TEST_AWS_ACCOUNT_ID'] = TEST_AWS_ACCOUNT_ID\n\n# root code folder\nLOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))\n\n# virtualenv folder\nLOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, '.venv')\nif not os.path.isdir(LOCALSTACK_VENV_FOLDER):\n # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/\n LOCALSTACK_VENV_FOLDER = os.path.realpath(os.path.join(LOCALSTACK_ROOT_FOLDER, '..', '..', '..'))\n\n# API Gateway path to indicate a user request sent to the gateway\nPATH_USER_REQUEST = '_user_request_'\n\n# name of LocalStack Docker image\nDOCKER_IMAGE_NAME = 'localstack/localstack'\n\n# environment variable name to tag local test runs\nENV_INTERNAL_TEST_RUN = 'LOCALSTACK_INTERNAL_TEST_RUN'\n\n# content types\nAPPLICATION_AMZ_JSON_1_0 = 'application/x-amz-json-1.0'\nAPPLICATION_AMZ_JSON_1_1 = 'application/x-amz-json-1.1'\nAPPLICATION_JSON = 'application/json'\n\n# Lambda defaults\nLAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID\n\n# installation constants\nELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'\nDYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'\nELASTICMQ_JAR_URL = 
'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'\nSTS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'\n\n# API endpoint for analytics events\nAPI_ENDPOINT = 'https://api.localstack.cloud/v1'\n", "path": "localstack/constants.py"}], "after_files": [{"content": "import os\nimport localstack_client.config\n\n# LocalStack version\nVERSION = '0.8.4'\n\n# default AWS region\nif 'DEFAULT_REGION' not in os.environ:\n os.environ['DEFAULT_REGION'] = 'us-east-1'\nDEFAULT_REGION = os.environ['DEFAULT_REGION']\n\n# constant to represent the \"local\" region, i.e., local machine\nREGION_LOCAL = 'local'\n\n# dev environment\nENV_DEV = 'dev'\n\n# backend service ports, for services that are behind a proxy (counting down from 4566)\nDEFAULT_PORT_APIGATEWAY_BACKEND = 4566\nDEFAULT_PORT_KINESIS_BACKEND = 4565\nDEFAULT_PORT_DYNAMODB_BACKEND = 4564\nDEFAULT_PORT_S3_BACKEND = 4563\nDEFAULT_PORT_SNS_BACKEND = 4562\nDEFAULT_PORT_SQS_BACKEND = 4561\nDEFAULT_PORT_ELASTICSEARCH_BACKEND = 4560\nDEFAULT_PORT_CLOUDFORMATION_BACKEND = 4559\n\nDEFAULT_PORT_WEB_UI = 8080\n\nLOCALHOST = 'localhost'\n\n# version of the Maven dependency with Java utility code\nLOCALSTACK_MAVEN_VERSION = '0.1.10'\n\n# map of default service APIs and ports to be spun up (fetch map from localstack_client)\nDEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()\n\n# host to bind to when starting the services\nBIND_HOST = '0.0.0.0'\n\n# AWS user account ID used for tests\nTEST_AWS_ACCOUNT_ID = '000000000000'\nos.environ['TEST_AWS_ACCOUNT_ID'] = TEST_AWS_ACCOUNT_ID\n\n# root code folder\nLOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))\n\n# virtualenv folder\nLOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, '.venv')\nif not os.path.isdir(LOCALSTACK_VENV_FOLDER):\n # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/\n LOCALSTACK_VENV_FOLDER = os.path.realpath(os.path.join(LOCALSTACK_ROOT_FOLDER, '..', '..', '..'))\n\n# API Gateway path to indicate a user request sent to the gateway\nPATH_USER_REQUEST = '_user_request_'\n\n# name of LocalStack Docker image\nDOCKER_IMAGE_NAME = 'localstack/localstack'\n\n# environment variable name to tag local test runs\nENV_INTERNAL_TEST_RUN = 'LOCALSTACK_INTERNAL_TEST_RUN'\n\n# content types\nAPPLICATION_AMZ_JSON_1_0 = 'application/x-amz-json-1.0'\nAPPLICATION_AMZ_JSON_1_1 = 'application/x-amz-json-1.1'\nAPPLICATION_JSON = 'application/json'\n\n# Lambda defaults\nLAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID\n\n# installation constants\nELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'\nDYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'\nELASTICMQ_JAR_URL = 'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'\nSTS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'\n\n# API endpoint for analytics events\nAPI_ENDPOINT = 'https://api.localstack.cloud/v1'\n", "path": "localstack/constants.py"}]} | 1,281 | 122 |
gh_patches_debug_21589 | rasdani/github-patches | git_diff | conan-io__conan-center-index-7891 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] sqlpp11/0.60: scripts are not in the package
sqlpp11 provides some scripts that can be used by the consumer: https://github.com/rbock/sqlpp11/tree/develop/scripts
But these scripts are not in the conan package.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/sqlpp11/all/conanfile.py`
Content:
```
1 from conans import ConanFile, tools
2 import os
3
4 required_conan_version = ">=1.33.0"
5
6
7 class Sqlpp11Conan(ConanFile):
8 name = "sqlpp11"
9 license = "BSD-2-Clause"
10 url = "https://github.com/conan-io/conan-center-index"
11 homepage = "https://github.com/rbock/sqlpp11"
12 description = "A type safe SQL template library for C++"
13 topics = ("SQL", "DSL", "embedded", "data-base")
14 no_copy_source = True
15
16 @property
17 def _source_subfolder(self):
18 return "source_subfolder"
19
20 def requirements(self):
21 self.requires("date/3.0.1")
22
23 def package_id(self):
24 self.info.header_only()
25
26 def source(self):
27 tools.get(**self.conan_data["sources"][self.version],
28 destination=self._source_subfolder, strip_root=True)
29
30 def package(self):
31 self.copy("LICENSE", dst="licenses", src=self._source_subfolder)
32 self.copy("*.h", dst="include", src=os.path.join(self._source_subfolder, "include"))
33
34 def package_info(self):
35 self.cpp_info.filenames["cmake_find_package"] = "Sqlpp11"
36 self.cpp_info.filenames["cmake_find_package_multi"] = "Sqlpp11"
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/sqlpp11/all/conanfile.py b/recipes/sqlpp11/all/conanfile.py
--- a/recipes/sqlpp11/all/conanfile.py
+++ b/recipes/sqlpp11/all/conanfile.py
@@ -24,13 +24,21 @@
self.info.header_only()
def source(self):
- tools.get(**self.conan_data["sources"][self.version],
- destination=self._source_subfolder, strip_root=True)
+ tools.get(
+ **self.conan_data["sources"][self.version],
+ destination=self._source_subfolder,
+ strip_root=True
+ )
def package(self):
self.copy("LICENSE", dst="licenses", src=self._source_subfolder)
self.copy("*.h", dst="include", src=os.path.join(self._source_subfolder, "include"))
+ self.copy("*", dst="bin", src=os.path.join(self._source_subfolder, "scripts"))
def package_info(self):
self.cpp_info.filenames["cmake_find_package"] = "Sqlpp11"
self.cpp_info.filenames["cmake_find_package_multi"] = "Sqlpp11"
+
+ bindir = os.path.join(self.package_folder, "bin")
+ self.output.info("Appending PATH environment variable: {}".format(bindir))
+ self.env_info.PATH.append(bindir)
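Because the patched recipe copies `scripts/` into `bin/` and appends that folder to `env_info.PATH`, a consumer that declares sqlpp11 as a build requirement can call the bundled scripts by name. The recipe below is a hypothetical sketch: the `ddl2cpp` invocation and its arguments are illustrative and not verified against sqlpp11's actual CLI:

```python
from conans import ConanFile


class ConsumerConan(ConanFile):
    name = "consumer"
    version = "0.1"
    build_requires = "sqlpp11/0.60"

    def build(self):
        # In Conan 1.x the env_info of build requirements (including PATH) is
        # applied while build() runs, so the script can be called by name.
        # Script name and arguments are illustrative only.
        self.run("ddl2cpp tables.sql tables my_namespace")
```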
| {"golden_diff": "diff --git a/recipes/sqlpp11/all/conanfile.py b/recipes/sqlpp11/all/conanfile.py\n--- a/recipes/sqlpp11/all/conanfile.py\n+++ b/recipes/sqlpp11/all/conanfile.py\n@@ -24,13 +24,21 @@\n self.info.header_only()\n \n def source(self):\n- tools.get(**self.conan_data[\"sources\"][self.version],\n- destination=self._source_subfolder, strip_root=True)\n+ tools.get(\n+ **self.conan_data[\"sources\"][self.version],\n+ destination=self._source_subfolder,\n+ strip_root=True\n+ )\n \n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(\"*.h\", dst=\"include\", src=os.path.join(self._source_subfolder, \"include\"))\n+ self.copy(\"*\", dst=\"bin\", src=os.path.join(self._source_subfolder, \"scripts\"))\n \n def package_info(self):\n self.cpp_info.filenames[\"cmake_find_package\"] = \"Sqlpp11\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"Sqlpp11\"\n+\n+ bindir = os.path.join(self.package_folder, \"bin\")\n+ self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n+ self.env_info.PATH.append(bindir)\n", "issue": "[package] sqlpp11/0.60: scripts are not in the package\nsqlpp11 provides some scripts that can be used by the consumer: https://github.com/rbock/sqlpp11/tree/develop/scripts \r\nBut these scripts are not in the conan package.\n", "before_files": [{"content": "from conans import ConanFile, tools\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass Sqlpp11Conan(ConanFile):\n name = \"sqlpp11\"\n license = \"BSD-2-Clause\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/rbock/sqlpp11\"\n description = \"A type safe SQL template library for C++\"\n topics = (\"SQL\", \"DSL\", \"embedded\", \"data-base\")\n no_copy_source = True\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def requirements(self):\n self.requires(\"date/3.0.1\")\n\n def package_id(self):\n self.info.header_only()\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(\"*.h\", dst=\"include\", src=os.path.join(self._source_subfolder, \"include\"))\n\n def package_info(self):\n self.cpp_info.filenames[\"cmake_find_package\"] = \"Sqlpp11\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"Sqlpp11\"\n", "path": "recipes/sqlpp11/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, tools\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass Sqlpp11Conan(ConanFile):\n name = \"sqlpp11\"\n license = \"BSD-2-Clause\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/rbock/sqlpp11\"\n description = \"A type safe SQL template library for C++\"\n topics = (\"SQL\", \"DSL\", \"embedded\", \"data-base\")\n no_copy_source = True\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def requirements(self):\n self.requires(\"date/3.0.1\")\n\n def package_id(self):\n self.info.header_only()\n\n def source(self):\n tools.get(\n **self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder,\n strip_root=True\n )\n\n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(\"*.h\", dst=\"include\", src=os.path.join(self._source_subfolder, \"include\"))\n self.copy(\"*\", dst=\"bin\", 
src=os.path.join(self._source_subfolder, \"scripts\"))\n\n def package_info(self):\n self.cpp_info.filenames[\"cmake_find_package\"] = \"Sqlpp11\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"Sqlpp11\"\n\n bindir = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n self.env_info.PATH.append(bindir)\n", "path": "recipes/sqlpp11/all/conanfile.py"}]} | 695 | 305 |
gh_patches_debug_761 | rasdani/github-patches | git_diff | encode__uvicorn-324 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError: __init__() when run "uvicorn app:App"
I'm working on Mac Os Sierra 10.12.6, python 3.7.2 and uvicorn via pip3 0.5.1.
When I run the example uvicorn app:App I get the following error:
Traceback (most recent call last):
File "/usr/local/bin/uvicorn", line 11, in <module>
load_entry_point('uvicorn==0.5.1', 'console_scripts', 'uvicorn')()
File "/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py", line 489, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2793, in load_entry_point
return ep.load()
File "/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2411, in load
return self.resolve()
File "/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2417, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/local/lib/python3.7/site-packages/uvicorn/__init__.py", line 2, in <module>
from uvicorn.main import Server, main, run
File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 212, in <module>
ssl_ciphers: str,
File "/usr/local/lib/python3.7/site-packages/click/decorators.py", line 170, in decorator
_param_memo(f, OptionClass(param_decls, **attrs))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1460, in __init__
Parameter.__init__(self, param_decls, type=type, **attrs)
TypeError: __init__() got an unexpected keyword argument 'hidden'
Thank you
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6 import sys
7 import platform
8
9 from setuptools import setup
10
11
12 def get_version(package):
13 """
14 Return package version as listed in `__version__` in `init.py`.
15 """
16 path = os.path.join(package, '__init__.py')
17 init_py = open(path, 'r', encoding='utf8').read()
18 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
19
20
21 def get_long_description():
22 """
23 Return the README.
24 """
25 return open('README.md', 'r', encoding='utf8').read()
26
27
28 def get_packages(package):
29 """
30 Return root package and all sub-packages.
31 """
32 return [dirpath
33 for dirpath, dirnames, filenames in os.walk(package)
34 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
35
36
37 env_marker = (
38 "sys_platform != 'win32'"
39 " and sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'pypy'"
41 )
42
43 requirements = [
44 "click",
45 "h11",
46 "websockets>=6.0",
47 "httptools;" + env_marker,
48 "uvloop;" + env_marker,
49 ]
50
51
52 setup(
53 name='uvicorn',
54 version=get_version('uvicorn'),
55 url='https://github.com/encode/uvicorn',
56 license='BSD',
57 description='The lightning-fast ASGI server.',
58 long_description=get_long_description(),
59 long_description_content_type='text/markdown',
60 author='Tom Christie',
61 author_email='[email protected]',
62 packages=get_packages('uvicorn'),
63 install_requires=requirements,
64 data_files = [("", ["LICENSE.md"])],
65 classifiers=[
66 'Development Status :: 3 - Alpha',
67 'Environment :: Web Environment',
68 'Intended Audience :: Developers',
69 'License :: OSI Approved :: BSD License',
70 'Operating System :: OS Independent',
71 'Topic :: Internet :: WWW/HTTP',
72 'Programming Language :: Python :: 3',
73 'Programming Language :: Python :: 3.5',
74 'Programming Language :: Python :: 3.6',
75 'Programming Language :: Python :: 3.7',
76 'Programming Language :: Python :: Implementation :: CPython',
77 'Programming Language :: Python :: Implementation :: PyPy',
78 ],
79 entry_points="""
80 [console_scripts]
81 uvicorn=uvicorn.main:main
82 """
83 )
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,11 +41,11 @@
)
requirements = [
- "click",
- "h11",
- "websockets>=6.0",
- "httptools;" + env_marker,
- "uvloop;" + env_marker,
+ "click==7.*",
+ "h11==0.8.*",
+ "websockets==7.*",
+ "httptools==0.0.13 ;" + env_marker,
+ "uvloop==0.12.* ;" + env_marker,
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,11 +41,11 @@\n )\n \n requirements = [\n- \"click\",\n- \"h11\",\n- \"websockets>=6.0\",\n- \"httptools;\" + env_marker,\n- \"uvloop;\" + env_marker,\n+ \"click==7.*\",\n+ \"h11==0.8.*\",\n+ \"websockets==7.*\",\n+ \"httptools==0.0.13 ;\" + env_marker,\n+ \"uvloop==0.12.* ;\" + env_marker,\n ]\n", "issue": "TypeError: __init__() when run \"uvicorn app:App\"\nI'm working on Mac Os Sierra 10.12.6, python 3.7.2 and uvicorn via pip3 0.5.1.\r\nWhen I run the example uvicorn app:App get the following error:\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/uvicorn\", line 11, in <module>\r\n load_entry_point('uvicorn==0.5.1', 'console_scripts', 'uvicorn')()\r\n File \"/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 489, in load_entry_point\r\n return get_distribution(dist).load_entry_point(group, name)\r\n File \"/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 2793, in load_entry_point\r\n return ep.load()\r\n File \"/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 2411, in load\r\n return self.resolve()\r\n File \"/usr/local/lib/python3.7/site-packages/pkg_resources/__init__.py\", line 2417, in resolve\r\n module = __import__(self.module_name, fromlist=['__name__'], level=0)\r\n File \"/usr/local/lib/python3.7/site-packages/uvicorn/__init__.py\", line 2, in <module>\r\n from uvicorn.main import Server, main, run\r\n File \"/usr/local/lib/python3.7/site-packages/uvicorn/main.py\", line 212, in <module>\r\n ssl_ciphers: str,\r\n File \"/usr/local/lib/python3.7/site-packages/click/decorators.py\", line 170, in decorator\r\n _param_memo(f, OptionClass(param_decls, **attrs))\r\n File \"/usr/local/lib/python3.7/site-packages/click/core.py\", line 1460, in __init__\r\n Parameter.__init__(self, param_decls, type=type, **attrs)\r\nTypeError: __init__() got an unexpected keyword argument 'hidden'\r\n\r\nThank you\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport sys\nimport platform\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, '__init__.py')\n init_py = open(path, 'r', encoding='utf8').read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open('README.md', 'r', encoding='utf8').read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'pypy'\"\n)\n\nrequirements = [\n \"click\",\n \"h11\",\n \"websockets>=6.0\",\n \"httptools;\" + env_marker,\n \"uvloop;\" + env_marker,\n]\n\n\nsetup(\n name='uvicorn',\n version=get_version('uvicorn'),\n url='https://github.com/encode/uvicorn',\n license='BSD',\n description='The lightning-fast ASGI server.',\n long_description=get_long_description(),\n long_description_content_type='text/markdown',\n author='Tom Christie',\n author_email='[email protected]',\n packages=get_packages('uvicorn'),\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n 
'Development Status :: 3 - Alpha',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\"\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport sys\nimport platform\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, '__init__.py')\n init_py = open(path, 'r', encoding='utf8').read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open('README.md', 'r', encoding='utf8').read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'pypy'\"\n)\n\nrequirements = [\n \"click==7.*\",\n \"h11==0.8.*\",\n \"websockets==7.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n \"uvloop==0.12.* ;\" + env_marker,\n]\n\n\nsetup(\n name='uvicorn',\n version=get_version('uvicorn'),\n url='https://github.com/encode/uvicorn',\n license='BSD',\n description='The lightning-fast ASGI server.',\n long_description=get_long_description(),\n long_description_content_type='text/markdown',\n author='Tom Christie',\n author_email='[email protected]',\n packages=get_packages('uvicorn'),\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\"\n)\n", "path": "setup.py"}]} | 1,419 | 148 |
gh_patches_debug_27919 | rasdani/github-patches | git_diff | saulpw__visidata-1059 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[unfurl-col] unfurling a column with TypedWrapper halts unfurl
**Small description**
When unfurling a column which has a TypedWrapper, sheet loading halts
**Expected result**
The same response as whatever `options.unfurl_empty` would do for a row without a list.
**Actual result with screenshot**
AttributeError: 'TypedWrapper' object has no attribute 'xyz'

If you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.
**Steps to reproduce with sample data and a .vd**
`echo '[{"a":1,"d":{"b":[1,2,3]}},{"a":2,"d":{"c":[1,2,3]}},{"a":3,"d":{"b":[1,2,3]}}]' | vd -f json`
```
sheet col row longname input keystrokes comment
open-file - o
_ d addcol-expr curcol['b'] = create new column from Python expression, with column names as variables
_ curcol_b_ unfurl-col row-wise expand current column of lists (e.g. [2]) or dicts (e.g. {3}) within that column
```
**Additional context**
v2.6dev
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `visidata/unfurl.py`
Content:
```
1 '''This adds the `unfurl-col` command, to unfurl a column containing iterable values, such as lists and dicts.
2 Unfurling pushes a new sheet, with each key/value pair in the unfurled column values getting its own row, with the rest of the source sheet's columns copied for each of those rows.
3
4 Note: When unfurling a column, non-iterable objects (numbers, and also strings) are treated as single-item lists, so that they too can be unfurled.
5
6 Credit to Jeremy Singer-Vine for the idea and original implementation.
7 '''
8
9 from collections.abc import Iterable, Mapping
10 from visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id
11
12
13 vd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)
14
15
16 class UnfurledSheet(Sheet):
17 @asyncthread
18 def reload(self):
19 # Copy over base sheet, using SubColumnFunc
20 self.columns = []
21 for col in self.source.columns:
22 if col is self.source_col:
23 # Replace iterable column with two columns: keys and values
24 self.cursorVisibleColIndex = len(self.columns)-1
25 self.addColumn(ColumnItem(col.name + "_key", 1))
26 self.addColumn(ColumnItem(col.name + "_value", 2))
27 else:
28 self.addColumn(SubColumnFunc(col.name, col, 0, keycol=col.keycol))
29
30 self.rows = []
31 unfurl_empty = self.options.unfurl_empty
32 for row in Progress(self.source.rows):
33 val = self.source_col.getValue(row)
34
35 if not isinstance(val, Iterable) or isinstance(val, str):
36 val = [ val ]
37
38 if isinstance(val, Mapping):
39 gen = val.items()
40 else:
41 gen = enumerate(val)
42
43 nadded = 0
44 for key, sub_value in gen:
45 new_row = [ row, key, sub_value ]
46 self.addRow(new_row)
47 nadded += 1
48
49 if unfurl_empty and not nadded:
50 self.addRow([row, None, None])
51
52 @Sheet.api
53 def unfurl_col(sheet, col):
54 clean_id = clean_to_id(col.name)
55 vs = UnfurledSheet(f"{sheet.name}_{clean_id}_unfurled", source=sheet, source_col=col)
56 return vs
57
58
59 Sheet.addCommand("zM", "unfurl-col", "vd.push(unfurl_col(cursorCol))", "row-wise expand current column of lists (e.g. [2]) or dicts (e.g. {3}) within that column")
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/visidata/unfurl.py b/visidata/unfurl.py
--- a/visidata/unfurl.py
+++ b/visidata/unfurl.py
@@ -8,12 +8,14 @@
from collections.abc import Iterable, Mapping
from visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id
+from visidata import stacktrace, TypedExceptionWrapper
vd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)
class UnfurledSheet(Sheet):
+ # rowdef: [row, key, sub_value]
@asyncthread
def reload(self):
# Copy over base sheet, using SubColumnFunc
@@ -30,7 +32,16 @@
self.rows = []
unfurl_empty = self.options.unfurl_empty
for row in Progress(self.source.rows):
- val = self.source_col.getValue(row)
+ try:
+ val = self.source_col.getValue(row)
+ except Exception as e:
+ e.stacktrace = stacktrace()
+ if unfurl_empty:
+ # TypedExceptionWrapper allows the use of z^E to see the stacktrace
+ # the exception on its own lacks clarity
+ self.addRow([row, TypedExceptionWrapper(None, exception=e), TypedExceptionWrapper(None, exception=e)])
+ else:
+ vd.exceptionCaught(e)
if not isinstance(val, Iterable) or isinstance(val, str):
val = [ val ]
| {"golden_diff": "diff --git a/visidata/unfurl.py b/visidata/unfurl.py\n--- a/visidata/unfurl.py\n+++ b/visidata/unfurl.py\n@@ -8,12 +8,14 @@\n \n from collections.abc import Iterable, Mapping\n from visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id\n+from visidata import stacktrace, TypedExceptionWrapper\n \n \n vd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)\n \n \n class UnfurledSheet(Sheet):\n+ # rowdef: [row, key, sub_value]\n @asyncthread\n def reload(self):\n # Copy over base sheet, using SubColumnFunc\n@@ -30,7 +32,16 @@\n self.rows = []\n unfurl_empty = self.options.unfurl_empty\n for row in Progress(self.source.rows):\n- val = self.source_col.getValue(row)\n+ try:\n+ val = self.source_col.getValue(row)\n+ except Exception as e:\n+ e.stacktrace = stacktrace()\n+ if unfurl_empty:\n+ # TypedExceptionWrapper allows the use of z^E to see the stacktrace\n+ # the exception on its own lacks clarity\n+ self.addRow([row, TypedExceptionWrapper(None, exception=e), TypedExceptionWrapper(None, exception=e)])\n+ else:\n+ vd.exceptionCaught(e)\n \n if not isinstance(val, Iterable) or isinstance(val, str):\n val = [ val ]\n", "issue": "[unfurl-col] unfurling a column with TypedWrapper halts unfurl\n**Small description**\r\nWhen unfurling a column which has a TypedWrapper a sheet loading halt occurs\r\n\r\n**Expected result**\r\nThe same response as whatever `options.unfurl_empty` would do for a row without a list.\r\n\r\n**Actual result with screenshot**\r\nAttributeError: 'TypedWrapper' object has no attribute 'xyz'\r\n\r\n\r\nIf you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\n`echo '[{\"a\":1,\"d\":{\"b\":[1,2,3]}},{\"a\":2,\"d\":{\"c\":[1,2,3]}},{\"a\":3,\"d\":{\"b\":[1,2,3]}}]' | vd -f json`\r\n\r\n```\r\nsheet\tcol\trow\tlongname\tinput\tkeystrokes\tcomment\r\n\t\t\topen-file\t-\to\t\r\n_\td\t\taddcol-expr\tcurcol['b']\t=\tcreate new column from Python expression, with column names as variables\r\n_\tcurcol_b_\t\tunfurl-col\t\t\trow-wise expand current column of lists (e.g. [2]) or dicts (e.g. 
{3}) within that column\r\n\r\n```\r\n\r\n**Additional context**\r\nv2.6dev\r\n\n", "before_files": [{"content": "'''This adds the `unfurl-col` command, to unfurl a column containing iterable values, such as lists and dicts.\nUnfurling pushes a new sheet, with each key/value pair in the unfurled column values getting its own row, with the rest of the source sheet's columns copied for each of those rows.\n\nNote: When unfurling a column, non-iterable objects (numbers, and also strings) are treated as single-item lists, so that they too can be unfurled.\n\nCredit to Jeremy Singer-Vine for the idea and original implementation.\n'''\n\nfrom collections.abc import Iterable, Mapping\nfrom visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id\n\n\nvd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)\n\n\nclass UnfurledSheet(Sheet):\n @asyncthread\n def reload(self):\n # Copy over base sheet, using SubColumnFunc\n self.columns = []\n for col in self.source.columns:\n if col is self.source_col:\n # Replace iterable column with two columns: keys and values\n self.cursorVisibleColIndex = len(self.columns)-1\n self.addColumn(ColumnItem(col.name + \"_key\", 1))\n self.addColumn(ColumnItem(col.name + \"_value\", 2))\n else:\n self.addColumn(SubColumnFunc(col.name, col, 0, keycol=col.keycol))\n\n self.rows = []\n unfurl_empty = self.options.unfurl_empty\n for row in Progress(self.source.rows):\n val = self.source_col.getValue(row)\n\n if not isinstance(val, Iterable) or isinstance(val, str):\n val = [ val ]\n\n if isinstance(val, Mapping):\n gen = val.items()\n else:\n gen = enumerate(val)\n\n nadded = 0\n for key, sub_value in gen:\n new_row = [ row, key, sub_value ]\n self.addRow(new_row)\n nadded += 1\n\n if unfurl_empty and not nadded:\n self.addRow([row, None, None])\n\[email protected]\ndef unfurl_col(sheet, col):\n clean_id = clean_to_id(col.name)\n vs = UnfurledSheet(f\"{sheet.name}_{clean_id}_unfurled\", source=sheet, source_col=col)\n return vs\n\n\nSheet.addCommand(\"zM\", \"unfurl-col\", \"vd.push(unfurl_col(cursorCol))\", \"row-wise expand current column of lists (e.g. [2]) or dicts (e.g. 
{3}) within that column\")\n", "path": "visidata/unfurl.py"}], "after_files": [{"content": "'''This adds the `unfurl-col` command, to unfurl a column containing iterable values, such as lists and dicts.\nUnfurling pushes a new sheet, with each key/value pair in the unfurled column values getting its own row, with the rest of the source sheet's columns copied for each of those rows.\n\nNote: When unfurling a column, non-iterable objects (numbers, and also strings) are treated as single-item lists, so that they too can be unfurled.\n\nCredit to Jeremy Singer-Vine for the idea and original implementation.\n'''\n\nfrom collections.abc import Iterable, Mapping\nfrom visidata import vd, Progress, Sheet, Column, ColumnItem, SettableColumn, SubColumnFunc, asyncthread, clean_to_id\nfrom visidata import stacktrace, TypedExceptionWrapper\n\n\nvd.option('unfurl_empty', False, 'if unfurl includes rows for empty containers', replay=True)\n\n\nclass UnfurledSheet(Sheet):\n # rowdef: [row, key, sub_value]\n @asyncthread\n def reload(self):\n # Copy over base sheet, using SubColumnFunc\n self.columns = []\n for col in self.source.columns:\n if col is self.source_col:\n # Replace iterable column with two columns: keys and values\n self.cursorVisibleColIndex = len(self.columns)-1\n self.addColumn(ColumnItem(col.name + \"_key\", 1))\n self.addColumn(ColumnItem(col.name + \"_value\", 2))\n else:\n self.addColumn(SubColumnFunc(col.name, col, 0, keycol=col.keycol))\n\n self.rows = []\n unfurl_empty = self.options.unfurl_empty\n for row in Progress(self.source.rows):\n try:\n val = self.source_col.getValue(row)\n except Exception as e:\n e.stacktrace = stacktrace()\n if unfurl_empty:\n # TypedExceptionWrapper allows the use of z^E to see the stacktrace\n # the exception on its own lacks clarity\n self.addRow([row, TypedExceptionWrapper(None, exception=e), TypedExceptionWrapper(None, exception=e)])\n else:\n vd.exceptionCaught(e)\n\n if not isinstance(val, Iterable) or isinstance(val, str):\n val = [ val ]\n\n if isinstance(val, Mapping):\n gen = val.items()\n else:\n gen = enumerate(val)\n\n nadded = 0\n for key, sub_value in gen:\n new_row = [ row, key, sub_value ]\n self.addRow(new_row)\n nadded += 1\n\n if unfurl_empty and not nadded:\n self.addRow([row, None, None])\n\[email protected]\ndef unfurl_col(sheet, col):\n clean_id = clean_to_id(col.name)\n vs = UnfurledSheet(f\"{sheet.name}_{clean_id}_unfurled\", source=sheet, source_col=col)\n return vs\n\n\nSheet.addCommand(\"zM\", \"unfurl-col\", \"vd.push(unfurl_col(cursorCol))\", \"row-wise expand current column of lists (e.g. [2]) or dicts (e.g. {3}) within that column\")\n", "path": "visidata/unfurl.py"}]} | 1,281 | 348 |
gh_patches_debug_11658 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1890 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IndexError when "url" not found in args or kwargs.
### Overview
I've found an issue in line 46 of the `call()` method in the `AioHttpClientInstrumentation(...)` class.
https://github.com/elastic/apm-agent-python/blob/da93e7af448abcac367d216e2d20a584051f6e50/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py#L44-L47
I'm getting an `IndexError` exception due to lack of "url" in both kwargs and in args[1]. The reason is that the argument containing urls is called "str_or_url".
https://github.com/aio-libs/aiohttp/blob/4b59d55e9e79f5a0b1932d6dc9f6b12a33d19266/aiohttp/client.py#L325-L328
By default the code runs fine, but this issue will appear in cases where someone tries to use the `ClientSession._request()` method directly AND uses keyword arguments.
### How to recreate the bug?
This is a general example of how to recreate the bug. Let's assume that somewhere in my code I want to connect to some external HTTP REST API service using the aiohttp library. I'll be using a custom-made session object based on the ClientSession object from the aiohttp library.
```python
from aiohttp import ClientSession, ClientResponse
class CustomSession(ClientSession):
async def _request(self, method: str, str_or_url: StrOrURL, **kwargs: Any) -> ClientResponse:
# put some extra code here, like add retry functionality
client_response = await super()._request(method=method, str_or_url=url, **kwargs) # << ISSUE IS HERE
return client_response
```
The above code is valid and works as long as there is no APM agent running as middleware. With the APM agent added, the code has to be written as below:
```python
client_response = await super()._request(method, url, **kwargs)
```
### How to fix it?
Do search for both `url` and `str_or_url` in kwargs. It's a simple fix that can fit in the same line, I bet.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/asyncio/aiohttp_client.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm import async_capture_span
32 from elasticapm.conf import constants
33 from elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule
34 from elasticapm.traces import DroppedSpan, execution_context
35 from elasticapm.utils import get_host_from_url, sanitize_url
36 from elasticapm.utils.disttracing import TracingOptions
37
38
39 class AioHttpClientInstrumentation(AsyncAbstractInstrumentedModule):
40 name = "aiohttp_client"
41
42 instrument_list = [("aiohttp.client", "ClientSession._request")]
43
44 async def call(self, module, method, wrapped, instance, args, kwargs):
45 method = kwargs["method"] if "method" in kwargs else args[0]
46 url = kwargs["url"] if "url" in kwargs else args[1]
47 url = str(url)
48
49 signature = " ".join([method.upper(), get_host_from_url(url)])
50 url = sanitize_url(url)
51 transaction = execution_context.get_transaction()
52
53 async with async_capture_span(
54 signature,
55 span_type="external",
56 span_subtype="http",
57 extra={"http": {"url": url}},
58 leaf=True,
59 ) as span:
60 leaf_span = span
61 while isinstance(leaf_span, DroppedSpan):
62 leaf_span = leaf_span.parent
63
64 parent_id = leaf_span.id if leaf_span else transaction.id
65 trace_parent = transaction.trace_parent.copy_from(
66 span_id=parent_id, trace_options=TracingOptions(recorded=True)
67 )
68 headers = kwargs.get("headers") or {}
69 self._set_disttracing_headers(headers, trace_parent, transaction)
70 kwargs["headers"] = headers
71 response = await wrapped(*args, **kwargs)
72 if response:
73 if span.context:
74 span.context["http"]["status_code"] = response.status
75 span.set_success() if response.status < 400 else span.set_failure()
76 return response
77
78 def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):
79 # since we don't have a span, we set the span id to the transaction id
80 trace_parent = transaction.trace_parent.copy_from(
81 span_id=transaction.id, trace_options=TracingOptions(recorded=False)
82 )
83
84 headers = kwargs.get("headers") or {}
85 self._set_disttracing_headers(headers, trace_parent, transaction)
86 kwargs["headers"] = headers
87 return args, kwargs
88
89 def _set_disttracing_headers(self, headers, trace_parent, transaction):
90 trace_parent_str = trace_parent.to_string()
91 headers[constants.TRACEPARENT_HEADER_NAME] = trace_parent_str
92 if transaction.tracer.config.use_elastic_traceparent_header:
93 headers[constants.TRACEPARENT_LEGACY_HEADER_NAME] = trace_parent_str
94 if trace_parent.tracestate:
95 headers[constants.TRACESTATE_HEADER_NAME] = trace_parent.tracestate
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py b/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py
--- a/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py
+++ b/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py
@@ -42,8 +42,10 @@
instrument_list = [("aiohttp.client", "ClientSession._request")]
async def call(self, module, method, wrapped, instance, args, kwargs):
- method = kwargs["method"] if "method" in kwargs else args[0]
- url = kwargs["url"] if "url" in kwargs else args[1]
+ method = kwargs.get("method", args[0])
+ url = kwargs.get("url", kwargs.get("str_or_url", None))
+ if url is None:
+ url = args[1]
url = str(url)
signature = " ".join([method.upper(), get_host_from_url(url)])
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py b/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py\n--- a/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py\n+++ b/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py\n@@ -42,8 +42,10 @@\n instrument_list = [(\"aiohttp.client\", \"ClientSession._request\")]\n \n async def call(self, module, method, wrapped, instance, args, kwargs):\n- method = kwargs[\"method\"] if \"method\" in kwargs else args[0]\n- url = kwargs[\"url\"] if \"url\" in kwargs else args[1]\n+ method = kwargs.get(\"method\", args[0])\n+ url = kwargs.get(\"url\", kwargs.get(\"str_or_url\", None))\n+ if url is None:\n+ url = args[1]\n url = str(url)\n \n signature = \" \".join([method.upper(), get_host_from_url(url)])\n", "issue": "IndexError when \"url\" not found in args or kwargs.\n### Overview\r\n\r\nI've found an issue in line 46 of `call()` method in `AioHttpClientInstrumentation(...)` class.\r\n\r\nhttps://github.com/elastic/apm-agent-python/blob/da93e7af448abcac367d216e2d20a584051f6e50/elasticapm/instrumentation/packages/asyncio/aiohttp_client.py#L44-L47\r\n\r\nI'm getting an `IndexError` exception due to lack or \"url\" in both kwargs and in args[1]. The the reason is that the argument containing urls is called \"str_or_url\".\r\n\r\nhttps://github.com/aio-libs/aiohttp/blob/4b59d55e9e79f5a0b1932d6dc9f6b12a33d19266/aiohttp/client.py#L325-L328\r\n\r\nBy default the code is running fine, but this issue will appear in cases where someone will try to use `ClientSession._request()` method directly AND use keyword-arguments.\r\n\r\n### How to recreate the bug?\r\n\r\nThis is a general example on how to recreate the bug. Lets assume that somewhere in my code I want to connect to some external http rest-api service using aiohttp library. I'll be using custom made session object based on ClientSession object from aiohttp library.\r\n\r\n```python\r\nfrom aiohttp import ClientSession, ClientResponse\r\n\r\nclass CustomSession(ClientSession):\r\n\r\n async def _request(self, method: str, str_or_url: StrOrURL, **kwargs: Any) -> ClientResponse:\r\n # put some extra code here, like add retry functionality\r\n client_response = await super()._request(method=method, str_or_url=url, **kwargs) # << ISSUE IS HERE\r\n return client_response\r\n```\r\nthe above code is valid and it works as long as there is no apm agent running as middleware. With apm agent added, the code has to be written as below:\r\n```python\r\n client_response = await super()._request(method, url, **kwargs)\r\n```\r\n\r\n### How to fix it?\r\n\r\nDo search for both `url` and `str_or_url` in kwargs. 
It's simple fix that can fit in same line, I bet.\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm import async_capture_span\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule\nfrom elasticapm.traces import DroppedSpan, execution_context\nfrom elasticapm.utils import get_host_from_url, sanitize_url\nfrom elasticapm.utils.disttracing import TracingOptions\n\n\nclass AioHttpClientInstrumentation(AsyncAbstractInstrumentedModule):\n name = \"aiohttp_client\"\n\n instrument_list = [(\"aiohttp.client\", \"ClientSession._request\")]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n method = kwargs[\"method\"] if \"method\" in kwargs else args[0]\n url = kwargs[\"url\"] if \"url\" in kwargs else args[1]\n url = str(url)\n\n signature = \" \".join([method.upper(), get_host_from_url(url)])\n url = sanitize_url(url)\n transaction = execution_context.get_transaction()\n\n async with async_capture_span(\n signature,\n span_type=\"external\",\n span_subtype=\"http\",\n extra={\"http\": {\"url\": url}},\n leaf=True,\n ) as span:\n leaf_span = span\n while isinstance(leaf_span, DroppedSpan):\n leaf_span = leaf_span.parent\n\n parent_id = leaf_span.id if leaf_span else transaction.id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=parent_id, trace_options=TracingOptions(recorded=True)\n )\n headers = kwargs.get(\"headers\") or {}\n self._set_disttracing_headers(headers, trace_parent, transaction)\n kwargs[\"headers\"] = headers\n response = await wrapped(*args, **kwargs)\n if response:\n if span.context:\n span.context[\"http\"][\"status_code\"] = response.status\n span.set_success() if response.status < 400 else span.set_failure()\n return response\n\n def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):\n # since we don't have a span, we set the span id to the transaction 
id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=transaction.id, trace_options=TracingOptions(recorded=False)\n )\n\n headers = kwargs.get(\"headers\") or {}\n self._set_disttracing_headers(headers, trace_parent, transaction)\n kwargs[\"headers\"] = headers\n return args, kwargs\n\n def _set_disttracing_headers(self, headers, trace_parent, transaction):\n trace_parent_str = trace_parent.to_string()\n headers[constants.TRACEPARENT_HEADER_NAME] = trace_parent_str\n if transaction.tracer.config.use_elastic_traceparent_header:\n headers[constants.TRACEPARENT_LEGACY_HEADER_NAME] = trace_parent_str\n if trace_parent.tracestate:\n headers[constants.TRACESTATE_HEADER_NAME] = trace_parent.tracestate\n", "path": "elasticapm/instrumentation/packages/asyncio/aiohttp_client.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm import async_capture_span\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule\nfrom elasticapm.traces import DroppedSpan, execution_context\nfrom elasticapm.utils import get_host_from_url, sanitize_url\nfrom elasticapm.utils.disttracing import TracingOptions\n\n\nclass AioHttpClientInstrumentation(AsyncAbstractInstrumentedModule):\n name = \"aiohttp_client\"\n\n instrument_list = [(\"aiohttp.client\", \"ClientSession._request\")]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n method = kwargs.get(\"method\", args[0])\n url = kwargs.get(\"url\", kwargs.get(\"str_or_url\", None))\n if url is None:\n url = args[1]\n url = str(url)\n\n signature = \" \".join([method.upper(), get_host_from_url(url)])\n url = sanitize_url(url)\n transaction = execution_context.get_transaction()\n\n async with async_capture_span(\n signature,\n span_type=\"external\",\n span_subtype=\"http\",\n extra={\"http\": {\"url\": url}},\n leaf=True,\n ) as span:\n leaf_span = span\n while isinstance(leaf_span, DroppedSpan):\n leaf_span = leaf_span.parent\n\n parent_id = leaf_span.id if leaf_span else transaction.id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=parent_id, trace_options=TracingOptions(recorded=True)\n )\n headers = kwargs.get(\"headers\") or {}\n self._set_disttracing_headers(headers, trace_parent, transaction)\n kwargs[\"headers\"] = headers\n response = await wrapped(*args, **kwargs)\n if response:\n if span.context:\n span.context[\"http\"][\"status_code\"] = response.status\n span.set_success() if response.status < 400 else span.set_failure()\n return response\n\n def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):\n # since we don't have a span, we set the span id to the transaction id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=transaction.id, trace_options=TracingOptions(recorded=False)\n )\n\n headers = kwargs.get(\"headers\") or {}\n self._set_disttracing_headers(headers, trace_parent, transaction)\n kwargs[\"headers\"] = headers\n return args, kwargs\n\n def _set_disttracing_headers(self, headers, trace_parent, transaction):\n trace_parent_str = trace_parent.to_string()\n headers[constants.TRACEPARENT_HEADER_NAME] = trace_parent_str\n if transaction.tracer.config.use_elastic_traceparent_header:\n headers[constants.TRACEPARENT_LEGACY_HEADER_NAME] = trace_parent_str\n if trace_parent.tracestate:\n headers[constants.TRACESTATE_HEADER_NAME] = trace_parent.tracestate\n", "path": "elasticapm/instrumentation/packages/asyncio/aiohttp_client.py"}]} | 1,917 | 231 |
gh_patches_debug_17921 | rasdani/github-patches | git_diff | mirumee__ariadne-59 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop Python 3.5
Due to lack of support for [variable type annotations](https://www.python.org/dev/peps/pep-0526/) I suggest dropping support for Python 3.5. This is already a problem in #30, in which either the code or mypy is failing and the only solution is to remove hints.
We might consider testing ariadne on 3.7 for future-proofing our project.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/executable_schema.py`
Content:
```
1 from collections import defaultdict
2 from itertools import chain
3 from typing import Iterator, List, Union
4
5 from graphql import GraphQLSchema
6
7 from .build_schema import build_schema_from_type_definitions
8 from .resolvers import add_resolve_functions_to_schema
9
10
11 def decompose_maps(resolvers_maps: List[dict]) -> Iterator[tuple]:
12 def flatten(rm):
13 for key, value in rm.items():
14 for resolver_name, resolver in value.items():
15 yield (key, resolver_name, resolver)
16
17 return chain.from_iterable(flatten(m) for m in resolvers_maps)
18
19
20 def merge_resolvers(resolver_list: Iterator[tuple]) -> dict:
21 output = defaultdict(dict) # type: dict
22 for key, resolver_name, resolver in resolver_list:
23 output[key][resolver_name] = resolver
24 return output
25
26
27 def join_type_defs(type_defs: List[str]) -> str:
28 return "\n\n".join(t.strip() for t in type_defs)
29
30
31 def make_executable_schema(
32 type_defs: Union[str, List[str]], resolvers: Union[dict, List[dict]]
33 ) -> GraphQLSchema:
34 if isinstance(type_defs, list):
35 type_defs = join_type_defs(type_defs)
36
37 schema = build_schema_from_type_definitions(type_defs)
38
39 if isinstance(resolvers, list):
40 add_resolve_functions_to_schema(
41 schema, merge_resolvers(decompose_maps(resolvers))
42 )
43 elif isinstance(resolvers, dict):
44 add_resolve_functions_to_schema(schema, resolvers)
45
46 return schema
47
```
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2 import os
3 from setuptools import setup
4
5 CLASSIFIERS = [
6 "Intended Audience :: Developers",
7 "License :: OSI Approved :: BSD License",
8 "Operating System :: OS Independent",
9 "Programming Language :: Python",
10 "Programming Language :: Python :: 3.5",
11 "Programming Language :: Python :: 3.6",
12 "Programming Language :: Python :: 3.7",
13 "Topic :: Software Development :: Libraries :: Python Modules",
14 ]
15
16 README_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
17 with open(README_PATH, "r") as f:
18 README = f.read()
19
20 setup(
21 name="ariadne",
22 author="Mirumee Software",
23 author_email="[email protected]",
24 description="Ariadne is a Python library for implementing GraphQL servers.",
25 long_description=README,
26 long_description_content_type="text/markdown",
27 license="BSD",
28 version="0.1.0",
29 url="https://github.com/mirumee/ariadne",
30 packages=["ariadne"],
31 install_requires=["graphql-core>=2.1", "typing>=3.6.0"],
32 classifiers=CLASSIFIERS,
33 platforms=["any"],
34 )
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py
--- a/ariadne/executable_schema.py
+++ b/ariadne/executable_schema.py
@@ -18,7 +18,7 @@
def merge_resolvers(resolver_list: Iterator[tuple]) -> dict:
- output = defaultdict(dict) # type: dict
+ output: dict = defaultdict(dict)
for key, resolver_name, resolver in resolver_list:
output[key][resolver_name] = resolver
return output
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -7,7 +7,6 @@
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
- "Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Topic :: Software Development :: Libraries :: Python Modules",
| {"golden_diff": "diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py\n--- a/ariadne/executable_schema.py\n+++ b/ariadne/executable_schema.py\n@@ -18,7 +18,7 @@\n \n \n def merge_resolvers(resolver_list: Iterator[tuple]) -> dict:\n- output = defaultdict(dict) # type: dict\n+ output: dict = defaultdict(dict)\n for key, resolver_name, resolver in resolver_list:\n output[key][resolver_name] = resolver\n return output\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,7 +7,6 @@\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n- \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n", "issue": "Drop Python 3.5\nDue to lack of support for [variable type annotations](https://www.python.org/dev/peps/pep-0526/) I suggest to drop support for Python 3.5. This is already a problem in #30 in which either code or mypy is failing and only solution is to remove hints.\r\n\r\nWe might consider testing ariadne on 3.7 for future-proofing our project.\n", "before_files": [{"content": "from collections import defaultdict\nfrom itertools import chain\nfrom typing import Iterator, List, Union\n\nfrom graphql import GraphQLSchema\n\nfrom .build_schema import build_schema_from_type_definitions\nfrom .resolvers import add_resolve_functions_to_schema\n\n\ndef decompose_maps(resolvers_maps: List[dict]) -> Iterator[tuple]:\n def flatten(rm):\n for key, value in rm.items():\n for resolver_name, resolver in value.items():\n yield (key, resolver_name, resolver)\n\n return chain.from_iterable(flatten(m) for m in resolvers_maps)\n\n\ndef merge_resolvers(resolver_list: Iterator[tuple]) -> dict:\n output = defaultdict(dict) # type: dict\n for key, resolver_name, resolver in resolver_list:\n output[key][resolver_name] = resolver\n return output\n\n\ndef join_type_defs(type_defs: List[str]) -> str:\n return \"\\n\\n\".join(t.strip() for t in type_defs)\n\n\ndef make_executable_schema(\n type_defs: Union[str, List[str]], resolvers: Union[dict, List[dict]]\n) -> GraphQLSchema:\n if isinstance(type_defs, list):\n type_defs = join_type_defs(type_defs)\n\n schema = build_schema_from_type_definitions(type_defs)\n\n if isinstance(resolvers, list):\n add_resolve_functions_to_schema(\n schema, merge_resolvers(decompose_maps(resolvers))\n )\n elif isinstance(resolvers, dict):\n add_resolve_functions_to_schema(schema, resolvers)\n\n return schema\n", "path": "ariadne/executable_schema.py"}, {"content": "#! 
/usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.1.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n install_requires=[\"graphql-core>=2.1\", \"typing>=3.6.0\"],\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n)\n", "path": "setup.py"}], "after_files": [{"content": "from collections import defaultdict\nfrom itertools import chain\nfrom typing import Iterator, List, Union\n\nfrom graphql import GraphQLSchema\n\nfrom .build_schema import build_schema_from_type_definitions\nfrom .resolvers import add_resolve_functions_to_schema\n\n\ndef decompose_maps(resolvers_maps: List[dict]) -> Iterator[tuple]:\n def flatten(rm):\n for key, value in rm.items():\n for resolver_name, resolver in value.items():\n yield (key, resolver_name, resolver)\n\n return chain.from_iterable(flatten(m) for m in resolvers_maps)\n\n\ndef merge_resolvers(resolver_list: Iterator[tuple]) -> dict:\n output: dict = defaultdict(dict)\n for key, resolver_name, resolver in resolver_list:\n output[key][resolver_name] = resolver\n return output\n\n\ndef join_type_defs(type_defs: List[str]) -> str:\n return \"\\n\\n\".join(t.strip() for t in type_defs)\n\n\ndef make_executable_schema(\n type_defs: Union[str, List[str]], resolvers: Union[dict, List[dict]]\n) -> GraphQLSchema:\n if isinstance(type_defs, list):\n type_defs = join_type_defs(type_defs)\n\n schema = build_schema_from_type_definitions(type_defs)\n\n if isinstance(resolvers, list):\n add_resolve_functions_to_schema(\n schema, merge_resolvers(decompose_maps(resolvers))\n )\n elif isinstance(resolvers, dict):\n add_resolve_functions_to_schema(schema, resolvers)\n\n return schema\n", "path": "ariadne/executable_schema.py"}, {"content": "#! 
/usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.1.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n install_requires=[\"graphql-core>=2.1\", \"typing>=3.6.0\"],\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n)\n", "path": "setup.py"}]} | 1,111 | 224 |
gh_patches_debug_10782 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1076 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Type refactoring has a regression with inheritance and explicit fields.
```python
@strawberry.input
class A:
a: str = strawberry.field(default='', desc='')
@strawberry.input
class B(A):
b: Optional[str] = strawberry.field(default=None, desc='')
@strawberry.type
class Query:
@strawberry.field
def field(self, arg: B) -> str:
return ''
schema = strawberry.Schema(query=Query)
result = schema.execute_sync('{ field(arg: {}) }')
assert not result.errors
```
raises `TypeError: B fields cannot be resolved. unhashable type: 'StrawberryAnnotation'`.
`StrawberryAnnotation` has a custom `__eq__` without a `__hash__`, causing a set lookup to fail. However, adding a suitable `__hash__` just led to the next `TypeError`.
`StrawberryOptional` likely has the same problem.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/types/type_resolver.py`
Content:
```
1 import dataclasses
2 import sys
3 from typing import Dict, List, Type
4
5 from strawberry.annotation import StrawberryAnnotation
6 from strawberry.exceptions import (
7 FieldWithResolverAndDefaultFactoryError,
8 FieldWithResolverAndDefaultValueError,
9 PrivateStrawberryFieldError,
10 )
11 from strawberry.field import StrawberryField
12 from strawberry.private import Private
13 from strawberry.utils.str_converters import to_camel_case
14
15 from ..arguments import UNSET
16
17
18 def _get_fields(cls: Type) -> List[StrawberryField]:
19 """Get all the strawberry fields off a strawberry.type cls
20
21 This function returns a list of StrawberryFields (one for each field item), while
22 also paying attention the name and typing of the field.
23
24 StrawberryFields can be defined on a strawberry.type class as either a dataclass-
25 style field or using strawberry.field as a decorator.
26
27 >>> import strawberry
28 >>> @strawberry.type
29 ... class Query:
30 ... type_1a: int = 5
31 ... type_1b: int = strawberry.field(...)
32 ... type_1c: int = strawberry.field(resolver=...)
33 ...
34 ... @strawberry.field
35 ... def type_2(self) -> int:
36 ... ...
37
38 Type #1:
39 A pure dataclass-style field. Will not have a StrawberryField; one will need to
40 be created in this function. Type annotation is required.
41
42 Type #2:
43 A field defined using @strawberry.field as a decorator around the resolver. The
44 resolver must be type-annotated.
45
46 The StrawberryField.python_name value will be assigned to the field's name on the
47 class if one is not set by either using an explicit strawberry.field(name=...) or by
48 passing a named function (i.e. not an anonymous lambda) to strawberry.field
49 (typically as a decorator).
50 """
51 # Deferred import to avoid import cycles
52 from strawberry.field import StrawberryField
53
54 fields: Dict[str, StrawberryField] = {}
55
56 # before trying to find any fields, let's first add the fields defined in
57 # parent classes, we do this by checking if parents have a type definition
58 for base in cls.__bases__:
59 if hasattr(base, "_type_definition"):
60 base_fields = {
61 field.graphql_name: field
62 # TODO: we need to rename _fields to something else
63 for field in base._type_definition._fields # type: ignore
64 }
65
66 # Add base's fields to cls' fields
67 fields = {**fields, **base_fields}
68
69 # then we can proceed with finding the fields for the current class
70 for field in dataclasses.fields(cls):
71
72 if isinstance(field, StrawberryField):
73 # Check that the field type is not Private
74 if isinstance(field.type, Private):
75 raise PrivateStrawberryFieldError(field.python_name, cls.__name__)
76
77 # Check that default is not set if a resolver is defined
78 if field.default != dataclasses.MISSING and field.base_resolver is not None:
79 raise FieldWithResolverAndDefaultValueError(
80 field.python_name, cls.__name__
81 )
82
83 # Check that default_factory is not set if a resolver is defined
84 # Note: using getattr because of this issue:
85 # https://github.com/python/mypy/issues/6910
86 if (
87 getattr(field, "default_factory") != dataclasses.MISSING # noqa
88 and field.base_resolver is not None
89 ):
90 raise FieldWithResolverAndDefaultFactoryError(
91 field.python_name, cls.__name__
92 )
93
94 # we make sure that the origin is either the field's resolver when
95 # called as:
96 #
97 # >>> @strawberry.field
98 # ... def x(self): ...
99 #
100 # or the class where this field was defined, so we always have
101 # the correct origin for determining field types when resolving
102 # the types.
103 field.origin = field.origin or cls
104
105 # Make sure types are StrawberryAnnotations
106 if not isinstance(field.type, StrawberryAnnotation):
107 module = sys.modules[field.origin.__module__]
108 field.type_annotation = StrawberryAnnotation(
109 annotation=field.type_annotation, namespace=module.__dict__
110 )
111
112 # Create a StrawberryField for fields that didn't use strawberry.field
113 else:
114 # Only ignore Private fields that weren't defined using StrawberryFields
115 if isinstance(field.type, Private):
116 continue
117
118 field_type = field.type
119
120 module = sys.modules[cls.__module__]
121
122 # Create a StrawberryField, for fields of Types #1 and #2a
123 field = StrawberryField(
124 python_name=field.name,
125 graphql_name=to_camel_case(field.name),
126 type_annotation=StrawberryAnnotation(
127 annotation=field_type,
128 namespace=module.__dict__,
129 ),
130 origin=cls,
131 default=getattr(cls, field.name, UNSET),
132 )
133
134 field_name = field.graphql_name
135
136 assert_message = "Field must have a name by the time the schema is generated"
137 assert field_name is not None, assert_message
138
139 # TODO: Raise exception if field_name already in fields
140 fields[field_name] = field
141
142 return list(fields.values())
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/types/type_resolver.py b/strawberry/types/type_resolver.py
--- a/strawberry/types/type_resolver.py
+++ b/strawberry/types/type_resolver.py
@@ -103,7 +103,7 @@
field.origin = field.origin or cls
# Make sure types are StrawberryAnnotations
- if not isinstance(field.type, StrawberryAnnotation):
+ if not isinstance(field.type_annotation, StrawberryAnnotation):
module = sys.modules[field.origin.__module__]
field.type_annotation = StrawberryAnnotation(
annotation=field.type_annotation, namespace=module.__dict__
| {"golden_diff": "diff --git a/strawberry/types/type_resolver.py b/strawberry/types/type_resolver.py\n--- a/strawberry/types/type_resolver.py\n+++ b/strawberry/types/type_resolver.py\n@@ -103,7 +103,7 @@\n field.origin = field.origin or cls\n \n # Make sure types are StrawberryAnnotations\n- if not isinstance(field.type, StrawberryAnnotation):\n+ if not isinstance(field.type_annotation, StrawberryAnnotation):\n module = sys.modules[field.origin.__module__]\n field.type_annotation = StrawberryAnnotation(\n annotation=field.type_annotation, namespace=module.__dict__\n", "issue": "Type refactoring has a regression with inheritance and explicit fields.\n```python\r\[email protected]\r\nclass A:\r\n a: str = strawberry.field(default='', desc='')\r\n\r\[email protected]\r\nclass B(A):\r\n b: Optional[str] = strawberry.field(default=None, desc='')\r\n\r\[email protected]\r\nclass Query:\r\n @strawberry.field\r\n def field(self, arg: B) -> str:\r\n return ''\r\n\r\nschema = strawberry.Schema(query=Query)\r\nresult = schema.execute_sync('{ field(arg: {}) }')\r\nassert not result.errors\r\n```\r\nraises `TypeError: B fields cannot be resolved. unhashable type: 'StrawberryAnnotation'`.\r\n\r\n`StrawberryAnnotation` has a custom `__eq__` without a `__hash__`, causing a set lookup to fail. However adding a suitable `__hash__` just lead to the next `TypeError`.\r\n\r\n`StrawberryOptional` likely has the same problem.\r\n\n", "before_files": [{"content": "import dataclasses\nimport sys\nfrom typing import Dict, List, Type\n\nfrom strawberry.annotation import StrawberryAnnotation\nfrom strawberry.exceptions import (\n FieldWithResolverAndDefaultFactoryError,\n FieldWithResolverAndDefaultValueError,\n PrivateStrawberryFieldError,\n)\nfrom strawberry.field import StrawberryField\nfrom strawberry.private import Private\nfrom strawberry.utils.str_converters import to_camel_case\n\nfrom ..arguments import UNSET\n\n\ndef _get_fields(cls: Type) -> List[StrawberryField]:\n \"\"\"Get all the strawberry fields off a strawberry.type cls\n\n This function returns a list of StrawberryFields (one for each field item), while\n also paying attention the name and typing of the field.\n\n StrawberryFields can be defined on a strawberry.type class as either a dataclass-\n style field or using strawberry.field as a decorator.\n\n >>> import strawberry\n >>> @strawberry.type\n ... class Query:\n ... type_1a: int = 5\n ... type_1b: int = strawberry.field(...)\n ... type_1c: int = strawberry.field(resolver=...)\n ...\n ... @strawberry.field\n ... def type_2(self) -> int:\n ... ...\n\n Type #1:\n A pure dataclass-style field. Will not have a StrawberryField; one will need to\n be created in this function. Type annotation is required.\n\n Type #2:\n A field defined using @strawberry.field as a decorator around the resolver. The\n resolver must be type-annotated.\n\n The StrawberryField.python_name value will be assigned to the field's name on the\n class if one is not set by either using an explicit strawberry.field(name=...) or by\n passing a named function (i.e. 
not an anonymous lambda) to strawberry.field\n (typically as a decorator).\n \"\"\"\n # Deferred import to avoid import cycles\n from strawberry.field import StrawberryField\n\n fields: Dict[str, StrawberryField] = {}\n\n # before trying to find any fields, let's first add the fields defined in\n # parent classes, we do this by checking if parents have a type definition\n for base in cls.__bases__:\n if hasattr(base, \"_type_definition\"):\n base_fields = {\n field.graphql_name: field\n # TODO: we need to rename _fields to something else\n for field in base._type_definition._fields # type: ignore\n }\n\n # Add base's fields to cls' fields\n fields = {**fields, **base_fields}\n\n # then we can proceed with finding the fields for the current class\n for field in dataclasses.fields(cls):\n\n if isinstance(field, StrawberryField):\n # Check that the field type is not Private\n if isinstance(field.type, Private):\n raise PrivateStrawberryFieldError(field.python_name, cls.__name__)\n\n # Check that default is not set if a resolver is defined\n if field.default != dataclasses.MISSING and field.base_resolver is not None:\n raise FieldWithResolverAndDefaultValueError(\n field.python_name, cls.__name__\n )\n\n # Check that default_factory is not set if a resolver is defined\n # Note: using getattr because of this issue:\n # https://github.com/python/mypy/issues/6910\n if (\n getattr(field, \"default_factory\") != dataclasses.MISSING # noqa\n and field.base_resolver is not None\n ):\n raise FieldWithResolverAndDefaultFactoryError(\n field.python_name, cls.__name__\n )\n\n # we make sure that the origin is either the field's resolver when\n # called as:\n #\n # >>> @strawberry.field\n # ... def x(self): ...\n #\n # or the class where this field was defined, so we always have\n # the correct origin for determining field types when resolving\n # the types.\n field.origin = field.origin or cls\n\n # Make sure types are StrawberryAnnotations\n if not isinstance(field.type, StrawberryAnnotation):\n module = sys.modules[field.origin.__module__]\n field.type_annotation = StrawberryAnnotation(\n annotation=field.type_annotation, namespace=module.__dict__\n )\n\n # Create a StrawberryField for fields that didn't use strawberry.field\n else:\n # Only ignore Private fields that weren't defined using StrawberryFields\n if isinstance(field.type, Private):\n continue\n\n field_type = field.type\n\n module = sys.modules[cls.__module__]\n\n # Create a StrawberryField, for fields of Types #1 and #2a\n field = StrawberryField(\n python_name=field.name,\n graphql_name=to_camel_case(field.name),\n type_annotation=StrawberryAnnotation(\n annotation=field_type,\n namespace=module.__dict__,\n ),\n origin=cls,\n default=getattr(cls, field.name, UNSET),\n )\n\n field_name = field.graphql_name\n\n assert_message = \"Field must have a name by the time the schema is generated\"\n assert field_name is not None, assert_message\n\n # TODO: Raise exception if field_name already in fields\n fields[field_name] = field\n\n return list(fields.values())\n", "path": "strawberry/types/type_resolver.py"}], "after_files": [{"content": "import dataclasses\nimport sys\nfrom typing import Dict, List, Type\n\nfrom strawberry.annotation import StrawberryAnnotation\nfrom strawberry.exceptions import (\n FieldWithResolverAndDefaultFactoryError,\n FieldWithResolverAndDefaultValueError,\n PrivateStrawberryFieldError,\n)\nfrom strawberry.field import StrawberryField\nfrom strawberry.private import Private\nfrom strawberry.utils.str_converters import 
to_camel_case\n\nfrom ..arguments import UNSET\n\n\ndef _get_fields(cls: Type) -> List[StrawberryField]:\n \"\"\"Get all the strawberry fields off a strawberry.type cls\n\n This function returns a list of StrawberryFields (one for each field item), while\n also paying attention the name and typing of the field.\n\n StrawberryFields can be defined on a strawberry.type class as either a dataclass-\n style field or using strawberry.field as a decorator.\n\n >>> import strawberry\n >>> @strawberry.type\n ... class Query:\n ... type_1a: int = 5\n ... type_1b: int = strawberry.field(...)\n ... type_1c: int = strawberry.field(resolver=...)\n ...\n ... @strawberry.field\n ... def type_2(self) -> int:\n ... ...\n\n Type #1:\n A pure dataclass-style field. Will not have a StrawberryField; one will need to\n be created in this function. Type annotation is required.\n\n Type #2:\n A field defined using @strawberry.field as a decorator around the resolver. The\n resolver must be type-annotated.\n\n The StrawberryField.python_name value will be assigned to the field's name on the\n class if one is not set by either using an explicit strawberry.field(name=...) or by\n passing a named function (i.e. not an anonymous lambda) to strawberry.field\n (typically as a decorator).\n \"\"\"\n # Deferred import to avoid import cycles\n from strawberry.field import StrawberryField\n\n fields: Dict[str, StrawberryField] = {}\n\n # before trying to find any fields, let's first add the fields defined in\n # parent classes, we do this by checking if parents have a type definition\n for base in cls.__bases__:\n if hasattr(base, \"_type_definition\"):\n base_fields = {\n field.graphql_name: field\n # TODO: we need to rename _fields to something else\n for field in base._type_definition._fields # type: ignore\n }\n\n # Add base's fields to cls' fields\n fields = {**fields, **base_fields}\n\n # then we can proceed with finding the fields for the current class\n for field in dataclasses.fields(cls):\n\n if isinstance(field, StrawberryField):\n # Check that the field type is not Private\n if isinstance(field.type, Private):\n raise PrivateStrawberryFieldError(field.python_name, cls.__name__)\n\n # Check that default is not set if a resolver is defined\n if field.default != dataclasses.MISSING and field.base_resolver is not None:\n raise FieldWithResolverAndDefaultValueError(\n field.python_name, cls.__name__\n )\n\n # Check that default_factory is not set if a resolver is defined\n # Note: using getattr because of this issue:\n # https://github.com/python/mypy/issues/6910\n if (\n getattr(field, \"default_factory\") != dataclasses.MISSING # noqa\n and field.base_resolver is not None\n ):\n raise FieldWithResolverAndDefaultFactoryError(\n field.python_name, cls.__name__\n )\n\n # we make sure that the origin is either the field's resolver when\n # called as:\n #\n # >>> @strawberry.field\n # ... 
def x(self): ...\n #\n # or the class where this field was defined, so we always have\n # the correct origin for determining field types when resolving\n # the types.\n field.origin = field.origin or cls\n\n # Make sure types are StrawberryAnnotations\n if not isinstance(field.type_annotation, StrawberryAnnotation):\n module = sys.modules[field.origin.__module__]\n field.type_annotation = StrawberryAnnotation(\n annotation=field.type_annotation, namespace=module.__dict__\n )\n\n # Create a StrawberryField for fields that didn't use strawberry.field\n else:\n # Only ignore Private fields that weren't defined using StrawberryFields\n if isinstance(field.type, Private):\n continue\n\n field_type = field.type\n\n module = sys.modules[cls.__module__]\n\n # Create a StrawberryField, for fields of Types #1 and #2a\n field = StrawberryField(\n python_name=field.name,\n graphql_name=to_camel_case(field.name),\n type_annotation=StrawberryAnnotation(\n annotation=field_type,\n namespace=module.__dict__,\n ),\n origin=cls,\n default=getattr(cls, field.name, UNSET),\n )\n\n field_name = field.graphql_name\n\n assert_message = \"Field must have a name by the time the schema is generated\"\n assert field_name is not None, assert_message\n\n # TODO: Raise exception if field_name already in fields\n fields[field_name] = field\n\n return list(fields.values())\n", "path": "strawberry/types/type_resolver.py"}]} | 1,917 | 131 |
gh_patches_debug_15599 | rasdani/github-patches | git_diff | getsentry__sentry-22143 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sentry ratelimit cannot be changed when using self-hosted
## Important Details
How are you running Sentry?
<!-- Please pick one of the following -->
On-Premise wo/ Docker, version 20.8.0
## Description
Sentry ratelimit cannot be changed when running on premises
## Steps to Reproduce
1. Go to web-interface, Admin/Settings
2. Set a non-zero ratelimit.
3. Get an error.
````
Oct 6 07:18:49 jsentry sentry[4128]: 10.100.33.5 - - [06/Oct/2020:04:18:49 +0000] "GET /api/0/internal/options/ HTTP/1.1" 200 20407 "https://sentry.findmykids.org/manage/settings/" "Mozilla/5.0 (X11; FreeBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0"
Oct 6 07:19:09 jsentry sentry[4128]: Traceback (most recent call last):
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/base.py", line 134, in handle_exception
Oct 6 07:19:09 jsentry sentry[4128]: response = super(Endpoint, self).handle_exception(exc)
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/djangorestframework-3.6.4-py2.7.egg/rest_framework/views.py", line 449, in handle_exception
Oct 6 07:19:09 jsentry sentry[4128]: self.raise_uncaught_exception(exc)
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/base.py", line 247, in dispatch
Oct 6 07:19:09 jsentry sentry[4128]: response = handler(request, *args, **kwargs)
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/endpoints/system_options.py", line 74, in put
Oct 6 07:19:09 jsentry sentry[4128]: options.set(k, v)
Oct 6 07:19:09 jsentry sentry[4128]: File "/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/options/manager.py", line 83, in set
Oct 6 07:19:09 jsentry sentry[4128]: "%r cannot be changed at runtime because it is configured on disk" % key
Oct 6 07:19:09 jsentry sentry[4128]: AssertionError: u'system.url-prefix' cannot be changed at runtime because it is configured on disk
Oct 6 07:19:09 jsentry sentry[4128]: 10.100.33.5 - - [06/Oct/2020:04:19:09 +0000] "PUT /api/0/internal/options/ HTTP/1.1" 500 746 "https://sentry.findmykids.org/manage/settings/" "Mozilla/5.0 (X11; FreeBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0"
````
### What you expected to happen
The rate limit should be changeable from the web interface.
### Possible Solution
Stop sending system.url-prefix in the PUT request?
--- END ISSUE ---
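Editorial aside (not part of the issue report): the failure mode is that `options.set()` raises a bare `AssertionError` for keys configured on disk, while the endpoint shown below only guards against `TypeError`, so the request surfaces as a 500. A hedged sketch of the handling pattern the eventual fix adopts (compare the golden diff further down); the function name and `options` object here are illustrative, not Sentry's exact API:

```python
def try_set_option(options, key, value):
    """Return None on success, or an error payload suitable for a 400 response."""
    try:
        options.set(key, value)
    except (TypeError, AssertionError) as exc:
        # Distinguish a type problem from an option that cannot change at runtime.
        kind = "invalid_type" if isinstance(exc, TypeError) else "immutable_option"
        return {"error": kind, "errorDetail": {"option": key, "message": str(exc)}}
    return None
```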
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/api/endpoints/system_options.py`
Content:
```
1 from __future__ import absolute_import
2
3 import six
4
5 import sentry
6
7 from django.conf import settings
8 from rest_framework.response import Response
9
10 from sentry import options
11 from sentry.api.base import Endpoint
12 from sentry.api.permissions import SuperuserPermission
13 from sentry.utils.email import is_smtp_enabled
14
15
16 class SystemOptionsEndpoint(Endpoint):
17 permission_classes = (SuperuserPermission,)
18
19 def get(self, request):
20 query = request.GET.get("query")
21 if query == "is:required":
22 option_list = options.filter(flag=options.FLAG_REQUIRED)
23 elif query:
24 return Response(u"{} is not a supported search query".format(query), status=400)
25 else:
26 option_list = options.all()
27
28 smtp_disabled = not is_smtp_enabled()
29
30 results = {}
31 for k in option_list:
32 disabled, disabled_reason = False, None
33
34 if smtp_disabled and k.name[:5] == "mail.":
35 disabled_reason, disabled = "smtpDisabled", True
36 elif bool(
37 k.flags & options.FLAG_PRIORITIZE_DISK and settings.SENTRY_OPTIONS.get(k.name)
38 ):
39 # TODO(mattrobenolt): Expose this as a property on Key.
40 disabled_reason, disabled = "diskPriority", True
41
42 # TODO(mattrobenolt): help, placeholder, title, type
43 results[k.name] = {
44 "value": options.get(k.name),
45 "field": {
46 "default": k.default(),
47 "required": bool(k.flags & options.FLAG_REQUIRED),
48 "disabled": disabled,
49 "disabledReason": disabled_reason,
50 "isSet": options.isset(k.name),
51 "allowEmpty": bool(k.flags & options.FLAG_ALLOW_EMPTY),
52 },
53 }
54
55 return Response(results)
56
57 def put(self, request):
58 # TODO(dcramer): this should validate options before saving them
59 for k, v in six.iteritems(request.data):
60 if v and isinstance(v, six.string_types):
61 v = v.strip()
62 try:
63 option = options.lookup_key(k)
64 except options.UnknownOption:
65 # TODO(dcramer): unify API errors
66 return Response(
67 {"error": "unknown_option", "errorDetail": {"option": k}}, status=400
68 )
69
70 try:
71 if not (option.flags & options.FLAG_ALLOW_EMPTY) and not v:
72 options.delete(k)
73 else:
74 options.set(k, v)
75 except TypeError as e:
76 return Response(
77 {
78 "error": "invalid_type",
79 "errorDetail": {"option": k, "message": six.text_type(e)},
80 },
81 status=400,
82 )
83 # TODO(dcramer): this has nothing to do with configuring options and
84 # should not be set here
85 options.set("sentry:version-configured", sentry.get_version())
86 return Response(status=200)
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/api/endpoints/system_options.py b/src/sentry/api/endpoints/system_options.py
--- a/src/sentry/api/endpoints/system_options.py
+++ b/src/sentry/api/endpoints/system_options.py
@@ -72,10 +72,13 @@
options.delete(k)
else:
options.set(k, v)
- except TypeError as e:
+ except (TypeError, AssertionError) as e:
+ # TODO(chadwhitacre): Use a custom exception for the
+ # immutability case, especially since asserts disappear with
+ # `python -O`.
return Response(
{
- "error": "invalid_type",
+ "error": "invalid_type" if type(e) is TypeError else "immutable_option",
"errorDetail": {"option": k, "message": six.text_type(e)},
},
status=400,
| {"golden_diff": "diff --git a/src/sentry/api/endpoints/system_options.py b/src/sentry/api/endpoints/system_options.py\n--- a/src/sentry/api/endpoints/system_options.py\n+++ b/src/sentry/api/endpoints/system_options.py\n@@ -72,10 +72,13 @@\n options.delete(k)\n else:\n options.set(k, v)\n- except TypeError as e:\n+ except (TypeError, AssertionError) as e:\n+ # TODO(chadwhitacre): Use a custom exception for the\n+ # immutability case, especially since asserts disappear with\n+ # `python -O`.\n return Response(\n {\n- \"error\": \"invalid_type\",\n+ \"error\": \"invalid_type\" if type(e) is TypeError else \"immutable_option\",\n \"errorDetail\": {\"option\": k, \"message\": six.text_type(e)},\n },\n status=400,\n", "issue": "Sentry ratelimit cannot be changed when using self-hosted\n## Important Details\r\n\r\nHow are you running Sentry?\r\n\r\n<!-- Please pick one of the following -->\r\nOn-Premise wo/ Docker, version 20.8.0\r\n\r\n## Description\r\nSentry ratelimit cannot be changed when running on premises\r\n\r\n## Steps to Reproduce\r\n\r\n1. Go to web-interface, Admin/Settings\r\n2. Set a non-zero ratelimit.\r\n3. Get an error.\r\n\r\n````\r\nOct 6 07:18:49 jsentry sentry[4128]: 10.100.33.5 - - [06/Oct/2020:04:18:49 +0000] \"GET /api/0/internal/options/ HTTP/1.1\" 200 20407 \"https://sentry.findmykids.org/manage/settings/\" \"Mozilla/5.0 (X11; FreeBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0\"\r\nOct 6 07:19:09 jsentry sentry[4128]: Traceback (most recent call last):\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/base.py\", line 134, in handle_exception\r\nOct 6 07:19:09 jsentry sentry[4128]: response = super(Endpoint, self).handle_exception(exc)\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/djangorestframework-3.6.4-py2.7.egg/rest_framework/views.py\", line 449, in handle_exception\r\nOct 6 07:19:09 jsentry sentry[4128]: self.raise_uncaught_exception(exc)\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/base.py\", line 247, in dispatch\r\nOct 6 07:19:09 jsentry sentry[4128]: response = handler(request, *args, **kwargs)\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/api/endpoints/system_options.py\", line 74, in put\r\nOct 6 07:19:09 jsentry sentry[4128]: options.set(k, v)\r\nOct 6 07:19:09 jsentry sentry[4128]: File \"/usr/local/lib/python2.7/site-packages/sentry-20.8.0-py2.7.egg/sentry/options/manager.py\", line 83, in set\r\nOct 6 07:19:09 jsentry sentry[4128]: \"%r cannot be changed at runtime because it is configured on disk\" % key\r\nOct 6 07:19:09 jsentry sentry[4128]: AssertionError: u'system.url-prefix' cannot be changed at runtime because it is configured on disk\r\nOct 6 07:19:09 jsentry sentry[4128]: 10.100.33.5 - - [06/Oct/2020:04:19:09 +0000] \"PUT /api/0/internal/options/ HTTP/1.1\" 500 746 \"https://sentry.findmykids.org/manage/settings/\" \"Mozilla/5.0 (X11; FreeBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0\"\r\n````\r\n\r\n### What you expected to happen\r\nRatelimit should be changeable from web-interface.\r\n\r\n### Possible Solution\r\nStop sending system.url-prefix in PUT request ?\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport six\n\nimport sentry\n\nfrom django.conf import settings\nfrom rest_framework.response import Response\n\nfrom sentry import options\nfrom 
sentry.api.base import Endpoint\nfrom sentry.api.permissions import SuperuserPermission\nfrom sentry.utils.email import is_smtp_enabled\n\n\nclass SystemOptionsEndpoint(Endpoint):\n permission_classes = (SuperuserPermission,)\n\n def get(self, request):\n query = request.GET.get(\"query\")\n if query == \"is:required\":\n option_list = options.filter(flag=options.FLAG_REQUIRED)\n elif query:\n return Response(u\"{} is not a supported search query\".format(query), status=400)\n else:\n option_list = options.all()\n\n smtp_disabled = not is_smtp_enabled()\n\n results = {}\n for k in option_list:\n disabled, disabled_reason = False, None\n\n if smtp_disabled and k.name[:5] == \"mail.\":\n disabled_reason, disabled = \"smtpDisabled\", True\n elif bool(\n k.flags & options.FLAG_PRIORITIZE_DISK and settings.SENTRY_OPTIONS.get(k.name)\n ):\n # TODO(mattrobenolt): Expose this as a property on Key.\n disabled_reason, disabled = \"diskPriority\", True\n\n # TODO(mattrobenolt): help, placeholder, title, type\n results[k.name] = {\n \"value\": options.get(k.name),\n \"field\": {\n \"default\": k.default(),\n \"required\": bool(k.flags & options.FLAG_REQUIRED),\n \"disabled\": disabled,\n \"disabledReason\": disabled_reason,\n \"isSet\": options.isset(k.name),\n \"allowEmpty\": bool(k.flags & options.FLAG_ALLOW_EMPTY),\n },\n }\n\n return Response(results)\n\n def put(self, request):\n # TODO(dcramer): this should validate options before saving them\n for k, v in six.iteritems(request.data):\n if v and isinstance(v, six.string_types):\n v = v.strip()\n try:\n option = options.lookup_key(k)\n except options.UnknownOption:\n # TODO(dcramer): unify API errors\n return Response(\n {\"error\": \"unknown_option\", \"errorDetail\": {\"option\": k}}, status=400\n )\n\n try:\n if not (option.flags & options.FLAG_ALLOW_EMPTY) and not v:\n options.delete(k)\n else:\n options.set(k, v)\n except TypeError as e:\n return Response(\n {\n \"error\": \"invalid_type\",\n \"errorDetail\": {\"option\": k, \"message\": six.text_type(e)},\n },\n status=400,\n )\n # TODO(dcramer): this has nothing to do with configuring options and\n # should not be set here\n options.set(\"sentry:version-configured\", sentry.get_version())\n return Response(status=200)\n", "path": "src/sentry/api/endpoints/system_options.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport six\n\nimport sentry\n\nfrom django.conf import settings\nfrom rest_framework.response import Response\n\nfrom sentry import options\nfrom sentry.api.base import Endpoint\nfrom sentry.api.permissions import SuperuserPermission\nfrom sentry.utils.email import is_smtp_enabled\n\n\nclass SystemOptionsEndpoint(Endpoint):\n permission_classes = (SuperuserPermission,)\n\n def get(self, request):\n query = request.GET.get(\"query\")\n if query == \"is:required\":\n option_list = options.filter(flag=options.FLAG_REQUIRED)\n elif query:\n return Response(u\"{} is not a supported search query\".format(query), status=400)\n else:\n option_list = options.all()\n\n smtp_disabled = not is_smtp_enabled()\n\n results = {}\n for k in option_list:\n disabled, disabled_reason = False, None\n\n if smtp_disabled and k.name[:5] == \"mail.\":\n disabled_reason, disabled = \"smtpDisabled\", True\n elif bool(\n k.flags & options.FLAG_PRIORITIZE_DISK and settings.SENTRY_OPTIONS.get(k.name)\n ):\n # TODO(mattrobenolt): Expose this as a property on Key.\n disabled_reason, disabled = \"diskPriority\", True\n\n # TODO(mattrobenolt): help, placeholder, title, type\n 
results[k.name] = {\n \"value\": options.get(k.name),\n \"field\": {\n \"default\": k.default(),\n \"required\": bool(k.flags & options.FLAG_REQUIRED),\n \"disabled\": disabled,\n \"disabledReason\": disabled_reason,\n \"isSet\": options.isset(k.name),\n \"allowEmpty\": bool(k.flags & options.FLAG_ALLOW_EMPTY),\n },\n }\n\n return Response(results)\n\n def put(self, request):\n # TODO(dcramer): this should validate options before saving them\n for k, v in six.iteritems(request.data):\n if v and isinstance(v, six.string_types):\n v = v.strip()\n try:\n option = options.lookup_key(k)\n except options.UnknownOption:\n # TODO(dcramer): unify API errors\n return Response(\n {\"error\": \"unknown_option\", \"errorDetail\": {\"option\": k}}, status=400\n )\n\n try:\n if not (option.flags & options.FLAG_ALLOW_EMPTY) and not v:\n options.delete(k)\n else:\n options.set(k, v)\n except (TypeError, AssertionError) as e:\n # TODO(chadwhitacre): Use a custom exception for the\n # immutability case, especially since asserts disappear with\n # `python -O`.\n return Response(\n {\n \"error\": \"invalid_type\" if type(e) is TypeError else \"immutable_option\",\n \"errorDetail\": {\"option\": k, \"message\": six.text_type(e)},\n },\n status=400,\n )\n # TODO(dcramer): this has nothing to do with configuring options and\n # should not be set here\n options.set(\"sentry:version-configured\", sentry.get_version())\n return Response(status=200)\n", "path": "src/sentry/api/endpoints/system_options.py"}]} | 2,041 | 196 |
gh_patches_debug_3815 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-2381 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation site code uses incorrect class from ddtrace/contrib/wsgi/wsgi.py
### Which version of dd-trace-py are you using?
0.48.1
### What libraries am I using:
Python: 3.8
### How can we reproduce your problem?
1. Copy the example code from https://ddtrace.readthedocs.io/en/stable/integrations.html#wsgi
2. Example code cites `DDTraceMiddleware` class
3. Run any wsgi application with said code
### What is the result that you get?
```
ImportError: cannot import name 'DDTraceMiddleware' from 'ddtrace.contrib.wsgi' (/usr/local/lib/python3.8/site-packages/ddtrace/contrib/wsgi/__init__.py)
```
### What is the result that you expected?
Example code in readthedocs site should properly cite the import for the `DDWSGIMiddleware` class: https://github.com/DataDog/dd-trace-py/blob/master/ddtrace/contrib/wsgi/wsgi.py#L83
--- END ISSUE ---
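For reference (editorial addition, not part of the issue): the working form of the documented example, using the class the package actually exports (`DDWSGIMiddleware`, per `ddtrace/contrib/wsgi/__init__.py` below):

```python
from ddtrace.contrib.wsgi import DDWSGIMiddleware

# "application" is assumed to be an existing WSGI callable (e.g. a Flask or Django app).
application = DDWSGIMiddleware(application)
```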
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/wsgi/__init__.py`
Content:
```
1 """
2 The Datadog WSGI middleware traces all WSGI requests.
3
4
5 Usage
6 ~~~~~
7
8 The middleware can be used manually via the following command::
9
10
11 from ddtrace.contrib.wsgi import DDTraceMiddleware
12
13 # application is a WSGI application
14 application = DDTraceMiddleware(application)
15
16
17 Global Configuration
18 ~~~~~~~~~~~~~~~~~~~~
19
20 .. py:data:: ddtrace.config.wsgi["service"]
21
22 The service name reported for the WSGI application.
23
24 This option can also be set with the ``DD_SERVICE`` environment
25 variable.
26
27 Default: ``"wsgi"``
28
29 .. py:data:: ddtrace.config.wsgi["distributed_tracing"]
30
31 Configuration that allows distributed tracing to be enabled.
32
33 Default: ``True``
34
35
36 :ref:`All HTTP tags <http-tagging>` are supported for this integration.
37
38 """
39 from .wsgi import DDWSGIMiddleware
40
41
42 __all__ = [
43 "DDWSGIMiddleware",
44 ]
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/wsgi/__init__.py b/ddtrace/contrib/wsgi/__init__.py
--- a/ddtrace/contrib/wsgi/__init__.py
+++ b/ddtrace/contrib/wsgi/__init__.py
@@ -8,10 +8,10 @@
The middleware can be used manually via the following command::
- from ddtrace.contrib.wsgi import DDTraceMiddleware
+ from ddtrace.contrib.wsgi import DDWSGIMiddleware
# application is a WSGI application
- application = DDTraceMiddleware(application)
+ application = DDWSGIMiddleware(application)
Global Configuration
| {"golden_diff": "diff --git a/ddtrace/contrib/wsgi/__init__.py b/ddtrace/contrib/wsgi/__init__.py\n--- a/ddtrace/contrib/wsgi/__init__.py\n+++ b/ddtrace/contrib/wsgi/__init__.py\n@@ -8,10 +8,10 @@\n The middleware can be used manually via the following command::\n \n \n- from ddtrace.contrib.wsgi import DDTraceMiddleware\n+ from ddtrace.contrib.wsgi import DDWSGIMiddleware\n \n # application is a WSGI application\n- application = DDTraceMiddleware(application)\n+ application = DDWSGIMiddleware(application)\n \n \n Global Configuration\n", "issue": "Documentation site code uses incorrect class from ddtrace/contrib/wsgi/wsgi.py\n### Which version of dd-trace-py are you using?\r\n0.48.1\r\n\r\n### What libraries am I using:\r\nPython: 3.8\r\n\r\n### How can we reproduce your problem?\r\n1. Copy the example code from https://ddtrace.readthedocs.io/en/stable/integrations.html#wsgi\r\n2. Example code cites `DDTraceMiddleware` class\r\n2. Run any wsgi application with said code\r\n\r\n### What is the result that you get?\r\n```\r\nImportError: cannot import name 'DDTraceMiddleware' from 'ddtrace.contrib.wsgi' (/usr/local/lib/python3.8/site-packages/ddtrace/contrib/wsgi/__init__.py)\r\n```\r\n\r\n### What is the result that you expected?\r\nExample code in readthedocs site should properly cite the import for the `DDWSGIMiddleware` class: https://github.com/DataDog/dd-trace-py/blob/master/ddtrace/contrib/wsgi/wsgi.py#L83\r\n\n", "before_files": [{"content": "\"\"\"\nThe Datadog WSGI middleware traces all WSGI requests.\n\n\nUsage\n~~~~~\n\nThe middleware can be used manually via the following command::\n\n\n from ddtrace.contrib.wsgi import DDTraceMiddleware\n\n # application is a WSGI application\n application = DDTraceMiddleware(application)\n\n\nGlobal Configuration\n~~~~~~~~~~~~~~~~~~~~\n\n.. py:data:: ddtrace.config.wsgi[\"service\"]\n\n The service name reported for the WSGI application.\n\n This option can also be set with the ``DD_SERVICE`` environment\n variable.\n\n Default: ``\"wsgi\"``\n\n.. py:data:: ddtrace.config.wsgi[\"distributed_tracing\"]\n\n Configuration that allows distributed tracing to be enabled.\n\n Default: ``True``\n\n\n:ref:`All HTTP tags <http-tagging>` are supported for this integration.\n\n\"\"\"\nfrom .wsgi import DDWSGIMiddleware\n\n\n__all__ = [\n \"DDWSGIMiddleware\",\n]\n", "path": "ddtrace/contrib/wsgi/__init__.py"}], "after_files": [{"content": "\"\"\"\nThe Datadog WSGI middleware traces all WSGI requests.\n\n\nUsage\n~~~~~\n\nThe middleware can be used manually via the following command::\n\n\n from ddtrace.contrib.wsgi import DDWSGIMiddleware\n\n # application is a WSGI application\n application = DDWSGIMiddleware(application)\n\n\nGlobal Configuration\n~~~~~~~~~~~~~~~~~~~~\n\n.. py:data:: ddtrace.config.wsgi[\"service\"]\n\n The service name reported for the WSGI application.\n\n This option can also be set with the ``DD_SERVICE`` environment\n variable.\n\n Default: ``\"wsgi\"``\n\n.. py:data:: ddtrace.config.wsgi[\"distributed_tracing\"]\n\n Configuration that allows distributed tracing to be enabled.\n\n Default: ``True``\n\n\n:ref:`All HTTP tags <http-tagging>` are supported for this integration.\n\n\"\"\"\nfrom .wsgi import DDWSGIMiddleware\n\n\n__all__ = [\n \"DDWSGIMiddleware\",\n]\n", "path": "ddtrace/contrib/wsgi/__init__.py"}]} | 778 | 138 |
gh_patches_debug_16625 | rasdani/github-patches | git_diff | vispy__vispy-1380 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Vispy Linux Error ( division by zero)
When I try to run vispy on Linux I get the following error:
```
from vispy.plot import Fig
f = Fig()
/lib/python2.7/site-packages/vispy/util/dpi/_linux.pyc in <lambda>(x_px, y_px, x_mm, y_mm)
49 from_xrandr = _get_dpi_from(
50 'xrandr', r'(\d+)x(\d+).*?(\d+)mm x (\d+)mm',
---> 51 lambda x_px, y_px, x_mm, y_mm: 25.4 * (x_px / x_mm + y_px / y_mm) / 2)
52 if from_xrandr is not None:
53 return from_xrandr
```
ZeroDivisionError: float division by zero
--- END ISSUE ---
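Editorial illustration (not from the issue): the crash comes from dividing by a 0 mm screen dimension reported by `xrandr`. Below is a standalone sketch of the guard that the golden diff later in this entry introduces, with a 96 DPI fallback; the helper name mirrors the fix but the snippet is self-contained and assumes nothing about vispy:

```python
def xrandr_dpi(x_px, y_px, x_mm, y_mm, fallback=96.0):
    # xrandr can report "0mm x 0mm" for some outputs; avoid ZeroDivisionError.
    if x_mm == 0 or y_mm == 0:
        return fallback
    return 25.4 * (x_px / x_mm + y_px / y_mm) / 2


print(xrandr_dpi(1920, 1080, 0, 0))      # 96.0 instead of ZeroDivisionError
print(xrandr_dpi(1920, 1080, 509, 286))  # roughly 95.9
```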
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/util/dpi/_linux.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # -----------------------------------------------------------------------------
3 # Copyright (c) Vispy Development Team. All Rights Reserved.
4 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
5 # -----------------------------------------------------------------------------
6
7 import os
8 import re
9 from subprocess import CalledProcessError
10
11 from ..logs import logger
12 from ..wrappers import run_subprocess
13
14
15 def _get_dpi_from(cmd, pattern, func):
16 """Match pattern against the output of func, passing the results as
17 floats to func. If anything fails, return None.
18 """
19 try:
20 out, _ = run_subprocess([cmd])
21 except (OSError, CalledProcessError):
22 pass
23 else:
24 match = re.search(pattern, out)
25 if match:
26 return func(*map(float, match.groups()))
27
28
29 def get_dpi(raise_error=True):
30 """Get screen DPI from the OS
31
32 Parameters
33 ----------
34 raise_error : bool
35 If True, raise an error if DPI could not be determined.
36
37 Returns
38 -------
39 dpi : float
40 Dots per inch of the primary screen.
41 """
42 # If we are running without an X server (e.g. OSMesa), use a fixed DPI
43 if 'DISPLAY' not in os.environ:
44 return 96.
45
46 from_xdpyinfo = _get_dpi_from(
47 'xdpyinfo', r'(\d+)x(\d+) dots per inch',
48 lambda x_dpi, y_dpi: (x_dpi + y_dpi) / 2)
49 if from_xdpyinfo is not None:
50 return from_xdpyinfo
51
52 from_xrandr = _get_dpi_from(
53 'xrandr', r'(\d+)x(\d+).*?(\d+)mm x (\d+)mm',
54 lambda x_px, y_px, x_mm, y_mm: 25.4 * (x_px / x_mm + y_px / y_mm) / 2)
55 if from_xrandr is not None:
56 return from_xrandr
57 if raise_error:
58 raise RuntimeError('could not determine DPI')
59 else:
60 logger.warning('could not determine DPI')
61 return 96
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vispy/util/dpi/_linux.py b/vispy/util/dpi/_linux.py
--- a/vispy/util/dpi/_linux.py
+++ b/vispy/util/dpi/_linux.py
@@ -26,6 +26,14 @@
return func(*map(float, match.groups()))
+def _xrandr_calc(x_px, y_px, x_mm, y_mm):
+ if x_mm == 0 or y_mm == 0:
+ logger.warning("'xrandr' output has screen dimension of 0mm, " +
+ "can't compute proper DPI")
+ return 96.
+ return 25.4 * (x_px / x_mm + y_px / y_mm) / 2
+
+
def get_dpi(raise_error=True):
"""Get screen DPI from the OS
@@ -51,7 +59,7 @@
from_xrandr = _get_dpi_from(
'xrandr', r'(\d+)x(\d+).*?(\d+)mm x (\d+)mm',
- lambda x_px, y_px, x_mm, y_mm: 25.4 * (x_px / x_mm + y_px / y_mm) / 2)
+ _xrandr_calc)
if from_xrandr is not None:
return from_xrandr
if raise_error:
| {"golden_diff": "diff --git a/vispy/util/dpi/_linux.py b/vispy/util/dpi/_linux.py\n--- a/vispy/util/dpi/_linux.py\n+++ b/vispy/util/dpi/_linux.py\n@@ -26,6 +26,14 @@\n return func(*map(float, match.groups()))\n \n \n+def _xrandr_calc(x_px, y_px, x_mm, y_mm):\n+ if x_mm == 0 or y_mm == 0:\n+ logger.warning(\"'xrandr' output has screen dimension of 0mm, \" +\n+ \"can't compute proper DPI\")\n+ return 96.\n+ return 25.4 * (x_px / x_mm + y_px / y_mm) / 2\n+\n+\n def get_dpi(raise_error=True):\n \"\"\"Get screen DPI from the OS\n \n@@ -51,7 +59,7 @@\n \n from_xrandr = _get_dpi_from(\n 'xrandr', r'(\\d+)x(\\d+).*?(\\d+)mm x (\\d+)mm',\n- lambda x_px, y_px, x_mm, y_mm: 25.4 * (x_px / x_mm + y_px / y_mm) / 2)\n+ _xrandr_calc)\n if from_xrandr is not None:\n return from_xrandr\n if raise_error:\n", "issue": "Vispy Linux Error ( division by zero)\nWhen I try to tun the vispy on linux I get the following error:\n\n```\nfrom vispy.plot import Fig\nf = Fig()\n\n/lib/python2.7/site-packages/vispy/util/dpi/_linux.pyc in <lambda>(x_px, y_px, x_mm, y_mm)\n 49 from_xrandr = _get_dpi_from(\n 50 'xrandr', r'(\\d+)x(\\d+).*?(\\d+)mm x (\\d+)mm',\n---> 51 lambda x_px, y_px, x_mm, y_mm: 25.4 * (x_px / x_mm + y_px / y_mm) / 2)\n 52 if from_xrandr is not None:\n 53 return from_xrandr\n```\n\nZeroDivisionError: float division by zero\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n# -----------------------------------------------------------------------------\n\nimport os\nimport re\nfrom subprocess import CalledProcessError\n\nfrom ..logs import logger\nfrom ..wrappers import run_subprocess\n\n\ndef _get_dpi_from(cmd, pattern, func):\n \"\"\"Match pattern against the output of func, passing the results as\n floats to func. If anything fails, return None.\n \"\"\"\n try:\n out, _ = run_subprocess([cmd])\n except (OSError, CalledProcessError):\n pass\n else:\n match = re.search(pattern, out)\n if match:\n return func(*map(float, match.groups()))\n\n\ndef get_dpi(raise_error=True):\n \"\"\"Get screen DPI from the OS\n\n Parameters\n ----------\n raise_error : bool\n If True, raise an error if DPI could not be determined.\n\n Returns\n -------\n dpi : float\n Dots per inch of the primary screen.\n \"\"\"\n # If we are running without an X server (e.g. OSMesa), use a fixed DPI\n if 'DISPLAY' not in os.environ:\n return 96.\n\n from_xdpyinfo = _get_dpi_from(\n 'xdpyinfo', r'(\\d+)x(\\d+) dots per inch',\n lambda x_dpi, y_dpi: (x_dpi + y_dpi) / 2)\n if from_xdpyinfo is not None:\n return from_xdpyinfo\n\n from_xrandr = _get_dpi_from(\n 'xrandr', r'(\\d+)x(\\d+).*?(\\d+)mm x (\\d+)mm',\n lambda x_px, y_px, x_mm, y_mm: 25.4 * (x_px / x_mm + y_px / y_mm) / 2)\n if from_xrandr is not None:\n return from_xrandr\n if raise_error:\n raise RuntimeError('could not determine DPI')\n else:\n logger.warning('could not determine DPI')\n return 96\n", "path": "vispy/util/dpi/_linux.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n# -----------------------------------------------------------------------------\n\nimport os\nimport re\nfrom subprocess import CalledProcessError\n\nfrom ..logs import logger\nfrom ..wrappers import run_subprocess\n\n\ndef _get_dpi_from(cmd, pattern, func):\n \"\"\"Match pattern against the output of func, passing the results as\n floats to func. If anything fails, return None.\n \"\"\"\n try:\n out, _ = run_subprocess([cmd])\n except (OSError, CalledProcessError):\n pass\n else:\n match = re.search(pattern, out)\n if match:\n return func(*map(float, match.groups()))\n\n\ndef _xrandr_calc(x_px, y_px, x_mm, y_mm):\n if x_mm == 0 or y_mm == 0:\n logger.warning(\"'xrandr' output has screen dimension of 0mm, \" +\n \"can't compute proper DPI\")\n return 96.\n return 25.4 * (x_px / x_mm + y_px / y_mm) / 2\n\n\ndef get_dpi(raise_error=True):\n \"\"\"Get screen DPI from the OS\n\n Parameters\n ----------\n raise_error : bool\n If True, raise an error if DPI could not be determined.\n\n Returns\n -------\n dpi : float\n Dots per inch of the primary screen.\n \"\"\"\n # If we are running without an X server (e.g. OSMesa), use a fixed DPI\n if 'DISPLAY' not in os.environ:\n return 96.\n\n from_xdpyinfo = _get_dpi_from(\n 'xdpyinfo', r'(\\d+)x(\\d+) dots per inch',\n lambda x_dpi, y_dpi: (x_dpi + y_dpi) / 2)\n if from_xdpyinfo is not None:\n return from_xdpyinfo\n\n from_xrandr = _get_dpi_from(\n 'xrandr', r'(\\d+)x(\\d+).*?(\\d+)mm x (\\d+)mm',\n _xrandr_calc)\n if from_xrandr is not None:\n return from_xrandr\n if raise_error:\n raise RuntimeError('could not determine DPI')\n else:\n logger.warning('could not determine DPI')\n return 96\n", "path": "vispy/util/dpi/_linux.py"}]} | 1,052 | 308 |
gh_patches_debug_25292 | rasdani/github-patches | git_diff | OWASP__owasp-mastg-2253 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Broken links in MASTG Webpage
### MASTG Chapter
0x04g-Testing-Cryptography.md
### File Line Number
90
### Context
The links to "Data Storage on Android" and "Data Storage on iOS" lead to 404
--- END ISSUE ---
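Editorial illustration (not part of the issue): the 404s come from links that still point at flat `0x??-*.md` paths after the site restructure. The fix (see the golden diff below) rewrites each chapter prefix into its new folder; a minimal standalone example of that kind of substitution, with a hypothetical link and the `Android` folder name taken from the diff:

```python
import re

link = "(0x05d-Testing-Data-Storage.md)"
fixed = re.sub(r"\((0x05.*\.md)", r"(../Android/\1", link)
print(fixed)  # (../Android/0x05d-Testing-Data-Storage.md)
```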
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/scripts/transform_files.py`
Content:
```
1 import re, os
2 from pathlib import Path
3
4 EMOJIS_regex = r"🥇 |🎁 |📝 |❗ "
5
6 def transform_links(file_text):
7 print("[*] Regex Substitutions ../Document to MASTG/")
8 found = re.findall(r'(\((?:../)*Document/.*\.md/*)', file_text)
9 print(f" Found: {found}")
10 return re.sub(r"\(((?:../)*)Document/(.*)\.md/*", r"(\1MASTG/\2/", file_text)
11
12 def remove_emojis(file_text):
13 print("[*] Regex Substitutions for emojis")
14 found = re.findall(EMOJIS_regex, file_text)
15 print(f" Found: {found}")
16 return re.sub(EMOJIS_regex, r"", file_text)
17
18 def transform(folder, functions):
19 print(f"[*] Applying transforms to {folder}")
20 for root, dirname, filenames in os.walk(folder):
21 if len(filenames):
22 files = Path(root).glob('*.md')
23
24 for file in files:
25 file_obj = Path(file)
26 print(f" - File {file_obj.as_posix()}")
27 file_text = file_obj.read_text()
28
29 new_text = None
30 for function in functions:
31 if new_text is None:
32 new_text = function(file_text)
33 else:
34 new_text = function(new_text)
35
36 file_obj.write_text(new_text)
37
38 transform("docs", [transform_links])
39 transform("Document", [remove_emojis])
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/scripts/transform_files.py b/tools/scripts/transform_files.py
--- a/tools/scripts/transform_files.py
+++ b/tools/scripts/transform_files.py
@@ -4,10 +4,22 @@
EMOJIS_regex = r"🥇 |🎁 |📝 |❗ "
def transform_links(file_text):
- print("[*] Regex Substitutions ../Document to MASTG/")
- found = re.findall(r'(\((?:../)*Document/.*\.md/*)', file_text)
- print(f" Found: {found}")
- return re.sub(r"\(((?:../)*)Document/(.*)\.md/*", r"(\1MASTG/\2/", file_text)
+ # print("[*] Regex Substitutions ../Document to MASTG/")
+ found = re.findall(r'(\(0x.*\.md/*)', file_text)
+
+ # TODO FIX we must find a better solution to this
+ while len(found) > 0:
+ print(f" Found: {found}")
+ file_text = re.sub(r"\((0x0[1-3].*\.md)", r"(../Intro/\1", file_text)
+ file_text = re.sub(r"\((0x04.*\.md)", r"(../General/\1", file_text)
+ file_text = re.sub(r"\((0x05.*\.md)", r"(../Android/\1", file_text)
+ file_text = re.sub(r"\((0x06.*\.md)", r"(../iOS/\1", file_text)
+ file_text = re.sub(r"\((0x08.*\.md)", r"(../Tools/\1", file_text)
+ file_text = re.sub(r"\((0x09.*\.md)", r"(../References/\1", file_text)
+
+ found = re.findall(r'(\(0x.*\.md/*)', file_text)
+
+ return file_text
def remove_emojis(file_text):
print("[*] Regex Substitutions for emojis")
@@ -35,5 +47,5 @@
file_obj.write_text(new_text)
-transform("docs", [transform_links])
-transform("Document", [remove_emojis])
\ No newline at end of file
+transform("docs/MASTG", [transform_links])
+# transform("docs/MASTG", [remove_emojis])
\ No newline at end of file
| {"golden_diff": "diff --git a/tools/scripts/transform_files.py b/tools/scripts/transform_files.py\n--- a/tools/scripts/transform_files.py\n+++ b/tools/scripts/transform_files.py\n@@ -4,10 +4,22 @@\n EMOJIS_regex = r\"\ud83e\udd47 |\ud83c\udf81 |\ud83d\udcdd |\u2757 \"\n \n def transform_links(file_text):\n- print(\"[*] Regex Substitutions ../Document to MASTG/\")\n- found = re.findall(r'(\\((?:../)*Document/.*\\.md/*)', file_text)\n- print(f\" Found: {found}\")\n- return re.sub(r\"\\(((?:../)*)Document/(.*)\\.md/*\", r\"(\\1MASTG/\\2/\", file_text)\n+ # print(\"[*] Regex Substitutions ../Document to MASTG/\")\n+ found = re.findall(r'(\\(0x.*\\.md/*)', file_text)\n+\n+ # TODO FIX we must find a better solution to this\n+ while len(found) > 0:\n+ print(f\" Found: {found}\")\n+ file_text = re.sub(r\"\\((0x0[1-3].*\\.md)\", r\"(../Intro/\\1\", file_text)\n+ file_text = re.sub(r\"\\((0x04.*\\.md)\", r\"(../General/\\1\", file_text)\n+ file_text = re.sub(r\"\\((0x05.*\\.md)\", r\"(../Android/\\1\", file_text)\n+ file_text = re.sub(r\"\\((0x06.*\\.md)\", r\"(../iOS/\\1\", file_text)\n+ file_text = re.sub(r\"\\((0x08.*\\.md)\", r\"(../Tools/\\1\", file_text)\n+ file_text = re.sub(r\"\\((0x09.*\\.md)\", r\"(../References/\\1\", file_text)\n+\n+ found = re.findall(r'(\\(0x.*\\.md/*)', file_text)\n+\n+ return file_text\n \n def remove_emojis(file_text):\n print(\"[*] Regex Substitutions for emojis\")\n@@ -35,5 +47,5 @@\n \n file_obj.write_text(new_text)\n \n-transform(\"docs\", [transform_links])\n-transform(\"Document\", [remove_emojis])\n\\ No newline at end of file\n+transform(\"docs/MASTG\", [transform_links])\n+# transform(\"docs/MASTG\", [remove_emojis])\n\\ No newline at end of file\n", "issue": "[Bug] Broken links in MASTG Webpage\n### MASTG Chapter\n\n0x04g-Testing-Cryptography.md\n\n### File Line Number\n\n90\n\n### Context\n\nThe links to \"Data Storage on Android\" and \"Data Storage on iOS\" lead to 404\n", "before_files": [{"content": "import re, os\nfrom pathlib import Path\n\nEMOJIS_regex = r\"\ud83e\udd47 |\ud83c\udf81 |\ud83d\udcdd |\u2757 \"\n\ndef transform_links(file_text):\n print(\"[*] Regex Substitutions ../Document to MASTG/\")\n found = re.findall(r'(\\((?:../)*Document/.*\\.md/*)', file_text)\n print(f\" Found: {found}\")\n return re.sub(r\"\\(((?:../)*)Document/(.*)\\.md/*\", r\"(\\1MASTG/\\2/\", file_text)\n\ndef remove_emojis(file_text):\n print(\"[*] Regex Substitutions for emojis\")\n found = re.findall(EMOJIS_regex, file_text)\n print(f\" Found: {found}\")\n return re.sub(EMOJIS_regex, r\"\", file_text)\n\ndef transform(folder, functions):\n print(f\"[*] Applying transforms to {folder}\")\n for root, dirname, filenames in os.walk(folder):\n if len(filenames):\n files = Path(root).glob('*.md')\n\n for file in files:\n file_obj = Path(file)\n print(f\" - File {file_obj.as_posix()}\")\n file_text = file_obj.read_text()\n \n new_text = None\n for function in functions:\n if new_text is None:\n new_text = function(file_text)\n else:\n new_text = function(new_text)\n\n file_obj.write_text(new_text)\n\ntransform(\"docs\", [transform_links])\ntransform(\"Document\", [remove_emojis])", "path": "tools/scripts/transform_files.py"}], "after_files": [{"content": "import re, os\nfrom pathlib import Path\n\nEMOJIS_regex = r\"\ud83e\udd47 |\ud83c\udf81 |\ud83d\udcdd |\u2757 \"\n\ndef transform_links(file_text):\n # print(\"[*] Regex Substitutions ../Document to MASTG/\")\n found = re.findall(r'(\\(0x.*\\.md/*)', file_text)\n\n # TODO FIX we must find a better solution to this\n 
while len(found) > 0:\n print(f\" Found: {found}\")\n file_text = re.sub(r\"\\((0x0[1-3].*\\.md)\", r\"(../Intro/\\1\", file_text)\n file_text = re.sub(r\"\\((0x04.*\\.md)\", r\"(../General/\\1\", file_text)\n file_text = re.sub(r\"\\((0x05.*\\.md)\", r\"(../Android/\\1\", file_text)\n file_text = re.sub(r\"\\((0x06.*\\.md)\", r\"(../iOS/\\1\", file_text)\n file_text = re.sub(r\"\\((0x08.*\\.md)\", r\"(../Tools/\\1\", file_text)\n file_text = re.sub(r\"\\((0x09.*\\.md)\", r\"(../References/\\1\", file_text)\n\n found = re.findall(r'(\\(0x.*\\.md/*)', file_text)\n\n return file_text\n\ndef remove_emojis(file_text):\n print(\"[*] Regex Substitutions for emojis\")\n found = re.findall(EMOJIS_regex, file_text)\n print(f\" Found: {found}\")\n return re.sub(EMOJIS_regex, r\"\", file_text)\n\ndef transform(folder, functions):\n print(f\"[*] Applying transforms to {folder}\")\n for root, dirname, filenames in os.walk(folder):\n if len(filenames):\n files = Path(root).glob('*.md')\n\n for file in files:\n file_obj = Path(file)\n print(f\" - File {file_obj.as_posix()}\")\n file_text = file_obj.read_text()\n \n new_text = None\n for function in functions:\n if new_text is None:\n new_text = function(file_text)\n else:\n new_text = function(new_text)\n\n file_obj.write_text(new_text)\n\ntransform(\"docs/MASTG\", [transform_links])\n# transform(\"docs/MASTG\", [remove_emojis])", "path": "tools/scripts/transform_files.py"}]} | 718 | 529 |
gh_patches_debug_16918 | rasdani/github-patches | git_diff | ansible__ansible-18121 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
safe_eval() warning for variables containing curly braces and Jinja2 expressions
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`template/safe_eval`
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### SUMMARY
There is a strange `safe_eval()` warning if one wants to template a string containing Jinja2 expressions and curly braces. Even escaping the curly braces like `{{ '{}' }}` does not help.
##### STEPS TO REPRODUCE
```
- hosts: localhost
vars:
foo: "bar"
tasks:
- debug: msg="{}/{{ foo }}"
```
##### EXPECTED RESULTS
Apart from the warning, the expected results coincide with the actual ones.
##### ACTUAL RESULTS
```
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [debug] *******************************************************************
[WARNING]: Exception in safe_eval() on expr: {}/bar (name 'bar' is not defined)
ok: [localhost] => {
"msg": "{}/bar"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/template/safe_eval.py`
Content:
```
1 # (c) 2012, Michael DeHaan <[email protected]>
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 from __future__ import (absolute_import, division, print_function)
18 __metaclass__ = type
19
20 import ast
21 import sys
22
23 from ansible.compat.six import string_types
24 from ansible.compat.six.moves import builtins
25
26 from ansible import constants as C
27 from ansible.plugins import filter_loader, test_loader
28
29 try:
30 from __main__ import display
31 except ImportError:
32 from ansible.utils.display import Display
33 display = Display()
34
35 def safe_eval(expr, locals={}, include_exceptions=False):
36 '''
37 This is intended for allowing things like:
38 with_items: a_list_variable
39
40 Where Jinja2 would return a string but we do not want to allow it to
41 call functions (outside of Jinja2, where the env is constrained). If
42 the input data to this function came from an untrusted (remote) source,
43 it should first be run through _clean_data_struct() to ensure the data
44 is further sanitized prior to evaluation.
45
46 Based on:
47 http://stackoverflow.com/questions/12523516/using-ast-and-whitelists-to-make-pythons-eval-safe
48 '''
49
50 # define certain JSON types
51 # eg. JSON booleans are unknown to python eval()
52 JSON_TYPES = {
53 'false': False,
54 'null': None,
55 'true': True,
56 }
57
58 # this is the whitelist of AST nodes we are going to
59 # allow in the evaluation. Any node type other than
60 # those listed here will raise an exception in our custom
61 # visitor class defined below.
62 SAFE_NODES = set(
63 (
64 ast.Add,
65 ast.BinOp,
66 ast.Call,
67 ast.Compare,
68 ast.Dict,
69 ast.Div,
70 ast.Expression,
71 ast.List,
72 ast.Load,
73 ast.Mult,
74 ast.Num,
75 ast.Name,
76 ast.Str,
77 ast.Sub,
78 ast.Tuple,
79 ast.UnaryOp,
80 )
81 )
82
83 # AST node types were expanded after 2.6
84 if sys.version_info[:2] >= (2, 7):
85 SAFE_NODES.update(
86 set(
87 (ast.Set,)
88 )
89 )
90
91 # And in Python 3.4 too
92 if sys.version_info[:2] >= (3, 4):
93 SAFE_NODES.update(
94 set(
95 (ast.NameConstant,)
96 )
97 )
98
99 filter_list = []
100 for filter in filter_loader.all():
101 filter_list.extend(filter.filters().keys())
102
103 test_list = []
104 for test in test_loader.all():
105 test_list.extend(test.tests().keys())
106
107 CALL_WHITELIST = C.DEFAULT_CALLABLE_WHITELIST + filter_list + test_list
108
109 class CleansingNodeVisitor(ast.NodeVisitor):
110 def generic_visit(self, node, inside_call=False):
111 if type(node) not in SAFE_NODES:
112 raise Exception("invalid expression (%s)" % expr)
113 elif isinstance(node, ast.Call):
114 inside_call = True
115 elif isinstance(node, ast.Name) and inside_call:
116 if hasattr(builtins, node.id) and node.id not in CALL_WHITELIST:
117 raise Exception("invalid function: %s" % node.id)
118 # iterate over all child nodes
119 for child_node in ast.iter_child_nodes(node):
120 self.generic_visit(child_node, inside_call)
121
122 if not isinstance(expr, string_types):
123 # already templated to a datastructure, perhaps?
124 if include_exceptions:
125 return (expr, None)
126 return expr
127
128 cnv = CleansingNodeVisitor()
129 try:
130 parsed_tree = ast.parse(expr, mode='eval')
131 cnv.visit(parsed_tree)
132 compiled = compile(parsed_tree, expr, 'eval')
133 result = eval(compiled, JSON_TYPES, dict(locals))
134
135 if include_exceptions:
136 return (result, None)
137 else:
138 return result
139 except SyntaxError as e:
140 # special handling for syntax errors, we just return
141 # the expression string back as-is to support late evaluation
142 if include_exceptions:
143 return (expr, None)
144 return expr
145 except Exception as e:
146 display.warning('Exception in safe_eval() on expr: %s (%s)' % (expr, e))
147 if include_exceptions:
148 return (expr, e)
149 return expr
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansible/template/safe_eval.py b/lib/ansible/template/safe_eval.py
--- a/lib/ansible/template/safe_eval.py
+++ b/lib/ansible/template/safe_eval.py
@@ -26,12 +26,6 @@
from ansible import constants as C
from ansible.plugins import filter_loader, test_loader
-try:
- from __main__ import display
-except ImportError:
- from ansible.utils.display import Display
- display = Display()
-
def safe_eval(expr, locals={}, include_exceptions=False):
'''
This is intended for allowing things like:
@@ -143,7 +137,6 @@
return (expr, None)
return expr
except Exception as e:
- display.warning('Exception in safe_eval() on expr: %s (%s)' % (expr, e))
if include_exceptions:
return (expr, e)
return expr
| {"golden_diff": "diff --git a/lib/ansible/template/safe_eval.py b/lib/ansible/template/safe_eval.py\n--- a/lib/ansible/template/safe_eval.py\n+++ b/lib/ansible/template/safe_eval.py\n@@ -26,12 +26,6 @@\n from ansible import constants as C\n from ansible.plugins import filter_loader, test_loader\n \n-try:\n- from __main__ import display\n-except ImportError:\n- from ansible.utils.display import Display\n- display = Display()\n-\n def safe_eval(expr, locals={}, include_exceptions=False):\n '''\n This is intended for allowing things like:\n@@ -143,7 +137,6 @@\n return (expr, None)\n return expr\n except Exception as e:\n- display.warning('Exception in safe_eval() on expr: %s (%s)' % (expr, e))\n if include_exceptions:\n return (expr, e)\n return expr\n", "issue": "safe_eval() warning for variables containing curly braces and Jinja2 expressions\n##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\n\n`template/safe_eval`\n##### ANSIBLE VERSION\n\n```\nansible 2.2.0.0\n config file = \n configured module search path = Default w/o overrides\n```\n##### SUMMARY\n\nThere is a strange `safe_eval()` warning if one wants to template a string containing Jinja2 expressions and curly braces. Even escaping the curly braces like `{{ '{}' }}` does not help.\n##### STEPS TO REPRODUCE\n\n```\n- hosts: localhost\n vars:\n foo: \"bar\"\n tasks:\n - debug: msg=\"{}/{{ foo }}\"\n```\n##### EXPECTED RESULTS\n\nApart from the warning the expected results coincide with the actual ones.\n##### ACTUAL RESULTS\n\n```\nPLAY [localhost] ***************************************************************\n\nTASK [setup] *******************************************************************\nok: [localhost]\n\nTASK [debug] *******************************************************************\n [WARNING]: Exception in safe_eval() on expr: {}/bar (name 'bar' is not defined)\n\nok: [localhost] => {\n \"msg\": \"{}/bar\"\n}\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=2 changed=0 unreachable=0 failed=0 \n```\n\n", "before_files": [{"content": "# (c) 2012, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport ast\nimport sys\n\nfrom ansible.compat.six import string_types\nfrom ansible.compat.six.moves import builtins\n\nfrom ansible import constants as C\nfrom ansible.plugins import filter_loader, test_loader\n\ntry:\n from __main__ import display\nexcept ImportError:\n from ansible.utils.display import Display\n display = Display()\n\ndef safe_eval(expr, locals={}, include_exceptions=False):\n '''\n This is intended for allowing things like:\n with_items: a_list_variable\n\n Where Jinja2 would return a string but we do not want to allow it to\n call functions (outside of Jinja2, where the env is constrained). 
If\n the input data to this function came from an untrusted (remote) source,\n it should first be run through _clean_data_struct() to ensure the data\n is further sanitized prior to evaluation.\n\n Based on:\n http://stackoverflow.com/questions/12523516/using-ast-and-whitelists-to-make-pythons-eval-safe\n '''\n\n # define certain JSON types\n # eg. JSON booleans are unknown to python eval()\n JSON_TYPES = {\n 'false': False,\n 'null': None,\n 'true': True,\n }\n\n # this is the whitelist of AST nodes we are going to\n # allow in the evaluation. Any node type other than\n # those listed here will raise an exception in our custom\n # visitor class defined below.\n SAFE_NODES = set(\n (\n ast.Add,\n ast.BinOp,\n ast.Call,\n ast.Compare,\n ast.Dict,\n ast.Div,\n ast.Expression,\n ast.List,\n ast.Load,\n ast.Mult,\n ast.Num,\n ast.Name,\n ast.Str,\n ast.Sub,\n ast.Tuple,\n ast.UnaryOp,\n )\n )\n\n # AST node types were expanded after 2.6\n if sys.version_info[:2] >= (2, 7):\n SAFE_NODES.update(\n set(\n (ast.Set,)\n )\n )\n\n # And in Python 3.4 too\n if sys.version_info[:2] >= (3, 4):\n SAFE_NODES.update(\n set(\n (ast.NameConstant,)\n )\n )\n\n filter_list = []\n for filter in filter_loader.all():\n filter_list.extend(filter.filters().keys())\n\n test_list = []\n for test in test_loader.all():\n test_list.extend(test.tests().keys())\n\n CALL_WHITELIST = C.DEFAULT_CALLABLE_WHITELIST + filter_list + test_list\n\n class CleansingNodeVisitor(ast.NodeVisitor):\n def generic_visit(self, node, inside_call=False):\n if type(node) not in SAFE_NODES:\n raise Exception(\"invalid expression (%s)\" % expr)\n elif isinstance(node, ast.Call):\n inside_call = True\n elif isinstance(node, ast.Name) and inside_call:\n if hasattr(builtins, node.id) and node.id not in CALL_WHITELIST:\n raise Exception(\"invalid function: %s\" % node.id)\n # iterate over all child nodes\n for child_node in ast.iter_child_nodes(node):\n self.generic_visit(child_node, inside_call)\n\n if not isinstance(expr, string_types):\n # already templated to a datastructure, perhaps?\n if include_exceptions:\n return (expr, None)\n return expr\n\n cnv = CleansingNodeVisitor()\n try:\n parsed_tree = ast.parse(expr, mode='eval')\n cnv.visit(parsed_tree)\n compiled = compile(parsed_tree, expr, 'eval')\n result = eval(compiled, JSON_TYPES, dict(locals))\n\n if include_exceptions:\n return (result, None)\n else:\n return result\n except SyntaxError as e:\n # special handling for syntax errors, we just return\n # the expression string back as-is to support late evaluation\n if include_exceptions:\n return (expr, None)\n return expr\n except Exception as e:\n display.warning('Exception in safe_eval() on expr: %s (%s)' % (expr, e))\n if include_exceptions:\n return (expr, e)\n return expr\n", "path": "lib/ansible/template/safe_eval.py"}], "after_files": [{"content": "# (c) 2012, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport ast\nimport sys\n\nfrom ansible.compat.six import string_types\nfrom ansible.compat.six.moves import builtins\n\nfrom ansible import constants as C\nfrom ansible.plugins import filter_loader, test_loader\n\ndef safe_eval(expr, locals={}, include_exceptions=False):\n '''\n This is intended for allowing things like:\n with_items: a_list_variable\n\n Where Jinja2 would return a string but we do not want to allow it to\n call functions (outside of Jinja2, where the env is constrained). If\n the input data to this function came from an untrusted (remote) source,\n it should first be run through _clean_data_struct() to ensure the data\n is further sanitized prior to evaluation.\n\n Based on:\n http://stackoverflow.com/questions/12523516/using-ast-and-whitelists-to-make-pythons-eval-safe\n '''\n\n # define certain JSON types\n # eg. JSON booleans are unknown to python eval()\n JSON_TYPES = {\n 'false': False,\n 'null': None,\n 'true': True,\n }\n\n # this is the whitelist of AST nodes we are going to\n # allow in the evaluation. Any node type other than\n # those listed here will raise an exception in our custom\n # visitor class defined below.\n SAFE_NODES = set(\n (\n ast.Add,\n ast.BinOp,\n ast.Call,\n ast.Compare,\n ast.Dict,\n ast.Div,\n ast.Expression,\n ast.List,\n ast.Load,\n ast.Mult,\n ast.Num,\n ast.Name,\n ast.Str,\n ast.Sub,\n ast.Tuple,\n ast.UnaryOp,\n )\n )\n\n # AST node types were expanded after 2.6\n if sys.version_info[:2] >= (2, 7):\n SAFE_NODES.update(\n set(\n (ast.Set,)\n )\n )\n\n # And in Python 3.4 too\n if sys.version_info[:2] >= (3, 4):\n SAFE_NODES.update(\n set(\n (ast.NameConstant,)\n )\n )\n\n filter_list = []\n for filter in filter_loader.all():\n filter_list.extend(filter.filters().keys())\n\n test_list = []\n for test in test_loader.all():\n test_list.extend(test.tests().keys())\n\n CALL_WHITELIST = C.DEFAULT_CALLABLE_WHITELIST + filter_list + test_list\n\n class CleansingNodeVisitor(ast.NodeVisitor):\n def generic_visit(self, node, inside_call=False):\n if type(node) not in SAFE_NODES:\n raise Exception(\"invalid expression (%s)\" % expr)\n elif isinstance(node, ast.Call):\n inside_call = True\n elif isinstance(node, ast.Name) and inside_call:\n if hasattr(builtins, node.id) and node.id not in CALL_WHITELIST:\n raise Exception(\"invalid function: %s\" % node.id)\n # iterate over all child nodes\n for child_node in ast.iter_child_nodes(node):\n self.generic_visit(child_node, inside_call)\n\n if not isinstance(expr, string_types):\n # already templated to a datastructure, perhaps?\n if include_exceptions:\n return (expr, None)\n return expr\n\n cnv = CleansingNodeVisitor()\n try:\n parsed_tree = ast.parse(expr, mode='eval')\n cnv.visit(parsed_tree)\n compiled = compile(parsed_tree, expr, 'eval')\n result = eval(compiled, JSON_TYPES, dict(locals))\n\n if include_exceptions:\n return (result, None)\n else:\n return result\n except SyntaxError as e:\n # special handling for syntax errors, we just return\n # the expression string back as-is to support late evaluation\n if include_exceptions:\n return (expr, None)\n return expr\n except Exception as e:\n if include_exceptions:\n return (expr, e)\n return expr\n", "path": "lib/ansible/template/safe_eval.py"}]} | 1,966 | 199 |
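A quick illustration of why the warning in this record fires: Jinja2 renders `{{ foo }}` first, so `safe_eval()` receives the literal string `{}/bar`. That string is valid Python syntax, so parsing succeeds, but evaluating it raises `NameError` because `bar` is undefined; that exception is what the removed `display.warning()` call used to report, even though `safe_eval()` still returns the original string unchanged. A minimal standalone reproduction (outside of Ansible, names purely illustrative):

```python
import ast

# What safe_eval() receives after Jinja2 has rendered "{}/{{ foo }}" with foo = "bar".
expr = "{}/bar"

parsed = ast.parse(expr, mode="eval")   # parses fine: an empty dict divided by a name
compiled = compile(parsed, expr, "eval")

try:
    eval(compiled, {}, {})              # NameError: name 'bar' is not defined
except Exception as exc:
    # This is the exception that used to trigger the spurious warning;
    # safe_eval() simply hands the original string back to the caller.
    print("%s: %s" % (type(exc).__name__, exc))
```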
gh_patches_debug_1086 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-1084 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError thrown when trying to import hypothesis in 3.44.21
hypothesis (3.44.21)
In [4]: from hypothesis import given
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-4ce9639ca03b> in <module>()
----> 1 from hypothesis import given
/usr/local/lib/python2.7/dist-packages/hypothesis/__init__.py in <module>()
29 from hypothesis.version import __version_info__, __version__
30 from hypothesis.control import assume, note, reject, event
---> 31 from hypothesis.core import given, find, example, seed, reproduce_failure, \
32 PrintSettings
33 from hypothesis.utils.conventions import infer
/usr/local/lib/python2.7/dist-packages/hypothesis/core.py in <module>()
35 from coverage.collector import Collector
36
---> 37 import hypothesis.strategies as st
38 from hypothesis import __version__
39 from hypothesis.errors import Flaky, Timeout, NoSuchExample, \
/usr/local/lib/python2.7/dist-packages/hypothesis/strategies.py in <module>()
30 from hypothesis.control import assume
31 from hypothesis._settings import note_deprecation
---> 32 from hypothesis.internal.cache import LRUReusedCache
33 from hypothesis.searchstrategy import SearchStrategy
34 from hypothesis.internal.compat import gcd, ceil, floor, hrange, \
/usr/local/lib/python2.7/dist-packages/hypothesis/internal/cache.py in <module>()
21
22
---> 23 @attr.s(slots=True)
24 class Entry(object):
25 key = attr.ib()
TypeError: attributes() got an unexpected keyword argument 'slots'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import os
21 import sys
22
23 import setuptools
24
25
26 def local_file(name):
27 return os.path.relpath(os.path.join(os.path.dirname(__file__), name))
28
29
30 SOURCE = local_file('src')
31 README = local_file('README.rst')
32
33
34 # Assignment to placate pyflakes. The actual version is from the exec that
35 # follows.
36 __version__ = None
37
38 with open(local_file('src/hypothesis/version.py')) as o:
39 exec(o.read())
40
41 assert __version__ is not None
42
43
44 extras = {
45 'datetime': ['pytz'],
46 'pytz': ['pytz'],
47 'fakefactory': ['Faker>=0.7'],
48 'numpy': ['numpy>=1.9.0'],
49 'pytest': ['pytest>=2.8.0'],
50 }
51
52 # Django 2 only supports Python 3, but doesn't have any python_requires
53 # markers in its setup.py --- so "pip install django" just fails in
54 # Python 2. So rather than relying on pip, we pin the version of
55 # Django on Python 2 ourselves.
56 #
57 # See https://github.com/HypothesisWorks/hypothesis-python/pull/1008
58 if sys.version_info[0] < 3:
59 django_major_pin = '<2'
60 else:
61 django_major_pin = '<3'
62
63 # We only support the releases of Django that are supported by the Django
64 # core team. See https://www.djangoproject.com/download/#supported-versions
65 #
66 # New versions of setuptools allow us to set very precise pins; older versions
67 # of setuptools are coarser.
68 major_setuptools_version = int(setuptools.__version__.split('.')[0])
69 if major_setuptools_version >= 8:
70 django_minor_pin = '>=1.8,!=1.9.*,!=1.10.*'
71 else:
72 django_minor_pin = '>=1.8'
73
74 django_pin = 'django%s,%s' % (django_minor_pin, django_major_pin)
75 extras['django'] = ['pytz', django_pin]
76
77 extras['faker'] = extras['fakefactory']
78
79 extras['all'] = sorted(sum(extras.values(), []))
80
81 extras[":python_version == '2.7'"] = ['enum34']
82
83 install_requires = ['attrs', 'coverage']
84
85 if sys.version_info[0] < 3:
86 install_requires.append('enum34')
87
88 setuptools.setup(
89 name='hypothesis',
90 version=__version__,
91 author='David R. MacIver',
92 author_email='[email protected]',
93 packages=setuptools.find_packages(SOURCE),
94 package_dir={'': SOURCE},
95 url='https://github.com/HypothesisWorks/hypothesis-python',
96 license='MPL v2',
97 description='A library for property based testing',
98 zip_safe=False,
99 extras_require=extras,
100 install_requires=install_requires,
101 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
102 classifiers=[
103 'Development Status :: 5 - Production/Stable',
104 'Intended Audience :: Developers',
105 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',
106 'Operating System :: Unix',
107 'Operating System :: POSIX',
108 'Operating System :: Microsoft :: Windows',
109 'Programming Language :: Python',
110 'Programming Language :: Python :: 2.7',
111 'Programming Language :: Python :: 3',
112 'Programming Language :: Python :: 3.4',
113 'Programming Language :: Python :: 3.5',
114 'Programming Language :: Python :: 3.6',
115 'Programming Language :: Python :: Implementation :: CPython',
116 'Programming Language :: Python :: Implementation :: PyPy',
117 'Topic :: Software Development :: Testing',
118 ],
119 entry_points={
120 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],
121 },
122 long_description=open(README).read(),
123 )
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -80,7 +80,7 @@
extras[":python_version == '2.7'"] = ['enum34']
-install_requires = ['attrs', 'coverage']
+install_requires = ['attrs>=16.0.0', 'coverage']
if sys.version_info[0] < 3:
install_requires.append('enum34')
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -80,7 +80,7 @@\n \n extras[\":python_version == '2.7'\"] = ['enum34']\n \n-install_requires = ['attrs', 'coverage']\n+install_requires = ['attrs>=16.0.0', 'coverage']\n \n if sys.version_info[0] < 3:\n install_requires.append('enum34')\n", "issue": "TypeError thrown when trying to import hypothesis in 3.44.21\nhypothesis (3.44.21)\r\n\r\nIn [4]: from hypothesis import given\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-4-4ce9639ca03b> in <module>()\r\n----> 1 from hypothesis import given\r\n\r\n/usr/local/lib/python2.7/dist-packages/hypothesis/__init__.py in <module>()\r\n 29 from hypothesis.version import __version_info__, __version__\r\n 30 from hypothesis.control import assume, note, reject, event\r\n---> 31 from hypothesis.core import given, find, example, seed, reproduce_failure, \\\r\n 32 PrintSettings\r\n 33 from hypothesis.utils.conventions import infer\r\n\r\n/usr/local/lib/python2.7/dist-packages/hypothesis/core.py in <module>()\r\n 35 from coverage.collector import Collector\r\n 36\r\n---> 37 import hypothesis.strategies as st\r\n 38 from hypothesis import __version__\r\n 39 from hypothesis.errors import Flaky, Timeout, NoSuchExample, \\\r\n\r\n/usr/local/lib/python2.7/dist-packages/hypothesis/strategies.py in <module>()\r\n 30 from hypothesis.control import assume\r\n 31 from hypothesis._settings import note_deprecation\r\n---> 32 from hypothesis.internal.cache import LRUReusedCache\r\n 33 from hypothesis.searchstrategy import SearchStrategy\r\n 34 from hypothesis.internal.compat import gcd, ceil, floor, hrange, \\\r\n\r\n/usr/local/lib/python2.7/dist-packages/hypothesis/internal/cache.py in <module>()\r\n 21\r\n 22\r\n---> 23 @attr.s(slots=True)\r\n 24 class Entry(object):\r\n 25 key = attr.ib()\r\n\r\nTypeError: attributes() got an unexpected keyword argument 'slots'\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\n\nimport setuptools\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file('src')\nREADME = local_file('README.rst')\n\n\n# Assignment to placate pyflakes. The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file('src/hypothesis/version.py')) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n 'datetime': ['pytz'],\n 'pytz': ['pytz'],\n 'fakefactory': ['Faker>=0.7'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=2.8.0'],\n}\n\n# Django 2 only supports Python 3, but doesn't have any python_requires\n# markers in its setup.py --- so \"pip install django\" just fails in\n# Python 2. 
So rather than relying on pip, we pin the version of\n# Django on Python 2 ourselves.\n#\n# See https://github.com/HypothesisWorks/hypothesis-python/pull/1008\nif sys.version_info[0] < 3:\n django_major_pin = '<2'\nelse:\n django_major_pin = '<3'\n\n# We only support the releases of Django that are supported by the Django\n# core team. See https://www.djangoproject.com/download/#supported-versions\n#\n# New versions of setuptools allow us to set very precise pins; older versions\n# of setuptools are coarser.\nmajor_setuptools_version = int(setuptools.__version__.split('.')[0])\nif major_setuptools_version >= 8:\n django_minor_pin = '>=1.8,!=1.9.*,!=1.10.*'\nelse:\n django_minor_pin = '>=1.8'\n\ndjango_pin = 'django%s,%s' % (django_minor_pin, django_major_pin)\nextras['django'] = ['pytz', django_pin]\n\nextras['faker'] = extras['fakefactory']\n\nextras['all'] = sorted(sum(extras.values(), []))\n\nextras[\":python_version == '2.7'\"] = ['enum34']\n\ninstall_requires = ['attrs', 'coverage']\n\nif sys.version_info[0] < 3:\n install_requires.append('enum34')\n\nsetuptools.setup(\n name='hypothesis',\n version=__version__,\n author='David R. MacIver',\n author_email='[email protected]',\n packages=setuptools.find_packages(SOURCE),\n package_dir={'': SOURCE},\n url='https://github.com/HypothesisWorks/hypothesis-python',\n license='MPL v2',\n description='A library for property based testing',\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',\n 'Operating System :: Unix',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Testing',\n ],\n entry_points={\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n)\n", "path": "setup.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\n\nimport setuptools\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file('src')\nREADME = local_file('README.rst')\n\n\n# Assignment to placate pyflakes. 
The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file('src/hypothesis/version.py')) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n 'datetime': ['pytz'],\n 'pytz': ['pytz'],\n 'fakefactory': ['Faker>=0.7'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=2.8.0'],\n}\n\n# Django 2 only supports Python 3, but doesn't have any python_requires\n# markers in its setup.py --- so \"pip install django\" just fails in\n# Python 2. So rather than relying on pip, we pin the version of\n# Django on Python 2 ourselves.\n#\n# See https://github.com/HypothesisWorks/hypothesis-python/pull/1008\nif sys.version_info[0] < 3:\n django_major_pin = '<2'\nelse:\n django_major_pin = '<3'\n\n# We only support the releases of Django that are supported by the Django\n# core team. See https://www.djangoproject.com/download/#supported-versions\n#\n# New versions of setuptools allow us to set very precise pins; older versions\n# of setuptools are coarser.\nmajor_setuptools_version = int(setuptools.__version__.split('.')[0])\nif major_setuptools_version >= 8:\n django_minor_pin = '>=1.8,!=1.9.*,!=1.10.*'\nelse:\n django_minor_pin = '>=1.8'\n\ndjango_pin = 'django%s,%s' % (django_minor_pin, django_major_pin)\nextras['django'] = ['pytz', django_pin]\n\nextras['faker'] = extras['fakefactory']\n\nextras['all'] = sorted(sum(extras.values(), []))\n\nextras[\":python_version == '2.7'\"] = ['enum34']\n\ninstall_requires = ['attrs>=16.0.0', 'coverage']\n\nif sys.version_info[0] < 3:\n install_requires.append('enum34')\n\nsetuptools.setup(\n name='hypothesis',\n version=__version__,\n author='David R. MacIver',\n author_email='[email protected]',\n packages=setuptools.find_packages(SOURCE),\n package_dir={'': SOURCE},\n url='https://github.com/HypothesisWorks/hypothesis-python',\n license='MPL v2',\n description='A library for property based testing',\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',\n 'Operating System :: Unix',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Testing',\n ],\n entry_points={\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n)\n", "path": "setup.py"}]} | 1,981 | 99 |
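The traceback in this record stops inside `hypothesis/internal/cache.py`, where a class is decorated with `@attr.s(slots=True)`; older releases of the `attrs` package do not accept the `slots` keyword, which is why the accepted patch tightens the dependency from `attrs` to `attrs>=16.0.0`. A small self-contained sketch of the failing pattern (the `Entry` class below is a simplified stand-in, not the actual hypothesis code):

```python
import attr  # with a sufficiently new attrs this runs; an old release raises
             # TypeError: attributes() got an unexpected keyword argument 'slots'


@attr.s(slots=True)
class Entry(object):
    """Simplified stand-in for the cache entry class named in the traceback."""
    key = attr.ib()
    value = attr.ib()


print(Entry(key="k", value=1))  # Entry(key='k', value=1)
```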
gh_patches_debug_22830 | rasdani/github-patches | git_diff | wright-group__WrightTools-201 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
distribute client_secrets
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # !/usr/bin/env python
2
3 import os
4 from setuptools import setup, find_packages
5
6 def package_files(directory):
7 paths = []
8 for (path, directories, filenames) in os.walk(directory):
9 for filename in filenames:
10 paths.append(os.path.join('..', path, filename))
11 return paths
12
13 here = os.path.abspath(os.path.dirname(__file__))
14
15 extra_files = package_files(os.path.join(here, 'WrightTools', 'datasets'))
16 extra_files.append(os.path.join(here, 'CONTRIBUTORS'))
17 extra_files.append(os.path.join(here, 'LICENSE'))
18 extra_files.append(os.path.join(here, 'README.rst'))
19 extra_files.append(os.path.join(here, 'requirements.txt'))
20 extra_files.append(os.path.join(here, 'VERSION'))
21
22 with open(os.path.join(here, 'requirements.txt')) as f:
23 required = f.read().splitlines()
24
25 with open(os.path.join(here, 'VERSION')) as version_file:
26 version = version_file.read().strip()
27
28 setup(
29 name='WrightTools',
30 packages=find_packages(),
31 package_data={'': extra_files},
32 setup_requires=['pytest-runner'],
33 tests_require=['pytest'],
34 install_requires=required,
35 extras_require={'docs': ['sphinx-gallery>=0.1.9']},
36 version=version,
37 description='Tools for loading, processing, and plotting multidimensional spectroscopy data.',
38 author='Blaise Thompson',
39 author_email='[email protected]',
40 license='MIT',
41 url='http://wright.tools',
42 keywords='spectroscopy science multidimensional visualization',
43 classifiers=['Development Status :: 5 - Production/Stable',
44 'Intended Audience :: Science/Research',
45 'License :: OSI Approved :: MIT License',
46 'Natural Language :: English',
47 'Programming Language :: Python :: 2',
48 'Programming Language :: Python :: 2.7',
49 'Programming Language :: Python :: 3',
50 'Programming Language :: Python :: 3.3',
51 'Programming Language :: Python :: 3.4',
52 'Programming Language :: Python :: 3.5',
53 'Topic :: Scientific/Engineering']
54 )
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -3,6 +3,7 @@
import os
from setuptools import setup, find_packages
+
def package_files(directory):
paths = []
for (path, directories, filenames) in os.walk(directory):
@@ -10,6 +11,7 @@
paths.append(os.path.join('..', path, filename))
return paths
+
here = os.path.abspath(os.path.dirname(__file__))
extra_files = package_files(os.path.join(here, 'WrightTools', 'datasets'))
@@ -18,6 +20,7 @@
extra_files.append(os.path.join(here, 'README.rst'))
extra_files.append(os.path.join(here, 'requirements.txt'))
extra_files.append(os.path.join(here, 'VERSION'))
+extra_files.append(os.path.join(here, 'WrightTools', 'client_secrets.json'))
with open(os.path.join(here, 'requirements.txt')) as f:
required = f.read().splitlines()
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -3,6 +3,7 @@\n import os\n from setuptools import setup, find_packages\n \n+\n def package_files(directory):\n paths = []\n for (path, directories, filenames) in os.walk(directory):\n@@ -10,6 +11,7 @@\n paths.append(os.path.join('..', path, filename))\n return paths\n \n+\n here = os.path.abspath(os.path.dirname(__file__))\n \n extra_files = package_files(os.path.join(here, 'WrightTools', 'datasets'))\n@@ -18,6 +20,7 @@\n extra_files.append(os.path.join(here, 'README.rst'))\n extra_files.append(os.path.join(here, 'requirements.txt'))\n extra_files.append(os.path.join(here, 'VERSION'))\n+extra_files.append(os.path.join(here, 'WrightTools', 'client_secrets.json'))\n \n with open(os.path.join(here, 'requirements.txt')) as f:\n required = f.read().splitlines()\n", "issue": "distribute client_secrets\n\n", "before_files": [{"content": "# !/usr/bin/env python\n\nimport os\nfrom setuptools import setup, find_packages\n\ndef package_files(directory):\n paths = []\n for (path, directories, filenames) in os.walk(directory):\n for filename in filenames:\n paths.append(os.path.join('..', path, filename))\n return paths\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nextra_files = package_files(os.path.join(here, 'WrightTools', 'datasets'))\nextra_files.append(os.path.join(here, 'CONTRIBUTORS'))\nextra_files.append(os.path.join(here, 'LICENSE'))\nextra_files.append(os.path.join(here, 'README.rst'))\nextra_files.append(os.path.join(here, 'requirements.txt'))\nextra_files.append(os.path.join(here, 'VERSION'))\n\nwith open(os.path.join(here, 'requirements.txt')) as f:\n required = f.read().splitlines()\n\nwith open(os.path.join(here, 'VERSION')) as version_file:\n version = version_file.read().strip()\n\nsetup(\n name='WrightTools',\n packages=find_packages(),\n package_data={'': extra_files},\n setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n install_requires=required,\n extras_require={'docs': ['sphinx-gallery>=0.1.9']},\n version=version,\n description='Tools for loading, processing, and plotting multidimensional spectroscopy data.',\n author='Blaise Thompson',\n author_email='[email protected]',\n license='MIT',\n url='http://wright.tools',\n keywords='spectroscopy science multidimensional visualization',\n classifiers=['Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering']\n)\n", "path": "setup.py"}], "after_files": [{"content": "# !/usr/bin/env python\n\nimport os\nfrom setuptools import setup, find_packages\n\n\ndef package_files(directory):\n paths = []\n for (path, directories, filenames) in os.walk(directory):\n for filename in filenames:\n paths.append(os.path.join('..', path, filename))\n return paths\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nextra_files = package_files(os.path.join(here, 'WrightTools', 'datasets'))\nextra_files.append(os.path.join(here, 'CONTRIBUTORS'))\nextra_files.append(os.path.join(here, 'LICENSE'))\nextra_files.append(os.path.join(here, 'README.rst'))\nextra_files.append(os.path.join(here, 'requirements.txt'))\nextra_files.append(os.path.join(here, 
'VERSION'))\nextra_files.append(os.path.join(here, 'WrightTools', 'client_secrets.json'))\n\nwith open(os.path.join(here, 'requirements.txt')) as f:\n required = f.read().splitlines()\n\nwith open(os.path.join(here, 'VERSION')) as version_file:\n version = version_file.read().strip()\n\nsetup(\n name='WrightTools',\n packages=find_packages(),\n package_data={'': extra_files},\n setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n install_requires=required,\n extras_require={'docs': ['sphinx-gallery>=0.1.9']},\n version=version,\n description='Tools for loading, processing, and plotting multidimensional spectroscopy data.',\n author='Blaise Thompson',\n author_email='[email protected]',\n license='MIT',\n url='http://wright.tools',\n keywords='spectroscopy science multidimensional visualization',\n classifiers=['Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering']\n)\n", "path": "setup.py"}]} | 818 | 223 |
gh_patches_debug_24880 | rasdani/github-patches | git_diff | nextcloud__appstore-441 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
At least a single resource for support or contact should be required
This is simply asking us for trouble. No repo. No website. Nothing. The app was provided only over an archive on Google Drive (https://github.com/nextcloud/app-certificate-requests/pull/52) ... this is not meant to offend developers, Google Drive or anything 😅 But - you know what I mean, I hope 😢
<img width="919" alt="bildschirmfoto 2017-03-01 um 11 44 38" src="https://cloud.githubusercontent.com/assets/19711361/23456641/8c1a9b1c-fe74-11e6-9894-e0fec0f622e7.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nextcloudappstore/core/scaffolding/forms.py`
Content:
```
1 import re
2 from os import listdir
3
4 from django.core.exceptions import ValidationError
5 from django.forms import Textarea, Form, URLField, MultipleChoiceField
6 from django.utils.translation import ugettext_lazy as _ # type: ignore
7 from django.forms.fields import EmailField, CharField, ChoiceField
8 from django.conf import settings
9
10 from nextcloudappstore.core.facades import resolve_file_relative_path
11 from nextcloudappstore.core.models import Category
12 from django.utils.functional import lazy
13
14
15 def get_categories():
16 return [(cat.id, cat.name) for cat in Category.objects.all()]
17
18
19 def get_versions():
20 tpls = listdir(resolve_file_relative_path(__file__, 'app-templates'))
21 return [(v, v) for v in tpls]
22
23
24 def validate_id(input: str) -> str:
25 regex = r'^([A-Z][a-z]*)+$'
26 if not re.match(regex, input):
27 raise ValidationError(_('The app name must be camel case e.g. MyApp'))
28
29
30 class AppScaffoldingForm(Form):
31 name = CharField(max_length=80, label=_('App name'),
32 validators=[validate_id],
33 help_text=_('The app name must be camel case e.g. MyApp'))
34 platform = ChoiceField(choices=lazy(get_versions, list), required=True,
35 label=_('Nextcloud version'))
36 author_name = CharField(max_length=80, label=_('Author\'s full name'))
37 author_email = EmailField(label=_('Author\'s e-mail'))
38 author_homepage = URLField(label=_('Author\'s homepage'), required=False)
39 categories = MultipleChoiceField(required=True, label=_('Categories'),
40 choices=lazy(get_categories, list),
41 help_text=_('Hold down CTRL and click to '
42 'select multiple entries'))
43 summary = CharField(max_length=256, label=_('Summary'), help_text=_(
44 'Short description of your app that will be rendered as short teaser'))
45 description = CharField(widget=Textarea, label=_('Description'),
46 help_text=_('Full description of what your app '
47 'does. Can contain Markdown.'))
48
```
Path: `nextcloudappstore/core/scaffolding/archive.py`
Content:
```
1 import re
2 import tarfile
3 from io import BytesIO, StringIO
4 from typing import Dict
5 from os.path import join, isdir, relpath
6 from os import walk
7
8 from django.template import Context
9 from django.template import Template
10
11 from nextcloudappstore.core.facades import resolve_file_relative_path
12 from nextcloudappstore.settings.base import APP_SCAFFOLDING_PROFILES
13
14
15 def build_files(args: Dict[str, str]) -> Dict[str, str]:
16 platform = int(args['platform']) # prevent path traversal
17 vars = {
18 'id': args['name'].lower(),
19 'summary': args['summary'],
20 'description': args['description'],
21 'name': ' '.join(re.findall(r'[A-Z][^A-Z]*', args['name'])),
22 'namespace': args['name'],
23 'author_name': args['author_name'],
24 'author_mail': args['author_email'],
25 'author_homepage': args['author_homepage'],
26 'categories': args['categories'],
27 'nextcloud_version': platform
28 }
29 vars.update(APP_SCAFFOLDING_PROFILES.get(platform, {}))
30 relative_base = 'app-templates/%i/app/' % platform
31 base = resolve_file_relative_path(__file__, relative_base)
32
33 context = Context({'app': vars})
34 result = {}
35 if isdir(base):
36 for root, dirs, files in walk(base):
37 for file in files:
38 file_path = join(root, file)
39 rel_file_path = '%s/%s' % (
40 vars['id'], relpath(file_path, base)
41 )
42 with open(file_path) as f:
43 t = Template(f.read())
44 result[rel_file_path] = t.render(context)
45
46 return result
47
48
49 def build_archive(parameters: Dict[str, str]) -> BytesIO:
50 buffer = BytesIO()
51 with tarfile.open(fileobj=buffer, mode='w:gz') as f:
52 files = build_files(parameters)
53 for path, contents in files.items():
54 info = tarfile.TarInfo(path)
55 info.size = len(contents)
56 f.addfile(info, BytesIO(contents.encode()))
57 buffer.seek(0)
58 return buffer
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nextcloudappstore/core/scaffolding/archive.py b/nextcloudappstore/core/scaffolding/archive.py
--- a/nextcloudappstore/core/scaffolding/archive.py
+++ b/nextcloudappstore/core/scaffolding/archive.py
@@ -23,6 +23,7 @@
'author_name': args['author_name'],
'author_mail': args['author_email'],
'author_homepage': args['author_homepage'],
+ 'issue_tracker': args['issue_tracker'],
'categories': args['categories'],
'nextcloud_version': platform
}
diff --git a/nextcloudappstore/core/scaffolding/forms.py b/nextcloudappstore/core/scaffolding/forms.py
--- a/nextcloudappstore/core/scaffolding/forms.py
+++ b/nextcloudappstore/core/scaffolding/forms.py
@@ -36,6 +36,8 @@
author_name = CharField(max_length=80, label=_('Author\'s full name'))
author_email = EmailField(label=_('Author\'s e-mail'))
author_homepage = URLField(label=_('Author\'s homepage'), required=False)
+ issue_tracker = URLField(label=_('Issue tracker URL'), required=True,
+ help_text=_('Bug reports and feature requests'))
categories = MultipleChoiceField(required=True, label=_('Categories'),
choices=lazy(get_categories, list),
help_text=_('Hold down CTRL and click to '
| {"golden_diff": "diff --git a/nextcloudappstore/core/scaffolding/archive.py b/nextcloudappstore/core/scaffolding/archive.py\n--- a/nextcloudappstore/core/scaffolding/archive.py\n+++ b/nextcloudappstore/core/scaffolding/archive.py\n@@ -23,6 +23,7 @@\n 'author_name': args['author_name'],\n 'author_mail': args['author_email'],\n 'author_homepage': args['author_homepage'],\n+ 'issue_tracker': args['issue_tracker'],\n 'categories': args['categories'],\n 'nextcloud_version': platform\n }\ndiff --git a/nextcloudappstore/core/scaffolding/forms.py b/nextcloudappstore/core/scaffolding/forms.py\n--- a/nextcloudappstore/core/scaffolding/forms.py\n+++ b/nextcloudappstore/core/scaffolding/forms.py\n@@ -36,6 +36,8 @@\n author_name = CharField(max_length=80, label=_('Author\\'s full name'))\n author_email = EmailField(label=_('Author\\'s e-mail'))\n author_homepage = URLField(label=_('Author\\'s homepage'), required=False)\n+ issue_tracker = URLField(label=_('Issue tracker URL'), required=True,\n+ help_text=_('Bug reports and feature requests'))\n categories = MultipleChoiceField(required=True, label=_('Categories'),\n choices=lazy(get_categories, list),\n help_text=_('Hold down CTRL and click to '\n", "issue": "At least a single resource for support or contact should be required\nThis is simply asking us for trouble. No repo. No website. Nothing. The app was provided only over an archive on Google Drive (https://github.com/nextcloud/app-certificate-requests/pull/52) ... this is not offending developers, google drive or anything \ud83d\ude05 But - you know what I mean I hope \ud83d\ude22 \r\n\r\n<img width=\"919\" alt=\"bildschirmfoto 2017-03-01 um 11 44 38\" src=\"https://cloud.githubusercontent.com/assets/19711361/23456641/8c1a9b1c-fe74-11e6-9894-e0fec0f622e7.png\">\nAt least a single resource for support or contact should be required\nThis is simply asking us for trouble. No repo. No website. Nothing. The app was provided only over an archive on Google Drive (https://github.com/nextcloud/app-certificate-requests/pull/52) ... this is not offending developers, google drive or anything \ud83d\ude05 But - you know what I mean I hope \ud83d\ude22 \r\n\r\n<img width=\"919\" alt=\"bildschirmfoto 2017-03-01 um 11 44 38\" src=\"https://cloud.githubusercontent.com/assets/19711361/23456641/8c1a9b1c-fe74-11e6-9894-e0fec0f622e7.png\">\n", "before_files": [{"content": "import re\nfrom os import listdir\n\nfrom django.core.exceptions import ValidationError\nfrom django.forms import Textarea, Form, URLField, MultipleChoiceField\nfrom django.utils.translation import ugettext_lazy as _ # type: ignore\nfrom django.forms.fields import EmailField, CharField, ChoiceField\nfrom django.conf import settings\n\nfrom nextcloudappstore.core.facades import resolve_file_relative_path\nfrom nextcloudappstore.core.models import Category\nfrom django.utils.functional import lazy\n\n\ndef get_categories():\n return [(cat.id, cat.name) for cat in Category.objects.all()]\n\n\ndef get_versions():\n tpls = listdir(resolve_file_relative_path(__file__, 'app-templates'))\n return [(v, v) for v in tpls]\n\n\ndef validate_id(input: str) -> str:\n regex = r'^([A-Z][a-z]*)+$'\n if not re.match(regex, input):\n raise ValidationError(_('The app name must be camel case e.g. MyApp'))\n\n\nclass AppScaffoldingForm(Form):\n name = CharField(max_length=80, label=_('App name'),\n validators=[validate_id],\n help_text=_('The app name must be camel case e.g. 
MyApp'))\n platform = ChoiceField(choices=lazy(get_versions, list), required=True,\n label=_('Nextcloud version'))\n author_name = CharField(max_length=80, label=_('Author\\'s full name'))\n author_email = EmailField(label=_('Author\\'s e-mail'))\n author_homepage = URLField(label=_('Author\\'s homepage'), required=False)\n categories = MultipleChoiceField(required=True, label=_('Categories'),\n choices=lazy(get_categories, list),\n help_text=_('Hold down CTRL and click to '\n 'select multiple entries'))\n summary = CharField(max_length=256, label=_('Summary'), help_text=_(\n 'Short description of your app that will be rendered as short teaser'))\n description = CharField(widget=Textarea, label=_('Description'),\n help_text=_('Full description of what your app '\n 'does. Can contain Markdown.'))\n", "path": "nextcloudappstore/core/scaffolding/forms.py"}, {"content": "import re\nimport tarfile\nfrom io import BytesIO, StringIO\nfrom typing import Dict\nfrom os.path import join, isdir, relpath\nfrom os import walk\n\nfrom django.template import Context\nfrom django.template import Template\n\nfrom nextcloudappstore.core.facades import resolve_file_relative_path\nfrom nextcloudappstore.settings.base import APP_SCAFFOLDING_PROFILES\n\n\ndef build_files(args: Dict[str, str]) -> Dict[str, str]:\n platform = int(args['platform']) # prevent path traversal\n vars = {\n 'id': args['name'].lower(),\n 'summary': args['summary'],\n 'description': args['description'],\n 'name': ' '.join(re.findall(r'[A-Z][^A-Z]*', args['name'])),\n 'namespace': args['name'],\n 'author_name': args['author_name'],\n 'author_mail': args['author_email'],\n 'author_homepage': args['author_homepage'],\n 'categories': args['categories'],\n 'nextcloud_version': platform\n }\n vars.update(APP_SCAFFOLDING_PROFILES.get(platform, {}))\n relative_base = 'app-templates/%i/app/' % platform\n base = resolve_file_relative_path(__file__, relative_base)\n\n context = Context({'app': vars})\n result = {}\n if isdir(base):\n for root, dirs, files in walk(base):\n for file in files:\n file_path = join(root, file)\n rel_file_path = '%s/%s' % (\n vars['id'], relpath(file_path, base)\n )\n with open(file_path) as f:\n t = Template(f.read())\n result[rel_file_path] = t.render(context)\n\n return result\n\n\ndef build_archive(parameters: Dict[str, str]) -> BytesIO:\n buffer = BytesIO()\n with tarfile.open(fileobj=buffer, mode='w:gz') as f:\n files = build_files(parameters)\n for path, contents in files.items():\n info = tarfile.TarInfo(path)\n info.size = len(contents)\n f.addfile(info, BytesIO(contents.encode()))\n buffer.seek(0)\n return buffer\n", "path": "nextcloudappstore/core/scaffolding/archive.py"}], "after_files": [{"content": "import re\nfrom os import listdir\n\nfrom django.core.exceptions import ValidationError\nfrom django.forms import Textarea, Form, URLField, MultipleChoiceField\nfrom django.utils.translation import ugettext_lazy as _ # type: ignore\nfrom django.forms.fields import EmailField, CharField, ChoiceField\nfrom django.conf import settings\n\nfrom nextcloudappstore.core.facades import resolve_file_relative_path\nfrom nextcloudappstore.core.models import Category\nfrom django.utils.functional import lazy\n\n\ndef get_categories():\n return [(cat.id, cat.name) for cat in Category.objects.all()]\n\n\ndef get_versions():\n tpls = listdir(resolve_file_relative_path(__file__, 'app-templates'))\n return [(v, v) for v in tpls]\n\n\ndef validate_id(input: str) -> str:\n regex = r'^([A-Z][a-z]*)+$'\n if not re.match(regex, 
input):\n raise ValidationError(_('The app name must be camel case e.g. MyApp'))\n\n\nclass AppScaffoldingForm(Form):\n name = CharField(max_length=80, label=_('App name'),\n validators=[validate_id],\n help_text=_('The app name must be camel case e.g. MyApp'))\n platform = ChoiceField(choices=lazy(get_versions, list), required=True,\n label=_('Nextcloud version'))\n author_name = CharField(max_length=80, label=_('Author\\'s full name'))\n author_email = EmailField(label=_('Author\\'s e-mail'))\n author_homepage = URLField(label=_('Author\\'s homepage'), required=False)\n issue_tracker = URLField(label=_('Issue tracker URL'), required=True,\n help_text=_('Bug reports and feature requests'))\n categories = MultipleChoiceField(required=True, label=_('Categories'),\n choices=lazy(get_categories, list),\n help_text=_('Hold down CTRL and click to '\n 'select multiple entries'))\n summary = CharField(max_length=256, label=_('Summary'), help_text=_(\n 'Short description of your app that will be rendered as short teaser'))\n description = CharField(widget=Textarea, label=_('Description'),\n help_text=_('Full description of what your app '\n 'does. Can contain Markdown.'))\n", "path": "nextcloudappstore/core/scaffolding/forms.py"}, {"content": "import re\nimport tarfile\nfrom io import BytesIO, StringIO\nfrom typing import Dict\nfrom os.path import join, isdir, relpath\nfrom os import walk\n\nfrom django.template import Context\nfrom django.template import Template\n\nfrom nextcloudappstore.core.facades import resolve_file_relative_path\nfrom nextcloudappstore.settings.base import APP_SCAFFOLDING_PROFILES\n\n\ndef build_files(args: Dict[str, str]) -> Dict[str, str]:\n platform = int(args['platform']) # prevent path traversal\n vars = {\n 'id': args['name'].lower(),\n 'summary': args['summary'],\n 'description': args['description'],\n 'name': ' '.join(re.findall(r'[A-Z][^A-Z]*', args['name'])),\n 'namespace': args['name'],\n 'author_name': args['author_name'],\n 'author_mail': args['author_email'],\n 'author_homepage': args['author_homepage'],\n 'issue_tracker': args['issue_tracker'],\n 'categories': args['categories'],\n 'nextcloud_version': platform\n }\n vars.update(APP_SCAFFOLDING_PROFILES.get(platform, {}))\n relative_base = 'app-templates/%i/app/' % platform\n base = resolve_file_relative_path(__file__, relative_base)\n\n context = Context({'app': vars})\n result = {}\n if isdir(base):\n for root, dirs, files in walk(base):\n for file in files:\n file_path = join(root, file)\n rel_file_path = '%s/%s' % (\n vars['id'], relpath(file_path, base)\n )\n with open(file_path) as f:\n t = Template(f.read())\n result[rel_file_path] = t.render(context)\n\n return result\n\n\ndef build_archive(parameters: Dict[str, str]) -> BytesIO:\n buffer = BytesIO()\n with tarfile.open(fileobj=buffer, mode='w:gz') as f:\n files = build_files(parameters)\n for path, contents in files.items():\n info = tarfile.TarInfo(path)\n info.size = len(contents)\n f.addfile(info, BytesIO(contents.encode()))\n buffer.seek(0)\n return buffer\n", "path": "nextcloudappstore/core/scaffolding/archive.py"}]} | 1,749 | 308 |
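The record above ends with a fix that adds a required `issue_tracker` URL to the app-scaffolding form and threads its value into the generated archive's template context, so an app can no longer be scaffolded without at least one support resource. The sketch below is an illustrative reduction of that idea, assuming only that Django is installed; the form class name, the extra `name` field, and the URLs are placeholders rather than the app store's actual code.

```python
# Minimal sketch of a required issue-tracker field (assumes Django is
# installed); settings are configured inline so the snippet runs outside a
# full project. Everything except the issue_tracker field is a placeholder.
import django
from django.conf import settings

settings.configure()
django.setup()

from django import forms


class AppMetadataForm(forms.Form):
    name = forms.CharField(max_length=80)
    issue_tracker = forms.URLField(
        required=True, help_text="Bug reports and feature requests")


missing = AppMetadataForm(data={"name": "MyApp"})
print(missing.is_valid())                # False: no support resource supplied
print(missing.errors["issue_tracker"])   # reports that the field is required

complete = AppMetadataForm(data={
    "name": "MyApp",
    "issue_tracker": "https://example.com/myapp/issues",
})
print(complete.is_valid())                     # True
print(complete.cleaned_data["issue_tracker"])  # value handed on to templates
```

Marking the field `required=True` means a submission without a support link fails validation before any archive is generated, which is the behaviour the issue asks for.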
gh_patches_debug_18788 | rasdani/github-patches | git_diff | easybuilders__easybuild-framework-4377 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OneAPI ifort: '-fiopenmp' not supported
I am using `intel-2022b` toolchain with the traditional `ifort` compiler. EasyBuild is adding the `-fiopenmp` flag which, according to [here](https://www.intel.com/content/www/us/en/docs/fortran-compiler/developer-guide-reference/2023-2/fiopenmp-qiopenmp.html) is only supported for `ifx` but not `ifort`. From discussions on Slack it appears that the problem is in `intel_compilers.py` and @bartoldeman suggested this workaround in `toolchainopts`:
```
toolchainopts = {'openmp': False, 'extra_fflags': '-fopenmp' }
```
which seems to do the trick. Maybe that could be fixed for the upcoming 4.9 release?
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/toolchains/compiler/intel_compilers.py`
Content:
```
1 ##
2 # Copyright 2021-2023 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 Support for Intel compilers (icc, ifort) as toolchain compilers, version 2021.x and newer (oneAPI).
27
28 Authors:
29
30 * Kenneth Hoste (Ghent University)
31 """
32 import os
33
34 import easybuild.tools.systemtools as systemtools
35 from easybuild.toolchains.compiler.inteliccifort import IntelIccIfort
36 from easybuild.tools import LooseVersion
37 from easybuild.tools.toolchain.compiler import Compiler
38
39
40 class IntelCompilers(IntelIccIfort):
41 """
42 Compiler class for Intel oneAPI compilers
43 """
44
45 COMPILER_MODULE_NAME = ['intel-compilers']
46 COMPILER_UNIQUE_OPTS = dict(IntelIccIfort.COMPILER_UNIQUE_OPTS)
47 COMPILER_UNIQUE_OPTS.update({
48 'oneapi': (None, "Use oneAPI compilers icx/icpx/ifx instead of classic compilers"),
49 'oneapi_c_cxx': (None, "Use oneAPI C/C++ compilers icx/icpx instead of classic Intel C/C++ compilers "
50 "(auto-enabled for Intel compilers version 2022.2.0, or newer)"),
51 'oneapi_fortran': (False, "Use oneAPI Fortran compiler ifx instead of classic Intel Fortran compiler"),
52 })
53
54 def _set_compiler_vars(self):
55 """Intel compilers-specific adjustments after setting compiler variables."""
56
57 # skip IntelIccIfort._set_compiler_vars (no longer relevant for recent versions)
58 Compiler._set_compiler_vars(self)
59
60 root = self.get_software_root(self.COMPILER_MODULE_NAME)[0]
61 version = self.get_software_version(self.COMPILER_MODULE_NAME)[0]
62
63 libbase = os.path.join('compiler', version, 'linux')
64 libpaths = [
65 os.path.join(libbase, 'compiler', 'lib', 'intel64'),
66 ]
67
68 self.variables.append_subdirs("LDFLAGS", root, subdirs=libpaths)
69
70 def set_variables(self):
71 """Set the variables."""
72
73 oneapi = False
74
75 # auto-enable use of oneAPI C/C++ compilers for sufficiently recent versions of Intel compilers
76 comp_ver = self.get_software_version(self.COMPILER_MODULE_NAME)[0]
77 if LooseVersion(comp_ver) >= LooseVersion('2022.2.0'):
78 if self.options.get('oneapi_c_cxx', None) is None:
79 self.options['oneapi_c_cxx'] = True
80
81 oneapi_tcopt = self.options.get('oneapi')
82 if oneapi_tcopt:
83 oneapi = True
84 self.COMPILER_CXX = 'icpx'
85 self.COMPILER_CC = 'icx'
86 self.COMPILER_F77 = 'ifx'
87 self.COMPILER_F90 = 'ifx'
88 self.COMPILER_FC = 'ifx'
89
90 # if both 'oneapi' and 'oneapi_*' are set, the latter are ignored
91 elif oneapi_tcopt is None:
92 if self.options.get('oneapi_c_cxx', False):
93 oneapi = True
94 self.COMPILER_CC = 'icx'
95 self.COMPILER_CXX = 'icpx'
96
97 if self.options.get('oneapi_fortran', False):
98 oneapi = True
99 self.COMPILER_F77 = 'ifx'
100 self.COMPILER_F90 = 'ifx'
101 self.COMPILER_FC = 'ifx'
102
103 if oneapi:
104 # fp-model source is not supported by icx but is equivalent to precise
105 self.options.options_map['defaultprec'] = ['fp-speculation=safe', 'fp-model precise']
106 if LooseVersion(comp_ver) >= LooseVersion('2022'):
107 self.options.options_map['defaultprec'].insert(0, 'ftz')
108 # icx doesn't like -fp-model fast=1; fp-model fast is equivalent
109 self.options.options_map['loose'] = ['fp-model fast']
110 # fp-model fast=2 gives "warning: overriding '-ffp-model=fast=2' option with '-ffp-model=fast'"
111 self.options.options_map['veryloose'] = ['fp-model fast']
112 # recommended in porting guide
113 self.options.options_map['openmp'] = ['fiopenmp']
114
115 # -xSSE2 is not supported by Intel oneAPI compilers,
116 # so use -march=x86-64 -mtune=generic when using optarch=GENERIC
117 self.COMPILER_GENERIC_OPTION = {
118 (systemtools.X86_64, systemtools.AMD): 'march=x86-64 -mtune=generic',
119 (systemtools.X86_64, systemtools.INTEL): 'march=x86-64 -mtune=generic',
120 }
121
122 # skip IntelIccIfort.set_variables (no longer relevant for recent versions)
123 Compiler.set_variables(self)
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/easybuild/toolchains/compiler/intel_compilers.py b/easybuild/toolchains/compiler/intel_compilers.py
--- a/easybuild/toolchains/compiler/intel_compilers.py
+++ b/easybuild/toolchains/compiler/intel_compilers.py
@@ -109,8 +109,9 @@
self.options.options_map['loose'] = ['fp-model fast']
# fp-model fast=2 gives "warning: overriding '-ffp-model=fast=2' option with '-ffp-model=fast'"
self.options.options_map['veryloose'] = ['fp-model fast']
- # recommended in porting guide
- self.options.options_map['openmp'] = ['fiopenmp']
+ # recommended in porting guide: qopenmp, unlike fiopenmp, works for both classic and oneapi compilers
+ # https://www.intel.com/content/www/us/en/developer/articles/guide/porting-guide-for-ifort-to-ifx.html
+ self.options.options_map['openmp'] = ['qopenmp']
# -xSSE2 is not supported by Intel oneAPI compilers,
# so use -march=x86-64 -mtune=generic when using optarch=GENERIC
| {"golden_diff": "diff --git a/easybuild/toolchains/compiler/intel_compilers.py b/easybuild/toolchains/compiler/intel_compilers.py\n--- a/easybuild/toolchains/compiler/intel_compilers.py\n+++ b/easybuild/toolchains/compiler/intel_compilers.py\n@@ -109,8 +109,9 @@\n self.options.options_map['loose'] = ['fp-model fast']\n # fp-model fast=2 gives \"warning: overriding '-ffp-model=fast=2' option with '-ffp-model=fast'\"\n self.options.options_map['veryloose'] = ['fp-model fast']\n- # recommended in porting guide\n- self.options.options_map['openmp'] = ['fiopenmp']\n+ # recommended in porting guide: qopenmp, unlike fiopenmp, works for both classic and oneapi compilers\n+ # https://www.intel.com/content/www/us/en/developer/articles/guide/porting-guide-for-ifort-to-ifx.html\n+ self.options.options_map['openmp'] = ['qopenmp']\n \n # -xSSE2 is not supported by Intel oneAPI compilers,\n # so use -march=x86-64 -mtune=generic when using optarch=GENERIC\n", "issue": "OneAPI ifort: '-fiopenmp' not supported\nI am using `intel-2022b` toolchain with the traditional `ifort` compiler. EasyBuild is adding the `-fiopenmp` flag which, according to [here](https://www.intel.com/content/www/us/en/docs/fortran-compiler/developer-guide-reference/2023-2/fiopenmp-qiopenmp.html) is only supported for `ifx` but not `ifort`. From discussions on Slack it appears that the problem is in `intel_compilers.py` and @bartoldeman suggested this workaround in `toolchainopts`:\r\n```\r\ntoolchainopts = {'openmp': False, 'extra_fflags': '-fopenmp' }\r\n```\r\nwhich seems to do the trick. Maybe that could be fixed for the upcoming 4.9 release?\r\nThanks.\n", "before_files": [{"content": "##\n# Copyright 2021-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nSupport for Intel compilers (icc, ifort) as toolchain compilers, version 2021.x and newer (oneAPI).\n\nAuthors:\n\n* Kenneth Hoste (Ghent University)\n\"\"\"\nimport os\n\nimport easybuild.tools.systemtools as systemtools\nfrom easybuild.toolchains.compiler.inteliccifort import IntelIccIfort\nfrom easybuild.tools import LooseVersion\nfrom easybuild.tools.toolchain.compiler import Compiler\n\n\nclass IntelCompilers(IntelIccIfort):\n \"\"\"\n Compiler class for Intel oneAPI compilers\n \"\"\"\n\n COMPILER_MODULE_NAME = ['intel-compilers']\n COMPILER_UNIQUE_OPTS = dict(IntelIccIfort.COMPILER_UNIQUE_OPTS)\n COMPILER_UNIQUE_OPTS.update({\n 'oneapi': (None, \"Use oneAPI compilers icx/icpx/ifx instead of classic compilers\"),\n 'oneapi_c_cxx': (None, \"Use oneAPI C/C++ compilers icx/icpx instead of classic Intel C/C++ compilers \"\n \"(auto-enabled for Intel compilers version 2022.2.0, or newer)\"),\n 'oneapi_fortran': (False, \"Use oneAPI Fortran compiler ifx instead of classic Intel Fortran compiler\"),\n })\n\n def _set_compiler_vars(self):\n \"\"\"Intel compilers-specific adjustments after setting compiler variables.\"\"\"\n\n # skip IntelIccIfort._set_compiler_vars (no longer relevant for recent versions)\n Compiler._set_compiler_vars(self)\n\n root = self.get_software_root(self.COMPILER_MODULE_NAME)[0]\n version = self.get_software_version(self.COMPILER_MODULE_NAME)[0]\n\n libbase = os.path.join('compiler', version, 'linux')\n libpaths = [\n os.path.join(libbase, 'compiler', 'lib', 'intel64'),\n ]\n\n self.variables.append_subdirs(\"LDFLAGS\", root, subdirs=libpaths)\n\n def set_variables(self):\n \"\"\"Set the variables.\"\"\"\n\n oneapi = False\n\n # auto-enable use of oneAPI C/C++ compilers for sufficiently recent versions of Intel compilers\n comp_ver = self.get_software_version(self.COMPILER_MODULE_NAME)[0]\n if LooseVersion(comp_ver) >= LooseVersion('2022.2.0'):\n if self.options.get('oneapi_c_cxx', None) is None:\n self.options['oneapi_c_cxx'] = True\n\n oneapi_tcopt = self.options.get('oneapi')\n if oneapi_tcopt:\n oneapi = True\n self.COMPILER_CXX = 'icpx'\n self.COMPILER_CC = 'icx'\n self.COMPILER_F77 = 'ifx'\n self.COMPILER_F90 = 'ifx'\n self.COMPILER_FC = 'ifx'\n\n # if both 'oneapi' and 'oneapi_*' are set, the latter are ignored\n elif oneapi_tcopt is None:\n if self.options.get('oneapi_c_cxx', False):\n oneapi = True\n self.COMPILER_CC = 'icx'\n self.COMPILER_CXX = 'icpx'\n\n if self.options.get('oneapi_fortran', False):\n oneapi = True\n self.COMPILER_F77 = 'ifx'\n self.COMPILER_F90 = 'ifx'\n self.COMPILER_FC = 'ifx'\n\n if oneapi:\n # fp-model source is not supported by icx but is equivalent to precise\n self.options.options_map['defaultprec'] = ['fp-speculation=safe', 'fp-model precise']\n if LooseVersion(comp_ver) >= LooseVersion('2022'):\n self.options.options_map['defaultprec'].insert(0, 'ftz')\n # icx doesn't like -fp-model fast=1; fp-model fast is equivalent\n self.options.options_map['loose'] = ['fp-model fast']\n # fp-model fast=2 gives \"warning: overriding '-ffp-model=fast=2' option with '-ffp-model=fast'\"\n self.options.options_map['veryloose'] = ['fp-model fast']\n # recommended in porting guide\n self.options.options_map['openmp'] = ['fiopenmp']\n\n # -xSSE2 is not supported by Intel oneAPI compilers,\n # so use -march=x86-64 -mtune=generic when using optarch=GENERIC\n self.COMPILER_GENERIC_OPTION = {\n (systemtools.X86_64, systemtools.AMD): 'march=x86-64 -mtune=generic',\n (systemtools.X86_64, 
systemtools.INTEL): 'march=x86-64 -mtune=generic',\n }\n\n # skip IntelIccIfort.set_variables (no longer relevant for recent versions)\n Compiler.set_variables(self)\n", "path": "easybuild/toolchains/compiler/intel_compilers.py"}], "after_files": [{"content": "##\n# Copyright 2021-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nSupport for Intel compilers (icc, ifort) as toolchain compilers, version 2021.x and newer (oneAPI).\n\nAuthors:\n\n* Kenneth Hoste (Ghent University)\n\"\"\"\nimport os\n\nimport easybuild.tools.systemtools as systemtools\nfrom easybuild.toolchains.compiler.inteliccifort import IntelIccIfort\nfrom easybuild.tools import LooseVersion\nfrom easybuild.tools.toolchain.compiler import Compiler\n\n\nclass IntelCompilers(IntelIccIfort):\n \"\"\"\n Compiler class for Intel oneAPI compilers\n \"\"\"\n\n COMPILER_MODULE_NAME = ['intel-compilers']\n COMPILER_UNIQUE_OPTS = dict(IntelIccIfort.COMPILER_UNIQUE_OPTS)\n COMPILER_UNIQUE_OPTS.update({\n 'oneapi': (None, \"Use oneAPI compilers icx/icpx/ifx instead of classic compilers\"),\n 'oneapi_c_cxx': (None, \"Use oneAPI C/C++ compilers icx/icpx instead of classic Intel C/C++ compilers \"\n \"(auto-enabled for Intel compilers version 2022.2.0, or newer)\"),\n 'oneapi_fortran': (False, \"Use oneAPI Fortran compiler ifx instead of classic Intel Fortran compiler\"),\n })\n\n def _set_compiler_vars(self):\n \"\"\"Intel compilers-specific adjustments after setting compiler variables.\"\"\"\n\n # skip IntelIccIfort._set_compiler_vars (no longer relevant for recent versions)\n Compiler._set_compiler_vars(self)\n\n root = self.get_software_root(self.COMPILER_MODULE_NAME)[0]\n version = self.get_software_version(self.COMPILER_MODULE_NAME)[0]\n\n libbase = os.path.join('compiler', version, 'linux')\n libpaths = [\n os.path.join(libbase, 'compiler', 'lib', 'intel64'),\n ]\n\n self.variables.append_subdirs(\"LDFLAGS\", root, subdirs=libpaths)\n\n def set_variables(self):\n \"\"\"Set the variables.\"\"\"\n\n oneapi = False\n\n # auto-enable use of oneAPI C/C++ compilers for sufficiently recent versions of Intel compilers\n comp_ver = self.get_software_version(self.COMPILER_MODULE_NAME)[0]\n if LooseVersion(comp_ver) >= LooseVersion('2022.2.0'):\n if self.options.get('oneapi_c_cxx', None) is None:\n self.options['oneapi_c_cxx'] = True\n\n oneapi_tcopt = self.options.get('oneapi')\n if oneapi_tcopt:\n oneapi = True\n self.COMPILER_CXX = 'icpx'\n self.COMPILER_CC = 'icx'\n self.COMPILER_F77 = 'ifx'\n self.COMPILER_F90 = 'ifx'\n self.COMPILER_FC = 'ifx'\n\n 
# if both 'oneapi' and 'oneapi_*' are set, the latter are ignored\n elif oneapi_tcopt is None:\n if self.options.get('oneapi_c_cxx', False):\n oneapi = True\n self.COMPILER_CC = 'icx'\n self.COMPILER_CXX = 'icpx'\n\n if self.options.get('oneapi_fortran', False):\n oneapi = True\n self.COMPILER_F77 = 'ifx'\n self.COMPILER_F90 = 'ifx'\n self.COMPILER_FC = 'ifx'\n\n if oneapi:\n # fp-model source is not supported by icx but is equivalent to precise\n self.options.options_map['defaultprec'] = ['fp-speculation=safe', 'fp-model precise']\n if LooseVersion(comp_ver) >= LooseVersion('2022'):\n self.options.options_map['defaultprec'].insert(0, 'ftz')\n # icx doesn't like -fp-model fast=1; fp-model fast is equivalent\n self.options.options_map['loose'] = ['fp-model fast']\n # fp-model fast=2 gives \"warning: overriding '-ffp-model=fast=2' option with '-ffp-model=fast'\"\n self.options.options_map['veryloose'] = ['fp-model fast']\n # recommended in porting guide: qopenmp, unlike fiopenmp, works for both classic and oneapi compilers\n # https://www.intel.com/content/www/us/en/developer/articles/guide/porting-guide-for-ifort-to-ifx.html\n self.options.options_map['openmp'] = ['qopenmp']\n\n # -xSSE2 is not supported by Intel oneAPI compilers,\n # so use -march=x86-64 -mtune=generic when using optarch=GENERIC\n self.COMPILER_GENERIC_OPTION = {\n (systemtools.X86_64, systemtools.AMD): 'march=x86-64 -mtune=generic',\n (systemtools.X86_64, systemtools.INTEL): 'march=x86-64 -mtune=generic',\n }\n\n # skip IntelIccIfort.set_variables (no longer relevant for recent versions)\n Compiler.set_variables(self)\n", "path": "easybuild/toolchains/compiler/intel_compilers.py"}]} | 2,034 | 271 |
gh_patches_debug_2248 | rasdani/github-patches | git_diff | googleapis__python-bigquery-942 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
chore: update system tests and samples to use and `@google.com` email address
Re: https://github.com/googleapis/python-bigquery/pull/935#issuecomment-911791623
It may be some time before we can reconfigure our test project(s) to allow permissions for non-google.com folks.
I propose we use `[email protected]`, which is intended for interacting with the public and most Googlers who support this client are members of but don't have email enabled, so it should be less of a spam risk.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `samples/update_dataset_access.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 def update_dataset_access(dataset_id):
17
18 # [START bigquery_update_dataset_access]
19 from google.cloud import bigquery
20
21 # Construct a BigQuery client object.
22 client = bigquery.Client()
23
24 # TODO(developer): Set dataset_id to the ID of the dataset to fetch.
25 # dataset_id = 'your-project.your_dataset'
26
27 dataset = client.get_dataset(dataset_id) # Make an API request.
28
29 entry = bigquery.AccessEntry(
30 role="READER",
31 entity_type="userByEmail",
32 entity_id="[email protected]",
33 )
34
35 entries = list(dataset.access_entries)
36 entries.append(entry)
37 dataset.access_entries = entries
38
39 dataset = client.update_dataset(dataset, ["access_entries"]) # Make an API request.
40
41 full_dataset_id = "{}.{}".format(dataset.project, dataset.dataset_id)
42 print(
43 "Updated dataset '{}' with modified user permissions.".format(full_dataset_id)
44 )
45 # [END bigquery_update_dataset_access]
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/samples/update_dataset_access.py b/samples/update_dataset_access.py
--- a/samples/update_dataset_access.py
+++ b/samples/update_dataset_access.py
@@ -28,8 +28,8 @@
entry = bigquery.AccessEntry(
role="READER",
- entity_type="userByEmail",
- entity_id="[email protected]",
+ entity_type="groupByEmail",
+ entity_id="[email protected]",
)
entries = list(dataset.access_entries)
| {"golden_diff": "diff --git a/samples/update_dataset_access.py b/samples/update_dataset_access.py\n--- a/samples/update_dataset_access.py\n+++ b/samples/update_dataset_access.py\n@@ -28,8 +28,8 @@\n \n entry = bigquery.AccessEntry(\n role=\"READER\",\n- entity_type=\"userByEmail\",\n- entity_id=\"[email protected]\",\n+ entity_type=\"groupByEmail\",\n+ entity_id=\"[email protected]\",\n )\n \n entries = list(dataset.access_entries)\n", "issue": "chore: update system tests and samples to use and `@google.com` email address\nRe: https://github.com/googleapis/python-bigquery/pull/935#issuecomment-911791623\r\n\r\nIt may be some time before we can reconfigure our test project(s) to allow permissions for non-google.com folks.\r\n\r\nI propose we use `[email protected]`, which is intended for interacting with the public and most Googlers who support this client are members of but don't have email enabled, so it should be less of a spam risk.\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\ndef update_dataset_access(dataset_id):\n\n # [START bigquery_update_dataset_access]\n from google.cloud import bigquery\n\n # Construct a BigQuery client object.\n client = bigquery.Client()\n\n # TODO(developer): Set dataset_id to the ID of the dataset to fetch.\n # dataset_id = 'your-project.your_dataset'\n\n dataset = client.get_dataset(dataset_id) # Make an API request.\n\n entry = bigquery.AccessEntry(\n role=\"READER\",\n entity_type=\"userByEmail\",\n entity_id=\"[email protected]\",\n )\n\n entries = list(dataset.access_entries)\n entries.append(entry)\n dataset.access_entries = entries\n\n dataset = client.update_dataset(dataset, [\"access_entries\"]) # Make an API request.\n\n full_dataset_id = \"{}.{}\".format(dataset.project, dataset.dataset_id)\n print(\n \"Updated dataset '{}' with modified user permissions.\".format(full_dataset_id)\n )\n # [END bigquery_update_dataset_access]\n", "path": "samples/update_dataset_access.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\ndef update_dataset_access(dataset_id):\n\n # [START bigquery_update_dataset_access]\n from google.cloud import bigquery\n\n # Construct a BigQuery client object.\n client = bigquery.Client()\n\n # TODO(developer): Set dataset_id to the ID of the dataset to fetch.\n # dataset_id = 'your-project.your_dataset'\n\n dataset = client.get_dataset(dataset_id) # Make an API request.\n\n entry = 
bigquery.AccessEntry(\n role=\"READER\",\n entity_type=\"groupByEmail\",\n entity_id=\"[email protected]\",\n )\n\n entries = list(dataset.access_entries)\n entries.append(entry)\n dataset.access_entries = entries\n\n dataset = client.update_dataset(dataset, [\"access_entries\"]) # Make an API request.\n\n full_dataset_id = \"{}.{}\".format(dataset.project, dataset.dataset_id)\n print(\n \"Updated dataset '{}' with modified user permissions.\".format(full_dataset_id)\n )\n # [END bigquery_update_dataset_access]\n", "path": "samples/update_dataset_access.py"}]} | 815 | 116 |
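The fix in this record replaces the sample's per-user grant with a group-based one (`groupByEmail`), so the published sample no longer hard-codes an individual address. The snippet below sketches how such an entry is built with the `google-cloud-bigquery` client; constructing the `AccessEntry` object makes no API calls, so it needs no credentials, and the group address shown is a placeholder rather than the real group chosen in the issue.

```python
# Sketch only: assumes google-cloud-bigquery is installed. The group address
# is a placeholder; substitute the real group before running against the API.
from google.cloud import bigquery

entry = bigquery.AccessEntry(
    role="READER",
    entity_type="groupByEmail",        # group-based grant, as in the patch
    entity_id="my-team@example.com",   # placeholder address
)
print(entry.role, entry.entity_type, entry.entity_id)

# Applying it to a dataset then follows the sample:
#   dataset.access_entries = list(dataset.access_entries) + [entry]
#   dataset = client.update_dataset(dataset, ["access_entries"])
```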
gh_patches_debug_27274 | rasdani/github-patches | git_diff | open-mmlab__mmpose-465 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a README file to explain the settings in the config files
A good example can be found [here](https://github.com/open-mmlab/mmaction2/blob/master/docs/config.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/train.py`
Content:
```
1 import argparse
2 import copy
3 import os
4 import os.path as osp
5 import time
6
7 import mmcv
8 import torch
9 from mmcv import Config, DictAction
10 from mmcv.runner import init_dist, set_random_seed
11 from mmcv.utils import get_git_hash
12
13 from mmpose import __version__
14 from mmpose.apis import train_model
15 from mmpose.datasets import build_dataset
16 from mmpose.models import build_posenet
17 from mmpose.utils import collect_env, get_root_logger
18
19
20 def parse_args():
21 parser = argparse.ArgumentParser(description='Train a pose model')
22 parser.add_argument('config', help='train config file path')
23 parser.add_argument('--work-dir', help='the dir to save logs and models')
24 parser.add_argument(
25 '--resume-from', help='the checkpoint file to resume from')
26 parser.add_argument(
27 '--no-validate',
28 action='store_true',
29 help='whether not to evaluate the checkpoint during training')
30 group_gpus = parser.add_mutually_exclusive_group()
31 group_gpus.add_argument(
32 '--gpus',
33 type=int,
34 help='number of gpus to use '
35 '(only applicable to non-distributed training)')
36 group_gpus.add_argument(
37 '--gpu-ids',
38 type=int,
39 nargs='+',
40 help='ids of gpus to use '
41 '(only applicable to non-distributed training)')
42 parser.add_argument('--seed', type=int, default=None, help='random seed')
43 parser.add_argument(
44 '--deterministic',
45 action='store_true',
46 help='whether to set deterministic options for CUDNN backend.')
47 parser.add_argument(
48 '--options', nargs='+', action=DictAction, help='arguments in dict')
49 parser.add_argument(
50 '--launcher',
51 choices=['none', 'pytorch', 'slurm', 'mpi'],
52 default='none',
53 help='job launcher')
54 parser.add_argument('--local_rank', type=int, default=0)
55 parser.add_argument(
56 '--autoscale-lr',
57 action='store_true',
58 help='automatically scale lr with the number of gpus')
59 args = parser.parse_args()
60 if 'LOCAL_RANK' not in os.environ:
61 os.environ['LOCAL_RANK'] = str(args.local_rank)
62
63 return args
64
65
66 def main():
67 args = parse_args()
68 cfg = Config.fromfile(args.config)
69 if args.options is not None:
70 cfg.merge_from_dict(args.options)
71 # set cudnn_benchmark
72 if cfg.get('cudnn_benchmark', False):
73 torch.backends.cudnn.benchmark = True
74 # work_dir is determined in this priority: CLI > segment in file > filename
75 if args.work_dir is not None:
76 # update configs according to CLI args if args.work_dir is not None
77 cfg.work_dir = args.work_dir
78 elif cfg.get('work_dir', None) is None:
79 # use config filename as default work_dir if cfg.work_dir is None
80 cfg.work_dir = osp.join('./work_dirs',
81 osp.splitext(osp.basename(args.config))[0])
82 if args.resume_from is not None:
83 cfg.resume_from = args.resume_from
84 if args.gpu_ids is not None:
85 cfg.gpu_ids = args.gpu_ids
86 else:
87 cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)
88
89 if args.autoscale_lr:
90 # apply the linear scaling rule (https://arxiv.org/abs/1706.02677)
91 cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8
92
93 # init distributed env first, since logger depends on the dist info.
94 if args.launcher == 'none':
95 distributed = False
96 else:
97 distributed = True
98 init_dist(args.launcher, **cfg.dist_params)
99
100 # create work_dir
101 mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
102 # init the logger before other steps
103 timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
104 log_file = osp.join(cfg.work_dir, f'{timestamp}.log')
105 logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)
106
107 # init the meta dict to record some important information such as
108 # environment info and seed, which will be logged
109 meta = dict()
110 # log env info
111 env_info_dict = collect_env()
112 env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])
113 dash_line = '-' * 60 + '\n'
114 logger.info('Environment info:\n' + dash_line + env_info + '\n' +
115 dash_line)
116 meta['env_info'] = env_info
117
118 # log some basic info
119 logger.info(f'Distributed training: {distributed}')
120 logger.info(f'Config:\n{cfg.pretty_text}')
121
122 # set random seeds
123 if args.seed is not None:
124 logger.info(f'Set random seed to {args.seed}, '
125 f'deterministic: {args.deterministic}')
126 set_random_seed(args.seed, deterministic=args.deterministic)
127 cfg.seed = args.seed
128 meta['seed'] = args.seed
129
130 model = build_posenet(cfg.model)
131 datasets = [build_dataset(cfg.data.train)]
132
133 if len(cfg.workflow) == 2:
134 val_dataset = copy.deepcopy(cfg.data.val)
135 val_dataset.pipeline = cfg.data.train.pipeline
136 datasets.append(build_dataset(val_dataset))
137
138 if cfg.checkpoint_config is not None:
139 # save mmpose version, config file content
140 # checkpoints as meta data
141 cfg.checkpoint_config.meta = dict(
142 mmpose_version=__version__ + get_git_hash(digits=7),
143 config=cfg.pretty_text,
144 )
145 train_model(
146 model,
147 datasets,
148 cfg,
149 distributed=distributed,
150 validate=(not args.no_validate),
151 timestamp=timestamp,
152 meta=meta)
153
154
155 if __name__ == '__main__':
156 main()
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/train.py b/tools/train.py
--- a/tools/train.py
+++ b/tools/train.py
@@ -45,7 +45,13 @@
action='store_true',
help='whether to set deterministic options for CUDNN backend.')
parser.add_argument(
- '--options', nargs='+', action=DictAction, help='arguments in dict')
+ '--cfg-options',
+ nargs='+',
+ action=DictAction,
+ default={},
+ help='override some settings in the used config, the key-value pair '
+ 'in xxx=yyy format will be merged into config file. For example, '
+ "'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'")
parser.add_argument(
'--launcher',
choices=['none', 'pytorch', 'slurm', 'mpi'],
@@ -65,12 +71,16 @@
def main():
args = parse_args()
+
cfg = Config.fromfile(args.config)
- if args.options is not None:
- cfg.merge_from_dict(args.options)
+
+ if args.cfg_options is not None:
+ cfg.merge_from_dict(args.cfg_options)
+
# set cudnn_benchmark
if cfg.get('cudnn_benchmark', False):
torch.backends.cudnn.benchmark = True
+
# work_dir is determined in this priority: CLI > segment in file > filename
if args.work_dir is not None:
# update configs according to CLI args if args.work_dir is not None
| {"golden_diff": "diff --git a/tools/train.py b/tools/train.py\n--- a/tools/train.py\n+++ b/tools/train.py\n@@ -45,7 +45,13 @@\n action='store_true',\n help='whether to set deterministic options for CUDNN backend.')\n parser.add_argument(\n- '--options', nargs='+', action=DictAction, help='arguments in dict')\n+ '--cfg-options',\n+ nargs='+',\n+ action=DictAction,\n+ default={},\n+ help='override some settings in the used config, the key-value pair '\n+ 'in xxx=yyy format will be merged into config file. For example, '\n+ \"'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'\")\n parser.add_argument(\n '--launcher',\n choices=['none', 'pytorch', 'slurm', 'mpi'],\n@@ -65,12 +71,16 @@\n \n def main():\n args = parse_args()\n+\n cfg = Config.fromfile(args.config)\n- if args.options is not None:\n- cfg.merge_from_dict(args.options)\n+\n+ if args.cfg_options is not None:\n+ cfg.merge_from_dict(args.cfg_options)\n+\n # set cudnn_benchmark\n if cfg.get('cudnn_benchmark', False):\n torch.backends.cudnn.benchmark = True\n+\n # work_dir is determined in this priority: CLI > segment in file > filename\n if args.work_dir is not None:\n # update configs according to CLI args if args.work_dir is not None\n", "issue": "Add a README file to explain the settings in the config files\nA good example can be found [here](https://github.com/open-mmlab/mmaction2/blob/master/docs/config.md)\n", "before_files": [{"content": "import argparse\nimport copy\nimport os\nimport os.path as osp\nimport time\n\nimport mmcv\nimport torch\nfrom mmcv import Config, DictAction\nfrom mmcv.runner import init_dist, set_random_seed\nfrom mmcv.utils import get_git_hash\n\nfrom mmpose import __version__\nfrom mmpose.apis import train_model\nfrom mmpose.datasets import build_dataset\nfrom mmpose.models import build_posenet\nfrom mmpose.utils import collect_env, get_root_logger\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='Train a pose model')\n parser.add_argument('config', help='train config file path')\n parser.add_argument('--work-dir', help='the dir to save logs and models')\n parser.add_argument(\n '--resume-from', help='the checkpoint file to resume from')\n parser.add_argument(\n '--no-validate',\n action='store_true',\n help='whether not to evaluate the checkpoint during training')\n group_gpus = parser.add_mutually_exclusive_group()\n group_gpus.add_argument(\n '--gpus',\n type=int,\n help='number of gpus to use '\n '(only applicable to non-distributed training)')\n group_gpus.add_argument(\n '--gpu-ids',\n type=int,\n nargs='+',\n help='ids of gpus to use '\n '(only applicable to non-distributed training)')\n parser.add_argument('--seed', type=int, default=None, help='random seed')\n parser.add_argument(\n '--deterministic',\n action='store_true',\n help='whether to set deterministic options for CUDNN backend.')\n parser.add_argument(\n '--options', nargs='+', action=DictAction, help='arguments in dict')\n parser.add_argument(\n '--launcher',\n choices=['none', 'pytorch', 'slurm', 'mpi'],\n default='none',\n help='job launcher')\n parser.add_argument('--local_rank', type=int, default=0)\n parser.add_argument(\n '--autoscale-lr',\n action='store_true',\n help='automatically scale lr with the number of gpus')\n args = parser.parse_args()\n if 'LOCAL_RANK' not in os.environ:\n os.environ['LOCAL_RANK'] = str(args.local_rank)\n\n return args\n\n\ndef main():\n args = parse_args()\n cfg = Config.fromfile(args.config)\n if args.options is not None:\n cfg.merge_from_dict(args.options)\n # 
set cudnn_benchmark\n if cfg.get('cudnn_benchmark', False):\n torch.backends.cudnn.benchmark = True\n # work_dir is determined in this priority: CLI > segment in file > filename\n if args.work_dir is not None:\n # update configs according to CLI args if args.work_dir is not None\n cfg.work_dir = args.work_dir\n elif cfg.get('work_dir', None) is None:\n # use config filename as default work_dir if cfg.work_dir is None\n cfg.work_dir = osp.join('./work_dirs',\n osp.splitext(osp.basename(args.config))[0])\n if args.resume_from is not None:\n cfg.resume_from = args.resume_from\n if args.gpu_ids is not None:\n cfg.gpu_ids = args.gpu_ids\n else:\n cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)\n\n if args.autoscale_lr:\n # apply the linear scaling rule (https://arxiv.org/abs/1706.02677)\n cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8\n\n # init distributed env first, since logger depends on the dist info.\n if args.launcher == 'none':\n distributed = False\n else:\n distributed = True\n init_dist(args.launcher, **cfg.dist_params)\n\n # create work_dir\n mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))\n # init the logger before other steps\n timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())\n log_file = osp.join(cfg.work_dir, f'{timestamp}.log')\n logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)\n\n # init the meta dict to record some important information such as\n # environment info and seed, which will be logged\n meta = dict()\n # log env info\n env_info_dict = collect_env()\n env_info = '\\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])\n dash_line = '-' * 60 + '\\n'\n logger.info('Environment info:\\n' + dash_line + env_info + '\\n' +\n dash_line)\n meta['env_info'] = env_info\n\n # log some basic info\n logger.info(f'Distributed training: {distributed}')\n logger.info(f'Config:\\n{cfg.pretty_text}')\n\n # set random seeds\n if args.seed is not None:\n logger.info(f'Set random seed to {args.seed}, '\n f'deterministic: {args.deterministic}')\n set_random_seed(args.seed, deterministic=args.deterministic)\n cfg.seed = args.seed\n meta['seed'] = args.seed\n\n model = build_posenet(cfg.model)\n datasets = [build_dataset(cfg.data.train)]\n\n if len(cfg.workflow) == 2:\n val_dataset = copy.deepcopy(cfg.data.val)\n val_dataset.pipeline = cfg.data.train.pipeline\n datasets.append(build_dataset(val_dataset))\n\n if cfg.checkpoint_config is not None:\n # save mmpose version, config file content\n # checkpoints as meta data\n cfg.checkpoint_config.meta = dict(\n mmpose_version=__version__ + get_git_hash(digits=7),\n config=cfg.pretty_text,\n )\n train_model(\n model,\n datasets,\n cfg,\n distributed=distributed,\n validate=(not args.no_validate),\n timestamp=timestamp,\n meta=meta)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/train.py"}], "after_files": [{"content": "import argparse\nimport copy\nimport os\nimport os.path as osp\nimport time\n\nimport mmcv\nimport torch\nfrom mmcv import Config, DictAction\nfrom mmcv.runner import init_dist, set_random_seed\nfrom mmcv.utils import get_git_hash\n\nfrom mmpose import __version__\nfrom mmpose.apis import train_model\nfrom mmpose.datasets import build_dataset\nfrom mmpose.models import build_posenet\nfrom mmpose.utils import collect_env, get_root_logger\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='Train a pose model')\n parser.add_argument('config', help='train config file path')\n parser.add_argument('--work-dir', help='the dir to save 
logs and models')\n parser.add_argument(\n '--resume-from', help='the checkpoint file to resume from')\n parser.add_argument(\n '--no-validate',\n action='store_true',\n help='whether not to evaluate the checkpoint during training')\n group_gpus = parser.add_mutually_exclusive_group()\n group_gpus.add_argument(\n '--gpus',\n type=int,\n help='number of gpus to use '\n '(only applicable to non-distributed training)')\n group_gpus.add_argument(\n '--gpu-ids',\n type=int,\n nargs='+',\n help='ids of gpus to use '\n '(only applicable to non-distributed training)')\n parser.add_argument('--seed', type=int, default=None, help='random seed')\n parser.add_argument(\n '--deterministic',\n action='store_true',\n help='whether to set deterministic options for CUDNN backend.')\n parser.add_argument(\n '--cfg-options',\n nargs='+',\n action=DictAction,\n default={},\n help='override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file. For example, '\n \"'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'\")\n parser.add_argument(\n '--launcher',\n choices=['none', 'pytorch', 'slurm', 'mpi'],\n default='none',\n help='job launcher')\n parser.add_argument('--local_rank', type=int, default=0)\n parser.add_argument(\n '--autoscale-lr',\n action='store_true',\n help='automatically scale lr with the number of gpus')\n args = parser.parse_args()\n if 'LOCAL_RANK' not in os.environ:\n os.environ['LOCAL_RANK'] = str(args.local_rank)\n\n return args\n\n\ndef main():\n args = parse_args()\n\n cfg = Config.fromfile(args.config)\n\n if args.cfg_options is not None:\n cfg.merge_from_dict(args.cfg_options)\n\n # set cudnn_benchmark\n if cfg.get('cudnn_benchmark', False):\n torch.backends.cudnn.benchmark = True\n\n # work_dir is determined in this priority: CLI > segment in file > filename\n if args.work_dir is not None:\n # update configs according to CLI args if args.work_dir is not None\n cfg.work_dir = args.work_dir\n elif cfg.get('work_dir', None) is None:\n # use config filename as default work_dir if cfg.work_dir is None\n cfg.work_dir = osp.join('./work_dirs',\n osp.splitext(osp.basename(args.config))[0])\n if args.resume_from is not None:\n cfg.resume_from = args.resume_from\n if args.gpu_ids is not None:\n cfg.gpu_ids = args.gpu_ids\n else:\n cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)\n\n if args.autoscale_lr:\n # apply the linear scaling rule (https://arxiv.org/abs/1706.02677)\n cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8\n\n # init distributed env first, since logger depends on the dist info.\n if args.launcher == 'none':\n distributed = False\n else:\n distributed = True\n init_dist(args.launcher, **cfg.dist_params)\n\n # create work_dir\n mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))\n # init the logger before other steps\n timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())\n log_file = osp.join(cfg.work_dir, f'{timestamp}.log')\n logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)\n\n # init the meta dict to record some important information such as\n # environment info and seed, which will be logged\n meta = dict()\n # log env info\n env_info_dict = collect_env()\n env_info = '\\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])\n dash_line = '-' * 60 + '\\n'\n logger.info('Environment info:\\n' + dash_line + env_info + '\\n' +\n dash_line)\n meta['env_info'] = env_info\n\n # log some basic info\n logger.info(f'Distributed training: {distributed}')\n 
logger.info(f'Config:\\n{cfg.pretty_text}')\n\n # set random seeds\n if args.seed is not None:\n logger.info(f'Set random seed to {args.seed}, '\n f'deterministic: {args.deterministic}')\n set_random_seed(args.seed, deterministic=args.deterministic)\n cfg.seed = args.seed\n meta['seed'] = args.seed\n\n model = build_posenet(cfg.model)\n datasets = [build_dataset(cfg.data.train)]\n\n if len(cfg.workflow) == 2:\n val_dataset = copy.deepcopy(cfg.data.val)\n val_dataset.pipeline = cfg.data.train.pipeline\n datasets.append(build_dataset(val_dataset))\n\n if cfg.checkpoint_config is not None:\n # save mmpose version, config file content\n # checkpoints as meta data\n cfg.checkpoint_config.meta = dict(\n mmpose_version=__version__ + get_git_hash(digits=7),\n config=cfg.pretty_text,\n )\n train_model(\n model,\n datasets,\n cfg,\n distributed=distributed,\n validate=(not args.no_validate),\n timestamp=timestamp,\n meta=meta)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/train.py"}]} | 1,956 | 335 |
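The patch for this record renames `--options` to `--cfg-options` and documents the `key=value` override syntax. The example below shows how that style of argument parses; it uses a small custom `argparse.Action` so it runs without `mmcv` installed, whereas the real script relies on `mmcv`'s `DictAction`, which additionally casts value types and merges dotted keys into the nested config via `Config.merge_from_dict`.

```python
# Self-contained stand-in for mmcv's DictAction, shown only to illustrate the
# --cfg-options key=value syntax. Values stay as strings here; mmcv also
# casts them and expands dotted keys into nested config entries.
import argparse


class KeyValueAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        options = {}
        for item in values:
            key, _, value = item.partition("=")
            options[key] = value
        setattr(namespace, self.dest, options)


parser = argparse.ArgumentParser(description="Train a pose model")
parser.add_argument("config", help="train config file path")
parser.add_argument(
    "--cfg-options", nargs="+", action=KeyValueAction, default={},
    help="override settings, e.g. --cfg-options model.backbone.depth=18")

args = parser.parse_args(
    ["configs/example.py", "--cfg-options",
     "model.backbone.depth=18", "model.backbone.with_cp=True"])
print(args.cfg_options)
# {'model.backbone.depth': '18', 'model.backbone.with_cp': 'True'}
```

In the actual trainer, `cfg.merge_from_dict(args.cfg_options)` then folds keys such as `model.backbone.depth` into the nested configuration structure.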
gh_patches_debug_17048 | rasdani/github-patches | git_diff | TheAlgorithms__Python-10073 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Concatenate/consolidate all algorithms with different implementations
### Feature description
There are lots of algorithms with the same concept but different implementations/methods in different files. All these should be moved into one file
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dynamic_programming/longest_sub_array.py`
Content:
```
1 """
2 Author : Yvonne
3
4 This is a pure Python implementation of Dynamic Programming solution to the
5 longest_sub_array problem.
6
7 The problem is :
8 Given an array, to find the longest and continuous sub array and get the max sum of the
9 sub array in the given array.
10 """
11
12
13 class SubArray:
14 def __init__(self, arr):
15 # we need a list not a string, so do something to change the type
16 self.array = arr.split(",")
17
18 def solve_sub_array(self):
19 rear = [int(self.array[0])] * len(self.array)
20 sum_value = [int(self.array[0])] * len(self.array)
21 for i in range(1, len(self.array)):
22 sum_value[i] = max(
23 int(self.array[i]) + sum_value[i - 1], int(self.array[i])
24 )
25 rear[i] = max(sum_value[i], rear[i - 1])
26 return rear[len(self.array) - 1]
27
28
29 if __name__ == "__main__":
30 whole_array = input("please input some numbers:")
31 array = SubArray(whole_array)
32 re = array.solve_sub_array()
33 print(("the results is:", re))
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dynamic_programming/longest_sub_array.py b/dynamic_programming/longest_sub_array.py
deleted file mode 100644
--- a/dynamic_programming/longest_sub_array.py
+++ /dev/null
@@ -1,33 +0,0 @@
-"""
-Author : Yvonne
-
-This is a pure Python implementation of Dynamic Programming solution to the
- longest_sub_array problem.
-
-The problem is :
-Given an array, to find the longest and continuous sub array and get the max sum of the
- sub array in the given array.
-"""
-
-
-class SubArray:
- def __init__(self, arr):
- # we need a list not a string, so do something to change the type
- self.array = arr.split(",")
-
- def solve_sub_array(self):
- rear = [int(self.array[0])] * len(self.array)
- sum_value = [int(self.array[0])] * len(self.array)
- for i in range(1, len(self.array)):
- sum_value[i] = max(
- int(self.array[i]) + sum_value[i - 1], int(self.array[i])
- )
- rear[i] = max(sum_value[i], rear[i - 1])
- return rear[len(self.array) - 1]
-
-
-if __name__ == "__main__":
- whole_array = input("please input some numbers:")
- array = SubArray(whole_array)
- re = array.solve_sub_array()
- print(("the results is:", re))
| {"golden_diff": "diff --git a/dynamic_programming/longest_sub_array.py b/dynamic_programming/longest_sub_array.py\ndeleted file mode 100644\n--- a/dynamic_programming/longest_sub_array.py\n+++ /dev/null\n@@ -1,33 +0,0 @@\n-\"\"\"\n-Author : Yvonne\n-\n-This is a pure Python implementation of Dynamic Programming solution to the\n- longest_sub_array problem.\n-\n-The problem is :\n-Given an array, to find the longest and continuous sub array and get the max sum of the\n- sub array in the given array.\n-\"\"\"\n-\n-\n-class SubArray:\n- def __init__(self, arr):\n- # we need a list not a string, so do something to change the type\n- self.array = arr.split(\",\")\n-\n- def solve_sub_array(self):\n- rear = [int(self.array[0])] * len(self.array)\n- sum_value = [int(self.array[0])] * len(self.array)\n- for i in range(1, len(self.array)):\n- sum_value[i] = max(\n- int(self.array[i]) + sum_value[i - 1], int(self.array[i])\n- )\n- rear[i] = max(sum_value[i], rear[i - 1])\n- return rear[len(self.array) - 1]\n-\n-\n-if __name__ == \"__main__\":\n- whole_array = input(\"please input some numbers:\")\n- array = SubArray(whole_array)\n- re = array.solve_sub_array()\n- print((\"the results is:\", re))\n", "issue": "Concatenate/consolidate all algorithms with different implementations\n### Feature description\n\nThere are lots of algorithms with the same concept but different implementations/methods in different files. All these should be moved into one file\n", "before_files": [{"content": "\"\"\"\nAuthor : Yvonne\n\nThis is a pure Python implementation of Dynamic Programming solution to the\n longest_sub_array problem.\n\nThe problem is :\nGiven an array, to find the longest and continuous sub array and get the max sum of the\n sub array in the given array.\n\"\"\"\n\n\nclass SubArray:\n def __init__(self, arr):\n # we need a list not a string, so do something to change the type\n self.array = arr.split(\",\")\n\n def solve_sub_array(self):\n rear = [int(self.array[0])] * len(self.array)\n sum_value = [int(self.array[0])] * len(self.array)\n for i in range(1, len(self.array)):\n sum_value[i] = max(\n int(self.array[i]) + sum_value[i - 1], int(self.array[i])\n )\n rear[i] = max(sum_value[i], rear[i - 1])\n return rear[len(self.array) - 1]\n\n\nif __name__ == \"__main__\":\n whole_array = input(\"please input some numbers:\")\n array = SubArray(whole_array)\n re = array.solve_sub_array()\n print((\"the results is:\", re))\n", "path": "dynamic_programming/longest_sub_array.py"}], "after_files": [{"content": null, "path": "dynamic_programming/longest_sub_array.py"}]} | 622 | 346 |
gh_patches_debug_33172 | rasdani/github-patches | git_diff | pytorch__text-58 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Include Moses Tokenizer
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchtext/data/example.py`
Content:
```
1 import csv
2 import json
3
4
5 class Example(object):
6 """Defines a single training or test example.
7
8 Stores each column of the example as an attribute.
9 """
10
11 @classmethod
12 def fromJSON(cls, data, fields):
13 return cls.fromdict(json.loads(data), fields)
14
15 @classmethod
16 def fromdict(cls, data, fields):
17 ex = cls()
18 for key, vals in fields.items():
19 if key in data and vals is not None:
20 if not isinstance(vals, list):
21 vals = [vals]
22 for val in vals:
23 name, field = val
24 setattr(ex, name, field.preprocess(data[key]))
25 return ex
26
27 @classmethod
28 def fromTSV(cls, data, fields):
29 if data[-1] == '\n':
30 data = data[:-1]
31 return cls.fromlist(data.split('\t'), fields)
32
33 @classmethod
34 def fromCSV(cls, data, fields):
35 if data[-1] == '\n':
36 data = data[:-1]
37 return cls.fromlist(list(csv.reader([data]))[0], fields)
38
39 @classmethod
40 def fromlist(cls, data, fields):
41 ex = cls()
42 for (name, field), val in zip(fields, data):
43 if field is not None:
44 setattr(ex, name, field.preprocess(val))
45 return ex
46
47 @classmethod
48 def fromtree(cls, data, fields, subtrees=False):
49 try:
50 from nltk.tree import Tree
51 except ImportError:
52 print('''Please install NLTK:
53 $ pip install nltk''')
54 raise
55 tree = Tree.fromstring(data)
56 if subtrees:
57 return [cls.fromlist(
58 [t.leaves(), t.label()], fields) for t in tree.subtrees()]
59 return cls.fromlist([tree.leaves(), tree.label()], fields)
60
```
Path: `torchtext/data/utils.py`
Content:
```
1 def get_tokenizer(tokenizer):
2 if callable(tokenizer):
3 return tokenizer
4 if tokenizer == 'spacy':
5 try:
6 import spacy
7 spacy_en = spacy.load('en')
8 return lambda s: [tok.text for tok in spacy_en.tokenizer(s)]
9 except ImportError:
10 print("Please install SpaCy and the SpaCy English tokenizer. "
11 "See the docs at https://spacy.io for more information.")
12 raise
13 except AttributeError:
14 print("Please install SpaCy and the SpaCy English tokenizer. "
15 "See the docs at https://spacy.io for more information.")
16 raise
17 raise ValueError("Requested tokenizer {}, valid choices are a "
18 "callable that takes a single string as input "
19 "and \"spacy\" for the SpaCy English "
20 "tokenizer.".format(tokenizer))
21
22
23 def interleave_keys(a, b):
24 """Interleave bits from two sort keys to form a joint sort key.
25
26 Examples that are similar in both of the provided keys will have similar
27 values for the key defined by this function. Useful for tasks with two
28 text fields like machine translation or natural language inference.
29 """
30 def interleave(args):
31 return ''.join([x for t in zip(*args) for x in t])
32 return int(''.join(interleave(format(x, '016b') for x in (a, b))), base=2)
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchtext/data/example.py b/torchtext/data/example.py
--- a/torchtext/data/example.py
+++ b/torchtext/data/example.py
@@ -49,8 +49,8 @@
try:
from nltk.tree import Tree
except ImportError:
- print('''Please install NLTK:
- $ pip install nltk''')
+ print("Please install NLTK. "
+ "See the docs at http://nltk.org for more information.")
raise
tree = Tree.fromstring(data)
if subtrees:
diff --git a/torchtext/data/utils.py b/torchtext/data/utils.py
--- a/torchtext/data/utils.py
+++ b/torchtext/data/utils.py
@@ -1,7 +1,7 @@
def get_tokenizer(tokenizer):
if callable(tokenizer):
return tokenizer
- if tokenizer == 'spacy':
+ if tokenizer == "spacy":
try:
import spacy
spacy_en = spacy.load('en')
@@ -14,10 +14,24 @@
print("Please install SpaCy and the SpaCy English tokenizer. "
"See the docs at https://spacy.io for more information.")
raise
+ elif tokenizer == "moses":
+ try:
+ from nltk.tokenize.moses import MosesTokenizer
+ moses_tokenizer = MosesTokenizer()
+ return moses_tokenizer.tokenize
+ except ImportError:
+ print("Please install NLTK. "
+ "See the docs at http://nltk.org for more information.")
+ raise
+ except LookupError:
+ print("Please install the necessary NLTK corpora. "
+ "See the docs at http://nltk.org for more information.")
+ raise
raise ValueError("Requested tokenizer {}, valid choices are a "
- "callable that takes a single string as input "
- "and \"spacy\" for the SpaCy English "
- "tokenizer.".format(tokenizer))
+ "callable that takes a single string as input, "
+ "\"spacy\" for the SpaCy English tokenizer, or "
+ "\"moses\" for the NLTK port of the Moses tokenization "
+ "script.".format(tokenizer))
def interleave_keys(a, b):
| {"golden_diff": "diff --git a/torchtext/data/example.py b/torchtext/data/example.py\n--- a/torchtext/data/example.py\n+++ b/torchtext/data/example.py\n@@ -49,8 +49,8 @@\n try:\n from nltk.tree import Tree\n except ImportError:\n- print('''Please install NLTK:\n- $ pip install nltk''')\n+ print(\"Please install NLTK. \"\n+ \"See the docs at http://nltk.org for more information.\")\n raise\n tree = Tree.fromstring(data)\n if subtrees:\ndiff --git a/torchtext/data/utils.py b/torchtext/data/utils.py\n--- a/torchtext/data/utils.py\n+++ b/torchtext/data/utils.py\n@@ -1,7 +1,7 @@\n def get_tokenizer(tokenizer):\n if callable(tokenizer):\n return tokenizer\n- if tokenizer == 'spacy':\n+ if tokenizer == \"spacy\":\n try:\n import spacy\n spacy_en = spacy.load('en')\n@@ -14,10 +14,24 @@\n print(\"Please install SpaCy and the SpaCy English tokenizer. \"\n \"See the docs at https://spacy.io for more information.\")\n raise\n+ elif tokenizer == \"moses\":\n+ try:\n+ from nltk.tokenize.moses import MosesTokenizer\n+ moses_tokenizer = MosesTokenizer()\n+ return moses_tokenizer.tokenize\n+ except ImportError:\n+ print(\"Please install NLTK. \"\n+ \"See the docs at http://nltk.org for more information.\")\n+ raise\n+ except LookupError:\n+ print(\"Please install the necessary NLTK corpora. \"\n+ \"See the docs at http://nltk.org for more information.\")\n+ raise\n raise ValueError(\"Requested tokenizer {}, valid choices are a \"\n- \"callable that takes a single string as input \"\n- \"and \\\"spacy\\\" for the SpaCy English \"\n- \"tokenizer.\".format(tokenizer))\n+ \"callable that takes a single string as input, \"\n+ \"\\\"spacy\\\" for the SpaCy English tokenizer, or \"\n+ \"\\\"moses\\\" for the NLTK port of the Moses tokenization \"\n+ \"script.\".format(tokenizer))\n \n \n def interleave_keys(a, b):\n", "issue": "Include Moses Tokenizer\n\n", "before_files": [{"content": "import csv\nimport json\n\n\nclass Example(object):\n \"\"\"Defines a single training or test example.\n\n Stores each column of the example as an attribute.\n \"\"\"\n\n @classmethod\n def fromJSON(cls, data, fields):\n return cls.fromdict(json.loads(data), fields)\n\n @classmethod\n def fromdict(cls, data, fields):\n ex = cls()\n for key, vals in fields.items():\n if key in data and vals is not None:\n if not isinstance(vals, list):\n vals = [vals]\n for val in vals:\n name, field = val\n setattr(ex, name, field.preprocess(data[key]))\n return ex\n\n @classmethod\n def fromTSV(cls, data, fields):\n if data[-1] == '\\n':\n data = data[:-1]\n return cls.fromlist(data.split('\\t'), fields)\n\n @classmethod\n def fromCSV(cls, data, fields):\n if data[-1] == '\\n':\n data = data[:-1]\n return cls.fromlist(list(csv.reader([data]))[0], fields)\n\n @classmethod\n def fromlist(cls, data, fields):\n ex = cls()\n for (name, field), val in zip(fields, data):\n if field is not None:\n setattr(ex, name, field.preprocess(val))\n return ex\n\n @classmethod\n def fromtree(cls, data, fields, subtrees=False):\n try:\n from nltk.tree import Tree\n except ImportError:\n print('''Please install NLTK:\n $ pip install nltk''')\n raise\n tree = Tree.fromstring(data)\n if subtrees:\n return [cls.fromlist(\n [t.leaves(), t.label()], fields) for t in tree.subtrees()]\n return cls.fromlist([tree.leaves(), tree.label()], fields)\n", "path": "torchtext/data/example.py"}, {"content": "def get_tokenizer(tokenizer):\n if callable(tokenizer):\n return tokenizer\n if tokenizer == 'spacy':\n try:\n import spacy\n spacy_en = spacy.load('en')\n return lambda s: 
[tok.text for tok in spacy_en.tokenizer(s)]\n except ImportError:\n print(\"Please install SpaCy and the SpaCy English tokenizer. \"\n \"See the docs at https://spacy.io for more information.\")\n raise\n except AttributeError:\n print(\"Please install SpaCy and the SpaCy English tokenizer. \"\n \"See the docs at https://spacy.io for more information.\")\n raise\n raise ValueError(\"Requested tokenizer {}, valid choices are a \"\n \"callable that takes a single string as input \"\n \"and \\\"spacy\\\" for the SpaCy English \"\n \"tokenizer.\".format(tokenizer))\n\n\ndef interleave_keys(a, b):\n \"\"\"Interleave bits from two sort keys to form a joint sort key.\n\n Examples that are similar in both of the provided keys will have similar\n values for the key defined by this function. Useful for tasks with two\n text fields like machine translation or natural language inference.\n \"\"\"\n def interleave(args):\n return ''.join([x for t in zip(*args) for x in t])\n return int(''.join(interleave(format(x, '016b') for x in (a, b))), base=2)\n", "path": "torchtext/data/utils.py"}], "after_files": [{"content": "import csv\nimport json\n\n\nclass Example(object):\n \"\"\"Defines a single training or test example.\n\n Stores each column of the example as an attribute.\n \"\"\"\n\n @classmethod\n def fromJSON(cls, data, fields):\n return cls.fromdict(json.loads(data), fields)\n\n @classmethod\n def fromdict(cls, data, fields):\n ex = cls()\n for key, vals in fields.items():\n if key in data and vals is not None:\n if not isinstance(vals, list):\n vals = [vals]\n for val in vals:\n name, field = val\n setattr(ex, name, field.preprocess(data[key]))\n return ex\n\n @classmethod\n def fromTSV(cls, data, fields):\n if data[-1] == '\\n':\n data = data[:-1]\n return cls.fromlist(data.split('\\t'), fields)\n\n @classmethod\n def fromCSV(cls, data, fields):\n if data[-1] == '\\n':\n data = data[:-1]\n return cls.fromlist(list(csv.reader([data]))[0], fields)\n\n @classmethod\n def fromlist(cls, data, fields):\n ex = cls()\n for (name, field), val in zip(fields, data):\n if field is not None:\n setattr(ex, name, field.preprocess(val))\n return ex\n\n @classmethod\n def fromtree(cls, data, fields, subtrees=False):\n try:\n from nltk.tree import Tree\n except ImportError:\n print(\"Please install NLTK. \"\n \"See the docs at http://nltk.org for more information.\")\n raise\n tree = Tree.fromstring(data)\n if subtrees:\n return [cls.fromlist(\n [t.leaves(), t.label()], fields) for t in tree.subtrees()]\n return cls.fromlist([tree.leaves(), tree.label()], fields)\n", "path": "torchtext/data/example.py"}, {"content": "def get_tokenizer(tokenizer):\n if callable(tokenizer):\n return tokenizer\n if tokenizer == \"spacy\":\n try:\n import spacy\n spacy_en = spacy.load('en')\n return lambda s: [tok.text for tok in spacy_en.tokenizer(s)]\n except ImportError:\n print(\"Please install SpaCy and the SpaCy English tokenizer. \"\n \"See the docs at https://spacy.io for more information.\")\n raise\n except AttributeError:\n print(\"Please install SpaCy and the SpaCy English tokenizer. \"\n \"See the docs at https://spacy.io for more information.\")\n raise\n elif tokenizer == \"moses\":\n try:\n from nltk.tokenize.moses import MosesTokenizer\n moses_tokenizer = MosesTokenizer()\n return moses_tokenizer.tokenize\n except ImportError:\n print(\"Please install NLTK. \"\n \"See the docs at http://nltk.org for more information.\")\n raise\n except LookupError:\n print(\"Please install the necessary NLTK corpora. 
\"\n \"See the docs at http://nltk.org for more information.\")\n raise\n raise ValueError(\"Requested tokenizer {}, valid choices are a \"\n \"callable that takes a single string as input, \"\n \"\\\"spacy\\\" for the SpaCy English tokenizer, or \"\n \"\\\"moses\\\" for the NLTK port of the Moses tokenization \"\n \"script.\".format(tokenizer))\n\n\ndef interleave_keys(a, b):\n \"\"\"Interleave bits from two sort keys to form a joint sort key.\n\n Examples that are similar in both of the provided keys will have similar\n values for the key defined by this function. Useful for tasks with two\n text fields like machine translation or natural language inference.\n \"\"\"\n def interleave(args):\n return ''.join([x for t in zip(*args) for x in t])\n return int(''.join(interleave(format(x, '016b') for x in (a, b))), base=2)\n", "path": "torchtext/data/utils.py"}]} | 1,150 | 496 |
gh_patches_debug_14448 | rasdani/github-patches | git_diff | oppia__oppia-9270 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typescript checks don't work on circle CI
<!--
- Thanks for taking the time to report a bug in the Oppia project.
- Before filing a new issue, please do a quick search to check that it hasn't
- already been filed on the [issue tracker](https://github.com/oppia/oppia/issues)._
-->
**Describe the bug**
Typescript checks don't work on circle CI
**To Reproduce**
Steps to reproduce the behavior:
1. See logs on any pull request.
**Observed behavior**
```
Compiling and testing typescript...
/usr/bin/env: ‘node’: No such file or directory
Compilation successful!
```
**Expected behavior**
They should run.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/typescript_checks.py`
Content:
```
1 # Copyright 2019 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """File for compiling and checking typescript."""
16
17 from __future__ import absolute_import # pylint: disable=import-only-modules
18 from __future__ import unicode_literals # pylint: disable=import-only-modules
19
20 import json
21 import os
22 import shutil
23 import subprocess
24 import sys
25
26 import python_utils
27
28 COMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')
29 TSCONFIG_FILEPATH = 'tsconfig.json'
30
31
32 def validate_compiled_js_dir():
33 """Validates that compiled js dir matches out dir in tsconfig."""
34 with python_utils.open_file(TSCONFIG_FILEPATH, 'r') as f:
35 config_data = json.load(f)
36 out_dir = os.path.join(config_data['compilerOptions']['outDir'], '')
37 if out_dir != COMPILED_JS_DIR:
38 raise Exception(
39 'COMPILED_JS_DIR: %s does not match the output directory '
40 'in %s: %s' % (COMPILED_JS_DIR, TSCONFIG_FILEPATH, out_dir))
41
42
43 def compile_and_check_typescript():
44 """Compiles typescript files and checks the compilation errors."""
45 node_path = os.path.join(os.pardir, 'oppia_tools/node-10.18.0')
46 os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']
47
48 validate_compiled_js_dir()
49
50 if os.path.exists(COMPILED_JS_DIR):
51 shutil.rmtree(COMPILED_JS_DIR)
52
53 python_utils.PRINT('Compiling and testing typescript...')
54 cmd = [
55 './node_modules/typescript/bin/tsc', '--project',
56 TSCONFIG_FILEPATH]
57 process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
58 error_messages = []
59 for line in iter(process.stdout.readline, ''):
60 error_messages.append(line)
61 if os.path.exists(COMPILED_JS_DIR):
62 shutil.rmtree(COMPILED_JS_DIR)
63 if error_messages:
64 python_utils.PRINT('Errors found during compilation\n')
65 for message in error_messages:
66 python_utils.PRINT(message)
67 sys.exit(1)
68 else:
69 python_utils.PRINT('Compilation successful!')
70
71
72 # The 'no coverage' pragma is used as this line is un-testable. This is because
73 # it will only be called when typescript_checks.py is used as a script.
74 if __name__ == '__main__': # pragma: no cover
75 compile_and_check_typescript()
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/typescript_checks.py b/scripts/typescript_checks.py
--- a/scripts/typescript_checks.py
+++ b/scripts/typescript_checks.py
@@ -24,6 +24,7 @@
import sys
import python_utils
+from . import common
COMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')
TSCONFIG_FILEPATH = 'tsconfig.json'
@@ -42,7 +43,7 @@
def compile_and_check_typescript():
"""Compiles typescript files and checks the compilation errors."""
- node_path = os.path.join(os.pardir, 'oppia_tools/node-10.18.0')
+ node_path = common.NODE_PATH
os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']
validate_compiled_js_dir()
| {"golden_diff": "diff --git a/scripts/typescript_checks.py b/scripts/typescript_checks.py\n--- a/scripts/typescript_checks.py\n+++ b/scripts/typescript_checks.py\n@@ -24,6 +24,7 @@\n import sys\n \n import python_utils\n+from . import common\n \n COMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')\n TSCONFIG_FILEPATH = 'tsconfig.json'\n@@ -42,7 +43,7 @@\n \n def compile_and_check_typescript():\n \"\"\"Compiles typescript files and checks the compilation errors.\"\"\"\n- node_path = os.path.join(os.pardir, 'oppia_tools/node-10.18.0')\n+ node_path = common.NODE_PATH\n os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']\n \n validate_compiled_js_dir()\n", "issue": "Typescript checks don't work on circle CI\n<!--\r\n - Thanks for taking the time to report a bug in the Oppia project.\r\n - Before filing a new issue, please do a quick search to check that it hasn't\r\n - already been filed on the [issue tracker](https://github.com/oppia/oppia/issues)._\r\n -->\r\n\r\n**Describe the bug**\r\nTypescript checks don't work on circle CI\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. See logs on any pull request.\r\n\r\n**Observed behavior**\r\n```\r\nCompiling and testing typescript...\r\n/usr/bin/env: \u2018node\u2019: No such file or directory\r\nCompilation successful!\r\n```\r\n\r\n**Expected behavior**\r\nThey should run.\r\n\n", "before_files": [{"content": "# Copyright 2019 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"File for compiling and checking typescript.\"\"\"\n\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport json\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nimport python_utils\n\nCOMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')\nTSCONFIG_FILEPATH = 'tsconfig.json'\n\n\ndef validate_compiled_js_dir():\n \"\"\"Validates that compiled js dir matches out dir in tsconfig.\"\"\"\n with python_utils.open_file(TSCONFIG_FILEPATH, 'r') as f:\n config_data = json.load(f)\n out_dir = os.path.join(config_data['compilerOptions']['outDir'], '')\n if out_dir != COMPILED_JS_DIR:\n raise Exception(\n 'COMPILED_JS_DIR: %s does not match the output directory '\n 'in %s: %s' % (COMPILED_JS_DIR, TSCONFIG_FILEPATH, out_dir))\n\n\ndef compile_and_check_typescript():\n \"\"\"Compiles typescript files and checks the compilation errors.\"\"\"\n node_path = os.path.join(os.pardir, 'oppia_tools/node-10.18.0')\n os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']\n\n validate_compiled_js_dir()\n\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n\n python_utils.PRINT('Compiling and testing typescript...')\n cmd = [\n './node_modules/typescript/bin/tsc', '--project',\n TSCONFIG_FILEPATH]\n process = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n error_messages = []\n for line in iter(process.stdout.readline, ''):\n 
error_messages.append(line)\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n if error_messages:\n python_utils.PRINT('Errors found during compilation\\n')\n for message in error_messages:\n python_utils.PRINT(message)\n sys.exit(1)\n else:\n python_utils.PRINT('Compilation successful!')\n\n\n# The 'no coverage' pragma is used as this line is un-testable. This is because\n# it will only be called when typescript_checks.py is used as a script.\nif __name__ == '__main__': # pragma: no cover\n compile_and_check_typescript()\n", "path": "scripts/typescript_checks.py"}], "after_files": [{"content": "# Copyright 2019 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"File for compiling and checking typescript.\"\"\"\n\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport json\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nimport python_utils\nfrom . import common\n\nCOMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')\nTSCONFIG_FILEPATH = 'tsconfig.json'\n\n\ndef validate_compiled_js_dir():\n \"\"\"Validates that compiled js dir matches out dir in tsconfig.\"\"\"\n with python_utils.open_file(TSCONFIG_FILEPATH, 'r') as f:\n config_data = json.load(f)\n out_dir = os.path.join(config_data['compilerOptions']['outDir'], '')\n if out_dir != COMPILED_JS_DIR:\n raise Exception(\n 'COMPILED_JS_DIR: %s does not match the output directory '\n 'in %s: %s' % (COMPILED_JS_DIR, TSCONFIG_FILEPATH, out_dir))\n\n\ndef compile_and_check_typescript():\n \"\"\"Compiles typescript files and checks the compilation errors.\"\"\"\n node_path = common.NODE_PATH\n os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']\n\n validate_compiled_js_dir()\n\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n\n python_utils.PRINT('Compiling and testing typescript...')\n cmd = [\n './node_modules/typescript/bin/tsc', '--project',\n TSCONFIG_FILEPATH]\n process = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n error_messages = []\n for line in iter(process.stdout.readline, ''):\n error_messages.append(line)\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n if error_messages:\n python_utils.PRINT('Errors found during compilation\\n')\n for message in error_messages:\n python_utils.PRINT(message)\n sys.exit(1)\n else:\n python_utils.PRINT('Compilation successful!')\n\n\n# The 'no coverage' pragma is used as this line is un-testable. This is because\n# it will only be called when typescript_checks.py is used as a script.\nif __name__ == '__main__': # pragma: no cover\n compile_and_check_typescript()\n", "path": "scripts/typescript_checks.py"}]} | 1,215 | 184 |
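A note on the failure mode in the row above: `/usr/bin/env: 'node': No such file or directory` is written to stderr by the `tsc` launcher, and the original script only scans stdout, which is likely why it still printed "Compilation successful!". The golden diff fixes the root cause by using `common.NODE_PATH`; the sketch below shows the general pattern of prepending a tool's bin directory to PATH before spawning such a subprocess. The `NODE_HOME` value is a placeholder, not a real Oppia path.

```python
# Sketch: make a "#!/usr/bin/env node" launcher resolvable by prepending the
# Node bin directory to PATH for the child process. NODE_HOME is hypothetical.
import os
import subprocess

NODE_HOME = '/opt/oppia_tools/node-10.18.0'  # placeholder location
env = os.environ.copy()
env['PATH'] = os.path.join(NODE_HOME, 'bin') + os.pathsep + env['PATH']

process = subprocess.Popen(
    ['./node_modules/typescript/bin/tsc', '--project', 'tsconfig.json'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
stdout, stderr = process.communicate()
print(stdout.decode(), stderr.decode())
```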
gh_patches_debug_257 | rasdani/github-patches | git_diff | facebookresearch__habitat-lab-66 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mistake in Agent class' docs
Agent's class docstring states that the user has to implement 2 methods: `reset` and `step`. However, if I understood correctly, there's no method named `step`; there is a method `act` instead. This is quite a tiny issue, but still.
https://github.com/facebookresearch/habitat-api/blob/c7443c39c5186e517d8a4b7c87a1b42106e4e77a/habitat/core/agent.py#L10-L17
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `habitat/core/agent.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 from habitat.core.simulator import Observations
8
9
10 class Agent:
11 """Abstract class for defining agents which act inside Env. This abstract
12 class standardizes agents to allow seamless benchmarking. To implement an
13 agent the user has to implement two methods:
14
15 reset
16 step
17 """
18
19 def reset(self) -> None:
20 """Called before starting a new episode in environment.
21 """
22 raise NotImplementedError
23
24 def act(self, observations: Observations) -> int:
25 """
26
27 Args:
28 observations: observations coming in from environment to be used
29 by agent to decide action.
30
31 Returns:
32 action to be taken inside the environment
33 """
34 raise NotImplementedError
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/habitat/core/agent.py b/habitat/core/agent.py
--- a/habitat/core/agent.py
+++ b/habitat/core/agent.py
@@ -13,7 +13,7 @@
agent the user has to implement two methods:
reset
- step
+ act
"""
def reset(self) -> None:
| {"golden_diff": "diff --git a/habitat/core/agent.py b/habitat/core/agent.py\n--- a/habitat/core/agent.py\n+++ b/habitat/core/agent.py\n@@ -13,7 +13,7 @@\n agent the user has to implement two methods:\n \n reset\n- step\n+ act\n \"\"\"\n \n def reset(self) -> None:\n", "issue": "Mistake in Agent class' docs\nAgent's class docs string states that user has to implement 2 methods: `reset` and `step`. However, If I understood correctly, there's no method named `step` and there is method `act` instead. This is quite tiny issue but still.\r\nhttps://github.com/facebookresearch/habitat-api/blob/c7443c39c5186e517d8a4b7c87a1b42106e4e77a/habitat/core/agent.py#L10-L17\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom habitat.core.simulator import Observations\n\n\nclass Agent:\n \"\"\"Abstract class for defining agents which act inside Env. This abstract\n class standardizes agents to allow seamless benchmarking. To implement an\n agent the user has to implement two methods:\n\n reset\n step\n \"\"\"\n\n def reset(self) -> None:\n \"\"\"Called before starting a new episode in environment.\n \"\"\"\n raise NotImplementedError\n\n def act(self, observations: Observations) -> int:\n \"\"\"\n\n Args:\n observations: observations coming in from environment to be used\n by agent to decide action.\n\n Returns:\n action to be taken inside the environment\n \"\"\"\n raise NotImplementedError\n", "path": "habitat/core/agent.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom habitat.core.simulator import Observations\n\n\nclass Agent:\n \"\"\"Abstract class for defining agents which act inside Env. This abstract\n class standardizes agents to allow seamless benchmarking. To implement an\n agent the user has to implement two methods:\n\n reset\n act\n \"\"\"\n\n def reset(self) -> None:\n \"\"\"Called before starting a new episode in environment.\n \"\"\"\n raise NotImplementedError\n\n def act(self, observations: Observations) -> int:\n \"\"\"\n\n Args:\n observations: observations coming in from environment to be used\n by agent to decide action.\n\n Returns:\n action to be taken inside the environment\n \"\"\"\n raise NotImplementedError\n", "path": "habitat/core/agent.py"}]} | 646 | 85 |
gh_patches_debug_10850 | rasdani/github-patches | git_diff | mne-tools__mne-python-10894 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
update path handling in examples/preprocessing/otp.py
all of our `data_path` functions now return `pathlib.Path` objects, which allows things like the following:
https://github.com/mne-tools/mne-python/blob/37a9e5587443109515a85cdc4396af4e014ecf82/examples/datasets/brainstorm_data.py#L32-L35
Compare this to the old pattern (using `os.path.join`, often imported as `op.join`):
```py
raw_path = os.path.join(data_path, 'MEG', 'bst_raw',
'subj001_somatosensory_20111109_01_AUX-f.ds')
```
or the *really* old pattern (using the `+` operator on plain strings):
```py
raw_path = (data_path + '/MEG/bst_raw/' +
'subj001_somatosensory_20111109_01_AUX-f.ds')
```
Update the paths in the file mentioned in the issue title to use the new pathlib-style, which is considered the modern, best-practice way to handle file paths.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/preprocessing/otp.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 .. _ex-otp:
4
5 ===========================================================
6 Plot sensor denoising using oversampled temporal projection
7 ===========================================================
8
9 This demonstrates denoising using the OTP algorithm :footcite:`LarsonTaulu2018`
10 on data with with sensor artifacts (flux jumps) and random noise.
11 """
12 # Author: Eric Larson <[email protected]>
13 #
14 # License: BSD-3-Clause
15
16 # %%
17
18 import os.path as op
19 import mne
20 import numpy as np
21
22 from mne import find_events, fit_dipole
23 from mne.datasets.brainstorm import bst_phantom_elekta
24 from mne.io import read_raw_fif
25
26 print(__doc__)
27
28 # %%
29 # Plot the phantom data, lowpassed to get rid of high-frequency artifacts.
30 # We also crop to a single 10-second segment for speed.
31 # Notice that there are two large flux jumps on channel 1522 that could
32 # spread to other channels when performing subsequent spatial operations
33 # (e.g., Maxwell filtering, SSP, or ICA).
34
35 dipole_number = 1
36 data_path = bst_phantom_elekta.data_path()
37 raw = read_raw_fif(
38 op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))
39 raw.crop(40., 50.).load_data()
40 order = list(range(160, 170))
41 raw.copy().filter(0., 40.).plot(order=order, n_channels=10)
42
43 # %%
44 # Now we can clean the data with OTP, lowpass, and plot. The flux jumps have
45 # been suppressed alongside the random sensor noise.
46
47 raw_clean = mne.preprocessing.oversampled_temporal_projection(raw)
48 raw_clean.filter(0., 40.)
49 raw_clean.plot(order=order, n_channels=10)
50
51
52 # %%
53 # We can also look at the effect on single-trial phantom localization.
54 # See the :ref:`tut-brainstorm-elekta-phantom`
55 # for more information. Here we use a version that does single-trial
56 # localization across the 17 trials are in our 10-second window:
57
58 def compute_bias(raw):
59 events = find_events(raw, 'STI201', verbose=False)
60 events = events[1:] # first one has an artifact
61 tmin, tmax = -0.2, 0.1
62 epochs = mne.Epochs(raw, events, dipole_number, tmin, tmax,
63 baseline=(None, -0.01), preload=True, verbose=False)
64 sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None,
65 verbose=False)
66 cov = mne.compute_covariance(epochs, tmax=0, method='oas',
67 rank=None, verbose=False)
68 idx = epochs.time_as_index(0.036)[0]
69 data = epochs.get_data()[:, :, idx].T
70 evoked = mne.EvokedArray(data, epochs.info, tmin=0.)
71 dip = fit_dipole(evoked, cov, sphere, n_jobs=None, verbose=False)[0]
72 actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]
73 misses = 1000 * np.linalg.norm(dip.pos - actual_pos, axis=-1)
74 return misses
75
76
77 bias = compute_bias(raw)
78 print('Raw bias: %0.1fmm (worst: %0.1fmm)'
79 % (np.mean(bias), np.max(bias)))
80 bias_clean = compute_bias(raw_clean)
81 print('OTP bias: %0.1fmm (worst: %0.1fmm)'
82 % (np.mean(bias_clean), np.max(bias_clean),))
83
84 # %%
85 # References
86 # ----------
87 # .. footbibliography::
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/preprocessing/otp.py b/examples/preprocessing/otp.py
--- a/examples/preprocessing/otp.py
+++ b/examples/preprocessing/otp.py
@@ -15,7 +15,6 @@
# %%
-import os.path as op
import mne
import numpy as np
@@ -34,8 +33,7 @@
dipole_number = 1
data_path = bst_phantom_elekta.data_path()
-raw = read_raw_fif(
- op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))
+raw = read_raw_fif(data_path / 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw.crop(40., 50.).load_data()
order = list(range(160, 170))
raw.copy().filter(0., 40.).plot(order=order, n_channels=10)
| {"golden_diff": "diff --git a/examples/preprocessing/otp.py b/examples/preprocessing/otp.py\n--- a/examples/preprocessing/otp.py\n+++ b/examples/preprocessing/otp.py\n@@ -15,7 +15,6 @@\n \n # %%\n \n-import os.path as op\n import mne\n import numpy as np\n \n@@ -34,8 +33,7 @@\n \n dipole_number = 1\n data_path = bst_phantom_elekta.data_path()\n-raw = read_raw_fif(\n- op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))\n+raw = read_raw_fif(data_path / 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')\n raw.crop(40., 50.).load_data()\n order = list(range(160, 170))\n raw.copy().filter(0., 40.).plot(order=order, n_channels=10)\n", "issue": "update path handling in examples/preprocessing/otp.py\nall of our `data_path` functions now return `pathlib.Path` objects, which allows things like the following:\r\n\r\nhttps://github.com/mne-tools/mne-python/blob/37a9e5587443109515a85cdc4396af4e014ecf82/examples/datasets/brainstorm_data.py#L32-L35\r\n\r\nCompare this to the old pattern (using `os.path.join`, often imported as `op.join`):\r\n\r\n```py\r\nraw_path = os.path.join(data_path, 'MEG', 'bst_raw',\r\n 'subj001_somatosensory_20111109_01_AUX-f.ds')\r\n```\r\n\r\nor the *really* old pattern (using the `+` operator on plain strings):\r\n\r\n```py\r\nraw_path = (data_path + '/MEG/bst_raw/' +\r\n 'subj001_somatosensory_20111109_01_AUX-f.ds')\r\n```\r\n\r\nUpdate the paths in the file mentioned in the issue title to use the new pathlib-style, which is considered the modern, best-practice way to handle file paths.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n.. _ex-otp:\n\n===========================================================\nPlot sensor denoising using oversampled temporal projection\n===========================================================\n\nThis demonstrates denoising using the OTP algorithm :footcite:`LarsonTaulu2018`\non data with with sensor artifacts (flux jumps) and random noise.\n\"\"\"\n# Author: Eric Larson <[email protected]>\n#\n# License: BSD-3-Clause\n\n# %%\n\nimport os.path as op\nimport mne\nimport numpy as np\n\nfrom mne import find_events, fit_dipole\nfrom mne.datasets.brainstorm import bst_phantom_elekta\nfrom mne.io import read_raw_fif\n\nprint(__doc__)\n\n# %%\n# Plot the phantom data, lowpassed to get rid of high-frequency artifacts.\n# We also crop to a single 10-second segment for speed.\n# Notice that there are two large flux jumps on channel 1522 that could\n# spread to other channels when performing subsequent spatial operations\n# (e.g., Maxwell filtering, SSP, or ICA).\n\ndipole_number = 1\ndata_path = bst_phantom_elekta.data_path()\nraw = read_raw_fif(\n op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))\nraw.crop(40., 50.).load_data()\norder = list(range(160, 170))\nraw.copy().filter(0., 40.).plot(order=order, n_channels=10)\n\n# %%\n# Now we can clean the data with OTP, lowpass, and plot. The flux jumps have\n# been suppressed alongside the random sensor noise.\n\nraw_clean = mne.preprocessing.oversampled_temporal_projection(raw)\nraw_clean.filter(0., 40.)\nraw_clean.plot(order=order, n_channels=10)\n\n\n# %%\n# We can also look at the effect on single-trial phantom localization.\n# See the :ref:`tut-brainstorm-elekta-phantom`\n# for more information. 
Here we use a version that does single-trial\n# localization across the 17 trials are in our 10-second window:\n\ndef compute_bias(raw):\n events = find_events(raw, 'STI201', verbose=False)\n events = events[1:] # first one has an artifact\n tmin, tmax = -0.2, 0.1\n epochs = mne.Epochs(raw, events, dipole_number, tmin, tmax,\n baseline=(None, -0.01), preload=True, verbose=False)\n sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None,\n verbose=False)\n cov = mne.compute_covariance(epochs, tmax=0, method='oas',\n rank=None, verbose=False)\n idx = epochs.time_as_index(0.036)[0]\n data = epochs.get_data()[:, :, idx].T\n evoked = mne.EvokedArray(data, epochs.info, tmin=0.)\n dip = fit_dipole(evoked, cov, sphere, n_jobs=None, verbose=False)[0]\n actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]\n misses = 1000 * np.linalg.norm(dip.pos - actual_pos, axis=-1)\n return misses\n\n\nbias = compute_bias(raw)\nprint('Raw bias: %0.1fmm (worst: %0.1fmm)'\n % (np.mean(bias), np.max(bias)))\nbias_clean = compute_bias(raw_clean)\nprint('OTP bias: %0.1fmm (worst: %0.1fmm)'\n % (np.mean(bias_clean), np.max(bias_clean),))\n\n# %%\n# References\n# ----------\n# .. footbibliography::\n", "path": "examples/preprocessing/otp.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n.. _ex-otp:\n\n===========================================================\nPlot sensor denoising using oversampled temporal projection\n===========================================================\n\nThis demonstrates denoising using the OTP algorithm :footcite:`LarsonTaulu2018`\non data with with sensor artifacts (flux jumps) and random noise.\n\"\"\"\n# Author: Eric Larson <[email protected]>\n#\n# License: BSD-3-Clause\n\n# %%\n\nimport mne\nimport numpy as np\n\nfrom mne import find_events, fit_dipole\nfrom mne.datasets.brainstorm import bst_phantom_elekta\nfrom mne.io import read_raw_fif\n\nprint(__doc__)\n\n# %%\n# Plot the phantom data, lowpassed to get rid of high-frequency artifacts.\n# We also crop to a single 10-second segment for speed.\n# Notice that there are two large flux jumps on channel 1522 that could\n# spread to other channels when performing subsequent spatial operations\n# (e.g., Maxwell filtering, SSP, or ICA).\n\ndipole_number = 1\ndata_path = bst_phantom_elekta.data_path()\nraw = read_raw_fif(data_path / 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')\nraw.crop(40., 50.).load_data()\norder = list(range(160, 170))\nraw.copy().filter(0., 40.).plot(order=order, n_channels=10)\n\n# %%\n# Now we can clean the data with OTP, lowpass, and plot. The flux jumps have\n# been suppressed alongside the random sensor noise.\n\nraw_clean = mne.preprocessing.oversampled_temporal_projection(raw)\nraw_clean.filter(0., 40.)\nraw_clean.plot(order=order, n_channels=10)\n\n\n# %%\n# We can also look at the effect on single-trial phantom localization.\n# See the :ref:`tut-brainstorm-elekta-phantom`\n# for more information. 
Here we use a version that does single-trial\n# localization across the 17 trials are in our 10-second window:\n\ndef compute_bias(raw):\n events = find_events(raw, 'STI201', verbose=False)\n events = events[1:] # first one has an artifact\n tmin, tmax = -0.2, 0.1\n epochs = mne.Epochs(raw, events, dipole_number, tmin, tmax,\n baseline=(None, -0.01), preload=True, verbose=False)\n sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None,\n verbose=False)\n cov = mne.compute_covariance(epochs, tmax=0, method='oas',\n rank=None, verbose=False)\n idx = epochs.time_as_index(0.036)[0]\n data = epochs.get_data()[:, :, idx].T\n evoked = mne.EvokedArray(data, epochs.info, tmin=0.)\n dip = fit_dipole(evoked, cov, sphere, n_jobs=None, verbose=False)[0]\n actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]\n misses = 1000 * np.linalg.norm(dip.pos - actual_pos, axis=-1)\n return misses\n\n\nbias = compute_bias(raw)\nprint('Raw bias: %0.1fmm (worst: %0.1fmm)'\n % (np.mean(bias), np.max(bias)))\nbias_clean = compute_bias(raw_clean)\nprint('OTP bias: %0.1fmm (worst: %0.1fmm)'\n % (np.mean(bias_clean), np.max(bias_clean),))\n\n# %%\n# References\n# ----------\n# .. footbibliography::\n", "path": "examples/preprocessing/otp.py"}]} | 1,552 | 213 |
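The issue in the row above contrasts three ways of building the raw file path, and the golden diff settles on the pathlib `/` operator applied to the `Path` object returned by `data_path()`. The self-contained sketch below re-creates that comparison with a made-up data directory; on a POSIX system all three spellings resolve to the same location.

```python
# Sketch of the three path styles discussed in the issue, using a
# hypothetical data directory instead of the real bst_phantom_elekta path.
import os.path as op
from pathlib import Path

data_path = Path('/tmp/mne_data/bst_phantom_elekta')
fname = 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'

raw_concat = str(data_path) + '/' + fname   # really old: plain string +
raw_join = op.join(data_path, fname)        # old: os.path.join
raw_pathlib = data_path / fname             # preferred: pathlib's / operator

# On POSIX these all point at the same file.
assert str(raw_pathlib) == raw_concat == raw_join
print(raw_pathlib)
```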
gh_patches_debug_43985 | rasdani/github-patches | git_diff | spyder-ide__spyder-7478 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check for new conda packages in Anaconda's repos instead of Github
Right now we're looking for new releases only on Github. But this gives a bad UX in Anaconda because the Anaconda team takes several days to create new conda packages for Spyder.
So in that case we need to look for new packages (for example) in
https://repo.continuum.io/pkgs/main/win-64/
to only show the update message when new packages are placed there.
@dalthviz, please work on this one.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spyder/workers/updates.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 # Standard library imports
8 import json
9 import ssl
10
11 # Third party imports
12 from qtpy.QtCore import QObject, Signal
13
14 # Local imports
15 from spyder import __version__
16 from spyder.config.base import _
17 from spyder.py3compat import PY3
18 from spyder.utils.programs import check_version, is_stable_version
19
20
21 if PY3:
22 from urllib.request import urlopen
23 from urllib.error import URLError, HTTPError
24 else:
25 from urllib2 import urlopen, URLError, HTTPError
26
27
28 class WorkerUpdates(QObject):
29 """
30 Worker that checks for releases using the Github API without blocking the
31 Spyder user interface, in case of connections issues.
32 """
33 sig_ready = Signal()
34
35 def __init__(self, parent, startup):
36 QObject.__init__(self)
37 self._parent = parent
38 self.error = None
39 self.latest_release = None
40 self.startup = startup
41
42 def check_update_available(self, version, releases):
43 """Checks if there is an update available.
44
45 It takes as parameters the current version of Spyder and a list of
46 valid cleaned releases in chronological order (what github api returns
47 by default). Example: ['2.3.4', '2.3.3' ...]
48 """
49 if is_stable_version(version):
50 # Remove non stable versions from the list
51 releases = [r for r in releases if is_stable_version(r)]
52
53 latest_release = releases[0]
54
55 if version.endswith('dev'):
56 return (False, latest_release)
57
58 return (check_version(version, latest_release, '<'), latest_release)
59
60 def start(self):
61 """Main method of the WorkerUpdates worker"""
62 self.url = 'https://api.github.com/repos/spyder-ide/spyder/releases'
63 self.update_available = False
64 self.latest_release = __version__
65
66 error_msg = None
67
68 try:
69 if hasattr(ssl, '_create_unverified_context'):
70 # Fix for issue # 2685 [Works only with Python >=2.7.9]
71 # More info: https://www.python.org/dev/peps/pep-0476/#opting-out
72 context = ssl._create_unverified_context()
73 page = urlopen(self.url, context=context)
74 else:
75 page = urlopen(self.url)
76 try:
77 data = page.read()
78
79 # Needed step for python3 compatibility
80 if not isinstance(data, str):
81 data = data.decode()
82
83 data = json.loads(data)
84 releases = [item['tag_name'].replace('v', '') for item in data]
85 version = __version__
86
87 result = self.check_update_available(version, releases)
88 self.update_available, self.latest_release = result
89 except Exception:
90 error_msg = _('Unable to retrieve information.')
91 except HTTPError:
92 error_msg = _('Unable to retrieve information.')
93 except URLError:
94 error_msg = _('Unable to connect to the internet. <br><br>Make '
95 'sure the connection is working properly.')
96 except Exception:
97 error_msg = _('Unable to check for updates.')
98
99 # Don't show dialog when starting up spyder and an error occur
100 if not (self.startup and error_msg is not None):
101 self.error = error_msg
102 self.sig_ready.emit()
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spyder/workers/updates.py b/spyder/workers/updates.py
--- a/spyder/workers/updates.py
+++ b/spyder/workers/updates.py
@@ -6,7 +6,9 @@
# Standard library imports
import json
+import os
import ssl
+import sys
# Third party imports
from qtpy.QtCore import QObject, Signal
@@ -15,6 +17,7 @@
from spyder import __version__
from spyder.config.base import _
from spyder.py3compat import PY3
+from spyder.config.utils import is_anaconda
from spyder.utils.programs import check_version, is_stable_version
@@ -27,30 +30,36 @@
class WorkerUpdates(QObject):
"""
- Worker that checks for releases using the Github API without blocking the
- Spyder user interface, in case of connections issues.
+ Worker that checks for releases using either the Anaconda
+ default channels or the Github Releases page without
+ blocking the Spyder user interface, in case of connection
+ issues.
"""
sig_ready = Signal()
- def __init__(self, parent, startup):
+ def __init__(self, parent, startup, version=""):
QObject.__init__(self)
self._parent = parent
self.error = None
self.latest_release = None
self.startup = startup
+ self.version = version
- def check_update_available(self, version, releases):
+ def check_update_available(self, version, releases, github=False):
"""Checks if there is an update available.
It takes as parameters the current version of Spyder and a list of
- valid cleaned releases in chronological order (what github api returns
- by default). Example: ['2.3.4', '2.3.3' ...]
+ valid cleaned releases in chronological order.
+ Example: ['2.3.2', '2.3.3' ...] or with github ['2.3.4', '2.3.3' ...]
"""
if is_stable_version(version):
# Remove non stable versions from the list
releases = [r for r in releases if is_stable_version(r)]
- latest_release = releases[0]
+ if github:
+ latest_release = releases[0]
+ else:
+ latest_release = releases[-1]
if version.endswith('dev'):
return (False, latest_release)
@@ -59,7 +68,17 @@
def start(self):
"""Main method of the WorkerUpdates worker"""
- self.url = 'https://api.github.com/repos/spyder-ide/spyder/releases'
+ if is_anaconda():
+ self.url = 'https://repo.anaconda.com/pkgs/main'
+ if os.name == 'nt':
+ self.url += '/win-64/repodata.json'
+ elif sys.platform == 'darwin':
+ self.url += '/osx-64/repodata.json'
+ else:
+ self.url += '/linux-64/repodata.json'
+ else:
+ self.url = ('https://api.github.com/repos/'
+ 'spyder-ide/spyder/releases')
self.update_available = False
self.latest_release = __version__
@@ -81,10 +100,22 @@
data = data.decode()
data = json.loads(data)
- releases = [item['tag_name'].replace('v', '') for item in data]
- version = __version__
-
- result = self.check_update_available(version, releases)
+ if not self.version:
+ self.version = __version__
+
+ if is_anaconda():
+ releases = []
+ for item in data['packages']:
+ if 'spyder' in item and 'spyder-kernels' not in item:
+ releases.append(item.split('-')[1])
+ result = self.check_update_available(self.version,
+ releases)
+ else:
+ releases = [item['tag_name'].replace('v', '')
+ for item in data]
+ result = self.check_update_available(self.version,
+ releases,
+ github=True)
self.update_available, self.latest_release = result
except Exception:
error_msg = _('Unable to retrieve information.')
| {"golden_diff": "diff --git a/spyder/workers/updates.py b/spyder/workers/updates.py\n--- a/spyder/workers/updates.py\n+++ b/spyder/workers/updates.py\n@@ -6,7 +6,9 @@\n \n # Standard library imports\n import json\n+import os\n import ssl\n+import sys\n \n # Third party imports\n from qtpy.QtCore import QObject, Signal\n@@ -15,6 +17,7 @@\n from spyder import __version__\n from spyder.config.base import _\n from spyder.py3compat import PY3\n+from spyder.config.utils import is_anaconda\n from spyder.utils.programs import check_version, is_stable_version\n \n \n@@ -27,30 +30,36 @@\n \n class WorkerUpdates(QObject):\n \"\"\"\n- Worker that checks for releases using the Github API without blocking the\n- Spyder user interface, in case of connections issues.\n+ Worker that checks for releases using either the Anaconda\n+ default channels or the Github Releases page without\n+ blocking the Spyder user interface, in case of connection\n+ issues.\n \"\"\"\n sig_ready = Signal()\n \n- def __init__(self, parent, startup):\n+ def __init__(self, parent, startup, version=\"\"):\n QObject.__init__(self)\n self._parent = parent\n self.error = None\n self.latest_release = None\n self.startup = startup\n+ self.version = version\n \n- def check_update_available(self, version, releases):\n+ def check_update_available(self, version, releases, github=False):\n \"\"\"Checks if there is an update available.\n \n It takes as parameters the current version of Spyder and a list of\n- valid cleaned releases in chronological order (what github api returns\n- by default). Example: ['2.3.4', '2.3.3' ...]\n+ valid cleaned releases in chronological order.\n+ Example: ['2.3.2', '2.3.3' ...] or with github ['2.3.4', '2.3.3' ...]\n \"\"\"\n if is_stable_version(version):\n # Remove non stable versions from the list\n releases = [r for r in releases if is_stable_version(r)]\n \n- latest_release = releases[0]\n+ if github:\n+ latest_release = releases[0]\n+ else:\n+ latest_release = releases[-1]\n \n if version.endswith('dev'):\n return (False, latest_release)\n@@ -59,7 +68,17 @@\n \n def start(self):\n \"\"\"Main method of the WorkerUpdates worker\"\"\"\n- self.url = 'https://api.github.com/repos/spyder-ide/spyder/releases'\n+ if is_anaconda():\n+ self.url = 'https://repo.anaconda.com/pkgs/main'\n+ if os.name == 'nt':\n+ self.url += '/win-64/repodata.json'\n+ elif sys.platform == 'darwin':\n+ self.url += '/osx-64/repodata.json'\n+ else:\n+ self.url += '/linux-64/repodata.json'\n+ else:\n+ self.url = ('https://api.github.com/repos/'\n+ 'spyder-ide/spyder/releases')\n self.update_available = False\n self.latest_release = __version__\n \n@@ -81,10 +100,22 @@\n data = data.decode()\n \n data = json.loads(data)\n- releases = [item['tag_name'].replace('v', '') for item in data]\n- version = __version__\n-\n- result = self.check_update_available(version, releases)\n+ if not self.version:\n+ self.version = __version__\n+\n+ if is_anaconda():\n+ releases = []\n+ for item in data['packages']:\n+ if 'spyder' in item and 'spyder-kernels' not in item:\n+ releases.append(item.split('-')[1])\n+ result = self.check_update_available(self.version,\n+ releases)\n+ else:\n+ releases = [item['tag_name'].replace('v', '')\n+ for item in data]\n+ result = self.check_update_available(self.version,\n+ releases,\n+ github=True)\n self.update_available, self.latest_release = result\n except Exception:\n error_msg = _('Unable to retrieve information.')\n", "issue": "Check for new conda packages in Anaconda's repos instead of Github\nRight now we're 
looking for new releases only on Github. But this gives a bad UX in Anaconda because they take several days to create new conda packages for Spyder.\r\n\r\nSo in that case we need to look for new packages (for example) in\r\n\r\nhttps://repo.continuum.io/pkgs/main/win-64/\r\n\r\nto only show the update message when new packages are placed there.\r\n\r\n@dalthviz, please work on this one.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n# Standard library imports\nimport json\nimport ssl\n\n# Third party imports\nfrom qtpy.QtCore import QObject, Signal\n\n# Local imports\nfrom spyder import __version__\nfrom spyder.config.base import _\nfrom spyder.py3compat import PY3\nfrom spyder.utils.programs import check_version, is_stable_version\n\n\nif PY3:\n from urllib.request import urlopen\n from urllib.error import URLError, HTTPError\nelse:\n from urllib2 import urlopen, URLError, HTTPError\n\n\nclass WorkerUpdates(QObject):\n \"\"\"\n Worker that checks for releases using the Github API without blocking the\n Spyder user interface, in case of connections issues.\n \"\"\"\n sig_ready = Signal()\n\n def __init__(self, parent, startup):\n QObject.__init__(self)\n self._parent = parent\n self.error = None\n self.latest_release = None\n self.startup = startup\n\n def check_update_available(self, version, releases):\n \"\"\"Checks if there is an update available.\n\n It takes as parameters the current version of Spyder and a list of\n valid cleaned releases in chronological order (what github api returns\n by default). Example: ['2.3.4', '2.3.3' ...]\n \"\"\"\n if is_stable_version(version):\n # Remove non stable versions from the list\n releases = [r for r in releases if is_stable_version(r)]\n\n latest_release = releases[0]\n\n if version.endswith('dev'):\n return (False, latest_release)\n\n return (check_version(version, latest_release, '<'), latest_release)\n\n def start(self):\n \"\"\"Main method of the WorkerUpdates worker\"\"\"\n self.url = 'https://api.github.com/repos/spyder-ide/spyder/releases'\n self.update_available = False\n self.latest_release = __version__\n\n error_msg = None\n\n try:\n if hasattr(ssl, '_create_unverified_context'):\n # Fix for issue # 2685 [Works only with Python >=2.7.9]\n # More info: https://www.python.org/dev/peps/pep-0476/#opting-out\n context = ssl._create_unverified_context()\n page = urlopen(self.url, context=context)\n else:\n page = urlopen(self.url)\n try:\n data = page.read()\n\n # Needed step for python3 compatibility\n if not isinstance(data, str):\n data = data.decode()\n\n data = json.loads(data)\n releases = [item['tag_name'].replace('v', '') for item in data]\n version = __version__\n\n result = self.check_update_available(version, releases)\n self.update_available, self.latest_release = result\n except Exception:\n error_msg = _('Unable to retrieve information.')\n except HTTPError:\n error_msg = _('Unable to retrieve information.')\n except URLError:\n error_msg = _('Unable to connect to the internet. 
<br><br>Make '\n 'sure the connection is working properly.')\n except Exception:\n error_msg = _('Unable to check for updates.')\n\n # Don't show dialog when starting up spyder and an error occur\n if not (self.startup and error_msg is not None):\n self.error = error_msg\n self.sig_ready.emit()\n", "path": "spyder/workers/updates.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n# Standard library imports\nimport json\nimport os\nimport ssl\nimport sys\n\n# Third party imports\nfrom qtpy.QtCore import QObject, Signal\n\n# Local imports\nfrom spyder import __version__\nfrom spyder.config.base import _\nfrom spyder.py3compat import PY3\nfrom spyder.config.utils import is_anaconda\nfrom spyder.utils.programs import check_version, is_stable_version\n\n\nif PY3:\n from urllib.request import urlopen\n from urllib.error import URLError, HTTPError\nelse:\n from urllib2 import urlopen, URLError, HTTPError\n\n\nclass WorkerUpdates(QObject):\n \"\"\"\n Worker that checks for releases using either the Anaconda\n default channels or the Github Releases page without\n blocking the Spyder user interface, in case of connection\n issues.\n \"\"\"\n sig_ready = Signal()\n\n def __init__(self, parent, startup, version=\"\"):\n QObject.__init__(self)\n self._parent = parent\n self.error = None\n self.latest_release = None\n self.startup = startup\n self.version = version\n\n def check_update_available(self, version, releases, github=False):\n \"\"\"Checks if there is an update available.\n\n It takes as parameters the current version of Spyder and a list of\n valid cleaned releases in chronological order.\n Example: ['2.3.2', '2.3.3' ...] 
or with github ['2.3.4', '2.3.3' ...]\n \"\"\"\n if is_stable_version(version):\n # Remove non stable versions from the list\n releases = [r for r in releases if is_stable_version(r)]\n\n if github:\n latest_release = releases[0]\n else:\n latest_release = releases[-1]\n\n if version.endswith('dev'):\n return (False, latest_release)\n\n return (check_version(version, latest_release, '<'), latest_release)\n\n def start(self):\n \"\"\"Main method of the WorkerUpdates worker\"\"\"\n if is_anaconda():\n self.url = 'https://repo.anaconda.com/pkgs/main'\n if os.name == 'nt':\n self.url += '/win-64/repodata.json'\n elif sys.platform == 'darwin':\n self.url += '/osx-64/repodata.json'\n else:\n self.url += '/linux-64/repodata.json'\n else:\n self.url = ('https://api.github.com/repos/'\n 'spyder-ide/spyder/releases')\n self.update_available = False\n self.latest_release = __version__\n\n error_msg = None\n\n try:\n if hasattr(ssl, '_create_unverified_context'):\n # Fix for issue # 2685 [Works only with Python >=2.7.9]\n # More info: https://www.python.org/dev/peps/pep-0476/#opting-out\n context = ssl._create_unverified_context()\n page = urlopen(self.url, context=context)\n else:\n page = urlopen(self.url)\n try:\n data = page.read()\n\n # Needed step for python3 compatibility\n if not isinstance(data, str):\n data = data.decode()\n\n data = json.loads(data)\n if not self.version:\n self.version = __version__\n\n if is_anaconda():\n releases = []\n for item in data['packages']:\n if 'spyder' in item and 'spyder-kernels' not in item:\n releases.append(item.split('-')[1])\n result = self.check_update_available(self.version,\n releases)\n else:\n releases = [item['tag_name'].replace('v', '')\n for item in data]\n result = self.check_update_available(self.version,\n releases,\n github=True)\n self.update_available, self.latest_release = result\n except Exception:\n error_msg = _('Unable to retrieve information.')\n except HTTPError:\n error_msg = _('Unable to retrieve information.')\n except URLError:\n error_msg = _('Unable to connect to the internet. <br><br>Make '\n 'sure the connection is working properly.')\n except Exception:\n error_msg = _('Unable to check for updates.')\n\n # Don't show dialog when starting up spyder and an error occur\n if not (self.startup and error_msg is not None):\n self.error = error_msg\n self.sig_ready.emit()\n", "path": "spyder/workers/updates.py"}]} | 1,329 | 964 |
gh_patches_debug_25229 | rasdani/github-patches | git_diff | microsoft__AzureTRE-217 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Task] Change location to azure_location for porter parameters from API
**Description**
Change location to azure_location for porter parameters from API - as location appears to be a reserved word for porter
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `management_api_app/db/repositories/workspaces.py`
Content:
```
1 import uuid
2 from typing import List
3
4 from azure.cosmos import CosmosClient
5 from pydantic import UUID4
6
7 from core import config
8 from db.errors import EntityDoesNotExist
9 from db.repositories.base import BaseRepository
10 from db.repositories.workspace_templates import WorkspaceTemplateRepository
11 from models.domain.resource import Status
12 from models.domain.workspace import Workspace
13 from models.schemas.workspace import WorkspaceInCreate
14
15
16 class WorkspaceRepository(BaseRepository):
17 def __init__(self, client: CosmosClient):
18 super().__init__(client, config.STATE_STORE_RESOURCES_CONTAINER)
19
20 @staticmethod
21 def _active_workspaces_query():
22 return 'SELECT * FROM c WHERE c.resourceType = "workspace" AND c.isDeleted = false'
23
24 def _get_template_version(self, template_name):
25 workspace_template_repo = WorkspaceTemplateRepository(self._client)
26 template = workspace_template_repo.get_current_workspace_template_by_name(template_name)
27 return template["version"]
28
29 def get_all_active_workspaces(self) -> List[Workspace]:
30 query = self._active_workspaces_query()
31 return self.query(query=query)
32
33 def get_workspace_by_workspace_id(self, workspace_id: UUID4) -> Workspace:
34 query = self._active_workspaces_query() + f' AND c.id="{workspace_id}"'
35 workspaces = self.query(query=query)
36 if not workspaces:
37 raise EntityDoesNotExist
38 return workspaces[0]
39
40 def create_workspace_item(self, workspace_create: WorkspaceInCreate) -> Workspace:
41 full_workspace_id = str(uuid.uuid4())
42
43 try:
44 template_version = self._get_template_version(workspace_create.workspaceType)
45 except EntityDoesNotExist:
46 raise ValueError(f"The workspace type '{workspace_create.workspaceType}' does not exist")
47
48 resource_spec_parameters = {
49 "location": config.RESOURCE_LOCATION,
50 "workspace_id": full_workspace_id[-4:],
51 "tre_id": config.TRE_ID,
52 "address_space": "10.2.1.0/24" # TODO: Calculate this value - Issue #52
53 }
54
55 workspace = Workspace(
56 id=full_workspace_id,
57 displayName=workspace_create.displayName,
58 description=workspace_create.description,
59 resourceTemplateName=workspace_create.workspaceType,
60 resourceTemplateVersion=template_version,
61 resourceTemplateParameters=resource_spec_parameters,
62 status=Status.NotDeployed
63 )
64
65 return workspace
66
67 def save_workspace(self, workspace: Workspace):
68 self.create_item(workspace)
69
```
Path: `management_api_app/models/schemas/workspace_template.py`
Content:
```
1 from typing import List
2 from pydantic import BaseModel, Field
3
4 from models.domain.resource import ResourceType
5 from models.domain.resource_template import ResourceTemplate, Parameter
6
7
8 def get_sample_workspace_template_object(template_name: str = "tre-workspace-vanilla") -> ResourceTemplate:
9 return ResourceTemplate(
10 id="a7a7a7bd-7f4e-4a4e-b970-dc86a6b31dfb",
11 name=template_name,
12 description="vanilla workspace bundle",
13 version="0.1.0",
14 parameters=[
15 Parameter(name="location", type="string"),
16 Parameter(name="tre_id", type="string"),
17 Parameter(name="workspace_id", type="string"),
18 Parameter(name="address_space", type="string", default="10.2.1.0/24", description="VNet address space for the workspace services")
19 ],
20 resourceType=ResourceType.Workspace,
21 current=True,
22 )
23
24
25 def get_sample_workspace_template() -> dict:
26 return get_sample_workspace_template_object().dict()
27
28
29 class WorkspaceTemplateNamesInList(BaseModel):
30 templateNames: List[str]
31
32 class Config:
33 schema_extra = {
34 "example": {
35 "templateNames": ["tre-workspace-vanilla", "tre-workspace-base"]
36 }
37 }
38
39
40 class WorkspaceTemplateInCreate(BaseModel):
41
42 name: str = Field(title="Name of workspace template")
43 version: str = Field(title="Version of workspace template")
44 description: str = Field(title=" Description of workspace template")
45 properties: dict = Field({}, title="Workspace template properties",
46 description="Values for the properties required by the workspace template")
47 resourceType: str = Field(title="Type of workspace template")
48 current: bool = Field(title="Mark this version as current")
49
50 class Config:
51 schema_extra = {
52 "example": {
53 "name": "My workspace template",
54 "version": "0.0.1",
55 "description": "workspace template for great product",
56 "properties": {},
57 "resourceType": "workspace",
58 "current": "true"
59 }
60 }
61
62
63 class WorkspaceTemplateIdInResponse(BaseModel):
64 resourceTemplateId: str
65
66
67 class WorkspaceTemplateInResponse(BaseModel):
68 workspaceTemplate: ResourceTemplate
69
70 class Config:
71 schema_extra = {
72 "example": {
73 "resourceTemplateId": "49a7445c-aae6-41ec-a539-30dfa90ab1ae",
74 "workspaceTemplate": get_sample_workspace_template()
75 }
76 }
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/management_api_app/db/repositories/workspaces.py b/management_api_app/db/repositories/workspaces.py
--- a/management_api_app/db/repositories/workspaces.py
+++ b/management_api_app/db/repositories/workspaces.py
@@ -46,7 +46,7 @@
raise ValueError(f"The workspace type '{workspace_create.workspaceType}' does not exist")
resource_spec_parameters = {
- "location": config.RESOURCE_LOCATION,
+ "azure_location": config.RESOURCE_LOCATION,
"workspace_id": full_workspace_id[-4:],
"tre_id": config.TRE_ID,
"address_space": "10.2.1.0/24" # TODO: Calculate this value - Issue #52
diff --git a/management_api_app/models/schemas/workspace_template.py b/management_api_app/models/schemas/workspace_template.py
--- a/management_api_app/models/schemas/workspace_template.py
+++ b/management_api_app/models/schemas/workspace_template.py
@@ -12,7 +12,7 @@
description="vanilla workspace bundle",
version="0.1.0",
parameters=[
- Parameter(name="location", type="string"),
+ Parameter(name="azure_location", type="string"),
Parameter(name="tre_id", type="string"),
Parameter(name="workspace_id", type="string"),
Parameter(name="address_space", type="string", default="10.2.1.0/24", description="VNet address space for the workspace services")
| {"golden_diff": "diff --git a/management_api_app/db/repositories/workspaces.py b/management_api_app/db/repositories/workspaces.py\n--- a/management_api_app/db/repositories/workspaces.py\n+++ b/management_api_app/db/repositories/workspaces.py\n@@ -46,7 +46,7 @@\n raise ValueError(f\"The workspace type '{workspace_create.workspaceType}' does not exist\")\n \n resource_spec_parameters = {\n- \"location\": config.RESOURCE_LOCATION,\n+ \"azure_location\": config.RESOURCE_LOCATION,\n \"workspace_id\": full_workspace_id[-4:],\n \"tre_id\": config.TRE_ID,\n \"address_space\": \"10.2.1.0/24\" # TODO: Calculate this value - Issue #52\ndiff --git a/management_api_app/models/schemas/workspace_template.py b/management_api_app/models/schemas/workspace_template.py\n--- a/management_api_app/models/schemas/workspace_template.py\n+++ b/management_api_app/models/schemas/workspace_template.py\n@@ -12,7 +12,7 @@\n description=\"vanilla workspace bundle\",\n version=\"0.1.0\",\n parameters=[\n- Parameter(name=\"location\", type=\"string\"),\n+ Parameter(name=\"azure_location\", type=\"string\"),\n Parameter(name=\"tre_id\", type=\"string\"),\n Parameter(name=\"workspace_id\", type=\"string\"),\n Parameter(name=\"address_space\", type=\"string\", default=\"10.2.1.0/24\", description=\"VNet address space for the workspace services\")\n", "issue": "[Task] Change location to azure_location for porter parameters from API\n**Description**\r\n \r\nChange location to azure_location for porter parameters from API - as location appears to be a reserved word for porter\r\n\r\n\n", "before_files": [{"content": "import uuid\nfrom typing import List\n\nfrom azure.cosmos import CosmosClient\nfrom pydantic import UUID4\n\nfrom core import config\nfrom db.errors import EntityDoesNotExist\nfrom db.repositories.base import BaseRepository\nfrom db.repositories.workspace_templates import WorkspaceTemplateRepository\nfrom models.domain.resource import Status\nfrom models.domain.workspace import Workspace\nfrom models.schemas.workspace import WorkspaceInCreate\n\n\nclass WorkspaceRepository(BaseRepository):\n def __init__(self, client: CosmosClient):\n super().__init__(client, config.STATE_STORE_RESOURCES_CONTAINER)\n\n @staticmethod\n def _active_workspaces_query():\n return 'SELECT * FROM c WHERE c.resourceType = \"workspace\" AND c.isDeleted = false'\n\n def _get_template_version(self, template_name):\n workspace_template_repo = WorkspaceTemplateRepository(self._client)\n template = workspace_template_repo.get_current_workspace_template_by_name(template_name)\n return template[\"version\"]\n\n def get_all_active_workspaces(self) -> List[Workspace]:\n query = self._active_workspaces_query()\n return self.query(query=query)\n\n def get_workspace_by_workspace_id(self, workspace_id: UUID4) -> Workspace:\n query = self._active_workspaces_query() + f' AND c.id=\"{workspace_id}\"'\n workspaces = self.query(query=query)\n if not workspaces:\n raise EntityDoesNotExist\n return workspaces[0]\n\n def create_workspace_item(self, workspace_create: WorkspaceInCreate) -> Workspace:\n full_workspace_id = str(uuid.uuid4())\n\n try:\n template_version = self._get_template_version(workspace_create.workspaceType)\n except EntityDoesNotExist:\n raise ValueError(f\"The workspace type '{workspace_create.workspaceType}' does not exist\")\n\n resource_spec_parameters = {\n \"location\": config.RESOURCE_LOCATION,\n \"workspace_id\": full_workspace_id[-4:],\n \"tre_id\": config.TRE_ID,\n \"address_space\": \"10.2.1.0/24\" # TODO: Calculate this value - 
Issue #52\n }\n\n workspace = Workspace(\n id=full_workspace_id,\n displayName=workspace_create.displayName,\n description=workspace_create.description,\n resourceTemplateName=workspace_create.workspaceType,\n resourceTemplateVersion=template_version,\n resourceTemplateParameters=resource_spec_parameters,\n status=Status.NotDeployed\n )\n\n return workspace\n\n def save_workspace(self, workspace: Workspace):\n self.create_item(workspace)\n", "path": "management_api_app/db/repositories/workspaces.py"}, {"content": "from typing import List\nfrom pydantic import BaseModel, Field\n\nfrom models.domain.resource import ResourceType\nfrom models.domain.resource_template import ResourceTemplate, Parameter\n\n\ndef get_sample_workspace_template_object(template_name: str = \"tre-workspace-vanilla\") -> ResourceTemplate:\n return ResourceTemplate(\n id=\"a7a7a7bd-7f4e-4a4e-b970-dc86a6b31dfb\",\n name=template_name,\n description=\"vanilla workspace bundle\",\n version=\"0.1.0\",\n parameters=[\n Parameter(name=\"location\", type=\"string\"),\n Parameter(name=\"tre_id\", type=\"string\"),\n Parameter(name=\"workspace_id\", type=\"string\"),\n Parameter(name=\"address_space\", type=\"string\", default=\"10.2.1.0/24\", description=\"VNet address space for the workspace services\")\n ],\n resourceType=ResourceType.Workspace,\n current=True,\n )\n\n\ndef get_sample_workspace_template() -> dict:\n return get_sample_workspace_template_object().dict()\n\n\nclass WorkspaceTemplateNamesInList(BaseModel):\n templateNames: List[str]\n\n class Config:\n schema_extra = {\n \"example\": {\n \"templateNames\": [\"tre-workspace-vanilla\", \"tre-workspace-base\"]\n }\n }\n\n\nclass WorkspaceTemplateInCreate(BaseModel):\n\n name: str = Field(title=\"Name of workspace template\")\n version: str = Field(title=\"Version of workspace template\")\n description: str = Field(title=\" Description of workspace template\")\n properties: dict = Field({}, title=\"Workspace template properties\",\n description=\"Values for the properties required by the workspace template\")\n resourceType: str = Field(title=\"Type of workspace template\")\n current: bool = Field(title=\"Mark this version as current\")\n\n class Config:\n schema_extra = {\n \"example\": {\n \"name\": \"My workspace template\",\n \"version\": \"0.0.1\",\n \"description\": \"workspace template for great product\",\n \"properties\": {},\n \"resourceType\": \"workspace\",\n \"current\": \"true\"\n }\n }\n\n\nclass WorkspaceTemplateIdInResponse(BaseModel):\n resourceTemplateId: str\n\n\nclass WorkspaceTemplateInResponse(BaseModel):\n workspaceTemplate: ResourceTemplate\n\n class Config:\n schema_extra = {\n \"example\": {\n \"resourceTemplateId\": \"49a7445c-aae6-41ec-a539-30dfa90ab1ae\",\n \"workspaceTemplate\": get_sample_workspace_template()\n }\n }\n", "path": "management_api_app/models/schemas/workspace_template.py"}], "after_files": [{"content": "import uuid\nfrom typing import List\n\nfrom azure.cosmos import CosmosClient\nfrom pydantic import UUID4\n\nfrom core import config\nfrom db.errors import EntityDoesNotExist\nfrom db.repositories.base import BaseRepository\nfrom db.repositories.workspace_templates import WorkspaceTemplateRepository\nfrom models.domain.resource import Status\nfrom models.domain.workspace import Workspace\nfrom models.schemas.workspace import WorkspaceInCreate\n\n\nclass WorkspaceRepository(BaseRepository):\n def __init__(self, client: CosmosClient):\n super().__init__(client, config.STATE_STORE_RESOURCES_CONTAINER)\n\n @staticmethod\n def 
_active_workspaces_query():\n return 'SELECT * FROM c WHERE c.resourceType = \"workspace\" AND c.isDeleted = false'\n\n def _get_template_version(self, template_name):\n workspace_template_repo = WorkspaceTemplateRepository(self._client)\n template = workspace_template_repo.get_current_workspace_template_by_name(template_name)\n return template[\"version\"]\n\n def get_all_active_workspaces(self) -> List[Workspace]:\n query = self._active_workspaces_query()\n return self.query(query=query)\n\n def get_workspace_by_workspace_id(self, workspace_id: UUID4) -> Workspace:\n query = self._active_workspaces_query() + f' AND c.id=\"{workspace_id}\"'\n workspaces = self.query(query=query)\n if not workspaces:\n raise EntityDoesNotExist\n return workspaces[0]\n\n def create_workspace_item(self, workspace_create: WorkspaceInCreate) -> Workspace:\n full_workspace_id = str(uuid.uuid4())\n\n try:\n template_version = self._get_template_version(workspace_create.workspaceType)\n except EntityDoesNotExist:\n raise ValueError(f\"The workspace type '{workspace_create.workspaceType}' does not exist\")\n\n resource_spec_parameters = {\n \"azure_location\": config.RESOURCE_LOCATION,\n \"workspace_id\": full_workspace_id[-4:],\n \"tre_id\": config.TRE_ID,\n \"address_space\": \"10.2.1.0/24\" # TODO: Calculate this value - Issue #52\n }\n\n workspace = Workspace(\n id=full_workspace_id,\n displayName=workspace_create.displayName,\n description=workspace_create.description,\n resourceTemplateName=workspace_create.workspaceType,\n resourceTemplateVersion=template_version,\n resourceTemplateParameters=resource_spec_parameters,\n status=Status.NotDeployed\n )\n\n return workspace\n\n def save_workspace(self, workspace: Workspace):\n self.create_item(workspace)\n", "path": "management_api_app/db/repositories/workspaces.py"}, {"content": "from typing import List\nfrom pydantic import BaseModel, Field\n\nfrom models.domain.resource import ResourceType\nfrom models.domain.resource_template import ResourceTemplate, Parameter\n\n\ndef get_sample_workspace_template_object(template_name: str = \"tre-workspace-vanilla\") -> ResourceTemplate:\n return ResourceTemplate(\n id=\"a7a7a7bd-7f4e-4a4e-b970-dc86a6b31dfb\",\n name=template_name,\n description=\"vanilla workspace bundle\",\n version=\"0.1.0\",\n parameters=[\n Parameter(name=\"azure_location\", type=\"string\"),\n Parameter(name=\"tre_id\", type=\"string\"),\n Parameter(name=\"workspace_id\", type=\"string\"),\n Parameter(name=\"address_space\", type=\"string\", default=\"10.2.1.0/24\", description=\"VNet address space for the workspace services\")\n ],\n resourceType=ResourceType.Workspace,\n current=True,\n )\n\n\ndef get_sample_workspace_template() -> dict:\n return get_sample_workspace_template_object().dict()\n\n\nclass WorkspaceTemplateNamesInList(BaseModel):\n templateNames: List[str]\n\n class Config:\n schema_extra = {\n \"example\": {\n \"templateNames\": [\"tre-workspace-vanilla\", \"tre-workspace-base\"]\n }\n }\n\n\nclass WorkspaceTemplateInCreate(BaseModel):\n\n name: str = Field(title=\"Name of workspace template\")\n version: str = Field(title=\"Version of workspace template\")\n description: str = Field(title=\" Description of workspace template\")\n properties: dict = Field({}, title=\"Workspace template properties\",\n description=\"Values for the properties required by the workspace template\")\n resourceType: str = Field(title=\"Type of workspace template\")\n current: bool = Field(title=\"Mark this version as current\")\n\n class Config:\n 
schema_extra = {\n \"example\": {\n \"name\": \"My workspace template\",\n \"version\": \"0.0.1\",\n \"description\": \"workspace template for great product\",\n \"properties\": {},\n \"resourceType\": \"workspace\",\n \"current\": \"true\"\n }\n }\n\n\nclass WorkspaceTemplateIdInResponse(BaseModel):\n resourceTemplateId: str\n\n\nclass WorkspaceTemplateInResponse(BaseModel):\n workspaceTemplate: ResourceTemplate\n\n class Config:\n schema_extra = {\n \"example\": {\n \"resourceTemplateId\": \"49a7445c-aae6-41ec-a539-30dfa90ab1ae\",\n \"workspaceTemplate\": get_sample_workspace_template()\n }\n }\n", "path": "management_api_app/models/schemas/workspace_template.py"}]} | 1,680 | 327 |