problem_id (stringlengths 18-22) | source (stringclasses: 1 value) | task_type (stringclasses: 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64: 556-4.1k) | num_tokens_diff (int64: 47-1.02k)
---|---|---|---|---|---|---|---|---|
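
Each row below pairs a GitHub issue prompt with the golden diff that resolves it. A minimal sketch for loading and inspecting these rows programmatically, assuming the table corresponds to the Hugging Face dataset `rasdani/github-patches` named in the `source` column (the split name is also an assumption):

```python
import json

from datasets import load_dataset

# Dataset id and split are assumptions taken from the `source` column above;
# adjust them if this table lives under a different id on the Hub.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens_prompt"], row["num_tokens_diff"])
print(row["golden_diff"][:200])

# verification_info is stored as a JSON string containing the golden diff and issue text.
verification = json.loads(row["verification_info"])
print(sorted(verification.keys()))
```
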
gh_patches_debug_35252
|
rasdani/github-patches
|
git_diff
|
pytorch__text-1525
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a `max_words` argument to `build_vocab_from_iterator`
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
[Link to the docs](https://pytorch.org/text/stable/vocab.html?highlight=build%20vocab#torchtext.vocab.build_vocab_from_iterator)
I believe it would be beneficial to limit the number of words you want in your vocabulary with an argument like `max_words`, e.g.:
```
vocab = build_vocab_from_iterator(yield_tokens_batch(file_path), specials=["<unk>"], max_words=50000)
```
**Motivation**
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
This allows a controllable-sized `nn.Embedding`, with rare words being mapped to `<unk>`. Otherwise, it would not be practical to use `build_vocab_from_iterator` for larger datasets.
**Alternatives**
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
Keras and Huggingface's tokenizers would be viable alternatives, but do not nicely integrate with the torchtext ecosystem.
</issue>
<code>
[start of torchtext/vocab/vocab_factory.py]
1 from .vocab import Vocab
2 from typing import Dict, Iterable, Optional, List
3 from collections import Counter, OrderedDict
4 from torchtext._torchtext import (
5 Vocab as VocabPybind,
6 )
7
8
9 def vocab(ordered_dict: Dict, min_freq: int = 1,
10 specials: Optional[List[str]] = None,
11 special_first: bool = True) -> Vocab:
12 r"""Factory method for creating a vocab object which maps tokens to indices.
13
14 Note that the ordering in which key value pairs were inserted in the `ordered_dict` will be respected when building the vocab.
15 Therefore if sorting by token frequency is important to the user, the `ordered_dict` should be created in a way to reflect this.
16
17 Args:
18 ordered_dict: Ordered Dictionary mapping tokens to their corresponding occurance frequencies.
19 min_freq: The minimum frequency needed to include a token in the vocabulary.
20 specials: Special symbols to add. The order of supplied tokens will be preserved.
21 special_first: Indicates whether to insert symbols at the beginning or at the end.
22
23 Returns:
24 torchtext.vocab.Vocab: A `Vocab` object
25
26 Examples:
27 >>> from torchtext.vocab import vocab
28 >>> from collections import Counter, OrderedDict
29 >>> counter = Counter(["a", "a", "b", "b", "b"])
30 >>> sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)
31 >>> ordered_dict = OrderedDict(sorted_by_freq_tuples)
32 >>> v1 = vocab(ordered_dict)
33 >>> print(v1['a']) #prints 1
34 >>> print(v1['out of vocab']) #raise RuntimeError since default index is not set
35 >>> tokens = ['e', 'd', 'c', 'b', 'a']
36 >>> #adding <unk> token and default index
37 >>> unk_token = '<unk>'
38 >>> default_index = -1
39 >>> v2 = vocab(OrderedDict([(token, 1) for token in tokens]), specials=[unk_token])
40 >>> v2.set_default_index(default_index)
41 >>> print(v2['<unk>']) #prints 0
42 >>> print(v2['out of vocab']) #prints -1
43 >>> #make default index same as index of unk_token
44 >>> v2.set_default_index(v2[unk_token])
45 >>> v2['out of vocab'] is v2[unk_token] #prints True
46 """
47 specials = specials or []
48 for token in specials:
49 ordered_dict.pop(token, None)
50
51 tokens = []
52 for token, freq in ordered_dict.items():
53 if freq >= min_freq:
54 tokens.append(token)
55
56 if special_first:
57 tokens[0:0] = specials
58 else:
59 tokens.extend(specials)
60
61 return Vocab(VocabPybind(tokens, None))
62
63
64 def build_vocab_from_iterator(iterator: Iterable, min_freq: int = 1, specials: Optional[List[str]] = None, special_first: bool = True) -> Vocab:
65 """
66 Build a Vocab from an iterator.
67
68 Args:
69 iterator: Iterator used to build Vocab. Must yield list or iterator of tokens.
70 min_freq: The minimum frequency needed to include a token in the vocabulary.
71 specials: Special symbols to add. The order of supplied tokens will be preserved.
72 special_first: Indicates whether to insert symbols at the beginning or at the end.
73
74
75 Returns:
76 torchtext.vocab.Vocab: A `Vocab` object
77
78 Examples:
79 >>> #generating vocab from text file
80 >>> import io
81 >>> from torchtext.vocab import build_vocab_from_iterator
82 >>> def yield_tokens(file_path):
83 >>> with io.open(file_path, encoding = 'utf-8') as f:
84 >>> for line in f:
85 >>> yield line.strip().split()
86 >>> vocab = build_vocab_from_iterator(yield_tokens_batch(file_path), specials=["<unk>"])
87 """
88
89 counter = Counter()
90 for tokens in iterator:
91 counter.update(tokens)
92
93 sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[0])
94 sorted_by_freq_tuples.sort(key=lambda x: x[1], reverse=True)
95 ordered_dict = OrderedDict(sorted_by_freq_tuples)
96
97 word_vocab = vocab(ordered_dict, min_freq=min_freq, specials=specials or [],
98 special_first=special_first)
99 return word_vocab
100
[end of torchtext/vocab/vocab_factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchtext/vocab/vocab_factory.py b/torchtext/vocab/vocab_factory.py
--- a/torchtext/vocab/vocab_factory.py
+++ b/torchtext/vocab/vocab_factory.py
@@ -49,6 +49,7 @@
ordered_dict.pop(token, None)
tokens = []
+ # Save room for special tokens
for token, freq in ordered_dict.items():
if freq >= min_freq:
tokens.append(token)
@@ -61,7 +62,7 @@
return Vocab(VocabPybind(tokens, None))
-def build_vocab_from_iterator(iterator: Iterable, min_freq: int = 1, specials: Optional[List[str]] = None, special_first: bool = True) -> Vocab:
+def build_vocab_from_iterator(iterator: Iterable, min_freq: int = 1, specials: Optional[List[str]] = None, special_first: bool = True, max_tokens: Optional[int] = None) -> Vocab:
"""
Build a Vocab from an iterator.
@@ -70,6 +71,7 @@
min_freq: The minimum frequency needed to include a token in the vocabulary.
specials: Special symbols to add. The order of supplied tokens will be preserved.
special_first: Indicates whether to insert symbols at the beginning or at the end.
+ max_tokens: If provided, creates the vocab from the `max_tokens - len(specials)` most frequent tokens.
Returns:
@@ -90,10 +92,16 @@
for tokens in iterator:
counter.update(tokens)
- sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[0])
- sorted_by_freq_tuples.sort(key=lambda x: x[1], reverse=True)
- ordered_dict = OrderedDict(sorted_by_freq_tuples)
+ specials = specials or []
+
+ # First sort by descending frequency, then lexicographically
+ sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: (-x[1], x[0]))
+
+ if max_tokens is None:
+ ordered_dict = OrderedDict(sorted_by_freq_tuples)
+ else:
+ assert len(specials) < max_tokens, "len(specials) >= max_tokens, so the vocab will be entirely special tokens."
+ ordered_dict = OrderedDict(sorted_by_freq_tuples[:max_tokens - len(specials)])
- word_vocab = vocab(ordered_dict, min_freq=min_freq, specials=specials or [],
- special_first=special_first)
+ word_vocab = vocab(ordered_dict, min_freq=min_freq, specials=specials, special_first=special_first)
return word_vocab
|
{"golden_diff": "diff --git a/torchtext/vocab/vocab_factory.py b/torchtext/vocab/vocab_factory.py\n--- a/torchtext/vocab/vocab_factory.py\n+++ b/torchtext/vocab/vocab_factory.py\n@@ -49,6 +49,7 @@\n ordered_dict.pop(token, None)\n \n tokens = []\n+ # Save room for special tokens\n for token, freq in ordered_dict.items():\n if freq >= min_freq:\n tokens.append(token)\n@@ -61,7 +62,7 @@\n return Vocab(VocabPybind(tokens, None))\n \n \n-def build_vocab_from_iterator(iterator: Iterable, min_freq: int = 1, specials: Optional[List[str]] = None, special_first: bool = True) -> Vocab:\n+def build_vocab_from_iterator(iterator: Iterable, min_freq: int = 1, specials: Optional[List[str]] = None, special_first: bool = True, max_tokens: Optional[int] = None) -> Vocab:\n \"\"\"\n Build a Vocab from an iterator.\n \n@@ -70,6 +71,7 @@\n min_freq: The minimum frequency needed to include a token in the vocabulary.\n specials: Special symbols to add. The order of supplied tokens will be preserved.\n special_first: Indicates whether to insert symbols at the beginning or at the end.\n+ max_tokens: If provided, creates the vocab from the `max_tokens - len(specials)` most frequent tokens.\n \n \n Returns:\n@@ -90,10 +92,16 @@\n for tokens in iterator:\n counter.update(tokens)\n \n- sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[0])\n- sorted_by_freq_tuples.sort(key=lambda x: x[1], reverse=True)\n- ordered_dict = OrderedDict(sorted_by_freq_tuples)\n+ specials = specials or []\n+\n+ # First sort by descending frequency, then lexicographically\n+ sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: (-x[1], x[0]))\n+\n+ if max_tokens is None:\n+ ordered_dict = OrderedDict(sorted_by_freq_tuples)\n+ else:\n+ assert len(specials) < max_tokens, \"len(specials) >= max_tokens, so the vocab will be entirely special tokens.\"\n+ ordered_dict = OrderedDict(sorted_by_freq_tuples[:max_tokens - len(specials)])\n \n- word_vocab = vocab(ordered_dict, min_freq=min_freq, specials=specials or [],\n- special_first=special_first)\n+ word_vocab = vocab(ordered_dict, min_freq=min_freq, specials=specials, special_first=special_first)\n return word_vocab\n", "issue": "Add a `max_words` argument to `build_vocab_from_iterator`\n## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\n\r\n[Link to the docs](https://pytorch.org/text/stable/vocab.html?highlight=build%20vocab#torchtext.vocab.build_vocab_from_iterator)\r\n\r\nI believe it would be beneficial to limit the number of words you want in your vocabulary with an argument like `max_words`, e.g.:\r\n```\r\nvocab = build_vocab_from_iterator(yield_tokens_batch(file_path), specials=[\"<unk>\"], max_words=50000)\r\n```\r\n\r\n**Motivation**\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\n\r\n\r\nThis allows a controllable-sized `nn.Embedding`, with rare words being mapped to `<unk>`. Otherwise, it would not be practical to use `build_vocab_from_iterator` for larger datasets.\r\n\r\n\r\n**Alternatives**\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. 
-->\r\n\r\nKeras and Huggingface's tokenizers would be viable alternatives, but do not nicely integrate with the torchtext ecosystem.\r\n\r\n\n", "before_files": [{"content": "from .vocab import Vocab\nfrom typing import Dict, Iterable, Optional, List\nfrom collections import Counter, OrderedDict\nfrom torchtext._torchtext import (\n Vocab as VocabPybind,\n)\n\n\ndef vocab(ordered_dict: Dict, min_freq: int = 1,\n specials: Optional[List[str]] = None,\n special_first: bool = True) -> Vocab:\n r\"\"\"Factory method for creating a vocab object which maps tokens to indices.\n\n Note that the ordering in which key value pairs were inserted in the `ordered_dict` will be respected when building the vocab.\n Therefore if sorting by token frequency is important to the user, the `ordered_dict` should be created in a way to reflect this.\n\n Args:\n ordered_dict: Ordered Dictionary mapping tokens to their corresponding occurance frequencies.\n min_freq: The minimum frequency needed to include a token in the vocabulary.\n specials: Special symbols to add. The order of supplied tokens will be preserved.\n special_first: Indicates whether to insert symbols at the beginning or at the end.\n\n Returns:\n torchtext.vocab.Vocab: A `Vocab` object\n\n Examples:\n >>> from torchtext.vocab import vocab\n >>> from collections import Counter, OrderedDict\n >>> counter = Counter([\"a\", \"a\", \"b\", \"b\", \"b\"])\n >>> sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)\n >>> ordered_dict = OrderedDict(sorted_by_freq_tuples)\n >>> v1 = vocab(ordered_dict)\n >>> print(v1['a']) #prints 1\n >>> print(v1['out of vocab']) #raise RuntimeError since default index is not set\n >>> tokens = ['e', 'd', 'c', 'b', 'a']\n >>> #adding <unk> token and default index\n >>> unk_token = '<unk>'\n >>> default_index = -1\n >>> v2 = vocab(OrderedDict([(token, 1) for token in tokens]), specials=[unk_token])\n >>> v2.set_default_index(default_index)\n >>> print(v2['<unk>']) #prints 0\n >>> print(v2['out of vocab']) #prints -1\n >>> #make default index same as index of unk_token\n >>> v2.set_default_index(v2[unk_token])\n >>> v2['out of vocab'] is v2[unk_token] #prints True\n \"\"\"\n specials = specials or []\n for token in specials:\n ordered_dict.pop(token, None)\n\n tokens = []\n for token, freq in ordered_dict.items():\n if freq >= min_freq:\n tokens.append(token)\n\n if special_first:\n tokens[0:0] = specials\n else:\n tokens.extend(specials)\n\n return Vocab(VocabPybind(tokens, None))\n\n\ndef build_vocab_from_iterator(iterator: Iterable, min_freq: int = 1, specials: Optional[List[str]] = None, special_first: bool = True) -> Vocab:\n \"\"\"\n Build a Vocab from an iterator.\n\n Args:\n iterator: Iterator used to build Vocab. Must yield list or iterator of tokens.\n min_freq: The minimum frequency needed to include a token in the vocabulary.\n specials: Special symbols to add. 
The order of supplied tokens will be preserved.\n special_first: Indicates whether to insert symbols at the beginning or at the end.\n\n\n Returns:\n torchtext.vocab.Vocab: A `Vocab` object\n\n Examples:\n >>> #generating vocab from text file\n >>> import io\n >>> from torchtext.vocab import build_vocab_from_iterator\n >>> def yield_tokens(file_path):\n >>> with io.open(file_path, encoding = 'utf-8') as f:\n >>> for line in f:\n >>> yield line.strip().split()\n >>> vocab = build_vocab_from_iterator(yield_tokens_batch(file_path), specials=[\"<unk>\"])\n \"\"\"\n\n counter = Counter()\n for tokens in iterator:\n counter.update(tokens)\n\n sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[0])\n sorted_by_freq_tuples.sort(key=lambda x: x[1], reverse=True)\n ordered_dict = OrderedDict(sorted_by_freq_tuples)\n\n word_vocab = vocab(ordered_dict, min_freq=min_freq, specials=specials or [],\n special_first=special_first)\n return word_vocab\n", "path": "torchtext/vocab/vocab_factory.py"}]}
| 1,940 | 572 |
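
The golden diff above caps the vocabulary size before the ordered dictionary is handed to `vocab()`. A standalone sketch of the same truncation logic, independent of torchtext (the function name and example data are illustrative):

```python
from collections import Counter, OrderedDict

def truncated_ordered_dict(token_iter, specials, max_tokens=None):
    counter = Counter()
    for tokens in token_iter:
        counter.update(tokens)
    # Sort by descending frequency, then lexicographically, mirroring the patch's key (-freq, token).
    sorted_pairs = sorted(counter.items(), key=lambda x: (-x[1], x[0]))
    if max_tokens is None:
        return OrderedDict(sorted_pairs)
    # Reserve room for the special symbols so the final vocab stays within max_tokens.
    assert len(specials) < max_tokens, "max_tokens must exceed the number of specials"
    return OrderedDict(sorted_pairs[: max_tokens - len(specials)])

data = [["a", "a", "b"], ["b", "b", "c", "d"]]
print(truncated_ordered_dict(data, ["<unk>"], max_tokens=3))
# -> OrderedDict([('b', 3), ('a', 2)]); the "<unk>" special is then prepended by vocab()
```
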
gh_patches_debug_20842
|
rasdani/github-patches
|
git_diff
|
napari__napari-2398
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Settings manager may need to handle edge case where loaded data is None
## 🐛 Bug
Looks like the settings manager `_load` method may need to handle the case where `safe_load` returns `None`. I don't yet have a reproducible example... but I'm working on some stuff that is crashing napari a lot :joy:, so maybe settings aren't getting written correctly at close? and during one of my runs I got this traceback:
```pytb
File "/Users/talley/Desktop/t.py", line 45, in <module>
import napari
File "/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/__init__.py", line 22, in <module>
from ._event_loop import gui_qt, run
File "/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/_event_loop.py", line 2, in <module>
from ._qt.qt_event_loop import gui_qt, run
File "/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/_qt/__init__.py", line 41, in <module>
from .qt_main_window import Window
File "/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/_qt/qt_main_window.py", line 30, in <module>
from ..utils.settings import SETTINGS
File "/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/utils/settings/__init__.py", line 5, in <module>
from ._manager import SETTINGS
File "/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/utils/settings/_manager.py", line 177, in <module>
SETTINGS = SettingsManager()
File "/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/utils/settings/_manager.py", line 66, in __init__
self._load()
File "/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/utils/settings/_manager.py", line 115, in _load
for section, model_data in data.items():
AttributeError: 'NoneType' object has no attribute 'items'
```
</issue>
<code>
[start of napari/utils/settings/_manager.py]
1 """Settings management.
2 """
3
4 import os
5 from pathlib import Path
6
7 from appdirs import user_config_dir
8 from pydantic import ValidationError
9 from yaml import safe_dump, safe_load
10
11 from ._defaults import CORE_SETTINGS, ApplicationSettings, PluginSettings
12
13
14 class SettingsManager:
15 """
16 Napari settings manager using evented SettingsModels.
17
18 This provides the presistence layer for the application settings.
19
20 Parameters
21 ----------
22 config_path : str, optional
23 Provide the base folder to store napari configuration. Default is None,
24 which will point to user config provided by `appdirs`.
25 save_to_disk : bool, optional
26 Persist settings on disk. Default is True.
27
28 Notes
29 -----
30 The settings manager will create a new user configuration folder which is
31 provided by `appdirs` in a cross platform manner. On the first startup a
32 new configuration file will be created using the default values defined by
33 the `CORE_SETTINGS` models.
34
35 If a configuration file is found in the specified location, it will be
36 loaded by the `_load` method. On configuration load the following checks
37 are performed:
38
39 - If invalid sections are found, these will be removed from the file.
40 - If invalid keys are found within a valid section, these will be removed
41 from the file.
42 - If invalid values are found within valid sections and valid keys, these
43 will be replaced by the default value provided by `CORE_SETTINGS`
44 models.
45 """
46
47 _FILENAME = "settings.yaml"
48 _APPNAME = "Napari"
49 _APPAUTHOR = "Napari"
50 application: ApplicationSettings
51 plugin: PluginSettings
52
53 def __init__(self, config_path: str = None, save_to_disk: bool = True):
54 self._config_path = (
55 Path(user_config_dir(self._APPNAME, self._APPAUTHOR))
56 if config_path is None
57 else Path(config_path)
58 )
59 self._save_to_disk = save_to_disk
60 self._settings = {}
61 self._defaults = {}
62 self._models = {}
63 self._plugins = []
64
65 if not self._config_path.is_dir():
66 os.makedirs(self._config_path)
67
68 self._load()
69
70 def __getattr__(self, attr):
71 if attr in self._settings:
72 return self._settings[attr]
73
74 def __dir__(self):
75 """Add setting keys to make tab completion works."""
76 return super().__dir__() + list(self._settings)
77
78 @staticmethod
79 def _get_section_name(settings) -> str:
80 """
81 Return the normalized name of a section based on its config title.
82 """
83 section = settings.Config.title.replace(" ", "_").lower()
84 if section.endswith("_settings"):
85 section = section.replace("_settings", "")
86
87 return section
88
89 def _to_dict(self) -> dict:
90 """Convert the settings to a dictionary."""
91 data = {}
92 for section, model in self._settings.items():
93 data[section] = model.dict()
94
95 return data
96
97 def _save(self):
98 """Save configuration to disk."""
99 if self._save_to_disk:
100 path = self.path / self._FILENAME
101 with open(path, "w") as fh:
102 fh.write(safe_dump(self._to_dict()))
103
104 def _load(self):
105 """Read configuration from disk."""
106 path = self.path / self._FILENAME
107 for plugin in CORE_SETTINGS:
108 section = self._get_section_name(plugin)
109 self._defaults[section] = plugin()
110 self._models[section] = plugin
111
112 if path.is_file():
113 with open(path) as fh:
114 data = safe_load(fh.read())
115
116 # Check with models
117 for section, model_data in data.items():
118 try:
119 model = self._models[section](**model_data)
120 model.events.connect(lambda x: self._save())
121 self._settings[section] = model
122 except KeyError:
123 pass
124 except ValidationError as e:
125 # Handle extra fields
126 model_data_replace = {}
127 for error in e.errors():
128 # Grab the first error entry
129 item = error["loc"][0]
130 try:
131 model_data_replace[item] = getattr(
132 self._defaults[section], item
133 )
134 except AttributeError:
135 model_data.pop(item)
136
137 model_data.update(model_data_replace)
138 model = self._models[section](**model_data)
139 model.events.connect(lambda x: self._save())
140 self._settings[section] = model
141 else:
142 self._settings = self._defaults
143
144 self._save()
145
146 @property
147 def path(self):
148 return self._config_path
149
150 def reset(self):
151 """Reset settings to default values."""
152 for section in self._settings:
153 self._settings[section] = self._models[section]()
154
155 self._save()
156
157 def schemas(self) -> dict:
158 """Return the json schema for each of the settings model."""
159 schemas = {}
160 for section, settings in self._settings.items():
161 schemas[section] = {
162 "json_schema": settings.schema_json(),
163 "model": settings,
164 }
165
166 return schemas
167
168 def register_plugin(self, plugin):
169 """Register plugin settings with the settings manager.
170
171 Parameters
172 ----------
173 plugin:
174 The napari plugin that may or may not provide settings.
175 """
176 self._plugins.append(plugin)
177
178
179 SETTINGS = SettingsManager()
180
[end of napari/utils/settings/_manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/napari/utils/settings/_manager.py b/napari/utils/settings/_manager.py
--- a/napari/utils/settings/_manager.py
+++ b/napari/utils/settings/_manager.py
@@ -108,10 +108,11 @@
section = self._get_section_name(plugin)
self._defaults[section] = plugin()
self._models[section] = plugin
+ self._settings[section] = plugin()
if path.is_file():
with open(path) as fh:
- data = safe_load(fh.read())
+ data = safe_load(fh.read()) or {}
# Check with models
for section, model_data in data.items():
@@ -138,8 +139,6 @@
model = self._models[section](**model_data)
model.events.connect(lambda x: self._save())
self._settings[section] = model
- else:
- self._settings = self._defaults
self._save()
|
{"golden_diff": "diff --git a/napari/utils/settings/_manager.py b/napari/utils/settings/_manager.py\n--- a/napari/utils/settings/_manager.py\n+++ b/napari/utils/settings/_manager.py\n@@ -108,10 +108,11 @@\n section = self._get_section_name(plugin)\n self._defaults[section] = plugin()\n self._models[section] = plugin\n+ self._settings[section] = plugin()\n \n if path.is_file():\n with open(path) as fh:\n- data = safe_load(fh.read())\n+ data = safe_load(fh.read()) or {}\n \n # Check with models\n for section, model_data in data.items():\n@@ -138,8 +139,6 @@\n model = self._models[section](**model_data)\n model.events.connect(lambda x: self._save())\n self._settings[section] = model\n- else:\n- self._settings = self._defaults\n \n self._save()\n", "issue": "Settings manager may need to handle edge case where loaded data is None\n## \ud83d\udc1b Bug\r\nLooks like the settings manager `_load` method may need to handle the case where `safe_load` returns `None`. I don't yet have a reproducible example... but I'm working on some stuff that is crashing napari a lot :joy:, so maybe settings aren't getting written correctly at close? and during one of my runs I got this traceback:\r\n\r\n```pytb\r\n File \"/Users/talley/Desktop/t.py\", line 45, in <module>\r\n import napari\r\n File \"/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/__init__.py\", line 22, in <module>\r\n from ._event_loop import gui_qt, run\r\n File \"/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/_event_loop.py\", line 2, in <module>\r\n from ._qt.qt_event_loop import gui_qt, run\r\n File \"/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/_qt/__init__.py\", line 41, in <module>\r\n from .qt_main_window import Window\r\n File \"/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/_qt/qt_main_window.py\", line 30, in <module>\r\n from ..utils.settings import SETTINGS\r\n File \"/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/utils/settings/__init__.py\", line 5, in <module>\r\n from ._manager import SETTINGS\r\n File \"/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/utils/settings/_manager.py\", line 177, in <module>\r\n SETTINGS = SettingsManager()\r\n File \"/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/utils/settings/_manager.py\", line 66, in __init__\r\n self._load()\r\n File \"/Users/talley/Dropbox (HMS)/Python/forks/napari/napari/utils/settings/_manager.py\", line 115, in _load\r\n for section, model_data in data.items():\r\nAttributeError: 'NoneType' object has no attribute 'items'\r\n```\n", "before_files": [{"content": "\"\"\"Settings management.\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom appdirs import user_config_dir\nfrom pydantic import ValidationError\nfrom yaml import safe_dump, safe_load\n\nfrom ._defaults import CORE_SETTINGS, ApplicationSettings, PluginSettings\n\n\nclass SettingsManager:\n \"\"\"\n Napari settings manager using evented SettingsModels.\n\n This provides the presistence layer for the application settings.\n\n Parameters\n ----------\n config_path : str, optional\n Provide the base folder to store napari configuration. Default is None,\n which will point to user config provided by `appdirs`.\n save_to_disk : bool, optional\n Persist settings on disk. Default is True.\n\n Notes\n -----\n The settings manager will create a new user configuration folder which is\n provided by `appdirs` in a cross platform manner. 
On the first startup a\n new configuration file will be created using the default values defined by\n the `CORE_SETTINGS` models.\n\n If a configuration file is found in the specified location, it will be\n loaded by the `_load` method. On configuration load the following checks\n are performed:\n\n - If invalid sections are found, these will be removed from the file.\n - If invalid keys are found within a valid section, these will be removed\n from the file.\n - If invalid values are found within valid sections and valid keys, these\n will be replaced by the default value provided by `CORE_SETTINGS`\n models.\n \"\"\"\n\n _FILENAME = \"settings.yaml\"\n _APPNAME = \"Napari\"\n _APPAUTHOR = \"Napari\"\n application: ApplicationSettings\n plugin: PluginSettings\n\n def __init__(self, config_path: str = None, save_to_disk: bool = True):\n self._config_path = (\n Path(user_config_dir(self._APPNAME, self._APPAUTHOR))\n if config_path is None\n else Path(config_path)\n )\n self._save_to_disk = save_to_disk\n self._settings = {}\n self._defaults = {}\n self._models = {}\n self._plugins = []\n\n if not self._config_path.is_dir():\n os.makedirs(self._config_path)\n\n self._load()\n\n def __getattr__(self, attr):\n if attr in self._settings:\n return self._settings[attr]\n\n def __dir__(self):\n \"\"\"Add setting keys to make tab completion works.\"\"\"\n return super().__dir__() + list(self._settings)\n\n @staticmethod\n def _get_section_name(settings) -> str:\n \"\"\"\n Return the normalized name of a section based on its config title.\n \"\"\"\n section = settings.Config.title.replace(\" \", \"_\").lower()\n if section.endswith(\"_settings\"):\n section = section.replace(\"_settings\", \"\")\n\n return section\n\n def _to_dict(self) -> dict:\n \"\"\"Convert the settings to a dictionary.\"\"\"\n data = {}\n for section, model in self._settings.items():\n data[section] = model.dict()\n\n return data\n\n def _save(self):\n \"\"\"Save configuration to disk.\"\"\"\n if self._save_to_disk:\n path = self.path / self._FILENAME\n with open(path, \"w\") as fh:\n fh.write(safe_dump(self._to_dict()))\n\n def _load(self):\n \"\"\"Read configuration from disk.\"\"\"\n path = self.path / self._FILENAME\n for plugin in CORE_SETTINGS:\n section = self._get_section_name(plugin)\n self._defaults[section] = plugin()\n self._models[section] = plugin\n\n if path.is_file():\n with open(path) as fh:\n data = safe_load(fh.read())\n\n # Check with models\n for section, model_data in data.items():\n try:\n model = self._models[section](**model_data)\n model.events.connect(lambda x: self._save())\n self._settings[section] = model\n except KeyError:\n pass\n except ValidationError as e:\n # Handle extra fields\n model_data_replace = {}\n for error in e.errors():\n # Grab the first error entry\n item = error[\"loc\"][0]\n try:\n model_data_replace[item] = getattr(\n self._defaults[section], item\n )\n except AttributeError:\n model_data.pop(item)\n\n model_data.update(model_data_replace)\n model = self._models[section](**model_data)\n model.events.connect(lambda x: self._save())\n self._settings[section] = model\n else:\n self._settings = self._defaults\n\n self._save()\n\n @property\n def path(self):\n return self._config_path\n\n def reset(self):\n \"\"\"Reset settings to default values.\"\"\"\n for section in self._settings:\n self._settings[section] = self._models[section]()\n\n self._save()\n\n def schemas(self) -> dict:\n \"\"\"Return the json schema for each of the settings model.\"\"\"\n schemas = {}\n for section, 
settings in self._settings.items():\n schemas[section] = {\n \"json_schema\": settings.schema_json(),\n \"model\": settings,\n }\n\n return schemas\n\n def register_plugin(self, plugin):\n \"\"\"Register plugin settings with the settings manager.\n\n Parameters\n ----------\n plugin:\n The napari plugin that may or may not provide settings.\n \"\"\"\n self._plugins.append(plugin)\n\n\nSETTINGS = SettingsManager()\n", "path": "napari/utils/settings/_manager.py"}]}
| 2,640 | 215 |
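
The napari fix above treats an empty settings file the same as a missing one: `safe_load` returns `None` for empty input, so the patch falls back to an empty dict and pre-populates each section with defaults. A self-contained sketch of that guard (the file name and return shape are illustrative, not napari's API):

```python
from pathlib import Path

from yaml import safe_load

def read_settings(path: Path) -> dict:
    """Return the parsed settings mapping, or {} when the file is missing or empty."""
    if not path.is_file():
        return {}
    with open(path) as fh:
        # safe_load returns None for an empty or whitespace-only file; "or {}" keeps
        # the downstream `.items()` iteration from raising AttributeError.
        return safe_load(fh.read()) or {}

for section, model_data in read_settings(Path("settings.yaml")).items():
    print(section, model_data)
```
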
gh_patches_debug_11496
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-9498
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Setting external methods should throw errors when the method does not exist
### What I'm trying to achieve
I'm setting an external method that does not exist
### Steps to reproduce the problem
I base64 encoded `app:1234:some-id` that was never a real id external method id:
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/2566928/154252619-496c1b91-ca79-4fe8-bc1d-abcd0cbb743c.png">
There is no error, but the delivery method is still null.
### What I expected to happen
I would expect an error response, rather than noop.
### Screenshots and logs
<!-- If applicable, add screenshots to help explain your problem. -->
**System information**
<!-- Provide the version of Saleor or whether you're using it from the `main` branch. If using Saleor Dashboard or Storefront, provide their versions too. -->
Saleor version:
- [ ] dev (current main)
- [ ] 3.0
- [ ] 2.11
- [ ] 2.10
Operating system:
- [ ] Windows
- [ ] Linux
- [ ] MacOS
- [ ] Other
Setting external methods should throw errors when the method does not exist
### What I'm trying to achieve
I'm setting an external method that does not exist
### Steps to reproduce the problem
I base64 encoded `app:1234:some-id` that was never a real id external method id:
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/2566928/154252619-496c1b91-ca79-4fe8-bc1d-abcd0cbb743c.png">
There is no error, but the delivery method is still null.
### What I expected to happen
I would expect an error response, rather than noop.
### Screenshots and logs
<!-- If applicable, add screenshots to help explain your problem. -->
**System information**
<!-- Provide the version of Saleor or whether you're using it from the `main` branch. If using Saleor Dashboard or Storefront, provide their versions too. -->
Saleor version:
- [ ] dev (current main)
- [ ] 3.0
- [ ] 2.11
- [ ] 2.10
Operating system:
- [ ] Windows
- [ ] Linux
- [ ] MacOS
- [ ] Other
</issue>
<code>
[start of saleor/graphql/checkout/mutations/checkout_delivery_method_update.py]
1 from typing import Optional
2
3 import graphene
4 from django.core.exceptions import ValidationError
5
6 from ....checkout.error_codes import CheckoutErrorCode
7 from ....checkout.fetch import fetch_checkout_info, fetch_checkout_lines
8 from ....checkout.utils import (
9 delete_external_shipping_id,
10 is_shipping_required,
11 recalculate_checkout_discount,
12 set_external_shipping_id,
13 )
14 from ....plugins.webhook.utils import APP_ID_PREFIX
15 from ....shipping import interface as shipping_interface
16 from ....shipping import models as shipping_models
17 from ....shipping.utils import convert_to_shipping_method_data
18 from ....warehouse import models as warehouse_models
19 from ...core.descriptions import ADDED_IN_31, PREVIEW_FEATURE
20 from ...core.mutations import BaseMutation
21 from ...core.scalars import UUID
22 from ...core.types import CheckoutError
23 from ...core.utils import from_global_id_or_error
24 from ...shipping.types import ShippingMethod
25 from ...warehouse.types import Warehouse
26 from ..types import Checkout
27 from .utils import ERROR_DOES_NOT_SHIP, clean_delivery_method, get_checkout_by_token
28
29
30 class CheckoutDeliveryMethodUpdate(BaseMutation):
31 checkout = graphene.Field(Checkout, description="An updated checkout.")
32
33 class Arguments:
34 token = UUID(description="Checkout token.", required=False)
35 delivery_method_id = graphene.ID(
36 description="Delivery Method ID (`Warehouse` ID or `ShippingMethod` ID).",
37 required=False,
38 )
39
40 class Meta:
41 description = (
42 f"{ADDED_IN_31} Updates the delivery method "
43 f"(shipping method or pick up point) of the checkout. {PREVIEW_FEATURE}"
44 )
45 error_type_class = CheckoutError
46
47 @classmethod
48 def perform_on_shipping_method(
49 cls, info, shipping_method_id, checkout_info, lines, checkout, manager
50 ):
51 shipping_method = cls.get_node_or_error(
52 info,
53 shipping_method_id,
54 only_type=ShippingMethod,
55 field="delivery_method_id",
56 qs=shipping_models.ShippingMethod.objects.prefetch_related(
57 "postal_code_rules"
58 ),
59 )
60
61 delivery_method = convert_to_shipping_method_data(
62 shipping_method,
63 shipping_models.ShippingMethodChannelListing.objects.filter(
64 shipping_method=shipping_method,
65 channel=checkout_info.channel,
66 ).first(),
67 )
68 cls._check_delivery_method(
69 checkout_info, lines, shipping_method=delivery_method, collection_point=None
70 )
71
72 cls._update_delivery_method(
73 manager,
74 checkout,
75 shipping_method=shipping_method,
76 external_shipping_method=None,
77 collection_point=None,
78 )
79 recalculate_checkout_discount(
80 manager, checkout_info, lines, info.context.discounts
81 )
82 return CheckoutDeliveryMethodUpdate(checkout=checkout)
83
84 @classmethod
85 def perform_on_external_shipping_method(
86 cls, info, shipping_method_id, checkout_info, lines, checkout, manager
87 ):
88 delivery_method = manager.get_shipping_method(
89 checkout=checkout,
90 channel_slug=checkout.channel.slug,
91 shipping_method_id=shipping_method_id,
92 )
93
94 cls._check_delivery_method(
95 checkout_info, lines, shipping_method=delivery_method, collection_point=None
96 )
97
98 cls._update_delivery_method(
99 manager,
100 checkout,
101 shipping_method=None,
102 external_shipping_method=delivery_method,
103 collection_point=None,
104 )
105 recalculate_checkout_discount(
106 manager, checkout_info, lines, info.context.discounts
107 )
108 return CheckoutDeliveryMethodUpdate(checkout=checkout)
109
110 @classmethod
111 def perform_on_collection_point(
112 cls, info, collection_point_id, checkout_info, lines, checkout, manager
113 ):
114 collection_point = cls.get_node_or_error(
115 info,
116 collection_point_id,
117 only_type=Warehouse,
118 field="delivery_method_id",
119 qs=warehouse_models.Warehouse.objects.select_related("address"),
120 )
121 cls._check_delivery_method(
122 checkout_info,
123 lines,
124 shipping_method=None,
125 collection_point=collection_point,
126 )
127 cls._update_delivery_method(
128 manager,
129 checkout,
130 shipping_method=None,
131 external_shipping_method=None,
132 collection_point=collection_point,
133 )
134 return CheckoutDeliveryMethodUpdate(checkout=checkout)
135
136 @staticmethod
137 def _check_delivery_method(
138 checkout_info,
139 lines,
140 *,
141 shipping_method: Optional[shipping_interface.ShippingMethodData],
142 collection_point: Optional[Warehouse]
143 ) -> None:
144 delivery_method = shipping_method
145 error_msg = "This shipping method is not applicable."
146
147 if collection_point is not None:
148 delivery_method = collection_point
149 error_msg = "This pick up point is not applicable."
150
151 delivery_method_is_valid = clean_delivery_method(
152 checkout_info=checkout_info, lines=lines, method=delivery_method
153 )
154 if not delivery_method_is_valid:
155 raise ValidationError(
156 {
157 "delivery_method_id": ValidationError(
158 error_msg,
159 code=CheckoutErrorCode.DELIVERY_METHOD_NOT_APPLICABLE.value,
160 )
161 }
162 )
163
164 @staticmethod
165 def _update_delivery_method(
166 manager,
167 checkout: Checkout,
168 *,
169 shipping_method: Optional[ShippingMethod],
170 external_shipping_method: Optional[shipping_interface.ShippingMethodData],
171 collection_point: Optional[Warehouse]
172 ) -> None:
173 if external_shipping_method:
174 set_external_shipping_id(
175 checkout=checkout, app_shipping_id=external_shipping_method.id
176 )
177 else:
178 delete_external_shipping_id(checkout=checkout)
179 checkout.shipping_method = shipping_method
180 checkout.collection_point = collection_point
181 checkout.save(
182 update_fields=[
183 "private_metadata",
184 "shipping_method",
185 "collection_point",
186 "last_change",
187 ]
188 )
189 manager.checkout_updated(checkout)
190
191 @staticmethod
192 def _resolve_delivery_method_type(id_) -> Optional[str]:
193 if id_ is None:
194 return None
195
196 possible_types = ("Warehouse", "ShippingMethod", APP_ID_PREFIX)
197 type_, id_ = from_global_id_or_error(id_)
198 str_type = str(type_)
199
200 if str_type not in possible_types:
201 raise ValidationError(
202 {
203 "delivery_method_id": ValidationError(
204 "ID does not belong to Warehouse or ShippingMethod",
205 code=CheckoutErrorCode.INVALID.value,
206 )
207 }
208 )
209
210 return str_type
211
212 @classmethod
213 def perform_mutation(
214 cls,
215 _,
216 info,
217 token,
218 delivery_method_id=None,
219 ):
220
221 checkout = get_checkout_by_token(token)
222
223 manager = info.context.plugins
224 lines, unavailable_variant_pks = fetch_checkout_lines(checkout)
225 if unavailable_variant_pks:
226 not_available_variants_ids = {
227 graphene.Node.to_global_id("ProductVariant", pk)
228 for pk in unavailable_variant_pks
229 }
230 raise ValidationError(
231 {
232 "lines": ValidationError(
233 "Some of the checkout lines variants are unavailable.",
234 code=CheckoutErrorCode.UNAVAILABLE_VARIANT_IN_CHANNEL.value,
235 params={"variants": not_available_variants_ids},
236 )
237 }
238 )
239
240 if not is_shipping_required(lines):
241 raise ValidationError(
242 {
243 "delivery_method": ValidationError(
244 ERROR_DOES_NOT_SHIP,
245 code=CheckoutErrorCode.SHIPPING_NOT_REQUIRED,
246 )
247 }
248 )
249 type_name = cls._resolve_delivery_method_type(delivery_method_id)
250
251 checkout_info = fetch_checkout_info(
252 checkout, lines, info.context.discounts, manager
253 )
254 if type_name == "Warehouse":
255 return cls.perform_on_collection_point(
256 info, delivery_method_id, checkout_info, lines, checkout, manager
257 )
258 if type_name == "ShippingMethod":
259 return cls.perform_on_shipping_method(
260 info, delivery_method_id, checkout_info, lines, checkout, manager
261 )
262 return cls.perform_on_external_shipping_method(
263 info, delivery_method_id, checkout_info, lines, checkout, manager
264 )
265
[end of saleor/graphql/checkout/mutations/checkout_delivery_method_update.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/saleor/graphql/checkout/mutations/checkout_delivery_method_update.py b/saleor/graphql/checkout/mutations/checkout_delivery_method_update.py
--- a/saleor/graphql/checkout/mutations/checkout_delivery_method_update.py
+++ b/saleor/graphql/checkout/mutations/checkout_delivery_method_update.py
@@ -91,6 +91,16 @@
shipping_method_id=shipping_method_id,
)
+ if delivery_method is None and shipping_method_id:
+ raise ValidationError(
+ {
+ "delivery_method_id": ValidationError(
+ f"Couldn't resolve to a node: ${shipping_method_id}",
+ code=CheckoutErrorCode.NOT_FOUND,
+ )
+ }
+ )
+
cls._check_delivery_method(
checkout_info, lines, shipping_method=delivery_method, collection_point=None
)
|
{"golden_diff": "diff --git a/saleor/graphql/checkout/mutations/checkout_delivery_method_update.py b/saleor/graphql/checkout/mutations/checkout_delivery_method_update.py\n--- a/saleor/graphql/checkout/mutations/checkout_delivery_method_update.py\n+++ b/saleor/graphql/checkout/mutations/checkout_delivery_method_update.py\n@@ -91,6 +91,16 @@\n shipping_method_id=shipping_method_id,\n )\n \n+ if delivery_method is None and shipping_method_id:\n+ raise ValidationError(\n+ {\n+ \"delivery_method_id\": ValidationError(\n+ f\"Couldn't resolve to a node: ${shipping_method_id}\",\n+ code=CheckoutErrorCode.NOT_FOUND,\n+ )\n+ }\n+ )\n+\n cls._check_delivery_method(\n checkout_info, lines, shipping_method=delivery_method, collection_point=None\n )\n", "issue": "Setting external methods should throw errors when the method does not exist\n### What I'm trying to achieve\r\nI'm setting an external method that does not exist\r\n\r\n### Steps to reproduce the problem\r\nI base64 encoded `app:1234:some-id` that was never a real id external method id:\r\n\r\n<img width=\"1440\" alt=\"image\" src=\"https://user-images.githubusercontent.com/2566928/154252619-496c1b91-ca79-4fe8-bc1d-abcd0cbb743c.png\">\r\n\r\nThere is no error, but the delivery method is still null.\r\n\r\n\r\n### What I expected to happen\r\nI would expect an error response, rather than noop.\r\n\r\n### Screenshots and logs\r\n<!-- If applicable, add screenshots to help explain your problem. -->\r\n\r\n**System information**\r\n<!-- Provide the version of Saleor or whether you're using it from the `main` branch. If using Saleor Dashboard or Storefront, provide their versions too. -->\r\nSaleor version:\r\n- [ ] dev (current main)\r\n- [ ] 3.0\r\n- [ ] 2.11\r\n- [ ] 2.10\r\n\r\nOperating system:\r\n- [ ] Windows\r\n- [ ] Linux\r\n- [ ] MacOS\r\n- [ ] Other\r\n\nSetting external methods should throw errors when the method does not exist\n### What I'm trying to achieve\r\nI'm setting an external method that does not exist\r\n\r\n### Steps to reproduce the problem\r\nI base64 encoded `app:1234:some-id` that was never a real id external method id:\r\n\r\n<img width=\"1440\" alt=\"image\" src=\"https://user-images.githubusercontent.com/2566928/154252619-496c1b91-ca79-4fe8-bc1d-abcd0cbb743c.png\">\r\n\r\nThere is no error, but the delivery method is still null.\r\n\r\n\r\n### What I expected to happen\r\nI would expect an error response, rather than noop.\r\n\r\n### Screenshots and logs\r\n<!-- If applicable, add screenshots to help explain your problem. -->\r\n\r\n**System information**\r\n<!-- Provide the version of Saleor or whether you're using it from the `main` branch. If using Saleor Dashboard or Storefront, provide their versions too. 
-->\r\nSaleor version:\r\n- [ ] dev (current main)\r\n- [ ] 3.0\r\n- [ ] 2.11\r\n- [ ] 2.10\r\n\r\nOperating system:\r\n- [ ] Windows\r\n- [ ] Linux\r\n- [ ] MacOS\r\n- [ ] Other\r\n\n", "before_files": [{"content": "from typing import Optional\n\nimport graphene\nfrom django.core.exceptions import ValidationError\n\nfrom ....checkout.error_codes import CheckoutErrorCode\nfrom ....checkout.fetch import fetch_checkout_info, fetch_checkout_lines\nfrom ....checkout.utils import (\n delete_external_shipping_id,\n is_shipping_required,\n recalculate_checkout_discount,\n set_external_shipping_id,\n)\nfrom ....plugins.webhook.utils import APP_ID_PREFIX\nfrom ....shipping import interface as shipping_interface\nfrom ....shipping import models as shipping_models\nfrom ....shipping.utils import convert_to_shipping_method_data\nfrom ....warehouse import models as warehouse_models\nfrom ...core.descriptions import ADDED_IN_31, PREVIEW_FEATURE\nfrom ...core.mutations import BaseMutation\nfrom ...core.scalars import UUID\nfrom ...core.types import CheckoutError\nfrom ...core.utils import from_global_id_or_error\nfrom ...shipping.types import ShippingMethod\nfrom ...warehouse.types import Warehouse\nfrom ..types import Checkout\nfrom .utils import ERROR_DOES_NOT_SHIP, clean_delivery_method, get_checkout_by_token\n\n\nclass CheckoutDeliveryMethodUpdate(BaseMutation):\n checkout = graphene.Field(Checkout, description=\"An updated checkout.\")\n\n class Arguments:\n token = UUID(description=\"Checkout token.\", required=False)\n delivery_method_id = graphene.ID(\n description=\"Delivery Method ID (`Warehouse` ID or `ShippingMethod` ID).\",\n required=False,\n )\n\n class Meta:\n description = (\n f\"{ADDED_IN_31} Updates the delivery method \"\n f\"(shipping method or pick up point) of the checkout. 
{PREVIEW_FEATURE}\"\n )\n error_type_class = CheckoutError\n\n @classmethod\n def perform_on_shipping_method(\n cls, info, shipping_method_id, checkout_info, lines, checkout, manager\n ):\n shipping_method = cls.get_node_or_error(\n info,\n shipping_method_id,\n only_type=ShippingMethod,\n field=\"delivery_method_id\",\n qs=shipping_models.ShippingMethod.objects.prefetch_related(\n \"postal_code_rules\"\n ),\n )\n\n delivery_method = convert_to_shipping_method_data(\n shipping_method,\n shipping_models.ShippingMethodChannelListing.objects.filter(\n shipping_method=shipping_method,\n channel=checkout_info.channel,\n ).first(),\n )\n cls._check_delivery_method(\n checkout_info, lines, shipping_method=delivery_method, collection_point=None\n )\n\n cls._update_delivery_method(\n manager,\n checkout,\n shipping_method=shipping_method,\n external_shipping_method=None,\n collection_point=None,\n )\n recalculate_checkout_discount(\n manager, checkout_info, lines, info.context.discounts\n )\n return CheckoutDeliveryMethodUpdate(checkout=checkout)\n\n @classmethod\n def perform_on_external_shipping_method(\n cls, info, shipping_method_id, checkout_info, lines, checkout, manager\n ):\n delivery_method = manager.get_shipping_method(\n checkout=checkout,\n channel_slug=checkout.channel.slug,\n shipping_method_id=shipping_method_id,\n )\n\n cls._check_delivery_method(\n checkout_info, lines, shipping_method=delivery_method, collection_point=None\n )\n\n cls._update_delivery_method(\n manager,\n checkout,\n shipping_method=None,\n external_shipping_method=delivery_method,\n collection_point=None,\n )\n recalculate_checkout_discount(\n manager, checkout_info, lines, info.context.discounts\n )\n return CheckoutDeliveryMethodUpdate(checkout=checkout)\n\n @classmethod\n def perform_on_collection_point(\n cls, info, collection_point_id, checkout_info, lines, checkout, manager\n ):\n collection_point = cls.get_node_or_error(\n info,\n collection_point_id,\n only_type=Warehouse,\n field=\"delivery_method_id\",\n qs=warehouse_models.Warehouse.objects.select_related(\"address\"),\n )\n cls._check_delivery_method(\n checkout_info,\n lines,\n shipping_method=None,\n collection_point=collection_point,\n )\n cls._update_delivery_method(\n manager,\n checkout,\n shipping_method=None,\n external_shipping_method=None,\n collection_point=collection_point,\n )\n return CheckoutDeliveryMethodUpdate(checkout=checkout)\n\n @staticmethod\n def _check_delivery_method(\n checkout_info,\n lines,\n *,\n shipping_method: Optional[shipping_interface.ShippingMethodData],\n collection_point: Optional[Warehouse]\n ) -> None:\n delivery_method = shipping_method\n error_msg = \"This shipping method is not applicable.\"\n\n if collection_point is not None:\n delivery_method = collection_point\n error_msg = \"This pick up point is not applicable.\"\n\n delivery_method_is_valid = clean_delivery_method(\n checkout_info=checkout_info, lines=lines, method=delivery_method\n )\n if not delivery_method_is_valid:\n raise ValidationError(\n {\n \"delivery_method_id\": ValidationError(\n error_msg,\n code=CheckoutErrorCode.DELIVERY_METHOD_NOT_APPLICABLE.value,\n )\n }\n )\n\n @staticmethod\n def _update_delivery_method(\n manager,\n checkout: Checkout,\n *,\n shipping_method: Optional[ShippingMethod],\n external_shipping_method: Optional[shipping_interface.ShippingMethodData],\n collection_point: Optional[Warehouse]\n ) -> None:\n if external_shipping_method:\n set_external_shipping_id(\n checkout=checkout, 
app_shipping_id=external_shipping_method.id\n )\n else:\n delete_external_shipping_id(checkout=checkout)\n checkout.shipping_method = shipping_method\n checkout.collection_point = collection_point\n checkout.save(\n update_fields=[\n \"private_metadata\",\n \"shipping_method\",\n \"collection_point\",\n \"last_change\",\n ]\n )\n manager.checkout_updated(checkout)\n\n @staticmethod\n def _resolve_delivery_method_type(id_) -> Optional[str]:\n if id_ is None:\n return None\n\n possible_types = (\"Warehouse\", \"ShippingMethod\", APP_ID_PREFIX)\n type_, id_ = from_global_id_or_error(id_)\n str_type = str(type_)\n\n if str_type not in possible_types:\n raise ValidationError(\n {\n \"delivery_method_id\": ValidationError(\n \"ID does not belong to Warehouse or ShippingMethod\",\n code=CheckoutErrorCode.INVALID.value,\n )\n }\n )\n\n return str_type\n\n @classmethod\n def perform_mutation(\n cls,\n _,\n info,\n token,\n delivery_method_id=None,\n ):\n\n checkout = get_checkout_by_token(token)\n\n manager = info.context.plugins\n lines, unavailable_variant_pks = fetch_checkout_lines(checkout)\n if unavailable_variant_pks:\n not_available_variants_ids = {\n graphene.Node.to_global_id(\"ProductVariant\", pk)\n for pk in unavailable_variant_pks\n }\n raise ValidationError(\n {\n \"lines\": ValidationError(\n \"Some of the checkout lines variants are unavailable.\",\n code=CheckoutErrorCode.UNAVAILABLE_VARIANT_IN_CHANNEL.value,\n params={\"variants\": not_available_variants_ids},\n )\n }\n )\n\n if not is_shipping_required(lines):\n raise ValidationError(\n {\n \"delivery_method\": ValidationError(\n ERROR_DOES_NOT_SHIP,\n code=CheckoutErrorCode.SHIPPING_NOT_REQUIRED,\n )\n }\n )\n type_name = cls._resolve_delivery_method_type(delivery_method_id)\n\n checkout_info = fetch_checkout_info(\n checkout, lines, info.context.discounts, manager\n )\n if type_name == \"Warehouse\":\n return cls.perform_on_collection_point(\n info, delivery_method_id, checkout_info, lines, checkout, manager\n )\n if type_name == \"ShippingMethod\":\n return cls.perform_on_shipping_method(\n info, delivery_method_id, checkout_info, lines, checkout, manager\n )\n return cls.perform_on_external_shipping_method(\n info, delivery_method_id, checkout_info, lines, checkout, manager\n )\n", "path": "saleor/graphql/checkout/mutations/checkout_delivery_method_update.py"}]}
| 3,462 | 182 |
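
The saleor patch turns a silent no-op into an explicit error: when the plugin manager cannot resolve an external shipping method id, the mutation now raises instead of leaving the delivery method unset. A minimal sketch of that guard outside the GraphQL mutation machinery (the manager, checkout, and error type here are stand-ins, not saleor's real classes):

```python
class ValidationError(Exception):
    """Stand-in for saleor's ValidationError, keyed by field name."""

    def __init__(self, errors: dict):
        super().__init__(errors)
        self.errors = errors

def resolve_external_shipping_method(manager, checkout, shipping_method_id):
    delivery_method = manager.get_shipping_method(
        checkout=checkout,
        channel_slug=checkout.channel.slug,
        shipping_method_id=shipping_method_id,
    )
    # Fail loudly when the id cannot be resolved instead of silently storing None.
    if delivery_method is None and shipping_method_id:
        raise ValidationError(
            {"delivery_method_id": f"Couldn't resolve to a node: {shipping_method_id}"}
        )
    return delivery_method
```
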
gh_patches_debug_30437
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-4261
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docker container's /tmp doesn't get automatically cleaned
## Problem Definition
PyInstaller extracts the files in /tmp every time the backend starts
In the docker container /tmp is never cleaned which results in an ever-increasing size on every application restart
## TODO
- [ ] Add /tmp cleanup on start
</issue>
<code>
[start of packaging/docker/entrypoint.py]
1 #!/usr/bin/python3
2 import json
3 import logging
4 import os
5 import subprocess
6 import time
7 from pathlib import Path
8 from typing import Dict, Optional, Any, List
9
10 logger = logging.getLogger('monitor')
11 logging.basicConfig(level=logging.DEBUG)
12
13 DEFAULT_LOG_LEVEL = 'critical'
14
15
16 def load_config_from_file() -> Optional[Dict[str, Any]]:
17 config_file = Path('/config/rotki_config.json')
18 if not config_file.exists():
19 logger.info('no config file provided')
20 return None
21
22 with open(config_file) as file:
23 try:
24 data = json.load(file)
25 return data
26 except json.JSONDecodeError as e:
27 logger.error(e)
28 return None
29
30
31 def load_config_from_env() -> Dict[str, Any]:
32 loglevel = os.environ.get('LOGLEVEL')
33 logfromothermodules = os.environ.get('LOGFROMOTHERMODDULES')
34 sleep_secs = os.environ.get('SLEEP_SECS')
35 max_size_in_mb_all_logs = os.environ.get('MAX_SIZE_IN_MB_ALL_LOGS')
36 max_logfiles_num = os.environ.get('MAX_LOGFILES_NUM')
37
38 return {
39 'loglevel': loglevel,
40 'logfromothermodules': logfromothermodules,
41 'sleep_secs': sleep_secs,
42 'max_logfiles_num': max_logfiles_num,
43 'max_size_in_mb_all_logs': max_size_in_mb_all_logs,
44 }
45
46
47 def load_config() -> List[str]:
48 env_config = load_config_from_env()
49 file_config = load_config_from_file()
50
51 logger.info('loading config from env')
52
53 loglevel = env_config.get('loglevel')
54 log_from_other_modules = env_config.get('logfromothermodules')
55 sleep_secs = env_config.get('sleep_secs')
56 max_logfiles_num = env_config.get('max_logfiles_num')
57 max_size_in_mb_all_logs = env_config.get('max_size_in_mb_all_logs')
58
59 if file_config is not None:
60 logger.info('loading config from file')
61
62 if file_config.get('loglevel') is not None:
63 loglevel = file_config.get('loglevel')
64
65 if file_config.get('logfromothermodules') is not None:
66 log_from_other_modules = file_config.get('logfromothermodules')
67
68 if file_config.get('sleep-secs') is not None:
69 sleep_secs = file_config.get('sleep-secs')
70
71 if file_config.get('max_logfiles_num') is not None:
72 max_logfiles_num = file_config.get('max_logfiles_num')
73
74 if file_config.get('max_size_in_mb_all_logs') is not None:
75 max_size_in_mb_all_logs = file_config.get('max_size_in_mb_all_logs')
76
77 args = [
78 '--data-dir',
79 '/data',
80 '--logfile',
81 '/logs/rotki.log',
82 '--loglevel',
83 loglevel if loglevel is not None else DEFAULT_LOG_LEVEL,
84 ]
85
86 if log_from_other_modules is True:
87 args.append('--logfromothermodules')
88
89 if sleep_secs is not None:
90 args.append('--sleep-secs')
91 args.append(str(sleep_secs))
92
93 if max_logfiles_num is not None:
94 args.append('--max-logfiles-num')
95 args.append(str(max_logfiles_num))
96
97 if max_size_in_mb_all_logs is not None:
98 args.append('--max-size-in-mb-all-logs')
99 args.append(str(max_size_in_mb_all_logs))
100
101 return args
102
103
104 base_args = [
105 '/usr/sbin/rotki',
106 '--rest-api-port',
107 '4242',
108 '--websockets-api-port',
109 '4243',
110 '--api-cors',
111 'http://localhost:*/*,app://.',
112 '--api-host',
113 '0.0.0.0',
114 ]
115
116 config_args = load_config()
117 cmd = base_args + config_args
118
119 logger.info('starting rotki backend')
120
121 rotki = subprocess.Popen(cmd)
122
123 if rotki.returncode == 1:
124 logger.error('Failed to start rotki')
125 exit(1)
126
127 logger.info('starting nginx')
128
129 nginx = subprocess.Popen('nginx -g "daemon off;"', shell=True)
130
131 if nginx.returncode == 1:
132 logger.error('Failed to start nginx')
133 exit(1)
134
135 while True:
136 time.sleep(60)
137
138 if rotki.poll() is not None:
139 logger.error('rotki has terminated exiting')
140 exit(1)
141
142 if nginx.poll() is not None:
143 logger.error('nginx was not running')
144 exit(1)
145
146 logger.info('OK: processes still running')
147
[end of packaging/docker/entrypoint.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/packaging/docker/entrypoint.py b/packaging/docker/entrypoint.py
--- a/packaging/docker/entrypoint.py
+++ b/packaging/docker/entrypoint.py
@@ -2,10 +2,12 @@
import json
import logging
import os
+import shutil
import subprocess
import time
+from datetime import datetime, timedelta
from pathlib import Path
-from typing import Dict, Optional, Any, List
+from typing import Any, Dict, List, Optional
logger = logging.getLogger('monitor')
logging.basicConfig(level=logging.DEBUG)
@@ -13,6 +15,41 @@
DEFAULT_LOG_LEVEL = 'critical'
+def can_delete(file: Path, cutoff: int) -> bool:
+ return int(os.stat(file).st_mtime) <= cutoff or file.name.startswith('_MEI')
+
+
+def cleanup_tmp() -> None:
+ logger.info('Preparing to cleanup tmp directory')
+ tmp_dir = Path('/tmp/').glob('*')
+ cache_cutoff = datetime.today() - timedelta(hours=6)
+ cutoff_epoch = int(cache_cutoff.strftime("%s"))
+ to_delete = filter(lambda x: can_delete(x, cutoff_epoch), tmp_dir)
+
+ deleted = 0
+ skipped = 0
+
+ for item in to_delete:
+ path = Path(item)
+ if path.is_file():
+ try:
+ path.unlink()
+ deleted += 1
+ continue
+ except PermissionError:
+ skipped += 1
+ continue
+
+ try:
+ shutil.rmtree(item)
+ deleted += 1
+ except OSError:
+ skipped += 1
+ continue
+
+ logger.info(f'Deleted {deleted} files or directories, skipped {skipped} from /tmp')
+
+
def load_config_from_file() -> Optional[Dict[str, Any]]:
config_file = Path('/config/rotki_config.json')
if not config_file.exists():
@@ -101,6 +138,8 @@
return args
+cleanup_tmp()
+
base_args = [
'/usr/sbin/rotki',
'--rest-api-port',
|
{"golden_diff": "diff --git a/packaging/docker/entrypoint.py b/packaging/docker/entrypoint.py\n--- a/packaging/docker/entrypoint.py\n+++ b/packaging/docker/entrypoint.py\n@@ -2,10 +2,12 @@\n import json\n import logging\n import os\n+import shutil\n import subprocess\n import time\n+from datetime import datetime, timedelta\n from pathlib import Path\n-from typing import Dict, Optional, Any, List\n+from typing import Any, Dict, List, Optional\n \n logger = logging.getLogger('monitor')\n logging.basicConfig(level=logging.DEBUG)\n@@ -13,6 +15,41 @@\n DEFAULT_LOG_LEVEL = 'critical'\n \n \n+def can_delete(file: Path, cutoff: int) -> bool:\n+ return int(os.stat(file).st_mtime) <= cutoff or file.name.startswith('_MEI')\n+\n+\n+def cleanup_tmp() -> None:\n+ logger.info('Preparing to cleanup tmp directory')\n+ tmp_dir = Path('/tmp/').glob('*')\n+ cache_cutoff = datetime.today() - timedelta(hours=6)\n+ cutoff_epoch = int(cache_cutoff.strftime(\"%s\"))\n+ to_delete = filter(lambda x: can_delete(x, cutoff_epoch), tmp_dir)\n+\n+ deleted = 0\n+ skipped = 0\n+\n+ for item in to_delete:\n+ path = Path(item)\n+ if path.is_file():\n+ try:\n+ path.unlink()\n+ deleted += 1\n+ continue\n+ except PermissionError:\n+ skipped += 1\n+ continue\n+\n+ try:\n+ shutil.rmtree(item)\n+ deleted += 1\n+ except OSError:\n+ skipped += 1\n+ continue\n+\n+ logger.info(f'Deleted {deleted} files or directories, skipped {skipped} from /tmp')\n+\n+\n def load_config_from_file() -> Optional[Dict[str, Any]]:\n config_file = Path('/config/rotki_config.json')\n if not config_file.exists():\n@@ -101,6 +138,8 @@\n return args\n \n \n+cleanup_tmp()\n+\n base_args = [\n '/usr/sbin/rotki',\n '--rest-api-port',\n", "issue": "Docker container's /tmp doesn't get automatically cleaned\n## Problem Definition\r\n\r\nPyInstaller extracts the files in /tmp every time the backend starts\r\nIn the docker container /tmp is never cleaned which results in an ever-increasing size on every application restart\r\n\r\n## TODO\r\n\r\n- [ ] Add /tmp cleanup on start\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/python3\nimport json\nimport logging\nimport os\nimport subprocess\nimport time\nfrom pathlib import Path\nfrom typing import Dict, Optional, Any, List\n\nlogger = logging.getLogger('monitor')\nlogging.basicConfig(level=logging.DEBUG)\n\nDEFAULT_LOG_LEVEL = 'critical'\n\n\ndef load_config_from_file() -> Optional[Dict[str, Any]]:\n config_file = Path('/config/rotki_config.json')\n if not config_file.exists():\n logger.info('no config file provided')\n return None\n\n with open(config_file) as file:\n try:\n data = json.load(file)\n return data\n except json.JSONDecodeError as e:\n logger.error(e)\n return None\n\n\ndef load_config_from_env() -> Dict[str, Any]:\n loglevel = os.environ.get('LOGLEVEL')\n logfromothermodules = os.environ.get('LOGFROMOTHERMODDULES')\n sleep_secs = os.environ.get('SLEEP_SECS')\n max_size_in_mb_all_logs = os.environ.get('MAX_SIZE_IN_MB_ALL_LOGS')\n max_logfiles_num = os.environ.get('MAX_LOGFILES_NUM')\n\n return {\n 'loglevel': loglevel,\n 'logfromothermodules': logfromothermodules,\n 'sleep_secs': sleep_secs,\n 'max_logfiles_num': max_logfiles_num,\n 'max_size_in_mb_all_logs': max_size_in_mb_all_logs,\n }\n\n\ndef load_config() -> List[str]:\n env_config = load_config_from_env()\n file_config = load_config_from_file()\n\n logger.info('loading config from env')\n\n loglevel = env_config.get('loglevel')\n log_from_other_modules = env_config.get('logfromothermodules')\n sleep_secs = env_config.get('sleep_secs')\n 
max_logfiles_num = env_config.get('max_logfiles_num')\n max_size_in_mb_all_logs = env_config.get('max_size_in_mb_all_logs')\n\n if file_config is not None:\n logger.info('loading config from file')\n\n if file_config.get('loglevel') is not None:\n loglevel = file_config.get('loglevel')\n\n if file_config.get('logfromothermodules') is not None:\n log_from_other_modules = file_config.get('logfromothermodules')\n\n if file_config.get('sleep-secs') is not None:\n sleep_secs = file_config.get('sleep-secs')\n\n if file_config.get('max_logfiles_num') is not None:\n max_logfiles_num = file_config.get('max_logfiles_num')\n\n if file_config.get('max_size_in_mb_all_logs') is not None:\n max_size_in_mb_all_logs = file_config.get('max_size_in_mb_all_logs')\n\n args = [\n '--data-dir',\n '/data',\n '--logfile',\n '/logs/rotki.log',\n '--loglevel',\n loglevel if loglevel is not None else DEFAULT_LOG_LEVEL,\n ]\n\n if log_from_other_modules is True:\n args.append('--logfromothermodules')\n\n if sleep_secs is not None:\n args.append('--sleep-secs')\n args.append(str(sleep_secs))\n\n if max_logfiles_num is not None:\n args.append('--max-logfiles-num')\n args.append(str(max_logfiles_num))\n\n if max_size_in_mb_all_logs is not None:\n args.append('--max-size-in-mb-all-logs')\n args.append(str(max_size_in_mb_all_logs))\n\n return args\n\n\nbase_args = [\n '/usr/sbin/rotki',\n '--rest-api-port',\n '4242',\n '--websockets-api-port',\n '4243',\n '--api-cors',\n 'http://localhost:*/*,app://.',\n '--api-host',\n '0.0.0.0',\n]\n\nconfig_args = load_config()\ncmd = base_args + config_args\n\nlogger.info('starting rotki backend')\n\nrotki = subprocess.Popen(cmd)\n\nif rotki.returncode == 1:\n logger.error('Failed to start rotki')\n exit(1)\n\nlogger.info('starting nginx')\n\nnginx = subprocess.Popen('nginx -g \"daemon off;\"', shell=True)\n\nif nginx.returncode == 1:\n logger.error('Failed to start nginx')\n exit(1)\n\nwhile True:\n time.sleep(60)\n\n if rotki.poll() is not None:\n logger.error('rotki has terminated exiting')\n exit(1)\n\n if nginx.poll() is not None:\n logger.error('nginx was not running')\n exit(1)\n\n logger.info('OK: processes still running')\n", "path": "packaging/docker/entrypoint.py"}]}
| 1,932 | 472 |
gh_patches_debug_105 | rasdani/github-patches | git_diff | celery__celery-3671
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Request on_timeout should ignore soft time limit exception
When Request.on_timeout receives a soft timeout from billiard, it does the same as if it were receiving a hard time limit exception. This is run by the controller.
But the task may catch this exception and e.g. return (this is what soft timeouts are for).
This causes:
1. the result to be saved once as an exception by the controller (on_timeout) and another time with the result returned by the task
2. the task status to be set first to failure and then to success in the same manner
3. if the task is participating in a chord, the chord result counter (at least with redis) to be incremented twice (instead of once), making the chord return prematurely and eventually lose tasks…
1, 2 and 3 can of course lead to strange race conditions…
## Steps to reproduce (Illustration)
with the program in test_timeout.py:
```python
import time
import celery


app = celery.Celery('test_timeout')
app.conf.update(
    result_backend="redis://localhost/0",
    broker_url="amqp://celery:celery@localhost:5672/host",
)

@app.task(soft_time_limit=1)
def test():
    try:
        time.sleep(2)
    except Exception:
        return 1

@app.task()
def add(args):
    print("### adding", args)
    return sum(args)

@app.task()
def on_error(context, exception, traceback, **kwargs):
    print("### on_error: ", exception)

if __name__ == "__main__":
    result = celery.chord([test.s().set(link_error=on_error.s()), test.s().set(link_error=on_error.s())])(add.s())
    result.get()
```
start a worker and the program:
```
$ celery -A test_timeout worker -l WARNING
$ python3 test_timeout.py
```
## Expected behavior
add method is called with `[1, 1]` as argument and test_timeout.py return normally
## Actual behavior
The test_timeout.py fails, with
```
celery.backends.base.ChordError: Callback error: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",
```
On the worker side, the **on_error is called but the add method as well !**
```
[2017-11-29 23:07:25,538: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[15109e05-da43-449f-9081-85d839ac0ef2]
[2017-11-29 23:07:25,546: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,546: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:25,547: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[38f3f7f2-4a89-4318-8ee9-36a987f73757]
[2017-11-29 23:07:25,553: ERROR/MainProcess] Chord callback for 'ef6d7a38-d1b4-40ad-b937-ffa84e40bb23' raised: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in on_chord_part_return
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in <listcomp>
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 243, in _unpack_chord_result
raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))
celery.exceptions.ChordError: Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)
[2017-11-29 23:07:25,565: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,565: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:27,262: WARNING/PoolWorker-2] ### adding
[2017-11-29 23:07:27,264: WARNING/PoolWorker-2] [1, 1]
```
Of course, I chose on purpose to call test.s() twice, to show that the count in the chord continues. In fact:
- the chord result is incremented twice by the soft time limit error
- the chord result is again incremented twice by the correct return of the `test` task
## Conclusion
Request.on_timeout should not process soft time limit exceptions.
Here is a quick monkey patch (the corresponding correction in celery itself is trivial):
```python
def patch_celery_request_on_timeout():
    from celery.worker import request
    orig = request.Request.on_timeout
    def patched_on_timeout(self, soft, timeout):
        if not soft:
            orig(self, soft, timeout)
    request.Request.on_timeout = patched_on_timeout
patch_celery_request_on_timeout()
```
## version info
software -> celery:4.1.0 (latentcall) kombu:4.0.2 py:3.4.3
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://10.0.3.253/0
</issue>
<code>
[start of examples/next-steps/proj/tasks.py]
1 from __future__ import absolute_import, unicode_literals
2 from . import app
3
4
5 @app.task
6 def add(x, y):
7 return x + y
8
9
10 @app.task
11 def mul(x, y):
12 return x * y
13
14
15 @app.task
16 def xsum(numbers):
17 return sum(numbers)
18
[end of examples/next-steps/proj/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/next-steps/proj/tasks.py b/examples/next-steps/proj/tasks.py
--- a/examples/next-steps/proj/tasks.py
+++ b/examples/next-steps/proj/tasks.py
@@ -1,5 +1,5 @@
from __future__ import absolute_import, unicode_literals
-from . import app
+from .celery import app
@app.task
|
{"golden_diff": "diff --git a/examples/next-steps/proj/tasks.py b/examples/next-steps/proj/tasks.py\n--- a/examples/next-steps/proj/tasks.py\n+++ b/examples/next-steps/proj/tasks.py\n@@ -1,5 +1,5 @@\n from __future__ import absolute_import, unicode_literals\n-from . import app\n+from .celery import app\n \n \n @app.task\n", "issue": "Request on_timeout should ignore soft time limit exception\nWhen Request.on_timeout receive a soft timeout from billiard, it does the same as if it was receiving a hard time limit exception. This is ran by the controller.\r\n\r\nBut the task may catch this exception and eg. return (this is what soft timeout are for).\r\n\r\nThis cause:\r\n1. the result to be saved once as an exception by the controller (on_timeout) and another time with the result returned by the task\r\n2. the task status to be passed to failure and to success on the same manner\r\n3. if the task is participating to a chord, the chord result counter (at least with redis) is incremented twice (instead of once), making the chord to return prematurely and eventually loose tasks\u2026\r\n\r\n1, 2 and 3 can leads of course to strange race conditions\u2026\r\n\r\n## Steps to reproduce (Illustration)\r\n\r\nwith the program in test_timeout.py:\r\n\r\n```python\r\nimport time\r\nimport celery\r\n\r\n\r\napp = celery.Celery('test_timeout')\r\napp.conf.update(\r\n result_backend=\"redis://localhost/0\",\r\n broker_url=\"amqp://celery:celery@localhost:5672/host\",\r\n)\r\n\r\[email protected](soft_time_limit=1)\r\ndef test():\r\n try:\r\n time.sleep(2)\r\n except Exception:\r\n return 1\r\n\r\[email protected]()\r\ndef add(args):\r\n print(\"### adding\", args)\r\n return sum(args)\r\n\r\[email protected]()\r\ndef on_error(context, exception, traceback, **kwargs):\r\n print(\"### on_error:\u00a0\", exception)\r\n\r\nif __name__ == \"__main__\":\r\n result = celery.chord([test.s().set(link_error=on_error.s()), test.s().set(link_error=on_error.s())])(add.s())\r\n result.get()\r\n```\r\n\r\nstart a worker and the program:\r\n\r\n```\r\n$ celery -A test_timeout worker -l WARNING\r\n$ python3 test_timeout.py\r\n```\r\n\r\n## Expected behavior\r\n\r\nadd method is called with `[1, 1]` as argument and test_timeout.py return normally\r\n\r\n## Actual behavior\r\n\r\nThe test_timeout.py fails, with\r\n```\r\ncelery.backends.base.ChordError: Callback error: ChordError(\"Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)\",\r\n```\r\nOn the worker side, the **on_error is called but the add method as well !**\r\n\r\n```\r\n[2017-11-29 23:07:25,538: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[15109e05-da43-449f-9081-85d839ac0ef2]\r\n[2017-11-29 23:07:25,546: WARNING/MainProcess] ### on_error:\r\n[2017-11-29 23:07:25,546: WARNING/MainProcess] SoftTimeLimitExceeded(True,)\r\n[2017-11-29 23:07:25,547: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[38f3f7f2-4a89-4318-8ee9-36a987f73757]\r\n[2017-11-29 23:07:25,553: ERROR/MainProcess] Chord callback for 'ef6d7a38-d1b4-40ad-b937-ffa84e40bb23' raised: ChordError(\"Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)\",)\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py\", line 290, in on_chord_part_return\r\n callback.delay([unpack(tup, decode) for tup in resl])\r\n File \"/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py\", line 
290, in <listcomp>\r\n callback.delay([unpack(tup, decode) for tup in resl])\r\n File \"/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py\", line 243, in _unpack_chord_result\r\n raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))\r\ncelery.exceptions.ChordError: Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)\r\n[2017-11-29 23:07:25,565: WARNING/MainProcess] ### on_error:\r\n[2017-11-29 23:07:25,565: WARNING/MainProcess] SoftTimeLimitExceeded(True,)\r\n[2017-11-29 23:07:27,262: WARNING/PoolWorker-2] ### adding\r\n[2017-11-29 23:07:27,264: WARNING/PoolWorker-2] [1, 1]\r\n```\r\n\r\nOf course, on purpose did I choose to call the test.s() twice, to show that the count in the chord continues. In fact:\r\n- the chord result is incremented twice by the error of soft time limit\r\n- the chord result is again incremented twice by the correct returning of `test` task\r\n\r\n## Conclusion\r\n\r\nRequest.on_timeout should not process soft time limit exception. \r\n\r\nhere is a quick monkey patch (correction of celery is trivial)\r\n\r\n```python\r\ndef patch_celery_request_on_timeout():\r\n from celery.worker import request\r\n orig = request.Request.on_timeout\r\n def patched_on_timeout(self, soft, timeout):\r\n if not soft:\r\n orig(self, soft, timeout)\r\n request.Request.on_timeout = patched_on_timeout\r\npatch_celery_request_on_timeout()\r\n```\r\n\r\n\r\n\r\n## version info\r\n\r\nsoftware -> celery:4.1.0 (latentcall) kombu:4.0.2 py:3.4.3\r\n billiard:3.5.0.2 py-amqp:2.1.4\r\nplatform -> system:Linux arch:64bit, ELF imp:CPython\r\nloader -> celery.loaders.app.AppLoader\r\nsettings -> transport:amqp results:redis://10.0.3.253/0\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\nfrom . import app\n\n\[email protected]\ndef add(x, y):\n return x + y\n\n\[email protected]\ndef mul(x, y):\n return x * y\n\n\[email protected]\ndef xsum(numbers):\n return sum(numbers)\n", "path": "examples/next-steps/proj/tasks.py"}]}
| 2,138 | 87 |
gh_patches_debug_4098 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-4745
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
apache2_module fails for php7.0 on Ubuntu Xenial
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apache2_module
##### ANSIBLE VERSION
<!--- Paste verbatim output from βansible --versionβ between quotes below -->
```
ansible 2.2.0 (devel 982db58aff) last updated 2016/09/08 11:50:49 (GMT +100)
lib/ansible/modules/core: (detached HEAD db38f0c876) last updated 2016/09/08 13:03:40 (GMT +100)
lib/ansible/modules/extras: (detached HEAD 8bfdcfcab2) last updated 2016/09/08 11:51:00 (GMT +100)
config file = /home/rowan/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Ubuntu Xenial lists the php7.0 module as php7_module when running apache2ctl -M, which breaks the regexp that checks whether the module is enabled.
I've made a workaround here https://github.com/rwky/ansible-modules-core/commit/00ad6ef035a10dac7c84b7b68f04b00a739b104b but I didn't make a PR since I expect it may break other distros/versions.
Not entirely sure what the best solution to this is.
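
For illustration only (this snippet and the sample stdout line are mine, not part of the report), the lookup fails because the module builds its check pattern from the supplied name:

```python
import re

name = "php7.0"                    # what the playbook passes
stdout = " php7_module (shared)"   # how Xenial's apache2ctl -M reports the module

# same pattern _module_is_enabled() uses
print(re.search(r' ' + name + r'_module', stdout))  # -> None, so the module looks disabled
```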
##### STEPS TO REPRODUCE
Run apache2_module with name=php7.0 state=present on a xenial server.
</issue>
<code>
[start of web_infrastructure/apache2_module.py]
1 #!/usr/bin/python
2 #coding: utf-8 -*-
3
4 # (c) 2013-2014, Christian Berendt <[email protected]>
5 #
6 # This module is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This software is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this software. If not, see <http://www.gnu.org/licenses/>.
18
19 DOCUMENTATION = '''
20 ---
21 module: apache2_module
22 version_added: 1.6
23 author: "Christian Berendt (@berendt)"
24 short_description: enables/disables a module of the Apache2 webserver
25 description:
26 - Enables or disables a specified module of the Apache2 webserver.
27 options:
28 name:
29 description:
30 - name of the module to enable/disable
31 required: true
32 force:
33 description:
34 - force disabling of default modules and override Debian warnings
35 required: false
36 choices: ['yes', 'no']
37 default: no
38 version_added: "2.1"
39 state:
40 description:
41 - indicate the desired state of the resource
42 choices: ['present', 'absent']
43 default: present
44
45 requirements: ["a2enmod","a2dismod"]
46 '''
47
48 EXAMPLES = '''
49 # enables the Apache2 module "wsgi"
50 - apache2_module: state=present name=wsgi
51
52 # disables the Apache2 module "wsgi"
53 - apache2_module: state=absent name=wsgi
54 '''
55
56 import re
57
58 def _run_threaded(module):
59 control_binary = _get_ctl_binary(module)
60
61 result, stdout, stderr = module.run_command("%s -V" % control_binary)
62
63 if re.search(r'threaded:[ ]*yes', stdout):
64 return True
65 else:
66 return False
67
68 def _get_ctl_binary(module):
69 for command in ['apache2ctl', 'apachectl']:
70 ctl_binary = module.get_bin_path(command)
71 if ctl_binary is not None:
72 return ctl_binary
73
74 module.fail_json(
75 msg="None of httpd, apachectl or apach2ctl found. At least one apache control binary is necessary.")
76
77 def _module_is_enabled(module):
78 control_binary = _get_ctl_binary(module)
79 name = module.params['name']
80
81 result, stdout, stderr = module.run_command("%s -M" % control_binary)
82
83 if result != 0:
84 module.fail_json(msg="Error executing %s: %s" % (control_binary, stderr))
85
86 if re.search(r' ' + name + r'_module', stdout):
87 return True
88 else:
89 return False
90
91 def _set_state(module, state):
92 name = module.params['name']
93 force = module.params['force']
94
95 want_enabled = state == 'present'
96 state_string = {'present': 'enabled', 'absent': 'disabled'}[state]
97 a2mod_binary = {'present': 'a2enmod', 'absent': 'a2dismod'}[state]
98 success_msg = "Module %s %s" % (name, state_string)
99
100 if _module_is_enabled(module) != want_enabled:
101 if module.check_mode:
102 module.exit_json(changed = True, result = success_msg)
103
104 a2mod_binary = module.get_bin_path(a2mod_binary)
105 if a2mod_binary is None:
106 module.fail_json(msg="%s not found. Perhaps this system does not use %s to manage apache" % (a2mod_binary, a2mod_binary))
107
108 if not want_enabled and force:
109 # force exists only for a2dismod on debian
110 a2mod_binary += ' -f'
111
112 result, stdout, stderr = module.run_command("%s %s" % (a2mod_binary, name))
113
114 if _module_is_enabled(module) == want_enabled:
115 module.exit_json(changed = True, result = success_msg)
116 else:
117 module.fail_json(msg="Failed to set module %s to %s: %s" % (name, state_string, stdout), rc=result, stdout=stdout, stderr=stderr)
118 else:
119 module.exit_json(changed = False, result = success_msg)
120
121 def main():
122 module = AnsibleModule(
123 argument_spec = dict(
124 name = dict(required=True),
125 force = dict(required=False, type='bool', default=False),
126 state = dict(default='present', choices=['absent', 'present'])
127 ),
128 supports_check_mode = True,
129 )
130
131 name = module.params['name']
132 if name == 'cgi' and _run_threaded(module):
133 module.fail_json(msg="Your MPM seems to be threaded. No automatic actions on module %s possible." % name)
134
135 if module.params['state'] in ['present', 'absent']:
136 _set_state(module, module.params['state'])
137
138 # import module snippets
139 from ansible.module_utils.basic import *
140 if __name__ == '__main__':
141 main()
142
[end of web_infrastructure/apache2_module.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/web_infrastructure/apache2_module.py b/web_infrastructure/apache2_module.py
--- a/web_infrastructure/apache2_module.py
+++ b/web_infrastructure/apache2_module.py
@@ -80,6 +80,12 @@
result, stdout, stderr = module.run_command("%s -M" % control_binary)
+ """
+ Work around for Ubuntu Xenial listing php7_module as php7.0
+ """
+ if name == "php7.0":
+ name = "php7"
+
if result != 0:
module.fail_json(msg="Error executing %s: %s" % (control_binary, stderr))
|
{"golden_diff": "diff --git a/web_infrastructure/apache2_module.py b/web_infrastructure/apache2_module.py\n--- a/web_infrastructure/apache2_module.py\n+++ b/web_infrastructure/apache2_module.py\n@@ -80,6 +80,12 @@\n \n result, stdout, stderr = module.run_command(\"%s -M\" % control_binary)\n \n+ \"\"\"\n+ Work around for Ubuntu Xenial listing php7_module as php7.0\n+ \"\"\"\n+ if name == \"php7.0\":\n+ name = \"php7\"\n+\n if result != 0:\n module.fail_json(msg=\"Error executing %s: %s\" % (control_binary, stderr))\n", "issue": "apache2_module fails for php7.0 on Ubuntu Xenial\n##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\n\napache2_module\n##### ANSIBLE VERSION\n\n<!--- Paste verbatim output from \u201cansible --version\u201d between quotes below -->\n\n```\nansible 2.2.0 (devel 982db58aff) last updated 2016/09/08 11:50:49 (GMT +100)\n lib/ansible/modules/core: (detached HEAD db38f0c876) last updated 2016/09/08 13:03:40 (GMT +100)\n lib/ansible/modules/extras: (detached HEAD 8bfdcfcab2) last updated 2016/09/08 11:51:00 (GMT +100)\n config file = /home/rowan/.ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### CONFIGURATION\n\nN/A\n##### OS / ENVIRONMENT\n\nN/A\n##### SUMMARY\n\nUbuntu Xenial lists the php7.0 module as php7_module when running apache2ctl -M this breaks the regexp checking if the module is enabled.\n\nI've made a work around here https://github.com/rwky/ansible-modules-core/commit/00ad6ef035a10dac7c84b7b68f04b00a739b104b but I didn't make a PR since I expect it may break other distros/versions.\n\nNot entirely sure what the best solution to this is.\n##### STEPS TO REPRODUCE\n\nRun apache2_module with name=php7.0 state=present on a xenial server.\n\n", "before_files": [{"content": "#!/usr/bin/python\n#coding: utf-8 -*-\n\n# (c) 2013-2014, Christian Berendt <[email protected]>\n#\n# This module is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This software is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this software. 
If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: apache2_module\nversion_added: 1.6\nauthor: \"Christian Berendt (@berendt)\"\nshort_description: enables/disables a module of the Apache2 webserver\ndescription:\n - Enables or disables a specified module of the Apache2 webserver.\noptions:\n name:\n description:\n - name of the module to enable/disable\n required: true\n force:\n description:\n - force disabling of default modules and override Debian warnings\n required: false\n choices: ['yes', 'no']\n default: no\n version_added: \"2.1\"\n state:\n description:\n - indicate the desired state of the resource\n choices: ['present', 'absent']\n default: present\n\nrequirements: [\"a2enmod\",\"a2dismod\"]\n'''\n\nEXAMPLES = '''\n# enables the Apache2 module \"wsgi\"\n- apache2_module: state=present name=wsgi\n\n# disables the Apache2 module \"wsgi\"\n- apache2_module: state=absent name=wsgi\n'''\n\nimport re\n\ndef _run_threaded(module):\n control_binary = _get_ctl_binary(module)\n\n result, stdout, stderr = module.run_command(\"%s -V\" % control_binary)\n\n if re.search(r'threaded:[ ]*yes', stdout):\n return True\n else:\n return False\n\ndef _get_ctl_binary(module):\n for command in ['apache2ctl', 'apachectl']:\n ctl_binary = module.get_bin_path(command)\n if ctl_binary is not None:\n return ctl_binary\n\n module.fail_json(\n msg=\"None of httpd, apachectl or apach2ctl found. At least one apache control binary is necessary.\")\n\ndef _module_is_enabled(module):\n control_binary = _get_ctl_binary(module)\n name = module.params['name']\n\n result, stdout, stderr = module.run_command(\"%s -M\" % control_binary)\n\n if result != 0:\n module.fail_json(msg=\"Error executing %s: %s\" % (control_binary, stderr))\n\n if re.search(r' ' + name + r'_module', stdout):\n return True\n else:\n return False\n\ndef _set_state(module, state):\n name = module.params['name']\n force = module.params['force']\n\n want_enabled = state == 'present'\n state_string = {'present': 'enabled', 'absent': 'disabled'}[state]\n a2mod_binary = {'present': 'a2enmod', 'absent': 'a2dismod'}[state]\n success_msg = \"Module %s %s\" % (name, state_string)\n\n if _module_is_enabled(module) != want_enabled:\n if module.check_mode:\n module.exit_json(changed = True, result = success_msg)\n\n a2mod_binary = module.get_bin_path(a2mod_binary)\n if a2mod_binary is None:\n module.fail_json(msg=\"%s not found. Perhaps this system does not use %s to manage apache\" % (a2mod_binary, a2mod_binary))\n\n if not want_enabled and force:\n # force exists only for a2dismod on debian\n a2mod_binary += ' -f'\n\n result, stdout, stderr = module.run_command(\"%s %s\" % (a2mod_binary, name))\n\n if _module_is_enabled(module) == want_enabled:\n module.exit_json(changed = True, result = success_msg)\n else:\n module.fail_json(msg=\"Failed to set module %s to %s: %s\" % (name, state_string, stdout), rc=result, stdout=stdout, stderr=stderr)\n else:\n module.exit_json(changed = False, result = success_msg)\n\ndef main():\n module = AnsibleModule(\n argument_spec = dict(\n name = dict(required=True),\n force = dict(required=False, type='bool', default=False),\n state = dict(default='present', choices=['absent', 'present'])\n ),\n supports_check_mode = True,\n )\n\n name = module.params['name']\n if name == 'cgi' and _run_threaded(module):\n module.fail_json(msg=\"Your MPM seems to be threaded. 
No automatic actions on module %s possible.\" % name)\n\n if module.params['state'] in ['present', 'absent']:\n _set_state(module, module.params['state'])\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nif __name__ == '__main__':\n main()\n", "path": "web_infrastructure/apache2_module.py"}]}
| 2,422 | 146 |
gh_patches_debug_13328 | rasdani/github-patches | git_diff | ansible-collections__community.general-7452
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
icinga2_host - ip should not be required
### Summary
Hi all,
as one can see in https://icinga.com/docs/icinga-2/latest/doc/09-object-types/#host the address variable is not mandatory, so IP should be optional in the plugin, too.
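
For illustration only (this snippet is mine, not part of the report), making the parameter optional in the module's argument_spec simply means dropping the required flag, so an omitted address defaults to None:

```python
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(
    argument_spec=dict(
        name=dict(required=True, aliases=['host']),
        ip=dict(type='str'),  # previously ip=dict(required=True); now None when omitted
    ),
    supports_check_mode=True,
)
```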
### Issue Type
Bug Report
### Component Name
icinga2_host
### Ansible Version
```console (paste below)
$ ansible --version
2.11.4
```
### Community.general Version
```console (paste below)
$ ansible-galaxy collection list community.general
5.5.0
```
### Configuration
```console (paste below)
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Try to create a host without giving an IP.
### Expected Results
Address is optional
### Actual Results
Address is mandatory
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/modules/icinga2_host.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # This module is proudly sponsored by CGI (www.cgi.com) and
5 # KPN (www.kpn.com).
6 # Copyright (c) Ansible project
7 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
8 # SPDX-License-Identifier: GPL-3.0-or-later
9
10 from __future__ import absolute_import, division, print_function
11 __metaclass__ = type
12
13
14 DOCUMENTATION = '''
15 ---
16 module: icinga2_host
17 short_description: Manage a host in Icinga2
18 description:
19 - "Add or remove a host to Icinga2 through the API."
20 - "See U(https://www.icinga.com/docs/icinga2/latest/doc/12-icinga2-api/)"
21 author: "Jurgen Brand (@t794104)"
22 attributes:
23 check_mode:
24 support: full
25 diff_mode:
26 support: none
27 options:
28 url:
29 type: str
30 description:
31 - HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
32 use_proxy:
33 description:
34 - If V(false), it will not use a proxy, even if one is defined in
35 an environment variable on the target hosts.
36 type: bool
37 default: true
38 validate_certs:
39 description:
40 - If V(false), SSL certificates will not be validated. This should only be used
41 on personally controlled sites using self-signed certificates.
42 type: bool
43 default: true
44 url_username:
45 type: str
46 description:
47 - The username for use in HTTP basic authentication.
48 - This parameter can be used without O(url_password) for sites that allow empty passwords.
49 url_password:
50 type: str
51 description:
52 - The password for use in HTTP basic authentication.
53 - If the O(url_username) parameter is not specified, the O(url_password) parameter will not be used.
54 force_basic_auth:
55 description:
56 - httplib2, the library used by the uri module only sends authentication information when a webservice
57 responds to an initial request with a 401 status. Since some basic auth services do not properly
58 send a 401, logins will fail. This option forces the sending of the Basic authentication header
59 upon initial request.
60 type: bool
61 default: false
62 client_cert:
63 type: path
64 description:
65 - PEM formatted certificate chain file to be used for SSL client
66 authentication. This file can also include the key as well, and if
67 the key is included, O(client_key) is not required.
68 client_key:
69 type: path
70 description:
71 - PEM formatted file that contains your private key to be used for SSL
72 client authentication. If O(client_cert) contains both the certificate
73 and key, this option is not required.
74 state:
75 type: str
76 description:
77 - Apply feature state.
78 choices: [ "present", "absent" ]
79 default: present
80 name:
81 type: str
82 description:
83 - Name used to create / delete the host. This does not need to be the FQDN, but does needs to be unique.
84 required: true
85 aliases: [host]
86 zone:
87 type: str
88 description:
89 - The zone from where this host should be polled.
90 template:
91 type: str
92 description:
93 - The template used to define the host.
94 - Template cannot be modified after object creation.
95 check_command:
96 type: str
97 description:
98 - The command used to check if the host is alive.
99 default: "hostalive"
100 display_name:
101 type: str
102 description:
103 - The name used to display the host.
104 - If not specified, it defaults to the value of the O(name) parameter.
105 ip:
106 type: str
107 description:
108 - The IP address of the host.
109 required: true
110 variables:
111 type: dict
112 description:
113 - Dictionary of variables.
114 extends_documentation_fragment:
115 - ansible.builtin.url
116 - community.general.attributes
117 '''
118
119 EXAMPLES = '''
120 - name: Add host to icinga
121 community.general.icinga2_host:
122 url: "https://icinga2.example.com"
123 url_username: "ansible"
124 url_password: "a_secret"
125 state: present
126 name: "{{ ansible_fqdn }}"
127 ip: "{{ ansible_default_ipv4.address }}"
128 variables:
129 foo: "bar"
130 delegate_to: 127.0.0.1
131 '''
132
133 RETURN = '''
134 name:
135 description: The name used to create, modify or delete the host
136 type: str
137 returned: always
138 data:
139 description: The data structure used for create, modify or delete of the host
140 type: dict
141 returned: always
142 '''
143
144 import json
145
146 from ansible.module_utils.basic import AnsibleModule
147 from ansible.module_utils.urls import fetch_url, url_argument_spec
148
149
150 # ===========================================
151 # Icinga2 API class
152 #
153 class icinga2_api:
154 module = None
155
156 def __init__(self, module):
157 self.module = module
158
159 def call_url(self, path, data='', method='GET'):
160 headers = {
161 'Accept': 'application/json',
162 'X-HTTP-Method-Override': method,
163 }
164 url = self.module.params.get("url") + "/" + path
165 rsp, info = fetch_url(module=self.module, url=url, data=data, headers=headers, method=method, use_proxy=self.module.params['use_proxy'])
166 body = ''
167 if rsp:
168 body = json.loads(rsp.read())
169 if info['status'] >= 400:
170 body = info['body']
171 return {'code': info['status'], 'data': body}
172
173 def check_connection(self):
174 ret = self.call_url('v1/status')
175 if ret['code'] == 200:
176 return True
177 return False
178
179 def exists(self, hostname):
180 data = {
181 "filter": "match(\"" + hostname + "\", host.name)",
182 }
183 ret = self.call_url(
184 path="v1/objects/hosts",
185 data=self.module.jsonify(data)
186 )
187 if ret['code'] == 200:
188 if len(ret['data']['results']) == 1:
189 return True
190 return False
191
192 def create(self, hostname, data):
193 ret = self.call_url(
194 path="v1/objects/hosts/" + hostname,
195 data=self.module.jsonify(data),
196 method="PUT"
197 )
198 return ret
199
200 def delete(self, hostname):
201 data = {"cascade": 1}
202 ret = self.call_url(
203 path="v1/objects/hosts/" + hostname,
204 data=self.module.jsonify(data),
205 method="DELETE"
206 )
207 return ret
208
209 def modify(self, hostname, data):
210 ret = self.call_url(
211 path="v1/objects/hosts/" + hostname,
212 data=self.module.jsonify(data),
213 method="POST"
214 )
215 return ret
216
217 def diff(self, hostname, data):
218 ret = self.call_url(
219 path="v1/objects/hosts/" + hostname,
220 method="GET"
221 )
222 changed = False
223 ic_data = ret['data']['results'][0]
224 for key in data['attrs']:
225 if key not in ic_data['attrs'].keys():
226 changed = True
227 elif data['attrs'][key] != ic_data['attrs'][key]:
228 changed = True
229 return changed
230
231
232 # ===========================================
233 # Module execution.
234 #
235 def main():
236 # use the predefined argument spec for url
237 argument_spec = url_argument_spec()
238 # add our own arguments
239 argument_spec.update(
240 state=dict(default="present", choices=["absent", "present"]),
241 name=dict(required=True, aliases=['host']),
242 zone=dict(),
243 template=dict(default=None),
244 check_command=dict(default="hostalive"),
245 display_name=dict(default=None),
246 ip=dict(required=True),
247 variables=dict(type='dict', default=None),
248 )
249
250 # Define the main module
251 module = AnsibleModule(
252 argument_spec=argument_spec,
253 supports_check_mode=True
254 )
255
256 state = module.params["state"]
257 name = module.params["name"]
258 zone = module.params["zone"]
259 template = []
260 if module.params["template"]:
261 template = [module.params["template"]]
262 check_command = module.params["check_command"]
263 ip = module.params["ip"]
264 display_name = module.params["display_name"]
265 if not display_name:
266 display_name = name
267 variables = module.params["variables"]
268
269 try:
270 icinga = icinga2_api(module=module)
271 icinga.check_connection()
272 except Exception as e:
273 module.fail_json(msg="unable to connect to Icinga. Exception message: %s" % (e))
274
275 data = {
276 'templates': template,
277 'attrs': {
278 'address': ip,
279 'display_name': display_name,
280 'check_command': check_command,
281 'zone': zone,
282 'vars.made_by': "ansible"
283 }
284 }
285
286 for key, value in variables.items():
287 data['attrs']['vars.' + key] = value
288
289 changed = False
290 if icinga.exists(name):
291 if state == "absent":
292 if module.check_mode:
293 module.exit_json(changed=True, name=name, data=data)
294 else:
295 try:
296 ret = icinga.delete(name)
297 if ret['code'] == 200:
298 changed = True
299 else:
300 module.fail_json(msg="bad return code (%s) deleting host: '%s'" % (ret['code'], ret['data']))
301 except Exception as e:
302 module.fail_json(msg="exception deleting host: " + str(e))
303
304 elif icinga.diff(name, data):
305 if module.check_mode:
306 module.exit_json(changed=False, name=name, data=data)
307
308 # Template attribute is not allowed in modification
309 del data['templates']
310
311 ret = icinga.modify(name, data)
312
313 if ret['code'] == 200:
314 changed = True
315 else:
316 module.fail_json(msg="bad return code (%s) modifying host: '%s'" % (ret['code'], ret['data']))
317
318 else:
319 if state == "present":
320 if module.check_mode:
321 changed = True
322 else:
323 try:
324 ret = icinga.create(name, data)
325 if ret['code'] == 200:
326 changed = True
327 else:
328 module.fail_json(msg="bad return code (%s) creating host: '%s'" % (ret['code'], ret['data']))
329 except Exception as e:
330 module.fail_json(msg="exception creating host: " + str(e))
331
332 module.exit_json(changed=changed, name=name, data=data)
333
334
335 # import module snippets
336 if __name__ == '__main__':
337 main()
338
[end of plugins/modules/icinga2_host.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/modules/icinga2_host.py b/plugins/modules/icinga2_host.py
--- a/plugins/modules/icinga2_host.py
+++ b/plugins/modules/icinga2_host.py
@@ -106,7 +106,7 @@
type: str
description:
- The IP address of the host.
- required: true
+ - This is no longer required since community.general 8.0.0.
variables:
type: dict
description:
@@ -243,7 +243,7 @@
template=dict(default=None),
check_command=dict(default="hostalive"),
display_name=dict(default=None),
- ip=dict(required=True),
+ ip=dict(),
variables=dict(type='dict', default=None),
)
|
{"golden_diff": "diff --git a/plugins/modules/icinga2_host.py b/plugins/modules/icinga2_host.py\n--- a/plugins/modules/icinga2_host.py\n+++ b/plugins/modules/icinga2_host.py\n@@ -106,7 +106,7 @@\n type: str\n description:\n - The IP address of the host.\n- required: true\n+ - This is no longer required since community.general 8.0.0.\n variables:\n type: dict\n description:\n@@ -243,7 +243,7 @@\n template=dict(default=None),\n check_command=dict(default=\"hostalive\"),\n display_name=dict(default=None),\n- ip=dict(required=True),\n+ ip=dict(),\n variables=dict(type='dict', default=None),\n )\n", "issue": "icinga2_host - ip should not be required\n### Summary\r\n\r\nHi all,\r\nas one can see in https://icinga.com/docs/icinga-2/latest/doc/09-object-types/#host the address variable is not mandatory, so IP should be optional in the plugin, too.\r\n\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\nicinga2_host\r\n\r\n### Ansible Version\r\n\r\n```console (paste below)\r\n$ ansible --version\r\n2.11.4\r\n```\r\n\r\n\r\n### Community.general Version\r\n\r\n```console (paste below)\r\n$ ansible-galaxy collection list community.general\r\n5.5.0\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nUbuntu 22.04\r\n\r\n### Steps to Reproduce\r\nTry to create a host without given an IP\r\n\r\n\r\n### Expected Results\r\n\r\nAddress is optionally\r\n\r\n### Actual Results\r\n\r\nAddress is mandatory\r\n\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# This module is proudly sponsored by CGI (www.cgi.com) and\n# KPN (www.kpn.com).\n# Copyright (c) Ansible project\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: icinga2_host\nshort_description: Manage a host in Icinga2\ndescription:\n - \"Add or remove a host to Icinga2 through the API.\"\n - \"See U(https://www.icinga.com/docs/icinga2/latest/doc/12-icinga2-api/)\"\nauthor: \"Jurgen Brand (@t794104)\"\nattributes:\n check_mode:\n support: full\n diff_mode:\n support: none\noptions:\n url:\n type: str\n description:\n - HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path\n use_proxy:\n description:\n - If V(false), it will not use a proxy, even if one is defined in\n an environment variable on the target hosts.\n type: bool\n default: true\n validate_certs:\n description:\n - If V(false), SSL certificates will not be validated. This should only be used\n on personally controlled sites using self-signed certificates.\n type: bool\n default: true\n url_username:\n type: str\n description:\n - The username for use in HTTP basic authentication.\n - This parameter can be used without O(url_password) for sites that allow empty passwords.\n url_password:\n type: str\n description:\n - The password for use in HTTP basic authentication.\n - If the O(url_username) parameter is not specified, the O(url_password) parameter will not be used.\n force_basic_auth:\n description:\n - httplib2, the library used by the uri module only sends authentication information when a webservice\n responds to an initial request with a 401 status. 
Since some basic auth services do not properly\n send a 401, logins will fail. This option forces the sending of the Basic authentication header\n upon initial request.\n type: bool\n default: false\n client_cert:\n type: path\n description:\n - PEM formatted certificate chain file to be used for SSL client\n authentication. This file can also include the key as well, and if\n the key is included, O(client_key) is not required.\n client_key:\n type: path\n description:\n - PEM formatted file that contains your private key to be used for SSL\n client authentication. If O(client_cert) contains both the certificate\n and key, this option is not required.\n state:\n type: str\n description:\n - Apply feature state.\n choices: [ \"present\", \"absent\" ]\n default: present\n name:\n type: str\n description:\n - Name used to create / delete the host. This does not need to be the FQDN, but does needs to be unique.\n required: true\n aliases: [host]\n zone:\n type: str\n description:\n - The zone from where this host should be polled.\n template:\n type: str\n description:\n - The template used to define the host.\n - Template cannot be modified after object creation.\n check_command:\n type: str\n description:\n - The command used to check if the host is alive.\n default: \"hostalive\"\n display_name:\n type: str\n description:\n - The name used to display the host.\n - If not specified, it defaults to the value of the O(name) parameter.\n ip:\n type: str\n description:\n - The IP address of the host.\n required: true\n variables:\n type: dict\n description:\n - Dictionary of variables.\nextends_documentation_fragment:\n - ansible.builtin.url\n - community.general.attributes\n'''\n\nEXAMPLES = '''\n- name: Add host to icinga\n community.general.icinga2_host:\n url: \"https://icinga2.example.com\"\n url_username: \"ansible\"\n url_password: \"a_secret\"\n state: present\n name: \"{{ ansible_fqdn }}\"\n ip: \"{{ ansible_default_ipv4.address }}\"\n variables:\n foo: \"bar\"\n delegate_to: 127.0.0.1\n'''\n\nRETURN = '''\nname:\n description: The name used to create, modify or delete the host\n type: str\n returned: always\ndata:\n description: The data structure used for create, modify or delete of the host\n type: dict\n returned: always\n'''\n\nimport json\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.urls import fetch_url, url_argument_spec\n\n\n# ===========================================\n# Icinga2 API class\n#\nclass icinga2_api:\n module = None\n\n def __init__(self, module):\n self.module = module\n\n def call_url(self, path, data='', method='GET'):\n headers = {\n 'Accept': 'application/json',\n 'X-HTTP-Method-Override': method,\n }\n url = self.module.params.get(\"url\") + \"/\" + path\n rsp, info = fetch_url(module=self.module, url=url, data=data, headers=headers, method=method, use_proxy=self.module.params['use_proxy'])\n body = ''\n if rsp:\n body = json.loads(rsp.read())\n if info['status'] >= 400:\n body = info['body']\n return {'code': info['status'], 'data': body}\n\n def check_connection(self):\n ret = self.call_url('v1/status')\n if ret['code'] == 200:\n return True\n return False\n\n def exists(self, hostname):\n data = {\n \"filter\": \"match(\\\"\" + hostname + \"\\\", host.name)\",\n }\n ret = self.call_url(\n path=\"v1/objects/hosts\",\n data=self.module.jsonify(data)\n )\n if ret['code'] == 200:\n if len(ret['data']['results']) == 1:\n return True\n return False\n\n def create(self, hostname, data):\n ret = self.call_url(\n 
path=\"v1/objects/hosts/\" + hostname,\n data=self.module.jsonify(data),\n method=\"PUT\"\n )\n return ret\n\n def delete(self, hostname):\n data = {\"cascade\": 1}\n ret = self.call_url(\n path=\"v1/objects/hosts/\" + hostname,\n data=self.module.jsonify(data),\n method=\"DELETE\"\n )\n return ret\n\n def modify(self, hostname, data):\n ret = self.call_url(\n path=\"v1/objects/hosts/\" + hostname,\n data=self.module.jsonify(data),\n method=\"POST\"\n )\n return ret\n\n def diff(self, hostname, data):\n ret = self.call_url(\n path=\"v1/objects/hosts/\" + hostname,\n method=\"GET\"\n )\n changed = False\n ic_data = ret['data']['results'][0]\n for key in data['attrs']:\n if key not in ic_data['attrs'].keys():\n changed = True\n elif data['attrs'][key] != ic_data['attrs'][key]:\n changed = True\n return changed\n\n\n# ===========================================\n# Module execution.\n#\ndef main():\n # use the predefined argument spec for url\n argument_spec = url_argument_spec()\n # add our own arguments\n argument_spec.update(\n state=dict(default=\"present\", choices=[\"absent\", \"present\"]),\n name=dict(required=True, aliases=['host']),\n zone=dict(),\n template=dict(default=None),\n check_command=dict(default=\"hostalive\"),\n display_name=dict(default=None),\n ip=dict(required=True),\n variables=dict(type='dict', default=None),\n )\n\n # Define the main module\n module = AnsibleModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n state = module.params[\"state\"]\n name = module.params[\"name\"]\n zone = module.params[\"zone\"]\n template = []\n if module.params[\"template\"]:\n template = [module.params[\"template\"]]\n check_command = module.params[\"check_command\"]\n ip = module.params[\"ip\"]\n display_name = module.params[\"display_name\"]\n if not display_name:\n display_name = name\n variables = module.params[\"variables\"]\n\n try:\n icinga = icinga2_api(module=module)\n icinga.check_connection()\n except Exception as e:\n module.fail_json(msg=\"unable to connect to Icinga. Exception message: %s\" % (e))\n\n data = {\n 'templates': template,\n 'attrs': {\n 'address': ip,\n 'display_name': display_name,\n 'check_command': check_command,\n 'zone': zone,\n 'vars.made_by': \"ansible\"\n }\n }\n\n for key, value in variables.items():\n data['attrs']['vars.' 
+ key] = value\n\n changed = False\n if icinga.exists(name):\n if state == \"absent\":\n if module.check_mode:\n module.exit_json(changed=True, name=name, data=data)\n else:\n try:\n ret = icinga.delete(name)\n if ret['code'] == 200:\n changed = True\n else:\n module.fail_json(msg=\"bad return code (%s) deleting host: '%s'\" % (ret['code'], ret['data']))\n except Exception as e:\n module.fail_json(msg=\"exception deleting host: \" + str(e))\n\n elif icinga.diff(name, data):\n if module.check_mode:\n module.exit_json(changed=False, name=name, data=data)\n\n # Template attribute is not allowed in modification\n del data['templates']\n\n ret = icinga.modify(name, data)\n\n if ret['code'] == 200:\n changed = True\n else:\n module.fail_json(msg=\"bad return code (%s) modifying host: '%s'\" % (ret['code'], ret['data']))\n\n else:\n if state == \"present\":\n if module.check_mode:\n changed = True\n else:\n try:\n ret = icinga.create(name, data)\n if ret['code'] == 200:\n changed = True\n else:\n module.fail_json(msg=\"bad return code (%s) creating host: '%s'\" % (ret['code'], ret['data']))\n except Exception as e:\n module.fail_json(msg=\"exception creating host: \" + str(e))\n\n module.exit_json(changed=changed, name=name, data=data)\n\n\n# import module snippets\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/icinga2_host.py"}]}
| 4,077 | 170 |
gh_patches_debug_29139
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-19038
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add "Privacy and security" section to personal Settings menu
Some personal settings are hard to find right now, and some settings pages have too many different kinds of settings. We should make the settings easier to navigate by splitting "Your account" into two sections:
1. **Profile** (1st on the list). We can try removing all the section headers and see if it's OK or confusing.
Settings (in order): Full name, Profile picture, "Deactivate account" button, everything currently under **Your account > Profile** (custom fields).
I'm not entirely sure about the "Deactivate account" button placement; we can play with it.
2. **Privacy and security** (2nd on the list)
Settings (in order):
a. **User settings**: Email, password, role
b. **Presence** (currently under **Notifications**)
c. **API key**
</issue>
<code>
[start of zerver/lib/markdown/help_relative_links.py]
1 import re
2 from typing import Any, List, Match, Optional
3
4 from markdown import Markdown
5 from markdown.extensions import Extension
6 from markdown.preprocessors import Preprocessor
7
8 # There is a lot of duplicated code between this file and
9 # help_settings_links.py. So if you're making a change here consider making
10 # it there as well.
11
12 REGEXP = re.compile(r"\{relative\|(?P<link_type>.*?)\|(?P<key>.*?)\}")
13
14 gear_info = {
15 # The pattern is key: [name, link]
16 # key is from REGEXP: `{relative|gear|key}`
17 # name is what the item is called in the gear menu: `Select **name**.`
18 # link is used for relative links: `Select [name](link).`
19 "manage-streams": ["Manage streams", "/#streams/subscribed"],
20 "settings": ["Settings", "/#settings/your-account"],
21 "manage-organization": ["Manage organization", "/#organization/organization-profile"],
22 "integrations": ["Integrations", "/integrations"],
23 "stats": ["Usage statistics", "/stats"],
24 "plans": ["Plans and pricing", "/plans"],
25 "billing": ["Billing", "/billing"],
26 "invite": ["Invite users", "/#invite"],
27 }
28
29 gear_instructions = """
30 1. From your desktop, click on the **gear**
31 (<i class="fa fa-cog"></i>) in the upper right corner.
32
33 1. Select {item}.
34 """
35
36
37 def gear_handle_match(key: str) -> str:
38 if relative_help_links:
39 item = f"[{gear_info[key][0]}]({gear_info[key][1]})"
40 else:
41 item = f"**{gear_info[key][0]}**"
42 return gear_instructions.format(item=item)
43
44
45 stream_info = {
46 "all": ["All streams", "/#streams/all"],
47 "subscribed": ["Your streams", "/#streams/subscribed"],
48 }
49
50 stream_instructions_no_link = """
51 1. From your desktop, click on the **gear**
52 (<i class="fa fa-cog"></i>) in the upper right corner.
53
54 1. Click **Manage streams**.
55 """
56
57
58 def stream_handle_match(key: str) -> str:
59 if relative_help_links:
60 return f"1. Go to [{stream_info[key][0]}]({stream_info[key][1]})."
61 if key == "all":
62 return stream_instructions_no_link + "\n\n1. Click **All streams** in the upper left."
63 return stream_instructions_no_link
64
65
66 LINK_TYPE_HANDLERS = {
67 "gear": gear_handle_match,
68 "stream": stream_handle_match,
69 }
70
71
72 class RelativeLinksHelpExtension(Extension):
73 def extendMarkdown(self, md: Markdown) -> None:
74 """Add RelativeLinksHelpExtension to the Markdown instance."""
75 md.registerExtension(self)
76 md.preprocessors.register(RelativeLinks(), "help_relative_links", 520)
77
78
79 relative_help_links: Optional[bool] = None
80
81
82 def set_relative_help_links(value: bool) -> None:
83 global relative_help_links
84 relative_help_links = value
85
86
87 class RelativeLinks(Preprocessor):
88 def run(self, lines: List[str]) -> List[str]:
89 done = False
90 while not done:
91 for line in lines:
92 loc = lines.index(line)
93 match = REGEXP.search(line)
94
95 if match:
96 text = [self.handleMatch(match)]
97 # The line that contains the directive to include the macro
98 # may be preceded or followed by text or tags, in that case
99 # we need to make sure that any preceding or following text
100 # stays the same.
101 line_split = REGEXP.split(line, maxsplit=0)
102 preceding = line_split[0]
103 following = line_split[-1]
104 text = [preceding, *text, following]
105 lines = lines[:loc] + text + lines[loc + 1 :]
106 break
107 else:
108 done = True
109 return lines
110
111 def handleMatch(self, match: Match[str]) -> str:
112 return LINK_TYPE_HANDLERS[match.group("link_type")](match.group("key"))
113
114
115 def makeExtension(*args: Any, **kwargs: Any) -> RelativeLinksHelpExtension:
116 return RelativeLinksHelpExtension(*args, **kwargs)
117
[end of zerver/lib/markdown/help_relative_links.py]
[start of zerver/lib/markdown/help_settings_links.py]
1 import re
2 from typing import Any, List, Match, Optional
3
4 from markdown import Markdown
5 from markdown.extensions import Extension
6 from markdown.preprocessors import Preprocessor
7
8 # There is a lot of duplicated code between this file and
9 # help_relative_links.py. So if you're making a change here consider making
10 # it there as well.
11
12 REGEXP = re.compile(r"\{settings_tab\|(?P<setting_identifier>.*?)\}")
13
14 link_mapping = {
15 # a mapping from the setting identifier that is the same as the final URL
16 # breadcrumb to that setting to the name of its setting type, the setting
17 # name as it appears in the user interface, and a relative link that can
18 # be used to get to that setting
19 "your-account": ["Settings", "Your account", "/#settings/your-account"],
20 "display-settings": ["Settings", "Display settings", "/#settings/display-settings"],
21 "notifications": ["Settings", "Notifications", "/#settings/notifications"],
22 "your-bots": ["Settings", "Your bots", "/#settings/your-bots"],
23 "alert-words": ["Settings", "Alert words", "/#settings/alert-words"],
24 "uploaded-files": ["Settings", "Uploaded files", "/#settings/uploaded-files"],
25 "muted-topics": ["Settings", "Muted topics", "/#settings/muted-topics"],
26 "muted-users": ["Settings", "Muted users", "/#settings/muted-users"],
27 "organization-profile": [
28 "Manage organization",
29 "Organization profile",
30 "/#organization/organization-profile",
31 ],
32 "organization-settings": [
33 "Manage organization",
34 "Organization settings",
35 "/#organization/organization-settings",
36 ],
37 "organization-permissions": [
38 "Manage organization",
39 "Organization permissions",
40 "/#organization/organization-permissions",
41 ],
42 "emoji-settings": ["Manage organization", "Custom emoji", "/#organization/emoji-settings"],
43 "auth-methods": [
44 "Manage organization",
45 "Authentication methods",
46 "/#organization/auth-methods",
47 ],
48 "user-groups-admin": ["Manage organization", "User groups", "/#organization/user-groups-admin"],
49 "user-list-admin": ["Manage organization", "Users", "/#organization/user-list-admin"],
50 "deactivated-users-admin": [
51 "Manage organization",
52 "Deactivated users",
53 "/#organization/deactivated-users-admin",
54 ],
55 "bot-list-admin": ["Manage organization", "Bots", "/#organization/bot-list-admin"],
56 "default-streams-list": [
57 "Manage organization",
58 "Default streams",
59 "/#organization/default-streams-list",
60 ],
61 "linkifier-settings": [
62 "Manage organization",
63 "Linkifiers",
64 "/#organization/linkifier-settings",
65 ],
66 "playground-settings": [
67 "Manage organization",
68 "Code playgrounds",
69 "/#organization/playground-settings",
70 ],
71 "profile-field-settings": [
72 "Manage organization",
73 "Custom profile fields",
74 "/#organization/profile-field-settings",
75 ],
76 "invites-list-admin": [
77 "Manage organization",
78 "Invitations",
79 "/#organization/invites-list-admin",
80 ],
81 "data-exports-admin": [
82 "Manage organization",
83 "Data exports",
84 "/#organization/data-exports-admin",
85 ],
86 }
87
88 settings_markdown = """
89 1. From your desktop, click on the **gear**
90 (<i class="fa fa-cog"></i>) in the upper right corner.
91
92 1. Select **{setting_type_name}**.
93
94 1. On the left, click {setting_reference}.
95 """
96
97
98 class SettingHelpExtension(Extension):
99 def extendMarkdown(self, md: Markdown) -> None:
100 """Add SettingHelpExtension to the Markdown instance."""
101 md.registerExtension(self)
102 md.preprocessors.register(Setting(), "setting", 515)
103
104
105 relative_settings_links: Optional[bool] = None
106
107
108 def set_relative_settings_links(value: bool) -> None:
109 global relative_settings_links
110 relative_settings_links = value
111
112
113 class Setting(Preprocessor):
114 def run(self, lines: List[str]) -> List[str]:
115 done = False
116 while not done:
117 for line in lines:
118 loc = lines.index(line)
119 match = REGEXP.search(line)
120
121 if match:
122 text = [self.handleMatch(match)]
123 # The line that contains the directive to include the macro
124 # may be preceded or followed by text or tags, in that case
125 # we need to make sure that any preceding or following text
126 # stays the same.
127 line_split = REGEXP.split(line, maxsplit=0)
128 preceding = line_split[0]
129 following = line_split[-1]
130 text = [preceding, *text, following]
131 lines = lines[:loc] + text + lines[loc + 1 :]
132 break
133 else:
134 done = True
135 return lines
136
137 def handleMatch(self, match: Match[str]) -> str:
138 setting_identifier = match.group("setting_identifier")
139 setting_type_name = link_mapping[setting_identifier][0]
140 setting_name = link_mapping[setting_identifier][1]
141 setting_link = link_mapping[setting_identifier][2]
142 if relative_settings_links:
143 return f"1. Go to [{setting_name}]({setting_link})."
144 return settings_markdown.format(
145 setting_type_name=setting_type_name,
146 setting_reference=f"**{setting_name}**",
147 )
148
149
150 def makeExtension(*args: Any, **kwargs: Any) -> SettingHelpExtension:
151 return SettingHelpExtension(*args, **kwargs)
152
[end of zerver/lib/markdown/help_settings_links.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zerver/lib/markdown/help_relative_links.py b/zerver/lib/markdown/help_relative_links.py
--- a/zerver/lib/markdown/help_relative_links.py
+++ b/zerver/lib/markdown/help_relative_links.py
@@ -17,7 +17,7 @@
# name is what the item is called in the gear menu: `Select **name**.`
# link is used for relative links: `Select [name](link).`
"manage-streams": ["Manage streams", "/#streams/subscribed"],
- "settings": ["Settings", "/#settings/your-account"],
+ "settings": ["Settings", "/#settings/profile"],
"manage-organization": ["Manage organization", "/#organization/organization-profile"],
"integrations": ["Integrations", "/integrations"],
"stats": ["Usage statistics", "/stats"],
diff --git a/zerver/lib/markdown/help_settings_links.py b/zerver/lib/markdown/help_settings_links.py
--- a/zerver/lib/markdown/help_settings_links.py
+++ b/zerver/lib/markdown/help_settings_links.py
@@ -16,7 +16,8 @@
# breadcrumb to that setting to the name of its setting type, the setting
# name as it appears in the user interface, and a relative link that can
# be used to get to that setting
- "your-account": ["Settings", "Your account", "/#settings/your-account"],
+ "profile": ["Settings", "Profile", "/#settings/profile"],
+ "account-and-privacy": ["Settings", "Account & privacy", "/#settings/account-and-privacy"],
"display-settings": ["Settings", "Display settings", "/#settings/display-settings"],
"notifications": ["Settings", "Notifications", "/#settings/notifications"],
"your-bots": ["Settings", "Your bots", "/#settings/your-bots"],
|
{"golden_diff": "diff --git a/zerver/lib/markdown/help_relative_links.py b/zerver/lib/markdown/help_relative_links.py\n--- a/zerver/lib/markdown/help_relative_links.py\n+++ b/zerver/lib/markdown/help_relative_links.py\n@@ -17,7 +17,7 @@\n # name is what the item is called in the gear menu: `Select **name**.`\n # link is used for relative links: `Select [name](link).`\n \"manage-streams\": [\"Manage streams\", \"/#streams/subscribed\"],\n- \"settings\": [\"Settings\", \"/#settings/your-account\"],\n+ \"settings\": [\"Settings\", \"/#settings/profile\"],\n \"manage-organization\": [\"Manage organization\", \"/#organization/organization-profile\"],\n \"integrations\": [\"Integrations\", \"/integrations\"],\n \"stats\": [\"Usage statistics\", \"/stats\"],\ndiff --git a/zerver/lib/markdown/help_settings_links.py b/zerver/lib/markdown/help_settings_links.py\n--- a/zerver/lib/markdown/help_settings_links.py\n+++ b/zerver/lib/markdown/help_settings_links.py\n@@ -16,7 +16,8 @@\n # breadcrumb to that setting to the name of its setting type, the setting\n # name as it appears in the user interface, and a relative link that can\n # be used to get to that setting\n- \"your-account\": [\"Settings\", \"Your account\", \"/#settings/your-account\"],\n+ \"profile\": [\"Settings\", \"Profile\", \"/#settings/profile\"],\n+ \"account-and-privacy\": [\"Settings\", \"Account & privacy\", \"/#settings/account-and-privacy\"],\n \"display-settings\": [\"Settings\", \"Display settings\", \"/#settings/display-settings\"],\n \"notifications\": [\"Settings\", \"Notifications\", \"/#settings/notifications\"],\n \"your-bots\": [\"Settings\", \"Your bots\", \"/#settings/your-bots\"],\n", "issue": "Add \"Privacy and security\" section to personal Settings menu \nSome personal settings are hard to find right now, and some settings pages have too many different kinds of settings. We should make the settings easier to navigate by splitting \"Your account\" into two sections:\r\n\r\n1. **Profile** (1st on the list). We can try removing all the section headers and see if it's OK or confusing.\r\nSettings (in order): Full name, Profile picture, \"Deactivate account\" button, everything currently under **Your account > Profile** (custom fields).\r\n\r\nI'm not entirely sure about the \"Deactivate account\" button placement; we can play with it.\r\n\r\n2. **Privacy and security** (2nd on the list) \r\nSettings (in order):\r\n a. **User settings**: Email, password, role\r\n b. **Presence** (currently under **Notifications**)\r\n c. **API key**\r\n\r\n\n", "before_files": [{"content": "import re\nfrom typing import Any, List, Match, Optional\n\nfrom markdown import Markdown\nfrom markdown.extensions import Extension\nfrom markdown.preprocessors import Preprocessor\n\n# There is a lot of duplicated code between this file and\n# help_settings_links.py. 
So if you're making a change here consider making\n# it there as well.\n\nREGEXP = re.compile(r\"\\{relative\\|(?P<link_type>.*?)\\|(?P<key>.*?)\\}\")\n\ngear_info = {\n # The pattern is key: [name, link]\n # key is from REGEXP: `{relative|gear|key}`\n # name is what the item is called in the gear menu: `Select **name**.`\n # link is used for relative links: `Select [name](link).`\n \"manage-streams\": [\"Manage streams\", \"/#streams/subscribed\"],\n \"settings\": [\"Settings\", \"/#settings/your-account\"],\n \"manage-organization\": [\"Manage organization\", \"/#organization/organization-profile\"],\n \"integrations\": [\"Integrations\", \"/integrations\"],\n \"stats\": [\"Usage statistics\", \"/stats\"],\n \"plans\": [\"Plans and pricing\", \"/plans\"],\n \"billing\": [\"Billing\", \"/billing\"],\n \"invite\": [\"Invite users\", \"/#invite\"],\n}\n\ngear_instructions = \"\"\"\n1. From your desktop, click on the **gear**\n (<i class=\"fa fa-cog\"></i>) in the upper right corner.\n\n1. Select {item}.\n\"\"\"\n\n\ndef gear_handle_match(key: str) -> str:\n if relative_help_links:\n item = f\"[{gear_info[key][0]}]({gear_info[key][1]})\"\n else:\n item = f\"**{gear_info[key][0]}**\"\n return gear_instructions.format(item=item)\n\n\nstream_info = {\n \"all\": [\"All streams\", \"/#streams/all\"],\n \"subscribed\": [\"Your streams\", \"/#streams/subscribed\"],\n}\n\nstream_instructions_no_link = \"\"\"\n1. From your desktop, click on the **gear**\n (<i class=\"fa fa-cog\"></i>) in the upper right corner.\n\n1. Click **Manage streams**.\n\"\"\"\n\n\ndef stream_handle_match(key: str) -> str:\n if relative_help_links:\n return f\"1. Go to [{stream_info[key][0]}]({stream_info[key][1]}).\"\n if key == \"all\":\n return stream_instructions_no_link + \"\\n\\n1. 
Click **All streams** in the upper left.\"\n return stream_instructions_no_link\n\n\nLINK_TYPE_HANDLERS = {\n \"gear\": gear_handle_match,\n \"stream\": stream_handle_match,\n}\n\n\nclass RelativeLinksHelpExtension(Extension):\n def extendMarkdown(self, md: Markdown) -> None:\n \"\"\"Add RelativeLinksHelpExtension to the Markdown instance.\"\"\"\n md.registerExtension(self)\n md.preprocessors.register(RelativeLinks(), \"help_relative_links\", 520)\n\n\nrelative_help_links: Optional[bool] = None\n\n\ndef set_relative_help_links(value: bool) -> None:\n global relative_help_links\n relative_help_links = value\n\n\nclass RelativeLinks(Preprocessor):\n def run(self, lines: List[str]) -> List[str]:\n done = False\n while not done:\n for line in lines:\n loc = lines.index(line)\n match = REGEXP.search(line)\n\n if match:\n text = [self.handleMatch(match)]\n # The line that contains the directive to include the macro\n # may be preceded or followed by text or tags, in that case\n # we need to make sure that any preceding or following text\n # stays the same.\n line_split = REGEXP.split(line, maxsplit=0)\n preceding = line_split[0]\n following = line_split[-1]\n text = [preceding, *text, following]\n lines = lines[:loc] + text + lines[loc + 1 :]\n break\n else:\n done = True\n return lines\n\n def handleMatch(self, match: Match[str]) -> str:\n return LINK_TYPE_HANDLERS[match.group(\"link_type\")](match.group(\"key\"))\n\n\ndef makeExtension(*args: Any, **kwargs: Any) -> RelativeLinksHelpExtension:\n return RelativeLinksHelpExtension(*args, **kwargs)\n", "path": "zerver/lib/markdown/help_relative_links.py"}, {"content": "import re\nfrom typing import Any, List, Match, Optional\n\nfrom markdown import Markdown\nfrom markdown.extensions import Extension\nfrom markdown.preprocessors import Preprocessor\n\n# There is a lot of duplicated code between this file and\n# help_relative_links.py. 
So if you're making a change here consider making\n# it there as well.\n\nREGEXP = re.compile(r\"\\{settings_tab\\|(?P<setting_identifier>.*?)\\}\")\n\nlink_mapping = {\n # a mapping from the setting identifier that is the same as the final URL\n # breadcrumb to that setting to the name of its setting type, the setting\n # name as it appears in the user interface, and a relative link that can\n # be used to get to that setting\n \"your-account\": [\"Settings\", \"Your account\", \"/#settings/your-account\"],\n \"display-settings\": [\"Settings\", \"Display settings\", \"/#settings/display-settings\"],\n \"notifications\": [\"Settings\", \"Notifications\", \"/#settings/notifications\"],\n \"your-bots\": [\"Settings\", \"Your bots\", \"/#settings/your-bots\"],\n \"alert-words\": [\"Settings\", \"Alert words\", \"/#settings/alert-words\"],\n \"uploaded-files\": [\"Settings\", \"Uploaded files\", \"/#settings/uploaded-files\"],\n \"muted-topics\": [\"Settings\", \"Muted topics\", \"/#settings/muted-topics\"],\n \"muted-users\": [\"Settings\", \"Muted users\", \"/#settings/muted-users\"],\n \"organization-profile\": [\n \"Manage organization\",\n \"Organization profile\",\n \"/#organization/organization-profile\",\n ],\n \"organization-settings\": [\n \"Manage organization\",\n \"Organization settings\",\n \"/#organization/organization-settings\",\n ],\n \"organization-permissions\": [\n \"Manage organization\",\n \"Organization permissions\",\n \"/#organization/organization-permissions\",\n ],\n \"emoji-settings\": [\"Manage organization\", \"Custom emoji\", \"/#organization/emoji-settings\"],\n \"auth-methods\": [\n \"Manage organization\",\n \"Authentication methods\",\n \"/#organization/auth-methods\",\n ],\n \"user-groups-admin\": [\"Manage organization\", \"User groups\", \"/#organization/user-groups-admin\"],\n \"user-list-admin\": [\"Manage organization\", \"Users\", \"/#organization/user-list-admin\"],\n \"deactivated-users-admin\": [\n \"Manage organization\",\n \"Deactivated users\",\n \"/#organization/deactivated-users-admin\",\n ],\n \"bot-list-admin\": [\"Manage organization\", \"Bots\", \"/#organization/bot-list-admin\"],\n \"default-streams-list\": [\n \"Manage organization\",\n \"Default streams\",\n \"/#organization/default-streams-list\",\n ],\n \"linkifier-settings\": [\n \"Manage organization\",\n \"Linkifiers\",\n \"/#organization/linkifier-settings\",\n ],\n \"playground-settings\": [\n \"Manage organization\",\n \"Code playgrounds\",\n \"/#organization/playground-settings\",\n ],\n \"profile-field-settings\": [\n \"Manage organization\",\n \"Custom profile fields\",\n \"/#organization/profile-field-settings\",\n ],\n \"invites-list-admin\": [\n \"Manage organization\",\n \"Invitations\",\n \"/#organization/invites-list-admin\",\n ],\n \"data-exports-admin\": [\n \"Manage organization\",\n \"Data exports\",\n \"/#organization/data-exports-admin\",\n ],\n}\n\nsettings_markdown = \"\"\"\n1. From your desktop, click on the **gear**\n (<i class=\"fa fa-cog\"></i>) in the upper right corner.\n\n1. Select **{setting_type_name}**.\n\n1. 
On the left, click {setting_reference}.\n\"\"\"\n\n\nclass SettingHelpExtension(Extension):\n def extendMarkdown(self, md: Markdown) -> None:\n \"\"\"Add SettingHelpExtension to the Markdown instance.\"\"\"\n md.registerExtension(self)\n md.preprocessors.register(Setting(), \"setting\", 515)\n\n\nrelative_settings_links: Optional[bool] = None\n\n\ndef set_relative_settings_links(value: bool) -> None:\n global relative_settings_links\n relative_settings_links = value\n\n\nclass Setting(Preprocessor):\n def run(self, lines: List[str]) -> List[str]:\n done = False\n while not done:\n for line in lines:\n loc = lines.index(line)\n match = REGEXP.search(line)\n\n if match:\n text = [self.handleMatch(match)]\n # The line that contains the directive to include the macro\n # may be preceded or followed by text or tags, in that case\n # we need to make sure that any preceding or following text\n # stays the same.\n line_split = REGEXP.split(line, maxsplit=0)\n preceding = line_split[0]\n following = line_split[-1]\n text = [preceding, *text, following]\n lines = lines[:loc] + text + lines[loc + 1 :]\n break\n else:\n done = True\n return lines\n\n def handleMatch(self, match: Match[str]) -> str:\n setting_identifier = match.group(\"setting_identifier\")\n setting_type_name = link_mapping[setting_identifier][0]\n setting_name = link_mapping[setting_identifier][1]\n setting_link = link_mapping[setting_identifier][2]\n if relative_settings_links:\n return f\"1. Go to [{setting_name}]({setting_link}).\"\n return settings_markdown.format(\n setting_type_name=setting_type_name,\n setting_reference=f\"**{setting_name}**\",\n )\n\n\ndef makeExtension(*args: Any, **kwargs: Any) -> SettingHelpExtension:\n return SettingHelpExtension(*args, **kwargs)\n", "path": "zerver/lib/markdown/help_settings_links.py"}]}
| 3,489 | 404 |
gh_patches_debug_42171
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-5620
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/shardformer/shard/shard_config.py]
1 import warnings
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, Optional
4
5 import torch.distributed as dist
6 from torch.distributed import ProcessGroup
7
8 from colossalai.pipeline.stage_manager import PipelineStageManager
9
10 from .grad_ckpt_config import GradientCheckpointConfig
11
12 __all__ = ["ShardConfig"]
13 SUPPORT_SP_MODE = ["split_gather", "ring", "all_to_all"]
14
15
16 @dataclass
17 class ShardConfig:
18 r"""
19 The config for sharding the huggingface model
20
21 Args:
22 tensor_parallel_process_group (Optional[ProcessGroup]): The process group of tensor parallelism, it's necessary when using tensor parallel. Defaults to None, which is the global process group.
23 pipeline_stage_manager (Optional[PipelineStageManager]): If using pipeline parallelism, it's necessary to specify a pipeline stage manager for inter-process communication in pipeline parallelism. Defaults to None, which means not using pipeline parallelism.
24 enable_tensor_parallelism (bool): Whether to use tensor parallelism. Defaults to True.
25 enable_fused_normalization (bool): Whether to use fused layernorm. Defaults to False.
26 enable_flash_attention (bool, optional): Whether to switch on flash attention. Defaults to False.
27 enable_jit_fused (bool, optional): Whether to switch on JIT fused operators. Defaults to False.
28 enable_sequence_parallelism (bool): Whether to turn on sequence parallelism, which partitions non-tensor-parallel regions along the sequence dimension. Defaults to False.
29 enable_sequence_overlap (bool): Whether to turn on sequence overlap, which overlap the computation and communication in sequence parallelism. It can only be used when enable_sequence_parallelism is True. Defaults to False.
30 gradient_checkpoint_config (Optional[GradientCheckpointConfig]): The gradient checkpoint config. Defaults to None.
31 enable_all_optimization (bool): Whether to turn on all optimization tools including 'fused normalization', 'flash attention', 'JIT fused operators', 'sequence parallelism' and 'sequence overlap'. Defaults to False.
32 """
33 tensor_parallel_process_group: Optional[ProcessGroup] = None
34 sequence_parallel_process_group: Optional[ProcessGroup] = None
35 pipeline_stage_manager: Optional[PipelineStageManager] = None
36 enable_tensor_parallelism: bool = True
37 enable_all_optimization: bool = False
38 enable_fused_normalization: bool = False
39 enable_flash_attention: bool = False
40 enable_jit_fused: bool = False
41 enable_sequence_parallelism: bool = False
42 sequence_parallelism_mode: str = None
43 enable_sequence_overlap: bool = False
44 parallel_output: bool = True
45 make_vocab_size_divisible_by: int = 64
46 gradient_checkpoint_config: Optional[GradientCheckpointConfig] = None
47 extra_kwargs: Dict[str, Any] = field(default_factory=dict)
48 # pipeline_parallel_size: int
49 # data_parallel_size: int
50 # tensor_parallel_mode: Literal['1d', '2d', '2.5d', '3d']
51
52 @property
53 def tensor_parallel_size(self):
54 return self._tensor_parallel_size
55
56 @property
57 def sequence_parallel_size(self):
58 return self._sequence_parallel_size
59
60 def __post_init__(self):
61 # turn on all optimization if all_optimization is set to True
62 if self.enable_all_optimization:
63 self._turn_on_all_optimization()
64
65 if self.enable_sequence_parallelism:
66 self.sequence_parallelism_mode = (
67 "split_gather" if self.sequence_parallelism_mode is None else self.sequence_parallelism_mode
68 )
69 assert (
70 self.sequence_parallelism_mode in SUPPORT_SP_MODE
71 ), f"Sequence parallelism mode {self.sequence_parallelism_mode} is not in the supported list {SUPPORT_SP_MODE}"
72 if self.sequence_parallelism_mode in ["split_gather", "ring"]:
73 assert (
74 self.enable_tensor_parallelism
75 ), f"sequence parallelism mode {self.sequence_parallelism_mode} can only be used when enable_tensor_parallelism is True"
76 elif self.sequence_parallelism_mode in ["all_to_all"]:
77 assert (
78 not self.enable_tensor_parallelism
79 ), f"sequence parallelism mode {self.sequence_parallelism_mode} can only be used when enable_tensor_parallelism is False"
80 if self.enable_sequence_overlap:
81 self.enable_sequence_overlap = False
82 warnings.warn(
83 f"The enable_sequence_overlap flag will be ignored in sequence parallelism mode {self.sequence_parallelism_mode}"
84 )
85 else:
86 if self.sequence_parallelism_mode:
87 self.sequence_parallelism_mode = None
88 warnings.warn(
89 f"The sequence_parallelism_mode will be ignored when enable_sequence_parallelism is False"
90 )
91 assert (
92 not self.enable_sequence_overlap
93 ), f"enable_sequence_overlap can only be set to True when enable_sequence_parallelism is True"
94
95 # get the tensor parallel size
96 if not self.enable_tensor_parallelism:
97 self._tensor_parallel_size = 1
98 else:
99 self._tensor_parallel_size = dist.get_world_size(self.tensor_parallel_process_group)
100
101 # get the sequence parallel size
102 if not self.enable_sequence_parallelism:
103 self._sequence_parallel_size = 1
104 else:
105 self._sequence_parallel_size = dist.get_world_size(self.sequence_parallel_process_group)
106
107 def _turn_on_all_optimization(self):
108 """
109 Turn on all optimization.
110 """
111 # you can add all the optimization flag here
112 self.enable_fused_normalization = True
113 self.enable_flash_attention = True
114 self.enable_jit_fused = True
115 # This can cause non-in-place param sharding when used without ZeRO.
116 # It may also slow down training when seq len is small. Plz enable manually.
117 # self.enable_sequence_parallelism = True
118 # self.enable_sequence_overlap = True
119
120 def _infer(self):
121 """
122 Set default params for inference.
123 """
124 # assert self.pipeline_stage_manager is None, "pipeline parallelism is not supported in inference for now"
125
[end of colossalai/shardformer/shard/shard_config.py]
[start of colossalai/shardformer/shard/grad_ckpt_config.py]
1 from dataclasses import dataclass
2 from typing import List, Optional
3
4
5 @dataclass
6 class GradientCheckpointConfig:
7 gradient_checkpointing_ratio: float = 0.0
8
9 def get_num_ckpt_layers(self, num_layers: int) -> int:
10 return int(self.gradient_checkpointing_ratio * num_layers)
11
12
13 @dataclass
14 class PipelineGradientCheckpointConfig(GradientCheckpointConfig):
15 r"""
16 The pipeline gradient config is designed to provide more flexibility for users to control gradient checkpoint in pipeline parallelism.
17 Combined with PipelineStageManager.set_distribution_config, user can fully control the distribution of layers and checkpointed layers in pipeline parallelism.
18 Refer to https://github.com/hpcaitech/ColossalAI/issues/5509 for more details.
19
20 It provides the following features:
21 1. `gradient_checkpointing_ratio`: This is used to control gradient checkpointing more precisely, e.g., set 50% of the layers to use gradient checkpointing.
22 2. Customize # ckpt layers assigned to each stage. This takes precedence over `gradient_checkpointing_ratio`.
23
24 """
25 """
26 Args:
27 gradient_checkpointing_ratio (Optional[float]): The ratio of gradient checkpointing. It can only be used in pipeline parallelism. Defaults to None.
28 num_stages (Optional[int]): Number of stages in the pipeline. Defaults to None. For sanity check.
29 num_model_chunks (Optional[int]): Number of model chunks (1F1B or Interleaved). Defaults to None. For sanity check.
30 num_model_layers (Optional[int]): Number of model layers. Defaults to None. For sanity check.
31 num_ckpt_layers_per_stage (Optional[List[int]]): Number of checkpointed layers for each stage. Defaults to None.
32
33 Example 1:
34 num_stages = 8
35 num_layers = 80
36 num_model_chunks = 1
37 num_layers_per_stage = [9, 9, 9, 10, 11, 10, 11, 11]
38 num_ckpt_layers_per_stage = [4, 4, 2, 2, 0, 0, 0, 0]
39
40 Example 2:
41 num_stages = 4
42 num_layers = 80
43 num_model_chunks = 2
44 num_layers_per_stage = [9, 9, 9, 10, 11, 10, 11, 11]
45 # device 0 holds num_layers_per_stage[0] and num_layers_per_stage[4] layers
46 ...
47
48 """
49 num_stages: Optional[int] = None
50 num_model_chunks: Optional[int] = None
51 num_model_layers: Optional[int] = None
52 num_ckpt_layers_per_stage: Optional[List[int]] = None
53
54 def __post_init__(self):
55 if self._enable_gradient_checkpointing_ratio:
56 if not (0 <= self.gradient_checkpointing_ratio <= 1):
57 raise ValueError("gradient_checkpointing_ratio should be in 0% to 100%")
58
59 if self._enable_customized_ckpt_layers_per_stage:
60 assert (
61 self.num_stages is not None and self.num_model_chunks is not None and self.num_model_layers is not None
62 )
63 assert len(self.num_ckpt_layers_per_stage) == self.num_stages * self.num_model_chunks
64 assert all(
65 [0 <= num_ckpt_layers < self.num_model_layers for num_ckpt_layers in self.num_ckpt_layers_per_stage]
66 )
67 self.gradient_checkpointing_ratio = sum(self.num_ckpt_layers_per_stage) / self.num_model_layers
68
69 @property
70 def _enable_gradient_checkpointing_ratio(self) -> bool:
71 return self.gradient_checkpointing_ratio is not None
72
73 @property
74 def _enable_customized_ckpt_layers_per_stage(self) -> bool:
75 return self.num_ckpt_layers_per_stage is not None
76
77 def get_num_ckpt_layers(self, stage: int, num_layers: int, model_chunk_id: int = 0) -> int:
78 if not self._enable_gradient_checkpointing_ratio and not self._enable_customized_ckpt_layers_per_stage:
79 raise RuntimeError("No checkpointed layers information is provided")
80
81 if self._enable_customized_ckpt_layers_per_stage:
82 assert stage <= self.num_stages and model_chunk_id <= self.num_model_chunks
83 num_ckpt_layers = self.num_ckpt_layers_per_stage[stage + model_chunk_id * self.num_stages]
84 assert num_ckpt_layers <= num_layers
85 return num_ckpt_layers
86 else:
87 return int(self.gradient_checkpointing_ratio * num_layers)
88
[end of colossalai/shardformer/shard/grad_ckpt_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/colossalai/shardformer/shard/grad_ckpt_config.py b/colossalai/shardformer/shard/grad_ckpt_config.py
--- a/colossalai/shardformer/shard/grad_ckpt_config.py
+++ b/colossalai/shardformer/shard/grad_ckpt_config.py
@@ -22,6 +22,7 @@
2. Customize # ckpt layers assigned to each stage. This takes precedence over `gradient_checkpointing_ratio`.
"""
+
"""
Args:
gradient_checkpointing_ratio (Optional[float]): The ratio of gradient checkpointing. It can only be used in pipeline parallelism. Defaults to None.
@@ -49,6 +50,7 @@
num_stages: Optional[int] = None
num_model_chunks: Optional[int] = None
num_model_layers: Optional[int] = None
+ num_layers_per_stage: Optional[List[int]] = None
num_ckpt_layers_per_stage: Optional[List[int]] = None
def __post_init__(self):
@@ -70,6 +72,10 @@
def _enable_gradient_checkpointing_ratio(self) -> bool:
return self.gradient_checkpointing_ratio is not None
+ @property
+ def _customize_num_layers_per_stage(self) -> bool:
+ return self.num_layers_per_stage is not None and self.num_model_layers is not None
+
@property
def _enable_customized_ckpt_layers_per_stage(self) -> bool:
return self.num_ckpt_layers_per_stage is not None
diff --git a/colossalai/shardformer/shard/shard_config.py b/colossalai/shardformer/shard/shard_config.py
--- a/colossalai/shardformer/shard/shard_config.py
+++ b/colossalai/shardformer/shard/shard_config.py
@@ -7,7 +7,7 @@
from colossalai.pipeline.stage_manager import PipelineStageManager
-from .grad_ckpt_config import GradientCheckpointConfig
+from .grad_ckpt_config import GradientCheckpointConfig, PipelineGradientCheckpointConfig
__all__ = ["ShardConfig"]
SUPPORT_SP_MODE = ["split_gather", "ring", "all_to_all"]
@@ -30,6 +30,7 @@
gradient_checkpoint_config (Optional[GradientCheckpointConfig]): The gradient checkpoint config. Defaults to None.
enable_all_optimization (bool): Whether to turn on all optimization tools including 'fused normalization', 'flash attention', 'JIT fused operators', 'sequence parallelism' and 'sequence overlap'. Defaults to False.
"""
+
tensor_parallel_process_group: Optional[ProcessGroup] = None
sequence_parallel_process_group: Optional[ProcessGroup] = None
pipeline_stage_manager: Optional[PipelineStageManager] = None
@@ -104,6 +105,16 @@
else:
self._sequence_parallel_size = dist.get_world_size(self.sequence_parallel_process_group)
+ if (
+ self.pipeline_stage_manager is not None
+ and isinstance(self.gradient_checkpoint_config, PipelineGradientCheckpointConfig)
+ and self.gradient_checkpoint_config._customize_num_layers_per_stage
+ ):
+ self.pipeline_stage_manager.set_distribution_config(
+ self.gradient_checkpoint_config.num_model_layers,
+ self.gradient_checkpoint_config.num_layers_per_stage,
+ )
+
def _turn_on_all_optimization(self):
"""
Turn on all optimization.
|
{"golden_diff": "diff --git a/colossalai/shardformer/shard/grad_ckpt_config.py b/colossalai/shardformer/shard/grad_ckpt_config.py\n--- a/colossalai/shardformer/shard/grad_ckpt_config.py\n+++ b/colossalai/shardformer/shard/grad_ckpt_config.py\n@@ -22,6 +22,7 @@\n 2. Customize # ckpt layers assigned to each stage. This takes precedence over `gradient_checkpointing_ratio`.\n \n \"\"\"\n+\n \"\"\"\n Args:\n gradient_checkpointing_ratio (Optional[float]): The ratio of gradient checkpointing. It can only be used in pipeline parallelism. Defaults to None.\n@@ -49,6 +50,7 @@\n num_stages: Optional[int] = None\n num_model_chunks: Optional[int] = None\n num_model_layers: Optional[int] = None\n+ num_layers_per_stage: Optional[List[int]] = None\n num_ckpt_layers_per_stage: Optional[List[int]] = None\n \n def __post_init__(self):\n@@ -70,6 +72,10 @@\n def _enable_gradient_checkpointing_ratio(self) -> bool:\n return self.gradient_checkpointing_ratio is not None\n \n+ @property\n+ def _customize_num_layers_per_stage(self) -> bool:\n+ return self.num_layers_per_stage is not None and self.num_model_layers is not None\n+\n @property\n def _enable_customized_ckpt_layers_per_stage(self) -> bool:\n return self.num_ckpt_layers_per_stage is not None\ndiff --git a/colossalai/shardformer/shard/shard_config.py b/colossalai/shardformer/shard/shard_config.py\n--- a/colossalai/shardformer/shard/shard_config.py\n+++ b/colossalai/shardformer/shard/shard_config.py\n@@ -7,7 +7,7 @@\n \n from colossalai.pipeline.stage_manager import PipelineStageManager\n \n-from .grad_ckpt_config import GradientCheckpointConfig\n+from .grad_ckpt_config import GradientCheckpointConfig, PipelineGradientCheckpointConfig\n \n __all__ = [\"ShardConfig\"]\n SUPPORT_SP_MODE = [\"split_gather\", \"ring\", \"all_to_all\"]\n@@ -30,6 +30,7 @@\n gradient_checkpoint_config (Optional[GradientCheckpointConfig]): The gradient checkpoint config. Defaults to None.\n enable_all_optimization (bool): Whether to turn on all optimization tools including 'fused normalization', 'flash attention', 'JIT fused operators', 'sequence parallelism' and 'sequence overlap'. 
Defaults to False.\n \"\"\"\n+\n tensor_parallel_process_group: Optional[ProcessGroup] = None\n sequence_parallel_process_group: Optional[ProcessGroup] = None\n pipeline_stage_manager: Optional[PipelineStageManager] = None\n@@ -104,6 +105,16 @@\n else:\n self._sequence_parallel_size = dist.get_world_size(self.sequence_parallel_process_group)\n \n+ if (\n+ self.pipeline_stage_manager is not None\n+ and isinstance(self.gradient_checkpoint_config, PipelineGradientCheckpointConfig)\n+ and self.gradient_checkpoint_config._customize_num_layers_per_stage\n+ ):\n+ self.pipeline_stage_manager.set_distribution_config(\n+ self.gradient_checkpoint_config.num_model_layers,\n+ self.gradient_checkpoint_config.num_layers_per_stage,\n+ )\n+\n def _turn_on_all_optimization(self):\n \"\"\"\n Turn on all optimization.\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import warnings\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, Optional\n\nimport torch.distributed as dist\nfrom torch.distributed import ProcessGroup\n\nfrom colossalai.pipeline.stage_manager import PipelineStageManager\n\nfrom .grad_ckpt_config import GradientCheckpointConfig\n\n__all__ = [\"ShardConfig\"]\nSUPPORT_SP_MODE = [\"split_gather\", \"ring\", \"all_to_all\"]\n\n\n@dataclass\nclass ShardConfig:\n r\"\"\"\n The config for sharding the huggingface model\n\n Args:\n tensor_parallel_process_group (Optional[ProcessGroup]): The process group of tensor parallelism, it's necessary when using tensor parallel. Defaults to None, which is the global process group.\n pipeline_stage_manager (Optional[PipelineStageManager]): If using pipeline parallelism, it's necessary to specify a pipeline stage manager for inter-process communication in pipeline parallelism. Defaults to None, which means not using pipeline parallelism.\n enable_tensor_parallelism (bool): Whether to use tensor parallelism. Defaults to True.\n enable_fused_normalization (bool): Whether to use fused layernorm. Defaults to False.\n enable_flash_attention (bool, optional): Whether to switch on flash attention. Defaults to False.\n enable_jit_fused (bool, optional): Whether to switch on JIT fused operators. Defaults to False.\n enable_sequence_parallelism (bool): Whether to turn on sequence parallelism, which partitions non-tensor-parallel regions along the sequence dimension. Defaults to False.\n enable_sequence_overlap (bool): Whether to turn on sequence overlap, which overlap the computation and communication in sequence parallelism. It can only be used when enable_sequence_parallelism is True. Defaults to False.\n gradient_checkpoint_config (Optional[GradientCheckpointConfig]): The gradient checkpoint config. Defaults to None.\n enable_all_optimization (bool): Whether to turn on all optimization tools including 'fused normalization', 'flash attention', 'JIT fused operators', 'sequence parallelism' and 'sequence overlap'. 
Defaults to False.\n \"\"\"\n tensor_parallel_process_group: Optional[ProcessGroup] = None\n sequence_parallel_process_group: Optional[ProcessGroup] = None\n pipeline_stage_manager: Optional[PipelineStageManager] = None\n enable_tensor_parallelism: bool = True\n enable_all_optimization: bool = False\n enable_fused_normalization: bool = False\n enable_flash_attention: bool = False\n enable_jit_fused: bool = False\n enable_sequence_parallelism: bool = False\n sequence_parallelism_mode: str = None\n enable_sequence_overlap: bool = False\n parallel_output: bool = True\n make_vocab_size_divisible_by: int = 64\n gradient_checkpoint_config: Optional[GradientCheckpointConfig] = None\n extra_kwargs: Dict[str, Any] = field(default_factory=dict)\n # pipeline_parallel_size: int\n # data_parallel_size: int\n # tensor_parallel_mode: Literal['1d', '2d', '2.5d', '3d']\n\n @property\n def tensor_parallel_size(self):\n return self._tensor_parallel_size\n\n @property\n def sequence_parallel_size(self):\n return self._sequence_parallel_size\n\n def __post_init__(self):\n # turn on all optimization if all_optimization is set to True\n if self.enable_all_optimization:\n self._turn_on_all_optimization()\n\n if self.enable_sequence_parallelism:\n self.sequence_parallelism_mode = (\n \"split_gather\" if self.sequence_parallelism_mode is None else self.sequence_parallelism_mode\n )\n assert (\n self.sequence_parallelism_mode in SUPPORT_SP_MODE\n ), f\"Sequence parallelism mode {self.sequence_parallelism_mode} is not in the supported list {SUPPORT_SP_MODE}\"\n if self.sequence_parallelism_mode in [\"split_gather\", \"ring\"]:\n assert (\n self.enable_tensor_parallelism\n ), f\"sequence parallelism mode {self.sequence_parallelism_mode} can only be used when enable_tensor_parallelism is True\"\n elif self.sequence_parallelism_mode in [\"all_to_all\"]:\n assert (\n not self.enable_tensor_parallelism\n ), f\"sequence parallelism mode {self.sequence_parallelism_mode} can only be used when enable_tensor_parallelism is False\"\n if self.enable_sequence_overlap:\n self.enable_sequence_overlap = False\n warnings.warn(\n f\"The enable_sequence_overlap flag will be ignored in sequence parallelism mode {self.sequence_parallelism_mode}\"\n )\n else:\n if self.sequence_parallelism_mode:\n self.sequence_parallelism_mode = None\n warnings.warn(\n f\"The sequence_parallelism_mode will be ignored when enable_sequence_parallelism is False\"\n )\n assert (\n not self.enable_sequence_overlap\n ), f\"enable_sequence_overlap can only be set to True when enable_sequence_parallelism is True\"\n\n # get the tensor parallel size\n if not self.enable_tensor_parallelism:\n self._tensor_parallel_size = 1\n else:\n self._tensor_parallel_size = dist.get_world_size(self.tensor_parallel_process_group)\n\n # get the sequence parallel size\n if not self.enable_sequence_parallelism:\n self._sequence_parallel_size = 1\n else:\n self._sequence_parallel_size = dist.get_world_size(self.sequence_parallel_process_group)\n\n def _turn_on_all_optimization(self):\n \"\"\"\n Turn on all optimization.\n \"\"\"\n # you can add all the optimization flag here\n self.enable_fused_normalization = True\n self.enable_flash_attention = True\n self.enable_jit_fused = True\n # This can cause non-in-place param sharding when used without ZeRO.\n # It may also slow down training when seq len is small. 
Plz enable manually.\n # self.enable_sequence_parallelism = True\n # self.enable_sequence_overlap = True\n\n def _infer(self):\n \"\"\"\n Set default params for inference.\n \"\"\"\n # assert self.pipeline_stage_manager is None, \"pipeline parallelism is not supported in inference for now\"\n", "path": "colossalai/shardformer/shard/shard_config.py"}, {"content": "from dataclasses import dataclass\nfrom typing import List, Optional\n\n\n@dataclass\nclass GradientCheckpointConfig:\n gradient_checkpointing_ratio: float = 0.0\n\n def get_num_ckpt_layers(self, num_layers: int) -> int:\n return int(self.gradient_checkpointing_ratio * num_layers)\n\n\n@dataclass\nclass PipelineGradientCheckpointConfig(GradientCheckpointConfig):\n r\"\"\"\n The pipeline gradient config is designed to provide more flexibility for users to control gradient checkpoint in pipeline parallelism.\n Combined with PipelineStageManager.set_distribution_config, user can fully control the distribution of layers and checkpointed layers in pipeline parallelism.\n Refer to https://github.com/hpcaitech/ColossalAI/issues/5509 for more details.\n\n It provides the following features:\n 1. `gradient_checkpointing_ratio`: This is used to control gradient checkpointing more precisely, e.g., set 50% of the layers to use gradient checkpointing.\n 2. Customize # ckpt layers assigned to each stage. This takes precedence over `gradient_checkpointing_ratio`.\n\n \"\"\"\n \"\"\"\n Args:\n gradient_checkpointing_ratio (Optional[float]): The ratio of gradient checkpointing. It can only be used in pipeline parallelism. Defaults to None.\n num_stages (Optional[int]): Number of stages in the pipeline. Defaults to None. For sanity check.\n num_model_chunks (Optional[int]): Number of model chunks (1F1B or Interleaved). Defaults to None. For sanity check.\n num_model_layers (Optional[int]): Number of model layers. Defaults to None. For sanity check.\n num_ckpt_layers_per_stage (Optional[List[int]]): Number of checkpointed layers for each stage. 
Defaults to None.\n\n Example 1:\n num_stages = 8\n num_layers = 80\n num_model_chunks = 1\n num_layers_per_stage = [9, 9, 9, 10, 11, 10, 11, 11]\n num_ckpt_layers_per_stage = [4, 4, 2, 2, 0, 0, 0, 0]\n\n Example 2:\n num_stages = 4\n num_layers = 80\n num_model_chunks = 2\n num_layers_per_stage = [9, 9, 9, 10, 11, 10, 11, 11]\n # device 0 holds num_layers_per_stage[0] and num_layers_per_stage[4] layers\n ...\n\n \"\"\"\n num_stages: Optional[int] = None\n num_model_chunks: Optional[int] = None\n num_model_layers: Optional[int] = None\n num_ckpt_layers_per_stage: Optional[List[int]] = None\n\n def __post_init__(self):\n if self._enable_gradient_checkpointing_ratio:\n if not (0 <= self.gradient_checkpointing_ratio <= 1):\n raise ValueError(\"gradient_checkpointing_ratio should be in 0% to 100%\")\n\n if self._enable_customized_ckpt_layers_per_stage:\n assert (\n self.num_stages is not None and self.num_model_chunks is not None and self.num_model_layers is not None\n )\n assert len(self.num_ckpt_layers_per_stage) == self.num_stages * self.num_model_chunks\n assert all(\n [0 <= num_ckpt_layers < self.num_model_layers for num_ckpt_layers in self.num_ckpt_layers_per_stage]\n )\n self.gradient_checkpointing_ratio = sum(self.num_ckpt_layers_per_stage) / self.num_model_layers\n\n @property\n def _enable_gradient_checkpointing_ratio(self) -> bool:\n return self.gradient_checkpointing_ratio is not None\n\n @property\n def _enable_customized_ckpt_layers_per_stage(self) -> bool:\n return self.num_ckpt_layers_per_stage is not None\n\n def get_num_ckpt_layers(self, stage: int, num_layers: int, model_chunk_id: int = 0) -> int:\n if not self._enable_gradient_checkpointing_ratio and not self._enable_customized_ckpt_layers_per_stage:\n raise RuntimeError(\"No checkpointed layers information is provided\")\n\n if self._enable_customized_ckpt_layers_per_stage:\n assert stage <= self.num_stages and model_chunk_id <= self.num_model_chunks\n num_ckpt_layers = self.num_ckpt_layers_per_stage[stage + model_chunk_id * self.num_stages]\n assert num_ckpt_layers <= num_layers\n return num_ckpt_layers\n else:\n return int(self.gradient_checkpointing_ratio * num_layers)\n", "path": "colossalai/shardformer/shard/grad_ckpt_config.py"}]}
| 3,315 | 735 |
gh_patches_debug_31975
|
rasdani/github-patches
|
git_diff
|
python-pillow__Pillow-535
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Some pcx images are displayed incorrectly
Hi everyone!
I'm trying to use pcx images extracted from an old game (Might and Magic VI; the game has a 2D UI over the 3D environment. This 2D UI is composed of various internal formats and pcx files.)
The problem is that one of these pcx files is displayed incorrectly by pillow, while being displayed correctly by the EyeOfGnome (gdk-pixbuf) image viewer.
Another pcx is displayed correctly by pillow!
What do I mean by incorrectly? The bad image has a kind of "red shift" of pixel colors: the left border is a bit red. It's like a "3D-ified" image; it's a bit blurry and the whole image is a bit darker.
I opened the pcx files with a hex editor, and the main difference I spotted in the pcx header is that the wrong one has the "48 byte palette" field completely empty, while the other one has some data in it. But it shouldn't be a problem, because other viewers display it correctly.
I simply use this code
img = Image.open(pcxfilename)
img.show()
this archive contains the pcx files https://db.tt/td291VTh
gdk-pixbuf decoding of pcx is a bit more complex than the pillow one:
gdk-pixbuf/io-pcx.c VS libImaging/PcxDecode.c
thanks for any help!

</issue>
<code>
[start of PIL/PcxImagePlugin.py]
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # PCX file handling
6 #
7 # This format was originally used by ZSoft's popular PaintBrush
8 # program for the IBM PC. It is also supported by many MS-DOS and
9 # Windows applications, including the Windows PaintBrush program in
10 # Windows 3.
11 #
12 # history:
13 # 1995-09-01 fl Created
14 # 1996-05-20 fl Fixed RGB support
15 # 1997-01-03 fl Fixed 2-bit and 4-bit support
16 # 1999-02-03 fl Fixed 8-bit support (broken in 1.0b1)
17 # 1999-02-07 fl Added write support
18 # 2002-06-09 fl Made 2-bit and 4-bit support a bit more robust
19 # 2002-07-30 fl Seek from to current position, not beginning of file
20 # 2003-06-03 fl Extract DPI settings (info["dpi"])
21 #
22 # Copyright (c) 1997-2003 by Secret Labs AB.
23 # Copyright (c) 1995-2003 by Fredrik Lundh.
24 #
25 # See the README file for information on usage and redistribution.
26 #
27
28 __version__ = "0.6"
29
30 from PIL import Image, ImageFile, ImagePalette, _binary
31
32 i8 = _binary.i8
33 i16 = _binary.i16le
34 o8 = _binary.o8
35
36 def _accept(prefix):
37 return i8(prefix[0]) == 10 and i8(prefix[1]) in [0, 2, 3, 5]
38
39 ##
40 # Image plugin for Paintbrush images.
41
42 class PcxImageFile(ImageFile.ImageFile):
43
44 format = "PCX"
45 format_description = "Paintbrush"
46
47 def _open(self):
48
49 # header
50 s = self.fp.read(128)
51 if not _accept(s):
52 raise SyntaxError("not a PCX file")
53
54 # image
55 bbox = i16(s,4), i16(s,6), i16(s,8)+1, i16(s,10)+1
56 if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]:
57 raise SyntaxError("bad PCX image size")
58
59 # format
60 version = i8(s[1])
61 bits = i8(s[3])
62 planes = i8(s[65])
63 stride = i16(s,66)
64
65 self.info["dpi"] = i16(s,12), i16(s,14)
66
67 if bits == 1 and planes == 1:
68 mode = rawmode = "1"
69
70 elif bits == 1 and planes in (2, 4):
71 mode = "P"
72 rawmode = "P;%dL" % planes
73 self.palette = ImagePalette.raw("RGB", s[16:64])
74
75 elif version == 5 and bits == 8 and planes == 1:
76 mode = rawmode = "L"
77 # FIXME: hey, this doesn't work with the incremental loader !!!
78 self.fp.seek(-769, 2)
79 s = self.fp.read(769)
80 if len(s) == 769 and i8(s[0]) == 12:
81 # check if the palette is linear greyscale
82 for i in range(256):
83 if s[i*3+1:i*3+4] != o8(i)*3:
84 mode = rawmode = "P"
85 break
86 if mode == "P":
87 self.palette = ImagePalette.raw("RGB", s[1:])
88 self.fp.seek(128)
89
90 elif version == 5 and bits == 8 and planes == 3:
91 mode = "RGB"
92 rawmode = "RGB;L"
93
94 else:
95 raise IOError("unknown PCX mode")
96
97 self.mode = mode
98 self.size = bbox[2]-bbox[0], bbox[3]-bbox[1]
99
100 bbox = (0, 0) + self.size
101
102 self.tile = [("pcx", bbox, self.fp.tell(), (rawmode, planes * stride))]
103
104 # --------------------------------------------------------------------
105 # save PCX files
106
107 SAVE = {
108 # mode: (version, bits, planes, raw mode)
109 "1": (2, 1, 1, "1"),
110 "L": (5, 8, 1, "L"),
111 "P": (5, 8, 1, "P"),
112 "RGB": (5, 8, 3, "RGB;L"),
113 }
114
115 o16 = _binary.o16le
116
117 def _save(im, fp, filename, check=0):
118
119 try:
120 version, bits, planes, rawmode = SAVE[im.mode]
121 except KeyError:
122 raise ValueError("Cannot save %s images as PCX" % im.mode)
123
124 if check:
125 return check
126
127 # bytes per plane
128 stride = (im.size[0] * bits + 7) // 8
129
130 # under windows, we could determine the current screen size with
131 # "Image.core.display_mode()[1]", but I think that's overkill...
132
133 screen = im.size
134
135 dpi = 100, 100
136
137 # PCX header
138 fp.write(
139 o8(10) + o8(version) + o8(1) + o8(bits) + o16(0) +
140 o16(0) + o16(im.size[0]-1) + o16(im.size[1]-1) + o16(dpi[0]) +
141 o16(dpi[1]) + b"\0"*24 + b"\xFF"*24 + b"\0" + o8(planes) +
142 o16(stride) + o16(1) + o16(screen[0]) + o16(screen[1]) +
143 b"\0"*54
144 )
145
146 assert fp.tell() == 128
147
148 ImageFile._save(im, fp, [("pcx", (0,0)+im.size, 0,
149 (rawmode, bits*planes))])
150
151 if im.mode == "P":
152 # colour palette
153 fp.write(o8(12))
154 fp.write(im.im.getpalette("RGB", "RGB")) # 768 bytes
155 elif im.mode == "L":
156 # greyscale palette
157 fp.write(o8(12))
158 for i in range(256):
159 fp.write(o8(i)*3)
160
161 # --------------------------------------------------------------------
162 # registry
163
164 Image.register_open("PCX", PcxImageFile, _accept)
165 Image.register_save("PCX", _save)
166
167 Image.register_extension("PCX", ".pcx")
168
[end of PIL/PcxImagePlugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/PIL/PcxImagePlugin.py b/PIL/PcxImagePlugin.py
--- a/PIL/PcxImagePlugin.py
+++ b/PIL/PcxImagePlugin.py
@@ -55,12 +55,18 @@
bbox = i16(s,4), i16(s,6), i16(s,8)+1, i16(s,10)+1
if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]:
raise SyntaxError("bad PCX image size")
+ if Image.DEBUG:
+ print ("BBox: %s %s %s %s" % bbox)
+
# format
version = i8(s[1])
bits = i8(s[3])
planes = i8(s[65])
stride = i16(s,66)
+ if Image.DEBUG:
+ print ("PCX version %s, bits %s, planes %s, stride %s" %
+ (version, bits, planes, stride))
self.info["dpi"] = i16(s,12), i16(s,14)
@@ -98,7 +104,9 @@
self.size = bbox[2]-bbox[0], bbox[3]-bbox[1]
bbox = (0, 0) + self.size
-
+ if Image.DEBUG:
+ print ("size: %sx%s" % self.size)
+
self.tile = [("pcx", bbox, self.fp.tell(), (rawmode, planes * stride))]
# --------------------------------------------------------------------
@@ -126,6 +134,16 @@
# bytes per plane
stride = (im.size[0] * bits + 7) // 8
+ # stride should be even
+ stride = stride + (stride % 2)
+ # Stride needs to be kept in sync with the PcxEncode.c version.
+ # Ideally it should be passed in in the state, but the bytes value
+ # gets overwritten.
+
+
+ if Image.DEBUG:
+ print ("PcxImagePlugin._save: xwidth: %d, bits: %d, stride: %d" % (
+ im.size[0], bits, stride))
# under windows, we could determine the current screen size with
# "Image.core.display_mode()[1]", but I think that's overkill...
|
{"golden_diff": "diff --git a/PIL/PcxImagePlugin.py b/PIL/PcxImagePlugin.py\n--- a/PIL/PcxImagePlugin.py\n+++ b/PIL/PcxImagePlugin.py\n@@ -55,12 +55,18 @@\n bbox = i16(s,4), i16(s,6), i16(s,8)+1, i16(s,10)+1\n if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]:\n raise SyntaxError(\"bad PCX image size\")\n+ if Image.DEBUG:\n+ print (\"BBox: %s %s %s %s\" % bbox)\n+\n \n # format\n version = i8(s[1])\n bits = i8(s[3])\n planes = i8(s[65])\n stride = i16(s,66)\n+ if Image.DEBUG:\n+ print (\"PCX version %s, bits %s, planes %s, stride %s\" %\n+ (version, bits, planes, stride))\n \n self.info[\"dpi\"] = i16(s,12), i16(s,14)\n \n@@ -98,7 +104,9 @@\n self.size = bbox[2]-bbox[0], bbox[3]-bbox[1]\n \n bbox = (0, 0) + self.size\n-\n+ if Image.DEBUG:\n+ print (\"size: %sx%s\" % self.size)\n+ \n self.tile = [(\"pcx\", bbox, self.fp.tell(), (rawmode, planes * stride))]\n \n # --------------------------------------------------------------------\n@@ -126,6 +134,16 @@\n \n # bytes per plane\n stride = (im.size[0] * bits + 7) // 8\n+ # stride should be even\n+ stride = stride + (stride % 2)\n+ # Stride needs to be kept in sync with the PcxEncode.c version.\n+ # Ideally it should be passed in in the state, but the bytes value\n+ # gets overwritten. \n+\n+\n+ if Image.DEBUG:\n+ print (\"PcxImagePlugin._save: xwidth: %d, bits: %d, stride: %d\" % (\n+ im.size[0], bits, stride))\n \n # under windows, we could determine the current screen size with\n # \"Image.core.display_mode()[1]\", but I think that's overkill...\n", "issue": "Some pcx images are displayed incorrectly\nHi everyone!\nI'm trying to use pcx images extracted from an old game (Might and Magic VI, the game has 2D UI over the 3D environment. This 2D UI is composed by various internal formats and pcx files.)\n\nThe problem is that one of these pcx is displayed incorrecly by pillow, while being displayed correctly by EyeOfGnome (gdk-pixbuf) image viewer.\nAnother pcx is displayed correctly by pillow!\n\nWhat do I mean by incorrectly? The bad image has a kind of \"red shift\" of pixels colors. the left border is a bit red. It' like a \"3D-ified\" image, it's a bit blurry and the whole is a bit darker.\n\nI opened the pcx files with an hex editor and the main difference I spotted in the pcx-header is that the wrong one has the \"48 byte palette\" field completely empty. while the other one has some data in it. But it shoudn't be a problem because other viewers display it correctly.\n\nI simply use this code\nimg = Image.open(pcxfilename)\nimg.show()\n\nthis archive contains the pcx files https://db.tt/td291VTh\n\ngdk-pixbuf decoding of pcx is a bit more complex than the pillow one:\ngdk-pixbuf/io-pcx.c VS libImaging/PcxDecode.c\n\nthanks for any help!\n\n\n\nSome pcx images are displayed incorrectly\nHi everyone!\nI'm trying to use pcx images extracted from an old game (Might and Magic VI, the game has 2D UI over the 3D environment. This 2D UI is composed by various internal formats and pcx files.)\n\nThe problem is that one of these pcx is displayed incorrecly by pillow, while being displayed correctly by EyeOfGnome (gdk-pixbuf) image viewer.\nAnother pcx is displayed correctly by pillow!\n\nWhat do I mean by incorrectly? The bad image has a kind of \"red shift\" of pixels colors. the left border is a bit red. It' like a \"3D-ified\" image, it's a bit blurry and the whole is a bit darker.\n\nI opened the pcx files with an hex editor and the main difference I spotted in the pcx-header is that the wrong one has the \"48 byte palette\" field completely empty. 
while the other one has some data in it. But it shoudn't be a problem because other viewers display it correctly.\n\nI simply use this code\nimg = Image.open(pcxfilename)\nimg.show()\n\nthis archive contains the pcx files https://db.tt/td291VTh\n\ngdk-pixbuf decoding of pcx is a bit more complex than the pillow one:\ngdk-pixbuf/io-pcx.c VS libImaging/PcxDecode.c\n\nthanks for any help!\n\n\n\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# PCX file handling\n#\n# This format was originally used by ZSoft's popular PaintBrush\n# program for the IBM PC. It is also supported by many MS-DOS and\n# Windows applications, including the Windows PaintBrush program in\n# Windows 3.\n#\n# history:\n# 1995-09-01 fl Created\n# 1996-05-20 fl Fixed RGB support\n# 1997-01-03 fl Fixed 2-bit and 4-bit support\n# 1999-02-03 fl Fixed 8-bit support (broken in 1.0b1)\n# 1999-02-07 fl Added write support\n# 2002-06-09 fl Made 2-bit and 4-bit support a bit more robust\n# 2002-07-30 fl Seek from to current position, not beginning of file\n# 2003-06-03 fl Extract DPI settings (info[\"dpi\"])\n#\n# Copyright (c) 1997-2003 by Secret Labs AB.\n# Copyright (c) 1995-2003 by Fredrik Lundh.\n#\n# See the README file for information on usage and redistribution.\n#\n\n__version__ = \"0.6\"\n\nfrom PIL import Image, ImageFile, ImagePalette, _binary\n\ni8 = _binary.i8\ni16 = _binary.i16le\no8 = _binary.o8\n\ndef _accept(prefix):\n return i8(prefix[0]) == 10 and i8(prefix[1]) in [0, 2, 3, 5]\n\n##\n# Image plugin for Paintbrush images.\n\nclass PcxImageFile(ImageFile.ImageFile):\n\n format = \"PCX\"\n format_description = \"Paintbrush\"\n\n def _open(self):\n\n # header\n s = self.fp.read(128)\n if not _accept(s):\n raise SyntaxError(\"not a PCX file\")\n\n # image\n bbox = i16(s,4), i16(s,6), i16(s,8)+1, i16(s,10)+1\n if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]:\n raise SyntaxError(\"bad PCX image size\")\n\n # format\n version = i8(s[1])\n bits = i8(s[3])\n planes = i8(s[65])\n stride = i16(s,66)\n\n self.info[\"dpi\"] = i16(s,12), i16(s,14)\n\n if bits == 1 and planes == 1:\n mode = rawmode = \"1\"\n\n elif bits == 1 and planes in (2, 4):\n mode = \"P\"\n rawmode = \"P;%dL\" % planes\n self.palette = ImagePalette.raw(\"RGB\", s[16:64])\n\n elif version == 5 and bits == 8 and planes == 1:\n mode = rawmode = \"L\"\n # FIXME: hey, this doesn't work with the incremental loader !!!\n self.fp.seek(-769, 2)\n s = self.fp.read(769)\n if len(s) == 769 and i8(s[0]) == 12:\n # check if the palette is linear greyscale\n for i in range(256):\n if s[i*3+1:i*3+4] != o8(i)*3:\n mode = rawmode = \"P\"\n break\n if mode == \"P\":\n self.palette = ImagePalette.raw(\"RGB\", s[1:])\n self.fp.seek(128)\n\n elif version == 5 and bits == 8 and planes == 3:\n mode = \"RGB\"\n rawmode = \"RGB;L\"\n\n else:\n raise IOError(\"unknown PCX mode\")\n\n self.mode = mode\n self.size = bbox[2]-bbox[0], bbox[3]-bbox[1]\n\n bbox = (0, 0) + self.size\n\n self.tile = [(\"pcx\", bbox, self.fp.tell(), (rawmode, planes * stride))]\n\n# --------------------------------------------------------------------\n# save PCX files\n\nSAVE = {\n # mode: (version, bits, planes, raw mode)\n \"1\": (2, 1, 1, \"1\"),\n \"L\": (5, 8, 1, \"L\"),\n \"P\": (5, 8, 1, \"P\"),\n \"RGB\": (5, 8, 3, \"RGB;L\"),\n}\n\no16 = _binary.o16le\n\ndef _save(im, fp, filename, check=0):\n\n try:\n version, bits, planes, rawmode = SAVE[im.mode]\n except KeyError:\n raise ValueError(\"Cannot save %s images as PCX\" % im.mode)\n\n if check:\n return check\n\n # bytes 
per plane\n stride = (im.size[0] * bits + 7) // 8\n\n # under windows, we could determine the current screen size with\n # \"Image.core.display_mode()[1]\", but I think that's overkill...\n\n screen = im.size\n\n dpi = 100, 100\n\n # PCX header\n fp.write(\n o8(10) + o8(version) + o8(1) + o8(bits) + o16(0) +\n o16(0) + o16(im.size[0]-1) + o16(im.size[1]-1) + o16(dpi[0]) +\n o16(dpi[1]) + b\"\\0\"*24 + b\"\\xFF\"*24 + b\"\\0\" + o8(planes) +\n o16(stride) + o16(1) + o16(screen[0]) + o16(screen[1]) +\n b\"\\0\"*54\n )\n\n assert fp.tell() == 128\n\n ImageFile._save(im, fp, [(\"pcx\", (0,0)+im.size, 0,\n (rawmode, bits*planes))])\n\n if im.mode == \"P\":\n # colour palette\n fp.write(o8(12))\n fp.write(im.im.getpalette(\"RGB\", \"RGB\")) # 768 bytes\n elif im.mode == \"L\":\n # greyscale palette\n fp.write(o8(12))\n for i in range(256):\n fp.write(o8(i)*3)\n\n# --------------------------------------------------------------------\n# registry\n\nImage.register_open(\"PCX\", PcxImageFile, _accept)\nImage.register_save(\"PCX\", _save)\n\nImage.register_extension(\"PCX\", \".pcx\")\n", "path": "PIL/PcxImagePlugin.py"}]}
| 3,221 | 539 |
gh_patches_debug_32429
|
rasdani/github-patches
|
git_diff
|
aio-libs__aiohttp-1117
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
access log format is invalid when using gunicorn worker
It seems you have to pass in `--access-logformat='%a %l %u %t "%r" %s %b "%{Referrer}i" "%{User-Agent}i"'` to gunicorn for logging to work; the default format from gunicorn is `"%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"`, which doesn't work with aiohttp.
</issue>
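To make the mismatch fail loudly instead of silently producing broken logs, the configured format can be validated before it reaches aiohttp: map gunicorn's default onto an aiohttp-style format and reject anything that still contains gunicorn's `%(name)s` directives. The sketch below only illustrates that idea; both format strings are the ones quoted above, and `to_aiohttp_log_format` is a hypothetical helper rather than part of either library.

```python
import re

# aiohttp-style format quoted in the issue as one that works.
AIOHTTP_STYLE_FORMAT = '%a %l %u %t "%r" %s %b "%{Referrer}i" "%{User-Agent}i"'
# Gunicorn's default access_log_format, also quoted in the issue.
GUNICORN_DEFAULT = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'

_GUNICORN_DIRECTIVE = re.compile(r"%\([^)]+\)")

def to_aiohttp_log_format(source_format: str) -> str:
    """Translate gunicorn's default, and reject other gunicorn-style formats."""
    if source_format == GUNICORN_DEFAULT:
        return AIOHTTP_STYLE_FORMAT
    if _GUNICORN_DIRECTIVE.search(source_format):
        raise ValueError(
            "gunicorn-style %(name)s directives are not understood by aiohttp; "
            "use aiohttp's access log format specification instead"
        )
    return source_format

if __name__ == "__main__":
    print(to_aiohttp_log_format(GUNICORN_DEFAULT))
```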
<code>
[start of aiohttp/worker.py]
1 """Async gunicorn worker for aiohttp.web"""
2
3 import asyncio
4 import os
5 import signal
6 import ssl
7 import sys
8
9 import gunicorn.workers.base as base
10
11 from aiohttp.helpers import ensure_future
12
13 __all__ = ('GunicornWebWorker', 'GunicornUVLoopWebWorker')
14
15
16 class GunicornWebWorker(base.Worker):
17
18 def __init__(self, *args, **kw): # pragma: no cover
19 super().__init__(*args, **kw)
20
21 self.servers = {}
22 self.exit_code = 0
23
24 def init_process(self):
25 # create new event_loop after fork
26 asyncio.get_event_loop().close()
27
28 self.loop = asyncio.new_event_loop()
29 asyncio.set_event_loop(self.loop)
30
31 super().init_process()
32
33 def run(self):
34 self.loop.run_until_complete(self.wsgi.startup())
35 self._runner = ensure_future(self._run(), loop=self.loop)
36
37 try:
38 self.loop.run_until_complete(self._runner)
39 finally:
40 self.loop.close()
41
42 sys.exit(self.exit_code)
43
44 def make_handler(self, app):
45 return app.make_handler(
46 logger=self.log,
47 debug=self.cfg.debug,
48 timeout=self.cfg.timeout,
49 keep_alive=self.cfg.keepalive,
50 access_log=self.log.access_log,
51 access_log_format=self.cfg.access_log_format)
52
53 @asyncio.coroutine
54 def close(self):
55 if self.servers:
56 servers = self.servers
57 self.servers = None
58
59 # stop accepting connections
60 for server, handler in servers.items():
61 self.log.info("Stopping server: %s, connections: %s",
62 self.pid, len(handler.connections))
63 server.close()
64 yield from server.wait_closed()
65
66 # send on_shutdown event
67 yield from self.wsgi.shutdown()
68
69 # stop alive connections
70 tasks = [
71 handler.finish_connections(
72 timeout=self.cfg.graceful_timeout / 100 * 95)
73 for handler in servers.values()]
74 yield from asyncio.gather(*tasks, loop=self.loop)
75
76 # cleanup application
77 yield from self.wsgi.cleanup()
78
79 @asyncio.coroutine
80 def _run(self):
81
82 ctx = self._create_ssl_context(self.cfg) if self.cfg.is_ssl else None
83
84 for sock in self.sockets:
85 handler = self.make_handler(self.wsgi)
86 srv = yield from self.loop.create_server(handler, sock=sock.sock,
87 ssl=ctx)
88 self.servers[srv] = handler
89
90 # If our parent changed then we shut down.
91 pid = os.getpid()
92 try:
93 while self.alive:
94 self.notify()
95
96 cnt = sum(handler.requests_count
97 for handler in self.servers.values())
98 if self.cfg.max_requests and cnt > self.cfg.max_requests:
99 self.alive = False
100 self.log.info("Max requests, shutting down: %s", self)
101
102 elif pid == os.getpid() and self.ppid != os.getppid():
103 self.alive = False
104 self.log.info("Parent changed, shutting down: %s", self)
105 else:
106 yield from asyncio.sleep(1.0, loop=self.loop)
107
108 except BaseException:
109 pass
110
111 yield from self.close()
112
113 def init_signals(self):
114 # Set up signals through the event loop API.
115
116 self.loop.add_signal_handler(signal.SIGQUIT, self.handle_quit,
117 signal.SIGQUIT, None)
118
119 self.loop.add_signal_handler(signal.SIGTERM, self.handle_exit,
120 signal.SIGTERM, None)
121
122 self.loop.add_signal_handler(signal.SIGINT, self.handle_quit,
123 signal.SIGINT, None)
124
125 self.loop.add_signal_handler(signal.SIGWINCH, self.handle_winch,
126 signal.SIGWINCH, None)
127
128 self.loop.add_signal_handler(signal.SIGUSR1, self.handle_usr1,
129 signal.SIGUSR1, None)
130
131 self.loop.add_signal_handler(signal.SIGABRT, self.handle_abort,
132 signal.SIGABRT, None)
133
134 # Don't let SIGTERM and SIGUSR1 disturb active requests
135 # by interrupting system calls
136 signal.siginterrupt(signal.SIGTERM, False)
137 signal.siginterrupt(signal.SIGUSR1, False)
138
139 def handle_quit(self, sig, frame):
140 self.alive = False
141
142 def handle_abort(self, sig, frame):
143 self.alive = False
144 self.exit_code = 1
145
146 @staticmethod
147 def _create_ssl_context(cfg):
148 """ Creates SSLContext instance for usage in asyncio.create_server.
149
150 See ssl.SSLSocket.__init__ for more details.
151 """
152 ctx = ssl.SSLContext(cfg.ssl_version)
153 ctx.load_cert_chain(cfg.certfile, cfg.keyfile)
154 ctx.verify_mode = cfg.cert_reqs
155 if cfg.ca_certs:
156 ctx.load_verify_locations(cfg.ca_certs)
157 if cfg.ciphers:
158 ctx.set_ciphers(cfg.ciphers)
159 return ctx
160
161
162 class GunicornUVLoopWebWorker(GunicornWebWorker):
163
164 def init_process(self):
165 import uvloop
166
167 # Close any existing event loop before setting a
168 # new policy.
169 asyncio.get_event_loop().close()
170
171 # Setup uvloop policy, so that every
172 # asyncio.get_event_loop() will create an instance
173 # of uvloop event loop.
174 asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
175
176 super().init_process()
177
[end of aiohttp/worker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/aiohttp/worker.py b/aiohttp/worker.py
--- a/aiohttp/worker.py
+++ b/aiohttp/worker.py
@@ -2,19 +2,24 @@
import asyncio
import os
+import re
import signal
import ssl
import sys
import gunicorn.workers.base as base
-from aiohttp.helpers import ensure_future
+from gunicorn.config import AccessLogFormat as GunicornAccessLogFormat
+from aiohttp.helpers import AccessLogger, ensure_future
__all__ = ('GunicornWebWorker', 'GunicornUVLoopWebWorker')
class GunicornWebWorker(base.Worker):
+ DEFAULT_AIOHTTP_LOG_FORMAT = AccessLogger.LOG_FORMAT
+ DEFAULT_GUNICORN_LOG_FORMAT = GunicornAccessLogFormat.default
+
def __init__(self, *args, **kw): # pragma: no cover
super().__init__(*args, **kw)
@@ -48,7 +53,8 @@
timeout=self.cfg.timeout,
keep_alive=self.cfg.keepalive,
access_log=self.log.access_log,
- access_log_format=self.cfg.access_log_format)
+ access_log_format=self._get_valid_log_format(
+ self.cfg.access_log_format))
@asyncio.coroutine
def close(self):
@@ -158,6 +164,20 @@
ctx.set_ciphers(cfg.ciphers)
return ctx
+ def _get_valid_log_format(self, source_format):
+ if source_format == self.DEFAULT_GUNICORN_LOG_FORMAT:
+ return self.DEFAULT_AIOHTTP_LOG_FORMAT
+ elif re.search(r'%\([^\)]+\)', source_format):
+ raise ValueError(
+ "Gunicorn's style options in form of `%(name)s` are not "
+ "supported for the log formatting. Please use aiohttp's "
+ "format specification to configure access log formatting: "
+ "http://aiohttp.readthedocs.io/en/stable/logging.html"
+ "#format-specification"
+ )
+ else:
+ return source_format
+
class GunicornUVLoopWebWorker(GunicornWebWorker):
|
{"golden_diff": "diff --git a/aiohttp/worker.py b/aiohttp/worker.py\n--- a/aiohttp/worker.py\n+++ b/aiohttp/worker.py\n@@ -2,19 +2,24 @@\n \n import asyncio\n import os\n+import re\n import signal\n import ssl\n import sys\n \n import gunicorn.workers.base as base\n \n-from aiohttp.helpers import ensure_future\n+from gunicorn.config import AccessLogFormat as GunicornAccessLogFormat\n+from aiohttp.helpers import AccessLogger, ensure_future\n \n __all__ = ('GunicornWebWorker', 'GunicornUVLoopWebWorker')\n \n \n class GunicornWebWorker(base.Worker):\n \n+ DEFAULT_AIOHTTP_LOG_FORMAT = AccessLogger.LOG_FORMAT\n+ DEFAULT_GUNICORN_LOG_FORMAT = GunicornAccessLogFormat.default\n+\n def __init__(self, *args, **kw): # pragma: no cover\n super().__init__(*args, **kw)\n \n@@ -48,7 +53,8 @@\n timeout=self.cfg.timeout,\n keep_alive=self.cfg.keepalive,\n access_log=self.log.access_log,\n- access_log_format=self.cfg.access_log_format)\n+ access_log_format=self._get_valid_log_format(\n+ self.cfg.access_log_format))\n \n @asyncio.coroutine\n def close(self):\n@@ -158,6 +164,20 @@\n ctx.set_ciphers(cfg.ciphers)\n return ctx\n \n+ def _get_valid_log_format(self, source_format):\n+ if source_format == self.DEFAULT_GUNICORN_LOG_FORMAT:\n+ return self.DEFAULT_AIOHTTP_LOG_FORMAT\n+ elif re.search(r'%\\([^\\)]+\\)', source_format):\n+ raise ValueError(\n+ \"Gunicorn's style options in form of `%(name)s` are not \"\n+ \"supported for the log formatting. Please use aiohttp's \"\n+ \"format specification to configure access log formatting: \"\n+ \"http://aiohttp.readthedocs.io/en/stable/logging.html\"\n+ \"#format-specification\"\n+ )\n+ else:\n+ return source_format\n+\n \n class GunicornUVLoopWebWorker(GunicornWebWorker):\n", "issue": "access log format is invalid when using gunicorn worker\nIt seems you have to pass in `--access-logformat='%a %l %u %t \"%r\" %s %b \"%{Referrer}i\" \"%{User-Agent}i\"'` to gunicorn for logging to work, they default format from gunicorn is `\"%(h)s %(l)s %(u)s %(t)s \"%(r)s\" %(s)s %(b)s \"%(f)s\" \"%(a)s\"` which doesn't work with aiohttp\n\n", "before_files": [{"content": "\"\"\"Async gunicorn worker for aiohttp.web\"\"\"\n\nimport asyncio\nimport os\nimport signal\nimport ssl\nimport sys\n\nimport gunicorn.workers.base as base\n\nfrom aiohttp.helpers import ensure_future\n\n__all__ = ('GunicornWebWorker', 'GunicornUVLoopWebWorker')\n\n\nclass GunicornWebWorker(base.Worker):\n\n def __init__(self, *args, **kw): # pragma: no cover\n super().__init__(*args, **kw)\n\n self.servers = {}\n self.exit_code = 0\n\n def init_process(self):\n # create new event_loop after fork\n asyncio.get_event_loop().close()\n\n self.loop = asyncio.new_event_loop()\n asyncio.set_event_loop(self.loop)\n\n super().init_process()\n\n def run(self):\n self.loop.run_until_complete(self.wsgi.startup())\n self._runner = ensure_future(self._run(), loop=self.loop)\n\n try:\n self.loop.run_until_complete(self._runner)\n finally:\n self.loop.close()\n\n sys.exit(self.exit_code)\n\n def make_handler(self, app):\n return app.make_handler(\n logger=self.log,\n debug=self.cfg.debug,\n timeout=self.cfg.timeout,\n keep_alive=self.cfg.keepalive,\n access_log=self.log.access_log,\n access_log_format=self.cfg.access_log_format)\n\n @asyncio.coroutine\n def close(self):\n if self.servers:\n servers = self.servers\n self.servers = None\n\n # stop accepting connections\n for server, handler in servers.items():\n self.log.info(\"Stopping server: %s, connections: %s\",\n self.pid, len(handler.connections))\n server.close()\n yield 
from server.wait_closed()\n\n # send on_shutdown event\n yield from self.wsgi.shutdown()\n\n # stop alive connections\n tasks = [\n handler.finish_connections(\n timeout=self.cfg.graceful_timeout / 100 * 95)\n for handler in servers.values()]\n yield from asyncio.gather(*tasks, loop=self.loop)\n\n # cleanup application\n yield from self.wsgi.cleanup()\n\n @asyncio.coroutine\n def _run(self):\n\n ctx = self._create_ssl_context(self.cfg) if self.cfg.is_ssl else None\n\n for sock in self.sockets:\n handler = self.make_handler(self.wsgi)\n srv = yield from self.loop.create_server(handler, sock=sock.sock,\n ssl=ctx)\n self.servers[srv] = handler\n\n # If our parent changed then we shut down.\n pid = os.getpid()\n try:\n while self.alive:\n self.notify()\n\n cnt = sum(handler.requests_count\n for handler in self.servers.values())\n if self.cfg.max_requests and cnt > self.cfg.max_requests:\n self.alive = False\n self.log.info(\"Max requests, shutting down: %s\", self)\n\n elif pid == os.getpid() and self.ppid != os.getppid():\n self.alive = False\n self.log.info(\"Parent changed, shutting down: %s\", self)\n else:\n yield from asyncio.sleep(1.0, loop=self.loop)\n\n except BaseException:\n pass\n\n yield from self.close()\n\n def init_signals(self):\n # Set up signals through the event loop API.\n\n self.loop.add_signal_handler(signal.SIGQUIT, self.handle_quit,\n signal.SIGQUIT, None)\n\n self.loop.add_signal_handler(signal.SIGTERM, self.handle_exit,\n signal.SIGTERM, None)\n\n self.loop.add_signal_handler(signal.SIGINT, self.handle_quit,\n signal.SIGINT, None)\n\n self.loop.add_signal_handler(signal.SIGWINCH, self.handle_winch,\n signal.SIGWINCH, None)\n\n self.loop.add_signal_handler(signal.SIGUSR1, self.handle_usr1,\n signal.SIGUSR1, None)\n\n self.loop.add_signal_handler(signal.SIGABRT, self.handle_abort,\n signal.SIGABRT, None)\n\n # Don't let SIGTERM and SIGUSR1 disturb active requests\n # by interrupting system calls\n signal.siginterrupt(signal.SIGTERM, False)\n signal.siginterrupt(signal.SIGUSR1, False)\n\n def handle_quit(self, sig, frame):\n self.alive = False\n\n def handle_abort(self, sig, frame):\n self.alive = False\n self.exit_code = 1\n\n @staticmethod\n def _create_ssl_context(cfg):\n \"\"\" Creates SSLContext instance for usage in asyncio.create_server.\n\n See ssl.SSLSocket.__init__ for more details.\n \"\"\"\n ctx = ssl.SSLContext(cfg.ssl_version)\n ctx.load_cert_chain(cfg.certfile, cfg.keyfile)\n ctx.verify_mode = cfg.cert_reqs\n if cfg.ca_certs:\n ctx.load_verify_locations(cfg.ca_certs)\n if cfg.ciphers:\n ctx.set_ciphers(cfg.ciphers)\n return ctx\n\n\nclass GunicornUVLoopWebWorker(GunicornWebWorker):\n\n def init_process(self):\n import uvloop\n\n # Close any existing event loop before setting a\n # new policy.\n asyncio.get_event_loop().close()\n\n # Setup uvloop policy, so that every\n # asyncio.get_event_loop() will create an instance\n # of uvloop event loop.\n asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())\n\n super().init_process()\n", "path": "aiohttp/worker.py"}]}
| 2,230 | 469 |
gh_patches_debug_4839
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-261
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Half installed AioHttpIntegration causes aiohttp to crash
If I call:
```python
sentry_sdk.integrations.setup_integrations(
[sentry_sdk.integrations.aiohttp.AioHttpIntegration()])
```
after `sentry_sdk.init()`, the `_handle` method of `aiohttp.web.Application` gets replaced, but the integration does not get registered in the client. This causes the replaced `_handle` to run into a code path where an `await` is missing. This gives an exception on every request:
```
ERROR:aiohttp.server:Unhandled exception
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/aiohttp/web_protocol.py", line 447, in start
await resp.prepare(request)
AttributeError: 'coroutine' object has no attribute 'prepare'
/usr/local/lib/python3.7/site-packages/xxx/base.py:151: RuntimeWarning: coroutine 'Application._handle' was never awaited
self._loop.run_forever()
```
This will not get logged to sentry at all, because the `aiohttp.server` logger gets ignored by (half-)installing the integration (see #259).
</issue>
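The traceback comes from ordinary coroutine semantics rather than anything aiohttp-specific: when the wrapper returns `old_handle(...)` without awaiting it, the caller receives a coroutine object instead of a response, and the later `resp.prepare(request)` call fails. The self-contained sketch below reproduces that difference with a stand-in handler; `old_handle` here is a plain coroutine and not aiohttp's real `Application._handle`.

```python
import asyncio

async def old_handle(request):
    """Stand-in for the original handler that produces a response object."""
    return f"response for {request}"

async def broken_wrapper(request):
    # Bug: returns the un-awaited coroutine object, as in the issue.
    return old_handle(request)

async def fixed_wrapper(request):
    # Fix: awaiting yields the actual response.
    return await old_handle(request)

async def main():
    broken = await broken_wrapper("req")
    fixed = await fixed_wrapper("req")
    print(type(broken))  # <class 'coroutine'>, so .prepare() would fail on it
    print(type(fixed))   # <class 'str'>
    broken.close()       # avoid the "coroutine was never awaited" warning

asyncio.run(main())
```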
<code>
[start of sentry_sdk/integrations/aiohttp.py]
1 import sys
2 import weakref
3
4 from sentry_sdk._compat import reraise
5 from sentry_sdk.hub import Hub
6 from sentry_sdk.integrations import Integration
7 from sentry_sdk.integrations.logging import ignore_logger
8 from sentry_sdk.integrations._wsgi_common import _filter_headers
9 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
10
11 import asyncio
12 from aiohttp.web import Application, HTTPException
13
14
15 class AioHttpIntegration(Integration):
16 identifier = "aiohttp"
17
18 @staticmethod
19 def setup_once():
20 if sys.version_info < (3, 7):
21 # We better have contextvars or we're going to leak state between
22 # requests.
23 raise RuntimeError(
24 "The aiohttp integration for Sentry requires Python 3.7+"
25 )
26
27 ignore_logger("aiohttp.server")
28
29 old_handle = Application._handle
30
31 async def sentry_app_handle(self, request, *args, **kwargs):
32 async def inner():
33 hub = Hub.current
34 if hub.get_integration(AioHttpIntegration) is None:
35 return old_handle(self, request, *args, **kwargs)
36
37 weak_request = weakref.ref(request)
38
39 with Hub(Hub.current) as hub:
40 with hub.configure_scope() as scope:
41 scope.add_event_processor(_make_request_processor(weak_request))
42
43 try:
44 response = await old_handle(self, request)
45 except HTTPException:
46 raise
47 except Exception:
48 reraise(*_capture_exception(hub))
49
50 return response
51
52 return await asyncio.create_task(inner())
53
54 Application._handle = sentry_app_handle
55
56
57 def _make_request_processor(weak_request):
58 def aiohttp_processor(event, hint):
59 request = weak_request()
60 if request is None:
61 return event
62
63 with capture_internal_exceptions():
64 # TODO: Figure out what to do with request body. Methods on request
65 # are async, but event processors are not.
66
67 request_info = event.setdefault("request", {})
68
69 request_info["url"] = "%s://%s%s" % (
70 request.scheme,
71 request.host,
72 request.path,
73 )
74
75 request_info["query_string"] = request.query_string
76 request_info["method"] = request.method
77 request_info["env"] = {"REMOTE_ADDR": request.remote}
78 request_info["headers"] = _filter_headers(dict(request.headers))
79
80 return event
81
82 return aiohttp_processor
83
84
85 def _capture_exception(hub):
86 exc_info = sys.exc_info()
87 event, hint = event_from_exception(
88 exc_info,
89 client_options=hub.client.options,
90 mechanism={"type": "aiohttp", "handled": False},
91 )
92 hub.capture_event(event, hint=hint)
93 return exc_info
94
[end of sentry_sdk/integrations/aiohttp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/aiohttp.py b/sentry_sdk/integrations/aiohttp.py
--- a/sentry_sdk/integrations/aiohttp.py
+++ b/sentry_sdk/integrations/aiohttp.py
@@ -32,7 +32,7 @@
async def inner():
hub = Hub.current
if hub.get_integration(AioHttpIntegration) is None:
- return old_handle(self, request, *args, **kwargs)
+ return await old_handle(self, request, *args, **kwargs)
weak_request = weakref.ref(request)
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/aiohttp.py b/sentry_sdk/integrations/aiohttp.py\n--- a/sentry_sdk/integrations/aiohttp.py\n+++ b/sentry_sdk/integrations/aiohttp.py\n@@ -32,7 +32,7 @@\n async def inner():\n hub = Hub.current\n if hub.get_integration(AioHttpIntegration) is None:\n- return old_handle(self, request, *args, **kwargs)\n+ return await old_handle(self, request, *args, **kwargs)\n \n weak_request = weakref.ref(request)\n", "issue": "Half installed AioHttpIntegration causes aiohttp to crash\nIf I call:\r\n```python\r\nsentry_sdk.integrations.setup_integrations(\r\n [sentry_sdk.integrations.aiohttp.AioHttpIntegration()])\r\n```\r\nafter `sentry_sdk.init()` the `_handle` method of `aiohttp.web.Application` gets replaced but the integration does not get registered in the client. This causes the replaced `_handle` ro run into a codepath where there as a `await` missing. This gives an exception in every request:\r\n```\r\nERROR:aiohttp.server:Unhandled exception \r\nTraceback (most recent call last): \r\n File \"/usr/local/lib/python3.7/site-packages/aiohttp/web_protocol.py\", line 447, in start \r\n await resp.prepare(request) \r\nAttributeError: 'coroutine' object has no attribute 'prepare' \r\n/usr/local/lib/python3.7/site-packages/xxx/base.py:151: RuntimeWarning: coroutine 'Application._handle' was never awaited \r\n self._loop.run_forever() \r\n```\r\n\r\nThis will not get logged to sentry at all, because the `aiohttp.server` logger gets ignored by (half-)installing the integration (see #259).\n", "before_files": [{"content": "import sys\nimport weakref\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\nfrom sentry_sdk.integrations._wsgi_common import _filter_headers\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\n\nimport asyncio\nfrom aiohttp.web import Application, HTTPException\n\n\nclass AioHttpIntegration(Integration):\n identifier = \"aiohttp\"\n\n @staticmethod\n def setup_once():\n if sys.version_info < (3, 7):\n # We better have contextvars or we're going to leak state between\n # requests.\n raise RuntimeError(\n \"The aiohttp integration for Sentry requires Python 3.7+\"\n )\n\n ignore_logger(\"aiohttp.server\")\n\n old_handle = Application._handle\n\n async def sentry_app_handle(self, request, *args, **kwargs):\n async def inner():\n hub = Hub.current\n if hub.get_integration(AioHttpIntegration) is None:\n return old_handle(self, request, *args, **kwargs)\n\n weak_request = weakref.ref(request)\n\n with Hub(Hub.current) as hub:\n with hub.configure_scope() as scope:\n scope.add_event_processor(_make_request_processor(weak_request))\n\n try:\n response = await old_handle(self, request)\n except HTTPException:\n raise\n except Exception:\n reraise(*_capture_exception(hub))\n\n return response\n\n return await asyncio.create_task(inner())\n\n Application._handle = sentry_app_handle\n\n\ndef _make_request_processor(weak_request):\n def aiohttp_processor(event, hint):\n request = weak_request()\n if request is None:\n return event\n\n with capture_internal_exceptions():\n # TODO: Figure out what to do with request body. 
Methods on request\n # are async, but event processors are not.\n\n request_info = event.setdefault(\"request\", {})\n\n request_info[\"url\"] = \"%s://%s%s\" % (\n request.scheme,\n request.host,\n request.path,\n )\n\n request_info[\"query_string\"] = request.query_string\n request_info[\"method\"] = request.method\n request_info[\"env\"] = {\"REMOTE_ADDR\": request.remote}\n request_info[\"headers\"] = _filter_headers(dict(request.headers))\n\n return event\n\n return aiohttp_processor\n\n\ndef _capture_exception(hub):\n exc_info = sys.exc_info()\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"aiohttp\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n return exc_info\n", "path": "sentry_sdk/integrations/aiohttp.py"}]}
| 1,598 | 129 |
gh_patches_debug_744
|
rasdani/github-patches
|
git_diff
|
LMFDB__lmfdb-5795
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Half integral weight page visible on prod
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/half/ should redirect to beta, but it doesn't since the whitelist thinks it's inside CMFs.
</issue>
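The collision is easy to picture with a naive prefix check: the half-integral-weight page nests under the CMF prefix, so any `startswith`-style whitelist treats it as a CMF route and never redirects it. The snippet below is a hypothetical illustration only; the prefixes are taken from the URLs involved, and the real whitelist logic in the LMFDB app is not shown here.

```python
# Hypothetical illustration of a prefix-based whitelist check.
CMF_PREFIX = "/ModularForm/GL2/Q/holomorphic/"

def looks_like_cmf(path: str) -> bool:
    return path.startswith(CMF_PREFIX)

# The current URL nests under the CMF prefix, so it is treated as a CMF page:
print(looks_like_cmf("/ModularForm/GL2/Q/holomorphic/half/"))   # True
# Moving the blueprint to its own prefix avoids the collision:
print(looks_like_cmf("/ModularForm/GL2/Q/holomorphic_half/"))   # False
```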
<code>
[start of lmfdb/half_integral_weight_forms/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 from lmfdb.app import app
4 from lmfdb.logger import make_logger
5 from flask import Blueprint
6
7 hiwf_page = Blueprint("hiwf", __name__, template_folder='templates', static_folder="static")
8 hiwf_logger = make_logger(hiwf_page)
9
10
11 @hiwf_page.context_processor
12 def body_class():
13 return {'body_class': 'hiwf'}
14
15 from . import half_integral_form
16 assert half_integral_form
17
18 app.register_blueprint(hiwf_page, url_prefix="/ModularForm/GL2/Q/holomorphic/half")
19
[end of lmfdb/half_integral_weight_forms/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lmfdb/half_integral_weight_forms/__init__.py b/lmfdb/half_integral_weight_forms/__init__.py
--- a/lmfdb/half_integral_weight_forms/__init__.py
+++ b/lmfdb/half_integral_weight_forms/__init__.py
@@ -15,4 +15,4 @@
from . import half_integral_form
assert half_integral_form
-app.register_blueprint(hiwf_page, url_prefix="/ModularForm/GL2/Q/holomorphic/half")
+app.register_blueprint(hiwf_page, url_prefix="/ModularForm/GL2/Q/holomorphic_half")
|
{"golden_diff": "diff --git a/lmfdb/half_integral_weight_forms/__init__.py b/lmfdb/half_integral_weight_forms/__init__.py\n--- a/lmfdb/half_integral_weight_forms/__init__.py\n+++ b/lmfdb/half_integral_weight_forms/__init__.py\n@@ -15,4 +15,4 @@\n from . import half_integral_form\n assert half_integral_form\n \n-app.register_blueprint(hiwf_page, url_prefix=\"/ModularForm/GL2/Q/holomorphic/half\")\n+app.register_blueprint(hiwf_page, url_prefix=\"/ModularForm/GL2/Q/holomorphic_half\")\n", "issue": "Half integeral weight page visible on prod\nhttps://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/half/ should redirect to beta, but it doesn't since the whitelist thinks it's inside CMFs.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom lmfdb.app import app\nfrom lmfdb.logger import make_logger\nfrom flask import Blueprint\n\nhiwf_page = Blueprint(\"hiwf\", __name__, template_folder='templates', static_folder=\"static\")\nhiwf_logger = make_logger(hiwf_page)\n\n\n@hiwf_page.context_processor\ndef body_class():\n return {'body_class': 'hiwf'}\n\nfrom . import half_integral_form\nassert half_integral_form\n\napp.register_blueprint(hiwf_page, url_prefix=\"/ModularForm/GL2/Q/holomorphic/half\")\n", "path": "lmfdb/half_integral_weight_forms/__init__.py"}]}
| 751 | 130 |
gh_patches_debug_29078
|
rasdani/github-patches
|
git_diff
|
mindee__doctr-848
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[datasets] Targets are modified inplace
### Bug description
**Targets** are being changed when iterating over a dataset more than once.
The reason is that targets are stored in `self.data` and modified ***in place*** inside `__getitem__` by `pre_transforms`, etc.
```python
# _AbstractDataset
def __getitem__(
self,
index: int
) -> Tuple[Any, Any]:
# Read image
img, target = self._read_sample(index)
# Pre-transforms (format conversion at run-time etc.)
if self._pre_transforms is not None:
img, target = self._pre_transforms(img, target)
if self.img_transforms is not None:
# typing issue cf. https://github.com/python/mypy/issues/5485
img = self.img_transforms(img) # type: ignore[call-arg]
if self.sample_transforms is not None:
img, target = self.sample_transforms(img, target)
return img, target
```
This can be fixed by copying the target in `_read_sample`:
```python
# AbstractDataset
def _read_sample(self, index: int) -> Tuple[tf.Tensor, Any]:
img_name, target = self.data[index]
# Read image
img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=tf.float32)
return img, target
```
**OR** returning a copy of the target in all transform methods.
```python
def convert_target_to_relative(img: ImageTensor, target: Dict[str, Any]) -> Tuple[ImageTensor, Dict[str, Any]]:
target['boxes'] = convert_to_relative_coords(target['boxes'], get_img_shape(img))
return img, target
```
### Code snippet to reproduce the bug
```python
def process_image(train_example):
img, target = train_example
img_numpy = img.numpy() * 255
for example in target['boxes']:
print(example)
unnormalized_example = [int(example[0]*img.shape[1]), int(example[1]*img.shape[0]),
int(example[2]*img.shape[1]), int(example[3]*img.shape[0])]
cv2.rectangle(img=img_numpy,
pt1=(unnormalized_example[0], unnormalized_example[1]),
pt2=(unnormalized_example[2], unnormalized_example[3]),
color=(0, 0, 255), thickness=2)
return img_numpy
train_set = SROIE(train=True, download=True)
for i in range(2):
for j, example in enumerate(train_set):
if j == 0:
print(f"{i} ____")
img_n = process_image(example)
```
P.S. Sorry for the rough code style; this snippet is just an example :)
### Error traceback
~changed target box coordinates
### Environment
.
</issue>
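The aliasing problem can be reproduced without doctr at all: a dataset that hands out the stored target dict lets any in-place transform corrupt the data for every later epoch, while returning a `deepcopy` keeps the stored targets intact. The sketch below is framework-agnostic; `to_absolute_inplace` merely stands in for a pre-transform such as coordinate conversion and is not doctr code.

```python
from copy import deepcopy

# Stand-in for dataset.data: each sample stores its target dict exactly once.
data = [("img_0.png", {"boxes": [[0.1, 0.1, 0.5, 0.5]]})]

def read_sample_aliased(index):
    # Returns the stored dict itself, so in-place transforms mutate `data`.
    name, target = data[index]
    return name, target

def read_sample_copied(index):
    # Returns a fresh copy, so transforms can rewrite it freely.
    name, target = data[index]
    return name, deepcopy(target)

def to_absolute_inplace(target, width=100, height=100):
    # Mimics a pre-transform that rewrites coordinates in place.
    target["boxes"] = [[x1 * width, y1 * height, x2 * width, y2 * height]
                       for x1, y1, x2, y2 in target["boxes"]]
    return target

# One "epoch" through the aliased version corrupts the stored target:
to_absolute_inplace(read_sample_aliased(0)[1])
print(data[0][1]["boxes"])   # [[10.0, 10.0, 50.0, 50.0]]  (mutated)

# The copied version leaves the dataset untouched across epochs:
data = [("img_0.png", {"boxes": [[0.1, 0.1, 0.5, 0.5]]})]
to_absolute_inplace(read_sample_copied(0)[1])
print(data[0][1]["boxes"])   # [[0.1, 0.1, 0.5, 0.5]]  (intact)
```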
<code>
[start of doctr/datasets/datasets/tensorflow.py]
1 # Copyright (C) 2021-2022, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import os
7 from typing import Any, List, Tuple
8
9 import tensorflow as tf
10
11 from doctr.io import read_img_as_tensor
12
13 from .base import _AbstractDataset, _VisionDataset
14
15 __all__ = ['AbstractDataset', 'VisionDataset']
16
17
18 class AbstractDataset(_AbstractDataset):
19
20 def _read_sample(self, index: int) -> Tuple[tf.Tensor, Any]:
21 img_name, target = self.data[index]
22 # Read image
23 img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=tf.float32)
24
25 return img, target
26
27 @staticmethod
28 def collate_fn(samples: List[Tuple[tf.Tensor, Any]]) -> Tuple[tf.Tensor, List[Any]]:
29
30 images, targets = zip(*samples)
31 images = tf.stack(images, axis=0)
32
33 return images, list(targets)
34
35
36 class VisionDataset(AbstractDataset, _VisionDataset):
37 pass
38
[end of doctr/datasets/datasets/tensorflow.py]
[start of doctr/datasets/datasets/pytorch.py]
1 # Copyright (C) 2021-2022, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import os
7 from typing import Any, List, Tuple
8
9 import torch
10
11 from doctr.io import read_img_as_tensor
12
13 from .base import _AbstractDataset, _VisionDataset
14
15 __all__ = ['AbstractDataset', 'VisionDataset']
16
17
18 class AbstractDataset(_AbstractDataset):
19
20 def _read_sample(self, index: int) -> Tuple[torch.Tensor, Any]:
21 img_name, target = self.data[index]
22 # Read image
23 img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=torch.float32)
24
25 return img, target
26
27 @staticmethod
28 def collate_fn(samples: List[Tuple[torch.Tensor, Any]]) -> Tuple[torch.Tensor, List[Any]]:
29
30 images, targets = zip(*samples)
31 images = torch.stack(images, dim=0)
32
33 return images, list(targets)
34
35
36 class VisionDataset(AbstractDataset, _VisionDataset):
37 pass
38
[end of doctr/datasets/datasets/pytorch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/doctr/datasets/datasets/pytorch.py b/doctr/datasets/datasets/pytorch.py
--- a/doctr/datasets/datasets/pytorch.py
+++ b/doctr/datasets/datasets/pytorch.py
@@ -4,6 +4,7 @@
# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
import os
+from copy import deepcopy
from typing import Any, List, Tuple
import torch
@@ -22,7 +23,7 @@
# Read image
img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=torch.float32)
- return img, target
+ return img, deepcopy(target)
@staticmethod
def collate_fn(samples: List[Tuple[torch.Tensor, Any]]) -> Tuple[torch.Tensor, List[Any]]:
diff --git a/doctr/datasets/datasets/tensorflow.py b/doctr/datasets/datasets/tensorflow.py
--- a/doctr/datasets/datasets/tensorflow.py
+++ b/doctr/datasets/datasets/tensorflow.py
@@ -4,6 +4,7 @@
# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
import os
+from copy import deepcopy
from typing import Any, List, Tuple
import tensorflow as tf
@@ -22,7 +23,7 @@
# Read image
img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=tf.float32)
- return img, target
+ return img, deepcopy(target)
@staticmethod
def collate_fn(samples: List[Tuple[tf.Tensor, Any]]) -> Tuple[tf.Tensor, List[Any]]:
|
{"golden_diff": "diff --git a/doctr/datasets/datasets/pytorch.py b/doctr/datasets/datasets/pytorch.py\n--- a/doctr/datasets/datasets/pytorch.py\n+++ b/doctr/datasets/datasets/pytorch.py\n@@ -4,6 +4,7 @@\n # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n \n import os\n+from copy import deepcopy\n from typing import Any, List, Tuple\n \n import torch\n@@ -22,7 +23,7 @@\n # Read image\n img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=torch.float32)\n \n- return img, target\n+ return img, deepcopy(target)\n \n @staticmethod\n def collate_fn(samples: List[Tuple[torch.Tensor, Any]]) -> Tuple[torch.Tensor, List[Any]]:\ndiff --git a/doctr/datasets/datasets/tensorflow.py b/doctr/datasets/datasets/tensorflow.py\n--- a/doctr/datasets/datasets/tensorflow.py\n+++ b/doctr/datasets/datasets/tensorflow.py\n@@ -4,6 +4,7 @@\n # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n \n import os\n+from copy import deepcopy\n from typing import Any, List, Tuple\n \n import tensorflow as tf\n@@ -22,7 +23,7 @@\n # Read image\n img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=tf.float32)\n \n- return img, target\n+ return img, deepcopy(target)\n \n @staticmethod\n def collate_fn(samples: List[Tuple[tf.Tensor, Any]]) -> Tuple[tf.Tensor, List[Any]]:\n", "issue": "[datasets] Targets are modified inplace\n### Bug description\n\n**Targets** are being changed when iterating over some dataset more than one time.\r\nThe reason is storing targets in self.data, and changing them in the `__getitem__` ***in place*** using `pre_transforms`, etc.\r\n```python\r\n# _AbstractDataset\r\ndef __getitem__(\r\n self,\r\n index: int\r\n ) -> Tuple[Any, Any]:\r\n\r\n # Read image\r\n img, target = self._read_sample(index)\r\n # Pre-transforms (format conversion at run-time etc.)\r\n if self._pre_transforms is not None:\r\n img, target = self._pre_transforms(img, target)\r\n\r\n if self.img_transforms is not None:\r\n # typing issue cf. 
https://github.com/python/mypy/issues/5485\r\n img = self.img_transforms(img) # type: ignore[call-arg]\r\n\r\n if self.sample_transforms is not None:\r\n img, target = self.sample_transforms(img, target)\r\n\r\n return img, target\r\n```\r\n\r\nThis can be fixed by copying target in the `_read_sample` \r\n```python\r\n# AbstractDataset\r\ndef _read_sample(self, index: int) -> Tuple[tf.Tensor, Any]:\r\n img_name, target = self.data[index]\r\n # Read image\r\n img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=tf.float32)\r\n\r\n return img, target\r\n```\r\n\r\n**OR** returning a copy of the target in all transform methods.\r\n```python\r\ndef convert_target_to_relative(img: ImageTensor, target: Dict[str, Any]) -> Tuple[ImageTensor, Dict[str, Any]]:\r\n\r\n target['boxes'] = convert_to_relative_coords(target['boxes'], get_img_shape(img))\r\n return img, target\r\n```\r\n\n\n### Code snippet to reproduce the bug\n\n```python\r\ndef process_image(train_example):\r\n img, target = train_example\r\n img_numpy = img.numpy() * 255\r\n for example in target['boxes']:\r\n print(example)\r\n unnormalized_example = [int(example[0]*img.shape[1]), int(example[1]*img.shape[0]),\r\n int(example[2]*img.shape[1]), int(example[3]*img.shape[0])]\r\n cv2.rectangle(img=img_numpy,\r\n pt1=(unnormalized_example[0], unnormalized_example[1]),\r\n pt2=(unnormalized_example[2], unnormalized_example[3]),\r\n color=(0, 0, 255), thickness=2)\r\n return img_numpy \r\n\r\n\r\ntrain_set = SROIE(train=True, download=True)\r\n\r\nfor i in range(2):\r\n for j, example in enumerate(train_set):\r\n if j == 0: \r\n print(f\"{i} ____\")\r\n img_n = process_image(example)\r\n```\r\n\r\nP.S. Sorry for not a pretty code style. This snippet is just for an example :) \n\n### Error traceback\n\n~changed target box coordinates\n\n### Environment\n\n.\n", "before_files": [{"content": "# Copyright (C) 2021-2022, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport os\nfrom typing import Any, List, Tuple\n\nimport tensorflow as tf\n\nfrom doctr.io import read_img_as_tensor\n\nfrom .base import _AbstractDataset, _VisionDataset\n\n__all__ = ['AbstractDataset', 'VisionDataset']\n\n\nclass AbstractDataset(_AbstractDataset):\n\n def _read_sample(self, index: int) -> Tuple[tf.Tensor, Any]:\n img_name, target = self.data[index]\n # Read image\n img = read_img_as_tensor(os.path.join(self.root, img_name), dtype=tf.float32)\n\n return img, target\n\n @staticmethod\n def collate_fn(samples: List[Tuple[tf.Tensor, Any]]) -> Tuple[tf.Tensor, List[Any]]:\n\n images, targets = zip(*samples)\n images = tf.stack(images, axis=0)\n\n return images, list(targets)\n\n\nclass VisionDataset(AbstractDataset, _VisionDataset):\n pass\n", "path": "doctr/datasets/datasets/tensorflow.py"}, {"content": "# Copyright (C) 2021-2022, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport os\nfrom typing import Any, List, Tuple\n\nimport torch\n\nfrom doctr.io import read_img_as_tensor\n\nfrom .base import _AbstractDataset, _VisionDataset\n\n__all__ = ['AbstractDataset', 'VisionDataset']\n\n\nclass AbstractDataset(_AbstractDataset):\n\n def _read_sample(self, index: int) -> Tuple[torch.Tensor, Any]:\n img_name, target = self.data[index]\n # Read image\n img = read_img_as_tensor(os.path.join(self.root, 
img_name), dtype=torch.float32)\n\n return img, target\n\n @staticmethod\n def collate_fn(samples: List[Tuple[torch.Tensor, Any]]) -> Tuple[torch.Tensor, List[Any]]:\n\n images, targets = zip(*samples)\n images = torch.stack(images, dim=0)\n\n return images, list(targets)\n\n\nclass VisionDataset(AbstractDataset, _VisionDataset):\n pass\n", "path": "doctr/datasets/datasets/pytorch.py"}]}
| 1,854 | 386 |
gh_patches_debug_24256
|
rasdani/github-patches
|
git_diff
|
mlcommons__GaNDLF-228
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add center cropping
**Is your feature request related to a problem? Please describe.**
We do not have any mechanism to perform cropping, which is important for certain DL training problems.
**Describe the solution you'd like**
Expose the [cropping functionality in TorchIO](https://torchio.readthedocs.io/transforms/preprocessing.html?highlight=crop#torchio.transforms.Crop) as a preprocessing mechanism.
**Describe alternatives you've considered**
N.A.
**Additional context**
Requested by @Geeks-Sid for SBU-TIL.
</issue>
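One possible shape for this, sketched below, is a pair of thin wrappers around TorchIO's `Crop` and `CropOrPad` that plug into the existing preprocessing dictionary; the key names `"crop"` and `"centercrop"` and the `patch_size` parameter are illustrative choices rather than a finalised API.

```python
from torchio.transforms import Crop, CropOrPad

def crop_transform(patch_size):
    # TorchIO Crop removes the given number of voxels from the image borders.
    return Crop(patch_size)

def centercrop_transform(patch_size):
    # CropOrPad crops (or pads) every image symmetrically to the target shape,
    # which gives center-crop behaviour for images larger than the target.
    return CropOrPad(target_shape=patch_size)

# These would slot into the registry alongside the existing entries:
extra_preprocessing = {
    "crop": crop_transform,
    "centercrop": centercrop_transform,
}
```

Using `CropOrPad` for the centre crop also covers inputs smaller than the requested shape, since it pads instead of failing.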
<code>
[start of GANDLF/data/preprocessing/__init__.py]
1 from .crop_zero_planes import CropExternalZeroplanes
2 from .non_zero_normalize import NonZeroNormalizeOnMaskedRegion
3 from .threshold_and_clip import (
4 threshold_transform,
5 clip_transform,
6 )
7 from .normalize_rgb import (
8 normalize_by_val_transform,
9 normalize_imagenet_transform,
10 normalize_standardize_transform,
11 normalize_div_by_255_transform,
12 )
13
14 from torchio.transforms import (
15 ZNormalization,
16 ToCanonical,
17 )
18
19
20 def positive_voxel_mask(image):
21 return image > 0
22
23
24 def nonzero_voxel_mask(image):
25 return image != 0
26
27
28 def to_canonical_transform(parameters):
29 return ToCanonical()
30
31
32 # defining dict for pre-processing - key is the string and the value is the transform object
33 global_preprocessing_dict = {
34 "to_canonical": to_canonical_transform,
35 "threshold": threshold_transform,
36 "clip": clip_transform,
37 "clamp": clip_transform,
38 "crop_external_zero_planes": CropExternalZeroplanes,
39 "normalize_by_val": normalize_by_val_transform,
40 "normalize_imagenet": normalize_imagenet_transform,
41 "normalize_standardize": normalize_standardize_transform,
42 "normalize_div_by_255": normalize_div_by_255_transform,
43 "normalize": ZNormalization(),
44 "normalize_positive": ZNormalization(masking_method=positive_voxel_mask),
45 "normalize_nonZero": ZNormalization(masking_method=nonzero_voxel_mask),
46 "normalize_nonZero_masked": NonZeroNormalizeOnMaskedRegion(),
47 }
48
[end of GANDLF/data/preprocessing/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/GANDLF/data/preprocessing/__init__.py b/GANDLF/data/preprocessing/__init__.py
--- a/GANDLF/data/preprocessing/__init__.py
+++ b/GANDLF/data/preprocessing/__init__.py
@@ -14,6 +14,8 @@
from torchio.transforms import (
ZNormalization,
ToCanonical,
+ Crop,
+ CropOrPad,
)
@@ -29,6 +31,14 @@
return ToCanonical()
+def crop_transform(patch_size):
+ return Crop(patch_size)
+
+
+def centercrop_transform(patch_size):
+ return CropOrPad(target_shape=patch_size)
+
+
# defining dict for pre-processing - key is the string and the value is the transform object
global_preprocessing_dict = {
"to_canonical": to_canonical_transform,
@@ -36,6 +46,8 @@
"clip": clip_transform,
"clamp": clip_transform,
"crop_external_zero_planes": CropExternalZeroplanes,
+ "crop": crop_transform,
+ "centercrop": centercrop_transform,
"normalize_by_val": normalize_by_val_transform,
"normalize_imagenet": normalize_imagenet_transform,
"normalize_standardize": normalize_standardize_transform,
|
{"golden_diff": "diff --git a/GANDLF/data/preprocessing/__init__.py b/GANDLF/data/preprocessing/__init__.py\n--- a/GANDLF/data/preprocessing/__init__.py\n+++ b/GANDLF/data/preprocessing/__init__.py\n@@ -14,6 +14,8 @@\n from torchio.transforms import (\n ZNormalization,\n ToCanonical,\n+ Crop,\n+ CropOrPad,\n )\n \n \n@@ -29,6 +31,14 @@\n return ToCanonical()\n \n \n+def crop_transform(patch_size):\n+ return Crop(patch_size)\n+\n+\n+def centercrop_transform(patch_size):\n+ return CropOrPad(target_shape=patch_size)\n+\n+\n # defining dict for pre-processing - key is the string and the value is the transform object\n global_preprocessing_dict = {\n \"to_canonical\": to_canonical_transform,\n@@ -36,6 +46,8 @@\n \"clip\": clip_transform,\n \"clamp\": clip_transform,\n \"crop_external_zero_planes\": CropExternalZeroplanes,\n+ \"crop\": crop_transform,\n+ \"centercrop\": centercrop_transform,\n \"normalize_by_val\": normalize_by_val_transform,\n \"normalize_imagenet\": normalize_imagenet_transform,\n \"normalize_standardize\": normalize_standardize_transform,\n", "issue": "Add center cropping\n**Is your feature request related to a problem? Please describe.**\r\nWe do not have any mechanism to perform cropping, which is important for certain DL training problems.\r\n\r\n**Describe the solution you'd like**\r\nExpose the [cropping functionality in TorchIO](https://torchio.readthedocs.io/transforms/preprocessing.html?highlight=crop#torchio.transforms.Crop) as a preprocessing mechanism.\r\n\r\n**Describe alternatives you've considered**\r\nN.A.\r\n\r\n**Additional context**\r\nRequested by @Geeks-Sid for SBU-TIL.\r\n\n", "before_files": [{"content": "from .crop_zero_planes import CropExternalZeroplanes\nfrom .non_zero_normalize import NonZeroNormalizeOnMaskedRegion\nfrom .threshold_and_clip import (\n threshold_transform,\n clip_transform,\n)\nfrom .normalize_rgb import (\n normalize_by_val_transform,\n normalize_imagenet_transform,\n normalize_standardize_transform,\n normalize_div_by_255_transform,\n)\n\nfrom torchio.transforms import (\n ZNormalization,\n ToCanonical,\n)\n\n\ndef positive_voxel_mask(image):\n return image > 0\n\n\ndef nonzero_voxel_mask(image):\n return image != 0\n\n\ndef to_canonical_transform(parameters):\n return ToCanonical()\n\n\n# defining dict for pre-processing - key is the string and the value is the transform object\nglobal_preprocessing_dict = {\n \"to_canonical\": to_canonical_transform,\n \"threshold\": threshold_transform,\n \"clip\": clip_transform,\n \"clamp\": clip_transform,\n \"crop_external_zero_planes\": CropExternalZeroplanes,\n \"normalize_by_val\": normalize_by_val_transform,\n \"normalize_imagenet\": normalize_imagenet_transform,\n \"normalize_standardize\": normalize_standardize_transform,\n \"normalize_div_by_255\": normalize_div_by_255_transform,\n \"normalize\": ZNormalization(),\n \"normalize_positive\": ZNormalization(masking_method=positive_voxel_mask),\n \"normalize_nonZero\": ZNormalization(masking_method=nonzero_voxel_mask),\n \"normalize_nonZero_masked\": NonZeroNormalizeOnMaskedRegion(),\n}\n", "path": "GANDLF/data/preprocessing/__init__.py"}]}
| 1,071 | 277 |
gh_patches_debug_23874
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6798
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: Static type checking (mypy) CI job failing
### Meltano Version
N/A
### Python Version
NA
### Bug scope
Other
### Operating System
N/A
### Description
Example from `main`: https://github.com/meltano/meltano/actions/runs/3129670243/jobs/5079038959
```
nox > Running session mypy
nox > Creating virtual environment (virtualenv) using python3.9 in .nox/mypy
nox > poetry build --format=wheel --no-ansi
nox > pip uninstall --yes file:///home/runner/work/meltano/meltano/dist/meltano-2.7.0-py3-none-any.whl
nox > poetry export --format=requirements.txt --dev --extras=infra --extras=mssql --extras=repl --without-hashes
The `--dev` option is deprecated, use the `--with dev` notation instead.
nox > python -m pip install --constraint=.nox/mypy/tmp/requirements.txt file:///home/runner/work/meltano/meltano/dist/meltano-2.7.0-py3-none-any.whl
nox > python -m pip install --constraint=.nox/mypy/tmp/requirements.txt mypy sqlalchemy2-stubs types-requests
nox > mypy src/meltano
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:39: error: Variable "f4c225a9492f_create_dedicated_job_state_table.SystemModel" is not valid as a type
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:39: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:39: error: Invalid base class "SystemModel"
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:69: error: Need type annotation for "completed_state" (hint: "completed_state: Dict[<type>, <type>] = ...")
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:70: error: Need type annotation for "partial_state" (hint: "partial_state: Dict[<type>, <type>] = ...")
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:274: error: "object" has no attribute "query"
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:307: error: Variable "f4c225a9492f_create_dedicated_job_state_table.SystemModel" is not valid as a type
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:307: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:307: error: Invalid base class "SystemModel"
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:340: error: Need type annotation for "completed_state" (hint: "completed_state: Dict[<type>, <type>] = ...")
src/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:341: error: Need type annotation for "partial_state" (hint: "partial_state: Dict[<type>, <type>] = ...")
src/meltano/core/job_state.py:19: error: Variable "meltano.core.models.SystemModel" is not valid as a type
src/meltano/core/job_state.py:19: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
src/meltano/core/job_state.py:19: error: Invalid base class "SystemModel"
Found 11 errors in 2 files (checked 203 source files)
nox > Command mypy src/meltano failed with exit code 1
nox > Session mypy failed.
Error: Process completed with exit code 1.
```
### Code
_No response_
</issue>
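The `Need type annotation` errors come from mypy being unable to infer a type for the bare `Column(...)` assignments. Below is a minimal sketch of the kind of annotation that satisfies sqlalchemy2-stubs; it illustrates the approach rather than the exact fix:

```python
# Sketch only: explicit Mapped[...] annotations give mypy (via sqlalchemy2-stubs)
# a concrete type for dynamically typed Column assignments.
from typing import Any

from sqlalchemy import Column, types
from sqlalchemy.ext.mutable import MutableDict
from sqlalchemy.orm import Mapped

from meltano.core.models import SystemModel
from meltano.core.sqlalchemy import JSONEncodedDict


class JobState(SystemModel):
    __tablename__ = "state"

    job_name = Column(types.String, unique=True, primary_key=True, nullable=False)
    # Annotating with Mapped[Any] resolves the "Need type annotation for ..." errors.
    partial_state: Mapped[Any] = Column(MutableDict.as_mutable(JSONEncodedDict))
    completed_state: Mapped[Any] = Column(MutableDict.as_mutable(JSONEncodedDict))
```

The remaining errors in the auto-generated migration files could instead be handled by excluding `src/meltano/migrations/` from the `mypy` invocation in `noxfile.py`.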
<code>
[start of noxfile.py]
1 """Nox configuration."""
2
3 from __future__ import annotations
4
5 import os
6 import sys
7 from pathlib import Path
8 from random import randint
9 from textwrap import dedent
10
11 try:
12 from nox_poetry import Session
13 from nox_poetry import session as nox_session
14 except ImportError:
15 message = f"""\
16 Nox failed to import the 'nox-poetry' package.
17 Please install it using the following command:
18 {sys.executable} -m pip install nox-poetry"""
19 raise SystemExit(dedent(message)) from None
20
21
22 package = "meltano"
23 python_versions = ["3.10", "3.9", "3.8", "3.7"]
24 main_python_version = "3.9"
25 locations = "src", "tests", "noxfile.py"
26
27
28 @nox_session(python=python_versions)
29 def tests(session: Session) -> None:
30 """Execute pytest tests and compute coverage.
31
32 Args:
33 session: Nox session.
34 """
35 backend_db = os.environ.get("PYTEST_BACKEND", "sqlite")
36
37 if backend_db == "mssql":
38 session.install(".[mssql]")
39 else:
40 session.install(".")
41
42 session.install(
43 "freezegun",
44 "mock",
45 "pytest",
46 "pytest-asyncio",
47 "pytest-cov",
48 "pytest-docker",
49 "pytest-order",
50 "pytest-randomly",
51 "pytest-xdist",
52 "requests-mock",
53 )
54
55 try:
56 session.run(
57 "pytest",
58 f"--randomly-seed={randint(0, 2**32-1)}", # noqa: S311, WPS432
59 *session.posargs,
60 env={"NOX_CURRENT_SESSION": "tests"},
61 )
62 finally:
63 if session.interactive:
64 session.notify("coverage", posargs=[])
65
66
67 @nox_session(python=main_python_version)
68 def coverage(session: Session) -> None:
69 """Upload coverage data.
70
71 Args:
72 session: Nox session.
73 """
74 args = session.posargs or ["report"]
75
76 session.install("coverage[toml]")
77
78 if not session.posargs and any(Path().glob(".coverage.*")):
79 session.run("coverage", "combine")
80
81 session.run("coverage", *args)
82
83
84 @nox_session(python=main_python_version)
85 def mypy(session: Session) -> None:
86 """Run mypy type checking.
87
88 Args:
89 session: Nox session.
90 """
91 args = session.posargs or ["src/meltano"]
92
93 session.install(".")
94 session.install(
95 "mypy",
96 "sqlalchemy2-stubs",
97 "types-requests",
98 )
99 session.run("mypy", *args)
100
[end of noxfile.py]
[start of src/meltano/core/job_state.py]
1 """Defines JobState model class."""
2 from __future__ import annotations
3
4 from datetime import datetime
5 from typing import Any
6
7 from sqlalchemy import Column, types
8 from sqlalchemy.ext.mutable import MutableDict
9 from sqlalchemy.orm import Session
10
11 from meltano.core.job import JobFinder, Payload
12 from meltano.core.models import SystemModel
13 from meltano.core.sqlalchemy import JSONEncodedDict
14 from meltano.core.utils import merge
15
16 SINGER_STATE_KEY = "singer_state"
17
18
19 class JobState(SystemModel):
20 """Model class that represents the current state of a given job.
21
22 Modified during `meltano elt` or `meltano run` runs whenever a
23 STATE message is emitted by a Singer target. Also written and read
24 by `meltano state` CLI invocations. Only holds the _current_ state
25 for a given job_name. Full job run history is held by the Job model.
26 """
27
28 __tablename__ = "state"
29 job_name = Column(types.String, unique=True, primary_key=True, nullable=False)
30
31 updated_at = Column(types.DATETIME, onupdate=datetime.now)
32
33 partial_state = Column(MutableDict.as_mutable(JSONEncodedDict))
34 completed_state = Column(MutableDict.as_mutable(JSONEncodedDict))
35
36 @classmethod
37 def from_job_history(cls, session: Session, job_name: str):
38 """Build JobState from job run history.
39
40 Args:
41 session: the session to use in finding job history
42 job_name: the name of the job to build JobState for
43
44 Returns:
45 JobState built from job run history
46 """
47 completed_state: dict[Any, Any] = {}
48 partial_state: dict[Any, Any] = {}
49 incomplete_since = None
50 finder = JobFinder(job_name)
51
52 # Get the state for the most recent completed job.
53 # Do not consider dummy jobs create via add_state.
54 state_job = finder.latest_with_payload(session, flags=Payload.STATE)
55 if state_job:
56 incomplete_since = state_job.ended_at
57 if SINGER_STATE_KEY in state_job.payload:
58 merge(state_job.payload, partial_state)
59
60 # If there have been any incomplete jobs since the most recent completed jobs,
61 # merge the state emitted by those jobs into the state for the most recent
62 # completed job. If there are no completed jobs, get the full history of
63 # incomplete jobs and use the most recent state emitted per stream
64 incomplete_state_jobs = finder.with_payload(
65 session, flags=Payload.INCOMPLETE_STATE, since=incomplete_since
66 )
67 for incomplete_state_job in incomplete_state_jobs:
68 if SINGER_STATE_KEY in incomplete_state_job.payload:
69 partial_state = merge(incomplete_state_job.payload, partial_state)
70
71 return cls(
72 job_name=job_name,
73 partial_state=partial_state,
74 completed_state=completed_state,
75 )
76
[end of src/meltano/core/job_state.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -88,7 +88,7 @@
Args:
session: Nox session.
"""
- args = session.posargs or ["src/meltano"]
+ args = session.posargs or ["src/meltano", "--exclude", "src/meltano/migrations/"]
session.install(".")
session.install(
diff --git a/src/meltano/core/job_state.py b/src/meltano/core/job_state.py
--- a/src/meltano/core/job_state.py
+++ b/src/meltano/core/job_state.py
@@ -6,7 +6,7 @@
from sqlalchemy import Column, types
from sqlalchemy.ext.mutable import MutableDict
-from sqlalchemy.orm import Session
+from sqlalchemy.orm import Mapped, Session
from meltano.core.job import JobFinder, Payload
from meltano.core.models import SystemModel
@@ -30,8 +30,8 @@
updated_at = Column(types.DATETIME, onupdate=datetime.now)
- partial_state = Column(MutableDict.as_mutable(JSONEncodedDict))
- completed_state = Column(MutableDict.as_mutable(JSONEncodedDict))
+ partial_state: Mapped[Any] = Column(MutableDict.as_mutable(JSONEncodedDict))
+ completed_state: Mapped[Any] = Column(MutableDict.as_mutable(JSONEncodedDict))
@classmethod
def from_job_history(cls, session: Session, job_name: str):
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -88,7 +88,7 @@\n Args:\n session: Nox session.\n \"\"\"\n- args = session.posargs or [\"src/meltano\"]\n+ args = session.posargs or [\"src/meltano\", \"--exclude\", \"src/meltano/migrations/\"]\n \n session.install(\".\")\n session.install(\ndiff --git a/src/meltano/core/job_state.py b/src/meltano/core/job_state.py\n--- a/src/meltano/core/job_state.py\n+++ b/src/meltano/core/job_state.py\n@@ -6,7 +6,7 @@\n \n from sqlalchemy import Column, types\n from sqlalchemy.ext.mutable import MutableDict\n-from sqlalchemy.orm import Session\n+from sqlalchemy.orm import Mapped, Session\n \n from meltano.core.job import JobFinder, Payload\n from meltano.core.models import SystemModel\n@@ -30,8 +30,8 @@\n \n updated_at = Column(types.DATETIME, onupdate=datetime.now)\n \n- partial_state = Column(MutableDict.as_mutable(JSONEncodedDict))\n- completed_state = Column(MutableDict.as_mutable(JSONEncodedDict))\n+ partial_state: Mapped[Any] = Column(MutableDict.as_mutable(JSONEncodedDict))\n+ completed_state: Mapped[Any] = Column(MutableDict.as_mutable(JSONEncodedDict))\n \n @classmethod\n def from_job_history(cls, session: Session, job_name: str):\n", "issue": "bug: Static type checking (mypy) CI job failing\n### Meltano Version\n\nN/A\n\n### Python Version\n\nNA\n\n### Bug scope\n\nOther\n\n### Operating System\n\nN/A\n\n### Description\n\nExample from `main`: https://github.com/meltano/meltano/actions/runs/3129670243/jobs/5079038959\r\n\r\n```\r\nnox > Running session mypy\r\nnox > Creating virtual environment (virtualenv) using python3.9 in .nox/mypy\r\nnox > poetry build --format=wheel --no-ansi\r\nnox > pip uninstall --yes file:///home/runner/work/meltano/meltano/dist/meltano-2.7.0-py3-none-any.whl\r\nnox > poetry export --format=requirements.txt --dev --extras=infra --extras=mssql --extras=repl --without-hashes\r\nThe `--dev` option is deprecated, use the `--with dev` notation instead.\r\nnox > python -m pip install --constraint=.nox/mypy/tmp/requirements.txt file:///home/runner/work/meltano/meltano/dist/meltano-2.7.0-py3-none-any.whl\r\nnox > python -m pip install --constraint=.nox/mypy/tmp/requirements.txt mypy sqlalchemy2-stubs types-requests\r\nnox > mypy src/meltano\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:39: error: Variable \"f4c225a9492f_create_dedicated_job_state_table.SystemModel\" is not valid as a type\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:39: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:39: error: Invalid base class \"SystemModel\"\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:69: error: Need type annotation for \"completed_state\" (hint: \"completed_state: Dict[<type>, <type>] = ...\")\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:70: error: Need type annotation for \"partial_state\" (hint: \"partial_state: Dict[<type>, <type>] = ...\")\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:274: error: \"object\" has no attribute \"query\"\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:307: error: Variable \"f4c225a9492f_create_dedicated_job_state_table.SystemModel\" is not valid as a 
type\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:307: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:307: error: Invalid base class \"SystemModel\"\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:340: error: Need type annotation for \"completed_state\" (hint: \"completed_state: Dict[<type>, <type>] = ...\")\r\nsrc/meltano/migrations/versions/f4c225a9492f_create_dedicated_job_state_table.py:341: error: Need type annotation for \"partial_state\" (hint: \"partial_state: Dict[<type>, <type>] = ...\")\r\nsrc/meltano/core/job_state.py:19: error: Variable \"meltano.core.models.SystemModel\" is not valid as a type\r\nsrc/meltano/core/job_state.py:19: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases\r\nsrc/meltano/core/job_state.py:19: error: Invalid base class \"SystemModel\"\r\nFound 11 errors in 2 files (checked 203 source files)\r\nnox > Command mypy src/meltano failed with exit code 1\r\nnox > Session mypy failed.\r\nError: Process completed with exit code 1.\r\n```\n\n### Code\n\n_No response_\n", "before_files": [{"content": "\"\"\"Nox configuration.\"\"\"\n\nfrom __future__ import annotations\n\nimport os\nimport sys\nfrom pathlib import Path\nfrom random import randint\nfrom textwrap import dedent\n\ntry:\n from nox_poetry import Session\n from nox_poetry import session as nox_session\nexcept ImportError:\n message = f\"\"\"\\\n Nox failed to import the 'nox-poetry' package.\n Please install it using the following command:\n {sys.executable} -m pip install nox-poetry\"\"\"\n raise SystemExit(dedent(message)) from None\n\n\npackage = \"meltano\"\npython_versions = [\"3.10\", \"3.9\", \"3.8\", \"3.7\"]\nmain_python_version = \"3.9\"\nlocations = \"src\", \"tests\", \"noxfile.py\"\n\n\n@nox_session(python=python_versions)\ndef tests(session: Session) -> None:\n \"\"\"Execute pytest tests and compute coverage.\n\n Args:\n session: Nox session.\n \"\"\"\n backend_db = os.environ.get(\"PYTEST_BACKEND\", \"sqlite\")\n\n if backend_db == \"mssql\":\n session.install(\".[mssql]\")\n else:\n session.install(\".\")\n\n session.install(\n \"freezegun\",\n \"mock\",\n \"pytest\",\n \"pytest-asyncio\",\n \"pytest-cov\",\n \"pytest-docker\",\n \"pytest-order\",\n \"pytest-randomly\",\n \"pytest-xdist\",\n \"requests-mock\",\n )\n\n try:\n session.run(\n \"pytest\",\n f\"--randomly-seed={randint(0, 2**32-1)}\", # noqa: S311, WPS432\n *session.posargs,\n env={\"NOX_CURRENT_SESSION\": \"tests\"},\n )\n finally:\n if session.interactive:\n session.notify(\"coverage\", posargs=[])\n\n\n@nox_session(python=main_python_version)\ndef coverage(session: Session) -> None:\n \"\"\"Upload coverage data.\n\n Args:\n session: Nox session.\n \"\"\"\n args = session.posargs or [\"report\"]\n\n session.install(\"coverage[toml]\")\n\n if not session.posargs and any(Path().glob(\".coverage.*\")):\n session.run(\"coverage\", \"combine\")\n\n session.run(\"coverage\", *args)\n\n\n@nox_session(python=main_python_version)\ndef mypy(session: Session) -> None:\n \"\"\"Run mypy type checking.\n\n Args:\n session: Nox session.\n \"\"\"\n args = session.posargs or [\"src/meltano\"]\n\n session.install(\".\")\n session.install(\n \"mypy\",\n \"sqlalchemy2-stubs\",\n \"types-requests\",\n )\n session.run(\"mypy\", *args)\n", "path": "noxfile.py"}, {"content": "\"\"\"Defines JobState model 
class.\"\"\"\nfrom __future__ import annotations\n\nfrom datetime import datetime\nfrom typing import Any\n\nfrom sqlalchemy import Column, types\nfrom sqlalchemy.ext.mutable import MutableDict\nfrom sqlalchemy.orm import Session\n\nfrom meltano.core.job import JobFinder, Payload\nfrom meltano.core.models import SystemModel\nfrom meltano.core.sqlalchemy import JSONEncodedDict\nfrom meltano.core.utils import merge\n\nSINGER_STATE_KEY = \"singer_state\"\n\n\nclass JobState(SystemModel):\n \"\"\"Model class that represents the current state of a given job.\n\n Modified during `meltano elt` or `meltano run` runs whenever a\n STATE message is emitted by a Singer target. Also written and read\n by `meltano state` CLI invocations. Only holds the _current_ state\n for a given job_name. Full job run history is held by the Job model.\n \"\"\"\n\n __tablename__ = \"state\"\n job_name = Column(types.String, unique=True, primary_key=True, nullable=False)\n\n updated_at = Column(types.DATETIME, onupdate=datetime.now)\n\n partial_state = Column(MutableDict.as_mutable(JSONEncodedDict))\n completed_state = Column(MutableDict.as_mutable(JSONEncodedDict))\n\n @classmethod\n def from_job_history(cls, session: Session, job_name: str):\n \"\"\"Build JobState from job run history.\n\n Args:\n session: the session to use in finding job history\n job_name: the name of the job to build JobState for\n\n Returns:\n JobState built from job run history\n \"\"\"\n completed_state: dict[Any, Any] = {}\n partial_state: dict[Any, Any] = {}\n incomplete_since = None\n finder = JobFinder(job_name)\n\n # Get the state for the most recent completed job.\n # Do not consider dummy jobs create via add_state.\n state_job = finder.latest_with_payload(session, flags=Payload.STATE)\n if state_job:\n incomplete_since = state_job.ended_at\n if SINGER_STATE_KEY in state_job.payload:\n merge(state_job.payload, partial_state)\n\n # If there have been any incomplete jobs since the most recent completed jobs,\n # merge the state emitted by those jobs into the state for the most recent\n # completed job. If there are no completed jobs, get the full history of\n # incomplete jobs and use the most recent state emitted per stream\n incomplete_state_jobs = finder.with_payload(\n session, flags=Payload.INCOMPLETE_STATE, since=incomplete_since\n )\n for incomplete_state_job in incomplete_state_jobs:\n if SINGER_STATE_KEY in incomplete_state_job.payload:\n partial_state = merge(incomplete_state_job.payload, partial_state)\n\n return cls(\n job_name=job_name,\n partial_state=partial_state,\n completed_state=completed_state,\n )\n", "path": "src/meltano/core/job_state.py"}]}
| 3,163 | 332 |
gh_patches_debug_31454
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-18885
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Permissions and warning for custom emoji overriding unicode emoji
Only administrators/owners should be able to override unicode emoji
1. If an administrator attempts to override a unicode emoji with a custom emoji, they should get a warning. #16937 attempts to fix this, but it is currently not working in production.
We should also shorten the warning message and avoid referring to "unicode" to avoid confusing non-technical users:
>**Override built-in emoji?**
> Uploading a custom emoji with the name **<name>** will override the built-in **<name>** emoji. Continue?
2. If a non-administrator attempts to override an emoji, show an error in the same style as the error for overriding custom emoji (screenshot below). Text: "Failed: An emoji with this name already exists. Only administrators can override built-in emoji."
Error for overriding custom emoji:
<img width="531" alt="Screen Shot 2021-06-15 at 2 30 38 PM" src="https://user-images.githubusercontent.com/2090066/122126418-915e9880-cde6-11eb-86f6-0a4338478739.png">
Related issue: #18269
[Related CZO thread](https://chat.zulip.org/#narrow/stream/2-general/topic/ok.20emoji)
</issue>
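Below is a minimal sketch of the backend check described in point 2; the helper name is hypothetical, and it assumes the built-in emoji names are available via `zerver.lib.emoji.name_to_codepoint`:

```python
# Sketch only: reject non-administrator attempts to override a built-in emoji.
from django.utils.translation import gettext as _

from zerver.lib.emoji import name_to_codepoint
from zerver.lib.request import JsonableError
from zerver.models import UserProfile


def check_builtin_emoji_override(user_profile: UserProfile, emoji_name: str) -> None:
    # Built-in (unicode) emoji names are the keys of name_to_codepoint.
    if emoji_name in name_to_codepoint and not user_profile.is_realm_admin:
        raise JsonableError(_("Only administrators can override built-in emoji."))
```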
<code>
[start of zerver/views/realm_emoji.py]
1 from django.conf import settings
2 from django.http import HttpRequest, HttpResponse
3 from django.utils.translation import gettext as _
4
5 from zerver.decorator import require_member_or_admin
6 from zerver.lib.actions import check_add_realm_emoji, do_remove_realm_emoji
7 from zerver.lib.emoji import check_emoji_admin, check_valid_emoji_name
8 from zerver.lib.request import REQ, JsonableError, has_request_variables
9 from zerver.lib.response import json_success
10 from zerver.models import RealmEmoji, UserProfile
11
12
13 def list_emoji(request: HttpRequest, user_profile: UserProfile) -> HttpResponse:
14
15 # We don't call check_emoji_admin here because the list of realm
16 # emoji is public.
17 return json_success({"emoji": user_profile.realm.get_emoji()})
18
19
20 @require_member_or_admin
21 @has_request_variables
22 def upload_emoji(
23 request: HttpRequest, user_profile: UserProfile, emoji_name: str = REQ(path_only=True)
24 ) -> HttpResponse:
25 emoji_name = emoji_name.strip().replace(" ", "_")
26 check_valid_emoji_name(emoji_name)
27 check_emoji_admin(user_profile)
28 if RealmEmoji.objects.filter(
29 realm=user_profile.realm, name=emoji_name, deactivated=False
30 ).exists():
31 raise JsonableError(_("A custom emoji with this name already exists."))
32 if len(request.FILES) != 1:
33 raise JsonableError(_("You must upload exactly one file."))
34 emoji_file = list(request.FILES.values())[0]
35 if (settings.MAX_EMOJI_FILE_SIZE_MIB * 1024 * 1024) < emoji_file.size:
36 raise JsonableError(
37 _("Uploaded file is larger than the allowed limit of {} MiB").format(
38 settings.MAX_EMOJI_FILE_SIZE_MIB,
39 )
40 )
41
42 realm_emoji = check_add_realm_emoji(user_profile.realm, emoji_name, user_profile, emoji_file)
43 if realm_emoji is None:
44 raise JsonableError(_("Image file upload failed."))
45 return json_success()
46
47
48 def delete_emoji(request: HttpRequest, user_profile: UserProfile, emoji_name: str) -> HttpResponse:
49 if not RealmEmoji.objects.filter(
50 realm=user_profile.realm, name=emoji_name, deactivated=False
51 ).exists():
52 raise JsonableError(_("Emoji '{}' does not exist").format(emoji_name))
53 check_emoji_admin(user_profile, emoji_name)
54 do_remove_realm_emoji(user_profile.realm, emoji_name)
55 return json_success()
56
[end of zerver/views/realm_emoji.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zerver/views/realm_emoji.py b/zerver/views/realm_emoji.py
--- a/zerver/views/realm_emoji.py
+++ b/zerver/views/realm_emoji.py
@@ -4,7 +4,7 @@
from zerver.decorator import require_member_or_admin
from zerver.lib.actions import check_add_realm_emoji, do_remove_realm_emoji
-from zerver.lib.emoji import check_emoji_admin, check_valid_emoji_name
+from zerver.lib.emoji import check_emoji_admin, check_valid_emoji_name, name_to_codepoint
from zerver.lib.request import REQ, JsonableError, has_request_variables
from zerver.lib.response import json_success
from zerver.models import RealmEmoji, UserProfile
@@ -23,6 +23,7 @@
request: HttpRequest, user_profile: UserProfile, emoji_name: str = REQ(path_only=True)
) -> HttpResponse:
emoji_name = emoji_name.strip().replace(" ", "_")
+ valid_built_in_emoji = name_to_codepoint.keys()
check_valid_emoji_name(emoji_name)
check_emoji_admin(user_profile)
if RealmEmoji.objects.filter(
@@ -31,6 +32,9 @@
raise JsonableError(_("A custom emoji with this name already exists."))
if len(request.FILES) != 1:
raise JsonableError(_("You must upload exactly one file."))
+ if emoji_name in valid_built_in_emoji:
+ if not user_profile.is_realm_admin:
+ raise JsonableError(_("Only administrators can override built-in emoji."))
emoji_file = list(request.FILES.values())[0]
if (settings.MAX_EMOJI_FILE_SIZE_MIB * 1024 * 1024) < emoji_file.size:
raise JsonableError(
|
{"golden_diff": "diff --git a/zerver/views/realm_emoji.py b/zerver/views/realm_emoji.py\n--- a/zerver/views/realm_emoji.py\n+++ b/zerver/views/realm_emoji.py\n@@ -4,7 +4,7 @@\n \n from zerver.decorator import require_member_or_admin\n from zerver.lib.actions import check_add_realm_emoji, do_remove_realm_emoji\n-from zerver.lib.emoji import check_emoji_admin, check_valid_emoji_name\n+from zerver.lib.emoji import check_emoji_admin, check_valid_emoji_name, name_to_codepoint\n from zerver.lib.request import REQ, JsonableError, has_request_variables\n from zerver.lib.response import json_success\n from zerver.models import RealmEmoji, UserProfile\n@@ -23,6 +23,7 @@\n request: HttpRequest, user_profile: UserProfile, emoji_name: str = REQ(path_only=True)\n ) -> HttpResponse:\n emoji_name = emoji_name.strip().replace(\" \", \"_\")\n+ valid_built_in_emoji = name_to_codepoint.keys()\n check_valid_emoji_name(emoji_name)\n check_emoji_admin(user_profile)\n if RealmEmoji.objects.filter(\n@@ -31,6 +32,9 @@\n raise JsonableError(_(\"A custom emoji with this name already exists.\"))\n if len(request.FILES) != 1:\n raise JsonableError(_(\"You must upload exactly one file.\"))\n+ if emoji_name in valid_built_in_emoji:\n+ if not user_profile.is_realm_admin:\n+ raise JsonableError(_(\"Only administrators can override built-in emoji.\"))\n emoji_file = list(request.FILES.values())[0]\n if (settings.MAX_EMOJI_FILE_SIZE_MIB * 1024 * 1024) < emoji_file.size:\n raise JsonableError(\n", "issue": "Permissions and warning for custom emoji overriding unicode emoji\nOnly administrators/owners should be able to override unicode emoji\r\n\r\n1. If an administrator attempts to override a unicode emoji with a custom emoji, they should get a warning. #16937 attempts to fix this, but it is currently not working in production.\r\n\r\nWe should also shorten the warning message and avoid referring to \"unicode\" to avoid confusing non-technical users:\r\n>**Override built-in emoji?**\r\n> Uploading a custom emoji with the name **<name>** will override the built-in **<name>** emoji. Continue?\r\n\r\n2. If a non-administrator attempts to override an emoji, show an error in the same style as the error for overriding custom emoji (screenshot below). Text: \"Failed: An emoji with this name already exists. 
Only administrators can override built-in emoji.\"\r\n\r\nError for overriding custom emoji:\r\n<img width=\"531\" alt=\"Screen Shot 2021-06-15 at 2 30 38 PM\" src=\"https://user-images.githubusercontent.com/2090066/122126418-915e9880-cde6-11eb-86f6-0a4338478739.png\">\r\n\r\nRelated issue: #18269\r\n[Related CZO thread](https://chat.zulip.org/#narrow/stream/2-general/topic/ok.20emoji)\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.http import HttpRequest, HttpResponse\nfrom django.utils.translation import gettext as _\n\nfrom zerver.decorator import require_member_or_admin\nfrom zerver.lib.actions import check_add_realm_emoji, do_remove_realm_emoji\nfrom zerver.lib.emoji import check_emoji_admin, check_valid_emoji_name\nfrom zerver.lib.request import REQ, JsonableError, has_request_variables\nfrom zerver.lib.response import json_success\nfrom zerver.models import RealmEmoji, UserProfile\n\n\ndef list_emoji(request: HttpRequest, user_profile: UserProfile) -> HttpResponse:\n\n # We don't call check_emoji_admin here because the list of realm\n # emoji is public.\n return json_success({\"emoji\": user_profile.realm.get_emoji()})\n\n\n@require_member_or_admin\n@has_request_variables\ndef upload_emoji(\n request: HttpRequest, user_profile: UserProfile, emoji_name: str = REQ(path_only=True)\n) -> HttpResponse:\n emoji_name = emoji_name.strip().replace(\" \", \"_\")\n check_valid_emoji_name(emoji_name)\n check_emoji_admin(user_profile)\n if RealmEmoji.objects.filter(\n realm=user_profile.realm, name=emoji_name, deactivated=False\n ).exists():\n raise JsonableError(_(\"A custom emoji with this name already exists.\"))\n if len(request.FILES) != 1:\n raise JsonableError(_(\"You must upload exactly one file.\"))\n emoji_file = list(request.FILES.values())[0]\n if (settings.MAX_EMOJI_FILE_SIZE_MIB * 1024 * 1024) < emoji_file.size:\n raise JsonableError(\n _(\"Uploaded file is larger than the allowed limit of {} MiB\").format(\n settings.MAX_EMOJI_FILE_SIZE_MIB,\n )\n )\n\n realm_emoji = check_add_realm_emoji(user_profile.realm, emoji_name, user_profile, emoji_file)\n if realm_emoji is None:\n raise JsonableError(_(\"Image file upload failed.\"))\n return json_success()\n\n\ndef delete_emoji(request: HttpRequest, user_profile: UserProfile, emoji_name: str) -> HttpResponse:\n if not RealmEmoji.objects.filter(\n realm=user_profile.realm, name=emoji_name, deactivated=False\n ).exists():\n raise JsonableError(_(\"Emoji '{}' does not exist\").format(emoji_name))\n check_emoji_admin(user_profile, emoji_name)\n do_remove_realm_emoji(user_profile.realm, emoji_name)\n return json_success()\n", "path": "zerver/views/realm_emoji.py"}]}
| 1,488 | 391 |
gh_patches_debug_28335
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-881
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Viewer: LabelPainter overlay does not update with new loaded image
Reproduce: open the Watershed demo, then load another image of a different `shape`. The overlay won't cover the entire image, and when watershed is called there will be a shape mismatch.
</issue>
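One way to address this is to make the overlay shape a settable property that is refreshed whenever a new image is loaded. The snippet below is a self-contained stand-in for that bookkeeping, not the actual `PaintTool` code:

```python
# Sketch only: rebuild the overlay whenever the tracked image shape changes,
# mirroring what PaintTool would need to do when a new image is loaded.
import numpy as np


class OverlaySketch:
    def __init__(self, shape, radius=5):
        self.radius = radius
        self.shape = shape              # goes through the setter below

    @property
    def shape(self):
        return self._shape

    @shape.setter
    def shape(self, shape):
        self._shape = shape
        # In the real tool this would also update the matplotlib artist extent
        # and rebuild the CenteredWindow used for painting.
        self.overlay = np.zeros(shape, dtype='uint8')


tool = OverlaySketch((512, 512))
tool.shape = (300, 400)                 # new image of a different shape
assert tool.overlay.shape == (300, 400)
```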
<code>
[start of skimage/viewer/plugins/labelplugin.py]
1 import numpy as np
2
3 from .base import Plugin
4 from ..widgets import ComboBox, Slider
5 from ..canvastools import PaintTool
6
7
8 __all__ = ['LabelPainter']
9
10
11 rad2deg = 180 / np.pi
12
13
14 class LabelPainter(Plugin):
15 name = 'LabelPainter'
16
17 def __init__(self, max_radius=20, **kwargs):
18 super(LabelPainter, self).__init__(**kwargs)
19
20 # These widgets adjust plugin properties instead of an image filter.
21 self._radius_widget = Slider('radius', low=1, high=max_radius,
22 value=5, value_type='int', ptype='plugin')
23 labels = [str(i) for i in range(6)]
24 labels[0] = 'Erase'
25 self._label_widget = ComboBox('label', labels, ptype='plugin')
26 self.add_widget(self._radius_widget)
27 self.add_widget(self._label_widget)
28
29 print(self.help())
30
31 def help(self):
32 helpstr = ("Label painter",
33 "Hold left-mouse button and paint on canvas.")
34 return '\n'.join(helpstr)
35
36 def attach(self, image_viewer):
37 super(LabelPainter, self).attach(image_viewer)
38
39 image = image_viewer.original_image
40 self.paint_tool = PaintTool(self.image_viewer.ax, image.shape,
41 on_enter=self.on_enter)
42 self.paint_tool.radius = self.radius
43 self.paint_tool.label = self._label_widget.index = 1
44 self.artists.append(self.paint_tool)
45
46 def on_enter(self, overlay):
47 pass
48
49 @property
50 def radius(self):
51 return self._radius_widget.val
52
53 @radius.setter
54 def radius(self, val):
55 self.paint_tool.radius = val
56
57 @property
58 def label(self):
59 return self._label_widget.val
60
61 @label.setter
62 def label(self, val):
63 self.paint_tool.label = val
64
[end of skimage/viewer/plugins/labelplugin.py]
[start of skimage/viewer/canvastools/painttool.py]
1 import numpy as np
2 import matplotlib.pyplot as plt
3 import matplotlib.colors as mcolors
4 LABELS_CMAP = mcolors.ListedColormap(['white', 'red', 'dodgerblue', 'gold',
5 'greenyellow', 'blueviolet'])
6
7 from skimage.viewer.canvastools.base import CanvasToolBase
8
9
10 __all__ = ['PaintTool']
11
12
13 class PaintTool(CanvasToolBase):
14 """Widget for painting on top of a plot.
15
16 Parameters
17 ----------
18 ax : :class:`matplotlib.axes.Axes`
19 Matplotlib axes where tool is displayed.
20 overlay_shape : shape tuple
21 2D shape tuple used to initialize overlay image.
22 alpha : float (between [0, 1])
23 Opacity of overlay
24 on_move : function
25 Function called whenever a control handle is moved.
26 This function must accept the end points of line as the only argument.
27 on_release : function
28 Function called whenever the control handle is released.
29 on_enter : function
30 Function called whenever the "enter" key is pressed.
31 rect_props : dict
32 Properties for :class:`matplotlib.patches.Rectangle`. This class
33 redefines defaults in :class:`matplotlib.widgets.RectangleSelector`.
34
35 Attributes
36 ----------
37 overlay : array
38 Overlay of painted labels displayed on top of image.
39 label : int
40 Current paint color.
41 """
42 def __init__(self, ax, overlay_shape, radius=5, alpha=0.3, on_move=None,
43 on_release=None, on_enter=None, rect_props=None):
44 super(PaintTool, self).__init__(ax, on_move=on_move, on_enter=on_enter,
45 on_release=on_release)
46
47 props = dict(edgecolor='r', facecolor='0.7', alpha=0.5, animated=True)
48 props.update(rect_props if rect_props is not None else {})
49
50 self.alpha = alpha
51 self.cmap = LABELS_CMAP
52 self._overlay_plot = None
53 self._shape = overlay_shape
54 self.overlay = np.zeros(overlay_shape, dtype='uint8')
55
56 self._cursor = plt.Rectangle((0, 0), 0, 0, **props)
57 self._cursor.set_visible(False)
58 self.ax.add_patch(self._cursor)
59
60 # `label` and `radius` can only be set after initializing `_cursor`
61 self.label = 1
62 self.radius = radius
63
64 # Note that the order is important: Redraw cursor *after* overlay
65 self._artists = [self._overlay_plot, self._cursor]
66
67 self.connect_event('button_press_event', self.on_mouse_press)
68 self.connect_event('button_release_event', self.on_mouse_release)
69 self.connect_event('motion_notify_event', self.on_move)
70
71 @property
72 def label(self):
73 return self._label
74
75 @label.setter
76 def label(self, value):
77 if value >= self.cmap.N:
78 raise ValueError('Maximum label value = %s' % len(self.cmap - 1))
79 self._label = value
80 self._cursor.set_edgecolor(self.cmap(value))
81
82 @property
83 def radius(self):
84 return self._radius
85
86 @radius.setter
87 def radius(self, r):
88 self._radius = r
89 self._width = 2 * r + 1
90 self._cursor.set_width(self._width)
91 self._cursor.set_height(self._width)
92 self.window = CenteredWindow(r, self._shape)
93
94 @property
95 def overlay(self):
96 return self._overlay
97
98 @overlay.setter
99 def overlay(self, image):
100 self._overlay = image
101 if image is None:
102 self.ax.images.remove(self._overlay_plot)
103 self._overlay_plot = None
104 elif self._overlay_plot is None:
105 props = dict(cmap=self.cmap, alpha=self.alpha,
106 norm=mcolors.no_norm(), animated=True)
107 self._overlay_plot = self.ax.imshow(image, **props)
108 else:
109 self._overlay_plot.set_data(image)
110 self.redraw()
111
112 def _on_key_press(self, event):
113 if event.key == 'enter':
114 self.callback_on_enter(self.geometry)
115 self.redraw()
116
117 def on_mouse_press(self, event):
118 if event.button != 1 or not self.ax.in_axes(event):
119 return
120 self.update_cursor(event.xdata, event.ydata)
121 self.update_overlay(event.xdata, event.ydata)
122
123 def on_mouse_release(self, event):
124 if event.button != 1:
125 return
126 self.callback_on_release(self.geometry)
127
128 def on_move(self, event):
129 if not self.ax.in_axes(event):
130 self._cursor.set_visible(False)
131 self.redraw() # make sure cursor is not visible
132 return
133 self._cursor.set_visible(True)
134
135 self.update_cursor(event.xdata, event.ydata)
136 if event.button != 1:
137 self.redraw() # update cursor position
138 return
139 self.update_overlay(event.xdata, event.ydata)
140 self.callback_on_move(self.geometry)
141
142 def update_overlay(self, x, y):
143 overlay = self.overlay
144 overlay[self.window.at(y, x)] = self.label
145 # Note that overlay calls `redraw`
146 self.overlay = overlay
147
148 def update_cursor(self, x, y):
149 x = x - self.radius - 1
150 y = y - self.radius - 1
151 self._cursor.set_xy((x, y))
152
153 @property
154 def geometry(self):
155 return self.overlay
156
157
158 class CenteredWindow(object):
159 """Window that create slices numpy arrays over 2D windows.
160
161 Examples
162 --------
163 >>> a = np.arange(16).reshape(4, 4)
164 >>> w = CenteredWindow(1, a.shape)
165 >>> a[w.at(1, 1)]
166 array([[ 0, 1, 2],
167 [ 4, 5, 6],
168 [ 8, 9, 10]])
169 >>> a[w.at(0, 0)]
170 array([[0, 1],
171 [4, 5]])
172 >>> a[w.at(4, 3)]
173 array([[14, 15]])
174 """
175 def __init__(self, radius, array_shape):
176 self.radius = radius
177 self.array_shape = array_shape
178
179 def at(self, row, col):
180 h, w = self.array_shape
181 r = self.radius
182 xmin = max(0, col - r)
183 xmax = min(w, col + r + 1)
184 ymin = max(0, row - r)
185 ymax = min(h, row + r + 1)
186 return [slice(ymin, ymax), slice(xmin, xmax)]
187
188
189 if __name__ == '__main__':
190 np.testing.rundocs()
191 from skimage import data
192
193 image = data.camera()
194
195 f, ax = plt.subplots()
196 ax.imshow(image, interpolation='nearest')
197 paint_tool = PaintTool(ax, image.shape)
198 plt.show()
199
[end of skimage/viewer/canvastools/painttool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/skimage/viewer/canvastools/painttool.py b/skimage/viewer/canvastools/painttool.py
--- a/skimage/viewer/canvastools/painttool.py
+++ b/skimage/viewer/canvastools/painttool.py
@@ -50,8 +50,7 @@
self.alpha = alpha
self.cmap = LABELS_CMAP
self._overlay_plot = None
- self._shape = overlay_shape
- self.overlay = np.zeros(overlay_shape, dtype='uint8')
+ self.shape = overlay_shape
self._cursor = plt.Rectangle((0, 0), 0, 0, **props)
self._cursor.set_visible(False)
@@ -109,6 +108,19 @@
self._overlay_plot.set_data(image)
self.redraw()
+ @property
+ def shape(self):
+ return self._shape
+
+ @shape.setter
+ def shape(self, shape):
+ self._shape = shape
+ if not self._overlay_plot is None:
+ self._overlay_plot.set_extent((-0.5, shape[1] + 0.5,
+ shape[0] + 0.5, -0.5))
+ self.radius = self._radius
+ self.overlay = np.zeros(shape, dtype='uint8')
+
def _on_key_press(self, event):
if event.key == 'enter':
self.callback_on_enter(self.geometry)
diff --git a/skimage/viewer/plugins/labelplugin.py b/skimage/viewer/plugins/labelplugin.py
--- a/skimage/viewer/plugins/labelplugin.py
+++ b/skimage/viewer/plugins/labelplugin.py
@@ -43,6 +43,10 @@
self.paint_tool.label = self._label_widget.index = 1
self.artists.append(self.paint_tool)
+ def _on_new_image(self, image):
+ """Update plugin for new images."""
+ self.paint_tool.shape = image.shape
+
def on_enter(self, overlay):
pass
|
{"golden_diff": "diff --git a/skimage/viewer/canvastools/painttool.py b/skimage/viewer/canvastools/painttool.py\n--- a/skimage/viewer/canvastools/painttool.py\n+++ b/skimage/viewer/canvastools/painttool.py\n@@ -50,8 +50,7 @@\n self.alpha = alpha\n self.cmap = LABELS_CMAP\n self._overlay_plot = None\n- self._shape = overlay_shape\n- self.overlay = np.zeros(overlay_shape, dtype='uint8')\n+ self.shape = overlay_shape\n \n self._cursor = plt.Rectangle((0, 0), 0, 0, **props)\n self._cursor.set_visible(False)\n@@ -109,6 +108,19 @@\n self._overlay_plot.set_data(image)\n self.redraw()\n \n+ @property\n+ def shape(self):\n+ return self._shape\n+\n+ @shape.setter\n+ def shape(self, shape):\n+ self._shape = shape\n+ if not self._overlay_plot is None:\n+ self._overlay_plot.set_extent((-0.5, shape[1] + 0.5,\n+ shape[0] + 0.5, -0.5))\n+ self.radius = self._radius\n+ self.overlay = np.zeros(shape, dtype='uint8')\n+\n def _on_key_press(self, event):\n if event.key == 'enter':\n self.callback_on_enter(self.geometry)\ndiff --git a/skimage/viewer/plugins/labelplugin.py b/skimage/viewer/plugins/labelplugin.py\n--- a/skimage/viewer/plugins/labelplugin.py\n+++ b/skimage/viewer/plugins/labelplugin.py\n@@ -43,6 +43,10 @@\n self.paint_tool.label = self._label_widget.index = 1\n self.artists.append(self.paint_tool)\n \n+ def _on_new_image(self, image):\n+ \"\"\"Update plugin for new images.\"\"\"\n+ self.paint_tool.shape = image.shape\n+\n def on_enter(self, overlay):\n pass\n", "issue": "Viewer: LabelPainter overlay does not update with new loaded image\nReproduce: open the Watershed demo, then load another image of a different `shape`. The overlay won't cover the entire image, and when watershed is called there will be a shape mismatch.\n\n", "before_files": [{"content": "import numpy as np\n\nfrom .base import Plugin\nfrom ..widgets import ComboBox, Slider\nfrom ..canvastools import PaintTool\n\n\n__all__ = ['LabelPainter']\n\n\nrad2deg = 180 / np.pi\n\n\nclass LabelPainter(Plugin):\n name = 'LabelPainter'\n\n def __init__(self, max_radius=20, **kwargs):\n super(LabelPainter, self).__init__(**kwargs)\n\n # These widgets adjust plugin properties instead of an image filter.\n self._radius_widget = Slider('radius', low=1, high=max_radius,\n value=5, value_type='int', ptype='plugin')\n labels = [str(i) for i in range(6)]\n labels[0] = 'Erase'\n self._label_widget = ComboBox('label', labels, ptype='plugin')\n self.add_widget(self._radius_widget)\n self.add_widget(self._label_widget)\n\n print(self.help())\n\n def help(self):\n helpstr = (\"Label painter\",\n \"Hold left-mouse button and paint on canvas.\")\n return '\\n'.join(helpstr)\n\n def attach(self, image_viewer):\n super(LabelPainter, self).attach(image_viewer)\n\n image = image_viewer.original_image\n self.paint_tool = PaintTool(self.image_viewer.ax, image.shape,\n on_enter=self.on_enter)\n self.paint_tool.radius = self.radius\n self.paint_tool.label = self._label_widget.index = 1\n self.artists.append(self.paint_tool)\n\n def on_enter(self, overlay):\n pass\n\n @property\n def radius(self):\n return self._radius_widget.val\n\n @radius.setter\n def radius(self, val):\n self.paint_tool.radius = val\n\n @property\n def label(self):\n return self._label_widget.val\n\n @label.setter\n def label(self, val):\n self.paint_tool.label = val\n", "path": "skimage/viewer/plugins/labelplugin.py"}, {"content": "import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\nLABELS_CMAP = mcolors.ListedColormap(['white', 'red', 'dodgerblue', 
'gold',\n 'greenyellow', 'blueviolet'])\n\nfrom skimage.viewer.canvastools.base import CanvasToolBase\n\n\n__all__ = ['PaintTool']\n\n\nclass PaintTool(CanvasToolBase):\n \"\"\"Widget for painting on top of a plot.\n\n Parameters\n ----------\n ax : :class:`matplotlib.axes.Axes`\n Matplotlib axes where tool is displayed.\n overlay_shape : shape tuple\n 2D shape tuple used to initialize overlay image.\n alpha : float (between [0, 1])\n Opacity of overlay\n on_move : function\n Function called whenever a control handle is moved.\n This function must accept the end points of line as the only argument.\n on_release : function\n Function called whenever the control handle is released.\n on_enter : function\n Function called whenever the \"enter\" key is pressed.\n rect_props : dict\n Properties for :class:`matplotlib.patches.Rectangle`. This class\n redefines defaults in :class:`matplotlib.widgets.RectangleSelector`.\n\n Attributes\n ----------\n overlay : array\n Overlay of painted labels displayed on top of image.\n label : int\n Current paint color.\n \"\"\"\n def __init__(self, ax, overlay_shape, radius=5, alpha=0.3, on_move=None,\n on_release=None, on_enter=None, rect_props=None):\n super(PaintTool, self).__init__(ax, on_move=on_move, on_enter=on_enter,\n on_release=on_release)\n\n props = dict(edgecolor='r', facecolor='0.7', alpha=0.5, animated=True)\n props.update(rect_props if rect_props is not None else {})\n\n self.alpha = alpha\n self.cmap = LABELS_CMAP\n self._overlay_plot = None\n self._shape = overlay_shape\n self.overlay = np.zeros(overlay_shape, dtype='uint8')\n\n self._cursor = plt.Rectangle((0, 0), 0, 0, **props)\n self._cursor.set_visible(False)\n self.ax.add_patch(self._cursor)\n\n # `label` and `radius` can only be set after initializing `_cursor`\n self.label = 1\n self.radius = radius\n\n # Note that the order is important: Redraw cursor *after* overlay\n self._artists = [self._overlay_plot, self._cursor]\n\n self.connect_event('button_press_event', self.on_mouse_press)\n self.connect_event('button_release_event', self.on_mouse_release)\n self.connect_event('motion_notify_event', self.on_move)\n\n @property\n def label(self):\n return self._label\n\n @label.setter\n def label(self, value):\n if value >= self.cmap.N:\n raise ValueError('Maximum label value = %s' % len(self.cmap - 1))\n self._label = value\n self._cursor.set_edgecolor(self.cmap(value))\n\n @property\n def radius(self):\n return self._radius\n\n @radius.setter\n def radius(self, r):\n self._radius = r\n self._width = 2 * r + 1\n self._cursor.set_width(self._width)\n self._cursor.set_height(self._width)\n self.window = CenteredWindow(r, self._shape)\n\n @property\n def overlay(self):\n return self._overlay\n\n @overlay.setter\n def overlay(self, image):\n self._overlay = image\n if image is None:\n self.ax.images.remove(self._overlay_plot)\n self._overlay_plot = None\n elif self._overlay_plot is None:\n props = dict(cmap=self.cmap, alpha=self.alpha,\n norm=mcolors.no_norm(), animated=True)\n self._overlay_plot = self.ax.imshow(image, **props)\n else:\n self._overlay_plot.set_data(image)\n self.redraw()\n\n def _on_key_press(self, event):\n if event.key == 'enter':\n self.callback_on_enter(self.geometry)\n self.redraw()\n\n def on_mouse_press(self, event):\n if event.button != 1 or not self.ax.in_axes(event):\n return\n self.update_cursor(event.xdata, event.ydata)\n self.update_overlay(event.xdata, event.ydata)\n\n def on_mouse_release(self, event):\n if event.button != 1:\n return\n 
self.callback_on_release(self.geometry)\n\n def on_move(self, event):\n if not self.ax.in_axes(event):\n self._cursor.set_visible(False)\n self.redraw() # make sure cursor is not visible\n return\n self._cursor.set_visible(True)\n\n self.update_cursor(event.xdata, event.ydata)\n if event.button != 1:\n self.redraw() # update cursor position\n return\n self.update_overlay(event.xdata, event.ydata)\n self.callback_on_move(self.geometry)\n\n def update_overlay(self, x, y):\n overlay = self.overlay\n overlay[self.window.at(y, x)] = self.label\n # Note that overlay calls `redraw`\n self.overlay = overlay\n\n def update_cursor(self, x, y):\n x = x - self.radius - 1\n y = y - self.radius - 1\n self._cursor.set_xy((x, y))\n\n @property\n def geometry(self):\n return self.overlay\n\n\nclass CenteredWindow(object):\n \"\"\"Window that create slices numpy arrays over 2D windows.\n\n Examples\n --------\n >>> a = np.arange(16).reshape(4, 4)\n >>> w = CenteredWindow(1, a.shape)\n >>> a[w.at(1, 1)]\n array([[ 0, 1, 2],\n [ 4, 5, 6],\n [ 8, 9, 10]])\n >>> a[w.at(0, 0)]\n array([[0, 1],\n [4, 5]])\n >>> a[w.at(4, 3)]\n array([[14, 15]])\n \"\"\"\n def __init__(self, radius, array_shape):\n self.radius = radius\n self.array_shape = array_shape\n\n def at(self, row, col):\n h, w = self.array_shape\n r = self.radius\n xmin = max(0, col - r)\n xmax = min(w, col + r + 1)\n ymin = max(0, row - r)\n ymax = min(h, row + r + 1)\n return [slice(ymin, ymax), slice(xmin, xmax)]\n\n\nif __name__ == '__main__':\n np.testing.rundocs()\n from skimage import data\n\n image = data.camera()\n\n f, ax = plt.subplots()\n ax.imshow(image, interpolation='nearest')\n paint_tool = PaintTool(ax, image.shape)\n plt.show()\n", "path": "skimage/viewer/canvastools/painttool.py"}]}
| 3,188 | 471 |
gh_patches_debug_14158
|
rasdani/github-patches
|
git_diff
|
Zeroto521__my-data-toolkit-834
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: `geocentroid` coordinates should divide distance
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [x] closes #832
- [x] whatsnew entry
```latex
\left\{\begin{matrix}
d_i &=& D(P(\bar{x}_n, \bar{y}_n), P(x_i,y_i)) \\
\bar{x}_0 &=& \frac{\sum w_i x_i}{\sum w_i} \\
\bar{y}_0 &=& \frac{\sum w_i y_i}{\sum w_i} \\
\bar{x}_{n+1} &=& \frac{\sum w_i x_i / d_i}{\sum w_i / d_i} \\
\bar{y}_{n+1} &=& \frac{\sum w_i y_i / d_i}{\sum w_i / d_i} \\
\end{matrix}\right.
```
</issue>
<code>
[start of dtoolkit/geoaccessor/geoseries/geocentroid.py]
1 import geopandas as gpd
2 import numpy as np
3 import pandas as pd
4 from shapely import Point
5
6 from dtoolkit.geoaccessor.geoseries.geodistance import geodistance
7 from dtoolkit.geoaccessor.geoseries.xy import xy
8 from dtoolkit.geoaccessor.register import register_geoseries_method
9
10
11 @register_geoseries_method
12 def geocentroid(
13 s: gpd.GeoSeries,
14 /,
15 weights: pd.Series = None,
16 max_iter: int = 300,
17 tol: float = 1e-5,
18 ) -> Point:
19 r"""
20 Return the centroid of all points via the center of gravity method.
21
22 .. math::
23
24 \left\{\begin{matrix}
25 d_i &=& D(P(\bar{x}_n, \bar{y}_n), P(x_i, y_i)) \\
26 \bar{x}_0 &=& \frac{\sum w_i x_i}{\sum w_i} \\
27 \bar{y}_0 &=& \frac{\sum w_i y_i}{\sum w_i} \\
28 \bar{x}_{n+1} &=& \frac{\sum w_i x_i / d_i}{\sum w_i / d_i} \\
29 \bar{y}_{n+1} &=& \frac{\sum w_i y_i / d_i}{\sum w_i / d_i} \\
30 \end{matrix}\right.
31
32 Parameters
33 ----------
34 weights : Hashable or 1d array-like, optional
35 - None : All weights will be set to 1.
36 - Hashable : Only for DataFrame, the column name.
37 - 1d array-like : The weights of each point.
38
39 max_iter : int, default 300
40 Maximum number of iterations to perform.
41
42 tol : float, default 1e-5
43 Tolerance for convergence.
44
45 Returns
46 -------
47 Point
48
49 See Also
50 --------
51 geopandas.GeoSeries.centroid
52 dtoolkit.geoaccessor.geoseries.geocentroid
53 dtoolkit.geoaccessor.geodataframe.geocentroid
54
55 Examples
56 --------
57 >>> import dtoolkit.geoaccessor
58 >>> import geopandas as gpd
59 >>> from shapely import Point
60 >>> df = gpd.GeoDataFrame(
61 ... {
62 ... "weights": [1, 2, 3],
63 ... "geometry": [Point(100, 32), Point(120, 50), Point(122, 55)],
64 ... },
65 ... crs=4326,
66 ... )
67 >>> df
68 weights geometry
69 0 1 POINT (100.00000 32.00000)
70 1 2 POINT (120.00000 50.00000)
71 2 3 POINT (122.00000 55.00000)
72 >>> df.geocentroid()
73 <POINT (120 50)>
74
75 Set weights for each point.
76
77 >>> df.geocentroid("weights")
78 <POINT (121.999 54.998)>
79 >>> df.geocentroid([1, 2, 3])
80 <POINT (121.999 54.998)>
81 """
82
83 weights = np.asarray(weights) if weights is not None else 1
84 coord = xy(s)
85 X = coord.mean()
86 for _ in range(max_iter):
87 dis = geodistance(s, Point(*X.tolist())).rdiv(1).mul(weights, axis=0)
88 Xt = coord.mul(dis, axis=0).sum() / dis.sum()
89
90 if ((X - Xt).abs() <= tol).all():
91 X = Xt
92 break
93
94 X = Xt
95
96 return Point(*X.tolist())
97
[end of dtoolkit/geoaccessor/geoseries/geocentroid.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dtoolkit/geoaccessor/geoseries/geocentroid.py b/dtoolkit/geoaccessor/geoseries/geocentroid.py
--- a/dtoolkit/geoaccessor/geoseries/geocentroid.py
+++ b/dtoolkit/geoaccessor/geoseries/geocentroid.py
@@ -75,14 +75,14 @@
Set weights for each point.
>>> df.geocentroid("weights")
- <POINT (121.999 54.998)>
+ <POINT (121.999 54.999)>
>>> df.geocentroid([1, 2, 3])
- <POINT (121.999 54.998)>
+ <POINT (121.999 54.999)>
"""
weights = np.asarray(weights) if weights is not None else 1
coord = xy(s)
- X = coord.mean()
+ X = coord.mul(weights, axis=0).mean()
for _ in range(max_iter):
dis = geodistance(s, Point(*X.tolist())).rdiv(1).mul(weights, axis=0)
Xt = coord.mul(dis, axis=0).sum() / dis.sum()
|
{"golden_diff": "diff --git a/dtoolkit/geoaccessor/geoseries/geocentroid.py b/dtoolkit/geoaccessor/geoseries/geocentroid.py\n--- a/dtoolkit/geoaccessor/geoseries/geocentroid.py\n+++ b/dtoolkit/geoaccessor/geoseries/geocentroid.py\n@@ -75,14 +75,14 @@\n Set weights for each point.\n \n >>> df.geocentroid(\"weights\")\n- <POINT (121.999 54.998)>\n+ <POINT (121.999 54.999)>\n >>> df.geocentroid([1, 2, 3])\n- <POINT (121.999 54.998)>\n+ <POINT (121.999 54.999)>\n \"\"\"\n \n weights = np.asarray(weights) if weights is not None else 1\n coord = xy(s)\n- X = coord.mean()\n+ X = coord.mul(weights, axis=0).mean()\n for _ in range(max_iter):\n dis = geodistance(s, Point(*X.tolist())).rdiv(1).mul(weights, axis=0)\n Xt = coord.mul(dis, axis=0).sum() / dis.sum()\n", "issue": "BUG: `geocentroid` coordiantes should divide distance\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [x] closes #832\r\n- [x] whatsnew entry\r\n\r\n```latex\r\n \\left\\{\\begin{matrix}\r\n d_i &=& D(P(\\bar{x}_n, \\bar{y}_n), P(x_i,y_i)) \\\\\r\n \\bar{x}_0 &=& \\frac{\\sum w_i x_i}{\\sum w_i} \\\\\r\n \\bar{y}_0 &=& \\frac{\\sum w_i y_i}{\\sum w_i} \\\\\r\n \\bar{x}_{n+1} &=& \\frac{\\sum w_i x_i / d_i}{\\sum w_i / d_i} \\\\\r\n \\bar{y}_{n+1} &=& \\frac{\\sum w_i y_i / d_i}{\\sum w_i / d_i} \\\\\r\n \\end{matrix}\\right.\r\n```\n", "before_files": [{"content": "import geopandas as gpd\nimport numpy as np\nimport pandas as pd\nfrom shapely import Point\n\nfrom dtoolkit.geoaccessor.geoseries.geodistance import geodistance\nfrom dtoolkit.geoaccessor.geoseries.xy import xy\nfrom dtoolkit.geoaccessor.register import register_geoseries_method\n\n\n@register_geoseries_method\ndef geocentroid(\n s: gpd.GeoSeries,\n /,\n weights: pd.Series = None,\n max_iter: int = 300,\n tol: float = 1e-5,\n) -> Point:\n r\"\"\"\n Return the centroid of all points via the center of gravity method.\n\n .. 
math::\n\n \\left\\{\\begin{matrix}\n d_i &=& D(P(\\bar{x}_n, \\bar{y}_n), P(x_i, y_i)) \\\\\n \\bar{x}_0 &=& \\frac{\\sum w_i x_i}{\\sum w_i} \\\\\n \\bar{y}_0 &=& \\frac{\\sum w_i y_i}{\\sum w_i} \\\\\n \\bar{x}_{n+1} &=& \\frac{\\sum w_i x_i / d_i}{\\sum w_i / d_i} \\\\\n \\bar{y}_{n+1} &=& \\frac{\\sum w_i y_i / d_i}{\\sum w_i / d_i} \\\\\n \\end{matrix}\\right.\n\n Parameters\n ----------\n weights : Hashable or 1d array-like, optional\n - None : All weights will be set to 1.\n - Hashable : Only for DataFrame, the column name.\n - 1d array-like : The weights of each point.\n\n max_iter : int, default 300\n Maximum number of iterations to perform.\n\n tol : float, default 1e-5\n Tolerance for convergence.\n\n Returns\n -------\n Point\n\n See Also\n --------\n geopandas.GeoSeries.centroid\n dtoolkit.geoaccessor.geoseries.geocentroid\n dtoolkit.geoaccessor.geodataframe.geocentroid\n\n Examples\n --------\n >>> import dtoolkit.geoaccessor\n >>> import geopandas as gpd\n >>> from shapely import Point\n >>> df = gpd.GeoDataFrame(\n ... {\n ... \"weights\": [1, 2, 3],\n ... \"geometry\": [Point(100, 32), Point(120, 50), Point(122, 55)],\n ... },\n ... crs=4326,\n ... )\n >>> df\n weights geometry\n 0 1 POINT (100.00000 32.00000)\n 1 2 POINT (120.00000 50.00000)\n 2 3 POINT (122.00000 55.00000)\n >>> df.geocentroid()\n <POINT (120 50)>\n\n Set weights for each point.\n\n >>> df.geocentroid(\"weights\")\n <POINT (121.999 54.998)>\n >>> df.geocentroid([1, 2, 3])\n <POINT (121.999 54.998)>\n \"\"\"\n\n weights = np.asarray(weights) if weights is not None else 1\n coord = xy(s)\n X = coord.mean()\n for _ in range(max_iter):\n dis = geodistance(s, Point(*X.tolist())).rdiv(1).mul(weights, axis=0)\n Xt = coord.mul(dis, axis=0).sum() / dis.sum()\n\n if ((X - Xt).abs() <= tol).all():\n X = Xt\n break\n\n X = Xt\n\n return Point(*X.tolist())\n", "path": "dtoolkit/geoaccessor/geoseries/geocentroid.py"}]}
| 1,997 | 296 |
gh_patches_debug_37513
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-1529
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ModuleNotFoundError: No module named 'fcntl'
How to reproduce the behaviour
---------
After downloading doccano and trying to start it via `doccano init` I get the following message:
```
doccano init
Traceback (most recent call last):
File "C:\Users\\AppData\Local\Programs\Python\Python39\Scripts\doccano-script.py", line 33, in <module>
sys.exit(load_entry_point('doccano==1.4.1', 'console_scripts', 'doccano')())
File "C:\Users\\AppData\Local\Programs\Python\Python39\Scripts\doccano-script.py", line 25, in importlib_load_entry_point
return next(matches).load()
File "c:\users\\appdata\local\programs\python\python39\lib\importlib\metadata.py", line 77, in load
module = import_module(match.group('module'))
File "c:\users\\appdata\local\programs\python\python39\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "c:\users\\appdata\local\programs\python\python39\lib\site-packages\backend\cli.py", line 7, in <module>
import gunicorn.app.base
File "c:\users\\appdata\local\programs\python\python39\lib\site-packages\gunicorn\app\base.py", line 11, in <module>
from gunicorn import util
File "c:\users\\appdata\local\programs\python\python39\lib\site-packages\gunicorn\util.py", line 8, in <module>
import fcntl
ModuleNotFoundError: No module named 'fcntl'
```
Your Environment
---------
* Operating System: Windows 10 1909
* Python Version Used: 3.9.4
* When you install doccano: 17.06.2021
* How did you install doccano (Heroku button etc): `pip install doccano`
Own Research:
----------
Apparently Windows doesn''t support `fcntl`. Therefore nobody that uses Windows can install doccano via pip.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 import io
4 import os
5
6 from setuptools import find_packages, setup
7
8 NAME = 'doccano'
9 DESCRIPTION = 'doccano, text annotation tool for machine learning practitioners'
10 URL = 'https://github.com/doccano/doccano'
11 EMAIL = '[email protected]'
12 AUTHOR = 'Hironsan'
13 LICENSE = 'MIT'
14
15 here = os.path.abspath(os.path.dirname(__file__))
16 with io.open(os.path.join(here, 'README.md'), encoding='utf-8') as f:
17 long_description = '\n' + f.read()
18
19 required = [
20 'apache-libcloud>=3.2.0',
21 'colour>=0.1.5',
22 'conllu>=4.2.2',
23 'dj-database-url>=0.5.0',
24 'django-cors-headers>=3.5.0',
25 'django-filter>=2.4.0',
26 'django-rest-polymorphic>=0.1.9',
27 'djangorestframework-csv>=2.1.0',
28 'djangorestframework-xml>=2.0.0',
29 'drf-yasg>=1.20.0',
30 'environs>=9.2.0',
31 'furl>=2.1.0',
32 'pyexcel>=0.6.6',
33 'pyexcel-xlsx>=0.6.0',
34 'python-jose>=3.2.0',
35 'seqeval>=1.2.2',
36 'social-auth-app-django>=4.0.0',
37 'whitenoise>=5.2.0',
38 'auto-labeling-pipeline>=0.1.12',
39 'celery>=5.0.5',
40 'dj-rest-auth>=2.1.4',
41 'django-celery-results>=2.0.1',
42 'django-drf-filepond>=0.3.0',
43 'sqlalchemy>=1.4.7',
44 'gunicorn>=20.1.0',
45 ]
46
47 setup(
48 name=NAME,
49 use_scm_version=True,
50 setup_requires=['setuptools_scm'],
51 description=DESCRIPTION,
52 long_description=long_description,
53 long_description_content_type='text/markdown',
54 author=AUTHOR,
55 author_email=EMAIL,
56 url=URL,
57 packages=find_packages(exclude=('*.tests',)),
58 entry_points={
59 'console_scripts': [
60 'doccano = backend.cli:main'
61 ]
62 },
63 install_requires=required,
64 extras_require={
65 'postgresql': ['psycopg2-binary>=2.8.6'],
66 'mssql': ['django-mssql-backend>=2.8.1'],
67 },
68 include_package_data=True,
69 license=LICENSE,
70 classifiers=[
71 'License :: OSI Approved :: MIT License',
72 'Programming Language :: Python',
73 'Programming Language :: Python :: 3.6',
74 'Programming Language :: Python :: 3.7',
75 'Programming Language :: Python :: 3.8',
76 'Programming Language :: Python :: Implementation :: CPython',
77 'Programming Language :: Python :: Implementation :: PyPy'
78 ],
79 )
80
[end of setup.py]
[start of backend/cli.py]
1 import argparse
2 import multiprocessing
3 import os
4 import subprocess
5 import sys
6
7 import gunicorn.app.base
8 import gunicorn.util
9
10 from .app.celery import app
11
12 base = os.path.abspath(os.path.dirname(__file__))
13 manage_path = os.path.join(base, 'manage.py')
14 parser = argparse.ArgumentParser(description='doccano, text annotation for machine learning practitioners.')
15
16
17 def number_of_workers():
18 return (multiprocessing.cpu_count() * 2) + 1
19
20
21 class StandaloneApplication(gunicorn.app.base.BaseApplication):
22
23 def __init__(self, options=None):
24 self.options = options or {}
25 super().__init__()
26
27 def load_config(self):
28 config = {key: value for key, value in self.options.items()
29 if key in self.cfg.settings and value is not None}
30 for key, value in config.items():
31 self.cfg.set(key.lower(), value)
32
33 def load(self):
34 sys.path.append(base)
35 return gunicorn.util.import_app('app.wsgi')
36
37
38 def command_db_init(args):
39 print('Setup Database.')
40 subprocess.call([sys.executable, manage_path, 'wait_for_db'], shell=False)
41 subprocess.call([sys.executable, manage_path, 'migrate'], shell=False)
42 subprocess.call([sys.executable, manage_path, 'create_roles'], shell=False)
43
44
45 def command_user_create(args):
46 print('Create admin user.')
47 subprocess.call([sys.executable, manage_path, 'create_admin',
48 '--username', args.username,
49 '--password', args.password,
50 '--email', args.email,
51 '--noinput'], shell=False)
52
53
54 def command_run_webserver(args):
55 print(f'Starting server with port {args.port}.')
56 options = {
57 'bind': '%s:%s' % ('0.0.0.0', args.port),
58 'workers': number_of_workers(),
59 'chdir': base
60 }
61 StandaloneApplication(options).run()
62
63
64 def command_run_task_queue(args):
65 print('Starting task queue.')
66 app.worker_main(
67 argv=[
68 '--app=app',
69 '--workdir={}'.format(base),
70 'worker',
71 '--loglevel=info',
72 '--concurrency={}'.format(args.concurrency),
73 ]
74 )
75
76
77 def command_help(args):
78 print(parser.parse_args([args.command, '--help']))
79
80
81 def main():
82 # Create a command line parser.
83 subparsers = parser.add_subparsers()
84
85 # Create a parser for db initialization.
86 parser_init = subparsers.add_parser('init', help='see `init -h`')
87
88 parser_init.set_defaults(handler=command_db_init)
89
90 # Create a parser for user creation.
91 parser_create_user = subparsers.add_parser('createuser', help='see `createuser -h`')
92 parser_create_user.add_argument('--username', type=str, default='admin', help='admin username')
93 parser_create_user.add_argument('--password', type=str, default='password', help='admin password')
94 parser_create_user.add_argument('--email', type=str, default='[email protected]', help='admin email')
95 parser_create_user.set_defaults(handler=command_user_create)
96
97 # Create a parser for web server.
98 parser_server = subparsers.add_parser('webserver', help='see `webserver -h`')
99 parser_server.add_argument('--port', type=int, default=8000, help='port number')
100 parser_server.set_defaults(handler=command_run_webserver)
101
102 # Create a parser for task queue.
103 parser_queue = subparsers.add_parser('task', help='see `task -h`')
104 parser_queue.add_argument('--concurrency', type=int, default=2, help='concurrency')
105 parser_queue.set_defaults(handler=command_run_task_queue)
106
107 # Create a parser for help.
108 parser_help = subparsers.add_parser('help', help='see `help -h`')
109 parser_help.add_argument('command', help='command name which help is shown')
110 parser_help.set_defaults(handler=command_help)
111
112 # Dispatch handler.
113 args = parser.parse_args()
114 if hasattr(args, 'handler'):
115 args.handler(args)
116 else:
117 # If specified unknown command, show help.
118 parser.print_help()
119
120
121 if __name__ == '__main__':
122 main()
123
[end of backend/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/cli.py b/backend/cli.py
--- a/backend/cli.py
+++ b/backend/cli.py
@@ -1,15 +1,13 @@
import argparse
import multiprocessing
import os
+import platform
import subprocess
import sys
-import gunicorn.app.base
-import gunicorn.util
-
from .app.celery import app
-
base = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(base)
manage_path = os.path.join(base, 'manage.py')
parser = argparse.ArgumentParser(description='doccano, text annotation for machine learning practitioners.')
@@ -18,21 +16,37 @@
return (multiprocessing.cpu_count() * 2) + 1
-class StandaloneApplication(gunicorn.app.base.BaseApplication):
+def run_on_nix(args):
+ import gunicorn.app.base
+ import gunicorn.util
+
+ class StandaloneApplication(gunicorn.app.base.BaseApplication):
- def __init__(self, options=None):
- self.options = options or {}
- super().__init__()
+ def __init__(self, options=None):
+ self.options = options or {}
+ super().__init__()
- def load_config(self):
- config = {key: value for key, value in self.options.items()
- if key in self.cfg.settings and value is not None}
- for key, value in config.items():
- self.cfg.set(key.lower(), value)
+ def load_config(self):
+ config = {key: value for key, value in self.options.items()
+ if key in self.cfg.settings and value is not None}
+ for key, value in config.items():
+ self.cfg.set(key.lower(), value)
- def load(self):
- sys.path.append(base)
- return gunicorn.util.import_app('app.wsgi')
+ def load(self):
+ return gunicorn.util.import_app('app.wsgi')
+
+ options = {
+ 'bind': '%s:%s' % ('0.0.0.0', args.port),
+ 'workers': number_of_workers(),
+ 'chdir': base
+ }
+ StandaloneApplication(options).run()
+
+
+def run_on_windows(args):
+ from waitress import serve
+ from app.wsgi import application
+ serve(application, port=args.port)
def command_db_init(args):
@@ -53,12 +67,10 @@
def command_run_webserver(args):
print(f'Starting server with port {args.port}.')
- options = {
- 'bind': '%s:%s' % ('0.0.0.0', args.port),
- 'workers': number_of_workers(),
- 'chdir': base
- }
- StandaloneApplication(options).run()
+ if platform.system() == 'Windows':
+ run_on_windows(args)
+ else:
+ run_on_nix(args)
def command_run_task_queue(args):
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -42,6 +42,7 @@
'django-drf-filepond>=0.3.0',
'sqlalchemy>=1.4.7',
'gunicorn>=20.1.0',
+ 'waitress>=2.0.0',
]
setup(
|
{"golden_diff": "diff --git a/backend/cli.py b/backend/cli.py\n--- a/backend/cli.py\n+++ b/backend/cli.py\n@@ -1,15 +1,13 @@\n import argparse\n import multiprocessing\n import os\n+import platform\n import subprocess\n import sys\n \n-import gunicorn.app.base\n-import gunicorn.util\n-\n from .app.celery import app\n-\n base = os.path.abspath(os.path.dirname(__file__))\n+sys.path.append(base)\n manage_path = os.path.join(base, 'manage.py')\n parser = argparse.ArgumentParser(description='doccano, text annotation for machine learning practitioners.')\n \n@@ -18,21 +16,37 @@\n return (multiprocessing.cpu_count() * 2) + 1\n \n \n-class StandaloneApplication(gunicorn.app.base.BaseApplication):\n+def run_on_nix(args):\n+ import gunicorn.app.base\n+ import gunicorn.util\n+\n+ class StandaloneApplication(gunicorn.app.base.BaseApplication):\n \n- def __init__(self, options=None):\n- self.options = options or {}\n- super().__init__()\n+ def __init__(self, options=None):\n+ self.options = options or {}\n+ super().__init__()\n \n- def load_config(self):\n- config = {key: value for key, value in self.options.items()\n- if key in self.cfg.settings and value is not None}\n- for key, value in config.items():\n- self.cfg.set(key.lower(), value)\n+ def load_config(self):\n+ config = {key: value for key, value in self.options.items()\n+ if key in self.cfg.settings and value is not None}\n+ for key, value in config.items():\n+ self.cfg.set(key.lower(), value)\n \n- def load(self):\n- sys.path.append(base)\n- return gunicorn.util.import_app('app.wsgi')\n+ def load(self):\n+ return gunicorn.util.import_app('app.wsgi')\n+\n+ options = {\n+ 'bind': '%s:%s' % ('0.0.0.0', args.port),\n+ 'workers': number_of_workers(),\n+ 'chdir': base\n+ }\n+ StandaloneApplication(options).run()\n+\n+\n+def run_on_windows(args):\n+ from waitress import serve\n+ from app.wsgi import application\n+ serve(application, port=args.port)\n \n \n def command_db_init(args):\n@@ -53,12 +67,10 @@\n \n def command_run_webserver(args):\n print(f'Starting server with port {args.port}.')\n- options = {\n- 'bind': '%s:%s' % ('0.0.0.0', args.port),\n- 'workers': number_of_workers(),\n- 'chdir': base\n- }\n- StandaloneApplication(options).run()\n+ if platform.system() == 'Windows':\n+ run_on_windows(args)\n+ else:\n+ run_on_nix(args)\n \n \n def command_run_task_queue(args):\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,6 +42,7 @@\n 'django-drf-filepond>=0.3.0',\n 'sqlalchemy>=1.4.7',\n 'gunicorn>=20.1.0',\n+ 'waitress>=2.0.0',\n ]\n \n setup(\n", "issue": "ModuleNotFoundError: No module named 'fcntl'\nHow to reproduce the behaviour\r\n---------\r\nAfter downloading doccano and trying to start it via `doccano init` I get the following message:\r\n\r\n```\r\ndoccano init\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\\\AppData\\Local\\Programs\\Python\\Python39\\Scripts\\doccano-script.py\", line 33, in <module>\r\n sys.exit(load_entry_point('doccano==1.4.1', 'console_scripts', 'doccano')())\r\n File \"C:\\Users\\\\AppData\\Local\\Programs\\Python\\Python39\\Scripts\\doccano-script.py\", line 25, in importlib_load_entry_point\r\n return next(matches).load()\r\n File \"c:\\users\\\\appdata\\local\\programs\\python\\python39\\lib\\importlib\\metadata.py\", line 77, in load\r\n module = import_module(match.group('module'))\r\n File \"c:\\users\\\\appdata\\local\\programs\\python\\python39\\lib\\importlib\\__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n 
File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 790, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\n File \"c:\\users\\\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\backend\\cli.py\", line 7, in <module>\r\n import gunicorn.app.base\r\n File \"c:\\users\\\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\gunicorn\\app\\base.py\", line 11, in <module>\r\n from gunicorn import util\r\n File \"c:\\users\\\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\gunicorn\\util.py\", line 8, in <module>\r\n import fcntl\r\nModuleNotFoundError: No module named 'fcntl' \r\n```\r\n\r\nYour Environment\r\n---------\r\n* Operating System: Windows 10 1909\r\n* Python Version Used: 3.9.4\r\n* When you install doccano: 17.06.2021\r\n* How did you install doccano (Heroku button etc): `pip install doccano`\r\n\r\nOwn Research:\r\n----------\r\nApparently Windows doesn''t support `fcntl`. Therefore nobody that uses Windows can install doccano via pip.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport io\nimport os\n\nfrom setuptools import find_packages, setup\n\nNAME = 'doccano'\nDESCRIPTION = 'doccano, text annotation tool for machine learning practitioners'\nURL = 'https://github.com/doccano/doccano'\nEMAIL = '[email protected]'\nAUTHOR = 'Hironsan'\nLICENSE = 'MIT'\n\nhere = os.path.abspath(os.path.dirname(__file__))\nwith io.open(os.path.join(here, 'README.md'), encoding='utf-8') as f:\n long_description = '\\n' + f.read()\n\nrequired = [\n 'apache-libcloud>=3.2.0',\n 'colour>=0.1.5',\n 'conllu>=4.2.2',\n 'dj-database-url>=0.5.0',\n 'django-cors-headers>=3.5.0',\n 'django-filter>=2.4.0',\n 'django-rest-polymorphic>=0.1.9',\n 'djangorestframework-csv>=2.1.0',\n 'djangorestframework-xml>=2.0.0',\n 'drf-yasg>=1.20.0',\n 'environs>=9.2.0',\n 'furl>=2.1.0',\n 'pyexcel>=0.6.6',\n 'pyexcel-xlsx>=0.6.0',\n 'python-jose>=3.2.0',\n 'seqeval>=1.2.2',\n 'social-auth-app-django>=4.0.0',\n 'whitenoise>=5.2.0',\n 'auto-labeling-pipeline>=0.1.12',\n 'celery>=5.0.5',\n 'dj-rest-auth>=2.1.4',\n 'django-celery-results>=2.0.1',\n 'django-drf-filepond>=0.3.0',\n 'sqlalchemy>=1.4.7',\n 'gunicorn>=20.1.0',\n]\n\nsetup(\n name=NAME,\n use_scm_version=True,\n setup_requires=['setuptools_scm'],\n description=DESCRIPTION,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=AUTHOR,\n author_email=EMAIL,\n url=URL,\n packages=find_packages(exclude=('*.tests',)),\n entry_points={\n 'console_scripts': [\n 'doccano = backend.cli:main'\n ]\n },\n install_requires=required,\n extras_require={\n 'postgresql': ['psycopg2-binary>=2.8.6'],\n 'mssql': ['django-mssql-backend>=2.8.1'],\n },\n include_package_data=True,\n license=LICENSE,\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy'\n ],\n)\n", "path": "setup.py"}, {"content": "import argparse\nimport 
multiprocessing\nimport os\nimport subprocess\nimport sys\n\nimport gunicorn.app.base\nimport gunicorn.util\n\nfrom .app.celery import app\n\nbase = os.path.abspath(os.path.dirname(__file__))\nmanage_path = os.path.join(base, 'manage.py')\nparser = argparse.ArgumentParser(description='doccano, text annotation for machine learning practitioners.')\n\n\ndef number_of_workers():\n return (multiprocessing.cpu_count() * 2) + 1\n\n\nclass StandaloneApplication(gunicorn.app.base.BaseApplication):\n\n def __init__(self, options=None):\n self.options = options or {}\n super().__init__()\n\n def load_config(self):\n config = {key: value for key, value in self.options.items()\n if key in self.cfg.settings and value is not None}\n for key, value in config.items():\n self.cfg.set(key.lower(), value)\n\n def load(self):\n sys.path.append(base)\n return gunicorn.util.import_app('app.wsgi')\n\n\ndef command_db_init(args):\n print('Setup Database.')\n subprocess.call([sys.executable, manage_path, 'wait_for_db'], shell=False)\n subprocess.call([sys.executable, manage_path, 'migrate'], shell=False)\n subprocess.call([sys.executable, manage_path, 'create_roles'], shell=False)\n\n\ndef command_user_create(args):\n print('Create admin user.')\n subprocess.call([sys.executable, manage_path, 'create_admin',\n '--username', args.username,\n '--password', args.password,\n '--email', args.email,\n '--noinput'], shell=False)\n\n\ndef command_run_webserver(args):\n print(f'Starting server with port {args.port}.')\n options = {\n 'bind': '%s:%s' % ('0.0.0.0', args.port),\n 'workers': number_of_workers(),\n 'chdir': base\n }\n StandaloneApplication(options).run()\n\n\ndef command_run_task_queue(args):\n print('Starting task queue.')\n app.worker_main(\n argv=[\n '--app=app',\n '--workdir={}'.format(base),\n 'worker',\n '--loglevel=info',\n '--concurrency={}'.format(args.concurrency),\n ]\n )\n\n\ndef command_help(args):\n print(parser.parse_args([args.command, '--help']))\n\n\ndef main():\n # Create a command line parser.\n subparsers = parser.add_subparsers()\n\n # Create a parser for db initialization.\n parser_init = subparsers.add_parser('init', help='see `init -h`')\n\n parser_init.set_defaults(handler=command_db_init)\n\n # Create a parser for user creation.\n parser_create_user = subparsers.add_parser('createuser', help='see `createuser -h`')\n parser_create_user.add_argument('--username', type=str, default='admin', help='admin username')\n parser_create_user.add_argument('--password', type=str, default='password', help='admin password')\n parser_create_user.add_argument('--email', type=str, default='[email protected]', help='admin email')\n parser_create_user.set_defaults(handler=command_user_create)\n\n # Create a parser for web server.\n parser_server = subparsers.add_parser('webserver', help='see `webserver -h`')\n parser_server.add_argument('--port', type=int, default=8000, help='port number')\n parser_server.set_defaults(handler=command_run_webserver)\n\n # Create a parser for task queue.\n parser_queue = subparsers.add_parser('task', help='see `task -h`')\n parser_queue.add_argument('--concurrency', type=int, default=2, help='concurrency')\n parser_queue.set_defaults(handler=command_run_task_queue)\n\n # Create a parser for help.\n parser_help = subparsers.add_parser('help', help='see `help -h`')\n parser_help.add_argument('command', help='command name which help is shown')\n parser_help.set_defaults(handler=command_help)\n\n # Dispatch handler.\n args = parser.parse_args()\n if hasattr(args, 'handler'):\n 
args.handler(args)\n else:\n # If specified unknown command, show help.\n parser.print_help()\n\n\nif __name__ == '__main__':\n main()\n", "path": "backend/cli.py"}]}
| 3,210 | 731 |
gh_patches_debug_41792
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-643
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
create command for getting parts of the zen of python
it would be nice to be able to get specific parts of the zen of python from a command. for example,
`!zen 13` would retrieve the fourteenth line (because we index from zero, obviously).
`!zen namespaces` would search for the string "namespaces", and produce the line which matches best.
`!zen` without any arguments would still produce the entire thing in the same way that the `zen` tag does.
i think this is reasonably simple, and could come in handy from time to time. :D
edit: also, i'd quite like to implement this. seems simple enough while i get used to all of the new changes since last year :D
</issue>
<code>
[start of bot/cogs/utils.py]
1 import logging
2 import re
3 import unicodedata
4 from asyncio import TimeoutError, sleep
5 from email.parser import HeaderParser
6 from io import StringIO
7 from typing import Tuple
8
9 from dateutil import relativedelta
10 from discord import Colour, Embed, Message, Role
11 from discord.ext.commands import Cog, Context, command
12
13 from bot.bot import Bot
14 from bot.constants import Channels, MODERATION_ROLES, Mention, STAFF_ROLES
15 from bot.decorators import in_channel, with_role
16 from bot.utils.time import humanize_delta
17
18 log = logging.getLogger(__name__)
19
20
21 class Utils(Cog):
22 """A selection of utilities which don't have a clear category."""
23
24 def __init__(self, bot: Bot):
25 self.bot = bot
26
27 self.base_pep_url = "http://www.python.org/dev/peps/pep-"
28 self.base_github_pep_url = "https://raw.githubusercontent.com/python/peps/master/pep-"
29
30 @command(name='pep', aliases=('get_pep', 'p'))
31 async def pep_command(self, ctx: Context, pep_number: str) -> None:
32 """Fetches information about a PEP and sends it to the channel."""
33 if pep_number.isdigit():
34 pep_number = int(pep_number)
35 else:
36 await ctx.invoke(self.bot.get_command("help"), "pep")
37 return
38
39 possible_extensions = ['.txt', '.rst']
40 found_pep = False
41 for extension in possible_extensions:
42 # Attempt to fetch the PEP
43 pep_url = f"{self.base_github_pep_url}{pep_number:04}{extension}"
44 log.trace(f"Requesting PEP {pep_number} with {pep_url}")
45 response = await self.bot.http_session.get(pep_url)
46
47 if response.status == 200:
48 log.trace("PEP found")
49 found_pep = True
50
51 pep_content = await response.text()
52
53 # Taken from https://github.com/python/peps/blob/master/pep0/pep.py#L179
54 pep_header = HeaderParser().parse(StringIO(pep_content))
55
56 # Assemble the embed
57 pep_embed = Embed(
58 title=f"**PEP {pep_number} - {pep_header['Title']}**",
59 description=f"[Link]({self.base_pep_url}{pep_number:04})",
60 )
61
62 pep_embed.set_thumbnail(url="https://www.python.org/static/opengraph-icon-200x200.png")
63
64 # Add the interesting information
65 fields_to_check = ("Status", "Python-Version", "Created", "Type")
66 for field in fields_to_check:
67 # Check for a PEP metadata field that is present but has an empty value
68 # embed field values can't contain an empty string
69 if pep_header.get(field, ""):
70 pep_embed.add_field(name=field, value=pep_header[field])
71
72 elif response.status != 404:
73 # any response except 200 and 404 is expected
74 found_pep = True # actually not, but it's easier to display this way
75 log.trace(f"The user requested PEP {pep_number}, but the response had an unexpected status code: "
76 f"{response.status}.\n{response.text}")
77
78 error_message = "Unexpected HTTP error during PEP search. Please let us know."
79 pep_embed = Embed(title="Unexpected error", description=error_message)
80 pep_embed.colour = Colour.red()
81 break
82
83 if not found_pep:
84 log.trace("PEP was not found")
85 not_found = f"PEP {pep_number} does not exist."
86 pep_embed = Embed(title="PEP not found", description=not_found)
87 pep_embed.colour = Colour.red()
88
89 await ctx.message.channel.send(embed=pep_embed)
90
91 @command()
92 @in_channel(Channels.bot_commands, bypass_roles=STAFF_ROLES)
93 async def charinfo(self, ctx: Context, *, characters: str) -> None:
94 """Shows you information on up to 25 unicode characters."""
95 match = re.match(r"<(a?):(\w+):(\d+)>", characters)
96 if match:
97 embed = Embed(
98 title="Non-Character Detected",
99 description=(
100 "Only unicode characters can be processed, but a custom Discord emoji "
101 "was found. Please remove it and try again."
102 )
103 )
104 embed.colour = Colour.red()
105 await ctx.send(embed=embed)
106 return
107
108 if len(characters) > 25:
109 embed = Embed(title=f"Too many characters ({len(characters)}/25)")
110 embed.colour = Colour.red()
111 await ctx.send(embed=embed)
112 return
113
114 def get_info(char: str) -> Tuple[str, str]:
115 digit = f"{ord(char):x}"
116 if len(digit) <= 4:
117 u_code = f"\\u{digit:>04}"
118 else:
119 u_code = f"\\U{digit:>08}"
120 url = f"https://www.compart.com/en/unicode/U+{digit:>04}"
121 name = f"[{unicodedata.name(char, '')}]({url})"
122 info = f"`{u_code.ljust(10)}`: {name} - {char}"
123 return info, u_code
124
125 charlist, rawlist = zip(*(get_info(c) for c in characters))
126
127 embed = Embed(description="\n".join(charlist))
128 embed.set_author(name="Character Info")
129
130 if len(characters) > 1:
131 embed.add_field(name='Raw', value=f"`{''.join(rawlist)}`", inline=False)
132
133 await ctx.send(embed=embed)
134
135 @command()
136 @with_role(*MODERATION_ROLES)
137 async def mention(self, ctx: Context, *, role: Role) -> None:
138 """Set a role to be mentionable for a limited time."""
139 if role.mentionable:
140 await ctx.send(f"{role} is already mentionable!")
141 return
142
143 await role.edit(reason=f"Role unlocked by {ctx.author}", mentionable=True)
144
145 human_time = humanize_delta(relativedelta.relativedelta(seconds=Mention.message_timeout))
146 await ctx.send(
147 f"{role} has been made mentionable. I will reset it in {human_time}, or when someone mentions this role."
148 )
149
150 def check(m: Message) -> bool:
151 """Checks that the message contains the role mention."""
152 return role in m.role_mentions
153
154 try:
155 msg = await self.bot.wait_for("message", check=check, timeout=Mention.message_timeout)
156 except TimeoutError:
157 await role.edit(mentionable=False, reason="Automatic role lock - timeout.")
158 await ctx.send(f"{ctx.author.mention}, you took too long. I have reset {role} to be unmentionable.")
159 return
160
161 if any(r.id in MODERATION_ROLES for r in msg.author.roles):
162 await sleep(Mention.reset_delay)
163 await role.edit(mentionable=False, reason=f"Automatic role lock by {msg.author}")
164 await ctx.send(
165 f"{ctx.author.mention}, I have reset {role} to be unmentionable as "
166 f"{msg.author if msg.author != ctx.author else 'you'} sent a message mentioning it."
167 )
168 return
169
170 await role.edit(mentionable=False, reason=f"Automatic role lock - unauthorised use by {msg.author}")
171 await ctx.send(
172 f"{ctx.author.mention}, I have reset {role} to be unmentionable "
173 f"as I detected unauthorised use by {msg.author} (ID: {msg.author.id})."
174 )
175
176
177 def setup(bot: Bot) -> None:
178 """Load the Utils cog."""
179 bot.add_cog(Utils(bot))
180
[end of bot/cogs/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bot/cogs/utils.py b/bot/cogs/utils.py
--- a/bot/cogs/utils.py
+++ b/bot/cogs/utils.py
@@ -1,14 +1,15 @@
+import difflib
import logging
import re
import unicodedata
from asyncio import TimeoutError, sleep
from email.parser import HeaderParser
from io import StringIO
-from typing import Tuple
+from typing import Tuple, Union
from dateutil import relativedelta
from discord import Colour, Embed, Message, Role
-from discord.ext.commands import Cog, Context, command
+from discord.ext.commands import BadArgument, Cog, Context, command
from bot.bot import Bot
from bot.constants import Channels, MODERATION_ROLES, Mention, STAFF_ROLES
@@ -17,6 +18,28 @@
log = logging.getLogger(__name__)
+ZEN_OF_PYTHON = """\
+Beautiful is better than ugly.
+Explicit is better than implicit.
+Simple is better than complex.
+Complex is better than complicated.
+Flat is better than nested.
+Sparse is better than dense.
+Readability counts.
+Special cases aren't special enough to break the rules.
+Although practicality beats purity.
+Errors should never pass silently.
+Unless explicitly silenced.
+In the face of ambiguity, refuse the temptation to guess.
+There should be one-- and preferably only one --obvious way to do it.
+Although that way may not be obvious at first unless you're Dutch.
+Now is better than never.
+Although never is often better than *right* now.
+If the implementation is hard to explain, it's a bad idea.
+If the implementation is easy to explain, it may be a good idea.
+Namespaces are one honking great idea -- let's do more of those!
+"""
+
class Utils(Cog):
"""A selection of utilities which don't have a clear category."""
@@ -173,6 +196,67 @@
f"as I detected unauthorised use by {msg.author} (ID: {msg.author.id})."
)
+ @command()
+ async def zen(self, ctx: Context, *, search_value: Union[int, str, None] = None) -> None:
+ """
+ Show the Zen of Python.
+
+ Without any arguments, the full Zen will be produced.
+ If an integer is provided, the line with that index will be produced.
+ If a string is provided, the line which matches best will be produced.
+ """
+ embed = Embed(
+ colour=Colour.blurple(),
+ title="The Zen of Python",
+ description=ZEN_OF_PYTHON
+ )
+
+ if search_value is None:
+ embed.title += ", by Tim Peters"
+ await ctx.send(embed=embed)
+ return
+
+ zen_lines = ZEN_OF_PYTHON.splitlines()
+
+ # handle if it's an index int
+ if isinstance(search_value, int):
+ upper_bound = len(zen_lines) - 1
+ lower_bound = -1 * upper_bound
+ if not (lower_bound <= search_value <= upper_bound):
+ raise BadArgument(f"Please provide an index between {lower_bound} and {upper_bound}.")
+
+ embed.title += f" (line {search_value % len(zen_lines)}):"
+ embed.description = zen_lines[search_value]
+ await ctx.send(embed=embed)
+ return
+
+ # handle if it's a search string
+ matcher = difflib.SequenceMatcher(None, search_value.lower())
+
+ best_match = ""
+ match_index = 0
+ best_ratio = 0
+
+ for index, line in enumerate(zen_lines):
+ matcher.set_seq2(line.lower())
+
+ # the match ratio needs to be adjusted because, naturally,
+ # longer lines will have worse ratios than shorter lines when
+ # fuzzy searching for keywords. this seems to work okay.
+ adjusted_ratio = (len(line) - 5) ** 0.5 * matcher.ratio()
+
+ if adjusted_ratio > best_ratio:
+ best_ratio = adjusted_ratio
+ best_match = line
+ match_index = index
+
+ if not best_match:
+ raise BadArgument("I didn't get a match! Please try again with a different search term.")
+
+ embed.title += f" (line {match_index}):"
+ embed.description = best_match
+ await ctx.send(embed=embed)
+
def setup(bot: Bot) -> None:
"""Load the Utils cog."""
|
{"golden_diff": "diff --git a/bot/cogs/utils.py b/bot/cogs/utils.py\n--- a/bot/cogs/utils.py\n+++ b/bot/cogs/utils.py\n@@ -1,14 +1,15 @@\n+import difflib\n import logging\n import re\n import unicodedata\n from asyncio import TimeoutError, sleep\n from email.parser import HeaderParser\n from io import StringIO\n-from typing import Tuple\n+from typing import Tuple, Union\n \n from dateutil import relativedelta\n from discord import Colour, Embed, Message, Role\n-from discord.ext.commands import Cog, Context, command\n+from discord.ext.commands import BadArgument, Cog, Context, command\n \n from bot.bot import Bot\n from bot.constants import Channels, MODERATION_ROLES, Mention, STAFF_ROLES\n@@ -17,6 +18,28 @@\n \n log = logging.getLogger(__name__)\n \n+ZEN_OF_PYTHON = \"\"\"\\\n+Beautiful is better than ugly.\n+Explicit is better than implicit.\n+Simple is better than complex.\n+Complex is better than complicated.\n+Flat is better than nested.\n+Sparse is better than dense.\n+Readability counts.\n+Special cases aren't special enough to break the rules.\n+Although practicality beats purity.\n+Errors should never pass silently.\n+Unless explicitly silenced.\n+In the face of ambiguity, refuse the temptation to guess.\n+There should be one-- and preferably only one --obvious way to do it.\n+Although that way may not be obvious at first unless you're Dutch.\n+Now is better than never.\n+Although never is often better than *right* now.\n+If the implementation is hard to explain, it's a bad idea.\n+If the implementation is easy to explain, it may be a good idea.\n+Namespaces are one honking great idea -- let's do more of those!\n+\"\"\"\n+\n \n class Utils(Cog):\n \"\"\"A selection of utilities which don't have a clear category.\"\"\"\n@@ -173,6 +196,67 @@\n f\"as I detected unauthorised use by {msg.author} (ID: {msg.author.id}).\"\n )\n \n+ @command()\n+ async def zen(self, ctx: Context, *, search_value: Union[int, str, None] = None) -> None:\n+ \"\"\"\n+ Show the Zen of Python.\n+\n+ Without any arguments, the full Zen will be produced.\n+ If an integer is provided, the line with that index will be produced.\n+ If a string is provided, the line which matches best will be produced.\n+ \"\"\"\n+ embed = Embed(\n+ colour=Colour.blurple(),\n+ title=\"The Zen of Python\",\n+ description=ZEN_OF_PYTHON\n+ )\n+\n+ if search_value is None:\n+ embed.title += \", by Tim Peters\"\n+ await ctx.send(embed=embed)\n+ return\n+\n+ zen_lines = ZEN_OF_PYTHON.splitlines()\n+\n+ # handle if it's an index int\n+ if isinstance(search_value, int):\n+ upper_bound = len(zen_lines) - 1\n+ lower_bound = -1 * upper_bound\n+ if not (lower_bound <= search_value <= upper_bound):\n+ raise BadArgument(f\"Please provide an index between {lower_bound} and {upper_bound}.\")\n+\n+ embed.title += f\" (line {search_value % len(zen_lines)}):\"\n+ embed.description = zen_lines[search_value]\n+ await ctx.send(embed=embed)\n+ return\n+\n+ # handle if it's a search string\n+ matcher = difflib.SequenceMatcher(None, search_value.lower())\n+\n+ best_match = \"\"\n+ match_index = 0\n+ best_ratio = 0\n+\n+ for index, line in enumerate(zen_lines):\n+ matcher.set_seq2(line.lower())\n+\n+ # the match ratio needs to be adjusted because, naturally,\n+ # longer lines will have worse ratios than shorter lines when\n+ # fuzzy searching for keywords. 
this seems to work okay.\n+ adjusted_ratio = (len(line) - 5) ** 0.5 * matcher.ratio()\n+\n+ if adjusted_ratio > best_ratio:\n+ best_ratio = adjusted_ratio\n+ best_match = line\n+ match_index = index\n+\n+ if not best_match:\n+ raise BadArgument(\"I didn't get a match! Please try again with a different search term.\")\n+\n+ embed.title += f\" (line {match_index}):\"\n+ embed.description = best_match\n+ await ctx.send(embed=embed)\n+\n \n def setup(bot: Bot) -> None:\n \"\"\"Load the Utils cog.\"\"\"\n", "issue": "create command for getting parts of the zen of python\nit would be nice to be able to get specific parts of the zen of python from a command. for example,\r\n\r\n`!zen 13` would retrieve the fourteenth line (because we index from zero, obviously).\r\n\r\n`!zen namespaces` would search for the string \"namespaces\", and produce the line which matches best. \r\n\r\n`!zen` without any arguments would still produce the entire thing in the same way that the `zen` tag does.\r\n\r\ni think this is reasonably simple, and could come in handy from time to time. :D\r\n\r\nedit: also, i'd quite like to implement this. seems simple enough while i get used to all of the new changes since last year :D\n", "before_files": [{"content": "import logging\nimport re\nimport unicodedata\nfrom asyncio import TimeoutError, sleep\nfrom email.parser import HeaderParser\nfrom io import StringIO\nfrom typing import Tuple\n\nfrom dateutil import relativedelta\nfrom discord import Colour, Embed, Message, Role\nfrom discord.ext.commands import Cog, Context, command\n\nfrom bot.bot import Bot\nfrom bot.constants import Channels, MODERATION_ROLES, Mention, STAFF_ROLES\nfrom bot.decorators import in_channel, with_role\nfrom bot.utils.time import humanize_delta\n\nlog = logging.getLogger(__name__)\n\n\nclass Utils(Cog):\n \"\"\"A selection of utilities which don't have a clear category.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n self.base_pep_url = \"http://www.python.org/dev/peps/pep-\"\n self.base_github_pep_url = \"https://raw.githubusercontent.com/python/peps/master/pep-\"\n\n @command(name='pep', aliases=('get_pep', 'p'))\n async def pep_command(self, ctx: Context, pep_number: str) -> None:\n \"\"\"Fetches information about a PEP and sends it to the channel.\"\"\"\n if pep_number.isdigit():\n pep_number = int(pep_number)\n else:\n await ctx.invoke(self.bot.get_command(\"help\"), \"pep\")\n return\n\n possible_extensions = ['.txt', '.rst']\n found_pep = False\n for extension in possible_extensions:\n # Attempt to fetch the PEP\n pep_url = f\"{self.base_github_pep_url}{pep_number:04}{extension}\"\n log.trace(f\"Requesting PEP {pep_number} with {pep_url}\")\n response = await self.bot.http_session.get(pep_url)\n\n if response.status == 200:\n log.trace(\"PEP found\")\n found_pep = True\n\n pep_content = await response.text()\n\n # Taken from https://github.com/python/peps/blob/master/pep0/pep.py#L179\n pep_header = HeaderParser().parse(StringIO(pep_content))\n\n # Assemble the embed\n pep_embed = Embed(\n title=f\"**PEP {pep_number} - {pep_header['Title']}**\",\n description=f\"[Link]({self.base_pep_url}{pep_number:04})\",\n )\n\n pep_embed.set_thumbnail(url=\"https://www.python.org/static/opengraph-icon-200x200.png\")\n\n # Add the interesting information\n fields_to_check = (\"Status\", \"Python-Version\", \"Created\", \"Type\")\n for field in fields_to_check:\n # Check for a PEP metadata field that is present but has an empty value\n # embed field values can't contain an empty string\n if 
pep_header.get(field, \"\"):\n pep_embed.add_field(name=field, value=pep_header[field])\n\n elif response.status != 404:\n # any response except 200 and 404 is expected\n found_pep = True # actually not, but it's easier to display this way\n log.trace(f\"The user requested PEP {pep_number}, but the response had an unexpected status code: \"\n f\"{response.status}.\\n{response.text}\")\n\n error_message = \"Unexpected HTTP error during PEP search. Please let us know.\"\n pep_embed = Embed(title=\"Unexpected error\", description=error_message)\n pep_embed.colour = Colour.red()\n break\n\n if not found_pep:\n log.trace(\"PEP was not found\")\n not_found = f\"PEP {pep_number} does not exist.\"\n pep_embed = Embed(title=\"PEP not found\", description=not_found)\n pep_embed.colour = Colour.red()\n\n await ctx.message.channel.send(embed=pep_embed)\n\n @command()\n @in_channel(Channels.bot_commands, bypass_roles=STAFF_ROLES)\n async def charinfo(self, ctx: Context, *, characters: str) -> None:\n \"\"\"Shows you information on up to 25 unicode characters.\"\"\"\n match = re.match(r\"<(a?):(\\w+):(\\d+)>\", characters)\n if match:\n embed = Embed(\n title=\"Non-Character Detected\",\n description=(\n \"Only unicode characters can be processed, but a custom Discord emoji \"\n \"was found. Please remove it and try again.\"\n )\n )\n embed.colour = Colour.red()\n await ctx.send(embed=embed)\n return\n\n if len(characters) > 25:\n embed = Embed(title=f\"Too many characters ({len(characters)}/25)\")\n embed.colour = Colour.red()\n await ctx.send(embed=embed)\n return\n\n def get_info(char: str) -> Tuple[str, str]:\n digit = f\"{ord(char):x}\"\n if len(digit) <= 4:\n u_code = f\"\\\\u{digit:>04}\"\n else:\n u_code = f\"\\\\U{digit:>08}\"\n url = f\"https://www.compart.com/en/unicode/U+{digit:>04}\"\n name = f\"[{unicodedata.name(char, '')}]({url})\"\n info = f\"`{u_code.ljust(10)}`: {name} - {char}\"\n return info, u_code\n\n charlist, rawlist = zip(*(get_info(c) for c in characters))\n\n embed = Embed(description=\"\\n\".join(charlist))\n embed.set_author(name=\"Character Info\")\n\n if len(characters) > 1:\n embed.add_field(name='Raw', value=f\"`{''.join(rawlist)}`\", inline=False)\n\n await ctx.send(embed=embed)\n\n @command()\n @with_role(*MODERATION_ROLES)\n async def mention(self, ctx: Context, *, role: Role) -> None:\n \"\"\"Set a role to be mentionable for a limited time.\"\"\"\n if role.mentionable:\n await ctx.send(f\"{role} is already mentionable!\")\n return\n\n await role.edit(reason=f\"Role unlocked by {ctx.author}\", mentionable=True)\n\n human_time = humanize_delta(relativedelta.relativedelta(seconds=Mention.message_timeout))\n await ctx.send(\n f\"{role} has been made mentionable. I will reset it in {human_time}, or when someone mentions this role.\"\n )\n\n def check(m: Message) -> bool:\n \"\"\"Checks that the message contains the role mention.\"\"\"\n return role in m.role_mentions\n\n try:\n msg = await self.bot.wait_for(\"message\", check=check, timeout=Mention.message_timeout)\n except TimeoutError:\n await role.edit(mentionable=False, reason=\"Automatic role lock - timeout.\")\n await ctx.send(f\"{ctx.author.mention}, you took too long. 
I have reset {role} to be unmentionable.\")\n return\n\n if any(r.id in MODERATION_ROLES for r in msg.author.roles):\n await sleep(Mention.reset_delay)\n await role.edit(mentionable=False, reason=f\"Automatic role lock by {msg.author}\")\n await ctx.send(\n f\"{ctx.author.mention}, I have reset {role} to be unmentionable as \"\n f\"{msg.author if msg.author != ctx.author else 'you'} sent a message mentioning it.\"\n )\n return\n\n await role.edit(mentionable=False, reason=f\"Automatic role lock - unauthorised use by {msg.author}\")\n await ctx.send(\n f\"{ctx.author.mention}, I have reset {role} to be unmentionable \"\n f\"as I detected unauthorised use by {msg.author} (ID: {msg.author.id}).\"\n )\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Utils cog.\"\"\"\n bot.add_cog(Utils(bot))\n", "path": "bot/cogs/utils.py"}]}
| 2,837 | 1,011 |
gh_patches_debug_19943
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-2937
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding user image to profile and saving it results in error page
After user uploads an image to the profile page and updates/saves the profile an error page shows up.

</issue>
<code>
[start of app/views/users/profile.py]
1 from uuid import uuid4
2
3 from flask import Blueprint
4 from flask import render_template
5 from flask import request, url_for, redirect, flash, jsonify
6 from flask.ext import login
7 from markupsafe import Markup
8
9 from app.helpers.auth import AuthManager
10 from app.helpers.data import DataManager, get_facebook_auth, get_instagram_auth, get_twitter_auth_url, save_to_db, get_google_auth
11 from app.helpers.data_getter import DataGetter
12 from app.helpers.helpers import uploaded_file
13 from app.helpers.oauth import FbOAuth, InstagramOAuth, OAuth
14 from app.helpers.storage import upload, UPLOAD_PATHS
15
16 profile = Blueprint('profile', __name__, url_prefix='/profile')
17
18
19 @profile.route('/')
20 def index_view():
21 if not AuthManager.is_verified_user():
22 flash(Markup("Your account is unverified. "
23 "Please verify by clicking on the confirmation link that has been emailed to you."
24 '<br>Did not get the email? Please <a href="/resend_email/" class="alert-link"> '
25 'click here to resend the confirmation.</a>'))
26 profile = DataGetter.get_user(login.current_user.id)
27 return render_template('gentelella/admin/profile/index.html',
28 profile=profile)
29
30
31 @profile.route('/edit/', methods=('GET', 'POST'))
32 @profile.route('/edit/<user_id>', methods=('GET', 'POST'))
33 def edit_view(user_id=None):
34 admin = None
35 if not user_id:
36 user_id = login.current_user.id
37 else:
38 admin = True
39 if request.method == 'POST':
40 DataManager.update_user(request.form, int(user_id))
41 if admin:
42 return redirect(url_for('sadmin_users.details_view', user_id=user_id))
43 return redirect(url_for('.index_view'))
44 return redirect(url_for('.index_view'))
45
46
47 @profile.route('/fb_connect', methods=('GET', 'POST'))
48 def connect_facebook():
49 facebook = get_facebook_auth()
50 fb_auth_url, state = facebook.authorization_url(FbOAuth.get_auth_uri(), access_type='offline')
51 return redirect(fb_auth_url)
52
53
54 @profile.route('/tw_connect', methods=('GET', 'POST'))
55 def connect_twitter():
56 twitter_auth_url, __ = get_twitter_auth_url()
57 return redirect('https://api.twitter.com/oauth/authenticate?' + twitter_auth_url)
58
59 @profile.route('/instagram_connect', methods=('GET', 'POST'))
60 def connect_instagram():
61 instagram = get_instagram_auth()
62 instagram_auth_url, state = instagram.authorization_url(InstagramOAuth.get_auth_uri(), access_type='offline')
63 return redirect(instagram_auth_url)
64
65 @profile.route('/<int:user_id>/editfiles/bgimage', methods=('POST', 'DELETE'))
66 def bgimage_upload(user_id):
67 if request.method == 'POST':
68 background_image = request.form['bgimage']
69 if background_image:
70 background_file = uploaded_file(file_content=background_image)
71 background_url = upload(
72 background_file,
73 UPLOAD_PATHS['user']['avatar'].format(
74 user_id=user_id
75 ))
76 return jsonify({'status': 'ok', 'background_url': background_url})
77 else:
78 return jsonify({'status': 'no bgimage'})
79 elif request.method == 'DELETE':
80 profile = DataGetter.get_user(int(user_id))
81 profile.avatar_uploaded = ''
82 save_to_db(profile)
83 return jsonify({'status': 'ok'})
84
85
86 @profile.route('/create/files/bgimage', methods=('POST',))
87 def create_event_bgimage_upload():
88 if request.method == 'POST':
89 background_image = request.form['bgimage']
90 if background_image:
91 background_file = uploaded_file(file_content=background_image)
92 background_url = upload(
93 background_file,
94 UPLOAD_PATHS['temp']['event'].format(uuid=uuid4())
95 )
96 return jsonify({'status': 'ok', 'background_url': background_url})
97 else:
98 return jsonify({'status': 'no bgimage'})
99
[end of app/views/users/profile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/views/users/profile.py b/app/views/users/profile.py
--- a/app/views/users/profile.py
+++ b/app/views/users/profile.py
@@ -11,7 +11,7 @@
from app.helpers.data_getter import DataGetter
from app.helpers.helpers import uploaded_file
from app.helpers.oauth import FbOAuth, InstagramOAuth, OAuth
-from app.helpers.storage import upload, UPLOAD_PATHS
+from app.helpers.storage import upload, UPLOAD_PATHS, upload_local
profile = Blueprint('profile', __name__, url_prefix='/profile')
@@ -89,7 +89,7 @@
background_image = request.form['bgimage']
if background_image:
background_file = uploaded_file(file_content=background_image)
- background_url = upload(
+ background_url = upload_local(
background_file,
UPLOAD_PATHS['temp']['event'].format(uuid=uuid4())
)
|
{"golden_diff": "diff --git a/app/views/users/profile.py b/app/views/users/profile.py\n--- a/app/views/users/profile.py\n+++ b/app/views/users/profile.py\n@@ -11,7 +11,7 @@\n from app.helpers.data_getter import DataGetter\n from app.helpers.helpers import uploaded_file\n from app.helpers.oauth import FbOAuth, InstagramOAuth, OAuth\n-from app.helpers.storage import upload, UPLOAD_PATHS\n+from app.helpers.storage import upload, UPLOAD_PATHS, upload_local\n \n profile = Blueprint('profile', __name__, url_prefix='/profile')\n \n@@ -89,7 +89,7 @@\n background_image = request.form['bgimage']\n if background_image:\n background_file = uploaded_file(file_content=background_image)\n- background_url = upload(\n+ background_url = upload_local(\n background_file,\n UPLOAD_PATHS['temp']['event'].format(uuid=uuid4())\n )\n", "issue": "Adding user image to profile and saving it results in error page\nAfter user uploads an image to the profile page and updates/saves the profile an error page shows up.\r\n\r\n\r\n\n", "before_files": [{"content": "from uuid import uuid4\n\nfrom flask import Blueprint\nfrom flask import render_template\nfrom flask import request, url_for, redirect, flash, jsonify\nfrom flask.ext import login\nfrom markupsafe import Markup\n\nfrom app.helpers.auth import AuthManager\nfrom app.helpers.data import DataManager, get_facebook_auth, get_instagram_auth, get_twitter_auth_url, save_to_db, get_google_auth\nfrom app.helpers.data_getter import DataGetter\nfrom app.helpers.helpers import uploaded_file\nfrom app.helpers.oauth import FbOAuth, InstagramOAuth, OAuth\nfrom app.helpers.storage import upload, UPLOAD_PATHS\n\nprofile = Blueprint('profile', __name__, url_prefix='/profile')\n\n\[email protected]('/')\ndef index_view():\n if not AuthManager.is_verified_user():\n flash(Markup(\"Your account is unverified. \"\n \"Please verify by clicking on the confirmation link that has been emailed to you.\"\n '<br>Did not get the email? Please <a href=\"/resend_email/\" class=\"alert-link\"> '\n 'click here to resend the confirmation.</a>'))\n profile = DataGetter.get_user(login.current_user.id)\n return render_template('gentelella/admin/profile/index.html',\n profile=profile)\n\n\[email protected]('/edit/', methods=('GET', 'POST'))\[email protected]('/edit/<user_id>', methods=('GET', 'POST'))\ndef edit_view(user_id=None):\n admin = None\n if not user_id:\n user_id = login.current_user.id\n else:\n admin = True\n if request.method == 'POST':\n DataManager.update_user(request.form, int(user_id))\n if admin:\n return redirect(url_for('sadmin_users.details_view', user_id=user_id))\n return redirect(url_for('.index_view'))\n return redirect(url_for('.index_view'))\n\n\[email protected]('/fb_connect', methods=('GET', 'POST'))\ndef connect_facebook():\n facebook = get_facebook_auth()\n fb_auth_url, state = facebook.authorization_url(FbOAuth.get_auth_uri(), access_type='offline')\n return redirect(fb_auth_url)\n\n\[email protected]('/tw_connect', methods=('GET', 'POST'))\ndef connect_twitter():\n twitter_auth_url, __ = get_twitter_auth_url()\n return redirect('https://api.twitter.com/oauth/authenticate?' 
+ twitter_auth_url)\n\[email protected]('/instagram_connect', methods=('GET', 'POST'))\ndef connect_instagram():\n instagram = get_instagram_auth()\n instagram_auth_url, state = instagram.authorization_url(InstagramOAuth.get_auth_uri(), access_type='offline')\n return redirect(instagram_auth_url)\n\[email protected]('/<int:user_id>/editfiles/bgimage', methods=('POST', 'DELETE'))\ndef bgimage_upload(user_id):\n if request.method == 'POST':\n background_image = request.form['bgimage']\n if background_image:\n background_file = uploaded_file(file_content=background_image)\n background_url = upload(\n background_file,\n UPLOAD_PATHS['user']['avatar'].format(\n user_id=user_id\n ))\n return jsonify({'status': 'ok', 'background_url': background_url})\n else:\n return jsonify({'status': 'no bgimage'})\n elif request.method == 'DELETE':\n profile = DataGetter.get_user(int(user_id))\n profile.avatar_uploaded = ''\n save_to_db(profile)\n return jsonify({'status': 'ok'})\n\n\[email protected]('/create/files/bgimage', methods=('POST',))\ndef create_event_bgimage_upload():\n if request.method == 'POST':\n background_image = request.form['bgimage']\n if background_image:\n background_file = uploaded_file(file_content=background_image)\n background_url = upload(\n background_file,\n UPLOAD_PATHS['temp']['event'].format(uuid=uuid4())\n )\n return jsonify({'status': 'ok', 'background_url': background_url})\n else:\n return jsonify({'status': 'no bgimage'})\n", "path": "app/views/users/profile.py"}]}
| 1,667 | 192 |
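The golden diff in this row only switches the pre-creation temp-image route from `upload` to `upload_local`; the per-user avatar route keeps calling `upload`. The real `upload_local` lives in `app/helpers/storage.py`, which is not included in the row, so the sketch below is an assumption about its shape: a helper with the same `(file, key)` signature as `upload` that writes to the local filesystem instead of the configured remote backend. The file object is assumed to behave like a Flask `FileStorage` (it has `.filename` and `.save()`).

```python
# Hypothetical sketch of upload_local(); the actual implementation in
# app/helpers/storage.py is not shown in this row.
import os


def upload_local(uploaded_file, key, base_dir="static/media"):
    """Store an uploaded file under a storage key on the local filesystem."""
    target = os.path.join(base_dir, key, uploaded_file.filename)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    uploaded_file.save(target)
    # Return a URL path the templates can use, mirroring what upload() returns.
    return "/" + target.replace(os.sep, "/")
```

Under that assumption, temporary images for content that has not been saved yet never touch the remote storage backend; whether that is the actual root cause of the error page depends on storage configuration that is not visible here.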
gh_patches_debug_29759
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1121
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix "complex_logic" issue in website/events/services.py
Consider simplifying this complex logical expression.
https://codeclimate.com/github/svthalia/concrexit/website/events/services.py#issue_5ece61c6b391e1000100034b
</issue>
<code>
[start of website/events/services.py]
1 from collections import OrderedDict
2
3 from django.utils import timezone
4 from django.utils.datetime_safe import date
5 from django.utils.translation import gettext_lazy as _, get_language
6
7 from events import emails
8 from events.exceptions import RegistrationError
9 from events.models import EventRegistration, RegistrationInformationField, Event
10 from payments.models import Payment
11 from payments.services import create_payment, delete_payment
12 from utils.snippets import datetime_to_lectureyear
13
14
15 def is_user_registered(member, event):
16 """
17 Returns if the user is registered for the specified event
18
19 :param member: the user
20 :param event: the event
21 :return: None if registration is not required or no member else True/False
22 """
23 if not event.registration_required or not member.is_authenticated:
24 return None
25
26 return event.registrations.filter(member=member, date_cancelled=None).count() > 0
27
28
29 def event_permissions(member, event, name=None):
30 """
31 Returns a dictionary with the available event permissions of the user
32
33 :param member: the user
34 :param event: the event
35 :param name: the name of a non member registration
36 :return: the permission dictionary
37 """
38 perms = {
39 "create_registration": False,
40 "cancel_registration": False,
41 "update_registration": False,
42 }
43 if member and member.is_authenticated or name:
44 registration = None
45 try:
46 registration = EventRegistration.objects.get(
47 event=event, member=member, name=name
48 )
49 except EventRegistration.DoesNotExist:
50 pass
51
52 perms["create_registration"] = (
53 (registration is None or registration.date_cancelled is not None)
54 and event.registration_allowed
55 and (name or member.can_attend_events)
56 )
57 perms["cancel_registration"] = (
58 registration is not None
59 and registration.date_cancelled is None
60 and (event.cancellation_allowed or name)
61 )
62 perms["update_registration"] = (
63 registration is not None
64 and registration.date_cancelled is None
65 and event.has_fields()
66 and event.registration_allowed
67 and (name or member.can_attend_events)
68 )
69
70 return perms
71
72
73 def is_organiser(member, event):
74 if member and member.is_authenticated:
75 if member.is_superuser or member.has_perm("events.override_organiser"):
76 return True
77
78 if event:
79 return member.get_member_groups().filter(pk=event.organiser.pk).count() != 0
80
81 return False
82
83
84 def create_registration(member, event):
85 """
86 Creates a new user registration for an event
87
88 :param member: the user
89 :param event: the event
90 :return: returns the registration if successful
91 """
92 if event_permissions(member, event)["create_registration"]:
93 registration = None
94 try:
95 registration = EventRegistration.objects.get(event=event, member=member)
96 except EventRegistration.DoesNotExist:
97 pass
98
99 if registration is None:
100 return EventRegistration.objects.create(event=event, member=member)
101 elif registration.date_cancelled is not None:
102 if registration.is_late_cancellation():
103 raise RegistrationError(
104 _(
105 "You cannot re-register anymore "
106 "since you've cancelled after the "
107 "deadline."
108 )
109 )
110 else:
111 registration.date = timezone.now()
112 registration.date_cancelled = None
113 registration.save()
114
115 return registration
116 elif event_permissions(member, event)["cancel_registration"]:
117 raise RegistrationError(_("You were already registered."))
118 else:
119 raise RegistrationError(_("You may not register."))
120
121
122 def cancel_registration(member, event):
123 """
124 Cancel a user registration for an event
125
126 :param member: the user
127 :param event: the event
128 """
129 registration = None
130 try:
131 registration = EventRegistration.objects.get(event=event, member=member)
132 except EventRegistration.DoesNotExist:
133 pass
134
135 if event_permissions(member, event)["cancel_registration"] and registration:
136 if registration.payment is not None:
137 p = registration.payment
138 registration.payment = None
139 registration.save()
140 p.delete()
141 if registration.queue_position == 0:
142 emails.notify_first_waiting(event)
143
144 if event.send_cancel_email and event.after_cancel_deadline:
145 emails.notify_organiser(event, registration)
146
147 # Note that this doesn"t remove the values for the
148 # information fields that the user entered upon registering.
149 # But this is regarded as a feature, not a bug. Especially
150 # since the values will still appear in the backend.
151 registration.date_cancelled = timezone.now()
152 registration.save()
153 else:
154 raise RegistrationError(_("You are not registered for this event."))
155
156
157 def pay_with_tpay(member, event):
158 """
159 Add a Thalia Pay payment to an event registration
160
161 :param member: the user
162 :param event: the event
163 """
164 try:
165 registration = EventRegistration.objects.get(event=event, member=member)
166 except EventRegistration.DoesNotExist:
167 raise RegistrationError(_("You are not registered for this event."))
168
169 if registration.payment is None:
170 registration.payment = create_payment(
171 payable=registration, processed_by=member, pay_type=Payment.TPAY
172 )
173 registration.save()
174 else:
175 raise RegistrationError(_("You have already paid for this event."))
176
177
178 def update_registration(
179 member=None, event=None, name=None, registration=None, field_values=None
180 ):
181 """
182 Updates a user registration of an event
183
184 :param request: http request
185 :param member: the user
186 :param event: the event
187 :param name: the name of a registration not associated with a user
188 :param registration: the registration
189 :param field_values: values for the information fields
190 """
191 if not registration:
192 try:
193 registration = EventRegistration.objects.get(
194 event=event, member=member, name=name
195 )
196 except EventRegistration.DoesNotExist as error:
197 raise RegistrationError(
198 _("You are not registered for this event.")
199 ) from error
200 else:
201 member = registration.member
202 event = registration.event
203 name = registration.name
204
205 if (
206 not event_permissions(member, event, name)["update_registration"]
207 or not field_values
208 ):
209 return
210
211 for field_id, field_value in field_values:
212 field = RegistrationInformationField.objects.get(
213 id=field_id.replace("info_field_", "")
214 )
215
216 if (
217 field.type == RegistrationInformationField.INTEGER_FIELD
218 and field_value is None
219 ):
220 field_value = 0
221 elif (
222 field.type == RegistrationInformationField.BOOLEAN_FIELD
223 and field_value is None
224 ):
225 field_value = False
226 elif (
227 field.type == RegistrationInformationField.TEXT_FIELD
228 and field_value is None
229 ):
230 field_value = ""
231
232 field.set_value_for(registration, field_value)
233
234
235 def registration_fields(request, member=None, event=None, registration=None, name=None):
236 """
237 Returns information about the registration fields of a registration
238
239 :param member: the user (optional if registration provided)
240 :param name: the name of a non member registration
241 (optional if registration provided)
242 :param event: the event (optional if registration provided)
243 :param registration: the registration (optional if member & event provided)
244 :return: the fields
245 """
246
247 if registration is None:
248 try:
249 registration = EventRegistration.objects.get(
250 event=event, member=member, name=name
251 )
252 except EventRegistration.DoesNotExist as error:
253 raise RegistrationError(
254 _("You are not registered for this event.")
255 ) from error
256 except EventRegistration.MultipleObjectsReturned as error:
257 raise RegistrationError(
258 _("Unable to find the right registration.")
259 ) from error
260 else:
261 member = registration.member
262 event = registration.event
263 name = registration.name
264
265 perms = event_permissions(member, event, name)[
266 "update_registration"
267 ] or is_organiser(request.member, event)
268 if perms and registration:
269 information_fields = registration.information_fields
270 fields = OrderedDict()
271
272 for information_field in information_fields:
273 field = information_field["field"]
274
275 fields["info_field_{}".format(field.id)] = {
276 "type": field.type,
277 "label": getattr(field, "{}_{}".format("name", get_language())),
278 "description": getattr(
279 field, "{}_{}".format("description", get_language())
280 ),
281 "value": information_field["value"],
282 "required": field.required,
283 }
284
285 return fields
286 else:
287 raise RegistrationError(_("You are not allowed to update this registration."))
288
289
290 def update_registration_by_organiser(registration, member, data):
291 if not is_organiser(member, registration.event):
292 raise RegistrationError(_("You are not allowed to update this registration."))
293
294 if "payment" in data:
295 if data["payment"]["type"] == Payment.NONE and registration.payment is not None:
296 delete_payment(registration)
297 else:
298 registration.payment = create_payment(
299 payable=registration,
300 processed_by=member,
301 pay_type=data["payment"]["type"],
302 )
303
304 if "present" in data:
305 registration.present = data["present"]
306
307 registration.save()
308
309
310 def generate_category_statistics():
311 """
312 Generate statistics about events, number of events per category
313 :return: Dict with key, value resp. being category, event count.
314 """
315 year = datetime_to_lectureyear(timezone.now())
316
317 data = {}
318 for i in range(5):
319 year_start = date(year=year - i, month=9, day=1)
320 year_end = date(year=year - i + 1, month=9, day=1)
321 data[str(year - i)] = {
322 str(display): Event.objects.filter(
323 category=key, start__gte=year_start, end__lte=year_end
324 ).count()
325 for key, display in Event.EVENT_CATEGORIES
326 }
327
328 return data
329
[end of website/events/services.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/events/services.py b/website/events/services.py
--- a/website/events/services.py
+++ b/website/events/services.py
@@ -40,33 +40,36 @@
"cancel_registration": False,
"update_registration": False,
}
- if member and member.is_authenticated or name:
- registration = None
- try:
- registration = EventRegistration.objects.get(
- event=event, member=member, name=name
- )
- except EventRegistration.DoesNotExist:
- pass
+ if not member:
+ return perms
+ if not (member.is_authenticated or name):
+ return perms
- perms["create_registration"] = (
- (registration is None or registration.date_cancelled is not None)
- and event.registration_allowed
- and (name or member.can_attend_events)
- )
- perms["cancel_registration"] = (
- registration is not None
- and registration.date_cancelled is None
- and (event.cancellation_allowed or name)
- )
- perms["update_registration"] = (
- registration is not None
- and registration.date_cancelled is None
- and event.has_fields()
- and event.registration_allowed
- and (name or member.can_attend_events)
+ registration = None
+ try:
+ registration = EventRegistration.objects.get(
+ event=event, member=member, name=name
)
+ except EventRegistration.DoesNotExist:
+ pass
+ perms["create_registration"] = (
+ (registration is None or registration.date_cancelled is not None)
+ and event.registration_allowed
+ and (name or member.can_attend_events)
+ )
+ perms["cancel_registration"] = (
+ registration is not None
+ and registration.date_cancelled is None
+ and (event.cancellation_allowed or name)
+ )
+ perms["update_registration"] = (
+ registration is not None
+ and registration.date_cancelled is None
+ and event.has_fields()
+ and event.registration_allowed
+ and (name or member.can_attend_events)
+ )
return perms
|
{"golden_diff": "diff --git a/website/events/services.py b/website/events/services.py\n--- a/website/events/services.py\n+++ b/website/events/services.py\n@@ -40,33 +40,36 @@\n \"cancel_registration\": False,\n \"update_registration\": False,\n }\n- if member and member.is_authenticated or name:\n- registration = None\n- try:\n- registration = EventRegistration.objects.get(\n- event=event, member=member, name=name\n- )\n- except EventRegistration.DoesNotExist:\n- pass\n+ if not member:\n+ return perms\n+ if not (member.is_authenticated or name):\n+ return perms\n \n- perms[\"create_registration\"] = (\n- (registration is None or registration.date_cancelled is not None)\n- and event.registration_allowed\n- and (name or member.can_attend_events)\n- )\n- perms[\"cancel_registration\"] = (\n- registration is not None\n- and registration.date_cancelled is None\n- and (event.cancellation_allowed or name)\n- )\n- perms[\"update_registration\"] = (\n- registration is not None\n- and registration.date_cancelled is None\n- and event.has_fields()\n- and event.registration_allowed\n- and (name or member.can_attend_events)\n+ registration = None\n+ try:\n+ registration = EventRegistration.objects.get(\n+ event=event, member=member, name=name\n )\n+ except EventRegistration.DoesNotExist:\n+ pass\n \n+ perms[\"create_registration\"] = (\n+ (registration is None or registration.date_cancelled is not None)\n+ and event.registration_allowed\n+ and (name or member.can_attend_events)\n+ )\n+ perms[\"cancel_registration\"] = (\n+ registration is not None\n+ and registration.date_cancelled is None\n+ and (event.cancellation_allowed or name)\n+ )\n+ perms[\"update_registration\"] = (\n+ registration is not None\n+ and registration.date_cancelled is None\n+ and event.has_fields()\n+ and event.registration_allowed\n+ and (name or member.can_attend_events)\n+ )\n return perms\n", "issue": "Fix \"complex_logic\" issue in website/events/services.py\nConsider simplifying this complex logical expression.\n\nhttps://codeclimate.com/github/svthalia/concrexit/website/events/services.py#issue_5ece61c6b391e1000100034b\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom django.utils import timezone\nfrom django.utils.datetime_safe import date\nfrom django.utils.translation import gettext_lazy as _, get_language\n\nfrom events import emails\nfrom events.exceptions import RegistrationError\nfrom events.models import EventRegistration, RegistrationInformationField, Event\nfrom payments.models import Payment\nfrom payments.services import create_payment, delete_payment\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef is_user_registered(member, event):\n \"\"\"\n Returns if the user is registered for the specified event\n\n :param member: the user\n :param event: the event\n :return: None if registration is not required or no member else True/False\n \"\"\"\n if not event.registration_required or not member.is_authenticated:\n return None\n\n return event.registrations.filter(member=member, date_cancelled=None).count() > 0\n\n\ndef event_permissions(member, event, name=None):\n \"\"\"\n Returns a dictionary with the available event permissions of the user\n\n :param member: the user\n :param event: the event\n :param name: the name of a non member registration\n :return: the permission dictionary\n \"\"\"\n perms = {\n \"create_registration\": False,\n \"cancel_registration\": False,\n \"update_registration\": False,\n }\n if member and member.is_authenticated or name:\n registration = None\n 
try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist:\n pass\n\n perms[\"create_registration\"] = (\n (registration is None or registration.date_cancelled is not None)\n and event.registration_allowed\n and (name or member.can_attend_events)\n )\n perms[\"cancel_registration\"] = (\n registration is not None\n and registration.date_cancelled is None\n and (event.cancellation_allowed or name)\n )\n perms[\"update_registration\"] = (\n registration is not None\n and registration.date_cancelled is None\n and event.has_fields()\n and event.registration_allowed\n and (name or member.can_attend_events)\n )\n\n return perms\n\n\ndef is_organiser(member, event):\n if member and member.is_authenticated:\n if member.is_superuser or member.has_perm(\"events.override_organiser\"):\n return True\n\n if event:\n return member.get_member_groups().filter(pk=event.organiser.pk).count() != 0\n\n return False\n\n\ndef create_registration(member, event):\n \"\"\"\n Creates a new user registration for an event\n\n :param member: the user\n :param event: the event\n :return: returns the registration if successful\n \"\"\"\n if event_permissions(member, event)[\"create_registration\"]:\n registration = None\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n pass\n\n if registration is None:\n return EventRegistration.objects.create(event=event, member=member)\n elif registration.date_cancelled is not None:\n if registration.is_late_cancellation():\n raise RegistrationError(\n _(\n \"You cannot re-register anymore \"\n \"since you've cancelled after the \"\n \"deadline.\"\n )\n )\n else:\n registration.date = timezone.now()\n registration.date_cancelled = None\n registration.save()\n\n return registration\n elif event_permissions(member, event)[\"cancel_registration\"]:\n raise RegistrationError(_(\"You were already registered.\"))\n else:\n raise RegistrationError(_(\"You may not register.\"))\n\n\ndef cancel_registration(member, event):\n \"\"\"\n Cancel a user registration for an event\n\n :param member: the user\n :param event: the event\n \"\"\"\n registration = None\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n pass\n\n if event_permissions(member, event)[\"cancel_registration\"] and registration:\n if registration.payment is not None:\n p = registration.payment\n registration.payment = None\n registration.save()\n p.delete()\n if registration.queue_position == 0:\n emails.notify_first_waiting(event)\n\n if event.send_cancel_email and event.after_cancel_deadline:\n emails.notify_organiser(event, registration)\n\n # Note that this doesn\"t remove the values for the\n # information fields that the user entered upon registering.\n # But this is regarded as a feature, not a bug. 
Especially\n # since the values will still appear in the backend.\n registration.date_cancelled = timezone.now()\n registration.save()\n else:\n raise RegistrationError(_(\"You are not registered for this event.\"))\n\n\ndef pay_with_tpay(member, event):\n \"\"\"\n Add a Thalia Pay payment to an event registration\n\n :param member: the user\n :param event: the event\n \"\"\"\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n raise RegistrationError(_(\"You are not registered for this event.\"))\n\n if registration.payment is None:\n registration.payment = create_payment(\n payable=registration, processed_by=member, pay_type=Payment.TPAY\n )\n registration.save()\n else:\n raise RegistrationError(_(\"You have already paid for this event.\"))\n\n\ndef update_registration(\n member=None, event=None, name=None, registration=None, field_values=None\n):\n \"\"\"\n Updates a user registration of an event\n\n :param request: http request\n :param member: the user\n :param event: the event\n :param name: the name of a registration not associated with a user\n :param registration: the registration\n :param field_values: values for the information fields\n \"\"\"\n if not registration:\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist as error:\n raise RegistrationError(\n _(\"You are not registered for this event.\")\n ) from error\n else:\n member = registration.member\n event = registration.event\n name = registration.name\n\n if (\n not event_permissions(member, event, name)[\"update_registration\"]\n or not field_values\n ):\n return\n\n for field_id, field_value in field_values:\n field = RegistrationInformationField.objects.get(\n id=field_id.replace(\"info_field_\", \"\")\n )\n\n if (\n field.type == RegistrationInformationField.INTEGER_FIELD\n and field_value is None\n ):\n field_value = 0\n elif (\n field.type == RegistrationInformationField.BOOLEAN_FIELD\n and field_value is None\n ):\n field_value = False\n elif (\n field.type == RegistrationInformationField.TEXT_FIELD\n and field_value is None\n ):\n field_value = \"\"\n\n field.set_value_for(registration, field_value)\n\n\ndef registration_fields(request, member=None, event=None, registration=None, name=None):\n \"\"\"\n Returns information about the registration fields of a registration\n\n :param member: the user (optional if registration provided)\n :param name: the name of a non member registration\n (optional if registration provided)\n :param event: the event (optional if registration provided)\n :param registration: the registration (optional if member & event provided)\n :return: the fields\n \"\"\"\n\n if registration is None:\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist as error:\n raise RegistrationError(\n _(\"You are not registered for this event.\")\n ) from error\n except EventRegistration.MultipleObjectsReturned as error:\n raise RegistrationError(\n _(\"Unable to find the right registration.\")\n ) from error\n else:\n member = registration.member\n event = registration.event\n name = registration.name\n\n perms = event_permissions(member, event, name)[\n \"update_registration\"\n ] or is_organiser(request.member, event)\n if perms and registration:\n information_fields = registration.information_fields\n fields = OrderedDict()\n\n for information_field in 
information_fields:\n field = information_field[\"field\"]\n\n fields[\"info_field_{}\".format(field.id)] = {\n \"type\": field.type,\n \"label\": getattr(field, \"{}_{}\".format(\"name\", get_language())),\n \"description\": getattr(\n field, \"{}_{}\".format(\"description\", get_language())\n ),\n \"value\": information_field[\"value\"],\n \"required\": field.required,\n }\n\n return fields\n else:\n raise RegistrationError(_(\"You are not allowed to update this registration.\"))\n\n\ndef update_registration_by_organiser(registration, member, data):\n if not is_organiser(member, registration.event):\n raise RegistrationError(_(\"You are not allowed to update this registration.\"))\n\n if \"payment\" in data:\n if data[\"payment\"][\"type\"] == Payment.NONE and registration.payment is not None:\n delete_payment(registration)\n else:\n registration.payment = create_payment(\n payable=registration,\n processed_by=member,\n pay_type=data[\"payment\"][\"type\"],\n )\n\n if \"present\" in data:\n registration.present = data[\"present\"]\n\n registration.save()\n\n\ndef generate_category_statistics():\n \"\"\"\n Generate statistics about events, number of events per category\n :return: Dict with key, value resp. being category, event count.\n \"\"\"\n year = datetime_to_lectureyear(timezone.now())\n\n data = {}\n for i in range(5):\n year_start = date(year=year - i, month=9, day=1)\n year_end = date(year=year - i + 1, month=9, day=1)\n data[str(year - i)] = {\n str(display): Event.objects.filter(\n category=key, start__gte=year_start, end__lte=year_end\n ).count()\n for key, display in Event.EVENT_CATEGORIES\n }\n\n return data\n", "path": "website/events/services.py"}]}
| 3,611 | 472 |
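The refactor in the golden diff above replaces the nested conditional in `event_permissions` with two early returns (guard clauses). One detail worth noting when reading it: in Python, `and` binds tighter than `or`, so the original test `member and member.is_authenticated or name` parses as `(member and member.is_authenticated) or name`. The snippet below is a stand-alone illustration with plain values rather than Django objects; it shows how the two forms can disagree when `member` is falsy but `name` is set. Whether that case can actually reach this code depends on callers that are not shown in the row.

```python
# Stand-alone illustration of the precedence point; no Django involved.
member, name = None, "non-member guest"

# Original condition: parsed as (member and member.is_authenticated) or name
original = (member and getattr(member, "is_authenticated", False)) or name

# Patched guards: "if not member: return" then "if not (is_authenticated or name): return"
passes_guards = bool(member) and (getattr(member, "is_authenticated", False) or bool(name))

print(bool(original))   # True  -> the old code entered the permission block
print(passes_guards)    # False -> the new code returns the all-False perms early
```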
gh_patches_debug_3624
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-912
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pkg_resources (setuptools) requirement not declared in setup.py
*cfn-lint version:* 0.20.1
*Description of issue.*
While trying to package cfn-lint for conda-forge, I ran into the issue that pkg_resources is [imported in a few places](https://github.com/aws-cloudformation/cfn-python-lint/search?q=pkg_resources&unscoped_q=pkg_resources) but that this requirement (setuptools) is not specified in setup.py https://github.com/aws-cloudformation/cfn-python-lint/blob/master/setup.py#L75-L82
Is setuptools desired to be a run time requirement? If so, install_requires should probably list it.
</issue>
<code>
[start of setup.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import codecs
18 import re
19 from setuptools import find_packages
20 from setuptools import setup
21
22
23 def get_version(filename):
24 with codecs.open(filename, 'r', 'utf-8') as fp:
25 contents = fp.read()
26 return re.search(r"__version__ = ['\"]([^'\"]+)['\"]", contents).group(1)
27
28
29 version = get_version('src/cfnlint/version.py')
30
31
32 with open('README.md') as f:
33 readme = f.read()
34
35 setup(
36 name='cfn-lint',
37 version=version,
38 description=('checks cloudformation for practices and behaviour \
39 that could potentially be improved'),
40 long_description=readme,
41 long_description_content_type="text/markdown",
42 keywords='aws, lint',
43 author='kddejong',
44 author_email='[email protected]',
45 url='https://github.com/aws-cloudformation/cfn-python-lint',
46 package_dir={'': 'src'},
47 package_data={'cfnlint': [
48 'data/CloudSpecs/*.json',
49 'data/AdditionalSpecs/*.json',
50 'data/Serverless/*.json',
51 'data/ExtendedSpecs/all/*.json',
52 'data/ExtendedSpecs/ap-northeast-1/*.json',
53 'data/ExtendedSpecs/ap-northeast-2/*.json',
54 'data/ExtendedSpecs/ap-northeast-3/*.json',
55 'data/ExtendedSpecs/ap-south-1/*.json',
56 'data/ExtendedSpecs/ap-southeast-1/*.json',
57 'data/ExtendedSpecs/ap-southeast-2/*.json',
58 'data/ExtendedSpecs/ca-central-1/*.json',
59 'data/ExtendedSpecs/eu-central-1/*.json',
60 'data/ExtendedSpecs/eu-north-1/*.json',
61 'data/ExtendedSpecs/eu-west-1/*.json',
62 'data/ExtendedSpecs/eu-west-2/*.json',
63 'data/ExtendedSpecs/eu-west-3/*.json',
64 'data/ExtendedSpecs/sa-east-1/*.json',
65 'data/ExtendedSpecs/us-east-1/*.json',
66 'data/ExtendedSpecs/us-east-2/*.json',
67 'data/ExtendedSpecs/us-gov-east-1/*.json',
68 'data/ExtendedSpecs/us-gov-west-1/*.json',
69 'data/ExtendedSpecs/us-west-1/*.json',
70 'data/ExtendedSpecs/us-west-2/*.json',
71 'data/CfnLintCli/config/schema.json'
72 ]},
73 packages=find_packages('src'),
74 zip_safe=False,
75 install_requires=[
76 'pyyaml',
77 'six~=1.11',
78 'requests>=2.15.0,<=2.21.0',
79 'aws-sam-translator>=1.10.0',
80 'jsonpatch',
81 'jsonschema~=2.6',
82 'pathlib2>=2.3.0;python_version<"3.4"'
83 ],
84 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
85 entry_points={
86 'console_scripts': [
87 'cfn-lint = cfnlint.__main__:main'
88 ]
89 },
90 license='MIT no attribution',
91 test_suite="unittest",
92 classifiers=[
93 'Development Status :: 5 - Production/Stable',
94 'Intended Audience :: Developers',
95 'License :: OSI Approved :: MIT License',
96 'Natural Language :: English',
97 'Operating System :: OS Independent',
98 'Programming Language :: Python :: 2',
99 'Programming Language :: Python :: 2.7',
100 'Programming Language :: Python :: 3',
101 'Programming Language :: Python :: 3.4',
102 'Programming Language :: Python :: 3.5',
103 'Programming Language :: Python :: 3.6',
104 'Programming Language :: Python :: 3.7',
105 ],
106 )
107
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -79,7 +79,8 @@
'aws-sam-translator>=1.10.0',
'jsonpatch',
'jsonschema~=2.6',
- 'pathlib2>=2.3.0;python_version<"3.4"'
+ 'pathlib2>=2.3.0;python_version<"3.4"',
+ 'setuptools',
],
python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
entry_points={
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -79,7 +79,8 @@\n 'aws-sam-translator>=1.10.0',\n 'jsonpatch',\n 'jsonschema~=2.6',\n- 'pathlib2>=2.3.0;python_version<\"3.4\"'\n+ 'pathlib2>=2.3.0;python_version<\"3.4\"',\n+ 'setuptools',\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n entry_points={\n", "issue": "pkg_resources (setuptools) requirement not declared in setup.py\n*cfn-lint version:* 0.20.1\r\n\r\n*Description of issue.*\r\nWhile trying to package cfn-lint for conda-forge, I ran into the issue that pkg_resources is [imported in a few places](https://github.com/aws-cloudformation/cfn-python-lint/search?q=pkg_resources&unscoped_q=pkg_resources) but that this requirement (setuptools) is not specified in setup.py https://github.com/aws-cloudformation/cfn-python-lint/blob/master/setup.py#L75-L82\r\n\r\nIs setuptools desired to be a run time requirement? If so, install_requires should probably list it. \n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport codecs\nimport re\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\ndef get_version(filename):\n with codecs.open(filename, 'r', 'utf-8') as fp:\n contents = fp.read()\n return re.search(r\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", contents).group(1)\n\n\nversion = get_version('src/cfnlint/version.py')\n\n\nwith open('README.md') as f:\n readme = f.read()\n\nsetup(\n name='cfn-lint',\n version=version,\n description=('checks cloudformation for practices and behaviour \\\n that could potentially be improved'),\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n keywords='aws, lint',\n author='kddejong',\n author_email='[email protected]',\n url='https://github.com/aws-cloudformation/cfn-python-lint',\n package_dir={'': 'src'},\n package_data={'cfnlint': [\n 'data/CloudSpecs/*.json',\n 'data/AdditionalSpecs/*.json',\n 'data/Serverless/*.json',\n 'data/ExtendedSpecs/all/*.json',\n 'data/ExtendedSpecs/ap-northeast-1/*.json',\n 'data/ExtendedSpecs/ap-northeast-2/*.json',\n 'data/ExtendedSpecs/ap-northeast-3/*.json',\n 'data/ExtendedSpecs/ap-south-1/*.json',\n 'data/ExtendedSpecs/ap-southeast-1/*.json',\n 'data/ExtendedSpecs/ap-southeast-2/*.json',\n 'data/ExtendedSpecs/ca-central-1/*.json',\n 'data/ExtendedSpecs/eu-central-1/*.json',\n 'data/ExtendedSpecs/eu-north-1/*.json',\n 'data/ExtendedSpecs/eu-west-1/*.json',\n 'data/ExtendedSpecs/eu-west-2/*.json',\n 'data/ExtendedSpecs/eu-west-3/*.json',\n 'data/ExtendedSpecs/sa-east-1/*.json',\n 'data/ExtendedSpecs/us-east-1/*.json',\n 
'data/ExtendedSpecs/us-east-2/*.json',\n 'data/ExtendedSpecs/us-gov-east-1/*.json',\n 'data/ExtendedSpecs/us-gov-west-1/*.json',\n 'data/ExtendedSpecs/us-west-1/*.json',\n 'data/ExtendedSpecs/us-west-2/*.json',\n 'data/CfnLintCli/config/schema.json'\n ]},\n packages=find_packages('src'),\n zip_safe=False,\n install_requires=[\n 'pyyaml',\n 'six~=1.11',\n 'requests>=2.15.0,<=2.21.0',\n 'aws-sam-translator>=1.10.0',\n 'jsonpatch',\n 'jsonschema~=2.6',\n 'pathlib2>=2.3.0;python_version<\"3.4\"'\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n entry_points={\n 'console_scripts': [\n 'cfn-lint = cfnlint.__main__:main'\n ]\n },\n license='MIT no attribution',\n test_suite=\"unittest\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n)\n", "path": "setup.py"}]}
| 1,952 | 143 |
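The fix above is a one-line metadata change: because `pkg_resources` is imported at runtime, the distribution that provides it (setuptools) has to appear in `install_requires`. The gap is easy to miss with pip, since virtual environments created before Python 3.12 have typically shipped setuptools by default; it surfaces in ecosystems such as conda-forge that build the dependency list strictly from the declared metadata. The snippet below restates the patched list with the added entry annotated; the version pins are the ones already shown in the row.

```python
# The patched dependency list from setup.py, with the added entry annotated.
install_requires = [
    'pyyaml',
    'six~=1.11',
    'requests>=2.15.0,<=2.21.0',
    'aws-sam-translator>=1.10.0',
    'jsonpatch',
    'jsonschema~=2.6',
    'pathlib2>=2.3.0;python_version<"3.4"',
    'setuptools',  # provides the pkg_resources module imported at runtime
]
```

A later-style alternative is to read metadata and entry points through `importlib.metadata` (standard library since Python 3.8), which removes the runtime setuptools dependency entirely, but that is beyond the scope of the one-line fix here.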
gh_patches_debug_10280
|
rasdani/github-patches
|
git_diff
|
plotly__plotly.py-762
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
list index out of range exception when importing graph_objs
Hi, I installed the latest version of plotly today (2.0.6) and ran into the following error with the first import line:
```python
import plotly.graph_objs as go
```
It gives me the following error:
```python
/usr/local/lib/python2.7/site-packages/plotly/__init__.py in <module>()
29 from __future__ import absolute_import
30
---> 31 from plotly import (plotly, dashboard_objs, graph_objs, grid_objs, tools,
32 utils, session, offline, colors)
33 from plotly.version import __version__
/usr/local/lib/python2.7/site-packages/plotly/plotly/__init__.py in <module>()
8
9 """
---> 10 from . plotly import (
11 sign_in,
12 update_plot_options,
/usr/local/lib/python2.7/site-packages/plotly/plotly/plotly.py in <module>()
27 from requests.compat import json as _json
28
---> 29 from plotly import exceptions, files, session, tools, utils
30 from plotly.api import v1, v2
31 from plotly.plotly import chunked_requests
/usr/local/lib/python2.7/site-packages/plotly/tools.py in <module>()
58
59 ipython_core_display = optional_imports.get_module('IPython.core.display')
---> 60 matplotlylib = optional_imports.get_module('plotly.matplotlylib')
61 sage_salvus = optional_imports.get_module('sage_salvus')
62
/usr/local/lib/python2.7/site-packages/plotly/optional_imports.pyc in get_module(name)
21 if name not in _not_importable:
22 try:
---> 23 return import_module(name)
24 except ImportError:
25 _not_importable.add(name)
/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.pyc in import_module(name, package)
35 level += 1
36 name = _resolve_name(name[level:], package, level)
---> 37 __import__(name)
38 return sys.modules[name]
/usr/local/lib/python2.7/site-packages/plotly/matplotlylib/__init__.py in <module>()
12 from __future__ import absolute_import
13
---> 14 from plotly.matplotlylib.renderer import PlotlyRenderer
15 from plotly.matplotlylib.mplexporter import Exporter
/usr/local/lib/python2.7/site-packages/plotly/matplotlylib/renderer.py in <module>()
11 import warnings
12
---> 13 import plotly.graph_objs as go
14 from plotly.matplotlylib.mplexporter import Renderer
15 from plotly.matplotlylib import mpltools
/usr/local/lib/python2.7/site-packages/plotly/graph_objs/__init__.py in <module>()
12 from __future__ import absolute_import
13
---> 14 from plotly.graph_objs.graph_objs import * # this is protected with __all__
/usr/local/lib/python2.7/site-packages/plotly/graph_objs/graph_objs.py in <module>()
32 import six
33
---> 34 from plotly import exceptions, graph_reference
35 from plotly.graph_objs import graph_objs_tools
36
/usr/local/lib/python2.7/site-packages/plotly/graph_reference.py in <module>()
230
231
--> 232 @utils.memoize()
233 def _get_valid_attributes(object_name, parent_object_names):
234 attributes = get_attributes_dicts(object_name, parent_object_names)
/usr/local/lib/python2.7/site-packages/plotly/utils.pyc in memoize(maxsize)
490 return result
491
--> 492 return decorator(_memoize)
/usr/local/lib/python2.7/site-packages/decorator.pyc in decorator(caller, _func)
256 callerfunc = caller
257 doc = caller.__doc__
--> 258 fun = getfullargspec(callerfunc).args[0] # first arg
259 else: # assume caller is an object with a __call__ method
260 name = caller.__class__.__name__.lower()
IndexError: list index out of range
```
Please advise on how I can fix this.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup
2
3 exec (open('plotly/version.py').read())
4
5
6 def readme():
7 with open('README.rst') as f:
8 return f.read()
9
10
11 setup(name='plotly',
12 version=__version__,
13 use_2to3=False,
14 author='Chris P',
15 author_email='[email protected]',
16 maintainer='Chris P',
17 maintainer_email='[email protected]',
18 url='https://plot.ly/python/',
19 description="Python plotting library for collaborative, "
20 "interactive, publication-quality graphs.",
21 long_description=readme(),
22 classifiers=[
23 'Development Status :: 4 - Beta',
24 'Programming Language :: Python :: 2',
25 'Programming Language :: Python :: 2.7',
26 'Programming Language :: Python :: 3',
27 'Programming Language :: Python :: 3.3',
28 'Programming Language :: Python :: 3.4',
29 'Programming Language :: Python :: 3.5',
30 'Topic :: Scientific/Engineering :: Visualization',
31 ],
32 license='MIT',
33 packages=['plotly',
34 'plotly/api',
35 'plotly/api/v1',
36 'plotly/api/v2',
37 'plotly/dashboard_objs',
38 'plotly/plotly',
39 'plotly/plotly/chunked_requests',
40 'plotly/figure_factory',
41 'plotly/graph_objs',
42 'plotly/grid_objs',
43 'plotly/widgets',
44 'plotly/offline',
45 'plotly/matplotlylib',
46 'plotly/matplotlylib/mplexporter',
47 'plotly/matplotlylib/mplexporter/renderers'],
48 package_data={'plotly': ['package_data/*']},
49 install_requires=['decorator',
50 'nbformat>=4.2',
51 'pytz',
52 'requests',
53 'six'],
54 zip_safe=False)
55
[end of setup.py]
[start of plotly/version.py]
1 __version__ = '2.0.8'
2
[end of plotly/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plotly/version.py b/plotly/version.py
--- a/plotly/version.py
+++ b/plotly/version.py
@@ -1 +1 @@
-__version__ = '2.0.8'
+__version__ = '2.0.9'
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,7 +46,7 @@
'plotly/matplotlylib/mplexporter',
'plotly/matplotlylib/mplexporter/renderers'],
package_data={'plotly': ['package_data/*']},
- install_requires=['decorator',
+ install_requires=['decorator>=4.0.6',
'nbformat>=4.2',
'pytz',
'requests',
|
{"golden_diff": "diff --git a/plotly/version.py b/plotly/version.py\n--- a/plotly/version.py\n+++ b/plotly/version.py\n@@ -1 +1 @@\n-__version__ = '2.0.8'\n+__version__ = '2.0.9'\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,7 +46,7 @@\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n- install_requires=['decorator',\n+ install_requires=['decorator>=4.0.6',\n 'nbformat>=4.2',\n 'pytz',\n 'requests',\n", "issue": "list index out of range exception when importing graph_objs\nHi, I installed the latest version of plotly today (2.0.6) and ran into the following error with the first import line:\r\n\r\n```python\r\nimport plotly.graph_objs as go\r\n```\r\n\r\nIt gives me the following error:\r\n```python\r\n/usr/local/lib/python2.7/site-packages/plotly/__init__.py in <module>()\r\n 29 from __future__ import absolute_import\r\n 30\r\n---> 31 from plotly import (plotly, dashboard_objs, graph_objs, grid_objs, tools,\r\n 32 utils, session, offline, colors)\r\n 33 from plotly.version import __version__\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/plotly/__init__.py in <module>()\r\n 8\r\n 9 \"\"\"\r\n---> 10 from . plotly import (\r\n 11 sign_in,\r\n 12 update_plot_options,\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/plotly/plotly.py in <module>()\r\n 27 from requests.compat import json as _json\r\n 28\r\n---> 29 from plotly import exceptions, files, session, tools, utils\r\n 30 from plotly.api import v1, v2\r\n 31 from plotly.plotly import chunked_requests\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/tools.py in <module>()\r\n 58\r\n 59 ipython_core_display = optional_imports.get_module('IPython.core.display')\r\n---> 60 matplotlylib = optional_imports.get_module('plotly.matplotlylib')\r\n 61 sage_salvus = optional_imports.get_module('sage_salvus')\r\n 62\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/optional_imports.pyc in get_module(name)\r\n 21 if name not in _not_importable:\r\n 22 try:\r\n---> 23 return import_module(name)\r\n 24 except ImportError:\r\n 25 _not_importable.add(name)\r\n\r\n/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.pyc in import_module(name, package)\r\n 35 level += 1\r\n 36 name = _resolve_name(name[level:], package, level)\r\n---> 37 __import__(name)\r\n 38 return sys.modules[name]\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/matplotlylib/__init__.py in <module>()\r\n 12 from __future__ import absolute_import\r\n 13\r\n---> 14 from plotly.matplotlylib.renderer import PlotlyRenderer\r\n 15 from plotly.matplotlylib.mplexporter import Exporter\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/matplotlylib/renderer.py in <module>()\r\n 11 import warnings\r\n 12\r\n---> 13 import plotly.graph_objs as go\r\n 14 from plotly.matplotlylib.mplexporter import Renderer\r\n 15 from plotly.matplotlylib import mpltools\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/graph_objs/__init__.py in <module>()\r\n 12 from __future__ import absolute_import\r\n 13\r\n---> 14 from plotly.graph_objs.graph_objs import * # this is protected with __all__\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/graph_objs/graph_objs.py in <module>()\r\n 32 import six\r\n 33\r\n---> 34 from plotly import exceptions, graph_reference\r\n 35 from plotly.graph_objs import graph_objs_tools\r\n 36\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/graph_reference.py in <module>()\r\n 230\r\n 
231\r\n--> 232 @utils.memoize()\r\n 233 def _get_valid_attributes(object_name, parent_object_names):\r\n 234 attributes = get_attributes_dicts(object_name, parent_object_names)\r\n\r\n/usr/local/lib/python2.7/site-packages/plotly/utils.pyc in memoize(maxsize)\r\n 490 return result\r\n 491\r\n--> 492 return decorator(_memoize)\r\n\r\n/usr/local/lib/python2.7/site-packages/decorator.pyc in decorator(caller, _func)\r\n 256 callerfunc = caller\r\n 257 doc = caller.__doc__\r\n--> 258 fun = getfullargspec(callerfunc).args[0] # first arg\r\n 259 else: # assume caller is an object with a __call__ method\r\n 260 name = caller.__class__.__name__.lower()\r\n\r\nIndexError: list index out of range\r\n```\r\n\r\nPlease advise on how I can fix this.\n", "before_files": [{"content": "from setuptools import setup\n\nexec (open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n url='https://plot.ly/python/',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/api',\n 'plotly/api/v1',\n 'plotly/api/v2',\n 'plotly/dashboard_objs',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/figure_factory',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/offline',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n install_requires=['decorator',\n 'nbformat>=4.2',\n 'pytz',\n 'requests',\n 'six'],\n zip_safe=False)\n", "path": "setup.py"}, {"content": "__version__ = '2.0.8'\n", "path": "plotly/version.py"}]}
| 2,155 | 172 |
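As a quick aside on the plotly entry above: the accepted fix only tightens the `decorator` pin (and bumps the package version to 2.0.9), since older `decorator` releases trip the `getfullargspec(...).args[0]` IndexError inside `utils.memoize()`. Below is a stripped-down sketch of that dependency declaration; everything except `install_requires` is abbreviated, and the surrounding metadata is assumed rather than copied from the repository.

```python
# Minimal sketch of the dependency pin that resolves the IndexError above.
# Only install_requires matters for the fix; the real setup.py also declares
# packages, classifiers, package_data, and a long description.
from setuptools import setup

setup(
    name="plotly",
    version="2.0.9",
    install_requires=[
        "decorator>=4.0.6",  # older decorator releases break utils.memoize()
        "nbformat>=4.2",
        "pytz",
        "requests",
        "six",
    ],
)
```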
gh_patches_debug_12804
|
rasdani/github-patches
|
git_diff
|
pypa__virtualenv-2088
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use the builtin plugin classes unless another plugin class is specifically asked for.
**What's the problem this feature will solve?**
I have a private plugin for virtualenv where I add an additional discovery class. This discovery class adds a new argument to the command-line parser that is required, but only when that discovery class is chosen. However, I'm seeing an issue where using virtualenv via the command line as normal now asks for this argument. The reason seems to be that virtualenv picks a default discovery class in a non-deterministic way, and sometimes the additional discovery class is chosen as the default, so the argument becomes required. The default class is chosen depending on which entry point is discovered first, and I believe entry points give no guarantees about order of discovery.
The order of entry points discovery seems to change in different installs of virtualenv and the plugin, rather than changing in the same environment between different invocations of virtualenv.
I believe the problem will be the same for creators, seeders, and activators as well.
**Describe the solution you'd like**
I would expect the builtin discovery class to be chosen as the default discovery class unless explicitly set otherwise.
**Alternative Solutions**
These classes could have a priority set at the class level. The builtin classes would have a priority set such that a plugin class could opt to set its priority lower or higher than the builtins. virtualenv would then order these classes by their priority. Classes would be allowed to have the same priority, with the understanding that the order of classes with the same priority value would be non-deterministic.
</issue>
<code>
[start of src/virtualenv/run/plugin/discovery.py]
1 from __future__ import absolute_import, unicode_literals
2
3 from .base import PluginLoader
4
5
6 class Discovery(PluginLoader):
7 """"""
8
9
10 def get_discover(parser, args):
11 discover_types = Discovery.entry_points_for("virtualenv.discovery")
12 discovery_parser = parser.add_argument_group(
13 title="discovery",
14 description="discover and provide a target interpreter",
15 )
16 discovery_parser.add_argument(
17 "--discovery",
18 choices=_get_default_discovery(discover_types),
19 default=next(i for i in discover_types.keys()),
20 required=False,
21 help="interpreter discovery method",
22 )
23 options, _ = parser.parse_known_args(args)
24 discover_class = discover_types[options.discovery]
25 discover_class.add_parser_arguments(discovery_parser)
26 options, _ = parser.parse_known_args(args, namespace=options)
27 discover = discover_class(options)
28 return discover
29
30
31 def _get_default_discovery(discover_types):
32 return list(discover_types.keys())
33
[end of src/virtualenv/run/plugin/discovery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/virtualenv/run/plugin/discovery.py b/src/virtualenv/run/plugin/discovery.py
--- a/src/virtualenv/run/plugin/discovery.py
+++ b/src/virtualenv/run/plugin/discovery.py
@@ -13,10 +13,13 @@
title="discovery",
description="discover and provide a target interpreter",
)
+ choices = _get_default_discovery(discover_types)
+ # prefer the builtin if present, otherwise fallback to first defined type
+ choices = sorted(choices, key=lambda a: 0 if a == "builtin" else 1)
discovery_parser.add_argument(
"--discovery",
- choices=_get_default_discovery(discover_types),
- default=next(i for i in discover_types.keys()),
+ choices=choices,
+ default=next(iter(choices)),
required=False,
help="interpreter discovery method",
)
|
{"golden_diff": "diff --git a/src/virtualenv/run/plugin/discovery.py b/src/virtualenv/run/plugin/discovery.py\n--- a/src/virtualenv/run/plugin/discovery.py\n+++ b/src/virtualenv/run/plugin/discovery.py\n@@ -13,10 +13,13 @@\n title=\"discovery\",\n description=\"discover and provide a target interpreter\",\n )\n+ choices = _get_default_discovery(discover_types)\n+ # prefer the builtin if present, otherwise fallback to first defined type\n+ choices = sorted(choices, key=lambda a: 0 if a == \"builtin\" else 1)\n discovery_parser.add_argument(\n \"--discovery\",\n- choices=_get_default_discovery(discover_types),\n- default=next(i for i in discover_types.keys()),\n+ choices=choices,\n+ default=next(iter(choices)),\n required=False,\n help=\"interpreter discovery method\",\n )\n", "issue": "Use the builtin plugin classes unless another plugin class is specifically asked for.\n**What's the problem this feature will solve?**\r\n\r\nI have a private plugin for virtualenv where I add an additional discovery class. This discovery class adds a new argument to the command line parser that is required but only when the discovery class is chosen. However I'm seeing an issue where using virtualenv via the command line as normal is now asking for this argument. The reason seems to be that virtualenv is picking a default discovery class but in a non-deterministic way and sometimes the additional discovery class is chosen as the default discovery class and so the argument is required. The default class is chosen depending on which entry point is discovered first. I believe entry points give no guarantees about order of discovery.\r\n\r\nThe order of entry points discovery seems to change in different installs of virtualenv and the plugin, rather than changing in the same environment between different invocations of virtualenv.\r\n\r\nI believe the problem will be the same for creators, seeders, and activators as well.\r\n\r\n**Describe the solution you'd like**\r\n\r\nI would expect the builtin discovery class to be chosen as the default discovery class unless explicitly set otherwise.\r\n\r\n**Alternative Solutions**\r\n\r\nThese classes could have a priority set at the class level. The builtin classes would have a priority set such that a plugin class could opt to set it's priority lower or higher than the builtins. virtualenv would then order these classes by their priority. 
Classes would be allowed to have the same priority with the understanding that the order of classes with the same priority value would be non-deterministic.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom .base import PluginLoader\n\n\nclass Discovery(PluginLoader):\n \"\"\"\"\"\"\n\n\ndef get_discover(parser, args):\n discover_types = Discovery.entry_points_for(\"virtualenv.discovery\")\n discovery_parser = parser.add_argument_group(\n title=\"discovery\",\n description=\"discover and provide a target interpreter\",\n )\n discovery_parser.add_argument(\n \"--discovery\",\n choices=_get_default_discovery(discover_types),\n default=next(i for i in discover_types.keys()),\n required=False,\n help=\"interpreter discovery method\",\n )\n options, _ = parser.parse_known_args(args)\n discover_class = discover_types[options.discovery]\n discover_class.add_parser_arguments(discovery_parser)\n options, _ = parser.parse_known_args(args, namespace=options)\n discover = discover_class(options)\n return discover\n\n\ndef _get_default_discovery(discover_types):\n return list(discover_types.keys())\n", "path": "src/virtualenv/run/plugin/discovery.py"}]}
| 1,133 | 195 |
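A standalone sketch of the "builtin first" ordering used in the virtualenv patch above is shown below. The `discover_types` dictionary stands in for the real entry-point mapping and the plugin name is made up; only the sorting idea mirrors the actual fix.

```python
# Stand-in for Discovery.entry_points_for("virtualenv.discovery"); the plugin
# name is hypothetical and the dict order mimics non-deterministic discovery.
discover_types = {"my_company_discovery": object(), "builtin": object()}

choices = list(discover_types.keys())
# Prefer the builtin discovery class if present, otherwise keep the first entry.
choices = sorted(choices, key=lambda name: 0 if name == "builtin" else 1)
default = next(iter(choices))

print(choices)  # ['builtin', 'my_company_discovery']
print(default)  # 'builtin', regardless of entry-point discovery order
```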
gh_patches_debug_2032
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.general-7875
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
community.general.incus connection not working as inventory_hostname treated as literal
### Summary
In my environment I am connecting to an incus server via a remote client on OSX. Ansible, running on the OSX machine, is utilizing roles, and gets the inventory_hostname from the filename under the host_vars directory. I suspect this environment is causing inventory_hostname to be treated as a literal. A very similar bug was fixed in community.general.lxd and can be found here: https://github.com/ansible-collections/community.general/pull/4912
I have already implemented the solution and will submit a pull request.
### Issue Type
Bug Report
### Component Name
incus.py connection plugin
### Ansible Version
```console (paste below)
ansible [core 2.16.2]
config file = /Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg
configured module search path = ['/Users/travis/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/lib/python3.11/site-packages/ansible
ansible collection location = /Users/travis/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)] (/opt/homebrew/opt/[email protected]/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Community.general Version
```console (paste below)
$ ansible-galaxy collection list community.general
# /Users/travis/.ansible/collections/ansible_collections
Collection Version
----------------- -------
community.general 8.2.0
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
CONFIG_FILE() = /Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg
DEFAULT_HASH_BEHAVIOUR(/Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg) = merge
DEFAULT_HOST_LIST(/Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg) = ['/Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/inventory.ini']
EDITOR(env: EDITOR) = emacs
HOST_KEY_CHECKING(/Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg) = False
```
### OS / Environment
client: OSX
server: Ubuntu 22.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
# host_var file named IzumaMercury.yaml
ansible_connection: community.general.incus
ansible_user: root
ansible_become: no
ansible_incus_remote: IzumaExplorer
```
### Expected Results
ansible-playbook -i inventories/tests/moffett.yaml setup_izuma_networks_vm_controllers_workers.yml
PLAY [vm_controllers] ****************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************
ok: [IzumaMercury]
### Actual Results
```console (paste below)
ansible-playbook -i inventories/tests/moffett.yaml setup_izuma_networks_vm_controllers_workers.yml
PLAY [vm_controllers] ****************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************
[WARNING]: The "community.general.incus" connection plugin has an improperly configured remote target value, forcing
"inventory_hostname" templated value instead of the string
fatal: [IzumaMercury]: UNREACHABLE! => {"changed": false, "msg": "instance not found: inventory_hostname", "unreachable": true}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/connection/incus.py]
1 # -*- coding: utf-8 -*-
2 # Based on lxd.py (c) 2016, Matt Clay <[email protected]>
3 # (c) 2023, Stephane Graber <[email protected]>
4 # Copyright (c) 2023 Ansible Project
5 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
6 # SPDX-License-Identifier: GPL-3.0-or-later
7
8 from __future__ import (absolute_import, division, print_function)
9 __metaclass__ = type
10
11 DOCUMENTATION = """
 12     author: Stéphane Graber (@stgraber)
13 name: incus
14 short_description: Run tasks in Incus instances via the Incus CLI.
15 description:
16 - Run commands or put/fetch files to an existing Incus instance using Incus CLI.
17 version_added: "8.2.0"
18 options:
19 remote_addr:
20 description:
21 - The instance identifier.
22 default: inventory_hostname
23 vars:
24 - name: ansible_host
25 - name: ansible_incus_host
26 executable:
27 description:
28 - The shell to use for execution inside the instance.
29 default: /bin/sh
30 vars:
31 - name: ansible_executable
32 - name: ansible_incus_executable
33 remote:
34 description:
35 - The name of the Incus remote to use (per C(incus remote list)).
36 - Remotes are used to access multiple servers from a single client.
37 default: local
38 vars:
39 - name: ansible_incus_remote
40 project:
41 description:
42 - The name of the Incus project to use (per C(incus project list)).
43 - Projects are used to divide the instances running on a server.
44 default: default
45 vars:
46 - name: ansible_incus_project
47 """
48
49 import os
50 from subprocess import call, Popen, PIPE
51
52 from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
53 from ansible.module_utils.common.process import get_bin_path
54 from ansible.module_utils._text import to_bytes, to_text
55 from ansible.plugins.connection import ConnectionBase
56
57
58 class Connection(ConnectionBase):
59 """ Incus based connections """
60
61 transport = "incus"
62 has_pipelining = True
63 default_user = 'root'
64
65 def __init__(self, play_context, new_stdin, *args, **kwargs):
66 super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
67
68 self._incus_cmd = get_bin_path("incus")
69
70 if not self._incus_cmd:
71 raise AnsibleError("incus command not found in PATH")
72
73 def _connect(self):
74 """connect to Incus (nothing to do here) """
75 super(Connection, self)._connect()
76
77 if not self._connected:
78 self._display.vvv(u"ESTABLISH Incus CONNECTION FOR USER: root",
79 host=self._instance())
80 self._connected = True
81
82 def _instance(self):
83 # Return only the leading part of the FQDN as the instance name
84 # as Incus instance names cannot be a FQDN.
85 return self.get_option('remote_addr').split(".")[0]
86
87 def exec_command(self, cmd, in_data=None, sudoable=True):
88 """ execute a command on the Incus host """
89 super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
90
91 self._display.vvv(u"EXEC {0}".format(cmd),
92 host=self._instance())
93
94 local_cmd = [
95 self._incus_cmd,
96 "--project", self.get_option("project"),
97 "exec",
98 "%s:%s" % (self.get_option("remote"), self._instance()),
99 "--",
100 self._play_context.executable, "-c", cmd]
101
102 local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
103 in_data = to_bytes(in_data, errors='surrogate_or_strict', nonstring='passthru')
104
105 process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
106 stdout, stderr = process.communicate(in_data)
107
108 stdout = to_text(stdout)
109 stderr = to_text(stderr)
110
111 if stderr == "Error: Instance is not running.\n":
112 raise AnsibleConnectionFailure("instance not running: %s" %
113 self._instance())
114
115 if stderr == "Error: Instance not found\n":
116 raise AnsibleConnectionFailure("instance not found: %s" %
117 self._instance())
118
119 return process.returncode, stdout, stderr
120
121 def put_file(self, in_path, out_path):
122 """ put a file from local to Incus """
123 super(Connection, self).put_file(in_path, out_path)
124
125 self._display.vvv(u"PUT {0} TO {1}".format(in_path, out_path),
126 host=self._instance())
127
128 if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):
129 raise AnsibleFileNotFound("input path is not a file: %s" % in_path)
130
131 local_cmd = [
132 self._incus_cmd,
133 "--project", self.get_option("project"),
134 "file", "push", "--quiet",
135 in_path,
136 "%s:%s/%s" % (self.get_option("remote"),
137 self._instance(),
138 out_path)]
139
140 local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
141
142 call(local_cmd)
143
144 def fetch_file(self, in_path, out_path):
145 """ fetch a file from Incus to local """
146 super(Connection, self).fetch_file(in_path, out_path)
147
148 self._display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path),
149 host=self._instance())
150
151 local_cmd = [
152 self._incus_cmd,
153 "--project", self.get_option("project"),
154 "file", "pull", "--quiet",
155 "%s:%s/%s" % (self.get_option("remote"),
156 self._instance(),
157 in_path),
158 out_path]
159
160 local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
161
162 call(local_cmd)
163
164 def close(self):
165 """ close the connection (nothing to do here) """
166 super(Connection, self).close()
167
168 self._connected = False
169
[end of plugins/connection/incus.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/connection/incus.py b/plugins/connection/incus.py
--- a/plugins/connection/incus.py
+++ b/plugins/connection/incus.py
@@ -21,6 +21,7 @@
- The instance identifier.
default: inventory_hostname
vars:
+ - name: inventory_hostname
- name: ansible_host
- name: ansible_incus_host
executable:
|
{"golden_diff": "diff --git a/plugins/connection/incus.py b/plugins/connection/incus.py\n--- a/plugins/connection/incus.py\n+++ b/plugins/connection/incus.py\n@@ -21,6 +21,7 @@\n - The instance identifier.\n default: inventory_hostname\n vars:\n+ - name: inventory_hostname\n - name: ansible_host\n - name: ansible_incus_host\n executable:\n", "issue": "community.general.incus connection not working as inventory_hostname treated as litteral\n### Summary\n\nIn my environment I am connecting to an incus server via a remote client on OSX. Ansible, running on the OSX machine is utilizing roles, and gets the inventory_hostname from the filename under the host_vars directory. I suspect this environment is causing inventory_hostname to be treated as a litteral. A very similar bug was fixed community.general.lxd and be found here: https://github.com/ansible-collections/community.general/pull/4912\r\n\r\nI have already implemented the solution and will submit a pull request.\r\n\r\n\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nincus.py connection plugin\n\n### Ansible Version\n\n```console (paste below)\r\nansible [core 2.16.2]\r\n config file = /Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg\r\n configured module search path = ['/Users/travis/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /opt/homebrew/lib/python3.11/site-packages/ansible\r\n ansible collection location = /Users/travis/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /opt/homebrew/bin/ansible\r\n python version = 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)] (/opt/homebrew/opt/[email protected]/bin/python3.11)\r\n jinja version = 3.1.2\r\n libyaml = True\r\n\r\n```\r\n\n\n### Community.general Version\n\n```console (paste below)\r\n$ ansible-galaxy collection list community.general\r\n# /Users/travis/.ansible/collections/ansible_collections\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 8.2.0\r\n```\r\n\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\nCONFIG_FILE() = /Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg\r\nDEFAULT_HASH_BEHAVIOUR(/Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg) = merge\r\nDEFAULT_HOST_LIST(/Users/travis/workspace//IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg) = ['/Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg) = ['/Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/inventory.ini']\r\nEDITOR(env: EDITOR) = emacs\r\nHOST_KEY_CHECKING(/Users/travis/workspace/IZUMANETWORKS/siteinfra/Ansible/work/ansible.cfg) = False\r\n\r\n```\r\n\n\n### OS / Environment\n\nclient: OSX\r\nserver: Ubuntu 22.04\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n# host_var file named IzumaMercury.yaml\r\nansible_connection: community.general.incus\r\nansible_user: root\r\nansible_become: no\r\nansible_incus_remote: IzumaExplorer\r\n```\r\n\n\n### Expected Results\n\nansible-playbook -i inventories/tests/moffett.yaml setup_izuma_networks_vm_controllers_workers.yml\r\n\r\nPLAY [vm_controllers] ****************************************************************************************************\r\n\r\nTASK [Gathering Facts] ***************************************************************************************************\r\nok: [IzumaMercury]\r\n\n\n### Actual 
Results\n\n```console (paste below)\r\nansible-playbook -i inventories/tests/moffett.yaml setup_izuma_networks_vm_controllers_workers.yml\r\n\r\nPLAY [vm_controllers] ****************************************************************************************************\r\n\r\nTASK [Gathering Facts] ***************************************************************************************************\r\n[WARNING]: The \"community.general.incus\" connection plugin has an improperly configured remote target value, forcing\r\n\"inventory_hostname\" templated value instead of the string\r\nfatal: [IzumaMercury]: UNREACHABLE! => {\"changed\": false, \"msg\": \"instance not found: inventory_hostname\", \"unreachable\": true}\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Based on lxd.py (c) 2016, Matt Clay <[email protected]>\n# (c) 2023, Stephane Graber <[email protected]>\n# Copyright (c) 2023 Ansible Project\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = \"\"\"\n author: St\u00e9phane Graber (@stgraber)\n name: incus\n short_description: Run tasks in Incus instances via the Incus CLI.\n description:\n - Run commands or put/fetch files to an existing Incus instance using Incus CLI.\n version_added: \"8.2.0\"\n options:\n remote_addr:\n description:\n - The instance identifier.\n default: inventory_hostname\n vars:\n - name: ansible_host\n - name: ansible_incus_host\n executable:\n description:\n - The shell to use for execution inside the instance.\n default: /bin/sh\n vars:\n - name: ansible_executable\n - name: ansible_incus_executable\n remote:\n description:\n - The name of the Incus remote to use (per C(incus remote list)).\n - Remotes are used to access multiple servers from a single client.\n default: local\n vars:\n - name: ansible_incus_remote\n project:\n description:\n - The name of the Incus project to use (per C(incus project list)).\n - Projects are used to divide the instances running on a server.\n default: default\n vars:\n - name: ansible_incus_project\n\"\"\"\n\nimport os\nfrom subprocess import call, Popen, PIPE\n\nfrom ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound\nfrom ansible.module_utils.common.process import get_bin_path\nfrom ansible.module_utils._text import to_bytes, to_text\nfrom ansible.plugins.connection import ConnectionBase\n\n\nclass Connection(ConnectionBase):\n \"\"\" Incus based connections \"\"\"\n\n transport = \"incus\"\n has_pipelining = True\n default_user = 'root'\n\n def __init__(self, play_context, new_stdin, *args, **kwargs):\n super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)\n\n self._incus_cmd = get_bin_path(\"incus\")\n\n if not self._incus_cmd:\n raise AnsibleError(\"incus command not found in PATH\")\n\n def _connect(self):\n \"\"\"connect to Incus (nothing to do here) \"\"\"\n super(Connection, self)._connect()\n\n if not self._connected:\n self._display.vvv(u\"ESTABLISH Incus CONNECTION FOR USER: root\",\n host=self._instance())\n self._connected = True\n\n def _instance(self):\n # Return only the leading part of the FQDN as the instance name\n # as Incus instance names cannot be a FQDN.\n return self.get_option('remote_addr').split(\".\")[0]\n\n def 
exec_command(self, cmd, in_data=None, sudoable=True):\n \"\"\" execute a command on the Incus host \"\"\"\n super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)\n\n self._display.vvv(u\"EXEC {0}\".format(cmd),\n host=self._instance())\n\n local_cmd = [\n self._incus_cmd,\n \"--project\", self.get_option(\"project\"),\n \"exec\",\n \"%s:%s\" % (self.get_option(\"remote\"), self._instance()),\n \"--\",\n self._play_context.executable, \"-c\", cmd]\n\n local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]\n in_data = to_bytes(in_data, errors='surrogate_or_strict', nonstring='passthru')\n\n process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n stdout, stderr = process.communicate(in_data)\n\n stdout = to_text(stdout)\n stderr = to_text(stderr)\n\n if stderr == \"Error: Instance is not running.\\n\":\n raise AnsibleConnectionFailure(\"instance not running: %s\" %\n self._instance())\n\n if stderr == \"Error: Instance not found\\n\":\n raise AnsibleConnectionFailure(\"instance not found: %s\" %\n self._instance())\n\n return process.returncode, stdout, stderr\n\n def put_file(self, in_path, out_path):\n \"\"\" put a file from local to Incus \"\"\"\n super(Connection, self).put_file(in_path, out_path)\n\n self._display.vvv(u\"PUT {0} TO {1}\".format(in_path, out_path),\n host=self._instance())\n\n if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):\n raise AnsibleFileNotFound(\"input path is not a file: %s\" % in_path)\n\n local_cmd = [\n self._incus_cmd,\n \"--project\", self.get_option(\"project\"),\n \"file\", \"push\", \"--quiet\",\n in_path,\n \"%s:%s/%s\" % (self.get_option(\"remote\"),\n self._instance(),\n out_path)]\n\n local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]\n\n call(local_cmd)\n\n def fetch_file(self, in_path, out_path):\n \"\"\" fetch a file from Incus to local \"\"\"\n super(Connection, self).fetch_file(in_path, out_path)\n\n self._display.vvv(u\"FETCH {0} TO {1}\".format(in_path, out_path),\n host=self._instance())\n\n local_cmd = [\n self._incus_cmd,\n \"--project\", self.get_option(\"project\"),\n \"file\", \"pull\", \"--quiet\",\n \"%s:%s/%s\" % (self.get_option(\"remote\"),\n self._instance(),\n in_path),\n out_path]\n\n local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]\n\n call(local_cmd)\n\n def close(self):\n \"\"\" close the connection (nothing to do here) \"\"\"\n super(Connection, self).close()\n\n self._connected = False\n", "path": "plugins/connection/incus.py"}]}
| 3,257 | 86 |
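The incus fix above is a one-line documentation change, but its effect is easier to see as a variable-precedence sketch. The lookup below is a simplification of how Ansible resolves a connection option from host variables once `inventory_hostname` is listed in the plugin's `vars`; the host data and the linear precedence scan are assumptions for illustration, not Ansible's real resolver.

```python
# Simplified view of how remote_addr is resolved after the patch. Host data is
# made up; Ansible's actual precedence rules are more involved than this scan.
host_vars = {"inventory_hostname": "IzumaMercury"}  # no ansible_host / ansible_incus_host set

candidates = ["ansible_incus_host", "ansible_host", "inventory_hostname"]
remote_addr = next(
    (host_vars[name] for name in candidates if name in host_vars),
    "inventory_hostname",  # the literal default string that triggered the bug
)

print(remote_addr.split(".")[0])  # 'IzumaMercury' instead of 'inventory_hostname'
```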
gh_patches_debug_14349
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-2132
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for confluent-kafka v2.3
**Is your feature request related to a problem?**
We've recently upgraded our confluent-kafka Python package to v2.3, but this version is not yet supported by the instrumentor.
**Describe the solution you'd like**
confluent-kafka version 2.3.x is supported by the instrumentor.
</issue>
<code>
[start of opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # DO NOT EDIT. THIS FILE WAS AUTOGENERATED FROM INSTRUMENTATION PACKAGES.
16 # RUN `python scripts/generate_instrumentation_bootstrap.py` TO REGENERATE.
17
18 libraries = [
19 {
20 "library": "aio_pika >= 7.2.0, < 10.0.0",
21 "instrumentation": "opentelemetry-instrumentation-aio-pika==0.44b0.dev",
22 },
23 {
24 "library": "aiohttp ~= 3.0",
25 "instrumentation": "opentelemetry-instrumentation-aiohttp-client==0.44b0.dev",
26 },
27 {
28 "library": "aiohttp ~= 3.0",
29 "instrumentation": "opentelemetry-instrumentation-aiohttp-server==0.44b0.dev",
30 },
31 {
32 "library": "aiopg >= 0.13.0, < 2.0.0",
33 "instrumentation": "opentelemetry-instrumentation-aiopg==0.44b0.dev",
34 },
35 {
36 "library": "asgiref ~= 3.0",
37 "instrumentation": "opentelemetry-instrumentation-asgi==0.44b0.dev",
38 },
39 {
40 "library": "asyncpg >= 0.12.0",
41 "instrumentation": "opentelemetry-instrumentation-asyncpg==0.44b0.dev",
42 },
43 {
44 "library": "boto~=2.0",
45 "instrumentation": "opentelemetry-instrumentation-boto==0.44b0.dev",
46 },
47 {
48 "library": "boto3 ~= 1.0",
49 "instrumentation": "opentelemetry-instrumentation-boto3sqs==0.44b0.dev",
50 },
51 {
52 "library": "botocore ~= 1.0",
53 "instrumentation": "opentelemetry-instrumentation-botocore==0.44b0.dev",
54 },
55 {
56 "library": "cassandra-driver ~= 3.25",
57 "instrumentation": "opentelemetry-instrumentation-cassandra==0.44b0.dev",
58 },
59 {
60 "library": "scylla-driver ~= 3.25",
61 "instrumentation": "opentelemetry-instrumentation-cassandra==0.44b0.dev",
62 },
63 {
64 "library": "celery >= 4.0, < 6.0",
65 "instrumentation": "opentelemetry-instrumentation-celery==0.44b0.dev",
66 },
67 {
68 "library": "confluent-kafka >= 1.8.2, <= 2.2.0",
69 "instrumentation": "opentelemetry-instrumentation-confluent-kafka==0.44b0.dev",
70 },
71 {
72 "library": "django >= 1.10",
73 "instrumentation": "opentelemetry-instrumentation-django==0.44b0.dev",
74 },
75 {
76 "library": "elasticsearch >= 2.0",
77 "instrumentation": "opentelemetry-instrumentation-elasticsearch==0.44b0.dev",
78 },
79 {
80 "library": "falcon >= 1.4.1, < 3.1.2",
81 "instrumentation": "opentelemetry-instrumentation-falcon==0.44b0.dev",
82 },
83 {
84 "library": "fastapi ~= 0.58",
85 "instrumentation": "opentelemetry-instrumentation-fastapi==0.44b0.dev",
86 },
87 {
88 "library": "werkzeug < 3.0.0",
89 "instrumentation": "opentelemetry-instrumentation-flask==0.44b0.dev",
90 },
91 {
92 "library": "flask >= 1.0",
93 "instrumentation": "opentelemetry-instrumentation-flask==0.44b0.dev",
94 },
95 {
96 "library": "grpcio ~= 1.27",
97 "instrumentation": "opentelemetry-instrumentation-grpc==0.44b0.dev",
98 },
99 {
100 "library": "httpx >= 0.18.0",
101 "instrumentation": "opentelemetry-instrumentation-httpx==0.44b0.dev",
102 },
103 {
104 "library": "jinja2 >= 2.7, < 4.0",
105 "instrumentation": "opentelemetry-instrumentation-jinja2==0.44b0.dev",
106 },
107 {
108 "library": "kafka-python >= 2.0",
109 "instrumentation": "opentelemetry-instrumentation-kafka-python==0.44b0.dev",
110 },
111 {
112 "library": "mysql-connector-python ~= 8.0",
113 "instrumentation": "opentelemetry-instrumentation-mysql==0.44b0.dev",
114 },
115 {
116 "library": "mysqlclient < 3",
117 "instrumentation": "opentelemetry-instrumentation-mysqlclient==0.44b0.dev",
118 },
119 {
120 "library": "pika >= 0.12.0",
121 "instrumentation": "opentelemetry-instrumentation-pika==0.44b0.dev",
122 },
123 {
124 "library": "psycopg2 >= 2.7.3.1",
125 "instrumentation": "opentelemetry-instrumentation-psycopg2==0.44b0.dev",
126 },
127 {
128 "library": "pymemcache >= 1.3.5, < 5",
129 "instrumentation": "opentelemetry-instrumentation-pymemcache==0.44b0.dev",
130 },
131 {
132 "library": "pymongo >= 3.1, < 5.0",
133 "instrumentation": "opentelemetry-instrumentation-pymongo==0.44b0.dev",
134 },
135 {
136 "library": "PyMySQL < 2",
137 "instrumentation": "opentelemetry-instrumentation-pymysql==0.44b0.dev",
138 },
139 {
140 "library": "pyramid >= 1.7",
141 "instrumentation": "opentelemetry-instrumentation-pyramid==0.44b0.dev",
142 },
143 {
144 "library": "redis >= 2.6",
145 "instrumentation": "opentelemetry-instrumentation-redis==0.44b0.dev",
146 },
147 {
148 "library": "remoulade >= 0.50",
149 "instrumentation": "opentelemetry-instrumentation-remoulade==0.44b0.dev",
150 },
151 {
152 "library": "requests ~= 2.0",
153 "instrumentation": "opentelemetry-instrumentation-requests==0.44b0.dev",
154 },
155 {
156 "library": "scikit-learn ~= 0.24.0",
157 "instrumentation": "opentelemetry-instrumentation-sklearn==0.44b0.dev",
158 },
159 {
160 "library": "sqlalchemy",
161 "instrumentation": "opentelemetry-instrumentation-sqlalchemy==0.44b0.dev",
162 },
163 {
164 "library": "starlette ~= 0.13.0",
165 "instrumentation": "opentelemetry-instrumentation-starlette==0.44b0.dev",
166 },
167 {
168 "library": "psutil >= 5",
169 "instrumentation": "opentelemetry-instrumentation-system-metrics==0.44b0.dev",
170 },
171 {
172 "library": "tornado >= 5.1.1",
173 "instrumentation": "opentelemetry-instrumentation-tornado==0.44b0.dev",
174 },
175 {
176 "library": "tortoise-orm >= 0.17.0",
177 "instrumentation": "opentelemetry-instrumentation-tortoiseorm==0.44b0.dev",
178 },
179 {
180 "library": "pydantic >= 1.10.2",
181 "instrumentation": "opentelemetry-instrumentation-tortoiseorm==0.44b0.dev",
182 },
183 {
184 "library": "urllib3 >= 1.0.0, < 3.0.0",
185 "instrumentation": "opentelemetry-instrumentation-urllib3==0.44b0.dev",
186 },
187 ]
188 default_instrumentations = [
189 "opentelemetry-instrumentation-aws-lambda==0.44b0.dev",
190 "opentelemetry-instrumentation-dbapi==0.44b0.dev",
191 "opentelemetry-instrumentation-logging==0.44b0.dev",
192 "opentelemetry-instrumentation-sqlite3==0.44b0.dev",
193 "opentelemetry-instrumentation-urllib==0.44b0.dev",
194 "opentelemetry-instrumentation-wsgi==0.44b0.dev",
195 ]
196
[end of opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py]
[start of instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 _instruments = ("confluent-kafka >= 1.8.2, <= 2.2.0",)
17
[end of instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py b/instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py
--- a/instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py
+++ b/instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py
@@ -13,4 +13,4 @@
# limitations under the License.
-_instruments = ("confluent-kafka >= 1.8.2, <= 2.2.0",)
+_instruments = ("confluent-kafka >= 1.8.2, <= 2.3.0",)
diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
@@ -65,7 +65,7 @@
"instrumentation": "opentelemetry-instrumentation-celery==0.44b0.dev",
},
{
- "library": "confluent-kafka >= 1.8.2, <= 2.2.0",
+ "library": "confluent-kafka >= 1.8.2, <= 2.3.0",
"instrumentation": "opentelemetry-instrumentation-confluent-kafka==0.44b0.dev",
},
{
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py b/instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py\n--- a/instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py\n+++ b/instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py\n@@ -13,4 +13,4 @@\n # limitations under the License.\n \n \n-_instruments = (\"confluent-kafka >= 1.8.2, <= 2.2.0\",)\n+_instruments = (\"confluent-kafka >= 1.8.2, <= 2.3.0\",)\ndiff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n@@ -65,7 +65,7 @@\n \"instrumentation\": \"opentelemetry-instrumentation-celery==0.44b0.dev\",\n },\n {\n- \"library\": \"confluent-kafka >= 1.8.2, <= 2.2.0\",\n+ \"library\": \"confluent-kafka >= 1.8.2, <= 2.3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-confluent-kafka==0.44b0.dev\",\n },\n {\n", "issue": "Add support for confluent-kafka v2.3\n\r\n**Is your feature request related to a problem?**\r\nWe've recently upgraded our confluent-kafka python version to v2.3. But this version is not yet supported by the instrumentor.\r\n\r\n**Describe the solution you'd like**\r\nConfluent kafka version 2.3.x is supported by the instrumentor\r\n\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# DO NOT EDIT. 
THIS FILE WAS AUTOGENERATED FROM INSTRUMENTATION PACKAGES.\n# RUN `python scripts/generate_instrumentation_bootstrap.py` TO REGENERATE.\n\nlibraries = [\n {\n \"library\": \"aio_pika >= 7.2.0, < 10.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aio-pika==0.44b0.dev\",\n },\n {\n \"library\": \"aiohttp ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiohttp-client==0.44b0.dev\",\n },\n {\n \"library\": \"aiohttp ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiohttp-server==0.44b0.dev\",\n },\n {\n \"library\": \"aiopg >= 0.13.0, < 2.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiopg==0.44b0.dev\",\n },\n {\n \"library\": \"asgiref ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-asgi==0.44b0.dev\",\n },\n {\n \"library\": \"asyncpg >= 0.12.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-asyncpg==0.44b0.dev\",\n },\n {\n \"library\": \"boto~=2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-boto==0.44b0.dev\",\n },\n {\n \"library\": \"boto3 ~= 1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-boto3sqs==0.44b0.dev\",\n },\n {\n \"library\": \"botocore ~= 1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-botocore==0.44b0.dev\",\n },\n {\n \"library\": \"cassandra-driver ~= 3.25\",\n \"instrumentation\": \"opentelemetry-instrumentation-cassandra==0.44b0.dev\",\n },\n {\n \"library\": \"scylla-driver ~= 3.25\",\n \"instrumentation\": \"opentelemetry-instrumentation-cassandra==0.44b0.dev\",\n },\n {\n \"library\": \"celery >= 4.0, < 6.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-celery==0.44b0.dev\",\n },\n {\n \"library\": \"confluent-kafka >= 1.8.2, <= 2.2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-confluent-kafka==0.44b0.dev\",\n },\n {\n \"library\": \"django >= 1.10\",\n \"instrumentation\": \"opentelemetry-instrumentation-django==0.44b0.dev\",\n },\n {\n \"library\": \"elasticsearch >= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-elasticsearch==0.44b0.dev\",\n },\n {\n \"library\": \"falcon >= 1.4.1, < 3.1.2\",\n \"instrumentation\": \"opentelemetry-instrumentation-falcon==0.44b0.dev\",\n },\n {\n \"library\": \"fastapi ~= 0.58\",\n \"instrumentation\": \"opentelemetry-instrumentation-fastapi==0.44b0.dev\",\n },\n {\n \"library\": \"werkzeug < 3.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-flask==0.44b0.dev\",\n },\n {\n \"library\": \"flask >= 1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-flask==0.44b0.dev\",\n },\n {\n \"library\": \"grpcio ~= 1.27\",\n \"instrumentation\": \"opentelemetry-instrumentation-grpc==0.44b0.dev\",\n },\n {\n \"library\": \"httpx >= 0.18.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-httpx==0.44b0.dev\",\n },\n {\n \"library\": \"jinja2 >= 2.7, < 4.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-jinja2==0.44b0.dev\",\n },\n {\n \"library\": \"kafka-python >= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-kafka-python==0.44b0.dev\",\n },\n {\n \"library\": \"mysql-connector-python ~= 8.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-mysql==0.44b0.dev\",\n },\n {\n \"library\": \"mysqlclient < 3\",\n \"instrumentation\": \"opentelemetry-instrumentation-mysqlclient==0.44b0.dev\",\n },\n {\n \"library\": \"pika >= 0.12.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-pika==0.44b0.dev\",\n },\n {\n \"library\": \"psycopg2 >= 2.7.3.1\",\n \"instrumentation\": 
\"opentelemetry-instrumentation-psycopg2==0.44b0.dev\",\n },\n {\n \"library\": \"pymemcache >= 1.3.5, < 5\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymemcache==0.44b0.dev\",\n },\n {\n \"library\": \"pymongo >= 3.1, < 5.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymongo==0.44b0.dev\",\n },\n {\n \"library\": \"PyMySQL < 2\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymysql==0.44b0.dev\",\n },\n {\n \"library\": \"pyramid >= 1.7\",\n \"instrumentation\": \"opentelemetry-instrumentation-pyramid==0.44b0.dev\",\n },\n {\n \"library\": \"redis >= 2.6\",\n \"instrumentation\": \"opentelemetry-instrumentation-redis==0.44b0.dev\",\n },\n {\n \"library\": \"remoulade >= 0.50\",\n \"instrumentation\": \"opentelemetry-instrumentation-remoulade==0.44b0.dev\",\n },\n {\n \"library\": \"requests ~= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-requests==0.44b0.dev\",\n },\n {\n \"library\": \"scikit-learn ~= 0.24.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-sklearn==0.44b0.dev\",\n },\n {\n \"library\": \"sqlalchemy\",\n \"instrumentation\": \"opentelemetry-instrumentation-sqlalchemy==0.44b0.dev\",\n },\n {\n \"library\": \"starlette ~= 0.13.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-starlette==0.44b0.dev\",\n },\n {\n \"library\": \"psutil >= 5\",\n \"instrumentation\": \"opentelemetry-instrumentation-system-metrics==0.44b0.dev\",\n },\n {\n \"library\": \"tornado >= 5.1.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-tornado==0.44b0.dev\",\n },\n {\n \"library\": \"tortoise-orm >= 0.17.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-tortoiseorm==0.44b0.dev\",\n },\n {\n \"library\": \"pydantic >= 1.10.2\",\n \"instrumentation\": \"opentelemetry-instrumentation-tortoiseorm==0.44b0.dev\",\n },\n {\n \"library\": \"urllib3 >= 1.0.0, < 3.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-urllib3==0.44b0.dev\",\n },\n]\ndefault_instrumentations = [\n \"opentelemetry-instrumentation-aws-lambda==0.44b0.dev\",\n \"opentelemetry-instrumentation-dbapi==0.44b0.dev\",\n \"opentelemetry-instrumentation-logging==0.44b0.dev\",\n \"opentelemetry-instrumentation-sqlite3==0.44b0.dev\",\n \"opentelemetry-instrumentation-urllib==0.44b0.dev\",\n \"opentelemetry-instrumentation-wsgi==0.44b0.dev\",\n]\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n_instruments = (\"confluent-kafka >= 1.8.2, <= 2.2.0\",)\n", "path": "instrumentation/opentelemetry-instrumentation-confluent-kafka/src/opentelemetry/instrumentation/confluent_kafka/package.py"}]}
| 3,498 | 397 |
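Since the confluent-kafka change above is purely a version-range bump, a small check with the `packaging` library shows what the new `_instruments` constraint accepts. The specifier string mirrors the patched value, but using `packaging` here is only an illustration of the range, not how the instrumentor actually validates installed versions.

```python
# Illustrative check of the bumped constraint; OpenTelemetry's own dependency
# gating is not reproduced here.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

supported = SpecifierSet(">=1.8.2,<=2.3.0")

print(Version("2.2.0") in supported)  # True (already supported before the patch)
print(Version("2.3.0") in supported)  # True after the patch
print(Version("2.4.0") in supported)  # False, would need another bump
```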
gh_patches_debug_32630
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-6689
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing application names under 'Apps with most usage'
**Describe the bug**
After upgrading to 2023.8.1, the 'Apps with most usage' table no longer shows any application names.
**To Reproduce**
Steps to reproduce the behavior:
1. Log in with an administrator account
2. Go to the Admin Interface
3. See 'Apps with most usage'. The table will be present, with correct data, but the `Application` column is empty
**Expected behavior**
The `Application` column should contain the correct application names.
**Screenshots**
<img width="402" alt="Screenshot 2023-08-30 at 11 52 14" src="https://github.com/goauthentik/authentik/assets/1585352/d71ffa8b-e055-4161-9210-c6711fff0a92">
**Logs**
N/A
**Version and Deployment:**
- authentik version: 2023.8.1
- Deployment: Docker Compose
**Additional context**
The underlying cause seems to be a serialization error in the `/api/v3/events/events/top_per_user/?action=authorize_application&top_n=11` endpoint. The `application` field is serialized as a string, rather than an object, as shown in the following screenshot:
<img width="729" alt="Screenshot 2023-08-30 at 11 56 35" src="https://github.com/goauthentik/authentik/assets/1585352/5315f79d-9952-496a-b525-9981884154fb">
</issue>
<code>
[start of authentik/events/api/events.py]
1 """Events API Views"""
2 from datetime import timedelta
3 from json import loads
4
5 import django_filters
6 from django.db.models.aggregates import Count
7 from django.db.models.fields.json import KeyTextTransform
8 from django.db.models.functions import ExtractDay
9 from drf_spectacular.types import OpenApiTypes
10 from drf_spectacular.utils import OpenApiParameter, extend_schema
11 from guardian.shortcuts import get_objects_for_user
12 from rest_framework.decorators import action
13 from rest_framework.fields import DictField, IntegerField
14 from rest_framework.request import Request
15 from rest_framework.response import Response
16 from rest_framework.serializers import ModelSerializer
17 from rest_framework.viewsets import ModelViewSet
18
19 from authentik.admin.api.metrics import CoordinateSerializer
20 from authentik.core.api.utils import PassiveSerializer, TypeCreateSerializer
21 from authentik.events.models import Event, EventAction
22
23
24 class EventSerializer(ModelSerializer):
25 """Event Serializer"""
26
27 class Meta:
28 model = Event
29 fields = [
30 "pk",
31 "user",
32 "action",
33 "app",
34 "context",
35 "client_ip",
36 "created",
37 "expires",
38 "tenant",
39 ]
40
41
42 class EventTopPerUserSerializer(PassiveSerializer):
43 """Response object of Event's top_per_user"""
44
45 application = DictField()
46 counted_events = IntegerField()
47 unique_users = IntegerField()
48
49
50 class EventsFilter(django_filters.FilterSet):
51 """Filter for events"""
52
53 username = django_filters.CharFilter(
54 field_name="user", lookup_expr="username", label="Username"
55 )
56 context_model_pk = django_filters.CharFilter(
57 field_name="context",
58 lookup_expr="model__pk",
59 label="Context Model Primary Key",
60 method="filter_context_model_pk",
61 )
62 context_model_name = django_filters.CharFilter(
63 field_name="context",
64 lookup_expr="model__model_name",
65 label="Context Model Name",
66 )
67 context_model_app = django_filters.CharFilter(
68 field_name="context", lookup_expr="model__app", label="Context Model App"
69 )
70 context_authorized_app = django_filters.CharFilter(
71 field_name="context",
72 lookup_expr="authorized_application__pk",
73 label="Context Authorized application",
74 )
75 action = django_filters.CharFilter(
76 field_name="action",
77 lookup_expr="icontains",
78 )
79 tenant_name = django_filters.CharFilter(
80 field_name="tenant",
81 lookup_expr="name",
82 label="Tenant name",
83 )
84
85 def filter_context_model_pk(self, queryset, name, value):
86 """Because we store the PK as UUID.hex,
87 we need to remove the dashes that a client may send. We can't use a
88 UUIDField for this, as some models might not have a UUID PK"""
89 value = str(value).replace("-", "")
90 return queryset.filter(context__model__pk=value)
91
92 class Meta:
93 model = Event
94 fields = ["action", "client_ip", "username"]
95
96
97 class EventViewSet(ModelViewSet):
98 """Event Read-Only Viewset"""
99
100 queryset = Event.objects.all()
101 serializer_class = EventSerializer
102 ordering = ["-created"]
103 search_fields = [
104 "event_uuid",
105 "user",
106 "action",
107 "app",
108 "context",
109 "client_ip",
110 ]
111 filterset_class = EventsFilter
112
113 @extend_schema(
114 methods=["GET"],
115 responses={200: EventTopPerUserSerializer(many=True)},
116 filters=[],
117 parameters=[
118 OpenApiParameter(
119 "action",
120 type=OpenApiTypes.STR,
121 location=OpenApiParameter.QUERY,
122 required=False,
123 ),
124 OpenApiParameter(
125 "top_n",
126 type=OpenApiTypes.INT,
127 location=OpenApiParameter.QUERY,
128 required=False,
129 ),
130 ],
131 )
132 @action(detail=False, methods=["GET"], pagination_class=None)
133 def top_per_user(self, request: Request):
134 """Get the top_n events grouped by user count"""
135 filtered_action = request.query_params.get("action", EventAction.LOGIN)
136 top_n = int(request.query_params.get("top_n", "15"))
137 return Response(
138 get_objects_for_user(request.user, "authentik_events.view_event")
139 .filter(action=filtered_action)
140 .exclude(context__authorized_application=None)
141 .annotate(application=KeyTextTransform("authorized_application", "context"))
142 .annotate(user_pk=KeyTextTransform("pk", "user"))
143 .values("application")
144 .annotate(counted_events=Count("application"))
145 .annotate(unique_users=Count("user_pk", distinct=True))
146 .values("unique_users", "application", "counted_events")
147 .order_by("-counted_events")[:top_n]
148 )
149
150 @extend_schema(
151 methods=["GET"],
152 responses={200: CoordinateSerializer(many=True)},
153 filters=[],
154 parameters=[
155 OpenApiParameter(
156 "action",
157 type=OpenApiTypes.STR,
158 location=OpenApiParameter.QUERY,
159 required=False,
160 ),
161 OpenApiParameter(
162 "query",
163 type=OpenApiTypes.STR,
164 location=OpenApiParameter.QUERY,
165 required=False,
166 ),
167 ],
168 )
169 @action(detail=False, methods=["GET"], pagination_class=None)
170 def per_month(self, request: Request):
171 """Get the count of events per month"""
172 filtered_action = request.query_params.get("action", EventAction.LOGIN)
173 try:
174 query = loads(request.query_params.get("query", "{}"))
175 except ValueError:
176 return Response(status=400)
177 return Response(
178 get_objects_for_user(request.user, "authentik_events.view_event")
179 .filter(action=filtered_action)
180 .filter(**query)
181 .get_events_per(timedelta(weeks=4), ExtractDay, 30)
182 )
183
184 @extend_schema(responses={200: TypeCreateSerializer(many=True)})
185 @action(detail=False, pagination_class=None, filter_backends=[])
186 def actions(self, request: Request) -> Response:
187 """Get all actions"""
188 data = []
189 for value, name in EventAction.choices:
190 data.append({"name": name, "description": "", "component": value, "model_name": ""})
191 return Response(TypeCreateSerializer(data, many=True).data)
192
[end of authentik/events/api/events.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/events/api/events.py b/authentik/events/api/events.py
--- a/authentik/events/api/events.py
+++ b/authentik/events/api/events.py
@@ -4,7 +4,7 @@
import django_filters
from django.db.models.aggregates import Count
-from django.db.models.fields.json import KeyTextTransform
+from django.db.models.fields.json import KeyTextTransform, KeyTransform
from django.db.models.functions import ExtractDay
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import OpenApiParameter, extend_schema
@@ -134,11 +134,11 @@
"""Get the top_n events grouped by user count"""
filtered_action = request.query_params.get("action", EventAction.LOGIN)
top_n = int(request.query_params.get("top_n", "15"))
- return Response(
+ events = (
get_objects_for_user(request.user, "authentik_events.view_event")
.filter(action=filtered_action)
.exclude(context__authorized_application=None)
- .annotate(application=KeyTextTransform("authorized_application", "context"))
+ .annotate(application=KeyTransform("authorized_application", "context"))
.annotate(user_pk=KeyTextTransform("pk", "user"))
.values("application")
.annotate(counted_events=Count("application"))
@@ -146,6 +146,7 @@
.values("unique_users", "application", "counted_events")
.order_by("-counted_events")[:top_n]
)
+ return Response(EventTopPerUserSerializer(instance=events, many=True).data)
@extend_schema(
methods=["GET"],
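For context on why this fix works: on PostgreSQL, `KeyTextTransform` compiles to the `->>` operator and returns the JSON value as text, while `KeyTransform` compiles to `->` and keeps the nested object, which is what the serializer's `DictField` expects. A minimal, self-contained sketch of that difference (plain-Python stand-ins with hypothetical names, not authentik or Django code):

```python
# KeyTransform behaves like Postgres `->` and keeps the JSON structure,
# KeyTextTransform behaves like `->>` and returns the value as text.
import json

context = {"authorized_application": {"name": "Grafana", "pk": "abc123"}}

as_object = context["authorized_application"]             # what a KeyTransform-style annotation yields
as_text = json.dumps(context["authorized_application"])   # what a KeyTextTransform-style annotation yields

print(type(as_object))  # <class 'dict'> -> DictField("application") can serialize this
print(type(as_text))    # <class 'str'>  -> shows up as an opaque string in the API response
```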
|
{"golden_diff": "diff --git a/authentik/events/api/events.py b/authentik/events/api/events.py\n--- a/authentik/events/api/events.py\n+++ b/authentik/events/api/events.py\n@@ -4,7 +4,7 @@\n \n import django_filters\n from django.db.models.aggregates import Count\n-from django.db.models.fields.json import KeyTextTransform\n+from django.db.models.fields.json import KeyTextTransform, KeyTransform\n from django.db.models.functions import ExtractDay\n from drf_spectacular.types import OpenApiTypes\n from drf_spectacular.utils import OpenApiParameter, extend_schema\n@@ -134,11 +134,11 @@\n \"\"\"Get the top_n events grouped by user count\"\"\"\n filtered_action = request.query_params.get(\"action\", EventAction.LOGIN)\n top_n = int(request.query_params.get(\"top_n\", \"15\"))\n- return Response(\n+ events = (\n get_objects_for_user(request.user, \"authentik_events.view_event\")\n .filter(action=filtered_action)\n .exclude(context__authorized_application=None)\n- .annotate(application=KeyTextTransform(\"authorized_application\", \"context\"))\n+ .annotate(application=KeyTransform(\"authorized_application\", \"context\"))\n .annotate(user_pk=KeyTextTransform(\"pk\", \"user\"))\n .values(\"application\")\n .annotate(counted_events=Count(\"application\"))\n@@ -146,6 +146,7 @@\n .values(\"unique_users\", \"application\", \"counted_events\")\n .order_by(\"-counted_events\")[:top_n]\n )\n+ return Response(EventTopPerUserSerializer(instance=events, many=True).data)\n \n @extend_schema(\n methods=[\"GET\"],\n", "issue": "Missing application names under 'Apps with most usage'\n**Describe the bug**\r\nAfter upgrading to 2023.8.1, the 'Apps with most usage' table no longer shows any application names.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Log in with an administrator account\r\n2. Go to the Admin Interface\r\n3. See 'Apps with most usage'. The table will be present, with correct data, but the `Application` column is empty\r\n\r\n**Expected behavior**\r\nThe `Application` column should contain the correct application names.\r\n\r\n**Screenshots**\r\n<img width=\"402\" alt=\"Screenshot 2023-08-30 at 11 52 14\" src=\"https://github.com/goauthentik/authentik/assets/1585352/d71ffa8b-e055-4161-9210-c6711fff0a92\">\r\n\r\n**Logs**\r\nN/A\r\n\r\n**Version and Deployment:**\r\n\r\n- authentik version: 2023.8.1\r\n- Deployment: Docker Compose\r\n\r\n**Additional context**\r\n\r\nThe underlying cause seems to be a serialization error in the `/api/v3/events/events/top_per_user/?action=authorize_application&top_n=11` endpoint. 
The `application` field is serialized as a string, rather than an object, as shown in the following screenshot:\r\n\r\n<img width=\"729\" alt=\"Screenshot 2023-08-30 at 11 56 35\" src=\"https://github.com/goauthentik/authentik/assets/1585352/5315f79d-9952-496a-b525-9981884154fb\">\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Events API Views\"\"\"\nfrom datetime import timedelta\nfrom json import loads\n\nimport django_filters\nfrom django.db.models.aggregates import Count\nfrom django.db.models.fields.json import KeyTextTransform\nfrom django.db.models.functions import ExtractDay\nfrom drf_spectacular.types import OpenApiTypes\nfrom drf_spectacular.utils import OpenApiParameter, extend_schema\nfrom guardian.shortcuts import get_objects_for_user\nfrom rest_framework.decorators import action\nfrom rest_framework.fields import DictField, IntegerField\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.viewsets import ModelViewSet\n\nfrom authentik.admin.api.metrics import CoordinateSerializer\nfrom authentik.core.api.utils import PassiveSerializer, TypeCreateSerializer\nfrom authentik.events.models import Event, EventAction\n\n\nclass EventSerializer(ModelSerializer):\n \"\"\"Event Serializer\"\"\"\n\n class Meta:\n model = Event\n fields = [\n \"pk\",\n \"user\",\n \"action\",\n \"app\",\n \"context\",\n \"client_ip\",\n \"created\",\n \"expires\",\n \"tenant\",\n ]\n\n\nclass EventTopPerUserSerializer(PassiveSerializer):\n \"\"\"Response object of Event's top_per_user\"\"\"\n\n application = DictField()\n counted_events = IntegerField()\n unique_users = IntegerField()\n\n\nclass EventsFilter(django_filters.FilterSet):\n \"\"\"Filter for events\"\"\"\n\n username = django_filters.CharFilter(\n field_name=\"user\", lookup_expr=\"username\", label=\"Username\"\n )\n context_model_pk = django_filters.CharFilter(\n field_name=\"context\",\n lookup_expr=\"model__pk\",\n label=\"Context Model Primary Key\",\n method=\"filter_context_model_pk\",\n )\n context_model_name = django_filters.CharFilter(\n field_name=\"context\",\n lookup_expr=\"model__model_name\",\n label=\"Context Model Name\",\n )\n context_model_app = django_filters.CharFilter(\n field_name=\"context\", lookup_expr=\"model__app\", label=\"Context Model App\"\n )\n context_authorized_app = django_filters.CharFilter(\n field_name=\"context\",\n lookup_expr=\"authorized_application__pk\",\n label=\"Context Authorized application\",\n )\n action = django_filters.CharFilter(\n field_name=\"action\",\n lookup_expr=\"icontains\",\n )\n tenant_name = django_filters.CharFilter(\n field_name=\"tenant\",\n lookup_expr=\"name\",\n label=\"Tenant name\",\n )\n\n def filter_context_model_pk(self, queryset, name, value):\n \"\"\"Because we store the PK as UUID.hex,\n we need to remove the dashes that a client may send. 
We can't use a\n UUIDField for this, as some models might not have a UUID PK\"\"\"\n value = str(value).replace(\"-\", \"\")\n return queryset.filter(context__model__pk=value)\n\n class Meta:\n model = Event\n fields = [\"action\", \"client_ip\", \"username\"]\n\n\nclass EventViewSet(ModelViewSet):\n \"\"\"Event Read-Only Viewset\"\"\"\n\n queryset = Event.objects.all()\n serializer_class = EventSerializer\n ordering = [\"-created\"]\n search_fields = [\n \"event_uuid\",\n \"user\",\n \"action\",\n \"app\",\n \"context\",\n \"client_ip\",\n ]\n filterset_class = EventsFilter\n\n @extend_schema(\n methods=[\"GET\"],\n responses={200: EventTopPerUserSerializer(many=True)},\n filters=[],\n parameters=[\n OpenApiParameter(\n \"action\",\n type=OpenApiTypes.STR,\n location=OpenApiParameter.QUERY,\n required=False,\n ),\n OpenApiParameter(\n \"top_n\",\n type=OpenApiTypes.INT,\n location=OpenApiParameter.QUERY,\n required=False,\n ),\n ],\n )\n @action(detail=False, methods=[\"GET\"], pagination_class=None)\n def top_per_user(self, request: Request):\n \"\"\"Get the top_n events grouped by user count\"\"\"\n filtered_action = request.query_params.get(\"action\", EventAction.LOGIN)\n top_n = int(request.query_params.get(\"top_n\", \"15\"))\n return Response(\n get_objects_for_user(request.user, \"authentik_events.view_event\")\n .filter(action=filtered_action)\n .exclude(context__authorized_application=None)\n .annotate(application=KeyTextTransform(\"authorized_application\", \"context\"))\n .annotate(user_pk=KeyTextTransform(\"pk\", \"user\"))\n .values(\"application\")\n .annotate(counted_events=Count(\"application\"))\n .annotate(unique_users=Count(\"user_pk\", distinct=True))\n .values(\"unique_users\", \"application\", \"counted_events\")\n .order_by(\"-counted_events\")[:top_n]\n )\n\n @extend_schema(\n methods=[\"GET\"],\n responses={200: CoordinateSerializer(many=True)},\n filters=[],\n parameters=[\n OpenApiParameter(\n \"action\",\n type=OpenApiTypes.STR,\n location=OpenApiParameter.QUERY,\n required=False,\n ),\n OpenApiParameter(\n \"query\",\n type=OpenApiTypes.STR,\n location=OpenApiParameter.QUERY,\n required=False,\n ),\n ],\n )\n @action(detail=False, methods=[\"GET\"], pagination_class=None)\n def per_month(self, request: Request):\n \"\"\"Get the count of events per month\"\"\"\n filtered_action = request.query_params.get(\"action\", EventAction.LOGIN)\n try:\n query = loads(request.query_params.get(\"query\", \"{}\"))\n except ValueError:\n return Response(status=400)\n return Response(\n get_objects_for_user(request.user, \"authentik_events.view_event\")\n .filter(action=filtered_action)\n .filter(**query)\n .get_events_per(timedelta(weeks=4), ExtractDay, 30)\n )\n\n @extend_schema(responses={200: TypeCreateSerializer(many=True)})\n @action(detail=False, pagination_class=None, filter_backends=[])\n def actions(self, request: Request) -> Response:\n \"\"\"Get all actions\"\"\"\n data = []\n for value, name in EventAction.choices:\n data.append({\"name\": name, \"description\": \"\", \"component\": value, \"model_name\": \"\"})\n return Response(TypeCreateSerializer(data, many=True).data)\n", "path": "authentik/events/api/events.py"}]}
| 2,747 | 366 |
gh_patches_debug_8805
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-741
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bayesian regression example doesn't work with N=1
Hi,
When I run the [bayesian_regression.py](https://github.com/uber/pyro/blob/dev/examples/bayesian_regression.py) example on github, with `N = 1` on [line 44](https://github.com/uber/pyro/blob/dev/examples/bayesian_regression.py#L44), I get an error from Tensor flow. Is there an assumption that `N` must be at least 2? While values of `N < 2` may be corner cases, Pyro could improve user experience by explicitly checking `N` values and giving users a more friendly error message, rather than sending bad values to Tensor flow. Here's the error with `N=1`:
```
$ python bayesian_regression.py
Traceback (most recent call last):
File "bayesian_regression.py", line 140, in <module>
main(args)
File "bayesian_regression.py", line 117, in main
epoch_loss = svi.step(data)
File "/usr/local/lib/python2.7/dist-packages/pyro/infer/svi.py", line 98, in step
loss = self.loss_and_grads(self.model, self.guide, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pyro/infer/elbo.py", line 65, in loss_and_grads
return self.which_elbo.loss_and_grads(model, guide, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pyro/infer/trace_elbo.py", line 181, in loss_and_grads
torch_backward(surrogate_loss)
File "/usr/local/lib/python2.7/dist-packages/pyro/infer/util.py", line 34, in torch_backward
x.backward()
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 167, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 99, in backward
variables, grad_variables, retain_graph)
RuntimeError: matrices expected, got 3D, 2D tensors at /pytorch/torch/lib/TH/generic/THTensorMath.c:1411
"
```
Environment:
```
Python version : 3.6
Pyro version: 0.1.2
Pytorch version: 0.3.0post4
```
</issue>
<code>
[start of examples/bayesian_regression.py]
1 import numpy as np
2 import argparse
3 import torch
4 import torch.nn as nn
5 from torch.nn.functional import normalize # noqa: F401
6
7 from torch.autograd import Variable
8
9 import pyro
10 from pyro.distributions import Normal, Bernoulli # noqa: F401
11 from pyro.infer import SVI
12 from pyro.optim import Adam
13
14 """
15 Bayesian Regression
16 Learning a function of the form:
17 y = wx + b
18 """
19
20
21 # generate toy dataset
22 def build_linear_dataset(N, p, noise_std=0.01):
23 X = np.random.rand(N, p)
24 # use random integer weights from [0, 7]
25 w = np.random.randint(8, size=p)
26 # set b = 1
27 y = np.matmul(X, w) + np.repeat(1, N) + np.random.normal(0, noise_std, size=N)
28 y = y.reshape(N, 1)
29 X, y = Variable(torch.Tensor(X)), Variable(torch.Tensor(y))
30 return torch.cat((X, y), 1)
31
32
33 # NN with one linear layer
34 class RegressionModel(nn.Module):
35 def __init__(self, p):
36 super(RegressionModel, self).__init__()
37 self.linear = nn.Linear(p, 1)
38
39 def forward(self, x):
40 # x * w + b
41 return self.linear(x)
42
43
44 N = 100 # size of toy data
45 p = 1 # number of features
46
47 softplus = nn.Softplus()
48 regression_model = RegressionModel(p)
49
50
51 def model(data):
52 # Create unit normal priors over the parameters
53 mu = Variable(torch.zeros(1, p)).type_as(data)
54 sigma = Variable(torch.ones(1, p)).type_as(data)
55 bias_mu = Variable(torch.zeros(1)).type_as(data)
56 bias_sigma = Variable(torch.ones(1)).type_as(data)
57 w_prior, b_prior = Normal(mu, sigma), Normal(bias_mu, bias_sigma)
58 priors = {'linear.weight': w_prior, 'linear.bias': b_prior}
59 # lift module parameters to random variables sampled from the priors
60 lifted_module = pyro.random_module("module", regression_model, priors)
61 # sample a regressor (which also samples w and b)
62 lifted_reg_model = lifted_module()
63
64 with pyro.iarange("map", N, subsample=data):
65 x_data = data[:, :-1]
66 y_data = data[:, -1]
67 # run the regressor forward conditioned on inputs
68 prediction_mean = lifted_reg_model(x_data).squeeze()
69 pyro.sample("obs",
70 Normal(prediction_mean, Variable(torch.ones(data.size(0))).type_as(data)),
71 obs=y_data.squeeze())
72
73
74 def guide(data):
75 w_mu = Variable(torch.randn(1, p).type_as(data.data), requires_grad=True)
76 w_log_sig = Variable((-3.0 * torch.ones(1, p) + 0.05 * torch.randn(1, p)).type_as(data.data), requires_grad=True)
77 b_mu = Variable(torch.randn(1).type_as(data.data), requires_grad=True)
78 b_log_sig = Variable((-3.0 * torch.ones(1) + 0.05 * torch.randn(1)).type_as(data.data), requires_grad=True)
79 # register learnable params in the param store
80 mw_param = pyro.param("guide_mean_weight", w_mu)
81 sw_param = softplus(pyro.param("guide_log_sigma_weight", w_log_sig))
82 mb_param = pyro.param("guide_mean_bias", b_mu)
83 sb_param = softplus(pyro.param("guide_log_sigma_bias", b_log_sig))
84 # gaussian guide distributions for w and b
85 w_dist = Normal(mw_param, sw_param)
86 b_dist = Normal(mb_param, sb_param)
87 dists = {'linear.weight': w_dist, 'linear.bias': b_dist}
88 # overloading the parameters in the module with random samples from the guide distributions
89 lifted_module = pyro.random_module("module", regression_model, dists)
90 # sample a regressor
91 return lifted_module()
92
93
94 # instantiate optim and inference objects
95 optim = Adam({"lr": 0.001})
96 svi = SVI(model, guide, optim, loss="ELBO")
97
98
99 # get array of batch indices
100 def get_batch_indices(N, batch_size):
101 all_batches = np.arange(0, N, batch_size)
102 if all_batches[-1] != N:
103 all_batches = list(all_batches) + [N]
104 return all_batches
105
106
107 def main(args):
108 data = build_linear_dataset(N, p)
109 if args.cuda:
110 # make tensors and modules CUDA
111 data = data.cuda()
112 softplus.cuda()
113 regression_model.cuda()
114 for j in range(args.num_epochs):
115 if args.batch_size == N:
116 # use the entire data set
117 epoch_loss = svi.step(data)
118 else:
119 # mini batch
120 epoch_loss = 0.0
121 perm = torch.randperm(N) if not args.cuda else torch.randperm(N).cuda()
122 # shuffle data
123 data = data[perm]
124 # get indices of each batch
125 all_batches = get_batch_indices(N, args.batch_size)
126 for ix, batch_start in enumerate(all_batches[:-1]):
127 batch_end = all_batches[ix + 1]
128 batch_data = data[batch_start: batch_end]
129 epoch_loss += svi.step(batch_data)
130 if j % 100 == 0:
131 print("epoch avg loss {}".format(epoch_loss/float(N)))
132
133
134 if __name__ == '__main__':
135 parser = argparse.ArgumentParser(description="parse args")
136 parser.add_argument('-n', '--num-epochs', default=1000, type=int)
137 parser.add_argument('-b', '--batch-size', default=N, type=int)
138 parser.add_argument('--cuda', action='store_true')
139 args = parser.parse_args()
140 main(args)
141
[end of examples/bayesian_regression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/bayesian_regression.py b/examples/bayesian_regression.py
--- a/examples/bayesian_regression.py
+++ b/examples/bayesian_regression.py
@@ -65,7 +65,7 @@
x_data = data[:, :-1]
y_data = data[:, -1]
# run the regressor forward conditioned on inputs
- prediction_mean = lifted_reg_model(x_data).squeeze()
+ prediction_mean = lifted_reg_model(x_data).squeeze(1)
pyro.sample("obs",
Normal(prediction_mean, Variable(torch.ones(data.size(0))).type_as(data)),
obs=y_data.squeeze())
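The one-character change matters because `Tensor.squeeze()` with no argument drops every size-1 dimension, so a batch of one sample collapses to a 0-d tensor and the batch dimension is lost, while `squeeze(1)` only removes the trailing feature dimension. A small standalone shape check (PyTorch only, not part of the example script):

```python
import torch

out = torch.zeros(1, 1)      # regressor output for a batch of one sample, one target
print(out.squeeze().shape)   # torch.Size([])  -> batch dimension is gone
print(out.squeeze(1).shape)  # torch.Size([1]) -> batch dimension survives

out = torch.zeros(100, 1)    # with N=100 both calls give the same result
print(out.squeeze().shape)   # torch.Size([100])
print(out.squeeze(1).shape)  # torch.Size([100])
```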
|
{"golden_diff": "diff --git a/examples/bayesian_regression.py b/examples/bayesian_regression.py\n--- a/examples/bayesian_regression.py\n+++ b/examples/bayesian_regression.py\n@@ -65,7 +65,7 @@\n x_data = data[:, :-1]\n y_data = data[:, -1]\n # run the regressor forward conditioned on inputs\n- prediction_mean = lifted_reg_model(x_data).squeeze()\n+ prediction_mean = lifted_reg_model(x_data).squeeze(1)\n pyro.sample(\"obs\",\n Normal(prediction_mean, Variable(torch.ones(data.size(0))).type_as(data)),\n obs=y_data.squeeze())\n", "issue": "Bayesian regression example doesn\u2019t work with N=1\nHi,\r\n\r\nWhen I run the [bayesian_regression.py](https://github.com/uber/pyro/blob/dev/examples/bayesian_regression.py) example on github, with `N = 1` on [line 44](https://github.com/uber/pyro/blob/dev/examples/bayesian_regression.py#L44), I get an error from Tensor flow. Is there an assumption that `N` must be at least 2? While values of `N < 2` may be corner cases, Pyro could improve user experience by explicitly checking `N` values and giving users a more friendly error message, rather than sending bad values to Tensor flow. Here\u2019s the error with `N=1`:\r\n\r\n```\r\n$ python bayesian_regression.py\r\nTraceback (most recent call last):\r\n File \"bayesian_regression.py\", line 140, in <module>\r\n \tmain(args)\r\n File \"bayesian_regression.py\", line 117, in main\r\n \tepoch_loss = svi.step(data)\r\n File \"/usr/local/lib/python2.7/dist-packages/pyro/infer/svi.py\", line 98, in step\r\n \tloss = self.loss_and_grads(self.model, self.guide, *args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/pyro/infer/elbo.py\", line 65, in loss_and_grads\r\n \treturn self.which_elbo.loss_and_grads(model, guide, *args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/pyro/infer/trace_elbo.py\", line 181, in loss_and_grads\r\n \ttorch_backward(surrogate_loss)\r\n File \"/usr/local/lib/python2.7/dist-packages/pyro/infer/util.py\", line 34, in torch_backward\r\n \tx.backward()\r\n File \"/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py\", line 167, in backward\r\n \ttorch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)\r\n File \"/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py\", line 99, in backward\r\n \tvariables, grad_variables, retain_graph)\r\nRuntimeError: matrices expected, got 3D, 2D tensors at /pytorch/torch/lib/TH/generic/THTensorMath.c:1411\r\n\"\r\n```\r\nEnvironment:\r\n\r\n```\r\nPython version : 3.6\r\nPyro version: 0.1.2\r\nPytorch version: 0.3.0post4\r\n```\r\n\n", "before_files": [{"content": "import numpy as np\nimport argparse\nimport torch\nimport torch.nn as nn\nfrom torch.nn.functional import normalize # noqa: F401\n\nfrom torch.autograd import Variable\n\nimport pyro\nfrom pyro.distributions import Normal, Bernoulli # noqa: F401\nfrom pyro.infer import SVI\nfrom pyro.optim import Adam\n\n\"\"\"\nBayesian Regression\nLearning a function of the form:\n y = wx + b\n\"\"\"\n\n\n# generate toy dataset\ndef build_linear_dataset(N, p, noise_std=0.01):\n X = np.random.rand(N, p)\n # use random integer weights from [0, 7]\n w = np.random.randint(8, size=p)\n # set b = 1\n y = np.matmul(X, w) + np.repeat(1, N) + np.random.normal(0, noise_std, size=N)\n y = y.reshape(N, 1)\n X, y = Variable(torch.Tensor(X)), Variable(torch.Tensor(y))\n return torch.cat((X, y), 1)\n\n\n# NN with one linear layer\nclass RegressionModel(nn.Module):\n def __init__(self, p):\n super(RegressionModel, self).__init__()\n 
self.linear = nn.Linear(p, 1)\n\n def forward(self, x):\n # x * w + b\n return self.linear(x)\n\n\nN = 100 # size of toy data\np = 1 # number of features\n\nsoftplus = nn.Softplus()\nregression_model = RegressionModel(p)\n\n\ndef model(data):\n # Create unit normal priors over the parameters\n mu = Variable(torch.zeros(1, p)).type_as(data)\n sigma = Variable(torch.ones(1, p)).type_as(data)\n bias_mu = Variable(torch.zeros(1)).type_as(data)\n bias_sigma = Variable(torch.ones(1)).type_as(data)\n w_prior, b_prior = Normal(mu, sigma), Normal(bias_mu, bias_sigma)\n priors = {'linear.weight': w_prior, 'linear.bias': b_prior}\n # lift module parameters to random variables sampled from the priors\n lifted_module = pyro.random_module(\"module\", regression_model, priors)\n # sample a regressor (which also samples w and b)\n lifted_reg_model = lifted_module()\n\n with pyro.iarange(\"map\", N, subsample=data):\n x_data = data[:, :-1]\n y_data = data[:, -1]\n # run the regressor forward conditioned on inputs\n prediction_mean = lifted_reg_model(x_data).squeeze()\n pyro.sample(\"obs\",\n Normal(prediction_mean, Variable(torch.ones(data.size(0))).type_as(data)),\n obs=y_data.squeeze())\n\n\ndef guide(data):\n w_mu = Variable(torch.randn(1, p).type_as(data.data), requires_grad=True)\n w_log_sig = Variable((-3.0 * torch.ones(1, p) + 0.05 * torch.randn(1, p)).type_as(data.data), requires_grad=True)\n b_mu = Variable(torch.randn(1).type_as(data.data), requires_grad=True)\n b_log_sig = Variable((-3.0 * torch.ones(1) + 0.05 * torch.randn(1)).type_as(data.data), requires_grad=True)\n # register learnable params in the param store\n mw_param = pyro.param(\"guide_mean_weight\", w_mu)\n sw_param = softplus(pyro.param(\"guide_log_sigma_weight\", w_log_sig))\n mb_param = pyro.param(\"guide_mean_bias\", b_mu)\n sb_param = softplus(pyro.param(\"guide_log_sigma_bias\", b_log_sig))\n # gaussian guide distributions for w and b\n w_dist = Normal(mw_param, sw_param)\n b_dist = Normal(mb_param, sb_param)\n dists = {'linear.weight': w_dist, 'linear.bias': b_dist}\n # overloading the parameters in the module with random samples from the guide distributions\n lifted_module = pyro.random_module(\"module\", regression_model, dists)\n # sample a regressor\n return lifted_module()\n\n\n# instantiate optim and inference objects\noptim = Adam({\"lr\": 0.001})\nsvi = SVI(model, guide, optim, loss=\"ELBO\")\n\n\n# get array of batch indices\ndef get_batch_indices(N, batch_size):\n all_batches = np.arange(0, N, batch_size)\n if all_batches[-1] != N:\n all_batches = list(all_batches) + [N]\n return all_batches\n\n\ndef main(args):\n data = build_linear_dataset(N, p)\n if args.cuda:\n # make tensors and modules CUDA\n data = data.cuda()\n softplus.cuda()\n regression_model.cuda()\n for j in range(args.num_epochs):\n if args.batch_size == N:\n # use the entire data set\n epoch_loss = svi.step(data)\n else:\n # mini batch\n epoch_loss = 0.0\n perm = torch.randperm(N) if not args.cuda else torch.randperm(N).cuda()\n # shuffle data\n data = data[perm]\n # get indices of each batch\n all_batches = get_batch_indices(N, args.batch_size)\n for ix, batch_start in enumerate(all_batches[:-1]):\n batch_end = all_batches[ix + 1]\n batch_data = data[batch_start: batch_end]\n epoch_loss += svi.step(batch_data)\n if j % 100 == 0:\n print(\"epoch avg loss {}\".format(epoch_loss/float(N)))\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(description=\"parse args\")\n parser.add_argument('-n', '--num-epochs', default=1000, type=int)\n 
parser.add_argument('-b', '--batch-size', default=N, type=int)\n parser.add_argument('--cuda', action='store_true')\n args = parser.parse_args()\n main(args)\n", "path": "examples/bayesian_regression.py"}]}
| 2,718 | 135 |
gh_patches_debug_6429
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-6833
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in validate_price_precision
### What I'm trying to achieve
Get sane validation obviously.
### Steps to reproduce the problem
1. Try to create a voucher with the minimum order amount set to `2000`
2. It will throw an error.
### What I expected to happen
It shouldn't throw an error.
### Observation
Upon normalizing it converts the zeros to exponents.
```python
def validate_price_precision(value: Optional["Decimal"], currency: str = None):
"""Validate if price amount does not have too many decimal places.
Price amount can't have more decimal places than currency allows to.
Works only with decimal created from a string.
"""
# check no needed when there is no value
if not value:
return
currency_fraction = get_currency_fraction(currency or settings.DEFAULT_CURRENCY)
value = value.normalize()
if abs(value.as_tuple().exponent) > currency_fraction:
raise ValidationError(
f"Value cannot have more than {currency_fraction} decimal places."
)
```
should be:
```python
def validate_price_precision(value: Optional["Decimal"], currency: str = None):
"""Validate if price amount does not have too many decimal places.
Price amount can't have more decimal places than currency allows to.
Works only with decimal created from a string.
"""
# check no needed when there is no value
if not value:
return
currency_fraction = get_currency_fraction(currency or settings.DEFAULT_CURRENCY)
value = value.normalize()
exp = value.as_tuple().exponent
if exp < 0 and abs(value.as_tuple().exponent) > currency_fraction:
raise ValidationError(
f"Value cannot have more than {currency_fraction} decimal places."
)
```
So that it doesn't misinterpret zeros from the right as values after decimal places.
</issue>
<code>
[start of saleor/graphql/core/validators.py]
1 from typing import TYPE_CHECKING, Optional
2
3 from django.conf import settings
4 from django.core.exceptions import ValidationError
5 from django_prices.utils.formatting import get_currency_fraction
6 from graphql.error import GraphQLError
7
8 if TYPE_CHECKING:
9 from decimal import Decimal
10
11
12 def validate_one_of_args_is_in_query(*args):
13 # split args into a list with 2-element tuples:
14 # [(arg1_name, arg1_value), (arg2_name, arg2_value), ...]
15 splitted_args = [args[i : i + 2] for i in range(0, len(args), 2)] # noqa: E203
16 # filter trueish values from each tuple
17 filter_args = list(filter(lambda item: bool(item[1]) is True, splitted_args))
18
19 if len(filter_args) > 1:
20 rest_args = ", ".join([f"'{item[0]}'" for item in filter_args[1:]])
21 raise GraphQLError(
22 f"Argument '{filter_args[0][0]}' cannot be combined with {rest_args}"
23 )
24
25 if not filter_args:
26 required_args = ", ".join([f"'{item[0]}'" for item in splitted_args])
27 raise GraphQLError(f"At least one of arguments is required: {required_args}.")
28
29
30 def validate_price_precision(value: Optional["Decimal"], currency: str = None):
31 """Validate if price amount does not have too many decimal places.
32
33 Price amount can't have more decimal places than currency allow to.
34 Works only with decimal created from a string.
35 """
36
37 # check no needed when there is no value
38 if not value:
39 return
40
41 currency_fraction = get_currency_fraction(currency or settings.DEFAULT_CURRENCY)
42 value = value.normalize()
43 if abs(value.as_tuple().exponent) > currency_fraction:
44 raise ValidationError(
45 f"Value cannot have more than {currency_fraction} decimal places."
46 )
47
[end of saleor/graphql/core/validators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/saleor/graphql/core/validators.py b/saleor/graphql/core/validators.py
--- a/saleor/graphql/core/validators.py
+++ b/saleor/graphql/core/validators.py
@@ -40,7 +40,7 @@
currency_fraction = get_currency_fraction(currency or settings.DEFAULT_CURRENCY)
value = value.normalize()
- if abs(value.as_tuple().exponent) > currency_fraction:
+ if value.as_tuple().exponent < -currency_fraction:
raise ValidationError(
f"Value cannot have more than {currency_fraction} decimal places."
)
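The fixed condition works because `Decimal.normalize()` folds trailing zeros into a positive exponent, which the old `abs(...)` check mistook for decimal places. A quick standalone comparison of the two conditions (plain `decimal`, no Saleor imports):

```python
from decimal import Decimal

currency_fraction = 2  # e.g. a currency with 2 decimal places

for raw in ["2000", "19.99", "19.999"]:
    exp = Decimal(raw).normalize().as_tuple().exponent
    old_check = abs(exp) > currency_fraction  # buggy: also flags "2000" (exp == 3)
    new_check = exp < -currency_fraction      # fixed: only flags "19.999" (exp == -3)
    print(raw, exp, old_check, new_check)
```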
|
{"golden_diff": "diff --git a/saleor/graphql/core/validators.py b/saleor/graphql/core/validators.py\n--- a/saleor/graphql/core/validators.py\n+++ b/saleor/graphql/core/validators.py\n@@ -40,7 +40,7 @@\n \n currency_fraction = get_currency_fraction(currency or settings.DEFAULT_CURRENCY)\n value = value.normalize()\n- if abs(value.as_tuple().exponent) > currency_fraction:\n+ if value.as_tuple().exponent < -currency_fraction:\n raise ValidationError(\n f\"Value cannot have more than {currency_fraction} decimal places.\"\n )\n", "issue": "Bug in validate_price_precision\n### What I'm trying to achieve\r\nGet sane validation obviously.\r\n\r\n### Steps to reproduce the problem\r\n1. Try to create a voucher with the minimum order amount set to `2000`\r\n2. It will throw an error.\r\n\r\n### What I expected to happen\r\nIt shouldn't throw an error.\r\n\r\n### Observation\r\nUpon normalizing it converts the zeros to exponents.\r\n\r\n```python\r\ndef validate_price_precision(value: Optional[\"Decimal\"], currency: str = None):\r\n \"\"\"Validate if price amount does not have too many decimal places.\r\n\r\n Price amount can't have more decimal places than currency allows to.\r\n Works only with decimal created from a string.\r\n \"\"\"\r\n\r\n # check no needed when there is no value\r\n if not value:\r\n return\r\n\r\n currency_fraction = get_currency_fraction(currency or settings.DEFAULT_CURRENCY)\r\n value = value.normalize()\r\n if abs(value.as_tuple().exponent) > currency_fraction:\r\n raise ValidationError(\r\n f\"Value cannot have more than {currency_fraction} decimal places.\"\r\n )\r\n```\r\nshould be:\r\n\r\n```python\r\ndef validate_price_precision(value: Optional[\"Decimal\"], currency: str = None):\r\n \"\"\"Validate if price amount does not have too many decimal places.\r\n\r\n Price amount can't have more decimal places than currency allows to.\r\n Works only with decimal created from a string.\r\n \"\"\"\r\n\r\n # check no needed when there is no value\r\n if not value:\r\n return\r\n\r\n currency_fraction = get_currency_fraction(currency or settings.DEFAULT_CURRENCY)\r\n value = value.normalize()\r\n exp = value.as_tuple().exponent\r\n if exp < 0 and abs(value.as_tuple().exponent) > currency_fraction:\r\n raise ValidationError(\r\n f\"Value cannot have more than {currency_fraction} decimal places.\"\r\n )\r\n```\r\nSo that it doesn't misinterpret zeros from the right as values after decimal places.\r\n\n", "before_files": [{"content": "from typing import TYPE_CHECKING, Optional\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django_prices.utils.formatting import get_currency_fraction\nfrom graphql.error import GraphQLError\n\nif TYPE_CHECKING:\n from decimal import Decimal\n\n\ndef validate_one_of_args_is_in_query(*args):\n # split args into a list with 2-element tuples:\n # [(arg1_name, arg1_value), (arg2_name, arg2_value), ...]\n splitted_args = [args[i : i + 2] for i in range(0, len(args), 2)] # noqa: E203\n # filter trueish values from each tuple\n filter_args = list(filter(lambda item: bool(item[1]) is True, splitted_args))\n\n if len(filter_args) > 1:\n rest_args = \", \".join([f\"'{item[0]}'\" for item in filter_args[1:]])\n raise GraphQLError(\n f\"Argument '{filter_args[0][0]}' cannot be combined with {rest_args}\"\n )\n\n if not filter_args:\n required_args = \", \".join([f\"'{item[0]}'\" for item in splitted_args])\n raise GraphQLError(f\"At least one of arguments is required: {required_args}.\")\n\n\ndef 
validate_price_precision(value: Optional[\"Decimal\"], currency: str = None):\n \"\"\"Validate if price amount does not have too many decimal places.\n\n Price amount can't have more decimal places than currency allow to.\n Works only with decimal created from a string.\n \"\"\"\n\n # check no needed when there is no value\n if not value:\n return\n\n currency_fraction = get_currency_fraction(currency or settings.DEFAULT_CURRENCY)\n value = value.normalize()\n if abs(value.as_tuple().exponent) > currency_fraction:\n raise ValidationError(\n f\"Value cannot have more than {currency_fraction} decimal places.\"\n )\n", "path": "saleor/graphql/core/validators.py"}]}
| 1,430 | 127 |
gh_patches_debug_16276
|
rasdani/github-patches
|
git_diff
|
lightly-ai__lightly-1341
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in PMSNLoss
Nice implementation of the [PMSNLoss](https://github.com/lightly-ai/lightly/blob/ddfed3c4dc03a8d2722df24bfa537d24ac80bde6/lightly/loss/pmsn_loss.py)! But the computation of the Kullback-Leibler divergence is missing `.log()` on lines 71 and 142.
</issue>
<code>
[start of lightly/loss/pmsn_loss.py]
1 from typing import Callable
2
3 import torch
4 import torch.nn.functional as F
5 from torch import Tensor
6
7 from lightly.loss.msn_loss import MSNLoss
8
9
10 class PMSNLoss(MSNLoss):
11 """Implementation of the loss function from PMSN [0] using a power law target
12 distribution.
13
14 - [0]: Prior Matching for Siamese Networks, 2022, https://arxiv.org/abs/2210.07277
15
16 Attributes:
17 temperature:
18 Similarities between anchors and targets are scaled by the inverse of
19 the temperature. Must be in (0, inf).
20 sinkhorn_iterations:
21 Number of sinkhorn normalization iterations on the targets.
22 regularization_weight:
23 Weight factor lambda by which the regularization loss is scaled. Set to 0
24 to disable regularization.
25 power_law_exponent:
26 Exponent for power law distribution. Entry k of the distribution is
27 proportional to (1 / k) ^ power_law_exponent, with k ranging from 1 to dim + 1.
28 gather_distributed:
29 If True, then target probabilities are gathered from all GPUs.
30
31 Examples:
32
33 >>> # initialize loss function
34 >>> loss_fn = PMSNLoss()
35 >>>
36 >>> # generate anchors and targets of images
37 >>> anchors = transforms(images)
38 >>> targets = transforms(images)
39 >>>
40 >>> # feed through PMSN model
41 >>> anchors_out = model(anchors)
42 >>> targets_out = model.target(targets)
43 >>>
44 >>> # calculate loss
45 >>> loss = loss_fn(anchors_out, targets_out, prototypes=model.prototypes)
46 """
47
48 def __init__(
49 self,
50 temperature: float = 0.1,
51 sinkhorn_iterations: int = 3,
52 regularization_weight: float = 1,
53 power_law_exponent: float = 0.25,
54 gather_distributed: bool = False,
55 ):
56 super().__init__(
57 temperature=temperature,
58 sinkhorn_iterations=sinkhorn_iterations,
59 regularization_weight=regularization_weight,
60 gather_distributed=gather_distributed,
61 )
62 self.power_law_exponent = power_law_exponent
63
64 def regularization_loss(self, mean_anchor_probs: Tensor) -> Tensor:
65 """Calculates regularization loss with a power law target distribution."""
66 power_dist = _power_law_distribution(
67 size=mean_anchor_probs.shape[0],
68 exponent=self.power_law_exponent,
69 device=mean_anchor_probs.device,
70 )
71 loss = F.kl_div(input=mean_anchor_probs, target=power_dist, reduction="sum")
72 return loss
73
74
75 class PMSNCustomLoss(MSNLoss):
76 """Implementation of the loss function from PMSN [0] with a custom target
77 distribution.
78
79 - [0]: Prior Matching for Siamese Networks, 2022, https://arxiv.org/abs/2210.07277
80
81 Attributes:
82 target_distribution:
83 A function that takes the mean anchor probabilities tensor with shape (dim,)
84 as input and returns a target probability distribution tensor with the same
85 shape. The returned distribution should sum up to one. The final
86 regularization loss is calculated as KL(mean_anchor_probs, target_dist)
87 where KL is the Kullback-Leibler divergence.
88 temperature:
89 Similarities between anchors and targets are scaled by the inverse of
90 the temperature. Must be in (0, inf).
91 sinkhorn_iterations:
92 Number of sinkhorn normalization iterations on the targets.
93 regularization_weight:
94 Weight factor lambda by which the regularization loss is scaled. Set to 0
95 to disable regularization.
96 gather_distributed:
97 If True, then target probabilities are gathered from all GPUs.
98
99 Examples:
100
101 >>> # define custom target distribution
102 >>> def my_uniform_distribution(mean_anchor_probabilities: Tensor) -> Tensor:
103 >>> dim = mean_anchor_probabilities.shape[0]
104 >>> return mean_anchor_probabilities.new_ones(dim) / dim
105 >>>
106 >>> # initialize loss function
107 >>> loss_fn = PMSNCustomLoss(target_distribution=my_uniform_distribution)
108 >>>
109 >>> # generate anchors and targets of images
110 >>> anchors = transforms(images)
111 >>> targets = transforms(images)
112 >>>
113 >>> # feed through PMSN model
114 >>> anchors_out = model(anchors)
115 >>> targets_out = model.target(targets)
116 >>>
117 >>> # calculate loss
118 >>> loss = loss_fn(anchors_out, targets_out, prototypes=model.prototypes)
119 """
120
121 def __init__(
122 self,
123 target_distribution: Callable[[Tensor], Tensor],
124 temperature: float = 0.1,
125 sinkhorn_iterations: int = 3,
126 regularization_weight: float = 1,
127 gather_distributed: bool = False,
128 ):
129 super().__init__(
130 temperature=temperature,
131 sinkhorn_iterations=sinkhorn_iterations,
132 regularization_weight=regularization_weight,
133 gather_distributed=gather_distributed,
134 )
135 self.target_distribution = target_distribution
136
137 def regularization_loss(self, mean_anchor_probs: Tensor) -> Tensor:
138 """Calculates regularization loss with a custom target distribution."""
139 target_dist = self.target_distribution(mean_anchor_probs).to(
140 mean_anchor_probs.device
141 )
142 loss = F.kl_div(input=mean_anchor_probs, target=target_dist, reduction="sum")
143 return loss
144
145
146 def _power_law_distribution(size: int, exponent: float, device: torch.device) -> Tensor:
147 """Returns a power law distribution summing up to 1."""
148 k = torch.arange(1, size + 1, device=device)
149 power_dist = k ** (-exponent)
150 power_dist = power_dist / power_dist.sum()
151 return power_dist
152
[end of lightly/loss/pmsn_loss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lightly/loss/pmsn_loss.py b/lightly/loss/pmsn_loss.py
--- a/lightly/loss/pmsn_loss.py
+++ b/lightly/loss/pmsn_loss.py
@@ -68,7 +68,9 @@
exponent=self.power_law_exponent,
device=mean_anchor_probs.device,
)
- loss = F.kl_div(input=mean_anchor_probs, target=power_dist, reduction="sum")
+ loss = F.kl_div(
+ input=mean_anchor_probs.log(), target=power_dist, reduction="sum"
+ )
return loss
@@ -139,7 +141,9 @@
target_dist = self.target_distribution(mean_anchor_probs).to(
mean_anchor_probs.device
)
- loss = F.kl_div(input=mean_anchor_probs, target=target_dist, reduction="sum")
+ loss = F.kl_div(
+ input=mean_anchor_probs.log(), target=target_dist, reduction="sum"
+ )
return loss
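The added `.log()` matters because `torch.nn.functional.kl_div` expects its `input` argument already in log-space: it computes `target * (log(target) - input)` elementwise, so passing raw probabilities yields a value that is not a KL divergence. A standalone check of that convention (PyTorch only, independent of the lightly classes):

```python
import torch
import torch.nn.functional as F

p = torch.tensor([0.1, 0.2, 0.3, 0.4])  # e.g. mean anchor probabilities
q = torch.tensor([0.4, 0.3, 0.2, 0.1])  # e.g. power-law / custom target distribution

manual_kl = (q * (q.log() - p.log())).sum()                     # textbook KL(q || p)
with_log = F.kl_div(input=p.log(), target=q, reduction="sum")   # patched call
without_log = F.kl_div(input=p, target=q, reduction="sum")      # what the old code computed

print(torch.allclose(manual_kl, with_log))     # True
print(torch.allclose(manual_kl, without_log))  # False
```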
|
{"golden_diff": "diff --git a/lightly/loss/pmsn_loss.py b/lightly/loss/pmsn_loss.py\n--- a/lightly/loss/pmsn_loss.py\n+++ b/lightly/loss/pmsn_loss.py\n@@ -68,7 +68,9 @@\n exponent=self.power_law_exponent,\n device=mean_anchor_probs.device,\n )\n- loss = F.kl_div(input=mean_anchor_probs, target=power_dist, reduction=\"sum\")\n+ loss = F.kl_div(\n+ input=mean_anchor_probs.log(), target=power_dist, reduction=\"sum\"\n+ )\n return loss\n \n \n@@ -139,7 +141,9 @@\n target_dist = self.target_distribution(mean_anchor_probs).to(\n mean_anchor_probs.device\n )\n- loss = F.kl_div(input=mean_anchor_probs, target=target_dist, reduction=\"sum\")\n+ loss = F.kl_div(\n+ input=mean_anchor_probs.log(), target=target_dist, reduction=\"sum\"\n+ )\n return loss\n", "issue": "Bug in PMSNLoss\nNice implementation of the [PMSNLoss](https://github.com/lightly-ai/lightly/blob/ddfed3c4dc03a8d2722df24bfa537d24ac80bde6/lightly/loss/pmsn_loss.py)! But the computation of Kullback-Leibler divergence missed `.log()` in Line 71&142.\r\n\n", "before_files": [{"content": "from typing import Callable\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import Tensor\n\nfrom lightly.loss.msn_loss import MSNLoss\n\n\nclass PMSNLoss(MSNLoss):\n \"\"\"Implementation of the loss function from PMSN [0] using a power law target\n distribution.\n\n - [0]: Prior Matching for Siamese Networks, 2022, https://arxiv.org/abs/2210.07277\n\n Attributes:\n temperature:\n Similarities between anchors and targets are scaled by the inverse of\n the temperature. Must be in (0, inf).\n sinkhorn_iterations:\n Number of sinkhorn normalization iterations on the targets.\n regularization_weight:\n Weight factor lambda by which the regularization loss is scaled. Set to 0\n to disable regularization.\n power_law_exponent:\n Exponent for power law distribution. 
Entry k of the distribution is\n proportional to (1 / k) ^ power_law_exponent, with k ranging from 1 to dim + 1.\n gather_distributed:\n If True, then target probabilities are gathered from all GPUs.\n\n Examples:\n\n >>> # initialize loss function\n >>> loss_fn = PMSNLoss()\n >>>\n >>> # generate anchors and targets of images\n >>> anchors = transforms(images)\n >>> targets = transforms(images)\n >>>\n >>> # feed through PMSN model\n >>> anchors_out = model(anchors)\n >>> targets_out = model.target(targets)\n >>>\n >>> # calculate loss\n >>> loss = loss_fn(anchors_out, targets_out, prototypes=model.prototypes)\n \"\"\"\n\n def __init__(\n self,\n temperature: float = 0.1,\n sinkhorn_iterations: int = 3,\n regularization_weight: float = 1,\n power_law_exponent: float = 0.25,\n gather_distributed: bool = False,\n ):\n super().__init__(\n temperature=temperature,\n sinkhorn_iterations=sinkhorn_iterations,\n regularization_weight=regularization_weight,\n gather_distributed=gather_distributed,\n )\n self.power_law_exponent = power_law_exponent\n\n def regularization_loss(self, mean_anchor_probs: Tensor) -> Tensor:\n \"\"\"Calculates regularization loss with a power law target distribution.\"\"\"\n power_dist = _power_law_distribution(\n size=mean_anchor_probs.shape[0],\n exponent=self.power_law_exponent,\n device=mean_anchor_probs.device,\n )\n loss = F.kl_div(input=mean_anchor_probs, target=power_dist, reduction=\"sum\")\n return loss\n\n\nclass PMSNCustomLoss(MSNLoss):\n \"\"\"Implementation of the loss function from PMSN [0] with a custom target\n distribution.\n\n - [0]: Prior Matching for Siamese Networks, 2022, https://arxiv.org/abs/2210.07277\n\n Attributes:\n target_distribution:\n A function that takes the mean anchor probabilities tensor with shape (dim,)\n as input and returns a target probability distribution tensor with the same\n shape. The returned distribution should sum up to one. The final\n regularization loss is calculated as KL(mean_anchor_probs, target_dist)\n where KL is the Kullback-Leibler divergence.\n temperature:\n Similarities between anchors and targets are scaled by the inverse of\n the temperature. Must be in (0, inf).\n sinkhorn_iterations:\n Number of sinkhorn normalization iterations on the targets.\n regularization_weight:\n Weight factor lambda by which the regularization loss is scaled. 
Set to 0\n to disable regularization.\n gather_distributed:\n If True, then target probabilities are gathered from all GPUs.\n\n Examples:\n\n >>> # define custom target distribution\n >>> def my_uniform_distribution(mean_anchor_probabilities: Tensor) -> Tensor:\n >>> dim = mean_anchor_probabilities.shape[0]\n >>> return mean_anchor_probabilities.new_ones(dim) / dim\n >>>\n >>> # initialize loss function\n >>> loss_fn = PMSNCustomLoss(target_distribution=my_uniform_distribution)\n >>>\n >>> # generate anchors and targets of images\n >>> anchors = transforms(images)\n >>> targets = transforms(images)\n >>>\n >>> # feed through PMSN model\n >>> anchors_out = model(anchors)\n >>> targets_out = model.target(targets)\n >>>\n >>> # calculate loss\n >>> loss = loss_fn(anchors_out, targets_out, prototypes=model.prototypes)\n \"\"\"\n\n def __init__(\n self,\n target_distribution: Callable[[Tensor], Tensor],\n temperature: float = 0.1,\n sinkhorn_iterations: int = 3,\n regularization_weight: float = 1,\n gather_distributed: bool = False,\n ):\n super().__init__(\n temperature=temperature,\n sinkhorn_iterations=sinkhorn_iterations,\n regularization_weight=regularization_weight,\n gather_distributed=gather_distributed,\n )\n self.target_distribution = target_distribution\n\n def regularization_loss(self, mean_anchor_probs: Tensor) -> Tensor:\n \"\"\"Calculates regularization loss with a custom target distribution.\"\"\"\n target_dist = self.target_distribution(mean_anchor_probs).to(\n mean_anchor_probs.device\n )\n loss = F.kl_div(input=mean_anchor_probs, target=target_dist, reduction=\"sum\")\n return loss\n\n\ndef _power_law_distribution(size: int, exponent: float, device: torch.device) -> Tensor:\n \"\"\"Returns a power law distribution summing up to 1.\"\"\"\n k = torch.arange(1, size + 1, device=device)\n power_dist = k ** (-exponent)\n power_dist = power_dist / power_dist.sum()\n return power_dist\n", "path": "lightly/loss/pmsn_loss.py"}]}
| 2,220 | 227 |
gh_patches_debug_4288
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-826
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'float' object has no attribute 'get_value' problem.
When I was trying to print the configuration of a model by "model.get_config()" or trying to save my model as a 'json' file:
json_str = autoEncoder.to_json()
open('temp_model.json','w').write(json_str)
autoEncoder.save_weights('temp_model_weights.h5')
It raises the exception "float object has no attribute 'get_value'" in file 'optimizers.py', in class SGD (because I was using SGD as the optimizer). The definition of get_config() is:

    def get_config(self):
        return {"name": self.__class__.__name__,
                "lr": float(self.lr.get_value()),
                "momentum": float(self.momentum.get_value()),
                "decay": float(self.decay.get_value()),
                "nesterov": self.nesterov}

while the `__init__` of class SGD does not contain decay and nesterov
```
def __init__(self, lr=0.01, momentum=0., decay=0., nesterov=False, *args, **kwargs):
super(SGD, self).__init__(**kwargs)
self.__dict__.update(locals())
self.iterations = shared_scalar(0)
self.lr = shared_scalar(lr)
self.momentum = shared_scalar(momentum)
```
Is it a bug? Can I fix the problem by adding 'self.decay = shared_scalar(decay)' or something like this?
Thank you very much!
</issue>
<code>
[start of keras/optimizers.py]
1 from __future__ import absolute_import
2 import theano
3 import theano.tensor as T
4
5 from .utils.theano_utils import shared_zeros, shared_scalar, floatX
6 from .utils.generic_utils import get_from_module
7 from six.moves import zip
8
9
10 def clip_norm(g, c, n):
11 if c > 0:
12 g = T.switch(T.ge(n, c), g * c / n, g)
13 return g
14
15
16 def kl_divergence(p, p_hat):
17 return p_hat - p + p * T.log(p / p_hat)
18
19
20 class Optimizer(object):
21 def __init__(self, **kwargs):
22 self.__dict__.update(kwargs)
23 self.updates = []
24
25 def get_state(self):
26 return [u[0].get_value() for u in self.updates]
27
28 def set_state(self, value_list):
29 assert len(self.updates) == len(value_list)
30 for u, v in zip(self.updates, value_list):
31 u[0].set_value(floatX(v))
32
33 def get_updates(self, params, constraints, loss):
34 raise NotImplementedError
35
36 def get_gradients(self, loss, params):
37
38 grads = T.grad(loss, params)
39
40 if hasattr(self, 'clipnorm') and self.clipnorm > 0:
41 norm = T.sqrt(sum([T.sum(g ** 2) for g in grads]))
42 grads = [clip_norm(g, self.clipnorm, norm) for g in grads]
43
44 if hasattr(self, 'clipvalue') and self.clipvalue > 0:
45 grads = [T.clip(g, -self.clipvalue, self.clipvalue) for g in grads]
46
47 return grads
48
49 def get_config(self):
50 return {"name": self.__class__.__name__}
51
52
53 class SGD(Optimizer):
54
55 def __init__(self, lr=0.01, momentum=0., decay=0., nesterov=False, *args, **kwargs):
56 super(SGD, self).__init__(**kwargs)
57 self.__dict__.update(locals())
58 self.iterations = shared_scalar(0)
59 self.lr = shared_scalar(lr)
60 self.momentum = shared_scalar(momentum)
61 self.decay = shared_scalar(decay)
62
63 def get_updates(self, params, constraints, loss):
64 grads = self.get_gradients(loss, params)
65 lr = self.lr * (1.0 / (1.0 + self.decay * self.iterations))
66 self.updates = [(self.iterations, self.iterations + 1.)]
67
68 for p, g, c in zip(params, grads, constraints):
69 m = shared_zeros(p.get_value().shape) # momentum
70 v = self.momentum * m - lr * g # velocity
71 self.updates.append((m, v))
72
73 if self.nesterov:
74 new_p = p + self.momentum * v - lr * g
75 else:
76 new_p = p + v
77
78 self.updates.append((p, c(new_p))) # apply constraints
79 return self.updates
80
81 def get_config(self):
82 return {"name": self.__class__.__name__,
83 "lr": float(self.lr.get_value()),
84 "momentum": float(self.momentum.get_value()),
85 "decay": float(self.decay.get_value()),
86 "nesterov": self.nesterov}
87
88
89 class RMSprop(Optimizer):
90 def __init__(self, lr=0.001, rho=0.9, epsilon=1e-6, *args, **kwargs):
91 super(RMSprop, self).__init__(**kwargs)
92 self.__dict__.update(locals())
93 self.lr = shared_scalar(lr)
94 self.rho = shared_scalar(rho)
95
96 def get_updates(self, params, constraints, loss):
97 grads = self.get_gradients(loss, params)
98 accumulators = [shared_zeros(p.get_value().shape) for p in params]
99 self.updates = []
100
101 for p, g, a, c in zip(params, grads, accumulators, constraints):
102 new_a = self.rho * a + (1 - self.rho) * g ** 2 # update accumulator
103 self.updates.append((a, new_a))
104
105 new_p = p - self.lr * g / T.sqrt(new_a + self.epsilon)
106 self.updates.append((p, c(new_p))) # apply constraints
107 return self.updates
108
109 def get_config(self):
110 return {"name": self.__class__.__name__,
111 "lr": float(self.lr.get_value()),
112 "rho": float(self.rho.get_value()),
113 "epsilon": self.epsilon}
114
115
116 class Adagrad(Optimizer):
117 def __init__(self, lr=0.01, epsilon=1e-6, *args, **kwargs):
118 super(Adagrad, self).__init__(**kwargs)
119 self.__dict__.update(locals())
120 self.lr = shared_scalar(lr)
121
122 def get_updates(self, params, constraints, loss):
123 grads = self.get_gradients(loss, params)
124 accumulators = [shared_zeros(p.get_value().shape) for p in params]
125 self.updates = []
126
127 for p, g, a, c in zip(params, grads, accumulators, constraints):
128 new_a = a + g ** 2 # update accumulator
129 self.updates.append((a, new_a))
130 new_p = p - self.lr * g / T.sqrt(new_a + self.epsilon)
131 self.updates.append((p, c(new_p))) # apply constraints
132 return self.updates
133
134 def get_config(self):
135 return {"name": self.__class__.__name__,
136 "lr": float(self.lr.get_value()),
137 "epsilon": self.epsilon}
138
139
140 class Adadelta(Optimizer):
141 '''
142 Reference: http://arxiv.org/abs/1212.5701
143 '''
144 def __init__(self, lr=1.0, rho=0.95, epsilon=1e-6, *args, **kwargs):
145 super(Adadelta, self).__init__(**kwargs)
146 self.__dict__.update(locals())
147 self.lr = shared_scalar(lr)
148
149 def get_updates(self, params, constraints, loss):
150 grads = self.get_gradients(loss, params)
151 accumulators = [shared_zeros(p.get_value().shape) for p in params]
152 delta_accumulators = [shared_zeros(p.get_value().shape) for p in params]
153 self.updates = []
154
155 for p, g, a, d_a, c in zip(params, grads, accumulators,
156 delta_accumulators, constraints):
157 new_a = self.rho * a + (1 - self.rho) * g ** 2 # update accumulator
158 self.updates.append((a, new_a))
159
160 # use the new accumulator and the *old* delta_accumulator
161 update = g * T.sqrt(d_a + self.epsilon) / T.sqrt(new_a +
162 self.epsilon)
163
164 new_p = p - self.lr * update
165 self.updates.append((p, c(new_p))) # apply constraints
166
167 # update delta_accumulator
168 new_d_a = self.rho * d_a + (1 - self.rho) * update ** 2
169 self.updates.append((d_a, new_d_a))
170 return self.updates
171
172 def get_config(self):
173 return {"name": self.__class__.__name__,
174 "lr": float(self.lr.get_value()),
175 "rho": self.rho,
176 "epsilon": self.epsilon}
177
178
179 class Adam(Optimizer):
180 '''
181 Reference: http://arxiv.org/abs/1412.6980v8
182
183 Default parameters follow those provided in the original paper.
184 '''
185 def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8, *args, **kwargs):
186 super(Adam, self).__init__(**kwargs)
187 self.__dict__.update(locals())
188 self.iterations = shared_scalar(0)
189 self.lr = shared_scalar(lr)
190
191 def get_updates(self, params, constraints, loss):
192 grads = self.get_gradients(loss, params)
193 self.updates = [(self.iterations, self.iterations+1.)]
194
195 t = self.iterations + 1
196 lr_t = self.lr * T.sqrt(1-self.beta_2**t)/(1-self.beta_1**t)
197
198 for p, g, c in zip(params, grads, constraints):
199 m = theano.shared(p.get_value() * 0.) # zero init of moment
200 v = theano.shared(p.get_value() * 0.) # zero init of velocity
201
202 m_t = (self.beta_1 * m) + (1 - self.beta_1) * g
203 v_t = (self.beta_2 * v) + (1 - self.beta_2) * (g**2)
204 p_t = p - lr_t * m_t / (T.sqrt(v_t) + self.epsilon)
205
206 self.updates.append((m, m_t))
207 self.updates.append((v, v_t))
208 self.updates.append((p, c(p_t))) # apply constraints
209 return self.updates
210
211 def get_config(self):
212 return {"name": self.__class__.__name__,
213 "lr": float(self.lr.get_value()),
214 "beta_1": self.beta_1,
215 "beta_2": self.beta_2,
216 "epsilon": self.epsilon}
217
218 # aliases
219 sgd = SGD
220 rmsprop = RMSprop
221 adagrad = Adagrad
222 adadelta = Adadelta
223 adam = Adam
224
225
226 def get(identifier, kwargs=None):
227 return get_from_module(identifier, globals(), 'optimizer', instantiate=True,
228 kwargs=kwargs)
229
[end of keras/optimizers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras/optimizers.py b/keras/optimizers.py
--- a/keras/optimizers.py
+++ b/keras/optimizers.py
@@ -82,7 +82,7 @@
return {"name": self.__class__.__name__,
"lr": float(self.lr.get_value()),
"momentum": float(self.momentum.get_value()),
- "decay": float(self.decay.get_value()),
+ "decay": float(self.decay),
"nesterov": self.nesterov}
|
{"golden_diff": "diff --git a/keras/optimizers.py b/keras/optimizers.py\n--- a/keras/optimizers.py\n+++ b/keras/optimizers.py\n@@ -82,7 +82,7 @@\n return {\"name\": self.__class__.__name__,\n \"lr\": float(self.lr.get_value()),\n \"momentum\": float(self.momentum.get_value()),\n- \"decay\": float(self.decay.get_value()),\n+ \"decay\": float(self.decay),\n \"nesterov\": self.nesterov}\n", "issue": "'float' object has no attribute 'get_value' problem.\nWhen I was trying to print the configuration of a model by \"model.get_config()\" or trying to save my model as a 'json' file:\n\njson_str = autoEncoder.to_json()\nopen('temp_model.json','w').write(json_str)\nautoEncoder.save_weights('temp_model_weights.h5')\n\nIt raise the exception \"float object has no attribute 'get_value' \" in file 'optimizer.py', in class SGD(because I was using SGD as the optimizer), the definition of get_config() is:\n\n def get_config(self):\n return {\"name\": self.**class**.**name**,\n \"lr\": float(self.lr.get_value()),\n \"momentum\": float(self.momentum.get_value()),\n \"decay\": float(self.decay.get_value()),\n \"nesterov\": self.nesterov}\n\nwhile the **init** of class SGD does not contain decay and nesterov\n\n```\ndef __init__(self, lr=0.01, momentum=0., decay=0., nesterov=False, *args, **kwargs):\n super(SGD, self).__init__(**kwargs)\n self.__dict__.update(locals())\n self.iterations = shared_scalar(0)\n self.lr = shared_scalar(lr)\n self.momentum = shared_scalar(momentum)\n```\n\nIs it a bug? Can I fix the problem by adding 'self.decay = shared_scalar(decay)' or something like this?\n\nThank you very much!\n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport theano\nimport theano.tensor as T\n\nfrom .utils.theano_utils import shared_zeros, shared_scalar, floatX\nfrom .utils.generic_utils import get_from_module\nfrom six.moves import zip\n\n\ndef clip_norm(g, c, n):\n if c > 0:\n g = T.switch(T.ge(n, c), g * c / n, g)\n return g\n\n\ndef kl_divergence(p, p_hat):\n return p_hat - p + p * T.log(p / p_hat)\n\n\nclass Optimizer(object):\n def __init__(self, **kwargs):\n self.__dict__.update(kwargs)\n self.updates = []\n\n def get_state(self):\n return [u[0].get_value() for u in self.updates]\n\n def set_state(self, value_list):\n assert len(self.updates) == len(value_list)\n for u, v in zip(self.updates, value_list):\n u[0].set_value(floatX(v))\n\n def get_updates(self, params, constraints, loss):\n raise NotImplementedError\n\n def get_gradients(self, loss, params):\n\n grads = T.grad(loss, params)\n\n if hasattr(self, 'clipnorm') and self.clipnorm > 0:\n norm = T.sqrt(sum([T.sum(g ** 2) for g in grads]))\n grads = [clip_norm(g, self.clipnorm, norm) for g in grads]\n\n if hasattr(self, 'clipvalue') and self.clipvalue > 0:\n grads = [T.clip(g, -self.clipvalue, self.clipvalue) for g in grads]\n\n return grads\n\n def get_config(self):\n return {\"name\": self.__class__.__name__}\n\n\nclass SGD(Optimizer):\n\n def __init__(self, lr=0.01, momentum=0., decay=0., nesterov=False, *args, **kwargs):\n super(SGD, self).__init__(**kwargs)\n self.__dict__.update(locals())\n self.iterations = shared_scalar(0)\n self.lr = shared_scalar(lr)\n self.momentum = shared_scalar(momentum)\n self.decay = shared_scalar(decay)\n\n def get_updates(self, params, constraints, loss):\n grads = self.get_gradients(loss, params)\n lr = self.lr * (1.0 / (1.0 + self.decay * self.iterations))\n self.updates = [(self.iterations, self.iterations + 1.)]\n\n for p, g, c in zip(params, grads, constraints):\n m = 
shared_zeros(p.get_value().shape) # momentum\n v = self.momentum * m - lr * g # velocity\n self.updates.append((m, v))\n\n if self.nesterov:\n new_p = p + self.momentum * v - lr * g\n else:\n new_p = p + v\n\n self.updates.append((p, c(new_p))) # apply constraints\n return self.updates\n\n def get_config(self):\n return {\"name\": self.__class__.__name__,\n \"lr\": float(self.lr.get_value()),\n \"momentum\": float(self.momentum.get_value()),\n \"decay\": float(self.decay.get_value()),\n \"nesterov\": self.nesterov}\n\n\nclass RMSprop(Optimizer):\n def __init__(self, lr=0.001, rho=0.9, epsilon=1e-6, *args, **kwargs):\n super(RMSprop, self).__init__(**kwargs)\n self.__dict__.update(locals())\n self.lr = shared_scalar(lr)\n self.rho = shared_scalar(rho)\n\n def get_updates(self, params, constraints, loss):\n grads = self.get_gradients(loss, params)\n accumulators = [shared_zeros(p.get_value().shape) for p in params]\n self.updates = []\n\n for p, g, a, c in zip(params, grads, accumulators, constraints):\n new_a = self.rho * a + (1 - self.rho) * g ** 2 # update accumulator\n self.updates.append((a, new_a))\n\n new_p = p - self.lr * g / T.sqrt(new_a + self.epsilon)\n self.updates.append((p, c(new_p))) # apply constraints\n return self.updates\n\n def get_config(self):\n return {\"name\": self.__class__.__name__,\n \"lr\": float(self.lr.get_value()),\n \"rho\": float(self.rho.get_value()),\n \"epsilon\": self.epsilon}\n\n\nclass Adagrad(Optimizer):\n def __init__(self, lr=0.01, epsilon=1e-6, *args, **kwargs):\n super(Adagrad, self).__init__(**kwargs)\n self.__dict__.update(locals())\n self.lr = shared_scalar(lr)\n\n def get_updates(self, params, constraints, loss):\n grads = self.get_gradients(loss, params)\n accumulators = [shared_zeros(p.get_value().shape) for p in params]\n self.updates = []\n\n for p, g, a, c in zip(params, grads, accumulators, constraints):\n new_a = a + g ** 2 # update accumulator\n self.updates.append((a, new_a))\n new_p = p - self.lr * g / T.sqrt(new_a + self.epsilon)\n self.updates.append((p, c(new_p))) # apply constraints\n return self.updates\n\n def get_config(self):\n return {\"name\": self.__class__.__name__,\n \"lr\": float(self.lr.get_value()),\n \"epsilon\": self.epsilon}\n\n\nclass Adadelta(Optimizer):\n '''\n Reference: http://arxiv.org/abs/1212.5701\n '''\n def __init__(self, lr=1.0, rho=0.95, epsilon=1e-6, *args, **kwargs):\n super(Adadelta, self).__init__(**kwargs)\n self.__dict__.update(locals())\n self.lr = shared_scalar(lr)\n\n def get_updates(self, params, constraints, loss):\n grads = self.get_gradients(loss, params)\n accumulators = [shared_zeros(p.get_value().shape) for p in params]\n delta_accumulators = [shared_zeros(p.get_value().shape) for p in params]\n self.updates = []\n\n for p, g, a, d_a, c in zip(params, grads, accumulators,\n delta_accumulators, constraints):\n new_a = self.rho * a + (1 - self.rho) * g ** 2 # update accumulator\n self.updates.append((a, new_a))\n\n # use the new accumulator and the *old* delta_accumulator\n update = g * T.sqrt(d_a + self.epsilon) / T.sqrt(new_a +\n self.epsilon)\n\n new_p = p - self.lr * update\n self.updates.append((p, c(new_p))) # apply constraints\n\n # update delta_accumulator\n new_d_a = self.rho * d_a + (1 - self.rho) * update ** 2\n self.updates.append((d_a, new_d_a))\n return self.updates\n\n def get_config(self):\n return {\"name\": self.__class__.__name__,\n \"lr\": float(self.lr.get_value()),\n \"rho\": self.rho,\n \"epsilon\": self.epsilon}\n\n\nclass Adam(Optimizer):\n '''\n Reference: 
http://arxiv.org/abs/1412.6980v8\n\n Default parameters follow those provided in the original paper.\n '''\n def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8, *args, **kwargs):\n super(Adam, self).__init__(**kwargs)\n self.__dict__.update(locals())\n self.iterations = shared_scalar(0)\n self.lr = shared_scalar(lr)\n\n def get_updates(self, params, constraints, loss):\n grads = self.get_gradients(loss, params)\n self.updates = [(self.iterations, self.iterations+1.)]\n\n t = self.iterations + 1\n lr_t = self.lr * T.sqrt(1-self.beta_2**t)/(1-self.beta_1**t)\n\n for p, g, c in zip(params, grads, constraints):\n m = theano.shared(p.get_value() * 0.) # zero init of moment\n v = theano.shared(p.get_value() * 0.) # zero init of velocity\n\n m_t = (self.beta_1 * m) + (1 - self.beta_1) * g\n v_t = (self.beta_2 * v) + (1 - self.beta_2) * (g**2)\n p_t = p - lr_t * m_t / (T.sqrt(v_t) + self.epsilon)\n\n self.updates.append((m, m_t))\n self.updates.append((v, v_t))\n self.updates.append((p, c(p_t))) # apply constraints\n return self.updates\n\n def get_config(self):\n return {\"name\": self.__class__.__name__,\n \"lr\": float(self.lr.get_value()),\n \"beta_1\": self.beta_1,\n \"beta_2\": self.beta_2,\n \"epsilon\": self.epsilon}\n\n# aliases\nsgd = SGD\nrmsprop = RMSprop\nadagrad = Adagrad\nadadelta = Adadelta\nadam = Adam\n\n\ndef get(identifier, kwargs=None):\n return get_from_module(identifier, globals(), 'optimizer', instantiate=True,\n kwargs=kwargs)\n", "path": "keras/optimizers.py"}]}
| 3,569 | 118 |
gh_patches_debug_9581
|
rasdani/github-patches
|
git_diff
|
privacyidea__privacyidea-3092
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change data message to notification message
Currently the push notification is sent as a data message.
Check if it is sensible to send the message as a notification type.
</issue>
<code>
[start of privacyidea/lib/smsprovider/FirebaseProvider.py]
1 # -*- coding: utf-8 -*-
2 #
3 # 2019-02-12 Cornelius KΓΆlbel <[email protected]>
4 #
5 #
6 # This program is free software: you can redistribute it and/or
7 # modify it under the terms of the GNU Affero General Public
8 # License, version 3, as published by the Free Software Foundation.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU Affero General Public License for more details.
14 #
15 # You should have received a copy of the
16 # GNU Affero General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 #
20
21 __doc__ = """This is the provider class that communicates with Googles
22 Firebase Cloud Messaging Service.
23 This provider is used for the push token and can be used for SMS tokens.
24 """
25
26 from privacyidea.lib.smsprovider.SMSProvider import (ISMSProvider)
27 from privacyidea.lib.error import ConfigAdminError
28 from privacyidea.lib.framework import get_app_local_store
29 from privacyidea.lib import _
30 import logging
31 from google.oauth2 import service_account
32 from google.auth.transport.requests import AuthorizedSession
33 import json
34 import time
35
36 FIREBASE_URL_SEND = 'https://fcm.googleapis.com/v1/projects/{0!s}/messages:send'
37 SCOPES = ['https://www.googleapis.com/auth/cloud-platform',
38 'https://www.googleapis.com/auth/datastore',
39 'https://www.googleapis.com/auth/devstorage.read_write',
40 'https://www.googleapis.com/auth/firebase',
41 'https://www.googleapis.com/auth/identitytoolkit',
42 'https://www.googleapis.com/auth/userinfo.email']
43
44 log = logging.getLogger(__name__)
45
46
47 def get_firebase_access_token(config_file_name):
48 """
49 This returns the access token for a given JSON config file name
50
51 :param config_file_name: The json file with the Service account credentials
52 :type config_file_name: str
53 :return: Firebase credentials
54 :rtype: google.oauth2.service_account.Credentials
55 """
56 fbt = "firebase_token"
57 app_store = get_app_local_store()
58
59 if fbt not in app_store or not isinstance(app_store[fbt], dict):
60 # initialize the firebase_token in the app_store as dict
61 app_store[fbt] = {}
62
63 if not isinstance(app_store[fbt].get(config_file_name), service_account.Credentials) or \
64 app_store[fbt].get(config_file_name).expired:
65 # If the type of the config is not of class Credentials or if the token
66 # has expired we get new scoped access token credentials
67 credentials = service_account.Credentials.from_service_account_file(config_file_name,
68 scopes=SCOPES)
69
70 log.debug("Fetching a new access_token for {!r} from firebase...".format(config_file_name))
71 # We do not use a lock here: The worst that could happen is that two threads
72 # fetch new auth tokens concurrently. In this case, one of them wins and
73 # is written to the dictionary.
74 app_store[fbt][config_file_name] = credentials
75 readable_time = credentials.expiry.isoformat() if credentials.expiry else 'Never'
76 log.debug(u"Setting the expiration for {!r} of the new access_token "
77 u"to {!s}.".format(config_file_name, readable_time))
78
79 return app_store[fbt][config_file_name]
80
81
82 class FIREBASE_CONFIG:
83 REGISTRATION_URL = "registration URL"
84 TTL = "time to live"
85 JSON_CONFIG = "JSON config file"
86 PROJECT_ID = "projectid"
87 PROJECT_NUMBER = "projectnumber"
88 APP_ID = "appid"
89 API_KEY = "apikey"
90 APP_ID_IOS = "appidios"
91 API_KEY_IOS = "apikeyios"
92 HTTPS_PROXY = "httpsproxy"
93
94
95 class FirebaseProvider(ISMSProvider):
96
97 def __init__(self, db_smsprovider_object=None, smsgateway=None):
98 ISMSProvider.__init__(self, db_smsprovider_object, smsgateway)
99 self.access_token_info = None
100 self.access_token_expires_at = 0
101
102 def submit_message(self, firebase_token, data):
103 """
104 send a message to a registered Firebase client
105 This can be a simple OTP value or a cryptographic challenge response.
106
107 :param firebase_token: The firebase token of the smartphone
108 :type firebase_token: str
109 :param data: the data dictionary part of the message to submit to the phone
110 :type data: dict
111 :return: bool
112 """
113 res = False
114
115 credentials = get_firebase_access_token(self.smsgateway.option_dict.get(
116 FIREBASE_CONFIG.JSON_CONFIG))
117
118 authed_session = AuthorizedSession(credentials)
119
120 headers = {
121 'Content-Type': 'application/json; UTF-8',
122 }
123 fcm_message = {
124 "message": {
125 "data": data,
126 "token": firebase_token,
127 "android": {
128 "priority": "HIGH",
129 "ttl": "120s",
130 "fcm_options": {"analytics_label": "AndroidPushToken"}
131 },
132 "apns": {
133 "headers": {
134 "apns-priority": "10",
135 "apns-push-type": "alert",
136 "apns-collapse-id": "privacyidea.pushtoken",
137 "apns-expiration": str(int(time.time()) + 120)
138 },
139 "payload": {
140 "aps": {
141 "alert": {
142 "title": data.get("title"),
143 "body": data.get("question"),
144 },
145 "sound": "default",
146 "category": "PUSH_AUTHENTICATION"
147 },
148 },
149 "fcm_options": {"analytics_label": "iOSPushToken"}
150 }
151 }
152 }
153
154 proxies = {}
155 if self.smsgateway.option_dict.get(FIREBASE_CONFIG.HTTPS_PROXY):
156 proxies["https"] = self.smsgateway.option_dict.get(FIREBASE_CONFIG.HTTPS_PROXY)
157 url = FIREBASE_URL_SEND.format(self.smsgateway.option_dict.get(FIREBASE_CONFIG.PROJECT_ID))
158 resp = authed_session.post(url, data=json.dumps(fcm_message), headers=headers, proxies=proxies)
159
160 if resp.status_code == 200:
161 log.debug("Message sent successfully to Firebase service.")
162 res = True
163 else:
164 log.warning(u"Failed to send message to firebase service: {0!s}".format(resp.text))
165
166 return res
167
168 def check_configuration(self):
169 """
170 This method checks the sanity of the configuration of this provider.
171 If there is a configuration error, than an exception is raised.
172 :return:
173 """
174 json_file = self.smsgateway.option_dict.get(FIREBASE_CONFIG.JSON_CONFIG)
175 server_config = None
176 with open(json_file) as f:
177 server_config = json.load(f)
178 if server_config:
179 if server_config.get("type") != "service_account":
180 raise ConfigAdminError(description="The JSON file is not a valid firebase credentials file.")
181 project_id = self.smsgateway.option_dict.get(FIREBASE_CONFIG.PROJECT_ID)
182 if server_config.get("project_id") != project_id:
183 raise ConfigAdminError(description="The project_id you entered does not match the project_id from the JSON file.")
184
185 else:
186 raise ConfigAdminError(description="Please check your configuration. Can not load JSON file.")
187
188 # We need at least
189 # FIREBASE_CONFIG.API_KEY_IOS and FIREBASE_CONFIG.APP_ID_IOS
190 # or
191 # FIREBASE_CONFIG.API_KEY and FIREBASE_CONFIG.APP_ID
192 android_configured = bool(self.smsgateway.option_dict.get(FIREBASE_CONFIG.APP_ID)) and \
193 bool(self.smsgateway.option_dict.get(FIREBASE_CONFIG.API_KEY))
194 ios_configured = bool(self.smsgateway.option_dict.get(FIREBASE_CONFIG.APP_ID_IOS)) and \
195 bool(self.smsgateway.option_dict.get(FIREBASE_CONFIG.API_KEY_IOS))
196 if not android_configured and not ios_configured:
197 raise ConfigAdminError(description="You need to at least configure either app_id and api_key or"
198 " app_id_ios and api_key_ios.")
199
200 @classmethod
201 def parameters(cls):
202 """
203 Return a dictionary, that describes the parameters and options for the
204 Firebase Provider
205 Parameters are required keys to values.
206
207 :return: dict
208 """
209 params = {"options_allowed": False,
210 "headers_allowed": False,
211 "parameters": {
212 FIREBASE_CONFIG.PROJECT_ID: {
213 "required": True,
214 "description": _("The project ID, that the client should use. Get it from your Firebase console.")
215 },
216 FIREBASE_CONFIG.PROJECT_NUMBER: {
217 "required": True,
218 "description": _(
219 "The project number, that the client should use. Get it from your Firebase console.")
220 },
221 FIREBASE_CONFIG.APP_ID: {
222 "required": False,
223 "description": _(
224 "The App ID, that the Android client should use. Get it from your Firebase console.")
225 },
226 FIREBASE_CONFIG.API_KEY: {
227 "required": False,
228 "description": _(
229 "The API Key, that the Android client should use. Get it from your Firebase console.")
230 },
231 FIREBASE_CONFIG.APP_ID_IOS:{
232 "required": False,
233 "description": _(
234 "The App ID, that the iOS client should use. Get it from your Firebase console.")
235 },
236 FIREBASE_CONFIG.API_KEY_IOS: {
237 "required": False,
238 "description": _(
239 "The API Key, that the iOS client should use. Get it from your Firebase console.")
240 },
241 FIREBASE_CONFIG.JSON_CONFIG: {
242 "required": True,
243 "description": _("The filename of the JSON config file, that allows privacyIDEA to talk"
244 " to the Firebase REST API.")
245 },
246 FIREBASE_CONFIG.HTTPS_PROXY: {
247 "required": False,
248 "description": _("Proxy setting for HTTPS connections to googleapis.com.")
249 }
250 }
251 }
252 return params
253
[end of privacyidea/lib/smsprovider/FirebaseProvider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/privacyidea/lib/smsprovider/FirebaseProvider.py b/privacyidea/lib/smsprovider/FirebaseProvider.py
--- a/privacyidea/lib/smsprovider/FirebaseProvider.py
+++ b/privacyidea/lib/smsprovider/FirebaseProvider.py
@@ -124,6 +124,10 @@
"message": {
"data": data,
"token": firebase_token,
+ "notification": {
+ "title": data.get("title"),
+ "body": data.get("question")
+ },
"android": {
"priority": "HIGH",
"ttl": "120s",
|
{"golden_diff": "diff --git a/privacyidea/lib/smsprovider/FirebaseProvider.py b/privacyidea/lib/smsprovider/FirebaseProvider.py\n--- a/privacyidea/lib/smsprovider/FirebaseProvider.py\n+++ b/privacyidea/lib/smsprovider/FirebaseProvider.py\n@@ -124,6 +124,10 @@\n \"message\": {\n \"data\": data,\n \"token\": firebase_token,\n+ \"notification\": {\n+ \"title\": data.get(\"title\"),\n+ \"body\": data.get(\"question\")\n+ },\n \"android\": {\n \"priority\": \"HIGH\",\n \"ttl\": \"120s\",\n", "issue": "Change data message to notification message\nCurrently the push notification is sent as data message.\r\nCheck, if it is sensible to send the message as notification type.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2019-02-12 Cornelius K\u00f6lbel <[email protected]>\n#\n#\n# This program is free software: you can redistribute it and/or\n# modify it under the terms of the GNU Affero General Public\n# License, version 3, as published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the\n# GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n#\n\n__doc__ = \"\"\"This is the provider class that communicates with Googles\nFirebase Cloud Messaging Service.\nThis provider is used for the push token and can be used for SMS tokens.\n\"\"\"\n\nfrom privacyidea.lib.smsprovider.SMSProvider import (ISMSProvider)\nfrom privacyidea.lib.error import ConfigAdminError\nfrom privacyidea.lib.framework import get_app_local_store\nfrom privacyidea.lib import _\nimport logging\nfrom google.oauth2 import service_account\nfrom google.auth.transport.requests import AuthorizedSession\nimport json\nimport time\n\nFIREBASE_URL_SEND = 'https://fcm.googleapis.com/v1/projects/{0!s}/messages:send'\nSCOPES = ['https://www.googleapis.com/auth/cloud-platform',\n 'https://www.googleapis.com/auth/datastore',\n 'https://www.googleapis.com/auth/devstorage.read_write',\n 'https://www.googleapis.com/auth/firebase',\n 'https://www.googleapis.com/auth/identitytoolkit',\n 'https://www.googleapis.com/auth/userinfo.email']\n\nlog = logging.getLogger(__name__)\n\n\ndef get_firebase_access_token(config_file_name):\n \"\"\"\n This returns the access token for a given JSON config file name\n\n :param config_file_name: The json file with the Service account credentials\n :type config_file_name: str\n :return: Firebase credentials\n :rtype: google.oauth2.service_account.Credentials\n \"\"\"\n fbt = \"firebase_token\"\n app_store = get_app_local_store()\n\n if fbt not in app_store or not isinstance(app_store[fbt], dict):\n # initialize the firebase_token in the app_store as dict\n app_store[fbt] = {}\n\n if not isinstance(app_store[fbt].get(config_file_name), service_account.Credentials) or \\\n app_store[fbt].get(config_file_name).expired:\n # If the type of the config is not of class Credentials or if the token\n # has expired we get new scoped access token credentials\n credentials = service_account.Credentials.from_service_account_file(config_file_name,\n scopes=SCOPES)\n\n log.debug(\"Fetching a new access_token for {!r} from firebase...\".format(config_file_name))\n # We do not use a lock here: The worst that could happen is that two threads\n # fetch new auth tokens concurrently. 
In this case, one of them wins and\n # is written to the dictionary.\n app_store[fbt][config_file_name] = credentials\n readable_time = credentials.expiry.isoformat() if credentials.expiry else 'Never'\n log.debug(u\"Setting the expiration for {!r} of the new access_token \"\n u\"to {!s}.\".format(config_file_name, readable_time))\n\n return app_store[fbt][config_file_name]\n\n\nclass FIREBASE_CONFIG:\n REGISTRATION_URL = \"registration URL\"\n TTL = \"time to live\"\n JSON_CONFIG = \"JSON config file\"\n PROJECT_ID = \"projectid\"\n PROJECT_NUMBER = \"projectnumber\"\n APP_ID = \"appid\"\n API_KEY = \"apikey\"\n APP_ID_IOS = \"appidios\"\n API_KEY_IOS = \"apikeyios\"\n HTTPS_PROXY = \"httpsproxy\"\n\n\nclass FirebaseProvider(ISMSProvider):\n\n def __init__(self, db_smsprovider_object=None, smsgateway=None):\n ISMSProvider.__init__(self, db_smsprovider_object, smsgateway)\n self.access_token_info = None\n self.access_token_expires_at = 0\n\n def submit_message(self, firebase_token, data):\n \"\"\"\n send a message to a registered Firebase client\n This can be a simple OTP value or a cryptographic challenge response.\n\n :param firebase_token: The firebase token of the smartphone\n :type firebase_token: str\n :param data: the data dictionary part of the message to submit to the phone\n :type data: dict\n :return: bool\n \"\"\"\n res = False\n\n credentials = get_firebase_access_token(self.smsgateway.option_dict.get(\n FIREBASE_CONFIG.JSON_CONFIG))\n\n authed_session = AuthorizedSession(credentials)\n\n headers = {\n 'Content-Type': 'application/json; UTF-8',\n }\n fcm_message = {\n \"message\": {\n \"data\": data,\n \"token\": firebase_token,\n \"android\": {\n \"priority\": \"HIGH\",\n \"ttl\": \"120s\",\n \"fcm_options\": {\"analytics_label\": \"AndroidPushToken\"}\n },\n \"apns\": {\n \"headers\": {\n \"apns-priority\": \"10\",\n \"apns-push-type\": \"alert\",\n \"apns-collapse-id\": \"privacyidea.pushtoken\",\n \"apns-expiration\": str(int(time.time()) + 120)\n },\n \"payload\": {\n \"aps\": {\n \"alert\": {\n \"title\": data.get(\"title\"),\n \"body\": data.get(\"question\"),\n },\n \"sound\": \"default\",\n \"category\": \"PUSH_AUTHENTICATION\"\n },\n },\n \"fcm_options\": {\"analytics_label\": \"iOSPushToken\"}\n }\n }\n }\n\n proxies = {}\n if self.smsgateway.option_dict.get(FIREBASE_CONFIG.HTTPS_PROXY):\n proxies[\"https\"] = self.smsgateway.option_dict.get(FIREBASE_CONFIG.HTTPS_PROXY)\n url = FIREBASE_URL_SEND.format(self.smsgateway.option_dict.get(FIREBASE_CONFIG.PROJECT_ID))\n resp = authed_session.post(url, data=json.dumps(fcm_message), headers=headers, proxies=proxies)\n\n if resp.status_code == 200:\n log.debug(\"Message sent successfully to Firebase service.\")\n res = True\n else:\n log.warning(u\"Failed to send message to firebase service: {0!s}\".format(resp.text))\n\n return res\n\n def check_configuration(self):\n \"\"\"\n This method checks the sanity of the configuration of this provider.\n If there is a configuration error, than an exception is raised.\n :return:\n \"\"\"\n json_file = self.smsgateway.option_dict.get(FIREBASE_CONFIG.JSON_CONFIG)\n server_config = None\n with open(json_file) as f:\n server_config = json.load(f)\n if server_config:\n if server_config.get(\"type\") != \"service_account\":\n raise ConfigAdminError(description=\"The JSON file is not a valid firebase credentials file.\")\n project_id = self.smsgateway.option_dict.get(FIREBASE_CONFIG.PROJECT_ID)\n if server_config.get(\"project_id\") != project_id:\n raise ConfigAdminError(description=\"The 
project_id you entered does not match the project_id from the JSON file.\")\n\n else:\n raise ConfigAdminError(description=\"Please check your configuration. Can not load JSON file.\")\n\n # We need at least\n # FIREBASE_CONFIG.API_KEY_IOS and FIREBASE_CONFIG.APP_ID_IOS\n # or\n # FIREBASE_CONFIG.API_KEY and FIREBASE_CONFIG.APP_ID\n android_configured = bool(self.smsgateway.option_dict.get(FIREBASE_CONFIG.APP_ID)) and \\\n bool(self.smsgateway.option_dict.get(FIREBASE_CONFIG.API_KEY))\n ios_configured = bool(self.smsgateway.option_dict.get(FIREBASE_CONFIG.APP_ID_IOS)) and \\\n bool(self.smsgateway.option_dict.get(FIREBASE_CONFIG.API_KEY_IOS))\n if not android_configured and not ios_configured:\n raise ConfigAdminError(description=\"You need to at least configure either app_id and api_key or\"\n \" app_id_ios and api_key_ios.\")\n\n @classmethod\n def parameters(cls):\n \"\"\"\n Return a dictionary, that describes the parameters and options for the\n Firebase Provider\n Parameters are required keys to values.\n\n :return: dict\n \"\"\"\n params = {\"options_allowed\": False,\n \"headers_allowed\": False,\n \"parameters\": {\n FIREBASE_CONFIG.PROJECT_ID: {\n \"required\": True,\n \"description\": _(\"The project ID, that the client should use. Get it from your Firebase console.\")\n },\n FIREBASE_CONFIG.PROJECT_NUMBER: {\n \"required\": True,\n \"description\": _(\n \"The project number, that the client should use. Get it from your Firebase console.\")\n },\n FIREBASE_CONFIG.APP_ID: {\n \"required\": False,\n \"description\": _(\n \"The App ID, that the Android client should use. Get it from your Firebase console.\")\n },\n FIREBASE_CONFIG.API_KEY: {\n \"required\": False,\n \"description\": _(\n \"The API Key, that the Android client should use. Get it from your Firebase console.\")\n },\n FIREBASE_CONFIG.APP_ID_IOS:{\n \"required\": False,\n \"description\": _(\n \"The App ID, that the iOS client should use. Get it from your Firebase console.\")\n },\n FIREBASE_CONFIG.API_KEY_IOS: {\n \"required\": False,\n \"description\": _(\n \"The API Key, that the iOS client should use. Get it from your Firebase console.\")\n },\n FIREBASE_CONFIG.JSON_CONFIG: {\n \"required\": True,\n \"description\": _(\"The filename of the JSON config file, that allows privacyIDEA to talk\"\n \" to the Firebase REST API.\")\n },\n FIREBASE_CONFIG.HTTPS_PROXY: {\n \"required\": False,\n \"description\": _(\"Proxy setting for HTTPS connections to googleapis.com.\")\n }\n }\n }\n return params\n", "path": "privacyidea/lib/smsprovider/FirebaseProvider.py"}]}
| 3,423 | 140 |
gh_patches_debug_39392
|
rasdani/github-patches
|
git_diff
|
xonsh__xonsh-2232
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
time.clock() is deprecated
Should remove and replace in timings module.
</issue>
<code>
[start of xonsh/timings.py]
1 # -*- coding: utf-8 -*-
2 """Timing related functionality for the xonsh shell.
3
4 The following time_it alias and Timer was forked from the IPython project:
5 * Copyright (c) 2008-2014, IPython Development Team
6 * Copyright (C) 2001-2007 Fernando Perez <[email protected]>
7 * Copyright (c) 2001, Janko Hauser <[email protected]>
8 * Copyright (c) 2001, Nathaniel Gray <[email protected]>
9 """
10 import os
11 import gc
12 import sys
13 import math
14 import time
15 import timeit
16 import builtins
17 import itertools
18
19 from xonsh.lazyasd import lazyobject, lazybool
20 from xonsh.events import events
21 from xonsh.platform import ON_WINDOWS
22
23
24 @lazybool
25 def _HAVE_RESOURCE():
26 try:
27 import resource as r
28 have = True
29 except ImportError:
30 # There is no distinction of user/system time under windows, so we
31 # just use time.clock() for everything...
32 have = False
33 return have
34
35
36 @lazyobject
37 def resource():
38 import resource as r
39 return r
40
41
42 @lazyobject
43 def clocku():
44 if _HAVE_RESOURCE:
45 def clocku():
46 """clocku() -> floating point number
47 Return the *USER* CPU time in seconds since the start of the process.
48 This is done via a call to resource.getrusage, so it avoids the
49 wraparound problems in time.clock()."""
50 return resource.getrusage(resource.RUSAGE_SELF)[0]
51 else:
52 clocku = time.clock
53 return clocku
54
55
56 @lazyobject
57 def clocks():
58 if _HAVE_RESOURCE:
59 def clocks():
60 """clocks() -> floating point number
61 Return the *SYSTEM* CPU time in seconds since the start of the process.
62 This is done via a call to resource.getrusage, so it avoids the
63 wraparound problems in time.clock()."""
64 return resource.getrusage(resource.RUSAGE_SELF)[1]
65 else:
66 clocks = time.clock
67 return clocks
68
69
70 @lazyobject
71 def clock():
72 if _HAVE_RESOURCE:
73 def clock():
74 """clock() -> floating point number
75 Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of
76 the process. This is done via a call to resource.getrusage, so it
77 avoids the wraparound problems in time.clock()."""
78 u, s = resource.getrusage(resource.RUSAGE_SELF)[:2]
79 return u + s
80 else:
81 clock = time.clock
82 return clock
83
84
85 @lazyobject
86 def clock2():
87 if _HAVE_RESOURCE:
88 def clock2():
89 """clock2() -> (t_user,t_system)
90 Similar to clock(), but return a tuple of user/system times."""
91 return resource.getrusage(resource.RUSAGE_SELF)[:2]
92 else:
93 def clock2():
94 """Under windows, system CPU time can't be measured.
95 This just returns clock() and zero."""
96 return time.clock(), 0.0
97 return clock2
98
99
100 def format_time(timespan, precision=3):
101 """Formats the timespan in a human readable form"""
102 if timespan >= 60.0:
103 # we have more than a minute, format that in a human readable form
104 parts = [("d", 60 * 60 * 24), ("h", 60 * 60), ("min", 60), ("s", 1)]
105 time = []
106 leftover = timespan
107 for suffix, length in parts:
108 value = int(leftover / length)
109 if value > 0:
110 leftover = leftover % length
111 time.append('{0}{1}'.format(str(value), suffix))
112 if leftover < 1:
113 break
114 return " ".join(time)
115 # Unfortunately the unicode 'micro' symbol can cause problems in
116 # certain terminals.
117 # See bug: https://bugs.launchpad.net/ipython/+bug/348466
118 # Try to prevent crashes by being more secure than it needs to
119 # E.g. eclipse is able to print a mu, but has no sys.stdout.encoding set.
120 units = ["s", "ms", 'us', "ns"] # the save value
121 if hasattr(sys.stdout, 'encoding') and sys.stdout.encoding:
122 try:
123 '\xb5'.encode(sys.stdout.encoding)
124 units = ["s", "ms", '\xb5s', "ns"]
125 except Exception:
126 pass
127 scaling = [1, 1e3, 1e6, 1e9]
128
129 if timespan > 0.0:
130 order = min(-int(math.floor(math.log10(timespan)) // 3), 3)
131 else:
132 order = 3
133 return "{1:.{0}g} {2}".format(precision, timespan * scaling[order],
134 units[order])
135
136
137 class Timer(timeit.Timer):
138 """Timer class that explicitly uses self.inner
139 which is an undocumented implementation detail of CPython,
140 not shared by PyPy.
141 """
142 # Timer.timeit copied from CPython 3.4.2
143 def timeit(self, number=timeit.default_number):
144 """Time 'number' executions of the main statement.
145 To be precise, this executes the setup statement once, and
146 then returns the time it takes to execute the main statement
147 a number of times, as a float measured in seconds. The
148 argument is the number of times through the loop, defaulting
149 to one million. The main statement, the setup statement and
150 the timer function to be used are passed to the constructor.
151 """
152 it = itertools.repeat(None, number)
153 gcold = gc.isenabled()
154 gc.disable()
155 try:
156 timing = self.inner(it, self.timer)
157 finally:
158 if gcold:
159 gc.enable()
160 return timing
161
162
163 INNER_TEMPLATE = """
164 def inner(_it, _timer):
165 #setup
166 _t0 = _timer()
167 for _i in _it:
168 {stmt}
169 _t1 = _timer()
170 return _t1 - _t0
171 """
172
173
174 def timeit_alias(args, stdin=None):
175 """Runs timing study on arguments."""
176 # some real args
177 number = 0
178 quiet = False
179 repeat = 3
180 precision = 3
181 # setup
182 ctx = builtins.__xonsh_ctx__
183 timer = Timer(timer=clock)
184 stmt = ' '.join(args)
185 innerstr = INNER_TEMPLATE.format(stmt=stmt)
186 # Track compilation time so it can be reported if too long
187 # Minimum time above which compilation time will be reported
188 tc_min = 0.1
189 t0 = clock()
190 innercode = builtins.compilex(innerstr, filename='<xonsh-timeit>',
191 mode='exec', glbs=ctx)
192 tc = clock() - t0
193 # get inner func
194 ns = {}
195 builtins.execx(innercode, glbs=ctx, locs=ns, mode='exec')
196 timer.inner = ns['inner']
197 # Check if there is a huge difference between the best and worst timings.
198 worst_tuning = 0
199 if number == 0:
200 # determine number so that 0.2 <= total time < 2.0
201 number = 1
202 for _ in range(1, 10):
203 time_number = timer.timeit(number)
204 worst_tuning = max(worst_tuning, time_number / number)
205 if time_number >= 0.2:
206 break
207 number *= 10
208 all_runs = timer.repeat(repeat, number)
209 best = min(all_runs) / number
210 # print some debug info
211 if not quiet:
212 worst = max(all_runs) / number
213 if worst_tuning:
214 worst = max(worst, worst_tuning)
215 # Check best timing is greater than zero to avoid a
216 # ZeroDivisionError.
217 # In cases where the slowest timing is lesser than 10 micoseconds
218 # we assume that it does not really matter if the fastest
219 # timing is 4 times faster than the slowest timing or not.
220 if worst > 4 * best and best > 0 and worst > 1e-5:
221 print(('The slowest run took {0:0.2f} times longer than the '
222 'fastest. This could mean that an intermediate result '
223 'is being cached.').format(worst / best))
224 print("{0} loops, best of {1}: {2} per loop"
225 .format(number, repeat, format_time(best, precision)))
226 if tc > tc_min:
227 print("Compiler time: {0:.2f} s".format(tc))
228 return
229
230
231 _timings = {'start': clock() if ON_WINDOWS else 0.0}
232
233
234 def setup_timings():
235 global _timings
236 if '--timings' in sys.argv:
237 events.doc('on_timingprobe', """
238 on_timingprobe(name: str) -> None
239
240 Fired to insert some timings into the startuptime list
241 """)
242
243 @events.on_timingprobe
244 def timing_on_timingprobe(name, **kw):
245 global _timings
246 _timings[name] = clock()
247
248 @events.on_post_cmdloop
249 def timing_on_post_cmdloop(**kw):
250 global _timings
251 _timings['on_post_cmdloop'] = clock()
252
253 @events.on_post_init
254 def timing_on_post_init(**kw):
255 global _timings
256 _timings['on_post_init'] = clock()
257
258 @events.on_post_rc
259 def timing_on_post_rc(**kw):
260 global _timings
261 _timings['on_post_rc'] = clock()
262
263 @events.on_postcommand
264 def timing_on_postcommand(**kw):
265 global _timings
266 _timings['on_postcommand'] = clock()
267
268 @events.on_pre_cmdloop
269 def timing_on_pre_cmdloop(**kw):
270 global _timings
271 _timings['on_pre_cmdloop'] = clock()
272
273 @events.on_pre_rc
274 def timing_on_pre_rc(**kw):
275 global _timings
276 _timings['on_pre_rc'] = clock()
277
278 @events.on_precommand
279 def timing_on_precommand(**kw):
280 global _timings
281 _timings['on_precommand'] = clock()
282
283 @events.on_ptk_create
284 def timing_on_ptk_create(**kw):
285 global _timings
286 _timings['on_ptk_create'] = clock()
287
288 @events.on_chdir
289 def timing_on_chdir(**kw):
290 global _timings
291 _timings['on_chdir'] = clock()
292
293 @events.on_post_prompt
294 def timing_on_post_prompt(**kw):
295 global _timings
296 _timings = {'on_post_prompt': clock()}
297
298 @events.on_pre_prompt
299 def timing_on_pre_prompt(**kw):
300 global _timings
301 _timings['on_pre_prompt'] = clock()
302 times = list(_timings.items())
303 times = sorted(times, key=lambda x: x[1])
304 width = max(len(s) for s, _ in times) + 2
305 header_format = '|{{:<{}}}|{{:^11}}|{{:^11}}|'.format(width)
306 entry_format = '|{{:<{}}}|{{:^11.3f}}|{{:^11.3f}}|'.format(width)
307 sepline = '|{}|{}|{}|'.format('-'*width, '-'*11, '-'*11)
308 # Print result table
309 print(' Debug level: {}'.format(os.getenv('XONSH_DEBUG', 'Off')))
310 print(sepline)
311 print(header_format.format('Event name', 'Time (s)', 'Delta (s)'))
312 print(sepline)
313 prevtime = tstart = times[0][1]
314 for name, ts in times:
315 print(entry_format.format(name, ts - tstart, ts - prevtime))
316 prevtime = ts
317 print(sepline)
318
[end of xonsh/timings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/xonsh/timings.py b/xonsh/timings.py
--- a/xonsh/timings.py
+++ b/xonsh/timings.py
@@ -28,7 +28,7 @@
have = True
except ImportError:
# There is no distinction of user/system time under windows, so we
- # just use time.clock() for everything...
+ # just use time.perf_counter() for everything...
have = False
return have
@@ -44,12 +44,11 @@
if _HAVE_RESOURCE:
def clocku():
"""clocku() -> floating point number
- Return the *USER* CPU time in seconds since the start of the process.
- This is done via a call to resource.getrusage, so it avoids the
- wraparound problems in time.clock()."""
+ Return the *USER* CPU time in seconds since the start of the
+ process."""
return resource.getrusage(resource.RUSAGE_SELF)[0]
else:
- clocku = time.clock
+ clocku = time.perf_counter
return clocku
@@ -58,12 +57,11 @@
if _HAVE_RESOURCE:
def clocks():
"""clocks() -> floating point number
- Return the *SYSTEM* CPU time in seconds since the start of the process.
- This is done via a call to resource.getrusage, so it avoids the
- wraparound problems in time.clock()."""
+ Return the *SYSTEM* CPU time in seconds since the start of the
+ process."""
return resource.getrusage(resource.RUSAGE_SELF)[1]
else:
- clocks = time.clock
+ clocks = time.perf_counter
return clocks
@@ -72,13 +70,12 @@
if _HAVE_RESOURCE:
def clock():
"""clock() -> floating point number
- Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of
- the process. This is done via a call to resource.getrusage, so it
- avoids the wraparound problems in time.clock()."""
+ Return the *TOTAL USER+SYSTEM* CPU time in seconds since the
+ start of the process."""
u, s = resource.getrusage(resource.RUSAGE_SELF)[:2]
return u + s
else:
- clock = time.clock
+ clock = time.perf_counter
return clock
@@ -92,8 +89,8 @@
else:
def clock2():
"""Under windows, system CPU time can't be measured.
- This just returns clock() and zero."""
- return time.clock(), 0.0
+ This just returns perf_counter() and zero."""
+ return time.perf_counter(), 0.0
return clock2
@@ -228,7 +225,7 @@
return
-_timings = {'start': clock() if ON_WINDOWS else 0.0}
+_timings = {'start': clock()}
def setup_timings():
|
{"golden_diff": "diff --git a/xonsh/timings.py b/xonsh/timings.py\n--- a/xonsh/timings.py\n+++ b/xonsh/timings.py\n@@ -28,7 +28,7 @@\n have = True\n except ImportError:\n # There is no distinction of user/system time under windows, so we\n- # just use time.clock() for everything...\n+ # just use time.perf_counter() for everything...\n have = False\n return have\n \n@@ -44,12 +44,11 @@\n if _HAVE_RESOURCE:\n def clocku():\n \"\"\"clocku() -> floating point number\n- Return the *USER* CPU time in seconds since the start of the process.\n- This is done via a call to resource.getrusage, so it avoids the\n- wraparound problems in time.clock().\"\"\"\n+ Return the *USER* CPU time in seconds since the start of the\n+ process.\"\"\"\n return resource.getrusage(resource.RUSAGE_SELF)[0]\n else:\n- clocku = time.clock\n+ clocku = time.perf_counter\n return clocku\n \n \n@@ -58,12 +57,11 @@\n if _HAVE_RESOURCE:\n def clocks():\n \"\"\"clocks() -> floating point number\n- Return the *SYSTEM* CPU time in seconds since the start of the process.\n- This is done via a call to resource.getrusage, so it avoids the\n- wraparound problems in time.clock().\"\"\"\n+ Return the *SYSTEM* CPU time in seconds since the start of the\n+ process.\"\"\"\n return resource.getrusage(resource.RUSAGE_SELF)[1]\n else:\n- clocks = time.clock\n+ clocks = time.perf_counter\n return clocks\n \n \n@@ -72,13 +70,12 @@\n if _HAVE_RESOURCE:\n def clock():\n \"\"\"clock() -> floating point number\n- Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of\n- the process. This is done via a call to resource.getrusage, so it\n- avoids the wraparound problems in time.clock().\"\"\"\n+ Return the *TOTAL USER+SYSTEM* CPU time in seconds since the\n+ start of the process.\"\"\"\n u, s = resource.getrusage(resource.RUSAGE_SELF)[:2]\n return u + s\n else:\n- clock = time.clock\n+ clock = time.perf_counter\n return clock\n \n \n@@ -92,8 +89,8 @@\n else:\n def clock2():\n \"\"\"Under windows, system CPU time can't be measured.\n- This just returns clock() and zero.\"\"\"\n- return time.clock(), 0.0\n+ This just returns perf_counter() and zero.\"\"\"\n+ return time.perf_counter(), 0.0\n return clock2\n \n \n@@ -228,7 +225,7 @@\n return\n \n \n-_timings = {'start': clock() if ON_WINDOWS else 0.0}\n+_timings = {'start': clock()}\n \n \n def setup_timings():\n", "issue": "time.clock() is deprecated\nShould remove and replace in timings module.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Timing related functionality for the xonsh shell.\n\nThe following time_it alias and Timer was forked from the IPython project:\n* Copyright (c) 2008-2014, IPython Development Team\n* Copyright (C) 2001-2007 Fernando Perez <[email protected]>\n* Copyright (c) 2001, Janko Hauser <[email protected]>\n* Copyright (c) 2001, Nathaniel Gray <[email protected]>\n\"\"\"\nimport os\nimport gc\nimport sys\nimport math\nimport time\nimport timeit\nimport builtins\nimport itertools\n\nfrom xonsh.lazyasd import lazyobject, lazybool\nfrom xonsh.events import events\nfrom xonsh.platform import ON_WINDOWS\n\n\n@lazybool\ndef _HAVE_RESOURCE():\n try:\n import resource as r\n have = True\n except ImportError:\n # There is no distinction of user/system time under windows, so we\n # just use time.clock() for everything...\n have = False\n return have\n\n\n@lazyobject\ndef resource():\n import resource as r\n return r\n\n\n@lazyobject\ndef clocku():\n if _HAVE_RESOURCE:\n def clocku():\n \"\"\"clocku() -> floating point number\n Return the *USER* CPU 
time in seconds since the start of the process.\n This is done via a call to resource.getrusage, so it avoids the\n wraparound problems in time.clock().\"\"\"\n return resource.getrusage(resource.RUSAGE_SELF)[0]\n else:\n clocku = time.clock\n return clocku\n\n\n@lazyobject\ndef clocks():\n if _HAVE_RESOURCE:\n def clocks():\n \"\"\"clocks() -> floating point number\n Return the *SYSTEM* CPU time in seconds since the start of the process.\n This is done via a call to resource.getrusage, so it avoids the\n wraparound problems in time.clock().\"\"\"\n return resource.getrusage(resource.RUSAGE_SELF)[1]\n else:\n clocks = time.clock\n return clocks\n\n\n@lazyobject\ndef clock():\n if _HAVE_RESOURCE:\n def clock():\n \"\"\"clock() -> floating point number\n Return the *TOTAL USER+SYSTEM* CPU time in seconds since the start of\n the process. This is done via a call to resource.getrusage, so it\n avoids the wraparound problems in time.clock().\"\"\"\n u, s = resource.getrusage(resource.RUSAGE_SELF)[:2]\n return u + s\n else:\n clock = time.clock\n return clock\n\n\n@lazyobject\ndef clock2():\n if _HAVE_RESOURCE:\n def clock2():\n \"\"\"clock2() -> (t_user,t_system)\n Similar to clock(), but return a tuple of user/system times.\"\"\"\n return resource.getrusage(resource.RUSAGE_SELF)[:2]\n else:\n def clock2():\n \"\"\"Under windows, system CPU time can't be measured.\n This just returns clock() and zero.\"\"\"\n return time.clock(), 0.0\n return clock2\n\n\ndef format_time(timespan, precision=3):\n \"\"\"Formats the timespan in a human readable form\"\"\"\n if timespan >= 60.0:\n # we have more than a minute, format that in a human readable form\n parts = [(\"d\", 60 * 60 * 24), (\"h\", 60 * 60), (\"min\", 60), (\"s\", 1)]\n time = []\n leftover = timespan\n for suffix, length in parts:\n value = int(leftover / length)\n if value > 0:\n leftover = leftover % length\n time.append('{0}{1}'.format(str(value), suffix))\n if leftover < 1:\n break\n return \" \".join(time)\n # Unfortunately the unicode 'micro' symbol can cause problems in\n # certain terminals.\n # See bug: https://bugs.launchpad.net/ipython/+bug/348466\n # Try to prevent crashes by being more secure than it needs to\n # E.g. eclipse is able to print a mu, but has no sys.stdout.encoding set.\n units = [\"s\", \"ms\", 'us', \"ns\"] # the save value\n if hasattr(sys.stdout, 'encoding') and sys.stdout.encoding:\n try:\n '\\xb5'.encode(sys.stdout.encoding)\n units = [\"s\", \"ms\", '\\xb5s', \"ns\"]\n except Exception:\n pass\n scaling = [1, 1e3, 1e6, 1e9]\n\n if timespan > 0.0:\n order = min(-int(math.floor(math.log10(timespan)) // 3), 3)\n else:\n order = 3\n return \"{1:.{0}g} {2}\".format(precision, timespan * scaling[order],\n units[order])\n\n\nclass Timer(timeit.Timer):\n \"\"\"Timer class that explicitly uses self.inner\n which is an undocumented implementation detail of CPython,\n not shared by PyPy.\n \"\"\"\n # Timer.timeit copied from CPython 3.4.2\n def timeit(self, number=timeit.default_number):\n \"\"\"Time 'number' executions of the main statement.\n To be precise, this executes the setup statement once, and\n then returns the time it takes to execute the main statement\n a number of times, as a float measured in seconds. The\n argument is the number of times through the loop, defaulting\n to one million. 
The main statement, the setup statement and\n the timer function to be used are passed to the constructor.\n \"\"\"\n it = itertools.repeat(None, number)\n gcold = gc.isenabled()\n gc.disable()\n try:\n timing = self.inner(it, self.timer)\n finally:\n if gcold:\n gc.enable()\n return timing\n\n\nINNER_TEMPLATE = \"\"\"\ndef inner(_it, _timer):\n #setup\n _t0 = _timer()\n for _i in _it:\n {stmt}\n _t1 = _timer()\n return _t1 - _t0\n\"\"\"\n\n\ndef timeit_alias(args, stdin=None):\n \"\"\"Runs timing study on arguments.\"\"\"\n # some real args\n number = 0\n quiet = False\n repeat = 3\n precision = 3\n # setup\n ctx = builtins.__xonsh_ctx__\n timer = Timer(timer=clock)\n stmt = ' '.join(args)\n innerstr = INNER_TEMPLATE.format(stmt=stmt)\n # Track compilation time so it can be reported if too long\n # Minimum time above which compilation time will be reported\n tc_min = 0.1\n t0 = clock()\n innercode = builtins.compilex(innerstr, filename='<xonsh-timeit>',\n mode='exec', glbs=ctx)\n tc = clock() - t0\n # get inner func\n ns = {}\n builtins.execx(innercode, glbs=ctx, locs=ns, mode='exec')\n timer.inner = ns['inner']\n # Check if there is a huge difference between the best and worst timings.\n worst_tuning = 0\n if number == 0:\n # determine number so that 0.2 <= total time < 2.0\n number = 1\n for _ in range(1, 10):\n time_number = timer.timeit(number)\n worst_tuning = max(worst_tuning, time_number / number)\n if time_number >= 0.2:\n break\n number *= 10\n all_runs = timer.repeat(repeat, number)\n best = min(all_runs) / number\n # print some debug info\n if not quiet:\n worst = max(all_runs) / number\n if worst_tuning:\n worst = max(worst, worst_tuning)\n # Check best timing is greater than zero to avoid a\n # ZeroDivisionError.\n # In cases where the slowest timing is lesser than 10 micoseconds\n # we assume that it does not really matter if the fastest\n # timing is 4 times faster than the slowest timing or not.\n if worst > 4 * best and best > 0 and worst > 1e-5:\n print(('The slowest run took {0:0.2f} times longer than the '\n 'fastest. 
This could mean that an intermediate result '\n 'is being cached.').format(worst / best))\n print(\"{0} loops, best of {1}: {2} per loop\"\n .format(number, repeat, format_time(best, precision)))\n if tc > tc_min:\n print(\"Compiler time: {0:.2f} s\".format(tc))\n return\n\n\n_timings = {'start': clock() if ON_WINDOWS else 0.0}\n\n\ndef setup_timings():\n global _timings\n if '--timings' in sys.argv:\n events.doc('on_timingprobe', \"\"\"\n on_timingprobe(name: str) -> None\n\n Fired to insert some timings into the startuptime list\n \"\"\")\n\n @events.on_timingprobe\n def timing_on_timingprobe(name, **kw):\n global _timings\n _timings[name] = clock()\n\n @events.on_post_cmdloop\n def timing_on_post_cmdloop(**kw):\n global _timings\n _timings['on_post_cmdloop'] = clock()\n\n @events.on_post_init\n def timing_on_post_init(**kw):\n global _timings\n _timings['on_post_init'] = clock()\n\n @events.on_post_rc\n def timing_on_post_rc(**kw):\n global _timings\n _timings['on_post_rc'] = clock()\n\n @events.on_postcommand\n def timing_on_postcommand(**kw):\n global _timings\n _timings['on_postcommand'] = clock()\n\n @events.on_pre_cmdloop\n def timing_on_pre_cmdloop(**kw):\n global _timings\n _timings['on_pre_cmdloop'] = clock()\n\n @events.on_pre_rc\n def timing_on_pre_rc(**kw):\n global _timings\n _timings['on_pre_rc'] = clock()\n\n @events.on_precommand\n def timing_on_precommand(**kw):\n global _timings\n _timings['on_precommand'] = clock()\n\n @events.on_ptk_create\n def timing_on_ptk_create(**kw):\n global _timings\n _timings['on_ptk_create'] = clock()\n\n @events.on_chdir\n def timing_on_chdir(**kw):\n global _timings\n _timings['on_chdir'] = clock()\n\n @events.on_post_prompt\n def timing_on_post_prompt(**kw):\n global _timings\n _timings = {'on_post_prompt': clock()}\n\n @events.on_pre_prompt\n def timing_on_pre_prompt(**kw):\n global _timings\n _timings['on_pre_prompt'] = clock()\n times = list(_timings.items())\n times = sorted(times, key=lambda x: x[1])\n width = max(len(s) for s, _ in times) + 2\n header_format = '|{{:<{}}}|{{:^11}}|{{:^11}}|'.format(width)\n entry_format = '|{{:<{}}}|{{:^11.3f}}|{{:^11.3f}}|'.format(width)\n sepline = '|{}|{}|{}|'.format('-'*width, '-'*11, '-'*11)\n # Print result table\n print(' Debug level: {}'.format(os.getenv('XONSH_DEBUG', 'Off')))\n print(sepline)\n print(header_format.format('Event name', 'Time (s)', 'Delta (s)'))\n print(sepline)\n prevtime = tstart = times[0][1]\n for name, ts in times:\n print(entry_format.format(name, ts - tstart, ts - prevtime))\n prevtime = ts\n print(sepline)\n", "path": "xonsh/timings.py"}]}
| 4,081 | 682 |
gh_patches_debug_38655
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-1492
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Save joined channels in the configuration
Right now when you add a channel with !join it does not save the channel to the config and the channel is not rejoined after restarting the bot.
Personally I would like to see this added, ideally also with an option to delay joining channels so that bots that join a lot of channels don't flood the server.
Chances are that I'll write a module to do this, as it's something I need personally.
However, in my case it would not make sense to write this into the default.cfg, as more than 10 channels would already be too difficult to maintain in the config file.
</issue>
<code>
[start of sopel/modules/admin.py]
1 # coding=utf-8
2 """
3 admin.py - Sopel Admin Module
4 Copyright 2010-2011, Sean B. Palmer (inamidst.com) and Michael Yanovich
5 (yanovich.net)
6 Copyright Β© 2012, Elad Alfassa, <[email protected]>
7 Copyright 2013, Ari Koivula <[email protected]>
8
9 Licensed under the Eiffel Forum License 2.
10
11 https://sopel.chat
12 """
13 from __future__ import unicode_literals, absolute_import, print_function, division
14
15 from sopel.config.types import (
16 StaticSection, ValidatedAttribute, FilenameAttribute
17 )
18 import sopel.module
19
20
21 class AdminSection(StaticSection):
22 hold_ground = ValidatedAttribute('hold_ground', bool, default=False)
23 """Auto re-join on kick"""
24 auto_accept_invite = ValidatedAttribute('auto_accept_invite', bool,
25 default=True)
26 """Auto-join channels when invited"""
27
28
29 def configure(config):
30 """
31 | name | example | purpose |
32 | ---- | ------- | ------- |
33 | hold\\_ground | False | Auto-rejoin the channel after being kicked. |
34 | auto\\_accept\\_invite | True | Auto-join channels when invited. |
35 """
36 config.define_section('admin', AdminSection)
37 config.admin.configure_setting('hold_ground',
38 "Automatically re-join after being kicked?")
39 config.admin.configure_setting('auto_accept_invite',
40 'Automatically join channels when invited?')
41
42
43 def setup(bot):
44 bot.config.define_section('admin', AdminSection)
45
46
47 @sopel.module.require_privmsg
48 @sopel.module.require_admin
49 @sopel.module.commands('join')
50 @sopel.module.priority('low')
51 @sopel.module.example('.join #example or .join #example key')
52 def join(bot, trigger):
53 """Join the specified channel. This is an admin-only command."""
54 channel, key = trigger.group(3), trigger.group(4)
55 if not channel:
56 return
57 elif not key:
58 bot.join(channel)
59 else:
60 bot.join(channel, key)
61
62
63 @sopel.module.require_privmsg
64 @sopel.module.require_admin
65 @sopel.module.commands('part')
66 @sopel.module.priority('low')
67 @sopel.module.example('.part #example')
68 def part(bot, trigger):
69 """Part the specified channel. This is an admin-only command."""
70 channel, _sep, part_msg = trigger.group(2).partition(' ')
71 if part_msg:
72 bot.part(channel, part_msg)
73 else:
74 bot.part(channel)
75
76
77 @sopel.module.require_privmsg
78 @sopel.module.require_owner
79 @sopel.module.commands('restart')
80 @sopel.module.priority('low')
81 def restart(bot, trigger):
82 """Restart the bot. This is an owner-only command."""
83 quit_message = trigger.group(2)
84 if not quit_message:
85 quit_message = 'Restart on command from %s' % trigger.nick
86
87 bot.restart(quit_message)
88
89
90 @sopel.module.require_privmsg
91 @sopel.module.require_owner
92 @sopel.module.commands('quit')
93 @sopel.module.priority('low')
94 def quit(bot, trigger):
95 """Quit from the server. This is an owner-only command."""
96 quit_message = trigger.group(2)
97 if not quit_message:
98 quit_message = 'Quitting on command from %s' % trigger.nick
99
100 bot.quit(quit_message)
101
102
103 @sopel.module.require_privmsg
104 @sopel.module.require_admin
105 @sopel.module.commands('msg')
106 @sopel.module.priority('low')
107 @sopel.module.example('.msg #YourPants Does anyone else smell neurotoxin?')
108 def msg(bot, trigger):
109 """
110 Send a message to a given channel or nick. Can only be done in privmsg by
111 an admin.
112 """
113 if trigger.group(2) is None:
114 return
115
116 channel, _sep, message = trigger.group(2).partition(' ')
117 message = message.strip()
118 if not channel or not message:
119 return
120
121 bot.msg(channel, message)
122
123
124 @sopel.module.require_privmsg
125 @sopel.module.require_admin
126 @sopel.module.commands('me')
127 @sopel.module.priority('low')
128 def me(bot, trigger):
129 """
130 Send an ACTION (/me) to a given channel or nick. Can only be done in
131 privmsg by an admin.
132 """
133 if trigger.group(2) is None:
134 return
135
136 channel, _sep, action = trigger.group(2).partition(' ')
137 action = action.strip()
138 if not channel or not action:
139 return
140
141 msg = '\x01ACTION %s\x01' % action
142 bot.msg(channel, msg)
143
144
145 @sopel.module.event('INVITE')
146 @sopel.module.rule('.*')
147 @sopel.module.priority('low')
148 def invite_join(bot, trigger):
149 """
150 Join a channel Sopel is invited to, if the inviter is an admin.
151 """
152 if trigger.admin or bot.config.admin.auto_accept_invite:
153 bot.join(trigger.args[1])
154 return
155
156
157 @sopel.module.event('KICK')
158 @sopel.module.rule(r'.*')
159 @sopel.module.priority('low')
160 def hold_ground(bot, trigger):
161 """
162 This function monitors all kicks across all channels Sopel is in. If it
163 detects that it is the one kicked it'll automatically join that channel.
164
165 WARNING: This may not be needed and could cause problems if Sopel becomes
166 annoying. Please use this with caution.
167 """
168 if bot.config.admin.hold_ground:
169 channel = trigger.sender
170 if trigger.args[1] == bot.nick:
171 bot.join(channel)
172
173
174 @sopel.module.require_privmsg
175 @sopel.module.require_admin
176 @sopel.module.commands('mode')
177 @sopel.module.priority('low')
178 def mode(bot, trigger):
179 """Set a user mode on Sopel. Can only be done in privmsg by an admin."""
180 mode = trigger.group(3)
181 bot.write(('MODE ', bot.nick + ' ' + mode))
182
183
184 @sopel.module.require_privmsg("This command only works as a private message.")
185 @sopel.module.require_admin("This command requires admin privileges.")
186 @sopel.module.commands('set')
187 @sopel.module.example('.set core.owner Me')
188 def set_config(bot, trigger):
189 """See and modify values of Sopel's config object.
190
191 Trigger args:
192 arg1 - section and option, in the form "section.option"
193 arg2 - value
194
195 If there is no section, section will default to "core".
196 If value is None, the option will be deleted.
197 """
198 # Get section and option from first argument.
199 arg1 = trigger.group(3).split('.')
200 if len(arg1) == 1:
201 section_name, option = "core", arg1[0]
202 elif len(arg1) == 2:
203 section_name, option = arg1
204 else:
205 bot.reply("Usage: .set section.option value")
206 return
207 section = getattr(bot.config, section_name)
208 static_sec = isinstance(section, StaticSection)
209
210 if static_sec and not hasattr(section, option):
211 bot.say('[{}] section has no option {}.'.format(section_name, option))
212 return
213
214 delim = trigger.group(2).find(' ')
215 # Skip preceding whitespaces, if any.
216 while delim > 0 and delim < len(trigger.group(2)) and trigger.group(2)[delim] == ' ':
217 delim = delim + 1
218
219 # Display current value if no value is given.
220 if delim == -1 or delim == len(trigger.group(2)):
221 if not static_sec and bot.config.parser.has_option(section, option):
222 bot.reply("Option %s.%s does not exist." % (section_name, option))
223 return
224 # Except if the option looks like a password. Censor those to stop them
225 # from being put on log files.
226 if option.endswith("password") or option.endswith("pass"):
227 value = "(password censored)"
228 else:
229 value = getattr(section, option)
230 bot.reply("%s.%s = %s" % (section_name, option, value))
231 return
232
233 # Otherwise, set the value to one given as argument 2.
234 value = trigger.group(2)[delim:]
235 if static_sec:
236 descriptor = getattr(section.__class__, option)
237 try:
238 if isinstance(descriptor, FilenameAttribute):
239 value = descriptor.parse(bot.config, descriptor, value)
240 else:
241 value = descriptor.parse(value)
242 except ValueError as exc:
243 bot.say("Can't set attribute: " + str(exc))
244 return
245 setattr(section, option, value)
246
247
248 @sopel.module.require_privmsg
249 @sopel.module.require_admin
250 @sopel.module.commands('save')
251 @sopel.module.example('.save')
252 def save_config(bot, trigger):
253 """Save state of Sopel's config object to the configuration file."""
254 bot.config.save()
255
[end of sopel/modules/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sopel/modules/admin.py b/sopel/modules/admin.py
--- a/sopel/modules/admin.py
+++ b/sopel/modules/admin.py
@@ -44,6 +44,50 @@
bot.config.define_section('admin', AdminSection)
+def _get_config_channels(channels):
+ """List"""
+ for channel_info in channels:
+ if ' ' in channel_info:
+ yield channel_info.split(' ', 1)
+ else:
+ yield (channel_info, None)
+
+
+def _set_config_channels(bot, channels):
+ bot.config.core.channels = [
+ ' '.join([part for part in items if part])
+ for items in channels.items()
+ ]
+ bot.config.save()
+
+
+def _join(bot, channel, key=None, save=True):
+ if not channel:
+ return
+
+ if not key:
+ bot.join(channel)
+ else:
+ bot.join(channel, key)
+
+ if save:
+ channels = dict(_get_config_channels(bot.config.core.channels))
+ # save only if channel is new or key has been changed
+ if channel not in channels or channels[channel] != key:
+ channels[channel] = key
+ _set_config_channels(bot, channels)
+
+
+def _part(bot, channel, msg=None, save=True):
+ bot.part(channel, msg or None)
+
+ if save:
+ channels = dict(_get_config_channels(bot.config.core.channels))
+ if channel in channels:
+ del channels[channel]
+ _set_config_channels(bot, channels)
+
+
@sopel.module.require_privmsg
@sopel.module.require_admin
@sopel.module.commands('join')
@@ -52,12 +96,22 @@
def join(bot, trigger):
"""Join the specified channel. This is an admin-only command."""
channel, key = trigger.group(3), trigger.group(4)
- if not channel:
- return
- elif not key:
- bot.join(channel)
- else:
- bot.join(channel, key)
+ _join(bot, channel, key)
+
+
[email protected]_privmsg
[email protected]_admin
[email protected]('tmpjoin')
[email protected]('low')
[email protected]('.tmpjoin #example or .tmpjoin #example key')
+def temporary_join(bot, trigger):
+ """Like ``join``, without saving. This is an admin-only command.
+
+ Unlike the ``join`` command, ``tmpjoin`` won't remember the channel upon
+ restarting the bot.
+ """
+ channel, key = trigger.group(3), trigger.group(4)
+ _join(bot, channel, key, save=False)
@sopel.module.require_privmsg
@@ -68,10 +122,22 @@
def part(bot, trigger):
"""Part the specified channel. This is an admin-only command."""
channel, _sep, part_msg = trigger.group(2).partition(' ')
- if part_msg:
- bot.part(channel, part_msg)
- else:
- bot.part(channel)
+ _part(bot, channel, part_msg)
+
+
[email protected]_privmsg
[email protected]_admin
[email protected]('tmppart')
[email protected]('low')
[email protected]('.tmppart #example')
+def temporary_part(bot, trigger):
+ """Like ``part``, without saving. This is an admin-only command.
+
+ Unlike the ``part`` command, ``tmppart`` will rejoin the channel upon
+ restarting the bot.
+ """
+ channel, _sep, part_msg = trigger.group(2).partition(' ')
+ _part(bot, channel, part_msg, save=False)
@sopel.module.require_privmsg
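As a rough standalone sketch (plain Python, not Sopel internals) of how the helpers introduced in this patch round-trip channels and keys through the `core.channels` list; entries with a key are stored as `"#channel key"` strings:

```python
# Standalone sketch of the channel <-> config round-trip used by the patch above.
def parse_channels(entries):
    # "#chan key" -> ("#chan", "key"); "#chan" -> ("#chan", None)
    for entry in entries:
        if " " in entry:
            yield tuple(entry.split(" ", 1))
        else:
            yield (entry, None)

def serialize_channels(channels):
    # ("#chan", "key") -> "#chan key"; ("#chan", None) -> "#chan"
    return [" ".join(part for part in items if part) for items in channels.items()]

channels = dict(parse_channels(["#sopel", "#secret hunter2"]))
channels["#new"] = None               # effect of ".join #new"
channels.pop("#sopel", None)          # effect of ".part #sopel"
print(serialize_channels(channels))   # ['#secret hunter2', '#new']
```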
|
{"golden_diff": "diff --git a/sopel/modules/admin.py b/sopel/modules/admin.py\n--- a/sopel/modules/admin.py\n+++ b/sopel/modules/admin.py\n@@ -44,6 +44,50 @@\n bot.config.define_section('admin', AdminSection)\n \n \n+def _get_config_channels(channels):\n+ \"\"\"List\"\"\"\n+ for channel_info in channels:\n+ if ' ' in channel_info:\n+ yield channel_info.split(' ', 1)\n+ else:\n+ yield (channel_info, None)\n+\n+\n+def _set_config_channels(bot, channels):\n+ bot.config.core.channels = [\n+ ' '.join([part for part in items if part])\n+ for items in channels.items()\n+ ]\n+ bot.config.save()\n+\n+\n+def _join(bot, channel, key=None, save=True):\n+ if not channel:\n+ return\n+\n+ if not key:\n+ bot.join(channel)\n+ else:\n+ bot.join(channel, key)\n+\n+ if save:\n+ channels = dict(_get_config_channels(bot.config.core.channels))\n+ # save only if channel is new or key has been changed\n+ if channel not in channels or channels[channel] != key:\n+ channels[channel] = key\n+ _set_config_channels(bot, channels)\n+\n+\n+def _part(bot, channel, msg=None, save=True):\n+ bot.part(channel, msg or None)\n+\n+ if save:\n+ channels = dict(_get_config_channels(bot.config.core.channels))\n+ if channel in channels:\n+ del channels[channel]\n+ _set_config_channels(bot, channels)\n+\n+\n @sopel.module.require_privmsg\n @sopel.module.require_admin\n @sopel.module.commands('join')\n@@ -52,12 +96,22 @@\n def join(bot, trigger):\n \"\"\"Join the specified channel. This is an admin-only command.\"\"\"\n channel, key = trigger.group(3), trigger.group(4)\n- if not channel:\n- return\n- elif not key:\n- bot.join(channel)\n- else:\n- bot.join(channel, key)\n+ _join(bot, channel, key)\n+\n+\[email protected]_privmsg\[email protected]_admin\[email protected]('tmpjoin')\[email protected]('low')\[email protected]('.tmpjoin #example or .tmpjoin #example key')\n+def temporary_join(bot, trigger):\n+ \"\"\"Like ``join``, without saving. This is an admin-only command.\n+\n+ Unlike the ``join`` command, ``tmpjoin`` won't remember the channel upon\n+ restarting the bot.\n+ \"\"\"\n+ channel, key = trigger.group(3), trigger.group(4)\n+ _join(bot, channel, key, save=False)\n \n \n @sopel.module.require_privmsg\n@@ -68,10 +122,22 @@\n def part(bot, trigger):\n \"\"\"Part the specified channel. This is an admin-only command.\"\"\"\n channel, _sep, part_msg = trigger.group(2).partition(' ')\n- if part_msg:\n- bot.part(channel, part_msg)\n- else:\n- bot.part(channel)\n+ _part(bot, channel, part_msg)\n+\n+\[email protected]_privmsg\[email protected]_admin\[email protected]('tmppart')\[email protected]('low')\[email protected]('.tmppart #example')\n+def temporary_part(bot, trigger):\n+ \"\"\"Like ``part``, without saving. 
This is an admin-only command.\n+\n+ Unlike the ``part`` command, ``tmppart`` will rejoin the channel upon\n+ restarting the bot.\n+ \"\"\"\n+ channel, _sep, part_msg = trigger.group(2).partition(' ')\n+ _part(bot, channel, part_msg, save=False)\n \n \n @sopel.module.require_privmsg\n", "issue": "Save joined channels in the configuration\nRight now when you add a channel with !join it does not save the channel to the config and the channel is not rejoined after restarting the bot.\n\nPersonally I would like to see this added, however also with the option for a delay joining channels so that bots who join a lot of channels don't flood the server.\n\nChances are, that I'll write a module to do this as it's something I need personally.\nHowever in my case it would not make sense to write this into the default.cfg, as more than 10 channels would already be too difficult to maintain in the config file.\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nadmin.py - Sopel Admin Module\nCopyright 2010-2011, Sean B. Palmer (inamidst.com) and Michael Yanovich\n(yanovich.net)\nCopyright \u00a9 2012, Elad Alfassa, <[email protected]>\nCopyright 2013, Ari Koivula <[email protected]>\n\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nfrom sopel.config.types import (\n StaticSection, ValidatedAttribute, FilenameAttribute\n)\nimport sopel.module\n\n\nclass AdminSection(StaticSection):\n hold_ground = ValidatedAttribute('hold_ground', bool, default=False)\n \"\"\"Auto re-join on kick\"\"\"\n auto_accept_invite = ValidatedAttribute('auto_accept_invite', bool,\n default=True)\n \"\"\"Auto-join channels when invited\"\"\"\n\n\ndef configure(config):\n \"\"\"\n | name | example | purpose |\n | ---- | ------- | ------- |\n | hold\\\\_ground | False | Auto-rejoin the channel after being kicked. |\n | auto\\\\_accept\\\\_invite | True | Auto-join channels when invited. |\n \"\"\"\n config.define_section('admin', AdminSection)\n config.admin.configure_setting('hold_ground',\n \"Automatically re-join after being kicked?\")\n config.admin.configure_setting('auto_accept_invite',\n 'Automatically join channels when invited?')\n\n\ndef setup(bot):\n bot.config.define_section('admin', AdminSection)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('join')\[email protected]('low')\[email protected]('.join #example or .join #example key')\ndef join(bot, trigger):\n \"\"\"Join the specified channel. This is an admin-only command.\"\"\"\n channel, key = trigger.group(3), trigger.group(4)\n if not channel:\n return\n elif not key:\n bot.join(channel)\n else:\n bot.join(channel, key)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('part')\[email protected]('low')\[email protected]('.part #example')\ndef part(bot, trigger):\n \"\"\"Part the specified channel. This is an admin-only command.\"\"\"\n channel, _sep, part_msg = trigger.group(2).partition(' ')\n if part_msg:\n bot.part(channel, part_msg)\n else:\n bot.part(channel)\n\n\[email protected]_privmsg\[email protected]_owner\[email protected]('restart')\[email protected]('low')\ndef restart(bot, trigger):\n \"\"\"Restart the bot. 
This is an owner-only command.\"\"\"\n quit_message = trigger.group(2)\n if not quit_message:\n quit_message = 'Restart on command from %s' % trigger.nick\n\n bot.restart(quit_message)\n\n\[email protected]_privmsg\[email protected]_owner\[email protected]('quit')\[email protected]('low')\ndef quit(bot, trigger):\n \"\"\"Quit from the server. This is an owner-only command.\"\"\"\n quit_message = trigger.group(2)\n if not quit_message:\n quit_message = 'Quitting on command from %s' % trigger.nick\n\n bot.quit(quit_message)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('msg')\[email protected]('low')\[email protected]('.msg #YourPants Does anyone else smell neurotoxin?')\ndef msg(bot, trigger):\n \"\"\"\n Send a message to a given channel or nick. Can only be done in privmsg by\n an admin.\n \"\"\"\n if trigger.group(2) is None:\n return\n\n channel, _sep, message = trigger.group(2).partition(' ')\n message = message.strip()\n if not channel or not message:\n return\n\n bot.msg(channel, message)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('me')\[email protected]('low')\ndef me(bot, trigger):\n \"\"\"\n Send an ACTION (/me) to a given channel or nick. Can only be done in\n privmsg by an admin.\n \"\"\"\n if trigger.group(2) is None:\n return\n\n channel, _sep, action = trigger.group(2).partition(' ')\n action = action.strip()\n if not channel or not action:\n return\n\n msg = '\\x01ACTION %s\\x01' % action\n bot.msg(channel, msg)\n\n\[email protected]('INVITE')\[email protected]('.*')\[email protected]('low')\ndef invite_join(bot, trigger):\n \"\"\"\n Join a channel Sopel is invited to, if the inviter is an admin.\n \"\"\"\n if trigger.admin or bot.config.admin.auto_accept_invite:\n bot.join(trigger.args[1])\n return\n\n\[email protected]('KICK')\[email protected](r'.*')\[email protected]('low')\ndef hold_ground(bot, trigger):\n \"\"\"\n This function monitors all kicks across all channels Sopel is in. If it\n detects that it is the one kicked it'll automatically join that channel.\n\n WARNING: This may not be needed and could cause problems if Sopel becomes\n annoying. Please use this with caution.\n \"\"\"\n if bot.config.admin.hold_ground:\n channel = trigger.sender\n if trigger.args[1] == bot.nick:\n bot.join(channel)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('mode')\[email protected]('low')\ndef mode(bot, trigger):\n \"\"\"Set a user mode on Sopel. 
Can only be done in privmsg by an admin.\"\"\"\n mode = trigger.group(3)\n bot.write(('MODE ', bot.nick + ' ' + mode))\n\n\[email protected]_privmsg(\"This command only works as a private message.\")\[email protected]_admin(\"This command requires admin privileges.\")\[email protected]('set')\[email protected]('.set core.owner Me')\ndef set_config(bot, trigger):\n \"\"\"See and modify values of Sopel's config object.\n\n Trigger args:\n arg1 - section and option, in the form \"section.option\"\n arg2 - value\n\n If there is no section, section will default to \"core\".\n If value is None, the option will be deleted.\n \"\"\"\n # Get section and option from first argument.\n arg1 = trigger.group(3).split('.')\n if len(arg1) == 1:\n section_name, option = \"core\", arg1[0]\n elif len(arg1) == 2:\n section_name, option = arg1\n else:\n bot.reply(\"Usage: .set section.option value\")\n return\n section = getattr(bot.config, section_name)\n static_sec = isinstance(section, StaticSection)\n\n if static_sec and not hasattr(section, option):\n bot.say('[{}] section has no option {}.'.format(section_name, option))\n return\n\n delim = trigger.group(2).find(' ')\n # Skip preceding whitespaces, if any.\n while delim > 0 and delim < len(trigger.group(2)) and trigger.group(2)[delim] == ' ':\n delim = delim + 1\n\n # Display current value if no value is given.\n if delim == -1 or delim == len(trigger.group(2)):\n if not static_sec and bot.config.parser.has_option(section, option):\n bot.reply(\"Option %s.%s does not exist.\" % (section_name, option))\n return\n # Except if the option looks like a password. Censor those to stop them\n # from being put on log files.\n if option.endswith(\"password\") or option.endswith(\"pass\"):\n value = \"(password censored)\"\n else:\n value = getattr(section, option)\n bot.reply(\"%s.%s = %s\" % (section_name, option, value))\n return\n\n # Otherwise, set the value to one given as argument 2.\n value = trigger.group(2)[delim:]\n if static_sec:\n descriptor = getattr(section.__class__, option)\n try:\n if isinstance(descriptor, FilenameAttribute):\n value = descriptor.parse(bot.config, descriptor, value)\n else:\n value = descriptor.parse(value)\n except ValueError as exc:\n bot.say(\"Can't set attribute: \" + str(exc))\n return\n setattr(section, option, value)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('save')\[email protected]('.save')\ndef save_config(bot, trigger):\n \"\"\"Save state of Sopel's config object to the configuration file.\"\"\"\n bot.config.save()\n", "path": "sopel/modules/admin.py"}]}
| 3,322 | 874 |
gh_patches_debug_20161
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-3938
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AZURE_144 passes on defaults
**Describe the issue**
If it is related to an existing check, please note the relevant check ID.
Also, explain the logic for this addition / change.
The check CKV_AZURE_144 passes if the property "public_network_access_enabled" is not explicitly set since it assumes that it defaults to false. This seems to not be the case at least for AzureRM < 3.0.0. Right now we have publicly accessible Workspaces for which the check passes since the property is not set.
**Examples**
Please share an example code sample (in the IaC of your choice) + the expected outcomes.
The Module Code:
<img width="567" alt="image" src="https://user-images.githubusercontent.com/34415231/203775024-77d6bc7c-dbec-4e8c-8639-42aa67136a3d.png">
The actual Workspace:
<img width="1182" alt="image" src="https://user-images.githubusercontent.com/34415231/203775161-91611475-5a27-4435-81a8-a40c7430061d.png">
Since the defaults seem to be subject to change, the check should probably fail if the property is not set.
</issue>
<code>
[start of checkov/terraform/checks/resource/azure/MLPublicAccess.py]
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.common.models.enums import CheckCategories
6 from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck
7
8
9 class MLPublicAccess(BaseResourceNegativeValueCheck):
10 def __init__(self) -> None:
11 # This is the full description of your check
12 description = "Ensure that Public Access is disabled for Machine Learning Workspace"
13
14 # This is the Unique ID for your check
15 id = "CKV_AZURE_144"
16
17 # These are the terraform objects supported by this check (ex: aws_iam_policy_document)
18 supported_resources = ('azurerm_machine_learning_workspace',)
19
20 # Valid CheckCategories are defined in checkov/common/models/enums.py
21 categories = (CheckCategories.NETWORKING,)
22 super().__init__(name=description, id=id, categories=categories, supported_resources=supported_resources)
23
24 def get_inspected_key(self) -> str:
25 return "public_network_access_enabled"
26
27 def get_forbidden_values(self) -> list[Any]:
28 return [True]
29
30
31 check = MLPublicAccess()
32
[end of checkov/terraform/checks/resource/azure/MLPublicAccess.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/terraform/checks/resource/azure/MLPublicAccess.py b/checkov/terraform/checks/resource/azure/MLPublicAccess.py
--- a/checkov/terraform/checks/resource/azure/MLPublicAccess.py
+++ b/checkov/terraform/checks/resource/azure/MLPublicAccess.py
@@ -2,7 +2,7 @@
from typing import Any
-from checkov.common.models.enums import CheckCategories
+from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck
@@ -19,7 +19,8 @@
# Valid CheckCategories are defined in checkov/common/models/enums.py
categories = (CheckCategories.NETWORKING,)
- super().__init__(name=description, id=id, categories=categories, supported_resources=supported_resources)
+ super().__init__(name=description, id=id, categories=categories,
+ supported_resources=supported_resources, missing_attribute_result=CheckResult.FAILED)
def get_inspected_key(self) -> str:
return "public_network_access_enabled"
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/MLPublicAccess.py b/checkov/terraform/checks/resource/azure/MLPublicAccess.py\n--- a/checkov/terraform/checks/resource/azure/MLPublicAccess.py\n+++ b/checkov/terraform/checks/resource/azure/MLPublicAccess.py\n@@ -2,7 +2,7 @@\n \n from typing import Any\n \n-from checkov.common.models.enums import CheckCategories\n+from checkov.common.models.enums import CheckCategories, CheckResult\n from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\n \n \n@@ -19,7 +19,8 @@\n \n # Valid CheckCategories are defined in checkov/common/models/enums.py\n categories = (CheckCategories.NETWORKING,)\n- super().__init__(name=description, id=id, categories=categories, supported_resources=supported_resources)\n+ super().__init__(name=description, id=id, categories=categories,\n+ supported_resources=supported_resources, missing_attribute_result=CheckResult.FAILED)\n \n def get_inspected_key(self) -> str:\n return \"public_network_access_enabled\"\n", "issue": "CKV_AZURE_144 passes on defaults\n**Describe the issue**\r\nIf it is related to an existing check, please note the relevant check ID.\r\nAlso, explain the logic for this addition / change.\r\n\r\nThe check CKV_AZURE_144 passes if the property \"public_network_access_enabled\" is not explicitly set since it assumes that it defaults to false. This seems to not be the case at least for AzureRM < 3.0.0. Right now we have publicly accessible Workspaces for which the check passes since the property is not set.\r\n\r\n**Examples**\r\nPlease share an example code sample (in the IaC of your choice) + the expected outcomes.\r\n\r\nThe Module Code:\r\n\r\n<img width=\"567\" alt=\"image\" src=\"https://user-images.githubusercontent.com/34415231/203775024-77d6bc7c-dbec-4e8c-8639-42aa67136a3d.png\">\r\n\r\nThe actual Workspace:\r\n<img width=\"1182\" alt=\"image\" src=\"https://user-images.githubusercontent.com/34415231/203775161-91611475-5a27-4435-81a8-a40c7430061d.png\">\r\n\r\nSince the defaults seem to be subject to change the check should probably fail if the property is not set.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\n\n\nclass MLPublicAccess(BaseResourceNegativeValueCheck):\n def __init__(self) -> None:\n # This is the full description of your check\n description = \"Ensure that Public Access is disabled for Machine Learning Workspace\"\n\n # This is the Unique ID for your check\n id = \"CKV_AZURE_144\"\n\n # These are the terraform objects supported by this check (ex: aws_iam_policy_document)\n supported_resources = ('azurerm_machine_learning_workspace',)\n\n # Valid CheckCategories are defined in checkov/common/models/enums.py\n categories = (CheckCategories.NETWORKING,)\n super().__init__(name=description, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"public_network_access_enabled\"\n\n def get_forbidden_values(self) -> list[Any]:\n return [True]\n\n\ncheck = MLPublicAccess()\n", "path": "checkov/terraform/checks/resource/azure/MLPublicAccess.py"}]}
| 1,178 | 246 |
gh_patches_debug_7991
|
rasdani/github-patches
|
git_diff
|
biolab__orange3-2093
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fitter preprocessors
<!--
This is an issue template. Please fill in the relevant details in the
sections below.
-->
##### Orange version
<!-- From menu _HelpβAboutβVersion_ or code `Orange.version.full_version` -->
3.4.0
##### Expected behavior
Learners use preprocessors.
##### Actual behavior
Learners extending the Fitter base class do not use preprocessors.
##### Steps to reproduce the behavior
Use a learner on e.g. hearth_disease data set
##### Additional info (worksheets, data, screenshots, ...)
</issue>
<code>
[start of Orange/modelling/base.py]
1 from Orange.base import Learner, Model
2
3
4 class FitterMeta(type):
5 """Ensure that each subclass of the `Fitter` class overrides the `__fits__`
6 attribute with a valid value."""
7 def __new__(mcs, name, bases, attrs):
8 # Check that a fitter implementation defines a valid `__fits__`
9 if any(cls.__name__ == 'Fitter' for cls in bases):
10 fits = attrs.get('__fits__')
11 assert isinstance(fits, dict), '__fits__ must be dict instance'
12 assert fits.get('classification') and fits.get('regression'), \
13 ('`__fits__` property does not define classification '
14 'or regression learner. Use a simple learner if you don\'t '
15 'need the functionality provided by Fitter.')
16 return super().__new__(mcs, name, bases, attrs)
17
18
19 class Fitter(Learner, metaclass=FitterMeta):
20 """Handle multiple types of target variable with one learner.
21
22 Subclasses of this class serve as a sort of dispatcher. When subclassing,
23 we provide a `dict` which contain actual learner classes that handle
24 appropriate data types. The fitter can then be used on any data and will
25 delegate the work to the appropriate learner.
26
27 If the learners that handle each data type require different parameters,
28 you should pass in all the possible parameters to the fitter. The fitter
29 will then determine which parameters have to be passed to individual
30 learners.
31
32 """
33 __fits__ = None
34 __returns__ = Model
35
36 # Constants to indicate what kind of problem we're dealing with
37 CLASSIFICATION, REGRESSION = 'classification', 'regression'
38
39 def __init__(self, preprocessors=None, **kwargs):
40 super().__init__(preprocessors=preprocessors)
41 self.kwargs = kwargs
42 # Make sure to pass preprocessor params to individual learners
43 self.kwargs['preprocessors'] = preprocessors
44 self.__learners = {self.CLASSIFICATION: None, self.REGRESSION: None}
45
46 def _fit_model(self, data):
47 if data.domain.has_discrete_class:
48 learner = self.get_learner(self.CLASSIFICATION)
49 else:
50 learner = self.get_learner(self.REGRESSION)
51
52 if type(self).fit is Learner.fit:
53 return learner.fit_storage(data)
54 else:
55 X, Y, W = data.X, data.Y, data.W if data.has_weights() else None
56 return learner.fit(X, Y, W)
57
58 def get_learner(self, problem_type):
59 """Get the learner for a given problem type.
60
61 Returns
62 -------
63 Learner
64 The appropriate learner for the given problem type.
65
66 """
67 # Prevent trying to access the learner when problem type is None
68 if problem_type not in self.__fits__:
69 raise TypeError("No learner to handle '{}'".format(problem_type))
70 if self.__learners[problem_type] is None:
71 learner = self.__fits__[problem_type](**self.__kwargs(problem_type))
72 learner.use_default_preprocessors = self.use_default_preprocessors
73 self.__learners[problem_type] = learner
74 return self.__learners[problem_type]
75
76 def __kwargs(self, problem_type):
77 learner_kwargs = set(
78 self.__fits__[problem_type].__init__.__code__.co_varnames[1:])
79 changed_kwargs = self._change_kwargs(self.kwargs, problem_type)
80 return {k: v for k, v in changed_kwargs.items() if k in learner_kwargs}
81
82 def _change_kwargs(self, kwargs, problem_type):
83 """Handle the kwargs to be passed to the learner before they are used.
84
85 In some cases we need to manipulate the kwargs that will be passed to
86 the learner, e.g. SGD takes a `loss` parameter in both the regression
87 and classification learners, but the learner widget cannot
88 differentiate between these two, so it passes classification and
89 regression loss parameters individually. The appropriate one must be
90 renamed into `loss` before passed to the actual learner instance. This
91 is done here.
92
93 """
94 return kwargs
95
96 @property
97 def supports_weights(self):
98 """The fitter supports weights if both the classification and
99 regression learners support weights."""
100 return (
101 hasattr(self.get_learner(self.CLASSIFICATION), 'supports_weights')
102 and self.get_learner(self.CLASSIFICATION).supports_weights) and (
103 hasattr(self.get_learner(self.REGRESSION), 'supports_weights')
104 and self.get_learner(self.REGRESSION).supports_weights)
105
106 @property
107 def params(self):
108 raise TypeError(
109 'A fitter does not have its own params. If you need to access '
110 'learner params, please use the `get_params` method.')
111
112 def get_params(self, problem_type):
113 """Access the specific learner params of a given learner."""
114 return self.get_learner(problem_type).params
115
[end of Orange/modelling/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/Orange/modelling/base.py b/Orange/modelling/base.py
--- a/Orange/modelling/base.py
+++ b/Orange/modelling/base.py
@@ -55,6 +55,12 @@
X, Y, W = data.X, data.Y, data.W if data.has_weights() else None
return learner.fit(X, Y, W)
+ def preprocess(self, data):
+ if data.domain.has_discrete_class:
+ return self.get_learner(self.CLASSIFICATION).preprocess(data)
+ else:
+ return self.get_learner(self.REGRESSION).preprocess(data)
+
def get_learner(self, problem_type):
"""Get the learner for a given problem type.
|
{"golden_diff": "diff --git a/Orange/modelling/base.py b/Orange/modelling/base.py\n--- a/Orange/modelling/base.py\n+++ b/Orange/modelling/base.py\n@@ -55,6 +55,12 @@\n X, Y, W = data.X, data.Y, data.W if data.has_weights() else None\n return learner.fit(X, Y, W)\n \n+ def preprocess(self, data):\n+ if data.domain.has_discrete_class:\n+ return self.get_learner(self.CLASSIFICATION).preprocess(data)\n+ else:\n+ return self.get_learner(self.REGRESSION).preprocess(data)\n+\n def get_learner(self, problem_type):\n \"\"\"Get the learner for a given problem type.\n", "issue": "Fitter preprocessors\n<!--\r\nThis is an issue template. Please fill in the relevant details in the\r\nsections below.\r\n-->\r\n\r\n##### Orange version\r\n<!-- From menu _Help\u2192About\u2192Version_ or code `Orange.version.full_version` -->\r\n3.4.0\r\n\r\n##### Expected behavior\r\nLearners use preprocessors.\r\n\r\n\r\n##### Actual behavior\r\nLearners extending the Fitter base class do not use preprocessors.\r\n\r\n\r\n##### Steps to reproduce the behavior\r\nUse a learner on e.g. hearth_disease data set\r\n\r\n\r\n##### Additional info (worksheets, data, screenshots, ...)\r\n\r\n\r\n\n", "before_files": [{"content": "from Orange.base import Learner, Model\n\n\nclass FitterMeta(type):\n \"\"\"Ensure that each subclass of the `Fitter` class overrides the `__fits__`\n attribute with a valid value.\"\"\"\n def __new__(mcs, name, bases, attrs):\n # Check that a fitter implementation defines a valid `__fits__`\n if any(cls.__name__ == 'Fitter' for cls in bases):\n fits = attrs.get('__fits__')\n assert isinstance(fits, dict), '__fits__ must be dict instance'\n assert fits.get('classification') and fits.get('regression'), \\\n ('`__fits__` property does not define classification '\n 'or regression learner. Use a simple learner if you don\\'t '\n 'need the functionality provided by Fitter.')\n return super().__new__(mcs, name, bases, attrs)\n\n\nclass Fitter(Learner, metaclass=FitterMeta):\n \"\"\"Handle multiple types of target variable with one learner.\n\n Subclasses of this class serve as a sort of dispatcher. When subclassing,\n we provide a `dict` which contain actual learner classes that handle\n appropriate data types. The fitter can then be used on any data and will\n delegate the work to the appropriate learner.\n\n If the learners that handle each data type require different parameters,\n you should pass in all the possible parameters to the fitter. 
The fitter\n will then determine which parameters have to be passed to individual\n learners.\n\n \"\"\"\n __fits__ = None\n __returns__ = Model\n\n # Constants to indicate what kind of problem we're dealing with\n CLASSIFICATION, REGRESSION = 'classification', 'regression'\n\n def __init__(self, preprocessors=None, **kwargs):\n super().__init__(preprocessors=preprocessors)\n self.kwargs = kwargs\n # Make sure to pass preprocessor params to individual learners\n self.kwargs['preprocessors'] = preprocessors\n self.__learners = {self.CLASSIFICATION: None, self.REGRESSION: None}\n\n def _fit_model(self, data):\n if data.domain.has_discrete_class:\n learner = self.get_learner(self.CLASSIFICATION)\n else:\n learner = self.get_learner(self.REGRESSION)\n\n if type(self).fit is Learner.fit:\n return learner.fit_storage(data)\n else:\n X, Y, W = data.X, data.Y, data.W if data.has_weights() else None\n return learner.fit(X, Y, W)\n\n def get_learner(self, problem_type):\n \"\"\"Get the learner for a given problem type.\n\n Returns\n -------\n Learner\n The appropriate learner for the given problem type.\n\n \"\"\"\n # Prevent trying to access the learner when problem type is None\n if problem_type not in self.__fits__:\n raise TypeError(\"No learner to handle '{}'\".format(problem_type))\n if self.__learners[problem_type] is None:\n learner = self.__fits__[problem_type](**self.__kwargs(problem_type))\n learner.use_default_preprocessors = self.use_default_preprocessors\n self.__learners[problem_type] = learner\n return self.__learners[problem_type]\n\n def __kwargs(self, problem_type):\n learner_kwargs = set(\n self.__fits__[problem_type].__init__.__code__.co_varnames[1:])\n changed_kwargs = self._change_kwargs(self.kwargs, problem_type)\n return {k: v for k, v in changed_kwargs.items() if k in learner_kwargs}\n\n def _change_kwargs(self, kwargs, problem_type):\n \"\"\"Handle the kwargs to be passed to the learner before they are used.\n\n In some cases we need to manipulate the kwargs that will be passed to\n the learner, e.g. SGD takes a `loss` parameter in both the regression\n and classification learners, but the learner widget cannot\n differentiate between these two, so it passes classification and\n regression loss parameters individually. The appropriate one must be\n renamed into `loss` before passed to the actual learner instance. This\n is done here.\n\n \"\"\"\n return kwargs\n\n @property\n def supports_weights(self):\n \"\"\"The fitter supports weights if both the classification and\n regression learners support weights.\"\"\"\n return (\n hasattr(self.get_learner(self.CLASSIFICATION), 'supports_weights')\n and self.get_learner(self.CLASSIFICATION).supports_weights) and (\n hasattr(self.get_learner(self.REGRESSION), 'supports_weights')\n and self.get_learner(self.REGRESSION).supports_weights)\n\n @property\n def params(self):\n raise TypeError(\n 'A fitter does not have its own params. If you need to access '\n 'learner params, please use the `get_params` method.')\n\n def get_params(self, problem_type):\n \"\"\"Access the specific learner params of a given learner.\"\"\"\n return self.get_learner(problem_type).params\n", "path": "Orange/modelling/base.py"}]}
| 1,951 | 157 |
gh_patches_debug_14326
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-3338
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create a workflow (nox?) for testing Emscripten support locally
It would be great to have an easy-to-use workflow for contributors that:
* Runs with a single command
* Installs all dependencies
* Opens a Pyodide console in a web browser
* Makes the local copy of urllib3 available for installation with micropip/Pyodide
This would help greatly with being able to poke around with and develop Emscripten/Pyodide support.
</issue>
<code>
[start of noxfile.py]
1 from __future__ import annotations
2
3 import os
4 import shutil
5 import sys
6 import typing
7 from pathlib import Path
8
9 import nox
10
11 nox.options.error_on_missing_interpreters = True
12
13
14 def tests_impl(
15 session: nox.Session,
16 extras: str = "socks,brotli,zstd,h2",
17 # hypercorn dependency h2 compares bytes and strings
18 # https://github.com/python-hyper/h2/issues/1236
19 byte_string_comparisons: bool = False,
20 integration: bool = False,
21 pytest_extra_args: list[str] = [],
22 ) -> None:
23 # Retrieve sys info from the Python implementation under test
24 # to avoid enabling memray when nox runs under CPython but tests PyPy
25 session_python_info = session.run(
26 "python",
27 "-c",
28 "import sys; print(sys.implementation.name, sys.version_info.releaselevel)",
29 silent=True,
30 ).strip() # type: ignore[union-attr] # mypy doesn't know that silent=True will return a string
31 implementation_name, release_level = session_python_info.split(" ")
32
33 # zstd cannot be installed on CPython 3.13 yet because it pins
34 # an incompatible CFFI version.
35 # https://github.com/indygreg/python-zstandard/issues/210
36 if release_level != "final":
37 extras = extras.replace(",zstd", "")
38
39 # Install deps and the package itself.
40 session.install("-r", "dev-requirements.txt")
41 session.install(f".[{extras}]")
42
43 # Show the pip version.
44 session.run("pip", "--version")
45 # Print the Python version and bytesize.
46 session.run("python", "--version")
47 session.run("python", "-c", "import struct; print(struct.calcsize('P') * 8)")
48 # Print OpenSSL information.
49 session.run("python", "-m", "OpenSSL.debug")
50
51 memray_supported = True
52 if implementation_name != "cpython" or release_level != "final":
53 memray_supported = False # pytest-memray requires CPython 3.8+
54 elif sys.platform == "win32":
55 memray_supported = False
56
57 # Environment variables being passed to the pytest run.
58 pytest_session_envvars = {
59 "PYTHONWARNINGS": "always::DeprecationWarning",
60 }
61
62 # In coverage 7.4.0 we can only set the setting for Python 3.12+
63 # Future versions of coverage will use sys.monitoring based on availability.
64 if (
65 isinstance(session.python, str)
66 and "." in session.python
67 and int(session.python.split(".")[1]) >= 12
68 ):
69 pytest_session_envvars["COVERAGE_CORE"] = "sysmon"
70
71 # Inspired from https://hynek.me/articles/ditch-codecov-python/
72 # We use parallel mode and then combine in a later CI step
73 session.run(
74 "python",
75 *(("-bb",) if byte_string_comparisons else ()),
76 "-m",
77 "coverage",
78 "run",
79 "--parallel-mode",
80 "-m",
81 "pytest",
82 *("--memray", "--hide-memray-summary") if memray_supported else (),
83 "-v",
84 "-ra",
85 *(("--integration",) if integration else ()),
86 f"--color={'yes' if 'GITHUB_ACTIONS' in os.environ else 'auto'}",
87 "--tb=native",
88 "--durations=10",
89 "--strict-config",
90 "--strict-markers",
91 *pytest_extra_args,
92 *(session.posargs or ("test/",)),
93 env=pytest_session_envvars,
94 )
95
96
97 @nox.session(
98 python=[
99 "3.8",
100 "3.9",
101 "3.10",
102 "3.11",
103 "3.12",
104 "3.13",
105 "pypy3.8",
106 "pypy3.9",
107 "pypy3.10",
108 ]
109 )
110 def test(session: nox.Session) -> None:
111 tests_impl(session)
112
113
114 @nox.session(python="3")
115 def test_integration(session: nox.Session) -> None:
116 """Run integration tests"""
117 tests_impl(session, integration=True)
118
119
120 @nox.session(python="3")
121 def test_brotlipy(session: nox.Session) -> None:
122 """Check that if 'brotlipy' is installed instead of 'brotli' or
123 'brotlicffi' that we still don't blow up.
124 """
125 session.install("brotlipy")
126 tests_impl(session, extras="socks", byte_string_comparisons=False)
127
128
129 def git_clone(session: nox.Session, git_url: str) -> None:
130 """We either clone the target repository or if already exist
131 simply reset the state and pull.
132 """
133 expected_directory = git_url.split("/")[-1]
134
135 if expected_directory.endswith(".git"):
136 expected_directory = expected_directory[:-4]
137
138 if not os.path.isdir(expected_directory):
139 session.run("git", "clone", "--depth", "1", git_url, external=True)
140 else:
141 session.run(
142 "git", "-C", expected_directory, "reset", "--hard", "HEAD", external=True
143 )
144 session.run("git", "-C", expected_directory, "pull", external=True)
145
146
147 @nox.session()
148 def downstream_botocore(session: nox.Session) -> None:
149 root = os.getcwd()
150 tmp_dir = session.create_tmp()
151
152 session.cd(tmp_dir)
153 git_clone(session, "https://github.com/boto/botocore")
154 session.chdir("botocore")
155 session.run("git", "rev-parse", "HEAD", external=True)
156 session.run("python", "scripts/ci/install")
157
158 session.cd(root)
159 session.install(".", silent=False)
160 session.cd(f"{tmp_dir}/botocore")
161
162 session.run("python", "-c", "import urllib3; print(urllib3.__version__)")
163 session.run("python", "scripts/ci/run-tests")
164
165
166 @nox.session()
167 def downstream_requests(session: nox.Session) -> None:
168 root = os.getcwd()
169 tmp_dir = session.create_tmp()
170
171 session.cd(tmp_dir)
172 git_clone(session, "https://github.com/psf/requests")
173 session.chdir("requests")
174 session.run("git", "rev-parse", "HEAD", external=True)
175 session.install(".[socks]", silent=False)
176 session.install("-r", "requirements-dev.txt", silent=False)
177
178 # Workaround until https://github.com/psf/httpbin/pull/29 gets released
179 session.install("flask<3", "werkzeug<3", silent=False)
180
181 session.cd(root)
182 session.install(".", silent=False)
183 session.cd(f"{tmp_dir}/requests")
184
185 session.run("python", "-c", "import urllib3; print(urllib3.__version__)")
186 session.run("pytest", "tests")
187
188
189 @nox.session()
190 def format(session: nox.Session) -> None:
191 """Run code formatters."""
192 lint(session)
193
194
195 @nox.session(python="3.12")
196 def lint(session: nox.Session) -> None:
197 session.install("pre-commit")
198 session.run("pre-commit", "run", "--all-files")
199
200 mypy(session)
201
202
203 # TODO: node support is not tested yet - it should work if you require('xmlhttprequest') before
204 # loading pyodide, but there is currently no nice way to do this with pytest-pyodide
205 # because you can't override the test runner properties easily - see
206 # https://github.com/pyodide/pytest-pyodide/issues/118 for more
207 @nox.session(python="3.11")
208 @nox.parametrize("runner", ["firefox", "chrome"])
209 def emscripten(session: nox.Session, runner: str) -> None:
210 """Test on Emscripten with Pyodide & Chrome / Firefox"""
211 session.install("-r", "emscripten-requirements.txt")
212 # build wheel into dist folder
213 session.run("python", "-m", "build")
214 # make sure we have a dist dir for pyodide
215 dist_dir = None
216 if "PYODIDE_ROOT" in os.environ:
217 # we have a pyodide build tree checked out
218 # use the dist directory from that
219 dist_dir = Path(os.environ["PYODIDE_ROOT"]) / "dist"
220 else:
221 # we don't have a build tree, get one
222 # that matches the version of pyodide build
223 pyodide_version = typing.cast(
224 str,
225 session.run(
226 "python",
227 "-c",
228 "import pyodide_build;print(pyodide_build.__version__)",
229 silent=True,
230 ),
231 ).strip()
232
233 pyodide_artifacts_path = Path(session.cache_dir) / f"pyodide-{pyodide_version}"
234 if not pyodide_artifacts_path.exists():
235 print("Fetching pyodide build artifacts")
236 session.run(
237 "wget",
238 f"https://github.com/pyodide/pyodide/releases/download/{pyodide_version}/pyodide-{pyodide_version}.tar.bz2",
239 "-O",
240 f"{pyodide_artifacts_path}.tar.bz2",
241 )
242 pyodide_artifacts_path.mkdir(parents=True)
243 session.run(
244 "tar",
245 "-xjf",
246 f"{pyodide_artifacts_path}.tar.bz2",
247 "-C",
248 str(pyodide_artifacts_path),
249 "--strip-components",
250 "1",
251 )
252
253 dist_dir = pyodide_artifacts_path
254 assert dist_dir is not None
255 assert dist_dir.exists()
256 if runner == "chrome":
257 # install chrome webdriver and add it to path
258 driver = typing.cast(
259 str,
260 session.run(
261 "python",
262 "-c",
263 "from webdriver_manager.chrome import ChromeDriverManager;print(ChromeDriverManager().install())",
264 silent=True,
265 ),
266 ).strip()
267 session.env["PATH"] = f"{Path(driver).parent}:{session.env['PATH']}"
268
269 tests_impl(
270 session,
271 pytest_extra_args=[
272 "--rt",
273 "chrome-no-host",
274 "--dist-dir",
275 str(dist_dir),
276 "test",
277 ],
278 )
279 elif runner == "firefox":
280 driver = typing.cast(
281 str,
282 session.run(
283 "python",
284 "-c",
285 "from webdriver_manager.firefox import GeckoDriverManager;print(GeckoDriverManager().install())",
286 silent=True,
287 ),
288 ).strip()
289 session.env["PATH"] = f"{Path(driver).parent}:{session.env['PATH']}"
290
291 tests_impl(
292 session,
293 pytest_extra_args=[
294 "--rt",
295 "firefox-no-host",
296 "--dist-dir",
297 str(dist_dir),
298 "test",
299 ],
300 )
301 else:
302 raise ValueError(f"Unknown runner: {runner}")
303
304
305 @nox.session(python="3.12")
306 def mypy(session: nox.Session) -> None:
307 """Run mypy."""
308 session.install("-r", "mypy-requirements.txt")
309 session.run("mypy", "--version")
310 session.run(
311 "mypy",
312 "-p",
313 "dummyserver",
314 "-m",
315 "noxfile",
316 "-p",
317 "urllib3",
318 "-p",
319 "test",
320 )
321
322
323 @nox.session
324 def docs(session: nox.Session) -> None:
325 session.install("-r", "docs/requirements.txt")
326 session.install(".[socks,brotli,zstd]")
327
328 session.chdir("docs")
329 if os.path.exists("_build"):
330 shutil.rmtree("_build")
331 session.run("sphinx-build", "-b", "html", "-W", ".", "_build/html")
332
[end of noxfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -200,6 +200,21 @@
mypy(session)
[email protected](python="3.11")
+def pyodideconsole(session: nox.Session) -> None:
+ # build wheel into dist folder
+ session.install("build")
+ session.run("python", "-m", "build")
+ session.run(
+ "cp",
+ "test/contrib/emscripten/templates/pyodide-console.html",
+ "dist/index.html",
+ external=True,
+ )
+ session.cd("dist")
+ session.run("python", "-m", "http.server")
+
+
# TODO: node support is not tested yet - it should work if you require('xmlhttprequest') before
# loading pyodide, but there is currently no nice way to do this with pytest-pyodide
# because you can't override the test runner properties easily - see
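A plausible way to exercise the new session (a sketch; the wheel filename, version, and micropip auto-loading are assumptions, not taken from the patch): running `nox -s pyodideconsole` builds the wheel into `dist/` and serves that directory with `python -m http.server` (port 8000 by default). In the Pyodide console served at http://localhost:8000 one could then do something like:

```python
# Hypothetical console usage; the wheel filename below is a placeholder.
import micropip  # assumes the console auto-loads packages like the stock Pyodide console
await micropip.install("http://localhost:8000/urllib3-2.2.0-py3-none-any.whl")
import urllib3
print(urllib3.__version__)
```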
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -200,6 +200,21 @@\n mypy(session)\n \n \[email protected](python=\"3.11\")\n+def pyodideconsole(session: nox.Session) -> None:\n+ # build wheel into dist folder\n+ session.install(\"build\")\n+ session.run(\"python\", \"-m\", \"build\")\n+ session.run(\n+ \"cp\",\n+ \"test/contrib/emscripten/templates/pyodide-console.html\",\n+ \"dist/index.html\",\n+ external=True,\n+ )\n+ session.cd(\"dist\")\n+ session.run(\"python\", \"-m\", \"http.server\")\n+\n+\n # TODO: node support is not tested yet - it should work if you require('xmlhttprequest') before\n # loading pyodide, but there is currently no nice way to do this with pytest-pyodide\n # because you can't override the test runner properties easily - see\n", "issue": "Create a workflow (nox?) for testing Emscripten support locally\nWould be great to have an easy-to-use workflow for contributors to:\r\n\r\n* Run a single command\r\n* Have all dependencies installed\r\n* Opens a Pyodide console in a web browser\r\n* Makes the local copy of urllib3 available for installation with micropip/Pyodide\r\n\r\nThis would help greatly with being able to poke around with and develop Emscripten/Pyodide support.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport os\nimport shutil\nimport sys\nimport typing\nfrom pathlib import Path\n\nimport nox\n\nnox.options.error_on_missing_interpreters = True\n\n\ndef tests_impl(\n session: nox.Session,\n extras: str = \"socks,brotli,zstd,h2\",\n # hypercorn dependency h2 compares bytes and strings\n # https://github.com/python-hyper/h2/issues/1236\n byte_string_comparisons: bool = False,\n integration: bool = False,\n pytest_extra_args: list[str] = [],\n) -> None:\n # Retrieve sys info from the Python implementation under test\n # to avoid enabling memray when nox runs under CPython but tests PyPy\n session_python_info = session.run(\n \"python\",\n \"-c\",\n \"import sys; print(sys.implementation.name, sys.version_info.releaselevel)\",\n silent=True,\n ).strip() # type: ignore[union-attr] # mypy doesn't know that silent=True will return a string\n implementation_name, release_level = session_python_info.split(\" \")\n\n # zstd cannot be installed on CPython 3.13 yet because it pins\n # an incompatible CFFI version.\n # https://github.com/indygreg/python-zstandard/issues/210\n if release_level != \"final\":\n extras = extras.replace(\",zstd\", \"\")\n\n # Install deps and the package itself.\n session.install(\"-r\", \"dev-requirements.txt\")\n session.install(f\".[{extras}]\")\n\n # Show the pip version.\n session.run(\"pip\", \"--version\")\n # Print the Python version and bytesize.\n session.run(\"python\", \"--version\")\n session.run(\"python\", \"-c\", \"import struct; print(struct.calcsize('P') * 8)\")\n # Print OpenSSL information.\n session.run(\"python\", \"-m\", \"OpenSSL.debug\")\n\n memray_supported = True\n if implementation_name != \"cpython\" or release_level != \"final\":\n memray_supported = False # pytest-memray requires CPython 3.8+\n elif sys.platform == \"win32\":\n memray_supported = False\n\n # Environment variables being passed to the pytest run.\n pytest_session_envvars = {\n \"PYTHONWARNINGS\": \"always::DeprecationWarning\",\n }\n\n # In coverage 7.4.0 we can only set the setting for Python 3.12+\n # Future versions of coverage will use sys.monitoring based on availability.\n if (\n isinstance(session.python, str)\n and \".\" in session.python\n and 
int(session.python.split(\".\")[1]) >= 12\n ):\n pytest_session_envvars[\"COVERAGE_CORE\"] = \"sysmon\"\n\n # Inspired from https://hynek.me/articles/ditch-codecov-python/\n # We use parallel mode and then combine in a later CI step\n session.run(\n \"python\",\n *((\"-bb\",) if byte_string_comparisons else ()),\n \"-m\",\n \"coverage\",\n \"run\",\n \"--parallel-mode\",\n \"-m\",\n \"pytest\",\n *(\"--memray\", \"--hide-memray-summary\") if memray_supported else (),\n \"-v\",\n \"-ra\",\n *((\"--integration\",) if integration else ()),\n f\"--color={'yes' if 'GITHUB_ACTIONS' in os.environ else 'auto'}\",\n \"--tb=native\",\n \"--durations=10\",\n \"--strict-config\",\n \"--strict-markers\",\n *pytest_extra_args,\n *(session.posargs or (\"test/\",)),\n env=pytest_session_envvars,\n )\n\n\[email protected](\n python=[\n \"3.8\",\n \"3.9\",\n \"3.10\",\n \"3.11\",\n \"3.12\",\n \"3.13\",\n \"pypy3.8\",\n \"pypy3.9\",\n \"pypy3.10\",\n ]\n)\ndef test(session: nox.Session) -> None:\n tests_impl(session)\n\n\[email protected](python=\"3\")\ndef test_integration(session: nox.Session) -> None:\n \"\"\"Run integration tests\"\"\"\n tests_impl(session, integration=True)\n\n\[email protected](python=\"3\")\ndef test_brotlipy(session: nox.Session) -> None:\n \"\"\"Check that if 'brotlipy' is installed instead of 'brotli' or\n 'brotlicffi' that we still don't blow up.\n \"\"\"\n session.install(\"brotlipy\")\n tests_impl(session, extras=\"socks\", byte_string_comparisons=False)\n\n\ndef git_clone(session: nox.Session, git_url: str) -> None:\n \"\"\"We either clone the target repository or if already exist\n simply reset the state and pull.\n \"\"\"\n expected_directory = git_url.split(\"/\")[-1]\n\n if expected_directory.endswith(\".git\"):\n expected_directory = expected_directory[:-4]\n\n if not os.path.isdir(expected_directory):\n session.run(\"git\", \"clone\", \"--depth\", \"1\", git_url, external=True)\n else:\n session.run(\n \"git\", \"-C\", expected_directory, \"reset\", \"--hard\", \"HEAD\", external=True\n )\n session.run(\"git\", \"-C\", expected_directory, \"pull\", external=True)\n\n\[email protected]()\ndef downstream_botocore(session: nox.Session) -> None:\n root = os.getcwd()\n tmp_dir = session.create_tmp()\n\n session.cd(tmp_dir)\n git_clone(session, \"https://github.com/boto/botocore\")\n session.chdir(\"botocore\")\n session.run(\"git\", \"rev-parse\", \"HEAD\", external=True)\n session.run(\"python\", \"scripts/ci/install\")\n\n session.cd(root)\n session.install(\".\", silent=False)\n session.cd(f\"{tmp_dir}/botocore\")\n\n session.run(\"python\", \"-c\", \"import urllib3; print(urllib3.__version__)\")\n session.run(\"python\", \"scripts/ci/run-tests\")\n\n\[email protected]()\ndef downstream_requests(session: nox.Session) -> None:\n root = os.getcwd()\n tmp_dir = session.create_tmp()\n\n session.cd(tmp_dir)\n git_clone(session, \"https://github.com/psf/requests\")\n session.chdir(\"requests\")\n session.run(\"git\", \"rev-parse\", \"HEAD\", external=True)\n session.install(\".[socks]\", silent=False)\n session.install(\"-r\", \"requirements-dev.txt\", silent=False)\n\n # Workaround until https://github.com/psf/httpbin/pull/29 gets released\n session.install(\"flask<3\", \"werkzeug<3\", silent=False)\n\n session.cd(root)\n session.install(\".\", silent=False)\n session.cd(f\"{tmp_dir}/requests\")\n\n session.run(\"python\", \"-c\", \"import urllib3; print(urllib3.__version__)\")\n session.run(\"pytest\", \"tests\")\n\n\[email protected]()\ndef format(session: nox.Session) -> 
None:\n \"\"\"Run code formatters.\"\"\"\n lint(session)\n\n\[email protected](python=\"3.12\")\ndef lint(session: nox.Session) -> None:\n session.install(\"pre-commit\")\n session.run(\"pre-commit\", \"run\", \"--all-files\")\n\n mypy(session)\n\n\n# TODO: node support is not tested yet - it should work if you require('xmlhttprequest') before\n# loading pyodide, but there is currently no nice way to do this with pytest-pyodide\n# because you can't override the test runner properties easily - see\n# https://github.com/pyodide/pytest-pyodide/issues/118 for more\[email protected](python=\"3.11\")\[email protected](\"runner\", [\"firefox\", \"chrome\"])\ndef emscripten(session: nox.Session, runner: str) -> None:\n \"\"\"Test on Emscripten with Pyodide & Chrome / Firefox\"\"\"\n session.install(\"-r\", \"emscripten-requirements.txt\")\n # build wheel into dist folder\n session.run(\"python\", \"-m\", \"build\")\n # make sure we have a dist dir for pyodide\n dist_dir = None\n if \"PYODIDE_ROOT\" in os.environ:\n # we have a pyodide build tree checked out\n # use the dist directory from that\n dist_dir = Path(os.environ[\"PYODIDE_ROOT\"]) / \"dist\"\n else:\n # we don't have a build tree, get one\n # that matches the version of pyodide build\n pyodide_version = typing.cast(\n str,\n session.run(\n \"python\",\n \"-c\",\n \"import pyodide_build;print(pyodide_build.__version__)\",\n silent=True,\n ),\n ).strip()\n\n pyodide_artifacts_path = Path(session.cache_dir) / f\"pyodide-{pyodide_version}\"\n if not pyodide_artifacts_path.exists():\n print(\"Fetching pyodide build artifacts\")\n session.run(\n \"wget\",\n f\"https://github.com/pyodide/pyodide/releases/download/{pyodide_version}/pyodide-{pyodide_version}.tar.bz2\",\n \"-O\",\n f\"{pyodide_artifacts_path}.tar.bz2\",\n )\n pyodide_artifacts_path.mkdir(parents=True)\n session.run(\n \"tar\",\n \"-xjf\",\n f\"{pyodide_artifacts_path}.tar.bz2\",\n \"-C\",\n str(pyodide_artifacts_path),\n \"--strip-components\",\n \"1\",\n )\n\n dist_dir = pyodide_artifacts_path\n assert dist_dir is not None\n assert dist_dir.exists()\n if runner == \"chrome\":\n # install chrome webdriver and add it to path\n driver = typing.cast(\n str,\n session.run(\n \"python\",\n \"-c\",\n \"from webdriver_manager.chrome import ChromeDriverManager;print(ChromeDriverManager().install())\",\n silent=True,\n ),\n ).strip()\n session.env[\"PATH\"] = f\"{Path(driver).parent}:{session.env['PATH']}\"\n\n tests_impl(\n session,\n pytest_extra_args=[\n \"--rt\",\n \"chrome-no-host\",\n \"--dist-dir\",\n str(dist_dir),\n \"test\",\n ],\n )\n elif runner == \"firefox\":\n driver = typing.cast(\n str,\n session.run(\n \"python\",\n \"-c\",\n \"from webdriver_manager.firefox import GeckoDriverManager;print(GeckoDriverManager().install())\",\n silent=True,\n ),\n ).strip()\n session.env[\"PATH\"] = f\"{Path(driver).parent}:{session.env['PATH']}\"\n\n tests_impl(\n session,\n pytest_extra_args=[\n \"--rt\",\n \"firefox-no-host\",\n \"--dist-dir\",\n str(dist_dir),\n \"test\",\n ],\n )\n else:\n raise ValueError(f\"Unknown runner: {runner}\")\n\n\[email protected](python=\"3.12\")\ndef mypy(session: nox.Session) -> None:\n \"\"\"Run mypy.\"\"\"\n session.install(\"-r\", \"mypy-requirements.txt\")\n session.run(\"mypy\", \"--version\")\n session.run(\n \"mypy\",\n \"-p\",\n \"dummyserver\",\n \"-m\",\n \"noxfile\",\n \"-p\",\n \"urllib3\",\n \"-p\",\n \"test\",\n )\n\n\[email protected]\ndef docs(session: nox.Session) -> None:\n session.install(\"-r\", \"docs/requirements.txt\")\n 
session.install(\".[socks,brotli,zstd]\")\n\n session.chdir(\"docs\")\n if os.path.exists(\"_build\"):\n shutil.rmtree(\"_build\")\n session.run(\"sphinx-build\", \"-b\", \"html\", \"-W\", \".\", \"_build/html\")\n", "path": "noxfile.py"}]}
| 4,088 | 229 |
gh_patches_debug_2852
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-14370
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"service" does not preserve the order of ports when editing
### NetBox version
v3.6.5
### Python version
3.8
### Steps to Reproduce
1. Create a "service", and enter the ports as "9093,9095,9998-9999"
2. View the list of services
3. Edit the service (i.e. click the pencil icon at the end of the service row)
### Expected Behavior
Either the ports should remain in the order originally entered, or they should be sorted. (I note that the data type in the underlying Postgres column is `integer[]`, which is an ordered list.)
### Observed Behavior
When viewing the table of services (`/ipam/services/`), the ports are shown in order:
<img width="304" alt="image" src="https://github.com/netbox-community/netbox/assets/44789/632b5313-7241-45d3-8649-b16fb9c4b6f0">
It also shows the same when viewing the details of an individual service (e.g. `/ipam/services/2/`)
However, when editing the service (`/ipam/services/2/edit/`), the ports are in a randomized order:
<img width="681" alt="image" src="https://github.com/netbox-community/netbox/assets/44789/494f89cc-80b5-4b48-a27f-498992c159e3">
This matches what's in the database, which is in the same randomized order:
```
netbox=# select ports from ipam_service where id=2
ports
-----------------------
{9999,9093,9998,9095}
(1 row)
```
</issue>
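The unsorted column shown above can be reproduced with a minimal sketch of the deduplication step in the port-range parser (using the port values from the report):

```python
# Minimal sketch: set() deduplicates but does not preserve insertion order,
# so the integer[] column is written back in an arbitrary order. Sorting the
# set (as the accepted patch further down does) makes the result deterministic.
ports = [9093, 9095, 9998, 9999]

unordered = list(set(ports))   # deduplicated, but in no guaranteed order
ordered = sorted(set(ports))   # [9093, 9095, 9998, 9999]

print(unordered)
print(ordered)
```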
<code>
[start of netbox/utilities/forms/utils.py]
1 import re
2
3 from django import forms
4 from django.forms.models import fields_for_model
5
6 from utilities.choices import unpack_grouped_choices
7 from utilities.querysets import RestrictedQuerySet
8 from .constants import *
9
10 __all__ = (
11 'add_blank_choice',
12 'expand_alphanumeric_pattern',
13 'expand_ipaddress_pattern',
14 'form_from_model',
15 'get_field_value',
16 'get_selected_values',
17 'parse_alphanumeric_range',
18 'parse_numeric_range',
19 'restrict_form_fields',
20 'parse_csv',
21 'validate_csv',
22 )
23
24
25 def parse_numeric_range(string, base=10):
26 """
27 Expand a numeric range (continuous or not) into a decimal or
28 hexadecimal list, as specified by the base parameter
29 '0-3,5' => [0, 1, 2, 3, 5]
30 '2,8-b,d,f' => [2, 8, 9, a, b, d, f]
31 """
32 values = list()
33 for dash_range in string.split(','):
34 try:
35 begin, end = dash_range.split('-')
36 except ValueError:
37 begin, end = dash_range, dash_range
38 try:
39 begin, end = int(begin.strip(), base=base), int(end.strip(), base=base) + 1
40 except ValueError:
41 raise forms.ValidationError(f'Range "{dash_range}" is invalid.')
42 values.extend(range(begin, end))
43 return list(set(values))
44
45
46 def parse_alphanumeric_range(string):
47 """
48 Expand an alphanumeric range (continuous or not) into a list.
49 'a-d,f' => [a, b, c, d, f]
50 '0-3,a-d' => [0, 1, 2, 3, a, b, c, d]
51 """
52 values = []
53 for dash_range in string.split(','):
54 try:
55 begin, end = dash_range.split('-')
56 vals = begin + end
57 # Break out of loop if there's an invalid pattern to return an error
58 if (not (vals.isdigit() or vals.isalpha())) or (vals.isalpha() and not (vals.isupper() or vals.islower())):
59 return []
60 except ValueError:
61 begin, end = dash_range, dash_range
62 if begin.isdigit() and end.isdigit():
63 if int(begin) >= int(end):
64 raise forms.ValidationError(f'Range "{dash_range}" is invalid.')
65
66 for n in list(range(int(begin), int(end) + 1)):
67 values.append(n)
68 else:
69 # Value-based
70 if begin == end:
71 values.append(begin)
72 # Range-based
73 else:
74 # Not a valid range (more than a single character)
75 if not len(begin) == len(end) == 1:
76 raise forms.ValidationError(f'Range "{dash_range}" is invalid.')
77
78 if ord(begin) >= ord(end):
79 raise forms.ValidationError(f'Range "{dash_range}" is invalid.')
80
81 for n in list(range(ord(begin), ord(end) + 1)):
82 values.append(chr(n))
83 return values
84
85
86 def expand_alphanumeric_pattern(string):
87 """
88 Expand an alphabetic pattern into a list of strings.
89 """
90 lead, pattern, remnant = re.split(ALPHANUMERIC_EXPANSION_PATTERN, string, maxsplit=1)
91 parsed_range = parse_alphanumeric_range(pattern)
92 for i in parsed_range:
93 if re.search(ALPHANUMERIC_EXPANSION_PATTERN, remnant):
94 for string in expand_alphanumeric_pattern(remnant):
95 yield "{}{}{}".format(lead, i, string)
96 else:
97 yield "{}{}{}".format(lead, i, remnant)
98
99
100 def expand_ipaddress_pattern(string, family):
101 """
102 Expand an IP address pattern into a list of strings. Examples:
103 '192.0.2.[1,2,100-250]/24' => ['192.0.2.1/24', '192.0.2.2/24', '192.0.2.100/24' ... '192.0.2.250/24']
104 '2001:db8:0:[0,fd-ff]::/64' => ['2001:db8:0:0::/64', '2001:db8:0:fd::/64', ... '2001:db8:0:ff::/64']
105 """
106 if family not in [4, 6]:
107 raise Exception("Invalid IP address family: {}".format(family))
108 if family == 4:
109 regex = IP4_EXPANSION_PATTERN
110 base = 10
111 else:
112 regex = IP6_EXPANSION_PATTERN
113 base = 16
114 lead, pattern, remnant = re.split(regex, string, maxsplit=1)
115 parsed_range = parse_numeric_range(pattern, base)
116 for i in parsed_range:
117 if re.search(regex, remnant):
118 for string in expand_ipaddress_pattern(remnant, family):
119 yield ''.join([lead, format(i, 'x' if family == 6 else 'd'), string])
120 else:
121 yield ''.join([lead, format(i, 'x' if family == 6 else 'd'), remnant])
122
123
124 def get_field_value(form, field_name):
125 """
126 Return the current bound or initial value associated with a form field, prior to calling
127 clean() for the form.
128 """
129 field = form.fields[field_name]
130
131 if form.is_bound:
132 if data := form.data.get(field_name):
133 if field.valid_value(data):
134 return data
135
136 return form.get_initial_for_field(field, field_name)
137
138
139 def get_selected_values(form, field_name):
140 """
141 Return the list of selected human-friendly values for a form field
142 """
143 if not hasattr(form, 'cleaned_data'):
144 form.is_valid()
145 filter_data = form.cleaned_data.get(field_name)
146 field = form.fields[field_name]
147
148 # Non-selection field
149 if not hasattr(field, 'choices'):
150 return [str(filter_data)]
151
152 # Model choice field
153 if type(field.choices) is forms.models.ModelChoiceIterator:
154 # If this is a single-choice field, wrap its value in a list
155 if not hasattr(filter_data, '__iter__'):
156 values = [filter_data]
157 else:
158 values = filter_data
159
160 else:
161 # Static selection field
162 choices = unpack_grouped_choices(field.choices)
163 if type(filter_data) not in (list, tuple):
164 filter_data = [filter_data] # Ensure filter data is iterable
165 values = [
166 label for value, label in choices if str(value) in filter_data or None in filter_data
167 ]
168
169 # If the field has a `null_option` attribute set and it is selected,
170 # add it to the field's grouped choices.
171 if getattr(field, 'null_option', None) and None in filter_data:
172 values.remove(None)
173 values.insert(0, field.null_option)
174
175 return values
176
177
178 def add_blank_choice(choices):
179 """
180 Add a blank choice to the beginning of a choices list.
181 """
182 return ((None, '---------'),) + tuple(choices)
183
184
185 def form_from_model(model, fields):
186 """
187 Return a Form class with the specified fields derived from a model. This is useful when we need a form to be used
188 for creating objects, but want to avoid the model's validation (e.g. for bulk create/edit functions). All fields
189 are marked as not required.
190 """
191 form_fields = fields_for_model(model, fields=fields)
192 for field in form_fields.values():
193 field.required = False
194
195 return type('FormFromModel', (forms.Form,), form_fields)
196
197
198 def restrict_form_fields(form, user, action='view'):
199 """
200 Restrict all form fields which reference a RestrictedQuerySet. This ensures that users see only permitted objects
201 as available choices.
202 """
203 for field in form.fields.values():
204 if hasattr(field, 'queryset') and issubclass(field.queryset.__class__, RestrictedQuerySet):
205 field.queryset = field.queryset.restrict(user, action)
206
207
208 def parse_csv(reader):
209 """
210 Parse a csv_reader object into a headers dictionary and a list of records dictionaries. Raise an error
211 if the records are formatted incorrectly. Return headers and records as a tuple.
212 """
213 records = []
214 headers = {}
215
216 # Consume the first line of CSV data as column headers. Create a dictionary mapping each header to an optional
217 # "to" field specifying how the related object is being referenced. For example, importing a Device might use a
218 # `site.slug` header, to indicate the related site is being referenced by its slug.
219
220 for header in next(reader):
221 header = header.strip()
222 if '.' in header:
223 field, to_field = header.split('.', 1)
224 if field in headers:
225 raise forms.ValidationError(f'Duplicate or conflicting column header for "{field}"')
226 headers[field] = to_field
227 else:
228 if header in headers:
229 raise forms.ValidationError(f'Duplicate or conflicting column header for "{header}"')
230 headers[header] = None
231
232 # Parse CSV rows into a list of dictionaries mapped from the column headers.
233 for i, row in enumerate(reader, start=1):
234 if len(row) != len(headers):
235 raise forms.ValidationError(
236 f"Row {i}: Expected {len(headers)} columns but found {len(row)}"
237 )
238 row = [col.strip() for col in row]
239 record = dict(zip(headers.keys(), row))
240 records.append(record)
241
242 return headers, records
243
244
245 def validate_csv(headers, fields, required_fields):
246 """
247 Validate that parsed csv data conforms to the object's available fields. Raise validation errors
248 if parsed csv data contains invalid headers or does not contain required headers.
249 """
250 # Validate provided column headers
251 is_update = False
252 for field, to_field in headers.items():
253 if field == "id":
254 is_update = True
255 continue
256 if field not in fields:
257 raise forms.ValidationError(f'Unexpected column header "{field}" found.')
258 if to_field and not hasattr(fields[field], 'to_field_name'):
259 raise forms.ValidationError(f'Column "{field}" is not a related object; cannot use dots')
260 if to_field and not hasattr(fields[field].queryset.model, to_field):
261 raise forms.ValidationError(f'Invalid related object attribute for column "{field}": {to_field}')
262
263 # Validate required fields (if not an update)
264 if not is_update:
265 for f in required_fields:
266 if f not in headers:
267 raise forms.ValidationError(f'Required column header "{f}" not found.')
268
[end of netbox/utilities/forms/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/netbox/utilities/forms/utils.py b/netbox/utilities/forms/utils.py
--- a/netbox/utilities/forms/utils.py
+++ b/netbox/utilities/forms/utils.py
@@ -40,7 +40,7 @@
except ValueError:
raise forms.ValidationError(f'Range "{dash_range}" is invalid.')
values.extend(range(begin, end))
- return list(set(values))
+ return sorted(set(values))
def parse_alphanumeric_range(string):
|
{"golden_diff": "diff --git a/netbox/utilities/forms/utils.py b/netbox/utilities/forms/utils.py\n--- a/netbox/utilities/forms/utils.py\n+++ b/netbox/utilities/forms/utils.py\n@@ -40,7 +40,7 @@\n except ValueError:\n raise forms.ValidationError(f'Range \"{dash_range}\" is invalid.')\n values.extend(range(begin, end))\n- return list(set(values))\n+ return sorted(set(values))\n \n \n def parse_alphanumeric_range(string):\n", "issue": "\"service\" does not preserve the order of ports when editing\n### NetBox version\n\nv3.6.5\n\n### Python version\n\n3.8\n\n### Steps to Reproduce\n\n1. Create a \"service\", and enter the ports as \"9093,9095,9998-9999\"\r\n2. View the list of services\r\n3. Edit the service (i.e. click the pencil icon at the end of the service row)\n\n### Expected Behavior\n\nEither the ports to remain in the order originally entered, or to be sorted. (I note that the data type in the underlying Postgres column is `integer[]` which is an ordered list)\n\n### Observed Behavior\n\nWhen viewing the table of services (`/ipam/services/`), the ports are shown in order:\r\n\r\n<img width=\"304\" alt=\"image\" src=\"https://github.com/netbox-community/netbox/assets/44789/632b5313-7241-45d3-8649-b16fb9c4b6f0\">\r\n\r\nIt also shows the same when viewing the details of an individual service (e.g. `/ipam/services/2/`)\r\n\r\nHowever, when editing the service (`/ipam/services/2/edit/`), the ports are in a randomized order:\r\n\r\n<img width=\"681\" alt=\"image\" src=\"https://github.com/netbox-community/netbox/assets/44789/494f89cc-80b5-4b48-a27f-498992c159e3\">\r\n\r\nThis matches what's in the database, which in the same randomized order:\r\n\r\n```\r\nnetbox=# select ports from ipam_service where id=2\r\n ports\r\n-----------------------\r\n {9999,9093,9998,9095}\r\n(1 row)\r\n```\r\n\n", "before_files": [{"content": "import re\n\nfrom django import forms\nfrom django.forms.models import fields_for_model\n\nfrom utilities.choices import unpack_grouped_choices\nfrom utilities.querysets import RestrictedQuerySet\nfrom .constants import *\n\n__all__ = (\n 'add_blank_choice',\n 'expand_alphanumeric_pattern',\n 'expand_ipaddress_pattern',\n 'form_from_model',\n 'get_field_value',\n 'get_selected_values',\n 'parse_alphanumeric_range',\n 'parse_numeric_range',\n 'restrict_form_fields',\n 'parse_csv',\n 'validate_csv',\n)\n\n\ndef parse_numeric_range(string, base=10):\n \"\"\"\n Expand a numeric range (continuous or not) into a decimal or\n hexadecimal list, as specified by the base parameter\n '0-3,5' => [0, 1, 2, 3, 5]\n '2,8-b,d,f' => [2, 8, 9, a, b, d, f]\n \"\"\"\n values = list()\n for dash_range in string.split(','):\n try:\n begin, end = dash_range.split('-')\n except ValueError:\n begin, end = dash_range, dash_range\n try:\n begin, end = int(begin.strip(), base=base), int(end.strip(), base=base) + 1\n except ValueError:\n raise forms.ValidationError(f'Range \"{dash_range}\" is invalid.')\n values.extend(range(begin, end))\n return list(set(values))\n\n\ndef parse_alphanumeric_range(string):\n \"\"\"\n Expand an alphanumeric range (continuous or not) into a list.\n 'a-d,f' => [a, b, c, d, f]\n '0-3,a-d' => [0, 1, 2, 3, a, b, c, d]\n \"\"\"\n values = []\n for dash_range in string.split(','):\n try:\n begin, end = dash_range.split('-')\n vals = begin + end\n # Break out of loop if there's an invalid pattern to return an error\n if (not (vals.isdigit() or vals.isalpha())) or (vals.isalpha() and not (vals.isupper() or vals.islower())):\n return []\n except ValueError:\n begin, end = 
dash_range, dash_range\n if begin.isdigit() and end.isdigit():\n if int(begin) >= int(end):\n raise forms.ValidationError(f'Range \"{dash_range}\" is invalid.')\n\n for n in list(range(int(begin), int(end) + 1)):\n values.append(n)\n else:\n # Value-based\n if begin == end:\n values.append(begin)\n # Range-based\n else:\n # Not a valid range (more than a single character)\n if not len(begin) == len(end) == 1:\n raise forms.ValidationError(f'Range \"{dash_range}\" is invalid.')\n\n if ord(begin) >= ord(end):\n raise forms.ValidationError(f'Range \"{dash_range}\" is invalid.')\n\n for n in list(range(ord(begin), ord(end) + 1)):\n values.append(chr(n))\n return values\n\n\ndef expand_alphanumeric_pattern(string):\n \"\"\"\n Expand an alphabetic pattern into a list of strings.\n \"\"\"\n lead, pattern, remnant = re.split(ALPHANUMERIC_EXPANSION_PATTERN, string, maxsplit=1)\n parsed_range = parse_alphanumeric_range(pattern)\n for i in parsed_range:\n if re.search(ALPHANUMERIC_EXPANSION_PATTERN, remnant):\n for string in expand_alphanumeric_pattern(remnant):\n yield \"{}{}{}\".format(lead, i, string)\n else:\n yield \"{}{}{}\".format(lead, i, remnant)\n\n\ndef expand_ipaddress_pattern(string, family):\n \"\"\"\n Expand an IP address pattern into a list of strings. Examples:\n '192.0.2.[1,2,100-250]/24' => ['192.0.2.1/24', '192.0.2.2/24', '192.0.2.100/24' ... '192.0.2.250/24']\n '2001:db8:0:[0,fd-ff]::/64' => ['2001:db8:0:0::/64', '2001:db8:0:fd::/64', ... '2001:db8:0:ff::/64']\n \"\"\"\n if family not in [4, 6]:\n raise Exception(\"Invalid IP address family: {}\".format(family))\n if family == 4:\n regex = IP4_EXPANSION_PATTERN\n base = 10\n else:\n regex = IP6_EXPANSION_PATTERN\n base = 16\n lead, pattern, remnant = re.split(regex, string, maxsplit=1)\n parsed_range = parse_numeric_range(pattern, base)\n for i in parsed_range:\n if re.search(regex, remnant):\n for string in expand_ipaddress_pattern(remnant, family):\n yield ''.join([lead, format(i, 'x' if family == 6 else 'd'), string])\n else:\n yield ''.join([lead, format(i, 'x' if family == 6 else 'd'), remnant])\n\n\ndef get_field_value(form, field_name):\n \"\"\"\n Return the current bound or initial value associated with a form field, prior to calling\n clean() for the form.\n \"\"\"\n field = form.fields[field_name]\n\n if form.is_bound:\n if data := form.data.get(field_name):\n if field.valid_value(data):\n return data\n\n return form.get_initial_for_field(field, field_name)\n\n\ndef get_selected_values(form, field_name):\n \"\"\"\n Return the list of selected human-friendly values for a form field\n \"\"\"\n if not hasattr(form, 'cleaned_data'):\n form.is_valid()\n filter_data = form.cleaned_data.get(field_name)\n field = form.fields[field_name]\n\n # Non-selection field\n if not hasattr(field, 'choices'):\n return [str(filter_data)]\n\n # Model choice field\n if type(field.choices) is forms.models.ModelChoiceIterator:\n # If this is a single-choice field, wrap its value in a list\n if not hasattr(filter_data, '__iter__'):\n values = [filter_data]\n else:\n values = filter_data\n\n else:\n # Static selection field\n choices = unpack_grouped_choices(field.choices)\n if type(filter_data) not in (list, tuple):\n filter_data = [filter_data] # Ensure filter data is iterable\n values = [\n label for value, label in choices if str(value) in filter_data or None in filter_data\n ]\n\n # If the field has a `null_option` attribute set and it is selected,\n # add it to the field's grouped choices.\n if getattr(field, 'null_option', None) and None 
in filter_data:\n values.remove(None)\n values.insert(0, field.null_option)\n\n return values\n\n\ndef add_blank_choice(choices):\n \"\"\"\n Add a blank choice to the beginning of a choices list.\n \"\"\"\n return ((None, '---------'),) + tuple(choices)\n\n\ndef form_from_model(model, fields):\n \"\"\"\n Return a Form class with the specified fields derived from a model. This is useful when we need a form to be used\n for creating objects, but want to avoid the model's validation (e.g. for bulk create/edit functions). All fields\n are marked as not required.\n \"\"\"\n form_fields = fields_for_model(model, fields=fields)\n for field in form_fields.values():\n field.required = False\n\n return type('FormFromModel', (forms.Form,), form_fields)\n\n\ndef restrict_form_fields(form, user, action='view'):\n \"\"\"\n Restrict all form fields which reference a RestrictedQuerySet. This ensures that users see only permitted objects\n as available choices.\n \"\"\"\n for field in form.fields.values():\n if hasattr(field, 'queryset') and issubclass(field.queryset.__class__, RestrictedQuerySet):\n field.queryset = field.queryset.restrict(user, action)\n\n\ndef parse_csv(reader):\n \"\"\"\n Parse a csv_reader object into a headers dictionary and a list of records dictionaries. Raise an error\n if the records are formatted incorrectly. Return headers and records as a tuple.\n \"\"\"\n records = []\n headers = {}\n\n # Consume the first line of CSV data as column headers. Create a dictionary mapping each header to an optional\n # \"to\" field specifying how the related object is being referenced. For example, importing a Device might use a\n # `site.slug` header, to indicate the related site is being referenced by its slug.\n\n for header in next(reader):\n header = header.strip()\n if '.' in header:\n field, to_field = header.split('.', 1)\n if field in headers:\n raise forms.ValidationError(f'Duplicate or conflicting column header for \"{field}\"')\n headers[field] = to_field\n else:\n if header in headers:\n raise forms.ValidationError(f'Duplicate or conflicting column header for \"{header}\"')\n headers[header] = None\n\n # Parse CSV rows into a list of dictionaries mapped from the column headers.\n for i, row in enumerate(reader, start=1):\n if len(row) != len(headers):\n raise forms.ValidationError(\n f\"Row {i}: Expected {len(headers)} columns but found {len(row)}\"\n )\n row = [col.strip() for col in row]\n record = dict(zip(headers.keys(), row))\n records.append(record)\n\n return headers, records\n\n\ndef validate_csv(headers, fields, required_fields):\n \"\"\"\n Validate that parsed csv data conforms to the object's available fields. 
Raise validation errors\n if parsed csv data contains invalid headers or does not contain required headers.\n \"\"\"\n # Validate provided column headers\n is_update = False\n for field, to_field in headers.items():\n if field == \"id\":\n is_update = True\n continue\n if field not in fields:\n raise forms.ValidationError(f'Unexpected column header \"{field}\" found.')\n if to_field and not hasattr(fields[field], 'to_field_name'):\n raise forms.ValidationError(f'Column \"{field}\" is not a related object; cannot use dots')\n if to_field and not hasattr(fields[field].queryset.model, to_field):\n raise forms.ValidationError(f'Invalid related object attribute for column \"{field}\": {to_field}')\n\n # Validate required fields (if not an update)\n if not is_update:\n for f in required_fields:\n if f not in headers:\n raise forms.ValidationError(f'Required column header \"{f}\" not found.')\n", "path": "netbox/utilities/forms/utils.py"}]}
| 3,986 | 100 |
gh_patches_debug_17346
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-373
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AWS_4 is an invalid check
**Describe the bug**
CKV_AWS_4 for the Terraform resource `aws_ebs_snapshot` is invalid. There is no argument for encryption. Please remove this check.
**To Reproduce**
Steps to reproduce the behavior:
1. create tf file ckv_aws_4_test.tf:
```
resource "aws_ebs_volume" "example" {
availability_zone = "us-west-2a"
encrypted = true
size = 40
tags = {
Name = "HelloWorld"
}
}
resource "aws_ebs_snapshot" "example_snapshot" {
volume_id = "${aws_ebs_volume.example.id}"
tags = {
Name = "HelloWorld_snap"
}
}
```
2. Run cli command 'checkov -f ckv_aws_4_test.tf'
3. Failed when should have passed
**Expected behavior**
Passing check
**Screenshots**

**Desktop (please complete the following information):**
- OS: [MacOS]
- Checkov Version [1.0.391]
**Additional context**
- [link to resource doc](https://www.terraform.io/docs/providers/aws/r/ebs_snapshot.html)
As you can see, there is no argument for encryption listed, only a computed attribute named encryption.

- [TF SourceCode shows encryption as being computed](https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_ebs_snapshot.go)

- The docs from AWS explain that snapshots that are taken from encrypted volumes are automatically encrypted. [link](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html)
False positive for CKV_AWS_17
Checkov `1.0.391` will fail CKV_AWS_17 for a Terraform file defining any value for `publicly_accessible`, even false, because the check is for any value rather than the actual security goal, which should be a test for true:
https://github.com/bridgecrewio/checkov/blob/b906298b4a26135b7ee6b58f1aa4c54fc04ead20/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py
It should probably also say “RDS instance” rather than “RDS bucket”
</issue>
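To see the CKV_AWS_17 false positive concretely, here is a small sketch of the negative-value check logic (simplified pseudologic, not Checkov's actual scanner API):

```python
# Stand-in for checkov.common.models.consts.ANY_VALUE
ANY_VALUE = object()

def evaluate(publicly_accessible, forbidden_values):
    """A negative-value check fails when the attribute matches a forbidden value."""
    if publicly_accessible is None:
        return "PASSED"
    if ANY_VALUE in forbidden_values or publicly_accessible in forbidden_values:
        return "FAILED"
    return "PASSED"

print(evaluate(False, [ANY_VALUE]))  # FAILED -> the false positive described above
print(evaluate(False, [True]))       # PASSED -> behaviour once only True is forbidden
print(evaluate(True, [True]))        # FAILED -> a real publicly accessible instance
```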
<code>
[start of checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py]
1 from checkov.common.models.consts import ANY_VALUE
2 from checkov.common.models.enums import CheckCategories
3 from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck
4
5
6 class RDSPubliclyAccessible(BaseResourceNegativeValueCheck):
7 def __init__(self):
8 name = "Ensure all data stored in the RDS bucket is not public accessible"
9 id = "CKV_AWS_17"
10 supported_resources = ['aws_db_instance', 'aws_rds_cluster_instance']
11 categories = [CheckCategories.NETWORKING]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
13
14 def get_inspected_key(self):
15 return 'publicly_accessible'
16
17 def get_forbidden_values(self):
18 return [ANY_VALUE]
19
20
21 check = RDSPubliclyAccessible()
22
[end of checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py]
[start of checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py]
1 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
2 from checkov.common.models.enums import CheckCategories
3
4
5 class EBSSnapshotEncryption(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure all data stored in the EBS Snapshot is securely encrypted"
8 id = "CKV_AWS_4"
9 supported_resources = ['aws_ebs_snapshot']
10 categories = [CheckCategories.ENCRYPTION]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "encrypted"
15
16
17 check = EBSSnapshotEncryption()
18
[end of checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py b/checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py
deleted file mode 100644
--- a/checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
-from checkov.common.models.enums import CheckCategories
-
-
-class EBSSnapshotEncryption(BaseResourceValueCheck):
- def __init__(self):
- name = "Ensure all data stored in the EBS Snapshot is securely encrypted"
- id = "CKV_AWS_4"
- supported_resources = ['aws_ebs_snapshot']
- categories = [CheckCategories.ENCRYPTION]
- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
-
- def get_inspected_key(self):
- return "encrypted"
-
-
-check = EBSSnapshotEncryption()
diff --git a/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py b/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py
--- a/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py
+++ b/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py
@@ -15,7 +15,7 @@
return 'publicly_accessible'
def get_forbidden_values(self):
- return [ANY_VALUE]
+ return [True]
check = RDSPubliclyAccessible()
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py b/checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py\ndeleted file mode 100644\n--- a/checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py\n+++ /dev/null\n@@ -1,17 +0,0 @@\n-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n-from checkov.common.models.enums import CheckCategories\n-\n-\n-class EBSSnapshotEncryption(BaseResourceValueCheck):\n- def __init__(self):\n- name = \"Ensure all data stored in the EBS Snapshot is securely encrypted\"\n- id = \"CKV_AWS_4\"\n- supported_resources = ['aws_ebs_snapshot']\n- categories = [CheckCategories.ENCRYPTION]\n- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n-\n- def get_inspected_key(self):\n- return \"encrypted\"\n-\n-\n-check = EBSSnapshotEncryption()\ndiff --git a/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py b/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py\n--- a/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py\n+++ b/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py\n@@ -15,7 +15,7 @@\n return 'publicly_accessible'\n \n def get_forbidden_values(self):\n- return [ANY_VALUE]\n+ return [True]\n \n \n check = RDSPubliclyAccessible()\n", "issue": "CKV_AWS_4 is an invalid check\n**Describe the bug**\r\nCKV_AWS_4 for terraform resource `aws_ebs_snapshot` is invalid. There is not an argument for encryption. Please remove this check.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. create tf file ckv_aws_4_test.tf:\r\n```\r\nresource \"aws_ebs_volume\" \"example\" {\r\n availability_zone = \"us-west-2a\"\r\n encrypted = true\r\n size = 40\r\n\r\n tags = {\r\n Name = \"HelloWorld\"\r\n }\r\n}\r\n\r\nresource \"aws_ebs_snapshot\" \"example_snapshot\" {\r\n volume_id = \"${aws_ebs_volume.example.id}\"\r\n\r\n tags = {\r\n Name = \"HelloWorld_snap\"\r\n }\r\n}\r\n```\r\n\r\n2. Run cli command 'checkov -f ckv_aws_4_test.tf'\r\n3. Failed when should have passed\r\n\r\n**Expected behavior**\r\nPassing check\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: [MacOS]\r\n - Checkov Version [1.0.391]\r\n\r\n**Additional context**\r\n- [link to resource doc](https://www.terraform.io/docs/providers/aws/r/ebs_snapshot.html)\r\nAs you can see, there is not an argument for encryption listed. Only a computed artifact named encryption.\r\n\r\n\r\n- [TF SourceCode shows encryption as being computed](https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_ebs_snapshot.go)\r\n\r\n\r\n- The docs from AWS explain that snapshots that are taken from encrypted volumes are automatically encrypted. 
[link](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html) \r\n\nFalse positive for CKV_AWS_17\nCheckov `1.0.391` will fail CKV_AWS_17 for a Terraform file defining any value for `publicly_accessible`, even false, because the check is for any value rather the actual security goal which should be a test for true:\r\n\r\nhttps://github.com/bridgecrewio/checkov/blob/b906298b4a26135b7ee6b58f1aa4c54fc04ead20/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py\r\n\r\nIt should probably also say \u201cRDS instance\u201d rather than \u201cRDS bucket\u201d\n", "before_files": [{"content": "from checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\n\n\nclass RDSPubliclyAccessible(BaseResourceNegativeValueCheck):\n def __init__(self):\n name = \"Ensure all data stored in the RDS bucket is not public accessible\"\n id = \"CKV_AWS_17\"\n supported_resources = ['aws_db_instance', 'aws_rds_cluster_instance']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return 'publicly_accessible'\n\n def get_forbidden_values(self):\n return [ANY_VALUE]\n\n\ncheck = RDSPubliclyAccessible()\n", "path": "checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py"}, {"content": "from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories\n\n\nclass EBSSnapshotEncryption(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure all data stored in the EBS Snapshot is securely encrypted\"\n id = \"CKV_AWS_4\"\n supported_resources = ['aws_ebs_snapshot']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"encrypted\"\n\n\ncheck = EBSSnapshotEncryption()\n", "path": "checkov/terraform/checks/resource/aws/EBSSnapshotEncryption.py"}]}
| 1,664 | 343 |
gh_patches_debug_29473
|
rasdani/github-patches
|
git_diff
|
ciudadanointeligente__votainteligente-portal-electoral-328
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nueva pregunta en formulario
Modificar la pregunta sobre la solución al problema y dividirla en 2. Una más general y otra más específica.
- La primera: ¿Qué debería hacer la municipalidad para solucionar el problema?”
- La segunda es “¿Qué avances concretos esperas que se logren durante el periodo del alcalde (4 años)?”
</issue>
<code>
[start of popular_proposal/forms.py]
1 # coding=utf-8
2 from django import forms
3 from popular_proposal.models import ProposalTemporaryData, ProposalLike
4 from votainteligente.send_mails import send_mail
5 from django.utils.translation import ugettext as _
6
7
8 WHEN_CHOICES = [
9 ('1_month', u'1 mes después de ingresado'),
10 ('6_months', u'6 Meses'),
11 ('1_year', u'1 año'),
12 ('2_year', u'2 años'),
13 ('3_year', u'3 años'),
14 ('4_year', u'4 años'),
15 ]
16
17 TOPIC_CHOICES =(
18 ('otros', 'Otros'),
19 (u'Básicos',(
20 (u'salud', u'Salud'),
21 (u'transporte', u'Transporte'),
22 (u'educacion', u'Educación'),
23 (u'seguridad', u'Seguridad'),
24 (u'proteccionsocial', u'Protección Social'),
25 (u'vivienda', u'Vivienda'),
26 )),
27 (u'Oportunidades',(
28 (u'trabajo', u'Trabajo'),
29 (u'emprendimiento', u'Emprendimiento'),
30 (u'capacitacion', u'Capacitación'),
31 (u'beneficiosbienestar', u'Beneficios/bienestar'),
32 )),
33 (u'Espacios comunales',(
34 (u'areasverdes', u'Áreas verdes'),
35 (u'territoriobarrio', u'Territorio/barrio'),
36 (u'obras', u'Obras'),
37 (u'turismoycomercio', u'Turismo y comercio'),
38 )),
39 (u'Mejor comuna',(
40 (u'medioambiente', u'Medio Ambiente'),
41 (u'culturayrecreacion', u'Cultura y recreación'),
42 (u'deporte', u'Deporte'),
43 (u'servicios', u'Servicios'),
44 )),
45 (u'Mejor representatividad',(
46 (u'transparencia', u'Transparencia'),
47 (u'participacionciudadana', u'Participación ciudadana'),
48 (u'genero', u'Género'),
49 (u'pueblosindigenas', u'Pueblos indígenas'),
50 (u'diversidadsexual', u'Diversidad sexual'),
51 ))
52 )
53
54 class ProposalFormBase(forms.Form):
55 problem = forms.CharField(label=_(u'Según la óptica de tu organización, describe un problema de tu comuna que \
56 quieras solucionar. líneas)'),
57 help_text=_(u'Ej: Poca participación en el Plan Regulador, falta de transparencia en el trabajo de la \
58 municipalidad, pocos puntos de reciclaje, etc.'))
59 solution = forms.CharField(label=_(u'Qué quieres que haga tu autoridad para solucionar el problema? (3 líneas)'),
60 help_text=_(u'Ejemplo: "Que se aumenten en 30% las horas de atención de la especialidad Cardiología en \
61 los Cesfam y consultorios de la comuna", "Que se publiquen todos los concejos municipales en \
62 el sitio web del municipio".'))
63 when = forms.ChoiceField(choices=WHEN_CHOICES, label=_(u'¿En qué plazo te gustaría que esté solucionado?'))
64 title = forms.CharField(label=_(u'Título corto'), help_text=_(u"Un título que nos permita describir tu propuesta\
65 ciudadana. Ej: 50% más de ciclovías para la comuna"))
66 clasification = forms.ChoiceField(choices=TOPIC_CHOICES, label=_(u'¿Cómo clasificarías tu propuesta?'))
67 allies = forms.CharField(label=_(u'¿Quiénes son tus posibles aliados?'))
68 organization = forms.CharField(label=_(u'¿Estás haciendo esta propuesta a nombre de una organización? Escribe su nombre acá:'),
69 required=False)
70
71
72 class ProposalForm(ProposalFormBase):
73 def __init__(self, *args, **kwargs):
74 self.proposer = kwargs.pop('proposer')
75 self.area = kwargs.pop('area')
76 super(ProposalForm, self).__init__(*args, **kwargs)
77
78 def save(self):
79 return ProposalTemporaryData.objects.create(proposer=self.proposer,
80 area=self.area,
81 data=self.cleaned_data)
82
83
84 class CommentsForm(forms.Form):
85 def __init__(self, *args, **kwargs):
86 self.temporary_data = kwargs.pop('temporary_data')
87 self.moderator = kwargs.pop('moderator')
88 super(CommentsForm, self).__init__(*args, **kwargs)
89 for field in self.temporary_data.comments.keys():
90 help_text = _(u'La ciudadana dijo: %s') % self.temporary_data.data.get(field, u'')
91 comments = self.temporary_data.comments[field]
92 if comments:
93 help_text += _(u' <b>Y tus comentarios fueron: %s </b>') % comments
94 self.fields[field] = forms.CharField(required=False, help_text=help_text)
95
96 def save(self, *args, **kwargs):
97 for field_key in self.cleaned_data.keys():
98 self.temporary_data.comments[field_key] = self.cleaned_data[field_key]
99 self.temporary_data.status = ProposalTemporaryData.Statuses.InTheirSide
100 self.temporary_data.save()
101 comments = {}
102 for key in self.temporary_data.data.keys():
103 if self.temporary_data.comments[key]:
104 comments[key] = {
105 'original': self.temporary_data.data[key],
106 'comments': self.temporary_data.comments[key]
107 }
108 mail_context = {
109 'area': self.temporary_data.area,
110 'temporary_data': self.temporary_data,
111 'moderator': self.moderator,
112 'comments': comments
113 }
114 send_mail(mail_context, 'popular_proposal_moderation', to=[self.temporary_data.proposer.email])
115 return self.temporary_data
116
117
118 class RejectionForm(forms.Form):
119 reason = forms.CharField()
120
121 def __init__(self, *args, **kwargs):
122 self.temporary_data = kwargs.pop('temporary_data')
123 self.moderator = kwargs.pop('moderator')
124 super(RejectionForm, self).__init__(*args, **kwargs)
125
126 def reject(self):
127 self.temporary_data.reject(self.cleaned_data['reason'])
128
129
130 class ProposalTemporaryDataUpdateForm(ProposalFormBase):
131 def __init__(self, *args, **kwargs):
132 self.proposer = kwargs.pop('proposer')
133 self.temporary_data = kwargs.pop('temporary_data')
134 super(ProposalTemporaryDataUpdateForm, self).__init__(*args, **kwargs)
135 self.initial = self.temporary_data.data
136 for comment_key in self.temporary_data.comments.keys():
137 comment = self.temporary_data.comments[comment_key]
138 if comment:
139 self.fields[comment_key].help_text += _(' <b>Commentarios: %s </b>') % (comment)
140
141 def save(self):
142 self.temporary_data.data = self.cleaned_data
143 self.temporary_data.status = ProposalTemporaryData.Statuses.InOurSide
144 self.temporary_data.save()
145 return self.temporary_data
146
147
148 class SubscriptionForm(forms.Form):
149 def __init__(self, *args, **kwargs):
150 self.user = kwargs.pop('user')
151 self.proposal = kwargs.pop('proposal')
152 super(SubscriptionForm, self).__init__(*args, **kwargs)
153
154 def subscribe(self):
155 like = ProposalLike.objects.create(user=self.user,
156 proposal=self.proposal)
157 return like
158
159
[end of popular_proposal/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/popular_proposal/forms.py b/popular_proposal/forms.py
--- a/popular_proposal/forms.py
+++ b/popular_proposal/forms.py
@@ -56,10 +56,11 @@
quieras solucionar. líneas)'),
help_text=_(u'Ej: Poca participación en el Plan Regulador, falta de transparencia en el trabajo de la \
municipalidad, pocos puntos de reciclaje, etc.'))
- solution = forms.CharField(label=_(u'Qué quieres que haga tu autoridad para solucionar el problema? (3 líneas)'),
- help_text=_(u'Ejemplo: "Que se aumenten en 30% las horas de atención de la especialidad Cardiología en \
- los Cesfam y consultorios de la comuna", "Que se publiquen todos los concejos municipales en \
+ solution = forms.CharField(label=_(u'¿Qué debería hacer la municipalidad para solucionar el problema? (3 líneas)'),
+ help_text=_(u'Ejemplo: "Crear una ciclovia que circunvale Valdivia", "Que se publiquen todos los concejos municipales en \
el sitio web del municipio".'))
+ solution_at_the_end = forms.CharField(label=u"¿Qué avances concretos esperas que se logren durante el periodo del alcalde (4 años)?",
+ help_text=_(u'Ejemplo: "Aumentar en un 20% la cantidad de ciclovías en la ciudad"'), required=False)
when = forms.ChoiceField(choices=WHEN_CHOICES, label=_(u'¿En qué plazo te gustaría que esté solucionado?'))
title = forms.CharField(label=_(u'Título corto'), help_text=_(u"Un título que nos permita describir tu propuesta\
ciudadana. Ej: 50% más de ciclovías para la comuna"))
|
{"golden_diff": "diff --git a/popular_proposal/forms.py b/popular_proposal/forms.py\n--- a/popular_proposal/forms.py\n+++ b/popular_proposal/forms.py\n@@ -56,10 +56,11 @@\n quieras solucionar. l\u00edneas)'),\n help_text=_(u'Ej: Poca participaci\u00f3n en el Plan Regulador, falta de transparencia en el trabajo de la \\\n municipalidad, pocos puntos de reciclaje, etc.'))\n- solution = forms.CharField(label=_(u'Qu\u00e9 quieres que haga tu autoridad para solucionar el problema? (3 l\u00edneas)'),\n- help_text=_(u'Ejemplo: \"Que se aumenten en 30% las horas de atenci\u00f3n de la especialidad Cardiolog\u00eda en \\\n- los Cesfam y consultorios de la comuna\", \"Que se publiquen todos los concejos municipales en \\\n+ solution = forms.CharField(label=_(u'\u00bfQu\u00e9 deber\u00eda hacer la municipalidad para solucionar el problema? (3 l\u00edneas)'),\n+ help_text=_(u'Ejemplo: \"Crear una ciclovia que circunvale Valdivia\", \"Que se publiquen todos los concejos municipales en \\\n el sitio web del municipio\".'))\n+ solution_at_the_end = forms.CharField(label=u\"\u00bfQu\u00e9 avances concretos esperas que se logren durante el periodo del alcalde (4 a\u00f1os)?\",\n+ help_text=_(u'Ejemplo: \"Aumentar en un 20% la cantidad de ciclov\u00edas en la ciudad\"'), required=False)\n when = forms.ChoiceField(choices=WHEN_CHOICES, label=_(u'\u00bfEn qu\u00e9 plazo te gustar\u00eda que est\u00e9 solucionado?'))\n title = forms.CharField(label=_(u'T\u00edtulo corto'), help_text=_(u\"Un t\u00edtulo que nos permita describir tu propuesta\\\n ciudadana. Ej: 50% m\u00e1s de ciclov\u00edas para la comuna\"))\n", "issue": "nueva pregunta en formulario\nModificar la pregunta sobre la soluci\u00f3n al problema y dividirla en 2. Una m\u00e1s general y otra m\u00e1s espec\u00edfica. 
\n- La primera: \u00bfQu\u00e9 deber\u00eda hacer la municipalidad para solucionar el problema?\u201d\n- La segunda es \u201c\u00bfQu\u00e9 avances concretos esperas que se logren durante el periodo del alcalde (4 a\u00f1os)?\u201d\n\n", "before_files": [{"content": "# coding=utf-8\nfrom django import forms\nfrom popular_proposal.models import ProposalTemporaryData, ProposalLike\nfrom votainteligente.send_mails import send_mail\nfrom django.utils.translation import ugettext as _\n\n\nWHEN_CHOICES = [\n ('1_month', u'1 mes despu\u00e9s de ingresado'),\n ('6_months', u'6 Meses'),\n ('1_year', u'1 a\u00f1o'),\n ('2_year', u'2 a\u00f1os'),\n ('3_year', u'3 a\u00f1os'),\n ('4_year', u'4 a\u00f1os'),\n]\n\nTOPIC_CHOICES =(\n ('otros', 'Otros'),\n (u'B\u00e1sicos',(\n (u'salud', u'Salud'),\n (u'transporte', u'Transporte'),\n (u'educacion', u'Educaci\u00f3n'),\n (u'seguridad', u'Seguridad'),\n (u'proteccionsocial', u'Protecci\u00f3n Social'),\n (u'vivienda', u'Vivienda'),\n )),\n (u'Oportunidades',(\n (u'trabajo', u'Trabajo'),\n (u'emprendimiento', u'Emprendimiento'),\n (u'capacitacion', u'Capacitaci\u00f3n'),\n (u'beneficiosbienestar', u'Beneficios/bienestar'),\n )),\n (u'Espacios comunales',(\n (u'areasverdes', u'\u00c1reas verdes'),\n (u'territoriobarrio', u'Territorio/barrio'),\n (u'obras', u'Obras'),\n (u'turismoycomercio', u'Turismo y comercio'),\n )),\n (u'Mejor comuna',(\n (u'medioambiente', u'Medio Ambiente'),\n (u'culturayrecreacion', u'Cultura y recreaci\u00f3n'),\n (u'deporte', u'Deporte'),\n (u'servicios', u'Servicios'),\n )),\n (u'Mejor representatividad',(\n (u'transparencia', u'Transparencia'),\n (u'participacionciudadana', u'Participaci\u00f3n ciudadana'),\n (u'genero', u'G\u00e9nero'),\n (u'pueblosindigenas', u'Pueblos ind\u00edgenas'),\n (u'diversidadsexual', u'Diversidad sexual'),\n ))\n)\n\nclass ProposalFormBase(forms.Form):\n problem = forms.CharField(label=_(u'Seg\u00fan la \u00f3ptica de tu organizaci\u00f3n, describe un problema de tu comuna que \\\n quieras solucionar. l\u00edneas)'),\n help_text=_(u'Ej: Poca participaci\u00f3n en el Plan Regulador, falta de transparencia en el trabajo de la \\\n municipalidad, pocos puntos de reciclaje, etc.'))\n solution = forms.CharField(label=_(u'Qu\u00e9 quieres que haga tu autoridad para solucionar el problema? (3 l\u00edneas)'),\n help_text=_(u'Ejemplo: \"Que se aumenten en 30% las horas de atenci\u00f3n de la especialidad Cardiolog\u00eda en \\\n los Cesfam y consultorios de la comuna\", \"Que se publiquen todos los concejos municipales en \\\n el sitio web del municipio\".'))\n when = forms.ChoiceField(choices=WHEN_CHOICES, label=_(u'\u00bfEn qu\u00e9 plazo te gustar\u00eda que est\u00e9 solucionado?'))\n title = forms.CharField(label=_(u'T\u00edtulo corto'), help_text=_(u\"Un t\u00edtulo que nos permita describir tu propuesta\\\n ciudadana. Ej: 50% m\u00e1s de ciclov\u00edas para la comuna\"))\n clasification = forms.ChoiceField(choices=TOPIC_CHOICES, label=_(u'\u00bfC\u00f3mo clasificar\u00edas tu propuesta?'))\n allies = forms.CharField(label=_(u'\u00bfQui\u00e9nes son tus posibles aliados?'))\n organization = forms.CharField(label=_(u'\u00bfEst\u00e1s haciendo esta propuesta a nombre de una organizaci\u00f3n? 
Escribe su nombre ac\u00e1:'),\n required=False)\n\n\nclass ProposalForm(ProposalFormBase):\n def __init__(self, *args, **kwargs):\n self.proposer = kwargs.pop('proposer')\n self.area = kwargs.pop('area')\n super(ProposalForm, self).__init__(*args, **kwargs)\n\n def save(self):\n return ProposalTemporaryData.objects.create(proposer=self.proposer,\n area=self.area,\n data=self.cleaned_data)\n\n\nclass CommentsForm(forms.Form):\n def __init__(self, *args, **kwargs):\n self.temporary_data = kwargs.pop('temporary_data')\n self.moderator = kwargs.pop('moderator')\n super(CommentsForm, self).__init__(*args, **kwargs)\n for field in self.temporary_data.comments.keys():\n help_text = _(u'La ciudadana dijo: %s') % self.temporary_data.data.get(field, u'')\n comments = self.temporary_data.comments[field]\n if comments:\n help_text += _(u' <b>Y tus comentarios fueron: %s </b>') % comments\n self.fields[field] = forms.CharField(required=False, help_text=help_text)\n\n def save(self, *args, **kwargs):\n for field_key in self.cleaned_data.keys():\n self.temporary_data.comments[field_key] = self.cleaned_data[field_key]\n self.temporary_data.status = ProposalTemporaryData.Statuses.InTheirSide\n self.temporary_data.save()\n comments = {}\n for key in self.temporary_data.data.keys():\n if self.temporary_data.comments[key]:\n comments[key] = {\n 'original': self.temporary_data.data[key],\n 'comments': self.temporary_data.comments[key]\n }\n mail_context = {\n 'area': self.temporary_data.area,\n 'temporary_data': self.temporary_data,\n 'moderator': self.moderator,\n 'comments': comments\n }\n send_mail(mail_context, 'popular_proposal_moderation', to=[self.temporary_data.proposer.email])\n return self.temporary_data\n\n\nclass RejectionForm(forms.Form):\n reason = forms.CharField()\n\n def __init__(self, *args, **kwargs):\n self.temporary_data = kwargs.pop('temporary_data')\n self.moderator = kwargs.pop('moderator')\n super(RejectionForm, self).__init__(*args, **kwargs)\n\n def reject(self):\n self.temporary_data.reject(self.cleaned_data['reason'])\n\n\nclass ProposalTemporaryDataUpdateForm(ProposalFormBase):\n def __init__(self, *args, **kwargs):\n self.proposer = kwargs.pop('proposer')\n self.temporary_data = kwargs.pop('temporary_data')\n super(ProposalTemporaryDataUpdateForm, self).__init__(*args, **kwargs)\n self.initial = self.temporary_data.data\n for comment_key in self.temporary_data.comments.keys():\n comment = self.temporary_data.comments[comment_key]\n if comment:\n self.fields[comment_key].help_text += _(' <b>Commentarios: %s </b>') % (comment)\n\n def save(self):\n self.temporary_data.data = self.cleaned_data\n self.temporary_data.status = ProposalTemporaryData.Statuses.InOurSide\n self.temporary_data.save()\n return self.temporary_data\n\n\nclass SubscriptionForm(forms.Form):\n def __init__(self, *args, **kwargs):\n self.user = kwargs.pop('user')\n self.proposal = kwargs.pop('proposal')\n super(SubscriptionForm, self).__init__(*args, **kwargs)\n\n def subscribe(self):\n like = ProposalLike.objects.create(user=self.user,\n proposal=self.proposal)\n return like\n\n", "path": "popular_proposal/forms.py"}]}
| 2,656 | 435 |
gh_patches_debug_11362
|
rasdani/github-patches
|
git_diff
|
google-deepmind__dm-haiku-540
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Constant folding on `hk.avg_pool` with `padding='SAME'` on large arrays
Hi, lately my model's compilation time has increased due to constant folding, and I discovered that the issue comes from using `padding='SAME'` in `hk.avg_pool`.
I tried to create some dummy code to showcase the issue (at the bottom), but the main point is that when `hk.avg_pool` uses `padding='VALID'` I get this output:
```
Wall time [Initialisation]: 0.112s
Wall time [Compilation]: 1.33s
Wall time [Train step]: 0.0024s
```
but when I use `padding='SAME'`
```
Wall time [Initialisation]: 0.119s
2022-10-12 16:01:47.590034: E external/org_tensorflow/tensorflow/compiler/xla/service/slow_operation_alarm.cc:65] Constant folding an instruction is taking > 1s:
reduce-window.16 (displaying the full instruction incurs a runtime overhead. Raise your logging level to 4 or above).
This isn't necessarily a bug; constant-folding is inherently a trade-off between compilation time and speed at runtime. XLA has some guards that attempt to keep constant folding from taking too long, but fundamentally you'll always be able to come up with an input program that takes a long time.
If you'd like to file a bug, run with envvar XLA_FLAGS=--xla_dump_to=/tmp/foo and attach the results.
2022-10-12 16:02:05.867327: E external/org_tensorflow/tensorflow/compiler/xla/service/slow_operation_alarm.cc:133] The operation took 19.277362298s
Constant folding an instruction is taking > 1s:
reduce-window.16 (displaying the full instruction incurs a runtime overhead. Raise your logging level to 4 or above).
This isn't necessarily a bug; constant-folding is inherently a trade-off between compilation time and speed at runtime. XLA has some guards that attempt to keep constant folding from taking too long, but fundamentally you'll always be able to come up with an input program that takes a long time.
If you'd like to file a bug, run with envvar XLA_FLAGS=--xla_dump_to=/tmp/foo and attach the results.
Wall time [Compilation]: 21.8s
Wall time [Train step]: 0.00221s
```
with `21.8s` of compilation!
The code:
```python
import haiku as hk
import jax
import contextlib
import timeit
@contextlib.contextmanager
def time_eval(task):
start = timeit.default_timer()
try:
yield
finally:
end = timeit.default_timer()
print(f'Wall time [{task}]: {(end - start):.3}s')
class Model(hk.Module):
def __call__(self, x):
x = hk.avg_pool(x, window_shape=(1,3,3,1), strides=(1,2,2,1), padding='SAME')
x = hk.Conv2D(32, 4, 2)(x)
x = jax.nn.relu(x)
return x
def forward(x):
return Model()(x)
forward = hk.without_apply_rng(hk.transform(forward))
rng = hk.PRNGSequence(jax.random.PRNGKey(42))
x = jax.random.uniform(next(rng), ([128, 512, 512, 3]))
with time_eval('Initialisation'):
params = jax.jit(forward.init)(next(rng), x)
forward_apply = jax.jit(forward.apply)
with time_eval('Compilation'):
logits = forward_apply(params, x).block_until_ready()
with time_eval('Train step'):
logits = forward_apply(params, x).block_until_ready()
```
</issue>
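A plausible source of the folding, judging from the `avg_pool` implementation included below: with `padding='SAME'`, a second `reduce_window` runs over `jnp.ones_like(value)` — a large concrete constant — and XLA tries to fold that whole expression at compile time. A stripped-down sketch of the expression in question (illustrative shapes, smaller than the report's input):

```python
# Illustrative shapes, smaller than the (128, 512, 512, 3) input in the report.
import jax.numpy as jnp
from jax import lax

value_shape = (8, 64, 64, 3)
window_shape = (1, 3, 3, 1)
strides = (1, 2, 2, 1)

# Under padding='SAME', avg_pool also reduces an all-ones array of the full
# input shape to count valid window entries; this whole expression is a
# compile-time constant, which is what XLA ends up folding.
window_counts = lax.reduce_window(jnp.ones(value_shape), 0., lax.add,
                                  window_shape, strides, "SAME")
print(window_counts.shape)  # (8, 32, 32, 3)
```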
<code>
[start of haiku/_src/pool.py]
1 # Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Pooling Haiku modules."""
16
17 import types
18 from typing import Optional, Sequence, Tuple, Union
19 import warnings
20
21 from haiku._src import module
22 from jax import lax
23 import jax.numpy as jnp
24 import numpy as np
25
26 # If you are forking replace this block with `import haiku as hk`.
27 hk = types.ModuleType("haiku")
28 hk.Module = module.Module
29 del module
30
31
32 def _infer_shape(
33 x: jnp.ndarray,
34 size: Union[int, Sequence[int]],
35 channel_axis: Optional[int] = -1,
36 ) -> Tuple[int, ...]:
37 """Infer shape for pooling window or strides."""
38 if isinstance(size, int):
39 if channel_axis and not 0 <= abs(channel_axis) < x.ndim:
40 raise ValueError(f"Invalid channel axis {channel_axis} for {x.shape}")
41 if channel_axis and channel_axis < 0:
42 channel_axis = x.ndim + channel_axis
43 return (1,) + tuple(size if d != channel_axis else 1
44 for d in range(1, x.ndim))
45 elif len(size) < x.ndim:
46 # Assume additional dimensions are batch dimensions.
47 return (1,) * (x.ndim - len(size)) + tuple(size)
48 else:
49 assert x.ndim == len(size)
50 return tuple(size)
51
52
53 _VMAP_SHAPE_INFERENCE_WARNING = (
54 "When running under vmap, passing an `int` (except for `1`) for "
55 "`window_shape` or `strides` will result in the wrong shape being inferred "
56 "because the batch dimension is not visible to Haiku. Please update your "
57 "code to specify a full unbatched size. "
58 ""
59 "For example if you had `pool(x, window_shape=3, strides=1)` before, you "
60 "should now pass `pool(x, window_shape=(3, 3, 1), strides=1)`. "
61 ""
62 "Haiku will assume that any additional dimensions in your input are "
63 "batch dimensions, and will pad `window_shape` and `strides` accordingly "
64 "making your module support both batched and per-example inputs."
65 )
66
67
68 def _warn_if_unsafe(window_shape, strides):
69 unsafe = lambda size: isinstance(size, int) and size != 1
70 if unsafe(window_shape) or unsafe(strides):
71 warnings.warn(_VMAP_SHAPE_INFERENCE_WARNING, DeprecationWarning)
72
73
74 def max_pool(
75 value: jnp.ndarray,
76 window_shape: Union[int, Sequence[int]],
77 strides: Union[int, Sequence[int]],
78 padding: str,
79 channel_axis: Optional[int] = -1,
80 ) -> jnp.ndarray:
81 """Max pool.
82
83 Args:
84 value: Value to pool.
85 window_shape: Shape of the pooling window, an int or same rank as value.
86 strides: Strides of the pooling window, an int or same rank as value.
87 padding: Padding algorithm. Either ``VALID`` or ``SAME``.
88 channel_axis: Axis of the spatial channels for which pooling is skipped,
89 used to infer ``window_shape`` or ``strides`` if they are an integer.
90
91 Returns:
92 Pooled result. Same rank as value.
93 """
94 if padding not in ("SAME", "VALID"):
95 raise ValueError(f"Invalid padding '{padding}', must be 'SAME' or 'VALID'.")
96
97 _warn_if_unsafe(window_shape, strides)
98 window_shape = _infer_shape(value, window_shape, channel_axis)
99 strides = _infer_shape(value, strides, channel_axis)
100
101 return lax.reduce_window(value, -jnp.inf, lax.max, window_shape, strides,
102 padding)
103
104
105 def avg_pool(
106 value: jnp.ndarray,
107 window_shape: Union[int, Sequence[int]],
108 strides: Union[int, Sequence[int]],
109 padding: str,
110 channel_axis: Optional[int] = -1,
111 ) -> jnp.ndarray:
112 """Average pool.
113
114 Args:
115 value: Value to pool.
116 window_shape: Shape of the pooling window, an int or same rank as value.
117 strides: Strides of the pooling window, an int or same rank as value.
118 padding: Padding algorithm. Either ``VALID`` or ``SAME``.
119 channel_axis: Axis of the spatial channels for which pooling is skipped,
120 used to infer ``window_shape`` or ``strides`` if they are an integer.
121
122 Returns:
123 Pooled result. Same rank as value.
124
125 Raises:
126 ValueError: If the padding is not valid.
127 """
128 if padding not in ("SAME", "VALID"):
129 raise ValueError(f"Invalid padding '{padding}', must be 'SAME' or 'VALID'.")
130
131 _warn_if_unsafe(window_shape, strides)
132 window_shape = _infer_shape(value, window_shape, channel_axis)
133 strides = _infer_shape(value, strides, channel_axis)
134
135 reduce_window_args = (0., lax.add, window_shape, strides, padding)
136 pooled = lax.reduce_window(value, *reduce_window_args)
137 if padding == "VALID":
138 # Avoid the extra reduce_window.
139 return pooled / np.prod(window_shape)
140 else:
141 # Count the number of valid entries at each input point, then use that for
142 # computing average. Assumes that any two arrays of same shape will be
143 # padded the same.
144 window_counts = lax.reduce_window(jnp.ones_like(value), *reduce_window_args)
145 assert pooled.shape == window_counts.shape
146 return pooled / window_counts
147
148
149 class MaxPool(hk.Module):
150 """Max pool.
151
152 Equivalent to partial application of :func:`max_pool`.
153 """
154
155 def __init__(
156 self,
157 window_shape: Union[int, Sequence[int]],
158 strides: Union[int, Sequence[int]],
159 padding: str,
160 channel_axis: Optional[int] = -1,
161 name: Optional[str] = None,
162 ):
163 """Max pool.
164
165 Args:
166 window_shape: Shape of window to pool over. Same rank as value or ``int``.
167 strides: Strides for the window. Same rank as value or ``int``.
168 padding: Padding algorithm. Either ``VALID`` or ``SAME``.
169 channel_axis: Axis of the spatial channels for which pooling is skipped.
170 name: String name for the module.
171 """
172 super().__init__(name=name)
173 self.window_shape = window_shape
174 self.strides = strides
175 self.padding = padding
176 self.channel_axis = channel_axis
177
178 def __call__(self, value: jnp.ndarray) -> jnp.ndarray:
179 return max_pool(value, self.window_shape, self.strides,
180 self.padding, self.channel_axis)
181
182
183 class AvgPool(hk.Module):
184 """Average pool.
185
186 Equivalent to partial application of :func:`avg_pool`.
187 """
188
189 def __init__(
190 self,
191 window_shape: Union[int, Sequence[int]],
192 strides: Union[int, Sequence[int]],
193 padding: str,
194 channel_axis: Optional[int] = -1,
195 name: Optional[str] = None,
196 ):
197 """Average pool.
198
199 Args:
200 window_shape: Shape of window to pool over. Same rank as value or ``int``.
201 strides: Strides for the window. Same rank as value or ``int``.
202 padding: Padding algorithm. Either ``VALID`` or ``SAME``.
203 channel_axis: Axis of the spatial channels for which pooling is skipped.
204 name: String name for the module.
205 """
206 super().__init__(name=name)
207 self.window_shape = window_shape
208 self.strides = strides
209 self.padding = padding
210 self.channel_axis = channel_axis
211
212 def __call__(self, value: jnp.ndarray) -> jnp.ndarray:
213 return avg_pool(value, self.window_shape, self.strides,
214 self.padding, self.channel_axis)
215
[end of haiku/_src/pool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/haiku/_src/pool.py b/haiku/_src/pool.py
--- a/haiku/_src/pool.py
+++ b/haiku/_src/pool.py
@@ -140,9 +140,9 @@
else:
# Count the number of valid entries at each input point, then use that for
# computing average. Assumes that any two arrays of same shape will be
- # padded the same.
- window_counts = lax.reduce_window(jnp.ones_like(value), *reduce_window_args)
- assert pooled.shape == window_counts.shape
+ # padded the same. Avoid broadcasting on axis where pooling is skipped.
+ _shape = tuple(vd if wd!=1 else 1 for (vd, wd) in zip(value.shape, window_shape))
+ window_counts = lax.reduce_window(jnp.ones(_shape), *reduce_window_args)
return pooled / window_counts
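The idea behind this patch, as a minimal standalone sketch (illustrative names and shapes, not the actual Haiku source): collapse every axis whose window size is 1, so the constant handed to the second `reduce_window` stays tiny, and let broadcasting restore the full shape in the division.

```python
import jax.numpy as jnp
from jax import lax

def window_counts(value_shape, window_shape, strides, padding):
    # Collapse axes where the pooling window is 1 (batch and channels here),
    # mirroring the `_shape` computed in the patch.
    small = tuple(v if w != 1 else 1 for v, w in zip(value_shape, window_shape))
    return lax.reduce_window(jnp.ones(small), 0., lax.add,
                             window_shape, strides, padding)

counts = window_counts((8, 64, 64, 3), (1, 3, 3, 1), (1, 2, 2, 1), "SAME")
print(counts.shape)  # (1, 32, 32, 1), broadcasts against the pooled (8, 32, 32, 3) array
```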
|
{"golden_diff": "diff --git a/haiku/_src/pool.py b/haiku/_src/pool.py\n--- a/haiku/_src/pool.py\n+++ b/haiku/_src/pool.py\n@@ -140,9 +140,9 @@\n else:\n # Count the number of valid entries at each input point, then use that for\n # computing average. Assumes that any two arrays of same shape will be\n- # padded the same.\n- window_counts = lax.reduce_window(jnp.ones_like(value), *reduce_window_args)\n- assert pooled.shape == window_counts.shape\n+ # padded the same. Avoid broadcasting on axis where pooling is skipped. \n+ _shape = tuple(vd if wd!=1 else 1 for (vd, wd) in zip(value.shape, window_shape))\n+ window_counts = lax.reduce_window(jnp.ones(_shape), *reduce_window_args)\n return pooled / window_counts\n", "issue": "Constant folding on `hk.avg_pool` with `padding='SAME'` on large arrays\nHi, lately my model has increased its compilation time due to constant folding and I discovered that the issue comes from using `padding='SAME'` in `hk.avg_pool`.\r\n\r\nI tried to create a dummy code to showcase the issue (at the bottom), but the main point is that when `hk.avg_pool` uses `padding='VALID'` I get these output.\r\n\r\n```\r\nWall time [Initialisation]: 0.112s\r\nWall time [Compilation]: 1.33s\r\nWall time [Train step]: 0.0024s\r\n```\r\n\r\nbut when I use `padding='SAME'`\r\n```\r\nWall time [Initialisation]: 0.119s\r\n2022-10-12 16:01:47.590034: E external/org_tensorflow/tensorflow/compiler/xla/service/slow_operation_alarm.cc:65] Constant folding an instruction is taking > 1s:\r\n\r\n reduce-window.16 (displaying the full instruction incurs a runtime overhead. Raise your logging level to 4 or above).\r\n\r\nThis isn't necessarily a bug; constant-folding is inherently a trade-off between compilation time and speed at runtime. XLA has some guards that attempt to keep constant folding from taking too long, but fundamentally you'll always be able to come up with an input program that takes a long time.\r\n\r\nIf you'd like to file a bug, run with envvar XLA_FLAGS=--xla_dump_to=/tmp/foo and attach the results.\r\n2022-10-12 16:02:05.867327: E external/org_tensorflow/tensorflow/compiler/xla/service/slow_operation_alarm.cc:133] The operation took 19.277362298s\r\nConstant folding an instruction is taking > 1s:\r\n\r\n reduce-window.16 (displaying the full instruction incurs a runtime overhead. Raise your logging level to 4 or above).\r\n\r\nThis isn't necessarily a bug; constant-folding is inherently a trade-off between compilation time and speed at runtime. 
XLA has some guards that attempt to keep constant folding from taking too long, but fundamentally you'll always be able to come up with an input program that takes a long time.\r\n\r\nIf you'd like to file a bug, run with envvar XLA_FLAGS=--xla_dump_to=/tmp/foo and attach the results.\r\nWall time [Compilation]: 21.8s\r\nWall time [Train step]: 0.00221s\r\n```\r\nwith `21.8s` of compilation!\r\n\r\nThe code:\r\n```python\r\nimport haiku as hk\r\nimport jax\r\nimport contextlib\r\nimport timeit\r\n\r\[email protected]\r\ndef time_eval(task):\r\n start = timeit.default_timer()\r\n try:\r\n yield\r\n finally:\r\n end = timeit.default_timer()\r\n print(f'Wall time [{task}]: {(end - start):.3}s')\r\n\r\nclass Model(hk.Module):\r\n def __call__(self, x):\r\n x = hk.avg_pool(x, window_shape=(1,3,3,1), strides=(1,2,2,1), padding='SAME')\r\n x = hk.Conv2D(32, 4, 2)(x)\r\n x = jax.nn.relu(x)\r\n return x\r\n\r\ndef forward(x):\r\n return Model()(x)\r\n\r\nforward = hk.without_apply_rng(hk.transform(forward))\r\n\r\nrng = hk.PRNGSequence(jax.random.PRNGKey(42))\r\nx = jax.random.uniform(next(rng), ([128, 512, 512, 3]))\r\n\r\nwith time_eval('Initialisation'):\r\n params = jax.jit(forward.init)(next(rng), x)\r\n\r\nforward_apply = jax.jit(forward.apply)\r\nwith time_eval('Compilation'):\r\n logits = forward_apply(params, x).block_until_ready()\r\n\r\nwith time_eval('Train step'):\r\n logits = forward_apply(params, x).block_until_ready()\r\n\r\n```\n", "before_files": [{"content": "# Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Pooling Haiku modules.\"\"\"\n\nimport types\nfrom typing import Optional, Sequence, Tuple, Union\nimport warnings\n\nfrom haiku._src import module\nfrom jax import lax\nimport jax.numpy as jnp\nimport numpy as np\n\n# If you are forking replace this block with `import haiku as hk`.\nhk = types.ModuleType(\"haiku\")\nhk.Module = module.Module\ndel module\n\n\ndef _infer_shape(\n x: jnp.ndarray,\n size: Union[int, Sequence[int]],\n channel_axis: Optional[int] = -1,\n) -> Tuple[int, ...]:\n \"\"\"Infer shape for pooling window or strides.\"\"\"\n if isinstance(size, int):\n if channel_axis and not 0 <= abs(channel_axis) < x.ndim:\n raise ValueError(f\"Invalid channel axis {channel_axis} for {x.shape}\")\n if channel_axis and channel_axis < 0:\n channel_axis = x.ndim + channel_axis\n return (1,) + tuple(size if d != channel_axis else 1\n for d in range(1, x.ndim))\n elif len(size) < x.ndim:\n # Assume additional dimensions are batch dimensions.\n return (1,) * (x.ndim - len(size)) + tuple(size)\n else:\n assert x.ndim == len(size)\n return tuple(size)\n\n\n_VMAP_SHAPE_INFERENCE_WARNING = (\n \"When running under vmap, passing an `int` (except for `1`) for \"\n \"`window_shape` or `strides` will result in the wrong shape being inferred \"\n \"because the batch dimension is not visible to Haiku. 
Please update your \"\n \"code to specify a full unbatched size. \"\n \"\"\n \"For example if you had `pool(x, window_shape=3, strides=1)` before, you \"\n \"should now pass `pool(x, window_shape=(3, 3, 1), strides=1)`. \"\n \"\"\n \"Haiku will assume that any additional dimensions in your input are \"\n \"batch dimensions, and will pad `window_shape` and `strides` accordingly \"\n \"making your module support both batched and per-example inputs.\"\n)\n\n\ndef _warn_if_unsafe(window_shape, strides):\n unsafe = lambda size: isinstance(size, int) and size != 1\n if unsafe(window_shape) or unsafe(strides):\n warnings.warn(_VMAP_SHAPE_INFERENCE_WARNING, DeprecationWarning)\n\n\ndef max_pool(\n value: jnp.ndarray,\n window_shape: Union[int, Sequence[int]],\n strides: Union[int, Sequence[int]],\n padding: str,\n channel_axis: Optional[int] = -1,\n) -> jnp.ndarray:\n \"\"\"Max pool.\n\n Args:\n value: Value to pool.\n window_shape: Shape of the pooling window, an int or same rank as value.\n strides: Strides of the pooling window, an int or same rank as value.\n padding: Padding algorithm. Either ``VALID`` or ``SAME``.\n channel_axis: Axis of the spatial channels for which pooling is skipped,\n used to infer ``window_shape`` or ``strides`` if they are an integer.\n\n Returns:\n Pooled result. Same rank as value.\n \"\"\"\n if padding not in (\"SAME\", \"VALID\"):\n raise ValueError(f\"Invalid padding '{padding}', must be 'SAME' or 'VALID'.\")\n\n _warn_if_unsafe(window_shape, strides)\n window_shape = _infer_shape(value, window_shape, channel_axis)\n strides = _infer_shape(value, strides, channel_axis)\n\n return lax.reduce_window(value, -jnp.inf, lax.max, window_shape, strides,\n padding)\n\n\ndef avg_pool(\n value: jnp.ndarray,\n window_shape: Union[int, Sequence[int]],\n strides: Union[int, Sequence[int]],\n padding: str,\n channel_axis: Optional[int] = -1,\n) -> jnp.ndarray:\n \"\"\"Average pool.\n\n Args:\n value: Value to pool.\n window_shape: Shape of the pooling window, an int or same rank as value.\n strides: Strides of the pooling window, an int or same rank as value.\n padding: Padding algorithm. Either ``VALID`` or ``SAME``.\n channel_axis: Axis of the spatial channels for which pooling is skipped,\n used to infer ``window_shape`` or ``strides`` if they are an integer.\n\n Returns:\n Pooled result. Same rank as value.\n\n Raises:\n ValueError: If the padding is not valid.\n \"\"\"\n if padding not in (\"SAME\", \"VALID\"):\n raise ValueError(f\"Invalid padding '{padding}', must be 'SAME' or 'VALID'.\")\n\n _warn_if_unsafe(window_shape, strides)\n window_shape = _infer_shape(value, window_shape, channel_axis)\n strides = _infer_shape(value, strides, channel_axis)\n\n reduce_window_args = (0., lax.add, window_shape, strides, padding)\n pooled = lax.reduce_window(value, *reduce_window_args)\n if padding == \"VALID\":\n # Avoid the extra reduce_window.\n return pooled / np.prod(window_shape)\n else:\n # Count the number of valid entries at each input point, then use that for\n # computing average. 
Assumes that any two arrays of same shape will be\n # padded the same.\n window_counts = lax.reduce_window(jnp.ones_like(value), *reduce_window_args)\n assert pooled.shape == window_counts.shape\n return pooled / window_counts\n\n\nclass MaxPool(hk.Module):\n \"\"\"Max pool.\n\n Equivalent to partial application of :func:`max_pool`.\n \"\"\"\n\n def __init__(\n self,\n window_shape: Union[int, Sequence[int]],\n strides: Union[int, Sequence[int]],\n padding: str,\n channel_axis: Optional[int] = -1,\n name: Optional[str] = None,\n ):\n \"\"\"Max pool.\n\n Args:\n window_shape: Shape of window to pool over. Same rank as value or ``int``.\n strides: Strides for the window. Same rank as value or ``int``.\n padding: Padding algorithm. Either ``VALID`` or ``SAME``.\n channel_axis: Axis of the spatial channels for which pooling is skipped.\n name: String name for the module.\n \"\"\"\n super().__init__(name=name)\n self.window_shape = window_shape\n self.strides = strides\n self.padding = padding\n self.channel_axis = channel_axis\n\n def __call__(self, value: jnp.ndarray) -> jnp.ndarray:\n return max_pool(value, self.window_shape, self.strides,\n self.padding, self.channel_axis)\n\n\nclass AvgPool(hk.Module):\n \"\"\"Average pool.\n\n Equivalent to partial application of :func:`avg_pool`.\n \"\"\"\n\n def __init__(\n self,\n window_shape: Union[int, Sequence[int]],\n strides: Union[int, Sequence[int]],\n padding: str,\n channel_axis: Optional[int] = -1,\n name: Optional[str] = None,\n ):\n \"\"\"Average pool.\n\n Args:\n window_shape: Shape of window to pool over. Same rank as value or ``int``.\n strides: Strides for the window. Same rank as value or ``int``.\n padding: Padding algorithm. Either ``VALID`` or ``SAME``.\n channel_axis: Axis of the spatial channels for which pooling is skipped.\n name: String name for the module.\n \"\"\"\n super().__init__(name=name)\n self.window_shape = window_shape\n self.strides = strides\n self.padding = padding\n self.channel_axis = channel_axis\n\n def __call__(self, value: jnp.ndarray) -> jnp.ndarray:\n return avg_pool(value, self.window_shape, self.strides,\n self.padding, self.channel_axis)\n", "path": "haiku/_src/pool.py"}]}
| 3,776 | 203 |
gh_patches_debug_47343
|
rasdani/github-patches
|
git_diff
|
enthought__chaco-731
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ArrayDataSource get_data_mask() fails when data is None
See this test here:
https://github.com/enthought/chaco/blob/enh/data-source-tests/chaco/tests/arraydatasource_test_case.py#L108
More generally, I think that the behaviour for an empty data source is probably wrong (why a _scalar_ `0.0` instead of `array([])`?) but I'm not sure what will break if that is changed.
</issue>
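The failing path is visible in `get_data_mask()` below: with `_data` set to `None` and no cached mask, it calls `len(self._data)`. A stripped-down reproduction (hypothetical setup, outside of chaco):

```python
import numpy as np

_data = None          # a data source whose data was never set (hypothetical state)
_cached_mask = None

if _cached_mask is None:
    mask = np.ones(len(_data), dtype=bool)   # TypeError: object of type 'NoneType' has no len()
```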
<code>
[start of chaco/array_data_source.py]
1 """ Defines the ArrayDataSource class."""
2
3 # Major library imports
4 from numpy import array, empty, isfinite, ones, ndarray
5 import numpy as np
6
7 # Enthought library imports
8 from traits.api import Any, Constant, Int, Tuple
9
10 # Chaco imports
11 from .base import NumericalSequenceTrait, reverse_map_1d, SortOrderTrait
12 from .abstract_data_source import AbstractDataSource
13
14
15 def bounded_nanargmin(arr):
16 """Find the index of the minimum value, ignoring NaNs.
17
18 If all NaNs, return 0.
19 """
20 # Different versions of numpy behave differently in the all-NaN case, so we
21 # catch this condition in two different ways.
22 try:
23 if np.issubdtype(arr.dtype, np.floating):
24 min = np.nanargmin(arr)
25 elif np.issubdtype(arr.dtype, np.number):
26 min = np.argmin(arr)
27 else:
28 min = 0
29 except ValueError:
30 return 0
31 if isfinite(min):
32 return min
33 else:
34 return 0
35
36
37 def bounded_nanargmax(arr):
38 """Find the index of the maximum value, ignoring NaNs.
39
40 If all NaNs, return -1.
41 """
42 try:
43 if np.issubdtype(arr.dtype, np.floating):
44 max = np.nanargmax(arr)
45 elif np.issubdtype(arr.dtype, np.number):
46 max = np.argmax(arr)
47 else:
48 max = -1
49 except ValueError:
50 return -1
51 if isfinite(max):
52 return max
53 else:
54 return -1
55
56
57 class ArrayDataSource(AbstractDataSource):
58 """A data source representing a single, continuous array of numerical data.
59
60 This class does not listen to the array for value changes; if you need that
61 behavior, create a subclass that hooks up the appropriate listeners.
62 """
63
64 # ------------------------------------------------------------------------
65 # AbstractDataSource traits
66 # ------------------------------------------------------------------------
67
68 #: The dimensionality of the indices into this data source (overrides
69 #: AbstractDataSource).
70 index_dimension = Constant("scalar")
71
72 #: The dimensionality of the value at each index point (overrides
73 #: AbstractDataSource).
74 value_dimension = Constant("scalar")
75
76 #: The sort order of the data.
77 #: This is a specialized optimization for 1-D arrays, but it's an important
78 #: one that's used everywhere.
79 sort_order = SortOrderTrait
80
81 # ------------------------------------------------------------------------
82 # Private traits
83 # ------------------------------------------------------------------------
84
85 # The data array itself.
86 _data = NumericalSequenceTrait
87
88 # Cached values of min and max as long as **_data** doesn't change.
89 _cached_bounds = Tuple
90
91 # Not necessary, since this is not a filter, but provided for convenience.
92 _cached_mask = Any
93
94 # The index of the (first) minimum value in self._data
95 # FIXME: This is an Any instead of an Int trait because of how Traits
96 # typechecks numpy.int64 on 64-bit Windows systems.
97 _min_index = Any
98
99 # The index of the (first) maximum value in self._data
100 # FIXME: This is an Any instead of an Int trait because of how Traits
101 # typechecks numpy.int64 on 64-bit Windows systems.
102 _max_index = Any
103
104 # ------------------------------------------------------------------------
105 # Public methods
106 # ------------------------------------------------------------------------
107
108 def __init__(self, data=array([]), sort_order="none", **kw):
109 AbstractDataSource.__init__(self, **kw)
110 self.set_data(data, sort_order)
111
112 def set_data(self, newdata, sort_order=None):
113 """Sets the data, and optionally the sort order, for this data source.
114
115 Parameters
116 ----------
117 newdata : array
118 The data to use.
119 sort_order : SortOrderTrait
120 The sort order of the data
121 """
122 self._data = newdata
123 if sort_order is not None:
124 self.sort_order = sort_order
125 self._compute_bounds()
126 self.data_changed = True
127
128 def set_mask(self, mask):
129 """Sets the mask for this data source."""
130 self._cached_mask = mask
131 self.data_changed = True
132
133 def remove_mask(self):
134 """Removes the mask on this data source."""
135 self._cached_mask = None
136 self.data_changed = True
137
138 # ------------------------------------------------------------------------
139 # AbstractDataSource interface
140 # ------------------------------------------------------------------------
141
142 def get_data(self):
143 """Returns the data for this data source, or 0.0 if it has no data.
144
145 Implements AbstractDataSource.
146 """
147 if self._data is not None:
148 return self._data
149 else:
150 return empty(shape=(0,))
151
152 def get_data_mask(self):
153 """get_data_mask() -> (data_array, mask_array)
154
155 Implements AbstractDataSource.
156 """
157 if self._cached_mask is None:
158 return self._data, ones(len(self._data), dtype=bool)
159 else:
160 return self._data, self._cached_mask
161
162 def is_masked(self):
163 """is_masked() -> bool
164
165 Implements AbstractDataSource.
166 """
167 if self._cached_mask is not None:
168 return True
169 else:
170 return False
171
172 def get_size(self):
173 """get_size() -> int
174
175 Implements AbstractDataSource.
176 """
177 if self._data is not None:
178 return len(self._data)
179 else:
180 return 0
181
182 def get_bounds(self):
183 """Returns the minimum and maximum values of the data source's data.
184
185 Implements AbstractDataSource.
186 """
187 if (
188 self._cached_bounds is None
189 or self._cached_bounds == ()
190 or self._cached_bounds == 0.0
191 ):
192 self._compute_bounds()
193 return self._cached_bounds
194
195 def reverse_map(self, pt, index=0, outside_returns_none=True):
196 """Returns the index of *pt* in the data source.
197
198 Parameters
199 ----------
200 pt : scalar value
201 value to find
202 index
203 ignored for data series with 1-D indices
204 outside_returns_none : Boolean
205 Whether the method returns None if *pt* is outside the range of
206 the data source; if False, the method returns the value of the
207 bound that *pt* is outside of.
208 """
209 if self.sort_order == "none":
210 raise NotImplementedError
211
212 # index is ignored for dataseries with 1-dimensional indices
213 minval, maxval = self._cached_bounds
214 if pt < minval:
215 if outside_returns_none:
216 return None
217 else:
218 return self._min_index
219 elif pt > maxval:
220 if outside_returns_none:
221 return None
222 else:
223 return self._max_index
224 else:
225 return reverse_map_1d(self._data, pt, self.sort_order)
226
227 # ------------------------------------------------------------------------
228 # Private methods
229 # ------------------------------------------------------------------------
230
231 def _compute_bounds(self, data=None):
232 """Computes the minimum and maximum values of self._data.
233
234 If a data array is passed in, then that is used instead of self._data.
235 This behavior is useful for subclasses.
236 """
237 # TODO: as an optimization, perhaps create and cache a sorted
238 # version of the dataset?
239
240 if data is None:
241 data = self.get_data()
242
243 data_len = len(data)
244
245 if data_len == 0:
246 self._min_index = 0
247 self._max_index = 0
248 self._cached_bounds = (0.0, 0.0)
249 elif data_len == 1:
250 self._min_index = 0
251 self._max_index = 0
252 self._cached_bounds = (data[0], data[0])
253 else:
254 if self.sort_order == "ascending":
255 self._min_index = 0
256 self._max_index = -1
257 elif self.sort_order == "descending":
258 self._min_index = -1
259 self._max_index = 0
260 else:
261 # ignore NaN values. This is probably a little slower,
262 # but also much safer.
263
264 # data might be an array of strings or objects that
265 # can't have argmin calculated on them.
266 try:
267 # the data may be in a subclass of numpy.array, viewing
268 # the data as a ndarray will remove side effects of
269 # the subclasses, such as different operator behaviors
270 self._min_index = bounded_nanargmin(data.view(ndarray))
271 self._max_index = bounded_nanargmax(data.view(ndarray))
272 except (TypeError, IndexError, NotImplementedError):
273 # For strings and objects, we punt... These show up in
274 # label-ish data sources.
275 self._cached_bounds = (0.0, 0.0)
276
277 self._cached_bounds = (
278 data[self._min_index],
279 data[self._max_index],
280 )
281
282 # ------------------------------------------------------------------------
283 # Event handlers
284 # ------------------------------------------------------------------------
285
286 def _metadata_changed(self, event):
287 self.metadata_changed = True
288
289 def _metadata_items_changed(self, event):
290 self.metadata_changed = True
291
292 # ------------------------------------------------------------------------
293 # Persistence-related methods
294 # ------------------------------------------------------------------------
295
296 def __getstate__(self):
297 state = super().__getstate__()
298 if not self.persist_data:
299 state.pop("_data", None)
300 state.pop("_cached_mask", None)
301 state.pop("_cached_bounds", None)
302 state.pop("_min_index", None)
303 state.pop("_max_index", None)
304 return state
305
306 def _post_load(self):
307 super()._post_load()
308 self._cached_bounds = ()
309 self._cached_mask = None
310
[end of chaco/array_data_source.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chaco/array_data_source.py b/chaco/array_data_source.py
--- a/chaco/array_data_source.py
+++ b/chaco/array_data_source.py
@@ -155,7 +155,10 @@
Implements AbstractDataSource.
"""
if self._cached_mask is None:
- return self._data, ones(len(self._data), dtype=bool)
+ if self._data is None:
+ return self._data, ones(0, dtype=bool)
+ else:
+ return self._data, ones(len(self._data), dtype=bool)
else:
return self._data, self._cached_mask
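With the extra branch above, a `None` data array yields an empty boolean mask instead of raising. A quick standalone check of the patched logic (plain functions, not the chaco test suite):

```python
import numpy as np

def get_data_mask(data, cached_mask=None):
    # Mirrors the patched branch structure.
    if cached_mask is None:
        if data is None:
            return data, np.ones(0, dtype=bool)
        return data, np.ones(len(data), dtype=bool)
    return data, cached_mask

print(get_data_mask(None))           # (None, array([], dtype=bool))
print(get_data_mask(np.arange(3)))   # (array([0, 1, 2]), array([ True,  True,  True]))
```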
|
{"golden_diff": "diff --git a/chaco/array_data_source.py b/chaco/array_data_source.py\n--- a/chaco/array_data_source.py\n+++ b/chaco/array_data_source.py\n@@ -155,7 +155,10 @@\n Implements AbstractDataSource.\n \"\"\"\n if self._cached_mask is None:\n- return self._data, ones(len(self._data), dtype=bool)\n+ if self._data is None:\n+ return self._data, ones(0, dtype=bool)\n+ else:\n+ return self._data, ones(len(self._data), dtype=bool)\n else:\n return self._data, self._cached_mask\n", "issue": "ArrayDataSource get_mask_data() fails when data is None\nSee this test here:\n\nhttps://github.com/enthought/chaco/blob/enh/data-source-tests/chaco/tests/arraydatasource_test_case.py#L108\n\nMore generally, I think that the behaviour for an empty data source is probably wrong (why a _scalar_ `0.0` instead of `array([])`?) but I'm not sure what will break if that is changed.\n\n", "before_files": [{"content": "\"\"\" Defines the ArrayDataSource class.\"\"\"\n\n# Major library imports\nfrom numpy import array, empty, isfinite, ones, ndarray\nimport numpy as np\n\n# Enthought library imports\nfrom traits.api import Any, Constant, Int, Tuple\n\n# Chaco imports\nfrom .base import NumericalSequenceTrait, reverse_map_1d, SortOrderTrait\nfrom .abstract_data_source import AbstractDataSource\n\n\ndef bounded_nanargmin(arr):\n \"\"\"Find the index of the minimum value, ignoring NaNs.\n\n If all NaNs, return 0.\n \"\"\"\n # Different versions of numpy behave differently in the all-NaN case, so we\n # catch this condition in two different ways.\n try:\n if np.issubdtype(arr.dtype, np.floating):\n min = np.nanargmin(arr)\n elif np.issubdtype(arr.dtype, np.number):\n min = np.argmin(arr)\n else:\n min = 0\n except ValueError:\n return 0\n if isfinite(min):\n return min\n else:\n return 0\n\n\ndef bounded_nanargmax(arr):\n \"\"\"Find the index of the maximum value, ignoring NaNs.\n\n If all NaNs, return -1.\n \"\"\"\n try:\n if np.issubdtype(arr.dtype, np.floating):\n max = np.nanargmax(arr)\n elif np.issubdtype(arr.dtype, np.number):\n max = np.argmax(arr)\n else:\n max = -1\n except ValueError:\n return -1\n if isfinite(max):\n return max\n else:\n return -1\n\n\nclass ArrayDataSource(AbstractDataSource):\n \"\"\"A data source representing a single, continuous array of numerical data.\n\n This class does not listen to the array for value changes; if you need that\n behavior, create a subclass that hooks up the appropriate listeners.\n \"\"\"\n\n # ------------------------------------------------------------------------\n # AbstractDataSource traits\n # ------------------------------------------------------------------------\n\n #: The dimensionality of the indices into this data source (overrides\n #: AbstractDataSource).\n index_dimension = Constant(\"scalar\")\n\n #: The dimensionality of the value at each index point (overrides\n #: AbstractDataSource).\n value_dimension = Constant(\"scalar\")\n\n #: The sort order of the data.\n #: This is a specialized optimization for 1-D arrays, but it's an important\n #: one that's used everywhere.\n sort_order = SortOrderTrait\n\n # ------------------------------------------------------------------------\n # Private traits\n # ------------------------------------------------------------------------\n\n # The data array itself.\n _data = NumericalSequenceTrait\n\n # Cached values of min and max as long as **_data** doesn't change.\n _cached_bounds = Tuple\n\n # Not necessary, since this is not a filter, but provided for convenience.\n _cached_mask = Any\n\n # The index of 
the (first) minimum value in self._data\n # FIXME: This is an Any instead of an Int trait because of how Traits\n # typechecks numpy.int64 on 64-bit Windows systems.\n _min_index = Any\n\n # The index of the (first) maximum value in self._data\n # FIXME: This is an Any instead of an Int trait because of how Traits\n # typechecks numpy.int64 on 64-bit Windows systems.\n _max_index = Any\n\n # ------------------------------------------------------------------------\n # Public methods\n # ------------------------------------------------------------------------\n\n def __init__(self, data=array([]), sort_order=\"none\", **kw):\n AbstractDataSource.__init__(self, **kw)\n self.set_data(data, sort_order)\n\n def set_data(self, newdata, sort_order=None):\n \"\"\"Sets the data, and optionally the sort order, for this data source.\n\n Parameters\n ----------\n newdata : array\n The data to use.\n sort_order : SortOrderTrait\n The sort order of the data\n \"\"\"\n self._data = newdata\n if sort_order is not None:\n self.sort_order = sort_order\n self._compute_bounds()\n self.data_changed = True\n\n def set_mask(self, mask):\n \"\"\"Sets the mask for this data source.\"\"\"\n self._cached_mask = mask\n self.data_changed = True\n\n def remove_mask(self):\n \"\"\"Removes the mask on this data source.\"\"\"\n self._cached_mask = None\n self.data_changed = True\n\n # ------------------------------------------------------------------------\n # AbstractDataSource interface\n # ------------------------------------------------------------------------\n\n def get_data(self):\n \"\"\"Returns the data for this data source, or 0.0 if it has no data.\n\n Implements AbstractDataSource.\n \"\"\"\n if self._data is not None:\n return self._data\n else:\n return empty(shape=(0,))\n\n def get_data_mask(self):\n \"\"\"get_data_mask() -> (data_array, mask_array)\n\n Implements AbstractDataSource.\n \"\"\"\n if self._cached_mask is None:\n return self._data, ones(len(self._data), dtype=bool)\n else:\n return self._data, self._cached_mask\n\n def is_masked(self):\n \"\"\"is_masked() -> bool\n\n Implements AbstractDataSource.\n \"\"\"\n if self._cached_mask is not None:\n return True\n else:\n return False\n\n def get_size(self):\n \"\"\"get_size() -> int\n\n Implements AbstractDataSource.\n \"\"\"\n if self._data is not None:\n return len(self._data)\n else:\n return 0\n\n def get_bounds(self):\n \"\"\"Returns the minimum and maximum values of the data source's data.\n\n Implements AbstractDataSource.\n \"\"\"\n if (\n self._cached_bounds is None\n or self._cached_bounds == ()\n or self._cached_bounds == 0.0\n ):\n self._compute_bounds()\n return self._cached_bounds\n\n def reverse_map(self, pt, index=0, outside_returns_none=True):\n \"\"\"Returns the index of *pt* in the data source.\n\n Parameters\n ----------\n pt : scalar value\n value to find\n index\n ignored for data series with 1-D indices\n outside_returns_none : Boolean\n Whether the method returns None if *pt* is outside the range of\n the data source; if False, the method returns the value of the\n bound that *pt* is outside of.\n \"\"\"\n if self.sort_order == \"none\":\n raise NotImplementedError\n\n # index is ignored for dataseries with 1-dimensional indices\n minval, maxval = self._cached_bounds\n if pt < minval:\n if outside_returns_none:\n return None\n else:\n return self._min_index\n elif pt > maxval:\n if outside_returns_none:\n return None\n else:\n return self._max_index\n else:\n return reverse_map_1d(self._data, pt, self.sort_order)\n\n # 
------------------------------------------------------------------------\n # Private methods\n # ------------------------------------------------------------------------\n\n def _compute_bounds(self, data=None):\n \"\"\"Computes the minimum and maximum values of self._data.\n\n If a data array is passed in, then that is used instead of self._data.\n This behavior is useful for subclasses.\n \"\"\"\n # TODO: as an optimization, perhaps create and cache a sorted\n # version of the dataset?\n\n if data is None:\n data = self.get_data()\n\n data_len = len(data)\n\n if data_len == 0:\n self._min_index = 0\n self._max_index = 0\n self._cached_bounds = (0.0, 0.0)\n elif data_len == 1:\n self._min_index = 0\n self._max_index = 0\n self._cached_bounds = (data[0], data[0])\n else:\n if self.sort_order == \"ascending\":\n self._min_index = 0\n self._max_index = -1\n elif self.sort_order == \"descending\":\n self._min_index = -1\n self._max_index = 0\n else:\n # ignore NaN values. This is probably a little slower,\n # but also much safer.\n\n # data might be an array of strings or objects that\n # can't have argmin calculated on them.\n try:\n # the data may be in a subclass of numpy.array, viewing\n # the data as a ndarray will remove side effects of\n # the subclasses, such as different operator behaviors\n self._min_index = bounded_nanargmin(data.view(ndarray))\n self._max_index = bounded_nanargmax(data.view(ndarray))\n except (TypeError, IndexError, NotImplementedError):\n # For strings and objects, we punt... These show up in\n # label-ish data sources.\n self._cached_bounds = (0.0, 0.0)\n\n self._cached_bounds = (\n data[self._min_index],\n data[self._max_index],\n )\n\n # ------------------------------------------------------------------------\n # Event handlers\n # ------------------------------------------------------------------------\n\n def _metadata_changed(self, event):\n self.metadata_changed = True\n\n def _metadata_items_changed(self, event):\n self.metadata_changed = True\n\n # ------------------------------------------------------------------------\n # Persistence-related methods\n # ------------------------------------------------------------------------\n\n def __getstate__(self):\n state = super().__getstate__()\n if not self.persist_data:\n state.pop(\"_data\", None)\n state.pop(\"_cached_mask\", None)\n state.pop(\"_cached_bounds\", None)\n state.pop(\"_min_index\", None)\n state.pop(\"_max_index\", None)\n return state\n\n def _post_load(self):\n super()._post_load()\n self._cached_bounds = ()\n self._cached_mask = None\n", "path": "chaco/array_data_source.py"}]}
| 3,542 | 140 |
gh_patches_debug_3717
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-5972
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'NoneType' object has no attribute 'sites'
**Summary**
_What is the problem?_
I was installing `Kuma` on my computer.
When I run the command `docker-compose exec web ./manage.py configure_github_social`, the console shows the error `AttributeError: 'NoneType' object has no attribute 'sites'`.
**Steps To Reproduce (STR)**
_How can we reproduce the problem?_
1. Go to [https://kuma.readthedocs.io/en/latest/installation.html#load-the-sample-database](https://kuma.readthedocs.io/en/latest/installation.html#load-the-sample-database)
2. Find the step **Enable GitHub authentication (optional)**
3. At that step I run `docker-compose exec web ./manage.py configure_github_social`, and the error occurred.
**Actual behavior**
_What actually happened?_
I checked the code and found that in the file `kuma/attachments/management/commands/configure_github_social.py`, at line 75, the variable `social_app` is None. Did I get something wrong?
</issue>
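Nothing was done wrong on the reporter's side: on a fresh database `SocialApp.objects.filter(provider='github').first()` returns `None`, the `is not None and ...` guard skips the whole setup block, and the unconditional `social_app.sites.add(site)` at line 75 then runs against `None`. A boiled-down sketch of that control flow (plain Python stand-ins for the Django calls):

```python
social_app = None      # .first() on an empty queryset
answer = 'yes'         # whatever the user would type at OVERWRITE_PROMPT
site = object()        # stand-in for the Site row

if social_app is not None and answer == 'yes':   # False on a fresh database
    social_app = object()                        # setup block never reached

social_app.sites.add(site)   # AttributeError: 'NoneType' object has no attribute 'sites'
```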
<code>
[start of kuma/attachments/management/commands/configure_github_social.py]
1 import fileinput
2 import os
3 import sys
4
5 from allauth.socialaccount.models import SocialApp
6 from django.conf import settings
7 from django.contrib.sites.models import Site
8 from django.core.management.base import BaseCommand
9
10 try:
11 input = raw_input
12 except NameError:
13 # Python3's input behaves like raw_input
14 # TODO: Delete this block when we've migrated
15 pass
16
17 LOCALHOST = 'localhost:8000'
18 MDN_LOCALHOST = 'mdn.localhost'
19
20 OVERWRITE_PROMPT = 'There\'s already a SocialApp for GitHub, if you want to overwrite it type "yes":'
21 GITHUB_INFO = (
22 'Visit https://github.com/settings/developers and click "New OAuth App"\n'
23 'Set "Homepage URL" to "http://mdn.localhost:8000/" and Authorization callback URL to ' +
24 '"http://mdn.localhost:8000/users/github/login/callback/" respectively'
25 )
26 ENV_INFO = 'Putting SITE_ID and DOMAIN into .env'
27 HOSTS_INFO = (
28 'Make sure your hosts file contains these lines:\n'
29 '127.0.0.1 localhost demos mdn.localhost beta.mdn.localhost wiki.mdn.localhost\n'
30 '::1 mdn.localhost beta.mdn.localhost wiki.mdn.localhost'
31 )
32
33
34 def overwrite_or_create_env_vars(env_vars):
35 file_path = os.path.join(os.getcwd(), '.env')
36
37 for line in fileinput.input(file_path, inplace=True):
38 key = line.strip().split('=')[0]
39 if key not in env_vars:
40 sys.stdout.write(line)
41
42 with open(file_path, 'a') as file:
43 file.write('\n')
44 for key, value in env_vars.items():
45 file.write(key + '=' + str(value) + '\n')
46
47
48 class Command(BaseCommand):
49 help = 'Configure Kuma for Sign-In with GitHub'
50
51 def handle(self, **options):
52 print('\n')
53
54 social_app = SocialApp.objects.filter(provider='github').first()
55 if social_app is not None and input(OVERWRITE_PROMPT) == 'yes':
56 print('\n')
57
58 print(GITHUB_INFO)
59 client_id = input('Client ID: ').strip()
60 client_secret = input('Client Secret: ').strip()
61
62 social_app, created = SocialApp.objects.update_or_create(
63 provider='github',
64 defaults={
65 'name': 'MDN Development',
66 'client_id': client_id,
67 'secret': client_secret
68 }
69 )
70
71 site, created = Site.objects.update_or_create(
72 domain=LOCALHOST,
73 defaults={'name': LOCALHOST}
74 )
75 social_app.sites.add(site)
76
77 print('\n')
78
79 print(ENV_INFO)
80 overwrite_or_create_env_vars(
81 {'SITE_ID': site.id, 'DOMAIN': MDN_LOCALHOST} if site.id != settings.SITE_ID else
82 {'DOMAIN': MDN_LOCALHOST})
83
84 print(HOSTS_INFO)
85
[end of kuma/attachments/management/commands/configure_github_social.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kuma/attachments/management/commands/configure_github_social.py b/kuma/attachments/management/commands/configure_github_social.py
--- a/kuma/attachments/management/commands/configure_github_social.py
+++ b/kuma/attachments/management/commands/configure_github_social.py
@@ -52,7 +52,7 @@
print('\n')
social_app = SocialApp.objects.filter(provider='github').first()
- if social_app is not None and input(OVERWRITE_PROMPT) == 'yes':
+ if social_app is None or input(OVERWRITE_PROMPT) == 'yes':
print('\n')
print(GITHUB_INFO)
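The flipped guard makes a missing app always trigger setup while an existing app still asks before overwriting. A tiny check of the patched condition (illustrative values only):

```python
for existing_app, answer in [(None, ""), ("app", "yes"), ("app", "no")]:
    runs_setup = existing_app is None or answer == "yes"
    print(existing_app, repr(answer), "->", runs_setup)
# None ''    -> True   (fresh database: create the SocialApp)
# app  'yes' -> True   (overwrite confirmed)
# app  'no'  -> False  (keep the existing app untouched)
```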
|
{"golden_diff": "diff --git a/kuma/attachments/management/commands/configure_github_social.py b/kuma/attachments/management/commands/configure_github_social.py\n--- a/kuma/attachments/management/commands/configure_github_social.py\n+++ b/kuma/attachments/management/commands/configure_github_social.py\n@@ -52,7 +52,7 @@\n print('\\n')\n \n social_app = SocialApp.objects.filter(provider='github').first()\n- if social_app is not None and input(OVERWRITE_PROMPT) == 'yes':\n+ if social_app is None or input(OVERWRITE_PROMPT) == 'yes':\n print('\\n')\n \n print(GITHUB_INFO)\n", "issue": "AttributeError: 'NoneType' object has no attribute 'sites'\n**Summary**\r\n_What is the problem?_\r\nI was installing `Kuma` on my computer.\r\nWhen I run the command `docker-compose exec web ./manage.py configure_github_social`, console desk show the Error `AttributeError: 'NoneType' object has no attribute 'sites'`.\r\n\r\n**Steps To Reproduce (STR)**\r\n_How can we reproduce the problem?_\r\n\r\n1. Get in [https://kuma.readthedocs.io/en/latest/installation.html#load-the-sample-database](https://kuma.readthedocs.io/en/latest/installation.html#load-the-sample-database)\r\n2. Find the step **Enable GitHub authentication (optional)**\r\n3. At that step I run `docker-compose exec web ./manage.py configure_github_social`, and error occured.\r\n\r\n\r\n**Actual behavior**\r\n_What actually happened?_\r\nI checked the code and found that in file `kuma/attachments/management/commands/configure_github_social.py` line 75, the variable `social_app` is None. Was I got something wrong? \n", "before_files": [{"content": "import fileinput\nimport os\nimport sys\n\nfrom allauth.socialaccount.models import SocialApp\nfrom django.conf import settings\nfrom django.contrib.sites.models import Site\nfrom django.core.management.base import BaseCommand\n\ntry:\n input = raw_input\nexcept NameError:\n # Python3's input behaves like raw_input\n # TODO: Delete this block when we've migrated\n pass\n\nLOCALHOST = 'localhost:8000'\nMDN_LOCALHOST = 'mdn.localhost'\n\nOVERWRITE_PROMPT = 'There\\'s already a SocialApp for GitHub, if you want to overwrite it type \"yes\":'\nGITHUB_INFO = (\n 'Visit https://github.com/settings/developers and click \"New OAuth App\"\\n'\n 'Set \"Homepage URL\" to \"http://mdn.localhost:8000/\" and Authorization callback URL to ' +\n '\"http://mdn.localhost:8000/users/github/login/callback/\" respectively'\n)\nENV_INFO = 'Putting SITE_ID and DOMAIN into .env'\nHOSTS_INFO = (\n 'Make sure your hosts file contains these lines:\\n'\n '127.0.0.1 localhost demos mdn.localhost beta.mdn.localhost wiki.mdn.localhost\\n'\n '::1 mdn.localhost beta.mdn.localhost wiki.mdn.localhost'\n)\n\n\ndef overwrite_or_create_env_vars(env_vars):\n file_path = os.path.join(os.getcwd(), '.env')\n\n for line in fileinput.input(file_path, inplace=True):\n key = line.strip().split('=')[0]\n if key not in env_vars:\n sys.stdout.write(line)\n\n with open(file_path, 'a') as file:\n file.write('\\n')\n for key, value in env_vars.items():\n file.write(key + '=' + str(value) + '\\n')\n\n\nclass Command(BaseCommand):\n help = 'Configure Kuma for Sign-In with GitHub'\n\n def handle(self, **options):\n print('\\n')\n\n social_app = SocialApp.objects.filter(provider='github').first()\n if social_app is not None and input(OVERWRITE_PROMPT) == 'yes':\n print('\\n')\n\n print(GITHUB_INFO)\n client_id = input('Client ID: ').strip()\n client_secret = input('Client Secret: ').strip()\n\n social_app, created = SocialApp.objects.update_or_create(\n 
provider='github',\n defaults={\n 'name': 'MDN Development',\n 'client_id': client_id,\n 'secret': client_secret\n }\n )\n\n site, created = Site.objects.update_or_create(\n domain=LOCALHOST,\n defaults={'name': LOCALHOST}\n )\n social_app.sites.add(site)\n\n print('\\n')\n\n print(ENV_INFO)\n overwrite_or_create_env_vars(\n {'SITE_ID': site.id, 'DOMAIN': MDN_LOCALHOST} if site.id != settings.SITE_ID else\n {'DOMAIN': MDN_LOCALHOST})\n\n print(HOSTS_INFO)\n", "path": "kuma/attachments/management/commands/configure_github_social.py"}]}
| 1,593 | 151 |
gh_patches_debug_5378
|
rasdani/github-patches
|
git_diff
|
cocotb__cocotb-1581
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"Running tests with cocotb ... from Unknown"
In e.g. https://asciinema.org/a/316949, we can see the line
```
0.00ns INFO cocotb __init__.py:144 in _initialise_testbench Running tests with cocotb v1.4.0.dev0 from Unknown
```
@themperek suggested using ``__file__`` in https://github.com/cocotb/cocotb/blob/09effc69f914820d4aa50217b9fad6ba29c4248f/cocotb/__init__.py#L139-L144 instead of ``export``ing ``COCOTB_PY_DIR`` in https://github.com/cocotb/cocotb/blob/67bf79b997b630416b1f98742fbeab09d9e90fef/cocotb/share/makefiles/Makefile.inc#L33
</issue>
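A minimal sketch of the suggested change — deriving the reported location from the package's own `__file__` rather than the `COCOTB_PY_DIR` environment variable exported by the makefiles (assumed shape of the fix, not the actual patch):

```python
import os
import cocotb

cocotb_dir = os.path.dirname(os.path.abspath(cocotb.__file__))
print("Running tests with cocotb v%s from %s" % (cocotb.__version__, cocotb_dir))
```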
<code>
[start of cocotb/__init__.py]
1 # Copyright (c) 2013 Potential Ventures Ltd
2 # Copyright (c) 2013 SolarFlare Communications Inc
3 # All rights reserved.
4
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions are met:
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of Potential Ventures Ltd,
13 # SolarFlare Communications Inc nor the
14 # names of its contributors may be used to endorse or promote products
15 # derived from this software without specific prior written permission.
16
17 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Cocotb is a coroutine, cosimulation framework for writing testbenches in Python.
30
31 See http://cocotb.readthedocs.org for full documentation
32 """
33 import os
34 import sys
35 import logging
36 import threading
37 import random
38 import time
39 import warnings
40
41 import cocotb.handle
42 import cocotb.log
43 from cocotb.scheduler import Scheduler
44 from cocotb.regression import RegressionManager
45
46
47 # Things we want in the cocotb namespace
48 from cocotb.decorators import test, coroutine, hook, function, external # noqa: F401
49
50 # Singleton scheduler instance
51 # NB this cheekily ensures a singleton since we're replacing the reference
52 # so that cocotb.scheduler gives you the singleton instance and not the
53 # scheduler package
54
55 from ._version import __version__
56
57 # GPI logging instance
58 if "COCOTB_SIM" in os.environ:
59
60 def _reopen_stream_with_buffering(stream_name):
61 try:
62 if not getattr(sys, stream_name).isatty():
63 setattr(sys, stream_name, os.fdopen(getattr(sys, stream_name).fileno(), 'w', 1))
64 return True
65 return False
66 except Exception as e:
67 return e
68
69 # If stdout/stderr are not TTYs, Python may not have opened them with line
70 # buffering. In that case, try to reopen them with line buffering
71 # explicitly enabled. This ensures that prints such as stack traces always
72 # appear. Continue silently if this fails.
73 _stdout_buffer_result = _reopen_stream_with_buffering('stdout')
74 _stderr_buffer_result = _reopen_stream_with_buffering('stderr')
75
76 # Don't set the logging up until we've attempted to fix the standard IO,
77 # otherwise it will end up connected to the unfixed IO.
78 cocotb.log.default_config()
79 log = logging.getLogger(__name__)
80
81 # we can't log these things until the logging is set up!
82 if _stderr_buffer_result is True:
83 log.debug("Reopened stderr with line buffering")
84 if _stdout_buffer_result is True:
85 log.debug("Reopened stdout with line buffering")
86 if isinstance(_stdout_buffer_result, Exception) or isinstance(_stderr_buffer_result, Exception):
87 if isinstance(_stdout_buffer_result, Exception):
88 log.warning("Failed to ensure that stdout is line buffered", exc_info=_stdout_buffer_result)
89 if isinstance(_stderr_buffer_result, Exception):
90 log.warning("Failed to ensure that stderr is line buffered", exc_info=_stderr_buffer_result)
91 log.warning("Some stack traces may not appear because of this.")
92
93 del _stderr_buffer_result, _stdout_buffer_result
94
95 # From https://www.python.org/dev/peps/pep-0565/#recommended-filter-settings-for-test-runners
96 # If the user doesn't want to see these, they can always change the global
97 # warning settings in their test module.
98 if not sys.warnoptions:
99 warnings.simplefilter("default")
100
101 scheduler = Scheduler()
102 """The global scheduler instance."""
103
104 regression_manager = None
105
106 plusargs = {}
107 """A dictionary of "plusargs" handed to the simulation."""
108
109 # To save typing provide an alias to scheduler.add
110 fork = scheduler.add
111
112 # FIXME is this really required?
113 _rlock = threading.RLock()
114
115
116 def mem_debug(port):
117 import cocotb.memdebug
118 cocotb.memdebug.start(port)
119
120
121 def _initialise_testbench(root_name):
122 """Initialize testbench.
123
124 This function is called after the simulator has elaborated all
125 entities and is ready to run the test.
126
127 The test must be defined by the environment variables
128 :envvar:`MODULE` and :envvar:`TESTCASE`.
129
130 The environment variable :envvar:`COCOTB_HOOKS`, if present, contains a
131 comma-separated list of modules to be executed before the first test.
132 """
133 _rlock.acquire()
134
135 memcheck_port = os.getenv('MEMCHECK')
136 if memcheck_port is not None:
137 mem_debug(int(memcheck_port))
138
139 exec_path = os.getenv('COCOTB_PY_DIR')
140 if exec_path is None:
141 exec_path = 'Unknown'
142
143 log.info("Running tests with cocotb v%s from %s" %
144 (__version__, exec_path))
145
146 # Create the base handle type
147
148 process_plusargs()
149
150 # Seed the Python random number generator to make this repeatable
151 global RANDOM_SEED
152 RANDOM_SEED = os.getenv('RANDOM_SEED')
153
154 if RANDOM_SEED is None:
155 if 'ntb_random_seed' in plusargs:
156 RANDOM_SEED = eval(plusargs['ntb_random_seed'])
157 elif 'seed' in plusargs:
158 RANDOM_SEED = eval(plusargs['seed'])
159 else:
160 RANDOM_SEED = int(time.time())
161 log.info("Seeding Python random module with %d" % (RANDOM_SEED))
162 else:
163 RANDOM_SEED = int(RANDOM_SEED)
164 log.info("Seeding Python random module with supplied seed %d" % (RANDOM_SEED))
165 random.seed(RANDOM_SEED)
166
167 module_str = os.getenv('MODULE')
168 test_str = os.getenv('TESTCASE')
169 hooks_str = os.getenv('COCOTB_HOOKS', '')
170
171 if not module_str:
172 raise ImportError("Environment variables defining the module(s) to " +
173 "execute not defined. MODULE=\"%s\"" % (module_str))
174
175 modules = module_str.split(',')
176 hooks = hooks_str.split(',') if hooks_str else []
177
178 global regression_manager
179
180 regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=RANDOM_SEED, hooks=hooks)
181 regression_manager.initialise()
182 regression_manager.execute()
183
184 _rlock.release()
185 return True
186
187
188 def _sim_event(level, message):
189 """Function that can be called externally to signal an event."""
190 SIM_INFO = 0
191 SIM_TEST_FAIL = 1
192 SIM_FAIL = 2
193 from cocotb.result import TestFailure, SimFailure
194
195 if level is SIM_TEST_FAIL:
196 scheduler.log.error("Failing test at simulator request")
197 scheduler.finish_test(TestFailure("Failure from external source: %s" %
198 message))
199 elif level is SIM_FAIL:
200 # We simply return here as the simulator will exit
201 # so no cleanup is needed
202 msg = ("Failing test at simulator request before test run completion: "
203 "%s" % message)
204 scheduler.log.error(msg)
205 scheduler.finish_scheduler(SimFailure(msg))
206 else:
207 scheduler.log.error("Unsupported sim event")
208
209 return True
210
211
212 def process_plusargs():
213
214 global plusargs
215
216 plusargs = {}
217
218 for option in cocotb.argv:
219 if option.startswith('+'):
220 if option.find('=') != -1:
221 (name, value) = option[1:].split('=')
222 plusargs[name] = value
223 else:
224 plusargs[option[1:]] = True
225
[end of cocotb/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cocotb/__init__.py b/cocotb/__init__.py
--- a/cocotb/__init__.py
+++ b/cocotb/__init__.py
@@ -136,12 +136,8 @@
if memcheck_port is not None:
mem_debug(int(memcheck_port))
- exec_path = os.getenv('COCOTB_PY_DIR')
- if exec_path is None:
- exec_path = 'Unknown'
-
log.info("Running tests with cocotb v%s from %s" %
- (__version__, exec_path))
+ (__version__, os.path.dirname(__file__)))
# Create the base handle type
|
{"golden_diff": "diff --git a/cocotb/__init__.py b/cocotb/__init__.py\n--- a/cocotb/__init__.py\n+++ b/cocotb/__init__.py\n@@ -136,12 +136,8 @@\n if memcheck_port is not None:\n mem_debug(int(memcheck_port))\n \n- exec_path = os.getenv('COCOTB_PY_DIR')\n- if exec_path is None:\n- exec_path = 'Unknown'\n-\n log.info(\"Running tests with cocotb v%s from %s\" %\n- (__version__, exec_path))\n+ (__version__, os.path.dirname(__file__)))\n \n # Create the base handle type\n", "issue": "\"Running tests with cocotb ... from Unknown\"\nIn e.g. https://asciinema.org/a/316949, we can see the line\r\n```\r\n0.00ns INFO cocotb __init__.py:144 in _initialise_testbench Running tests with cocotb v1.4.0.dev0 from Unknown\r\n```\r\n\r\n@themperek suggested using ``__file__`` in https://github.com/cocotb/cocotb/blob/09effc69f914820d4aa50217b9fad6ba29c4248f/cocotb/__init__.py#L139-L144 instead of ``export``ing ``COCOTB_PY_DIR`` in https://github.com/cocotb/cocotb/blob/67bf79b997b630416b1f98742fbeab09d9e90fef/cocotb/share/makefiles/Makefile.inc#L33\n", "before_files": [{"content": "# Copyright (c) 2013 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nCocotb is a coroutine, cosimulation framework for writing testbenches in Python.\n\nSee http://cocotb.readthedocs.org for full documentation\n\"\"\"\nimport os\nimport sys\nimport logging\nimport threading\nimport random\nimport time\nimport warnings\n\nimport cocotb.handle\nimport cocotb.log\nfrom cocotb.scheduler import Scheduler\nfrom cocotb.regression import RegressionManager\n\n\n# Things we want in the cocotb namespace\nfrom cocotb.decorators import test, coroutine, hook, function, external # noqa: F401\n\n# Singleton scheduler instance\n# NB this cheekily ensures a singleton since we're replacing the reference\n# so that cocotb.scheduler gives you the singleton instance and not the\n# scheduler package\n\nfrom ._version import __version__\n\n# GPI logging instance\nif \"COCOTB_SIM\" in os.environ:\n\n def _reopen_stream_with_buffering(stream_name):\n try:\n if not getattr(sys, stream_name).isatty():\n setattr(sys, stream_name, os.fdopen(getattr(sys, stream_name).fileno(), 'w', 1))\n return True\n return False\n except Exception as e:\n return e\n\n # If stdout/stderr are not TTYs, Python may not have opened them with line\n # buffering. In that case, try to reopen them with line buffering\n # explicitly enabled. This ensures that prints such as stack traces always\n # appear. 
Continue silently if this fails.\n _stdout_buffer_result = _reopen_stream_with_buffering('stdout')\n _stderr_buffer_result = _reopen_stream_with_buffering('stderr')\n\n # Don't set the logging up until we've attempted to fix the standard IO,\n # otherwise it will end up connected to the unfixed IO.\n cocotb.log.default_config()\n log = logging.getLogger(__name__)\n\n # we can't log these things until the logging is set up!\n if _stderr_buffer_result is True:\n log.debug(\"Reopened stderr with line buffering\")\n if _stdout_buffer_result is True:\n log.debug(\"Reopened stdout with line buffering\")\n if isinstance(_stdout_buffer_result, Exception) or isinstance(_stderr_buffer_result, Exception):\n if isinstance(_stdout_buffer_result, Exception):\n log.warning(\"Failed to ensure that stdout is line buffered\", exc_info=_stdout_buffer_result)\n if isinstance(_stderr_buffer_result, Exception):\n log.warning(\"Failed to ensure that stderr is line buffered\", exc_info=_stderr_buffer_result)\n log.warning(\"Some stack traces may not appear because of this.\")\n\n del _stderr_buffer_result, _stdout_buffer_result\n\n # From https://www.python.org/dev/peps/pep-0565/#recommended-filter-settings-for-test-runners\n # If the user doesn't want to see these, they can always change the global\n # warning settings in their test module.\n if not sys.warnoptions:\n warnings.simplefilter(\"default\")\n\nscheduler = Scheduler()\n\"\"\"The global scheduler instance.\"\"\"\n\nregression_manager = None\n\nplusargs = {}\n\"\"\"A dictionary of \"plusargs\" handed to the simulation.\"\"\"\n\n# To save typing provide an alias to scheduler.add\nfork = scheduler.add\n\n# FIXME is this really required?\n_rlock = threading.RLock()\n\n\ndef mem_debug(port):\n import cocotb.memdebug\n cocotb.memdebug.start(port)\n\n\ndef _initialise_testbench(root_name):\n \"\"\"Initialize testbench.\n\n This function is called after the simulator has elaborated all\n entities and is ready to run the test.\n\n The test must be defined by the environment variables\n :envvar:`MODULE` and :envvar:`TESTCASE`.\n\n The environment variable :envvar:`COCOTB_HOOKS`, if present, contains a\n comma-separated list of modules to be executed before the first test.\n \"\"\"\n _rlock.acquire()\n\n memcheck_port = os.getenv('MEMCHECK')\n if memcheck_port is not None:\n mem_debug(int(memcheck_port))\n\n exec_path = os.getenv('COCOTB_PY_DIR')\n if exec_path is None:\n exec_path = 'Unknown'\n\n log.info(\"Running tests with cocotb v%s from %s\" %\n (__version__, exec_path))\n\n # Create the base handle type\n\n process_plusargs()\n\n # Seed the Python random number generator to make this repeatable\n global RANDOM_SEED\n RANDOM_SEED = os.getenv('RANDOM_SEED')\n\n if RANDOM_SEED is None:\n if 'ntb_random_seed' in plusargs:\n RANDOM_SEED = eval(plusargs['ntb_random_seed'])\n elif 'seed' in plusargs:\n RANDOM_SEED = eval(plusargs['seed'])\n else:\n RANDOM_SEED = int(time.time())\n log.info(\"Seeding Python random module with %d\" % (RANDOM_SEED))\n else:\n RANDOM_SEED = int(RANDOM_SEED)\n log.info(\"Seeding Python random module with supplied seed %d\" % (RANDOM_SEED))\n random.seed(RANDOM_SEED)\n\n module_str = os.getenv('MODULE')\n test_str = os.getenv('TESTCASE')\n hooks_str = os.getenv('COCOTB_HOOKS', '')\n\n if not module_str:\n raise ImportError(\"Environment variables defining the module(s) to \" +\n \"execute not defined. 
MODULE=\\\"%s\\\"\" % (module_str))\n\n modules = module_str.split(',')\n hooks = hooks_str.split(',') if hooks_str else []\n\n global regression_manager\n\n regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=RANDOM_SEED, hooks=hooks)\n regression_manager.initialise()\n regression_manager.execute()\n\n _rlock.release()\n return True\n\n\ndef _sim_event(level, message):\n \"\"\"Function that can be called externally to signal an event.\"\"\"\n SIM_INFO = 0\n SIM_TEST_FAIL = 1\n SIM_FAIL = 2\n from cocotb.result import TestFailure, SimFailure\n\n if level is SIM_TEST_FAIL:\n scheduler.log.error(\"Failing test at simulator request\")\n scheduler.finish_test(TestFailure(\"Failure from external source: %s\" %\n message))\n elif level is SIM_FAIL:\n # We simply return here as the simulator will exit\n # so no cleanup is needed\n msg = (\"Failing test at simulator request before test run completion: \"\n \"%s\" % message)\n scheduler.log.error(msg)\n scheduler.finish_scheduler(SimFailure(msg))\n else:\n scheduler.log.error(\"Unsupported sim event\")\n\n return True\n\n\ndef process_plusargs():\n\n global plusargs\n\n plusargs = {}\n\n for option in cocotb.argv:\n if option.startswith('+'):\n if option.find('=') != -1:\n (name, value) = option[1:].split('=')\n plusargs[name] = value\n else:\n plusargs[option[1:]] = True\n", "path": "cocotb/__init__.py"}]}
| 3,199 | 155 |
gh_patches_debug_61258
|
rasdani/github-patches
|
git_diff
|
microsoft__torchgeo-80
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Determine minimum supported dependency versions
Before releasing, we should determine the minimum supported version of each dependency. We should also consider a test with this version just to make sure it doesn't change.
</issue>
<code>
[start of docs/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 import os
10 import sys
11
12 import pytorch_sphinx_theme
13
14 # If extensions (or modules to document with autodoc) are in another directory,
15 # add these directories to sys.path here. If the directory is relative to the
16 # documentation root, use os.path.abspath to make it absolute, like shown here.
17 sys.path.insert(0, os.path.abspath(".."))
18
19 import torchgeo # noqa: E402
20
21 # -- Project information -----------------------------------------------------
22
23 project = "torchgeo"
24 copyright = "2021, Microsoft Corporation"
25 author = "Adam J. Stewart"
26 version = ".".join(torchgeo.__version__.split(".")[:2])
27 release = torchgeo.__version__
28
29
30 # -- General configuration ---------------------------------------------------
31
32 # Add any Sphinx extension module names here, as strings. They can be
33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
34 # ones.
35 extensions = [
36 "sphinx.ext.autodoc",
37 "sphinx.ext.autosectionlabel",
38 "sphinx.ext.intersphinx",
39 "sphinx.ext.napoleon",
40 "sphinx.ext.todo",
41 "sphinx.ext.viewcode",
42 ]
43
44 # List of patterns, relative to source directory, that match files and
45 # directories to ignore when looking for source files.
46 # This pattern also affects html_static_path and html_extra_path.
47 exclude_patterns = ["_build"]
48
49 nitpicky = True
50 nitpick_ignore = [
51 # https://github.com/sphinx-doc/sphinx/issues/8127
52 ("py:class", ".."),
53 # TODO: can't figure out why this isn't found
54 ("py:class", "LightningDataModule"),
55 ]
56
57
58 # -- Options for HTML output -------------------------------------------------
59
60 # The theme to use for HTML and HTML Help pages. See the documentation for
61 # a list of builtin themes.
62 html_theme = "pytorch_sphinx_theme"
63 html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
64
65 # Theme options are theme-specific and customize the look and feel of a theme
66 # further. For a list of options available for each theme, see the
67 # documentation.
68 html_theme_options = {
69 "collapse_navigation": False,
70 "display_version": True,
71 "logo_only": True,
72 "pytorch_project": "docs",
73 "navigation_with_keys": True,
74 "analytics_id": "UA-117752657-2",
75 }
76
77 # -- Extension configuration -------------------------------------------------
78
79 # sphinx.ext.autodoc
80 autodoc_default_options = {
81 "members": True,
82 "special-members": True,
83 "show-inheritance": True,
84 }
85 autodoc_member_order = "bysource"
86 autodoc_typehints = "description"
87
88 # sphinx.ext.intersphinx
89 intersphinx_mapping = {
90 "python": ("https://docs.python.org/3", None),
91 "pytorch-lightning": ("https://pytorch-lightning.readthedocs.io/en/latest/", None),
92 "rasterio": ("https://rasterio.readthedocs.io/en/latest/", None),
93 "torch": ("https://pytorch.org/docs/stable", None),
94 }
95
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -46,6 +46,10 @@
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build"]
+# Sphinx 3.0+ required for:
+# autodoc_typehints = "description"
+needs_sphinx = "3.0"
+
nitpicky = True
nitpick_ignore = [
# https://github.com/sphinx-doc/sphinx/issues/8127
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -46,6 +46,10 @@\n # This pattern also affects html_static_path and html_extra_path.\n exclude_patterns = [\"_build\"]\n \n+# Sphinx 3.0+ required for:\n+# autodoc_typehints = \"description\"\n+needs_sphinx = \"3.0\"\n+\n nitpicky = True\n nitpick_ignore = [\n # https://github.com/sphinx-doc/sphinx/issues/8127\n", "issue": "Determine minimum supported dependency versions\nBefore releasing, we should determine the minimum supported version of each dependency. We should also consider a test with this version just to make sure it doesn't change.\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\n\nimport pytorch_sphinx_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nimport torchgeo # noqa: E402\n\n# -- Project information -----------------------------------------------------\n\nproject = \"torchgeo\"\ncopyright = \"2021, Microsoft Corporation\"\nauthor = \"Adam J. Stewart\"\nversion = \".\".join(torchgeo.__version__.split(\".\")[:2])\nrelease = torchgeo.__version__\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\"]\n\nnitpicky = True\nnitpick_ignore = [\n # https://github.com/sphinx-doc/sphinx/issues/8127\n (\"py:class\", \"..\"),\n # TODO: can't figure out why this isn't found\n (\"py:class\", \"LightningDataModule\"),\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-117752657-2\",\n}\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc\nautodoc_default_options = {\n \"members\": True,\n \"special-members\": True,\n \"show-inheritance\": True,\n}\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\n\n# sphinx.ext.intersphinx\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/latest/\", None),\n \"rasterio\": (\"https://rasterio.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n}\n", "path": "docs/conf.py"}]}
| 1,480 | 118 |
gh_patches_debug_37945
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-4435
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Randomized testing failure on main
<!-- commit 60c8bba11f7314e376ff2fd7e6f211bad11b702e@master -->
Trying to build `master` at commit 60c8bba11f7314e376ff2fd7e6f211bad11b702e failed.
More info at: https://travis-ci.com/Qiskit/qiskit-terra/jobs/208623536
</issue>
<code>
[start of qiskit/quantum_info/operators/symplectic/random.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2020
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14 """
15 Random symplectic operator functions
16 """
17
18 import numpy as np
19 from numpy.random import default_rng
20
21 from .clifford import Clifford
22 from .stabilizer_table import StabilizerTable
23 from .pauli_table import PauliTable
24
25
26 def random_pauli_table(num_qubits, size=1, seed=None):
27 """Return a random PauliTable.
28
29 Args:
30 num_qubits (int): the number of qubits.
31 size (int): Optional. The number of rows of the table (Default: 1).
32 seed (int or np.random.Generator): Optional. Set a fixed seed or
33 generator for RNG.
34
35 Returns:
36 PauliTable: a random PauliTable.
37 """
38 if seed is None:
39 rng = np.random.default_rng()
40 elif isinstance(seed, np.random.Generator):
41 rng = seed
42 else:
43 rng = default_rng(seed)
44
45 table = rng.integers(2, size=(size, 2 * num_qubits)).astype(np.bool)
46 return PauliTable(table)
47
48
49 def random_stabilizer_table(num_qubits, size=1, seed=None):
50 """Return a random StabilizerTable.
51
52 Args:
53 num_qubits (int): the number of qubits.
54 size (int): Optional. The number of rows of the table (Default: 1).
55 seed (int or np.random.Generator): Optional. Set a fixed seed or
56 generator for RNG.
57
58 Returns:
59 PauliTable: a random StabilizerTable.
60 """
61 if seed is None:
62 rng = np.random.default_rng()
63 elif isinstance(seed, np.random.Generator):
64 rng = seed
65 else:
66 rng = default_rng(seed)
67
68 table = rng.integers(2, size=(size, 2 * num_qubits)).astype(np.bool)
69 phase = rng.integers(2, size=size).astype(np.bool)
70 return StabilizerTable(table, phase)
71
72
73 def random_clifford(num_qubits, seed=None):
74 """Return a random Clifford operator.
75
76 The Clifford is sampled using the method of Reference [1].
77
78 Args:
79 num_qubits (int): the number of qubits for the Clifford
80 seed (int or np.random.Generator): Optional. Set a fixed seed or
81 generator for RNG.
82
83 Returns:
84 Clifford: a random Clifford operator.
85
86 Reference:
87 1. S. Bravyi and D. Maslov, *Hadamard-free circuits expose the
88 structure of the Clifford group*.
89 `arXiv:2003.09412 [quant-ph] <https://arxiv.org/abs/2003.09412>`_
90 """
91 if seed is None:
92 rng = np.random.default_rng()
93 elif isinstance(seed, np.random.Generator):
94 rng = seed
95 else:
96 rng = default_rng(seed)
97
98 had, perm = _sample_qmallows(num_qubits, rng)
99
100 gamma1 = np.diag(rng.integers(2, size=num_qubits, dtype=np.int8))
101 gamma2 = np.diag(rng.integers(2, size=num_qubits, dtype=np.int8))
102 delta1 = np.eye(num_qubits, dtype=np.int8)
103 delta2 = delta1.copy()
104
105 _fill_tril(gamma1, rng, symmetric=True)
106 _fill_tril(gamma2, rng, symmetric=True)
107 _fill_tril(delta1, rng)
108 _fill_tril(delta2, rng)
109
110 # Compute stabilizer table
111 zero = np.zeros((num_qubits, num_qubits), dtype=np.int8)
112 prod1 = np.matmul(gamma1, delta1) % 2
113 prod2 = np.matmul(gamma2, delta2) % 2
114 inv1 = _inverse_tril(delta1).transpose()
115 inv2 = _inverse_tril(delta2).transpose()
116 table1 = np.block([[delta1, zero], [prod1, inv1]])
117 table2 = np.block([[delta2, zero], [prod2, inv2]])
118
119 # Apply qubit permutation
120 table = table2[np.concatenate([perm, num_qubits + perm])]
121
122 # Apply layer of Hadamards
123 inds = had * np.arange(1, num_qubits + 1)
124 inds = inds[inds > 0] - 1
125 lhs_inds = np.concatenate([inds, inds + num_qubits])
126 rhs_inds = np.concatenate([inds + num_qubits, inds])
127 table[lhs_inds, :] = table[rhs_inds, :]
128
129 # Apply table
130 table = np.mod(np.matmul(table1, table), 2).astype(np.bool)
131
132 # Generate random phases
133 phase = rng.integers(2, size=2 * num_qubits).astype(np.bool)
134 return Clifford(StabilizerTable(table, phase))
135
136
137 def _sample_qmallows(n, rng=None):
138 """Sample from the quantum Mallows distribution"""
139
140 if rng is None:
141 rng = np.random.default_rng()
142
143 # Hadmard layer
144 had = np.zeros(n, dtype=np.bool)
145
146 # Permutation layer
147 perm = np.zeros(n, dtype=int)
148
149 inds = list(range(n))
150 for i in range(n):
151 m = n - i
152 eps = 4 ** (-m)
153 r = rng.uniform(0, 1)
154 index = -int(np.ceil(np.log2(r + (1 - r) * eps)))
155 had[i] = index < m
156 if index < m:
157 k = index
158 else:
159 k = 2 * m - index - 1
160 perm[i] = inds[k]
161 del inds[k]
162 return had, perm
163
164
165 def _fill_tril(mat, rng, symmetric=False):
166 """Add symmetric random ints to off diagonals"""
167 dim = mat.shape[0]
168 # Optimized for low dimensions
169 if dim == 1:
170 return
171
172 if dim <= 4:
173 mat[1, 0] = rng.integers(2, dtype=np.int8)
174 if symmetric:
175 mat[0, 1] = mat[1, 0]
176 if dim > 2:
177 mat[2, 0] = rng.integers(2, dtype=np.int8)
178 mat[2, 1] = rng.integers(2, dtype=np.int8)
179 if symmetric:
180 mat[0, 2] = mat[2, 0]
181 mat[1, 2] = mat[2, 1]
182 if dim > 3:
183 mat[3, 0] = rng.integers(2, dtype=np.int8)
184 mat[3, 1] = rng.integers(2, dtype=np.int8)
185 mat[3, 2] = rng.integers(2, dtype=np.int8)
186 if symmetric:
187 mat[0, 3] = mat[3, 0]
188 mat[1, 3] = mat[3, 1]
189 mat[2, 3] = mat[3, 2]
190 return
191
192 # Use numpy indices for larger dimensions
193 rows, cols = np.tril_indices(dim, -1)
194 vals = rng.integers(2, size=rows.size, dtype=np.int8)
195 mat[(rows, cols)] = vals
196 if symmetric:
197 mat[(cols, rows)] = vals
198
199
200 def _inverse_tril(mat):
201 """Invert a lower-triangular matrix with unit diagonal."""
202 # Optimized inversion function for low dimensions
203 dim = mat.shape[0]
204
205 if dim <= 2:
206 return mat
207
208 if dim <= 5:
209 inv = mat.copy()
210 inv[2, 0] = (mat[2, 0] ^ (mat[1, 0] & mat[2, 1]))
211 if dim > 3:
212 inv[3, 1] = (mat[3, 1] ^ (mat[2, 1] & mat[3, 2]))
213 inv[3, 0] = mat[3, 0] ^ (mat[3, 2] & mat[2, 0]) ^ (mat[1, 0] & inv[3, 1])
214 if dim > 4:
215 inv[4, 2] = ((mat[4, 2] ^ (mat[3, 2] & mat[4, 3]))) & 1
216 inv[4, 1] = mat[4, 1] ^ (mat[4, 3] & mat[3, 1]) ^ (mat[2, 1] & inv[4, 2])
217 inv[4, 0] = mat[4, 0] ^ (mat[1, 0] & inv[4, 1]) ^ (
218 mat[2, 0] & inv[4, 2]) ^ (mat[3, 0] & mat[4, 3])
219 return inv % 2
220
221 # For higher dimensions we use Numpy's inverse function
222 # however this function tends to fail and result in a non-symplectic
223 # final matrix if n is too large.
224 max_np_inv = 150
225 if dim <= max_np_inv:
226 return np.linalg.inv(mat).astype(np.int8) % 2
227
228 # For very large matrices we divide the matrix into 4 blocks of
229 # roughly equal size and use the analytic formula for the inverse
230 # of a block lower-triangular matrix:
231 # inv([[A, 0],[C, D]]) = [[inv(A), 0], [inv(D).C.inv(A), inv(D)]]
232 # call the inverse function recursively to compute inv(A) and invD
233
234 dim1 = dim // 2
235 mat_a = _inverse_tril(mat[0:dim1, 0:dim1])
236 mat_d = _inverse_tril(mat[dim1:dim, dim1:dim])
237 mat_c = np.matmul(np.matmul(mat_d, mat[dim1:dim, 0:dim1]), mat_a)
238 inv = np.block([[mat_a, np.zeros((dim1, dim - dim1), dtype=int)], [mat_c, mat_d]])
239 return inv % 2
240
[end of qiskit/quantum_info/operators/symplectic/random.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/qiskit/quantum_info/operators/symplectic/random.py b/qiskit/quantum_info/operators/symplectic/random.py
--- a/qiskit/quantum_info/operators/symplectic/random.py
+++ b/qiskit/quantum_info/operators/symplectic/random.py
@@ -107,12 +107,18 @@
_fill_tril(delta1, rng)
_fill_tril(delta2, rng)
+ # For large num_qubits numpy.inv function called below can
+ # return invalid output leading to a non-symplectic Clifford
+ # being generated. This can be prevented by manually forcing
+ # block inversion of the matrix.
+ block_inverse_threshold = 50
+
# Compute stabilizer table
zero = np.zeros((num_qubits, num_qubits), dtype=np.int8)
prod1 = np.matmul(gamma1, delta1) % 2
prod2 = np.matmul(gamma2, delta2) % 2
- inv1 = _inverse_tril(delta1).transpose()
- inv2 = _inverse_tril(delta2).transpose()
+ inv1 = _inverse_tril(delta1, block_inverse_threshold).transpose()
+ inv2 = _inverse_tril(delta2, block_inverse_threshold).transpose()
table1 = np.block([[delta1, zero], [prod1, inv1]])
table2 = np.block([[delta2, zero], [prod2, inv2]])
@@ -197,7 +203,7 @@
mat[(cols, rows)] = vals
-def _inverse_tril(mat):
+def _inverse_tril(mat, block_inverse_threshold):
"""Invert a lower-triangular matrix with unit diagonal."""
# Optimized inversion function for low dimensions
dim = mat.shape[0]
@@ -221,8 +227,7 @@
# For higher dimensions we use Numpy's inverse function
# however this function tends to fail and result in a non-symplectic
# final matrix if n is too large.
- max_np_inv = 150
- if dim <= max_np_inv:
+ if dim <= block_inverse_threshold:
return np.linalg.inv(mat).astype(np.int8) % 2
# For very large matrices we divide the matrix into 4 blocks of
@@ -232,8 +237,8 @@
# call the inverse function recursively to compute inv(A) and invD
dim1 = dim // 2
- mat_a = _inverse_tril(mat[0:dim1, 0:dim1])
- mat_d = _inverse_tril(mat[dim1:dim, dim1:dim])
+ mat_a = _inverse_tril(mat[0:dim1, 0:dim1], block_inverse_threshold)
+ mat_d = _inverse_tril(mat[dim1:dim, dim1:dim], block_inverse_threshold)
mat_c = np.matmul(np.matmul(mat_d, mat[dim1:dim, 0:dim1]), mat_a)
inv = np.block([[mat_a, np.zeros((dim1, dim - dim1), dtype=int)], [mat_c, mat_d]])
return inv % 2
|
{"golden_diff": "diff --git a/qiskit/quantum_info/operators/symplectic/random.py b/qiskit/quantum_info/operators/symplectic/random.py\n--- a/qiskit/quantum_info/operators/symplectic/random.py\n+++ b/qiskit/quantum_info/operators/symplectic/random.py\n@@ -107,12 +107,18 @@\n _fill_tril(delta1, rng)\n _fill_tril(delta2, rng)\n \n+ # For large num_qubits numpy.inv function called below can\n+ # return invalid output leading to a non-symplectic Clifford\n+ # being generated. This can be prevented by manually forcing\n+ # block inversion of the matrix.\n+ block_inverse_threshold = 50\n+\n # Compute stabilizer table\n zero = np.zeros((num_qubits, num_qubits), dtype=np.int8)\n prod1 = np.matmul(gamma1, delta1) % 2\n prod2 = np.matmul(gamma2, delta2) % 2\n- inv1 = _inverse_tril(delta1).transpose()\n- inv2 = _inverse_tril(delta2).transpose()\n+ inv1 = _inverse_tril(delta1, block_inverse_threshold).transpose()\n+ inv2 = _inverse_tril(delta2, block_inverse_threshold).transpose()\n table1 = np.block([[delta1, zero], [prod1, inv1]])\n table2 = np.block([[delta2, zero], [prod2, inv2]])\n \n@@ -197,7 +203,7 @@\n mat[(cols, rows)] = vals\n \n \n-def _inverse_tril(mat):\n+def _inverse_tril(mat, block_inverse_threshold):\n \"\"\"Invert a lower-triangular matrix with unit diagonal.\"\"\"\n # Optimized inversion function for low dimensions\n dim = mat.shape[0]\n@@ -221,8 +227,7 @@\n # For higher dimensions we use Numpy's inverse function\n # however this function tends to fail and result in a non-symplectic\n # final matrix if n is too large.\n- max_np_inv = 150\n- if dim <= max_np_inv:\n+ if dim <= block_inverse_threshold:\n return np.linalg.inv(mat).astype(np.int8) % 2\n \n # For very large matrices we divide the matrix into 4 blocks of\n@@ -232,8 +237,8 @@\n # call the inverse function recursively to compute inv(A) and invD\n \n dim1 = dim // 2\n- mat_a = _inverse_tril(mat[0:dim1, 0:dim1])\n- mat_d = _inverse_tril(mat[dim1:dim, dim1:dim])\n+ mat_a = _inverse_tril(mat[0:dim1, 0:dim1], block_inverse_threshold)\n+ mat_d = _inverse_tril(mat[dim1:dim, dim1:dim], block_inverse_threshold)\n mat_c = np.matmul(np.matmul(mat_d, mat[dim1:dim, 0:dim1]), mat_a)\n inv = np.block([[mat_a, np.zeros((dim1, dim - dim1), dtype=int)], [mat_c, mat_d]])\n return inv % 2\n", "issue": "Randomized testing failure on main\n<!-- commit 60c8bba11f7314e376ff2fd7e6f211bad11b702e@master -->\nTrying to build `master` at commit 60c8bba11f7314e376ff2fd7e6f211bad11b702e failed.\nMore info at: https://travis-ci.com/Qiskit/qiskit-terra/jobs/208623536\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2020\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\"\"\"\nRandom symplectic operator functions\n\"\"\"\n\nimport numpy as np\nfrom numpy.random import default_rng\n\nfrom .clifford import Clifford\nfrom .stabilizer_table import StabilizerTable\nfrom .pauli_table import PauliTable\n\n\ndef random_pauli_table(num_qubits, size=1, seed=None):\n \"\"\"Return a random PauliTable.\n\n Args:\n num_qubits (int): the number of qubits.\n size (int): Optional. 
The number of rows of the table (Default: 1).\n seed (int or np.random.Generator): Optional. Set a fixed seed or\n generator for RNG.\n\n Returns:\n PauliTable: a random PauliTable.\n \"\"\"\n if seed is None:\n rng = np.random.default_rng()\n elif isinstance(seed, np.random.Generator):\n rng = seed\n else:\n rng = default_rng(seed)\n\n table = rng.integers(2, size=(size, 2 * num_qubits)).astype(np.bool)\n return PauliTable(table)\n\n\ndef random_stabilizer_table(num_qubits, size=1, seed=None):\n \"\"\"Return a random StabilizerTable.\n\n Args:\n num_qubits (int): the number of qubits.\n size (int): Optional. The number of rows of the table (Default: 1).\n seed (int or np.random.Generator): Optional. Set a fixed seed or\n generator for RNG.\n\n Returns:\n PauliTable: a random StabilizerTable.\n \"\"\"\n if seed is None:\n rng = np.random.default_rng()\n elif isinstance(seed, np.random.Generator):\n rng = seed\n else:\n rng = default_rng(seed)\n\n table = rng.integers(2, size=(size, 2 * num_qubits)).astype(np.bool)\n phase = rng.integers(2, size=size).astype(np.bool)\n return StabilizerTable(table, phase)\n\n\ndef random_clifford(num_qubits, seed=None):\n \"\"\"Return a random Clifford operator.\n\n The Clifford is sampled using the method of Reference [1].\n\n Args:\n num_qubits (int): the number of qubits for the Clifford\n seed (int or np.random.Generator): Optional. Set a fixed seed or\n generator for RNG.\n\n Returns:\n Clifford: a random Clifford operator.\n\n Reference:\n 1. S. Bravyi and D. Maslov, *Hadamard-free circuits expose the\n structure of the Clifford group*.\n `arXiv:2003.09412 [quant-ph] <https://arxiv.org/abs/2003.09412>`_\n \"\"\"\n if seed is None:\n rng = np.random.default_rng()\n elif isinstance(seed, np.random.Generator):\n rng = seed\n else:\n rng = default_rng(seed)\n\n had, perm = _sample_qmallows(num_qubits, rng)\n\n gamma1 = np.diag(rng.integers(2, size=num_qubits, dtype=np.int8))\n gamma2 = np.diag(rng.integers(2, size=num_qubits, dtype=np.int8))\n delta1 = np.eye(num_qubits, dtype=np.int8)\n delta2 = delta1.copy()\n\n _fill_tril(gamma1, rng, symmetric=True)\n _fill_tril(gamma2, rng, symmetric=True)\n _fill_tril(delta1, rng)\n _fill_tril(delta2, rng)\n\n # Compute stabilizer table\n zero = np.zeros((num_qubits, num_qubits), dtype=np.int8)\n prod1 = np.matmul(gamma1, delta1) % 2\n prod2 = np.matmul(gamma2, delta2) % 2\n inv1 = _inverse_tril(delta1).transpose()\n inv2 = _inverse_tril(delta2).transpose()\n table1 = np.block([[delta1, zero], [prod1, inv1]])\n table2 = np.block([[delta2, zero], [prod2, inv2]])\n\n # Apply qubit permutation\n table = table2[np.concatenate([perm, num_qubits + perm])]\n\n # Apply layer of Hadamards\n inds = had * np.arange(1, num_qubits + 1)\n inds = inds[inds > 0] - 1\n lhs_inds = np.concatenate([inds, inds + num_qubits])\n rhs_inds = np.concatenate([inds + num_qubits, inds])\n table[lhs_inds, :] = table[rhs_inds, :]\n\n # Apply table\n table = np.mod(np.matmul(table1, table), 2).astype(np.bool)\n\n # Generate random phases\n phase = rng.integers(2, size=2 * num_qubits).astype(np.bool)\n return Clifford(StabilizerTable(table, phase))\n\n\ndef _sample_qmallows(n, rng=None):\n \"\"\"Sample from the quantum Mallows distribution\"\"\"\n\n if rng is None:\n rng = np.random.default_rng()\n\n # Hadmard layer\n had = np.zeros(n, dtype=np.bool)\n\n # Permutation layer\n perm = np.zeros(n, dtype=int)\n\n inds = list(range(n))\n for i in range(n):\n m = n - i\n eps = 4 ** (-m)\n r = rng.uniform(0, 1)\n index = -int(np.ceil(np.log2(r + (1 - r) * 
eps)))\n had[i] = index < m\n if index < m:\n k = index\n else:\n k = 2 * m - index - 1\n perm[i] = inds[k]\n del inds[k]\n return had, perm\n\n\ndef _fill_tril(mat, rng, symmetric=False):\n \"\"\"Add symmetric random ints to off diagonals\"\"\"\n dim = mat.shape[0]\n # Optimized for low dimensions\n if dim == 1:\n return\n\n if dim <= 4:\n mat[1, 0] = rng.integers(2, dtype=np.int8)\n if symmetric:\n mat[0, 1] = mat[1, 0]\n if dim > 2:\n mat[2, 0] = rng.integers(2, dtype=np.int8)\n mat[2, 1] = rng.integers(2, dtype=np.int8)\n if symmetric:\n mat[0, 2] = mat[2, 0]\n mat[1, 2] = mat[2, 1]\n if dim > 3:\n mat[3, 0] = rng.integers(2, dtype=np.int8)\n mat[3, 1] = rng.integers(2, dtype=np.int8)\n mat[3, 2] = rng.integers(2, dtype=np.int8)\n if symmetric:\n mat[0, 3] = mat[3, 0]\n mat[1, 3] = mat[3, 1]\n mat[2, 3] = mat[3, 2]\n return\n\n # Use numpy indices for larger dimensions\n rows, cols = np.tril_indices(dim, -1)\n vals = rng.integers(2, size=rows.size, dtype=np.int8)\n mat[(rows, cols)] = vals\n if symmetric:\n mat[(cols, rows)] = vals\n\n\ndef _inverse_tril(mat):\n \"\"\"Invert a lower-triangular matrix with unit diagonal.\"\"\"\n # Optimized inversion function for low dimensions\n dim = mat.shape[0]\n\n if dim <= 2:\n return mat\n\n if dim <= 5:\n inv = mat.copy()\n inv[2, 0] = (mat[2, 0] ^ (mat[1, 0] & mat[2, 1]))\n if dim > 3:\n inv[3, 1] = (mat[3, 1] ^ (mat[2, 1] & mat[3, 2]))\n inv[3, 0] = mat[3, 0] ^ (mat[3, 2] & mat[2, 0]) ^ (mat[1, 0] & inv[3, 1])\n if dim > 4:\n inv[4, 2] = ((mat[4, 2] ^ (mat[3, 2] & mat[4, 3]))) & 1\n inv[4, 1] = mat[4, 1] ^ (mat[4, 3] & mat[3, 1]) ^ (mat[2, 1] & inv[4, 2])\n inv[4, 0] = mat[4, 0] ^ (mat[1, 0] & inv[4, 1]) ^ (\n mat[2, 0] & inv[4, 2]) ^ (mat[3, 0] & mat[4, 3])\n return inv % 2\n\n # For higher dimensions we use Numpy's inverse function\n # however this function tends to fail and result in a non-symplectic\n # final matrix if n is too large.\n max_np_inv = 150\n if dim <= max_np_inv:\n return np.linalg.inv(mat).astype(np.int8) % 2\n\n # For very large matrices we divide the matrix into 4 blocks of\n # roughly equal size and use the analytic formula for the inverse\n # of a block lower-triangular matrix:\n # inv([[A, 0],[C, D]]) = [[inv(A), 0], [inv(D).C.inv(A), inv(D)]]\n # call the inverse function recursively to compute inv(A) and invD\n\n dim1 = dim // 2\n mat_a = _inverse_tril(mat[0:dim1, 0:dim1])\n mat_d = _inverse_tril(mat[dim1:dim, dim1:dim])\n mat_c = np.matmul(np.matmul(mat_d, mat[dim1:dim, 0:dim1]), mat_a)\n inv = np.block([[mat_a, np.zeros((dim1, dim - dim1), dtype=int)], [mat_c, mat_d]])\n return inv % 2\n", "path": "qiskit/quantum_info/operators/symplectic/random.py"}]}
| 3,696 | 720 |
gh_patches_debug_225
|
rasdani/github-patches
|
git_diff
|
mlcommons__GaNDLF-722
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move unit testing data to the MLCommons Storage
**Is your feature request related to a problem? Please describe.**
Currently, the unit testing data is on UPenn Box - which is inconvenient for someone without access who wants to make any updates.
**Describe the solution you'd like**
Changing this to the MLCommons storage would make things much easier from an admin perspective.
**Describe alternatives you've considered**
N.A.
**Additional context**
N.A.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import sys, re, os
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 try:
13 with open("README.md") as readme_file:
14 readme = readme_file.read()
15 except Exception as error:
16 readme = "No README information found."
17 sys.stderr.write("Warning: Could not open '%s' due %s\n" % ("README.md", error))
18
19
20 class CustomInstallCommand(install):
21 def run(self):
22 install.run(self)
23
24
25 class CustomDevelopCommand(develop):
26 def run(self):
27 develop.run(self)
28
29
30 class CustomEggInfoCommand(egg_info):
31 def run(self):
32 egg_info.run(self)
33
34
35 try:
36 filepath = "GANDLF/version.py"
37 version_file = open(filepath)
38 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
39
40 except Exception as error:
41 __version__ = "0.0.1"
42 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
43
44 # Handle cases where specific files need to be bundled into the final package as installed via PyPI
45 dockerfiles = [
46 item
47 for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))
48 if (os.path.isfile(item) and item.startswith("Dockerfile-"))
49 ]
50 entrypoint_files = [
51 item
52 for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))
53 if (os.path.isfile(item) and item.startswith("gandlf_"))
54 ]
55 setup_files = ["setup.py", ".dockerignore", "pyproject.toml", "MANIFEST.in"]
56 all_extra_files = dockerfiles + entrypoint_files + setup_files
57 all_extra_files_pathcorrected = [os.path.join("../", item) for item in all_extra_files]
58 # find_packages should only ever find these as subpackages of gandlf, not as top-level packages
59 # generate this dynamically?
60 # GANDLF.GANDLF is needed to prevent recursion madness in deployments
61 toplevel_package_excludes = [
62 "GANDLF.GANDLF",
63 "anonymize",
64 "cli",
65 "compute",
66 "data",
67 "grad_clipping",
68 "losses",
69 "metrics",
70 "models",
71 "optimizers",
72 "schedulers",
73 "utils",
74 ]
75
76
77 requirements = [
78 "torch==1.13.1",
79 "black",
80 "numpy==1.22.0",
81 "scipy",
82 "SimpleITK!=2.0.*",
83 "SimpleITK!=2.2.1", # https://github.com/mlcommons/GaNDLF/issues/536
84 "torchvision",
85 "tqdm",
86 "torchio==0.18.75",
87 "pandas<2.0.0",
88 "scikit-learn>=0.23.2",
89 "scikit-image>=0.19.1",
90 "setuptools",
91 "seaborn",
92 "pyyaml",
93 "tiffslide",
94 "matplotlib",
95 "requests>=2.25.0",
96 "pytest",
97 "coverage",
98 "pytest-cov",
99 "psutil",
100 "medcam",
101 "opencv-python",
102 "torchmetrics==0.8.1",
103 "zarr==2.10.3",
104 "pydicom",
105 "onnx",
106 "torchinfo==1.7.0",
107 "segmentation-models-pytorch==0.3.2",
108 "ACSConv==0.1.1",
109 "docker",
110 "dicom-anonymizer",
111 "twine",
112 "zarr",
113 "keyring",
114 ]
115
116 if __name__ == "__main__":
117 setup(
118 name="GANDLF",
119 version=__version__,
120 author="MLCommons",
121 author_email="[email protected]",
122 python_requires=">=3.8",
123 packages=find_packages(
124 where=os.path.dirname(os.path.abspath(__file__)),
125 exclude=toplevel_package_excludes,
126 ),
127 cmdclass={
128 "install": CustomInstallCommand,
129 "develop": CustomDevelopCommand,
130 "egg_info": CustomEggInfoCommand,
131 },
132 scripts=[
133 "gandlf_run",
134 "gandlf_constructCSV",
135 "gandlf_collectStats",
136 "gandlf_patchMiner",
137 "gandlf_preprocess",
138 "gandlf_anonymizer",
139 "gandlf_verifyInstall",
140 "gandlf_configGenerator",
141 "gandlf_recoverConfig",
142 "gandlf_deploy",
143 "gandlf_optimizeModel",
144 "gandlf_generateMetrics",
145 ],
146 classifiers=[
147 "Development Status :: 3 - Alpha",
148 "Intended Audience :: Science/Research",
149 "License :: OSI Approved :: Apache Software License",
150 "Natural Language :: English",
151 "Operating System :: OS Independent",
152 "Programming Language :: Python :: 3.8",
153 "Programming Language :: Python :: 3.9",
154 "Programming Language :: Python :: 3.10",
155 "Topic :: Scientific/Engineering :: Medical Science Apps.",
156 ],
157 description=(
158 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
159 ),
160 install_requires=requirements,
161 license="Apache-2.0",
162 long_description=readme,
163 long_description_content_type="text/markdown",
164 include_package_data=True,
165 package_data={"GANDLF": all_extra_files_pathcorrected},
166 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch",
167 zip_safe=False,
168 )
169
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -92,7 +92,7 @@
"pyyaml",
"tiffslide",
"matplotlib",
- "requests>=2.25.0",
+ "gdown",
"pytest",
"coverage",
"pytest-cov",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -92,7 +92,7 @@\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n- \"requests>=2.25.0\",\n+ \"gdown\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n", "issue": "Move unit testing data to the MLCommons Storage\n**Is your feature request related to a problem? Please describe.**\r\nCurrently, the unit testing data is on UPenn Box - which is inconvenient for someone without access who wants to make any updates. \r\n\r\n**Describe the solution you'd like**\r\nChanging this to the MLCommons storage would make things much easier from an admin perspective.\r\n\r\n**Describe alternatives you've considered**\r\nN.A.\r\n\r\n**Additional context**\r\nN.A.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport sys, re, os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\ntry:\n with open(\"README.md\") as readme_file:\n readme = readme_file.read()\nexcept Exception as error:\n readme = \"No README information found.\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (\"README.md\", error))\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\n# Handle cases where specific files need to be bundled into the final package as installed via PyPI\ndockerfiles = [\n item\n for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))\n if (os.path.isfile(item) and item.startswith(\"Dockerfile-\"))\n]\nentrypoint_files = [\n item\n for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))\n if (os.path.isfile(item) and item.startswith(\"gandlf_\"))\n]\nsetup_files = [\"setup.py\", \".dockerignore\", \"pyproject.toml\", \"MANIFEST.in\"]\nall_extra_files = dockerfiles + entrypoint_files + setup_files\nall_extra_files_pathcorrected = [os.path.join(\"../\", item) for item in all_extra_files]\n# find_packages should only ever find these as subpackages of gandlf, not as top-level packages\n# generate this dynamically?\n# GANDLF.GANDLF is needed to prevent recursion madness in deployments\ntoplevel_package_excludes = [\n \"GANDLF.GANDLF\",\n \"anonymize\",\n \"cli\",\n \"compute\",\n \"data\",\n \"grad_clipping\",\n \"losses\",\n \"metrics\",\n \"models\",\n \"optimizers\",\n \"schedulers\",\n \"utils\",\n]\n\n\nrequirements = [\n \"torch==1.13.1\",\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.75\",\n \"pandas<2.0.0\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==0.8.1\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n 
\"segmentation-models-pytorch==0.3.2\",\n \"ACSConv==0.1.1\",\n \"docker\",\n \"dicom-anonymizer\",\n \"twine\",\n \"zarr\",\n \"keyring\",\n]\n\nif __name__ == \"__main__\":\n setup(\n name=\"GANDLF\",\n version=__version__,\n author=\"MLCommons\",\n author_email=\"[email protected]\",\n python_requires=\">=3.8\",\n packages=find_packages(\n where=os.path.dirname(os.path.abspath(__file__)),\n exclude=toplevel_package_excludes,\n ),\n cmdclass={\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n \"gandlf_configGenerator\",\n \"gandlf_recoverConfig\",\n \"gandlf_deploy\",\n \"gandlf_optimizeModel\",\n \"gandlf_generateMetrics\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps.\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"Apache-2.0\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n package_data={\"GANDLF\": all_extra_files_pathcorrected},\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch\",\n zip_safe=False,\n )\n", "path": "setup.py"}]}
| 2,292 | 79 |
gh_patches_debug_556
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-804
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.0.2
On the docket:
+ [x] Add a test of pypi index rendering. (#799)
+ [x] Fix `iter_compatible_interpreters` path biasing. (#798)
+ [x] Fix current platform handling. #801
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '2.0.1'
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '2.0.1'
+__version__ = '2.0.2'
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '2.0.1'\n+__version__ = '2.0.2'\n", "issue": "Release 2.0.2\nOn the docket:\r\n\r\n+ [x] Add a test of pypi index rendering. (#799)\r\n+ [x] Fix `iter_compatible_interpreters` path biasing. (#798)\r\n+ [x] Fix current platform handling. #801\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.0.1'\n", "path": "pex/version.py"}]}
| 648 | 94 |
gh_patches_debug_28102
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-2428
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Goodgame plugin not worked.
<!--
Thanks for reporting a plugin issue!
USE THE TEMPLATE. Otherwise your plugin issue may be rejected.
First, see the contribution guidelines:
https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink
Also check the list of open and closed plugin issues:
https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22
Please see the text preview to avoid unnecessary formatting errors.
-->
## Plugin Issue
<!-- Replace [ ] with [x] in order to check the box -->
- [x] This is a plugin issue and I have read the contribution guidelines.
### Description
<!-- Explain the plugin issue as thoroughly as you can. -->
It looks like the plugin can no longer open streams.
### Reproduction steps / Explicit stream URLs to test
https://goodgame.ru/channel/Miker/#autoplay
<!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. -->
1. ...
2. ...
3. ...
### Log output
<!--
TEXT LOG OUTPUT IS REQUIRED for a plugin issue!
Use the `--loglevel debug` parameter and avoid using parameters which suppress log output.
https://streamlink.github.io/cli.html#cmdoption-l
Make sure to **remove usernames and passwords**
You can copy the output to https://gist.github.com/ or paste it below.
-->
```
REPLACE THIS TEXT WITH THE LOG OUTPUT
```
c:\>streamlink --loglevel debug https://goodgame.ru/channel/Miker/#autoplay best
[cli][debug] OS: Windows 7
[cli][debug] Python: 3.6.6
[cli][debug] Streamlink: 1.1.1
[cli][debug] Requests(2.21.0), Socks(1.6.7), Websocket(0.56.0)
[cli][info] Found matching plugin goodgame for URL https://goodgame.ru/channel/Miker/#autoplay
Traceback (most recent call last):
File "runpy.py", line 193, in _run_module_as_main
File "runpy.py", line 85, in _run_code
File "C:\Program Files (x86)\Streamlink\bin\streamlink.exe\__main__.py", line 18, in <module>
File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 1033, in main
handle_url()
File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 577, in handle_url
streams = fetch_streams(plugin)
File "C:\Program Files (x86)\Streamlink\pkgs\streamlink_cli\main.py", line 457, in fetch_streams
sorting_excludes=args.stream_sorting_excludes)
File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugin\plugin.py", line 317, in streams
ostreams = self._get_streams()
File "C:\Program Files (x86)\Streamlink\pkgs\streamlink\plugins\goodgame.py", line 49, in _get_str
eams
**channel_info)
File "logging\__init__.py", line 1295, in debug
TypeError: _log() got an unexpected keyword argument 'id'
### Additional comments, screenshots, etc.
[Love Streamlink? Please consider supporting our collective. Thanks!](https://opencollective.com/streamlink/donate)
</issue>
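The traceback narrows the problem down to the `self.logger.debug(..., **channel_info)` call: the standard-library `logging.Logger.debug()` only accepts a small fixed set of keyword arguments (such as `exc_info` and `extra`), so spreading a dict that contains an `id` key into it raises the `TypeError` shown above. Below is a minimal, self-contained sketch of the failure and of the format-first workaround; it is not Streamlink code, and the dictionary contents are invented for illustration.

```python
# Standalone illustration (not Streamlink code); the dict values are invented.
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

channel_info = {"id": 5, "channelkey": "somechannel", "streamkey": "12345", "status": True}

try:
    # Fails: arbitrary keyword arguments are not part of Logger.debug()'s API.
    log.debug("channelkey={channelkey} pid={streamkey} online={status}", **channel_info)
except TypeError as exc:
    print(exc)  # e.g. "_log() got an unexpected keyword argument 'id'"

# Works: build the message first, then hand the finished string to the logger.
log.debug("id={id} channelkey={channelkey} pid={streamkey} online={status}"
          .format(**channel_info))
```

The golden diff below takes the same approach: it switches the plugin to a module-level `logging.getLogger(__name__)` logger and formats the message with `.format(**channel_info)` before logging it.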
<code>
[start of src/streamlink/plugins/goodgame.py]
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.stream import HLSStream
5 from streamlink.utils import parse_json
6
7 HLS_URL_FORMAT = "https://hls.goodgame.ru/hls/{0}{1}.m3u8"
8 QUALITIES = {
9 "1080p": "",
10 "720p": "_720",
11 "480p": "_480",
12 "240p": "_240"
13 }
14
15 _url_re = re.compile(r"https?://(?:www\.)?goodgame.ru/channel/(?P<user>[^/]+)")
16 _apidata_re = re.compile(r'''(?P<quote>["']?)channel(?P=quote)\s*:\s*(?P<data>{.*?})\s*,''')
17 _ddos_re = re.compile(r'document.cookie="(__DDOS_[^;]+)')
18
19
20 class GoodGame(Plugin):
21 @classmethod
22 def can_handle_url(cls, url):
23 return _url_re.match(url)
24
25 def _check_stream(self, url):
26 res = self.session.http.get(url, acceptable_status=(200, 404))
27 if res.status_code == 200:
28 return True
29
30 def _get_streams(self):
31 headers = {
32 "Referer": self.url
33 }
34 res = self.session.http.get(self.url, headers=headers)
35
36 match = _ddos_re.search(res.text)
37 if match:
38 self.logger.debug("Anti-DDOS bypass...")
39 headers["Cookie"] = match.group(1)
40 res = self.session.http.get(self.url, headers=headers)
41
42 match = _apidata_re.search(res.text)
43 channel_info = match and parse_json(match.group("data"))
44 if not channel_info:
45 self.logger.error("Could not find channel info")
46 return
47
48 self.logger.debug("Found channel info: channelkey={channelkey} pid={streamkey} online={status}",
49 **channel_info)
50 if not channel_info['status']:
51 self.logger.debug("Channel appears to be offline")
52
53 streams = {}
54 for name, url_suffix in QUALITIES.items():
55 url = HLS_URL_FORMAT.format(channel_info['streamkey'], url_suffix)
56 if not self._check_stream(url):
57 continue
58
59 streams[name] = HLSStream(self.session, url)
60
61 return streams
62
63
64 __plugin__ = GoodGame
65
[end of src/streamlink/plugins/goodgame.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/goodgame.py b/src/streamlink/plugins/goodgame.py
--- a/src/streamlink/plugins/goodgame.py
+++ b/src/streamlink/plugins/goodgame.py
@@ -1,9 +1,12 @@
import re
+import logging
from streamlink.plugin import Plugin
from streamlink.stream import HLSStream
from streamlink.utils import parse_json
+log = logging.getLogger(__name__)
+
HLS_URL_FORMAT = "https://hls.goodgame.ru/hls/{0}{1}.m3u8"
QUALITIES = {
"1080p": "",
@@ -35,7 +38,7 @@
match = _ddos_re.search(res.text)
if match:
- self.logger.debug("Anti-DDOS bypass...")
+ log.debug("Anti-DDOS bypass...")
headers["Cookie"] = match.group(1)
res = self.session.http.get(self.url, headers=headers)
@@ -45,10 +48,9 @@
self.logger.error("Could not find channel info")
return
- self.logger.debug("Found channel info: channelkey={channelkey} pid={streamkey} online={status}",
- **channel_info)
+ log.debug("Found channel info: id={id} channelkey={channelkey} pid={streamkey} online={status}".format(**channel_info))
if not channel_info['status']:
- self.logger.debug("Channel appears to be offline")
+ log.debug("Channel appears to be offline")
streams = {}
for name, url_suffix in QUALITIES.items():
|
{"golden_diff": "diff --git a/src/streamlink/plugins/goodgame.py b/src/streamlink/plugins/goodgame.py\n--- a/src/streamlink/plugins/goodgame.py\n+++ b/src/streamlink/plugins/goodgame.py\n@@ -1,9 +1,12 @@\n import re\n+import logging\n \n from streamlink.plugin import Plugin\n from streamlink.stream import HLSStream\n from streamlink.utils import parse_json\n \n+log = logging.getLogger(__name__)\n+\n HLS_URL_FORMAT = \"https://hls.goodgame.ru/hls/{0}{1}.m3u8\"\n QUALITIES = {\n \"1080p\": \"\",\n@@ -35,7 +38,7 @@\n \n match = _ddos_re.search(res.text)\n if match:\n- self.logger.debug(\"Anti-DDOS bypass...\")\n+ log.debug(\"Anti-DDOS bypass...\")\n headers[\"Cookie\"] = match.group(1)\n res = self.session.http.get(self.url, headers=headers)\n \n@@ -45,10 +48,9 @@\n self.logger.error(\"Could not find channel info\")\n return\n \n- self.logger.debug(\"Found channel info: channelkey={channelkey} pid={streamkey} online={status}\",\n- **channel_info)\n+ log.debug(\"Found channel info: id={id} channelkey={channelkey} pid={streamkey} online={status}\".format(**channel_info))\n if not channel_info['status']:\n- self.logger.debug(\"Channel appears to be offline\")\n+ log.debug(\"Channel appears to be offline\")\n \n streams = {}\n for name, url_suffix in QUALITIES.items():\n", "issue": "Goodgame plugin not worked.\n<!--\r\nThanks for reporting a plugin issue!\r\nUSE THE TEMPLATE. Otherwise your plugin issue may be rejected.\r\n\r\nFirst, see the contribution guidelines:\r\nhttps://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink\r\n\r\nAlso check the list of open and closed plugin issues:\r\nhttps://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22\r\n\r\nPlease see the text preview to avoid unnecessary formatting errors.\r\n-->\r\n\r\n\r\n## Plugin Issue\r\n\r\n<!-- Replace [ ] with [x] in order to check the box -->\r\n- [x] This is a plugin issue and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\n\r\n<!-- Explain the plugin issue as thoroughly as you can. -->\r\nIt looks like the plugin can no longer open streams.\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\nhttps://goodgame.ru/channel/Miker/#autoplay\r\n<!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. -->\r\n\r\n1. ...\r\n2. ...\r\n3. 
...\r\n\r\n\r\n### Log output\r\n\r\n<!--\r\nTEXT LOG OUTPUT IS REQUIRED for a plugin issue!\r\nUse the `--loglevel debug` parameter and avoid using parameters which suppress log output.\r\nhttps://streamlink.github.io/cli.html#cmdoption-l\r\n\r\nMake sure to **remove usernames and passwords**\r\nYou can copy the output to https://gist.github.com/ or paste it below.\r\n-->\r\n\r\n```\r\nREPLACE THIS TEXT WITH THE LOG OUTPUT\r\n```\r\nc:\\>streamlink --loglevel debug https://goodgame.ru/channel/Miker/#autoplay best\r\n[cli][debug] OS: Windows 7\r\n[cli][debug] Python: 3.6.6\r\n[cli][debug] Streamlink: 1.1.1\r\n[cli][debug] Requests(2.21.0), Socks(1.6.7), Websocket(0.56.0)\r\n[cli][info] Found matching plugin goodgame for URL https://goodgame.ru/channel/Miker/#autoplay\r\nTraceback (most recent call last):\r\n File \"runpy.py\", line 193, in _run_module_as_main\r\n File \"runpy.py\", line 85, in _run_code\r\n File \"C:\\Program Files (x86)\\Streamlink\\bin\\streamlink.exe\\__main__.py\", line 18, in <module>\r\n File \"C:\\Program Files (x86)\\Streamlink\\pkgs\\streamlink_cli\\main.py\", line 1033, in main\r\n handle_url()\r\n File \"C:\\Program Files (x86)\\Streamlink\\pkgs\\streamlink_cli\\main.py\", line 577, in handle_url\r\n streams = fetch_streams(plugin)\r\n File \"C:\\Program Files (x86)\\Streamlink\\pkgs\\streamlink_cli\\main.py\", line 457, in fetch_streams\r\n sorting_excludes=args.stream_sorting_excludes)\r\n File \"C:\\Program Files (x86)\\Streamlink\\pkgs\\streamlink\\plugin\\plugin.py\", line 317, in streams\r\n ostreams = self._get_streams()\r\n File \"C:\\Program Files (x86)\\Streamlink\\pkgs\\streamlink\\plugins\\goodgame.py\", line 49, in _get_str\r\neams\r\n **channel_info)\r\n File \"logging\\__init__.py\", line 1295, in debug\r\nTypeError: _log() got an unexpected keyword argument 'id'\r\n\r\n### Additional comments, screenshots, etc.\r\n\r\n\r\n\r\n[Love Streamlink? Please consider supporting our collective. 
Thanks!](https://opencollective.com/streamlink/donate)\r\n\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_json\n\nHLS_URL_FORMAT = \"https://hls.goodgame.ru/hls/{0}{1}.m3u8\"\nQUALITIES = {\n \"1080p\": \"\",\n \"720p\": \"_720\",\n \"480p\": \"_480\",\n \"240p\": \"_240\"\n}\n\n_url_re = re.compile(r\"https?://(?:www\\.)?goodgame.ru/channel/(?P<user>[^/]+)\")\n_apidata_re = re.compile(r'''(?P<quote>[\"']?)channel(?P=quote)\\s*:\\s*(?P<data>{.*?})\\s*,''')\n_ddos_re = re.compile(r'document.cookie=\"(__DDOS_[^;]+)')\n\n\nclass GoodGame(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _check_stream(self, url):\n res = self.session.http.get(url, acceptable_status=(200, 404))\n if res.status_code == 200:\n return True\n\n def _get_streams(self):\n headers = {\n \"Referer\": self.url\n }\n res = self.session.http.get(self.url, headers=headers)\n\n match = _ddos_re.search(res.text)\n if match:\n self.logger.debug(\"Anti-DDOS bypass...\")\n headers[\"Cookie\"] = match.group(1)\n res = self.session.http.get(self.url, headers=headers)\n\n match = _apidata_re.search(res.text)\n channel_info = match and parse_json(match.group(\"data\"))\n if not channel_info:\n self.logger.error(\"Could not find channel info\")\n return\n\n self.logger.debug(\"Found channel info: channelkey={channelkey} pid={streamkey} online={status}\",\n **channel_info)\n if not channel_info['status']:\n self.logger.debug(\"Channel appears to be offline\")\n\n streams = {}\n for name, url_suffix in QUALITIES.items():\n url = HLS_URL_FORMAT.format(channel_info['streamkey'], url_suffix)\n if not self._check_stream(url):\n continue\n\n streams[name] = HLSStream(self.session, url)\n\n return streams\n\n\n__plugin__ = GoodGame\n", "path": "src/streamlink/plugins/goodgame.py"}]}
| 1,986 | 343 |
gh_patches_debug_8349
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-206
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Windows: Large number of files causes `xargs: ... Bad file number`
Originally here: https://github.com/pre-commit/pre-commit-hooks/issues/41
</issue>
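As the `run_hook` helper below shows, every staged filename is fed to `xargs -0` on stdin, so a large file list can produce a very long command line, which appears to be what the Windows port of xargs chokes on. The fix that was adopted (see the patch at the end of this entry) caps the bytes per invocation with xargs' `-s` option, using 4000 bytes. The following is a rough, purely illustrative sketch of what that batching means, independent of xargs itself; it is not pre-commit code and the file names are invented.

```python
# Rough sketch (not pre-commit code) of the batching that `xargs -0 -s<bytes>`
# performs on a NUL-separated file list: arguments are grouped so that each
# spawned command stays under a byte budget. The 4000-byte figure mirrors the
# value chosen in the patch below.
def chunk_args(file_args, budget=4000):
    batch, used = [], 0
    for arg in file_args:
        if batch and used + len(arg) + 1 > budget:
            yield batch
            batch, used = [], 0
        batch.append(arg)
        used += len(arg) + 1
    if batch:
        yield batch


files = ["some/long/path/file_{}.py".format(i) for i in range(10000)]
print(sum(1 for _ in chunk_args(files)))  # how many separate invocations result
```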
<code>
[start of pre_commit/languages/helpers.py]
1 from __future__ import unicode_literals
2
3 import pipes
4
5
6 def file_args_to_stdin(file_args):
7 return '\0'.join(list(file_args) + [''])
8
9
10 def run_hook(env, hook, file_args):
11 quoted_args = [pipes.quote(arg) for arg in hook['args']]
12 return env.run(
13 ' '.join(['xargs', '-0', hook['entry']] + quoted_args),
14 stdin=file_args_to_stdin(file_args),
15 retcode=None,
16 )
17
18
19 class Environment(object):
20 def __init__(self, repo_cmd_runner):
21 self.repo_cmd_runner = repo_cmd_runner
22
23 @property
24 def env_prefix(self):
25 """env_prefix is a value that is prefixed to the command that is run.
26
27 Usually this is to source a virtualenv, etc.
28
29 Commands basically end up looking like:
30
31 bash -c '{env_prefix} {cmd}'
32
33 so you'll often want to end your prefix with &&
34 """
35 raise NotImplementedError
36
37 def run(self, cmd, **kwargs):
38 """Returns (returncode, stdout, stderr)."""
39 return self.repo_cmd_runner.run(
40 ['bash', '-c', ' '.join([self.env_prefix, cmd])], **kwargs
41 )
42
[end of pre_commit/languages/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py
--- a/pre_commit/languages/helpers.py
+++ b/pre_commit/languages/helpers.py
@@ -10,7 +10,9 @@
def run_hook(env, hook, file_args):
quoted_args = [pipes.quote(arg) for arg in hook['args']]
return env.run(
- ' '.join(['xargs', '-0', hook['entry']] + quoted_args),
+ # Use -s 4000 (slightly less than posix mandated minimum)
+ # This is to prevent "xargs: ... Bad file number" on windows
+ ' '.join(['xargs', '-0', '-s4000', hook['entry']] + quoted_args),
stdin=file_args_to_stdin(file_args),
retcode=None,
)
|
{"golden_diff": "diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py\n--- a/pre_commit/languages/helpers.py\n+++ b/pre_commit/languages/helpers.py\n@@ -10,7 +10,9 @@\n def run_hook(env, hook, file_args):\n quoted_args = [pipes.quote(arg) for arg in hook['args']]\n return env.run(\n- ' '.join(['xargs', '-0', hook['entry']] + quoted_args),\n+ # Use -s 4000 (slightly less than posix mandated minimum)\n+ # This is to prevent \"xargs: ... Bad file number\" on windows\n+ ' '.join(['xargs', '-0', '-s4000', hook['entry']] + quoted_args),\n stdin=file_args_to_stdin(file_args),\n retcode=None,\n )\n", "issue": "Windows: Large number of files causes `xargs: ... Bad file number`\nOriginally here: https://github.com/pre-commit/pre-commit-hooks/issues/41\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport pipes\n\n\ndef file_args_to_stdin(file_args):\n return '\\0'.join(list(file_args) + [''])\n\n\ndef run_hook(env, hook, file_args):\n quoted_args = [pipes.quote(arg) for arg in hook['args']]\n return env.run(\n ' '.join(['xargs', '-0', hook['entry']] + quoted_args),\n stdin=file_args_to_stdin(file_args),\n retcode=None,\n )\n\n\nclass Environment(object):\n def __init__(self, repo_cmd_runner):\n self.repo_cmd_runner = repo_cmd_runner\n\n @property\n def env_prefix(self):\n \"\"\"env_prefix is a value that is prefixed to the command that is run.\n\n Usually this is to source a virtualenv, etc.\n\n Commands basically end up looking like:\n\n bash -c '{env_prefix} {cmd}'\n\n so you'll often want to end your prefix with &&\n \"\"\"\n raise NotImplementedError\n\n def run(self, cmd, **kwargs):\n \"\"\"Returns (returncode, stdout, stderr).\"\"\"\n return self.repo_cmd_runner.run(\n ['bash', '-c', ' '.join([self.env_prefix, cmd])], **kwargs\n )\n", "path": "pre_commit/languages/helpers.py"}]}
| 912 | 182 |
gh_patches_debug_37704
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-6170
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Passing ObjectID as a function argument in local_mode is broken
### System information
- **OS Platform and Distribution**: Ubuntu 18.04
- **Ray installed from (source or binary)**: binary
- **Ray version**: 0.8.0.dev6
- **Python version**: 3.7
- **Exact command to reproduce**: see below
### Describe the problem
The argument passing behavior with local_mode=True vs False seems to be different.
When I run the code snippet below:
```import ray
ray.init(local_mode=True) # Replace with False to get a working example
@ray.remote
def remote_function(x):
obj = x['a']
return ray.get(obj)
a = ray.put(42)
d = {'a': a}
result = remote_function.remote(d)
print(ray.get(result))
```
With local_mode=False I get output `42`, as expected.
With local_mode=True I get the following error:
```
Traceback (most recent call last):
File "/home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py", line 13, in <module>
print(ray.get(result))
File "/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/worker.py", line 2194, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(KeyError): /home/alex/miniconda3/envs/doom-rl/bin/python /home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py (pid=2449, ip=10.136.109.38)
File "/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/local_mode_manager.py", line 55, in execute
results = function(*copy.deepcopy(args))
File "/home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py", line 7, in remote_function
return ray.get(obj)
File "/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/local_mode_manager.py", line 105, in get_objects
raise KeyError("Value for {} not found".format(object_id))
KeyError: 'Value for LocalModeObjectID(89f92e430883458c8107c10ed53eb35b26099831) not found'
```
It looks like the LocalObjectID instance inside `d` loses it's field `value` when it gets deep copied during the "remote" function call (currently it's `local_mode_manager.py:55`). It's hard to tell why exactly that happens, looks like a bug.
</issue>
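The `value` attribute is attached to the `LocalModeObjectID` after construction, and `ray.ObjectID` is an extension type whose copy protocol only round-trips the binary id, so `copy.deepcopy()` appears to rebuild the object without the attached value. The snippet below reproduces that behaviour with plain-Python stand-ins (these are not ray's real classes) and shows the kind of `__deepcopy__` hook that lets the value travel with the copy.

```python
# Self-contained sketch of the failure mode; FakeBaseID and
# FakeLocalModeObjectID are invented stand-ins, not ray classes.
import copy


class FakeBaseID:
    """Mimics a type whose copy/pickle protocol only keeps the binary id."""

    def __init__(self, binary):
        self._binary = binary

    def binary(self):
        return self._binary

    def __reduce__(self):  # copy.deepcopy() falls back to this
        return (self.__class__, (self._binary,))


class FakeLocalModeObjectID(FakeBaseID):
    def __deepcopy__(self, memo=None):  # carry the dynamic attribute across
        new = FakeLocalModeObjectID(self.binary())
        if hasattr(self, "value"):
            new.value = self.value
        return new


plain = FakeBaseID(b"abc")
fixed = FakeLocalModeObjectID(b"abc")
plain.value = fixed.value = 42

print(hasattr(copy.deepcopy({"a": plain})["a"], "value"))  # False -> value lost
print(copy.deepcopy({"a": fixed})["a"].value)              # 42    -> value kept
```

The actual patch below adds `__copy__`/`__deepcopy__` to `LocalModeObjectID` itself and, in addition, resolves any `ObjectID` arguments with `ray.get` before invoking the function.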
<code>
[start of python/ray/local_mode_manager.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import copy
6 import traceback
7
8 from ray import ObjectID
9 from ray.utils import format_error_message
10 from ray.exceptions import RayTaskError
11
12
13 class LocalModeObjectID(ObjectID):
14 """Wrapper class around ray.ObjectID used for local mode.
15
16 Object values are stored directly as a field of the LocalModeObjectID.
17
18 Attributes:
19 value: Field that stores object values. If this field does not exist,
20 it equates to the object not existing in the object store. This is
21 necessary because None is a valid object value.
22 """
23 pass
24
25
26 class LocalModeManager(object):
27 """Used to emulate remote operations when running in local mode."""
28
29 def __init__(self):
30 """Initialize a LocalModeManager."""
31
32 def execute(self, function, function_name, args, kwargs, num_return_vals):
33 """Synchronously executes a "remote" function or actor method.
34
35 Stores results directly in the generated and returned
36 LocalModeObjectIDs. Any exceptions raised during function execution
37 will be stored under all returned object IDs and later raised by the
38 worker.
39
40 Args:
41 function: The function to execute.
42 function_name: Name of the function to execute.
43 args: Arguments to the function. These will not be modified by
44 the function execution.
45 kwargs: Keyword arguments to the function.
46 num_return_vals: Number of expected return values specified in the
47 function's decorator.
48
49 Returns:
50 LocalModeObjectIDs corresponding to the function return values.
51 """
52 object_ids = [
53 LocalModeObjectID.from_random() for _ in range(num_return_vals)
54 ]
55 try:
56 results = function(*copy.deepcopy(args), **copy.deepcopy(kwargs))
57 if num_return_vals == 1:
58 object_ids[0].value = results
59 else:
60 for object_id, result in zip(object_ids, results):
61 object_id.value = result
62 except Exception as e:
63 backtrace = format_error_message(traceback.format_exc())
64 task_error = RayTaskError(function_name, backtrace, e.__class__)
65 for object_id in object_ids:
66 object_id.value = task_error
67
68 return object_ids
69
70 def put_object(self, value):
71 """Store an object in the emulated object store.
72
73 Implemented by generating a LocalModeObjectID and storing the value
74 directly within it.
75
76 Args:
77 value: The value to store.
78
79 Returns:
80 LocalModeObjectID corresponding to the value.
81 """
82 object_id = LocalModeObjectID.from_random()
83 object_id.value = value
84 return object_id
85
86 def get_objects(self, object_ids):
87 """Fetch objects from the emulated object store.
88
89 Accepts only LocalModeObjectIDs and reads values directly from them.
90
91 Args:
92 object_ids: A list of object IDs to fetch values for.
93
94 Raises:
95 TypeError if any of the object IDs are not LocalModeObjectIDs.
96 KeyError if any of the object IDs do not contain values.
97 """
98 results = []
99 for object_id in object_ids:
100 if not isinstance(object_id, LocalModeObjectID):
101 raise TypeError("Only LocalModeObjectIDs are supported "
102 "when running in LOCAL_MODE. Using "
103 "user-generated ObjectIDs will fail.")
104 if not hasattr(object_id, "value"):
105 raise KeyError("Value for {} not found".format(object_id))
106
107 results.append(object_id.value)
108
109 return results
110
111 def free(self, object_ids):
112 """Delete objects from the emulated object store.
113
114 Accepts only LocalModeObjectIDs and deletes their values directly.
115
116 Args:
117 object_ids: A list of ObjectIDs to delete.
118
119 Raises:
120 TypeError if any of the object IDs are not LocalModeObjectIDs.
121 """
122 for object_id in object_ids:
123 if not isinstance(object_id, LocalModeObjectID):
124 raise TypeError("Only LocalModeObjectIDs are supported "
125 "when running in LOCAL_MODE. Using "
126 "user-generated ObjectIDs will fail.")
127 try:
128 del object_id.value
129 except AttributeError:
130 pass
131
[end of python/ray/local_mode_manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/ray/local_mode_manager.py b/python/ray/local_mode_manager.py
--- a/python/ray/local_mode_manager.py
+++ b/python/ray/local_mode_manager.py
@@ -5,6 +5,7 @@
import copy
import traceback
+import ray
from ray import ObjectID
from ray.utils import format_error_message
from ray.exceptions import RayTaskError
@@ -20,7 +21,18 @@
it equates to the object not existing in the object store. This is
necessary because None is a valid object value.
"""
- pass
+
+ def __copy__(self):
+ new = LocalModeObjectID(self.binary())
+ if hasattr(self, "value"):
+ new.value = self.value
+ return new
+
+ def __deepcopy__(self, memo=None):
+ new = LocalModeObjectID(self.binary())
+ if hasattr(self, "value"):
+ new.value = self.value
+ return new
class LocalModeManager(object):
@@ -49,23 +61,37 @@
Returns:
LocalModeObjectIDs corresponding to the function return values.
"""
- object_ids = [
+ return_ids = [
LocalModeObjectID.from_random() for _ in range(num_return_vals)
]
+ new_args = []
+ for i, arg in enumerate(args):
+ if isinstance(arg, ObjectID):
+ new_args.append(ray.get(arg))
+ else:
+ new_args.append(copy.deepcopy(arg))
+
+ new_kwargs = {}
+ for k, v in kwargs.items():
+ if isinstance(v, ObjectID):
+ new_kwargs[k] = ray.get(v)
+ else:
+ new_kwargs[k] = copy.deepcopy(v)
+
try:
- results = function(*copy.deepcopy(args), **copy.deepcopy(kwargs))
+ results = function(*new_args, **new_kwargs)
if num_return_vals == 1:
- object_ids[0].value = results
+ return_ids[0].value = results
else:
- for object_id, result in zip(object_ids, results):
+ for object_id, result in zip(return_ids, results):
object_id.value = result
except Exception as e:
backtrace = format_error_message(traceback.format_exc())
task_error = RayTaskError(function_name, backtrace, e.__class__)
- for object_id in object_ids:
+ for object_id in return_ids:
object_id.value = task_error
- return object_ids
+ return return_ids
def put_object(self, value):
"""Store an object in the emulated object store.
|
{"golden_diff": "diff --git a/python/ray/local_mode_manager.py b/python/ray/local_mode_manager.py\n--- a/python/ray/local_mode_manager.py\n+++ b/python/ray/local_mode_manager.py\n@@ -5,6 +5,7 @@\n import copy\n import traceback\n \n+import ray\n from ray import ObjectID\n from ray.utils import format_error_message\n from ray.exceptions import RayTaskError\n@@ -20,7 +21,18 @@\n it equates to the object not existing in the object store. This is\n necessary because None is a valid object value.\n \"\"\"\n- pass\n+\n+ def __copy__(self):\n+ new = LocalModeObjectID(self.binary())\n+ if hasattr(self, \"value\"):\n+ new.value = self.value\n+ return new\n+\n+ def __deepcopy__(self, memo=None):\n+ new = LocalModeObjectID(self.binary())\n+ if hasattr(self, \"value\"):\n+ new.value = self.value\n+ return new\n \n \n class LocalModeManager(object):\n@@ -49,23 +61,37 @@\n Returns:\n LocalModeObjectIDs corresponding to the function return values.\n \"\"\"\n- object_ids = [\n+ return_ids = [\n LocalModeObjectID.from_random() for _ in range(num_return_vals)\n ]\n+ new_args = []\n+ for i, arg in enumerate(args):\n+ if isinstance(arg, ObjectID):\n+ new_args.append(ray.get(arg))\n+ else:\n+ new_args.append(copy.deepcopy(arg))\n+\n+ new_kwargs = {}\n+ for k, v in kwargs.items():\n+ if isinstance(v, ObjectID):\n+ new_kwargs[k] = ray.get(v)\n+ else:\n+ new_kwargs[k] = copy.deepcopy(v)\n+\n try:\n- results = function(*copy.deepcopy(args), **copy.deepcopy(kwargs))\n+ results = function(*new_args, **new_kwargs)\n if num_return_vals == 1:\n- object_ids[0].value = results\n+ return_ids[0].value = results\n else:\n- for object_id, result in zip(object_ids, results):\n+ for object_id, result in zip(return_ids, results):\n object_id.value = result\n except Exception as e:\n backtrace = format_error_message(traceback.format_exc())\n task_error = RayTaskError(function_name, backtrace, e.__class__)\n- for object_id in object_ids:\n+ for object_id in return_ids:\n object_id.value = task_error\n \n- return object_ids\n+ return return_ids\n \n def put_object(self, value):\n \"\"\"Store an object in the emulated object store.\n", "issue": "Passing ObjectID as a function argument in local_mode is broken\n### System information\r\n- **OS Platform and Distribution**: Ubuntu 18.04\r\n- **Ray installed from (source or binary)**: binary\r\n- **Ray version**: 0.8.0.dev6\r\n- **Python version**: 3.7\r\n- **Exact command to reproduce**: see below\r\n\r\n### Describe the problem\r\nThe argument passing behavior with local_mode=True vs False seems to be different.\r\nWhen I run the code snippet below:\r\n\r\n```import ray\r\nray.init(local_mode=True) # Replace with False to get a working example\r\n\r\[email protected]\r\ndef remote_function(x):\r\n obj = x['a']\r\n return ray.get(obj)\r\n\r\n\r\na = ray.put(42)\r\nd = {'a': a}\r\nresult = remote_function.remote(d)\r\nprint(ray.get(result))\r\n```\r\n\r\nWith local_mode=False I get output `42`, as expected.\r\nWith local_mode=True I get the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py\", line 13, in <module>\r\n print(ray.get(result))\r\n File \"/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/worker.py\", line 2194, in get\r\n raise value.as_instanceof_cause()\r\nray.exceptions.RayTaskError(KeyError): /home/alex/miniconda3/envs/doom-rl/bin/python /home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py (pid=2449, ip=10.136.109.38)\r\n File 
\"/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/local_mode_manager.py\", line 55, in execute\r\n results = function(*copy.deepcopy(args))\r\n File \"/home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py\", line 7, in remote_function\r\n return ray.get(obj)\r\n File \"/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/local_mode_manager.py\", line 105, in get_objects\r\n raise KeyError(\"Value for {} not found\".format(object_id))\r\nKeyError: 'Value for LocalModeObjectID(89f92e430883458c8107c10ed53eb35b26099831) not found'\r\n```\r\nIt looks like the LocalObjectID instance inside `d` loses it's field `value` when it gets deep copied during the \"remote\" function call (currently it's `local_mode_manager.py:55`). It's hard to tell why exactly that happens, looks like a bug.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport copy\nimport traceback\n\nfrom ray import ObjectID\nfrom ray.utils import format_error_message\nfrom ray.exceptions import RayTaskError\n\n\nclass LocalModeObjectID(ObjectID):\n \"\"\"Wrapper class around ray.ObjectID used for local mode.\n\n Object values are stored directly as a field of the LocalModeObjectID.\n\n Attributes:\n value: Field that stores object values. If this field does not exist,\n it equates to the object not existing in the object store. This is\n necessary because None is a valid object value.\n \"\"\"\n pass\n\n\nclass LocalModeManager(object):\n \"\"\"Used to emulate remote operations when running in local mode.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize a LocalModeManager.\"\"\"\n\n def execute(self, function, function_name, args, kwargs, num_return_vals):\n \"\"\"Synchronously executes a \"remote\" function or actor method.\n\n Stores results directly in the generated and returned\n LocalModeObjectIDs. Any exceptions raised during function execution\n will be stored under all returned object IDs and later raised by the\n worker.\n\n Args:\n function: The function to execute.\n function_name: Name of the function to execute.\n args: Arguments to the function. 
These will not be modified by\n the function execution.\n kwargs: Keyword arguments to the function.\n num_return_vals: Number of expected return values specified in the\n function's decorator.\n\n Returns:\n LocalModeObjectIDs corresponding to the function return values.\n \"\"\"\n object_ids = [\n LocalModeObjectID.from_random() for _ in range(num_return_vals)\n ]\n try:\n results = function(*copy.deepcopy(args), **copy.deepcopy(kwargs))\n if num_return_vals == 1:\n object_ids[0].value = results\n else:\n for object_id, result in zip(object_ids, results):\n object_id.value = result\n except Exception as e:\n backtrace = format_error_message(traceback.format_exc())\n task_error = RayTaskError(function_name, backtrace, e.__class__)\n for object_id in object_ids:\n object_id.value = task_error\n\n return object_ids\n\n def put_object(self, value):\n \"\"\"Store an object in the emulated object store.\n\n Implemented by generating a LocalModeObjectID and storing the value\n directly within it.\n\n Args:\n value: The value to store.\n\n Returns:\n LocalModeObjectID corresponding to the value.\n \"\"\"\n object_id = LocalModeObjectID.from_random()\n object_id.value = value\n return object_id\n\n def get_objects(self, object_ids):\n \"\"\"Fetch objects from the emulated object store.\n\n Accepts only LocalModeObjectIDs and reads values directly from them.\n\n Args:\n object_ids: A list of object IDs to fetch values for.\n\n Raises:\n TypeError if any of the object IDs are not LocalModeObjectIDs.\n KeyError if any of the object IDs do not contain values.\n \"\"\"\n results = []\n for object_id in object_ids:\n if not isinstance(object_id, LocalModeObjectID):\n raise TypeError(\"Only LocalModeObjectIDs are supported \"\n \"when running in LOCAL_MODE. Using \"\n \"user-generated ObjectIDs will fail.\")\n if not hasattr(object_id, \"value\"):\n raise KeyError(\"Value for {} not found\".format(object_id))\n\n results.append(object_id.value)\n\n return results\n\n def free(self, object_ids):\n \"\"\"Delete objects from the emulated object store.\n\n Accepts only LocalModeObjectIDs and deletes their values directly.\n\n Args:\n object_ids: A list of ObjectIDs to delete.\n\n Raises:\n TypeError if any of the object IDs are not LocalModeObjectIDs.\n \"\"\"\n for object_id in object_ids:\n if not isinstance(object_id, LocalModeObjectID):\n raise TypeError(\"Only LocalModeObjectIDs are supported \"\n \"when running in LOCAL_MODE. Using \"\n \"user-generated ObjectIDs will fail.\")\n try:\n del object_id.value\n except AttributeError:\n pass\n", "path": "python/ray/local_mode_manager.py"}]}
| 2,342 | 577 |
gh_patches_debug_7286
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-7219
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Should st.experimental_user.email local value be [email protected]?
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Currently st.experimental_user.email returns `[email protected]` when run locally (cite [docs](https://docs.streamlit.io/library/api-reference/personalization/st.experimental_user#local-development)).
However, a more standard and correct approach is to use `example.com` domain ([cite](https://en.wikipedia.org/wiki/Example.com)) which is safer.
Should we switch?
### Reproducible Code Example
```Python
import streamlit as st
st.write(st.experimental_user.email)
```
### Steps To Reproduce
1. Run the code example in a local environment
### Expected Behavior
I see `[email protected]` which is standard
### Current Behavior
I see `[email protected]`, which might not be totally safe and isn't a standard approach.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.25
- Python version: 3.11
- Operating System: MacOS
- Browser: Chrome
### Additional Information
Thanks @jroes for finding this!
</issue>
<code>
[start of lib/streamlit/web/server/browser_websocket_handler.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import base64
16 import binascii
17 import json
18 from typing import Any, Awaitable, Dict, List, Optional, Union
19
20 import tornado.concurrent
21 import tornado.locks
22 import tornado.netutil
23 import tornado.web
24 import tornado.websocket
25 from tornado.websocket import WebSocketHandler
26 from typing_extensions import Final
27
28 from streamlit import config
29 from streamlit.logger import get_logger
30 from streamlit.proto.BackMsg_pb2 import BackMsg
31 from streamlit.proto.ForwardMsg_pb2 import ForwardMsg
32 from streamlit.runtime import Runtime, SessionClient, SessionClientDisconnectedError
33 from streamlit.runtime.runtime_util import serialize_forward_msg
34 from streamlit.web.server.server_util import is_url_from_allowed_origins
35
36 _LOGGER: Final = get_logger(__name__)
37
38
39 class BrowserWebSocketHandler(WebSocketHandler, SessionClient):
40 """Handles a WebSocket connection from the browser"""
41
42 def initialize(self, runtime: Runtime) -> None:
43 self._runtime = runtime
44 self._session_id: Optional[str] = None
45 # The XSRF cookie is normally set when xsrf_form_html is used, but in a
46 # pure-Javascript application that does not use any regular forms we just
47 # need to read the self.xsrf_token manually to set the cookie as a side
48 # effect. See https://www.tornadoweb.org/en/stable/guide/security.html#cross-site-request-forgery-protection
49 # for more details.
50 if config.get_option("server.enableXsrfProtection"):
51 _ = self.xsrf_token
52
53 def check_origin(self, origin: str) -> bool:
54 """Set up CORS."""
55 return super().check_origin(origin) or is_url_from_allowed_origins(origin)
56
57 def write_forward_msg(self, msg: ForwardMsg) -> None:
58 """Send a ForwardMsg to the browser."""
59 try:
60 self.write_message(serialize_forward_msg(msg), binary=True)
61 except tornado.websocket.WebSocketClosedError as e:
62 raise SessionClientDisconnectedError from e
63
64 def select_subprotocol(self, subprotocols: List[str]) -> Optional[str]:
65 """Return the first subprotocol in the given list.
66
67 This method is used by Tornado to select a protocol when the
68 Sec-WebSocket-Protocol header is set in an HTTP Upgrade request.
69
70 NOTE: We repurpose the Sec-WebSocket-Protocol header here in a slightly
71 unfortunate (but necessary) way. The browser WebSocket API doesn't allow us to
72 set arbitrary HTTP headers, and this header is the only one where we have the
73 ability to set it to arbitrary values, so we use it to pass tokens (in this
74 case, the previous session ID to allow us to reconnect to it) from client to
75 server as the *second* value in the list.
76
77 The reason why the auth token is set as the second value is that, when
78 Sec-WebSocket-Protocol is set, many clients expect the server to respond with a
79 selected subprotocol to use. We don't want that reply to be the token, so we
80 by convention have the client always set the first protocol to "streamlit" and
81 select that.
82 """
83 if subprotocols:
84 return subprotocols[0]
85
86 return None
87
88 def open(self, *args, **kwargs) -> Optional[Awaitable[None]]:
89 # Extract user info from the X-Streamlit-User header
90 is_public_cloud_app = False
91
92 try:
93 header_content = self.request.headers["X-Streamlit-User"]
94 payload = base64.b64decode(header_content)
95 user_obj = json.loads(payload)
96 email = user_obj["email"]
97 is_public_cloud_app = user_obj["isPublicCloudApp"]
98 except (KeyError, binascii.Error, json.decoder.JSONDecodeError):
99 email = "[email protected]"
100
101 user_info: Dict[str, Optional[str]] = dict()
102 if is_public_cloud_app:
103 user_info["email"] = None
104 else:
105 user_info["email"] = email
106
107 existing_session_id = None
108 try:
109 ws_protocols = [
110 p.strip()
111 for p in self.request.headers["Sec-Websocket-Protocol"].split(",")
112 ]
113
114 if len(ws_protocols) > 1:
115 # See the NOTE in the docstring of the select_subprotocol method above
116 # for a detailed explanation of why this is done.
117 existing_session_id = ws_protocols[1]
118 except KeyError:
119 # Just let existing_session_id=None if we run into any error while trying to
120 # extract it from the Sec-Websocket-Protocol header.
121 pass
122
123 self._session_id = self._runtime.connect_session(
124 client=self,
125 user_info=user_info,
126 existing_session_id=existing_session_id,
127 )
128 return None
129
130 def on_close(self) -> None:
131 if not self._session_id:
132 return
133 self._runtime.disconnect_session(self._session_id)
134 self._session_id = None
135
136 def get_compression_options(self) -> Optional[Dict[Any, Any]]:
137 """Enable WebSocket compression.
138
139 Returning an empty dict enables websocket compression. Returning
140 None disables it.
141
142 (See the docstring in the parent class.)
143 """
144 if config.get_option("server.enableWebsocketCompression"):
145 return {}
146 return None
147
148 def on_message(self, payload: Union[str, bytes]) -> None:
149 if not self._session_id:
150 return
151
152 try:
153 if isinstance(payload, str):
154 # Sanity check. (The frontend should only be sending us bytes;
155 # Protobuf.ParseFromString does not accept str input.)
156 raise RuntimeError(
157 "WebSocket received an unexpected `str` message. "
158 "(We expect `bytes` only.)"
159 )
160
161 msg = BackMsg()
162 msg.ParseFromString(payload)
163 _LOGGER.debug("Received the following back message:\n%s", msg)
164
165 except Exception as ex:
166 _LOGGER.error(ex)
167 self._runtime.handle_backmsg_deserialization_exception(self._session_id, ex)
168 return
169
170 # "debug_disconnect_websocket" and "debug_shutdown_runtime" are special
171 # developmentMode-only messages used in e2e tests to test reconnect handling and
172 # disabling widgets.
173 if msg.WhichOneof("type") == "debug_disconnect_websocket":
174 if config.get_option("global.developmentMode"):
175 self.close()
176 else:
177 _LOGGER.warning(
178 "Client tried to disconnect websocket when not in development mode."
179 )
180 elif msg.WhichOneof("type") == "debug_shutdown_runtime":
181 if config.get_option("global.developmentMode"):
182 self._runtime.stop()
183 else:
184 _LOGGER.warning(
185 "Client tried to shut down runtime when not in development mode."
186 )
187 else:
188 # AppSession handles all other BackMsg types.
189 self._runtime.handle_backmsg(self._session_id, msg)
190
[end of lib/streamlit/web/server/browser_websocket_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/streamlit/web/server/browser_websocket_handler.py b/lib/streamlit/web/server/browser_websocket_handler.py
--- a/lib/streamlit/web/server/browser_websocket_handler.py
+++ b/lib/streamlit/web/server/browser_websocket_handler.py
@@ -96,7 +96,7 @@
email = user_obj["email"]
is_public_cloud_app = user_obj["isPublicCloudApp"]
except (KeyError, binascii.Error, json.decoder.JSONDecodeError):
- email = "[email protected]"
+ email = "[email protected]"
user_info: Dict[str, Optional[str]] = dict()
if is_public_cloud_app:
|
{"golden_diff": "diff --git a/lib/streamlit/web/server/browser_websocket_handler.py b/lib/streamlit/web/server/browser_websocket_handler.py\n--- a/lib/streamlit/web/server/browser_websocket_handler.py\n+++ b/lib/streamlit/web/server/browser_websocket_handler.py\n@@ -96,7 +96,7 @@\n email = user_obj[\"email\"]\n is_public_cloud_app = user_obj[\"isPublicCloudApp\"]\n except (KeyError, binascii.Error, json.decoder.JSONDecodeError):\n- email = \"[email protected]\"\n+ email = \"[email protected]\"\n \n user_info: Dict[str, Optional[str]] = dict()\n if is_public_cloud_app:\n", "issue": "Should st.experimental_user.email local value be [email protected]?\n### Checklist\n\n- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.\n- [X] I added a very descriptive title to this issue.\n- [X] I have provided sufficient information below to help reproduce this issue.\n\n### Summary\n\nCurrently st.experimental_user.email returns `[email protected]` when run locally (cite [docs](https://docs.streamlit.io/library/api-reference/personalization/st.experimental_user#local-development)).\r\n\r\nHowever, a more standard and correct approach is to use `example.com` domain ([cite](https://en.wikipedia.org/wiki/Example.com)) which is safer.\r\n\r\nShould we switch?\n\n### Reproducible Code Example\n\n```Python\nimport streamlit as st\r\n\r\nst.write(st.experimental_user.email)\n```\n\n\n### Steps To Reproduce\n\n1. Run the code example in a local environment\n\n### Expected Behavior\n\nI see `[email protected]` which is standard\n\n### Current Behavior\n\nI see `[email protected]`, which might not be totally safe and isn't a standard approach.\n\n### Is this a regression?\n\n- [ ] Yes, this used to work in a previous version.\n\n### Debug info\n\n- Streamlit version: 1.25\r\n- Python version: 3.11\r\n- Operating System: MacOS\r\n- Browser: Chrome\r\n\n\n### Additional Information\n\nThanks @jroes for finding this!\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport binascii\nimport json\nfrom typing import Any, Awaitable, Dict, List, Optional, Union\n\nimport tornado.concurrent\nimport tornado.locks\nimport tornado.netutil\nimport tornado.web\nimport tornado.websocket\nfrom tornado.websocket import WebSocketHandler\nfrom typing_extensions import Final\n\nfrom streamlit import config\nfrom streamlit.logger import get_logger\nfrom streamlit.proto.BackMsg_pb2 import BackMsg\nfrom streamlit.proto.ForwardMsg_pb2 import ForwardMsg\nfrom streamlit.runtime import Runtime, SessionClient, SessionClientDisconnectedError\nfrom streamlit.runtime.runtime_util import serialize_forward_msg\nfrom streamlit.web.server.server_util import is_url_from_allowed_origins\n\n_LOGGER: Final = get_logger(__name__)\n\n\nclass BrowserWebSocketHandler(WebSocketHandler, SessionClient):\n \"\"\"Handles a WebSocket connection from the browser\"\"\"\n\n def initialize(self, runtime: Runtime) -> None:\n self._runtime = runtime\n self._session_id: Optional[str] = None\n # The XSRF cookie is normally set when xsrf_form_html is used, but in a\n # pure-Javascript application that does not use any regular forms we just\n # need to read the self.xsrf_token manually to set the cookie as a side\n # effect. See https://www.tornadoweb.org/en/stable/guide/security.html#cross-site-request-forgery-protection\n # for more details.\n if config.get_option(\"server.enableXsrfProtection\"):\n _ = self.xsrf_token\n\n def check_origin(self, origin: str) -> bool:\n \"\"\"Set up CORS.\"\"\"\n return super().check_origin(origin) or is_url_from_allowed_origins(origin)\n\n def write_forward_msg(self, msg: ForwardMsg) -> None:\n \"\"\"Send a ForwardMsg to the browser.\"\"\"\n try:\n self.write_message(serialize_forward_msg(msg), binary=True)\n except tornado.websocket.WebSocketClosedError as e:\n raise SessionClientDisconnectedError from e\n\n def select_subprotocol(self, subprotocols: List[str]) -> Optional[str]:\n \"\"\"Return the first subprotocol in the given list.\n\n This method is used by Tornado to select a protocol when the\n Sec-WebSocket-Protocol header is set in an HTTP Upgrade request.\n\n NOTE: We repurpose the Sec-WebSocket-Protocol header here in a slightly\n unfortunate (but necessary) way. The browser WebSocket API doesn't allow us to\n set arbitrary HTTP headers, and this header is the only one where we have the\n ability to set it to arbitrary values, so we use it to pass tokens (in this\n case, the previous session ID to allow us to reconnect to it) from client to\n server as the *second* value in the list.\n\n The reason why the auth token is set as the second value is that, when\n Sec-WebSocket-Protocol is set, many clients expect the server to respond with a\n selected subprotocol to use. 
We don't want that reply to be the token, so we\n by convention have the client always set the first protocol to \"streamlit\" and\n select that.\n \"\"\"\n if subprotocols:\n return subprotocols[0]\n\n return None\n\n def open(self, *args, **kwargs) -> Optional[Awaitable[None]]:\n # Extract user info from the X-Streamlit-User header\n is_public_cloud_app = False\n\n try:\n header_content = self.request.headers[\"X-Streamlit-User\"]\n payload = base64.b64decode(header_content)\n user_obj = json.loads(payload)\n email = user_obj[\"email\"]\n is_public_cloud_app = user_obj[\"isPublicCloudApp\"]\n except (KeyError, binascii.Error, json.decoder.JSONDecodeError):\n email = \"[email protected]\"\n\n user_info: Dict[str, Optional[str]] = dict()\n if is_public_cloud_app:\n user_info[\"email\"] = None\n else:\n user_info[\"email\"] = email\n\n existing_session_id = None\n try:\n ws_protocols = [\n p.strip()\n for p in self.request.headers[\"Sec-Websocket-Protocol\"].split(\",\")\n ]\n\n if len(ws_protocols) > 1:\n # See the NOTE in the docstring of the select_subprotocol method above\n # for a detailed explanation of why this is done.\n existing_session_id = ws_protocols[1]\n except KeyError:\n # Just let existing_session_id=None if we run into any error while trying to\n # extract it from the Sec-Websocket-Protocol header.\n pass\n\n self._session_id = self._runtime.connect_session(\n client=self,\n user_info=user_info,\n existing_session_id=existing_session_id,\n )\n return None\n\n def on_close(self) -> None:\n if not self._session_id:\n return\n self._runtime.disconnect_session(self._session_id)\n self._session_id = None\n\n def get_compression_options(self) -> Optional[Dict[Any, Any]]:\n \"\"\"Enable WebSocket compression.\n\n Returning an empty dict enables websocket compression. Returning\n None disables it.\n\n (See the docstring in the parent class.)\n \"\"\"\n if config.get_option(\"server.enableWebsocketCompression\"):\n return {}\n return None\n\n def on_message(self, payload: Union[str, bytes]) -> None:\n if not self._session_id:\n return\n\n try:\n if isinstance(payload, str):\n # Sanity check. (The frontend should only be sending us bytes;\n # Protobuf.ParseFromString does not accept str input.)\n raise RuntimeError(\n \"WebSocket received an unexpected `str` message. \"\n \"(We expect `bytes` only.)\"\n )\n\n msg = BackMsg()\n msg.ParseFromString(payload)\n _LOGGER.debug(\"Received the following back message:\\n%s\", msg)\n\n except Exception as ex:\n _LOGGER.error(ex)\n self._runtime.handle_backmsg_deserialization_exception(self._session_id, ex)\n return\n\n # \"debug_disconnect_websocket\" and \"debug_shutdown_runtime\" are special\n # developmentMode-only messages used in e2e tests to test reconnect handling and\n # disabling widgets.\n if msg.WhichOneof(\"type\") == \"debug_disconnect_websocket\":\n if config.get_option(\"global.developmentMode\"):\n self.close()\n else:\n _LOGGER.warning(\n \"Client tried to disconnect websocket when not in development mode.\"\n )\n elif msg.WhichOneof(\"type\") == \"debug_shutdown_runtime\":\n if config.get_option(\"global.developmentMode\"):\n self._runtime.stop()\n else:\n _LOGGER.warning(\n \"Client tried to shut down runtime when not in development mode.\"\n )\n else:\n # AppSession handles all other BackMsg types.\n self._runtime.handle_backmsg(self._session_id, msg)\n", "path": "lib/streamlit/web/server/browser_websocket_handler.py"}]}
| 2,946 | 142 |
gh_patches_debug_37594
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-2201
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Testing Nilearn setup, build, & installation
**What would you like changed/added and why?**
Use Azure Pipelines' unused additional CI jobs to test Nilearn installation.
**What would be the benefit?**
It will detect and forestall errors during building and installing Nilearn, preventing the need for PRs like #2198, and help detect issues like #1993. The additional time required for this would be very low.
</issue>
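One way a spare CI job could exercise this is a small install smoke test: build a source distribution, install it into a throwaway virtual environment, and import the installed package. The sketch below is illustrative only; the venv location, script layout and version check are assumptions rather than anything in the Nilearn repository.

```python
# Hypothetical install smoke test for a CI job: build an sdist, install it
# into a clean virtual environment, and check that the package imports.
import subprocess
import sys
import venv
from pathlib import Path


def smoke_test_install(package_dir=".", package_name="nilearn"):
    # Build a source distribution from the checkout.
    subprocess.check_call([sys.executable, "setup.py", "sdist"], cwd=package_dir)
    sdist = sorted(Path(package_dir, "dist").glob("*.tar.gz"))[-1]

    # Install into an isolated environment so packaging errors or missing
    # install_requires entries surface immediately.
    env_dir = Path(package_dir, ".smoke-venv")
    venv.EnvBuilder(with_pip=True, clear=True).create(str(env_dir))
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    python_name = "python.exe" if sys.platform == "win32" else "python"
    venv_python = str(env_dir / bindir / python_name)

    subprocess.check_call([venv_python, "-m", "pip", "install", str(sdist)])
    subprocess.check_call([
        venv_python, "-c",
        "import {0}; print({0}.__version__)".format(package_name),
    ])


if __name__ == "__main__":
    smoke_test_install()
```

Running something like this in a fresh job would catch packaging problems (missing package data, unresolvable install_requires entries) before they reach users.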
<code>
[start of setup.py]
1 #! /usr/bin/env python
2
3 descr = """A set of python modules for neuroimaging..."""
4
5 import sys
6 import os
7
8 from setuptools import setup, find_packages
9
10
11 def load_version():
12 """Executes nilearn/version.py in a globals dictionary and return it.
13
14 Note: importing nilearn is not an option because there may be
15 dependencies like nibabel which are not installed and
16 setup.py is supposed to install them.
17 """
18 # load all vars into globals, otherwise
19 # the later function call using global vars doesn't work.
20 globals_dict = {}
21 with open(os.path.join('nilearn', 'version.py')) as fp:
22 exec(fp.read(), globals_dict)
23
24 return globals_dict
25
26
27 def is_installing():
28 # Allow command-lines such as "python setup.py build install"
29 install_commands = set(['install', 'develop'])
30 return install_commands.intersection(set(sys.argv))
31
32
33 # Make sources available using relative paths from this file's directory.
34 os.chdir(os.path.dirname(os.path.abspath(__file__)))
35
36 _VERSION_GLOBALS = load_version()
37 DISTNAME = 'nilearn'
38 DESCRIPTION = 'Statistical learning for neuroimaging in Python'
39 with open('README.rst') as fp:
40 LONG_DESCRIPTION = fp.read()
41 MAINTAINER = 'Gael Varoquaux'
42 MAINTAINER_EMAIL = '[email protected]'
43 URL = 'http://nilearn.github.io'
44 LICENSE = 'new BSD'
45 DOWNLOAD_URL = 'http://nilearn.github.io'
46 VERSION = _VERSION_GLOBALS['__version__']
47
48
49 if __name__ == "__main__":
50 if is_installing():
51 module_check_fn = _VERSION_GLOBALS['_check_module_dependencies']
52 module_check_fn(is_nilearn_installing=True)
53
54 install_requires = \
55 ['%s>=%s' % (mod, meta['min_version'])
56 for mod, meta in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']
57 if not meta['required_at_installation']]
58
59 setup(name=DISTNAME,
60 maintainer=MAINTAINER,
61 maintainer_email=MAINTAINER_EMAIL,
62 description=DESCRIPTION,
63 license=LICENSE,
64 url=URL,
65 version=VERSION,
66 download_url=DOWNLOAD_URL,
67 long_description=LONG_DESCRIPTION,
68 zip_safe=False, # the package can run out of an .egg file
69 classifiers=[
70 'Intended Audience :: Science/Research',
71 'Intended Audience :: Developers',
72 'License :: OSI Approved',
73 'Programming Language :: C',
74 'Programming Language :: Python',
75 'Topic :: Software Development',
76 'Topic :: Scientific/Engineering',
77 'Operating System :: Microsoft :: Windows',
78 'Operating System :: POSIX',
79 'Operating System :: Unix',
80 'Operating System :: MacOS',
81 'Programming Language :: Python :: 3.5',
82 'Programming Language :: Python :: 3.6',
83 'Programming Language :: Python :: 3.7',
84 ],
85 packages=find_packages(),
86 package_data={'nilearn.datasets.data': ['*.nii.gz', '*.csv', '*.txt'
87 ],
88 'nilearn.datasets.data.fsaverage5': ['*.gz'],
89 'nilearn.surface.data': ['*.csv'],
90 'nilearn.plotting.data.js': ['*.js'],
91 'nilearn.plotting.data.html': ['*.html'],
92 'nilearn.plotting.glass_brain_files': ['*.json'],
93 'nilearn.tests.data': ['*'],
94 'nilearn.image.tests.data': ['*.mgz'],
95 'nilearn.surface.tests.data': ['*.annot', '*.label'],
96 'nilearn.datasets.tests.data': ['*.*'],
97 'nilearn.datasets.description': ['*.rst'],
98 'nilearn.reporting.data.html': ['*.html']},
99 install_requires=install_requires,
100 python_requires='>=3.5',
101 )
102
[end of setup.py]
[start of continuous_integration/show-python-packages-versions.py]
1 import sys
2
3 DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'joblib', 'matplotlib', 'nibabel']
4
5
6 def print_package_version(package_name, indent=' '):
7 try:
8 package = __import__(package_name)
9 version = getattr(package, '__version__', None)
10 package_file = getattr(package, '__file__', )
11 provenance_info = '{0} from {1}'.format(version, package_file)
12 except ImportError:
13 provenance_info = 'not installed'
14
15 print('{0}{1}: {2}'.format(indent, package_name, provenance_info))
16
17 if __name__ == '__main__':
18 print('=' * 120)
19 print('Python %s' % str(sys.version))
20 print('from: %s\n' % sys.executable)
21
22 print('Dependencies versions')
23 for package_name in DEPENDENCIES:
24 print_package_version(package_name)
25 print('=' * 120)
26
[end of continuous_integration/show-python-packages-versions.py]
[start of nilearn/version.py]
1 # *- encoding: utf-8 -*-
2 """
3 nilearn version, required package versions, and utilities for checking
4 """
5 # Author: Loic Esteve, Ben Cipollini
6 # License: simplified BSD
7
8 # PEP0440 compatible formatted version, see:
9 # https://www.python.org/dev/peps/pep-0440/
10 #
11 # Generic release markers:
12 # X.Y
13 # X.Y.Z # For bugfix releases
14 #
15 # Admissible pre-release markers:
16 # X.YaN # Alpha release
17 # X.YbN # Beta release
18 # X.YrcN # Release Candidate
19 # X.Y # Final release
20 #
21 # Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
22 # 'X.Y.dev0' is the canonical version of 'X.Y.dev'
23 #
24 __version__ = '0.6.0b'
25
26 _NILEARN_INSTALL_MSG = 'See %s for installation information.' % (
27 'http://nilearn.github.io/introduction.html#installation')
28
29 # This is a tuple to preserve order, so that dependencies are checked
30 # in some meaningful order (more => less 'core').
31 REQUIRED_MODULE_METADATA = (
32 ('numpy', {
33 'min_version': '1.11',
34 'required_at_installation': True,
35 'install_info': _NILEARN_INSTALL_MSG}),
36 ('scipy', {
37 'min_version': '0.19',
38 'required_at_installation': True,
39 'install_info': _NILEARN_INSTALL_MSG}),
40 ('sklearn', {
41 'min_version': '0.19',
42 'required_at_installation': True,
43 'install_info': _NILEARN_INSTALL_MSG}),
44 ('joblib', {
45 'min_version': '0.11',
46 'required_at_installation': True,
47 'install_info': _NILEARN_INSTALL_MSG}),
48 ('nibabel', {
49 'min_version': '2.0.2',
50 'required_at_installation': False}))
51
52 OPTIONAL_MATPLOTLIB_MIN_VERSION = '2.0'
53
54
55 def _import_module_with_version_check(
56 module_name,
57 minimum_version,
58 install_info=None):
59 """Check that module is installed with a recent enough version
60 """
61 from distutils.version import LooseVersion
62
63 try:
64 module = __import__(module_name)
65 except ImportError as exc:
66 user_friendly_info = ('Module "{0}" could not be found. {1}').format(
67 module_name,
68 install_info or 'Please install it properly to use nilearn.')
69 exc.args += (user_friendly_info,)
70 # Necessary for Python 3 because the repr/str of ImportError
71 # objects was changed in Python 3
72 if hasattr(exc, 'msg'):
73 exc.msg += '. ' + user_friendly_info
74 raise
75
76 # Avoid choking on modules with no __version__ attribute
77 module_version = getattr(module, '__version__', '0.0.0')
78
79 version_too_old = (not LooseVersion(module_version) >=
80 LooseVersion(minimum_version))
81
82 if version_too_old:
83 message = (
84 'A {module_name} version of at least {minimum_version} '
85 'is required to use nilearn. {module_version} was found. '
86 'Please upgrade {module_name}').format(
87 module_name=module_name,
88 minimum_version=minimum_version,
89 module_version=module_version)
90
91 raise ImportError(message)
92
93 return module
94
95
96 def _check_module_dependencies(is_nilearn_installing=False):
97 """Throw an exception if nilearn dependencies are not installed.
98
99 Parameters
100 ----------
101 is_nilearn_installing: boolean
102 if True, only error on missing packages that cannot be auto-installed.
103 if False, error on any missing package.
104
105 Throws
106 -------
107 ImportError
108 """
109
110 for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:
111 if not (is_nilearn_installing and
112 not module_metadata['required_at_installation']):
113 # Skip check only when installing and it's a module that
114 # will be auto-installed.
115 _import_module_with_version_check(
116 module_name=module_name,
117 minimum_version=module_metadata['min_version'],
118 install_info=module_metadata.get('install_info'))
119
[end of nilearn/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/continuous_integration/show-python-packages-versions.py b/continuous_integration/show-python-packages-versions.py
--- a/continuous_integration/show-python-packages-versions.py
+++ b/continuous_integration/show-python-packages-versions.py
@@ -1,6 +1,7 @@
import sys
-DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'joblib', 'matplotlib', 'nibabel']
+DEPENDENCIES = ['numpy', 'scipy', 'scikit-learn', 'joblib', 'matplotlib',
+ 'nibabel']
def print_package_version(package_name, indent=' '):
diff --git a/nilearn/version.py b/nilearn/version.py
--- a/nilearn/version.py
+++ b/nilearn/version.py
@@ -21,7 +21,7 @@
# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
# 'X.Y.dev0' is the canonical version of 'X.Y.dev'
#
-__version__ = '0.6.0b'
+__version__ = '0.6.0b0'
_NILEARN_INSTALL_MSG = 'See %s for installation information.' % (
'http://nilearn.github.io/introduction.html#installation')
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,6 +30,20 @@
return install_commands.intersection(set(sys.argv))
+def list_required_packages():
+ required_packages = []
+ required_packages_orig = ['%s>=%s' % (mod, meta['min_version'])
+ for mod, meta
+ in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']
+ ]
+ for package in required_packages_orig:
+ if package.startswith('sklearn'):
+ package = package.replace('sklearn', 'scikit-learn')
+ required_packages.append(package)
+ required_packages.append('sklearn')
+ return required_packages
+
+
# Make sources available using relative paths from this file's directory.
os.chdir(os.path.dirname(os.path.abspath(__file__)))
@@ -51,11 +65,6 @@
module_check_fn = _VERSION_GLOBALS['_check_module_dependencies']
module_check_fn(is_nilearn_installing=True)
- install_requires = \
- ['%s>=%s' % (mod, meta['min_version'])
- for mod, meta in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']
- if not meta['required_at_installation']]
-
setup(name=DISTNAME,
maintainer=MAINTAINER,
maintainer_email=MAINTAINER_EMAIL,
@@ -96,6 +105,6 @@
'nilearn.datasets.tests.data': ['*.*'],
'nilearn.datasets.description': ['*.rst'],
'nilearn.reporting.data.html': ['*.html']},
- install_requires=install_requires,
+ install_requires=list_required_packages(),
python_requires='>=3.5',
)
|
{"golden_diff": "diff --git a/continuous_integration/show-python-packages-versions.py b/continuous_integration/show-python-packages-versions.py\n--- a/continuous_integration/show-python-packages-versions.py\n+++ b/continuous_integration/show-python-packages-versions.py\n@@ -1,6 +1,7 @@\n import sys\n \n-DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'joblib', 'matplotlib', 'nibabel']\n+DEPENDENCIES = ['numpy', 'scipy', 'scikit-learn', 'joblib', 'matplotlib',\n+ 'nibabel']\n \n \n def print_package_version(package_name, indent=' '):\ndiff --git a/nilearn/version.py b/nilearn/version.py\n--- a/nilearn/version.py\n+++ b/nilearn/version.py\n@@ -21,7 +21,7 @@\n # Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.\n # 'X.Y.dev0' is the canonical version of 'X.Y.dev'\n #\n-__version__ = '0.6.0b'\n+__version__ = '0.6.0b0'\n \n _NILEARN_INSTALL_MSG = 'See %s for installation information.' % (\n 'http://nilearn.github.io/introduction.html#installation')\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,6 +30,20 @@\n return install_commands.intersection(set(sys.argv))\n \n \n+def list_required_packages():\n+ required_packages = []\n+ required_packages_orig = ['%s>=%s' % (mod, meta['min_version'])\n+ for mod, meta\n+ in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']\n+ ]\n+ for package in required_packages_orig:\n+ if package.startswith('sklearn'):\n+ package = package.replace('sklearn', 'scikit-learn')\n+ required_packages.append(package)\n+ required_packages.append('sklearn')\n+ return required_packages\n+\n+\n # Make sources available using relative paths from this file's directory.\n os.chdir(os.path.dirname(os.path.abspath(__file__)))\n \n@@ -51,11 +65,6 @@\n module_check_fn = _VERSION_GLOBALS['_check_module_dependencies']\n module_check_fn(is_nilearn_installing=True)\n \n- install_requires = \\\n- ['%s>=%s' % (mod, meta['min_version'])\n- for mod, meta in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']\n- if not meta['required_at_installation']]\n-\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n@@ -96,6 +105,6 @@\n 'nilearn.datasets.tests.data': ['*.*'],\n 'nilearn.datasets.description': ['*.rst'],\n 'nilearn.reporting.data.html': ['*.html']},\n- install_requires=install_requires,\n+ install_requires=list_required_packages(),\n python_requires='>=3.5',\n )\n", "issue": "Testing Nilearn setup, build, & installation\n**What would you like changed/added and why?**\r\nUse AzurePipelines unused additional CI jobs to test Nilearn installation.\r\n\r\n**What would be the benefit?**\r\nIt will detect and forestall errors during building and installing Niearn preventing the need for PR like #2198 & help detect issues like #1993 . The additional time required for this would be very low.\r\n\n", "before_files": [{"content": "#! 
/usr/bin/env python\n\ndescr = \"\"\"A set of python modules for neuroimaging...\"\"\"\n\nimport sys\nimport os\n\nfrom setuptools import setup, find_packages\n\n\ndef load_version():\n \"\"\"Executes nilearn/version.py in a globals dictionary and return it.\n\n Note: importing nilearn is not an option because there may be\n dependencies like nibabel which are not installed and\n setup.py is supposed to install them.\n \"\"\"\n # load all vars into globals, otherwise\n # the later function call using global vars doesn't work.\n globals_dict = {}\n with open(os.path.join('nilearn', 'version.py')) as fp:\n exec(fp.read(), globals_dict)\n\n return globals_dict\n\n\ndef is_installing():\n # Allow command-lines such as \"python setup.py build install\"\n install_commands = set(['install', 'develop'])\n return install_commands.intersection(set(sys.argv))\n\n\n# Make sources available using relative paths from this file's directory.\nos.chdir(os.path.dirname(os.path.abspath(__file__)))\n\n_VERSION_GLOBALS = load_version()\nDISTNAME = 'nilearn'\nDESCRIPTION = 'Statistical learning for neuroimaging in Python'\nwith open('README.rst') as fp:\n LONG_DESCRIPTION = fp.read()\nMAINTAINER = 'Gael Varoquaux'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'http://nilearn.github.io'\nLICENSE = 'new BSD'\nDOWNLOAD_URL = 'http://nilearn.github.io'\nVERSION = _VERSION_GLOBALS['__version__']\n\n\nif __name__ == \"__main__\":\n if is_installing():\n module_check_fn = _VERSION_GLOBALS['_check_module_dependencies']\n module_check_fn(is_nilearn_installing=True)\n\n install_requires = \\\n ['%s>=%s' % (mod, meta['min_version'])\n for mod, meta in _VERSION_GLOBALS['REQUIRED_MODULE_METADATA']\n if not meta['required_at_installation']]\n\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=LONG_DESCRIPTION,\n zip_safe=False, # the package can run out of an .egg file\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: C',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n packages=find_packages(),\n package_data={'nilearn.datasets.data': ['*.nii.gz', '*.csv', '*.txt'\n ],\n 'nilearn.datasets.data.fsaverage5': ['*.gz'],\n 'nilearn.surface.data': ['*.csv'],\n 'nilearn.plotting.data.js': ['*.js'],\n 'nilearn.plotting.data.html': ['*.html'],\n 'nilearn.plotting.glass_brain_files': ['*.json'],\n 'nilearn.tests.data': ['*'],\n 'nilearn.image.tests.data': ['*.mgz'],\n 'nilearn.surface.tests.data': ['*.annot', '*.label'],\n 'nilearn.datasets.tests.data': ['*.*'],\n 'nilearn.datasets.description': ['*.rst'],\n 'nilearn.reporting.data.html': ['*.html']},\n install_requires=install_requires,\n python_requires='>=3.5',\n )\n", "path": "setup.py"}, {"content": "import sys\n\nDEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'joblib', 'matplotlib', 'nibabel']\n\n\ndef print_package_version(package_name, indent=' '):\n try:\n package = __import__(package_name)\n version = getattr(package, '__version__', None)\n package_file = getattr(package, '__file__', )\n 
provenance_info = '{0} from {1}'.format(version, package_file)\n except ImportError:\n provenance_info = 'not installed'\n\n print('{0}{1}: {2}'.format(indent, package_name, provenance_info))\n\nif __name__ == '__main__':\n print('=' * 120)\n print('Python %s' % str(sys.version))\n print('from: %s\\n' % sys.executable)\n\n print('Dependencies versions')\n for package_name in DEPENDENCIES:\n print_package_version(package_name)\n print('=' * 120)\n", "path": "continuous_integration/show-python-packages-versions.py"}, {"content": "# *- encoding: utf-8 -*-\n\"\"\"\nnilearn version, required package versions, and utilities for checking\n\"\"\"\n# Author: Loic Esteve, Ben Cipollini\n# License: simplified BSD\n\n# PEP0440 compatible formatted version, see:\n# https://www.python.org/dev/peps/pep-0440/\n#\n# Generic release markers:\n# X.Y\n# X.Y.Z # For bugfix releases\n#\n# Admissible pre-release markers:\n# X.YaN # Alpha release\n# X.YbN # Beta release\n# X.YrcN # Release Candidate\n# X.Y # Final release\n#\n# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.\n# 'X.Y.dev0' is the canonical version of 'X.Y.dev'\n#\n__version__ = '0.6.0b'\n\n_NILEARN_INSTALL_MSG = 'See %s for installation information.' % (\n 'http://nilearn.github.io/introduction.html#installation')\n\n# This is a tuple to preserve order, so that dependencies are checked\n# in some meaningful order (more => less 'core').\nREQUIRED_MODULE_METADATA = (\n ('numpy', {\n 'min_version': '1.11',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('scipy', {\n 'min_version': '0.19',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('sklearn', {\n 'min_version': '0.19',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('joblib', {\n 'min_version': '0.11',\n 'required_at_installation': True,\n 'install_info': _NILEARN_INSTALL_MSG}),\n ('nibabel', {\n 'min_version': '2.0.2',\n 'required_at_installation': False}))\n\nOPTIONAL_MATPLOTLIB_MIN_VERSION = '2.0'\n\n\ndef _import_module_with_version_check(\n module_name,\n minimum_version,\n install_info=None):\n \"\"\"Check that module is installed with a recent enough version\n \"\"\"\n from distutils.version import LooseVersion\n\n try:\n module = __import__(module_name)\n except ImportError as exc:\n user_friendly_info = ('Module \"{0}\" could not be found. {1}').format(\n module_name,\n install_info or 'Please install it properly to use nilearn.')\n exc.args += (user_friendly_info,)\n # Necessary for Python 3 because the repr/str of ImportError\n # objects was changed in Python 3\n if hasattr(exc, 'msg'):\n exc.msg += '. ' + user_friendly_info\n raise\n\n # Avoid choking on modules with no __version__ attribute\n module_version = getattr(module, '__version__', '0.0.0')\n\n version_too_old = (not LooseVersion(module_version) >=\n LooseVersion(minimum_version))\n\n if version_too_old:\n message = (\n 'A {module_name} version of at least {minimum_version} '\n 'is required to use nilearn. {module_version} was found. 
'\n 'Please upgrade {module_name}').format(\n module_name=module_name,\n minimum_version=minimum_version,\n module_version=module_version)\n\n raise ImportError(message)\n\n return module\n\n\ndef _check_module_dependencies(is_nilearn_installing=False):\n \"\"\"Throw an exception if nilearn dependencies are not installed.\n\n Parameters\n ----------\n is_nilearn_installing: boolean\n if True, only error on missing packages that cannot be auto-installed.\n if False, error on any missing package.\n\n Throws\n -------\n ImportError\n \"\"\"\n\n for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:\n if not (is_nilearn_installing and\n not module_metadata['required_at_installation']):\n # Skip check only when installing and it's a module that\n # will be auto-installed.\n _import_module_with_version_check(\n module_name=module_name,\n minimum_version=module_metadata['min_version'],\n install_info=module_metadata.get('install_info'))\n", "path": "nilearn/version.py"}]}
| 3,136 | 656 |
gh_patches_debug_41250
|
rasdani/github-patches
|
git_diff
|
TabbycatDebate__tabbycat-846
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Email Notification: Add additional information to ballot receipts
Thanks to @tienne-B for his hard work on bringing this excellent new feature to life. I was wondering if it would be helpful to add some additional information to the {{ scores }} variable.
It might be useful to include the total scores and the win/loss ranking as well. I think judges are more likely to notice an issue here than by reading the individual speaker scores. This may be low priority given the work it might take to develop different display formats, or it may have intentionally not been done.
I can think of different ways to do this, but one possibility could be:
BP:
1st - CG: Sacred Heart 2 (180)
2nd - CO: St Andrews 2 (170)
3rd - OG: Macleans 2 (160)
4th - OO: St Marys 1 (140)
WSDC:
Winner - Proposition: Germany (250)
Loser - Opposition: France (245)
</issue>
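A minimal way to assemble lines in that style could look like the sketch below; the tuples and the helper are hypothetical stand-ins for the per-side team names, total points and ranks that would come from the ballot, not the actual Tabbycat API.

```python
# Illustrative sketch only: format result summaries in the style suggested
# above from (rank, side_name, team_name, total_points) tuples.
def ordinal(rank):
    # Sufficient for debate ranks 1-4: 1 -> "1st", 2 -> "2nd", 3 -> "3rd", 4 -> "4th".
    return "%d%s" % (rank, {1: "st", 2: "nd", 3: "rd"}.get(rank, "th"))


def summarise_results(results, two_teams=False):
    lines = []
    for rank, side_name, team_name, total in sorted(results):
        # Two-team formats read better as Winner/Loser; BP uses ordinals.
        label = ("Winner" if rank == 1 else "Loser") if two_teams else ordinal(rank)
        lines.append("%s - %s: %s (%d)" % (label, side_name, team_name, total))
    return "\n".join(lines)


bp_example = [(1, "CG", "Sacred Heart 2", 180), (2, "CO", "St Andrews 2", 170),
              (3, "OG", "Macleans 2", 160), (4, "OO", "St Marys 1", 140)]
print(summarise_results(bp_example))
```

The print reproduces the BP example from the issue; passing `two_teams=True` with two entries would give the WSDC-style "Winner"/"Loser" lines instead.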
<code>
[start of tabbycat/notifications/utils.py]
1 from django.db.models import Exists, OuterRef
2 from django.utils.translation import gettext as _
3
4 from adjallocation.allocation import AdjudicatorAllocation
5 from adjallocation.models import DebateAdjudicator
6 from draw.models import Debate
7 from results.result import DebateResult
8 from options.utils import use_team_code_names
9 from participants.prefetch import populate_win_counts
10 from tournaments.models import Round, Tournament
11
12 from .models import SentMessageRecord
13
14
15 def adjudicator_assignment_email_generator(round_id):
16 emails = []
17 round = Round.objects.get(id=round_id)
18 tournament = round.tournament
19 draw = round.debate_set_with_prefetches(speakers=False, divisions=False).all()
20 use_codes = use_team_code_names(tournament, False)
21
22 adj_position_names = {
23 AdjudicatorAllocation.POSITION_CHAIR: _("the chair"),
24 AdjudicatorAllocation.POSITION_ONLY: _("the only"),
25 AdjudicatorAllocation.POSITION_PANELLIST: _("a panellist"),
26 AdjudicatorAllocation.POSITION_TRAINEE: _("a trainee"),
27 }
28
29 def _assemble_panel(adjs):
30 adj_string = []
31 for adj, pos in adjs:
32 adj_string.append("%s (%s)" % (adj.name, adj_position_names[pos]))
33
34 return ", ".join(adj_string)
35
36 for debate in draw:
37 matchup = debate.matchup_codes if use_codes else debate.matchup
38 context = {
39 'ROUND': round.name,
40 'VENUE': debate.venue.name,
41 'PANEL': _assemble_panel(debate.adjudicators.with_positions()),
42 'DRAW': matchup
43 }
44
45 for adj, pos in debate.adjudicators.with_positions():
46 if adj.email is None:
47 continue
48
49 context_user = context.copy()
50 context_user['USER'] = adj.name
51 context_user['POSITION'] = adj_position_names[pos]
52
53 emails.append((context_user, adj))
54
55 return emails
56
57
58 def randomized_url_email_generator(url, tournament_id):
59 emails = []
60 tournament = Tournament.objects.get(id=tournament_id)
61
62 subquery = SentMessageRecord.objects.filter(
63 event=SentMessageRecord.EVENT_TYPE_URL,
64 tournament=tournament, email=OuterRef('email')
65 )
66 participants = tournament.participants.filter(
67 url_key__isnull=False, email__isnull=False
68 ).exclude(email__exact="").annotate(already_sent=Exists(subquery)).filter(already_sent=False)
69
70 for instance in participants:
71 url_ind = url + instance.url_key + '/'
72
73 variables = {'USER': instance.name, 'URL': url_ind, 'KEY': instance.url_key, 'TOURN': str(tournament)}
74
75 emails.append((variables, instance))
76
77 return emails
78
79
80 def ballots_email_generator(debate_id):
81 emails = []
82 debate = Debate.objects.get(id=debate_id)
83 tournament = debate.round.tournament
84 ballots = DebateResult(debate.confirmed_ballot).as_dicts()
85 round_name = _("%(tournament)s %(round)s @ %(room)s") % {'tournament': str(tournament),
86 'round': debate.round.name, 'room': debate.venue.name}
87
88 context = {'DEBATE': round_name}
89 use_codes = use_team_code_names(debate.round.tournament, False)
90
91 for ballot in ballots:
92 if 'adjudicator' in ballot:
93 judge = ballot['adjudicator']
94 else:
95 judge = debate.debateadjudicator_set.get(type=DebateAdjudicator.TYPE_CHAIR).adjudicator
96
97 if judge.email is None:
98 continue
99
100 scores = ""
101 for team in ballot['teams']:
102
103 team_name = team['team'].code_name if use_codes else team['team'].short_name
104 scores += _("(%(side)s) %(team)s\n") % {'side': team['side'], 'team': team_name}
105
106 for speaker in team['speakers']:
107 scores += _("- %(debater)s: %(score)s\n") % {'debater': speaker['speaker'], 'score': speaker['score']}
108
109 context_user = context.copy()
110 context_user['USER'] = judge.name
111 context_user['SCORES'] = scores
112
113 emails.append((context, judge))
114
115 return emails
116
117
118 def standings_email_generator(url, round_id):
119 emails = []
120 round = Round.objects.get(id=round_id)
121 tournament = round.tournament
122
123 teams = round.active_teams.prefetch_related('speaker_set')
124 populate_win_counts(teams)
125
126 context = {
127 'TOURN': str(tournament),
128 'ROUND': round.name,
129 'URL': url if tournament.pref('public_team_standings') else ""
130 }
131
132 for team in teams:
133 context_team = context.copy()
134 context_team['POINTS'] = str(team.points_count)
135 context_team['TEAM'] = team.short_name
136
137 for speaker in team.speaker_set.all():
138 if speaker.email is None:
139 continue
140
141 context_user = context_team.copy()
142 context_user['USER'] = speaker.name
143
144 emails.append((context_user, speaker))
145
146 return emails
147
148
149 def motion_release_email_generator(round_id):
150 emails = []
151 round = Round.objects.get(id=round_id)
152
153 def _create_motion_list():
154 motion_list = ""
155 for motion in round.motion_set.all():
156 motion_list += _(" - %s (%s)\n") % (motion.text, motion.reference)
157
158 if motion.info_slide:
159 motion_list += " %s\n" % (motion.info_slide)
160
161 return motion_list
162
163 context = {
164 'TOURN': str(round.tournament),
165 'ROUND': round.name,
166 'MOTIONS': _create_motion_list()
167 }
168
169 teams = round.tournament.team_set.filter(round_availabilities__round=round).prefetch_related('speaker_set')
170 for team in teams:
171 for speaker in team.speaker_set.all():
172 if speaker.email is None:
173 continue
174
175 context_user = context.copy()
176 context_user['USER'] = speaker.name
177
178 emails.append((context_user, speaker))
179
180 return emails
181
182
183 def team_speaker_email_generator(tournament_id):
184 emails = []
185 tournament = Tournament.objects.get(id=tournament_id)
186
187 for team in tournament.team_set.all().prefetch_related('speaker_set', 'break_categories').select_related('division', 'institution'):
188 context = {
189 'TOURN': str(tournament),
190 'SHORT': team.short_name,
191 'LONG': team.long_name,
192 'CODE': team.code_name,
193 'DIVISION': team.division.name if team.division is not None else "",
194 'BREAK': _(", ").join([breakq.name for breakq in team.break_categories.all()]),
195 'SPEAKERS': _(", ").join([p.name for p in team.speaker_set.all()]),
196 'INSTITUTION': str(team.institution),
197 'EMOJI': team.emoji
198 }
199
200 for speaker in team.speakers:
201 if speaker.email is None:
202 continue
203
204 context_user = context.copy()
205 context_user['USER'] = speaker.name
206
207 emails.append((context_user, speaker))
208
209 return emails
210
[end of tabbycat/notifications/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tabbycat/notifications/utils.py b/tabbycat/notifications/utils.py
--- a/tabbycat/notifications/utils.py
+++ b/tabbycat/notifications/utils.py
@@ -2,9 +2,9 @@
from django.utils.translation import gettext as _
from adjallocation.allocation import AdjudicatorAllocation
-from adjallocation.models import DebateAdjudicator
from draw.models import Debate
-from results.result import DebateResult
+from results.result import BaseConsensusDebateResultWithSpeakers, DebateResult, VotingDebateResult
+from results.utils import side_and_position_names
from options.utils import use_team_code_names
from participants.prefetch import populate_win_counts
from tournaments.models import Round, Tournament
@@ -81,36 +81,50 @@
emails = []
debate = Debate.objects.get(id=debate_id)
tournament = debate.round.tournament
- ballots = DebateResult(debate.confirmed_ballot).as_dicts()
+ results = DebateResult(debate.confirmed_ballot)
round_name = _("%(tournament)s %(round)s @ %(room)s") % {'tournament': str(tournament),
'round': debate.round.name, 'room': debate.venue.name}
- context = {'DEBATE': round_name}
use_codes = use_team_code_names(debate.round.tournament, False)
- for ballot in ballots:
- if 'adjudicator' in ballot:
- judge = ballot['adjudicator']
- else:
- judge = debate.debateadjudicator_set.get(type=DebateAdjudicator.TYPE_CHAIR).adjudicator
+ def _create_ballot(result, scoresheet):
+ ballot = ""
- if judge.email is None:
- continue
+ for side, (side_name, pos_names) in zip(tournament.sides, side_and_position_names(tournament)):
+ ballot += _("%(side)s: %(team)s (%(points)d - %(rank)s)\n") % {
+ 'side': side_name,
+ 'team': result.debateteams[side].team.code_name if use_codes else result.debateteams[side].team.short_name,
+ 'points': scoresheet.get_total(side),
+ 'rank': scoresheet.rank(side)
+ }
- scores = ""
- for team in ballot['teams']:
+ for pos, pos_name in zip(tournament.positions, pos_names):
+ ballot += _("- %(pos)s: %(speaker)s (%(score)s)\n") % {
+ 'pos': pos_name,
+ 'speaker': result.get_speaker(side, pos).name,
+ 'score': scoresheet.get_score(side, pos)
+ }
- team_name = team['team'].code_name if use_codes else team['team'].short_name
- scores += _("(%(side)s) %(team)s\n") % {'side': team['side'], 'team': team_name}
+ return ballot
- for speaker in team['speakers']:
- scores += _("- %(debater)s: %(score)s\n") % {'debater': speaker['speaker'], 'score': speaker['score']}
+ if isinstance(results, VotingDebateResult):
+ for (adj, ballot) in results.scoresheets.items():
+ if adj.email is None:
+ continue
+
+ context = {'DEBATE': round_name, 'USER': adj.name, 'SCORES': _create_ballot(results, ballot)}
+ emails.append((context, adj))
+ elif isinstance(results, BaseConsensusDebateResultWithSpeakers):
+ context = {'DEBATE': round_name, 'SCORES': _create_ballot(results, results.scoresheet)}
- context_user = context.copy()
- context_user['USER'] = judge.name
- context_user['SCORES'] = scores
+ for adj in debate.debateadjudicator_set.all():
+ if adj.adjudicator.email is None:
+ continue
+
+ context_user = context.copy()
+ context_user['USER'] = adj.adjudicator.name
- emails.append((context, judge))
+ emails.append((context_user, adj.adjudicator))
return emails
|
{"golden_diff": "diff --git a/tabbycat/notifications/utils.py b/tabbycat/notifications/utils.py\n--- a/tabbycat/notifications/utils.py\n+++ b/tabbycat/notifications/utils.py\n@@ -2,9 +2,9 @@\n from django.utils.translation import gettext as _\n \n from adjallocation.allocation import AdjudicatorAllocation\n-from adjallocation.models import DebateAdjudicator\n from draw.models import Debate\n-from results.result import DebateResult\n+from results.result import BaseConsensusDebateResultWithSpeakers, DebateResult, VotingDebateResult\n+from results.utils import side_and_position_names\n from options.utils import use_team_code_names\n from participants.prefetch import populate_win_counts\n from tournaments.models import Round, Tournament\n@@ -81,36 +81,50 @@\n emails = []\n debate = Debate.objects.get(id=debate_id)\n tournament = debate.round.tournament\n- ballots = DebateResult(debate.confirmed_ballot).as_dicts()\n+ results = DebateResult(debate.confirmed_ballot)\n round_name = _(\"%(tournament)s %(round)s @ %(room)s\") % {'tournament': str(tournament),\n 'round': debate.round.name, 'room': debate.venue.name}\n \n- context = {'DEBATE': round_name}\n use_codes = use_team_code_names(debate.round.tournament, False)\n \n- for ballot in ballots:\n- if 'adjudicator' in ballot:\n- judge = ballot['adjudicator']\n- else:\n- judge = debate.debateadjudicator_set.get(type=DebateAdjudicator.TYPE_CHAIR).adjudicator\n+ def _create_ballot(result, scoresheet):\n+ ballot = \"\"\n \n- if judge.email is None:\n- continue\n+ for side, (side_name, pos_names) in zip(tournament.sides, side_and_position_names(tournament)):\n+ ballot += _(\"%(side)s: %(team)s (%(points)d - %(rank)s)\\n\") % {\n+ 'side': side_name,\n+ 'team': result.debateteams[side].team.code_name if use_codes else result.debateteams[side].team.short_name,\n+ 'points': scoresheet.get_total(side),\n+ 'rank': scoresheet.rank(side)\n+ }\n \n- scores = \"\"\n- for team in ballot['teams']:\n+ for pos, pos_name in zip(tournament.positions, pos_names):\n+ ballot += _(\"- %(pos)s: %(speaker)s (%(score)s)\\n\") % {\n+ 'pos': pos_name,\n+ 'speaker': result.get_speaker(side, pos).name,\n+ 'score': scoresheet.get_score(side, pos)\n+ }\n \n- team_name = team['team'].code_name if use_codes else team['team'].short_name\n- scores += _(\"(%(side)s) %(team)s\\n\") % {'side': team['side'], 'team': team_name}\n+ return ballot\n \n- for speaker in team['speakers']:\n- scores += _(\"- %(debater)s: %(score)s\\n\") % {'debater': speaker['speaker'], 'score': speaker['score']}\n+ if isinstance(results, VotingDebateResult):\n+ for (adj, ballot) in results.scoresheets.items():\n+ if adj.email is None:\n+ continue\n+\n+ context = {'DEBATE': round_name, 'USER': adj.name, 'SCORES': _create_ballot(results, ballot)}\n+ emails.append((context, adj))\n+ elif isinstance(results, BaseConsensusDebateResultWithSpeakers):\n+ context = {'DEBATE': round_name, 'SCORES': _create_ballot(results, results.scoresheet)}\n \n- context_user = context.copy()\n- context_user['USER'] = judge.name\n- context_user['SCORES'] = scores\n+ for adj in debate.debateadjudicator_set.all():\n+ if adj.adjudicator.email is None:\n+ continue\n+\n+ context_user = context.copy()\n+ context_user['USER'] = adj.adjudicator.name\n \n- emails.append((context, judge))\n+ emails.append((context_user, adj.adjudicator))\n \n return emails\n", "issue": "Email Notification: Add additional information to ballot receipts\nThanks to @tienne-B for his hard work on bringing this excellent new feature to life. 
I was wondering if it would be helpful to add some additional information to the {{ scores }} variable. \r\n\r\n\r\n\r\nIt might be useful to include the total scores and ranking win/lose as well. I think judges are more likely to notice an issue here than by reading the individual speaker scores. This may be low priority given the work it might take to develop different display formats or may have intentionally not been done.\r\n\r\nI can think of different ways to do this but one could possibly be:\r\n\r\nBP: \r\n1st - CG: Sacred Heart 2 (180)\r\n2nd - CO: St Andrews 2 (170)\r\n3rd - OG: Macleans 2 (160)\r\n4th - OO: St Marys 1 (140)\r\n\r\nWSDC:\r\nWinner - Proposition: Germany (250)\r\nLoser - Opposition: France (245)\n", "before_files": [{"content": "from django.db.models import Exists, OuterRef\nfrom django.utils.translation import gettext as _\n\nfrom adjallocation.allocation import AdjudicatorAllocation\nfrom adjallocation.models import DebateAdjudicator\nfrom draw.models import Debate\nfrom results.result import DebateResult\nfrom options.utils import use_team_code_names\nfrom participants.prefetch import populate_win_counts\nfrom tournaments.models import Round, Tournament\n\nfrom .models import SentMessageRecord\n\n\ndef adjudicator_assignment_email_generator(round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n tournament = round.tournament\n draw = round.debate_set_with_prefetches(speakers=False, divisions=False).all()\n use_codes = use_team_code_names(tournament, False)\n\n adj_position_names = {\n AdjudicatorAllocation.POSITION_CHAIR: _(\"the chair\"),\n AdjudicatorAllocation.POSITION_ONLY: _(\"the only\"),\n AdjudicatorAllocation.POSITION_PANELLIST: _(\"a panellist\"),\n AdjudicatorAllocation.POSITION_TRAINEE: _(\"a trainee\"),\n }\n\n def _assemble_panel(adjs):\n adj_string = []\n for adj, pos in adjs:\n adj_string.append(\"%s (%s)\" % (adj.name, adj_position_names[pos]))\n\n return \", \".join(adj_string)\n\n for debate in draw:\n matchup = debate.matchup_codes if use_codes else debate.matchup\n context = {\n 'ROUND': round.name,\n 'VENUE': debate.venue.name,\n 'PANEL': _assemble_panel(debate.adjudicators.with_positions()),\n 'DRAW': matchup\n }\n\n for adj, pos in debate.adjudicators.with_positions():\n if adj.email is None:\n continue\n\n context_user = context.copy()\n context_user['USER'] = adj.name\n context_user['POSITION'] = adj_position_names[pos]\n\n emails.append((context_user, adj))\n\n return emails\n\n\ndef randomized_url_email_generator(url, tournament_id):\n emails = []\n tournament = Tournament.objects.get(id=tournament_id)\n\n subquery = SentMessageRecord.objects.filter(\n event=SentMessageRecord.EVENT_TYPE_URL,\n tournament=tournament, email=OuterRef('email')\n )\n participants = tournament.participants.filter(\n url_key__isnull=False, email__isnull=False\n ).exclude(email__exact=\"\").annotate(already_sent=Exists(subquery)).filter(already_sent=False)\n\n for instance in participants:\n url_ind = url + instance.url_key + '/'\n\n variables = {'USER': instance.name, 'URL': url_ind, 'KEY': instance.url_key, 'TOURN': str(tournament)}\n\n emails.append((variables, instance))\n\n return emails\n\n\ndef ballots_email_generator(debate_id):\n emails = []\n debate = Debate.objects.get(id=debate_id)\n tournament = debate.round.tournament\n ballots = DebateResult(debate.confirmed_ballot).as_dicts()\n round_name = _(\"%(tournament)s %(round)s @ %(room)s\") % {'tournament': str(tournament),\n 'round': debate.round.name, 'room': debate.venue.name}\n\n 
context = {'DEBATE': round_name}\n use_codes = use_team_code_names(debate.round.tournament, False)\n\n for ballot in ballots:\n if 'adjudicator' in ballot:\n judge = ballot['adjudicator']\n else:\n judge = debate.debateadjudicator_set.get(type=DebateAdjudicator.TYPE_CHAIR).adjudicator\n\n if judge.email is None:\n continue\n\n scores = \"\"\n for team in ballot['teams']:\n\n team_name = team['team'].code_name if use_codes else team['team'].short_name\n scores += _(\"(%(side)s) %(team)s\\n\") % {'side': team['side'], 'team': team_name}\n\n for speaker in team['speakers']:\n scores += _(\"- %(debater)s: %(score)s\\n\") % {'debater': speaker['speaker'], 'score': speaker['score']}\n\n context_user = context.copy()\n context_user['USER'] = judge.name\n context_user['SCORES'] = scores\n\n emails.append((context, judge))\n\n return emails\n\n\ndef standings_email_generator(url, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n tournament = round.tournament\n\n teams = round.active_teams.prefetch_related('speaker_set')\n populate_win_counts(teams)\n\n context = {\n 'TOURN': str(tournament),\n 'ROUND': round.name,\n 'URL': url if tournament.pref('public_team_standings') else \"\"\n }\n\n for team in teams:\n context_team = context.copy()\n context_team['POINTS'] = str(team.points_count)\n context_team['TEAM'] = team.short_name\n\n for speaker in team.speaker_set.all():\n if speaker.email is None:\n continue\n\n context_user = context_team.copy()\n context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n\n\ndef motion_release_email_generator(round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n\n def _create_motion_list():\n motion_list = \"\"\n for motion in round.motion_set.all():\n motion_list += _(\" - %s (%s)\\n\") % (motion.text, motion.reference)\n\n if motion.info_slide:\n motion_list += \" %s\\n\" % (motion.info_slide)\n\n return motion_list\n\n context = {\n 'TOURN': str(round.tournament),\n 'ROUND': round.name,\n 'MOTIONS': _create_motion_list()\n }\n\n teams = round.tournament.team_set.filter(round_availabilities__round=round).prefetch_related('speaker_set')\n for team in teams:\n for speaker in team.speaker_set.all():\n if speaker.email is None:\n continue\n\n context_user = context.copy()\n context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n\n\ndef team_speaker_email_generator(tournament_id):\n emails = []\n tournament = Tournament.objects.get(id=tournament_id)\n\n for team in tournament.team_set.all().prefetch_related('speaker_set', 'break_categories').select_related('division', 'institution'):\n context = {\n 'TOURN': str(tournament),\n 'SHORT': team.short_name,\n 'LONG': team.long_name,\n 'CODE': team.code_name,\n 'DIVISION': team.division.name if team.division is not None else \"\",\n 'BREAK': _(\", \").join([breakq.name for breakq in team.break_categories.all()]),\n 'SPEAKERS': _(\", \").join([p.name for p in team.speaker_set.all()]),\n 'INSTITUTION': str(team.institution),\n 'EMOJI': team.emoji\n }\n\n for speaker in team.speakers:\n if speaker.email is None:\n continue\n\n context_user = context.copy()\n context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n", "path": "tabbycat/notifications/utils.py"}]}
| 2,878 | 917 |
gh_patches_debug_61693
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-5779
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JSONDecodeError in ssr.py
Simply opening https://beta.developer.mozilla.org/ja/docs/Web/API/Window causes the 500 ISE.
https://sentry.prod.mozaws.net/operations/mdn-prod/issues/6224701/?environment=oregon%3Aprod
```
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
File "django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "newrelic/hooks/framework_django.py", line 544, in wrapper
return wrapped(*args, **kwargs)
File "kuma/core/decorators.py", line 37, in _cache_controlled
response = viewfunc(request, *args, **kw)
File "django/views/decorators/csrf.py", line 58, in wrapped_view
return view_func(*args, **kwargs)
File "django/views/decorators/http.py", line 40, in inner
return func(request, *args, **kwargs)
File "kuma/wiki/decorators.py", line 31, in _added_header
response = func(request, *args, **kwargs)
File "kuma/wiki/decorators.py", line 105, in process
return func(request, *args, **kwargs)
File "newrelic/api/function_trace.py", line 139, in literal_wrapper
return wrapped(*args, **kwargs)
File "ratelimit/decorators.py", line 30, in _wrapped
return fn(*args, **kw)
File "kuma/wiki/views/document.py", line 617, in document
return react_document(request, document_slug, document_locale)
File "kuma/wiki/views/document.py", line 873, in react_document
response = render(request, 'wiki/react_document.html', context)
File "django/shortcuts.py", line 30, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "django/template/loader.py", line 68, in render_to_string
return template.render(context, request)
File "django_jinja/backend.py", line 106, in render
return mark_safe(self.template.render(context))
File "newrelic/api/function_trace.py", line 121, in dynamic_wrapper
return wrapped(*args, **kwargs)
File "jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/app/kuma/wiki/jinja2/wiki/react_document.html", line 120, in top-level template code
document_data)|safe }}
File "kuma/wiki/templatetags/ssr.py", line 50, in render_react
return server_side_render(component_name, data)
File "kuma/wiki/templatetags/ssr.py", line 133, in server_side_render
result = response.json()
File "requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "simplejson/__init__.py", line 516, in loads
return _default_decoder.decode(s)
File "simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
```
It seems what's coming back from the SSR Node service isn't a JSON response, but the Python code expects it to be.
</issue>
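A defensive pattern here — a sketch only, echoing the names in kuma/wiki/templatetags/ssr.py but not claiming to be the project's actual fix — is to check the HTTP status before parsing and to treat a JSON decode failure as a cue to fall back, the same way the existing ConnectionError and ReadTimeout handlers already fall back to client-side rendering:

```python
# Sketch: the helper name and return convention are assumptions for illustration.
import requests


def fetch_ssr_payload(url, json_body, timeout):
    """Return the parsed SSR response dict, or None if the service misbehaves."""
    response = requests.post(url,
                             headers={'Content-Type': 'application/json'},
                             data=json_body, timeout=timeout)
    try:
        # Error responses from the Node service usually carry HTML or plain
        # text, so fail on bad status codes instead of trying to parse them.
        response.raise_for_status()
        # requests' .json() raises a decode error that subclasses ValueError
        # when the body is not valid JSON.
        return response.json()
    except (requests.exceptions.HTTPError, ValueError):
        return None  # caller can fall back to client_side_render()
```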
<code>
[start of kuma/wiki/templatetags/ssr.py]
1 from __future__ import print_function
2
3 import json
4 import os
5
6 import requests
7 import requests.exceptions
8 from django.conf import settings
9 from django.utils import lru_cache
10 from django_jinja import library
11
12
13 @lru_cache.lru_cache()
14 def get_localization_data(locale):
15 """
16 Read the frontend string catalog for the specified locale, parse
17 it as JSON, and return the resulting dict. The returned values
18 are cached so that we don't have to read files all the time.
19 """
20 path = os.path.join(settings.BASE_DIR,
21 'static', 'jsi18n',
22 locale, 'react.json')
23 with open(path, 'r') as f:
24 return json.load(f)
25
26
27 @library.global_function
28 def render_react(component_name, locale, url, document_data, ssr=True):
29 """
30 Render a script tag to define the data and any other HTML tags needed
31 to enable the display of a React-based UI. By default, this does
32 server side rendering, falling back to client-side rendering if
33 the SSR attempt fails. Pass False as the second argument to do
34 client-side rendering unconditionally.
35
36 Note that we are not defining a generic Jinja template tag here.
37 The code in this file is specific to Kuma's React-based UI.
38 """
39 localization_data = get_localization_data(locale)
40
41 data = {
42 'locale': locale,
43 'stringCatalog': localization_data['catalog'],
44 'pluralExpression': localization_data['plural'],
45 'url': url,
46 'documentData': document_data,
47 }
48
49 if ssr:
50 return server_side_render(component_name, data)
51 else:
52 return client_side_render(component_name, data)
53
54
55 def _render(component_name, html, script, needs_serialization=False):
56 """A utility function used by both client side and server side rendering.
57 Returns a string that includes the specified HTML and a serialized
58 form of the state dict, in the format expected by the client-side code
59 in kuma/javascript/src/index.jsx.
60 """
61 if needs_serialization:
62 assert isinstance(script, dict), type(script)
63 script = json.dumps(script).replace('</', '<\\/')
64 else:
65 script = u'JSON.parse({})'.format(script)
66
67 return (
68 u'<div id="react-container" data-component-name="{}">{}</div>\n'
69 u'<script>window._react_data = {};</script>\n'
70 ).format(component_name, html, script)
71
72
73 def client_side_render(component_name, data):
74 """
75 Output an empty <div> and a script with complete state so that
76 the UI can be rendered on the client-side.
77 """
78 return _render(component_name, '', data, needs_serialization=True)
79
80
81 def server_side_render(component_name, data):
82 """
83 Pre-render the React UI to HTML and output it in a <div>, and then
84 also pass the necessary serialized state in a <script> so that
85 React on the client side can sync itself with the pre-rendred HTML.
86
87 If any exceptions are thrown during the server-side rendering, we
88 fall back to client-side rendering instead.
89 """
90 url = '{}/{}'.format(settings.SSR_URL, component_name)
91 timeout = settings.SSR_TIMEOUT
92 # Try server side rendering
93 try:
94 # POST the document data as JSON to the SSR server and we
95 # should get HTML text (encoded as plain text) in the body
96 # of the response
97 response = requests.post(url,
98 headers={'Content-Type': 'application/json'},
99 data=json.dumps(data).encode('utf8'),
100 timeout=timeout)
101
102 # Even though we've got fully rendered HTML now, we still need to
103 # send the document data along with it so that React can sync its
104 # state on the client side with what is in the HTML. When rendering
105 # a document page, the data includes long strings of HTML that
106 # we can get away without duplicating. So as an optimization when
107 # component_name is "document", we're going to make a copy of the
108 # data (because the original belongs to our caller) and delete those
109 # strings from the copy.
110 #
111 # WARNING: This optimization can save 20kb in data transfer
112 # for typical pages, but it requires us to be very careful on
113 # the frontend. If any components render conditionally based on
114 # the state of bodyHTML, tocHTML or quickLinkHTML, then they will
115 # render differently on the client than during SSR, and the hydrate
116 # will not just work cleanly, and those components will re-render
117 # with empty strings. This has already caused Bug 1558308, and
118 # I've commented it out because the benefit in file size doesn't
119 # seem worth the risk of client-side bugs.
120 #
121 # As an alternative, it ought to be possible to extract the HTML
122 # strings from the SSR'ed document and rebuild the document object
123 # on the client right before we call hydrate(). So if you uncomment
124 # the lines below, you should also edit kuma/javascript/src/index.jsx
125 # to extract the HTML from the document as well.
126 #
127 # if component_name == 'document':
128 # data = data.copy()
129 # data['documentData'] = data['documentData'].copy()
130 # data['documentData'].update(bodyHTML='',
131 # tocHTML='',
132 # quickLinksHTML='')
133 result = response.json()
134 return _render(component_name, result['html'], result['script'])
135
136 except requests.exceptions.ConnectionError:
137 print("Connection error contacting SSR server.")
138 print("Falling back to client side rendering.")
139 return client_side_render(component_name, data)
140 except requests.exceptions.ReadTimeout:
141 print("Timeout contacting SSR server.")
142 print("Falling back to client side rendering.")
143 return client_side_render(component_name, data)
144
[end of kuma/wiki/templatetags/ssr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kuma/wiki/templatetags/ssr.py b/kuma/wiki/templatetags/ssr.py
--- a/kuma/wiki/templatetags/ssr.py
+++ b/kuma/wiki/templatetags/ssr.py
@@ -130,6 +130,7 @@
# data['documentData'].update(bodyHTML='',
# tocHTML='',
# quickLinksHTML='')
+ response.raise_for_status()
result = response.json()
return _render(component_name, result['html'], result['script'])
|
{"golden_diff": "diff --git a/kuma/wiki/templatetags/ssr.py b/kuma/wiki/templatetags/ssr.py\n--- a/kuma/wiki/templatetags/ssr.py\n+++ b/kuma/wiki/templatetags/ssr.py\n@@ -130,6 +130,7 @@\n # data['documentData'].update(bodyHTML='',\n # tocHTML='',\n # quickLinksHTML='')\n+ response.raise_for_status()\n result = response.json()\n return _render(component_name, result['html'], result['script'])\n", "issue": "JSONDecodeError in ssr.py\nSimply opening https://beta.developer.mozilla.org/ja/docs/Web/API/Window causes the 500 ISE. \r\nhttps://sentry.prod.mozaws.net/operations/mdn-prod/issues/6224701/?environment=oregon%3Aprod\r\n\r\n```\r\nJSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n File \"django/core/handlers/exception.py\", line 41, in inner\r\n response = get_response(request)\r\n File \"django/core/handlers/base.py\", line 187, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"django/core/handlers/base.py\", line 185, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"newrelic/hooks/framework_django.py\", line 544, in wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"kuma/core/decorators.py\", line 37, in _cache_controlled\r\n response = viewfunc(request, *args, **kw)\r\n File \"django/views/decorators/csrf.py\", line 58, in wrapped_view\r\n return view_func(*args, **kwargs)\r\n File \"django/views/decorators/http.py\", line 40, in inner\r\n return func(request, *args, **kwargs)\r\n File \"kuma/wiki/decorators.py\", line 31, in _added_header\r\n response = func(request, *args, **kwargs)\r\n File \"kuma/wiki/decorators.py\", line 105, in process\r\n return func(request, *args, **kwargs)\r\n File \"newrelic/api/function_trace.py\", line 139, in literal_wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"ratelimit/decorators.py\", line 30, in _wrapped\r\n return fn(*args, **kw)\r\n File \"kuma/wiki/views/document.py\", line 617, in document\r\n return react_document(request, document_slug, document_locale)\r\n File \"kuma/wiki/views/document.py\", line 873, in react_document\r\n response = render(request, 'wiki/react_document.html', context)\r\n File \"django/shortcuts.py\", line 30, in render\r\n content = loader.render_to_string(template_name, context, request, using=using)\r\n File \"django/template/loader.py\", line 68, in render_to_string\r\n return template.render(context, request)\r\n File \"django_jinja/backend.py\", line 106, in render\r\n return mark_safe(self.template.render(context))\r\n File \"newrelic/api/function_trace.py\", line 121, in dynamic_wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"jinja2/environment.py\", line 1008, in render\r\n return self.environment.handle_exception(exc_info, True)\r\n File \"jinja2/environment.py\", line 780, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/app/kuma/wiki/jinja2/wiki/react_document.html\", line 120, in top-level template code\r\n document_data)|safe }}\r\n File \"kuma/wiki/templatetags/ssr.py\", line 50, in render_react\r\n return server_side_render(component_name, data)\r\n File \"kuma/wiki/templatetags/ssr.py\", line 133, in server_side_render\r\n result = response.json()\r\n File \"requests/models.py\", line 897, in json\r\n return complexjson.loads(self.text, **kwargs)\r\n File \"simplejson/__init__.py\", line 516, in loads\r\n return _default_decoder.decode(s)\r\n File \"simplejson/decoder.py\", line 370, in decode\r\n obj, end = self.raw_decode(s)\r\n File 
\"simplejson/decoder.py\", line 400, in raw_decode\r\n return self.scan_once(s, idx=_w(s, idx).end())\r\n```\r\n\r\nSeems what's coming back from the SSR Node service isn't a JSON response but in Python we're expecting it to. \n", "before_files": [{"content": "from __future__ import print_function\n\nimport json\nimport os\n\nimport requests\nimport requests.exceptions\nfrom django.conf import settings\nfrom django.utils import lru_cache\nfrom django_jinja import library\n\n\n@lru_cache.lru_cache()\ndef get_localization_data(locale):\n \"\"\"\n Read the frontend string catalog for the specified locale, parse\n it as JSON, and return the resulting dict. The returned values\n are cached so that we don't have to read files all the time.\n \"\"\"\n path = os.path.join(settings.BASE_DIR,\n 'static', 'jsi18n',\n locale, 'react.json')\n with open(path, 'r') as f:\n return json.load(f)\n\n\[email protected]_function\ndef render_react(component_name, locale, url, document_data, ssr=True):\n \"\"\"\n Render a script tag to define the data and any other HTML tags needed\n to enable the display of a React-based UI. By default, this does\n server side rendering, falling back to client-side rendering if\n the SSR attempt fails. Pass False as the second argument to do\n client-side rendering unconditionally.\n\n Note that we are not defining a generic Jinja template tag here.\n The code in this file is specific to Kuma's React-based UI.\n \"\"\"\n localization_data = get_localization_data(locale)\n\n data = {\n 'locale': locale,\n 'stringCatalog': localization_data['catalog'],\n 'pluralExpression': localization_data['plural'],\n 'url': url,\n 'documentData': document_data,\n }\n\n if ssr:\n return server_side_render(component_name, data)\n else:\n return client_side_render(component_name, data)\n\n\ndef _render(component_name, html, script, needs_serialization=False):\n \"\"\"A utility function used by both client side and server side rendering.\n Returns a string that includes the specified HTML and a serialized\n form of the state dict, in the format expected by the client-side code\n in kuma/javascript/src/index.jsx.\n \"\"\"\n if needs_serialization:\n assert isinstance(script, dict), type(script)\n script = json.dumps(script).replace('</', '<\\\\/')\n else:\n script = u'JSON.parse({})'.format(script)\n\n return (\n u'<div id=\"react-container\" data-component-name=\"{}\">{}</div>\\n'\n u'<script>window._react_data = {};</script>\\n'\n ).format(component_name, html, script)\n\n\ndef client_side_render(component_name, data):\n \"\"\"\n Output an empty <div> and a script with complete state so that\n the UI can be rendered on the client-side.\n \"\"\"\n return _render(component_name, '', data, needs_serialization=True)\n\n\ndef server_side_render(component_name, data):\n \"\"\"\n Pre-render the React UI to HTML and output it in a <div>, and then\n also pass the necessary serialized state in a <script> so that\n React on the client side can sync itself with the pre-rendred HTML.\n\n If any exceptions are thrown during the server-side rendering, we\n fall back to client-side rendering instead.\n \"\"\"\n url = '{}/{}'.format(settings.SSR_URL, component_name)\n timeout = settings.SSR_TIMEOUT\n # Try server side rendering\n try:\n # POST the document data as JSON to the SSR server and we\n # should get HTML text (encoded as plain text) in the body\n # of the response\n response = requests.post(url,\n headers={'Content-Type': 'application/json'},\n data=json.dumps(data).encode('utf8'),\n 
timeout=timeout)\n\n # Even though we've got fully rendered HTML now, we still need to\n # send the document data along with it so that React can sync its\n # state on the client side with what is in the HTML. When rendering\n # a document page, the data includes long strings of HTML that\n # we can get away without duplicating. So as an optimization when\n # component_name is \"document\", we're going to make a copy of the\n # data (because the original belongs to our caller) and delete those\n # strings from the copy.\n #\n # WARNING: This optimization can save 20kb in data transfer\n # for typical pages, but it requires us to be very careful on\n # the frontend. If any components render conditionally based on\n # the state of bodyHTML, tocHTML or quickLinkHTML, then they will\n # render differently on the client than during SSR, and the hydrate\n # will not just work cleanly, and those components will re-render\n # with empty strings. This has already caused Bug 1558308, and\n # I've commented it out because the benefit in file size doesn't\n # seem worth the risk of client-side bugs.\n #\n # As an alternative, it ought to be possible to extract the HTML\n # strings from the SSR'ed document and rebuild the document object\n # on the client right before we call hydrate(). So if you uncomment\n # the lines below, you should also edit kuma/javascript/src/index.jsx\n # to extract the HTML from the document as well.\n #\n # if component_name == 'document':\n # data = data.copy()\n # data['documentData'] = data['documentData'].copy()\n # data['documentData'].update(bodyHTML='',\n # tocHTML='',\n # quickLinksHTML='')\n result = response.json()\n return _render(component_name, result['html'], result['script'])\n\n except requests.exceptions.ConnectionError:\n print(\"Connection error contacting SSR server.\")\n print(\"Falling back to client side rendering.\")\n return client_side_render(component_name, data)\n except requests.exceptions.ReadTimeout:\n print(\"Timeout contacting SSR server.\")\n print(\"Falling back to client side rendering.\")\n return client_side_render(component_name, data)\n", "path": "kuma/wiki/templatetags/ssr.py"}]}
| 3,087 | 123 |
gh_patches_debug_4898
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-modules-extras-3459
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
locale_gen fails when using Python 3
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
`locale_gen`
##### ANSIBLE VERSION
```
ansible 2.2.0.0
```
##### CONFIGURATION
```ansible_python_interpreter: "python3"```
##### OS / ENVIRONMENT
Ubuntu Xenial 16.04
##### SUMMARY
When using Python 3, running a task with `locale_gen` fails.
##### STEPS TO REPRODUCE
1. Set a task using `locale_gen`; in my case it was:
```
- name: ensure the required locale exists
locale_gen:
name: "{{ db_locale }}"
state: present
```
2. Set `--ansible_python_interpreter=python3` when running the task. (Or `/usr/bin/python3` or whatever works on your system.)
##### EXPECTED RESULTS
```
TASK [postgresql : ensure the required locale exists] **************************
changed: [default] => {"changed": true, "msg": "OK", "name": "en_GB.UTF-8"}
```
##### ACTUAL RESULTS
```
TASK [postgresql : ensure the required locale exists] **************************
fatal: [default]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 127.0.0.1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\", line 239, in <module>\r\n main()\r\n File \"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\", line 217, in main\r\n if is_present(name):\r\n File \"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\", line 96, in is_present\r\n return any(fix_case(name) == fix_case(line) for line in output.splitlines())\r\n File \"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\", line 96, in <genexpr>\r\n return any(fix_case(name) == fix_case(line) for line in output.splitlines())\r\n File \"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\", line 101, in fix_case\r\n for s, r in LOCALE_NORMALIZATION.iteritems():\r\nAttributeError: 'dict' object has no attribute 'iteritems'\r\n", "msg": "MODULE FAILURE"}```
</issue>
<code>
[start of system/locale_gen.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3 #
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18
19 DOCUMENTATION = '''
20 ---
21 module: locale_gen
22 short_description: Creates or removes locales.
23 description:
24 - Manages locales by editing /etc/locale.gen and invoking locale-gen.
25 version_added: "1.6"
26 author: "Augustus Kling (@AugustusKling)"
27 options:
28 name:
29 description:
30 - Name and encoding of the locale, such as "en_GB.UTF-8".
31 required: true
32 default: null
33 aliases: []
34 state:
35 description:
36 - Whether the locale shall be present.
37 required: false
38 choices: ["present", "absent"]
39 default: "present"
40 '''
41
42 EXAMPLES = '''
43 # Ensure a locale exists.
44 - locale_gen: name=de_CH.UTF-8 state=present
45 '''
46
47 import os
48 import os.path
49 from subprocess import Popen, PIPE, call
50 import re
51
52 from ansible.module_utils.basic import *
53 from ansible.module_utils.pycompat24 import get_exception
54
55 LOCALE_NORMALIZATION = {
56 ".utf8": ".UTF-8",
57 ".eucjp": ".EUC-JP",
58 ".iso885915": ".ISO-8859-15",
59 ".cp1251": ".CP1251",
60 ".koi8r": ".KOI8-R",
61 ".armscii8": ".ARMSCII-8",
62 ".euckr": ".EUC-KR",
63 ".gbk": ".GBK",
64 ".gb18030": ".GB18030",
65 ".euctw": ".EUC-TW",
66 }
67
68 # ===========================================
69 # location module specific support methods.
70 #
71
72 def is_available(name, ubuntuMode):
73 """Check if the given locale is available on the system. This is done by
74 checking either :
75 * if the locale is present in /etc/locales.gen
76 * or if the locale is present in /usr/share/i18n/SUPPORTED"""
77 if ubuntuMode:
78 __regexp = '^(?P<locale>\S+_\S+) (?P<charset>\S+)\s*$'
79 __locales_available = '/usr/share/i18n/SUPPORTED'
80 else:
81 __regexp = '^#{0,1}\s*(?P<locale>\S+_\S+) (?P<charset>\S+)\s*$'
82 __locales_available = '/etc/locale.gen'
83
84 re_compiled = re.compile(__regexp)
85 fd = open(__locales_available, 'r')
86 for line in fd:
87 result = re_compiled.match(line)
88 if result and result.group('locale') == name:
89 return True
90 fd.close()
91 return False
92
93 def is_present(name):
94 """Checks if the given locale is currently installed."""
95 output = Popen(["locale", "-a"], stdout=PIPE).communicate()[0]
96 return any(fix_case(name) == fix_case(line) for line in output.splitlines())
97
98 def fix_case(name):
99 """locale -a might return the encoding in either lower or upper case.
100 Passing through this function makes them uniform for comparisons."""
101 for s, r in LOCALE_NORMALIZATION.iteritems():
102 name = name.replace(s, r)
103 return name
104
105 def replace_line(existing_line, new_line):
106 """Replaces lines in /etc/locale.gen"""
107 try:
108 f = open("/etc/locale.gen", "r")
109 lines = [line.replace(existing_line, new_line) for line in f]
110 finally:
111 f.close()
112 try:
113 f = open("/etc/locale.gen", "w")
114 f.write("".join(lines))
115 finally:
116 f.close()
117
118 def set_locale(name, enabled=True):
119 """ Sets the state of the locale. Defaults to enabled. """
120 search_string = '#{0,1}\s*%s (?P<charset>.+)' % name
121 if enabled:
122 new_string = '%s \g<charset>' % (name)
123 else:
124 new_string = '# %s \g<charset>' % (name)
125 try:
126 f = open("/etc/locale.gen", "r")
127 lines = [re.sub(search_string, new_string, line) for line in f]
128 finally:
129 f.close()
130 try:
131 f = open("/etc/locale.gen", "w")
132 f.write("".join(lines))
133 finally:
134 f.close()
135
136 def apply_change(targetState, name):
137 """Create or remove locale.
138
139 Keyword arguments:
140 targetState -- Desired state, either present or absent.
141 name -- Name including encoding such as de_CH.UTF-8.
142 """
143 if targetState=="present":
144 # Create locale.
145 set_locale(name, enabled=True)
146 else:
147 # Delete locale.
148 set_locale(name, enabled=False)
149
150 localeGenExitValue = call("locale-gen")
151 if localeGenExitValue!=0:
152 raise EnvironmentError(localeGenExitValue, "locale.gen failed to execute, it returned "+str(localeGenExitValue))
153
154 def apply_change_ubuntu(targetState, name):
155 """Create or remove locale.
156
157 Keyword arguments:
158 targetState -- Desired state, either present or absent.
159 name -- Name including encoding such as de_CH.UTF-8.
160 """
161 if targetState=="present":
162 # Create locale.
163 # Ubuntu's patched locale-gen automatically adds the new locale to /var/lib/locales/supported.d/local
164 localeGenExitValue = call(["locale-gen", name])
165 else:
166 # Delete locale involves discarding the locale from /var/lib/locales/supported.d/local and regenerating all locales.
167 try:
168 f = open("/var/lib/locales/supported.d/local", "r")
169 content = f.readlines()
170 finally:
171 f.close()
172 try:
173 f = open("/var/lib/locales/supported.d/local", "w")
174 for line in content:
175 locale, charset = line.split(' ')
176 if locale != name:
177 f.write(line)
178 finally:
179 f.close()
180 # Purge locales and regenerate.
181 # Please provide a patch if you know how to avoid regenerating the locales to keep!
182 localeGenExitValue = call(["locale-gen", "--purge"])
183
184 if localeGenExitValue!=0:
185 raise EnvironmentError(localeGenExitValue, "locale.gen failed to execute, it returned "+str(localeGenExitValue))
186
187 # ==============================================================
188 # main
189
190 def main():
191
192 module = AnsibleModule(
193 argument_spec = dict(
194 name = dict(required=True),
195 state = dict(choices=['present','absent'], default='present'),
196 ),
197 supports_check_mode=True
198 )
199
200 name = module.params['name']
201 state = module.params['state']
202
203 if not os.path.exists("/etc/locale.gen"):
204 if os.path.exists("/var/lib/locales/supported.d/"):
205 # Ubuntu created its own system to manage locales.
206 ubuntuMode = True
207 else:
208 module.fail_json(msg="/etc/locale.gen and /var/lib/locales/supported.d/local are missing. Is the package \"locales\" installed?")
209 else:
210 # We found the common way to manage locales.
211 ubuntuMode = False
212
213 if not is_available(name, ubuntuMode):
214 module.fail_json(msg="The locales you've entered is not available "
215 "on your system.")
216
217 if is_present(name):
218 prev_state = "present"
219 else:
220 prev_state = "absent"
221 changed = (prev_state!=state)
222
223 if module.check_mode:
224 module.exit_json(changed=changed)
225 else:
226 if changed:
227 try:
228 if ubuntuMode==False:
229 apply_change(state, name)
230 else:
231 apply_change_ubuntu(state, name)
232 except EnvironmentError:
233 e = get_exception()
234 module.fail_json(msg=e.strerror, exitValue=e.errno)
235
236 module.exit_json(name=name, changed=changed, msg="OK")
237
238
239 main()
240
[end of system/locale_gen.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/system/locale_gen.py b/system/locale_gen.py
--- a/system/locale_gen.py
+++ b/system/locale_gen.py
@@ -98,7 +98,7 @@
def fix_case(name):
"""locale -a might return the encoding in either lower or upper case.
Passing through this function makes them uniform for comparisons."""
- for s, r in LOCALE_NORMALIZATION.iteritems():
+ for s, r in LOCALE_NORMALIZATION.items():
name = name.replace(s, r)
return name
|
{"golden_diff": "diff --git a/system/locale_gen.py b/system/locale_gen.py\n--- a/system/locale_gen.py\n+++ b/system/locale_gen.py\n@@ -98,7 +98,7 @@\n def fix_case(name):\n \"\"\"locale -a might return the encoding in either lower or upper case.\n Passing through this function makes them uniform for comparisons.\"\"\"\n- for s, r in LOCALE_NORMALIZATION.iteritems():\n+ for s, r in LOCALE_NORMALIZATION.items():\n name = name.replace(s, r)\n return name\n", "issue": "locale_gen fails when using Python 3\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n`locale_gen`\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.2.0.0\r\n```\r\n\r\n##### CONFIGURATION\r\n```ansible_python_interpreter: \"python3\"```\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu Xenial 16.04\r\n\r\n##### SUMMARY\r\nWhen using Python 3, running a task with `locale_gen` fails.\r\n\r\n##### STEPS TO REPRODUCE\r\n1. Set a task using `locale_gen`; in my case it was:\r\n ```\r\n - name: ensure the required locale exists\r\n locale_gen:\r\n name: \"{{ db_locale }}\"\r\n state: present\r\n ```\r\n2. Set `--ansible_python_interpreter=python3` when running the task. (Or `/usr/bin/python3` or whatever works on your system.)\r\n\r\n##### EXPECTED RESULTS\r\n```\r\nTASK [postgresql : ensure the required locale exists] **************************\r\nchanged: [default] => {\"changed\": true, \"msg\": \"OK\", \"name\": \"en_GB.UTF-8\"}\r\n```\r\n\r\n##### ACTUAL RESULTS\r\n```\r\nTASK [postgresql : ensure the required locale exists] **************************\r\nfatal: [default]: FAILED! => {\"changed\": false, \"failed\": true, \"module_stderr\": \"Shared connection to 127.0.0.1 closed.\\r\\n\", \"module_stdout\": \"Traceback (most recent call last):\\r\\n File \\\"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\\\", line 239, in <module>\\r\\n main()\\r\\n File \\\"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\\\", line 217, in main\\r\\n if is_present(name):\\r\\n File \\\"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\\\", line 96, in is_present\\r\\n return any(fix_case(name) == fix_case(line) for line in output.splitlines())\\r\\n File \\\"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\\\", line 96, in <genexpr>\\r\\n return any(fix_case(name) == fix_case(line) for line in output.splitlines())\\r\\n File \\\"/tmp/ansible_23qigrzr/ansible_module_locale_gen.py\\\", line 101, in fix_case\\r\\n for s, r in LOCALE_NORMALIZATION.iteritems():\\r\\nAttributeError: 'dict' object has no attribute 'iteritems'\\r\\n\", \"msg\": \"MODULE FAILURE\"}```\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: locale_gen\nshort_description: Creates or removes locales.\ndescription:\n - Manages locales by editing /etc/locale.gen and invoking locale-gen.\nversion_added: \"1.6\"\nauthor: \"Augustus Kling (@AugustusKling)\"\noptions:\n name:\n description:\n - Name and encoding of the locale, such as \"en_GB.UTF-8\".\n required: true\n default: null\n aliases: []\n state:\n description:\n - Whether the locale shall be present.\n required: false\n choices: [\"present\", \"absent\"]\n default: \"present\"\n'''\n\nEXAMPLES = '''\n# Ensure a locale exists.\n- locale_gen: name=de_CH.UTF-8 state=present\n'''\n\nimport os\nimport os.path\nfrom subprocess import Popen, PIPE, call\nimport re\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.pycompat24 import get_exception\n\nLOCALE_NORMALIZATION = {\n \".utf8\": \".UTF-8\",\n \".eucjp\": \".EUC-JP\",\n \".iso885915\": \".ISO-8859-15\",\n \".cp1251\": \".CP1251\",\n \".koi8r\": \".KOI8-R\",\n \".armscii8\": \".ARMSCII-8\",\n \".euckr\": \".EUC-KR\",\n \".gbk\": \".GBK\",\n \".gb18030\": \".GB18030\",\n \".euctw\": \".EUC-TW\",\n}\n\n# ===========================================\n# location module specific support methods.\n#\n\ndef is_available(name, ubuntuMode):\n \"\"\"Check if the given locale is available on the system. This is done by\n checking either :\n * if the locale is present in /etc/locales.gen\n * or if the locale is present in /usr/share/i18n/SUPPORTED\"\"\"\n if ubuntuMode:\n __regexp = '^(?P<locale>\\S+_\\S+) (?P<charset>\\S+)\\s*$'\n __locales_available = '/usr/share/i18n/SUPPORTED'\n else:\n __regexp = '^#{0,1}\\s*(?P<locale>\\S+_\\S+) (?P<charset>\\S+)\\s*$'\n __locales_available = '/etc/locale.gen'\n\n re_compiled = re.compile(__regexp)\n fd = open(__locales_available, 'r')\n for line in fd:\n result = re_compiled.match(line)\n if result and result.group('locale') == name:\n return True\n fd.close()\n return False\n\ndef is_present(name):\n \"\"\"Checks if the given locale is currently installed.\"\"\"\n output = Popen([\"locale\", \"-a\"], stdout=PIPE).communicate()[0]\n return any(fix_case(name) == fix_case(line) for line in output.splitlines())\n\ndef fix_case(name):\n \"\"\"locale -a might return the encoding in either lower or upper case.\n Passing through this function makes them uniform for comparisons.\"\"\"\n for s, r in LOCALE_NORMALIZATION.iteritems():\n name = name.replace(s, r)\n return name\n\ndef replace_line(existing_line, new_line):\n \"\"\"Replaces lines in /etc/locale.gen\"\"\"\n try:\n f = open(\"/etc/locale.gen\", \"r\")\n lines = [line.replace(existing_line, new_line) for line in f]\n finally:\n f.close()\n try:\n f = open(\"/etc/locale.gen\", \"w\")\n f.write(\"\".join(lines))\n finally:\n f.close()\n\ndef set_locale(name, enabled=True):\n \"\"\" Sets the state of the locale. Defaults to enabled. 
\"\"\"\n search_string = '#{0,1}\\s*%s (?P<charset>.+)' % name\n if enabled:\n new_string = '%s \\g<charset>' % (name)\n else:\n new_string = '# %s \\g<charset>' % (name)\n try:\n f = open(\"/etc/locale.gen\", \"r\")\n lines = [re.sub(search_string, new_string, line) for line in f]\n finally:\n f.close()\n try:\n f = open(\"/etc/locale.gen\", \"w\")\n f.write(\"\".join(lines))\n finally:\n f.close()\n\ndef apply_change(targetState, name):\n \"\"\"Create or remove locale.\n\n Keyword arguments:\n targetState -- Desired state, either present or absent.\n name -- Name including encoding such as de_CH.UTF-8.\n \"\"\"\n if targetState==\"present\":\n # Create locale.\n set_locale(name, enabled=True)\n else:\n # Delete locale.\n set_locale(name, enabled=False)\n \n localeGenExitValue = call(\"locale-gen\")\n if localeGenExitValue!=0:\n raise EnvironmentError(localeGenExitValue, \"locale.gen failed to execute, it returned \"+str(localeGenExitValue))\n\ndef apply_change_ubuntu(targetState, name):\n \"\"\"Create or remove locale.\n \n Keyword arguments:\n targetState -- Desired state, either present or absent.\n name -- Name including encoding such as de_CH.UTF-8.\n \"\"\"\n if targetState==\"present\":\n # Create locale.\n # Ubuntu's patched locale-gen automatically adds the new locale to /var/lib/locales/supported.d/local\n localeGenExitValue = call([\"locale-gen\", name])\n else:\n # Delete locale involves discarding the locale from /var/lib/locales/supported.d/local and regenerating all locales.\n try:\n f = open(\"/var/lib/locales/supported.d/local\", \"r\")\n content = f.readlines()\n finally:\n f.close()\n try:\n f = open(\"/var/lib/locales/supported.d/local\", \"w\")\n for line in content:\n locale, charset = line.split(' ')\n if locale != name:\n f.write(line)\n finally:\n f.close()\n # Purge locales and regenerate.\n # Please provide a patch if you know how to avoid regenerating the locales to keep!\n localeGenExitValue = call([\"locale-gen\", \"--purge\"])\n \n if localeGenExitValue!=0:\n raise EnvironmentError(localeGenExitValue, \"locale.gen failed to execute, it returned \"+str(localeGenExitValue))\n\n# ==============================================================\n# main\n\ndef main():\n\n module = AnsibleModule(\n argument_spec = dict(\n name = dict(required=True),\n state = dict(choices=['present','absent'], default='present'),\n ),\n supports_check_mode=True\n )\n\n name = module.params['name']\n state = module.params['state']\n\n if not os.path.exists(\"/etc/locale.gen\"):\n if os.path.exists(\"/var/lib/locales/supported.d/\"):\n # Ubuntu created its own system to manage locales.\n ubuntuMode = True\n else:\n module.fail_json(msg=\"/etc/locale.gen and /var/lib/locales/supported.d/local are missing. Is the package \\\"locales\\\" installed?\")\n else:\n # We found the common way to manage locales.\n ubuntuMode = False\n\n if not is_available(name, ubuntuMode):\n module.fail_json(msg=\"The locales you've entered is not available \"\n \"on your system.\")\n\n if is_present(name):\n prev_state = \"present\"\n else:\n prev_state = \"absent\"\n changed = (prev_state!=state)\n \n if module.check_mode:\n module.exit_json(changed=changed)\n else:\n if changed:\n try:\n if ubuntuMode==False:\n apply_change(state, name)\n else:\n apply_change_ubuntu(state, name)\n except EnvironmentError:\n e = get_exception()\n module.fail_json(msg=e.strerror, exitValue=e.errno)\n\n module.exit_json(name=name, changed=changed, msg=\"OK\")\n\n\nmain()\n", "path": "system/locale_gen.py"}]}
| 3,584 | 111 |
gh_patches_debug_32763
|
rasdani/github-patches
|
git_diff
|
benoitc__gunicorn-2193
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[ver.20.0.0] The gunicorn config files cannot using '__file__' constants.
I configured app_root that base on the relative path in gunicorn files.
`app_root = os.path.abspath(os.path.join(__file__, '..', '..'))`
now I have occurred when upgrade gunicorn version to 20.0.0 and starting with config file.
`$ venv/bin/gunicorn -c /path/to/app/conf/gunicorn.conf.py wsgi:app`
> Failed to read config file: /path/to/app/conf/gunicorn.conf.py
Traceback (most recent call last):
File "/path/to/app/venv/lib/python3.7/site-packages/gunicorn/app/base.py", line 102, in get_config_from_filename
loader.exec_module(mod)
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/path/to/app/conf/gunicorn.conf.py", line 9, in <module>
app_root = os.path.abspath(os.path.join(__file__, '..', '..'))
Thanks to help me.
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 import os
7 import sys
8
9 from setuptools import setup, find_packages
10 from setuptools.command.test import test as TestCommand
11
12 from gunicorn import __version__
13
14
15 CLASSIFIERS = [
16 'Development Status :: 4 - Beta',
17 'Environment :: Other Environment',
18 'Intended Audience :: Developers',
19 'License :: OSI Approved :: MIT License',
20 'Operating System :: MacOS :: MacOS X',
21 'Operating System :: POSIX',
22 'Programming Language :: Python',
23 'Programming Language :: Python :: 3',
24 'Programming Language :: Python :: 3.4',
25 'Programming Language :: Python :: 3.5',
26 'Programming Language :: Python :: 3.6',
27 'Programming Language :: Python :: 3.7',
28 'Programming Language :: Python :: 3.8',
29 'Programming Language :: Python :: 3 :: Only',
30 'Programming Language :: Python :: Implementation :: CPython',
31 'Programming Language :: Python :: Implementation :: PyPy',
32 'Topic :: Internet',
33 'Topic :: Utilities',
34 'Topic :: Software Development :: Libraries :: Python Modules',
35 'Topic :: Internet :: WWW/HTTP',
36 'Topic :: Internet :: WWW/HTTP :: WSGI',
37 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',
38 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']
39
40 # read long description
41 with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:
42 long_description = f.read()
43
44 # read dev requirements
45 fname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')
46 with open(fname) as f:
47 tests_require = [l.strip() for l in f.readlines()]
48
49 class PyTestCommand(TestCommand):
50 user_options = [
51 ("cov", None, "measure coverage")
52 ]
53
54 def initialize_options(self):
55 TestCommand.initialize_options(self)
56 self.cov = None
57
58 def finalize_options(self):
59 TestCommand.finalize_options(self)
60 self.test_args = ['tests']
61 if self.cov:
62 self.test_args += ['--cov', 'gunicorn']
63 self.test_suite = True
64
65 def run_tests(self):
66 import pytest
67 errno = pytest.main(self.test_args)
68 sys.exit(errno)
69
70
71 install_requires = [
72 # We depend on functioning pkg_resources.working_set.add_entry() and
73 # pkg_resources.load_entry_point(). These both work as of 3.0 which
74 # is the first version to support Python 3.4 which we require as a
75 # floor.
76 'setuptools>=3.0',
77 ]
78
79 extras_require = {
80 'gevent': ['gevent>=0.13'],
81 'eventlet': ['eventlet>=0.9.7'],
82 'tornado': ['tornado>=0.2'],
83 'gthread': [],
84 'setproctitle': ['setproctitle'],
85 }
86
87 setup(
88 name='gunicorn',
89 version=__version__,
90
91 description='WSGI HTTP Server for UNIX',
92 long_description=long_description,
93 author='Benoit Chesneau',
94 author_email='[email protected]',
95 license='MIT',
96 url='http://gunicorn.org',
97
98 python_requires='>=3.4',
99 install_requires=install_requires,
100 classifiers=CLASSIFIERS,
101 zip_safe=False,
102 packages=find_packages(exclude=['examples', 'tests']),
103 include_package_data=True,
104
105 tests_require=tests_require,
106 cmdclass={'test': PyTestCommand},
107
108 entry_points="""
109 [console_scripts]
110 gunicorn=gunicorn.app.wsgiapp:run
111
112 [paste.server_runner]
113 main=gunicorn.app.pasterapp:serve
114 """,
115 extras_require=extras_require,
116 )
117
[end of setup.py]
[start of gunicorn/app/base.py]
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5 import importlib.machinery
6 import os
7 import sys
8 import traceback
9 import types
10
11 from gunicorn import util
12 from gunicorn.arbiter import Arbiter
13 from gunicorn.config import Config, get_default_config_file
14 from gunicorn import debug
15
16
17 class BaseApplication(object):
18 """
19 An application interface for configuring and loading
20 the various necessities for any given web framework.
21 """
22 def __init__(self, usage=None, prog=None):
23 self.usage = usage
24 self.cfg = None
25 self.callable = None
26 self.prog = prog
27 self.logger = None
28 self.do_load_config()
29
30 def do_load_config(self):
31 """
32 Loads the configuration
33 """
34 try:
35 self.load_default_config()
36 self.load_config()
37 except Exception as e:
38 print("\nError: %s" % str(e), file=sys.stderr)
39 sys.stderr.flush()
40 sys.exit(1)
41
42 def load_default_config(self):
43 # init configuration
44 self.cfg = Config(self.usage, prog=self.prog)
45
46 def init(self, parser, opts, args):
47 raise NotImplementedError
48
49 def load(self):
50 raise NotImplementedError
51
52 def load_config(self):
53 """
54 This method is used to load the configuration from one or several input(s).
55 Custom Command line, configuration file.
56 You have to override this method in your class.
57 """
58 raise NotImplementedError
59
60 def reload(self):
61 self.do_load_config()
62 if self.cfg.spew:
63 debug.spew()
64
65 def wsgi(self):
66 if self.callable is None:
67 self.callable = self.load()
68 return self.callable
69
70 def run(self):
71 try:
72 Arbiter(self).run()
73 except RuntimeError as e:
74 print("\nError: %s\n" % e, file=sys.stderr)
75 sys.stderr.flush()
76 sys.exit(1)
77
78
79 class Application(BaseApplication):
80
81 # 'init' and 'load' methods are implemented by WSGIApplication.
82 # pylint: disable=abstract-method
83
84 def chdir(self):
85 # chdir to the configured path before loading,
86 # default is the current dir
87 os.chdir(self.cfg.chdir)
88
89 # add the path to sys.path
90 if self.cfg.chdir not in sys.path:
91 sys.path.insert(0, self.cfg.chdir)
92
93 def get_config_from_filename(self, filename):
94
95 if not os.path.exists(filename):
96 raise RuntimeError("%r doesn't exist" % filename)
97
98 try:
99 module_name = '__config__'
100 mod = types.ModuleType(module_name)
101 loader = importlib.machinery.SourceFileLoader(module_name, filename)
102 loader.exec_module(mod)
103 except Exception:
104 print("Failed to read config file: %s" % filename, file=sys.stderr)
105 traceback.print_exc()
106 sys.stderr.flush()
107 sys.exit(1)
108
109 return vars(mod)
110
111 def get_config_from_module_name(self, module_name):
112 return vars(importlib.import_module(module_name))
113
114 def load_config_from_module_name_or_filename(self, location):
115 """
116 Loads the configuration file: the file is a python file, otherwise raise an RuntimeError
117 Exception or stop the process if the configuration file contains a syntax error.
118 """
119
120 if location.startswith("python:"):
121 module_name = location[len("python:"):]
122 cfg = self.get_config_from_module_name(module_name)
123 else:
124 if location.startswith("file:"):
125 filename = location[len("file:"):]
126 else:
127 filename = location
128 cfg = self.get_config_from_filename(filename)
129
130 for k, v in cfg.items():
131 # Ignore unknown names
132 if k not in self.cfg.settings:
133 continue
134 try:
135 self.cfg.set(k.lower(), v)
136 except:
137 print("Invalid value for %s: %s\n" % (k, v), file=sys.stderr)
138 sys.stderr.flush()
139 raise
140
141 return cfg
142
143 def load_config_from_file(self, filename):
144 return self.load_config_from_module_name_or_filename(location=filename)
145
146 def load_config(self):
147 # parse console args
148 parser = self.cfg.parser()
149 args = parser.parse_args()
150
151 # optional settings from apps
152 cfg = self.init(parser, args, args.args)
153
154 # set up import paths and follow symlinks
155 self.chdir()
156
157 # Load up the any app specific configuration
158 if cfg:
159 for k, v in cfg.items():
160 self.cfg.set(k.lower(), v)
161
162 env_args = parser.parse_args(self.cfg.get_cmd_args_from_env())
163
164 if args.config:
165 self.load_config_from_file(args.config)
166 elif env_args.config:
167 self.load_config_from_file(env_args.config)
168 else:
169 default_config = get_default_config_file()
170 if default_config is not None:
171 self.load_config_from_file(default_config)
172
173 # Load up environment configuration
174 for k, v in vars(env_args).items():
175 if v is None:
176 continue
177 if k == "args":
178 continue
179 self.cfg.set(k.lower(), v)
180
181 # Lastly, update the configuration with any command line settings.
182 for k, v in vars(args).items():
183 if v is None:
184 continue
185 if k == "args":
186 continue
187 self.cfg.set(k.lower(), v)
188
189 # current directory might be changed by the config now
190 # set up import paths and follow symlinks
191 self.chdir()
192
193 def run(self):
194 if self.cfg.check_config:
195 try:
196 self.load()
197 except:
198 msg = "\nError while loading the application:\n"
199 print(msg, file=sys.stderr)
200 traceback.print_exc()
201 sys.stderr.flush()
202 sys.exit(1)
203 sys.exit(0)
204
205 if self.cfg.spew:
206 debug.spew()
207
208 if self.cfg.daemon:
209 util.daemonize(self.cfg.enable_stdio_inheritance)
210
211 # set python paths
212 if self.cfg.pythonpath:
213 paths = self.cfg.pythonpath.split(",")
214 for path in paths:
215 pythonpath = os.path.abspath(path)
216 if pythonpath not in sys.path:
217 sys.path.insert(0, pythonpath)
218
219 super().run()
220
[end of gunicorn/app/base.py]
[start of gunicorn/__init__.py]
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 version_info = (20, 0, 0)
7 __version__ = ".".join([str(v) for v in version_info])
8 SERVER_SOFTWARE = "gunicorn/%s" % __version__
9
[end of gunicorn/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gunicorn/__init__.py b/gunicorn/__init__.py
--- a/gunicorn/__init__.py
+++ b/gunicorn/__init__.py
@@ -3,6 +3,6 @@
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
-version_info = (20, 0, 0)
+version_info = (20, 0, 1)
__version__ = ".".join([str(v) for v in version_info])
SERVER_SOFTWARE = "gunicorn/%s" % __version__
diff --git a/gunicorn/app/base.py b/gunicorn/app/base.py
--- a/gunicorn/app/base.py
+++ b/gunicorn/app/base.py
@@ -2,11 +2,10 @@
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
-import importlib.machinery
+import importlib.util
import os
import sys
import traceback
-import types
from gunicorn import util
from gunicorn.arbiter import Arbiter
@@ -97,9 +96,10 @@
try:
module_name = '__config__'
- mod = types.ModuleType(module_name)
- loader = importlib.machinery.SourceFileLoader(module_name, filename)
- loader.exec_module(mod)
+ spec = importlib.util.spec_from_file_location(module_name, filename)
+ mod = importlib.util.module_from_spec(spec)
+ sys.modules[module_name] = mod
+ spec.loader.exec_module(mod)
except Exception:
print("Failed to read config file: %s" % filename, file=sys.stderr)
traceback.print_exc()
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,7 +21,6 @@
'Operating System :: POSIX',
'Programming Language :: Python',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
|
{"golden_diff": "diff --git a/gunicorn/__init__.py b/gunicorn/__init__.py\n--- a/gunicorn/__init__.py\n+++ b/gunicorn/__init__.py\n@@ -3,6 +3,6 @@\n # This file is part of gunicorn released under the MIT license.\n # See the NOTICE for more information.\n \n-version_info = (20, 0, 0)\n+version_info = (20, 0, 1)\n __version__ = \".\".join([str(v) for v in version_info])\n SERVER_SOFTWARE = \"gunicorn/%s\" % __version__\ndiff --git a/gunicorn/app/base.py b/gunicorn/app/base.py\n--- a/gunicorn/app/base.py\n+++ b/gunicorn/app/base.py\n@@ -2,11 +2,10 @@\n #\n # This file is part of gunicorn released under the MIT license.\n # See the NOTICE for more information.\n-import importlib.machinery\n+import importlib.util\n import os\n import sys\n import traceback\n-import types\n \n from gunicorn import util\n from gunicorn.arbiter import Arbiter\n@@ -97,9 +96,10 @@\n \n try:\n module_name = '__config__'\n- mod = types.ModuleType(module_name)\n- loader = importlib.machinery.SourceFileLoader(module_name, filename)\n- loader.exec_module(mod)\n+ spec = importlib.util.spec_from_file_location(module_name, filename)\n+ mod = importlib.util.module_from_spec(spec)\n+ sys.modules[module_name] = mod\n+ spec.loader.exec_module(mod)\n except Exception:\n print(\"Failed to read config file: %s\" % filename, file=sys.stderr)\n traceback.print_exc()\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,7 +21,6 @@\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n", "issue": "[ver.20.0.0] The gunicorn config files cannot using '__file__' constants.\nI configured app_root that base on the relative path in gunicorn files.\r\n\r\n`app_root = os.path.abspath(os.path.join(__file__, '..', '..'))`\r\n\r\nnow I have occurred when upgrade gunicorn version to 20.0.0 and starting with config file.\r\n\r\n`$ venv/bin/gunicorn -c /path/to/app/conf/gunicorn.conf.py wsgi:app`\r\n\r\n> Failed to read config file: /path/to/app/conf/gunicorn.conf.py\r\nTraceback (most recent call last):\r\n File \"/path/to/app/venv/lib/python3.7/site-packages/gunicorn/app/base.py\", line 102, in get_config_from_filename\r\n loader.exec_module(mod)\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/path/to/app/conf/gunicorn.conf.py\", line 9, in <module>\r\n app_root = os.path.abspath(os.path.join(__file__, '..', '..'))\r\n\r\nThanks to help me.\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\nfrom gunicorn import __version__\n\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: 
Python :: 3.8',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet',\n 'Topic :: Utilities',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']\n\n# read long description\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:\n long_description = f.read()\n\n# read dev requirements\nfname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')\nwith open(fname) as f:\n tests_require = [l.strip() for l in f.readlines()]\n\nclass PyTestCommand(TestCommand):\n user_options = [\n (\"cov\", None, \"measure coverage\")\n ]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.cov = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['tests']\n if self.cov:\n self.test_args += ['--cov', 'gunicorn']\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ninstall_requires = [\n # We depend on functioning pkg_resources.working_set.add_entry() and\n # pkg_resources.load_entry_point(). These both work as of 3.0 which\n # is the first version to support Python 3.4 which we require as a\n # floor.\n 'setuptools>=3.0',\n]\n\nextras_require = {\n 'gevent': ['gevent>=0.13'],\n 'eventlet': ['eventlet>=0.9.7'],\n 'tornado': ['tornado>=0.2'],\n 'gthread': [],\n 'setproctitle': ['setproctitle'],\n}\n\nsetup(\n name='gunicorn',\n version=__version__,\n\n description='WSGI HTTP Server for UNIX',\n long_description=long_description,\n author='Benoit Chesneau',\n author_email='[email protected]',\n license='MIT',\n url='http://gunicorn.org',\n\n python_requires='>=3.4',\n install_requires=install_requires,\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n include_package_data=True,\n\n tests_require=tests_require,\n cmdclass={'test': PyTestCommand},\n\n entry_points=\"\"\"\n [console_scripts]\n gunicorn=gunicorn.app.wsgiapp:run\n\n [paste.server_runner]\n main=gunicorn.app.pasterapp:serve\n \"\"\",\n extras_require=extras_require,\n)\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\nimport importlib.machinery\nimport os\nimport sys\nimport traceback\nimport types\n\nfrom gunicorn import util\nfrom gunicorn.arbiter import Arbiter\nfrom gunicorn.config import Config, get_default_config_file\nfrom gunicorn import debug\n\n\nclass BaseApplication(object):\n \"\"\"\n An application interface for configuring and loading\n the various necessities for any given web framework.\n \"\"\"\n def __init__(self, usage=None, prog=None):\n self.usage = usage\n self.cfg = None\n self.callable = None\n self.prog = prog\n self.logger = None\n self.do_load_config()\n\n def do_load_config(self):\n \"\"\"\n Loads the configuration\n \"\"\"\n try:\n self.load_default_config()\n self.load_config()\n except Exception as e:\n print(\"\\nError: %s\" % str(e), file=sys.stderr)\n sys.stderr.flush()\n sys.exit(1)\n\n def load_default_config(self):\n # init configuration\n self.cfg = Config(self.usage, prog=self.prog)\n\n def init(self, parser, opts, args):\n raise 
NotImplementedError\n\n def load(self):\n raise NotImplementedError\n\n def load_config(self):\n \"\"\"\n This method is used to load the configuration from one or several input(s).\n Custom Command line, configuration file.\n You have to override this method in your class.\n \"\"\"\n raise NotImplementedError\n\n def reload(self):\n self.do_load_config()\n if self.cfg.spew:\n debug.spew()\n\n def wsgi(self):\n if self.callable is None:\n self.callable = self.load()\n return self.callable\n\n def run(self):\n try:\n Arbiter(self).run()\n except RuntimeError as e:\n print(\"\\nError: %s\\n\" % e, file=sys.stderr)\n sys.stderr.flush()\n sys.exit(1)\n\n\nclass Application(BaseApplication):\n\n # 'init' and 'load' methods are implemented by WSGIApplication.\n # pylint: disable=abstract-method\n\n def chdir(self):\n # chdir to the configured path before loading,\n # default is the current dir\n os.chdir(self.cfg.chdir)\n\n # add the path to sys.path\n if self.cfg.chdir not in sys.path:\n sys.path.insert(0, self.cfg.chdir)\n\n def get_config_from_filename(self, filename):\n\n if not os.path.exists(filename):\n raise RuntimeError(\"%r doesn't exist\" % filename)\n\n try:\n module_name = '__config__'\n mod = types.ModuleType(module_name)\n loader = importlib.machinery.SourceFileLoader(module_name, filename)\n loader.exec_module(mod)\n except Exception:\n print(\"Failed to read config file: %s\" % filename, file=sys.stderr)\n traceback.print_exc()\n sys.stderr.flush()\n sys.exit(1)\n\n return vars(mod)\n\n def get_config_from_module_name(self, module_name):\n return vars(importlib.import_module(module_name))\n\n def load_config_from_module_name_or_filename(self, location):\n \"\"\"\n Loads the configuration file: the file is a python file, otherwise raise an RuntimeError\n Exception or stop the process if the configuration file contains a syntax error.\n \"\"\"\n\n if location.startswith(\"python:\"):\n module_name = location[len(\"python:\"):]\n cfg = self.get_config_from_module_name(module_name)\n else:\n if location.startswith(\"file:\"):\n filename = location[len(\"file:\"):]\n else:\n filename = location\n cfg = self.get_config_from_filename(filename)\n\n for k, v in cfg.items():\n # Ignore unknown names\n if k not in self.cfg.settings:\n continue\n try:\n self.cfg.set(k.lower(), v)\n except:\n print(\"Invalid value for %s: %s\\n\" % (k, v), file=sys.stderr)\n sys.stderr.flush()\n raise\n\n return cfg\n\n def load_config_from_file(self, filename):\n return self.load_config_from_module_name_or_filename(location=filename)\n\n def load_config(self):\n # parse console args\n parser = self.cfg.parser()\n args = parser.parse_args()\n\n # optional settings from apps\n cfg = self.init(parser, args, args.args)\n\n # set up import paths and follow symlinks\n self.chdir()\n\n # Load up the any app specific configuration\n if cfg:\n for k, v in cfg.items():\n self.cfg.set(k.lower(), v)\n\n env_args = parser.parse_args(self.cfg.get_cmd_args_from_env())\n\n if args.config:\n self.load_config_from_file(args.config)\n elif env_args.config:\n self.load_config_from_file(env_args.config)\n else:\n default_config = get_default_config_file()\n if default_config is not None:\n self.load_config_from_file(default_config)\n\n # Load up environment configuration\n for k, v in vars(env_args).items():\n if v is None:\n continue\n if k == \"args\":\n continue\n self.cfg.set(k.lower(), v)\n\n # Lastly, update the configuration with any command line settings.\n for k, v in vars(args).items():\n if v is None:\n continue\n if 
k == \"args\":\n continue\n self.cfg.set(k.lower(), v)\n\n # current directory might be changed by the config now\n # set up import paths and follow symlinks\n self.chdir()\n\n def run(self):\n if self.cfg.check_config:\n try:\n self.load()\n except:\n msg = \"\\nError while loading the application:\\n\"\n print(msg, file=sys.stderr)\n traceback.print_exc()\n sys.stderr.flush()\n sys.exit(1)\n sys.exit(0)\n\n if self.cfg.spew:\n debug.spew()\n\n if self.cfg.daemon:\n util.daemonize(self.cfg.enable_stdio_inheritance)\n\n # set python paths\n if self.cfg.pythonpath:\n paths = self.cfg.pythonpath.split(\",\")\n for path in paths:\n pythonpath = os.path.abspath(path)\n if pythonpath not in sys.path:\n sys.path.insert(0, pythonpath)\n\n super().run()\n", "path": "gunicorn/app/base.py"}, {"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nversion_info = (20, 0, 0)\n__version__ = \".\".join([str(v) for v in version_info])\nSERVER_SOFTWARE = \"gunicorn/%s\" % __version__\n", "path": "gunicorn/__init__.py"}]}
| 3,900 | 472 |
gh_patches_debug_8
|
rasdani/github-patches
|
git_diff
|
kivy__python-for-android-2797
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python exception when using colorlog due to incomplete IO implementation in sys.stderr
I am attempting to run a program which uses `TTYColoredFormatter` from [colorlog](https://pypi.org/project/colorlog/). This class formats log messages, adding ANSI escape codes _only_ if the stream it is writing to returns `True` for `stream.isatty()`.
Unfortunately, python-for-android's bootstrap code replaces sys.stderr and sys.stdout with a custom `LogFile` object: https://github.com/kivy/python-for-android/blob/53d77fc26c9e37eb6ce05f8899f4dae8334842b1/pythonforandroid/bootstraps/common/build/jni/application/src/start.c#L226-L242
This object doesn't implement `isatty()` (or much else, for that matter). As a result, the program raises an exception:
```
03-03 13:32:56.222 5806 5891 I python : Traceback (most recent call last):
03-03 13:32:56.222 5806 5891 I python : File "/home/jenkins/workspace/kolibri-installer-android-pr/src/main.py", line 3, in <module>
03-03 13:32:56.222 5806 5891 I python : File "/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri_android/main_activity/__main__.py", line 7, in main
03-03 13:32:56.222 5806 5891 I python : File "/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri_android/main_activity/activity.py", line 19, in <module>
03-03 13:32:56.222 5806 5891 I python : File "/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri_android/kolibri_utils.py", line 13, in <module>
03-03 13:32:56.223 5806 5891 I python : File "/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri_android/android_whitenoise.py", line 11, in <module>
03-03 13:32:56.223 5806 5891 I python : File "/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri/__init__.py", line 10, in <module>
03-03 13:32:56.223 5806 5891 I python : File "/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri/utils/env.py", line 29, in <module>
03-03 13:32:56.223 5806 5891 I python : File "/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri/dist/colorlog/colorlog.py", line 203, in __init__
03-03 13:32:56.223 5806 5891 I python : AttributeError: 'LogFile' object has no attribute 'isatty'
```
(For reference, we're using colorlog v3.2.0, so the code raising the exception looks like this: https://github.com/borntyping/python-colorlog/blob/v3.2.0/colorlog/colorlog.py#L191-L211).
Service don t start anymore, as smallIconName extra is now mandatory
https://github.com/kivy/python-for-android/blob/8cb497dd89e402478011df61f4690b963a0c96da/pythonforandroid/bootstraps/common/build/src/main/java/org/kivy/android/PythonService.java#L116
```java.lang.NullPointerException: Attempt to invoke virtual method 'boolean java.lang.String.equals(java.lang.Object)' on a null object reference```
We could test if null before.
</issue>
<code>
[start of pythonforandroid/__init__.py]
1 __version__ = '2023.02.10'
2
[end of pythonforandroid/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pythonforandroid/__init__.py b/pythonforandroid/__init__.py
--- a/pythonforandroid/__init__.py
+++ b/pythonforandroid/__init__.py
@@ -1 +1 @@
-__version__ = '2023.02.10'
+__version__ = '2023.05.21'
|
{"golden_diff": "diff --git a/pythonforandroid/__init__.py b/pythonforandroid/__init__.py\n--- a/pythonforandroid/__init__.py\n+++ b/pythonforandroid/__init__.py\n@@ -1 +1 @@\n-__version__ = '2023.02.10'\n+__version__ = '2023.05.21'\n", "issue": "Python exception when using colorlog due to incomplete IO implementation in sys.stderr\nI am attempting to run a program which uses `TTYColoredFormatter` from [colorlog](https://pypi.org/project/colorlog/). This class formats log messages, adding ANSI escape codes _only_ if the stream it is writing to returns `True` for `stream.isatty()`.\r\n\r\nUnfortunately, python-for-android's bootstrap code replaces sys.stderr and sys.stdout with a custom `LogFile` object: https://github.com/kivy/python-for-android/blob/53d77fc26c9e37eb6ce05f8899f4dae8334842b1/pythonforandroid/bootstraps/common/build/jni/application/src/start.c#L226-L242\r\n\r\nThis object doesn't implement `isatty()` (or much else, for that matter). As a result, the program raises an exception:\r\n\r\n```\r\n03-03 13:32:56.222 5806 5891 I python : Traceback (most recent call last):\r\n03-03 13:32:56.222 5806 5891 I python : File \"/home/jenkins/workspace/kolibri-installer-android-pr/src/main.py\", line 3, in <module>\r\n03-03 13:32:56.222 5806 5891 I python : File \"/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri_android/main_activity/__main__.py\", line 7, in main\r\n03-03 13:32:56.222 5806 5891 I python : File \"/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri_android/main_activity/activity.py\", line 19, in <module>\r\n03-03 13:32:56.222 5806 5891 I python : File \"/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri_android/kolibri_utils.py\", line 13, in <module>\r\n03-03 13:32:56.223 5806 5891 I python : File \"/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri_android/android_whitenoise.py\", line 11, in <module>\r\n03-03 13:32:56.223 5806 5891 I python : File \"/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri/__init__.py\", line 10, in <module>\r\n03-03 13:32:56.223 5806 5891 I python : File \"/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri/utils/env.py\", line 29, in <module>\r\n03-03 13:32:56.223 5806 5891 I python : File \"/home/jenkins/workspace/kolibri-installer-android-pr/src/kolibri/dist/colorlog/colorlog.py\", line 203, in __init__\r\n03-03 13:32:56.223 5806 5891 I python : AttributeError: 'LogFile' object has no attribute 'isatty'\r\n```\r\n\r\n(For reference, we're using colorlog v3.2.0, so the code raising the exception looks like this: https://github.com/borntyping/python-colorlog/blob/v3.2.0/colorlog/colorlog.py#L191-L211).\nService don t start anymore, as smallIconName extra is now mandatory\nhttps://github.com/kivy/python-for-android/blob/8cb497dd89e402478011df61f4690b963a0c96da/pythonforandroid/bootstraps/common/build/src/main/java/org/kivy/android/PythonService.java#L116\r\n\r\n```java.lang.NullPointerException: Attempt to invoke virtual method 'boolean java.lang.String.equals(java.lang.Object)' on a null object reference```\r\n\r\nWe could test if null before.\n", "before_files": [{"content": "__version__ = '2023.02.10'\n", "path": "pythonforandroid/__init__.py"}]}
| 1,559 | 80 |
gh_patches_debug_39973
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1917
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Write a proper commonprefix / nearest ancestor path function.
As discussed in this thread, the stdlib is lacking here: https://github.com/pantsbuild/pex/pull/1914#discussion_r974846450
</issue>
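For orientation, here is a minimal sketch of the component-wise behaviour the issue asks for (Python 3 already ships this as `os.path.commonpath`; the helper name and the POSIX-style separator below are illustrative assumptions, not part of the codebase):

```python
import os


def nearest_ancestor(paths):
    # Compare path components, not characters, and keep the longest shared run.
    split_paths = [p.split(os.sep) for p in paths]
    prefix = []
    for atoms in zip(*split_paths):
        if len(set(atoms)) == 1:
            prefix.append(atoms[0])
        else:
            break
    return os.sep.join(prefix)


# os.path.commonprefix(["/tmp/foobar", "/tmp/foo"]) returns "/tmp/foo",
# which is not an ancestor directory of both paths; the component-wise
# helper above returns "/tmp" instead.
```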
<code>
[start of pex/compatibility.py]
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 # This file contains several 2.x/3.x compatibility checkstyle violations for a reason
5 # checkstyle: noqa
6
7 from __future__ import absolute_import
8
9 import os
10 import re
11 import sys
12 from abc import ABCMeta
13 from sys import version_info as sys_version_info
14
15 from pex.typing import TYPE_CHECKING, cast
16
17 if TYPE_CHECKING:
18 from typing import IO, AnyStr, BinaryIO, Callable, Optional, Text, Tuple, Type
19
20
21 try:
22 # Python 2.x
23 from ConfigParser import ConfigParser as ConfigParser
24 except ImportError:
25 # Python 3.x
26 from configparser import ConfigParser as ConfigParser # type: ignore[import, no-redef]
27
28
29 AbstractClass = ABCMeta("AbstractClass", (object,), {})
30 PY2 = sys_version_info[0] == 2
31 PY3 = sys_version_info[0] == 3
32
33 string = cast("Tuple[Type, ...]", (str,) if PY3 else (str, unicode)) # type: ignore[name-defined]
34 text = cast("Type[Text]", str if PY3 else unicode) # type: ignore[name-defined]
35
36
37 if PY2:
38 from collections import Iterable as Iterable
39 from collections import MutableSet as MutableSet
40 else:
41 from collections.abc import Iterable as Iterable
42 from collections.abc import MutableSet as MutableSet
43
44 if PY2:
45
46 def to_bytes(st, encoding="utf-8"):
47 # type: (AnyStr, Text) -> bytes
48 if isinstance(st, unicode):
49 return st.encode(encoding)
50 elif isinstance(st, bytes):
51 return st
52 else:
53 raise ValueError("Cannot convert %s to bytes" % type(st))
54
55 def to_unicode(st, encoding="utf-8"):
56 # type: (AnyStr, Text) -> Text
57 if isinstance(st, unicode):
58 return st
59 elif isinstance(st, (str, bytes)):
60 return unicode(st, encoding)
61 else:
62 raise ValueError("Cannot convert %s to a unicode string" % type(st))
63
64 else:
65
66 def to_bytes(st, encoding="utf-8"):
67 # type: (AnyStr, Text) -> bytes
68 if isinstance(st, str):
69 return st.encode(encoding)
70 elif isinstance(st, bytes):
71 return st
72 else:
73 raise ValueError("Cannot convert %s to bytes." % type(st))
74
75 def to_unicode(st, encoding="utf-8"):
76 # type: (AnyStr, Text) -> Text
77 if isinstance(st, str):
78 return st
79 elif isinstance(st, bytes):
80 return str(st, encoding)
81 else:
82 raise ValueError("Cannot convert %s to a unicode string" % type(st))
83
84
85 _PY3_EXEC_FUNCTION = """
86 def exec_function(ast, globals_map):
87 locals_map = globals_map
88 exec ast in globals_map, locals_map
89 return locals_map
90 """
91
92 if PY3:
93
94 def exec_function(ast, globals_map):
95 locals_map = globals_map
96 exec (ast, globals_map, locals_map)
97 return locals_map
98
99 else:
100
101 def exec_function(ast, globals_map):
102 raise AssertionError("Expected this function to be re-defined at runtime.")
103
104 # This will result in `exec_function` being re-defined at runtime.
105 eval(compile(_PY3_EXEC_FUNCTION, "<exec_function>", "exec"))
106
107
108 if PY3:
109 from urllib import parse as urlparse
110 from urllib.error import HTTPError as HTTPError
111 from urllib.parse import unquote as unquote
112 from urllib.request import FileHandler as FileHandler
113 from urllib.request import HTTPBasicAuthHandler as HTTPBasicAuthHandler
114 from urllib.request import HTTPDigestAuthHandler as HTTPDigestAuthHandler
115 from urllib.request import HTTPPasswordMgrWithDefaultRealm as HTTPPasswordMgrWithDefaultRealm
116 from urllib.request import HTTPSHandler as HTTPSHandler
117 from urllib.request import ProxyHandler as ProxyHandler
118 from urllib.request import Request as Request
119 from urllib.request import build_opener as build_opener
120 else:
121 from urllib import unquote as unquote
122
123 import urlparse as urlparse
124 from urllib2 import FileHandler as FileHandler
125 from urllib2 import HTTPBasicAuthHandler as HTTPBasicAuthHandler
126 from urllib2 import HTTPDigestAuthHandler as HTTPDigestAuthHandler
127 from urllib2 import HTTPError as HTTPError
128 from urllib2 import HTTPPasswordMgrWithDefaultRealm as HTTPPasswordMgrWithDefaultRealm
129 from urllib2 import HTTPSHandler as HTTPSHandler
130 from urllib2 import ProxyHandler as ProxyHandler
131 from urllib2 import Request as Request
132 from urllib2 import build_opener as build_opener
133
134 if PY3:
135 from queue import Queue as Queue
136
137 # The `os.sched_getaffinity` function appears to be supported on Linux but not OSX.
138 if not hasattr(os, "sched_getaffinity"):
139 from os import cpu_count as cpu_count
140 else:
141
142 def cpu_count():
143 # type: () -> Optional[int]
144 # The set of CPUs accessible to the current process (pid 0).
145 cpu_set = os.sched_getaffinity(0)
146 return len(cpu_set)
147
148 else:
149 from multiprocessing import cpu_count as cpu_count
150
151 from Queue import Queue as Queue
152
153 WINDOWS = os.name == "nt"
154
155
156 # Universal newlines is the default in Python 3.
157 MODE_READ_UNIVERSAL_NEWLINES = "rU" if PY2 else "r"
158
159
160 def _get_stdio_bytes_buffer(stdio):
161 # type: (IO[str]) -> BinaryIO
162 return cast("BinaryIO", getattr(stdio, "buffer", stdio))
163
164
165 def get_stdout_bytes_buffer():
166 # type: () -> BinaryIO
167 return _get_stdio_bytes_buffer(sys.stdout)
168
169
170 def get_stderr_bytes_buffer():
171 # type: () -> BinaryIO
172 return _get_stdio_bytes_buffer(sys.stderr)
173
174
175 if PY3:
176 is_valid_python_identifier = str.isidentifier
177 else:
178
179 def is_valid_python_identifier(text):
180 # type: (str) -> bool
181
182 # N.B.: Python 2.7 only supports ASCII characters so the check is easy and this is probably
183 # why it's nt in the stdlib.
184         # why it's not in the stdlib.
185 return re.match(r"^[_a-zA-Z][_a-zA-Z0-9]*$", text) is not None
186
187
188 if PY2:
189
190 def indent(
191 text, # type: Text
192 prefix, # type: Text
193 predicate=None, # type: Optional[Callable[[Text], bool]]
194 ):
195 add_prefix = predicate if predicate else lambda line: bool(line.strip())
196 return "".join(
197 prefix + line if add_prefix(line) else line for line in text.splitlines(True)
198 )
199
200 else:
201 from textwrap import indent as indent
202
[end of pex/compatibility.py]
[start of pex/tools/main.py]
1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import absolute_import, print_function
5
6 import os
7 from argparse import ArgumentParser, Namespace
8
9 from pex import pex_bootstrapper
10 from pex.commands.command import GlobalConfigurationError, Main
11 from pex.pex import PEX
12 from pex.pex_bootstrapper import InterpreterTest
13 from pex.pex_info import PexInfo
14 from pex.result import Result, catch
15 from pex.tools import commands
16 from pex.tools.command import PEXCommand
17 from pex.tracer import TRACER
18 from pex.typing import TYPE_CHECKING
19
20 if TYPE_CHECKING:
21 from typing import Callable, Optional, Union
22
23 CommandFunc = Callable[[PEX, Namespace], Result]
24
25
26 def simplify_pex_path(pex_path):
27 # type: (str) -> str
28 # Generate the most concise path possible that is still cut/paste-able to the command line.
29 pex_path = os.path.abspath(pex_path)
30 cwd = os.getcwd()
31 if os.path.commonprefix((pex_path, cwd)) == cwd:
32 pex_path = os.path.relpath(pex_path, cwd)
33 # Handle users that do not have . as a PATH entry.
34 if not os.path.dirname(pex_path) and os.curdir not in os.environ.get("PATH", "").split(
35 os.pathsep
36 ):
37 pex_path = os.path.join(os.curdir, pex_path)
38 return pex_path
39
40
41 class PexTools(Main[PEXCommand]):
42 def __init__(self, pex=None):
43 # type: (Optional[PEX]) -> None
44
45 pex_prog_path = simplify_pex_path(pex.path()) if pex else None
46
47 # By default, let argparse derive prog from sys.argv[0].
48 prog = None # type: Optional[str]
49 if pex:
50 prog = "PEX_TOOLS=1 {pex_path}".format(pex_path=pex_prog_path)
51
52 description = "Tools for working with {}.".format(pex_prog_path if pex else "PEX files")
53 subparsers_description = (
54 "{} can be operated on using any of the following subcommands.".format(
55 "The PEX file {}".format(pex_prog_path) if pex else "A PEX file"
56 )
57 )
58
59 super(PexTools, self).__init__(
60 description=description,
61 subparsers_description=subparsers_description,
62 command_types=commands.all_commands(),
63 prog=prog,
64 )
65 self._pex = pex
66
67 def add_arguments(self, parser):
68 # type: (ArgumentParser) -> None
69 if self._pex is None:
70 parser.add_argument(
71 "pex", nargs=1, metavar="PATH", help="The path of the PEX file to operate on."
72 )
73
74
75 def main(pex=None):
76 # type: (Optional[PEX]) -> Union[int, str]
77
78 pex_tools = PexTools(pex=pex)
79 try:
80 with pex_tools.parsed_command() as pex_command, TRACER.timed(
81 "Executing PEX_TOOLS {}".format(pex_command.name())
82 ):
83 if pex is None:
84 pex_file_path = pex_command.options.pex[0]
85 pex_info = PexInfo.from_pex(pex_file_path)
86 pex_info.update(PexInfo.from_env())
87 interpreter = pex_bootstrapper.find_compatible_interpreter(
88 interpreter_test=InterpreterTest(entry_point=pex_file_path, pex_info=pex_info)
89 )
90 pex = PEX(pex_file_path, interpreter=interpreter)
91
92 result = catch(pex_command.run, pex)
93 result.maybe_display()
94 return result.exit_code
95 except GlobalConfigurationError as e:
96 return str(e)
97
[end of pex/tools/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pex/compatibility.py b/pex/compatibility.py
--- a/pex/compatibility.py
+++ b/pex/compatibility.py
@@ -15,7 +15,19 @@
from pex.typing import TYPE_CHECKING, cast
if TYPE_CHECKING:
- from typing import IO, AnyStr, BinaryIO, Callable, Optional, Text, Tuple, Type
+ from typing import (
+ IO,
+ AnyStr,
+ BinaryIO,
+ Callable,
+ Deque,
+ List,
+ Optional,
+ Sequence,
+ Text,
+ Tuple,
+ Type,
+ )
try:
@@ -37,6 +49,7 @@
if PY2:
from collections import Iterable as Iterable
from collections import MutableSet as MutableSet
+ from collections import deque
else:
from collections.abc import Iterable as Iterable
from collections.abc import MutableSet as MutableSet
@@ -199,3 +212,50 @@
else:
from textwrap import indent as indent
+
+
+if PY3:
+ from os.path import commonpath as commonpath
+else:
+
+ def commonpath(paths):
+ # type: (Sequence[Text]) -> Text
+ if not paths:
+ raise ValueError("The paths given must be a non-empty sequence")
+ if len(paths) == 1:
+ return paths[0]
+ if len({os.path.isabs(path) for path in paths}) > 1:
+ raise ValueError(
+ "Can't mix absolute and relative paths, given:\n{paths}".format(
+ paths="\n".join(paths)
+ )
+ )
+
+ def components(path):
+ # type: (Text) -> Iterable[Text]
+
+ pieces = deque() # type: Deque[Text]
+
+ def append(piece):
+ if piece and piece != ".":
+ pieces.appendleft(piece)
+
+ head, tail = os.path.split(path)
+ append(tail)
+ while head:
+ if "/" == head:
+ append(head)
+ break
+ head, tail = os.path.split(head)
+ append(tail)
+ return pieces
+
+ prefix = [] # type: List[Text]
+ for atoms in zip(*(components(path) for path in paths)):
+ if len(set(atoms)) == 1:
+ prefix.append(atoms[0])
+ else:
+ break
+ if not prefix:
+ return ""
+ return os.path.join(*prefix)
diff --git a/pex/tools/main.py b/pex/tools/main.py
--- a/pex/tools/main.py
+++ b/pex/tools/main.py
@@ -8,6 +8,7 @@
from pex import pex_bootstrapper
from pex.commands.command import GlobalConfigurationError, Main
+from pex.compatibility import commonpath
from pex.pex import PEX
from pex.pex_bootstrapper import InterpreterTest
from pex.pex_info import PexInfo
@@ -28,7 +29,7 @@
# Generate the most concise path possible that is still cut/paste-able to the command line.
pex_path = os.path.abspath(pex_path)
cwd = os.getcwd()
- if os.path.commonprefix((pex_path, cwd)) == cwd:
+ if commonpath((pex_path, cwd)) == cwd:
pex_path = os.path.relpath(pex_path, cwd)
# Handle users that do not have . as a PATH entry.
if not os.path.dirname(pex_path) and os.curdir not in os.environ.get("PATH", "").split(
|
{"golden_diff": "diff --git a/pex/compatibility.py b/pex/compatibility.py\n--- a/pex/compatibility.py\n+++ b/pex/compatibility.py\n@@ -15,7 +15,19 @@\n from pex.typing import TYPE_CHECKING, cast\n \n if TYPE_CHECKING:\n- from typing import IO, AnyStr, BinaryIO, Callable, Optional, Text, Tuple, Type\n+ from typing import (\n+ IO,\n+ AnyStr,\n+ BinaryIO,\n+ Callable,\n+ Deque,\n+ List,\n+ Optional,\n+ Sequence,\n+ Text,\n+ Tuple,\n+ Type,\n+ )\n \n \n try:\n@@ -37,6 +49,7 @@\n if PY2:\n from collections import Iterable as Iterable\n from collections import MutableSet as MutableSet\n+ from collections import deque\n else:\n from collections.abc import Iterable as Iterable\n from collections.abc import MutableSet as MutableSet\n@@ -199,3 +212,50 @@\n \n else:\n from textwrap import indent as indent\n+\n+\n+if PY3:\n+ from os.path import commonpath as commonpath\n+else:\n+\n+ def commonpath(paths):\n+ # type: (Sequence[Text]) -> Text\n+ if not paths:\n+ raise ValueError(\"The paths given must be a non-empty sequence\")\n+ if len(paths) == 1:\n+ return paths[0]\n+ if len({os.path.isabs(path) for path in paths}) > 1:\n+ raise ValueError(\n+ \"Can't mix absolute and relative paths, given:\\n{paths}\".format(\n+ paths=\"\\n\".join(paths)\n+ )\n+ )\n+\n+ def components(path):\n+ # type: (Text) -> Iterable[Text]\n+\n+ pieces = deque() # type: Deque[Text]\n+\n+ def append(piece):\n+ if piece and piece != \".\":\n+ pieces.appendleft(piece)\n+\n+ head, tail = os.path.split(path)\n+ append(tail)\n+ while head:\n+ if \"/\" == head:\n+ append(head)\n+ break\n+ head, tail = os.path.split(head)\n+ append(tail)\n+ return pieces\n+\n+ prefix = [] # type: List[Text]\n+ for atoms in zip(*(components(path) for path in paths)):\n+ if len(set(atoms)) == 1:\n+ prefix.append(atoms[0])\n+ else:\n+ break\n+ if not prefix:\n+ return \"\"\n+ return os.path.join(*prefix)\ndiff --git a/pex/tools/main.py b/pex/tools/main.py\n--- a/pex/tools/main.py\n+++ b/pex/tools/main.py\n@@ -8,6 +8,7 @@\n \n from pex import pex_bootstrapper\n from pex.commands.command import GlobalConfigurationError, Main\n+from pex.compatibility import commonpath\n from pex.pex import PEX\n from pex.pex_bootstrapper import InterpreterTest\n from pex.pex_info import PexInfo\n@@ -28,7 +29,7 @@\n # Generate the most concise path possible that is still cut/paste-able to the command line.\n pex_path = os.path.abspath(pex_path)\n cwd = os.getcwd()\n- if os.path.commonprefix((pex_path, cwd)) == cwd:\n+ if commonpath((pex_path, cwd)) == cwd:\n pex_path = os.path.relpath(pex_path, cwd)\n # Handle users that do not have . 
as a PATH entry.\n if not os.path.dirname(pex_path) and os.curdir not in os.environ.get(\"PATH\", \"\").split(\n", "issue": "Write a proper commonprefix / nearest ancestor path function.\nAs discussed in this thread, the stdlib is lacking here: https://github.com/pantsbuild/pex/pull/1914#discussion_r974846450\n", "before_files": [{"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n# This file contains several 2.x/3.x compatibility checkstyle violations for a reason\n# checkstyle: noqa\n\nfrom __future__ import absolute_import\n\nimport os\nimport re\nimport sys\nfrom abc import ABCMeta\nfrom sys import version_info as sys_version_info\n\nfrom pex.typing import TYPE_CHECKING, cast\n\nif TYPE_CHECKING:\n from typing import IO, AnyStr, BinaryIO, Callable, Optional, Text, Tuple, Type\n\n\ntry:\n # Python 2.x\n from ConfigParser import ConfigParser as ConfigParser\nexcept ImportError:\n # Python 3.x\n from configparser import ConfigParser as ConfigParser # type: ignore[import, no-redef]\n\n\nAbstractClass = ABCMeta(\"AbstractClass\", (object,), {})\nPY2 = sys_version_info[0] == 2\nPY3 = sys_version_info[0] == 3\n\nstring = cast(\"Tuple[Type, ...]\", (str,) if PY3 else (str, unicode)) # type: ignore[name-defined]\ntext = cast(\"Type[Text]\", str if PY3 else unicode) # type: ignore[name-defined]\n\n\nif PY2:\n from collections import Iterable as Iterable\n from collections import MutableSet as MutableSet\nelse:\n from collections.abc import Iterable as Iterable\n from collections.abc import MutableSet as MutableSet\n\nif PY2:\n\n def to_bytes(st, encoding=\"utf-8\"):\n # type: (AnyStr, Text) -> bytes\n if isinstance(st, unicode):\n return st.encode(encoding)\n elif isinstance(st, bytes):\n return st\n else:\n raise ValueError(\"Cannot convert %s to bytes\" % type(st))\n\n def to_unicode(st, encoding=\"utf-8\"):\n # type: (AnyStr, Text) -> Text\n if isinstance(st, unicode):\n return st\n elif isinstance(st, (str, bytes)):\n return unicode(st, encoding)\n else:\n raise ValueError(\"Cannot convert %s to a unicode string\" % type(st))\n\nelse:\n\n def to_bytes(st, encoding=\"utf-8\"):\n # type: (AnyStr, Text) -> bytes\n if isinstance(st, str):\n return st.encode(encoding)\n elif isinstance(st, bytes):\n return st\n else:\n raise ValueError(\"Cannot convert %s to bytes.\" % type(st))\n\n def to_unicode(st, encoding=\"utf-8\"):\n # type: (AnyStr, Text) -> Text\n if isinstance(st, str):\n return st\n elif isinstance(st, bytes):\n return str(st, encoding)\n else:\n raise ValueError(\"Cannot convert %s to a unicode string\" % type(st))\n\n\n_PY3_EXEC_FUNCTION = \"\"\"\ndef exec_function(ast, globals_map):\n locals_map = globals_map\n exec ast in globals_map, locals_map\n return locals_map\n\"\"\"\n\nif PY3:\n\n def exec_function(ast, globals_map):\n locals_map = globals_map\n exec (ast, globals_map, locals_map)\n return locals_map\n\nelse:\n\n def exec_function(ast, globals_map):\n raise AssertionError(\"Expected this function to be re-defined at runtime.\")\n\n # This will result in `exec_function` being re-defined at runtime.\n eval(compile(_PY3_EXEC_FUNCTION, \"<exec_function>\", \"exec\"))\n\n\nif PY3:\n from urllib import parse as urlparse\n from urllib.error import HTTPError as HTTPError\n from urllib.parse import unquote as unquote\n from urllib.request import FileHandler as FileHandler\n from urllib.request import HTTPBasicAuthHandler as HTTPBasicAuthHandler\n from urllib.request import 
HTTPDigestAuthHandler as HTTPDigestAuthHandler\n from urllib.request import HTTPPasswordMgrWithDefaultRealm as HTTPPasswordMgrWithDefaultRealm\n from urllib.request import HTTPSHandler as HTTPSHandler\n from urllib.request import ProxyHandler as ProxyHandler\n from urllib.request import Request as Request\n from urllib.request import build_opener as build_opener\nelse:\n from urllib import unquote as unquote\n\n import urlparse as urlparse\n from urllib2 import FileHandler as FileHandler\n from urllib2 import HTTPBasicAuthHandler as HTTPBasicAuthHandler\n from urllib2 import HTTPDigestAuthHandler as HTTPDigestAuthHandler\n from urllib2 import HTTPError as HTTPError\n from urllib2 import HTTPPasswordMgrWithDefaultRealm as HTTPPasswordMgrWithDefaultRealm\n from urllib2 import HTTPSHandler as HTTPSHandler\n from urllib2 import ProxyHandler as ProxyHandler\n from urllib2 import Request as Request\n from urllib2 import build_opener as build_opener\n\nif PY3:\n from queue import Queue as Queue\n\n # The `os.sched_getaffinity` function appears to be supported on Linux but not OSX.\n if not hasattr(os, \"sched_getaffinity\"):\n from os import cpu_count as cpu_count\n else:\n\n def cpu_count():\n # type: () -> Optional[int]\n # The set of CPUs accessible to the current process (pid 0).\n cpu_set = os.sched_getaffinity(0)\n return len(cpu_set)\n\nelse:\n from multiprocessing import cpu_count as cpu_count\n\n from Queue import Queue as Queue\n\nWINDOWS = os.name == \"nt\"\n\n\n# Universal newlines is the default in Python 3.\nMODE_READ_UNIVERSAL_NEWLINES = \"rU\" if PY2 else \"r\"\n\n\ndef _get_stdio_bytes_buffer(stdio):\n # type: (IO[str]) -> BinaryIO\n return cast(\"BinaryIO\", getattr(stdio, \"buffer\", stdio))\n\n\ndef get_stdout_bytes_buffer():\n # type: () -> BinaryIO\n return _get_stdio_bytes_buffer(sys.stdout)\n\n\ndef get_stderr_bytes_buffer():\n # type: () -> BinaryIO\n return _get_stdio_bytes_buffer(sys.stderr)\n\n\nif PY3:\n is_valid_python_identifier = str.isidentifier\nelse:\n\n def is_valid_python_identifier(text):\n # type: (str) -> bool\n\n # N.B.: Python 2.7 only supports ASCII characters so the check is easy and this is probably\n # why it's nt in the stdlib.\n # See: https://docs.python.org/2.7/reference/lexical_analysis.html#identifiers\n return re.match(r\"^[_a-zA-Z][_a-zA-Z0-9]*$\", text) is not None\n\n\nif PY2:\n\n def indent(\n text, # type: Text\n prefix, # type: Text\n predicate=None, # type: Optional[Callable[[Text], bool]]\n ):\n add_prefix = predicate if predicate else lambda line: bool(line.strip())\n return \"\".join(\n prefix + line if add_prefix(line) else line for line in text.splitlines(True)\n )\n\nelse:\n from textwrap import indent as indent\n", "path": "pex/compatibility.py"}, {"content": "# Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import absolute_import, print_function\n\nimport os\nfrom argparse import ArgumentParser, Namespace\n\nfrom pex import pex_bootstrapper\nfrom pex.commands.command import GlobalConfigurationError, Main\nfrom pex.pex import PEX\nfrom pex.pex_bootstrapper import InterpreterTest\nfrom pex.pex_info import PexInfo\nfrom pex.result import Result, catch\nfrom pex.tools import commands\nfrom pex.tools.command import PEXCommand\nfrom pex.tracer import TRACER\nfrom pex.typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Callable, Optional, Union\n\n CommandFunc = Callable[[PEX, Namespace], Result]\n\n\ndef 
simplify_pex_path(pex_path):\n # type: (str) -> str\n # Generate the most concise path possible that is still cut/paste-able to the command line.\n pex_path = os.path.abspath(pex_path)\n cwd = os.getcwd()\n if os.path.commonprefix((pex_path, cwd)) == cwd:\n pex_path = os.path.relpath(pex_path, cwd)\n # Handle users that do not have . as a PATH entry.\n if not os.path.dirname(pex_path) and os.curdir not in os.environ.get(\"PATH\", \"\").split(\n os.pathsep\n ):\n pex_path = os.path.join(os.curdir, pex_path)\n return pex_path\n\n\nclass PexTools(Main[PEXCommand]):\n def __init__(self, pex=None):\n # type: (Optional[PEX]) -> None\n\n pex_prog_path = simplify_pex_path(pex.path()) if pex else None\n\n # By default, let argparse derive prog from sys.argv[0].\n prog = None # type: Optional[str]\n if pex:\n prog = \"PEX_TOOLS=1 {pex_path}\".format(pex_path=pex_prog_path)\n\n description = \"Tools for working with {}.\".format(pex_prog_path if pex else \"PEX files\")\n subparsers_description = (\n \"{} can be operated on using any of the following subcommands.\".format(\n \"The PEX file {}\".format(pex_prog_path) if pex else \"A PEX file\"\n )\n )\n\n super(PexTools, self).__init__(\n description=description,\n subparsers_description=subparsers_description,\n command_types=commands.all_commands(),\n prog=prog,\n )\n self._pex = pex\n\n def add_arguments(self, parser):\n # type: (ArgumentParser) -> None\n if self._pex is None:\n parser.add_argument(\n \"pex\", nargs=1, metavar=\"PATH\", help=\"The path of the PEX file to operate on.\"\n )\n\n\ndef main(pex=None):\n # type: (Optional[PEX]) -> Union[int, str]\n\n pex_tools = PexTools(pex=pex)\n try:\n with pex_tools.parsed_command() as pex_command, TRACER.timed(\n \"Executing PEX_TOOLS {}\".format(pex_command.name())\n ):\n if pex is None:\n pex_file_path = pex_command.options.pex[0]\n pex_info = PexInfo.from_pex(pex_file_path)\n pex_info.update(PexInfo.from_env())\n interpreter = pex_bootstrapper.find_compatible_interpreter(\n interpreter_test=InterpreterTest(entry_point=pex_file_path, pex_info=pex_info)\n )\n pex = PEX(pex_file_path, interpreter=interpreter)\n\n result = catch(pex_command.run, pex)\n result.maybe_display()\n return result.exit_code\n except GlobalConfigurationError as e:\n return str(e)\n", "path": "pex/tools/main.py"}]}
| 3,668 | 821 |
gh_patches_debug_41064
|
rasdani/github-patches
|
git_diff
|
LibraryOfCongress__concordia-732
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include one-page text per item in BagIt export
**Is your feature request related to a problem? Please describe.**
In the BagIt export which feeds transcriptions into loc.gov, we want to provide one plain text file per item which has all the transcription text from all pages.
**Describe the solution you'd like**
The text file should contain all pages in the item and be named with the item ID. There should be a single attribution at the bottom of all the text.
</issue>
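A rough sketch of the requested item-level file, assuming assets arrive in page order and expose the same `latest_transcription` attribute used in the exporter code below; the function name and destination layout are illustrative:

```python
import os


def write_item_transcription(item_id, assets, dest_dir, attribution=None):
    # One plain text file per item, named with the item ID, containing the
    # transcription text of every page, with a single attribution at the end.
    item_text_path = os.path.join(dest_dir, "%s.txt" % item_id)
    with open(item_text_path, "w") as f:
        for asset in assets:  # assumed ordered by sequence (page order)
            f.write(asset.latest_transcription or "")
            f.write("\n\n")
        if attribution:
            f.write(attribution)
    return item_text_path
```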
<code>
[start of exporter/views.py]
1 import os
2 import re
3 import shutil
4 import tempfile
5 from logging import getLogger
6
7 import bagit
8 import boto3
9 from django.conf import settings
10 from django.contrib.admin.views.decorators import staff_member_required
11 from django.db.models import OuterRef, Subquery
12 from django.http import HttpResponse, HttpResponseRedirect
13 from django.utils.decorators import method_decorator
14 from django.views.generic import TemplateView
15 from tabular_export.core import export_to_csv_response, flatten_queryset
16
17 from concordia.models import Asset, Transcription, TranscriptionStatus
18
19 logger = getLogger(__name__)
20
21
22 def get_latest_transcription_data(asset_qs):
23 latest_trans_subquery = (
24 Transcription.objects.filter(asset=OuterRef("pk"))
25 .order_by("-pk")
26 .values("text")
27 )
28
29 assets = asset_qs.annotate(latest_transcription=Subquery(latest_trans_subquery[:1]))
30 return assets
31
32
33 def get_original_asset_id(download_url):
34 """
35 Extract the bit from the download url
36 that identifies this image uniquely on loc.gov
37 """
38 if download_url.startswith("http://tile.loc.gov/"):
39 pattern = r"/service:([A-Za-z0-9:\-]+)/"
40 asset_id = re.search(pattern, download_url)
41 if not asset_id:
42 logger.error(
43 "Couldn't find a matching asset ID in download URL %s", download_url
44 )
45 raise AssertionError
46 else:
47 matching_asset_id = asset_id.group(1)
48 logger.debug(
49 "Found asset ID %s in download URL %s", matching_asset_id, download_url
50 )
51 return matching_asset_id
52 else:
53 logger.warning("Download URL doesn't start with tile.loc.gov: %s", download_url)
54 return download_url
55
56
57 def do_bagit_export(assets, export_base_dir, export_filename_base):
58 """
59     Executes bagit.py to turn temp directory into LC-specific bagit structure.
60 Builds and exports bagit structure as zip.
61 Uploads zip to S3 if configured.
62 """
63
64 for asset in assets:
65 asset_id = get_original_asset_id(asset.download_url)
66 logger.debug("Exporting asset %s into %s", asset_id, export_base_dir)
67
68 asset_id = asset_id.replace(":", "/")
69 asset_path, asset_filename = os.path.split(asset_id)
70
71 dest_path = os.path.join(export_base_dir, asset_path)
72 os.makedirs(dest_path, exist_ok=True)
73
74 # Build transcription output text file
75 text_output_path = os.path.join(dest_path, "%s.txt" % asset_filename)
76 with open(text_output_path, "w") as f:
77 f.write(asset.latest_transcription or "")
78 if hasattr(settings, "ATTRIBUTION_TEXT"):
79 f.write("\n\n")
80 f.write(settings.ATTRIBUTION_TEXT)
81
82 # Turn Structure into bagit format
83 bagit.make_bag(
84 export_base_dir,
85 {
86 "Content-Access": "web",
87 "Content-Custodian": "dcms",
88 "Content-Process": "crowdsourced",
89 "Content-Type": "textual",
90 "LC-Bag-Id": export_filename_base,
91 "LC-Items": "%d transcriptions" % len(assets),
92 "LC-Project": "gdccrowd",
93 "License-Information": "Public domain",
94 },
95 )
96
97 # Build .zip file of bagit formatted Campaign Folder
98 archive_name = export_base_dir
99 shutil.make_archive(archive_name, "zip", export_base_dir)
100
101 export_filename = "%s.zip" % export_filename_base
102
103 # Upload zip to S3 bucket
104 s3_bucket = getattr(settings, "EXPORT_S3_BUCKET_NAME", None)
105
106 if s3_bucket:
107 logger.debug("Uploading exported bag to S3 bucket %s", s3_bucket)
108 s3 = boto3.resource("s3")
109 s3.Bucket(s3_bucket).upload_file(
110 "%s.zip" % export_base_dir, "%s" % export_filename
111 )
112
113 return HttpResponseRedirect(
114 "https://%s.s3.amazonaws.com/%s" % (s3_bucket, export_filename)
115 )
116 else:
117 # Download zip from local storage
118 with open("%s.zip" % export_base_dir, "rb") as zip_file:
119 response = HttpResponse(zip_file, content_type="application/zip")
120 response["Content-Disposition"] = "attachment; filename=%s" % export_filename
121 return response
122
123
124 class ExportCampaignToCSV(TemplateView):
125 """
126 Exports the most recent transcription for each asset in a campaign
127 """
128
129 @method_decorator(staff_member_required)
130 def get(self, request, *args, **kwargs):
131 asset_qs = Asset.objects.filter(
132 item__project__campaign__slug=self.kwargs["campaign_slug"]
133 )
134 assets = get_latest_transcription_data(asset_qs)
135
136 headers, data = flatten_queryset(
137 assets,
138 field_names=[
139 "item__project__campaign__title",
140 "item__project__title",
141 "item__title",
142 "item__item_id",
143 "title",
144 "transcription_status",
145 "download_url",
146 "latest_transcription",
147 ],
148 extra_verbose_names={
149 "item__project__campaign__title": "Campaign",
150 "item__project__title": "Project",
151 "item__title": "Item",
152 "item__item_id": "ItemId",
153 "item_id": "ItemId",
154 "title": "Asset",
155 "transcription_status": "AssetStatus",
156 "download_url": "DownloadUrl",
157 "latest_transcription": "Transcription",
158 },
159 )
160
161 return export_to_csv_response(
162 "%s.csv" % self.kwargs["campaign_slug"], headers, data
163 )
164
165
166 class ExportItemToBagIt(TemplateView):
167 @method_decorator(staff_member_required)
168 def get(self, request, *args, **kwargs):
169 campaign_slug = self.kwargs["campaign_slug"]
170 project_slug = self.kwargs["project_slug"]
171 item_id = self.kwargs["item_id"]
172
173 asset_qs = Asset.objects.filter(
174 item__project__campaign__slug=campaign_slug,
175 item__project__slug=project_slug,
176 item__item_id=item_id,
177 transcription_status=TranscriptionStatus.COMPLETED,
178 )
179
180 assets = get_latest_transcription_data(asset_qs)
181
182 export_filename_base = "%s-%s-%s" % (campaign_slug, project_slug, item_id)
183
184 with tempfile.TemporaryDirectory(
185 prefix=export_filename_base
186 ) as export_base_dir:
187 return do_bagit_export(assets, export_base_dir, export_filename_base)
188
189
190 class ExportProjectToBagIt(TemplateView):
191 @method_decorator(staff_member_required)
192 def get(self, request, *args, **kwargs):
193 campaign_slug = self.kwargs["campaign_slug"]
194 project_slug = self.kwargs["project_slug"]
195 asset_qs = Asset.objects.filter(
196 item__project__campaign__slug=campaign_slug,
197 item__project__slug=project_slug,
198 transcription_status=TranscriptionStatus.COMPLETED,
199 )
200
201 assets = get_latest_transcription_data(asset_qs)
202
203 export_filename_base = "%s-%s" % (campaign_slug, project_slug)
204
205 with tempfile.TemporaryDirectory(
206 prefix=export_filename_base
207 ) as export_base_dir:
208 return do_bagit_export(assets, export_base_dir, export_filename_base)
209
210
211 class ExportCampaignToBagit(TemplateView):
212 @method_decorator(staff_member_required)
213 def get(self, request, *args, **kwargs):
214 campaign_slug = self.kwargs["campaign_slug"]
215 asset_qs = Asset.objects.filter(
216 item__project__campaign__slug=campaign_slug,
217 transcription_status=TranscriptionStatus.COMPLETED,
218 )
219
220 assets = get_latest_transcription_data(asset_qs)
221
222 export_filename_base = "%s" % (campaign_slug,)
223
224 with tempfile.TemporaryDirectory(
225 prefix=export_filename_base
226 ) as export_base_dir:
227 return do_bagit_export(assets, export_base_dir, export_filename_base)
228
[end of exporter/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/exporter/views.py b/exporter/views.py
--- a/exporter/views.py
+++ b/exporter/views.py
@@ -61,23 +61,40 @@
Uploads zip to S3 if configured.
"""
+ # These assets should already be in the correct order - by item, seequence
for asset in assets:
asset_id = get_original_asset_id(asset.download_url)
logger.debug("Exporting asset %s into %s", asset_id, export_base_dir)
asset_id = asset_id.replace(":", "/")
asset_path, asset_filename = os.path.split(asset_id)
+ item_path, item_filename = os.path.split(asset_path)
- dest_path = os.path.join(export_base_dir, asset_path)
- os.makedirs(dest_path, exist_ok=True)
+ asset_dest_path = os.path.join(export_base_dir, asset_path)
+ os.makedirs(asset_dest_path, exist_ok=True)
- # Build transcription output text file
- text_output_path = os.path.join(dest_path, "%s.txt" % asset_filename)
- with open(text_output_path, "w") as f:
+ # Build a transcription output text file for each asset
+ asset_text_output_path = os.path.join(
+ asset_dest_path, "%s.txt" % asset_filename
+ )
+ # Write the asset level transcription file
+ with open(asset_text_output_path, "w") as f:
+ f.write(asset.latest_transcription or "")
+
+ # Append this asset transcription to the item transcription
+ item_text_output_path = os.path.join(asset_dest_path, "%s.txt" % item_filename)
+ with open(item_text_output_path, "a") as f:
f.write(asset.latest_transcription or "")
- if hasattr(settings, "ATTRIBUTION_TEXT"):
- f.write("\n\n")
- f.write(settings.ATTRIBUTION_TEXT)
+ f.write("\n\n")
+
+ # Add attributions to the end of all text files found under asset_dest_path
+ if hasattr(settings, "ATTRIBUTION_TEXT"):
+ for dirpath, dirnames, filenames in os.walk(export_base_dir, topdown=False):
+ for each_text_file in (i for i in filenames if i.endswith(".txt")):
+ this_text_file = os.path.join(dirpath, each_text_file)
+ with open(this_text_file, "a") as f:
+ f.write("\n\n")
+ f.write(settings.ATTRIBUTION_TEXT)
# Turn Structure into bagit format
bagit.make_bag(
@@ -175,7 +192,7 @@
item__project__slug=project_slug,
item__item_id=item_id,
transcription_status=TranscriptionStatus.COMPLETED,
- )
+ ).order_by("sequence")
assets = get_latest_transcription_data(asset_qs)
@@ -196,7 +213,7 @@
item__project__campaign__slug=campaign_slug,
item__project__slug=project_slug,
transcription_status=TranscriptionStatus.COMPLETED,
- )
+ ).order_by("item", "sequence")
assets = get_latest_transcription_data(asset_qs)
@@ -215,7 +232,7 @@
asset_qs = Asset.objects.filter(
item__project__campaign__slug=campaign_slug,
transcription_status=TranscriptionStatus.COMPLETED,
- )
+ ).order_by("item__project", "item", "sequence")
assets = get_latest_transcription_data(asset_qs)
|
{"golden_diff": "diff --git a/exporter/views.py b/exporter/views.py\n--- a/exporter/views.py\n+++ b/exporter/views.py\n@@ -61,23 +61,40 @@\n Uploads zip to S3 if configured.\n \"\"\"\n \n+ # These assets should already be in the correct order - by item, seequence\n for asset in assets:\n asset_id = get_original_asset_id(asset.download_url)\n logger.debug(\"Exporting asset %s into %s\", asset_id, export_base_dir)\n \n asset_id = asset_id.replace(\":\", \"/\")\n asset_path, asset_filename = os.path.split(asset_id)\n+ item_path, item_filename = os.path.split(asset_path)\n \n- dest_path = os.path.join(export_base_dir, asset_path)\n- os.makedirs(dest_path, exist_ok=True)\n+ asset_dest_path = os.path.join(export_base_dir, asset_path)\n+ os.makedirs(asset_dest_path, exist_ok=True)\n \n- # Build transcription output text file\n- text_output_path = os.path.join(dest_path, \"%s.txt\" % asset_filename)\n- with open(text_output_path, \"w\") as f:\n+ # Build a transcription output text file for each asset\n+ asset_text_output_path = os.path.join(\n+ asset_dest_path, \"%s.txt\" % asset_filename\n+ )\n+ # Write the asset level transcription file\n+ with open(asset_text_output_path, \"w\") as f:\n+ f.write(asset.latest_transcription or \"\")\n+\n+ # Append this asset transcription to the item transcription\n+ item_text_output_path = os.path.join(asset_dest_path, \"%s.txt\" % item_filename)\n+ with open(item_text_output_path, \"a\") as f:\n f.write(asset.latest_transcription or \"\")\n- if hasattr(settings, \"ATTRIBUTION_TEXT\"):\n- f.write(\"\\n\\n\")\n- f.write(settings.ATTRIBUTION_TEXT)\n+ f.write(\"\\n\\n\")\n+\n+ # Add attributions to the end of all text files found under asset_dest_path\n+ if hasattr(settings, \"ATTRIBUTION_TEXT\"):\n+ for dirpath, dirnames, filenames in os.walk(export_base_dir, topdown=False):\n+ for each_text_file in (i for i in filenames if i.endswith(\".txt\")):\n+ this_text_file = os.path.join(dirpath, each_text_file)\n+ with open(this_text_file, \"a\") as f:\n+ f.write(\"\\n\\n\")\n+ f.write(settings.ATTRIBUTION_TEXT)\n \n # Turn Structure into bagit format\n bagit.make_bag(\n@@ -175,7 +192,7 @@\n item__project__slug=project_slug,\n item__item_id=item_id,\n transcription_status=TranscriptionStatus.COMPLETED,\n- )\n+ ).order_by(\"sequence\")\n \n assets = get_latest_transcription_data(asset_qs)\n \n@@ -196,7 +213,7 @@\n item__project__campaign__slug=campaign_slug,\n item__project__slug=project_slug,\n transcription_status=TranscriptionStatus.COMPLETED,\n- )\n+ ).order_by(\"item\", \"sequence\")\n \n assets = get_latest_transcription_data(asset_qs)\n \n@@ -215,7 +232,7 @@\n asset_qs = Asset.objects.filter(\n item__project__campaign__slug=campaign_slug,\n transcription_status=TranscriptionStatus.COMPLETED,\n- )\n+ ).order_by(\"item__project\", \"item\", \"sequence\")\n \n assets = get_latest_transcription_data(asset_qs)\n", "issue": "Include one-page text per item in BagIt export\n**Is your feature request related to a problem? Please describe.**\r\nIn the BagIt export which feeds transcriptions into loc.gov, we want to provide one plain text file per item which has all the transcription text from all pages.\r\n\r\n**Describe the solution you'd like**\r\nThe text file should contain all pages in the item and be named with the item ID. 
There should be a single attribution at the bottom of all the text.\r\n\r\n\n", "before_files": [{"content": "import os\nimport re\nimport shutil\nimport tempfile\nfrom logging import getLogger\n\nimport bagit\nimport boto3\nfrom django.conf import settings\nfrom django.contrib.admin.views.decorators import staff_member_required\nfrom django.db.models import OuterRef, Subquery\nfrom django.http import HttpResponse, HttpResponseRedirect\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import TemplateView\nfrom tabular_export.core import export_to_csv_response, flatten_queryset\n\nfrom concordia.models import Asset, Transcription, TranscriptionStatus\n\nlogger = getLogger(__name__)\n\n\ndef get_latest_transcription_data(asset_qs):\n latest_trans_subquery = (\n Transcription.objects.filter(asset=OuterRef(\"pk\"))\n .order_by(\"-pk\")\n .values(\"text\")\n )\n\n assets = asset_qs.annotate(latest_transcription=Subquery(latest_trans_subquery[:1]))\n return assets\n\n\ndef get_original_asset_id(download_url):\n \"\"\"\n Extract the bit from the download url\n that identifies this image uniquely on loc.gov\n \"\"\"\n if download_url.startswith(\"http://tile.loc.gov/\"):\n pattern = r\"/service:([A-Za-z0-9:\\-]+)/\"\n asset_id = re.search(pattern, download_url)\n if not asset_id:\n logger.error(\n \"Couldn't find a matching asset ID in download URL %s\", download_url\n )\n raise AssertionError\n else:\n matching_asset_id = asset_id.group(1)\n logger.debug(\n \"Found asset ID %s in download URL %s\", matching_asset_id, download_url\n )\n return matching_asset_id\n else:\n logger.warning(\"Download URL doesn't start with tile.loc.gov: %s\", download_url)\n return download_url\n\n\ndef do_bagit_export(assets, export_base_dir, export_filename_base):\n \"\"\"\n Executes bagit.py to turn temp directory into LC-specific bagit strucutre.\n Builds and exports bagit structure as zip.\n Uploads zip to S3 if configured.\n \"\"\"\n\n for asset in assets:\n asset_id = get_original_asset_id(asset.download_url)\n logger.debug(\"Exporting asset %s into %s\", asset_id, export_base_dir)\n\n asset_id = asset_id.replace(\":\", \"/\")\n asset_path, asset_filename = os.path.split(asset_id)\n\n dest_path = os.path.join(export_base_dir, asset_path)\n os.makedirs(dest_path, exist_ok=True)\n\n # Build transcription output text file\n text_output_path = os.path.join(dest_path, \"%s.txt\" % asset_filename)\n with open(text_output_path, \"w\") as f:\n f.write(asset.latest_transcription or \"\")\n if hasattr(settings, \"ATTRIBUTION_TEXT\"):\n f.write(\"\\n\\n\")\n f.write(settings.ATTRIBUTION_TEXT)\n\n # Turn Structure into bagit format\n bagit.make_bag(\n export_base_dir,\n {\n \"Content-Access\": \"web\",\n \"Content-Custodian\": \"dcms\",\n \"Content-Process\": \"crowdsourced\",\n \"Content-Type\": \"textual\",\n \"LC-Bag-Id\": export_filename_base,\n \"LC-Items\": \"%d transcriptions\" % len(assets),\n \"LC-Project\": \"gdccrowd\",\n \"License-Information\": \"Public domain\",\n },\n )\n\n # Build .zip file of bagit formatted Campaign Folder\n archive_name = export_base_dir\n shutil.make_archive(archive_name, \"zip\", export_base_dir)\n\n export_filename = \"%s.zip\" % export_filename_base\n\n # Upload zip to S3 bucket\n s3_bucket = getattr(settings, \"EXPORT_S3_BUCKET_NAME\", None)\n\n if s3_bucket:\n logger.debug(\"Uploading exported bag to S3 bucket %s\", s3_bucket)\n s3 = boto3.resource(\"s3\")\n s3.Bucket(s3_bucket).upload_file(\n \"%s.zip\" % export_base_dir, \"%s\" % export_filename\n 
)\n\n return HttpResponseRedirect(\n \"https://%s.s3.amazonaws.com/%s\" % (s3_bucket, export_filename)\n )\n else:\n # Download zip from local storage\n with open(\"%s.zip\" % export_base_dir, \"rb\") as zip_file:\n response = HttpResponse(zip_file, content_type=\"application/zip\")\n response[\"Content-Disposition\"] = \"attachment; filename=%s\" % export_filename\n return response\n\n\nclass ExportCampaignToCSV(TemplateView):\n \"\"\"\n Exports the most recent transcription for each asset in a campaign\n \"\"\"\n\n @method_decorator(staff_member_required)\n def get(self, request, *args, **kwargs):\n asset_qs = Asset.objects.filter(\n item__project__campaign__slug=self.kwargs[\"campaign_slug\"]\n )\n assets = get_latest_transcription_data(asset_qs)\n\n headers, data = flatten_queryset(\n assets,\n field_names=[\n \"item__project__campaign__title\",\n \"item__project__title\",\n \"item__title\",\n \"item__item_id\",\n \"title\",\n \"transcription_status\",\n \"download_url\",\n \"latest_transcription\",\n ],\n extra_verbose_names={\n \"item__project__campaign__title\": \"Campaign\",\n \"item__project__title\": \"Project\",\n \"item__title\": \"Item\",\n \"item__item_id\": \"ItemId\",\n \"item_id\": \"ItemId\",\n \"title\": \"Asset\",\n \"transcription_status\": \"AssetStatus\",\n \"download_url\": \"DownloadUrl\",\n \"latest_transcription\": \"Transcription\",\n },\n )\n\n return export_to_csv_response(\n \"%s.csv\" % self.kwargs[\"campaign_slug\"], headers, data\n )\n\n\nclass ExportItemToBagIt(TemplateView):\n @method_decorator(staff_member_required)\n def get(self, request, *args, **kwargs):\n campaign_slug = self.kwargs[\"campaign_slug\"]\n project_slug = self.kwargs[\"project_slug\"]\n item_id = self.kwargs[\"item_id\"]\n\n asset_qs = Asset.objects.filter(\n item__project__campaign__slug=campaign_slug,\n item__project__slug=project_slug,\n item__item_id=item_id,\n transcription_status=TranscriptionStatus.COMPLETED,\n )\n\n assets = get_latest_transcription_data(asset_qs)\n\n export_filename_base = \"%s-%s-%s\" % (campaign_slug, project_slug, item_id)\n\n with tempfile.TemporaryDirectory(\n prefix=export_filename_base\n ) as export_base_dir:\n return do_bagit_export(assets, export_base_dir, export_filename_base)\n\n\nclass ExportProjectToBagIt(TemplateView):\n @method_decorator(staff_member_required)\n def get(self, request, *args, **kwargs):\n campaign_slug = self.kwargs[\"campaign_slug\"]\n project_slug = self.kwargs[\"project_slug\"]\n asset_qs = Asset.objects.filter(\n item__project__campaign__slug=campaign_slug,\n item__project__slug=project_slug,\n transcription_status=TranscriptionStatus.COMPLETED,\n )\n\n assets = get_latest_transcription_data(asset_qs)\n\n export_filename_base = \"%s-%s\" % (campaign_slug, project_slug)\n\n with tempfile.TemporaryDirectory(\n prefix=export_filename_base\n ) as export_base_dir:\n return do_bagit_export(assets, export_base_dir, export_filename_base)\n\n\nclass ExportCampaignToBagit(TemplateView):\n @method_decorator(staff_member_required)\n def get(self, request, *args, **kwargs):\n campaign_slug = self.kwargs[\"campaign_slug\"]\n asset_qs = Asset.objects.filter(\n item__project__campaign__slug=campaign_slug,\n transcription_status=TranscriptionStatus.COMPLETED,\n )\n\n assets = get_latest_transcription_data(asset_qs)\n\n export_filename_base = \"%s\" % (campaign_slug,)\n\n with tempfile.TemporaryDirectory(\n prefix=export_filename_base\n ) as export_base_dir:\n return do_bagit_export(assets, export_base_dir, export_filename_base)\n", "path": 
"exporter/views.py"}]}
| 2,938 | 779 |
gh_patches_debug_27077
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-444
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
S3 Sync with "--delete" option is deleting files that have folders with the same name!
If you have a folder in your S3 bucket and a file name that contains the same starting characters as the folder name, the target files will intermittently be deleted upon each run.
For example, create a folder in your S3 bucket:
s3://my-bucket/test/
Now create some txt files in the bucket with 'test' as the first 4 characters of the file name, i.e.:
s3://my-bucket/test-123.txt
s3://my-bucket/test-321.txt
s3://my-bucket/test.txt
Run `aws s3 sync --delete s3://my-bucket /my/local/folder`. Do this 3 or 4 times. You will see each of the files 'test-123.txt', 'test-321.txt', and 'test.txt' intermittently get deleted and re-downloaded with each run. Each run produces different results.
Having files unexpectedly deleted is a big concern, as we use 's3 sync --delete' for daily backups.
Please see AWS post for originating report https://forums.aws.amazon.com/message.jspa?messageID=497335#497335
</issue>
<code>
[start of awscli/customizations/s3/filegenerator.py]
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import os
14
15 from dateutil.parser import parse
16 from dateutil.tz import tzlocal
17
18 from awscli.customizations.s3.fileinfo import FileInfo
19 from awscli.customizations.s3.utils import find_bucket_key, get_file_stat
20
21
22 class FileGenerator(object):
23 """
24     This is a class that creates a generator to yield files based on information
25 returned from the ``FileFormat`` class. It is universal in the sense that
26 it will handle s3 files, local files, local directories, and s3 objects
27 under the same common prefix. The generator yields corresponding
28 ``FileInfo`` objects to send to a ``Comparator`` or ``S3Handler``.
29 """
30 def __init__(self, service, endpoint, operation_name, parameters):
31 self._service = service
32 self._endpoint = endpoint
33 self.operation_name = operation_name
34
35 def call(self, files):
36 """
37 This is the generalized function to yield the ``FileInfo`` objects.
38 ``dir_op`` and ``use_src_name`` flags affect which files are used and
39 ensure the proper destination paths and compare keys are formed.
40 """
41 src = files['src']
42 dest = files['dest']
43 src_type = src['type']
44 dest_type = dest['type']
45 function_table = {'s3': self.list_objects, 'local': self.list_files}
46 sep_table = {'s3': '/', 'local': os.sep}
47 source = src['path']
48 file_list = function_table[src_type](source, files['dir_op'])
49 for src_path, size, last_update in file_list:
50 if files['dir_op']:
51 rel_path = src_path[len(src['path']):]
52 else:
53 rel_path = src_path.split(sep_table[src_type])[-1]
54 compare_key = rel_path.replace(sep_table[src_type], '/')
55 if files['use_src_name']:
56 dest_path = dest['path']
57 dest_path += rel_path.replace(sep_table[src_type],
58 sep_table[dest_type])
59 else:
60 dest_path = dest['path']
61 yield FileInfo(src=src_path, dest=dest_path,
62 compare_key=compare_key, size=size,
63 last_update=last_update, src_type=src_type,
64 service=self._service, endpoint=self._endpoint,
65 dest_type=dest_type,
66 operation_name=self.operation_name)
67
68 def list_files(self, path, dir_op):
69 """
70 This function yields the appropriate local file or local files
71 under a directory depending on if the operation is on a directory.
72 For directories a depth first search is implemented in order to
73 follow the same sorted pattern as a s3 list objects operation
74 outputs. It yields the file's source path, size, and last
75 update
76 """
77 join, isdir, isfile = os.path.join, os.path.isdir, os.path.isfile
78 error, listdir = os.error, os.listdir
79 if not dir_op:
80 size, last_update = get_file_stat(path)
81 yield path, size, last_update
82 else:
83 names = sorted(listdir(path))
84 for name in names:
85 file_path = join(path, name)
86 if isdir(file_path):
87 for x in self.list_files(file_path, dir_op):
88 yield x
89 else:
90 size, last_update = get_file_stat(file_path)
91 yield file_path, size, last_update
92
93 def list_objects(self, s3_path, dir_op):
94 """
95 This function yields the appropriate object or objects under a
96 common prefix depending if the operation is on objects under a
97 common prefix. It yields the file's source path, size, and last
98 update.
99 """
100 operation = self._service.get_operation('ListObjects')
101 bucket, prefix = find_bucket_key(s3_path)
102 iterator = operation.paginate(self._endpoint, bucket=bucket,
103 prefix=prefix)
104 for html_response, response_data in iterator:
105 contents = response_data['Contents']
106 for content in contents:
107 src_path = bucket + '/' + content['Key']
108 size = content['Size']
109 last_update = parse(content['LastModified'])
110 last_update = last_update.astimezone(tzlocal())
111 if size == 0 and src_path.endswith('/'):
112 if self.operation_name == 'delete':
113 # This is to filter out manually created folders
114 # in S3. They have a size zero and would be
115 # undesirably downloaded. Local directories
116 # are automatically created when they do not
117 # exist locally. But user should be able to
118 # delete them.
119 yield src_path, size, last_update
120 elif not dir_op and s3_path != src_path:
121 pass
122 else:
123 yield src_path, size, last_update
124
[end of awscli/customizations/s3/filegenerator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awscli/customizations/s3/filegenerator.py b/awscli/customizations/s3/filegenerator.py
--- a/awscli/customizations/s3/filegenerator.py
+++ b/awscli/customizations/s3/filegenerator.py
@@ -80,10 +80,28 @@
size, last_update = get_file_stat(path)
yield path, size, last_update
else:
- names = sorted(listdir(path))
+ # We need to list files in byte order based on the full
+ # expanded path of the key: 'test/1/2/3.txt' However, listdir()
+ # will only give us contents a single directory at a time, so we'll
+ # get 'test'. At the same time we don't want to load the entire
+ # list of files into memory. This is handled by first going
+ # through the current directory contents and adding the directory
+ # separator to any directories. We can then sort the contents,
+ # and ensure byte order.
+ names = listdir(path)
+ for i, name in enumerate(names):
+ file_path = join(path, name)
+ if isdir(file_path):
+ names[i] = name + os.path.sep
+ names.sort()
for name in names:
file_path = join(path, name)
if isdir(file_path):
+ # Anything in a directory will have a prefix of this
+ # current directory and will come before the
+ # remaining contents in this directory. This means we need
+ # to recurse into this sub directory before yielding the
+ # rest of this directory's contents.
for x in self.list_files(file_path, dir_op):
yield x
else:
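With the trailing separator appended before sorting, a local listing of the layout from the issue sorts as `['test-123.txt', 'test.txt', 'test/']`, which matches the byte order S3 uses for the corresponding keys, so the comparator sees both listings in the same order.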
|
{"golden_diff": "diff --git a/awscli/customizations/s3/filegenerator.py b/awscli/customizations/s3/filegenerator.py\n--- a/awscli/customizations/s3/filegenerator.py\n+++ b/awscli/customizations/s3/filegenerator.py\n@@ -80,10 +80,28 @@\n size, last_update = get_file_stat(path)\n yield path, size, last_update\n else:\n- names = sorted(listdir(path))\n+ # We need to list files in byte order based on the full\n+ # expanded path of the key: 'test/1/2/3.txt' However, listdir()\n+ # will only give us contents a single directory at a time, so we'll\n+ # get 'test'. At the same time we don't want to load the entire\n+ # list of files into memory. This is handled by first going\n+ # through the current directory contents and adding the directory\n+ # separator to any directories. We can then sort the contents,\n+ # and ensure byte order.\n+ names = listdir(path)\n+ for i, name in enumerate(names):\n+ file_path = join(path, name)\n+ if isdir(file_path):\n+ names[i] = name + os.path.sep\n+ names.sort()\n for name in names:\n file_path = join(path, name)\n if isdir(file_path):\n+ # Anything in a directory will have a prefix of this\n+ # current directory and will come before the\n+ # remaining contents in this directory. This means we need\n+ # to recurse into this sub directory before yielding the\n+ # rest of this directory's contents.\n for x in self.list_files(file_path, dir_op):\n yield x\n else:\n", "issue": "S3 Sync with \"--delete\" option is deleting files that have folders with the same name!\nIf you have a folder in your S3 bucket and a file name that contains the same starting characters as the folder name, the target files will be intermittently be deleted upon each run.\n\nFor example, create a folder in your S3 bucket:\n\ns3://my-bucket/test/\n\nNow create some txt files in the bucket with 'test' as the first 4 characters of the file name.. ie:\n\ns3://my-bucket/test-123.txt\ns3://my-bucket/test-321.txt\ns3://my-bucket/test.txt\n\nRun `aws s3 sync --delete s3://my-bucket /my/local/folder`.. Do this 3 or 4 times. You will see each file 'test-123.txt', 'test-321.txt', 'test.txt' will get intermittently be deleted and downloaded with each run. Each run produces different results.\n\nHaving files unexpectedly being deleting is a big concern as we use 's3 sync --delete' for daily backup's.\n\nPlease see AWS post for originating report https://forums.aws.amazon.com/message.jspa?messageID=497335#497335\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport os\n\nfrom dateutil.parser import parse\nfrom dateutil.tz import tzlocal\n\nfrom awscli.customizations.s3.fileinfo import FileInfo\nfrom awscli.customizations.s3.utils import find_bucket_key, get_file_stat\n\n\nclass FileGenerator(object):\n \"\"\"\n This is a class the creates a generator to yield files based on information\n returned from the ``FileFormat`` class. 
It is universal in the sense that\n it will handle s3 files, local files, local directories, and s3 objects\n under the same common prefix. The generator yields corresponding\n ``FileInfo`` objects to send to a ``Comparator`` or ``S3Handler``.\n \"\"\"\n def __init__(self, service, endpoint, operation_name, parameters):\n self._service = service\n self._endpoint = endpoint\n self.operation_name = operation_name\n\n def call(self, files):\n \"\"\"\n This is the generalized function to yield the ``FileInfo`` objects.\n ``dir_op`` and ``use_src_name`` flags affect which files are used and\n ensure the proper destination paths and compare keys are formed.\n \"\"\"\n src = files['src']\n dest = files['dest']\n src_type = src['type']\n dest_type = dest['type']\n function_table = {'s3': self.list_objects, 'local': self.list_files}\n sep_table = {'s3': '/', 'local': os.sep}\n source = src['path']\n file_list = function_table[src_type](source, files['dir_op'])\n for src_path, size, last_update in file_list:\n if files['dir_op']:\n rel_path = src_path[len(src['path']):]\n else:\n rel_path = src_path.split(sep_table[src_type])[-1]\n compare_key = rel_path.replace(sep_table[src_type], '/')\n if files['use_src_name']:\n dest_path = dest['path']\n dest_path += rel_path.replace(sep_table[src_type],\n sep_table[dest_type])\n else:\n dest_path = dest['path']\n yield FileInfo(src=src_path, dest=dest_path,\n compare_key=compare_key, size=size,\n last_update=last_update, src_type=src_type,\n service=self._service, endpoint=self._endpoint,\n dest_type=dest_type,\n operation_name=self.operation_name)\n\n def list_files(self, path, dir_op):\n \"\"\"\n This function yields the appropriate local file or local files\n under a directory depending on if the operation is on a directory.\n For directories a depth first search is implemented in order to\n follow the same sorted pattern as a s3 list objects operation\n outputs. It yields the file's source path, size, and last\n update\n \"\"\"\n join, isdir, isfile = os.path.join, os.path.isdir, os.path.isfile\n error, listdir = os.error, os.listdir\n if not dir_op:\n size, last_update = get_file_stat(path)\n yield path, size, last_update\n else:\n names = sorted(listdir(path))\n for name in names:\n file_path = join(path, name)\n if isdir(file_path):\n for x in self.list_files(file_path, dir_op):\n yield x\n else:\n size, last_update = get_file_stat(file_path)\n yield file_path, size, last_update\n\n def list_objects(self, s3_path, dir_op):\n \"\"\"\n This function yields the appropriate object or objects under a\n common prefix depending if the operation is on objects under a\n common prefix. It yields the file's source path, size, and last\n update.\n \"\"\"\n operation = self._service.get_operation('ListObjects')\n bucket, prefix = find_bucket_key(s3_path)\n iterator = operation.paginate(self._endpoint, bucket=bucket,\n prefix=prefix)\n for html_response, response_data in iterator:\n contents = response_data['Contents']\n for content in contents:\n src_path = bucket + '/' + content['Key']\n size = content['Size']\n last_update = parse(content['LastModified'])\n last_update = last_update.astimezone(tzlocal())\n if size == 0 and src_path.endswith('/'):\n if self.operation_name == 'delete':\n # This is to filter out manually created folders\n # in S3. They have a size zero and would be\n # undesirably downloaded. Local directories\n # are automatically created when they do not\n # exist locally. 
But user should be able to\n # delete them.\n yield src_path, size, last_update\n elif not dir_op and s3_path != src_path:\n pass\n else:\n yield src_path, size, last_update\n", "path": "awscli/customizations/s3/filegenerator.py"}]}
| 2,235 | 387 |
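
The ordering problem fixed in the filegenerator diff above is easy to see in isolation. A minimal sketch (plain Python, with the directory flagged by hand rather than detected via `os.path.isdir`) of why appending the separator before sorting restores S3's byte order:

```python
# S3 lists full keys in byte order, so 'test-123.txt' ('-' is 0x2D) sorts before
# 'test/' ('/' is 0x2F). A plain sort of local names puts the bare directory
# name 'test' first instead; appending the separator to directories fixes that.
names = ["test.txt", "test", "test-123.txt", "test-321.txt"]
directories = {"test"}  # assumed directory; normally detected with os.path.isdir

print(sorted(names))
# ['test', 'test-123.txt', 'test-321.txt', 'test.txt']  -> diverges from S3

as_keys = sorted(n + "/" if n in directories else n for n in names)
print(as_keys)
# ['test-123.txt', 'test-321.txt', 'test.txt', 'test/']  -> matches S3's listing
```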
gh_patches_debug_14233
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-4657
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Touchtracer - delay when trying to draw
I don't even see it unless I look closely, but if I'm right, the thing mentioned in [this question](http://stackoverflow.com/q/37933920/5994041) is in fact a delay I see here. If not, there's a delay anyway. It's like there is an invisible (because the background isn't a single color) brush covering the touch - you can see such a thing even in Paint, as it shows you where the brush actually is.
Touchtracer seems to catch every touch, yet something slips and is only displayed after there are enough items in a list or something? E.g. it won't draw until there are at least 3-4+ touches in, let's say, `touches = []` so that a line could be drawn? Even so, you can construct a line with two points, but I see that more than two touches are not present when I drag the mouse until another part is drawn (see the red circle). This thing is present even with master and from each side (it's not an issue with misplaced touch coords).

</issue>
<code>
[start of examples/demo/touchtracer/main.py]
1 '''
2 Touch Tracer Line Drawing Demonstration
3 =======================================
4
5 This demonstrates tracking each touch registered to a device. You should
6 see a basic background image. When you press and hold the mouse, you
7 should see cross-hairs with the coordinates written next to them. As
8 you drag, it leaves a trail. Additional information, like pressure,
9 will be shown if they are in your device's touch.profile.
10
11 This program specifies an icon, the file icon.png, in its App subclass.
12 It also uses the particle.png file as the source for drawing the trails which
13 are white on transparent. The file touchtracer.kv describes the application.
14
15 The file android.txt is used to package the application for use with the
16 Kivy Launcher Android application. For Android devices, you can
17 copy/paste this directory into /sdcard/kivy/touchtracer on your Android device.
18
19 '''
20 __version__ = '1.0'
21
22 import kivy
23 kivy.require('1.0.6')
24
25 from kivy.app import App
26 from kivy.uix.floatlayout import FloatLayout
27 from kivy.uix.label import Label
28 from kivy.graphics import Color, Rectangle, Point, GraphicException
29 from random import random
30 from math import sqrt
31
32
33 def calculate_points(x1, y1, x2, y2, steps=5):
34 dx = x2 - x1
35 dy = y2 - y1
36 dist = sqrt(dx * dx + dy * dy)
37 if dist < steps:
38 return None
39 o = []
40 m = dist / steps
41 for i in range(1, int(m)):
42 mi = i / m
43 lastx = x1 + dx * mi
44 lasty = y1 + dy * mi
45 o.extend([lastx, lasty])
46 return o
47
48
49 class Touchtracer(FloatLayout):
50
51 def on_touch_down(self, touch):
52 win = self.get_parent_window()
53 ud = touch.ud
54 ud['group'] = g = str(touch.uid)
55 pointsize = 5
56 if 'pressure' in touch.profile:
57 ud['pressure'] = touch.pressure
58 pointsize = (touch.pressure * 100000) ** 2
59 ud['color'] = random()
60
61 with self.canvas:
62 Color(ud['color'], 1, 1, mode='hsv', group=g)
63 ud['lines'] = [
64 Rectangle(pos=(touch.x, 0), size=(1, win.height), group=g),
65 Rectangle(pos=(0, touch.y), size=(win.width, 1), group=g),
66 Point(points=(touch.x, touch.y), source='particle.png',
67 pointsize=pointsize, group=g)]
68
69 ud['label'] = Label(size_hint=(None, None))
70 self.update_touch_label(ud['label'], touch)
71 self.add_widget(ud['label'])
72 touch.grab(self)
73 return True
74
75 def on_touch_move(self, touch):
76 if touch.grab_current is not self:
77 return
78 ud = touch.ud
79 ud['lines'][0].pos = touch.x, 0
80 ud['lines'][1].pos = 0, touch.y
81
82 index = -1
83
84 while True:
85 try:
86 points = ud['lines'][index].points
87 oldx, oldy = points[-2], points[-1]
88 break
89 except:
90 index -= 1
91
92 points = calculate_points(oldx, oldy, touch.x, touch.y)
93
94 # if pressure changed create a new point instruction
95 if 'pressure' in ud:
96 if not .95 < (touch.pressure / ud['pressure']) < 1.05:
97 g = ud['group']
98 pointsize = (touch.pressure * 100000) ** 2
99 with self.canvas:
100 Color(ud['color'], 1, 1, mode='hsv', group=g)
101 ud['lines'].append(
102 Point(points=(), source='particle.png',
103 pointsize=pointsize, group=g))
104
105 if points:
106 try:
107 lp = ud['lines'][-1].add_point
108 for idx in range(0, len(points), 2):
109 lp(points[idx], points[idx + 1])
110 except GraphicException:
111 pass
112
113 ud['label'].pos = touch.pos
114 import time
115 t = int(time.time())
116 if t not in ud:
117 ud[t] = 1
118 else:
119 ud[t] += 1
120 self.update_touch_label(ud['label'], touch)
121
122 def on_touch_up(self, touch):
123 if touch.grab_current is not self:
124 return
125 touch.ungrab(self)
126 ud = touch.ud
127 self.canvas.remove_group(ud['group'])
128 self.remove_widget(ud['label'])
129
130 def update_touch_label(self, label, touch):
131 label.text = 'ID: %s\nPos: (%d, %d)\nClass: %s' % (
132 touch.id, touch.x, touch.y, touch.__class__.__name__)
133 label.texture_update()
134 label.pos = touch.pos
135 label.size = label.texture_size[0] + 20, label.texture_size[1] + 20
136
137
138 class TouchtracerApp(App):
139 title = 'Touchtracer'
140 icon = 'icon.png'
141
142 def build(self):
143 return Touchtracer()
144
145 def on_pause(self):
146 return True
147
148 if __name__ == '__main__':
149 TouchtracerApp().run()
150
[end of examples/demo/touchtracer/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/demo/touchtracer/main.py b/examples/demo/touchtracer/main.py
--- a/examples/demo/touchtracer/main.py
+++ b/examples/demo/touchtracer/main.py
@@ -8,6 +8,12 @@
you drag, it leaves a trail. Additional information, like pressure,
will be shown if they are in your device's touch.profile.
+.. note::
+
+ A function `calculate_points` handling the points which will be drawn
+ has by default implemented a delay of 5 steps. To get more precise visual
+ results lower the value of the optional keyword argument `steps`.
+
This program specifies an icon, the file icon.png, in its App subclass.
It also uses the particle.png file as the source for drawing the trails which
are white on transparent. The file touchtracer.kv describes the application.
|
{"golden_diff": "diff --git a/examples/demo/touchtracer/main.py b/examples/demo/touchtracer/main.py\n--- a/examples/demo/touchtracer/main.py\n+++ b/examples/demo/touchtracer/main.py\n@@ -8,6 +8,12 @@\n you drag, it leaves a trail. Additional information, like pressure,\n will be shown if they are in your device's touch.profile.\n \n+.. note::\n+\n+ A function `calculate_points` handling the points which will be drawn\n+ has by default implemented a delay of 5 steps. To get more precise visual\n+ results lower the value of the optional keyword argument `steps`.\n+\n This program specifies an icon, the file icon.png, in its App subclass.\n It also uses the particle.png file as the source for drawing the trails which\n are white on transparent. The file touchtracer.kv describes the application.\n", "issue": "Touchtracer - delay when trying to draw\nI don't even see it if I don't look close, but if I'm right, the thing that's mentioned in [this question](http://stackoverflow.com/q/37933920/5994041) is in fact a delay I see here. If not, there's a delay anyway. It's like there is an invisible(because bg isn't only single color) brush covering the touch - such thing you can see even in Paint as it shows you where the brush actually is.\n\nTouchtracer seems to catch every touch, yet something slips and is displayed hm... after there's enough items in a list or something? E.g. it won't draw until there's at least 3-4+ touches in let's say `touches = []` so that a line could be drawn? Even if, you can construct a line with two points, but I see that more than two touches are not present when I drag the mouse until another part is drawn(see the red circle). This thing is present even with master and from each side(it's not an issue with misplaced touch coords).\n\n\n\n", "before_files": [{"content": "'''\nTouch Tracer Line Drawing Demonstration\n=======================================\n\nThis demonstrates tracking each touch registered to a device. You should\nsee a basic background image. When you press and hold the mouse, you\nshould see cross-hairs with the coordinates written next to them. As\nyou drag, it leaves a trail. Additional information, like pressure,\nwill be shown if they are in your device's touch.profile.\n\nThis program specifies an icon, the file icon.png, in its App subclass.\nIt also uses the particle.png file as the source for drawing the trails which\nare white on transparent. The file touchtracer.kv describes the application.\n\nThe file android.txt is used to package the application for use with the\nKivy Launcher Android application. 
For Android devices, you can\ncopy/paste this directory into /sdcard/kivy/touchtracer on your Android device.\n\n'''\n__version__ = '1.0'\n\nimport kivy\nkivy.require('1.0.6')\n\nfrom kivy.app import App\nfrom kivy.uix.floatlayout import FloatLayout\nfrom kivy.uix.label import Label\nfrom kivy.graphics import Color, Rectangle, Point, GraphicException\nfrom random import random\nfrom math import sqrt\n\n\ndef calculate_points(x1, y1, x2, y2, steps=5):\n dx = x2 - x1\n dy = y2 - y1\n dist = sqrt(dx * dx + dy * dy)\n if dist < steps:\n return None\n o = []\n m = dist / steps\n for i in range(1, int(m)):\n mi = i / m\n lastx = x1 + dx * mi\n lasty = y1 + dy * mi\n o.extend([lastx, lasty])\n return o\n\n\nclass Touchtracer(FloatLayout):\n\n def on_touch_down(self, touch):\n win = self.get_parent_window()\n ud = touch.ud\n ud['group'] = g = str(touch.uid)\n pointsize = 5\n if 'pressure' in touch.profile:\n ud['pressure'] = touch.pressure\n pointsize = (touch.pressure * 100000) ** 2\n ud['color'] = random()\n\n with self.canvas:\n Color(ud['color'], 1, 1, mode='hsv', group=g)\n ud['lines'] = [\n Rectangle(pos=(touch.x, 0), size=(1, win.height), group=g),\n Rectangle(pos=(0, touch.y), size=(win.width, 1), group=g),\n Point(points=(touch.x, touch.y), source='particle.png',\n pointsize=pointsize, group=g)]\n\n ud['label'] = Label(size_hint=(None, None))\n self.update_touch_label(ud['label'], touch)\n self.add_widget(ud['label'])\n touch.grab(self)\n return True\n\n def on_touch_move(self, touch):\n if touch.grab_current is not self:\n return\n ud = touch.ud\n ud['lines'][0].pos = touch.x, 0\n ud['lines'][1].pos = 0, touch.y\n\n index = -1\n\n while True:\n try:\n points = ud['lines'][index].points\n oldx, oldy = points[-2], points[-1]\n break\n except:\n index -= 1\n\n points = calculate_points(oldx, oldy, touch.x, touch.y)\n\n # if pressure changed create a new point instruction\n if 'pressure' in ud:\n if not .95 < (touch.pressure / ud['pressure']) < 1.05:\n g = ud['group']\n pointsize = (touch.pressure * 100000) ** 2\n with self.canvas:\n Color(ud['color'], 1, 1, mode='hsv', group=g)\n ud['lines'].append(\n Point(points=(), source='particle.png',\n pointsize=pointsize, group=g))\n\n if points:\n try:\n lp = ud['lines'][-1].add_point\n for idx in range(0, len(points), 2):\n lp(points[idx], points[idx + 1])\n except GraphicException:\n pass\n\n ud['label'].pos = touch.pos\n import time\n t = int(time.time())\n if t not in ud:\n ud[t] = 1\n else:\n ud[t] += 1\n self.update_touch_label(ud['label'], touch)\n\n def on_touch_up(self, touch):\n if touch.grab_current is not self:\n return\n touch.ungrab(self)\n ud = touch.ud\n self.canvas.remove_group(ud['group'])\n self.remove_widget(ud['label'])\n\n def update_touch_label(self, label, touch):\n label.text = 'ID: %s\\nPos: (%d, %d)\\nClass: %s' % (\n touch.id, touch.x, touch.y, touch.__class__.__name__)\n label.texture_update()\n label.pos = touch.pos\n label.size = label.texture_size[0] + 20, label.texture_size[1] + 20\n\n\nclass TouchtracerApp(App):\n title = 'Touchtracer'\n icon = 'icon.png'\n\n def build(self):\n return Touchtracer()\n\n def on_pause(self):\n return True\n\nif __name__ == '__main__':\n TouchtracerApp().run()\n", "path": "examples/demo/touchtracer/main.py"}]}
| 2,385 | 181 |
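
The `steps=5` threshold that the added docs note points at can be seen directly by calling `calculate_points` (copied from the touchtracer listing above) on short drags; drags shorter than `steps` pixels yield no interpolated points, which is what reads as a drawing delay:

```python
from math import sqrt

def calculate_points(x1, y1, x2, y2, steps=5):
    # verbatim helper from the touchtracer example above
    dx = x2 - x1
    dy = y2 - y1
    dist = sqrt(dx * dx + dy * dy)
    if dist < steps:
        return None
    o = []
    m = dist / steps
    for i in range(1, int(m)):
        mi = i / m
        lastx = x1 + dx * mi
        lasty = y1 + dy * mi
        o.extend([lastx, lasty])
    return o

print(calculate_points(0, 0, 3, 0))           # None: a 3-pixel drag leaves no trail (the visible delay)
print(calculate_points(0, 0, 3, 0, steps=1))  # [1.0, 0.0, 2.0, 0.0]: lower steps -> finer, more immediate trail
print(calculate_points(0, 0, 10, 0))          # [5.0, 0.0]: only one interpolated point at the default
```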
gh_patches_debug_13152
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-16013
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to reference object id in site using REST API
### Deployment Type
Self-hosted
### NetBox Version
v4.0.0
### Python Version
3.10
### Steps to Reproduce
1. Create a tenant named "Test Tenant". Make a note of the tenant's id (in my case it's 7)
2. Create a site using REST API
```
curl -s -X POST \
-H "Authorization: Token 0123456789abcdef0123456789abcdef01234567" \
-H "Content-Type: application/json" \
http://localhost:32768/api/dcim/sites/ \
--data '{"name": "Test site 1", "slug": "test-site-1", "tenant": 7}' | jq '.'
```
### Expected Behavior
The site is created and its tenant is set to Test Tenant.
### Observed Behavior
```
{
"tenant": {
"non_field_errors": [
"Invalid data. Expected a dictionary, but got int."
]
}
}
```
The same API calls work as expected in NetBox 3.7.
</issue>
<code>
[start of netbox/dcim/api/serializers_/sites.py]
1 from rest_framework import serializers
2 from timezone_field.rest_framework import TimeZoneSerializerField
3
4 from dcim.choices import *
5 from dcim.models import Location, Region, Site, SiteGroup
6 from ipam.api.serializers_.asns import ASNSerializer
7 from ipam.models import ASN
8 from netbox.api.fields import ChoiceField, RelatedObjectCountField, SerializedPKRelatedField
9 from netbox.api.serializers import NestedGroupModelSerializer, NetBoxModelSerializer
10 from tenancy.api.serializers_.tenants import TenantSerializer
11 from ..nested_serializers import *
12
13 __all__ = (
14 'LocationSerializer',
15 'RegionSerializer',
16 'SiteGroupSerializer',
17 'SiteSerializer',
18 )
19
20
21 class RegionSerializer(NestedGroupModelSerializer):
22 url = serializers.HyperlinkedIdentityField(view_name='dcim-api:region-detail')
23 parent = NestedRegionSerializer(required=False, allow_null=True, default=None)
24 site_count = serializers.IntegerField(read_only=True)
25
26 class Meta:
27 model = Region
28 fields = [
29 'id', 'url', 'display', 'name', 'slug', 'parent', 'description', 'tags', 'custom_fields', 'created',
30 'last_updated', 'site_count', '_depth',
31 ]
32 brief_fields = ('id', 'url', 'display', 'name', 'slug', 'description', 'site_count', '_depth')
33
34
35 class SiteGroupSerializer(NestedGroupModelSerializer):
36 url = serializers.HyperlinkedIdentityField(view_name='dcim-api:sitegroup-detail')
37 parent = NestedSiteGroupSerializer(required=False, allow_null=True, default=None)
38 site_count = serializers.IntegerField(read_only=True)
39
40 class Meta:
41 model = SiteGroup
42 fields = [
43 'id', 'url', 'display', 'name', 'slug', 'parent', 'description', 'tags', 'custom_fields', 'created',
44 'last_updated', 'site_count', '_depth',
45 ]
46 brief_fields = ('id', 'url', 'display', 'name', 'slug', 'description', 'site_count', '_depth')
47
48
49 class SiteSerializer(NetBoxModelSerializer):
50 url = serializers.HyperlinkedIdentityField(view_name='dcim-api:site-detail')
51 status = ChoiceField(choices=SiteStatusChoices, required=False)
52 region = RegionSerializer(nested=True, required=False, allow_null=True)
53 group = SiteGroupSerializer(nested=True, required=False, allow_null=True)
54 tenant = TenantSerializer(required=False, allow_null=True)
55 time_zone = TimeZoneSerializerField(required=False, allow_null=True)
56 asns = SerializedPKRelatedField(
57 queryset=ASN.objects.all(),
58 serializer=ASNSerializer,
59 nested=True,
60 required=False,
61 many=True
62 )
63
64 # Related object counts
65 circuit_count = RelatedObjectCountField('circuit_terminations')
66 device_count = RelatedObjectCountField('devices')
67 prefix_count = RelatedObjectCountField('prefixes')
68 rack_count = RelatedObjectCountField('racks')
69 vlan_count = RelatedObjectCountField('vlans')
70 virtualmachine_count = RelatedObjectCountField('virtual_machines')
71
72 class Meta:
73 model = Site
74 fields = [
75 'id', 'url', 'display', 'name', 'slug', 'status', 'region', 'group', 'tenant', 'facility', 'time_zone',
76 'description', 'physical_address', 'shipping_address', 'latitude', 'longitude', 'comments', 'asns', 'tags',
77 'custom_fields', 'created', 'last_updated', 'circuit_count', 'device_count', 'prefix_count', 'rack_count',
78 'virtualmachine_count', 'vlan_count',
79 ]
80 brief_fields = ('id', 'url', 'display', 'name', 'description', 'slug')
81
82
83 class LocationSerializer(NestedGroupModelSerializer):
84 url = serializers.HyperlinkedIdentityField(view_name='dcim-api:location-detail')
85 site = SiteSerializer(nested=True)
86 parent = NestedLocationSerializer(required=False, allow_null=True, default=None)
87 status = ChoiceField(choices=LocationStatusChoices, required=False)
88 tenant = TenantSerializer(nested=True, required=False, allow_null=True)
89 rack_count = serializers.IntegerField(read_only=True)
90 device_count = serializers.IntegerField(read_only=True)
91
92 class Meta:
93 model = Location
94 fields = [
95 'id', 'url', 'display', 'name', 'slug', 'site', 'parent', 'status', 'tenant', 'facility', 'description',
96 'tags', 'custom_fields', 'created', 'last_updated', 'rack_count', 'device_count', '_depth',
97 ]
98 brief_fields = ('id', 'url', 'display', 'name', 'slug', 'description', 'rack_count', '_depth')
99
[end of netbox/dcim/api/serializers_/sites.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/netbox/dcim/api/serializers_/sites.py b/netbox/dcim/api/serializers_/sites.py
--- a/netbox/dcim/api/serializers_/sites.py
+++ b/netbox/dcim/api/serializers_/sites.py
@@ -51,7 +51,7 @@
status = ChoiceField(choices=SiteStatusChoices, required=False)
region = RegionSerializer(nested=True, required=False, allow_null=True)
group = SiteGroupSerializer(nested=True, required=False, allow_null=True)
- tenant = TenantSerializer(required=False, allow_null=True)
+ tenant = TenantSerializer(nested=True, required=False, allow_null=True)
time_zone = TimeZoneSerializerField(required=False, allow_null=True)
asns = SerializedPKRelatedField(
queryset=ASN.objects.all(),
|
{"golden_diff": "diff --git a/netbox/dcim/api/serializers_/sites.py b/netbox/dcim/api/serializers_/sites.py\n--- a/netbox/dcim/api/serializers_/sites.py\n+++ b/netbox/dcim/api/serializers_/sites.py\n@@ -51,7 +51,7 @@\n status = ChoiceField(choices=SiteStatusChoices, required=False)\n region = RegionSerializer(nested=True, required=False, allow_null=True)\n group = SiteGroupSerializer(nested=True, required=False, allow_null=True)\n- tenant = TenantSerializer(required=False, allow_null=True)\n+ tenant = TenantSerializer(nested=True, required=False, allow_null=True)\n time_zone = TimeZoneSerializerField(required=False, allow_null=True)\n asns = SerializedPKRelatedField(\n queryset=ASN.objects.all(),\n", "issue": "Unable to reference object id in site using REST API \n### Deployment Type\r\n\r\nSelf-hosted\r\n\r\n### NetBox Version\r\n\r\nv4.0.0\r\n\r\n### Python Version\r\n\r\n3.10\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create a tenant named \"Test Tenant\". Make a note of the tenant's id (in my case it's 7)\r\n2. Create a site using REST API\r\n```\r\ncurl -s -X POST \\\r\n-H \"Authorization: Token 0123456789abcdef0123456789abcdef01234567\" \\\r\n-H \"Content-Type: application/json\" \\\r\nhttp://localhost:32768/api/dcim/sites/ \\\r\n--data '{\"name\": \"Test site 1\", \"slug\": \"test-site-1\", \"tenant\": 7}' | jq '.'\r\n```\r\n\r\n### Expected Behavior\r\n\r\nThe site is created in and tenant is set to Test tenant.\r\n\r\n### Observed Behavior\r\n\r\n```\r\n{\r\n \"tenant\": {\r\n \"non_field_errors\": [\r\n \"Invalid data. Expected a dictionary, but got int.\"\r\n ]\r\n }\r\n}\r\n```\r\n\r\nThe same API calls work as expected in NetBox 3.7.\n", "before_files": [{"content": "from rest_framework import serializers\nfrom timezone_field.rest_framework import TimeZoneSerializerField\n\nfrom dcim.choices import *\nfrom dcim.models import Location, Region, Site, SiteGroup\nfrom ipam.api.serializers_.asns import ASNSerializer\nfrom ipam.models import ASN\nfrom netbox.api.fields import ChoiceField, RelatedObjectCountField, SerializedPKRelatedField\nfrom netbox.api.serializers import NestedGroupModelSerializer, NetBoxModelSerializer\nfrom tenancy.api.serializers_.tenants import TenantSerializer\nfrom ..nested_serializers import *\n\n__all__ = (\n 'LocationSerializer',\n 'RegionSerializer',\n 'SiteGroupSerializer',\n 'SiteSerializer',\n)\n\n\nclass RegionSerializer(NestedGroupModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:region-detail')\n parent = NestedRegionSerializer(required=False, allow_null=True, default=None)\n site_count = serializers.IntegerField(read_only=True)\n\n class Meta:\n model = Region\n fields = [\n 'id', 'url', 'display', 'name', 'slug', 'parent', 'description', 'tags', 'custom_fields', 'created',\n 'last_updated', 'site_count', '_depth',\n ]\n brief_fields = ('id', 'url', 'display', 'name', 'slug', 'description', 'site_count', '_depth')\n\n\nclass SiteGroupSerializer(NestedGroupModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:sitegroup-detail')\n parent = NestedSiteGroupSerializer(required=False, allow_null=True, default=None)\n site_count = serializers.IntegerField(read_only=True)\n\n class Meta:\n model = SiteGroup\n fields = [\n 'id', 'url', 'display', 'name', 'slug', 'parent', 'description', 'tags', 'custom_fields', 'created',\n 'last_updated', 'site_count', '_depth',\n ]\n brief_fields = ('id', 'url', 'display', 'name', 'slug', 'description', 'site_count', '_depth')\n\n\nclass 
SiteSerializer(NetBoxModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:site-detail')\n status = ChoiceField(choices=SiteStatusChoices, required=False)\n region = RegionSerializer(nested=True, required=False, allow_null=True)\n group = SiteGroupSerializer(nested=True, required=False, allow_null=True)\n tenant = TenantSerializer(required=False, allow_null=True)\n time_zone = TimeZoneSerializerField(required=False, allow_null=True)\n asns = SerializedPKRelatedField(\n queryset=ASN.objects.all(),\n serializer=ASNSerializer,\n nested=True,\n required=False,\n many=True\n )\n\n # Related object counts\n circuit_count = RelatedObjectCountField('circuit_terminations')\n device_count = RelatedObjectCountField('devices')\n prefix_count = RelatedObjectCountField('prefixes')\n rack_count = RelatedObjectCountField('racks')\n vlan_count = RelatedObjectCountField('vlans')\n virtualmachine_count = RelatedObjectCountField('virtual_machines')\n\n class Meta:\n model = Site\n fields = [\n 'id', 'url', 'display', 'name', 'slug', 'status', 'region', 'group', 'tenant', 'facility', 'time_zone',\n 'description', 'physical_address', 'shipping_address', 'latitude', 'longitude', 'comments', 'asns', 'tags',\n 'custom_fields', 'created', 'last_updated', 'circuit_count', 'device_count', 'prefix_count', 'rack_count',\n 'virtualmachine_count', 'vlan_count',\n ]\n brief_fields = ('id', 'url', 'display', 'name', 'description', 'slug')\n\n\nclass LocationSerializer(NestedGroupModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:location-detail')\n site = SiteSerializer(nested=True)\n parent = NestedLocationSerializer(required=False, allow_null=True, default=None)\n status = ChoiceField(choices=LocationStatusChoices, required=False)\n tenant = TenantSerializer(nested=True, required=False, allow_null=True)\n rack_count = serializers.IntegerField(read_only=True)\n device_count = serializers.IntegerField(read_only=True)\n\n class Meta:\n model = Location\n fields = [\n 'id', 'url', 'display', 'name', 'slug', 'site', 'parent', 'status', 'tenant', 'facility', 'description',\n 'tags', 'custom_fields', 'created', 'last_updated', 'rack_count', 'device_count', '_depth',\n ]\n brief_fields = ('id', 'url', 'display', 'name', 'slug', 'description', 'rack_count', '_depth')\n", "path": "netbox/dcim/api/serializers_/sites.py"}]}
| 2,002 | 175 |
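
The error message in the report is Django REST Framework's generic complaint when a nested serializer field receives a bare primary key on write; the fix above simply brings the `tenant` field in line with the neighbouring fields that already pass `nested=True`. A stripped-down sketch of the same behaviour (plain DRF classes, not NetBox's; assumes djangorestframework is installed and minimal Django settings are configured as shown):

```python
import django
from django.conf import settings

settings.configure()  # minimal settings so DRF serializers can run standalone
django.setup()

from rest_framework import serializers

class TenantSerializer(serializers.Serializer):
    id = serializers.IntegerField()
    name = serializers.CharField()

class SiteSerializer(serializers.Serializer):
    name = serializers.CharField()
    tenant = TenantSerializer(required=False, allow_null=True)

s = SiteSerializer(data={"name": "Test site 1", "tenant": 7})
print(s.is_valid())  # False
print(s.errors)      # tenant -> 'Invalid data. Expected a dictionary, but got int.', as in the report
```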
gh_patches_debug_17100
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-4109
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ampersand in window name returns an error
### The issue:
## Qtile version
0.22.1
## Issue
Ampersands in window name return an error with the WindowTabs widget
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/libqtile/hook.py", line 404, in fire
i(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/libqtile/widget/windowtabs.py", line 82, in update
self.text = self.separator.join(names)
File "/usr/lib/python3.10/site-packages/libqtile/widget/base.py", line 483, in text
self.layout.text = self.formatted_text
File "/usr/lib/python3.10/site-packages/libqtile/drawer.py", line 72, in text
attrlist, value, accel_char = pangocffi.parse_markup(value)
File "/usr/lib/python3.10/site-packages/libqtile/pangocffi.py", line 186, in parse_markup
raise Exception("parse_markup() failed for %s" % value)
Exception: parse_markup() failed for b'<b>Search \xc2\xb7 & \xe2\x80\x94 Mozilla Firefox</b>'
```
The same goes for the Mpris2 widget
```
2023-01-07 17:07:22,656 ERROR libqtile loop.py:_handle_exception():L63 parse_markup() failed for b'Fireman & Dancer - Royal Republic'
NoneType: None
2023-01-07 17:07:22,656 ERROR libqtile loop.py:_handle_exception():L63 parse_markup() failed for b'Fireman & Dancer - Royal Republic'
NoneType: None
```
Found a similar issue [#1685](https://github.com/qtile/qtile/issues/1685) but for the WindowName widget
### Required:
- [X] I have searched past issues to see if this bug has already been reported.
</issue>
<code>
[start of libqtile/widget/windowtabs.py]
1 # Copyright (c) 2012-2013 Craig Barnes
2 # Copyright (c) 2012 roger
3 # Copyright (c) 2012, 2014 Tycho Andersen
4 # Copyright (c) 2014 Sean Vig
5 # Copyright (c) 2014 Adi Sieker
6 #
7 # Permission is hereby granted, free of charge, to any person obtaining a copy
8 # of this software and associated documentation files (the "Software"), to deal
9 # in the Software without restriction, including without limitation the rights
10 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
11 # copies of the Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice shall be included in
15 # all copies or substantial portions of the Software.
16 #
17 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
18 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
19 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
20 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
21 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
22 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
23 # SOFTWARE.
24
25 from libqtile import bar, hook
26 from libqtile.log_utils import logger
27 from libqtile.widget import base
28
29
30 class WindowTabs(base._TextBox):
31 """
32 Displays the name of each window in the current group.
33 Contrary to TaskList this is not an interactive widget.
34 The window that currently has focus is highlighted.
35 """
36
37 defaults = [
38 ("separator", " | ", "Task separator text."),
39 ("selected", ("<b>", "</b>"), "Selected task indicator"),
40 (
41 "parse_text",
42 None,
43 "Function to parse and modify window names. "
44 "e.g. function in config that removes excess "
45 "strings from window name: "
46 "def my_func(text)"
47 ' for string in [" - Chromium", " - Firefox"]:'
48 ' text = text.replace(string, "")'
49 " return text"
50 "then set option parse_text=my_func",
51 ),
52 ]
53
54 def __init__(self, **config):
55 width = config.pop("width", bar.STRETCH)
56 base._TextBox.__init__(self, width=width, **config)
57 self.add_defaults(WindowTabs.defaults)
58 if not isinstance(self.selected, (tuple, list)):
59 self.selected = (self.selected, self.selected)
60
61 def _configure(self, qtile, bar):
62 base._TextBox._configure(self, qtile, bar)
63 hook.subscribe.client_name_updated(self.update)
64 hook.subscribe.focus_change(self.update)
65 hook.subscribe.float_change(self.update)
66 self.add_callbacks({"Button1": self.bar.screen.group.next_window})
67
68 def update(self, *args):
69 names = []
70 for w in self.bar.screen.group.windows:
71 state = ""
72 if w.maximized:
73 state = "[] "
74 elif w.minimized:
75 state = "_ "
76 elif w.floating:
77 state = "V "
78 task = "%s%s" % (state, w.name if w and w.name else " ")
79 if w is self.bar.screen.group.current_window:
80 task = task.join(self.selected)
81 names.append(task)
82 self.text = self.separator.join(names)
83 if callable(self.parse_text):
84 try:
85 self.text = self.parse_text(self.text)
86 except: # noqa: E722
87 logger.exception("parse_text function failed:")
88 self.bar.draw()
89
[end of libqtile/widget/windowtabs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/widget/windowtabs.py b/libqtile/widget/windowtabs.py
--- a/libqtile/widget/windowtabs.py
+++ b/libqtile/widget/windowtabs.py
@@ -22,7 +22,7 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
-from libqtile import bar, hook
+from libqtile import bar, hook, pangocffi
from libqtile.log_utils import logger
from libqtile.widget import base
@@ -76,6 +76,7 @@
elif w.floating:
state = "V "
task = "%s%s" % (state, w.name if w and w.name else " ")
+ task = pangocffi.markup_escape_text(task)
if w is self.bar.screen.group.current_window:
task = task.join(self.selected)
names.append(task)
|
{"golden_diff": "diff --git a/libqtile/widget/windowtabs.py b/libqtile/widget/windowtabs.py\n--- a/libqtile/widget/windowtabs.py\n+++ b/libqtile/widget/windowtabs.py\n@@ -22,7 +22,7 @@\n # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n # SOFTWARE.\n \n-from libqtile import bar, hook\n+from libqtile import bar, hook, pangocffi\n from libqtile.log_utils import logger\n from libqtile.widget import base\n \n@@ -76,6 +76,7 @@\n elif w.floating:\n state = \"V \"\n task = \"%s%s\" % (state, w.name if w and w.name else \" \")\n+ task = pangocffi.markup_escape_text(task)\n if w is self.bar.screen.group.current_window:\n task = task.join(self.selected)\n names.append(task)\n", "issue": "Ampersand in window name return an error\n### The issue:\n\n## Qtile version\r\n0.22.1\r\n\r\n## Issue\r\nAmpersands in window name return an error with the WindowTabs widget\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/site-packages/libqtile/hook.py\", line 404, in fire\r\n i(*args, **kwargs)\r\n File \"/usr/lib/python3.10/site-packages/libqtile/widget/windowtabs.py\", line 82, in update\r\n self.text = self.separator.join(names)\r\n File \"/usr/lib/python3.10/site-packages/libqtile/widget/base.py\", line 483, in text\r\n self.layout.text = self.formatted_text\r\n File \"/usr/lib/python3.10/site-packages/libqtile/drawer.py\", line 72, in text\r\n attrlist, value, accel_char = pangocffi.parse_markup(value)\r\n File \"/usr/lib/python3.10/site-packages/libqtile/pangocffi.py\", line 186, in parse_markup\r\n raise Exception(\"parse_markup() failed for %s\" % value)\r\nException: parse_markup() failed for b'<b>Search \\xc2\\xb7 & \\xe2\\x80\\x94 Mozilla Firefox</b>'\r\n```\r\n\r\nThe same goes for the Mpris2 widget\r\n```\r\n2023-01-07 17:07:22,656 ERROR libqtile loop.py:_handle_exception():L63 parse_markup() failed for b'Fireman & Dancer - Royal Republic'\r\nNoneType: None\r\n2023-01-07 17:07:22,656 ERROR libqtile loop.py:_handle_exception():L63 parse_markup() failed for b'Fireman & Dancer - Royal Republic'\r\nNoneType: None\r\n````\r\n\r\nFound a similar issue [#1685](https://github.com/qtile/qtile/issues/1685) but for the WindowName widget\n\n### Required:\n\n- [X] I have searched past issues to see if this bug has already been reported.\n", "before_files": [{"content": "# Copyright (c) 2012-2013 Craig Barnes\n# Copyright (c) 2012 roger\n# Copyright (c) 2012, 2014 Tycho Andersen\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom libqtile import bar, hook\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass WindowTabs(base._TextBox):\n \"\"\"\n Displays the name of each window in the current group.\n Contrary to TaskList this is not an interactive widget.\n The window that currently has focus is highlighted.\n \"\"\"\n\n defaults = [\n (\"separator\", \" | \", \"Task separator text.\"),\n (\"selected\", (\"<b>\", \"</b>\"), \"Selected task indicator\"),\n (\n \"parse_text\",\n None,\n \"Function to parse and modify window names. \"\n \"e.g. function in config that removes excess \"\n \"strings from window name: \"\n \"def my_func(text)\"\n ' for string in [\" - Chromium\", \" - Firefox\"]:'\n ' text = text.replace(string, \"\")'\n \" return text\"\n \"then set option parse_text=my_func\",\n ),\n ]\n\n def __init__(self, **config):\n width = config.pop(\"width\", bar.STRETCH)\n base._TextBox.__init__(self, width=width, **config)\n self.add_defaults(WindowTabs.defaults)\n if not isinstance(self.selected, (tuple, list)):\n self.selected = (self.selected, self.selected)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n hook.subscribe.client_name_updated(self.update)\n hook.subscribe.focus_change(self.update)\n hook.subscribe.float_change(self.update)\n self.add_callbacks({\"Button1\": self.bar.screen.group.next_window})\n\n def update(self, *args):\n names = []\n for w in self.bar.screen.group.windows:\n state = \"\"\n if w.maximized:\n state = \"[] \"\n elif w.minimized:\n state = \"_ \"\n elif w.floating:\n state = \"V \"\n task = \"%s%s\" % (state, w.name if w and w.name else \" \")\n if w is self.bar.screen.group.current_window:\n task = task.join(self.selected)\n names.append(task)\n self.text = self.separator.join(names)\n if callable(self.parse_text):\n try:\n self.text = self.parse_text(self.text)\n except: # noqa: E722\n logger.exception(\"parse_text function failed:\")\n self.bar.draw()\n", "path": "libqtile/widget/windowtabs.py"}]}
| 1,981 | 196 |
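
The fix escapes the window title before it reaches Pango's markup parser, so a literal `&` becomes `&amp;` (and `<`/`>` are handled likewise). qtile's own helper is `pangocffi.markup_escape_text`, as the diff shows; the stdlib `xml.sax.saxutils.escape` demonstrates the same transformation without qtile installed:

```python
from xml.sax.saxutils import escape

title = "Fireman & Dancer - Royal Republic"
print(escape(title))                 # Fireman &amp; Dancer - Royal Republic
print("<b>%s</b>" % escape(title))   # selected-task markup now parses cleanly
```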
gh_patches_debug_180
|
rasdani/github-patches
|
git_diff
|
dask__dask-6299
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
importing fails when calling python -OO
This was discovered by `xarray`'s `upstream-dev` CI ([environment](https://dev.azure.com/xarray/xarray/_build/results?buildId=2996&view=logs&j=2280efed-fda1-53bd-9213-1fa8ec9b4fa8&t=031ddd67-e55f-5fbd-2283-1ff4dfed6587)) a few days ago, but we were a bit slow in reporting so this also happens with the newly released `2.18.0`.
The problem is this:
```
$ python -OO -c 'import dask.array'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File ".../lib/python3.8/site-packages/dask/array/__init__.py", line 26, in <module>
from .routines import (
File ".../lib/python3.8/site-packages/dask/array/routines.py", line 18, in <module>
from .creation import arange, diag, empty, indices
File ".../lib/python3.8/site-packages/dask/array/creation.py", line 26, in <module>
from .wrap import empty, ones, zeros, full
File ".../lib/python3.8/site-packages/dask/array/wrap.py", line 173, in <module>
full.__name__ = _full.__name__
AttributeError: 'functools.partial' object has no attribute '__name__'
```
without the optimization, the import obviously works.
See also pydata/xarray#4124
</issue>
<code>
[start of dask/array/wrap.py]
1 from functools import partial
2 from itertools import product
3
4 import numpy as np
5
6 from tlz import curry
7
8 from ..base import tokenize
9 from ..utils import funcname
10 from .core import Array, normalize_chunks
11 from .utils import meta_from_array
12
13
14 def _parse_wrap_args(func, args, kwargs, shape):
15 if isinstance(shape, np.ndarray):
16 shape = shape.tolist()
17
18 if not isinstance(shape, (tuple, list)):
19 shape = (shape,)
20
21 name = kwargs.pop("name", None)
22 chunks = kwargs.pop("chunks", "auto")
23
24 dtype = kwargs.pop("dtype", None)
25 if dtype is None:
26 dtype = func(shape, *args, **kwargs).dtype
27 dtype = np.dtype(dtype)
28
29 chunks = normalize_chunks(chunks, shape, dtype=dtype)
30
31 name = name or funcname(func) + "-" + tokenize(
32 func, shape, chunks, dtype, args, kwargs
33 )
34
35 return {
36 "shape": shape,
37 "dtype": dtype,
38 "kwargs": kwargs,
39 "chunks": chunks,
40 "name": name,
41 }
42
43
44 def wrap_func_shape_as_first_arg(func, *args, **kwargs):
45 """
46 Transform np creation function into blocked version
47 """
48 if "shape" not in kwargs:
49 shape, args = args[0], args[1:]
50 else:
51 shape = kwargs.pop("shape")
52
53 if isinstance(shape, Array):
54 raise TypeError(
55 "Dask array input not supported. "
56 "Please use tuple, list, or a 1D numpy array instead."
57 )
58
59 parsed = _parse_wrap_args(func, args, kwargs, shape)
60 shape = parsed["shape"]
61 dtype = parsed["dtype"]
62 chunks = parsed["chunks"]
63 name = parsed["name"]
64 kwargs = parsed["kwargs"]
65
66 keys = product([name], *[range(len(bd)) for bd in chunks])
67 shapes = product(*chunks)
68 func = partial(func, dtype=dtype, **kwargs)
69 vals = ((func,) + (s,) + args for s in shapes)
70
71 dsk = dict(zip(keys, vals))
72 return Array(dsk, name, chunks, dtype=dtype)
73
74
75 def wrap_func_like(func, *args, **kwargs):
76 """
77 Transform np creation function into blocked version
78 """
79 x = args[0]
80 meta = meta_from_array(x)
81 shape = kwargs.get("shape", x.shape)
82
83 parsed = _parse_wrap_args(func, args, kwargs, shape)
84 shape = parsed["shape"]
85 dtype = parsed["dtype"]
86 chunks = parsed["chunks"]
87 name = parsed["name"]
88 kwargs = parsed["kwargs"]
89
90 keys = product([name], *[range(len(bd)) for bd in chunks])
91 shapes = product(*chunks)
92 shapes = list(shapes)
93 kw = [kwargs for _ in shapes]
94 for i, s in enumerate(list(shapes)):
95 kw[i]["shape"] = s
96 vals = ((partial(func, dtype=dtype, **k),) + args for (k, s) in zip(kw, shapes))
97
98 dsk = dict(zip(keys, vals))
99
100 return Array(dsk, name, chunks, meta=meta.astype(dtype))
101
102
103 def wrap_func_like_safe(func, func_like, *args, **kwargs):
104 """
105 Safe implementation for wrap_func_like(), attempts to use func_like(),
106 if the shape keyword argument, falls back to func().
107 """
108 try:
109 return func_like(*args, **kwargs)
110 except TypeError:
111 return func(*args, **kwargs)
112
113
114 @curry
115 def wrap(wrap_func, func, **kwargs):
116 func_like = kwargs.pop("func_like", None)
117 if func_like is None:
118 f = partial(wrap_func, func, **kwargs)
119 else:
120 f = partial(wrap_func, func_like, **kwargs)
121 template = """
122 Blocked variant of %(name)s
123
124 Follows the signature of %(name)s exactly except that it also features
125 optional keyword arguments ``chunks: int, tuple, or dict`` and ``name: str``.
126
127 Original signature follows below.
128 """
129 if func.__doc__ is not None:
130 f.__doc__ = template % {"name": func.__name__} + func.__doc__
131 f.__name__ = "blocked_" + func.__name__
132 return f
133
134
135 w = wrap(wrap_func_shape_as_first_arg)
136
137 ones = w(np.ones, dtype="f8")
138 zeros = w(np.zeros, dtype="f8")
139 empty = w(np.empty, dtype="f8")
140
141
142 w_like = wrap(wrap_func_like_safe)
143
144
145 empty_like = w_like(np.empty, func_like=np.empty_like)
146
147
148 # full and full_like require special casing due to argument check on fill_value
149 # Generate wrapped functions only once
150 _full = w(np.full)
151 _full_like = w_like(np.full, func_like=np.full_like)
152
153
154 def full(shape, fill_value, *args, **kwargs):
155 # np.isscalar has somewhat strange behavior:
156 # https://docs.scipy.org/doc/numpy/reference/generated/numpy.isscalar.html
157 if np.ndim(fill_value) != 0:
158 raise ValueError(
159 f"fill_value must be scalar. Received {type(fill_value).__name__} instead."
160 )
161 return _full(shape=shape, fill_value=fill_value, *args, **kwargs)
162
163
164 def full_like(a, fill_value, *args, **kwargs):
165 if np.ndim(fill_value) != 0:
166 raise ValueError(
167 f"fill_value must be scalar. Received {type(fill_value).__name__} instead."
168 )
169 return _full_like(a=a, fill_value=fill_value, *args, **kwargs,)
170
171
172 full.__doc__ = _full.__doc__
173 full.__name__ = _full.__name__
174 full_like.__doc__ = _full_like.__doc__
175 full_like.__name__ = _full_like.__name__
176
[end of dask/array/wrap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dask/array/wrap.py b/dask/array/wrap.py
--- a/dask/array/wrap.py
+++ b/dask/array/wrap.py
@@ -170,6 +170,4 @@
full.__doc__ = _full.__doc__
-full.__name__ = _full.__name__
full_like.__doc__ = _full_like.__doc__
-full_like.__name__ = _full_like.__name__
|
{"golden_diff": "diff --git a/dask/array/wrap.py b/dask/array/wrap.py\n--- a/dask/array/wrap.py\n+++ b/dask/array/wrap.py\n@@ -170,6 +170,4 @@\n \n \n full.__doc__ = _full.__doc__\n-full.__name__ = _full.__name__\n full_like.__doc__ = _full_like.__doc__\n-full_like.__name__ = _full_like.__name__\n", "issue": "importing fails when calling python -OO\nThis was discovered by `xarray`'s `upstream-dev` CI ([environment](https://dev.azure.com/xarray/xarray/_build/results?buildId=2996&view=logs&j=2280efed-fda1-53bd-9213-1fa8ec9b4fa8&t=031ddd67-e55f-5fbd-2283-1ff4dfed6587)) a few days ago, but we were a bit slow in reporting so this also happens with the newly released `2.18.0`.\r\n\r\nThe problem is this:\r\n```\r\n$ python -OO -c 'import dask.array'\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \".../lib/python3.8/site-packages/dask/array/__init__.py\", line 26, in <module>\r\n from .routines import (\r\n File \".../lib/python3.8/site-packages/dask/array/routines.py\", line 18, in <module>\r\n from .creation import arange, diag, empty, indices\r\n File \".../lib/python3.8/site-packages/dask/array/creation.py\", line 26, in <module>\r\n from .wrap import empty, ones, zeros, full\r\n File \".../lib/python3.8/site-packages/dask/array/wrap.py\", line 173, in <module>\r\n full.__name__ = _full.__name__\r\nAttributeError: 'functools.partial' object has no attribute '__name__'\r\n```\r\nwithout the optimization, the import obviously works.\r\n\r\nSee also pydata/xarray#4124\n", "before_files": [{"content": "from functools import partial\nfrom itertools import product\n\nimport numpy as np\n\nfrom tlz import curry\n\nfrom ..base import tokenize\nfrom ..utils import funcname\nfrom .core import Array, normalize_chunks\nfrom .utils import meta_from_array\n\n\ndef _parse_wrap_args(func, args, kwargs, shape):\n if isinstance(shape, np.ndarray):\n shape = shape.tolist()\n\n if not isinstance(shape, (tuple, list)):\n shape = (shape,)\n\n name = kwargs.pop(\"name\", None)\n chunks = kwargs.pop(\"chunks\", \"auto\")\n\n dtype = kwargs.pop(\"dtype\", None)\n if dtype is None:\n dtype = func(shape, *args, **kwargs).dtype\n dtype = np.dtype(dtype)\n\n chunks = normalize_chunks(chunks, shape, dtype=dtype)\n\n name = name or funcname(func) + \"-\" + tokenize(\n func, shape, chunks, dtype, args, kwargs\n )\n\n return {\n \"shape\": shape,\n \"dtype\": dtype,\n \"kwargs\": kwargs,\n \"chunks\": chunks,\n \"name\": name,\n }\n\n\ndef wrap_func_shape_as_first_arg(func, *args, **kwargs):\n \"\"\"\n Transform np creation function into blocked version\n \"\"\"\n if \"shape\" not in kwargs:\n shape, args = args[0], args[1:]\n else:\n shape = kwargs.pop(\"shape\")\n\n if isinstance(shape, Array):\n raise TypeError(\n \"Dask array input not supported. 
\"\n \"Please use tuple, list, or a 1D numpy array instead.\"\n )\n\n parsed = _parse_wrap_args(func, args, kwargs, shape)\n shape = parsed[\"shape\"]\n dtype = parsed[\"dtype\"]\n chunks = parsed[\"chunks\"]\n name = parsed[\"name\"]\n kwargs = parsed[\"kwargs\"]\n\n keys = product([name], *[range(len(bd)) for bd in chunks])\n shapes = product(*chunks)\n func = partial(func, dtype=dtype, **kwargs)\n vals = ((func,) + (s,) + args for s in shapes)\n\n dsk = dict(zip(keys, vals))\n return Array(dsk, name, chunks, dtype=dtype)\n\n\ndef wrap_func_like(func, *args, **kwargs):\n \"\"\"\n Transform np creation function into blocked version\n \"\"\"\n x = args[0]\n meta = meta_from_array(x)\n shape = kwargs.get(\"shape\", x.shape)\n\n parsed = _parse_wrap_args(func, args, kwargs, shape)\n shape = parsed[\"shape\"]\n dtype = parsed[\"dtype\"]\n chunks = parsed[\"chunks\"]\n name = parsed[\"name\"]\n kwargs = parsed[\"kwargs\"]\n\n keys = product([name], *[range(len(bd)) for bd in chunks])\n shapes = product(*chunks)\n shapes = list(shapes)\n kw = [kwargs for _ in shapes]\n for i, s in enumerate(list(shapes)):\n kw[i][\"shape\"] = s\n vals = ((partial(func, dtype=dtype, **k),) + args for (k, s) in zip(kw, shapes))\n\n dsk = dict(zip(keys, vals))\n\n return Array(dsk, name, chunks, meta=meta.astype(dtype))\n\n\ndef wrap_func_like_safe(func, func_like, *args, **kwargs):\n \"\"\"\n Safe implementation for wrap_func_like(), attempts to use func_like(),\n if the shape keyword argument, falls back to func().\n \"\"\"\n try:\n return func_like(*args, **kwargs)\n except TypeError:\n return func(*args, **kwargs)\n\n\n@curry\ndef wrap(wrap_func, func, **kwargs):\n func_like = kwargs.pop(\"func_like\", None)\n if func_like is None:\n f = partial(wrap_func, func, **kwargs)\n else:\n f = partial(wrap_func, func_like, **kwargs)\n template = \"\"\"\n Blocked variant of %(name)s\n\n Follows the signature of %(name)s exactly except that it also features\n optional keyword arguments ``chunks: int, tuple, or dict`` and ``name: str``.\n\n Original signature follows below.\n \"\"\"\n if func.__doc__ is not None:\n f.__doc__ = template % {\"name\": func.__name__} + func.__doc__\n f.__name__ = \"blocked_\" + func.__name__\n return f\n\n\nw = wrap(wrap_func_shape_as_first_arg)\n\nones = w(np.ones, dtype=\"f8\")\nzeros = w(np.zeros, dtype=\"f8\")\nempty = w(np.empty, dtype=\"f8\")\n\n\nw_like = wrap(wrap_func_like_safe)\n\n\nempty_like = w_like(np.empty, func_like=np.empty_like)\n\n\n# full and full_like require special casing due to argument check on fill_value\n# Generate wrapped functions only once\n_full = w(np.full)\n_full_like = w_like(np.full, func_like=np.full_like)\n\n\ndef full(shape, fill_value, *args, **kwargs):\n # np.isscalar has somewhat strange behavior:\n # https://docs.scipy.org/doc/numpy/reference/generated/numpy.isscalar.html\n if np.ndim(fill_value) != 0:\n raise ValueError(\n f\"fill_value must be scalar. Received {type(fill_value).__name__} instead.\"\n )\n return _full(shape=shape, fill_value=fill_value, *args, **kwargs)\n\n\ndef full_like(a, fill_value, *args, **kwargs):\n if np.ndim(fill_value) != 0:\n raise ValueError(\n f\"fill_value must be scalar. Received {type(fill_value).__name__} instead.\"\n )\n return _full_like(a=a, fill_value=fill_value, *args, **kwargs,)\n\n\nfull.__doc__ = _full.__doc__\nfull.__name__ = _full.__name__\nfull_like.__doc__ = _full_like.__doc__\nfull_like.__name__ = _full_like.__name__\n", "path": "dask/array/wrap.py"}]}
| 2,631 | 95 |
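
The traceback bottoms out in a plain `functools.partial`, which never has a `__name__`: `wrap()` only copies `__doc__` and `__name__` onto the wrapper inside the `if func.__doc__ is not None` branch, and `-OO` strips docstrings, so the module-level `full.__name__ = _full.__name__` assignment fails. A stdlib-only sketch (toy `full`, not numpy's) of the missing attribute:

```python
import functools

def full(shape, fill_value):
    """Toy stand-in for numpy.full."""
    return [fill_value] * shape

partial_full = functools.partial(full, fill_value=0)

print(partial_full(3))                    # [0, 0, 0] -- calling the partial works fine
print(hasattr(partial_full, "__name__"))  # False -> copying __name__ from a bare partial raises AttributeError
```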
gh_patches_debug_26708
|
rasdani/github-patches
|
git_diff
|
python-telegram-bot__python-telegram-bot-3552
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] telegram.ext._utils.stack.was_called_by gives incorrect result on 64-bit machine
### Steps to Reproduce
1. Set up virtualenv using `python3 -m venv` on a 64-bit machine.
2. Initialize an `application` object using `telegram.ext.ApplicationBuilder`:
``` python
from telegram.ext import ApplicationBuilder
def main() -> None:
application = ApplicationBuilder().token("TOKEN").build()
if __name__ == "__main__":
main()
```
4. Run the bot in virtualenv and it will give a warning message like ```PTBUserWarning: `Application` instances should be built via the `ApplicationBuilder`.```
### Expected behaviour
The warning message shouldn't be given since `ApplicationBuilder` is being used.
### Actual behaviour
``` bash
$ python test.py
../venv/lib64/python3.11/site-packages/telegram/ext/_applicationbuilder.py:292:
PTBUserWarning: `Application` instances should be built via the `ApplicationBuilder`.
] = DefaultValue.get_value( # pylint: disable=not-callable
```
### Operating System
Fedora Linux 37 (Server Edition)
### Version of Python, python-telegram-bot & dependencies
```shell
python-telegram-bot 20.0
Bot API 6.4
Python 3.11.1 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)]
```
### Relevant log output
_No response_
### Additional Context
I believe this is caused by comparing a resolved path with an unresolved path [here](https://github.com/python-telegram-bot/python-telegram-bot/blob/master/telegram/ext/_application.py#L273).
In my case, it finds `../venv/lib/python3.11/site-packages/telegram/ext/_applicationbuilder.py` not equal to `../venv/lib64/python3.11/site-packages/telegram/ext/_applicationbuilder.py`, the directory `lib64` being a symlink to `lib`.
A quick (maybe not final) fix is to modify [stack.py](https://github.com/python-telegram-bot/python-telegram-bot/blob/master/telegram/ext/_utils/stack.py) so that `was_called_by` always resolves paths from frame:
``` python
while frame.f_back:
frame = frame.f_back
if Path(frame.f_code.co_filename).resolve() == caller:
return True
```
I have tested it and the warning no longer appears.
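For illustration, a minimal POSIX-only sketch of the mismatch (the temporary-directory layout is an assumption, not taken from the report): a path that goes through a `lib64` symlink compares unequal to the `lib` path until both sides are resolved.
```python
import os
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
(root / "lib").mkdir()
os.symlink(root / "lib", root / "lib64")          # lib64 -> lib, as on Fedora

via_symlink = root / "lib64" / "_applicationbuilder.py"
direct = root / "lib" / "_applicationbuilder.py"

print(via_symlink == direct)                      # False: textual paths differ
print(via_symlink.resolve() == direct.resolve())  # True: same file once resolved
```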
</issue>
<code>
[start of telegram/ext/_utils/stack.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2023
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains helper functions related to inspecting the program stack.
20
21 .. versionadded:: 20.0
22
23 Warning:
24 Contents of this module are intended to be used internally by the library and *not* by the
25 user. Changes to this module are not considered breaking changes and may not be documented in
26 the changelog.
27 """
28 from pathlib import Path
29 from types import FrameType
30 from typing import Optional
31
32
33 def was_called_by(frame: Optional[FrameType], caller: Path) -> bool:
34 """Checks if the passed frame was called by the specified file.
35
36 Example:
37 .. code:: pycon
38
39 >>> was_called_by(inspect.currentframe(), Path(__file__))
40 True
41
42 Arguments:
43 frame (:obj:`FrameType`): The frame - usually the return value of
44 ``inspect.currentframe()``. If :obj:`None` is passed, the return value will be
45 :obj:`False`.
46 caller (:obj:`pathlib.Path`): File that should be the caller.
47
48 Returns:
49 :obj:`bool`: Whether the frame was called by the specified file.
50 """
51 if frame is None:
52 return False
53
54 # https://stackoverflow.com/a/57712700/10606962
55 if Path(frame.f_code.co_filename) == caller:
56 return True
57 while frame.f_back:
58 frame = frame.f_back
59 if Path(frame.f_code.co_filename) == caller:
60 return True
61 return False
62
[end of telegram/ext/_utils/stack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/telegram/ext/_utils/stack.py b/telegram/ext/_utils/stack.py
--- a/telegram/ext/_utils/stack.py
+++ b/telegram/ext/_utils/stack.py
@@ -25,10 +25,13 @@
user. Changes to this module are not considered breaking changes and may not be documented in
the changelog.
"""
+import logging
from pathlib import Path
from types import FrameType
from typing import Optional
+_logger = logging.getLogger(__name__)
+
def was_called_by(frame: Optional[FrameType], caller: Path) -> bool:
"""Checks if the passed frame was called by the specified file.
@@ -51,11 +54,22 @@
if frame is None:
return False
+ try:
+ return _was_called_by(frame, caller)
+ except Exception as exc:
+ _logger.debug(
+ "Failed to check if frame was called by `caller`. Assuming that it was not.",
+ exc_info=exc,
+ )
+ return False
+
+
+def _was_called_by(frame: FrameType, caller: Path) -> bool:
# https://stackoverflow.com/a/57712700/10606962
- if Path(frame.f_code.co_filename) == caller:
+ if Path(frame.f_code.co_filename).resolve() == caller:
return True
while frame.f_back:
frame = frame.f_back
- if Path(frame.f_code.co_filename) == caller:
+ if Path(frame.f_code.co_filename).resolve() == caller:
return True
return False
|
{"golden_diff": "diff --git a/telegram/ext/_utils/stack.py b/telegram/ext/_utils/stack.py\n--- a/telegram/ext/_utils/stack.py\n+++ b/telegram/ext/_utils/stack.py\n@@ -25,10 +25,13 @@\n user. Changes to this module are not considered breaking changes and may not be documented in\n the changelog.\n \"\"\"\n+import logging\n from pathlib import Path\n from types import FrameType\n from typing import Optional\n \n+_logger = logging.getLogger(__name__)\n+\n \n def was_called_by(frame: Optional[FrameType], caller: Path) -> bool:\n \"\"\"Checks if the passed frame was called by the specified file.\n@@ -51,11 +54,22 @@\n if frame is None:\n return False\n \n+ try:\n+ return _was_called_by(frame, caller)\n+ except Exception as exc:\n+ _logger.debug(\n+ \"Failed to check if frame was called by `caller`. Assuming that it was not.\",\n+ exc_info=exc,\n+ )\n+ return False\n+\n+\n+def _was_called_by(frame: FrameType, caller: Path) -> bool:\n # https://stackoverflow.com/a/57712700/10606962\n- if Path(frame.f_code.co_filename) == caller:\n+ if Path(frame.f_code.co_filename).resolve() == caller:\n return True\n while frame.f_back:\n frame = frame.f_back\n- if Path(frame.f_code.co_filename) == caller:\n+ if Path(frame.f_code.co_filename).resolve() == caller:\n return True\n return False\n", "issue": "[BUG] telegram.ext._utils.stack.was_called_by gives incorrect result on 64-bit machine\n### Steps to Reproduce\n\n1. Set up virtualenv using `python3 -m venv` on a 64-bit machine.\r\n2. Initialize an `application` object using `telegram.ext.ApplicationBuilder`:\r\n``` python\r\nfrom telegram.ext import ApplicationBuilder\r\ndef main() -> None:\r\n application = ApplicationBuilder().token(\"TOKEN\").build()\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n4. Run the bot in virtualenv and it will give a warning messgae like ```PTBUserWarning: `Application` instances should be built via the `ApplicationBuilder`.```\r\n\n\n### Expected behaviour\n\nThe warning message shouldn't be given since `ApplicationBuilder` is being used.\n\n### Actual behaviour\n\n``` bash\r\n$ python test.py \r\n../venv/lib64/python3.11/site-packages/telegram/ext/_applicationbuilder.py:292: \r\nPTBUserWarning: `Application` instances should be built via the `ApplicationBuilder`.\r\n ] = DefaultValue.get_value( # pylint: disable=not-callable\r\n```\r\n\r\n\n\n### Operating System\n\nFedora Linux 37 (Server Edition)\n\n### Version of Python, python-telegram-bot & dependencies\n\n```shell\npython-telegram-bot 20.0\r\nBot API 6.4\r\nPython 3.11.1 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)]\n```\n\n\n### Relevant log output\n\n_No response_\n\n### Additional Context\n\nI believe this is caused by comparing a resolved path with an unresolved path [here](https://github.com/python-telegram-bot/python-telegram-bot/blob/master/telegram/ext/_application.py#L273). 
\r\n\r\nIn my case, it finds `../venv/lib/python3.11/site-packages/telegram/ext/_applicationbuilder.py` not equal to `../venv/lib64/python3.11/site-packages/telegram/ext/_applicationbuilder.py`, the directory `lib64` being a symlink to `lib`.\r\n\r\nA quick (maybe not final) fix is to modify [stack.py](https://github.com/python-telegram-bot/python-telegram-bot/blob/master/telegram/ext/_utils/stack.py) so that `was_called_by` always resolves paths from frame:\r\n``` python\r\n while frame.f_back:\r\n frame = frame.f_back\r\n if Path(frame.f_code.co_filename).resolve() == caller:\r\n return True\r\n```\r\nI have tested it and the warning no longer appears.\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2023\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains helper functions related to inspecting the program stack.\n\n.. versionadded:: 20.0\n\nWarning:\n Contents of this module are intended to be used internally by the library and *not* by the\n user. Changes to this module are not considered breaking changes and may not be documented in\n the changelog.\n\"\"\"\nfrom pathlib import Path\nfrom types import FrameType\nfrom typing import Optional\n\n\ndef was_called_by(frame: Optional[FrameType], caller: Path) -> bool:\n \"\"\"Checks if the passed frame was called by the specified file.\n\n Example:\n .. code:: pycon\n\n >>> was_called_by(inspect.currentframe(), Path(__file__))\n True\n\n Arguments:\n frame (:obj:`FrameType`): The frame - usually the return value of\n ``inspect.currentframe()``. If :obj:`None` is passed, the return value will be\n :obj:`False`.\n caller (:obj:`pathlib.Path`): File that should be the caller.\n\n Returns:\n :obj:`bool`: Whether the frame was called by the specified file.\n \"\"\"\n if frame is None:\n return False\n\n # https://stackoverflow.com/a/57712700/10606962\n if Path(frame.f_code.co_filename) == caller:\n return True\n while frame.f_back:\n frame = frame.f_back\n if Path(frame.f_code.co_filename) == caller:\n return True\n return False\n", "path": "telegram/ext/_utils/stack.py"}]}
| 1,753 | 359 |
gh_patches_debug_17575
|
rasdani/github-patches
|
git_diff
|
bids-standard__pybids-705
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
unclear validation error with dataset_description.json
A couple of people have posted asking why their datasets could not be read (through fmriprep or mriqc), since the error message did not indicate which file was not formatted correctly.
example error message
```
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
```
- example 1: https://neurostars.org/t/fmriprep1-2-3-jsondecode-error-expecting-value/3352
- example 2: https://neurostars.org/t/week-8-quiz-question-10/18410
</issue>
<code>
[start of bids/layout/validation.py]
1 """Functionality related to validation of BIDSLayouts and BIDS projects."""
2
3 import os
4 import json
5 import re
6 import warnings
7
8 from ..utils import listify
9 from ..exceptions import BIDSValidationError, BIDSDerivativesValidationError
10
11
12 MANDATORY_BIDS_FIELDS = {
13 "Name": {"Name": "Example dataset"},
14 "BIDSVersion": {"BIDSVersion": "1.0.2"},
15 }
16
17
18 MANDATORY_DERIVATIVES_FIELDS = {
19 **MANDATORY_BIDS_FIELDS,
20 "PipelineDescription.Name": {
21 "PipelineDescription": {"Name": "Example pipeline"}
22 },
23 }
24
25 EXAMPLE_BIDS_DESCRIPTION = {
26 k: val[k] for val in MANDATORY_BIDS_FIELDS.values() for k in val}
27
28
29 EXAMPLE_DERIVATIVES_DESCRIPTION = {
30 k: val[k] for val in MANDATORY_DERIVATIVES_FIELDS.values() for k in val}
31
32
33 DEFAULT_LOCATIONS_TO_IGNORE = ("code", "stimuli", "sourcedata", "models",
34 re.compile(r'^\.'))
35
36 def absolute_path_deprecation_warning():
37 warnings.warn("The absolute_paths argument will be removed from PyBIDS "
38 "in 0.14. You can easily access the relative path of "
39 "BIDSFile objects via the .relpath attribute (instead of "
40 ".path). Switching to this pattern is strongly encouraged, "
41 "as the current implementation of relative path handling "
42 "is known to produce query failures in certain edge cases.")
43
44
45 def indexer_arg_deprecation_warning():
46 warnings.warn("The ability to pass arguments to BIDSLayout that control "
47 "indexing is likely to be removed in future; possibly as "
48 "early as PyBIDS 0.14. This includes the `config_filename`, "
49 "`ignore`, `force_index`, and `index_metadata` arguments. "
50 "The recommended usage pattern is to initialize a new "
51 "BIDSLayoutIndexer with these arguments, and pass it to "
52 "the BIDSLayout via the `indexer` argument.")
53
54
55 def validate_root(root, validate):
56 # Validate root argument and make sure it contains mandatory info
57 try:
58 root = str(root)
59 except:
60 raise TypeError("root argument must be a string (or a type that "
61 "supports casting to string, such as "
62 "pathlib.Path) specifying the directory "
63 "containing the BIDS dataset.")
64
65 root = os.path.abspath(root)
66
67 if not os.path.exists(root):
68 raise ValueError("BIDS root does not exist: %s" % root)
69
70 target = os.path.join(root, 'dataset_description.json')
71 if not os.path.exists(target):
72 if validate:
73 raise BIDSValidationError(
74 "'dataset_description.json' is missing from project root."
75 " Every valid BIDS dataset must have this file."
76 "\nExample contents of 'dataset_description.json': \n%s" %
77 json.dumps(EXAMPLE_BIDS_DESCRIPTION)
78 )
79 else:
80 description = None
81 else:
82 with open(target, 'r', encoding='utf-8') as desc_fd:
83 description = json.load(desc_fd)
84 if validate:
85 for k in MANDATORY_BIDS_FIELDS:
86 if k not in description:
87 raise BIDSValidationError(
88 "Mandatory %r field missing from "
89 "'dataset_description.json'."
90 "\nExample: %s" % (k, MANDATORY_BIDS_FIELDS[k])
91 )
92
93 return root, description
94
95
96 def validate_derivative_paths(paths, layout=None, **kwargs):
97
98 deriv_dirs = []
99
100 # Collect all paths that contain a dataset_description.json
101 def check_for_description(bids_dir):
102 dd = os.path.join(bids_dir, 'dataset_description.json')
103 return os.path.exists(dd)
104
105 for p in paths:
106 p = os.path.abspath(str(p))
107 if os.path.exists(p):
108 if check_for_description(p):
109 deriv_dirs.append(p)
110 else:
111 subdirs = [d for d in os.listdir(p)
112 if os.path.isdir(os.path.join(p, d))]
113 for sd in subdirs:
114 sd = os.path.join(p, sd)
115 if check_for_description(sd):
116 deriv_dirs.append(sd)
117
118 if not deriv_dirs:
119 warnings.warn("Derivative indexing was requested, but no valid "
120 "datasets were found in the specified locations "
121 "({}). Note that all BIDS-Derivatives datasets must"
122 " meet all the requirements for BIDS-Raw datasets "
123 "(a common problem is to fail to include a "
124 "'dataset_description.json' file in derivatives "
125 "datasets).\n".format(paths) +
126 "Example contents of 'dataset_description.json':\n%s" %
127 json.dumps(EXAMPLE_DERIVATIVES_DESCRIPTION))
128
129 paths = {}
130
131 for deriv in deriv_dirs:
132 dd = os.path.join(deriv, 'dataset_description.json')
133 with open(dd, 'r', encoding='utf-8') as ddfd:
134 description = json.load(ddfd)
135 pipeline_name = description.get(
136 'PipelineDescription', {}).get('Name')
137 if pipeline_name is None:
138 raise BIDSDerivativesValidationError(
139 "Every valid BIDS-derivatives dataset must "
140 "have a PipelineDescription.Name field set "
141 "inside 'dataset_description.json'. "
142 "\nExample: %s" %
143 MANDATORY_DERIVATIVES_FIELDS['PipelineDescription.Name'])
144 if layout is not None and pipeline_name in layout.derivatives:
145 raise BIDSDerivativesValidationError(
146 "Pipeline name '%s' has already been added "
147 "to this BIDSLayout. Every added pipeline "
148 "must have a unique name!")
149 paths[pipeline_name] = deriv
150
151 return paths
152
153
154 def validate_indexing_args(ignore, force_index, root):
155 if ignore is None:
156 ignore = DEFAULT_LOCATIONS_TO_IGNORE
157
158 # Do after root validation to ensure os.path.join works
159 ignore = [os.path.abspath(os.path.join(root, patt))
160 if isinstance(patt, str) else patt
161 for patt in listify(ignore or [])]
162 force_index = [os.path.abspath(os.path.join(root, patt))
163 if isinstance(patt, str) else patt
164 for patt in listify(force_index or [])]
165
166 # Derivatives get special handling; they shouldn't be indexed normally
167 if force_index is not None:
168 for entry in force_index:
169 condi = (isinstance(entry, str) and
170 os.path.normpath(entry).startswith('derivatives'))
171 if condi:
172 msg = ("Do not pass 'derivatives' in the force_index "
173 "list. To index derivatives, either set "
174 "derivatives=True, or use add_derivatives().")
175 raise ValueError(msg)
176
177 return ignore, force_index
178
[end of bids/layout/validation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bids/layout/validation.py b/bids/layout/validation.py
--- a/bids/layout/validation.py
+++ b/bids/layout/validation.py
@@ -79,9 +79,21 @@
else:
description = None
else:
- with open(target, 'r', encoding='utf-8') as desc_fd:
- description = json.load(desc_fd)
+ try:
+ with open(target, 'r', encoding='utf-8') as desc_fd:
+ description = json.load(desc_fd)
+ except json.JSONDecodeError:
+ description = None
if validate:
+
+ if description is None:
+ raise BIDSValidationError(
+ "'dataset_description.json' is not a valid json file."
+ " There is likely a typo in your 'dataset_description.json'."
+ "\nExample contents of 'dataset_description.json': \n%s" %
+ json.dumps(EXAMPLE_BIDS_DESCRIPTION)
+ )
+
for k in MANDATORY_BIDS_FIELDS:
if k not in description:
raise BIDSValidationError(
|
{"golden_diff": "diff --git a/bids/layout/validation.py b/bids/layout/validation.py\n--- a/bids/layout/validation.py\n+++ b/bids/layout/validation.py\n@@ -79,9 +79,21 @@\n else:\n description = None\n else:\n- with open(target, 'r', encoding='utf-8') as desc_fd:\n- description = json.load(desc_fd)\n+ try:\n+ with open(target, 'r', encoding='utf-8') as desc_fd:\n+ description = json.load(desc_fd)\n+ except json.JSONDecodeError:\n+ description = None\n if validate:\n+\n+ if description is None:\n+ raise BIDSValidationError(\n+ \"'dataset_description.json' is not a valid json file.\"\n+ \" There is likely a typo in your 'dataset_description.json'.\"\n+ \"\\nExample contents of 'dataset_description.json': \\n%s\" %\n+ json.dumps(EXAMPLE_BIDS_DESCRIPTION)\n+ )\n+ \n for k in MANDATORY_BIDS_FIELDS:\n if k not in description:\n raise BIDSValidationError(\n", "issue": "unclear validation error with dataset_description.json\nA couple people have posted asking why their datasets could not be read (through fmriprep or mriqc), since the error message did not indicate which file was not formatted correctly.\r\n\r\nexample error message\r\n```\r\njson.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)\r\n```\r\n\r\n- example 1: https://neurostars.org/t/fmriprep1-2-3-jsondecode-error-expecting-value/3352\r\n- example 2: https://neurostars.org/t/week-8-quiz-question-10/18410\n", "before_files": [{"content": "\"\"\"Functionality related to validation of BIDSLayouts and BIDS projects.\"\"\"\n\nimport os\nimport json\nimport re\nimport warnings\n\nfrom ..utils import listify\nfrom ..exceptions import BIDSValidationError, BIDSDerivativesValidationError\n\n\nMANDATORY_BIDS_FIELDS = {\n \"Name\": {\"Name\": \"Example dataset\"},\n \"BIDSVersion\": {\"BIDSVersion\": \"1.0.2\"},\n}\n\n\nMANDATORY_DERIVATIVES_FIELDS = {\n **MANDATORY_BIDS_FIELDS,\n \"PipelineDescription.Name\": {\n \"PipelineDescription\": {\"Name\": \"Example pipeline\"}\n },\n}\n\nEXAMPLE_BIDS_DESCRIPTION = {\n k: val[k] for val in MANDATORY_BIDS_FIELDS.values() for k in val}\n\n\nEXAMPLE_DERIVATIVES_DESCRIPTION = {\n k: val[k] for val in MANDATORY_DERIVATIVES_FIELDS.values() for k in val}\n\n\nDEFAULT_LOCATIONS_TO_IGNORE = (\"code\", \"stimuli\", \"sourcedata\", \"models\",\n re.compile(r'^\\.'))\n\ndef absolute_path_deprecation_warning():\n warnings.warn(\"The absolute_paths argument will be removed from PyBIDS \"\n \"in 0.14. You can easily access the relative path of \"\n \"BIDSFile objects via the .relpath attribute (instead of \"\n \".path). Switching to this pattern is strongly encouraged, \"\n \"as the current implementation of relative path handling \"\n \"is known to produce query failures in certain edge cases.\")\n\n\ndef indexer_arg_deprecation_warning():\n warnings.warn(\"The ability to pass arguments to BIDSLayout that control \"\n \"indexing is likely to be removed in future; possibly as \"\n \"early as PyBIDS 0.14. This includes the `config_filename`, \"\n \"`ignore`, `force_index`, and `index_metadata` arguments. 
\"\n \"The recommended usage pattern is to initialize a new \"\n \"BIDSLayoutIndexer with these arguments, and pass it to \"\n \"the BIDSLayout via the `indexer` argument.\")\n\n\ndef validate_root(root, validate):\n # Validate root argument and make sure it contains mandatory info\n try:\n root = str(root)\n except:\n raise TypeError(\"root argument must be a string (or a type that \"\n \"supports casting to string, such as \"\n \"pathlib.Path) specifying the directory \"\n \"containing the BIDS dataset.\")\n\n root = os.path.abspath(root)\n\n if not os.path.exists(root):\n raise ValueError(\"BIDS root does not exist: %s\" % root)\n\n target = os.path.join(root, 'dataset_description.json')\n if not os.path.exists(target):\n if validate:\n raise BIDSValidationError(\n \"'dataset_description.json' is missing from project root.\"\n \" Every valid BIDS dataset must have this file.\"\n \"\\nExample contents of 'dataset_description.json': \\n%s\" %\n json.dumps(EXAMPLE_BIDS_DESCRIPTION)\n )\n else:\n description = None\n else:\n with open(target, 'r', encoding='utf-8') as desc_fd:\n description = json.load(desc_fd)\n if validate:\n for k in MANDATORY_BIDS_FIELDS:\n if k not in description:\n raise BIDSValidationError(\n \"Mandatory %r field missing from \"\n \"'dataset_description.json'.\"\n \"\\nExample: %s\" % (k, MANDATORY_BIDS_FIELDS[k])\n )\n\n return root, description\n\n\ndef validate_derivative_paths(paths, layout=None, **kwargs):\n\n deriv_dirs = []\n\n # Collect all paths that contain a dataset_description.json\n def check_for_description(bids_dir):\n dd = os.path.join(bids_dir, 'dataset_description.json')\n return os.path.exists(dd)\n\n for p in paths:\n p = os.path.abspath(str(p))\n if os.path.exists(p):\n if check_for_description(p):\n deriv_dirs.append(p)\n else:\n subdirs = [d for d in os.listdir(p)\n if os.path.isdir(os.path.join(p, d))]\n for sd in subdirs:\n sd = os.path.join(p, sd)\n if check_for_description(sd):\n deriv_dirs.append(sd)\n\n if not deriv_dirs:\n warnings.warn(\"Derivative indexing was requested, but no valid \"\n \"datasets were found in the specified locations \"\n \"({}). Note that all BIDS-Derivatives datasets must\"\n \" meet all the requirements for BIDS-Raw datasets \"\n \"(a common problem is to fail to include a \"\n \"'dataset_description.json' file in derivatives \"\n \"datasets).\\n\".format(paths) +\n \"Example contents of 'dataset_description.json':\\n%s\" %\n json.dumps(EXAMPLE_DERIVATIVES_DESCRIPTION))\n\n paths = {}\n\n for deriv in deriv_dirs:\n dd = os.path.join(deriv, 'dataset_description.json')\n with open(dd, 'r', encoding='utf-8') as ddfd:\n description = json.load(ddfd)\n pipeline_name = description.get(\n 'PipelineDescription', {}).get('Name')\n if pipeline_name is None:\n raise BIDSDerivativesValidationError(\n \"Every valid BIDS-derivatives dataset must \"\n \"have a PipelineDescription.Name field set \"\n \"inside 'dataset_description.json'. \"\n \"\\nExample: %s\" %\n MANDATORY_DERIVATIVES_FIELDS['PipelineDescription.Name'])\n if layout is not None and pipeline_name in layout.derivatives:\n raise BIDSDerivativesValidationError(\n \"Pipeline name '%s' has already been added \"\n \"to this BIDSLayout. 
Every added pipeline \"\n \"must have a unique name!\")\n paths[pipeline_name] = deriv\n\n return paths\n\n\ndef validate_indexing_args(ignore, force_index, root):\n if ignore is None:\n ignore = DEFAULT_LOCATIONS_TO_IGNORE\n\n # Do after root validation to ensure os.path.join works\n ignore = [os.path.abspath(os.path.join(root, patt))\n if isinstance(patt, str) else patt\n for patt in listify(ignore or [])]\n force_index = [os.path.abspath(os.path.join(root, patt))\n if isinstance(patt, str) else patt\n for patt in listify(force_index or [])]\n\n # Derivatives get special handling; they shouldn't be indexed normally\n if force_index is not None:\n for entry in force_index:\n condi = (isinstance(entry, str) and\n os.path.normpath(entry).startswith('derivatives'))\n if condi:\n msg = (\"Do not pass 'derivatives' in the force_index \"\n \"list. To index derivatives, either set \"\n \"derivatives=True, or use add_derivatives().\")\n raise ValueError(msg)\n\n return ignore, force_index\n", "path": "bids/layout/validation.py"}]}
| 2,568 | 235 |
gh_patches_debug_8234
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-framework-2914
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error "_set_blas_variables: blas_lib not set" in EasyBuild 3.9.1
I am getting the following error when I am trying to build LAMMPS with EasyBuild 3.9.1.
For an extended dry run, the following is included in the logs:
```
WARNING: ignoring error '_set_blas_variables: BLAS_LIB not set'
```
Using EasyBuild 3.8.1 the build succeeds. The eb recipe is this https://github.com/eth-cscs/production/blob/master/easybuild/easyconfigs/l/LAMMPS/LAMMPS-22Aug2018-CrayGNU-18.08.eb,
</issue>
<code>
[start of easybuild/toolchains/linalg/libsci.py]
1 ##
2 # Copyright 2014-2019 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 Support for Cray's LibSci library, which provides BLAS/LAPACK support.
27 cfr. https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/
28
29 :author: Petar Forai (IMP/IMBA, Austria)
30 :author: Kenneth Hoste (Ghent University)
31 """
32 import os
33
34 from easybuild.tools.build_log import EasyBuildError
35 from easybuild.tools.toolchain.linalg import LinAlg
36
37
38 CRAY_LIBSCI_MODULE_NAME = 'cray-libsci'
39 TC_CONSTANT_CRAY_LIBSCI = 'CrayLibSci'
40
41
42 class LibSci(LinAlg):
43 """Support for Cray's LibSci library, which provides BLAS/LAPACK support."""
44 # BLAS/LAPACK support
45 # via cray-libsci module, which gets loaded via the PrgEnv module
46 # see https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/
47 BLAS_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]
48
49 # no need to specify libraries, compiler driver takes care of linking the right libraries
50 # FIXME: need to revisit this, on numpy we ended up with a serial BLAS through the wrapper.
51 BLAS_LIB = []
52 BLAS_LIB_MT = []
53 BLAS_FAMILY = TC_CONSTANT_CRAY_LIBSCI
54
55 LAPACK_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]
56 LAPACK_IS_BLAS = True
57 LAPACK_FAMILY = TC_CONSTANT_CRAY_LIBSCI
58
59 BLACS_MODULE_NAME = []
60 SCALAPACK_MODULE_NAME = []
61
62 def _get_software_root(self, name):
63 """Get install prefix for specified software name; special treatment for Cray modules."""
64 if name == 'cray-libsci':
65 # Cray-provided LibSci module
66 env_var = 'CRAY_LIBSCI_PREFIX_DIR'
67 root = os.getenv(env_var, None)
68 if root is None:
69 raise EasyBuildError("Failed to determine install prefix for %s via $%s", name, env_var)
70 else:
71 self.log.debug("Obtained install prefix for %s via $%s: %s", name, env_var, root)
72 else:
73 root = super(LibSci, self)._get_software_root(name)
74
75 return root
76
77 def _set_blacs_variables(self):
78 """Skip setting BLACS related variables"""
79 pass
80
81 def _set_scalapack_variables(self):
82 """Skip setting ScaLAPACK related variables"""
83 pass
84
85 def definition(self):
86 """
87 Filter BLAS module from toolchain definition.
88 The cray-libsci module is loaded indirectly (and versionless) via the PrgEnv module,
89 and thus is not a direct toolchain component.
90 """
91 tc_def = super(LibSci, self).definition()
92 tc_def['BLAS'] = []
93 tc_def['LAPACK'] = []
94 return tc_def
95
[end of easybuild/toolchains/linalg/libsci.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/easybuild/toolchains/linalg/libsci.py b/easybuild/toolchains/linalg/libsci.py
--- a/easybuild/toolchains/linalg/libsci.py
+++ b/easybuild/toolchains/linalg/libsci.py
@@ -48,8 +48,8 @@
# no need to specify libraries, compiler driver takes care of linking the right libraries
# FIXME: need to revisit this, on numpy we ended up with a serial BLAS through the wrapper.
- BLAS_LIB = []
- BLAS_LIB_MT = []
+ BLAS_LIB = ['']
+ BLAS_LIB_MT = ['']
BLAS_FAMILY = TC_CONSTANT_CRAY_LIBSCI
LAPACK_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]
|
{"golden_diff": "diff --git a/easybuild/toolchains/linalg/libsci.py b/easybuild/toolchains/linalg/libsci.py\n--- a/easybuild/toolchains/linalg/libsci.py\n+++ b/easybuild/toolchains/linalg/libsci.py\n@@ -48,8 +48,8 @@\n \n # no need to specify libraries, compiler driver takes care of linking the right libraries\n # FIXME: need to revisit this, on numpy we ended up with a serial BLAS through the wrapper.\n- BLAS_LIB = []\n- BLAS_LIB_MT = []\n+ BLAS_LIB = ['']\n+ BLAS_LIB_MT = ['']\n BLAS_FAMILY = TC_CONSTANT_CRAY_LIBSCI\n \n LAPACK_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]\n", "issue": "Error \"_set_blas_variables: blas_lib not set\" in EasyBuild 3.9.1\nI am getting the following error when I am trying to build LAMMPS with EasyBuild 3.9.1. \r\nFor an extended dry run, the following is included in the logs:\r\n```\r\nWARNING: ignoring error '_set_blas_variables: BLAS_LIB not set'\r\n```\r\n\r\nUsing EasyBuild 3.8.1 the build succeeds. The eb recipe is this https://github.com/eth-cscs/production/blob/master/easybuild/easyconfigs/l/LAMMPS/LAMMPS-22Aug2018-CrayGNU-18.08.eb,\n", "before_files": [{"content": "##\n# Copyright 2014-2019 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nSupport for Cray's LibSci library, which provides BLAS/LAPACK support.\ncfr. 
https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/\n\n:author: Petar Forai (IMP/IMBA, Austria)\n:author: Kenneth Hoste (Ghent University)\n\"\"\"\nimport os\n\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.toolchain.linalg import LinAlg\n\n\nCRAY_LIBSCI_MODULE_NAME = 'cray-libsci'\nTC_CONSTANT_CRAY_LIBSCI = 'CrayLibSci'\n\n\nclass LibSci(LinAlg):\n \"\"\"Support for Cray's LibSci library, which provides BLAS/LAPACK support.\"\"\"\n # BLAS/LAPACK support\n # via cray-libsci module, which gets loaded via the PrgEnv module\n # see https://www.nersc.gov/users/software/programming-libraries/math-libraries/libsci/\n BLAS_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]\n\n # no need to specify libraries, compiler driver takes care of linking the right libraries\n # FIXME: need to revisit this, on numpy we ended up with a serial BLAS through the wrapper.\n BLAS_LIB = []\n BLAS_LIB_MT = []\n BLAS_FAMILY = TC_CONSTANT_CRAY_LIBSCI\n\n LAPACK_MODULE_NAME = [CRAY_LIBSCI_MODULE_NAME]\n LAPACK_IS_BLAS = True\n LAPACK_FAMILY = TC_CONSTANT_CRAY_LIBSCI\n\n BLACS_MODULE_NAME = []\n SCALAPACK_MODULE_NAME = []\n\n def _get_software_root(self, name):\n \"\"\"Get install prefix for specified software name; special treatment for Cray modules.\"\"\"\n if name == 'cray-libsci':\n # Cray-provided LibSci module\n env_var = 'CRAY_LIBSCI_PREFIX_DIR'\n root = os.getenv(env_var, None)\n if root is None:\n raise EasyBuildError(\"Failed to determine install prefix for %s via $%s\", name, env_var)\n else:\n self.log.debug(\"Obtained install prefix for %s via $%s: %s\", name, env_var, root)\n else:\n root = super(LibSci, self)._get_software_root(name)\n\n return root\n\n def _set_blacs_variables(self):\n \"\"\"Skip setting BLACS related variables\"\"\"\n pass\n\n def _set_scalapack_variables(self):\n \"\"\"Skip setting ScaLAPACK related variables\"\"\"\n pass\n\n def definition(self):\n \"\"\"\n Filter BLAS module from toolchain definition.\n The cray-libsci module is loaded indirectly (and versionless) via the PrgEnv module,\n and thus is not a direct toolchain component.\n \"\"\"\n tc_def = super(LibSci, self).definition()\n tc_def['BLAS'] = []\n tc_def['LAPACK'] = []\n return tc_def\n", "path": "easybuild/toolchains/linalg/libsci.py"}]}
| 1,770 | 164 |
gh_patches_debug_13411
|
rasdani/github-patches
|
git_diff
|
beetbox__beets-1473
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
python-mpd is old and crusty, beets should start using python-mpd2
Assuming that at some point in time `beets` needs to run on Python 3 (which it does eventually, because Python 2 is being phased out), there will be an issue concerning `python-mpd`; `python-mpd` is not compatible with Python 3, nor is it really even maintained upstream anymore. The [last update was in December of 2010](https://pypi.python.org/pypi/python-mpd/), and its website is down as well.
[`python-mpd2`](https://github.com/Mic92/python-mpd2), however, is maintained and sees fairly active development. It is a fork of `python-mpd`, and has [a document explaining porting](https://github.com/Mic92/python-mpd2/blob/1c7e8f246465110ccb2d64df829c6dbdcdc74c9e/doc/topics/porting.rst) from `python-mpd` on the repository. Aside from the stickers API, which I'm not even sure `beets` uses, it looks fairly easy to replace.
I think that it would be better to use python-mpd2 for these reasons. Any thoughts?
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # This file is part of beets.
4 # Copyright 2015, Adrian Sampson.
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining
7 # a copy of this software and associated documentation files (the
8 # "Software"), to deal in the Software without restriction, including
9 # without limitation the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the Software, and to
11 # permit persons to whom the Software is furnished to do so, subject to
12 # the following conditions:
13 #
14 # The above copyright notice and this permission notice shall be
15 # included in all copies or substantial portions of the Software.
16
17 from __future__ import division, absolute_import, print_function
18
19 import os
20 import sys
21 import subprocess
22 import shutil
23 from setuptools import setup
24
25
26 def _read(fn):
27 path = os.path.join(os.path.dirname(__file__), fn)
28 return open(path).read()
29
30
31 def build_manpages():
32 # Go into the docs directory and build the manpage.
33 docdir = os.path.join(os.path.dirname(__file__), 'docs')
34 curdir = os.getcwd()
35 os.chdir(docdir)
36 try:
37 subprocess.check_call(['make', 'man'])
38 except OSError:
39 print("Could not build manpages (make man failed)!", file=sys.stderr)
40 return
41 finally:
42 os.chdir(curdir)
43
44 # Copy resulting manpages.
45 mandir = os.path.join(os.path.dirname(__file__), 'man')
46 if os.path.exists(mandir):
47 shutil.rmtree(mandir)
48 shutil.copytree(os.path.join(docdir, '_build', 'man'), mandir)
49
50
51 # Build manpages if we're making a source distribution tarball.
52 if 'sdist' in sys.argv:
53 build_manpages()
54
55
56 setup(
57 name='beets',
58 version='1.3.14',
59 description='music tagger and library organizer',
60 author='Adrian Sampson',
61 author_email='[email protected]',
62 url='http://beets.radbox.org/',
63 license='MIT',
64 platforms='ALL',
65 long_description=_read('README.rst'),
66 test_suite='test.testall.suite',
67 include_package_data=True, # Install plugin resources.
68
69 packages=[
70 'beets',
71 'beets.ui',
72 'beets.autotag',
73 'beets.util',
74 'beets.dbcore',
75 'beetsplug',
76 'beetsplug.bpd',
77 'beetsplug.web',
78 'beetsplug.lastgenre',
79 'beetsplug.metasync',
80 ],
81 entry_points={
82 'console_scripts': [
83 'beet = beets.ui:main',
84 ],
85 },
86
87 install_requires=[
88 'enum34>=1.0.4',
89 'mutagen>=1.27',
90 'munkres',
91 'unidecode',
92 'musicbrainzngs>=0.4',
93 'pyyaml',
94 'jellyfish',
95 ] + (['colorama'] if (sys.platform == 'win32') else []) +
96 (['ordereddict'] if sys.version_info < (2, 7, 0) else []),
97
98 tests_require=[
99 'beautifulsoup4',
100 'flask',
101 'mock',
102 'pyechonest',
103 'pylast',
104 'rarfile',
105 'responses',
106 'pyxdg',
107 'pathlib',
108 'python-mpd',
109 ],
110
111 # Plugin (optional) dependencies:
112 extras_require={
113 'fetchart': ['requests'],
114 'chroma': ['pyacoustid'],
115 'discogs': ['discogs-client>=2.1.0'],
116 'echonest': ['pyechonest'],
117 'lastgenre': ['pylast'],
118 'mpdstats': ['python-mpd'],
119 'web': ['flask', 'flask-cors'],
120 'import': ['rarfile'],
121 'thumbnails': ['pathlib', 'pyxdg'],
122 'metasync': ['dbus-python'],
123 },
124 # Non-Python/non-PyPI plugin dependencies:
125 # convert: ffmpeg
126 # bpd: pygst
127
128 classifiers=[
129 'Topic :: Multimedia :: Sound/Audio',
130 'Topic :: Multimedia :: Sound/Audio :: Players :: MP3',
131 'License :: OSI Approved :: MIT License',
132 'Environment :: Console',
133 'Environment :: Web Environment',
134 'Programming Language :: Python :: 2',
135 'Programming Language :: Python :: 2.7',
136 ],
137 )
138
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -105,7 +105,7 @@
'responses',
'pyxdg',
'pathlib',
- 'python-mpd',
+ 'python-mpd2',
],
# Plugin (optional) dependencies:
@@ -115,7 +115,7 @@
'discogs': ['discogs-client>=2.1.0'],
'echonest': ['pyechonest'],
'lastgenre': ['pylast'],
- 'mpdstats': ['python-mpd'],
+ 'mpdstats': ['python-mpd2'],
'web': ['flask', 'flask-cors'],
'import': ['rarfile'],
'thumbnails': ['pathlib', 'pyxdg'],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -105,7 +105,7 @@\n 'responses',\n 'pyxdg',\n 'pathlib',\n- 'python-mpd',\n+ 'python-mpd2',\n ],\n \n # Plugin (optional) dependencies:\n@@ -115,7 +115,7 @@\n 'discogs': ['discogs-client>=2.1.0'],\n 'echonest': ['pyechonest'],\n 'lastgenre': ['pylast'],\n- 'mpdstats': ['python-mpd'],\n+ 'mpdstats': ['python-mpd2'],\n 'web': ['flask', 'flask-cors'],\n 'import': ['rarfile'],\n 'thumbnails': ['pathlib', 'pyxdg'],\n", "issue": "python-mpd is old and crusty, beets should start using python-mpd2\nAssuming that at some point in time `beets` needs to run on Python 3 (which it does eventually, because Python 2 is being phased out), there will be an issue concerning `python-mpd`; `python-mpd` is not compatible with Python 3, nor is it really even maintained upstream anymore. The [last update was in December of 2010](https://pypi.python.org/pypi/python-mpd/), and it's website is down as well.\n\n[`python-mpd2`](https://github.com/Mic92/python-mpd2), however, is maintained and sees fairly active development. It is a fork of `python-mpd`, and has [a document explaining porting](https://github.com/Mic92/python-mpd2/blob/1c7e8f246465110ccb2d64df829c6dbdcdc74c9e/doc/topics/porting.rst) from `python-mpd` on the repository. Aside from the stickers API, which I'm not even sure `beets` uses, it looks fairly easy to replace.\n\nI think that it would be better to use python-mpd2 for these reasons. Any thoughts?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\nfrom __future__ import division, absolute_import, print_function\n\nimport os\nimport sys\nimport subprocess\nimport shutil\nfrom setuptools import setup\n\n\ndef _read(fn):\n path = os.path.join(os.path.dirname(__file__), fn)\n return open(path).read()\n\n\ndef build_manpages():\n # Go into the docs directory and build the manpage.\n docdir = os.path.join(os.path.dirname(__file__), 'docs')\n curdir = os.getcwd()\n os.chdir(docdir)\n try:\n subprocess.check_call(['make', 'man'])\n except OSError:\n print(\"Could not build manpages (make man failed)!\", file=sys.stderr)\n return\n finally:\n os.chdir(curdir)\n\n # Copy resulting manpages.\n mandir = os.path.join(os.path.dirname(__file__), 'man')\n if os.path.exists(mandir):\n shutil.rmtree(mandir)\n shutil.copytree(os.path.join(docdir, '_build', 'man'), mandir)\n\n\n# Build manpages if we're making a source distribution tarball.\nif 'sdist' in sys.argv:\n build_manpages()\n\n\nsetup(\n name='beets',\n version='1.3.14',\n description='music tagger and library organizer',\n author='Adrian Sampson',\n author_email='[email protected]',\n url='http://beets.radbox.org/',\n license='MIT',\n platforms='ALL',\n long_description=_read('README.rst'),\n test_suite='test.testall.suite',\n include_package_data=True, # Install plugin resources.\n\n packages=[\n 'beets',\n 
'beets.ui',\n 'beets.autotag',\n 'beets.util',\n 'beets.dbcore',\n 'beetsplug',\n 'beetsplug.bpd',\n 'beetsplug.web',\n 'beetsplug.lastgenre',\n 'beetsplug.metasync',\n ],\n entry_points={\n 'console_scripts': [\n 'beet = beets.ui:main',\n ],\n },\n\n install_requires=[\n 'enum34>=1.0.4',\n 'mutagen>=1.27',\n 'munkres',\n 'unidecode',\n 'musicbrainzngs>=0.4',\n 'pyyaml',\n 'jellyfish',\n ] + (['colorama'] if (sys.platform == 'win32') else []) +\n (['ordereddict'] if sys.version_info < (2, 7, 0) else []),\n\n tests_require=[\n 'beautifulsoup4',\n 'flask',\n 'mock',\n 'pyechonest',\n 'pylast',\n 'rarfile',\n 'responses',\n 'pyxdg',\n 'pathlib',\n 'python-mpd',\n ],\n\n # Plugin (optional) dependencies:\n extras_require={\n 'fetchart': ['requests'],\n 'chroma': ['pyacoustid'],\n 'discogs': ['discogs-client>=2.1.0'],\n 'echonest': ['pyechonest'],\n 'lastgenre': ['pylast'],\n 'mpdstats': ['python-mpd'],\n 'web': ['flask', 'flask-cors'],\n 'import': ['rarfile'],\n 'thumbnails': ['pathlib', 'pyxdg'],\n 'metasync': ['dbus-python'],\n },\n # Non-Python/non-PyPI plugin dependencies:\n # convert: ffmpeg\n # bpd: pygst\n\n classifiers=[\n 'Topic :: Multimedia :: Sound/Audio',\n 'Topic :: Multimedia :: Sound/Audio :: Players :: MP3',\n 'License :: OSI Approved :: MIT License',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n ],\n)\n", "path": "setup.py"}]}
| 2,116 | 188 |
gh_patches_debug_42506
|
rasdani/github-patches
|
git_diff
|
Flexget__Flexget-618
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Thread should be created after daemonizing
There is a long-existing bug related to `Task Queue has died unexpectedly` in daemon mode. #1947 #2656 The cause of this bug is really simple. **Do not** create threads before `os.fork()` (daemonize). Here is a small code snippet that demonstrates the problem:
```python
from threading import Thread
import os
import time
import sys
def func():
for _ in range(5):
print('running in thread')
time.sleep(0.5)
t = Thread(target=func)
pid = os.fork()
if pid > 0:
sys.exit(0)
t.start()
print("pid:", os.getpid())
print('is thread alive after fork:', t.is_alive())
t.join()
```
The running result is as follows:
```
# lym @ mig186 in ~ [11:46:22]
$ python main.py
running in thread
pid: 122253
is thread alive after fork: False
# lym @ mig186 in ~ [11:46:23]
$ running in thread
running in thread
running in thread
running in thread
```
The thread resource created before `os.fork()` is only alive in the parent process (which is exited to daemonize). In the child process, `t.is_alive()` always returns False. Here is the right way to do it:
```python
from threading import Thread
import os
import time
import sys
def func():
for _ in range(5):
print('running in thread')
time.sleep(0.5)
pid = os.fork()
if pid > 0:
sys.exit(0)
t = Thread(target=func)
t.start()
print("pid:", os.getpid())
print('is thread alive after fork:', t.is_alive())
t.join()
```
By the way, the process of daemonizing usually involves a double fork, in which case we should create all threads after the second fork.
And here is the cause of the bug in `flexget/manager.py`: the task queue is initialized at line 231 in the `def initialize(self)` function, and the daemonizing happens afterwards at line 476 via `self.daemonize()`, which involves double forking:
```python
def daemonize(self) -> None:
"""Daemonizes the current process. Returns the new pid"""
if sys.platform.startswith('win'):
logger.error('Cannot daemonize on windows')
return
if threading.active_count() != 1:
logger.critical(
'There are {!r} active threads. Daemonizing now may cause strange failures.',
threading.enumerate(),
)
logger.info('Daemonizing...')
try:
pid = os.fork()
if pid > 0:
# Don't run the exit handlers on the parent
atexit._exithandlers = []
# exit first parent
sys.exit(0)
except OSError as e:
sys.stderr.write(f'fork #1 failed: {e.errno} ({e.strerror})\n')
sys.exit(1)
# decouple from parent environment
os.chdir('/')
os.setsid()
os.umask(0)
# do second fork
try:
pid = os.fork()
if pid > 0:
# Don't run the exit handlers on the parent
atexit._exithandlers = []
# exit from second parent
sys.exit(0)
except OSError as e:
sys.stderr.write(f'fork #2 failed: {e.errno} ({e.strerror})\n')
sys.exit(1)
```
Therefore, after daemonizing, when we try to execute a command, this function always reports that the task queue is not alive (line 415):
```python
def execute_command(self, options: argparse.Namespace) -> None:
"""
Handles the 'execute' CLI command.
If there is already a task queue running in this process, adds the execution to the queue.
If FlexGet is being invoked with this command, starts up a task queue and runs the execution.
Fires events:
* manager.execute.started
* manager.execute.completed
:param options: argparse options
"""
fire_event('manager.execute.started', self, options)
if self.task_queue.is_alive() or self.is_daemon:
if not self.task_queue.is_alive():
logger.error(
'Task queue has died unexpectedly. Restarting it. Please open an issue on Github and include'
' any previous error logs.'
)
```
And it creates a new task queue and leaves the old one hanging, which causes other strange problems, such as the flexget daemon not being able to be stopped.
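For what it's worth, a minimal POSIX-only sketch of the ordering argued for here (names are hypothetical, not FlexGet's actual API): daemonize with a double fork first, create the worker thread afterwards.
```python
import os
import sys
import threading
import time

def daemonize():
    """Minimal POSIX double fork; only the daemonized grandchild returns."""
    if os.fork() > 0:
        sys.exit(0)        # first parent exits
    os.setsid()            # detach from the controlling terminal
    if os.fork() > 0:
        sys.exit(0)        # second parent exits

def worker():
    for _ in range(5):
        print('running in thread', os.getpid())
        time.sleep(0.5)

daemonize()                          # fork twice first...
t = threading.Thread(target=worker)  # ...then create the thread
t.start()
t.join()
```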
</issue>
<code>
[start of flexget/plugins/input/filesystem.py]
1 from __future__ import unicode_literals, division, absolute_import
2 import logging
3 import re
4 import os
5
6 from path import Path
7
8 from flexget import plugin
9 from flexget.config_schema import one_or_more
10 from flexget.event import event
11 from flexget.entry import Entry
12
13 log = logging.getLogger('filesystem')
14
15
16 class Filesystem(object):
17 """
18 Uses local path content as an input. Can use recursion if configured.
19 Recursion is False by default. Can be configured to true or get integer that will specify max depth in relation to
20 base folder.
21 All files/dir/symlinks are retrieved by default. Can be changed by using the 'retrieve' property.
22
23 Example 1:: Single path
24
25 filesystem: /storage/movies/
26
27 Example 2:: List of paths
28
29 filesystem:
30 - /storage/movies/
31 - /storage/tv/
32
33 Example 3:: Object with list of paths
34
35 filesystem:
36 path:
37 - /storage/movies/
38 - /storage/tv/
39 mask: '*.mkv'
40
41 Example 4::
42
43 filesystem:
44 path:
45 - /storage/movies/
46 - /storage/tv/
47 recursive: 4 # 4 levels deep from each base folder
48 retrieve: files # Only files will be retrieved
49
50 Example 5::
51
52 filesystem:
53 path:
54 - /storage/movies/
55 - /storage/tv/
56 recursive: yes # No limit to depth, all sub dirs will be accessed
57 retrieve: # Only files and dirs will be retrieved
58 - files
59 - dirs
60
61 """
62 retrieval_options = ['files', 'dirs', 'symlinks']
63 paths = one_or_more({'type': 'string', 'format': 'path'}, unique_items=True)
64
65 schema = {
66 'oneOf': [
67 paths,
68 {'type': 'object',
69 'properties': {
70 'path': paths,
71 'mask': {'type': 'string'},
72 'regexp': {'type': 'string', 'format': 'regex'},
73 'recursive': {'oneOf': [{'type': 'integer', 'minimum': 2}, {'type': 'boolean'}]},
74 'retrieve': one_or_more({'type': 'string', 'enum': retrieval_options}, unique_items=True)
75 },
76 'required': ['path'],
77 'additionalProperties': False}]
78 }
79
80 def prepare_config(self, config):
81 from fnmatch import translate
82 config = config
83
84 # Converts config to a dict with a list of paths
85 if not isinstance(config, dict):
86 config = {'path': config}
87 if not isinstance(config['path'], list):
88 config['path'] = [config['path']]
89
90 config.setdefault('recursive', False)
91 # If mask was specified, turn it in to a regexp
92 if config.get('mask'):
93 config['regexp'] = translate(config['mask'])
94 # If no mask or regexp specified, accept all files
95 config.setdefault('regexp', '.')
96 # Sets the default retrieval option to files
97 config.setdefault('retrieve', self.retrieval_options)
98
99 return config
100
101 def create_entry(self, filepath, test_mode):
102 """
103 Creates a single entry using a filepath and a type (file/dir)
104 """
105 entry = Entry()
106 entry['location'] = filepath
107 entry['url'] = 'file://{}'.format(filepath)
108 entry['filename'] = filepath.name
109 if filepath.isfile():
110 entry['title'] = filepath.namebase
111 else:
112 entry['title'] = filepath.name
113 try:
114 entry['timestamp'] = os.path.getmtime(filepath)
115 except Exception as e:
116 log.warning('Error setting timestamp for %s: %s' % (filepath, e))
117 entry['timestamp'] = None
118 if entry.isvalid():
119 if test_mode:
120 log.info("Test mode. Entry includes:")
121 log.info(" Title: %s" % entry["title"])
122 log.info(" URL: %s" % entry["url"])
123 log.info(" Filename: %s" % entry["filename"])
124 log.info(" Location: %s" % entry["location"])
125 log.info(" Timestamp: %s" % entry["timestamp"])
126 return entry
127 else:
128 log.error('Non valid entry created: {}'.format(entry))
129 return
130
131 def get_max_depth(self, recursion, base_depth):
132 if recursion is False:
133 return base_depth + 1
134 elif recursion is True:
135 return float('inf')
136 else:
137 return base_depth + recursion
138
139 def get_folder_objects(self, folder, recursion):
140 if recursion is False:
141 return folder.listdir()
142 else:
143 return folder.walk(errors='ignore')
144
145 def get_entries_from_path(self, path_list, match, recursion, test_mode, get_files, get_dirs, get_symlinks):
146 entries = []
147
148 for folder in path_list:
149 log.verbose('Scanning folder {}. Recursion is set to {}.'.format(folder, recursion))
150 folder = Path(folder).expanduser()
151 log.debug('Scanning %s' % folder)
152 base_depth = len(folder.splitall())
153 max_depth = self.get_max_depth(recursion, base_depth)
154 folder_objects = self.get_folder_objects(folder, recursion)
155 for path_object in folder_objects:
156 log.debug('Checking if {} qualifies to be added as an entry.'.format(path_object))
157 try:
158 path_object.exists()
159 except UnicodeError:
160 log.error('File %s not decodable with filesystem encoding: {}'.format(path_object))
161 continue
162 entry = None
163 object_depth = len(path_object.splitall())
164 if object_depth <= max_depth:
165 if match(path_object):
166 if (path_object.isdir() and get_dirs) or (
167 path_object.islink() and get_symlinks) or (
168 path_object.isfile() and get_files):
169 entry = self.create_entry(path_object, test_mode)
170 else:
171 log.debug("Path object's {} type doesn't match requested object types.".format(path_object))
172 if entry and entry not in entries:
173 entries.append(entry)
174
175 return entries
176
177 def on_task_input(self, task, config):
178 config = self.prepare_config(config)
179
180 path_list = config['path']
181 test_mode = task.options.test
182 match = re.compile(config['regexp'], re.IGNORECASE).match
183 recursive = config['recursive']
184 get_files = 'files' in config['retrieve']
185 get_dirs = 'dirs' in config['retrieve']
186 get_symlinks = 'symlinks' in config['retrieve']
187
188 log.info('Starting to scan folders.')
189 return self.get_entries_from_path(path_list, match, recursive, test_mode, get_files, get_dirs, get_symlinks)
190
191
192 @event('plugin.register')
193 def register_plugin():
194 plugin.register(Filesystem, 'filesystem', api_ver=2)
195
[end of flexget/plugins/input/filesystem.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flexget/plugins/input/filesystem.py b/flexget/plugins/input/filesystem.py
--- a/flexget/plugins/input/filesystem.py
+++ b/flexget/plugins/input/filesystem.py
@@ -1,7 +1,7 @@
from __future__ import unicode_literals, division, absolute_import
import logging
import re
-import os
+from datetime import datetime
from path import Path
@@ -104,14 +104,14 @@
"""
entry = Entry()
entry['location'] = filepath
- entry['url'] = 'file://{}'.format(filepath)
+ entry['url'] = 'file://%s' % filepath
entry['filename'] = filepath.name
if filepath.isfile():
entry['title'] = filepath.namebase
else:
entry['title'] = filepath.name
try:
- entry['timestamp'] = os.path.getmtime(filepath)
+ entry['timestamp'] = datetime.fromtimestamp(filepath.getmtime())
except Exception as e:
log.warning('Error setting timestamp for %s: %s' % (filepath, e))
entry['timestamp'] = None
@@ -125,7 +125,7 @@
log.info(" Timestamp: %s" % entry["timestamp"])
return entry
else:
- log.error('Non valid entry created: {}'.format(entry))
+ log.error('Non valid entry created: %s '% entry)
return
def get_max_depth(self, recursion, base_depth):
@@ -146,18 +146,18 @@
entries = []
for folder in path_list:
- log.verbose('Scanning folder {}. Recursion is set to {}.'.format(folder, recursion))
+ log.verbose('Scanning folder %s. Recursion is set to %s.' % (folder, recursion))
folder = Path(folder).expanduser()
log.debug('Scanning %s' % folder)
base_depth = len(folder.splitall())
max_depth = self.get_max_depth(recursion, base_depth)
folder_objects = self.get_folder_objects(folder, recursion)
for path_object in folder_objects:
- log.debug('Checking if {} qualifies to be added as an entry.'.format(path_object))
+ log.debug('Checking if %s qualifies to be added as an entry.' % path_object)
try:
path_object.exists()
except UnicodeError:
- log.error('File %s not decodable with filesystem encoding: {}'.format(path_object))
+ log.error('File %s not decodable with filesystem encoding: %s' % path_object)
continue
entry = None
object_depth = len(path_object.splitall())
@@ -168,7 +168,7 @@
path_object.isfile() and get_files):
entry = self.create_entry(path_object, test_mode)
else:
- log.debug("Path object's {} type doesn't match requested object types.".format(path_object))
+ log.debug("Path object's %s type doesn't match requested object types." % path_object)
if entry and entry not in entries:
entries.append(entry)
|
{"golden_diff": "diff --git a/flexget/plugins/input/filesystem.py b/flexget/plugins/input/filesystem.py\n--- a/flexget/plugins/input/filesystem.py\n+++ b/flexget/plugins/input/filesystem.py\n@@ -1,7 +1,7 @@\n from __future__ import unicode_literals, division, absolute_import\n import logging\n import re\n-import os\n+from datetime import datetime\n \n from path import Path\n \n@@ -104,14 +104,14 @@\n \"\"\"\n entry = Entry()\n entry['location'] = filepath\n- entry['url'] = 'file://{}'.format(filepath)\n+ entry['url'] = 'file://%s' % filepath\n entry['filename'] = filepath.name\n if filepath.isfile():\n entry['title'] = filepath.namebase\n else:\n entry['title'] = filepath.name\n try:\n- entry['timestamp'] = os.path.getmtime(filepath)\n+ entry['timestamp'] = datetime.fromtimestamp(filepath.getmtime())\n except Exception as e:\n log.warning('Error setting timestamp for %s: %s' % (filepath, e))\n entry['timestamp'] = None\n@@ -125,7 +125,7 @@\n log.info(\" Timestamp: %s\" % entry[\"timestamp\"])\n return entry\n else:\n- log.error('Non valid entry created: {}'.format(entry))\n+ log.error('Non valid entry created: %s '% entry)\n return\n \n def get_max_depth(self, recursion, base_depth):\n@@ -146,18 +146,18 @@\n entries = []\n \n for folder in path_list:\n- log.verbose('Scanning folder {}. Recursion is set to {}.'.format(folder, recursion))\n+ log.verbose('Scanning folder %s. Recursion is set to %s.' % (folder, recursion))\n folder = Path(folder).expanduser()\n log.debug('Scanning %s' % folder)\n base_depth = len(folder.splitall())\n max_depth = self.get_max_depth(recursion, base_depth)\n folder_objects = self.get_folder_objects(folder, recursion)\n for path_object in folder_objects:\n- log.debug('Checking if {} qualifies to be added as an entry.'.format(path_object))\n+ log.debug('Checking if %s qualifies to be added as an entry.' % path_object)\n try:\n path_object.exists()\n except UnicodeError:\n- log.error('File %s not decodable with filesystem encoding: {}'.format(path_object))\n+ log.error('File %s not decodable with filesystem encoding: %s' % path_object)\n continue\n entry = None\n object_depth = len(path_object.splitall())\n@@ -168,7 +168,7 @@\n path_object.isfile() and get_files):\n entry = self.create_entry(path_object, test_mode)\n else:\n- log.debug(\"Path object's {} type doesn't match requested object types.\".format(path_object))\n+ log.debug(\"Path object's %s type doesn't match requested object types.\" % path_object)\n if entry and entry not in entries:\n entries.append(entry)\n", "issue": "Thread should be created after deamonizing\nThere is a long existing bug related to `Task Queue has died unexpectedly` in daemon mode. #1947 #2656 The cause of this bug is really simple. **Do not** create thread before `os.fork()` (demonize). 
Here is a small code snippet that demonstrates the problem:\r\n```python\r\nfrom threading import Thread\r\nimport os\r\nimport time\r\nimport sys\r\n\r\n\r\ndef func():\r\n for _ in range(5):\r\n print('running in thread')\r\n time.sleep(0.5)\r\n\r\nt = Thread(target=func)\r\n\r\npid = os.fork()\r\nif pid > 0:\r\n sys.exit(0)\r\n\r\nt.start()\r\nprint(\"pid:\", os.getpid())\r\nprint('is thread alive after fork:', t.is_alive())\r\nt.join()\r\n```\r\nThe running result is as follows:\r\n```\r\n# lym @ mig186 in ~ [11:46:22]\r\n$ python main.py\r\nrunning in thread\r\npid: 122253\r\nis thread alive after fork: False\r\n# lym @ mig186 in ~ [11:46:23]\r\n$ running in thread\r\nrunning in thread\r\nrunning in thread\r\nrunning in thread\r\n```\r\nThe thread resource created before `os.fork()` is only alive in the parent process (which is exited to daemonize). In the child process, the `t.is_alive()` always return False. Here is the right way to do:\r\n```python\r\nfrom threading import Thread\r\nimport os\r\nimport time\r\nimport sys\r\n\r\n\r\ndef func():\r\n for _ in range(5):\r\n print('running in thread')\r\n time.sleep(0.5)\r\n\r\n\r\npid = os.fork()\r\nif pid > 0:\r\n sys.exit(0)\r\n\r\nt = Thread(target=func)\r\nt.start()\r\nprint(\"pid:\", os.getpid())\r\nprint('is thread alive after fork:', t.is_alive())\r\nt.join()\r\n```\r\nBy the way, the process of daemonize usually involves double fork, in which case we should create all the threading after the second fork.\r\n\r\nAnd here is the cause of bug in `flexget/manager.py`, the task queue is initialized in line 231 in the `def initialize(self)` function, and the damonize happens afterwards in line 476 `self.daemonize()`, which involves double forking:\r\n```python\r\n def daemonize(self) -> None:\r\n \"\"\"Daemonizes the current process. Returns the new pid\"\"\"\r\n if sys.platform.startswith('win'):\r\n logger.error('Cannot daemonize on windows')\r\n return\r\n if threading.active_count() != 1:\r\n logger.critical(\r\n 'There are {!r} active threads. 
Daemonizing now may cause strange failures.',\r\n threading.enumerate(),\r\n )\r\n\r\n logger.info('Daemonizing...')\r\n\r\n try:\r\n pid = os.fork()\r\n if pid > 0:\r\n # Don't run the exit handlers on the parent\r\n atexit._exithandlers = []\r\n # exit first parent\r\n sys.exit(0)\r\n except OSError as e:\r\n sys.stderr.write(f'fork #1 failed: {e.errno} ({e.strerror})\\n')\r\n sys.exit(1)\r\n\r\n # decouple from parent environment\r\n os.chdir('/')\r\n os.setsid()\r\n os.umask(0)\r\n\r\n # do second fork\r\n try:\r\n pid = os.fork()\r\n if pid > 0:\r\n # Don't run the exit handlers on the parent\r\n atexit._exithandlers = []\r\n # exit from second parent\r\n sys.exit(0)\r\n except OSError as e:\r\n sys.stderr.write(f'fork #2 failed: {e.errno} ({e.strerror})\\n')\r\n sys.exit(1)\r\n```\r\n\r\nTherefore after daemonizing, when we try to execute the command, this function always shows the not alive error (line 415):\r\n```python\r\n def execute_command(self, options: argparse.Namespace) -> None:\r\n \"\"\"\r\n Handles the 'execute' CLI command.\r\n If there is already a task queue running in this process, adds the execution to the queue.\r\n If FlexGet is being invoked with this command, starts up a task queue and runs the execution.\r\n Fires events:\r\n * manager.execute.started\r\n * manager.execute.completed\r\n :param options: argparse options\r\n \"\"\"\r\n fire_event('manager.execute.started', self, options)\r\n if self.task_queue.is_alive() or self.is_daemon:\r\n if not self.task_queue.is_alive():\r\n logger.error(\r\n 'Task queue has died unexpectedly. Restarting it. Please open an issue on Github and include'\r\n ' any previous error logs.'\r\n )\r\n```\r\nAnd it creates a new task queue and leaving the old one hanging, which causes other strange problems like the flexget daemon can not be stopped.\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nimport logging\nimport re\nimport os\n\nfrom path import Path\n\nfrom flexget import plugin\nfrom flexget.config_schema import one_or_more\nfrom flexget.event import event\nfrom flexget.entry import Entry\n\nlog = logging.getLogger('filesystem')\n\n\nclass Filesystem(object):\n \"\"\"\n Uses local path content as an input. Can use recursion if configured.\n Recursion is False by default. Can be configured to true or get integer that will specify max depth in relation to\n base folder.\n All files/dir/symlinks are retrieved by default. 
Can be changed by using the 'retrieve' property.\n\n Example 1:: Single path\n\n filesystem: /storage/movies/\n\n Example 2:: List of paths\n\n filesystem:\n - /storage/movies/\n - /storage/tv/\n\n Example 3:: Object with list of paths\n\n filesystem:\n path:\n - /storage/movies/\n - /storage/tv/\n mask: '*.mkv'\n\n Example 4::\n\n filesystem:\n path:\n - /storage/movies/\n - /storage/tv/\n recursive: 4 # 4 levels deep from each base folder\n retrieve: files # Only files will be retrieved\n\n Example 5::\n\n filesystem:\n path:\n - /storage/movies/\n - /storage/tv/\n recursive: yes # No limit to depth, all sub dirs will be accessed\n retrieve: # Only files and dirs will be retrieved\n - files\n - dirs\n\n \"\"\"\n retrieval_options = ['files', 'dirs', 'symlinks']\n paths = one_or_more({'type': 'string', 'format': 'path'}, unique_items=True)\n\n schema = {\n 'oneOf': [\n paths,\n {'type': 'object',\n 'properties': {\n 'path': paths,\n 'mask': {'type': 'string'},\n 'regexp': {'type': 'string', 'format': 'regex'},\n 'recursive': {'oneOf': [{'type': 'integer', 'minimum': 2}, {'type': 'boolean'}]},\n 'retrieve': one_or_more({'type': 'string', 'enum': retrieval_options}, unique_items=True)\n },\n 'required': ['path'],\n 'additionalProperties': False}]\n }\n\n def prepare_config(self, config):\n from fnmatch import translate\n config = config\n\n # Converts config to a dict with a list of paths\n if not isinstance(config, dict):\n config = {'path': config}\n if not isinstance(config['path'], list):\n config['path'] = [config['path']]\n\n config.setdefault('recursive', False)\n # If mask was specified, turn it in to a regexp\n if config.get('mask'):\n config['regexp'] = translate(config['mask'])\n # If no mask or regexp specified, accept all files\n config.setdefault('regexp', '.')\n # Sets the default retrieval option to files\n config.setdefault('retrieve', self.retrieval_options)\n\n return config\n\n def create_entry(self, filepath, test_mode):\n \"\"\"\n Creates a single entry using a filepath and a type (file/dir)\n \"\"\"\n entry = Entry()\n entry['location'] = filepath\n entry['url'] = 'file://{}'.format(filepath)\n entry['filename'] = filepath.name\n if filepath.isfile():\n entry['title'] = filepath.namebase\n else:\n entry['title'] = filepath.name\n try:\n entry['timestamp'] = os.path.getmtime(filepath)\n except Exception as e:\n log.warning('Error setting timestamp for %s: %s' % (filepath, e))\n entry['timestamp'] = None\n if entry.isvalid():\n if test_mode:\n log.info(\"Test mode. Entry includes:\")\n log.info(\" Title: %s\" % entry[\"title\"])\n log.info(\" URL: %s\" % entry[\"url\"])\n log.info(\" Filename: %s\" % entry[\"filename\"])\n log.info(\" Location: %s\" % entry[\"location\"])\n log.info(\" Timestamp: %s\" % entry[\"timestamp\"])\n return entry\n else:\n log.error('Non valid entry created: {}'.format(entry))\n return\n\n def get_max_depth(self, recursion, base_depth):\n if recursion is False:\n return base_depth + 1\n elif recursion is True:\n return float('inf')\n else:\n return base_depth + recursion\n\n def get_folder_objects(self, folder, recursion):\n if recursion is False:\n return folder.listdir()\n else:\n return folder.walk(errors='ignore')\n\n def get_entries_from_path(self, path_list, match, recursion, test_mode, get_files, get_dirs, get_symlinks):\n entries = []\n\n for folder in path_list:\n log.verbose('Scanning folder {}. 
Recursion is set to {}.'.format(folder, recursion))\n folder = Path(folder).expanduser()\n log.debug('Scanning %s' % folder)\n base_depth = len(folder.splitall())\n max_depth = self.get_max_depth(recursion, base_depth)\n folder_objects = self.get_folder_objects(folder, recursion)\n for path_object in folder_objects:\n log.debug('Checking if {} qualifies to be added as an entry.'.format(path_object))\n try:\n path_object.exists()\n except UnicodeError:\n log.error('File %s not decodable with filesystem encoding: {}'.format(path_object))\n continue\n entry = None\n object_depth = len(path_object.splitall())\n if object_depth <= max_depth:\n if match(path_object):\n if (path_object.isdir() and get_dirs) or (\n path_object.islink() and get_symlinks) or (\n path_object.isfile() and get_files):\n entry = self.create_entry(path_object, test_mode)\n else:\n log.debug(\"Path object's {} type doesn't match requested object types.\".format(path_object))\n if entry and entry not in entries:\n entries.append(entry)\n\n return entries\n\n def on_task_input(self, task, config):\n config = self.prepare_config(config)\n\n path_list = config['path']\n test_mode = task.options.test\n match = re.compile(config['regexp'], re.IGNORECASE).match\n recursive = config['recursive']\n get_files = 'files' in config['retrieve']\n get_dirs = 'dirs' in config['retrieve']\n get_symlinks = 'symlinks' in config['retrieve']\n\n log.info('Starting to scan folders.')\n return self.get_entries_from_path(path_list, match, recursive, test_mode, get_files, get_dirs, get_symlinks)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(Filesystem, 'filesystem', api_ver=2)\n", "path": "flexget/plugins/input/filesystem.py"}]}
| 3,527 | 674 |
gh_patches_debug_3904
|
rasdani/github-patches
|
git_diff
|
buildbot__buildbot-5041
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Custom templates still not working
Hi, this is the original issue with broken custom templates #4980
But it doesn't work even after the fix (2.4.1).
The web part of Buildbot is far too complicated for me. But I was able to find lines like this in scripts.js?_1568233606304
```
, function(e, t) {
e.exports = window.T["undefined/properties.html"] || '<table class="table table-hover...
}
```
And I presume there is something wrong if there is "**undefined**/properties.html".
</issue>
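As an aside, the patch later in this record replaces `json.dumps(html)` with the plain `html` string when filling `custom_templates`. The snippet below is only a standalone illustration of what double JSON encoding does to such a value; it assumes nothing about Buildbot's JavaScript side and does not claim to explain the `undefined/` key prefix.
```python
import json

html = '<table class="table table-hover">...</table>'

stored_fixed = html               # what the fixed code keeps in custom_templates
stored_buggy = json.dumps(html)   # what the old code stored: '"<table ...>"'

# If the stored value is serialised again on its way to the browser,
# the pre-dumped variant arrives wrapped in an extra layer of quoting.
print(json.dumps({"views/properties.html": stored_fixed}))
print(json.dumps({"views/properties.html": stored_buggy}))
```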
<code>
[start of master/buildbot/www/config.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16
17 import json
18 import os
19 import posixpath
20
21 import jinja2
22
23 from twisted.internet import defer
24 from twisted.python import log
25 from twisted.web.error import Error
26
27 from buildbot.interfaces import IConfigured
28 from buildbot.util import unicode2bytes
29 from buildbot.www import resource
30
31
32 class IndexResource(resource.Resource):
33 # enable reconfigResource calls
34 needsReconfig = True
35
36 def __init__(self, master, staticdir):
37 super().__init__(master)
38 loader = jinja2.FileSystemLoader(staticdir)
39 self.jinja = jinja2.Environment(
40 loader=loader, undefined=jinja2.StrictUndefined)
41
42 def reconfigResource(self, new_config):
43 self.config = new_config.www
44
45 versions = self.getEnvironmentVersions()
46 vs = self.config.get('versions')
47 if isinstance(vs, list):
48 versions += vs
49 self.config['versions'] = versions
50
51 self.custom_templates = {}
52 template_dir = self.config.pop('custom_templates_dir', None)
53 if template_dir is not None:
54 template_dir = os.path.join(self.master.basedir, template_dir)
55 self.custom_templates = self.parseCustomTemplateDir(template_dir)
56
57 def render_GET(self, request):
58 return self.asyncRenderHelper(request, self.renderIndex)
59
60 def parseCustomTemplateDir(self, template_dir):
61 res = {}
62 allowed_ext = [".html"]
63 try:
64 import pyjade
65 allowed_ext.append(".jade")
66 except ImportError: # pragma: no cover
67 log.msg("pyjade not installed. Ignoring .jade files from %s" %
68 (template_dir,))
69 pyjade = None
70 for root, dirs, files in os.walk(template_dir):
71 if root == template_dir:
72 template_name = posixpath.join("views", "%s.html")
73 else:
74 # template_name is a url, so we really want '/'
75 # root is a os.path, though
76 template_name = posixpath.join(
77 os.path.basename(root), "views", "%s.html")
78 for f in files:
79 fn = os.path.join(root, f)
80 basename, ext = os.path.splitext(f)
81 if ext not in allowed_ext:
82 continue
83 if ext == ".html":
84 with open(fn) as f:
85 html = f.read().strip()
86 elif ext == ".jade":
87 with open(fn) as f:
88 jade = f.read()
89 parser = pyjade.parser.Parser(jade)
90 block = parser.parse()
91 compiler = pyjade.ext.html.Compiler(
92 block, pretty=False)
93 html = compiler.compile()
94 res[template_name % (basename,)] = json.dumps(html)
95
96 return res
97
98 @staticmethod
99 def getEnvironmentVersions():
100 import sys
101 import twisted
102 from buildbot import version as bbversion
103
104 pyversion = '.'.join(map(str, sys.version_info[:3]))
105
106 tx_version_info = (twisted.version.major,
107 twisted.version.minor,
108 twisted.version.micro)
109 txversion = '.'.join(map(str, tx_version_info))
110
111 return [
112 ('Python', pyversion),
113 ('Buildbot', bbversion),
114 ('Twisted', txversion),
115 ]
116
117 @defer.inlineCallbacks
118 def renderIndex(self, request):
119 config = {}
120 request.setHeader(b"content-type", b'text/html')
121 request.setHeader(b"Cache-Control", b"public;max-age=0")
122
123 try:
124 yield self.config['auth'].maybeAutoLogin(request)
125 except Error as e:
126 config["on_load_warning"] = e.message
127
128 user_info = self.master.www.getUserInfos(request)
129 config.update({"user": user_info})
130
131 config.update(self.config)
132 config['buildbotURL'] = self.master.config.buildbotURL
133 config['title'] = self.master.config.title
134 config['titleURL'] = self.master.config.titleURL
135 config['multiMaster'] = self.master.config.multiMaster
136
137 # delete things that may contain secrets
138 if 'change_hook_dialects' in config:
139 del config['change_hook_dialects']
140
141 def toJson(obj):
142 try:
143 obj = IConfigured(obj).getConfigDict()
144 except TypeError:
145 # this happens for old style classes (not deriving objects)
146 pass
147 if isinstance(obj, dict):
148 return obj
149 # don't leak object memory address
150 obj = obj.__class__.__module__ + "." + obj.__class__.__name__
151 return repr(obj) + " not yet IConfigured"
152
153 tpl = self.jinja.get_template('index.html')
154 # we use Jinja in order to render some server side dynamic stuff
155 # For example, custom_templates javascript is generated by the
156 # layout.jade jinja template
157 tpl = tpl.render(configjson=json.dumps(config, default=toJson),
158 custom_templates=self.custom_templates,
159 config=self.config)
160 return unicode2bytes(tpl, encoding='ascii')
161
[end of master/buildbot/www/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/master/buildbot/www/config.py b/master/buildbot/www/config.py
--- a/master/buildbot/www/config.py
+++ b/master/buildbot/www/config.py
@@ -91,7 +91,7 @@
compiler = pyjade.ext.html.Compiler(
block, pretty=False)
html = compiler.compile()
- res[template_name % (basename,)] = json.dumps(html)
+ res[template_name % (basename,)] = html
return res
|
{"golden_diff": "diff --git a/master/buildbot/www/config.py b/master/buildbot/www/config.py\n--- a/master/buildbot/www/config.py\n+++ b/master/buildbot/www/config.py\n@@ -91,7 +91,7 @@\n compiler = pyjade.ext.html.Compiler(\n block, pretty=False)\n html = compiler.compile()\n- res[template_name % (basename,)] = json.dumps(html)\n+ res[template_name % (basename,)] = html\n \n return res\n", "issue": "Custom templates still not working\nHi, this is the original issue with broken custom templates #4980\r\n\r\nBut it doesn't work even after the fix (2.4.1).\r\nThe web part of Buildbot is far to complicated for me. But I was able to find lines like this in scripts.js?_1568233606304\r\n```\r\n , function(e, t) {\r\n e.exports = window.T[\"undefined/properties.html\"] || '<table class=\"table table-hover...\r\n }\r\n```\r\nAnd I presume there is something wrong if there is \"**undefined**/properties.html\".\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\nimport json\nimport os\nimport posixpath\n\nimport jinja2\n\nfrom twisted.internet import defer\nfrom twisted.python import log\nfrom twisted.web.error import Error\n\nfrom buildbot.interfaces import IConfigured\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www import resource\n\n\nclass IndexResource(resource.Resource):\n # enable reconfigResource calls\n needsReconfig = True\n\n def __init__(self, master, staticdir):\n super().__init__(master)\n loader = jinja2.FileSystemLoader(staticdir)\n self.jinja = jinja2.Environment(\n loader=loader, undefined=jinja2.StrictUndefined)\n\n def reconfigResource(self, new_config):\n self.config = new_config.www\n\n versions = self.getEnvironmentVersions()\n vs = self.config.get('versions')\n if isinstance(vs, list):\n versions += vs\n self.config['versions'] = versions\n\n self.custom_templates = {}\n template_dir = self.config.pop('custom_templates_dir', None)\n if template_dir is not None:\n template_dir = os.path.join(self.master.basedir, template_dir)\n self.custom_templates = self.parseCustomTemplateDir(template_dir)\n\n def render_GET(self, request):\n return self.asyncRenderHelper(request, self.renderIndex)\n\n def parseCustomTemplateDir(self, template_dir):\n res = {}\n allowed_ext = [\".html\"]\n try:\n import pyjade\n allowed_ext.append(\".jade\")\n except ImportError: # pragma: no cover\n log.msg(\"pyjade not installed. 
Ignoring .jade files from %s\" %\n (template_dir,))\n pyjade = None\n for root, dirs, files in os.walk(template_dir):\n if root == template_dir:\n template_name = posixpath.join(\"views\", \"%s.html\")\n else:\n # template_name is a url, so we really want '/'\n # root is a os.path, though\n template_name = posixpath.join(\n os.path.basename(root), \"views\", \"%s.html\")\n for f in files:\n fn = os.path.join(root, f)\n basename, ext = os.path.splitext(f)\n if ext not in allowed_ext:\n continue\n if ext == \".html\":\n with open(fn) as f:\n html = f.read().strip()\n elif ext == \".jade\":\n with open(fn) as f:\n jade = f.read()\n parser = pyjade.parser.Parser(jade)\n block = parser.parse()\n compiler = pyjade.ext.html.Compiler(\n block, pretty=False)\n html = compiler.compile()\n res[template_name % (basename,)] = json.dumps(html)\n\n return res\n\n @staticmethod\n def getEnvironmentVersions():\n import sys\n import twisted\n from buildbot import version as bbversion\n\n pyversion = '.'.join(map(str, sys.version_info[:3]))\n\n tx_version_info = (twisted.version.major,\n twisted.version.minor,\n twisted.version.micro)\n txversion = '.'.join(map(str, tx_version_info))\n\n return [\n ('Python', pyversion),\n ('Buildbot', bbversion),\n ('Twisted', txversion),\n ]\n\n @defer.inlineCallbacks\n def renderIndex(self, request):\n config = {}\n request.setHeader(b\"content-type\", b'text/html')\n request.setHeader(b\"Cache-Control\", b\"public;max-age=0\")\n\n try:\n yield self.config['auth'].maybeAutoLogin(request)\n except Error as e:\n config[\"on_load_warning\"] = e.message\n\n user_info = self.master.www.getUserInfos(request)\n config.update({\"user\": user_info})\n\n config.update(self.config)\n config['buildbotURL'] = self.master.config.buildbotURL\n config['title'] = self.master.config.title\n config['titleURL'] = self.master.config.titleURL\n config['multiMaster'] = self.master.config.multiMaster\n\n # delete things that may contain secrets\n if 'change_hook_dialects' in config:\n del config['change_hook_dialects']\n\n def toJson(obj):\n try:\n obj = IConfigured(obj).getConfigDict()\n except TypeError:\n # this happens for old style classes (not deriving objects)\n pass\n if isinstance(obj, dict):\n return obj\n # don't leak object memory address\n obj = obj.__class__.__module__ + \".\" + obj.__class__.__name__\n return repr(obj) + \" not yet IConfigured\"\n\n tpl = self.jinja.get_template('index.html')\n # we use Jinja in order to render some server side dynamic stuff\n # For example, custom_templates javascript is generated by the\n # layout.jade jinja template\n tpl = tpl.render(configjson=json.dumps(config, default=toJson),\n custom_templates=self.custom_templates,\n config=self.config)\n return unicode2bytes(tpl, encoding='ascii')\n", "path": "master/buildbot/www/config.py"}]}
| 2,283 | 104 |
gh_patches_debug_3095
|
rasdani/github-patches
|
git_diff
|
buildbot__buildbot-3859
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Authentication problem Buildbot 0.9.14
I created the class below
```
class MyAuth(CustomAuth):
def check_credentials(user, password):
if user == 'snow' and password == 'white':
return True
else:
return False
```
and set it as my auth class.
```
c['www']['auth']=MyAuth()
```
But it throws the following exception.
```
web.Server Traceback (most recent call last):
exceptions.AttributeError: 'str' object has no attribute 'providedBy'
/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/server.py:195 in process
194 self._encoder = encoder
195 self.render(resrc)
196 except:
/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/server.py:255 in render
254 try:
255 body = resrc.render(self)
256 except UnsupportedMethod as e:
/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/_auth/wrapper.py:138 in render
137 """
138 return self._authorizedResource(request).render(request)
139
/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/_auth/wrapper.py:116 in _authorizedResource
115 if not authheader:
116 return util.DeferredResource(self._login(Anonymous()))
117
/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/_auth/wrapper.py:162 in _login
161 """
162 d = self._portal.login(credentials, None, IResource)
163 d.addCallbacks(self._loginSucceeded, self._loginFailed)
/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/cred/portal.py:118 in login
117 for i in self.checkers:
118 if i.providedBy(credentials):
119 return maybeDeferred(self.checkers[i].requestAvatarId, credentials
exceptions.AttributeError: 'str' object has no attribute 'providedBy'
```
</issue>
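As an aside, the traceback ends inside `twisted.cred.portal.Portal.login`, which expects each checker's `credentialInterfaces` to be a collection of interface objects. A possible user-side stopgap, assuming an affected Buildbot release is still installed, is to override that attribute with the list form on the subclass, which is the same change the patch below makes upstream. The `'snow'`/`'white'` credentials are the placeholders from the report, and the class is a sketch rather than tested production code.
```python
from twisted.cred.credentials import IUsernamePassword

from buildbot.www.auth import CustomAuth


class MyAuth(CustomAuth):
    # List form, matching the upstream fix; handing Portal a bare interface is
    # consistent with the "'str' object has no attribute 'providedBy'" error above.
    credentialInterfaces = [IUsernamePassword]

    def check_credentials(self, user, password):
        # Called as a bound method, so `self` is required here; basic-auth
        # credentials may arrive as bytes, hence the byte-string comparison.
        return user == b'snow' and password == b'white'
```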
<code>
[start of master/buildbot/www/auth.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import print_function
18
19 import re
20 from abc import ABCMeta
21 from abc import abstractmethod
22
23 from twisted.cred.checkers import FilePasswordDB
24 from twisted.cred.checkers import ICredentialsChecker
25 from twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse
26 from twisted.cred.credentials import IUsernamePassword
27 from twisted.cred.error import UnauthorizedLogin
28 from twisted.cred.portal import IRealm
29 from twisted.cred.portal import Portal
30 from twisted.internet import defer
31 from twisted.web.error import Error
32 from twisted.web.guard import BasicCredentialFactory
33 from twisted.web.guard import DigestCredentialFactory
34 from twisted.web.guard import HTTPAuthSessionWrapper
35 from twisted.web.resource import IResource
36 from zope.interface import implementer
37
38 from buildbot.util import bytes2unicode
39 from buildbot.util import config
40 from buildbot.util import unicode2bytes
41 from buildbot.www import resource
42
43
44 class AuthRootResource(resource.Resource):
45
46 def getChild(self, path, request):
47 # return dynamically generated resources
48 if path == b'login':
49 return self.master.www.auth.getLoginResource()
50 elif path == b'logout':
51 return self.master.www.auth.getLogoutResource()
52 return resource.Resource.getChild(self, path, request)
53
54
55 class AuthBase(config.ConfiguredMixin):
56
57 def __init__(self, userInfoProvider=None):
58 self.userInfoProvider = userInfoProvider
59
60 def reconfigAuth(self, master, new_config):
61 self.master = master
62
63 def maybeAutoLogin(self, request):
64 return defer.succeed(None)
65
66 def getLoginResource(self):
67 raise Error(501, b"not implemented")
68
69 def getLogoutResource(self):
70 return LogoutResource(self.master)
71
72 @defer.inlineCallbacks
73 def updateUserInfo(self, request):
74 session = request.getSession()
75 if self.userInfoProvider is not None:
76 infos = yield self.userInfoProvider.getUserInfo(session.user_info['username'])
77 session.user_info.update(infos)
78 session.updateSession(request)
79
80 def getConfigDict(self):
81 return {'name': type(self).__name__}
82
83
84 class UserInfoProviderBase(config.ConfiguredMixin):
85 name = "noinfo"
86
87 def getUserInfo(self, username):
88 return defer.succeed({'email': username})
89
90
91 class LoginResource(resource.Resource):
92
93 def render_GET(self, request):
94 return self.asyncRenderHelper(request, self.renderLogin)
95
96 @defer.inlineCallbacks
97 def renderLogin(self, request):
98 raise NotImplementedError
99
100
101 class NoAuth(AuthBase):
102 pass
103
104
105 class RemoteUserAuth(AuthBase):
106 header = b"REMOTE_USER"
107 headerRegex = re.compile(br"(?P<username>[^ @]+)@(?P<realm>[^ @]+)")
108
109 def __init__(self, header=None, headerRegex=None, **kwargs):
110 AuthBase.__init__(self, **kwargs)
111 if self.userInfoProvider is None:
112 self.userInfoProvider = UserInfoProviderBase()
113 if header is not None:
114 self.header = header
115 if headerRegex is not None:
116 self.headerRegex = re.compile(headerRegex)
117
118 @defer.inlineCallbacks
119 def maybeAutoLogin(self, request):
120 header = request.getHeader(self.header)
121 if header is None:
122 raise Error(403, b"missing http header " + self.header + b". Check your reverse proxy config!")
123 res = self.headerRegex.match(header)
124 if res is None:
125 raise Error(
126 403, b'http header does not match regex! "' + header + b'" not matching ' + self.headerRegex.pattern)
127 session = request.getSession()
128 if session.user_info != dict(res.groupdict()):
129 session.user_info = dict(res.groupdict())
130 yield self.updateUserInfo(request)
131
132
133 @implementer(IRealm)
134 class AuthRealm(object):
135
136 def __init__(self, master, auth):
137 self.auth = auth
138 self.master = master
139
140 def requestAvatar(self, avatarId, mind, *interfaces):
141 if IResource in interfaces:
142 return (IResource,
143 PreAuthenticatedLoginResource(self.master, avatarId),
144 lambda: None)
145 raise NotImplementedError()
146
147
148 class TwistedICredAuthBase(AuthBase):
149
150 def __init__(self, credentialFactories, checkers, **kwargs):
151 AuthBase.__init__(self, **kwargs)
152 if self.userInfoProvider is None:
153 self.userInfoProvider = UserInfoProviderBase()
154 self.credentialFactories = credentialFactories
155 self.checkers = checkers
156
157 def getLoginResource(self):
158 return HTTPAuthSessionWrapper(
159 Portal(AuthRealm(self.master, self), self.checkers),
160 self.credentialFactories)
161
162
163 class HTPasswdAuth(TwistedICredAuthBase):
164
165 def __init__(self, passwdFile, **kwargs):
166 TwistedICredAuthBase.__init__(
167 self,
168 [DigestCredentialFactory(b"md5", b"buildbot"),
169 BasicCredentialFactory(b"buildbot")],
170 [FilePasswordDB(passwdFile)],
171 **kwargs)
172
173
174 class UserPasswordAuth(TwistedICredAuthBase):
175
176 def __init__(self, users, **kwargs):
177 if isinstance(users, dict):
178 users = {user: unicode2bytes(pw) for user, pw in users.items()}
179 elif isinstance(users, list):
180 users = [(user, unicode2bytes(pw)) for user, pw in users]
181 TwistedICredAuthBase.__init__(
182 self,
183 [DigestCredentialFactory(b"md5", b"buildbot"),
184 BasicCredentialFactory(b"buildbot")],
185 [InMemoryUsernamePasswordDatabaseDontUse(**dict(users))],
186 **kwargs)
187
188
189 @implementer(ICredentialsChecker)
190 class CustomAuth(TwistedICredAuthBase):
191 __metaclass__ = ABCMeta
192 credentialInterfaces = IUsernamePassword
193
194 def __init__(self, **kwargs):
195 TwistedICredAuthBase.__init__(
196 self,
197 [BasicCredentialFactory(b"buildbot")],
198 [self],
199 **kwargs)
200
201 def requestAvatarId(self, cred):
202 if self.check_credentials(cred.username, cred.password):
203 return defer.succeed(cred.username)
204 return defer.fail(UnauthorizedLogin())
205
206 @abstractmethod
207 def check_credentials(username, password):
208 return False
209
210
211 def _redirect(master, request):
212 url = request.args.get(b"redirect", [b"/"])[0]
213 url = bytes2unicode(url)
214 return resource.Redirect(master.config.buildbotURL + "#" + url)
215
216
217 class PreAuthenticatedLoginResource(LoginResource):
218 # a LoginResource which is already authenticated via a
219 # HTTPAuthSessionWrapper
220
221 def __init__(self, master, username):
222 LoginResource.__init__(self, master)
223 self.username = username
224
225 @defer.inlineCallbacks
226 def renderLogin(self, request):
227 session = request.getSession()
228 session.user_info = dict(username=bytes2unicode(self.username))
229 yield self.master.www.auth.updateUserInfo(request)
230 raise _redirect(self.master, request)
231
232
233 class LogoutResource(resource.Resource):
234
235 def render_GET(self, request):
236 session = request.getSession()
237 session.expire()
238 session.updateSession(request)
239 request.redirect(_redirect(self.master, request).url)
240 return b''
241
[end of master/buildbot/www/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/master/buildbot/www/auth.py b/master/buildbot/www/auth.py
--- a/master/buildbot/www/auth.py
+++ b/master/buildbot/www/auth.py
@@ -189,7 +189,7 @@
@implementer(ICredentialsChecker)
class CustomAuth(TwistedICredAuthBase):
__metaclass__ = ABCMeta
- credentialInterfaces = IUsernamePassword
+ credentialInterfaces = [IUsernamePassword]
def __init__(self, **kwargs):
TwistedICredAuthBase.__init__(
|
{"golden_diff": "diff --git a/master/buildbot/www/auth.py b/master/buildbot/www/auth.py\n--- a/master/buildbot/www/auth.py\n+++ b/master/buildbot/www/auth.py\n@@ -189,7 +189,7 @@\n @implementer(ICredentialsChecker)\n class CustomAuth(TwistedICredAuthBase):\n __metaclass__ = ABCMeta\n- credentialInterfaces = IUsernamePassword\n+ credentialInterfaces = [IUsernamePassword]\n \n def __init__(self, **kwargs):\n TwistedICredAuthBase.__init__(\n", "issue": "Authentication problem Buildbot 0.9.14\nI created the class below\r\n```\r\nclass MyAuth(CustomAuth):\r\n def check_credentials(user, password):\r\n if user == 'snow' and password == 'white':\r\n return True\r\n else:\r\n return False\r\n```\r\nand set it as my auth class.\r\n```\r\nc['www']['auth']=MyAuth()\r\n```\r\nBut it throws following exception.\r\n```\r\nweb.Server Traceback (most recent call last):\r\nexceptions.AttributeError: 'str' object has no attribute 'providedBy'\r\n/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/server.py:195 in process\r\n194 self._encoder = encoder\r\n195 self.render(resrc)\r\n196 except:\r\n/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/server.py:255 in render\r\n254 try:\r\n255 body = resrc.render(self)\r\n256 except UnsupportedMethod as e:\r\n/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/_auth/wrapper.py:138 in render\r\n137 \"\"\"\r\n138 return self._authorizedResource(request).render(request)\r\n139\r\n/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/_auth/wrapper.py:116 in _authorizedResource\r\n115 if not authheader:\r\n116 return util.DeferredResource(self._login(Anonymous()))\r\n117\r\n/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/web/_auth/wrapper.py:162 in _login\r\n161 \"\"\"\r\n162 d = self._portal.login(credentials, None, IResource)\r\n163 d.addCallbacks(self._loginSucceeded, self._loginFailed)\r\n/home/buildbot/virtualenv.buildbot/local/lib/python2.7/site-packages/twisted/cred/portal.py:118 in login\r\n117 for i in self.checkers:\r\n118 if i.providedBy(credentials):\r\n119 return maybeDeferred(self.checkers[i].requestAvatarId, credentials\r\nexceptions.AttributeError: 'str' object has no attribute 'providedBy'\r\n```\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport re\nfrom abc import ABCMeta\nfrom abc import abstractmethod\n\nfrom twisted.cred.checkers import FilePasswordDB\nfrom twisted.cred.checkers import ICredentialsChecker\nfrom twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse\nfrom twisted.cred.credentials import IUsernamePassword\nfrom twisted.cred.error import UnauthorizedLogin\nfrom twisted.cred.portal import IRealm\nfrom twisted.cred.portal import Portal\nfrom twisted.internet import defer\nfrom twisted.web.error import Error\nfrom twisted.web.guard import BasicCredentialFactory\nfrom twisted.web.guard import DigestCredentialFactory\nfrom twisted.web.guard import HTTPAuthSessionWrapper\nfrom twisted.web.resource import IResource\nfrom zope.interface import implementer\n\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import config\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www import resource\n\n\nclass AuthRootResource(resource.Resource):\n\n def getChild(self, path, request):\n # return dynamically generated resources\n if path == b'login':\n return self.master.www.auth.getLoginResource()\n elif path == b'logout':\n return self.master.www.auth.getLogoutResource()\n return resource.Resource.getChild(self, path, request)\n\n\nclass AuthBase(config.ConfiguredMixin):\n\n def __init__(self, userInfoProvider=None):\n self.userInfoProvider = userInfoProvider\n\n def reconfigAuth(self, master, new_config):\n self.master = master\n\n def maybeAutoLogin(self, request):\n return defer.succeed(None)\n\n def getLoginResource(self):\n raise Error(501, b\"not implemented\")\n\n def getLogoutResource(self):\n return LogoutResource(self.master)\n\n @defer.inlineCallbacks\n def updateUserInfo(self, request):\n session = request.getSession()\n if self.userInfoProvider is not None:\n infos = yield self.userInfoProvider.getUserInfo(session.user_info['username'])\n session.user_info.update(infos)\n session.updateSession(request)\n\n def getConfigDict(self):\n return {'name': type(self).__name__}\n\n\nclass UserInfoProviderBase(config.ConfiguredMixin):\n name = \"noinfo\"\n\n def getUserInfo(self, username):\n return defer.succeed({'email': username})\n\n\nclass LoginResource(resource.Resource):\n\n def render_GET(self, request):\n return self.asyncRenderHelper(request, self.renderLogin)\n\n @defer.inlineCallbacks\n def renderLogin(self, request):\n raise NotImplementedError\n\n\nclass NoAuth(AuthBase):\n pass\n\n\nclass RemoteUserAuth(AuthBase):\n header = b\"REMOTE_USER\"\n headerRegex = re.compile(br\"(?P<username>[^ @]+)@(?P<realm>[^ @]+)\")\n\n def __init__(self, header=None, headerRegex=None, **kwargs):\n AuthBase.__init__(self, **kwargs)\n if self.userInfoProvider is None:\n self.userInfoProvider = UserInfoProviderBase()\n if header is not None:\n self.header = header\n if headerRegex is not None:\n self.headerRegex = re.compile(headerRegex)\n\n @defer.inlineCallbacks\n def maybeAutoLogin(self, request):\n header = request.getHeader(self.header)\n if header is None:\n raise Error(403, b\"missing http header \" + self.header + b\". 
Check your reverse proxy config!\")\n res = self.headerRegex.match(header)\n if res is None:\n raise Error(\n 403, b'http header does not match regex! \"' + header + b'\" not matching ' + self.headerRegex.pattern)\n session = request.getSession()\n if session.user_info != dict(res.groupdict()):\n session.user_info = dict(res.groupdict())\n yield self.updateUserInfo(request)\n\n\n@implementer(IRealm)\nclass AuthRealm(object):\n\n def __init__(self, master, auth):\n self.auth = auth\n self.master = master\n\n def requestAvatar(self, avatarId, mind, *interfaces):\n if IResource in interfaces:\n return (IResource,\n PreAuthenticatedLoginResource(self.master, avatarId),\n lambda: None)\n raise NotImplementedError()\n\n\nclass TwistedICredAuthBase(AuthBase):\n\n def __init__(self, credentialFactories, checkers, **kwargs):\n AuthBase.__init__(self, **kwargs)\n if self.userInfoProvider is None:\n self.userInfoProvider = UserInfoProviderBase()\n self.credentialFactories = credentialFactories\n self.checkers = checkers\n\n def getLoginResource(self):\n return HTTPAuthSessionWrapper(\n Portal(AuthRealm(self.master, self), self.checkers),\n self.credentialFactories)\n\n\nclass HTPasswdAuth(TwistedICredAuthBase):\n\n def __init__(self, passwdFile, **kwargs):\n TwistedICredAuthBase.__init__(\n self,\n [DigestCredentialFactory(b\"md5\", b\"buildbot\"),\n BasicCredentialFactory(b\"buildbot\")],\n [FilePasswordDB(passwdFile)],\n **kwargs)\n\n\nclass UserPasswordAuth(TwistedICredAuthBase):\n\n def __init__(self, users, **kwargs):\n if isinstance(users, dict):\n users = {user: unicode2bytes(pw) for user, pw in users.items()}\n elif isinstance(users, list):\n users = [(user, unicode2bytes(pw)) for user, pw in users]\n TwistedICredAuthBase.__init__(\n self,\n [DigestCredentialFactory(b\"md5\", b\"buildbot\"),\n BasicCredentialFactory(b\"buildbot\")],\n [InMemoryUsernamePasswordDatabaseDontUse(**dict(users))],\n **kwargs)\n\n\n@implementer(ICredentialsChecker)\nclass CustomAuth(TwistedICredAuthBase):\n __metaclass__ = ABCMeta\n credentialInterfaces = IUsernamePassword\n\n def __init__(self, **kwargs):\n TwistedICredAuthBase.__init__(\n self,\n [BasicCredentialFactory(b\"buildbot\")],\n [self],\n **kwargs)\n\n def requestAvatarId(self, cred):\n if self.check_credentials(cred.username, cred.password):\n return defer.succeed(cred.username)\n return defer.fail(UnauthorizedLogin())\n\n @abstractmethod\n def check_credentials(username, password):\n return False\n\n\ndef _redirect(master, request):\n url = request.args.get(b\"redirect\", [b\"/\"])[0]\n url = bytes2unicode(url)\n return resource.Redirect(master.config.buildbotURL + \"#\" + url)\n\n\nclass PreAuthenticatedLoginResource(LoginResource):\n # a LoginResource which is already authenticated via a\n # HTTPAuthSessionWrapper\n\n def __init__(self, master, username):\n LoginResource.__init__(self, master)\n self.username = username\n\n @defer.inlineCallbacks\n def renderLogin(self, request):\n session = request.getSession()\n session.user_info = dict(username=bytes2unicode(self.username))\n yield self.master.www.auth.updateUserInfo(request)\n raise _redirect(self.master, request)\n\n\nclass LogoutResource(resource.Resource):\n\n def render_GET(self, request):\n session = request.getSession()\n session.expire()\n session.updateSession(request)\n request.redirect(_redirect(self.master, request).url)\n return b''\n", "path": "master/buildbot/www/auth.py"}]}
| 3,388 | 117 |
gh_patches_debug_9939
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-9061
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Color regex needs raw string
Warning in CI:
> bokeh/core/property/color.py:137
/home/travis/build/bokeh/bokeh/bokeh/core/property/color.py:137: DeprecationWarning: invalid escape sequence \d
value = colors.RGB(*[int(val) for val in re.findall("\d+", value)[:3]]).to_hex()
</issue>
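As an aside, the warning concerns the string literal rather than the regular expression itself: `\d` is not a recognised Python escape, so the pattern text is unchanged and the code keeps working, and the raw-string spelling used in the patch below simply silences the warning. A tiny standalone check (the `rgb(...)` sample value is made up):
```python
import re

# The two literals contain exactly the same characters; only the non-raw one
# triggers the "invalid escape sequence \d" warning at compile time.
assert "\d+" == r"\d+"

value = "rgb(100, 150, 200)"
print(re.findall(r"\d+", value)[:3])   # ['100', '150', '200']
```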
<code>
[start of bokeh/core/property/color.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' Provide color related properties.
8
9 '''
10
11 #-----------------------------------------------------------------------------
12 # Boilerplate
13 #-----------------------------------------------------------------------------
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import logging
17 log = logging.getLogger(__name__)
18
19 #-----------------------------------------------------------------------------
20 # Imports
21 #-----------------------------------------------------------------------------
22
23 # Standard library imports
24 import re
25
26 # External imports
27 from six import string_types
28
29 # Bokeh imports
30 from ... import colors
31 from .. import enums
32 from .bases import Property
33 from .container import Tuple
34 from .enum import Enum
35 from .either import Either
36 from .numeric import Byte, Percent
37 from .regex import Regex
38
39 #-----------------------------------------------------------------------------
40 # Globals and constants
41 #-----------------------------------------------------------------------------
42
43 __all__ = (
44 'Color',
45 'RGB',
46 'ColorHex',
47 )
48
49 #-----------------------------------------------------------------------------
50 # General API
51 #-----------------------------------------------------------------------------
52
53
54 class RGB(Property):
55 ''' Accept colors.RGB values.
56
57 '''
58
59 def validate(self, value, detail=True):
60 super(RGB, self).validate(value, detail)
61
62 if not (value is None or isinstance(value, colors.RGB)):
63 msg = "" if not detail else "expected RGB value, got %r" % (value,)
64 raise ValueError(msg)
65
66
67 class Color(Either):
68 ''' Accept color values in a variety of ways.
69
70 For colors, because we support named colors and hex values prefaced
71 with a "#", when we are handed a string value, there is a little
72 interpretation: if the value is one of the 147 SVG named colors or
73 it starts with a "#", then it is interpreted as a value.
74
75 If a 3-tuple is provided, then it is treated as an RGB (0..255).
76 If a 4-tuple is provided, then it is treated as an RGBa (0..255), with
77 alpha as a float between 0 and 1. (This follows the HTML5 Canvas API.)
78
79 Example:
80
81 .. code-block:: python
82
83 >>> class ColorModel(HasProps):
84 ... prop = Color()
85 ...
86
87 >>> m = ColorModel()
88
89 >>> m.prop = "firebrick"
90
91 >>> m.prop = "#a240a2"
92
93 >>> m.prop = (100, 100, 255)
94
95 >>> m.prop = (100, 100, 255, 0.5)
96
97 >>> m.prop = "junk" # ValueError !!
98
99 >>> m.prop = (100.2, 57.3, 10.2) # ValueError !!
100
101 '''
102
103 def __init__(self, default=None, help=None):
104 types = (Enum(enums.NamedColor),
105 Regex(r"^#[0-9a-fA-F]{6}$"),
106 Regex(r"^rgba\(((25[0-5]|2[0-4]\d|1\d{1,2}|\d\d?)\s*,"
107 r"\s*?){2}(25[0-5]|2[0-4]\d|1\d{1,2}|\d\d?)\s*,"
108 r"\s*([01]\.?\d*?)\)"),
109 Regex(r"^rgb\(((25[0-5]|2[0-4]\d|1\d{1,2}|\d\d?)\s*,"
110 r"\s*?){2}(25[0-5]|2[0-4]\d|1\d{1,2}|\d\d?)\s*?\)"),
111 Tuple(Byte, Byte, Byte),
112 Tuple(Byte, Byte, Byte, Percent),
113 RGB)
114 super(Color, self).__init__(*types, default=default, help=help)
115
116 def __str__(self):
117 return self.__class__.__name__
118
119 def transform(self, value):
120 if isinstance(value, tuple):
121 value = colors.RGB(*value).to_css()
122 return value
123
124 def _sphinx_type(self):
125 return self._sphinx_prop_link()
126
127
128 class ColorHex(Color):
129 ''' ref Color
130
131 The only difference with Color is it's transform in hexadecimal string
132 when send to javascript side
133
134 '''
135
136 def transform(self, value):
137 if isinstance(value, string_types):
138 value = value.lower()
139 if value.startswith('rgb'):
140 value = colors.RGB(*[int(val) for val in re.findall("\d+", value)[:3]]).to_hex()
141 elif value in enums.NamedColor:
142 value = getattr(colors.named, value).to_hex()
143 elif isinstance(value, tuple):
144 value = colors.RGB(*value).to_hex()
145 else:
146 value = value.to_hex()
147 return value.lower()
148
149 #-----------------------------------------------------------------------------
150 # Dev API
151 #-----------------------------------------------------------------------------
152
153 #-----------------------------------------------------------------------------
154 # Private API
155 #-----------------------------------------------------------------------------
156
157 #-----------------------------------------------------------------------------
158 # Code
159 #-----------------------------------------------------------------------------
160
[end of bokeh/core/property/color.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bokeh/core/property/color.py b/bokeh/core/property/color.py
--- a/bokeh/core/property/color.py
+++ b/bokeh/core/property/color.py
@@ -137,7 +137,7 @@
if isinstance(value, string_types):
value = value.lower()
if value.startswith('rgb'):
- value = colors.RGB(*[int(val) for val in re.findall("\d+", value)[:3]]).to_hex()
+ value = colors.RGB(*[int(val) for val in re.findall(r"\d+", value)[:3]]).to_hex()
elif value in enums.NamedColor:
value = getattr(colors.named, value).to_hex()
elif isinstance(value, tuple):
|
{"golden_diff": "diff --git a/bokeh/core/property/color.py b/bokeh/core/property/color.py\n--- a/bokeh/core/property/color.py\n+++ b/bokeh/core/property/color.py\n@@ -137,7 +137,7 @@\n if isinstance(value, string_types):\n value = value.lower()\n if value.startswith('rgb'):\n- value = colors.RGB(*[int(val) for val in re.findall(\"\\d+\", value)[:3]]).to_hex()\n+ value = colors.RGB(*[int(val) for val in re.findall(r\"\\d+\", value)[:3]]).to_hex()\n elif value in enums.NamedColor:\n value = getattr(colors.named, value).to_hex()\n elif isinstance(value, tuple):\n", "issue": "Color regex needs raw string\nWarning in CI: \r\n\r\n> bokeh/core/property/color.py:137\r\n /home/travis/build/bokeh/bokeh/bokeh/core/property/color.py:137: DeprecationWarning: invalid escape sequence \\d\r\n value = colors.RGB(*[int(val) for val in re.findall(\"\\d+\", value)[:3]]).to_hex()\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Provide color related properties.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nimport re\n\n# External imports\nfrom six import string_types\n\n# Bokeh imports\nfrom ... import colors\nfrom .. import enums\nfrom .bases import Property\nfrom .container import Tuple\nfrom .enum import Enum\nfrom .either import Either\nfrom .numeric import Byte, Percent\nfrom .regex import Regex\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'Color',\n 'RGB',\n 'ColorHex',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n\nclass RGB(Property):\n ''' Accept colors.RGB values.\n\n '''\n\n def validate(self, value, detail=True):\n super(RGB, self).validate(value, detail)\n\n if not (value is None or isinstance(value, colors.RGB)):\n msg = \"\" if not detail else \"expected RGB value, got %r\" % (value,)\n raise ValueError(msg)\n\n\nclass Color(Either):\n ''' Accept color values in a variety of ways.\n\n For colors, because we support named colors and hex values prefaced\n with a \"#\", when we are handed a string value, there is a little\n interpretation: if the value is one of the 147 SVG named colors or\n it starts with a \"#\", then it is interpreted as a value.\n\n If a 3-tuple is provided, then it is treated as an RGB (0..255).\n If a 4-tuple is provided, then it is treated as an RGBa (0..255), with\n alpha as a float between 0 and 1. (This follows the HTML5 Canvas API.)\n\n Example:\n\n .. code-block:: python\n\n >>> class ColorModel(HasProps):\n ... 
prop = Color()\n ...\n\n >>> m = ColorModel()\n\n >>> m.prop = \"firebrick\"\n\n >>> m.prop = \"#a240a2\"\n\n >>> m.prop = (100, 100, 255)\n\n >>> m.prop = (100, 100, 255, 0.5)\n\n >>> m.prop = \"junk\" # ValueError !!\n\n >>> m.prop = (100.2, 57.3, 10.2) # ValueError !!\n\n '''\n\n def __init__(self, default=None, help=None):\n types = (Enum(enums.NamedColor),\n Regex(r\"^#[0-9a-fA-F]{6}$\"),\n Regex(r\"^rgba\\(((25[0-5]|2[0-4]\\d|1\\d{1,2}|\\d\\d?)\\s*,\"\n r\"\\s*?){2}(25[0-5]|2[0-4]\\d|1\\d{1,2}|\\d\\d?)\\s*,\"\n r\"\\s*([01]\\.?\\d*?)\\)\"),\n Regex(r\"^rgb\\(((25[0-5]|2[0-4]\\d|1\\d{1,2}|\\d\\d?)\\s*,\"\n r\"\\s*?){2}(25[0-5]|2[0-4]\\d|1\\d{1,2}|\\d\\d?)\\s*?\\)\"),\n Tuple(Byte, Byte, Byte),\n Tuple(Byte, Byte, Byte, Percent),\n RGB)\n super(Color, self).__init__(*types, default=default, help=help)\n\n def __str__(self):\n return self.__class__.__name__\n\n def transform(self, value):\n if isinstance(value, tuple):\n value = colors.RGB(*value).to_css()\n return value\n\n def _sphinx_type(self):\n return self._sphinx_prop_link()\n\n\nclass ColorHex(Color):\n ''' ref Color\n\n The only difference with Color is it's transform in hexadecimal string\n when send to javascript side\n\n '''\n\n def transform(self, value):\n if isinstance(value, string_types):\n value = value.lower()\n if value.startswith('rgb'):\n value = colors.RGB(*[int(val) for val in re.findall(\"\\d+\", value)[:3]]).to_hex()\n elif value in enums.NamedColor:\n value = getattr(colors.named, value).to_hex()\n elif isinstance(value, tuple):\n value = colors.RGB(*value).to_hex()\n else:\n value = value.to_hex()\n return value.lower()\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/core/property/color.py"}]}
| 2,113 | 161 |
gh_patches_debug_7724
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1315
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
unlocks api does not check already unlocked hints
There is no check in the unlocks api for already unlocked hints in the file [unlocks.py](https://github.com/CTFd/CTFd/blob/master/CTFd/api/v1/unlocks.py).
It is possible to unlock the same hint multiple times by just calling the api.
</issue>
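A sketch of the kind of guard the report is asking for — before persisting a new unlock, check whether an identical `Unlocks` row already exists (model and field names are taken from the code below; the exact error payload here is only illustrative):

```python
# Hypothetical check inside UnlockList.post(), before db.session.add(...):
existing = Unlocks.query.filter_by(
    user_id=user.id,
    target=req["target"],
    type=req["type"],
).first()
if existing:
    return (
        {"success": False, "errors": {"target": "This hint is already unlocked"}},
        400,
    )
```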
<code>
[start of CTFd/api/v1/unlocks.py]
1 from flask import request
2 from flask_restplus import Namespace, Resource
3
4 from CTFd.cache import clear_standings
5 from CTFd.models import Unlocks, db, get_class_by_tablename
6 from CTFd.schemas.awards import AwardSchema
7 from CTFd.schemas.unlocks import UnlockSchema
8 from CTFd.utils.decorators import (
9 admins_only,
10 authed_only,
11 during_ctf_time_only,
12 require_verified_emails,
13 )
14 from CTFd.utils.user import get_current_user
15
16 unlocks_namespace = Namespace("unlocks", description="Endpoint to retrieve Unlocks")
17
18
19 @unlocks_namespace.route("")
20 class UnlockList(Resource):
21 @admins_only
22 def get(self):
23 hints = Unlocks.query.all()
24 schema = UnlockSchema()
25 response = schema.dump(hints)
26
27 if response.errors:
28 return {"success": False, "errors": response.errors}, 400
29
30 return {"success": True, "data": response.data}
31
32 @during_ctf_time_only
33 @require_verified_emails
34 @authed_only
35 def post(self):
36 req = request.get_json()
37 user = get_current_user()
38
39 req["user_id"] = user.id
40 req["team_id"] = user.team_id
41
42 Model = get_class_by_tablename(req["type"])
43 target = Model.query.filter_by(id=req["target"]).first_or_404()
44
45 if target.cost > user.score:
46 return (
47 {
48 "success": False,
49 "errors": {
50 "score": "You do not have enough points to unlock this hint"
51 },
52 },
53 400,
54 )
55
56 schema = UnlockSchema()
57 response = schema.load(req, session=db.session)
58
59 if response.errors:
60 return {"success": False, "errors": response.errors}, 400
61
62 db.session.add(response.data)
63
64 award_schema = AwardSchema()
65 award = {
66 "user_id": user.id,
67 "team_id": user.team_id,
68 "name": target.name,
69 "description": target.description,
70 "value": (-target.cost),
71 "category": target.category,
72 }
73
74 award = award_schema.load(award)
75 db.session.add(award.data)
76 db.session.commit()
77 clear_standings()
78
79 response = schema.dump(response.data)
80
81 return {"success": True, "data": response.data}
82
[end of CTFd/api/v1/unlocks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/CTFd/api/v1/unlocks.py b/CTFd/api/v1/unlocks.py
--- a/CTFd/api/v1/unlocks.py
+++ b/CTFd/api/v1/unlocks.py
@@ -59,6 +59,16 @@
if response.errors:
return {"success": False, "errors": response.errors}, 400
+ existing = Unlocks.query.filter_by(**req).first()
+ if existing:
+ return (
+ {
+ "success": False,
+ "errors": {"target": "You've already unlocked this this target"},
+ },
+ 400,
+ )
+
db.session.add(response.data)
award_schema = AwardSchema()
|
{"golden_diff": "diff --git a/CTFd/api/v1/unlocks.py b/CTFd/api/v1/unlocks.py\n--- a/CTFd/api/v1/unlocks.py\n+++ b/CTFd/api/v1/unlocks.py\n@@ -59,6 +59,16 @@\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n \n+ existing = Unlocks.query.filter_by(**req).first()\n+ if existing:\n+ return (\n+ {\n+ \"success\": False,\n+ \"errors\": {\"target\": \"You've already unlocked this this target\"},\n+ },\n+ 400,\n+ )\n+\n db.session.add(response.data)\n \n award_schema = AwardSchema()\n", "issue": "unlocks api does not check already unlocked hints \nThere is not check in the unlocks api for already unlocked hints in the file [unlocks.py](https://github.com/CTFd/CTFd/blob/master/CTFd/api/v1/unlocks.py)\r\n\r\nIt is possible to unlock multiple times the same hint by just calling the api.\n", "before_files": [{"content": "from flask import request\nfrom flask_restplus import Namespace, Resource\n\nfrom CTFd.cache import clear_standings\nfrom CTFd.models import Unlocks, db, get_class_by_tablename\nfrom CTFd.schemas.awards import AwardSchema\nfrom CTFd.schemas.unlocks import UnlockSchema\nfrom CTFd.utils.decorators import (\n admins_only,\n authed_only,\n during_ctf_time_only,\n require_verified_emails,\n)\nfrom CTFd.utils.user import get_current_user\n\nunlocks_namespace = Namespace(\"unlocks\", description=\"Endpoint to retrieve Unlocks\")\n\n\n@unlocks_namespace.route(\"\")\nclass UnlockList(Resource):\n @admins_only\n def get(self):\n hints = Unlocks.query.all()\n schema = UnlockSchema()\n response = schema.dump(hints)\n\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n return {\"success\": True, \"data\": response.data}\n\n @during_ctf_time_only\n @require_verified_emails\n @authed_only\n def post(self):\n req = request.get_json()\n user = get_current_user()\n\n req[\"user_id\"] = user.id\n req[\"team_id\"] = user.team_id\n\n Model = get_class_by_tablename(req[\"type\"])\n target = Model.query.filter_by(id=req[\"target\"]).first_or_404()\n\n if target.cost > user.score:\n return (\n {\n \"success\": False,\n \"errors\": {\n \"score\": \"You do not have enough points to unlock this hint\"\n },\n },\n 400,\n )\n\n schema = UnlockSchema()\n response = schema.load(req, session=db.session)\n\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n db.session.add(response.data)\n\n award_schema = AwardSchema()\n award = {\n \"user_id\": user.id,\n \"team_id\": user.team_id,\n \"name\": target.name,\n \"description\": target.description,\n \"value\": (-target.cost),\n \"category\": target.category,\n }\n\n award = award_schema.load(award)\n db.session.add(award.data)\n db.session.commit()\n clear_standings()\n\n response = schema.dump(response.data)\n\n return {\"success\": True, \"data\": response.data}\n", "path": "CTFd/api/v1/unlocks.py"}]}
| 1,281 | 163 |
gh_patches_debug_25011
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3306
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider alnatura_de is broken
During the global build at 2021-08-25-14-42-15, spider **alnatura_de** failed with **134 features** and **5 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/logs/alnatura_de.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/alnatura_de.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/alnatura_de.geojson))
</issue>
<code>
[start of locations/spiders/alnatura_de.py]
1 import scrapy
2 import re
3 import json
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 DAY_MAPPING = {
10 1: 'Mo', 2: 'Tu', 3: 'We', 4: 'Th', 5: 'Fr', 6: 'Sa', 7: 'Su',
11 'Mo': 1, 'Tu': 2, 'We': 3, 'Th': 4, 'Fr': 5, 'Sa': 6, 'Su': 7
12 }
13
14
15 class AlnaturaSpider(scrapy.Spider):
16 name = "alnatura_de"
17 allowed_domains = ["www.alnatura.de"]
18 start_urls = (
19 'https://www.alnatura.de/api/sitecore/stores/FindStoresforMap?'
20 'ElementsPerPage=10000&lat=50.99820058296841'
21 '&lng=7.811966062500009&radius=1483'
22 '&Tradepartner=Alnatura%20Super%20Natur%20Markt',
23 )
24
25 def parse_hours(self, store_hours):
26 opening_hours = OpeningHours()
27 match = re.match(r'(.+?)-(.+?) +(\d.*?)-(.+?) Uhr', store_hours)
28 if match:
29 from_day = match.group(1).strip()
30 to_day = match.group(2).strip()
31 from_time = match.group(3).strip()
32 to_time = match.group(4).strip()
33
34 fhours = int(float(from_time))
35 fminutes = (float(from_time) * 60) % 60
36 fmt_from_time = "%d:%02d" % (fhours, fminutes)
37 thours = int(float(to_time))
38 tminutes = (float(to_time) * 60) % 60
39 fmt_to_time = "%d:%02d" % (thours, tminutes)
40
41 for day in range(DAY_MAPPING[from_day], DAY_MAPPING[to_day] + 1):
42 opening_hours.add_range(
43 day=DAY_MAPPING[day],
44 open_time=fmt_from_time,
45 close_time=fmt_to_time,
46 time_format='%H:%M'
47 )
48
49 return opening_hours.as_opening_hours()
50
51 def parse_stores(self, response):
52 store = json.loads(response.text)
53 store = store['Payload']
54
55 properties = {
56 'lat': response.meta.get('lat'),
57 'lon': response.meta.get('lng'),
58 'name': store['StoreName'],
59 'street': store['Street'],
60 'city': store['City'],
61 'postcode': store['PostalCode'],
62 'phone': store['Tel'],
63 'country': store['Country'],
64 'ref': response.meta.get('id'),
65 }
66
67 if store['OpeningTime']:
68 hours = self.parse_hours(store.get('OpeningTime'))
69 if hours:
70 properties["opening_hours"] = hours
71
72 yield GeojsonPointItem(**properties)
73
74 def parse(self, response):
75 data = json.loads(response.text)
76
77 for stores in data['Payload']:
78 yield scrapy.Request(
79 f"https://www.alnatura.de/api/sitecore/stores/StoreDetails"
80 f"?storeid={stores['Id']}",
81 callback=self.parse_stores,
82 meta={
83 'lat': stores['Lat'].replace(',', '.'),
84 'lng': stores['Lng'].replace(',', '.'),
85 'id': stores['Id'],
86 }
87 )
88
[end of locations/spiders/alnatura_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/alnatura_de.py b/locations/spiders/alnatura_de.py
--- a/locations/spiders/alnatura_de.py
+++ b/locations/spiders/alnatura_de.py
@@ -28,8 +28,8 @@
if match:
from_day = match.group(1).strip()
to_day = match.group(2).strip()
- from_time = match.group(3).strip()
- to_time = match.group(4).strip()
+ from_time = match.group(3).strip().replace(':','.')
+ to_time = match.group(4).strip().replace(':','.')
fhours = int(float(from_time))
fminutes = (float(from_time) * 60) % 60
@@ -38,13 +38,13 @@
tminutes = (float(to_time) * 60) % 60
fmt_to_time = "%d:%02d" % (thours, tminutes)
- for day in range(DAY_MAPPING[from_day], DAY_MAPPING[to_day] + 1):
- opening_hours.add_range(
- day=DAY_MAPPING[day],
- open_time=fmt_from_time,
- close_time=fmt_to_time,
- time_format='%H:%M'
- )
+ for day in range(DAY_MAPPING[from_day], DAY_MAPPING[to_day] + 1):
+ opening_hours.add_range(
+ day=DAY_MAPPING[day],
+ open_time=fmt_from_time,
+ close_time=fmt_to_time,
+ time_format='%H:%M'
+ )
return opening_hours.as_opening_hours()
|
{"golden_diff": "diff --git a/locations/spiders/alnatura_de.py b/locations/spiders/alnatura_de.py\n--- a/locations/spiders/alnatura_de.py\n+++ b/locations/spiders/alnatura_de.py\n@@ -28,8 +28,8 @@\n if match:\n from_day = match.group(1).strip()\n to_day = match.group(2).strip()\n- from_time = match.group(3).strip()\n- to_time = match.group(4).strip()\n+ from_time = match.group(3).strip().replace(':','.')\n+ to_time = match.group(4).strip().replace(':','.')\n \n fhours = int(float(from_time))\n fminutes = (float(from_time) * 60) % 60\n@@ -38,13 +38,13 @@\n tminutes = (float(to_time) * 60) % 60\n fmt_to_time = \"%d:%02d\" % (thours, tminutes)\n \n- for day in range(DAY_MAPPING[from_day], DAY_MAPPING[to_day] + 1):\n- opening_hours.add_range(\n- day=DAY_MAPPING[day],\n- open_time=fmt_from_time,\n- close_time=fmt_to_time,\n- time_format='%H:%M'\n- )\n+ for day in range(DAY_MAPPING[from_day], DAY_MAPPING[to_day] + 1):\n+ opening_hours.add_range(\n+ day=DAY_MAPPING[day],\n+ open_time=fmt_from_time,\n+ close_time=fmt_to_time,\n+ time_format='%H:%M'\n+ )\n \n return opening_hours.as_opening_hours()\n", "issue": "Spider alnatura_de is broken\nDuring the global build at 2021-08-25-14-42-15, spider **alnatura_de** failed with **134 features** and **5 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/logs/alnatura_de.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/alnatura_de.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/alnatura_de.geojson))\n", "before_files": [{"content": "import scrapy\nimport re\nimport json\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nDAY_MAPPING = {\n 1: 'Mo', 2: 'Tu', 3: 'We', 4: 'Th', 5: 'Fr', 6: 'Sa', 7: 'Su',\n 'Mo': 1, 'Tu': 2, 'We': 3, 'Th': 4, 'Fr': 5, 'Sa': 6, 'Su': 7\n}\n\n\nclass AlnaturaSpider(scrapy.Spider):\n name = \"alnatura_de\"\n allowed_domains = [\"www.alnatura.de\"]\n start_urls = (\n 'https://www.alnatura.de/api/sitecore/stores/FindStoresforMap?'\n 'ElementsPerPage=10000&lat=50.99820058296841'\n '&lng=7.811966062500009&radius=1483'\n '&Tradepartner=Alnatura%20Super%20Natur%20Markt',\n )\n\n def parse_hours(self, store_hours):\n opening_hours = OpeningHours()\n match = re.match(r'(.+?)-(.+?) +(\\d.*?)-(.+?) 
Uhr', store_hours)\n if match:\n from_day = match.group(1).strip()\n to_day = match.group(2).strip()\n from_time = match.group(3).strip()\n to_time = match.group(4).strip()\n\n fhours = int(float(from_time))\n fminutes = (float(from_time) * 60) % 60\n fmt_from_time = \"%d:%02d\" % (fhours, fminutes)\n thours = int(float(to_time))\n tminutes = (float(to_time) * 60) % 60\n fmt_to_time = \"%d:%02d\" % (thours, tminutes)\n\n for day in range(DAY_MAPPING[from_day], DAY_MAPPING[to_day] + 1):\n opening_hours.add_range(\n day=DAY_MAPPING[day],\n open_time=fmt_from_time,\n close_time=fmt_to_time,\n time_format='%H:%M'\n )\n\n return opening_hours.as_opening_hours()\n\n def parse_stores(self, response):\n store = json.loads(response.text)\n store = store['Payload']\n\n properties = {\n 'lat': response.meta.get('lat'),\n 'lon': response.meta.get('lng'),\n 'name': store['StoreName'],\n 'street': store['Street'],\n 'city': store['City'],\n 'postcode': store['PostalCode'],\n 'phone': store['Tel'],\n 'country': store['Country'],\n 'ref': response.meta.get('id'),\n }\n\n if store['OpeningTime']:\n hours = self.parse_hours(store.get('OpeningTime'))\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n data = json.loads(response.text)\n\n for stores in data['Payload']:\n yield scrapy.Request(\n f\"https://www.alnatura.de/api/sitecore/stores/StoreDetails\"\n f\"?storeid={stores['Id']}\",\n callback=self.parse_stores,\n meta={\n 'lat': stores['Lat'].replace(',', '.'),\n 'lng': stores['Lng'].replace(',', '.'),\n 'id': stores['Id'],\n }\n )\n", "path": "locations/spiders/alnatura_de.py"}]}
| 1,684 | 369 |
gh_patches_debug_34373
|
rasdani/github-patches
|
git_diff
|
encode__httpx-7
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Timeout tests
</issue>
<code>
[start of httpcore/compat.py]
1 try:
2 import brotli
3 except ImportError:
4 brotli = None # pragma: nocover
5
[end of httpcore/compat.py]
[start of httpcore/__init__.py]
1 from .config import PoolLimits, SSLConfig, TimeoutConfig
2 from .datastructures import URL, Request, Response
3 from .exceptions import ResponseClosed, StreamConsumed
4 from .pool import ConnectionPool
5
6 __version__ = "0.0.3"
7
[end of httpcore/__init__.py]
[start of httpcore/pool.py]
1 import asyncio
2 import functools
3 import os
4 import ssl
5 import typing
6 from types import TracebackType
7
8 from .config import (
9 DEFAULT_CA_BUNDLE_PATH,
10 DEFAULT_POOL_LIMITS,
11 DEFAULT_SSL_CONFIG,
12 DEFAULT_TIMEOUT_CONFIG,
13 PoolLimits,
14 SSLConfig,
15 TimeoutConfig,
16 )
17 from .connections import Connection
18 from .datastructures import URL, Request, Response
19
20 ConnectionKey = typing.Tuple[str, str, int] # (scheme, host, port)
21
22
23 class ConnectionSemaphore:
24 def __init__(self, max_connections: int = None):
25 if max_connections is not None:
26 self.semaphore = asyncio.BoundedSemaphore(value=max_connections)
27
28 async def acquire(self) -> None:
29 if hasattr(self, "semaphore"):
30 await self.semaphore.acquire()
31
32 def release(self) -> None:
33 if hasattr(self, "semaphore"):
34 self.semaphore.release()
35
36
37 class ConnectionPool:
38 def __init__(
39 self,
40 *,
41 ssl: SSLConfig = DEFAULT_SSL_CONFIG,
42 timeout: TimeoutConfig = DEFAULT_TIMEOUT_CONFIG,
43 limits: PoolLimits = DEFAULT_POOL_LIMITS,
44 ):
45 self.ssl_config = ssl
46 self.timeout = timeout
47 self.limits = limits
48 self.is_closed = False
49 self.num_active_connections = 0
50 self.num_keepalive_connections = 0
51 self._connections = (
52 {}
53 ) # type: typing.Dict[ConnectionKey, typing.List[Connection]]
54 self._connection_semaphore = ConnectionSemaphore(
55 max_connections=self.limits.hard_limit
56 )
57
58 async def request(
59 self,
60 method: str,
61 url: str,
62 *,
63 headers: typing.Sequence[typing.Tuple[bytes, bytes]] = (),
64 body: typing.Union[bytes, typing.AsyncIterator[bytes]] = b"",
65 stream: bool = False,
66 ) -> Response:
67 parsed_url = URL(url)
68 request = Request(method, parsed_url, headers=headers, body=body)
69 ssl_context = await self.get_ssl_context(parsed_url)
70 connection = await self.acquire_connection(parsed_url, ssl=ssl_context)
71 response = await connection.send(request)
72 if not stream:
73 try:
74 await response.read()
75 finally:
76 await response.close()
77 return response
78
79 @property
80 def num_connections(self) -> int:
81 return self.num_active_connections + self.num_keepalive_connections
82
83 async def acquire_connection(
84 self, url: URL, *, ssl: typing.Optional[ssl.SSLContext] = None
85 ) -> Connection:
86 key = (url.scheme, url.hostname, url.port)
87 try:
88 connection = self._connections[key].pop()
89 if not self._connections[key]:
90 del self._connections[key]
91 self.num_keepalive_connections -= 1
92 self.num_active_connections += 1
93
94 except (KeyError, IndexError):
95 await self._connection_semaphore.acquire()
96 release = functools.partial(self.release_connection, key=key)
97 connection = Connection(timeout=self.timeout, on_release=release)
98 self.num_active_connections += 1
99 await connection.open(url.hostname, url.port, ssl=ssl)
100
101 return connection
102
103 async def release_connection(
104 self, connection: Connection, key: ConnectionKey
105 ) -> None:
106 if connection.is_closed:
107 self._connection_semaphore.release()
108 self.num_active_connections -= 1
109 elif (
110 self.limits.soft_limit is not None
111 and self.num_connections > self.limits.soft_limit
112 ):
113 self._connection_semaphore.release()
114 self.num_active_connections -= 1
115 connection.close()
116 else:
117 self.num_active_connections -= 1
118 self.num_keepalive_connections += 1
119 try:
120 self._connections[key].append(connection)
121 except KeyError:
122 self._connections[key] = [connection]
123
124 async def get_ssl_context(self, url: URL) -> typing.Optional[ssl.SSLContext]:
125 if not url.is_secure:
126 return None
127
128 if not hasattr(self, "ssl_context"):
129 if not self.ssl_config.verify:
130 self.ssl_context = self.get_ssl_context_no_verify()
131 else:
132 # Run the SSL loading in a threadpool, since it makes disk accesses.
133 loop = asyncio.get_event_loop()
134 self.ssl_context = await loop.run_in_executor(
135 None, self.get_ssl_context_verify
136 )
137
138 return self.ssl_context
139
140 def get_ssl_context_no_verify(self) -> ssl.SSLContext:
141 """
142 Return an SSL context for unverified connections.
143 """
144 context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
145 context.options |= ssl.OP_NO_SSLv2
146 context.options |= ssl.OP_NO_SSLv3
147 context.options |= ssl.OP_NO_COMPRESSION
148 context.set_default_verify_paths()
149 return context
150
151 def get_ssl_context_verify(self) -> ssl.SSLContext:
152 """
153 Return an SSL context for verified connections.
154 """
155 cert = self.ssl_config.cert
156 verify = self.ssl_config.verify
157
158 if isinstance(verify, bool):
159 ca_bundle_path = DEFAULT_CA_BUNDLE_PATH
160 elif os.path.exists(verify):
161 ca_bundle_path = verify
162 else:
163 raise IOError(
164 "Could not find a suitable TLS CA certificate bundle, "
165 "invalid path: {}".format(verify)
166 )
167
168 context = ssl.create_default_context()
169 if os.path.isfile(ca_bundle_path):
170 context.load_verify_locations(cafile=ca_bundle_path)
171 elif os.path.isdir(ca_bundle_path):
172 context.load_verify_locations(capath=ca_bundle_path)
173
174 if cert is not None:
175 if isinstance(cert, str):
176 context.load_cert_chain(certfile=cert)
177 else:
178 context.load_cert_chain(certfile=cert[0], keyfile=cert[1])
179
180 return context
181
182 async def close(self) -> None:
183 self.is_closed = True
184
185 async def __aenter__(self) -> "ConnectionPool":
186 return self
187
188 async def __aexit__(
189 self,
190 exc_type: typing.Type[BaseException] = None,
191 exc_value: BaseException = None,
192 traceback: TracebackType = None,
193 ) -> None:
194 await self.close()
195
[end of httpcore/pool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/httpcore/__init__.py b/httpcore/__init__.py
--- a/httpcore/__init__.py
+++ b/httpcore/__init__.py
@@ -1,6 +1,13 @@
from .config import PoolLimits, SSLConfig, TimeoutConfig
from .datastructures import URL, Request, Response
-from .exceptions import ResponseClosed, StreamConsumed
+from .exceptions import (
+ ConnectTimeout,
+ PoolTimeout,
+ ReadTimeout,
+ ResponseClosed,
+ StreamConsumed,
+ Timeout,
+)
from .pool import ConnectionPool
__version__ = "0.0.3"
diff --git a/httpcore/compat.py b/httpcore/compat.py
--- a/httpcore/compat.py
+++ b/httpcore/compat.py
@@ -1,4 +1,4 @@
try:
import brotli
-except ImportError:
- brotli = None # pragma: nocover
+except ImportError: # pragma: nocover
+ brotli = None
diff --git a/httpcore/pool.py b/httpcore/pool.py
--- a/httpcore/pool.py
+++ b/httpcore/pool.py
@@ -16,18 +16,23 @@
)
from .connections import Connection
from .datastructures import URL, Request, Response
+from .exceptions import PoolTimeout
ConnectionKey = typing.Tuple[str, str, int] # (scheme, host, port)
class ConnectionSemaphore:
- def __init__(self, max_connections: int = None):
+ def __init__(self, max_connections: int = None, timeout: float = None):
+ self.timeout = timeout
if max_connections is not None:
self.semaphore = asyncio.BoundedSemaphore(value=max_connections)
async def acquire(self) -> None:
if hasattr(self, "semaphore"):
- await self.semaphore.acquire()
+ try:
+ await asyncio.wait_for(self.semaphore.acquire(), self.timeout)
+ except asyncio.TimeoutError:
+ raise PoolTimeout()
def release(self) -> None:
if hasattr(self, "semaphore"):
@@ -52,7 +57,7 @@
{}
) # type: typing.Dict[ConnectionKey, typing.List[Connection]]
self._connection_semaphore = ConnectionSemaphore(
- max_connections=self.limits.hard_limit
+ max_connections=self.limits.hard_limit, timeout=self.timeout.pool_timeout
)
async def request(
|
{"golden_diff": "diff --git a/httpcore/__init__.py b/httpcore/__init__.py\n--- a/httpcore/__init__.py\n+++ b/httpcore/__init__.py\n@@ -1,6 +1,13 @@\n from .config import PoolLimits, SSLConfig, TimeoutConfig\n from .datastructures import URL, Request, Response\n-from .exceptions import ResponseClosed, StreamConsumed\n+from .exceptions import (\n+ ConnectTimeout,\n+ PoolTimeout,\n+ ReadTimeout,\n+ ResponseClosed,\n+ StreamConsumed,\n+ Timeout,\n+)\n from .pool import ConnectionPool\n \n __version__ = \"0.0.3\"\ndiff --git a/httpcore/compat.py b/httpcore/compat.py\n--- a/httpcore/compat.py\n+++ b/httpcore/compat.py\n@@ -1,4 +1,4 @@\n try:\n import brotli\n-except ImportError:\n- brotli = None # pragma: nocover\n+except ImportError: # pragma: nocover\n+ brotli = None\ndiff --git a/httpcore/pool.py b/httpcore/pool.py\n--- a/httpcore/pool.py\n+++ b/httpcore/pool.py\n@@ -16,18 +16,23 @@\n )\n from .connections import Connection\n from .datastructures import URL, Request, Response\n+from .exceptions import PoolTimeout\n \n ConnectionKey = typing.Tuple[str, str, int] # (scheme, host, port)\n \n \n class ConnectionSemaphore:\n- def __init__(self, max_connections: int = None):\n+ def __init__(self, max_connections: int = None, timeout: float = None):\n+ self.timeout = timeout\n if max_connections is not None:\n self.semaphore = asyncio.BoundedSemaphore(value=max_connections)\n \n async def acquire(self) -> None:\n if hasattr(self, \"semaphore\"):\n- await self.semaphore.acquire()\n+ try:\n+ await asyncio.wait_for(self.semaphore.acquire(), self.timeout)\n+ except asyncio.TimeoutError:\n+ raise PoolTimeout()\n \n def release(self) -> None:\n if hasattr(self, \"semaphore\"):\n@@ -52,7 +57,7 @@\n {}\n ) # type: typing.Dict[ConnectionKey, typing.List[Connection]]\n self._connection_semaphore = ConnectionSemaphore(\n- max_connections=self.limits.hard_limit\n+ max_connections=self.limits.hard_limit, timeout=self.timeout.pool_timeout\n )\n \n async def request(\n", "issue": "Timeout tests\n\n", "before_files": [{"content": "try:\n import brotli\nexcept ImportError:\n brotli = None # pragma: nocover\n", "path": "httpcore/compat.py"}, {"content": "from .config import PoolLimits, SSLConfig, TimeoutConfig\nfrom .datastructures import URL, Request, Response\nfrom .exceptions import ResponseClosed, StreamConsumed\nfrom .pool import ConnectionPool\n\n__version__ = \"0.0.3\"\n", "path": "httpcore/__init__.py"}, {"content": "import asyncio\nimport functools\nimport os\nimport ssl\nimport typing\nfrom types import TracebackType\n\nfrom .config import (\n DEFAULT_CA_BUNDLE_PATH,\n DEFAULT_POOL_LIMITS,\n DEFAULT_SSL_CONFIG,\n DEFAULT_TIMEOUT_CONFIG,\n PoolLimits,\n SSLConfig,\n TimeoutConfig,\n)\nfrom .connections import Connection\nfrom .datastructures import URL, Request, Response\n\nConnectionKey = typing.Tuple[str, str, int] # (scheme, host, port)\n\n\nclass ConnectionSemaphore:\n def __init__(self, max_connections: int = None):\n if max_connections is not None:\n self.semaphore = asyncio.BoundedSemaphore(value=max_connections)\n\n async def acquire(self) -> None:\n if hasattr(self, \"semaphore\"):\n await self.semaphore.acquire()\n\n def release(self) -> None:\n if hasattr(self, \"semaphore\"):\n self.semaphore.release()\n\n\nclass ConnectionPool:\n def __init__(\n self,\n *,\n ssl: SSLConfig = DEFAULT_SSL_CONFIG,\n timeout: TimeoutConfig = DEFAULT_TIMEOUT_CONFIG,\n limits: PoolLimits = DEFAULT_POOL_LIMITS,\n ):\n self.ssl_config = ssl\n self.timeout = timeout\n self.limits = limits\n self.is_closed = False\n 
self.num_active_connections = 0\n self.num_keepalive_connections = 0\n self._connections = (\n {}\n ) # type: typing.Dict[ConnectionKey, typing.List[Connection]]\n self._connection_semaphore = ConnectionSemaphore(\n max_connections=self.limits.hard_limit\n )\n\n async def request(\n self,\n method: str,\n url: str,\n *,\n headers: typing.Sequence[typing.Tuple[bytes, bytes]] = (),\n body: typing.Union[bytes, typing.AsyncIterator[bytes]] = b\"\",\n stream: bool = False,\n ) -> Response:\n parsed_url = URL(url)\n request = Request(method, parsed_url, headers=headers, body=body)\n ssl_context = await self.get_ssl_context(parsed_url)\n connection = await self.acquire_connection(parsed_url, ssl=ssl_context)\n response = await connection.send(request)\n if not stream:\n try:\n await response.read()\n finally:\n await response.close()\n return response\n\n @property\n def num_connections(self) -> int:\n return self.num_active_connections + self.num_keepalive_connections\n\n async def acquire_connection(\n self, url: URL, *, ssl: typing.Optional[ssl.SSLContext] = None\n ) -> Connection:\n key = (url.scheme, url.hostname, url.port)\n try:\n connection = self._connections[key].pop()\n if not self._connections[key]:\n del self._connections[key]\n self.num_keepalive_connections -= 1\n self.num_active_connections += 1\n\n except (KeyError, IndexError):\n await self._connection_semaphore.acquire()\n release = functools.partial(self.release_connection, key=key)\n connection = Connection(timeout=self.timeout, on_release=release)\n self.num_active_connections += 1\n await connection.open(url.hostname, url.port, ssl=ssl)\n\n return connection\n\n async def release_connection(\n self, connection: Connection, key: ConnectionKey\n ) -> None:\n if connection.is_closed:\n self._connection_semaphore.release()\n self.num_active_connections -= 1\n elif (\n self.limits.soft_limit is not None\n and self.num_connections > self.limits.soft_limit\n ):\n self._connection_semaphore.release()\n self.num_active_connections -= 1\n connection.close()\n else:\n self.num_active_connections -= 1\n self.num_keepalive_connections += 1\n try:\n self._connections[key].append(connection)\n except KeyError:\n self._connections[key] = [connection]\n\n async def get_ssl_context(self, url: URL) -> typing.Optional[ssl.SSLContext]:\n if not url.is_secure:\n return None\n\n if not hasattr(self, \"ssl_context\"):\n if not self.ssl_config.verify:\n self.ssl_context = self.get_ssl_context_no_verify()\n else:\n # Run the SSL loading in a threadpool, since it makes disk accesses.\n loop = asyncio.get_event_loop()\n self.ssl_context = await loop.run_in_executor(\n None, self.get_ssl_context_verify\n )\n\n return self.ssl_context\n\n def get_ssl_context_no_verify(self) -> ssl.SSLContext:\n \"\"\"\n Return an SSL context for unverified connections.\n \"\"\"\n context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)\n context.options |= ssl.OP_NO_SSLv2\n context.options |= ssl.OP_NO_SSLv3\n context.options |= ssl.OP_NO_COMPRESSION\n context.set_default_verify_paths()\n return context\n\n def get_ssl_context_verify(self) -> ssl.SSLContext:\n \"\"\"\n Return an SSL context for verified connections.\n \"\"\"\n cert = self.ssl_config.cert\n verify = self.ssl_config.verify\n\n if isinstance(verify, bool):\n ca_bundle_path = DEFAULT_CA_BUNDLE_PATH\n elif os.path.exists(verify):\n ca_bundle_path = verify\n else:\n raise IOError(\n \"Could not find a suitable TLS CA certificate bundle, \"\n \"invalid path: {}\".format(verify)\n )\n\n context = 
ssl.create_default_context()\n if os.path.isfile(ca_bundle_path):\n context.load_verify_locations(cafile=ca_bundle_path)\n elif os.path.isdir(ca_bundle_path):\n context.load_verify_locations(capath=ca_bundle_path)\n\n if cert is not None:\n if isinstance(cert, str):\n context.load_cert_chain(certfile=cert)\n else:\n context.load_cert_chain(certfile=cert[0], keyfile=cert[1])\n\n return context\n\n async def close(self) -> None:\n self.is_closed = True\n\n async def __aenter__(self) -> \"ConnectionPool\":\n return self\n\n async def __aexit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n await self.close()\n", "path": "httpcore/pool.py"}]}
| 2,469 | 539 |
gh_patches_debug_5944
|
rasdani/github-patches
|
git_diff
|
Gallopsled__pwntools-1735
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
opcode LIST_EXTEND not allowed
Python 3.9.0 brings the new opcode [`LIST_EXTEND`](https://docs.python.org/3/library/dis.html#opcode-LIST_EXTEND), which leads to an error during `pwn` import:
```
$ cat ~/.pwn.conf
[context]
terminal = ['alacritty', '-e', 'sh', '-c']
$ ipython
Python 3.9.0 (default, Oct 7 2020, 23:09:01)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import pwn
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-1-85301d96339b> in <module>
----> 1 import pwn
~/src/pwntools/pwn/toplevel.py in <module>
6 pwnlib.args.initialize()
7 pwnlib.log.install_default_handler()
----> 8 pwnlib.config.initialize()
9
10 log = pwnlib.log.getLogger('pwnlib.exploit')
~/src/pwntools/pwnlib/config.py in initialize()
62 continue
63 settings = dict(c.items(section))
---> 64 registered_configs[section](settings)
~/src/pwntools/pwnlib/context/__init__.py in update_context_defaults(section)
1491
1492 if isinstance(default, six.string_types + six.integer_types + (tuple, list, dict)):
-> 1493 value = safeeval.expr(value)
1494 else:
1495 log.warn("Unsupported configuration option %r in section %r" % (key, 'context'))
~/src/pwntools/pwnlib/util/safeeval.py in expr(expr)
109 """
110
--> 111 c = test_expr(expr, _expr_codes)
112 return eval(c)
113
~/src/pwntools/pwnlib/util/safeeval.py in test_expr(expr, allowed_codes)
61 for code in codes:
62 if code not in allowed_codes:
---> 63 raise ValueError("opcode %s not allowed" % dis.opname[code])
64 return c
65
ValueError: opcode LIST_EXTEND not allowed
```
Adding `'LIST_EXTEND'` to [`_const_codes`](https://github.com/Gallopsled/pwntools/blob/db8df476b2bed695dea14eb726b45237b972811f/pwnlib/util/safeeval.py#L3) or [`_expr_codes`](https://github.com/Gallopsled/pwntools/blob/db8df476b2bed695dea14eb726b45237b972811f/pwnlib/util/safeeval.py#L9) seems to solve the issue, but I'm not familiar with this code.
Also, there are more new opcodes: `LIST_TO_TUPLE`, `SET_UPDATE`, etc.
</issue>
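A quick, self-contained way to see which of the newer opcode names the running interpreter actually defines, and which opcodes a plain literal compiles to — useful when deciding what to add to the whitelist (the candidate list below is illustrative, not exhaustive):

```python
import dis

candidates = ["LIST_EXTEND", "LIST_TO_TUPLE", "SET_UPDATE", "DICT_UPDATE", "DICT_MERGE"]
print({name: name in dis.opmap for name in candidates})

# Opcodes actually emitted for a constant expression on this interpreter:
code = compile("[1, 2, (3, 4), {'foo': 'bar'}]", "", "eval")
print([ins.opname for ins in dis.get_instructions(code)])
```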
<code>
[start of pwnlib/util/safeeval.py]
1 from __future__ import division
2
3 _const_codes = [
4 'POP_TOP','ROT_TWO','ROT_THREE','ROT_FOUR','DUP_TOP',
5 'BUILD_LIST','BUILD_MAP','BUILD_TUPLE','BUILD_SET',
6 'LOAD_CONST','RETURN_VALUE','STORE_SUBSCR', 'STORE_MAP'
7 ]
8
9 _expr_codes = _const_codes + [
10 'UNARY_POSITIVE','UNARY_NEGATIVE','UNARY_NOT',
11 'UNARY_INVERT','BINARY_POWER','BINARY_MULTIPLY',
12 'BINARY_DIVIDE','BINARY_FLOOR_DIVIDE','BINARY_TRUE_DIVIDE',
13 'BINARY_MODULO','BINARY_ADD','BINARY_SUBTRACT',
14 'BINARY_LSHIFT','BINARY_RSHIFT','BINARY_AND','BINARY_XOR',
15 'BINARY_OR',
16 ]
17
18 _values_codes = _expr_codes + ['LOAD_NAME']
19
20 import six
21
22 def _get_opcodes(codeobj):
23 """_get_opcodes(codeobj) -> [opcodes]
24
25 Extract the actual opcodes as a list from a code object
26
27 >>> c = compile("[1 + 2, (1,2)]", "", "eval")
28 >>> _get_opcodes(c)
29 [100, 100, 103, 83]
30 """
31 import dis
32 if hasattr(dis, 'get_instructions'):
33 return [ins.opcode for ins in dis.get_instructions(codeobj)]
34 i = 0
35 opcodes = []
36 s = codeobj.co_code
37 while i < len(s):
38 code = six.indexbytes(s, i)
39 opcodes.append(code)
40 if code >= dis.HAVE_ARGUMENT:
41 i += 3
42 else:
43 i += 1
44 return opcodes
45
46 def test_expr(expr, allowed_codes):
47 """test_expr(expr, allowed_codes) -> codeobj
48
49 Test that the expression contains only the listed opcodes.
50 If the expression is valid and contains only allowed codes,
51 return the compiled code object. Otherwise raise a ValueError
52 """
53 import dis
54 allowed_codes = [dis.opmap[c] for c in allowed_codes if c in dis.opmap]
55 try:
56 c = compile(expr, "", "eval")
57 except SyntaxError:
58 raise ValueError("%s is not a valid expression" % expr)
59 codes = _get_opcodes(c)
60 for code in codes:
61 if code not in allowed_codes:
62 raise ValueError("opcode %s not allowed" % dis.opname[code])
63 return c
64
65 def const(expr):
66 """const(expression) -> value
67
68 Safe Python constant evaluation
69
70 Evaluates a string that contains an expression describing
71 a Python constant. Strings that are not valid Python expressions
72 or that contain other code besides the constant raise ValueError.
73
74 Examples:
75
76 >>> const("10")
77 10
78 >>> const("[1,2, (3,4), {'foo':'bar'}]")
79 [1, 2, (3, 4), {'foo': 'bar'}]
80 >>> const("[1]+[2]")
81 Traceback (most recent call last):
82 ...
83 ValueError: opcode BINARY_ADD not allowed
84 """
85
86 c = test_expr(expr, _const_codes)
87 return eval(c)
88
89 def expr(expr):
90 """expr(expression) -> value
91
92 Safe Python expression evaluation
93
94 Evaluates a string that contains an expression that only
95 uses Python constants. This can be used to e.g. evaluate
96 a numerical expression from an untrusted source.
97
98 Examples:
99
100 >>> expr("1+2")
101 3
102 >>> expr("[1,2]*2")
103 [1, 2, 1, 2]
104 >>> expr("__import__('sys').modules")
105 Traceback (most recent call last):
106 ...
107 ValueError: opcode LOAD_NAME not allowed
108 """
109
110 c = test_expr(expr, _expr_codes)
111 return eval(c)
112
113 def values(expr, env):
114 """values(expression, dict) -> value
115
116 Safe Python expression evaluation
117
118 Evaluates a string that contains an expression that only
119 uses Python constants and values from a supplied dictionary.
120 This can be used to e.g. evaluate e.g. an argument to a syscall.
121
122 Note: This is potentially unsafe if e.g. the __add__ method has side
123 effects.
124
125 Examples:
126
127 >>> values("A + 4", {'A': 6})
128 10
129 >>> class Foo:
130 ... def __add__(self, other):
131 ... print("Firing the missiles")
132 >>> values("A + 1", {'A': Foo()})
133 Firing the missiles
134 >>> values("A.x", {'A': Foo()})
135 Traceback (most recent call last):
136 ...
137 ValueError: opcode LOAD_ATTR not allowed
138 """
139
140 # The caller might need his dictionary again
141 env = dict(env)
142
143 # We do not want to have built-ins set
144 env['__builtins__'] = {}
145
146 c = test_expr(expr, _values_codes)
147 return eval(c, env)
148
[end of pwnlib/util/safeeval.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwnlib/util/safeeval.py b/pwnlib/util/safeeval.py
--- a/pwnlib/util/safeeval.py
+++ b/pwnlib/util/safeeval.py
@@ -3,7 +3,9 @@
_const_codes = [
'POP_TOP','ROT_TWO','ROT_THREE','ROT_FOUR','DUP_TOP',
'BUILD_LIST','BUILD_MAP','BUILD_TUPLE','BUILD_SET',
- 'LOAD_CONST','RETURN_VALUE','STORE_SUBSCR', 'STORE_MAP'
+ 'BUILD_CONST_KEY_MAP', 'BUILD_STRING',
+ 'LOAD_CONST','RETURN_VALUE','STORE_SUBSCR', 'STORE_MAP',
+ 'LIST_TO_TUPLE', 'LIST_EXTEND', 'SET_UPDATE', 'DICT_UPDATE', 'DICT_MERGE',
]
_expr_codes = _const_codes + [
|
{"golden_diff": "diff --git a/pwnlib/util/safeeval.py b/pwnlib/util/safeeval.py\n--- a/pwnlib/util/safeeval.py\n+++ b/pwnlib/util/safeeval.py\n@@ -3,7 +3,9 @@\n _const_codes = [\n 'POP_TOP','ROT_TWO','ROT_THREE','ROT_FOUR','DUP_TOP',\n 'BUILD_LIST','BUILD_MAP','BUILD_TUPLE','BUILD_SET',\n- 'LOAD_CONST','RETURN_VALUE','STORE_SUBSCR', 'STORE_MAP'\n+ 'BUILD_CONST_KEY_MAP', 'BUILD_STRING',\n+ 'LOAD_CONST','RETURN_VALUE','STORE_SUBSCR', 'STORE_MAP',\n+ 'LIST_TO_TUPLE', 'LIST_EXTEND', 'SET_UPDATE', 'DICT_UPDATE', 'DICT_MERGE',\n ]\n \n _expr_codes = _const_codes + [\n", "issue": "opcode LIST_EXTEND not allowed\nPython 3.9.0 brings new opcode [`LIST_EXTEND`](https://docs.python.org/3/library/dis.html#opcode-LIST_EXTEND), which leads to an error during `pwn` import:\r\n\r\n```\r\n$ cat ~/.pwn.conf\r\n[context]\r\nterminal = ['alacritty', '-e', 'sh', '-c']\r\n$ ipython\r\nPython 3.9.0 (default, Oct 7 2020, 23:09:01)\r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: import pwn\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-1-85301d96339b> in <module>\r\n----> 1 import pwn\r\n\r\n~/src/pwntools/pwn/toplevel.py in <module>\r\n 6 pwnlib.args.initialize()\r\n 7 pwnlib.log.install_default_handler()\r\n----> 8 pwnlib.config.initialize()\r\n 9\r\n 10 log = pwnlib.log.getLogger('pwnlib.exploit')\r\n\r\n~/src/pwntools/pwnlib/config.py in initialize()\r\n 62 continue\r\n 63 settings = dict(c.items(section))\r\n---> 64 registered_configs[section](settings)\r\n\r\n~/src/pwntools/pwnlib/context/__init__.py in update_context_defaults(section)\r\n 1491\r\n 1492 if isinstance(default, six.string_types + six.integer_types + (tuple, list, dict)):\r\n-> 1493 value = safeeval.expr(value)\r\n 1494 else:\r\n 1495 log.warn(\"Unsupported configuration option %r in section %r\" % (key, 'context'))\r\n\r\n~/src/pwntools/pwnlib/util/safeeval.py in expr(expr)\r\n 109 \"\"\"\r\n 110\r\n--> 111 c = test_expr(expr, _expr_codes)\r\n 112 return eval(c)\r\n 113\r\n\r\n~/src/pwntools/pwnlib/util/safeeval.py in test_expr(expr, allowed_codes)\r\n 61 for code in codes:\r\n 62 if code not in allowed_codes:\r\n---> 63 raise ValueError(\"opcode %s not allowed\" % dis.opname[code])\r\n 64 return c\r\n 65\r\n\r\nValueError: opcode LIST_EXTEND not allowed\r\n```\r\n\r\nAdding `'LIST_EXTEND'` to [`_const_codes`](https://github.com/Gallopsled/pwntools/blob/db8df476b2bed695dea14eb726b45237b972811f/pwnlib/util/safeeval.py#L3) or [`_expr_codes`](https://github.com/Gallopsled/pwntools/blob/db8df476b2bed695dea14eb726b45237b972811f/pwnlib/util/safeeval.py#L9) seems to solve the issue, but I'm not familiar with this code.\r\n\r\nAlso, there are more new opcodes: `LIST_TO_TUPLE`, `SET_UPDATE`, etc.\n", "before_files": [{"content": "from __future__ import division\n\n_const_codes = [\n 'POP_TOP','ROT_TWO','ROT_THREE','ROT_FOUR','DUP_TOP',\n 'BUILD_LIST','BUILD_MAP','BUILD_TUPLE','BUILD_SET',\n 'LOAD_CONST','RETURN_VALUE','STORE_SUBSCR', 'STORE_MAP'\n ]\n\n_expr_codes = _const_codes + [\n 'UNARY_POSITIVE','UNARY_NEGATIVE','UNARY_NOT',\n 'UNARY_INVERT','BINARY_POWER','BINARY_MULTIPLY',\n 'BINARY_DIVIDE','BINARY_FLOOR_DIVIDE','BINARY_TRUE_DIVIDE',\n 'BINARY_MODULO','BINARY_ADD','BINARY_SUBTRACT',\n 'BINARY_LSHIFT','BINARY_RSHIFT','BINARY_AND','BINARY_XOR',\n 'BINARY_OR',\n ]\n\n_values_codes = _expr_codes + ['LOAD_NAME']\n\nimport six\n\ndef 
_get_opcodes(codeobj):\n \"\"\"_get_opcodes(codeobj) -> [opcodes]\n\n Extract the actual opcodes as a list from a code object\n\n >>> c = compile(\"[1 + 2, (1,2)]\", \"\", \"eval\")\n >>> _get_opcodes(c)\n [100, 100, 103, 83]\n \"\"\"\n import dis\n if hasattr(dis, 'get_instructions'):\n return [ins.opcode for ins in dis.get_instructions(codeobj)]\n i = 0\n opcodes = []\n s = codeobj.co_code\n while i < len(s):\n code = six.indexbytes(s, i)\n opcodes.append(code)\n if code >= dis.HAVE_ARGUMENT:\n i += 3\n else:\n i += 1\n return opcodes\n\ndef test_expr(expr, allowed_codes):\n \"\"\"test_expr(expr, allowed_codes) -> codeobj\n\n Test that the expression contains only the listed opcodes.\n If the expression is valid and contains only allowed codes,\n return the compiled code object. Otherwise raise a ValueError\n \"\"\"\n import dis\n allowed_codes = [dis.opmap[c] for c in allowed_codes if c in dis.opmap]\n try:\n c = compile(expr, \"\", \"eval\")\n except SyntaxError:\n raise ValueError(\"%s is not a valid expression\" % expr)\n codes = _get_opcodes(c)\n for code in codes:\n if code not in allowed_codes:\n raise ValueError(\"opcode %s not allowed\" % dis.opname[code])\n return c\n\ndef const(expr):\n \"\"\"const(expression) -> value\n\n Safe Python constant evaluation\n\n Evaluates a string that contains an expression describing\n a Python constant. Strings that are not valid Python expressions\n or that contain other code besides the constant raise ValueError.\n\n Examples:\n\n >>> const(\"10\")\n 10\n >>> const(\"[1,2, (3,4), {'foo':'bar'}]\")\n [1, 2, (3, 4), {'foo': 'bar'}]\n >>> const(\"[1]+[2]\")\n Traceback (most recent call last):\n ...\n ValueError: opcode BINARY_ADD not allowed\n \"\"\"\n\n c = test_expr(expr, _const_codes)\n return eval(c)\n\ndef expr(expr):\n \"\"\"expr(expression) -> value\n\n Safe Python expression evaluation\n\n Evaluates a string that contains an expression that only\n uses Python constants. This can be used to e.g. evaluate\n a numerical expression from an untrusted source.\n\n Examples:\n\n >>> expr(\"1+2\")\n 3\n >>> expr(\"[1,2]*2\")\n [1, 2, 1, 2]\n >>> expr(\"__import__('sys').modules\")\n Traceback (most recent call last):\n ...\n ValueError: opcode LOAD_NAME not allowed\n \"\"\"\n\n c = test_expr(expr, _expr_codes)\n return eval(c)\n\ndef values(expr, env):\n \"\"\"values(expression, dict) -> value\n\n Safe Python expression evaluation\n\n Evaluates a string that contains an expression that only\n uses Python constants and values from a supplied dictionary.\n This can be used to e.g. evaluate e.g. an argument to a syscall.\n\n Note: This is potentially unsafe if e.g. the __add__ method has side\n effects.\n\n Examples:\n\n >>> values(\"A + 4\", {'A': 6})\n 10\n >>> class Foo:\n ... def __add__(self, other):\n ... print(\"Firing the missiles\")\n >>> values(\"A + 1\", {'A': Foo()})\n Firing the missiles\n >>> values(\"A.x\", {'A': Foo()})\n Traceback (most recent call last):\n ...\n ValueError: opcode LOAD_ATTR not allowed\n \"\"\"\n\n # The caller might need his dictionary again\n env = dict(env)\n\n # We do not want to have built-ins set\n env['__builtins__'] = {}\n\n c = test_expr(expr, _values_codes)\n return eval(c, env)\n", "path": "pwnlib/util/safeeval.py"}]}
| 2,722 | 180 |
gh_patches_debug_17938
|
rasdani/github-patches
|
git_diff
|
acl-org__acl-anthology-998
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Volume links are 404
The "URL" field on volume pages is 404, e.g., https://www.aclweb.org/anthology/2020.acl-main.
We need to either (a) update `.htaccess` to recognize and redirect volume links or (b) update the URL to https://www.aclweb.org/anthology/volumes/2020.acl-main
</issue>
<code>
[start of bin/anthology/papers.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2019 Marcel Bollmann <[email protected]>
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import iso639
18 import logging as log
19 from .utils import (
20 build_anthology_id,
21 parse_element,
22 infer_attachment_url,
23 remove_extra_whitespace,
24 is_journal,
25 is_volume_id,
26 )
27 from . import data
28
29 # For BibTeX export
30 from .formatter import bibtex_encode, bibtex_make_entry
31
32
33 class Paper:
34 def __init__(self, paper_id, ingest_date, volume, formatter):
35 self.parent_volume = volume
36 self.formatter = formatter
37 self._id = paper_id
38 self._ingest_date = ingest_date
39 self._bibkey = False
40 self.is_volume = paper_id == "0"
41
42 # initialize metadata with keys inherited from volume
43 self.attrib = {}
44 for key, value in volume.attrib.items():
45 # Only inherit 'editor' for frontmatter
46 if (key == "editor" and not self.is_volume) or key in (
47 "collection_id",
48 "booktitle",
49 "id",
50 "meta_data",
51 "meta_journal_title",
52 "meta_volume",
53 "meta_issue",
54 "sigs",
55 "venues",
56 "meta_date",
57 "url",
58 "pdf",
59 ):
60 continue
61
62 self.attrib[key] = value
63
64 def from_xml(xml_element, *args):
65 ingest_date = xml_element.get("ingest-date", data.UNKNOWN_INGEST_DATE)
66
67 # Default to paper ID "0" (for front matter)
68 paper = Paper(xml_element.get("id", "0"), ingest_date, *args)
69
70 # Set values from parsing the XML element (overwriting
71 # and changing some initialized from the volume metadata)
72 for key, value in parse_element(xml_element).items():
73 if key == "author" and "editor" in paper.attrib:
74 del paper.attrib["editor"]
75 paper.attrib[key] = value
76
77 # Frontmatter title is the volume 'booktitle'
78 if paper.is_volume:
79 paper.attrib["xml_title"] = paper.attrib["xml_booktitle"]
80 paper.attrib["xml_title"].tag = "title"
81
82 # Remove booktitle for frontmatter and journals
83 if paper.is_volume or is_journal(paper.full_id):
84 del paper.attrib["xml_booktitle"]
85
86 # Expand URLs with paper ID
87 for tag in ("revision", "erratum"):
88 if tag in paper.attrib:
89 for item in paper.attrib[tag]:
90 if not item["url"].startswith(paper.full_id):
91 log.error(
92 "{} must begin with paper ID '{}', but is '{}'".format(
93 tag, paper.full_id, item["url"]
94 )
95 )
96 item["url"] = data.ANTHOLOGY_PDF.format(item["url"])
97
98 if "attachment" in paper.attrib:
99 for item in paper.attrib["attachment"]:
100 item["url"] = infer_attachment_url(item["url"], paper.full_id)
101
102 paper.attrib["title"] = paper.get_title("plain")
103 paper.attrib["booktitle"] = paper.get_booktitle("plain")
104
105 if "editor" in paper.attrib:
106 if paper.is_volume:
107 if "author" in paper.attrib:
108 log.warn(
109 "Paper {} has both <editor> and <author>; ignoring <author>".format(
110 paper.full_id
111 )
112 )
113 # Proceedings editors are considered authors for their front matter
114 paper.attrib["author"] = paper.attrib["editor"]
115 del paper.attrib["editor"]
116 else:
117 log.warn(
118 "Paper {} has <editor> but is not a proceedings volume; ignoring <editor>".format(
119 paper.full_id
120 )
121 )
122 if "pages" in paper.attrib:
123 if paper.attrib["pages"] is not None:
124 paper._interpret_pages()
125 else:
126 del paper.attrib["pages"]
127
128 if "author" in paper.attrib:
129 paper.attrib["author_string"] = ", ".join(
130 [x[0].full for x in paper.attrib["author"]]
131 )
132
133 paper.attrib["thumbnail"] = data.ANTHOLOGY_THUMBNAIL.format(paper.full_id)
134
135 return paper
136
137 def _interpret_pages(self):
138 """Splits up 'pages' field into first and last page, if possible.
139
140 This is used for metadata in the generated HTML."""
141         for s in ("--", "-", "–"):
142 if self.attrib["pages"].count(s) == 1:
143 self.attrib["page_first"], self.attrib["page_last"] = self.attrib[
144 "pages"
145 ].split(s)
146                 self.attrib["pages"] = self.attrib["pages"].replace(s, "–")
147 return
148
149 @property
150 def ingest_date(self):
151 """Inherit publication date from parent, but self overrides. May be undefined."""
152 if self._ingest_date:
153 return self._ingest_date
154 if self.parent_volume:
155 return self.parent_volume.ingest_date
156 return data.UNKNOWN_INGEST_DATE
157
158 @property
159 def collection_id(self):
160 return self.parent_volume.collection_id
161
162 @property
163 def volume_id(self):
164 return self.parent_volume.volume_id
165
166 @property
167 def paper_id(self):
168 return self._id
169
170 @property
171 def full_id(self):
172 return self.anthology_id
173
174 @property
175 def anthology_id(self):
176 return build_anthology_id(self.collection_id, self.volume_id, self.paper_id)
177
178 @property
179 def bibkey(self):
180 if not self._bibkey:
181 self._bibkey = self.full_id # fallback
182 return self._bibkey
183
184 @bibkey.setter
185 def bibkey(self, value):
186 self._bibkey = value
187
188 @property
189 def bibtype(self):
190 if is_journal(self.full_id):
191 return "article"
192 elif self.is_volume:
193 return "proceedings"
194 else:
195 return "inproceedings"
196
197 @property
198 def parent_volume_id(self):
199 if self.parent_volume is not None:
200 return self.parent_volume.full_id
201 return None
202
203 @property
204 def has_abstract(self):
205 return "xml_abstract" in self.attrib
206
207 @property
208 def isbn(self):
209 return self.attrib.get("isbn", None)
210
211 @property
212 def language(self):
213 """Returns the ISO-639 language code, if present"""
214 return self.attrib.get("language", None)
215
216 def get(self, name, default=None):
217 try:
218 return self.attrib[name]
219 except KeyError:
220 return default
221
222 def get_title(self, form="xml"):
223 """Returns the paper title, optionally formatting it.
224
225 Accepted formats:
226 - xml: Include any contained XML tags unchanged
227 - plain: Strip all XML tags, returning only plain text
228 - html: Convert XML tags into valid HTML tags
229 - latex: Convert XML tags into LaTeX commands
230 """
231 return self.formatter(self.get("xml_title"), form)
232
233 def get_abstract(self, form="xml"):
234 """Returns the abstract, optionally formatting it.
235
236 See `get_title()` for details.
237 """
238 return self.formatter(self.get("xml_abstract"), form, allow_url=True)
239
240 def get_booktitle(self, form="xml", default=""):
241 """Returns the booktitle, optionally formatting it.
242
243 See `get_title()` for details.
244 """
245 if "xml_booktitle" in self.attrib:
246 return self.formatter(self.get("xml_booktitle"), form)
247 elif self.parent_volume is not None:
248 return self.parent_volume.get("title")
249 else:
250 return default
251
252 def as_bibtex(self, concise=False):
253 """Return the BibTeX entry for this paper."""
254 # Build BibTeX entry
255 bibkey = self.bibkey
256 bibtype = self.bibtype
257 entries = [("title", self.get_title(form="latex"))]
258 for people in ("author", "editor"):
259 if people in self.attrib:
260 entries.append(
261 (people, " and ".join(p.as_bibtex() for p, _ in self.get(people)))
262 )
263 if is_journal(self.full_id):
264 entries.append(
265 ("journal", bibtex_encode(self.parent_volume.get("meta_journal_title")))
266 )
267 journal_volume = self.parent_volume.get(
268 "meta_volume", self.parent_volume.get("volume")
269 )
270 if journal_volume:
271 entries.append(("volume", journal_volume))
272 journal_issue = self.parent_volume.get(
273 "meta_issue", self.parent_volume.get("issue")
274 )
275 if journal_issue:
276 entries.append(("number", journal_issue))
277 else:
278 # not is_journal(self.full_id)
279 if "xml_booktitle" in self.attrib:
280 entries.append(("booktitle", self.get_booktitle(form="latex")))
281 elif bibtype != "proceedings":
282 entries.append(("booktitle", self.parent_volume.get_title(form="latex")))
283 for entry in ("month", "year", "address", "publisher", "note"):
284 if self.get(entry) is not None:
285 entries.append((entry, bibtex_encode(self.get(entry))))
286 for entry in ("url", "doi"):
287 if entry in self.attrib:
288 # don't want latex escapes such as
289 # doi = "10.1162/coli{\_}a{\_}00008",
290 entries.append((entry, self.get(entry)))
291 if "pages" in self.attrib:
292             entries.append(("pages", self.get("pages").replace("–", "--")))
293 if "xml_abstract" in self.attrib and not concise:
294 entries.append(("abstract", self.get_abstract(form="latex")))
295 if self.language:
296 entries.append(("language", iso639.languages.get(part3=self.language).name))
297 if self.isbn:
298 entries.append(("ISBN", self.isbn))
299
300 # Serialize it
301 return bibtex_make_entry(bibkey, bibtype, entries)
302
303 def as_dict(self):
304 value = self.attrib
305 value["paper_id"] = self.paper_id
306 value["parent_volume_id"] = self.parent_volume_id
307 value["bibkey"] = self.bibkey
308 value["bibtype"] = self.bibtype
309 return value
310
311 def items(self):
312 return self.attrib.items()
313
[end of bin/anthology/papers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bin/anthology/papers.py b/bin/anthology/papers.py
--- a/bin/anthology/papers.py
+++ b/bin/anthology/papers.py
@@ -19,6 +19,7 @@
from .utils import (
build_anthology_id,
parse_element,
+ infer_url,
infer_attachment_url,
remove_extra_whitespace,
is_journal,
@@ -79,6 +80,10 @@
paper.attrib["xml_title"] = paper.attrib["xml_booktitle"]
paper.attrib["xml_title"].tag = "title"
+ # Create URL field if not present. But see https://github.com/acl-org/acl-anthology/issues/997.
+ if "url" not in paper.attrib:
+ paper.attrib["url"] = infer_url(paper.full_id)
+
# Remove booktitle for frontmatter and journals
if paper.is_volume or is_journal(paper.full_id):
del paper.attrib["xml_booktitle"]
|
{"golden_diff": "diff --git a/bin/anthology/papers.py b/bin/anthology/papers.py\n--- a/bin/anthology/papers.py\n+++ b/bin/anthology/papers.py\n@@ -19,6 +19,7 @@\n from .utils import (\n build_anthology_id,\n parse_element,\n+ infer_url,\n infer_attachment_url,\n remove_extra_whitespace,\n is_journal,\n@@ -79,6 +80,10 @@\n paper.attrib[\"xml_title\"] = paper.attrib[\"xml_booktitle\"]\n paper.attrib[\"xml_title\"].tag = \"title\"\n \n+ # Create URL field if not present. But see https://github.com/acl-org/acl-anthology/issues/997.\n+ if \"url\" not in paper.attrib:\n+ paper.attrib[\"url\"] = infer_url(paper.full_id)\n+\n # Remove booktitle for frontmatter and journals\n if paper.is_volume or is_journal(paper.full_id):\n del paper.attrib[\"xml_booktitle\"]\n", "issue": "Volume links are 404\nThe \"URL\" field on volume pages is 404, e.g., https://www.aclweb.org/anthology/2020.acl-main.\r\n\r\n\r\n\r\nWe need to either (a) update `.htaccess` to recognize and redirect volume links or (b) update the URL to https://www.aclweb.org/anthology/volumes/2020.acl-main\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2019 Marcel Bollmann <[email protected]>\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport iso639\nimport logging as log\nfrom .utils import (\n build_anthology_id,\n parse_element,\n infer_attachment_url,\n remove_extra_whitespace,\n is_journal,\n is_volume_id,\n)\nfrom . 
import data\n\n# For BibTeX export\nfrom .formatter import bibtex_encode, bibtex_make_entry\n\n\nclass Paper:\n def __init__(self, paper_id, ingest_date, volume, formatter):\n self.parent_volume = volume\n self.formatter = formatter\n self._id = paper_id\n self._ingest_date = ingest_date\n self._bibkey = False\n self.is_volume = paper_id == \"0\"\n\n # initialize metadata with keys inherited from volume\n self.attrib = {}\n for key, value in volume.attrib.items():\n # Only inherit 'editor' for frontmatter\n if (key == \"editor\" and not self.is_volume) or key in (\n \"collection_id\",\n \"booktitle\",\n \"id\",\n \"meta_data\",\n \"meta_journal_title\",\n \"meta_volume\",\n \"meta_issue\",\n \"sigs\",\n \"venues\",\n \"meta_date\",\n \"url\",\n \"pdf\",\n ):\n continue\n\n self.attrib[key] = value\n\n def from_xml(xml_element, *args):\n ingest_date = xml_element.get(\"ingest-date\", data.UNKNOWN_INGEST_DATE)\n\n # Default to paper ID \"0\" (for front matter)\n paper = Paper(xml_element.get(\"id\", \"0\"), ingest_date, *args)\n\n # Set values from parsing the XML element (overwriting\n # and changing some initialized from the volume metadata)\n for key, value in parse_element(xml_element).items():\n if key == \"author\" and \"editor\" in paper.attrib:\n del paper.attrib[\"editor\"]\n paper.attrib[key] = value\n\n # Frontmatter title is the volume 'booktitle'\n if paper.is_volume:\n paper.attrib[\"xml_title\"] = paper.attrib[\"xml_booktitle\"]\n paper.attrib[\"xml_title\"].tag = \"title\"\n\n # Remove booktitle for frontmatter and journals\n if paper.is_volume or is_journal(paper.full_id):\n del paper.attrib[\"xml_booktitle\"]\n\n # Expand URLs with paper ID\n for tag in (\"revision\", \"erratum\"):\n if tag in paper.attrib:\n for item in paper.attrib[tag]:\n if not item[\"url\"].startswith(paper.full_id):\n log.error(\n \"{} must begin with paper ID '{}', but is '{}'\".format(\n tag, paper.full_id, item[\"url\"]\n )\n )\n item[\"url\"] = data.ANTHOLOGY_PDF.format(item[\"url\"])\n\n if \"attachment\" in paper.attrib:\n for item in paper.attrib[\"attachment\"]:\n item[\"url\"] = infer_attachment_url(item[\"url\"], paper.full_id)\n\n paper.attrib[\"title\"] = paper.get_title(\"plain\")\n paper.attrib[\"booktitle\"] = paper.get_booktitle(\"plain\")\n\n if \"editor\" in paper.attrib:\n if paper.is_volume:\n if \"author\" in paper.attrib:\n log.warn(\n \"Paper {} has both <editor> and <author>; ignoring <author>\".format(\n paper.full_id\n )\n )\n # Proceedings editors are considered authors for their front matter\n paper.attrib[\"author\"] = paper.attrib[\"editor\"]\n del paper.attrib[\"editor\"]\n else:\n log.warn(\n \"Paper {} has <editor> but is not a proceedings volume; ignoring <editor>\".format(\n paper.full_id\n )\n )\n if \"pages\" in paper.attrib:\n if paper.attrib[\"pages\"] is not None:\n paper._interpret_pages()\n else:\n del paper.attrib[\"pages\"]\n\n if \"author\" in paper.attrib:\n paper.attrib[\"author_string\"] = \", \".join(\n [x[0].full for x in paper.attrib[\"author\"]]\n )\n\n paper.attrib[\"thumbnail\"] = data.ANTHOLOGY_THUMBNAIL.format(paper.full_id)\n\n return paper\n\n def _interpret_pages(self):\n \"\"\"Splits up 'pages' field into first and last page, if possible.\n\n This is used for metadata in the generated HTML.\"\"\"\n for s in (\"--\", \"-\", \"\u2013\"):\n if self.attrib[\"pages\"].count(s) == 1:\n self.attrib[\"page_first\"], self.attrib[\"page_last\"] = self.attrib[\n \"pages\"\n ].split(s)\n self.attrib[\"pages\"] = self.attrib[\"pages\"].replace(s, 
\"\u2013\")\n return\n\n @property\n def ingest_date(self):\n \"\"\"Inherit publication date from parent, but self overrides. May be undefined.\"\"\"\n if self._ingest_date:\n return self._ingest_date\n if self.parent_volume:\n return self.parent_volume.ingest_date\n return data.UNKNOWN_INGEST_DATE\n\n @property\n def collection_id(self):\n return self.parent_volume.collection_id\n\n @property\n def volume_id(self):\n return self.parent_volume.volume_id\n\n @property\n def paper_id(self):\n return self._id\n\n @property\n def full_id(self):\n return self.anthology_id\n\n @property\n def anthology_id(self):\n return build_anthology_id(self.collection_id, self.volume_id, self.paper_id)\n\n @property\n def bibkey(self):\n if not self._bibkey:\n self._bibkey = self.full_id # fallback\n return self._bibkey\n\n @bibkey.setter\n def bibkey(self, value):\n self._bibkey = value\n\n @property\n def bibtype(self):\n if is_journal(self.full_id):\n return \"article\"\n elif self.is_volume:\n return \"proceedings\"\n else:\n return \"inproceedings\"\n\n @property\n def parent_volume_id(self):\n if self.parent_volume is not None:\n return self.parent_volume.full_id\n return None\n\n @property\n def has_abstract(self):\n return \"xml_abstract\" in self.attrib\n\n @property\n def isbn(self):\n return self.attrib.get(\"isbn\", None)\n\n @property\n def language(self):\n \"\"\"Returns the ISO-639 language code, if present\"\"\"\n return self.attrib.get(\"language\", None)\n\n def get(self, name, default=None):\n try:\n return self.attrib[name]\n except KeyError:\n return default\n\n def get_title(self, form=\"xml\"):\n \"\"\"Returns the paper title, optionally formatting it.\n\n Accepted formats:\n - xml: Include any contained XML tags unchanged\n - plain: Strip all XML tags, returning only plain text\n - html: Convert XML tags into valid HTML tags\n - latex: Convert XML tags into LaTeX commands\n \"\"\"\n return self.formatter(self.get(\"xml_title\"), form)\n\n def get_abstract(self, form=\"xml\"):\n \"\"\"Returns the abstract, optionally formatting it.\n\n See `get_title()` for details.\n \"\"\"\n return self.formatter(self.get(\"xml_abstract\"), form, allow_url=True)\n\n def get_booktitle(self, form=\"xml\", default=\"\"):\n \"\"\"Returns the booktitle, optionally formatting it.\n\n See `get_title()` for details.\n \"\"\"\n if \"xml_booktitle\" in self.attrib:\n return self.formatter(self.get(\"xml_booktitle\"), form)\n elif self.parent_volume is not None:\n return self.parent_volume.get(\"title\")\n else:\n return default\n\n def as_bibtex(self, concise=False):\n \"\"\"Return the BibTeX entry for this paper.\"\"\"\n # Build BibTeX entry\n bibkey = self.bibkey\n bibtype = self.bibtype\n entries = [(\"title\", self.get_title(form=\"latex\"))]\n for people in (\"author\", \"editor\"):\n if people in self.attrib:\n entries.append(\n (people, \" and \".join(p.as_bibtex() for p, _ in self.get(people)))\n )\n if is_journal(self.full_id):\n entries.append(\n (\"journal\", bibtex_encode(self.parent_volume.get(\"meta_journal_title\")))\n )\n journal_volume = self.parent_volume.get(\n \"meta_volume\", self.parent_volume.get(\"volume\")\n )\n if journal_volume:\n entries.append((\"volume\", journal_volume))\n journal_issue = self.parent_volume.get(\n \"meta_issue\", self.parent_volume.get(\"issue\")\n )\n if journal_issue:\n entries.append((\"number\", journal_issue))\n else:\n # not is_journal(self.full_id)\n if \"xml_booktitle\" in self.attrib:\n entries.append((\"booktitle\", 
self.get_booktitle(form=\"latex\")))\n elif bibtype != \"proceedings\":\n entries.append((\"booktitle\", self.parent_volume.get_title(form=\"latex\")))\n for entry in (\"month\", \"year\", \"address\", \"publisher\", \"note\"):\n if self.get(entry) is not None:\n entries.append((entry, bibtex_encode(self.get(entry))))\n for entry in (\"url\", \"doi\"):\n if entry in self.attrib:\n # don't want latex escapes such as\n # doi = \"10.1162/coli{\\_}a{\\_}00008\",\n entries.append((entry, self.get(entry)))\n if \"pages\" in self.attrib:\n entries.append((\"pages\", self.get(\"pages\").replace(\"\u2013\", \"--\")))\n if \"xml_abstract\" in self.attrib and not concise:\n entries.append((\"abstract\", self.get_abstract(form=\"latex\")))\n if self.language:\n entries.append((\"language\", iso639.languages.get(part3=self.language).name))\n if self.isbn:\n entries.append((\"ISBN\", self.isbn))\n\n # Serialize it\n return bibtex_make_entry(bibkey, bibtype, entries)\n\n def as_dict(self):\n value = self.attrib\n value[\"paper_id\"] = self.paper_id\n value[\"parent_volume_id\"] = self.parent_volume_id\n value[\"bibkey\"] = self.bibkey\n value[\"bibtype\"] = self.bibtype\n return value\n\n def items(self):\n return self.attrib.items()\n", "path": "bin/anthology/papers.py"}]}
| 3,885 | 218 |
gh_patches_debug_2533
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-3646
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The validation of input parameters for the repair endpoint is omitted
```
curl -X POST -H 'Content-Type: application/json' -H 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' -d '[]' http://localhost:5001/pulp/api/v3/repair/
```
```
pulp [804a07335b9f4417ad0c71dde478634e]: django.request:ERROR: Internal Server Error: /pulp/api/v3/repair/
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/generic/base.py", line 70, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/src/pulpcore/pulpcore/app/views/repair.py", line 27, in post
verify_checksums = serializer.validated_data["verify_checksums"]
KeyError: 'verify_checksums'
```
</issue>
<code>
[start of pulpcore/app/views/repair.py]
1 from drf_spectacular.utils import extend_schema
2 from rest_framework.views import APIView
3
4 from pulpcore.app.response import OperationPostponedResponse
5 from pulpcore.app.serializers import AsyncOperationResponseSerializer, RepairSerializer
6 from pulpcore.app.tasks import repair_all_artifacts
7 from pulpcore.tasking.tasks import dispatch
8
9
10 class RepairView(APIView):
11 @extend_schema(
12 description=(
13 "Trigger an asynchronous task that checks for missing "
14 "or corrupted artifacts, and attempts to redownload them."
15 ),
16 summary="Repair Artifact Storage",
17 request=RepairSerializer,
18 responses={202: AsyncOperationResponseSerializer},
19 )
20 def post(self, request):
21 """
22 Repair artifacts.
23 """
24 serializer = RepairSerializer(data=request.data)
25 serializer.is_valid()
26
27 verify_checksums = serializer.validated_data["verify_checksums"]
28
29 task = dispatch(repair_all_artifacts, args=[verify_checksums])
30
31 return OperationPostponedResponse(task, request)
32
[end of pulpcore/app/views/repair.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pulpcore/app/views/repair.py b/pulpcore/app/views/repair.py
--- a/pulpcore/app/views/repair.py
+++ b/pulpcore/app/views/repair.py
@@ -22,7 +22,7 @@
Repair artifacts.
"""
serializer = RepairSerializer(data=request.data)
- serializer.is_valid()
+ serializer.is_valid(raise_exception=True)
verify_checksums = serializer.validated_data["verify_checksums"]
|
{"golden_diff": "diff --git a/pulpcore/app/views/repair.py b/pulpcore/app/views/repair.py\n--- a/pulpcore/app/views/repair.py\n+++ b/pulpcore/app/views/repair.py\n@@ -22,7 +22,7 @@\n Repair artifacts.\n \"\"\"\n serializer = RepairSerializer(data=request.data)\n- serializer.is_valid()\n+ serializer.is_valid(raise_exception=True)\n \n verify_checksums = serializer.validated_data[\"verify_checksums\"]\n", "issue": "The validation of input parameters for the repair endpoint is omitted\n```\r\ncurl -X POST -H 'Content-Type: application/json' -H 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' -d '[]' http://localhost:5001/pulp/api/v3/repair/\r\n```\r\n\r\n```\r\npulp [804a07335b9f4417ad0c71dde478634e]: django.request:ERROR: Internal Server Error: /pulp/api/v3/repair/\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py\", line 47, in inner\r\n response = get_response(request)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py\", line 181, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/usr/local/lib/python3.8/site-packages/django/views/decorators/csrf.py\", line 54, in wrapped_view\r\n return view_func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/site-packages/django/views/generic/base.py\", line 70, in view\r\n return self.dispatch(request, *args, **kwargs)\r\n File \"/usr/local/lib/python3.8/site-packages/rest_framework/views.py\", line 509, in dispatch\r\n response = self.handle_exception(exc)\r\n File \"/usr/local/lib/python3.8/site-packages/rest_framework/views.py\", line 469, in handle_exception\r\n self.raise_uncaught_exception(exc)\r\n File \"/usr/local/lib/python3.8/site-packages/rest_framework/views.py\", line 480, in raise_uncaught_exception\r\n raise exc\r\n File \"/usr/local/lib/python3.8/site-packages/rest_framework/views.py\", line 506, in dispatch\r\n response = handler(request, *args, **kwargs)\r\n File \"/src/pulpcore/pulpcore/app/views/repair.py\", line 27, in post\r\n verify_checksums = serializer.validated_data[\"verify_checksums\"]\r\nKeyError: 'verify_checksums'\r\n```\n", "before_files": [{"content": "from drf_spectacular.utils import extend_schema\nfrom rest_framework.views import APIView\n\nfrom pulpcore.app.response import OperationPostponedResponse\nfrom pulpcore.app.serializers import AsyncOperationResponseSerializer, RepairSerializer\nfrom pulpcore.app.tasks import repair_all_artifacts\nfrom pulpcore.tasking.tasks import dispatch\n\n\nclass RepairView(APIView):\n @extend_schema(\n description=(\n \"Trigger an asynchronous task that checks for missing \"\n \"or corrupted artifacts, and attempts to redownload them.\"\n ),\n summary=\"Repair Artifact Storage\",\n request=RepairSerializer,\n responses={202: AsyncOperationResponseSerializer},\n )\n def post(self, request):\n \"\"\"\n Repair artifacts.\n \"\"\"\n serializer = RepairSerializer(data=request.data)\n serializer.is_valid()\n\n verify_checksums = serializer.validated_data[\"verify_checksums\"]\n\n task = dispatch(repair_all_artifacts, args=[verify_checksums])\n\n return OperationPostponedResponse(task, request)\n", "path": "pulpcore/app/views/repair.py"}]}
| 1,287 | 102 |
gh_patches_debug_6660
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-9453
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Event callbacks which register/unregister other callbacks lead to surprising behavior
tl;dr: the loop in [this line](https://github.com/ipython/ipython/blob/1d63c56f6d69d9f4cc47adcb87a7f80744a48260/IPython/core/events.py#L72) isn't robust to `register`/`unregister`. What's the desired behavior?
### The problem
We ran into a funny situation in colaboratory, involving `post_run_cell` callbacks which need to potentially add or remove other `post_run_cell` callbacks. Here's a really simplified version:
``` python
ip = get_ipython()
ev = ip.events
def func1(*unused_args):
print 'invoking func1'
ev.unregister('post_run_cell', func1)
ev.register('post_run_cell', func3)
def func2(*unused_args):
print 'invoking func2'
ev.unregister('post_run_cell', func2)
def func3(*unused_args):
print 'invoking func3'
ev.register('post_run_cell', func1)
ev.register('post_run_cell', func2)
```
In principle, this should invoke the three functions in order. In reality, it invokes `func1` and `func3`, but _not_ `func2`. Even worse, at the end of this cell's execution, the list of registered callbacks is `[func2, func3]`, which is _not_ the list of callbacks we saw execute. So `func2`, the only function that stays in the list of callbacks the whole time, is the only one we don't execute. 😉
The cause is easy to see after checking out [this line](https://github.com/ipython/ipython/blob/1d63c56f6d69d9f4cc47adcb87a7f80744a48260/IPython/core/events.py#L72):
- on the first iteration of the loop, `func1` is our callback, and `[func1, func2]` is our list of callbacks. `func1` executes, and our list of callbacks is now `[func2, func3]`.
- the next loop iteration picks up at the second element of the list, namely `func3`. unfortunately, we've now skipped `func2`. sadface.
The lesson is, of course, never mutate a list as you iterate over it.
## Potential solutions
I'm happy to send a PR, but first, I want to make sure we're on the same page about what the **desired** semantics are here.
I think we want to be flexible: given that the only exposed operations for callbacks are remove and append (and thankfully [**not** reset](https://github.com/ipython/ipython/commit/4043b271fee4f6c36c99c8038018d54cd86b94eb)), we can ensure
1. we execute new callbacks appended to the list, and
2. we skip removed callbacks if we haven't already executed them.
Other options all involve preventing this behavior. We could freeze state by instead making a copy of the list before we iterate over it, whence any modifications would only be visible on the next triggering of the event. Or we could add code to completely disallow modifications to the currently-executing list of callbacks for the duration of `trigger`, meaning a user would get an error if they tried.
## Our use case
A common pattern we've used is "one-shot callbacks" -- a callback which, when invoked, removes itself from the list of callbacks. We use this for some forms of output manipulation, as well as occasionally to do one-time cleanup that needs to happen after the user's current code completes.
With the proposed solution, this is easy -- the body of a callback just includes a call to deregister itself. This currently works great, unless the one-shot callback is not the last callback in the list. (The example above was distilled from a variant of this -- a one-shot callback was registering another one-shot callback.) It also allows for forcing a callback to be idempotent -- register as many times as you'd like, and the first one can deregister any copies it finds.
With either of the other solutions, we're stuck basically inventing our own event system, and using it to handle cleanup.
</issue>
<code>
[start of IPython/core/events.py]
1 """Infrastructure for registering and firing callbacks on application events.
2
3 Unlike :mod:`IPython.core.hooks`, which lets end users set single functions to
4 be called at specific times, or a collection of alternative methods to try,
5 callbacks are designed to be used by extension authors. A number of callbacks
6 can be registered for the same event without needing to be aware of one another.
7
8 The functions defined in this module are no-ops indicating the names of available
9 events and the arguments which will be passed to them.
10
11 .. note::
12
13 This API is experimental in IPython 2.0, and may be revised in future versions.
14 """
15 from __future__ import print_function
16
17 class EventManager(object):
18 """Manage a collection of events and a sequence of callbacks for each.
19
20 This is attached to :class:`~IPython.core.interactiveshell.InteractiveShell`
21 instances as an ``events`` attribute.
22
23 .. note::
24
25 This API is experimental in IPython 2.0, and may be revised in future versions.
26 """
27 def __init__(self, shell, available_events):
28 """Initialise the :class:`CallbackManager`.
29
30 Parameters
31 ----------
32 shell
33 The :class:`~IPython.core.interactiveshell.InteractiveShell` instance
34 available_callbacks
35 An iterable of names for callback events.
36 """
37 self.shell = shell
38 self.callbacks = {n:[] for n in available_events}
39
40 def register(self, event, function):
41 """Register a new event callback
42
43 Parameters
44 ----------
45 event : str
46 The event for which to register this callback.
47 function : callable
48 A function to be called on the given event. It should take the same
49 parameters as the appropriate callback prototype.
50
51 Raises
52 ------
53 TypeError
54 If ``function`` is not callable.
55 KeyError
56 If ``event`` is not one of the known events.
57 """
58 if not callable(function):
59 raise TypeError('Need a callable, got %r' % function)
60 self.callbacks[event].append(function)
61
62 def unregister(self, event, function):
63 """Remove a callback from the given event."""
64 self.callbacks[event].remove(function)
65
66 def trigger(self, event, *args, **kwargs):
67 """Call callbacks for ``event``.
68
69 Any additional arguments are passed to all callbacks registered for this
70 event. Exceptions raised by callbacks are caught, and a message printed.
71 """
72 for func in self.callbacks[event]:
73 try:
74 func(*args, **kwargs)
75 except Exception:
76 print("Error in callback {} (for {}):".format(func, event))
77 self.shell.showtraceback()
78
79 # event_name -> prototype mapping
80 available_events = {}
81
82 def _define_event(callback_proto):
83 available_events[callback_proto.__name__] = callback_proto
84 return callback_proto
85
86 # ------------------------------------------------------------------------------
87 # Callback prototypes
88 #
89 # No-op functions which describe the names of available events and the
90 # signatures of callbacks for those events.
91 # ------------------------------------------------------------------------------
92
93 @_define_event
94 def pre_execute():
95 """Fires before code is executed in response to user/frontend action.
96
97 This includes comm and widget messages and silent execution, as well as user
98 code cells."""
99 pass
100
101 @_define_event
102 def pre_run_cell():
103 """Fires before user-entered code runs."""
104 pass
105
106 @_define_event
107 def post_execute():
108 """Fires after code is executed in response to user/frontend action.
109
110 This includes comm and widget messages and silent execution, as well as user
111 code cells."""
112 pass
113
114 @_define_event
115 def post_run_cell():
116 """Fires after user-entered code runs."""
117 pass
118
119 @_define_event
120 def shell_initialized(ip):
121 """Fires after initialisation of :class:`~IPython.core.interactiveshell.InteractiveShell`.
122
123 This is before extensions and startup scripts are loaded, so it can only be
124 set by subclassing.
125
126 Parameters
127 ----------
128 ip : :class:`~IPython.core.interactiveshell.InteractiveShell`
129 The newly initialised shell.
130 """
131 pass
132
[end of IPython/core/events.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/IPython/core/events.py b/IPython/core/events.py
--- a/IPython/core/events.py
+++ b/IPython/core/events.py
@@ -69,7 +69,7 @@
Any additional arguments are passed to all callbacks registered for this
event. Exceptions raised by callbacks are caught, and a message printed.
"""
- for func in self.callbacks[event]:
+ for func in self.callbacks[event][:]:
try:
func(*args, **kwargs)
except Exception:
|
{"golden_diff": "diff --git a/IPython/core/events.py b/IPython/core/events.py\n--- a/IPython/core/events.py\n+++ b/IPython/core/events.py\n@@ -69,7 +69,7 @@\n Any additional arguments are passed to all callbacks registered for this\n event. Exceptions raised by callbacks are caught, and a message printed.\n \"\"\"\n- for func in self.callbacks[event]:\n+ for func in self.callbacks[event][:]:\n try:\n func(*args, **kwargs)\n except Exception:\n", "issue": "Event callbacks which register/unregister other callbacks lead to surprising behavior\ntl;dr: the loop in [this line](https://github.com/ipython/ipython/blob/1d63c56f6d69d9f4cc47adcb87a7f80744a48260/IPython/core/events.py#L72) isn't robust to `register`/`unregister`. What's the desired behavior?\n### The problem\n\nWe ran into a funny situation in colaboratory, involving `post_run_cell` callbacks which need to potentially add or remove other `post_run_cell` callbacks. Here's a really simplified version:\n\n``` python\nip = get_ipython()\nev = ip.events\n\ndef func1(*unused_args):\n print 'invoking func1'\n ev.unregister('post_run_cell', func1)\n ev.register('post_run_cell', func3)\n\ndef func2(*unused_args):\n print 'invoking func2'\n ev.unregister('post_run_cell', func2)\n\ndef func3(*unused_args):\n print 'invoking func3'\n\nev.register('post_run_cell', func1)\nev.register('post_run_cell', func2)\n```\n\nIn principle, this should invoke the three functions in order. In reality, it invokes `func1` and `func3`, but _not_ `func2`. Even worse, at the end of this cell's execution, the list of registered callbacks is `[func2, func3]`, which is _not_ the list of callbacks we saw execute. So `func2`, the only function that stays in the list of callbacks the whole time, is the only one we don't execute. \ud83d\ude09 \n\nThe cause is easy to see after checking out [this line](https://github.com/ipython/ipython/blob/1d63c56f6d69d9f4cc47adcb87a7f80744a48260/IPython/core/events.py#L72): \n- on the first iteration of the loop, `func1` is our callback, and `[func1, func2]` is our list of callbacks. `func1` executes, and our list of callbacks is now `[func2, func3]`.\n- the next loop iteration picks up at the second element of the list, namely `func3`. unfortunately, we've now skipped `func2`. sadface.\n\nThe lesson is, of course, never mutate a list as you iterate over it.\n## Potential solutions\n\nI'm happy to send a PR, but first, I want to make sure we're on the same page about what the **desired** semantics are here. \n\nI think we want to be flexible: given that the only exposed operations for callbacks are remove and append (and thankfully [**not** reset](https://github.com/ipython/ipython/commit/4043b271fee4f6c36c99c8038018d54cd86b94eb)), we can ensure\n1. we execute new callbacks appended to the list, and \n2. we skip removed callbacks if we haven't already executed them.\n\nOther options all involve preventing this behavior. We could freeze state by instead making a copy of the list before we iterate over it, whence any modifications would only be visible on the next triggering of the event. Or we could add code to completely disallow modifications to the currently-executing list of callbacks for the duration of `trigger`, meaning a user would get an error if they tried.\n## Our use case\n\nA common pattern we've used is \"one-shot callbacks\" -- a callback which, when invoked, removes itself from the list of callbacks. 
We use this for some forms of output manipulation, as well as occasionally to do one-time cleanup that needs to happen after the user's current code completes. \n\nWith the proposed solution, this is easy -- the body of a callback just includes a call to deregister itself. This currently works great, unless the one-shot callback is not the last callback in the list. (The example above was distilled from a variant of this -- a one-shot callback was registering another one-shot callback.) It also allows for forcing a callback to be idempotent -- register as many times as you'd like, and the first one can deregister any copies it finds.\n\nWith either of the other solutions, we're stuck basically inventing our own event system, and using it to handle cleanup.\n\n", "before_files": [{"content": "\"\"\"Infrastructure for registering and firing callbacks on application events.\n\nUnlike :mod:`IPython.core.hooks`, which lets end users set single functions to\nbe called at specific times, or a collection of alternative methods to try,\ncallbacks are designed to be used by extension authors. A number of callbacks\ncan be registered for the same event without needing to be aware of one another.\n\nThe functions defined in this module are no-ops indicating the names of available\nevents and the arguments which will be passed to them.\n\n.. note::\n\n This API is experimental in IPython 2.0, and may be revised in future versions.\n\"\"\"\nfrom __future__ import print_function\n\nclass EventManager(object):\n \"\"\"Manage a collection of events and a sequence of callbacks for each.\n \n This is attached to :class:`~IPython.core.interactiveshell.InteractiveShell`\n instances as an ``events`` attribute.\n \n .. note::\n\n This API is experimental in IPython 2.0, and may be revised in future versions.\n \"\"\"\n def __init__(self, shell, available_events):\n \"\"\"Initialise the :class:`CallbackManager`.\n \n Parameters\n ----------\n shell\n The :class:`~IPython.core.interactiveshell.InteractiveShell` instance\n available_callbacks\n An iterable of names for callback events.\n \"\"\"\n self.shell = shell\n self.callbacks = {n:[] for n in available_events}\n \n def register(self, event, function):\n \"\"\"Register a new event callback\n \n Parameters\n ----------\n event : str\n The event for which to register this callback.\n function : callable\n A function to be called on the given event. It should take the same\n parameters as the appropriate callback prototype.\n \n Raises\n ------\n TypeError\n If ``function`` is not callable.\n KeyError\n If ``event`` is not one of the known events.\n \"\"\"\n if not callable(function):\n raise TypeError('Need a callable, got %r' % function)\n self.callbacks[event].append(function)\n \n def unregister(self, event, function):\n \"\"\"Remove a callback from the given event.\"\"\"\n self.callbacks[event].remove(function)\n \n def trigger(self, event, *args, **kwargs):\n \"\"\"Call callbacks for ``event``.\n \n Any additional arguments are passed to all callbacks registered for this\n event. 
Exceptions raised by callbacks are caught, and a message printed.\n \"\"\"\n for func in self.callbacks[event]:\n try:\n func(*args, **kwargs)\n except Exception:\n print(\"Error in callback {} (for {}):\".format(func, event))\n self.shell.showtraceback()\n\n# event_name -> prototype mapping\navailable_events = {}\n\ndef _define_event(callback_proto):\n available_events[callback_proto.__name__] = callback_proto\n return callback_proto\n\n# ------------------------------------------------------------------------------\n# Callback prototypes\n#\n# No-op functions which describe the names of available events and the\n# signatures of callbacks for those events.\n# ------------------------------------------------------------------------------\n\n@_define_event\ndef pre_execute():\n \"\"\"Fires before code is executed in response to user/frontend action.\n \n This includes comm and widget messages and silent execution, as well as user\n code cells.\"\"\"\n pass\n\n@_define_event\ndef pre_run_cell():\n \"\"\"Fires before user-entered code runs.\"\"\"\n pass\n\n@_define_event\ndef post_execute():\n \"\"\"Fires after code is executed in response to user/frontend action.\n \n This includes comm and widget messages and silent execution, as well as user\n code cells.\"\"\"\n pass\n\n@_define_event\ndef post_run_cell():\n \"\"\"Fires after user-entered code runs.\"\"\"\n pass\n\n@_define_event\ndef shell_initialized(ip):\n \"\"\"Fires after initialisation of :class:`~IPython.core.interactiveshell.InteractiveShell`.\n \n This is before extensions and startup scripts are loaded, so it can only be\n set by subclassing.\n \n Parameters\n ----------\n ip : :class:`~IPython.core.interactiveshell.InteractiveShell`\n The newly initialised shell.\n \"\"\"\n pass\n", "path": "IPython/core/events.py"}]}
| 2,648 | 108 |
gh_patches_debug_35264
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-1098
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
doc build broken
From a clean checkout I ran `python setup.py install` and then I attempted to build a local copy of the docs via `make html` in `doc` and got the following error:
```
(py2k-base)tcaswell@tcaswellpc1:~/other_source/networkx/doc$ make html
mkdir -p build
./make_gallery.py
atlas.pyTraceback (most recent call last):
File "./make_gallery.py", line 57, in <module>
execfile(example)
File "atlas.py", line 59, in <module>
G=atlas6()
File "atlas.py", line 25, in atlas6
Atlas=nx.graph_atlas_g()[0:208] # 208
AttributeError: 'module' object has no attribute 'graph_atlas_g'
make: *** [build/generate-stamp] Error 1
```
</issue>
<code>
[start of examples/drawing/atlas.py]
1 #!/usr/bin/env python
2 """
3 Atlas of all graphs of 6 nodes or less.
4
5 """
6 __author__ = """Aric Hagberg ([email protected])"""
7 # Copyright (C) 2004 by
8 # Aric Hagberg <[email protected]>
9 # Dan Schult <[email protected]>
10 # Pieter Swart <[email protected]>
11 # All rights reserved.
12 # BSD license.
13
14 import networkx as nx
15 #from networkx import *
16 #from networkx.generators.atlas import *
17 from networkx.algorithms.isomorphism.isomorph import graph_could_be_isomorphic as isomorphic
18 import random
19
20 def atlas6():
21 """ Return the atlas of all connected graphs of 6 nodes or less.
22 Attempt to check for isomorphisms and remove.
23 """
24
25 Atlas=nx.graph_atlas_g()[0:208] # 208
26 # remove isolated nodes, only connected graphs are left
27 U=nx.Graph() # graph for union of all graphs in atlas
28 for G in Atlas:
29 zerodegree=[n for n in G if G.degree(n)==0]
30 for n in zerodegree:
31 G.remove_node(n)
32 U=nx.disjoint_union(U,G)
33
34 # list of graphs of all connected components
35 C=nx.connected_component_subgraphs(U)
36
37 UU=nx.Graph()
38 # do quick isomorphic-like check, not a true isomorphism checker
39 nlist=[] # list of nonisomorphic graphs
40 for G in C:
41 # check against all nonisomorphic graphs so far
42 if not iso(G,nlist):
43 nlist.append(G)
44 UU=nx.disjoint_union(UU,G) # union the nonisomorphic graphs
45 return UU
46
47 def iso(G1, glist):
48 """Quick and dirty nonisomorphism checker used to check isomorphisms."""
49 for G2 in glist:
50 if isomorphic(G1,G2):
51 return True
52 return False
53
54
55 if __name__ == '__main__':
56
57 import networkx as nx
58
59 G=atlas6()
60
61 print("graph has %d nodes with %d edges"\
62 %(nx.number_of_nodes(G),nx.number_of_edges(G)))
63 print(nx.number_connected_components(G),"connected components")
64
65
66 try:
67 from networkx import graphviz_layout
68 except ImportError:
69 raise ImportError("This example needs Graphviz and either PyGraphviz or Pydot")
70
71 import matplotlib.pyplot as plt
72 plt.figure(1,figsize=(8,8))
73 # layout graphs with positions using graphviz neato
74 pos=nx.graphviz_layout(G,prog="neato")
75 # color nodes the same in each connected subgraph
76 C=nx.connected_component_subgraphs(G)
77 for g in C:
78 c=[random.random()]*nx.number_of_nodes(g) # random color...
79 nx.draw(g,
80 pos,
81 node_size=40,
82 node_color=c,
83 vmin=0.0,
84 vmax=1.0,
85 with_labels=False
86 )
87 plt.savefig("atlas.png",dpi=75)
88
[end of examples/drawing/atlas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/drawing/atlas.py b/examples/drawing/atlas.py
deleted file mode 100644
--- a/examples/drawing/atlas.py
+++ /dev/null
@@ -1,87 +0,0 @@
-#!/usr/bin/env python
-"""
-Atlas of all graphs of 6 nodes or less.
-
-"""
-__author__ = """Aric Hagberg ([email protected])"""
-# Copyright (C) 2004 by
-# Aric Hagberg <[email protected]>
-# Dan Schult <[email protected]>
-# Pieter Swart <[email protected]>
-# All rights reserved.
-# BSD license.
-
-import networkx as nx
-#from networkx import *
-#from networkx.generators.atlas import *
-from networkx.algorithms.isomorphism.isomorph import graph_could_be_isomorphic as isomorphic
-import random
-
-def atlas6():
- """ Return the atlas of all connected graphs of 6 nodes or less.
- Attempt to check for isomorphisms and remove.
- """
-
- Atlas=nx.graph_atlas_g()[0:208] # 208
- # remove isolated nodes, only connected graphs are left
- U=nx.Graph() # graph for union of all graphs in atlas
- for G in Atlas:
- zerodegree=[n for n in G if G.degree(n)==0]
- for n in zerodegree:
- G.remove_node(n)
- U=nx.disjoint_union(U,G)
-
- # list of graphs of all connected components
- C=nx.connected_component_subgraphs(U)
-
- UU=nx.Graph()
- # do quick isomorphic-like check, not a true isomorphism checker
- nlist=[] # list of nonisomorphic graphs
- for G in C:
- # check against all nonisomorphic graphs so far
- if not iso(G,nlist):
- nlist.append(G)
- UU=nx.disjoint_union(UU,G) # union the nonisomorphic graphs
- return UU
-
-def iso(G1, glist):
- """Quick and dirty nonisomorphism checker used to check isomorphisms."""
- for G2 in glist:
- if isomorphic(G1,G2):
- return True
- return False
-
-
-if __name__ == '__main__':
-
- import networkx as nx
-
- G=atlas6()
-
- print("graph has %d nodes with %d edges"\
- %(nx.number_of_nodes(G),nx.number_of_edges(G)))
- print(nx.number_connected_components(G),"connected components")
-
-
- try:
- from networkx import graphviz_layout
- except ImportError:
- raise ImportError("This example needs Graphviz and either PyGraphviz or Pydot")
-
- import matplotlib.pyplot as plt
- plt.figure(1,figsize=(8,8))
- # layout graphs with positions using graphviz neato
- pos=nx.graphviz_layout(G,prog="neato")
- # color nodes the same in each connected subgraph
- C=nx.connected_component_subgraphs(G)
- for g in C:
- c=[random.random()]*nx.number_of_nodes(g) # random color...
- nx.draw(g,
- pos,
- node_size=40,
- node_color=c,
- vmin=0.0,
- vmax=1.0,
- with_labels=False
- )
- plt.savefig("atlas.png",dpi=75)
diff --git a/examples/drawing/atlas.py b/examples/drawing/atlas.py
new file mode 120000
--- /dev/null
+++ b/examples/drawing/atlas.py
@@ -0,0 +1 @@
+../graph/atlas.py
\ No newline at end of file
|
{"golden_diff": "diff --git a/examples/drawing/atlas.py b/examples/drawing/atlas.py\ndeleted file mode 100644\n--- a/examples/drawing/atlas.py\n+++ /dev/null\n@@ -1,87 +0,0 @@\n-#!/usr/bin/env python\n-\"\"\"\n-Atlas of all graphs of 6 nodes or less.\n-\n-\"\"\"\n-__author__ = \"\"\"Aric Hagberg ([email protected])\"\"\"\n-# Copyright (C) 2004 by \n-# Aric Hagberg <[email protected]>\n-# Dan Schult <[email protected]>\n-# Pieter Swart <[email protected]>\n-# All rights reserved.\n-# BSD license.\n-\n-import networkx as nx\n-#from networkx import *\n-#from networkx.generators.atlas import *\n-from networkx.algorithms.isomorphism.isomorph import graph_could_be_isomorphic as isomorphic\n-import random\n-\n-def atlas6():\n- \"\"\" Return the atlas of all connected graphs of 6 nodes or less.\n- Attempt to check for isomorphisms and remove.\n- \"\"\"\n-\n- Atlas=nx.graph_atlas_g()[0:208] # 208\n- # remove isolated nodes, only connected graphs are left\n- U=nx.Graph() # graph for union of all graphs in atlas\n- for G in Atlas: \n- zerodegree=[n for n in G if G.degree(n)==0]\n- for n in zerodegree:\n- G.remove_node(n)\n- U=nx.disjoint_union(U,G)\n-\n- # list of graphs of all connected components \n- C=nx.connected_component_subgraphs(U) \n- \n- UU=nx.Graph() \n- # do quick isomorphic-like check, not a true isomorphism checker \n- nlist=[] # list of nonisomorphic graphs\n- for G in C:\n- # check against all nonisomorphic graphs so far\n- if not iso(G,nlist):\n- nlist.append(G)\n- UU=nx.disjoint_union(UU,G) # union the nonisomorphic graphs \n- return UU \n-\n-def iso(G1, glist):\n- \"\"\"Quick and dirty nonisomorphism checker used to check isomorphisms.\"\"\"\n- for G2 in glist:\n- if isomorphic(G1,G2):\n- return True\n- return False \n-\n-\n-if __name__ == '__main__':\n-\n- import networkx as nx\n-\n- G=atlas6()\n-\n- print(\"graph has %d nodes with %d edges\"\\\n- %(nx.number_of_nodes(G),nx.number_of_edges(G)))\n- print(nx.number_connected_components(G),\"connected components\")\n-\n-\n- try:\n- from networkx import graphviz_layout\n- except ImportError:\n- raise ImportError(\"This example needs Graphviz and either PyGraphviz or Pydot\")\n-\n- import matplotlib.pyplot as plt\n- plt.figure(1,figsize=(8,8))\n- # layout graphs with positions using graphviz neato\n- pos=nx.graphviz_layout(G,prog=\"neato\")\n- # color nodes the same in each connected subgraph\n- C=nx.connected_component_subgraphs(G)\n- for g in C:\n- c=[random.random()]*nx.number_of_nodes(g) # random color...\n- nx.draw(g,\n- pos,\n- node_size=40,\n- node_color=c,\n- vmin=0.0,\n- vmax=1.0,\n- with_labels=False\n- )\n- plt.savefig(\"atlas.png\",dpi=75) \ndiff --git a/examples/drawing/atlas.py b/examples/drawing/atlas.py\nnew file mode 120000\n--- /dev/null\n+++ b/examples/drawing/atlas.py\n@@ -0,0 +1 @@\n+../graph/atlas.py\n\\ No newline at end of file\n", "issue": "doc build broken\nFrom a clean checkout I ran `python setup.py install` and then I attempted to build a local copy of the docs via `make html` in `doc` and got the following error:\n\n```\n(py2k-base)tcaswell@tcaswellpc1:~/other_source/networkx/doc$ make html\nmkdir -p build\n./make_gallery.py \natlas.pyTraceback (most recent call last):\n File \"./make_gallery.py\", line 57, in <module>\n execfile(example)\n File \"atlas.py\", line 59, in <module>\n G=atlas6()\n File \"atlas.py\", line 25, in atlas6\n Atlas=nx.graph_atlas_g()[0:208] # 208\nAttributeError: 'module' object has no attribute 'graph_atlas_g'\nmake: *** [build/generate-stamp] Error 1\n```\n\n", "before_files": 
[{"content": "#!/usr/bin/env python\n\"\"\"\nAtlas of all graphs of 6 nodes or less.\n\n\"\"\"\n__author__ = \"\"\"Aric Hagberg ([email protected])\"\"\"\n# Copyright (C) 2004 by \n# Aric Hagberg <[email protected]>\n# Dan Schult <[email protected]>\n# Pieter Swart <[email protected]>\n# All rights reserved.\n# BSD license.\n\nimport networkx as nx\n#from networkx import *\n#from networkx.generators.atlas import *\nfrom networkx.algorithms.isomorphism.isomorph import graph_could_be_isomorphic as isomorphic\nimport random\n\ndef atlas6():\n \"\"\" Return the atlas of all connected graphs of 6 nodes or less.\n Attempt to check for isomorphisms and remove.\n \"\"\"\n\n Atlas=nx.graph_atlas_g()[0:208] # 208\n # remove isolated nodes, only connected graphs are left\n U=nx.Graph() # graph for union of all graphs in atlas\n for G in Atlas: \n zerodegree=[n for n in G if G.degree(n)==0]\n for n in zerodegree:\n G.remove_node(n)\n U=nx.disjoint_union(U,G)\n\n # list of graphs of all connected components \n C=nx.connected_component_subgraphs(U) \n \n UU=nx.Graph() \n # do quick isomorphic-like check, not a true isomorphism checker \n nlist=[] # list of nonisomorphic graphs\n for G in C:\n # check against all nonisomorphic graphs so far\n if not iso(G,nlist):\n nlist.append(G)\n UU=nx.disjoint_union(UU,G) # union the nonisomorphic graphs \n return UU \n\ndef iso(G1, glist):\n \"\"\"Quick and dirty nonisomorphism checker used to check isomorphisms.\"\"\"\n for G2 in glist:\n if isomorphic(G1,G2):\n return True\n return False \n\n\nif __name__ == '__main__':\n\n import networkx as nx\n\n G=atlas6()\n\n print(\"graph has %d nodes with %d edges\"\\\n %(nx.number_of_nodes(G),nx.number_of_edges(G)))\n print(nx.number_connected_components(G),\"connected components\")\n\n\n try:\n from networkx import graphviz_layout\n except ImportError:\n raise ImportError(\"This example needs Graphviz and either PyGraphviz or Pydot\")\n\n import matplotlib.pyplot as plt\n plt.figure(1,figsize=(8,8))\n # layout graphs with positions using graphviz neato\n pos=nx.graphviz_layout(G,prog=\"neato\")\n # color nodes the same in each connected subgraph\n C=nx.connected_component_subgraphs(G)\n for g in C:\n c=[random.random()]*nx.number_of_nodes(g) # random color...\n nx.draw(g,\n pos,\n node_size=40,\n node_color=c,\n vmin=0.0,\n vmax=1.0,\n with_labels=False\n )\n plt.savefig(\"atlas.png\",dpi=75) \n", "path": "examples/drawing/atlas.py"}]}
| 1,613 | 891 |
gh_patches_debug_25942
|
rasdani/github-patches
|
git_diff
|
kivy__python-for-android-1383
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unit test recipes (reportlab to begin with)
The test suite is currently running full integration tests for a bunch of recipes.
While integration tests are good, we cannot currently use them for all recipes because they run for too long.
However, having unit tests for all recipes should be feasible and may still cover some issues like https://github.com/kivy/python-for-android/pull/1357#issuecomment-423614116.
Unit tests were recently enabled in the following pull request: https://github.com/kivy/python-for-android/pull/1379. So the idea is to increase the coverage, starting from the reportlab recipe as a use case.
</issue>
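
As a rough illustration of the direction proposed in the issue, a recipe unit test could stub out the expensive steps and only assert on the recipe's control flow. This is a hedged sketch, not the project's actual test suite; the import path and the mocked attributes are assumptions based on the recipe listed below.

```python
# Sketch only: assumes pytest-style collection and unittest.mock; the module
# path pythonforandroid.recipes.reportlab is taken from the recipe below.
from unittest import mock


def test_prebuild_arch_skips_patching_when_already_patched():
    from pythonforandroid.recipes.reportlab import recipe

    arch = mock.Mock()
    arch.arch = "armeabi-v7a"

    with mock.patch.object(recipe, "is_patched", return_value=True), \
         mock.patch.object(recipe, "apply_patch") as apply_patch:
        recipe.prebuild_arch(arch)

    # An already-patched build dir should not be patched again.
    apply_patch.assert_not_called()
```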
<code>
[start of pythonforandroid/recipes/reportlab/__init__.py]
1 import os, sh
2 from pythonforandroid.recipe import CompiledComponentsPythonRecipe
3 from pythonforandroid.util import (current_directory, ensure_dir)
4 from pythonforandroid.logger import (info, shprint)
5
6
7 class ReportLabRecipe(CompiledComponentsPythonRecipe):
8 version = 'c088826211ca'
9 url = 'https://bitbucket.org/rptlab/reportlab/get/{version}.tar.gz'
10 depends = [('python2', 'python3crystax'), 'freetype']
11
12 def prebuild_arch(self, arch):
13 if not self.is_patched(arch):
14 super(ReportLabRecipe, self).prebuild_arch(arch)
15 self.apply_patch('patches/fix-setup.patch', arch.arch)
16 recipe_dir = self.get_build_dir(arch.arch)
17 shprint(sh.touch, os.path.join(recipe_dir, '.patched'))
18 ft = self.get_recipe('freetype', self.ctx)
19 ft_dir = ft.get_build_dir(arch.arch)
20 ft_lib_dir = os.environ.get('_FT_LIB_', os.path.join(ft_dir, 'objs', '.libs'))
21 ft_inc_dir = os.environ.get('_FT_INC_', os.path.join(ft_dir, 'include'))
22 tmp_dir = os.path.normpath(os.path.join(recipe_dir, "..", "..", "tmp"))
23 info('reportlab recipe: recipe_dir={}'.format(recipe_dir))
24 info('reportlab recipe: tmp_dir={}'.format(tmp_dir))
25 info('reportlab recipe: ft_dir={}'.format(ft_dir))
26 info('reportlab recipe: ft_lib_dir={}'.format(ft_lib_dir))
27 info('reportlab recipe: ft_inc_dir={}'.format(ft_inc_dir))
28 with current_directory(recipe_dir):
29 sh.ls('-lathr')
30 ensure_dir(tmp_dir)
31 pfbfile = os.path.join(tmp_dir, "pfbfer-20070710.zip")
32 if not os.path.isfile(pfbfile):
33 sh.wget("http://www.reportlab.com/ftp/pfbfer-20070710.zip", "-O", pfbfile)
34 sh.unzip("-u", "-d", os.path.join(recipe_dir, "src", "reportlab", "fonts"), pfbfile)
35 if os.path.isfile("setup.py"):
36 with open('setup.py', 'rb') as f:
37 text = f.read().replace('_FT_LIB_', ft_lib_dir).replace('_FT_INC_', ft_inc_dir)
38 with open('setup.py', 'wb') as f:
39 f.write(text)
40
41
42 recipe = ReportLabRecipe()
43
[end of pythonforandroid/recipes/reportlab/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pythonforandroid/recipes/reportlab/__init__.py b/pythonforandroid/recipes/reportlab/__init__.py
--- a/pythonforandroid/recipes/reportlab/__init__.py
+++ b/pythonforandroid/recipes/reportlab/__init__.py
@@ -26,16 +26,15 @@
info('reportlab recipe: ft_lib_dir={}'.format(ft_lib_dir))
info('reportlab recipe: ft_inc_dir={}'.format(ft_inc_dir))
with current_directory(recipe_dir):
- sh.ls('-lathr')
ensure_dir(tmp_dir)
pfbfile = os.path.join(tmp_dir, "pfbfer-20070710.zip")
if not os.path.isfile(pfbfile):
sh.wget("http://www.reportlab.com/ftp/pfbfer-20070710.zip", "-O", pfbfile)
sh.unzip("-u", "-d", os.path.join(recipe_dir, "src", "reportlab", "fonts"), pfbfile)
if os.path.isfile("setup.py"):
- with open('setup.py', 'rb') as f:
+ with open('setup.py', 'r') as f:
text = f.read().replace('_FT_LIB_', ft_lib_dir).replace('_FT_INC_', ft_inc_dir)
- with open('setup.py', 'wb') as f:
+ with open('setup.py', 'w') as f:
f.write(text)
|
{"golden_diff": "diff --git a/pythonforandroid/recipes/reportlab/__init__.py b/pythonforandroid/recipes/reportlab/__init__.py\n--- a/pythonforandroid/recipes/reportlab/__init__.py\n+++ b/pythonforandroid/recipes/reportlab/__init__.py\n@@ -26,16 +26,15 @@\n info('reportlab recipe: ft_lib_dir={}'.format(ft_lib_dir))\n info('reportlab recipe: ft_inc_dir={}'.format(ft_inc_dir))\n with current_directory(recipe_dir):\n- sh.ls('-lathr')\n ensure_dir(tmp_dir)\n pfbfile = os.path.join(tmp_dir, \"pfbfer-20070710.zip\")\n if not os.path.isfile(pfbfile):\n sh.wget(\"http://www.reportlab.com/ftp/pfbfer-20070710.zip\", \"-O\", pfbfile)\n sh.unzip(\"-u\", \"-d\", os.path.join(recipe_dir, \"src\", \"reportlab\", \"fonts\"), pfbfile)\n if os.path.isfile(\"setup.py\"):\n- with open('setup.py', 'rb') as f:\n+ with open('setup.py', 'r') as f:\n text = f.read().replace('_FT_LIB_', ft_lib_dir).replace('_FT_INC_', ft_inc_dir)\n- with open('setup.py', 'wb') as f:\n+ with open('setup.py', 'w') as f:\n f.write(text)\n", "issue": "Unit test recipes (reportlab to begin with)\nThe test suite is currently running full integration tests for a bunch of recipes.\r\nWhile integration tests are good, we cannot currently use them for all recipes because they run for too long.\r\nHowever having unit tests for all recipes should be feasible and may still cover some issues like https://github.com/kivy/python-for-android/pull/1357#issuecomment-423614116.\r\nUnit tests were recently enabled the following pull request https://github.com/kivy/python-for-android/pull/1379. So the idea is to increase the coverage start from reportlab recipe as a use case.\n", "before_files": [{"content": "import os, sh\nfrom pythonforandroid.recipe import CompiledComponentsPythonRecipe\nfrom pythonforandroid.util import (current_directory, ensure_dir)\nfrom pythonforandroid.logger import (info, shprint)\n\n\nclass ReportLabRecipe(CompiledComponentsPythonRecipe):\n version = 'c088826211ca'\n url = 'https://bitbucket.org/rptlab/reportlab/get/{version}.tar.gz'\n depends = [('python2', 'python3crystax'), 'freetype']\n\n def prebuild_arch(self, arch):\n if not self.is_patched(arch):\n super(ReportLabRecipe, self).prebuild_arch(arch)\n self.apply_patch('patches/fix-setup.patch', arch.arch)\n recipe_dir = self.get_build_dir(arch.arch)\n shprint(sh.touch, os.path.join(recipe_dir, '.patched'))\n ft = self.get_recipe('freetype', self.ctx)\n ft_dir = ft.get_build_dir(arch.arch)\n ft_lib_dir = os.environ.get('_FT_LIB_', os.path.join(ft_dir, 'objs', '.libs'))\n ft_inc_dir = os.environ.get('_FT_INC_', os.path.join(ft_dir, 'include'))\n tmp_dir = os.path.normpath(os.path.join(recipe_dir, \"..\", \"..\", \"tmp\"))\n info('reportlab recipe: recipe_dir={}'.format(recipe_dir))\n info('reportlab recipe: tmp_dir={}'.format(tmp_dir))\n info('reportlab recipe: ft_dir={}'.format(ft_dir))\n info('reportlab recipe: ft_lib_dir={}'.format(ft_lib_dir))\n info('reportlab recipe: ft_inc_dir={}'.format(ft_inc_dir))\n with current_directory(recipe_dir):\n sh.ls('-lathr')\n ensure_dir(tmp_dir)\n pfbfile = os.path.join(tmp_dir, \"pfbfer-20070710.zip\")\n if not os.path.isfile(pfbfile):\n sh.wget(\"http://www.reportlab.com/ftp/pfbfer-20070710.zip\", \"-O\", pfbfile)\n sh.unzip(\"-u\", \"-d\", os.path.join(recipe_dir, \"src\", \"reportlab\", \"fonts\"), pfbfile)\n if os.path.isfile(\"setup.py\"):\n with open('setup.py', 'rb') as f:\n text = f.read().replace('_FT_LIB_', ft_lib_dir).replace('_FT_INC_', ft_inc_dir)\n with open('setup.py', 'wb') as f:\n f.write(text)\n\n\nrecipe = 
ReportLabRecipe()\n", "path": "pythonforandroid/recipes/reportlab/__init__.py"}]}
| 1,311 | 320 |
gh_patches_debug_22375
|
rasdani/github-patches
|
git_diff
|
systemd__mkosi-1844
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to mount build-overlay with incremental builds
I am having issues getting incremental builds to run as the build overlay fails to mount. This is on Arch Linux with a btrfs filesystem, using mkosi commit 7030d76 (the latest as of the time of writing).
Using the following mkosi configuration:
```ini
[Distribution]
Distribution=debian
Release=bookworm
[Output]
CacheDirectory=cache
ImageId=test
[Content]
BuildPackages=
make
BuildScript=build.sh
```
And a simple build script:
```bash
#!/usr/bin/bash
echo "building"
make --version
```
On the first run of `mkosi -i -f` everything runs successfully and I get an output image. Of potential interest is the dmesg output:
```console
[ 1907.326051] overlayfs: fs on '<path>/.mkosi-tmp7x3kcw9q/root' does not support file handles, falling back to index=off,nfs_export=off.
[ 1907.326054] overlayfs: fs on '<path>/.mkosi-tmp7x3kcw9q/root' does not support file handles, falling back to xino=off.
```
However, on a second run of `mkosi -i -f` I get the following error:
```console
‣ Removing output files…
‣ Building default image
Create subvolume '<path>/.mkosi-tmp38ur8lqj/root'
‣ Copying cached trees
‣ Running build script…
mount: <path>/.mkosi-tmp38ur8lqj/root: mount(2) system call failed: Stale file handle.
dmesg(1) may have more information after failed mount system call.
β£ "mount --no-mtab overlay <path>/.mkosi-tmp38ur8lqj/root --types overlay --options lowerdir=<path>/.mkosi-tmp38ur8lqj/root,upperdir=<path>/.mkosi-tmp38ur8lqj/build-overlay,workdir=<path>/.mkosi-tmp38ur8lqj/build-overlay-workdirpawin893,userxattr" returned non-zero exit code 32.
umount: <path>/.mkosi-tmp38ur8lqj/root: not mounted.
β£ "umount --no-mtab <path>/.mkosi-tmp38ur8lqj/root" returned non-zero exit code 32.
β£ (Cleaning up overlayfs)
β£ (Removing overlay whiteout filesβ¦)
```
The corresponding dmesg output is:
```console
[ 2126.464825] overlayfs: failed to verify origin (.mkosi-tmp38ur8lqj/root, ino=256, err=-116)
[ 2126.464829] overlayfs: failed to verify upper root origin
```
From [the overlayfs docs](https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html#sharing-and-copying-layers)
> Mounting an overlay using an upper layer path, where the upper layer path was previously used by another mounted overlay in combination with a different lower layer path, is allowed, unless the "inodes index" feature or "metadata only copy up" feature is enabled.
It seems that Arch [has these features enabled by default](https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/754a4cb9f87cb074eedbb6bebf1f926543a5b64f/config#L10193), and I am guessing that the use of a new temporary directory for the lower /root on each run causes a mismatch with the one used when the build overlay was created.
If I modify line 84 of `mkosi/mounts.py`
https://github.com/systemd/mkosi/blob/7030d76ae957d35d5b49e72c54c1d7f771cc4cac/mkosi/mounts.py#L84
to
```python
options = [f"lowerdir={lower}" for lower in lowerdirs] + [f"upperdir={upperdir}", f"workdir={workdir}", "index=off"]
```
then the second incremental run works (it seems not to be necessary to clean the cached images with `-ff`). The dmesg warning from the first run about falling back to index=off seems not to apply to the build overlay.
Note that it wasn't necessary (in my case, at least) to add `"metacopy=off"` to the options, although I guess this might be safer in general.
I am not sure if this is the proper fix; if so, I am happy to submit a pull request (or for a maintainer to just make the trivial change directly).
</issue>
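
For reference, the change discussed above comes down to how the overlay mount options are assembled. A minimal sketch follows; the helper function is invented for illustration, and the option names come from the overlayfs documentation quoted in the issue.

```python
# Sketch only: builds the overlayfs option string with the inodes-index and
# metadata-only-copy-up features disabled, so an upper dir (the build overlay)
# can be reused against a different lower dir (a fresh temporary root).
def overlay_options(lowerdirs, upperdir, workdir):
    options = [f"lowerdir={lower}" for lower in lowerdirs]
    options += [
        f"upperdir={upperdir}",
        f"workdir={workdir}",
        "index=off",
        "metacopy=off",
    ]
    return ",".join(options)
```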
<code>
[start of mkosi/mounts.py]
1 # SPDX-License-Identifier: LGPL-2.1+
2
3 import contextlib
4 import os
5 import platform
6 import stat
7 import tempfile
8 from collections.abc import Iterator, Sequence
9 from pathlib import Path
10 from typing import Optional
11
12 from mkosi.log import complete_step
13 from mkosi.run import run
14 from mkosi.types import PathString
15 from mkosi.util import umask
16 from mkosi.versioncomp import GenericVersion
17
18
19 def stat_is_whiteout(st: os.stat_result) -> bool:
20 return stat.S_ISCHR(st.st_mode) and st.st_rdev == 0
21
22
23 def delete_whiteout_files(path: Path) -> None:
24 """Delete any char(0,0) device nodes underneath @path
25
26 Overlayfs uses such files to mark "whiteouts" (files present in
27 the lower layers, but removed in the upper one).
28 """
29
30     with complete_step("Removing overlay whiteout files…"):
31 for entry in path.rglob("*"):
32 # TODO: Use Path.stat() once we depend on Python 3.10+.
33 if stat_is_whiteout(os.stat(entry, follow_symlinks=False)):
34 entry.unlink()
35
36
37 @contextlib.contextmanager
38 def mount(
39 what: PathString,
40 where: Path,
41 operation: Optional[str] = None,
42 options: Sequence[str] = (),
43 type: Optional[str] = None,
44 read_only: bool = False,
45 umount: bool = True,
46 lazy: bool = False,
47 ) -> Iterator[Path]:
48 if not where.exists():
49 with umask(~0o755):
50 where.mkdir(parents=True)
51
52 if read_only:
53 options = ["ro", *options]
54
55 cmd: list[PathString] = ["mount", "--no-mtab"]
56
57 if operation:
58 cmd += [operation]
59
60 cmd += [what, where]
61
62 if type:
63 cmd += ["--types", type]
64
65 if options:
66 cmd += ["--options", ",".join(options)]
67
68 try:
69 run(cmd)
70 yield where
71 finally:
72 if umount:
73 run(["umount", "--no-mtab", *(["--lazy"] if lazy else []), where])
74
75
76 @contextlib.contextmanager
77 def mount_overlay(
78 lowerdirs: Sequence[Path],
79 upperdir: Path,
80 where: Path,
81 read_only: bool = True,
82 ) -> Iterator[Path]:
83 with tempfile.TemporaryDirectory(dir=upperdir.parent, prefix=f"{upperdir.name}-workdir") as workdir:
84 options = [f"lowerdir={lower}" for lower in lowerdirs] + [f"upperdir={upperdir}", f"workdir={workdir}"]
85
86 # userxattr is only supported on overlayfs since kernel 5.11
87 if GenericVersion(platform.release()) >= GenericVersion("5.11"):
88 options.append("userxattr")
89
90 try:
91 with mount("overlay", where, options=options, type="overlay", read_only=read_only):
92 yield where
93 finally:
94 with complete_step("Cleaning up overlayfs"):
95 delete_whiteout_files(upperdir)
96
97
98 @contextlib.contextmanager
99 def mount_usr(tree: Optional[Path], umount: bool = True) -> Iterator[None]:
100 if not tree:
101 yield
102 return
103
104 # If we replace /usr, we should ignore any local modifications made to PATH as any of those binaries
105 # might not work anymore when /usr is replaced wholesale. We also make sure that both /usr/bin and
106 # /usr/sbin/ are searched so that e.g. if the host is Arch and the root is Debian we don't ignore the
107 # binaries from /usr/sbin in the Debian root.
108 old = os.environ["PATH"]
109 os.environ["PATH"] = "/usr/bin:/usr/sbin"
110
111 try:
112 # If we mounted over /usr, trying to use umount will fail with "target is busy", because umount is
113 # being called from /usr, which we're trying to unmount. To work around this issue, we do a lazy
114 # unmount.
115 with mount(what=tree / "usr", where=Path("/usr"), operation="--bind", read_only=True, umount=umount, lazy=True):
116 yield
117 finally:
118 os.environ["PATH"] = old
119
120
121 @contextlib.contextmanager
122 def mount_passwd(name: str, uid: int, gid: int, umount: bool = True) -> Iterator[None]:
123 """
124 ssh looks up the running user in /etc/passwd and fails if it can't find the running user. To trick it, we
125 mount over /etc/passwd with our own file containing our user in the user namespace.
126 """
127 with tempfile.NamedTemporaryFile(prefix="mkosi.passwd", mode="w") as passwd:
128 passwd.write(f"{name}:x:{uid}:{gid}:{name}:/bin/sh\n")
129 os.fchown(passwd.file.fileno(), uid, gid)
130
131 with mount(passwd.name, Path("/etc/passwd"), operation="--bind", umount=umount):
132 passwd.close() # Don't need the file anymore after it's mounted.
133 yield
134
[end of mkosi/mounts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mkosi/mounts.py b/mkosi/mounts.py
--- a/mkosi/mounts.py
+++ b/mkosi/mounts.py
@@ -81,7 +81,19 @@
read_only: bool = True,
) -> Iterator[Path]:
with tempfile.TemporaryDirectory(dir=upperdir.parent, prefix=f"{upperdir.name}-workdir") as workdir:
- options = [f"lowerdir={lower}" for lower in lowerdirs] + [f"upperdir={upperdir}", f"workdir={workdir}"]
+ options = [f"lowerdir={lower}" for lower in lowerdirs] + [
+ f"upperdir={upperdir}",
+ f"workdir={workdir}",
+ # Disable the inodes index and metacopy (only copy metadata upwards if possible)
+ # options. If these are enabled (e.g., if the kernel enables them by default),
+ # the mount will fail if the upper directory has been earlier used with a different
+ # lower directory, such as with a build overlay that was generated on top of a
+ # different temporary root.
+ # See https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html#sharing-and-copying-layers
+ # and https://github.com/systemd/mkosi/issues/1841.
+ "index=off",
+ "metacopy=off"
+ ]
# userxattr is only supported on overlayfs since kernel 5.11
if GenericVersion(platform.release()) >= GenericVersion("5.11"):
|
{"golden_diff": "diff --git a/mkosi/mounts.py b/mkosi/mounts.py\n--- a/mkosi/mounts.py\n+++ b/mkosi/mounts.py\n@@ -81,7 +81,19 @@\n read_only: bool = True,\n ) -> Iterator[Path]:\n with tempfile.TemporaryDirectory(dir=upperdir.parent, prefix=f\"{upperdir.name}-workdir\") as workdir:\n- options = [f\"lowerdir={lower}\" for lower in lowerdirs] + [f\"upperdir={upperdir}\", f\"workdir={workdir}\"]\n+ options = [f\"lowerdir={lower}\" for lower in lowerdirs] + [\n+ f\"upperdir={upperdir}\",\n+ f\"workdir={workdir}\",\n+ # Disable the inodes index and metacopy (only copy metadata upwards if possible)\n+ # options. If these are enabled (e.g., if the kernel enables them by default),\n+ # the mount will fail if the upper directory has been earlier used with a different\n+ # lower directory, such as with a build overlay that was generated on top of a\n+ # different temporary root.\n+ # See https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html#sharing-and-copying-layers\n+ # and https://github.com/systemd/mkosi/issues/1841.\n+ \"index=off\",\n+ \"metacopy=off\"\n+ ]\n \n # userxattr is only supported on overlayfs since kernel 5.11\n if GenericVersion(platform.release()) >= GenericVersion(\"5.11\"):\n", "issue": "Unable to mount build-overlay with incremental builds\nI am having issues getting incremental builds to run as the build overlay fails to mount. This is on Arch Linux with a btrfs filesystem, using mkosi commit 7030d76 (the latest as of the time of writing).\r\n\r\nUsing the following mkosi configuration:\r\n\r\n```ini\r\n[Distribution]\r\nDistribution=debian\r\nRelease=bookworm\r\n\r\n[Output]\r\nCacheDirectory=cache\r\nImageId=test\r\n\r\n[Content]\r\nBuildPackages=\r\n make\r\nBuildScript=build.sh\r\n```\r\n\r\nAnd a simple build script:\r\n\r\n```bash\r\n#!/usr/bin/bash\r\necho \"building\"\r\nmake --version\r\n```\r\n\r\nOn the first run of `mkosi -i -f` everything runs successfully and I get an output image. 
Of potential interest is the dmesg output:\r\n\r\n```console\r\n[ 1907.326051] overlayfs: fs on '<path>/.mkosi-tmp7x3kcw9q/root' does not support file handles, falling back to index=off,nfs_export=off.\r\n[ 1907.326054] overlayfs: fs on '<path>/.mkosi-tmp7x3kcw9q/root' does not support file handles, falling back to xino=off.\r\n```\r\n\r\nHowever, on a second run of `mkosi -i -f` I get the following error:\r\n\r\n```console\r\n\u2023 Removing output files\u2026\r\n\u2023 Building default image\r\nCreate subvolume '<path>/.mkosi-tmp38ur8lqj/root'\r\n\u2023 Copying cached trees\r\n\u2023 Running build script\u2026\r\nmount: <path>/.mkosi-tmp38ur8lqj/root: mount(2) system call failed: Stale file handle.\r\n dmesg(1) may have more information after failed mount system call.\r\n\u2023 \"mount --no-mtab overlay <path>/.mkosi-tmp38ur8lqj/root --types overlay --options lowerdir=<path>/.mkosi-tmp38ur8lqj/root,upperdir=<path>/.mkosi-tmp38ur8lqj/build-overlay,workdir=<path>/.mkosi-tmp38ur8lqj/build-overlay-workdirpawin893,userxattr\" returned non-zero exit code 32.\r\numount: <path>/.mkosi-tmp38ur8lqj/root: not mounted.\r\n\u2023 \"umount --no-mtab <path>/.mkosi-tmp38ur8lqj/root\" returned non-zero exit code 32.\r\n\u2023 (Cleaning up overlayfs)\r\n\u2023 (Removing overlay whiteout files\u2026)\r\n```\r\n\r\nThe corresponding dmesg output is:\r\n```console\r\n[ 2126.464825] overlayfs: failed to verify origin (.mkosi-tmp38ur8lqj/root, ino=256, err=-116)\r\n[ 2126.464829] overlayfs: failed to verify upper root origin\r\n```\r\n\r\nFrom [the overlayfs docs](https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html#sharing-and-copying-layers)\r\n> Mounting an overlay using an upper layer path, where the upper layer path was previously used by another mounted overlay in combination with a different lower layer path, is allowed, unless the \"inodes index\" feature or \"metadata only copy up\" feature is enabled.\r\n\r\nIt seems that Arch [has these features enabled by default](https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/754a4cb9f87cb074eedbb6bebf1f926543a5b64f/config#L10193), and I am guessing that the use of a new temporary directory for the lower /root on each run causes a mismatch with the one used when the build overlay was created.\r\n\r\nIf I modify line 84 of `mkosi/mounts.py`\r\nhttps://github.com/systemd/mkosi/blob/7030d76ae957d35d5b49e72c54c1d7f771cc4cac/mkosi/mounts.py#L84\r\n\r\nto\r\n\r\n```python\r\n options = [f\"lowerdir={lower}\" for lower in lowerdirs] + [f\"upperdir={upperdir}\", f\"workdir={workdir}\", \"index=off\"]\r\n```\r\n\r\nthen the second incremental run works (it seems not to be neccessary to clean the cached images with `-ff`). 
The dmesg warning from the first run about falling back to index=off seems to not apply to the build overlay.\r\n\r\nNote that it wasn't neccessary (in my case, at least) to add `\"metacopy=off\"` to the options, although I guess this might be safer in general.\r\n\r\nI am not sure if this is the proper fix; if so, I am happy to submit a pull request (or for a maintainer to just make the trivial change directly).\n", "before_files": [{"content": "# SPDX-License-Identifier: LGPL-2.1+\n\nimport contextlib\nimport os\nimport platform\nimport stat\nimport tempfile\nfrom collections.abc import Iterator, Sequence\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom mkosi.log import complete_step\nfrom mkosi.run import run\nfrom mkosi.types import PathString\nfrom mkosi.util import umask\nfrom mkosi.versioncomp import GenericVersion\n\n\ndef stat_is_whiteout(st: os.stat_result) -> bool:\n return stat.S_ISCHR(st.st_mode) and st.st_rdev == 0\n\n\ndef delete_whiteout_files(path: Path) -> None:\n \"\"\"Delete any char(0,0) device nodes underneath @path\n\n Overlayfs uses such files to mark \"whiteouts\" (files present in\n the lower layers, but removed in the upper one).\n \"\"\"\n\n with complete_step(\"Removing overlay whiteout files\u2026\"):\n for entry in path.rglob(\"*\"):\n # TODO: Use Path.stat() once we depend on Python 3.10+.\n if stat_is_whiteout(os.stat(entry, follow_symlinks=False)):\n entry.unlink()\n\n\[email protected]\ndef mount(\n what: PathString,\n where: Path,\n operation: Optional[str] = None,\n options: Sequence[str] = (),\n type: Optional[str] = None,\n read_only: bool = False,\n umount: bool = True,\n lazy: bool = False,\n) -> Iterator[Path]:\n if not where.exists():\n with umask(~0o755):\n where.mkdir(parents=True)\n\n if read_only:\n options = [\"ro\", *options]\n\n cmd: list[PathString] = [\"mount\", \"--no-mtab\"]\n\n if operation:\n cmd += [operation]\n\n cmd += [what, where]\n\n if type:\n cmd += [\"--types\", type]\n\n if options:\n cmd += [\"--options\", \",\".join(options)]\n\n try:\n run(cmd)\n yield where\n finally:\n if umount:\n run([\"umount\", \"--no-mtab\", *([\"--lazy\"] if lazy else []), where])\n\n\[email protected]\ndef mount_overlay(\n lowerdirs: Sequence[Path],\n upperdir: Path,\n where: Path,\n read_only: bool = True,\n) -> Iterator[Path]:\n with tempfile.TemporaryDirectory(dir=upperdir.parent, prefix=f\"{upperdir.name}-workdir\") as workdir:\n options = [f\"lowerdir={lower}\" for lower in lowerdirs] + [f\"upperdir={upperdir}\", f\"workdir={workdir}\"]\n\n # userxattr is only supported on overlayfs since kernel 5.11\n if GenericVersion(platform.release()) >= GenericVersion(\"5.11\"):\n options.append(\"userxattr\")\n\n try:\n with mount(\"overlay\", where, options=options, type=\"overlay\", read_only=read_only):\n yield where\n finally:\n with complete_step(\"Cleaning up overlayfs\"):\n delete_whiteout_files(upperdir)\n\n\[email protected]\ndef mount_usr(tree: Optional[Path], umount: bool = True) -> Iterator[None]:\n if not tree:\n yield\n return\n\n # If we replace /usr, we should ignore any local modifications made to PATH as any of those binaries\n # might not work anymore when /usr is replaced wholesale. We also make sure that both /usr/bin and\n # /usr/sbin/ are searched so that e.g. 
if the host is Arch and the root is Debian we don't ignore the\n # binaries from /usr/sbin in the Debian root.\n old = os.environ[\"PATH\"]\n os.environ[\"PATH\"] = \"/usr/bin:/usr/sbin\"\n\n try:\n # If we mounted over /usr, trying to use umount will fail with \"target is busy\", because umount is\n # being called from /usr, which we're trying to unmount. To work around this issue, we do a lazy\n # unmount.\n with mount(what=tree / \"usr\", where=Path(\"/usr\"), operation=\"--bind\", read_only=True, umount=umount, lazy=True):\n yield\n finally:\n os.environ[\"PATH\"] = old\n\n\[email protected]\ndef mount_passwd(name: str, uid: int, gid: int, umount: bool = True) -> Iterator[None]:\n \"\"\"\n ssh looks up the running user in /etc/passwd and fails if it can't find the running user. To trick it, we\n mount over /etc/passwd with our own file containing our user in the user namespace.\n \"\"\"\n with tempfile.NamedTemporaryFile(prefix=\"mkosi.passwd\", mode=\"w\") as passwd:\n passwd.write(f\"{name}:x:{uid}:{gid}:{name}:/bin/sh\\n\")\n os.fchown(passwd.file.fileno(), uid, gid)\n\n with mount(passwd.name, Path(\"/etc/passwd\"), operation=\"--bind\", umount=umount):\n passwd.close() # Don't need the file anymore after it's mounted.\n yield\n", "path": "mkosi/mounts.py"}]}
| 3,058 | 358 |
gh_patches_debug_9406
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-8707
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PDF Filename is broken when saving if it contains UTF-8 characters
## Description of the issue
When trying to save a PDF using Python 3, the filename is broken if it contains UTF-8 characters.
## Context information (for bug reports)
**Output of `bench version`**
```
erpnext 12.x.x-develop
frappe 12.x.x-develop
fatal: Not a git repository (or any of the parent directories): .git
```
## Steps to reproduce the issue
For example
1. create an invoice naming series called "ΤΠΥ"
2. then create an invoice in this series
3. try to save the PDF
### Observed result
filename is broken

### Expected result
Filename renders correctly
## Additional information
OS version / distribution, `ERPNext` install method, etc.
Ubuntu 16.04
easy install script
Python 3
</issue>
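
For context, a common way to keep non-ASCII filenames intact is the RFC 5987/6266 `filename*` parameter alongside an ASCII fallback. The sketch below is an assumption about one possible approach, not the project's actual fix; the helper name is invented.

```python
# Sketch only: percent-encode the UTF-8 filename for Content-Disposition and
# keep a plain-ASCII fallback for old clients.
from urllib.parse import quote


def content_disposition(filename: str) -> str:
    safe_name = filename.replace(" ", "_")
    ascii_fallback = safe_name.encode("ascii", "replace").decode("ascii")
    encoded = quote(safe_name, encoding="utf-8")
    return 'filename="%s"; filename*=UTF-8\'\'%s' % (ascii_fallback, encoded)


# content_disposition("ΤΠΥ 0001.pdf")
# -> 'filename="???_0001.pdf"; filename*=UTF-8\'\'%CE%A4%CE%A0%CE%A5_0001.pdf'
```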
<code>
[start of frappe/utils/response.py]
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # MIT License. See license.txt
3
4 from __future__ import unicode_literals
5 import json
6 import datetime
7 import decimal
8 import mimetypes
9 import os
10 import frappe
11 from frappe import _
12 import frappe.model.document
13 import frappe.utils
14 import frappe.sessions
15 import werkzeug.utils
16 from werkzeug.local import LocalProxy
17 from werkzeug.wsgi import wrap_file
18 from werkzeug.wrappers import Response
19 from werkzeug.exceptions import NotFound, Forbidden
20 from frappe.website.render import render
21 from frappe.utils import cint
22 from six import text_type
23 from six.moves.urllib.parse import quote
24 from frappe.core.doctype.access_log.access_log import make_access_log
25
26
27 def report_error(status_code):
28 '''Build error. Show traceback in developer mode'''
29 if (cint(frappe.db.get_system_setting('allow_error_traceback'))
30 and (status_code!=404 or frappe.conf.logging)
31 and not frappe.local.flags.disable_traceback):
32 frappe.errprint(frappe.utils.get_traceback())
33
34 response = build_response("json")
35 response.status_code = status_code
36 return response
37
38 def build_response(response_type=None):
39 if "docs" in frappe.local.response and not frappe.local.response.docs:
40 del frappe.local.response["docs"]
41
42 response_type_map = {
43 'csv': as_csv,
44 'txt': as_txt,
45 'download': as_raw,
46 'json': as_json,
47 'pdf': as_pdf,
48 'page': as_page,
49 'redirect': redirect,
50 'binary': as_binary
51 }
52
53 return response_type_map[frappe.response.get('type') or response_type]()
54
55 def as_csv():
56 response = Response()
57 response.mimetype = 'text/csv'
58 response.charset = 'utf-8'
59 response.headers["Content-Disposition"] = ("attachment; filename=\"%s.csv\"" % frappe.response['doctype'].replace(' ', '_')).encode("utf-8")
60 response.data = frappe.response['result']
61 return response
62
63 def as_txt():
64 response = Response()
65 response.mimetype = 'text'
66 response.charset = 'utf-8'
67 response.headers["Content-Disposition"] = ("attachment; filename=\"%s.txt\"" % frappe.response['doctype'].replace(' ', '_')).encode("utf-8")
68 response.data = frappe.response['result']
69 return response
70
71 def as_raw():
72 response = Response()
73 response.mimetype = frappe.response.get("content_type") or mimetypes.guess_type(frappe.response['filename'])[0] or "application/unknown"
74 response.headers["Content-Disposition"] = ("attachment; filename=\"%s\"" % frappe.response['filename'].replace(' ', '_')).encode("utf-8")
75 response.data = frappe.response['filecontent']
76 return response
77
78 def as_json():
79 make_logs()
80 response = Response()
81 if frappe.local.response.http_status_code:
82 response.status_code = frappe.local.response['http_status_code']
83 del frappe.local.response['http_status_code']
84
85 response.mimetype = 'application/json'
86 response.charset = 'utf-8'
87 response.data = json.dumps(frappe.local.response, default=json_handler, separators=(',',':'))
88 return response
89
90 def as_pdf():
91 response = Response()
92 response.mimetype = "application/pdf"
93 response.headers["Content-Disposition"] = ("filename=\"%s\"" % frappe.response['filename'].replace(' ', '_')).encode("utf-8")
94 response.data = frappe.response['filecontent']
95 return response
96
97 def as_binary():
98 response = Response()
99 response.mimetype = 'application/octet-stream'
100 response.headers["Content-Disposition"] = ("filename=\"%s\"" % frappe.response['filename'].replace(' ', '_')).encode("utf-8")
101 response.data = frappe.response['filecontent']
102 return response
103
104 def make_logs(response = None):
105 """make strings for msgprint and errprint"""
106 if not response:
107 response = frappe.local.response
108
109 if frappe.error_log:
110 response['exc'] = json.dumps([frappe.utils.cstr(d["exc"]) for d in frappe.local.error_log])
111
112 if frappe.local.message_log:
113 response['_server_messages'] = json.dumps([frappe.utils.cstr(d) for
114 d in frappe.local.message_log])
115
116 if frappe.debug_log and frappe.conf.get("logging") or False:
117 response['_debug_messages'] = json.dumps(frappe.local.debug_log)
118
119 if frappe.flags.error_message:
120 response['_error_message'] = frappe.flags.error_message
121
122 def json_handler(obj):
123 """serialize non-serializable data for json"""
124 # serialize date
125 import collections
126
127 if isinstance(obj, (datetime.date, datetime.timedelta, datetime.datetime)):
128 return text_type(obj)
129
130 elif isinstance(obj, decimal.Decimal):
131 return float(obj)
132
133 elif isinstance(obj, LocalProxy):
134 return text_type(obj)
135
136 elif isinstance(obj, frappe.model.document.BaseDocument):
137 doc = obj.as_dict(no_nulls=True)
138 return doc
139
140 elif isinstance(obj, collections.Iterable):
141 return list(obj)
142
143 elif type(obj)==type or isinstance(obj, Exception):
144 return repr(obj)
145
146 else:
147 raise TypeError("""Object of type %s with value of %s is not JSON serializable""" % \
148 (type(obj), repr(obj)))
149
150 def as_page():
151 """print web page"""
152 return render(frappe.response['route'], http_status_code=frappe.response.get("http_status_code"))
153
154 def redirect():
155 return werkzeug.utils.redirect(frappe.response.location)
156
157 def download_backup(path):
158 try:
159 frappe.only_for(("System Manager", "Administrator"))
160 make_access_log(report_name='Backup')
161 except frappe.PermissionError:
162 raise Forbidden(_("You need to be logged in and have System Manager Role to be able to access backups."))
163
164 return send_private_file(path)
165
166 def download_private_file(path):
167 """Checks permissions and sends back private file"""
168
169 files = frappe.db.get_all('File', {'file_url': path})
170 can_access = False
171 # this file might be attached to multiple documents
172 # if the file is accessible from any one of those documents
173 # then it should be downloadable
174 for f in files:
175 _file = frappe.get_doc("File", f)
176 can_access = _file.is_downloadable()
177 if can_access:
178 make_access_log(doctype='File', document=_file.name, file_type=os.path.splitext(path)[-1][1:])
179 break
180
181 if not can_access:
182 raise Forbidden(_("You don't have permission to access this file"))
183
184 return send_private_file(path.split("/private", 1)[1])
185
186
187 def send_private_file(path):
188 path = os.path.join(frappe.local.conf.get('private_path', 'private'), path.strip("/"))
189 filename = os.path.basename(path)
190
191 if frappe.local.request.headers.get('X-Use-X-Accel-Redirect'):
192 path = '/protected/' + path
193 response = Response()
194 response.headers['X-Accel-Redirect'] = quote(frappe.utils.encode(path))
195
196 else:
197 filepath = frappe.utils.get_site_path(path)
198 try:
199 f = open(filepath, 'rb')
200 except IOError:
201 raise NotFound
202
203 response = Response(wrap_file(frappe.local.request.environ, f), direct_passthrough=True)
204
205 # no need for content disposition and force download. let browser handle its opening.
206 # Except for those that can be injected with scripts.
207
208 extension = os.path.splitext(path)[1]
209 blacklist = ['.svg', '.html', '.htm', '.xml']
210
211 if extension.lower() in blacklist:
212 response.headers.add(b'Content-Disposition', b'attachment', filename=filename.encode("utf-8"))
213
214 response.mimetype = mimetypes.guess_type(filename)[0] or 'application/octet-stream'
215
216 return response
217
218 def handle_session_stopped():
219 frappe.respond_as_web_page(_("Updating"),
220 _("Your system is being updated. Please refresh again after a few moments"),
221 http_status_code=503, indicator_color='orange', fullpage = True, primary_action=None)
222 return frappe.website.render.render("message", http_status_code=503)
223
[end of frappe/utils/response.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/frappe/utils/response.py b/frappe/utils/response.py
--- a/frappe/utils/response.py
+++ b/frappe/utils/response.py
@@ -90,7 +90,8 @@
def as_pdf():
response = Response()
response.mimetype = "application/pdf"
- response.headers["Content-Disposition"] = ("filename=\"%s\"" % frappe.response['filename'].replace(' ', '_')).encode("utf-8")
+ encoded_filename = quote(frappe.response['filename'].replace(' ', '_'), encoding='utf-8')
+ response.headers["Content-Disposition"] = ("filename=\"%s\"" % frappe.response['filename'].replace(' ', '_') + ";filename*=utf-8''%s" % encoded_filename).encode("utf-8")
response.data = frappe.response['filecontent']
return response
|
{"golden_diff": "diff --git a/frappe/utils/response.py b/frappe/utils/response.py\n--- a/frappe/utils/response.py\n+++ b/frappe/utils/response.py\n@@ -90,7 +90,8 @@\n def as_pdf():\n \tresponse = Response()\n \tresponse.mimetype = \"application/pdf\"\n-\tresponse.headers[\"Content-Disposition\"] = (\"filename=\\\"%s\\\"\" % frappe.response['filename'].replace(' ', '_')).encode(\"utf-8\")\n+\tencoded_filename = quote(frappe.response['filename'].replace(' ', '_'), encoding='utf-8')\n+\tresponse.headers[\"Content-Disposition\"] = (\"filename=\\\"%s\\\"\" % frappe.response['filename'].replace(' ', '_') + \";filename*=utf-8''%s\" % encoded_filename).encode(\"utf-8\")\n \tresponse.data = frappe.response['filecontent']\n \treturn response\n", "issue": "PDF Filename is broken when saving if it contains UTF-8 characters\n## Description of the issue\r\nWhen trying to save a PDF using python 3 the filename is broken if it contains UTF-8 characters. \r\n## Context information (for bug reports)\r\n\r\n**Output of `bench version`**\r\n```\r\nerpnext 12.x.x-develop\r\nfrappe 12.x.x-develop\r\nfatal: Not a git repository (or any of the parent directories): .git\r\n```\r\n\r\n## Steps to reproduce the issue\r\nFor example \r\n1. create an invoice naming series called \"\u03a4\u03a0\u03a5\" \r\n2. then create an invoice in this series\r\n3. try to save the PDF\r\n\r\n### Observed result\r\nfilename is broken\r\n\r\n\r\n### Expected result\r\nFilename renders correctly\r\n\r\n\r\n## Additional information\r\n\r\nOS version / distribution, `ERPNext` install method, etc.\r\nUbuntu 16.04\r\neasy install script \r\nPython 3\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. See license.txt\n\nfrom __future__ import unicode_literals\nimport json\nimport datetime\nimport decimal\nimport mimetypes\nimport os\nimport frappe\nfrom frappe import _\nimport frappe.model.document\nimport frappe.utils\nimport frappe.sessions\nimport werkzeug.utils\nfrom werkzeug.local import LocalProxy\nfrom werkzeug.wsgi import wrap_file\nfrom werkzeug.wrappers import Response\nfrom werkzeug.exceptions import NotFound, Forbidden\nfrom frappe.website.render import render\nfrom frappe.utils import cint\nfrom six import text_type\nfrom six.moves.urllib.parse import quote\nfrom frappe.core.doctype.access_log.access_log import make_access_log\n\n\ndef report_error(status_code):\n\t'''Build error. 
Show traceback in developer mode'''\n\tif (cint(frappe.db.get_system_setting('allow_error_traceback'))\n\t\tand (status_code!=404 or frappe.conf.logging)\n\t\tand not frappe.local.flags.disable_traceback):\n\t\tfrappe.errprint(frappe.utils.get_traceback())\n\n\tresponse = build_response(\"json\")\n\tresponse.status_code = status_code\n\treturn response\n\ndef build_response(response_type=None):\n\tif \"docs\" in frappe.local.response and not frappe.local.response.docs:\n\t\tdel frappe.local.response[\"docs\"]\n\n\tresponse_type_map = {\n\t\t'csv': as_csv,\n\t\t'txt': as_txt,\n\t\t'download': as_raw,\n\t\t'json': as_json,\n\t\t'pdf': as_pdf,\n\t\t'page': as_page,\n\t\t'redirect': redirect,\n\t\t'binary': as_binary\n\t}\n\n\treturn response_type_map[frappe.response.get('type') or response_type]()\n\ndef as_csv():\n\tresponse = Response()\n\tresponse.mimetype = 'text/csv'\n\tresponse.charset = 'utf-8'\n\tresponse.headers[\"Content-Disposition\"] = (\"attachment; filename=\\\"%s.csv\\\"\" % frappe.response['doctype'].replace(' ', '_')).encode(\"utf-8\")\n\tresponse.data = frappe.response['result']\n\treturn response\n\ndef as_txt():\n\tresponse = Response()\n\tresponse.mimetype = 'text'\n\tresponse.charset = 'utf-8'\n\tresponse.headers[\"Content-Disposition\"] = (\"attachment; filename=\\\"%s.txt\\\"\" % frappe.response['doctype'].replace(' ', '_')).encode(\"utf-8\")\n\tresponse.data = frappe.response['result']\n\treturn response\n\ndef as_raw():\n\tresponse = Response()\n\tresponse.mimetype = frappe.response.get(\"content_type\") or mimetypes.guess_type(frappe.response['filename'])[0] or \"application/unknown\"\n\tresponse.headers[\"Content-Disposition\"] = (\"attachment; filename=\\\"%s\\\"\" % frappe.response['filename'].replace(' ', '_')).encode(\"utf-8\")\n\tresponse.data = frappe.response['filecontent']\n\treturn response\n\ndef as_json():\n\tmake_logs()\n\tresponse = Response()\n\tif frappe.local.response.http_status_code:\n\t\tresponse.status_code = frappe.local.response['http_status_code']\n\t\tdel frappe.local.response['http_status_code']\n\n\tresponse.mimetype = 'application/json'\n\tresponse.charset = 'utf-8'\n\tresponse.data = json.dumps(frappe.local.response, default=json_handler, separators=(',',':'))\n\treturn response\n\ndef as_pdf():\n\tresponse = Response()\n\tresponse.mimetype = \"application/pdf\"\n\tresponse.headers[\"Content-Disposition\"] = (\"filename=\\\"%s\\\"\" % frappe.response['filename'].replace(' ', '_')).encode(\"utf-8\")\n\tresponse.data = frappe.response['filecontent']\n\treturn response\n\ndef as_binary():\n\tresponse = Response()\n\tresponse.mimetype = 'application/octet-stream'\n\tresponse.headers[\"Content-Disposition\"] = (\"filename=\\\"%s\\\"\" % frappe.response['filename'].replace(' ', '_')).encode(\"utf-8\")\n\tresponse.data = frappe.response['filecontent']\n\treturn response\n\ndef make_logs(response = None):\n\t\"\"\"make strings for msgprint and errprint\"\"\"\n\tif not response:\n\t\tresponse = frappe.local.response\n\n\tif frappe.error_log:\n\t\tresponse['exc'] = json.dumps([frappe.utils.cstr(d[\"exc\"]) for d in frappe.local.error_log])\n\n\tif frappe.local.message_log:\n\t\tresponse['_server_messages'] = json.dumps([frappe.utils.cstr(d) for\n\t\t\td in frappe.local.message_log])\n\n\tif frappe.debug_log and frappe.conf.get(\"logging\") or False:\n\t\tresponse['_debug_messages'] = json.dumps(frappe.local.debug_log)\n\n\tif frappe.flags.error_message:\n\t\tresponse['_error_message'] = frappe.flags.error_message\n\ndef 
json_handler(obj):\n\t\"\"\"serialize non-serializable data for json\"\"\"\n\t# serialize date\n\timport collections\n\n\tif isinstance(obj, (datetime.date, datetime.timedelta, datetime.datetime)):\n\t\treturn text_type(obj)\n\n\telif isinstance(obj, decimal.Decimal):\n\t\treturn float(obj)\n\n\telif isinstance(obj, LocalProxy):\n\t\treturn text_type(obj)\n\n\telif isinstance(obj, frappe.model.document.BaseDocument):\n\t\tdoc = obj.as_dict(no_nulls=True)\n\t\treturn doc\n\n\telif isinstance(obj, collections.Iterable):\n\t\treturn list(obj)\n\n\telif type(obj)==type or isinstance(obj, Exception):\n\t\treturn repr(obj)\n\n\telse:\n\t\traise TypeError(\"\"\"Object of type %s with value of %s is not JSON serializable\"\"\" % \\\n\t\t\t\t\t\t(type(obj), repr(obj)))\n\ndef as_page():\n\t\"\"\"print web page\"\"\"\n\treturn render(frappe.response['route'], http_status_code=frappe.response.get(\"http_status_code\"))\n\ndef redirect():\n\treturn werkzeug.utils.redirect(frappe.response.location)\n\ndef download_backup(path):\n\ttry:\n\t\tfrappe.only_for((\"System Manager\", \"Administrator\"))\n\t\tmake_access_log(report_name='Backup')\n\texcept frappe.PermissionError:\n\t\traise Forbidden(_(\"You need to be logged in and have System Manager Role to be able to access backups.\"))\n\n\treturn send_private_file(path)\n\ndef download_private_file(path):\n\t\"\"\"Checks permissions and sends back private file\"\"\"\n\n\tfiles = frappe.db.get_all('File', {'file_url': path})\n\tcan_access = False\n\t# this file might be attached to multiple documents\n\t# if the file is accessible from any one of those documents\n\t# then it should be downloadable\n\tfor f in files:\n\t\t_file = frappe.get_doc(\"File\", f)\n\t\tcan_access = _file.is_downloadable()\n\t\tif can_access:\n\t\t\tmake_access_log(doctype='File', document=_file.name, file_type=os.path.splitext(path)[-1][1:])\n\t\t\tbreak\n\n\tif not can_access:\n\t\traise Forbidden(_(\"You don't have permission to access this file\"))\n\n\treturn send_private_file(path.split(\"/private\", 1)[1])\n\n\ndef send_private_file(path):\n\tpath = os.path.join(frappe.local.conf.get('private_path', 'private'), path.strip(\"/\"))\n\tfilename = os.path.basename(path)\n\n\tif frappe.local.request.headers.get('X-Use-X-Accel-Redirect'):\n\t\tpath = '/protected/' + path\n\t\tresponse = Response()\n\t\tresponse.headers['X-Accel-Redirect'] = quote(frappe.utils.encode(path))\n\n\telse:\n\t\tfilepath = frappe.utils.get_site_path(path)\n\t\ttry:\n\t\t\tf = open(filepath, 'rb')\n\t\texcept IOError:\n\t\t\traise NotFound\n\n\t\tresponse = Response(wrap_file(frappe.local.request.environ, f), direct_passthrough=True)\n\n\t# no need for content disposition and force download. let browser handle its opening.\n\t# Except for those that can be injected with scripts.\n\n\textension = os.path.splitext(path)[1]\n\tblacklist = ['.svg', '.html', '.htm', '.xml']\n\n\tif extension.lower() in blacklist:\n\t\tresponse.headers.add(b'Content-Disposition', b'attachment', filename=filename.encode(\"utf-8\"))\n\n\tresponse.mimetype = mimetypes.guess_type(filename)[0] or 'application/octet-stream'\n\n\treturn response\n\ndef handle_session_stopped():\n\tfrappe.respond_as_web_page(_(\"Updating\"),\n\t\t_(\"Your system is being updated. Please refresh again after a few moments\"),\n\t\thttp_status_code=503, indicator_color='orange', fullpage = True, primary_action=None)\n\treturn frappe.website.render.render(\"message\", http_status_code=503)\n", "path": "frappe/utils/response.py"}]}
| 3,183 | 178 |
gh_patches_debug_21032
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3344
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider ingles is broken
During the global build at 2021-07-07-14-42-19, spider **ingles** failed with **0 features** and **189 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-07-07-14-42-19/logs/ingles.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-07-07-14-42-19/output/ingles.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-07-07-14-42-19/output/ingles.geojson))
</issue>
<code>
[start of locations/spiders/ingles.py]
1 # -*- coding: utf-8
2 import scrapy
3 import re
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8 URL = 'https://www.ingles-markets.com/storelocate/storelocator.php?address='
9
10 STORE_STATES = ["Alabama", "Georgia", "North%20Carolina", "South%20Carolina", "Tennessee", "Virginia"]
11
12 DAYS = ["Mo", "Tu", "We", "Th", "Fr", "Sa", "Su"]
13
14 class ingles(scrapy.Spider):
15 name = "ingles"
16 item_attributes = { 'brand': "Ingles" }
17 allowed_domains = ["www.ingles-markets.com"]
18
19 def start_requests(self):
20 for state in STORE_STATES:
21 yield scrapy.Request(URL + state, callback=self.parse)
22
23 def parse_hours(self, hours):
24 opening_hours = OpeningHours()
25
26 for day in DAYS:
27 open_time, close_time = hours.split('to')
28 opening_hours.add_range(day=day, open_time=("".join(open_time).strip()), close_time=("".join(close_time).strip()), time_format="%H:%M%p")
29
30 return opening_hours.as_opening_hours()
31
32 def parse_store(self, response):
33
34 properties = {
35 'ref': response.meta["ref"],
36 'name': response.meta["name"],
37 'addr_full': response.meta["addr_full"],
38 'city': response.meta["city"],
39 'state': response.meta["state"],
40 'postcode': re.search(r'(\d{5})',response.xpath("/html/body/fieldset/div[2]/span[2]/strong/text()").get()).group(),
41 'phone': response.xpath("/html/body/fieldset/div[2]/a/text()").get(),
42 'lat': response.meta["lat"],
43 'lon': response.meta["lon"],
44 'website': response.url,
45 }
46
47 hours = self.parse_hours(" ".join(response.xpath("/html/body/fieldset/div[2]/text()")[2].getall()).strip())
48 if hours:
49 properties["opening_hours"] = hours
50
51 yield GeojsonPointItem(**properties)
52
53 def parse(self, response):
54 for store in response.xpath('//markers/marker'):
55 ids =store.xpath('./@id').extract_first(),
56 name = store.xpath('./@name').get()
57 addr = store.xpath('./@address').get()
58 city = store.xpath('./@city').get()
59 state = store.xpath('./@state').get()
60 lats = store.xpath('./@lat').get()
61 longs = store.xpath('./@lng').get()
62
63 for id in ids:
64 yield scrapy.Request(
65 'https://www.ingles-markets.com/storelocate/storeinfo.php?storenum=' + id,
66 callback=self.parse_store,
67 meta={
68 'ref': id,
69 'name': name,
70 'addr_full': addr,
71 'city': city,
72 'state': state,
73 'lat': lats,
74 'lon': longs
75 }
76 )
77
[end of locations/spiders/ingles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/ingles.py b/locations/spiders/ingles.py
--- a/locations/spiders/ingles.py
+++ b/locations/spiders/ingles.py
@@ -37,14 +37,14 @@
'addr_full': response.meta["addr_full"],
'city': response.meta["city"],
'state': response.meta["state"],
- 'postcode': re.search(r'(\d{5})',response.xpath("/html/body/fieldset/div[2]/span[2]/strong/text()").get()).group(),
+ 'postcode': re.search(r'(\d{5})',response.xpath("/html/body/div[2]/span[2]/strong/text()").get()).group(),
'phone': response.xpath("/html/body/fieldset/div[2]/a/text()").get(),
'lat': response.meta["lat"],
'lon': response.meta["lon"],
'website': response.url,
}
- hours = self.parse_hours(" ".join(response.xpath("/html/body/fieldset/div[2]/text()")[2].getall()).strip())
+ hours = self.parse_hours(" ".join(response.xpath("/html/body/fieldset/div[2]/text()")[1].getall()).strip())
if hours:
properties["opening_hours"] = hours
|
{"golden_diff": "diff --git a/locations/spiders/ingles.py b/locations/spiders/ingles.py\n--- a/locations/spiders/ingles.py\n+++ b/locations/spiders/ingles.py\n@@ -37,14 +37,14 @@\n 'addr_full': response.meta[\"addr_full\"],\n 'city': response.meta[\"city\"],\n 'state': response.meta[\"state\"],\n- 'postcode': re.search(r'(\\d{5})',response.xpath(\"/html/body/fieldset/div[2]/span[2]/strong/text()\").get()).group(),\n+ 'postcode': re.search(r'(\\d{5})',response.xpath(\"/html/body/div[2]/span[2]/strong/text()\").get()).group(),\n 'phone': response.xpath(\"/html/body/fieldset/div[2]/a/text()\").get(),\n 'lat': response.meta[\"lat\"],\n 'lon': response.meta[\"lon\"],\n 'website': response.url,\n }\n \n- hours = self.parse_hours(\" \".join(response.xpath(\"/html/body/fieldset/div[2]/text()\")[2].getall()).strip())\n+ hours = self.parse_hours(\" \".join(response.xpath(\"/html/body/fieldset/div[2]/text()\")[1].getall()).strip())\n if hours:\n properties[\"opening_hours\"] = hours\n", "issue": "Spider ingles is broken\nDuring the global build at 2021-07-07-14-42-19, spider **ingles** failed with **0 features** and **189 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-07-07-14-42-19/logs/ingles.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-07-07-14-42-19/output/ingles.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-07-07-14-42-19/output/ingles.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8\nimport scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nURL = 'https://www.ingles-markets.com/storelocate/storelocator.php?address='\n\nSTORE_STATES = [\"Alabama\", \"Georgia\", \"North%20Carolina\", \"South%20Carolina\", \"Tennessee\", \"Virginia\"]\n\nDAYS = [\"Mo\", \"Tu\", \"We\", \"Th\", \"Fr\", \"Sa\", \"Su\"]\n\nclass ingles(scrapy.Spider):\n name = \"ingles\"\n item_attributes = { 'brand': \"Ingles\" }\n allowed_domains = [\"www.ingles-markets.com\"]\n\n def start_requests(self):\n for state in STORE_STATES:\n yield scrapy.Request(URL + state, callback=self.parse)\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for day in DAYS:\n open_time, close_time = hours.split('to')\n opening_hours.add_range(day=day, open_time=(\"\".join(open_time).strip()), close_time=(\"\".join(close_time).strip()), time_format=\"%H:%M%p\")\n\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n\n properties = {\n 'ref': response.meta[\"ref\"],\n 'name': response.meta[\"name\"],\n 'addr_full': response.meta[\"addr_full\"],\n 'city': response.meta[\"city\"],\n 'state': response.meta[\"state\"],\n 'postcode': re.search(r'(\\d{5})',response.xpath(\"/html/body/fieldset/div[2]/span[2]/strong/text()\").get()).group(),\n 'phone': response.xpath(\"/html/body/fieldset/div[2]/a/text()\").get(),\n 'lat': response.meta[\"lat\"],\n 'lon': response.meta[\"lon\"],\n 'website': response.url,\n }\n\n hours = self.parse_hours(\" \".join(response.xpath(\"/html/body/fieldset/div[2]/text()\")[2].getall()).strip())\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n for store in response.xpath('//markers/marker'):\n ids =store.xpath('./@id').extract_first(),\n name = store.xpath('./@name').get()\n addr = store.xpath('./@address').get()\n city = store.xpath('./@city').get()\n state = store.xpath('./@state').get()\n lats = store.xpath('./@lat').get()\n longs = 
store.xpath('./@lng').get()\n\n for id in ids:\n yield scrapy.Request(\n 'https://www.ingles-markets.com/storelocate/storeinfo.php?storenum=' + id,\n callback=self.parse_store,\n meta={\n 'ref': id,\n 'name': name,\n 'addr_full': addr,\n 'city': city,\n 'state': state,\n 'lat': lats,\n 'lon': longs\n }\n )\n", "path": "locations/spiders/ingles.py"}]}
| 1,511 | 281 |
gh_patches_debug_36858
|
rasdani/github-patches
|
git_diff
|
digitalfabrik__integreat-cms-368
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update Readme to use python3.7-dev
For Ubuntu it's also required to install python3-dev, so we need to update this too.
</issue>
<code>
[start of src/backend/wsgi.py]
1 """
2 WSGI config for backend project.
3
4 It exposes the WSGI callable as a module-level variable named ``application``.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
8 """
9
10 import os
11
12 from django.core.wsgi import get_wsgi_application
13
14 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "backend.settings")
15
16 application = get_wsgi_application()
17
[end of src/backend/wsgi.py]
[start of src/backend/settings.py]
1 """
2 Django settings for backend project.
3
4 Generated by 'django-admin startproject' using Django 1.11.11.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.11/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.11/ref/settings/
11 """
12
13 import os
14
15 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
16 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
17
18
19 # Quick-start development settings - unsuitable for production
20 # See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
21
22 # SECURITY WARNING: keep the secret key used in production secret!
23 SECRET_KEY = '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_'
24
25 # SECURITY WARNING: don't run with debug turned on in production!
26 DEBUG = True
27
28 ALLOWED_HOSTS = [
29 'localhost',
30 '127.0.0.1',
31 '0.0.0.0'
32 ]
33
34 # Needed for webauthn (this is a setting in case the application runs behind a proxy)
35 HOSTNAME = 'localhost'
36 BASE_URL = 'http://localhost:8000'
37
38 # Application definition
39
40 INSTALLED_APPS = [
41 'cms.apps.CmsConfig',
42 'gvz_api.apps.GvzApiConfig',
43 'django.contrib.admin',
44 'django.contrib.auth',
45 'django.contrib.contenttypes',
46 'django.contrib.messages',
47 'django.contrib.sessions',
48 'django.contrib.staticfiles',
49 'compressor',
50 'compressor_toolkit',
51 'widget_tweaks',
52 'easy_thumbnails',
53 'filer',
54 'mptt',
55 'rules.apps.AutodiscoverRulesConfig',
56 ]
57
58 MIDDLEWARE = [
59 'django.middleware.security.SecurityMiddleware',
60 'django.contrib.sessions.middleware.SessionMiddleware',
61 'django.middleware.locale.LocaleMiddleware',
62 'django.middleware.common.CommonMiddleware',
63 'django.middleware.csrf.CsrfViewMiddleware',
64 'django.contrib.auth.middleware.AuthenticationMiddleware',
65 'django.contrib.messages.middleware.MessageMiddleware',
66 'django.middleware.clickjacking.XFrameOptionsMiddleware',
67 ]
68
69 ROOT_URLCONF = 'backend.urls'
70 THUMBNAIL_HIGH_RESOLUTION = True
71
72 TEMPLATES = [
73 {
74 'BACKEND': 'django.template.backends.django.DjangoTemplates',
75 'DIRS': [],
76 'APP_DIRS': True,
77 'OPTIONS': {
78 'context_processors': [
79 'django.template.context_processors.debug',
80 'django.template.context_processors.request',
81 'django.contrib.auth.context_processors.auth',
82 'django.contrib.messages.context_processors.messages',
83 'backend.context_processors.region_slug_processor',
84 ],
85 },
86 },
87 ]
88
89 WSGI_APPLICATION = 'backend.wsgi.application'
90
91
92 # Database
93 # https://docs.djangoproject.com/en/1.11/ref/settings/#databases
94
95 DATABASES = {
96 'default': {
97 'ENGINE': 'django.db.backends.postgresql_psycopg2',
98 'NAME': 'integreat',
99 'USER': 'integreat',
100 'PASSWORD': 'password',
101 'HOST': 'localhost',
102 'PORT': '5432',
103 }
104 }
105
106 # Directory for initial database contents
107
108 FIXTURE_DIRS = (
109 os.path.join(BASE_DIR, 'cms/fixtures/'),
110 )
111
112 # Authentication backends
113
114 AUTHENTICATION_BACKENDS = (
115 'rules.permissions.ObjectPermissionBackend',
116 'django.contrib.auth.backends.ModelBackend', # this is default
117 )
118
119
120 # Password validation
121 # https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
122
123 AUTH_PASSWORD_VALIDATORS = [
124 {
125 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
126 },
127 {
128 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
129 },
130 {
131 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
132 },
133 {
134 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
135 },
136 ]
137
138
139 # Internationalization
140 # https://docs.djangoproject.com/en/1.11/topics/i18n/
141
142 LANGUAGES = (
143 ('en-us', 'English'),
144 ('de-de', 'Deutsch'),
145 )
146
147 LOCALE_PATHS = (
148 os.path.join(BASE_DIR, 'locale'),
149 )
150
151 LANGUAGE_CODE = 'de-de'
152
153 TIME_ZONE = 'UTC'
154
155 USE_I18N = True
156
157 USE_L10N = True
158
159 USE_TZ = True
160
161
162 # Static files (CSS, JavaScript, Images)
163 # https://docs.djangoproject.com/en/1.11/howto/static-files/
164
165 STATICFILES_DIRS = [
166 os.path.join(BASE_DIR, "../node_modules"),
167 ]
168 STATIC_URL = '/static/'
169 STATIC_ROOT = os.path.join(BASE_DIR, 'cms/static/')
170
171 # Login
172 LOGIN_URL = '/login'
173 LOGIN_REDIRECT_URL = '/'
174 LOGOUT_REDIRECT_URL = '/login'
175
176 # Miscellaneous
177 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
178 CSRF_FAILURE_VIEW = 'cms.views.error_handler.csrf_failure'
179
180 MEDIA_URL = '/media/'
181 MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
182 FILER_CANONICAL_URL = 'media/'
183
184 LOGGING = {
185 'version': 1,
186 'disable_existing_loggers': False,
187 'handlers': {
188 'console': {
189 'class': 'logging.StreamHandler'
190 },
191 },
192 'loggers': {
193 'django': {
194 'handlers': ['console'],
195 'level': 'WARN',
196 'propagate': True,
197 },
198 'api': {
199 'handlers': ['console'],
200 'level': 'INFO',
201 'propagate': True,
202 },
203 'cms': {
204 'handlers': ['console'],
205 'level': 'INFO',
206 'propagate': True,
207 },
208 'rules': {
209 'handlers': ['console'],
210 'level': 'DEBUG',
211 'propagate': True,
212 },
213 }
214 }
215
216 STATICFILES_FINDERS = (
217 'django.contrib.staticfiles.finders.FileSystemFinder',
218 'django.contrib.staticfiles.finders.AppDirectoriesFinder',
219 'compressor.finders.CompressorFinder',
220 )
221
222 COMPRESS_CSS_FILTERS = [
223 'compressor.filters.css_default.CssAbsoluteFilter',
224 'compressor.filters.cssmin.CSSMinFilter',
225 'compressor.filters.template.TemplateFilter'
226 ]
227 COMPRESS_JS_FILTERS = [
228 'compressor.filters.jsmin.JSMinFilter',
229 ]
230 COMPRESS_PRECOMPILERS = (
231 ('module', 'compressor_toolkit.precompilers.ES6Compiler'),
232 ('css', 'compressor_toolkit.precompilers.SCSSCompiler'),
233 )
234 COMPRESS_ENABLED = True
235 COMPRESS_OFFLINE = True
236
237 # GVZ (Gemeindeverzeichnis) API URL
238 GVZ_API_URL = "http://gvz.integreat-app.de/api/"
239 GVZ_API_ENABLED = True
240
[end of src/backend/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/backend/settings.py b/src/backend/settings.py
--- a/src/backend/settings.py
+++ b/src/backend/settings.py
@@ -4,10 +4,10 @@
Generated by 'django-admin startproject' using Django 1.11.11.
For more information on this file, see
-https://docs.djangoproject.com/en/1.11/topics/settings/
+https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
-https://docs.djangoproject.com/en/1.11/ref/settings/
+https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
@@ -17,7 +17,7 @@
# Quick-start development settings - unsuitable for production
-# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
+# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_'
@@ -90,7 +90,7 @@
# Database
-# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
+# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
@@ -118,7 +118,7 @@
# Password validation
-# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
+# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
@@ -137,7 +137,7 @@
# Internationalization
-# https://docs.djangoproject.com/en/1.11/topics/i18n/
+# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGES = (
('en-us', 'English'),
@@ -160,7 +160,7 @@
# Static files (CSS, JavaScript, Images)
-# https://docs.djangoproject.com/en/1.11/howto/static-files/
+# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "../node_modules"),
diff --git a/src/backend/wsgi.py b/src/backend/wsgi.py
--- a/src/backend/wsgi.py
+++ b/src/backend/wsgi.py
@@ -4,7 +4,7 @@
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
-https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
+https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/
"""
import os
|
{"golden_diff": "diff --git a/src/backend/settings.py b/src/backend/settings.py\n--- a/src/backend/settings.py\n+++ b/src/backend/settings.py\n@@ -4,10 +4,10 @@\n Generated by 'django-admin startproject' using Django 1.11.11.\n \n For more information on this file, see\n-https://docs.djangoproject.com/en/1.11/topics/settings/\n+https://docs.djangoproject.com/en/2.2/topics/settings/\n \n For the full list of settings and their values, see\n-https://docs.djangoproject.com/en/1.11/ref/settings/\n+https://docs.djangoproject.com/en/2.2/ref/settings/\n \"\"\"\n \n import os\n@@ -17,7 +17,7 @@\n \n \n # Quick-start development settings - unsuitable for production\n-# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/\n+# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/\n \n # SECURITY WARNING: keep the secret key used in production secret!\n SECRET_KEY = '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_'\n@@ -90,7 +90,7 @@\n \n \n # Database\n-# https://docs.djangoproject.com/en/1.11/ref/settings/#databases\n+# https://docs.djangoproject.com/en/2.2/ref/settings/#databases\n \n DATABASES = {\n 'default': {\n@@ -118,7 +118,7 @@\n \n \n # Password validation\n-# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators\n+# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators\n \n AUTH_PASSWORD_VALIDATORS = [\n {\n@@ -137,7 +137,7 @@\n \n \n # Internationalization\n-# https://docs.djangoproject.com/en/1.11/topics/i18n/\n+# https://docs.djangoproject.com/en/2.2/topics/i18n/\n \n LANGUAGES = (\n ('en-us', 'English'),\n@@ -160,7 +160,7 @@\n \n \n # Static files (CSS, JavaScript, Images)\n-# https://docs.djangoproject.com/en/1.11/howto/static-files/\n+# https://docs.djangoproject.com/en/2.2/howto/static-files/\n \n STATICFILES_DIRS = [\n os.path.join(BASE_DIR, \"../node_modules\"),\ndiff --git a/src/backend/wsgi.py b/src/backend/wsgi.py\n--- a/src/backend/wsgi.py\n+++ b/src/backend/wsgi.py\n@@ -4,7 +4,7 @@\n It exposes the WSGI callable as a module-level variable named ``application``.\n \n For more information on this file, see\n-https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/\n+https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/\n \"\"\"\n \n import os\n", "issue": "Update Readme to use python3.7-dev\nfor Ubuntu it's also required to install python3-dev so we need also to update this too\n", "before_files": [{"content": "\"\"\"\nWSGI config for backend project.\n\nIt exposes the WSGI callable as a module-level variable named ``application``.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/\n\"\"\"\n\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"backend.settings\")\n\napplication = get_wsgi_application()\n", "path": "src/backend/wsgi.py"}, {"content": "\"\"\"\nDjango settings for backend project.\n\nGenerated by 'django-admin startproject' using Django 1.11.11.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.11/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.11/ref/settings/\n\"\"\"\n\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# Quick-start development settings - unsuitable for production\n# See 
https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_'\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = [\n 'localhost',\n '127.0.0.1',\n '0.0.0.0'\n]\n\n# Needed for webauthn (this is a setting in case the application runs behind a proxy)\nHOSTNAME = 'localhost'\nBASE_URL = 'http://localhost:8000'\n\n# Application definition\n\nINSTALLED_APPS = [\n 'cms.apps.CmsConfig',\n 'gvz_api.apps.GvzApiConfig',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.messages',\n 'django.contrib.sessions',\n 'django.contrib.staticfiles',\n 'compressor',\n 'compressor_toolkit',\n 'widget_tweaks',\n 'easy_thumbnails',\n 'filer',\n 'mptt',\n 'rules.apps.AutodiscoverRulesConfig',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'backend.urls'\nTHUMBNAIL_HIGH_RESOLUTION = True\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n 'backend.context_processors.region_slug_processor',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'backend.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.11/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.postgresql_psycopg2',\n 'NAME': 'integreat',\n 'USER': 'integreat',\n 'PASSWORD': 'password',\n 'HOST': 'localhost',\n 'PORT': '5432',\n }\n}\n\n# Directory for initial database contents\n\nFIXTURE_DIRS = (\n os.path.join(BASE_DIR, 'cms/fixtures/'),\n)\n\n# Authentication backends\n\nAUTHENTICATION_BACKENDS = (\n 'rules.permissions.ObjectPermissionBackend',\n 'django.contrib.auth.backends.ModelBackend', # this is default\n)\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.11/topics/i18n/\n\nLANGUAGES = (\n ('en-us', 'English'),\n ('de-de', 'Deutsch'),\n)\n\nLOCALE_PATHS = (\n os.path.join(BASE_DIR, 'locale'),\n)\n\nLANGUAGE_CODE = 'de-de'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.11/howto/static-files/\n\nSTATICFILES_DIRS = [\n os.path.join(BASE_DIR, \"../node_modules\"),\n]\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 
'cms/static/')\n\n# Login\nLOGIN_URL = '/login'\nLOGIN_REDIRECT_URL = '/'\nLOGOUT_REDIRECT_URL = '/login'\n\n# Miscellaneous\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\nCSRF_FAILURE_VIEW = 'cms.views.error_handler.csrf_failure'\n\nMEDIA_URL = '/media/'\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nFILER_CANONICAL_URL = 'media/'\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'handlers': {\n 'console': {\n 'class': 'logging.StreamHandler'\n },\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'level': 'WARN',\n 'propagate': True,\n },\n 'api': {\n 'handlers': ['console'],\n 'level': 'INFO',\n 'propagate': True,\n },\n 'cms': {\n 'handlers': ['console'],\n 'level': 'INFO',\n 'propagate': True,\n },\n 'rules': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': True,\n },\n }\n}\n\nSTATICFILES_FINDERS = (\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n 'compressor.finders.CompressorFinder',\n)\n\nCOMPRESS_CSS_FILTERS = [\n 'compressor.filters.css_default.CssAbsoluteFilter',\n 'compressor.filters.cssmin.CSSMinFilter',\n 'compressor.filters.template.TemplateFilter'\n]\nCOMPRESS_JS_FILTERS = [\n 'compressor.filters.jsmin.JSMinFilter',\n]\nCOMPRESS_PRECOMPILERS = (\n ('module', 'compressor_toolkit.precompilers.ES6Compiler'),\n ('css', 'compressor_toolkit.precompilers.SCSSCompiler'),\n)\nCOMPRESS_ENABLED = True\nCOMPRESS_OFFLINE = True\n\n# GVZ (Gemeindeverzeichnis) API URL\nGVZ_API_URL = \"http://gvz.integreat-app.de/api/\"\nGVZ_API_ENABLED = True\n", "path": "src/backend/settings.py"}]}
| 2,812 | 650 |
gh_patches_debug_12677
|
rasdani/github-patches
|
git_diff
|
deis__deis-659
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make EC2 AMIs for 0.6.0 release
</issue>
<code>
[start of controller/provider/ec2.py]
1 """
2 Deis cloud provider implementation for Amazon EC2.
3 """
4
5 from __future__ import unicode_literals
6
7 import json
8 import time
9
10 from boto import ec2
11 from boto.exception import EC2ResponseError
12
13 # from api.ssh import connect_ssh, exec_ssh
14 from deis import settings
15
16
17 # Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,
18 # and large docker images (e.g. buildstep) pre-installed
19 IMAGE_MAP = {
20 'ap-northeast-1': 'ami-59007158',
21 'ap-southeast-1': 'ami-4cb4e51e',
22 'ap-southeast-2': 'ami-6d5bc257',
23 'eu-west-1': 'ami-c3ef12b4',
24 'sa-east-1': 'ami-3b45e626',
25 'us-east-1': 'ami-d90408b0',
26 'us-west-1': 'ami-9c3906d9',
27 'us-west-2': 'ami-8c6a05bc',
28 }
29
30
31 def seed_flavors():
32 """Seed the database with default flavors for each EC2 region.
33
34 :rtype: list of dicts containing flavor data
35 """
36 flavors = []
37 for r in ('us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1',
38 'ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2',
39 'sa-east-1'):
40 flavors.append({'id': 'ec2-{}'.format(r),
41 'provider': 'ec2',
42 'params': json.dumps({
43 'region': r,
44 'image': IMAGE_MAP[r],
45 'zone': 'any',
46 'size': 'm1.medium'})})
47 return flavors
48
49
50 def build_layer(layer):
51 """
52 Build a layer.
53
54 :param layer: a dict containing formation, id, params, and creds info
55 """
56 region = layer['params'].get('region', 'us-east-1')
57 conn = _create_ec2_connection(layer['creds'], region)
58 # create a new sg and authorize all ports
59 # use iptables on the host to firewall ports
60 name = "{formation}-{id}".format(**layer)
61 sg = conn.create_security_group(name, 'Created by Deis')
62 # import a new keypair using the layer key material
63 conn.import_key_pair(name, layer['ssh_public_key'])
64 # loop until the sg is *actually* there
65 for i in xrange(10):
66 try:
67 sg.authorize(ip_protocol='tcp', from_port=1, to_port=65535,
68 cidr_ip='0.0.0.0/0')
69 break
70 except EC2ResponseError:
71 if i < 10:
72 time.sleep(1.5)
73 continue
74 else:
75 raise RuntimeError('Failed to authorize security group')
76
77
78 def destroy_layer(layer):
79 """
80 Destroy a layer.
81
82 :param layer: a dict containing formation, id, params, and creds info
83 """
84 region = layer['params'].get('region', 'us-east-1')
85 name = "{formation}-{id}".format(**layer)
86 conn = _create_ec2_connection(layer['creds'], region)
87 conn.delete_key_pair(name)
88 # there's an ec2 race condition on instances terminating
89 # successfully but still holding a lock on the security group
90 for i in range(5):
91 # let's take a nap
92 time.sleep(i ** 1.25) # 1, 2.4, 3.9, 5.6, 7.4
93 try:
94 conn.delete_security_group(name)
95 return
96 except EC2ResponseError as err:
97 if err.code == 'InvalidGroup.NotFound':
98 return
99 elif err.code in ('InvalidGroup.InUse',
100 'DependencyViolation') and i < 4:
101 continue # retry
102 else:
103 raise
104
105
106 def build_node(node):
107 """
108 Build a node.
109
110 :param node: a dict containing formation, layer, params, and creds info.
111 :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)
112 """
113 params, creds = node['params'], node['creds']
114 region = params.setdefault('region', 'us-east-1')
115 conn = _create_ec2_connection(creds, region)
116 name = "{formation}-{layer}".format(**node)
117 params['key_name'] = name
118 sg = conn.get_all_security_groups(name)[0]
119 params.setdefault('security_groups', []).append(sg.name)
120 image_id = params.get(
121 'image', getattr(settings, 'IMAGE_MAP', IMAGE_MAP)[region])
122 images = conn.get_all_images([image_id])
123 if len(images) != 1:
124 raise LookupError('Could not find AMI: %s' % image_id)
125 image = images[0]
126 kwargs = _prepare_run_kwargs(params)
127 reservation = image.run(**kwargs)
128 instances = reservation.instances
129 boto = instances[0]
130 # sleep before tagging
131 time.sleep(10)
132 boto.update()
133 boto.add_tag('Name', node['id'])
134 # loop until running
135 while(True):
136 time.sleep(2)
137 boto.update()
138 if boto.state == 'running':
139 break
140 # prepare return values
141 provider_id = boto.id
142 fqdn = boto.public_dns_name
143 metadata = _format_metadata(boto)
144 return provider_id, fqdn, metadata
145
146
147 def destroy_node(node):
148 """
149 Destroy a node.
150
151 :param node: a dict containing a node's provider_id, params, and creds
152 """
153 provider_id = node['provider_id']
154 region = node['params'].get('region', 'us-east-1')
155 conn = _create_ec2_connection(node['creds'], region)
156 if provider_id:
157 try:
158 conn.terminate_instances([provider_id])
159 i = conn.get_all_instances([provider_id])[0].instances[0]
160 while(True):
161 time.sleep(2)
162 i.update()
163 if i.state == "terminated":
164 break
165 except EC2ResponseError as e:
166 if e.code not in ('InvalidInstanceID.NotFound',):
167 raise
168
169
170 def _create_ec2_connection(creds, region):
171 """
172 Connect to an EC2 region with the given credentials.
173
174 :param creds: a dict containing an EC2 access_key and secret_key
175 :region: the name of an EC2 region, such as "us-west-2"
176 :rtype: a connected :class:`~boto.ec2.connection.EC2Connection`
177 :raises EnvironmentError: if no credentials are provided
178 """
179 if not creds:
180 raise EnvironmentError('No credentials provided')
181 return ec2.connect_to_region(region,
182 aws_access_key_id=creds['access_key'],
183 aws_secret_access_key=creds['secret_key'])
184
185
186 def _prepare_run_kwargs(params):
187 # start with sane defaults
188 kwargs = {
189 'min_count': 1, 'max_count': 1,
190 'user_data': None, 'addressing_type': None,
191 'instance_type': None, 'placement': None,
192 'kernel_id': None, 'ramdisk_id': None,
193 'monitoring_enabled': False, 'subnet_id': None,
194 'block_device_map': None,
195 }
196 # convert zone "any" to NoneType
197 requested_zone = params.get('zone')
198 if requested_zone and requested_zone.lower() == 'any':
199 requested_zone = None
200 # lookup kwargs from params
201 param_kwargs = {
202 'instance_type': params.get('size', 'm1.medium'),
203 'security_groups': params['security_groups'],
204 'placement': requested_zone,
205 'key_name': params['key_name'],
206 'kernel_id': params.get('kernel', None),
207 }
208 # add user_data if provided in params
209 user_data = params.get('user_data')
210 if user_data:
211 kwargs.update({'user_data': user_data})
212 # params override defaults
213 kwargs.update(param_kwargs)
214 return kwargs
215
216
217 def _format_metadata(boto):
218 return {
219 'architecture': boto.architecture,
220 'block_device_mapping': {
221 k: v.volume_id for k, v in boto.block_device_mapping.items()
222 },
223 'client_token': boto.client_token,
224 'dns_name': boto.dns_name,
225 'ebs_optimized': boto.ebs_optimized,
226 'eventsSet': boto.eventsSet,
227 'group_name': boto.group_name,
228 'groups': [g.id for g in boto.groups],
229 'hypervisor': boto.hypervisor,
230 'id': boto.id,
231 'image_id': boto.image_id,
232 'instance_profile': boto.instance_profile,
233 'instance_type': boto.instance_type,
234 'interfaces': list(boto.interfaces),
235 'ip_address': boto.ip_address,
236 'kernel': boto.kernel,
237 'key_name': boto.key_name,
238 'launch_time': boto.launch_time,
239 'monitored': boto.monitored,
240 'monitoring_state': boto.monitoring_state,
241 'persistent': boto.persistent,
242 'placement': boto.placement,
243 'placement_group': boto.placement_group,
244 'placement_tenancy': boto.placement_tenancy,
245 'previous_state': boto.previous_state,
246 'private_dns_name': boto.private_dns_name,
247 'private_ip_address': boto.private_ip_address,
248 'public_dns_name': boto.public_dns_name,
249 'ramdisk': boto.ramdisk,
250 'region': boto.region.name,
251 'root_device_name': boto.root_device_name,
252 'root_device_type': boto.root_device_type,
253 'spot_instance_request_id': boto.spot_instance_request_id,
254 'state': boto.state,
255 'state_code': boto.state_code,
256 'state_reason': boto.state_reason,
257 'subnet_id': boto.subnet_id,
258 'tags': dict(boto.tags),
259 'virtualization_type': boto.virtualization_type,
260 'vpc_id': boto.vpc_id,
261 }
262
[end of controller/provider/ec2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/controller/provider/ec2.py b/controller/provider/ec2.py
--- a/controller/provider/ec2.py
+++ b/controller/provider/ec2.py
@@ -17,14 +17,14 @@
# Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,
# and large docker images (e.g. buildstep) pre-installed
IMAGE_MAP = {
- 'ap-northeast-1': 'ami-59007158',
- 'ap-southeast-1': 'ami-4cb4e51e',
- 'ap-southeast-2': 'ami-6d5bc257',
- 'eu-west-1': 'ami-c3ef12b4',
- 'sa-east-1': 'ami-3b45e626',
- 'us-east-1': 'ami-d90408b0',
- 'us-west-1': 'ami-9c3906d9',
- 'us-west-2': 'ami-8c6a05bc',
+ 'ap-northeast-1': 'ami-ae85ec9e',
+ 'ap-southeast-1': 'ami-904919c2',
+ 'ap-southeast-2': 'ami-a9db4393',
+ 'eu-west-1': 'ami-01eb1576',
+ 'sa-east-1': 'ami-d3cc6ece',
+ 'us-east-1': 'ami-51382c38',
+ 'us-west-1': 'ami-ec0d33a9',
+ 'us-west-2': 'ami-a085ec90',
}
|
{"golden_diff": "diff --git a/controller/provider/ec2.py b/controller/provider/ec2.py\n--- a/controller/provider/ec2.py\n+++ b/controller/provider/ec2.py\n@@ -17,14 +17,14 @@\n # Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,\n # and large docker images (e.g. buildstep) pre-installed\n IMAGE_MAP = {\n- 'ap-northeast-1': 'ami-59007158',\n- 'ap-southeast-1': 'ami-4cb4e51e',\n- 'ap-southeast-2': 'ami-6d5bc257',\n- 'eu-west-1': 'ami-c3ef12b4',\n- 'sa-east-1': 'ami-3b45e626',\n- 'us-east-1': 'ami-d90408b0',\n- 'us-west-1': 'ami-9c3906d9',\n- 'us-west-2': 'ami-8c6a05bc',\n+ 'ap-northeast-1': 'ami-ae85ec9e',\n+ 'ap-southeast-1': 'ami-904919c2',\n+ 'ap-southeast-2': 'ami-a9db4393',\n+ 'eu-west-1': 'ami-01eb1576',\n+ 'sa-east-1': 'ami-d3cc6ece',\n+ 'us-east-1': 'ami-51382c38',\n+ 'us-west-1': 'ami-ec0d33a9',\n+ 'us-west-2': 'ami-a085ec90',\n }\n", "issue": "Make EC2 AMIs for 0.6.0 release\n\n", "before_files": [{"content": "\"\"\"\nDeis cloud provider implementation for Amazon EC2.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport json\nimport time\n\nfrom boto import ec2\nfrom boto.exception import EC2ResponseError\n\n# from api.ssh import connect_ssh, exec_ssh\nfrom deis import settings\n\n\n# Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,\n# and large docker images (e.g. buildstep) pre-installed\nIMAGE_MAP = {\n 'ap-northeast-1': 'ami-59007158',\n 'ap-southeast-1': 'ami-4cb4e51e',\n 'ap-southeast-2': 'ami-6d5bc257',\n 'eu-west-1': 'ami-c3ef12b4',\n 'sa-east-1': 'ami-3b45e626',\n 'us-east-1': 'ami-d90408b0',\n 'us-west-1': 'ami-9c3906d9',\n 'us-west-2': 'ami-8c6a05bc',\n}\n\n\ndef seed_flavors():\n \"\"\"Seed the database with default flavors for each EC2 region.\n\n :rtype: list of dicts containing flavor data\n \"\"\"\n flavors = []\n for r in ('us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1',\n 'ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2',\n 'sa-east-1'):\n flavors.append({'id': 'ec2-{}'.format(r),\n 'provider': 'ec2',\n 'params': json.dumps({\n 'region': r,\n 'image': IMAGE_MAP[r],\n 'zone': 'any',\n 'size': 'm1.medium'})})\n return flavors\n\n\ndef build_layer(layer):\n \"\"\"\n Build a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n region = layer['params'].get('region', 'us-east-1')\n conn = _create_ec2_connection(layer['creds'], region)\n # create a new sg and authorize all ports\n # use iptables on the host to firewall ports\n name = \"{formation}-{id}\".format(**layer)\n sg = conn.create_security_group(name, 'Created by Deis')\n # import a new keypair using the layer key material\n conn.import_key_pair(name, layer['ssh_public_key'])\n # loop until the sg is *actually* there\n for i in xrange(10):\n try:\n sg.authorize(ip_protocol='tcp', from_port=1, to_port=65535,\n cidr_ip='0.0.0.0/0')\n break\n except EC2ResponseError:\n if i < 10:\n time.sleep(1.5)\n continue\n else:\n raise RuntimeError('Failed to authorize security group')\n\n\ndef destroy_layer(layer):\n \"\"\"\n Destroy a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n region = layer['params'].get('region', 'us-east-1')\n name = \"{formation}-{id}\".format(**layer)\n conn = _create_ec2_connection(layer['creds'], region)\n conn.delete_key_pair(name)\n # there's an ec2 race condition on instances terminating\n # successfully but still holding a lock on the security group\n for i in range(5):\n # let's take a nap\n time.sleep(i ** 1.25) # 1, 2.4, 3.9, 5.6, 7.4\n try:\n conn.delete_security_group(name)\n 
return\n except EC2ResponseError as err:\n if err.code == 'InvalidGroup.NotFound':\n return\n elif err.code in ('InvalidGroup.InUse',\n 'DependencyViolation') and i < 4:\n continue # retry\n else:\n raise\n\n\ndef build_node(node):\n \"\"\"\n Build a node.\n\n :param node: a dict containing formation, layer, params, and creds info.\n :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)\n \"\"\"\n params, creds = node['params'], node['creds']\n region = params.setdefault('region', 'us-east-1')\n conn = _create_ec2_connection(creds, region)\n name = \"{formation}-{layer}\".format(**node)\n params['key_name'] = name\n sg = conn.get_all_security_groups(name)[0]\n params.setdefault('security_groups', []).append(sg.name)\n image_id = params.get(\n 'image', getattr(settings, 'IMAGE_MAP', IMAGE_MAP)[region])\n images = conn.get_all_images([image_id])\n if len(images) != 1:\n raise LookupError('Could not find AMI: %s' % image_id)\n image = images[0]\n kwargs = _prepare_run_kwargs(params)\n reservation = image.run(**kwargs)\n instances = reservation.instances\n boto = instances[0]\n # sleep before tagging\n time.sleep(10)\n boto.update()\n boto.add_tag('Name', node['id'])\n # loop until running\n while(True):\n time.sleep(2)\n boto.update()\n if boto.state == 'running':\n break\n # prepare return values\n provider_id = boto.id\n fqdn = boto.public_dns_name\n metadata = _format_metadata(boto)\n return provider_id, fqdn, metadata\n\n\ndef destroy_node(node):\n \"\"\"\n Destroy a node.\n\n :param node: a dict containing a node's provider_id, params, and creds\n \"\"\"\n provider_id = node['provider_id']\n region = node['params'].get('region', 'us-east-1')\n conn = _create_ec2_connection(node['creds'], region)\n if provider_id:\n try:\n conn.terminate_instances([provider_id])\n i = conn.get_all_instances([provider_id])[0].instances[0]\n while(True):\n time.sleep(2)\n i.update()\n if i.state == \"terminated\":\n break\n except EC2ResponseError as e:\n if e.code not in ('InvalidInstanceID.NotFound',):\n raise\n\n\ndef _create_ec2_connection(creds, region):\n \"\"\"\n Connect to an EC2 region with the given credentials.\n\n :param creds: a dict containing an EC2 access_key and secret_key\n :region: the name of an EC2 region, such as \"us-west-2\"\n :rtype: a connected :class:`~boto.ec2.connection.EC2Connection`\n :raises EnvironmentError: if no credentials are provided\n \"\"\"\n if not creds:\n raise EnvironmentError('No credentials provided')\n return ec2.connect_to_region(region,\n aws_access_key_id=creds['access_key'],\n aws_secret_access_key=creds['secret_key'])\n\n\ndef _prepare_run_kwargs(params):\n # start with sane defaults\n kwargs = {\n 'min_count': 1, 'max_count': 1,\n 'user_data': None, 'addressing_type': None,\n 'instance_type': None, 'placement': None,\n 'kernel_id': None, 'ramdisk_id': None,\n 'monitoring_enabled': False, 'subnet_id': None,\n 'block_device_map': None,\n }\n # convert zone \"any\" to NoneType\n requested_zone = params.get('zone')\n if requested_zone and requested_zone.lower() == 'any':\n requested_zone = None\n # lookup kwargs from params\n param_kwargs = {\n 'instance_type': params.get('size', 'm1.medium'),\n 'security_groups': params['security_groups'],\n 'placement': requested_zone,\n 'key_name': params['key_name'],\n 'kernel_id': params.get('kernel', None),\n }\n # add user_data if provided in params\n user_data = params.get('user_data')\n if user_data:\n kwargs.update({'user_data': user_data})\n # params override defaults\n 
kwargs.update(param_kwargs)\n return kwargs\n\n\ndef _format_metadata(boto):\n return {\n 'architecture': boto.architecture,\n 'block_device_mapping': {\n k: v.volume_id for k, v in boto.block_device_mapping.items()\n },\n 'client_token': boto.client_token,\n 'dns_name': boto.dns_name,\n 'ebs_optimized': boto.ebs_optimized,\n 'eventsSet': boto.eventsSet,\n 'group_name': boto.group_name,\n 'groups': [g.id for g in boto.groups],\n 'hypervisor': boto.hypervisor,\n 'id': boto.id,\n 'image_id': boto.image_id,\n 'instance_profile': boto.instance_profile,\n 'instance_type': boto.instance_type,\n 'interfaces': list(boto.interfaces),\n 'ip_address': boto.ip_address,\n 'kernel': boto.kernel,\n 'key_name': boto.key_name,\n 'launch_time': boto.launch_time,\n 'monitored': boto.monitored,\n 'monitoring_state': boto.monitoring_state,\n 'persistent': boto.persistent,\n 'placement': boto.placement,\n 'placement_group': boto.placement_group,\n 'placement_tenancy': boto.placement_tenancy,\n 'previous_state': boto.previous_state,\n 'private_dns_name': boto.private_dns_name,\n 'private_ip_address': boto.private_ip_address,\n 'public_dns_name': boto.public_dns_name,\n 'ramdisk': boto.ramdisk,\n 'region': boto.region.name,\n 'root_device_name': boto.root_device_name,\n 'root_device_type': boto.root_device_type,\n 'spot_instance_request_id': boto.spot_instance_request_id,\n 'state': boto.state,\n 'state_code': boto.state_code,\n 'state_reason': boto.state_reason,\n 'subnet_id': boto.subnet_id,\n 'tags': dict(boto.tags),\n 'virtualization_type': boto.virtualization_type,\n 'vpc_id': boto.vpc_id,\n }\n", "path": "controller/provider/ec2.py"}]}
| 3,442 | 403 |
gh_patches_debug_27610
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-4516
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow whitespace and comments in db key file
It would be useful if the db key file were parsed generously, tolerating additional whitespace.
Also, in order to help guide an admin while rotating keys, we should allow adding comments to the file.
</issue>
<code>
[start of pulpcore/app/models/fields.py]
1 import json
2 import logging
3 import os
4 from gettext import gettext as _
5 from functools import lru_cache
6
7 from cryptography.fernet import Fernet, MultiFernet
8 from django.conf import settings
9 from django.core.exceptions import ImproperlyConfigured
10 from django.db.models import Lookup, FileField, JSONField
11 from django.db.models.fields import Field, TextField
12 from django.utils.encoding import force_bytes, force_str
13
14
15 from pulpcore.app.files import TemporaryDownloadedFile
16 from pulpcore.app.loggers import deprecation_logger
17
18 _logger = logging.getLogger(__name__)
19
20
21 @lru_cache(maxsize=1)
22 def _fernet():
23 # Cache the enryption keys once per application.
24 _logger.debug(f"Loading encryption key from {settings.DB_ENCRYPTION_KEY}")
25 with open(settings.DB_ENCRYPTION_KEY, "rb") as key_file:
26 return MultiFernet([Fernet(key) for key in key_file.readlines()])
27
28
29 class ArtifactFileField(FileField):
30 """
31 A custom FileField that always saves files to location specified by 'upload_to'.
32
33 The field can be set as either a path to the file or File object. In both cases the file is
34 moved or copied to the location specified by 'upload_to' field parameter.
35 """
36
37 def pre_save(self, model_instance, add):
38 """
39 Return FieldFile object which specifies path to the file to be stored in database.
40
41 There are two ways to get artifact into Pulp: sync and upload.
42
43 The upload case
44 - file is not stored yet, aka file._committed = False
45 - nothing to do here in addition to Django pre_save actions
46
47 The sync case:
48 - file is already stored in a temporary location, aka file._committed = True
49 - it needs to be moved into Pulp artifact storage if it's not there
50 - TemporaryDownloadedFile takes care of correctly set storage path
51 - only then Django pre_save actions should be performed
52
53 Args:
54 model_instance (`class::pulpcore.plugin.Artifact`): The instance this field belongs to.
55 add (bool): Whether the instance is being saved to the database for the first time.
56 Ignored by Django pre_save method.
57
58 Returns:
59 FieldFile object just before saving.
60
61 """
62 file = model_instance.file
63 artifact_storage_path = self.upload_to(model_instance, "")
64
65 already_in_place = file.name in [
66 artifact_storage_path,
67 os.path.join(settings.MEDIA_ROOT, artifact_storage_path),
68 ]
69 is_in_artifact_storage = file.name.startswith(os.path.join(settings.MEDIA_ROOT, "artifact"))
70
71 if not already_in_place and is_in_artifact_storage:
72 raise ValueError(
73 _(
74 "The file referenced by the Artifact is already present in "
75 "Artifact storage. Files must be stored outside this location "
76 "prior to Artifact creation."
77 )
78 )
79
80 move = file._committed and file.name != artifact_storage_path
81 if move:
82 if not already_in_place:
83 file._file = TemporaryDownloadedFile(open(file.name, "rb"))
84 file._committed = False
85
86 return super().pre_save(model_instance, add)
87
88
89 class EncryptedTextField(TextField):
90 """A field mixin that encrypts text using settings.DB_ENCRYPTION_KEY."""
91
92 def __init__(self, *args, **kwargs):
93 if kwargs.get("primary_key"):
94 raise ImproperlyConfigured("EncryptedTextField does not support primary_key=True.")
95 if kwargs.get("unique"):
96 raise ImproperlyConfigured("EncryptedTextField does not support unique=True.")
97 if kwargs.get("db_index"):
98 raise ImproperlyConfigured("EncryptedTextField does not support db_index=True.")
99 super().__init__(*args, **kwargs)
100
101 def get_prep_value(self, value):
102 if value is not None:
103 assert isinstance(value, str)
104 value = force_str(_fernet().encrypt(force_bytes(value)))
105 return super().get_prep_value(value)
106
107 def from_db_value(self, value, expression, connection):
108 if value is not None:
109 value = force_str(_fernet().decrypt(force_bytes(value)))
110 return value
111
112
113 class EncryptedJSONField(JSONField):
114 """A Field mixin that encrypts the JSON text using settings.DP_ENCRYPTION_KEY."""
115
116 def __init__(self, *args, **kwargs):
117 if kwargs.get("primary_key"):
118 raise ImproperlyConfigured("EncryptedJSONField does not support primary_key=True.")
119 if kwargs.get("unique"):
120 raise ImproperlyConfigured("EncryptedJSONField does not support unique=True.")
121 if kwargs.get("db_index"):
122 raise ImproperlyConfigured("EncryptedJSONField does not support db_index=True.")
123 super().__init__(*args, **kwargs)
124
125 def encrypt(self, value):
126 if isinstance(value, dict):
127 return {k: self.encrypt(v) for k, v in value.items()}
128 elif isinstance(value, (list, tuple, set)):
129 return [self.encrypt(v) for v in value]
130
131 return force_str(_fernet().encrypt(force_bytes(json.dumps(value, cls=self.encoder))))
132
133 def decrypt(self, value):
134 if isinstance(value, dict):
135 return {k: self.decrypt(v) for k, v in value.items()}
136 elif isinstance(value, (list, tuple, set)):
137 return [self.decrypt(v) for v in value]
138
139 dec_value = force_str(_fernet().decrypt(force_bytes(value)))
140 try:
141 return json.loads(dec_value, cls=self.decoder)
142 except json.JSONDecodeError:
143 deprecation_logger.info(
144 "Failed to decode json in an EncryptedJSONField. Falling back to eval. "
145 "Please run pulpcore-manager rotate-db-key to repair."
146 "This is deprecated and will be removed in pulpcore 3.40."
147 )
148 return eval(dec_value)
149
150 def get_prep_value(self, value):
151 if value is not None:
152 if hasattr(value, "as_sql"):
153 return value
154 value = self.encrypt(value)
155 return super().get_prep_value(value)
156
157 def from_db_value(self, value, expression, connection):
158 if value is not None:
159 value = self.decrypt(super().from_db_value(value, expression, connection))
160 return value
161
162
163 @Field.register_lookup
164 class NotEqualLookup(Lookup):
165 # this is copied from https://docs.djangoproject.com/en/3.2/howto/custom-lookups/
166 lookup_name = "ne"
167
168 def as_sql(self, compiler, connection):
169 lhs, lhs_params = self.process_lhs(compiler, connection)
170 rhs, rhs_params = self.process_rhs(compiler, connection)
171 params = lhs_params + rhs_params
172 return "%s <> %s" % (lhs, rhs), params
173
[end of pulpcore/app/models/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pulpcore/app/models/fields.py b/pulpcore/app/models/fields.py
--- a/pulpcore/app/models/fields.py
+++ b/pulpcore/app/models/fields.py
@@ -1,17 +1,15 @@
import json
import logging
import os
-from gettext import gettext as _
from functools import lru_cache
+from gettext import gettext as _
from cryptography.fernet import Fernet, MultiFernet
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
-from django.db.models import Lookup, FileField, JSONField
+from django.db.models import FileField, JSONField, Lookup
from django.db.models.fields import Field, TextField
from django.utils.encoding import force_bytes, force_str
-
-
from pulpcore.app.files import TemporaryDownloadedFile
from pulpcore.app.loggers import deprecation_logger
@@ -23,7 +21,13 @@
# Cache the enryption keys once per application.
_logger.debug(f"Loading encryption key from {settings.DB_ENCRYPTION_KEY}")
with open(settings.DB_ENCRYPTION_KEY, "rb") as key_file:
- return MultiFernet([Fernet(key) for key in key_file.readlines()])
+ return MultiFernet(
+ [
+ Fernet(key.strip())
+ for key in key_file.readlines()
+ if not key.startswith(b"#") and key.strip() != b""
+ ]
+ )
class ArtifactFileField(FileField):
|
{"golden_diff": "diff --git a/pulpcore/app/models/fields.py b/pulpcore/app/models/fields.py\n--- a/pulpcore/app/models/fields.py\n+++ b/pulpcore/app/models/fields.py\n@@ -1,17 +1,15 @@\n import json\n import logging\n import os\n-from gettext import gettext as _\n from functools import lru_cache\n+from gettext import gettext as _\n \n from cryptography.fernet import Fernet, MultiFernet\n from django.conf import settings\n from django.core.exceptions import ImproperlyConfigured\n-from django.db.models import Lookup, FileField, JSONField\n+from django.db.models import FileField, JSONField, Lookup\n from django.db.models.fields import Field, TextField\n from django.utils.encoding import force_bytes, force_str\n-\n-\n from pulpcore.app.files import TemporaryDownloadedFile\n from pulpcore.app.loggers import deprecation_logger\n \n@@ -23,7 +21,13 @@\n # Cache the enryption keys once per application.\n _logger.debug(f\"Loading encryption key from {settings.DB_ENCRYPTION_KEY}\")\n with open(settings.DB_ENCRYPTION_KEY, \"rb\") as key_file:\n- return MultiFernet([Fernet(key) for key in key_file.readlines()])\n+ return MultiFernet(\n+ [\n+ Fernet(key.strip())\n+ for key in key_file.readlines()\n+ if not key.startswith(b\"#\") and key.strip() != b\"\"\n+ ]\n+ )\n \n \n class ArtifactFileField(FileField):\n", "issue": "Allow whitespace and comments in db key file\nIt would be useful if the db key file parsed generously around additional whitespace.\r\n\r\nAlso in order to help guide an admin while rotating keys, we should allow to add comments.\n", "before_files": [{"content": "import json\nimport logging\nimport os\nfrom gettext import gettext as _\nfrom functools import lru_cache\n\nfrom cryptography.fernet import Fernet, MultiFernet\nfrom django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db.models import Lookup, FileField, JSONField\nfrom django.db.models.fields import Field, TextField\nfrom django.utils.encoding import force_bytes, force_str\n\n\nfrom pulpcore.app.files import TemporaryDownloadedFile\nfrom pulpcore.app.loggers import deprecation_logger\n\n_logger = logging.getLogger(__name__)\n\n\n@lru_cache(maxsize=1)\ndef _fernet():\n # Cache the enryption keys once per application.\n _logger.debug(f\"Loading encryption key from {settings.DB_ENCRYPTION_KEY}\")\n with open(settings.DB_ENCRYPTION_KEY, \"rb\") as key_file:\n return MultiFernet([Fernet(key) for key in key_file.readlines()])\n\n\nclass ArtifactFileField(FileField):\n \"\"\"\n A custom FileField that always saves files to location specified by 'upload_to'.\n\n The field can be set as either a path to the file or File object. 
In both cases the file is\n moved or copied to the location specified by 'upload_to' field parameter.\n \"\"\"\n\n def pre_save(self, model_instance, add):\n \"\"\"\n Return FieldFile object which specifies path to the file to be stored in database.\n\n There are two ways to get artifact into Pulp: sync and upload.\n\n The upload case\n - file is not stored yet, aka file._committed = False\n - nothing to do here in addition to Django pre_save actions\n\n The sync case:\n - file is already stored in a temporary location, aka file._committed = True\n - it needs to be moved into Pulp artifact storage if it's not there\n - TemporaryDownloadedFile takes care of correctly set storage path\n - only then Django pre_save actions should be performed\n\n Args:\n model_instance (`class::pulpcore.plugin.Artifact`): The instance this field belongs to.\n add (bool): Whether the instance is being saved to the database for the first time.\n Ignored by Django pre_save method.\n\n Returns:\n FieldFile object just before saving.\n\n \"\"\"\n file = model_instance.file\n artifact_storage_path = self.upload_to(model_instance, \"\")\n\n already_in_place = file.name in [\n artifact_storage_path,\n os.path.join(settings.MEDIA_ROOT, artifact_storage_path),\n ]\n is_in_artifact_storage = file.name.startswith(os.path.join(settings.MEDIA_ROOT, \"artifact\"))\n\n if not already_in_place and is_in_artifact_storage:\n raise ValueError(\n _(\n \"The file referenced by the Artifact is already present in \"\n \"Artifact storage. Files must be stored outside this location \"\n \"prior to Artifact creation.\"\n )\n )\n\n move = file._committed and file.name != artifact_storage_path\n if move:\n if not already_in_place:\n file._file = TemporaryDownloadedFile(open(file.name, \"rb\"))\n file._committed = False\n\n return super().pre_save(model_instance, add)\n\n\nclass EncryptedTextField(TextField):\n \"\"\"A field mixin that encrypts text using settings.DB_ENCRYPTION_KEY.\"\"\"\n\n def __init__(self, *args, **kwargs):\n if kwargs.get(\"primary_key\"):\n raise ImproperlyConfigured(\"EncryptedTextField does not support primary_key=True.\")\n if kwargs.get(\"unique\"):\n raise ImproperlyConfigured(\"EncryptedTextField does not support unique=True.\")\n if kwargs.get(\"db_index\"):\n raise ImproperlyConfigured(\"EncryptedTextField does not support db_index=True.\")\n super().__init__(*args, **kwargs)\n\n def get_prep_value(self, value):\n if value is not None:\n assert isinstance(value, str)\n value = force_str(_fernet().encrypt(force_bytes(value)))\n return super().get_prep_value(value)\n\n def from_db_value(self, value, expression, connection):\n if value is not None:\n value = force_str(_fernet().decrypt(force_bytes(value)))\n return value\n\n\nclass EncryptedJSONField(JSONField):\n \"\"\"A Field mixin that encrypts the JSON text using settings.DP_ENCRYPTION_KEY.\"\"\"\n\n def __init__(self, *args, **kwargs):\n if kwargs.get(\"primary_key\"):\n raise ImproperlyConfigured(\"EncryptedJSONField does not support primary_key=True.\")\n if kwargs.get(\"unique\"):\n raise ImproperlyConfigured(\"EncryptedJSONField does not support unique=True.\")\n if kwargs.get(\"db_index\"):\n raise ImproperlyConfigured(\"EncryptedJSONField does not support db_index=True.\")\n super().__init__(*args, **kwargs)\n\n def encrypt(self, value):\n if isinstance(value, dict):\n return {k: self.encrypt(v) for k, v in value.items()}\n elif isinstance(value, (list, tuple, set)):\n return [self.encrypt(v) for v in value]\n\n return 
force_str(_fernet().encrypt(force_bytes(json.dumps(value, cls=self.encoder))))\n\n def decrypt(self, value):\n if isinstance(value, dict):\n return {k: self.decrypt(v) for k, v in value.items()}\n elif isinstance(value, (list, tuple, set)):\n return [self.decrypt(v) for v in value]\n\n dec_value = force_str(_fernet().decrypt(force_bytes(value)))\n try:\n return json.loads(dec_value, cls=self.decoder)\n except json.JSONDecodeError:\n deprecation_logger.info(\n \"Failed to decode json in an EncryptedJSONField. Falling back to eval. \"\n \"Please run pulpcore-manager rotate-db-key to repair.\"\n \"This is deprecated and will be removed in pulpcore 3.40.\"\n )\n return eval(dec_value)\n\n def get_prep_value(self, value):\n if value is not None:\n if hasattr(value, \"as_sql\"):\n return value\n value = self.encrypt(value)\n return super().get_prep_value(value)\n\n def from_db_value(self, value, expression, connection):\n if value is not None:\n value = self.decrypt(super().from_db_value(value, expression, connection))\n return value\n\n\[email protected]_lookup\nclass NotEqualLookup(Lookup):\n # this is copied from https://docs.djangoproject.com/en/3.2/howto/custom-lookups/\n lookup_name = \"ne\"\n\n def as_sql(self, compiler, connection):\n lhs, lhs_params = self.process_lhs(compiler, connection)\n rhs, rhs_params = self.process_rhs(compiler, connection)\n params = lhs_params + rhs_params\n return \"%s <> %s\" % (lhs, rhs), params\n", "path": "pulpcore/app/models/fields.py"}]}
| 2,443 | 321 |
gh_patches_debug_14372
|
rasdani/github-patches
|
git_diff
|
googleapis__python-bigquery-52
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BigQuery: Document the use of the timeout parameter in samples
After adding the new `timeout` parameter to various public methods (#9987), we should demonstrate its usage in the code samples.
Users should be aware of this new feature, and should probably use it by default to avoid sporadic issues where a method "gets stuck" at the transport layer.
</issue>
<code>
[start of samples/create_dataset.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 def create_dataset(dataset_id):
17
18 # [START bigquery_create_dataset]
19 from google.cloud import bigquery
20
21 # Construct a BigQuery client object.
22 client = bigquery.Client()
23
24 # TODO(developer): Set dataset_id to the ID of the dataset to create.
25 # dataset_id = "{}.your_dataset".format(client.project)
26
27 # Construct a full Dataset object to send to the API.
28 dataset = bigquery.Dataset(dataset_id)
29
30 # TODO(developer): Specify the geographic location where the dataset should reside.
31 dataset.location = "US"
32
33 # Send the dataset to the API for creation.
34 # Raises google.api_core.exceptions.Conflict if the Dataset already
35 # exists within the project.
36 dataset = client.create_dataset(dataset) # Make an API request.
37 print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
38 # [END bigquery_create_dataset]
39
[end of samples/create_dataset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/samples/create_dataset.py b/samples/create_dataset.py
--- a/samples/create_dataset.py
+++ b/samples/create_dataset.py
@@ -30,9 +30,9 @@
# TODO(developer): Specify the geographic location where the dataset should reside.
dataset.location = "US"
- # Send the dataset to the API for creation.
+ # Send the dataset to the API for creation, with an explicit timeout.
# Raises google.api_core.exceptions.Conflict if the Dataset already
# exists within the project.
- dataset = client.create_dataset(dataset) # Make an API request.
+ dataset = client.create_dataset(dataset, timeout=30) # Make an API request.
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
# [END bigquery_create_dataset]
|
{"golden_diff": "diff --git a/samples/create_dataset.py b/samples/create_dataset.py\n--- a/samples/create_dataset.py\n+++ b/samples/create_dataset.py\n@@ -30,9 +30,9 @@\n # TODO(developer): Specify the geographic location where the dataset should reside.\n dataset.location = \"US\"\n \n- # Send the dataset to the API for creation.\n+ # Send the dataset to the API for creation, with an explicit timeout.\n # Raises google.api_core.exceptions.Conflict if the Dataset already\n # exists within the project.\n- dataset = client.create_dataset(dataset) # Make an API request.\n+ dataset = client.create_dataset(dataset, timeout=30) # Make an API request.\n print(\"Created dataset {}.{}\".format(client.project, dataset.dataset_id))\n # [END bigquery_create_dataset]\n", "issue": "BigQuery: Document the use of the timeout parameter in samples\nAfter adding the new `timeout` parameter to various public methods (#9987), we should demonstrate its usage in the code samples.\r\n\r\nUsers should be aware of this new feature, and should probably use it by default to avoid sporadic weird issues related to a method \"getting stuck\" at the transport layer.\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\ndef create_dataset(dataset_id):\n\n # [START bigquery_create_dataset]\n from google.cloud import bigquery\n\n # Construct a BigQuery client object.\n client = bigquery.Client()\n\n # TODO(developer): Set dataset_id to the ID of the dataset to create.\n # dataset_id = \"{}.your_dataset\".format(client.project)\n\n # Construct a full Dataset object to send to the API.\n dataset = bigquery.Dataset(dataset_id)\n\n # TODO(developer): Specify the geographic location where the dataset should reside.\n dataset.location = \"US\"\n\n # Send the dataset to the API for creation.\n # Raises google.api_core.exceptions.Conflict if the Dataset already\n # exists within the project.\n dataset = client.create_dataset(dataset) # Make an API request.\n print(\"Created dataset {}.{}\".format(client.project, dataset.dataset_id))\n # [END bigquery_create_dataset]\n", "path": "samples/create_dataset.py"}]}
| 1,004 | 180 |
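A hedged sketch of the calling pattern the patch above documents; the client construction and the 30-second value are illustrative assumptions rather than part of the sample file:

```python
from google.cloud import bigquery

client = bigquery.Client()

dataset = bigquery.Dataset("your-project.your_dataset")
dataset.location = "US"

# An explicit timeout (in seconds) bounds how long the underlying HTTP request
# may block, which is the behaviour the issue asks the samples to demonstrate.
dataset = client.create_dataset(dataset, timeout=30)
```

The issue notes that the `timeout` parameter was added to various public methods, so the same keyword-argument pattern applies beyond `create_dataset`.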
gh_patches_debug_29660
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-5409
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PnL Report is wrong for Genesis ETH2 Validator
## Problem Definition
For a validator that deposited prior to the Beacon Chain Genesis date
- the post genesis deposit event (from the 06/11/2020) is treated as if the 32 ETH were sold
- the genesis day `ETH2 staking daily PnL` value (from the 01/12/2020) in the Profit and Loss Report is too large by 32 ETH (32.016 ETH instead of 0.016 ETH)

One day later, the `ETH2 staking daily PnL` value is correctly calculated.

## Logs
I can provide logs if needed via a DM.
I assume that you can reproduce for any pre-geneis deposited validator. Accordingly, I suggest that you pick a validator with a sufficiently low validator index, e.g. [Validator 999](https://beaconcha.in/validator/999) who [deposited on 05/12/2020](https://etherscan.io/tx/0x187bef85f7797f4f42534fcfa080ed28ab77491b79fe9e9be8039416eebab6bc)
### System Description
Operating system: MacOS Montery
Rotki version: 1.26.3
</issue>
<code>
[start of rotkehlchen/chain/ethereum/modules/eth2/utils.py]
1 import logging
2 from http import HTTPStatus
3 from typing import NamedTuple
4
5 import gevent
6 import requests
7 from bs4 import BeautifulSoup, SoupStrainer
8
9 from rotkehlchen.constants.assets import A_ETH
10 from rotkehlchen.constants.misc import ZERO
11 from rotkehlchen.constants.timing import DAY_IN_SECONDS, DEFAULT_TIMEOUT_TUPLE
12 from rotkehlchen.errors.misc import RemoteError
13 from rotkehlchen.fval import FVal
14 from rotkehlchen.history.price import query_usd_price_zero_if_error
15 from rotkehlchen.logging import RotkehlchenLogsAdapter
16 from rotkehlchen.types import Timestamp
17 from rotkehlchen.user_messages import MessagesAggregator
18 from rotkehlchen.utils.misc import create_timestamp
19
20 from .structures import ValidatorDailyStats
21
22 logger = logging.getLogger(__name__)
23 log = RotkehlchenLogsAdapter(logger)
24
25
26 class ValidatorBalance(NamedTuple):
27 epoch: int
28 balance: int # in gwei
29 effective_balance: int # in wei
30
31
32 def _parse_fval(line: str, entry: str) -> FVal:
33 try:
34 result = FVal(line.replace('ETH', ''))
35 except ValueError as e:
36 raise RemoteError(f'Could not parse {line} as a number for {entry}') from e
37
38 return result
39
40
41 def _parse_int(line: str, entry: str) -> int:
42 try:
43 if line == '-':
44 result = 0
45 else:
46 result = int(line)
47 except ValueError as e:
48 raise RemoteError(f'Could not parse {line} as an integer for {entry}') from e
49
50 return result
51
52
53 def scrape_validator_daily_stats(
54 validator_index: int,
55 last_known_timestamp: Timestamp,
56 msg_aggregator: MessagesAggregator,
57 ) -> list[ValidatorDailyStats]:
58 """Scrapes the website of beaconcha.in and parses the data directly out of the data table.
59
60 The parser is very simple. And can break if they change stuff in the way
61 it's displayed in https://beaconcha.in/validator/33710/stats. If that happpens
62 we need to adjust here. If we also somehow programatically get the data in a CSV
63 that would be swell.
64
65 May raise:
66 - RemoteError if we can't query beaconcha.in or if the data is not in the expected format
67 """
68 url = f'https://beaconcha.in/validator/{validator_index}/stats'
69 tries = 1
70 max_tries = 3
71 backoff = 60
72 while True:
73 log.debug(f'Querying beaconcha.in stats: {url}')
74 try:
75 response = requests.get(url, timeout=DEFAULT_TIMEOUT_TUPLE)
76 except requests.exceptions.RequestException as e:
77 raise RemoteError(f'Beaconcha.in api request {url} failed due to {str(e)}') from e
78
79 if response.status_code == HTTPStatus.TOO_MANY_REQUESTS and tries <= max_tries:
80 sleep_secs = backoff * tries / max_tries
81 log.warning(
82 f'Querying {url} returned 429. Backoff try {tries} / {max_tries}.'
83 f' We are backing off for {sleep_secs}',
84 )
85 tries += 1
86 gevent.sleep(sleep_secs)
87 continue
88
89 if response.status_code != 200:
90 raise RemoteError(
91 f'Beaconcha.in api request {url} failed with code: {response.status_code}'
92 f' and response: {response.text}',
93 )
94
95 break # else all good - break from the loop
96
97 log.debug('Got beaconcha.in stats results. Processing it.')
98 soup = BeautifulSoup(response.text, 'html.parser', parse_only=SoupStrainer('tbod'))
99 if soup is None:
100 raise RemoteError('Could not find <tbod> while parsing beaconcha.in stats page')
101 try:
102 tr = soup.tbod.tr
103 except AttributeError as e:
104 raise RemoteError('Could not find first <tr> while parsing beaconcha.in stats page') from e
105
106 timestamp = Timestamp(0)
107 pnl = ZERO
108 start_amount = ZERO
109 end_amount = ZERO
110 missed_attestations = 0
111 orphaned_attestations = 0
112 proposed_blocks = 0
113 missed_blocks = 0
114 orphaned_blocks = 0
115 included_attester_slashings = 0
116 proposer_attester_slashings = 0
117 deposits_number = 0
118 amount_deposited = ZERO
119 column_pos = 1
120 stats: list[ValidatorDailyStats] = []
121 while tr is not None:
122
123 for column in tr.children:
124 if column.name != 'td':
125 continue
126
127 if column_pos == 1: # date
128 date = column.string
129 try:
130 timestamp = create_timestamp(date, formatstr='%d %b %Y')
131 except ValueError as e:
132 raise RemoteError(f'Failed to parse {date} to timestamp') from e
133
134 if timestamp <= last_known_timestamp:
135 return stats # we are done
136
137 column_pos += 1
138 elif column_pos == 2:
139 pnl = _parse_fval(column.string, 'income')
140 column_pos += 1
141 elif column_pos == 3:
142 start_amount = _parse_fval(column.string, 'start amount')
143 column_pos += 1
144 elif column_pos == 4:
145 end_amount = _parse_fval(column.string, 'end amount')
146 column_pos += 1
147 elif column_pos == 5:
148 missed_attestations = _parse_int(column.string, 'missed attestations')
149 column_pos += 1
150 elif column_pos == 6:
151 orphaned_attestations = _parse_int(column.string, 'orphaned attestations')
152 column_pos += 1
153 elif column_pos == 7:
154 proposed_blocks = _parse_int(column.string, 'proposed blocks')
155 column_pos += 1
156 elif column_pos == 8:
157 missed_blocks = _parse_int(column.string, 'missed blocks')
158 column_pos += 1
159 elif column_pos == 9:
160 orphaned_blocks = _parse_int(column.string, 'orphaned blocks')
161 column_pos += 1
162 elif column_pos == 10:
163 included_attester_slashings = _parse_int(column.string, 'included attester slashings') # noqa: E501
164 column_pos += 1
165 elif column_pos == 11:
166 proposer_attester_slashings = _parse_int(column.string, 'proposer attester slashings') # noqa: E501
167 column_pos += 1
168 elif column_pos == 12:
169 deposits_number = _parse_int(column.string, 'deposits number')
170 column_pos += 1
171 elif column_pos == 13:
172 amount_deposited = _parse_fval(column.string, 'amount deposited')
173 column_pos += 1
174
175 column_pos = 1
176 prices = [
177 query_usd_price_zero_if_error(
178 A_ETH,
179 time=time,
180 location='eth2 staking daily stats',
181 msg_aggregator=msg_aggregator,
182 )
183 for time in (timestamp, Timestamp(timestamp + DAY_IN_SECONDS))
184 ]
185 stats.append(ValidatorDailyStats(
186 validator_index=validator_index,
187 timestamp=timestamp,
188 start_usd_price=prices[0],
189 end_usd_price=prices[1],
190 pnl=pnl,
191 start_amount=start_amount,
192 end_amount=end_amount,
193 missed_attestations=missed_attestations,
194 orphaned_attestations=orphaned_attestations,
195 proposed_blocks=proposed_blocks,
196 missed_blocks=missed_blocks,
197 orphaned_blocks=orphaned_blocks,
198 included_attester_slashings=included_attester_slashings,
199 proposer_attester_slashings=proposer_attester_slashings,
200 deposits_number=deposits_number,
201 amount_deposited=amount_deposited,
202 ))
203 tr = tr.find_next_sibling()
204
205 log.debug('Processed beaconcha.in stats results. Returning it.')
206 return stats
207
[end of rotkehlchen/chain/ethereum/modules/eth2/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rotkehlchen/chain/ethereum/modules/eth2/utils.py b/rotkehlchen/chain/ethereum/modules/eth2/utils.py
--- a/rotkehlchen/chain/ethereum/modules/eth2/utils.py
+++ b/rotkehlchen/chain/ethereum/modules/eth2/utils.py
@@ -7,7 +7,7 @@
from bs4 import BeautifulSoup, SoupStrainer
from rotkehlchen.constants.assets import A_ETH
-from rotkehlchen.constants.misc import ZERO
+from rotkehlchen.constants.misc import ONE, ZERO
from rotkehlchen.constants.timing import DAY_IN_SECONDS, DEFAULT_TIMEOUT_TUPLE
from rotkehlchen.errors.misc import RemoteError
from rotkehlchen.fval import FVal
@@ -22,6 +22,9 @@
logger = logging.getLogger(__name__)
log = RotkehlchenLogsAdapter(logger)
+DAY_AFTER_ETH2_GENESIS = Timestamp(1606780800)
+INITIAL_ETH_DEPOSIT = FVal(32)
+
class ValidatorBalance(NamedTuple):
epoch: int
@@ -137,6 +140,13 @@
column_pos += 1
elif column_pos == 2:
pnl = _parse_fval(column.string, 'income')
+ # if the validator makes profit in the genesis day beaconchain returns a
+ # profit of deposit + validation reward. We need to subtract the deposit value
+ # to obtain the actual pnl.
+ # Example: https://beaconcha.in/validator/999/stats
+ if pnl > ONE and timestamp == DAY_AFTER_ETH2_GENESIS:
+ pnl -= INITIAL_ETH_DEPOSIT
+
column_pos += 1
elif column_pos == 3:
start_amount = _parse_fval(column.string, 'start amount')
|
{"golden_diff": "diff --git a/rotkehlchen/chain/ethereum/modules/eth2/utils.py b/rotkehlchen/chain/ethereum/modules/eth2/utils.py\n--- a/rotkehlchen/chain/ethereum/modules/eth2/utils.py\n+++ b/rotkehlchen/chain/ethereum/modules/eth2/utils.py\n@@ -7,7 +7,7 @@\n from bs4 import BeautifulSoup, SoupStrainer\n \n from rotkehlchen.constants.assets import A_ETH\n-from rotkehlchen.constants.misc import ZERO\n+from rotkehlchen.constants.misc import ONE, ZERO\n from rotkehlchen.constants.timing import DAY_IN_SECONDS, DEFAULT_TIMEOUT_TUPLE\n from rotkehlchen.errors.misc import RemoteError\n from rotkehlchen.fval import FVal\n@@ -22,6 +22,9 @@\n logger = logging.getLogger(__name__)\n log = RotkehlchenLogsAdapter(logger)\n \n+DAY_AFTER_ETH2_GENESIS = Timestamp(1606780800)\n+INITIAL_ETH_DEPOSIT = FVal(32)\n+\n \n class ValidatorBalance(NamedTuple):\n epoch: int\n@@ -137,6 +140,13 @@\n column_pos += 1\n elif column_pos == 2:\n pnl = _parse_fval(column.string, 'income')\n+ # if the validator makes profit in the genesis day beaconchain returns a\n+ # profit of deposit + validation reward. We need to subtract the deposit value\n+ # to obtain the actual pnl.\n+ # Example: https://beaconcha.in/validator/999/stats\n+ if pnl > ONE and timestamp == DAY_AFTER_ETH2_GENESIS:\n+ pnl -= INITIAL_ETH_DEPOSIT\n+\n column_pos += 1\n elif column_pos == 3:\n start_amount = _parse_fval(column.string, 'start amount')\n", "issue": "PnL Report is wrong for Genesis ETH2 Validator\n## Problem Definition\r\n\r\nFor a validator that deposited prior to the Beacon Chain Genesis date\r\n- the post genesis deposit event (from the 06/11/2020) is treated as if the 32 ETH were sold\r\n- the genesis day `ETH2 staking daily PnL` value (from the 01/12/2020) in the Profit and Loss Report is too large by 32 ETH (32.016 ETH instead of 0.016 ETH) \r\n\r\n\r\n\r\nOne day later, the `ETH2 staking daily PnL` value is correctly calculated.\r\n\r\n\r\n## Logs\r\n\r\nI can provide logs if needed via a DM.\r\n\r\nI assume that you can reproduce for any pre-geneis deposited validator. Accordingly, I suggest that you pick a validator with a sufficiently low validator index, e.g. 
[Validator 999](https://beaconcha.in/validator/999) who [deposited on 05/12/2020](https://etherscan.io/tx/0x187bef85f7797f4f42534fcfa080ed28ab77491b79fe9e9be8039416eebab6bc)\r\n\r\n\r\n### System Description\r\n\r\nOperating system: MacOS Montery\r\nRotki version: 1.26.3\r\n\n", "before_files": [{"content": "import logging\nfrom http import HTTPStatus\nfrom typing import NamedTuple\n\nimport gevent\nimport requests\nfrom bs4 import BeautifulSoup, SoupStrainer\n\nfrom rotkehlchen.constants.assets import A_ETH\nfrom rotkehlchen.constants.misc import ZERO\nfrom rotkehlchen.constants.timing import DAY_IN_SECONDS, DEFAULT_TIMEOUT_TUPLE\nfrom rotkehlchen.errors.misc import RemoteError\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.history.price import query_usd_price_zero_if_error\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.types import Timestamp\nfrom rotkehlchen.user_messages import MessagesAggregator\nfrom rotkehlchen.utils.misc import create_timestamp\n\nfrom .structures import ValidatorDailyStats\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass ValidatorBalance(NamedTuple):\n epoch: int\n balance: int # in gwei\n effective_balance: int # in wei\n\n\ndef _parse_fval(line: str, entry: str) -> FVal:\n try:\n result = FVal(line.replace('ETH', ''))\n except ValueError as e:\n raise RemoteError(f'Could not parse {line} as a number for {entry}') from e\n\n return result\n\n\ndef _parse_int(line: str, entry: str) -> int:\n try:\n if line == '-':\n result = 0\n else:\n result = int(line)\n except ValueError as e:\n raise RemoteError(f'Could not parse {line} as an integer for {entry}') from e\n\n return result\n\n\ndef scrape_validator_daily_stats(\n validator_index: int,\n last_known_timestamp: Timestamp,\n msg_aggregator: MessagesAggregator,\n) -> list[ValidatorDailyStats]:\n \"\"\"Scrapes the website of beaconcha.in and parses the data directly out of the data table.\n\n The parser is very simple. And can break if they change stuff in the way\n it's displayed in https://beaconcha.in/validator/33710/stats. If that happpens\n we need to adjust here. If we also somehow programatically get the data in a CSV\n that would be swell.\n\n May raise:\n - RemoteError if we can't query beaconcha.in or if the data is not in the expected format\n \"\"\"\n url = f'https://beaconcha.in/validator/{validator_index}/stats'\n tries = 1\n max_tries = 3\n backoff = 60\n while True:\n log.debug(f'Querying beaconcha.in stats: {url}')\n try:\n response = requests.get(url, timeout=DEFAULT_TIMEOUT_TUPLE)\n except requests.exceptions.RequestException as e:\n raise RemoteError(f'Beaconcha.in api request {url} failed due to {str(e)}') from e\n\n if response.status_code == HTTPStatus.TOO_MANY_REQUESTS and tries <= max_tries:\n sleep_secs = backoff * tries / max_tries\n log.warning(\n f'Querying {url} returned 429. Backoff try {tries} / {max_tries}.'\n f' We are backing off for {sleep_secs}',\n )\n tries += 1\n gevent.sleep(sleep_secs)\n continue\n\n if response.status_code != 200:\n raise RemoteError(\n f'Beaconcha.in api request {url} failed with code: {response.status_code}'\n f' and response: {response.text}',\n )\n\n break # else all good - break from the loop\n\n log.debug('Got beaconcha.in stats results. 
Processing it.')\n soup = BeautifulSoup(response.text, 'html.parser', parse_only=SoupStrainer('tbod'))\n if soup is None:\n raise RemoteError('Could not find <tbod> while parsing beaconcha.in stats page')\n try:\n tr = soup.tbod.tr\n except AttributeError as e:\n raise RemoteError('Could not find first <tr> while parsing beaconcha.in stats page') from e\n\n timestamp = Timestamp(0)\n pnl = ZERO\n start_amount = ZERO\n end_amount = ZERO\n missed_attestations = 0\n orphaned_attestations = 0\n proposed_blocks = 0\n missed_blocks = 0\n orphaned_blocks = 0\n included_attester_slashings = 0\n proposer_attester_slashings = 0\n deposits_number = 0\n amount_deposited = ZERO\n column_pos = 1\n stats: list[ValidatorDailyStats] = []\n while tr is not None:\n\n for column in tr.children:\n if column.name != 'td':\n continue\n\n if column_pos == 1: # date\n date = column.string\n try:\n timestamp = create_timestamp(date, formatstr='%d %b %Y')\n except ValueError as e:\n raise RemoteError(f'Failed to parse {date} to timestamp') from e\n\n if timestamp <= last_known_timestamp:\n return stats # we are done\n\n column_pos += 1\n elif column_pos == 2:\n pnl = _parse_fval(column.string, 'income')\n column_pos += 1\n elif column_pos == 3:\n start_amount = _parse_fval(column.string, 'start amount')\n column_pos += 1\n elif column_pos == 4:\n end_amount = _parse_fval(column.string, 'end amount')\n column_pos += 1\n elif column_pos == 5:\n missed_attestations = _parse_int(column.string, 'missed attestations')\n column_pos += 1\n elif column_pos == 6:\n orphaned_attestations = _parse_int(column.string, 'orphaned attestations')\n column_pos += 1\n elif column_pos == 7:\n proposed_blocks = _parse_int(column.string, 'proposed blocks')\n column_pos += 1\n elif column_pos == 8:\n missed_blocks = _parse_int(column.string, 'missed blocks')\n column_pos += 1\n elif column_pos == 9:\n orphaned_blocks = _parse_int(column.string, 'orphaned blocks')\n column_pos += 1\n elif column_pos == 10:\n included_attester_slashings = _parse_int(column.string, 'included attester slashings') # noqa: E501\n column_pos += 1\n elif column_pos == 11:\n proposer_attester_slashings = _parse_int(column.string, 'proposer attester slashings') # noqa: E501\n column_pos += 1\n elif column_pos == 12:\n deposits_number = _parse_int(column.string, 'deposits number')\n column_pos += 1\n elif column_pos == 13:\n amount_deposited = _parse_fval(column.string, 'amount deposited')\n column_pos += 1\n\n column_pos = 1\n prices = [\n query_usd_price_zero_if_error(\n A_ETH,\n time=time,\n location='eth2 staking daily stats',\n msg_aggregator=msg_aggregator,\n )\n for time in (timestamp, Timestamp(timestamp + DAY_IN_SECONDS))\n ]\n stats.append(ValidatorDailyStats(\n validator_index=validator_index,\n timestamp=timestamp,\n start_usd_price=prices[0],\n end_usd_price=prices[1],\n pnl=pnl,\n start_amount=start_amount,\n end_amount=end_amount,\n missed_attestations=missed_attestations,\n orphaned_attestations=orphaned_attestations,\n proposed_blocks=proposed_blocks,\n missed_blocks=missed_blocks,\n orphaned_blocks=orphaned_blocks,\n included_attester_slashings=included_attester_slashings,\n proposer_attester_slashings=proposer_attester_slashings,\n deposits_number=deposits_number,\n amount_deposited=amount_deposited,\n ))\n tr = tr.find_next_sibling()\n\n log.debug('Processed beaconcha.in stats results. Returning it.')\n return stats\n", "path": "rotkehlchen/chain/ethereum/modules/eth2/utils.py"}]}
| 3,350 | 405 |
gh_patches_debug_21580
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-6172
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect output of `cupy.logaddexp()`
For this case, mathematically we should get `inf`, but CuPy returns `nan`:
```python
>>> np.logaddexp(np.inf, np.inf)
inf
>>>
>>> cp.logaddexp(np.inf, np.inf)
array(nan)
```
The reason is `in0-in1` gives `nan` when both are `inf`, and it propagates all the way out:
https://github.com/cupy/cupy/blob/4469fae998df33c72ff40ef954cb08b8f0004b18/cupy/_math/explog.py#L73
</issue>
<code>
[start of cupy/_math/explog.py]
1 from cupy import _core
2 from cupy._math import ufunc
3
4
5 exp = ufunc.create_math_ufunc(
6 'exp', 1, 'cupy_exp',
7 '''Elementwise exponential function.
8
9 .. seealso:: :data:`numpy.exp`
10
11 ''')
12
13
14 expm1 = ufunc.create_math_ufunc(
15 'expm1', 1, 'cupy_expm1',
16 '''Computes ``exp(x) - 1`` elementwise.
17
18 .. seealso:: :data:`numpy.expm1`
19
20 ''')
21
22
23 exp2 = _core.create_ufunc(
24 'cupy_exp2',
25 ('e->e', 'f->f', 'd->d', 'F->F', 'D->D'),
26 'out0 = pow(in0_type(2), in0)',
27 doc='''Elementwise exponentiation with base 2.
28
29 .. seealso:: :data:`numpy.exp2`
30
31 ''')
32
33
34 log = ufunc.create_math_ufunc(
35 'log', 1, 'cupy_log',
36 '''Elementwise natural logarithm function.
37
38 .. seealso:: :data:`numpy.log`
39
40 ''')
41
42
43 log10 = ufunc.create_math_ufunc(
44 'log10', 1, 'cupy_log10',
45 '''Elementwise common logarithm function.
46
47 .. seealso:: :data:`numpy.log10`
48
49 ''')
50
51
52 log2 = ufunc.create_math_ufunc(
53 'log2', 1, 'cupy_log2',
54 '''Elementwise binary logarithm function.
55
56 .. seealso:: :data:`numpy.log2`
57
58 ''')
59
60
61 log1p = ufunc.create_math_ufunc(
62 'log1p', 1, 'cupy_log1p',
63 '''Computes ``log(1 + x)`` elementwise.
64
65 .. seealso:: :data:`numpy.log1p`
66
67 ''')
68
69
70 logaddexp = _core.create_ufunc(
71 'cupy_logaddexp',
72 ('ee->e', 'ff->f', 'dd->d'),
73 'out0 = fmax(in0, in1) + log1p(exp(-fabs(in0 - in1)))',
74 doc='''Computes ``log(exp(x1) + exp(x2))`` elementwise.
75
76 .. seealso:: :data:`numpy.logaddexp`
77
78 ''')
79
80
81 logaddexp2 = _core.create_ufunc(
82 'cupy_logaddexp2',
83 ('ee->e', 'ff->f', 'dd->d'),
84 'out0 = fmax(in0, in1) + log2(1 + exp2(-fabs(in0 - in1)))',
85 doc='''Computes ``log2(exp2(x1) + exp2(x2))`` elementwise.
86
87 .. seealso:: :data:`numpy.logaddexp2`
88
89 ''')
90
[end of cupy/_math/explog.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupy/_math/explog.py b/cupy/_math/explog.py
--- a/cupy/_math/explog.py
+++ b/cupy/_math/explog.py
@@ -70,7 +70,14 @@
logaddexp = _core.create_ufunc(
'cupy_logaddexp',
('ee->e', 'ff->f', 'dd->d'),
- 'out0 = fmax(in0, in1) + log1p(exp(-fabs(in0 - in1)))',
+ '''
+ if (in0 == in1) {
+ /* Handles infinities of the same sign */
+ out0 = in0 + log(2.0);
+ } else {
+ out0 = fmax(in0, in1) + log1p(exp(-fabs(in0 - in1)));
+ }
+ ''',
doc='''Computes ``log(exp(x1) + exp(x2))`` elementwise.
.. seealso:: :data:`numpy.logaddexp`
@@ -81,7 +88,14 @@
logaddexp2 = _core.create_ufunc(
'cupy_logaddexp2',
('ee->e', 'ff->f', 'dd->d'),
- 'out0 = fmax(in0, in1) + log2(1 + exp2(-fabs(in0 - in1)))',
+ '''
+ if (in0 == in1) {
+ /* Handles infinities of the same sign */
+ out0 = in0 + 1.0;
+ } else {
+ out0 = fmax(in0, in1) + log2(1 + exp2(-fabs(in0 - in1)));
+ }
+ ''',
doc='''Computes ``log2(exp2(x1) + exp2(x2))`` elementwise.
.. seealso:: :data:`numpy.logaddexp2`
|
{"golden_diff": "diff --git a/cupy/_math/explog.py b/cupy/_math/explog.py\n--- a/cupy/_math/explog.py\n+++ b/cupy/_math/explog.py\n@@ -70,7 +70,14 @@\n logaddexp = _core.create_ufunc(\n 'cupy_logaddexp',\n ('ee->e', 'ff->f', 'dd->d'),\n- 'out0 = fmax(in0, in1) + log1p(exp(-fabs(in0 - in1)))',\n+ '''\n+ if (in0 == in1) {\n+ /* Handles infinities of the same sign */\n+ out0 = in0 + log(2.0);\n+ } else {\n+ out0 = fmax(in0, in1) + log1p(exp(-fabs(in0 - in1)));\n+ }\n+ ''',\n doc='''Computes ``log(exp(x1) + exp(x2))`` elementwise.\n \n .. seealso:: :data:`numpy.logaddexp`\n@@ -81,7 +88,14 @@\n logaddexp2 = _core.create_ufunc(\n 'cupy_logaddexp2',\n ('ee->e', 'ff->f', 'dd->d'),\n- 'out0 = fmax(in0, in1) + log2(1 + exp2(-fabs(in0 - in1)))',\n+ '''\n+ if (in0 == in1) {\n+ /* Handles infinities of the same sign */\n+ out0 = in0 + 1.0;\n+ } else {\n+ out0 = fmax(in0, in1) + log2(1 + exp2(-fabs(in0 - in1)));\n+ }\n+ ''',\n doc='''Computes ``log2(exp2(x1) + exp2(x2))`` elementwise.\n \n .. seealso:: :data:`numpy.logaddexp2`\n", "issue": "Incorrect output of `cupy.logaddexp()`\nFor this case, mathematically we should get `inf`, but CuPy returns `nan`:\r\n```python\r\n>>> np.logaddexp(np.inf, np.inf)\r\ninf\r\n>>>\r\n>>> cp.logaddexp(np.inf, np.inf)\r\narray(nan)\r\n\r\n```\r\nThe reason is `in0-in1` gives `nan` when both are `inf`, and it propagates all the way out:\r\nhttps://github.com/cupy/cupy/blob/4469fae998df33c72ff40ef954cb08b8f0004b18/cupy/_math/explog.py#L73\r\n\r\n\n", "before_files": [{"content": "from cupy import _core\nfrom cupy._math import ufunc\n\n\nexp = ufunc.create_math_ufunc(\n 'exp', 1, 'cupy_exp',\n '''Elementwise exponential function.\n\n .. seealso:: :data:`numpy.exp`\n\n ''')\n\n\nexpm1 = ufunc.create_math_ufunc(\n 'expm1', 1, 'cupy_expm1',\n '''Computes ``exp(x) - 1`` elementwise.\n\n .. seealso:: :data:`numpy.expm1`\n\n ''')\n\n\nexp2 = _core.create_ufunc(\n 'cupy_exp2',\n ('e->e', 'f->f', 'd->d', 'F->F', 'D->D'),\n 'out0 = pow(in0_type(2), in0)',\n doc='''Elementwise exponentiation with base 2.\n\n .. seealso:: :data:`numpy.exp2`\n\n ''')\n\n\nlog = ufunc.create_math_ufunc(\n 'log', 1, 'cupy_log',\n '''Elementwise natural logarithm function.\n\n .. seealso:: :data:`numpy.log`\n\n ''')\n\n\nlog10 = ufunc.create_math_ufunc(\n 'log10', 1, 'cupy_log10',\n '''Elementwise common logarithm function.\n\n .. seealso:: :data:`numpy.log10`\n\n ''')\n\n\nlog2 = ufunc.create_math_ufunc(\n 'log2', 1, 'cupy_log2',\n '''Elementwise binary logarithm function.\n\n .. seealso:: :data:`numpy.log2`\n\n ''')\n\n\nlog1p = ufunc.create_math_ufunc(\n 'log1p', 1, 'cupy_log1p',\n '''Computes ``log(1 + x)`` elementwise.\n\n .. seealso:: :data:`numpy.log1p`\n\n ''')\n\n\nlogaddexp = _core.create_ufunc(\n 'cupy_logaddexp',\n ('ee->e', 'ff->f', 'dd->d'),\n 'out0 = fmax(in0, in1) + log1p(exp(-fabs(in0 - in1)))',\n doc='''Computes ``log(exp(x1) + exp(x2))`` elementwise.\n\n .. seealso:: :data:`numpy.logaddexp`\n\n ''')\n\n\nlogaddexp2 = _core.create_ufunc(\n 'cupy_logaddexp2',\n ('ee->e', 'ff->f', 'dd->d'),\n 'out0 = fmax(in0, in1) + log2(1 + exp2(-fabs(in0 - in1)))',\n doc='''Computes ``log2(exp2(x1) + exp2(x2))`` elementwise.\n\n .. seealso:: :data:`numpy.logaddexp2`\n\n ''')\n", "path": "cupy/_math/explog.py"}]}
| 1,517 | 429 |
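To see why the original kernel expression degenerates for two equal infinities, and what the guarded branch added by the patch computes instead, here is a plain-Python sketch of the same arithmetic (it mirrors the element-wise formula only, not the actual CUDA ufunc):

```python
import math

def logaddexp_naive(a, b):
    # Mirrors the original kernel body: inf - inf is nan, and the nan propagates.
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def logaddexp_guarded(a, b):
    # Mirrors the patched kernel: equal inputs (including +/-inf) short-circuit
    # to a + log(2), since log(e**a + e**a) == a + log(2).
    if a == b:
        return a + math.log(2.0)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

inf = float("inf")
print(logaddexp_naive(inf, inf))    # nan
print(logaddexp_guarded(inf, inf))  # inf
```

The same identity keeps `-inf` inputs correct, and `logaddexp2` uses the analogous constant `1.0` because it works in base 2.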
gh_patches_debug_43359
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-5022
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Count constructor crashes when data is empty
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: 0.15.1
- **Python version**: 3.7.3
- **Operating system**: Mac OS Catalina 10.15.6
### What is the current behavior?
I simply ran a circuit which does not perform any measurements. On the returned results, `get_counts()` crashes because an empty dictionary is passed into the `Count` constructor and the constructor tries to iterate on the empty dictionary.
I tried this on an older version, Terra 0.14.2, which instead of `Count` uses `postprocess.format_counts()`, which simply ignores the empty dictionary and returns an empty dictionary in this case. This did not crash.
### Steps to reproduce the problem
Run any circuit without calling `measure()`, then try to apply `get_counts()` to the results.
### What is the expected behavior?
`get_counts()` should return an empty dictionary (and not crash).
### Suggested solutions
A simple check for an empty dictionary in the `Count` constructor would be sufficient, I think.
</issue>
<code>
[start of qiskit/result/counts.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2020.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """A container class for counts from a circuit execution."""
16
17 import re
18
19 from qiskit.result import postprocess
20 from qiskit import exceptions
21
22
23 # NOTE: A dict subclass should not overload any dunder methods like __getitem__
24 # this can cause unexpected behavior and issues as the cPython dict
25 # implementation has many standard methods in C for performance and the dunder
26 # methods are not always used as expected. For example, update() doesn't call
27 # __setitem__ so overloading __setitem__ would not always provide the expected
28 # result
29 class Counts(dict):
30 """A class to store a counts result from a circuit execution."""
31
32 def __init__(self, data, time_taken=None, creg_sizes=None,
33 memory_slots=None):
34 """Build a counts object
35
36 Args:
37 data (dict): The dictionary input for the counts. Where the keys
38 represent a measured classical value and the value is an
39 integer the number of shots with that result.
40 The keys can be one of several formats:
41
42 * A hexademical string of the form ``"0x4a"``
43 * A bit string prefixed with ``0b`` for example ``'0b1011'``
44 * A bit string formatted across register and memory slots.
45 For example, ``'00 10'``.
46 * A dit string, for example ``'02'``. Note for objects created
47 with dit strings the ``creg_sizes``and ``memory_slots``
48 kwargs don't work and :meth:`hex_outcomes` and
49 :meth:`int_outcomes` also do not work.
50
51 time_taken (float): The duration of the experiment that generated
52 the counts
53 creg_sizes (list): a nested list where the inner element is a list
54 of tuples containing both the classical register name and
55 classical register size. For example,
56 ``[('c_reg', 2), ('my_creg', 4)]``.
57 memory_slots (int): The number of total ``memory_slots`` in the
58 experiment.
59 Raises:
60 TypeError: If the input key type is not an int or string
61 QiskitError: If a dit string key is input with creg_sizes and/or
62 memory_slots
63 """
64 bin_data = None
65 data = dict(data)
66 first_key = next(iter(data.keys()))
67 if isinstance(first_key, int):
68 self.int_raw = data
69 self.hex_raw = {
70 hex(key): value for key, value in self.int_raw.items()}
71 elif isinstance(first_key, str):
72 if first_key.startswith('0x'):
73 self.hex_raw = data
74 self.int_raw = {
75 int(key, 0): value for key, value in self.hex_raw.items()}
76 elif first_key.startswith('0b'):
77 self.int_raw = {
78 int(key, 0): value for key, value in data.items()}
79 self.hex_raw = {
80 hex(key): value for key, value in self.int_raw.items()}
81 else:
82 if not creg_sizes and not memory_slots:
83 self.hex_raw = None
84 self.int_raw = None
85 bin_data = data
86 else:
87 bitstring_regex = re.compile(r'^[01\s]+$')
88 hex_dict = {}
89 int_dict = {}
90 for bitstring, value in data.items():
91 if not bitstring_regex.search(bitstring):
92 raise exceptions.QiskitError(
93 'Counts objects with dit strings do not '
94 'currently support dit string formatting parameters '
95 'creg_sizes or memory_slots')
96 int_key = int(bitstring.replace(" ", ""), 2)
97 int_dict[int_key] = value
98 hex_dict[hex(int_key)] = value
99 self.hex_raw = hex_dict
100 self.int_raw = int_dict
101 else:
102 raise TypeError("Invalid input key type %s, must be either an int "
103 "key or string key with hexademical value or bit string")
104 header = {}
105 self.creg_sizes = creg_sizes
106 if self.creg_sizes:
107 header['creg_sizes'] = self.creg_sizes
108 self.memory_slots = memory_slots
109 if self.memory_slots:
110 header['memory_slots'] = self.memory_slots
111 if not bin_data:
112 bin_data = postprocess.format_counts(self.hex_raw, header=header)
113 super().__init__(bin_data)
114 self.time_taken = time_taken
115
116 def most_frequent(self):
117 """Return the most frequent count
118
119 Returns:
120 str: The bit string for the most frequent result
121 Raises:
122 QiskitError: when there is >1 count with the same max counts
123 """
124 max_value = max(self.values())
125 max_values_counts = [x[0] for x in self.items() if x[1] == max_value]
126 if len(max_values_counts) != 1:
127 raise exceptions.QiskitError(
128 "Multiple values have the same maximum counts: %s" %
129 ','.join(max_values_counts))
130 return max_values_counts[0]
131
132 def hex_outcomes(self):
133 """Return a counts dictionary with hexademical string keys
134
135 Returns:
136 dict: A dictionary with the keys as hexadecimal strings instead of
137 bitstrings
138 Raises:
139 QiskitError: If the Counts object contains counts for dit strings
140 """
141 if self.hex_raw:
142 return {key.lower(): value for key, value in self.hex_raw.items()}
143 else:
144 bitstring_regex = re.compile(r'^[01\s]+$')
145 out_dict = {}
146 for bitstring, value in self.items():
147 if not bitstring_regex.search(bitstring):
148 raise exceptions.QiskitError(
149 'Counts objects with dit strings do not '
150 'currently support conversion to hexadecimal')
151 int_key = int(bitstring.replace(" ", ""), 2)
152 out_dict[hex(int_key)] = value
153 return out_dict
154
155 def int_outcomes(self):
156 """Build a counts dictionary with integer keys instead of count strings
157
158 Returns:
159 dict: A dictionary with the keys as integers instead of bitstrings
160 Raises:
161 QiskitError: If the Counts object contains counts for dit strings
162 """
163 if self.int_raw:
164 return self.int_raw
165 else:
166 bitstring_regex = re.compile(r'^[01\s]+$')
167 out_dict = {}
168 for bitstring, value in self.items():
169 if not bitstring_regex.search(bitstring):
170 raise exceptions.QiskitError(
171 'Counts objects with dit strings do not '
172 'currently support conversion to integer')
173 int_key = int(bitstring.replace(" ", ""), 2)
174 out_dict[int_key] = value
175 return out_dict
176
[end of qiskit/result/counts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/qiskit/result/counts.py b/qiskit/result/counts.py
--- a/qiskit/result/counts.py
+++ b/qiskit/result/counts.py
@@ -63,44 +63,49 @@
"""
bin_data = None
data = dict(data)
- first_key = next(iter(data.keys()))
- if isinstance(first_key, int):
- self.int_raw = data
- self.hex_raw = {
- hex(key): value for key, value in self.int_raw.items()}
- elif isinstance(first_key, str):
- if first_key.startswith('0x'):
- self.hex_raw = data
- self.int_raw = {
- int(key, 0): value for key, value in self.hex_raw.items()}
- elif first_key.startswith('0b'):
- self.int_raw = {
- int(key, 0): value for key, value in data.items()}
+ if not data:
+ self.int_raw = {}
+ self.hex_raw = {}
+ bin_data = {}
+ else:
+ first_key = next(iter(data.keys()))
+ if isinstance(first_key, int):
+ self.int_raw = data
self.hex_raw = {
hex(key): value for key, value in self.int_raw.items()}
- else:
- if not creg_sizes and not memory_slots:
- self.hex_raw = None
- self.int_raw = None
- bin_data = data
+ elif isinstance(first_key, str):
+ if first_key.startswith('0x'):
+ self.hex_raw = data
+ self.int_raw = {
+ int(key, 0): value for key, value in self.hex_raw.items()}
+ elif first_key.startswith('0b'):
+ self.int_raw = {
+ int(key, 0): value for key, value in data.items()}
+ self.hex_raw = {
+ hex(key): value for key, value in self.int_raw.items()}
else:
- bitstring_regex = re.compile(r'^[01\s]+$')
- hex_dict = {}
- int_dict = {}
- for bitstring, value in data.items():
- if not bitstring_regex.search(bitstring):
- raise exceptions.QiskitError(
- 'Counts objects with dit strings do not '
- 'currently support dit string formatting parameters '
- 'creg_sizes or memory_slots')
- int_key = int(bitstring.replace(" ", ""), 2)
- int_dict[int_key] = value
- hex_dict[hex(int_key)] = value
- self.hex_raw = hex_dict
- self.int_raw = int_dict
- else:
- raise TypeError("Invalid input key type %s, must be either an int "
- "key or string key with hexademical value or bit string")
+ if not creg_sizes and not memory_slots:
+ self.hex_raw = None
+ self.int_raw = None
+ bin_data = data
+ else:
+ bitstring_regex = re.compile(r'^[01\s]+$')
+ hex_dict = {}
+ int_dict = {}
+ for bitstring, value in data.items():
+ if not bitstring_regex.search(bitstring):
+ raise exceptions.QiskitError(
+ 'Counts objects with dit strings do not '
+ 'currently support dit string formatting parameters '
+ 'creg_sizes or memory_slots')
+ int_key = int(bitstring.replace(" ", ""), 2)
+ int_dict[int_key] = value
+ hex_dict[hex(int_key)] = value
+ self.hex_raw = hex_dict
+ self.int_raw = int_dict
+ else:
+ raise TypeError("Invalid input key type %s, must be either an int "
+ "key or string key with hexademical value or bit string")
header = {}
self.creg_sizes = creg_sizes
if self.creg_sizes:
@@ -119,8 +124,12 @@
Returns:
str: The bit string for the most frequent result
Raises:
- QiskitError: when there is >1 count with the same max counts
+ QiskitError: when there is >1 count with the same max counts, or
+ an empty object.
"""
+ if not self:
+ raise exceptions.QiskitError(
+ "Can not return a most frequent count on an empty object")
max_value = max(self.values())
max_values_counts = [x[0] for x in self.items() if x[1] == max_value]
if len(max_values_counts) != 1:
|
{"golden_diff": "diff --git a/qiskit/result/counts.py b/qiskit/result/counts.py\n--- a/qiskit/result/counts.py\n+++ b/qiskit/result/counts.py\n@@ -63,44 +63,49 @@\n \"\"\"\n bin_data = None\n data = dict(data)\n- first_key = next(iter(data.keys()))\n- if isinstance(first_key, int):\n- self.int_raw = data\n- self.hex_raw = {\n- hex(key): value for key, value in self.int_raw.items()}\n- elif isinstance(first_key, str):\n- if first_key.startswith('0x'):\n- self.hex_raw = data\n- self.int_raw = {\n- int(key, 0): value for key, value in self.hex_raw.items()}\n- elif first_key.startswith('0b'):\n- self.int_raw = {\n- int(key, 0): value for key, value in data.items()}\n+ if not data:\n+ self.int_raw = {}\n+ self.hex_raw = {}\n+ bin_data = {}\n+ else:\n+ first_key = next(iter(data.keys()))\n+ if isinstance(first_key, int):\n+ self.int_raw = data\n self.hex_raw = {\n hex(key): value for key, value in self.int_raw.items()}\n- else:\n- if not creg_sizes and not memory_slots:\n- self.hex_raw = None\n- self.int_raw = None\n- bin_data = data\n+ elif isinstance(first_key, str):\n+ if first_key.startswith('0x'):\n+ self.hex_raw = data\n+ self.int_raw = {\n+ int(key, 0): value for key, value in self.hex_raw.items()}\n+ elif first_key.startswith('0b'):\n+ self.int_raw = {\n+ int(key, 0): value for key, value in data.items()}\n+ self.hex_raw = {\n+ hex(key): value for key, value in self.int_raw.items()}\n else:\n- bitstring_regex = re.compile(r'^[01\\s]+$')\n- hex_dict = {}\n- int_dict = {}\n- for bitstring, value in data.items():\n- if not bitstring_regex.search(bitstring):\n- raise exceptions.QiskitError(\n- 'Counts objects with dit strings do not '\n- 'currently support dit string formatting parameters '\n- 'creg_sizes or memory_slots')\n- int_key = int(bitstring.replace(\" \", \"\"), 2)\n- int_dict[int_key] = value\n- hex_dict[hex(int_key)] = value\n- self.hex_raw = hex_dict\n- self.int_raw = int_dict\n- else:\n- raise TypeError(\"Invalid input key type %s, must be either an int \"\n- \"key or string key with hexademical value or bit string\")\n+ if not creg_sizes and not memory_slots:\n+ self.hex_raw = None\n+ self.int_raw = None\n+ bin_data = data\n+ else:\n+ bitstring_regex = re.compile(r'^[01\\s]+$')\n+ hex_dict = {}\n+ int_dict = {}\n+ for bitstring, value in data.items():\n+ if not bitstring_regex.search(bitstring):\n+ raise exceptions.QiskitError(\n+ 'Counts objects with dit strings do not '\n+ 'currently support dit string formatting parameters '\n+ 'creg_sizes or memory_slots')\n+ int_key = int(bitstring.replace(\" \", \"\"), 2)\n+ int_dict[int_key] = value\n+ hex_dict[hex(int_key)] = value\n+ self.hex_raw = hex_dict\n+ self.int_raw = int_dict\n+ else:\n+ raise TypeError(\"Invalid input key type %s, must be either an int \"\n+ \"key or string key with hexademical value or bit string\")\n header = {}\n self.creg_sizes = creg_sizes\n if self.creg_sizes:\n@@ -119,8 +124,12 @@\n Returns:\n str: The bit string for the most frequent result\n Raises:\n- QiskitError: when there is >1 count with the same max counts\n+ QiskitError: when there is >1 count with the same max counts, or\n+ an empty object.\n \"\"\"\n+ if not self:\n+ raise exceptions.QiskitError(\n+ \"Can not return a most frequent count on an empty object\")\n max_value = max(self.values())\n max_values_counts = [x[0] for x in self.items() if x[1] == max_value]\n if len(max_values_counts) != 1:\n", "issue": "Count constructor crashes when data is empty\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed 
-->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: 0.15.1\r\n- **Python version**: 3.7.3\r\n- **Operating system**: Mac OS Catalina 10.15.6\r\n\r\n### What is the current behavior?\r\nI simply ran a circuit which does not perform any measurements. On the returned results, `get_counts()` crashes because an empty dictionary is passed into the `Count` constructor and the constructor tries to iterate on the empty dictionary.\r\n\r\nI tried this on an older version, Terra 0.14.2, which instead of `Count` uses `postprocess.format_counts()`, which simply ignores the empty dictionary and returns an empty dictionary in this case. This did not crash.\r\n\r\n\r\n### Steps to reproduce the problem\r\nRun any circuit without calling `measure()`, then try to apply `get_counts()` to the results.\r\n\r\n\r\n### What is the expected behavior?\r\n`get_counts()` should return an empty dictionary (and not crash).\r\n\r\n\r\n### Suggested solutions\r\nA simple check for an empty dictionary in the `Count` constructor would be sufficient, I think.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2020.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"A container class for counts from a circuit execution.\"\"\"\n\nimport re\n\nfrom qiskit.result import postprocess\nfrom qiskit import exceptions\n\n\n# NOTE: A dict subclass should not overload any dunder methods like __getitem__\n# this can cause unexpected behavior and issues as the cPython dict\n# implementation has many standard methods in C for performance and the dunder\n# methods are not always used as expected. For example, update() doesn't call\n# __setitem__ so overloading __setitem__ would not always provide the expected\n# result\nclass Counts(dict):\n \"\"\"A class to store a counts result from a circuit execution.\"\"\"\n\n def __init__(self, data, time_taken=None, creg_sizes=None,\n memory_slots=None):\n \"\"\"Build a counts object\n\n Args:\n data (dict): The dictionary input for the counts. Where the keys\n represent a measured classical value and the value is an\n integer the number of shots with that result.\n The keys can be one of several formats:\n\n * A hexademical string of the form ``\"0x4a\"``\n * A bit string prefixed with ``0b`` for example ``'0b1011'``\n * A bit string formatted across register and memory slots.\n For example, ``'00 10'``.\n * A dit string, for example ``'02'``. Note for objects created\n with dit strings the ``creg_sizes``and ``memory_slots``\n kwargs don't work and :meth:`hex_outcomes` and\n :meth:`int_outcomes` also do not work.\n\n time_taken (float): The duration of the experiment that generated\n the counts\n creg_sizes (list): a nested list where the inner element is a list\n of tuples containing both the classical register name and\n classical register size. 
For example,\n ``[('c_reg', 2), ('my_creg', 4)]``.\n memory_slots (int): The number of total ``memory_slots`` in the\n experiment.\n Raises:\n TypeError: If the input key type is not an int or string\n QiskitError: If a dit string key is input with creg_sizes and/or\n memory_slots\n \"\"\"\n bin_data = None\n data = dict(data)\n first_key = next(iter(data.keys()))\n if isinstance(first_key, int):\n self.int_raw = data\n self.hex_raw = {\n hex(key): value for key, value in self.int_raw.items()}\n elif isinstance(first_key, str):\n if first_key.startswith('0x'):\n self.hex_raw = data\n self.int_raw = {\n int(key, 0): value for key, value in self.hex_raw.items()}\n elif first_key.startswith('0b'):\n self.int_raw = {\n int(key, 0): value for key, value in data.items()}\n self.hex_raw = {\n hex(key): value for key, value in self.int_raw.items()}\n else:\n if not creg_sizes and not memory_slots:\n self.hex_raw = None\n self.int_raw = None\n bin_data = data\n else:\n bitstring_regex = re.compile(r'^[01\\s]+$')\n hex_dict = {}\n int_dict = {}\n for bitstring, value in data.items():\n if not bitstring_regex.search(bitstring):\n raise exceptions.QiskitError(\n 'Counts objects with dit strings do not '\n 'currently support dit string formatting parameters '\n 'creg_sizes or memory_slots')\n int_key = int(bitstring.replace(\" \", \"\"), 2)\n int_dict[int_key] = value\n hex_dict[hex(int_key)] = value\n self.hex_raw = hex_dict\n self.int_raw = int_dict\n else:\n raise TypeError(\"Invalid input key type %s, must be either an int \"\n \"key or string key with hexademical value or bit string\")\n header = {}\n self.creg_sizes = creg_sizes\n if self.creg_sizes:\n header['creg_sizes'] = self.creg_sizes\n self.memory_slots = memory_slots\n if self.memory_slots:\n header['memory_slots'] = self.memory_slots\n if not bin_data:\n bin_data = postprocess.format_counts(self.hex_raw, header=header)\n super().__init__(bin_data)\n self.time_taken = time_taken\n\n def most_frequent(self):\n \"\"\"Return the most frequent count\n\n Returns:\n str: The bit string for the most frequent result\n Raises:\n QiskitError: when there is >1 count with the same max counts\n \"\"\"\n max_value = max(self.values())\n max_values_counts = [x[0] for x in self.items() if x[1] == max_value]\n if len(max_values_counts) != 1:\n raise exceptions.QiskitError(\n \"Multiple values have the same maximum counts: %s\" %\n ','.join(max_values_counts))\n return max_values_counts[0]\n\n def hex_outcomes(self):\n \"\"\"Return a counts dictionary with hexademical string keys\n\n Returns:\n dict: A dictionary with the keys as hexadecimal strings instead of\n bitstrings\n Raises:\n QiskitError: If the Counts object contains counts for dit strings\n \"\"\"\n if self.hex_raw:\n return {key.lower(): value for key, value in self.hex_raw.items()}\n else:\n bitstring_regex = re.compile(r'^[01\\s]+$')\n out_dict = {}\n for bitstring, value in self.items():\n if not bitstring_regex.search(bitstring):\n raise exceptions.QiskitError(\n 'Counts objects with dit strings do not '\n 'currently support conversion to hexadecimal')\n int_key = int(bitstring.replace(\" \", \"\"), 2)\n out_dict[hex(int_key)] = value\n return out_dict\n\n def int_outcomes(self):\n \"\"\"Build a counts dictionary with integer keys instead of count strings\n\n Returns:\n dict: A dictionary with the keys as integers instead of bitstrings\n Raises:\n QiskitError: If the Counts object contains counts for dit strings\n \"\"\"\n if self.int_raw:\n return self.int_raw\n else:\n bitstring_regex = 
re.compile(r'^[01\\s]+$')\n out_dict = {}\n for bitstring, value in self.items():\n if not bitstring_regex.search(bitstring):\n raise exceptions.QiskitError(\n 'Counts objects with dit strings do not '\n 'currently support conversion to integer')\n int_key = int(bitstring.replace(\" \", \"\"), 2)\n out_dict[int_key] = value\n return out_dict\n", "path": "qiskit/result/counts.py"}]}
| 2,801 | 1,016 |
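A small usage sketch of the behaviour the patch above is after; the import paths and the exact error raised assume the patched library, so treat this as illustrative only:

```python
from qiskit.result import Counts
from qiskit.exceptions import QiskitError

# A circuit with no measurements yields no count data; with the guard in place
# the constructor accepts the empty dict instead of raising StopIteration
# from next(iter(data.keys())).
counts = Counts({})
assert dict(counts) == {}

# most_frequent() on an empty object raises a QiskitError with a clear message
# rather than a bare ValueError from max() over an empty sequence.
try:
    counts.most_frequent()
except QiskitError:
    pass
```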
gh_patches_debug_4440
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-12231
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Open Graph og:url tags over-canonicalize all URLs back to the home page
Split from #12187:
> By the way, it turns out all our OpenGraph tags are [busted anyway](https://developers.facebook.com/tools/debug/sharing/?q=https%3A%2F%2Fchat.zulip.org%2Fapi%2Fincoming-webhooks-walkthrough) because we always set `og:url` to point to the home page (which redirects to /login, whose `og:url` points back to the home page).
</issue>
<code>
[start of zerver/context_processors.py]
1
2 from typing import Any, Dict
3 from django.http import HttpRequest
4 from django.conf import settings
5 from django.urls import reverse
6
7 from zerver.models import UserProfile, get_realm, Realm
8 from zproject.backends import (
9 any_oauth_backend_enabled,
10 password_auth_enabled,
11 require_email_format_usernames,
12 auth_enabled_helper,
13 AUTH_BACKEND_NAME_MAP,
14 SOCIAL_AUTH_BACKENDS,
15 )
16 from zerver.decorator import get_client_name
17 from zerver.lib.send_email import FromAddress
18 from zerver.lib.subdomains import get_subdomain
19 from zerver.lib.realm_icon import get_realm_icon_url
20 from zerver.lib.realm_description import get_realm_rendered_description, get_realm_text_description
21
22 from version import ZULIP_VERSION, LATEST_RELEASE_VERSION, LATEST_MAJOR_VERSION, \
23 LATEST_RELEASE_ANNOUNCEMENT
24
25 def common_context(user: UserProfile) -> Dict[str, Any]:
26 """Common context used for things like outgoing emails that don't
27 have a request.
28 """
29 return {
30 'realm_uri': user.realm.uri,
31 'realm_name': user.realm.name,
32 'root_domain_uri': settings.ROOT_DOMAIN_URI,
33 'external_uri_scheme': settings.EXTERNAL_URI_SCHEME,
34 'external_host': settings.EXTERNAL_HOST,
35 'user_name': user.full_name,
36 }
37
38 def get_realm_from_request(request: HttpRequest) -> Realm:
39 if hasattr(request, "user") and hasattr(request.user, "realm"):
40 return request.user.realm
41 if not hasattr(request, "realm"):
42 # We cache the realm object from this function on the request,
43 # so that functions that call get_realm_from_request don't
44 # need to do duplicate queries on the same realm while
45 # processing a single request.
46 subdomain = get_subdomain(request)
47 request.realm = get_realm(subdomain)
48 return request.realm
49
50 def zulip_default_context(request: HttpRequest) -> Dict[str, Any]:
51 """Context available to all Zulip Jinja2 templates that have a request
52 passed in. Designed to provide the long list of variables at the
53 bottom of this function in a wide range of situations: logged-in
54 or logged-out, subdomains or not, etc.
55
56 The main variable in the below is whether we know what realm the
57 user is trying to interact with.
58 """
59 realm = get_realm_from_request(request)
60
61 if realm is None:
62 realm_uri = settings.ROOT_DOMAIN_URI
63 realm_name = None
64 realm_icon = None
65 else:
66 realm_uri = realm.uri
67 realm_name = realm.name
68 realm_icon = get_realm_icon_url(realm)
69
70 register_link_disabled = settings.REGISTER_LINK_DISABLED
71 login_link_disabled = settings.LOGIN_LINK_DISABLED
72 find_team_link_disabled = settings.FIND_TEAM_LINK_DISABLED
73 allow_search_engine_indexing = False
74
75 if (settings.ROOT_DOMAIN_LANDING_PAGE
76 and get_subdomain(request) == Realm.SUBDOMAIN_FOR_ROOT_DOMAIN):
77 register_link_disabled = True
78 login_link_disabled = True
79 find_team_link_disabled = False
80 allow_search_engine_indexing = True
81
82 apps_page_url = 'https://zulipchat.com/apps/'
83 if settings.ZILENCER_ENABLED:
84 apps_page_url = '/apps/'
85
86 user_is_authenticated = False
87 if hasattr(request, 'user') and hasattr(request.user, 'is_authenticated'):
88 user_is_authenticated = request.user.is_authenticated.value
89
90 if settings.DEVELOPMENT:
91 secrets_path = "zproject/dev-secrets.conf"
92 settings_path = "zproject/dev_settings.py"
93 settings_comments_path = "zproject/prod_settings_template.py"
94 else:
95 secrets_path = "/etc/zulip/zulip-secrets.conf"
96 settings_path = "/etc/zulip/settings.py"
97 settings_comments_path = "/etc/zulip/settings.py"
98
99 # We can't use request.client here because we might not be using
100 # an auth decorator that sets it, but we can call its helper to
101 # get the same result.
102 platform = get_client_name(request, True)
103
104 context = {
105 'root_domain_landing_page': settings.ROOT_DOMAIN_LANDING_PAGE,
106 'custom_logo_url': settings.CUSTOM_LOGO_URL,
107 'register_link_disabled': register_link_disabled,
108 'login_link_disabled': login_link_disabled,
109 'terms_of_service': settings.TERMS_OF_SERVICE,
110 'privacy_policy': settings.PRIVACY_POLICY,
111 'login_url': settings.HOME_NOT_LOGGED_IN,
112 'only_sso': settings.ONLY_SSO,
113 'external_host': settings.EXTERNAL_HOST,
114 'external_uri_scheme': settings.EXTERNAL_URI_SCHEME,
115 'realm_uri': realm_uri,
116 'realm_name': realm_name,
117 'realm_icon': realm_icon,
118 'root_domain_uri': settings.ROOT_DOMAIN_URI,
119 'apps_page_url': apps_page_url,
120 'open_realm_creation': settings.OPEN_REALM_CREATION,
121 'development_environment': settings.DEVELOPMENT,
122 'support_email': FromAddress.SUPPORT,
123 'find_team_link_disabled': find_team_link_disabled,
124 'password_min_length': settings.PASSWORD_MIN_LENGTH,
125 'password_min_guesses': settings.PASSWORD_MIN_GUESSES,
126 'jitsi_server_url': settings.JITSI_SERVER_URL,
127 'zulip_version': ZULIP_VERSION,
128 'user_is_authenticated': user_is_authenticated,
129 'settings_path': settings_path,
130 'secrets_path': secrets_path,
131 'settings_comments_path': settings_comments_path,
132 'platform': platform,
133 'allow_search_engine_indexing': allow_search_engine_indexing,
134 }
135
136 if realm is not None and realm.icon_source == realm.ICON_UPLOADED:
137 context['OPEN_GRAPH_IMAGE'] = '%s%s' % (realm_uri, realm_icon)
138
139 return context
140
141 def login_context(request: HttpRequest) -> Dict[str, Any]:
142 realm = get_realm_from_request(request)
143
144 if realm is None:
145 realm_description = None
146 realm_invite_required = False
147 else:
148 realm_description = get_realm_rendered_description(realm)
149 realm_invite_required = realm.invite_required
150
151 context = {
152 'realm_invite_required': realm_invite_required,
153 'realm_description': realm_description,
154 'require_email_format_usernames': require_email_format_usernames(realm),
155 'password_auth_enabled': password_auth_enabled(realm),
156 'any_oauth_backend_enabled': any_oauth_backend_enabled(realm),
157 'two_factor_authentication_enabled': settings.TWO_FACTOR_AUTHENTICATION_ENABLED,
158 } # type: Dict[str, Any]
159
160 if realm is not None and realm.description:
161 context['OPEN_GRAPH_TITLE'] = realm.name
162 context['OPEN_GRAPH_DESCRIPTION'] = get_realm_text_description(realm)
163
164 # Add the keys for our standard authentication backends.
165 no_auth_enabled = True
166 social_backends = []
167 for auth_backend_name in AUTH_BACKEND_NAME_MAP:
168 name_lower = auth_backend_name.lower()
169 key = "%s_auth_enabled" % (name_lower,)
170 is_enabled = auth_enabled_helper([auth_backend_name], realm)
171 context[key] = is_enabled
172 if is_enabled:
173 no_auth_enabled = False
174
175 # Now add the enabled social backends to the social_backends
176 # list used to generate buttons for login/register pages.
177 backend = AUTH_BACKEND_NAME_MAP[auth_backend_name]
178 if not is_enabled or backend not in SOCIAL_AUTH_BACKENDS:
179 continue
180 social_backends.append({
181 'name': backend.name,
182 'display_name': backend.auth_backend_name,
183 'login_url': reverse('login-social', args=(backend.name,)),
184 'signup_url': reverse('signup-social', args=(backend.name,)),
185 'sort_order': backend.sort_order,
186 })
187 context['social_backends'] = sorted(social_backends, key=lambda x: x['sort_order'], reverse=True)
188 context['no_auth_enabled'] = no_auth_enabled
189
190 return context
191
192 def latest_info_context() -> Dict[str, str]:
193 context = {
194 'latest_release_version': LATEST_RELEASE_VERSION,
195 'latest_major_version': LATEST_MAJOR_VERSION,
196 'latest_release_announcement': LATEST_RELEASE_ANNOUNCEMENT,
197 }
198 return context
199
[end of zerver/context_processors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zerver/context_processors.py b/zerver/context_processors.py
--- a/zerver/context_processors.py
+++ b/zerver/context_processors.py
@@ -133,6 +133,7 @@
'allow_search_engine_indexing': allow_search_engine_indexing,
}
+ context['OPEN_GRAPH_URL'] = '%s%s' % (realm_uri, request.path)
if realm is not None and realm.icon_source == realm.ICON_UPLOADED:
context['OPEN_GRAPH_IMAGE'] = '%s%s' % (realm_uri, realm_icon)
|
{"golden_diff": "diff --git a/zerver/context_processors.py b/zerver/context_processors.py\n--- a/zerver/context_processors.py\n+++ b/zerver/context_processors.py\n@@ -133,6 +133,7 @@\n 'allow_search_engine_indexing': allow_search_engine_indexing,\n }\n \n+ context['OPEN_GRAPH_URL'] = '%s%s' % (realm_uri, request.path)\n if realm is not None and realm.icon_source == realm.ICON_UPLOADED:\n context['OPEN_GRAPH_IMAGE'] = '%s%s' % (realm_uri, realm_icon)\n", "issue": "Open Graph og:url tags over-canonicalize all URLs back to the home page\nSplit from #12187:\r\n\r\n> By the way, it turns out all our OpenGraph tags are [busted anyway](https://developers.facebook.com/tools/debug/sharing/?q=https%3A%2F%2Fchat.zulip.org%2Fapi%2Fincoming-webhooks-walkthrough) because we always set `og:url` to point to the home page (which redirects to /login, whose `og:url` points back to the home page).\n", "before_files": [{"content": "\nfrom typing import Any, Dict\nfrom django.http import HttpRequest\nfrom django.conf import settings\nfrom django.urls import reverse\n\nfrom zerver.models import UserProfile, get_realm, Realm\nfrom zproject.backends import (\n any_oauth_backend_enabled,\n password_auth_enabled,\n require_email_format_usernames,\n auth_enabled_helper,\n AUTH_BACKEND_NAME_MAP,\n SOCIAL_AUTH_BACKENDS,\n)\nfrom zerver.decorator import get_client_name\nfrom zerver.lib.send_email import FromAddress\nfrom zerver.lib.subdomains import get_subdomain\nfrom zerver.lib.realm_icon import get_realm_icon_url\nfrom zerver.lib.realm_description import get_realm_rendered_description, get_realm_text_description\n\nfrom version import ZULIP_VERSION, LATEST_RELEASE_VERSION, LATEST_MAJOR_VERSION, \\\n LATEST_RELEASE_ANNOUNCEMENT\n\ndef common_context(user: UserProfile) -> Dict[str, Any]:\n \"\"\"Common context used for things like outgoing emails that don't\n have a request.\n \"\"\"\n return {\n 'realm_uri': user.realm.uri,\n 'realm_name': user.realm.name,\n 'root_domain_uri': settings.ROOT_DOMAIN_URI,\n 'external_uri_scheme': settings.EXTERNAL_URI_SCHEME,\n 'external_host': settings.EXTERNAL_HOST,\n 'user_name': user.full_name,\n }\n\ndef get_realm_from_request(request: HttpRequest) -> Realm:\n if hasattr(request, \"user\") and hasattr(request.user, \"realm\"):\n return request.user.realm\n if not hasattr(request, \"realm\"):\n # We cache the realm object from this function on the request,\n # so that functions that call get_realm_from_request don't\n # need to do duplicate queries on the same realm while\n # processing a single request.\n subdomain = get_subdomain(request)\n request.realm = get_realm(subdomain)\n return request.realm\n\ndef zulip_default_context(request: HttpRequest) -> Dict[str, Any]:\n \"\"\"Context available to all Zulip Jinja2 templates that have a request\n passed in. 
Designed to provide the long list of variables at the\n bottom of this function in a wide range of situations: logged-in\n or logged-out, subdomains or not, etc.\n\n The main variable in the below is whether we know what realm the\n user is trying to interact with.\n \"\"\"\n realm = get_realm_from_request(request)\n\n if realm is None:\n realm_uri = settings.ROOT_DOMAIN_URI\n realm_name = None\n realm_icon = None\n else:\n realm_uri = realm.uri\n realm_name = realm.name\n realm_icon = get_realm_icon_url(realm)\n\n register_link_disabled = settings.REGISTER_LINK_DISABLED\n login_link_disabled = settings.LOGIN_LINK_DISABLED\n find_team_link_disabled = settings.FIND_TEAM_LINK_DISABLED\n allow_search_engine_indexing = False\n\n if (settings.ROOT_DOMAIN_LANDING_PAGE\n and get_subdomain(request) == Realm.SUBDOMAIN_FOR_ROOT_DOMAIN):\n register_link_disabled = True\n login_link_disabled = True\n find_team_link_disabled = False\n allow_search_engine_indexing = True\n\n apps_page_url = 'https://zulipchat.com/apps/'\n if settings.ZILENCER_ENABLED:\n apps_page_url = '/apps/'\n\n user_is_authenticated = False\n if hasattr(request, 'user') and hasattr(request.user, 'is_authenticated'):\n user_is_authenticated = request.user.is_authenticated.value\n\n if settings.DEVELOPMENT:\n secrets_path = \"zproject/dev-secrets.conf\"\n settings_path = \"zproject/dev_settings.py\"\n settings_comments_path = \"zproject/prod_settings_template.py\"\n else:\n secrets_path = \"/etc/zulip/zulip-secrets.conf\"\n settings_path = \"/etc/zulip/settings.py\"\n settings_comments_path = \"/etc/zulip/settings.py\"\n\n # We can't use request.client here because we might not be using\n # an auth decorator that sets it, but we can call its helper to\n # get the same result.\n platform = get_client_name(request, True)\n\n context = {\n 'root_domain_landing_page': settings.ROOT_DOMAIN_LANDING_PAGE,\n 'custom_logo_url': settings.CUSTOM_LOGO_URL,\n 'register_link_disabled': register_link_disabled,\n 'login_link_disabled': login_link_disabled,\n 'terms_of_service': settings.TERMS_OF_SERVICE,\n 'privacy_policy': settings.PRIVACY_POLICY,\n 'login_url': settings.HOME_NOT_LOGGED_IN,\n 'only_sso': settings.ONLY_SSO,\n 'external_host': settings.EXTERNAL_HOST,\n 'external_uri_scheme': settings.EXTERNAL_URI_SCHEME,\n 'realm_uri': realm_uri,\n 'realm_name': realm_name,\n 'realm_icon': realm_icon,\n 'root_domain_uri': settings.ROOT_DOMAIN_URI,\n 'apps_page_url': apps_page_url,\n 'open_realm_creation': settings.OPEN_REALM_CREATION,\n 'development_environment': settings.DEVELOPMENT,\n 'support_email': FromAddress.SUPPORT,\n 'find_team_link_disabled': find_team_link_disabled,\n 'password_min_length': settings.PASSWORD_MIN_LENGTH,\n 'password_min_guesses': settings.PASSWORD_MIN_GUESSES,\n 'jitsi_server_url': settings.JITSI_SERVER_URL,\n 'zulip_version': ZULIP_VERSION,\n 'user_is_authenticated': user_is_authenticated,\n 'settings_path': settings_path,\n 'secrets_path': secrets_path,\n 'settings_comments_path': settings_comments_path,\n 'platform': platform,\n 'allow_search_engine_indexing': allow_search_engine_indexing,\n }\n\n if realm is not None and realm.icon_source == realm.ICON_UPLOADED:\n context['OPEN_GRAPH_IMAGE'] = '%s%s' % (realm_uri, realm_icon)\n\n return context\n\ndef login_context(request: HttpRequest) -> Dict[str, Any]:\n realm = get_realm_from_request(request)\n\n if realm is None:\n realm_description = None\n realm_invite_required = False\n else:\n realm_description = get_realm_rendered_description(realm)\n realm_invite_required = 
realm.invite_required\n\n context = {\n 'realm_invite_required': realm_invite_required,\n 'realm_description': realm_description,\n 'require_email_format_usernames': require_email_format_usernames(realm),\n 'password_auth_enabled': password_auth_enabled(realm),\n 'any_oauth_backend_enabled': any_oauth_backend_enabled(realm),\n 'two_factor_authentication_enabled': settings.TWO_FACTOR_AUTHENTICATION_ENABLED,\n } # type: Dict[str, Any]\n\n if realm is not None and realm.description:\n context['OPEN_GRAPH_TITLE'] = realm.name\n context['OPEN_GRAPH_DESCRIPTION'] = get_realm_text_description(realm)\n\n # Add the keys for our standard authentication backends.\n no_auth_enabled = True\n social_backends = []\n for auth_backend_name in AUTH_BACKEND_NAME_MAP:\n name_lower = auth_backend_name.lower()\n key = \"%s_auth_enabled\" % (name_lower,)\n is_enabled = auth_enabled_helper([auth_backend_name], realm)\n context[key] = is_enabled\n if is_enabled:\n no_auth_enabled = False\n\n # Now add the enabled social backends to the social_backends\n # list used to generate buttons for login/register pages.\n backend = AUTH_BACKEND_NAME_MAP[auth_backend_name]\n if not is_enabled or backend not in SOCIAL_AUTH_BACKENDS:\n continue\n social_backends.append({\n 'name': backend.name,\n 'display_name': backend.auth_backend_name,\n 'login_url': reverse('login-social', args=(backend.name,)),\n 'signup_url': reverse('signup-social', args=(backend.name,)),\n 'sort_order': backend.sort_order,\n })\n context['social_backends'] = sorted(social_backends, key=lambda x: x['sort_order'], reverse=True)\n context['no_auth_enabled'] = no_auth_enabled\n\n return context\n\ndef latest_info_context() -> Dict[str, str]:\n context = {\n 'latest_release_version': LATEST_RELEASE_VERSION,\n 'latest_major_version': LATEST_MAJOR_VERSION,\n 'latest_release_announcement': LATEST_RELEASE_ANNOUNCEMENT,\n }\n return context\n", "path": "zerver/context_processors.py"}]}
| 2,912 | 122 |
gh_patches_debug_12309
|
rasdani/github-patches
|
git_diff
|
horovod__horovod-3263
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix flaky spark integration tests
Several Spark tests have been disabled in https://github.com/horovod/horovod/pull/3259, we should address the underlying flakiness and re-enable.
</issue>
<code>
[start of examples/spark/pytorch/pytorch_lightning_spark_mnist.py]
1 import argparse
2 import os
3 import subprocess
4 import sys
5 from distutils.version import LooseVersion
6
7 import numpy as np
8
9 import pyspark
10 import pyspark.sql.types as T
11 from pyspark import SparkConf
12 from pyspark.ml.evaluation import MulticlassClassificationEvaluator
13 if LooseVersion(pyspark.__version__) < LooseVersion('3.0.0'):
14 from pyspark.ml.feature import OneHotEncoderEstimator as OneHotEncoder
15 else:
16 from pyspark.ml.feature import OneHotEncoder
17 from pyspark.sql import SparkSession
18 from pyspark.sql.functions import udf
19
20 from pytorch_lightning import LightningModule
21
22 import torch
23 import torch.nn as nn
24 import torch.nn.functional as F
25 import torch.optim as optim
26
27 import horovod.spark.lightning as hvd
28 from horovod.spark.lightning.estimator import MIN_PL_VERSION
29 from horovod.spark.common.backend import SparkBackend
30 from horovod.spark.common.store import Store
31
32 parser = argparse.ArgumentParser(description='PyTorch Spark MNIST Example',
33 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
34 parser.add_argument('--master',
35 help='spark master to connect to')
36 parser.add_argument('--num-proc', type=int,
37 help='number of worker processes for training, default: `spark.default.parallelism`')
38 parser.add_argument('--batch-size', type=int, default=64,
39 help='input batch size for training')
40 parser.add_argument('--epochs', type=int, default=12,
41 help='number of epochs to train')
42 parser.add_argument('--work-dir', default='/tmp',
43 help='temporary working directory to write intermediate files (prefix with hdfs:// to use HDFS)')
44 parser.add_argument('--data-dir', default='/tmp',
45 help='location of the training dataset in the local filesystem (will be downloaded if needed)')
46 parser.add_argument('--enable-profiler', action='store_true',
47 help='Enable profiler')
48
49
50 def train_model(args):
51 # do not run this test for pytorch lightning below min supported verson
52 import pytorch_lightning as pl
53 if LooseVersion(pl.__version__) < LooseVersion(MIN_PL_VERSION):
54 print("Skip test for pytorch_ligthning=={}, min support version is {}".format(pl.__version__, MIN_PL_VERSION))
55 return
56
57 # Initialize SparkSession
58 conf = SparkConf().setAppName('pytorch_spark_mnist').set('spark.sql.shuffle.partitions', '16')
59 if args.master:
60 conf.setMaster(args.master)
61 elif args.num_proc:
62 conf.setMaster('local[{}]'.format(args.num_proc))
63 spark = SparkSession.builder.config(conf=conf).getOrCreate()
64
65 # Setup our store for intermediate data
66 store = Store.create(args.work_dir)
67
68 # Download MNIST dataset
69 data_url = 'https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass/mnist.bz2'
70 libsvm_path = os.path.join(args.data_dir, 'mnist.bz2')
71 if not os.path.exists(libsvm_path):
72 subprocess.check_output(['wget', data_url, '-O', libsvm_path])
73
74 # Load dataset into a Spark DataFrame
75 df = spark.read.format('libsvm') \
76 .option('numFeatures', '784') \
77 .load(libsvm_path)
78
79 # One-hot encode labels into SparseVectors
80 encoder = OneHotEncoder(inputCols=['label'],
81 outputCols=['label_vec'],
82 dropLast=False)
83 model = encoder.fit(df)
84 train_df = model.transform(df)
85
86 # Train/test split
87 train_df, test_df = train_df.randomSplit([0.9, 0.1])
88
89 # Define the PyTorch model without any Horovod-specific parameters
90 class Net(LightningModule):
91 def __init__(self):
92 super(Net, self).__init__()
93 self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
94 self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
95 self.conv2_drop = nn.Dropout2d()
96 self.fc1 = nn.Linear(320, 50)
97 self.fc2 = nn.Linear(50, 10)
98
99 def forward(self, x):
100 x = x.float().reshape((-1, 1, 28, 28))
101 x = F.relu(F.max_pool2d(self.conv1(x), 2))
102 x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
103 x = x.view(-1, 320)
104 x = F.relu(self.fc1(x))
105 x = F.dropout(x, training=self.training)
106 x = self.fc2(x)
107 return F.log_softmax(x, -1)
108
109 def configure_optimizers(self):
110 return optim.SGD(self.parameters(), lr=0.01, momentum=0.5)
111
112 def training_step(self, batch, batch_idx):
113 if batch_idx == 0:
114 print(f"training data batch size: {batch['label'].shape}")
115 x, y = batch['features'], batch['label']
116 y_hat = self(x)
117 loss = F.nll_loss(y_hat, y.long())
118 self.log('train_loss', loss)
119 return loss
120
121 def validation_step(self, batch, batch_idx):
122 if batch_idx == 0:
123 print(f"validation data batch size: {batch['label'].shape}")
124 x, y = batch['features'], batch['label']
125 y_hat = self(x)
126 loss = F.nll_loss(y_hat, y.long())
127 self.log('val_loss', loss)
128
129 def validation_epoch_end(self, outputs):
130 avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean() if len(outputs) > 0 else float('inf')
131 self.log('avg_val_loss', avg_loss)
132
133 model = Net()
134
135 # Train a Horovod Spark Estimator on the DataFrame
136 backend = SparkBackend(num_proc=args.num_proc,
137 stdout=sys.stdout, stderr=sys.stderr,
138 prefix_output_with_timestamp=True)
139
140 from pytorch_lightning.callbacks import Callback
141
142 epochs = args.epochs
143
144 class MyDummyCallback(Callback):
145 def __init__(self):
146 self.epcoh_end_counter = 0
147 self.train_epcoh_end_counter = 0
148 self.validation_epoch_end_counter = 0
149
150 def on_init_start(self, trainer):
151 print('Starting to init trainer!')
152
153 def on_init_end(self, trainer):
154 print('Trainer is initialized.')
155
156 def on_epoch_end(self, trainer, model):
157 print('A train or eval epoch ended.')
158 self.epcoh_end_counter += 1
159
160 def on_train_epoch_end(self, trainer, model, unused=None):
161 print('A train epoch ended.')
162 self.train_epcoh_end_counter += 1
163
164 def on_validation_epoch_end(self, trainer, model, unused=None):
165 print('A val epoch ended.')
166 self.validation_epoch_end_counter += 1
167
168 def on_train_end(self, trainer, model):
169 print("Training ends:"
170 f"epcoh_end_counter={self.epcoh_end_counter}, "
171 f"train_epcoh_end_counter={self.train_epcoh_end_counter}, "
172 f"validation_epoch_end_counter={self.validation_epoch_end_counter} \n")
173 assert self.train_epcoh_end_counter <= epochs
174 assert self.epcoh_end_counter == self.train_epcoh_end_counter + self.validation_epoch_end_counter
175
176 callbacks = [MyDummyCallback()]
177
178 # added EarlyStopping and ModelCheckpoint
179 from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint
180 callbacks.append(ModelCheckpoint(monitor='val_loss', mode="min",
181 save_top_k=1, verbose=True))
182
183 from pytorch_lightning.callbacks.early_stopping import EarlyStopping
184 callbacks.append(EarlyStopping(monitor='val_loss',
185 min_delta=0.001,
186 patience=3,
187 verbose=True,
188 mode='min'))
189
190 torch_estimator = hvd.TorchEstimator(backend=backend,
191 store=store,
192 model=model,
193 input_shapes=[[-1, 1, 28, 28]],
194 feature_cols=['features'],
195 label_cols=['label'],
196 batch_size=args.batch_size,
197 epochs=args.epochs,
198 validation=0.1,
199 verbose=1,
200 callbacks=callbacks,
201 profiler="simple" if args.enable_profiler else None)
202
203 torch_model = torch_estimator.fit(train_df).setOutputCols(['label_prob'])
204
205 # Evaluate the model on the held-out test DataFrame
206 pred_df = torch_model.transform(test_df)
207
208 argmax = udf(lambda v: float(np.argmax(v)), returnType=T.DoubleType())
209 pred_df = pred_df.withColumn('label_pred', argmax(pred_df.label_prob))
210 evaluator = MulticlassClassificationEvaluator(predictionCol='label_pred', labelCol='label', metricName='accuracy')
211 print('Test accuracy:', evaluator.evaluate(pred_df))
212
213 spark.stop()
214
215
216 if __name__ == '__main__':
217 args = parser.parse_args()
218 train_model(args)
219
[end of examples/spark/pytorch/pytorch_lightning_spark_mnist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/spark/pytorch/pytorch_lightning_spark_mnist.py b/examples/spark/pytorch/pytorch_lightning_spark_mnist.py
--- a/examples/spark/pytorch/pytorch_lightning_spark_mnist.py
+++ b/examples/spark/pytorch/pytorch_lightning_spark_mnist.py
@@ -17,6 +17,17 @@
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
+# Spark PyTorch Lightning tests conflict with Tensorflow 2.5.x: https://github.com/horovod/horovod/pull/3263
+try:
+ # tensorflow has to be imported BEFORE pytorch_lightning, otherwise we see the segfault right away
+ import tensorflow as tf
+ from distutils.version import LooseVersion
+ if LooseVersion('2.5.0') <= LooseVersion(tf.__version__) < LooseVersion('2.6.0'):
+ print('Skipping test as Pytorch Lightning conflicts with present Tensorflow 2.5.x', file=sys.stderr)
+ sys.exit(0)
+except ImportError:
+ pass
+
from pytorch_lightning import LightningModule
import torch
|
{"golden_diff": "diff --git a/examples/spark/pytorch/pytorch_lightning_spark_mnist.py b/examples/spark/pytorch/pytorch_lightning_spark_mnist.py\n--- a/examples/spark/pytorch/pytorch_lightning_spark_mnist.py\n+++ b/examples/spark/pytorch/pytorch_lightning_spark_mnist.py\n@@ -17,6 +17,17 @@\n from pyspark.sql import SparkSession\n from pyspark.sql.functions import udf\n \n+# Spark PyTorch Lightning tests conflict with Tensorflow 2.5.x: https://github.com/horovod/horovod/pull/3263\n+try:\n+ # tensorflow has to be imported BEFORE pytorch_lightning, otherwise we see the segfault right away\n+ import tensorflow as tf\n+ from distutils.version import LooseVersion\n+ if LooseVersion('2.5.0') <= LooseVersion(tf.__version__) < LooseVersion('2.6.0'):\n+ print('Skipping test as Pytorch Lightning conflicts with present Tensorflow 2.5.x', file=sys.stderr)\n+ sys.exit(0)\n+except ImportError:\n+ pass\n+\n from pytorch_lightning import LightningModule\n \n import torch\n", "issue": "Fix flaky spark integration tests\nSeveral Spark tests have been disabled in https://github.com/horovod/horovod/pull/3259, we should address the underlying flakiness and re-enable.\n", "before_files": [{"content": "import argparse\nimport os\nimport subprocess\nimport sys\nfrom distutils.version import LooseVersion\n\nimport numpy as np\n\nimport pyspark\nimport pyspark.sql.types as T\nfrom pyspark import SparkConf\nfrom pyspark.ml.evaluation import MulticlassClassificationEvaluator\nif LooseVersion(pyspark.__version__) < LooseVersion('3.0.0'):\n from pyspark.ml.feature import OneHotEncoderEstimator as OneHotEncoder\nelse:\n from pyspark.ml.feature import OneHotEncoder\nfrom pyspark.sql import SparkSession\nfrom pyspark.sql.functions import udf\n\nfrom pytorch_lightning import LightningModule\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nimport horovod.spark.lightning as hvd\nfrom horovod.spark.lightning.estimator import MIN_PL_VERSION\nfrom horovod.spark.common.backend import SparkBackend\nfrom horovod.spark.common.store import Store\n\nparser = argparse.ArgumentParser(description='PyTorch Spark MNIST Example',\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\nparser.add_argument('--master',\n help='spark master to connect to')\nparser.add_argument('--num-proc', type=int,\n help='number of worker processes for training, default: `spark.default.parallelism`')\nparser.add_argument('--batch-size', type=int, default=64,\n help='input batch size for training')\nparser.add_argument('--epochs', type=int, default=12,\n help='number of epochs to train')\nparser.add_argument('--work-dir', default='/tmp',\n help='temporary working directory to write intermediate files (prefix with hdfs:// to use HDFS)')\nparser.add_argument('--data-dir', default='/tmp',\n help='location of the training dataset in the local filesystem (will be downloaded if needed)')\nparser.add_argument('--enable-profiler', action='store_true',\n help='Enable profiler')\n\n\ndef train_model(args):\n # do not run this test for pytorch lightning below min supported verson\n import pytorch_lightning as pl\n if LooseVersion(pl.__version__) < LooseVersion(MIN_PL_VERSION):\n print(\"Skip test for pytorch_ligthning=={}, min support version is {}\".format(pl.__version__, MIN_PL_VERSION))\n return\n\n # Initialize SparkSession\n conf = SparkConf().setAppName('pytorch_spark_mnist').set('spark.sql.shuffle.partitions', '16')\n if args.master:\n conf.setMaster(args.master)\n elif args.num_proc:\n 
conf.setMaster('local[{}]'.format(args.num_proc))\n spark = SparkSession.builder.config(conf=conf).getOrCreate()\n\n # Setup our store for intermediate data\n store = Store.create(args.work_dir)\n\n # Download MNIST dataset\n data_url = 'https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass/mnist.bz2'\n libsvm_path = os.path.join(args.data_dir, 'mnist.bz2')\n if not os.path.exists(libsvm_path):\n subprocess.check_output(['wget', data_url, '-O', libsvm_path])\n\n # Load dataset into a Spark DataFrame\n df = spark.read.format('libsvm') \\\n .option('numFeatures', '784') \\\n .load(libsvm_path)\n\n # One-hot encode labels into SparseVectors\n encoder = OneHotEncoder(inputCols=['label'],\n outputCols=['label_vec'],\n dropLast=False)\n model = encoder.fit(df)\n train_df = model.transform(df)\n\n # Train/test split\n train_df, test_df = train_df.randomSplit([0.9, 0.1])\n\n # Define the PyTorch model without any Horovod-specific parameters\n class Net(LightningModule):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = x.float().reshape((-1, 1, 28, 28))\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x, -1)\n\n def configure_optimizers(self):\n return optim.SGD(self.parameters(), lr=0.01, momentum=0.5)\n\n def training_step(self, batch, batch_idx):\n if batch_idx == 0:\n print(f\"training data batch size: {batch['label'].shape}\")\n x, y = batch['features'], batch['label']\n y_hat = self(x)\n loss = F.nll_loss(y_hat, y.long())\n self.log('train_loss', loss)\n return loss\n\n def validation_step(self, batch, batch_idx):\n if batch_idx == 0:\n print(f\"validation data batch size: {batch['label'].shape}\")\n x, y = batch['features'], batch['label']\n y_hat = self(x)\n loss = F.nll_loss(y_hat, y.long())\n self.log('val_loss', loss)\n\n def validation_epoch_end(self, outputs):\n avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean() if len(outputs) > 0 else float('inf')\n self.log('avg_val_loss', avg_loss)\n\n model = Net()\n\n # Train a Horovod Spark Estimator on the DataFrame\n backend = SparkBackend(num_proc=args.num_proc,\n stdout=sys.stdout, stderr=sys.stderr,\n prefix_output_with_timestamp=True)\n\n from pytorch_lightning.callbacks import Callback\n\n epochs = args.epochs\n\n class MyDummyCallback(Callback):\n def __init__(self):\n self.epcoh_end_counter = 0\n self.train_epcoh_end_counter = 0\n self.validation_epoch_end_counter = 0\n\n def on_init_start(self, trainer):\n print('Starting to init trainer!')\n\n def on_init_end(self, trainer):\n print('Trainer is initialized.')\n\n def on_epoch_end(self, trainer, model):\n print('A train or eval epoch ended.')\n self.epcoh_end_counter += 1\n\n def on_train_epoch_end(self, trainer, model, unused=None):\n print('A train epoch ended.')\n self.train_epcoh_end_counter += 1\n\n def on_validation_epoch_end(self, trainer, model, unused=None):\n print('A val epoch ended.')\n self.validation_epoch_end_counter += 1\n\n def on_train_end(self, trainer, model):\n print(\"Training ends:\"\n f\"epcoh_end_counter={self.epcoh_end_counter}, \"\n f\"train_epcoh_end_counter={self.train_epcoh_end_counter}, \"\n 
f\"validation_epoch_end_counter={self.validation_epoch_end_counter} \\n\")\n assert self.train_epcoh_end_counter <= epochs\n assert self.epcoh_end_counter == self.train_epcoh_end_counter + self.validation_epoch_end_counter\n\n callbacks = [MyDummyCallback()]\n\n # added EarlyStopping and ModelCheckpoint\n from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint\n callbacks.append(ModelCheckpoint(monitor='val_loss', mode=\"min\",\n save_top_k=1, verbose=True))\n\n from pytorch_lightning.callbacks.early_stopping import EarlyStopping\n callbacks.append(EarlyStopping(monitor='val_loss',\n min_delta=0.001,\n patience=3,\n verbose=True,\n mode='min'))\n\n torch_estimator = hvd.TorchEstimator(backend=backend,\n store=store,\n model=model,\n input_shapes=[[-1, 1, 28, 28]],\n feature_cols=['features'],\n label_cols=['label'],\n batch_size=args.batch_size,\n epochs=args.epochs,\n validation=0.1,\n verbose=1,\n callbacks=callbacks,\n profiler=\"simple\" if args.enable_profiler else None)\n\n torch_model = torch_estimator.fit(train_df).setOutputCols(['label_prob'])\n\n # Evaluate the model on the held-out test DataFrame\n pred_df = torch_model.transform(test_df)\n\n argmax = udf(lambda v: float(np.argmax(v)), returnType=T.DoubleType())\n pred_df = pred_df.withColumn('label_pred', argmax(pred_df.label_prob))\n evaluator = MulticlassClassificationEvaluator(predictionCol='label_pred', labelCol='label', metricName='accuracy')\n print('Test accuracy:', evaluator.evaluate(pred_df))\n\n spark.stop()\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n train_model(args)\n", "path": "examples/spark/pytorch/pytorch_lightning_spark_mnist.py"}]}
| 3,119 | 260 |
gh_patches_debug_30378
|
rasdani/github-patches
|
git_diff
|
conda__conda-8938
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Similar packages in pkgs/main and pkgs/free are considered the same package
If `pkgs/main` and `pkgs/free` contain a package with the same subdir, name, version, build_number and build [conda considers](https://github.com/conda/conda/blob/4.6.6/conda/models/records.py#L253-L266) these packages identical and will not respect the sub-channel priority (main > free).
A good example of this is the blas=1.0=mkl packages for the linux-64 platform.
Additionally packages in `pkgs/free` and `pkg/main` with the [same name, version and build](https://github.com/conda/conda/blob/4.6.6/conda/models/records.py#L272-L273) will be considered by the [solver](https://github.com/conda/conda/blob/4.6.6/conda/resolve.py#L612-L620) to be the same even if they have a different build_number. This occurs with the blas=1.0=openblas packages for the linux-ppc64le platform which makes them un-installable.
cf: #8301 #8236
</issue>
<code>
[start of conda/core/index.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (C) 2012 Anaconda, Inc
3 # SPDX-License-Identifier: BSD-3-Clause
4 from __future__ import absolute_import, division, print_function, unicode_literals
5
6 from itertools import chain
7 from logging import getLogger
8
9 from .package_cache_data import PackageCacheData
10 from .prefix_data import PrefixData
11 from .subdir_data import SubdirData, make_feature_record
12 from .._vendor.boltons.setutils import IndexedSet
13 from .._vendor.toolz import concat, concatv
14 from ..base.context import context
15 from ..common.compat import itervalues
16 from ..common.io import ThreadLimitedThreadPoolExecutor, time_recorder
17 from ..exceptions import ChannelNotAllowed, InvalidSpec
18 from ..gateways.logging import initialize_logging
19 from ..models.channel import Channel, all_channel_urls
20 from ..models.enums import PackageType
21 from ..models.match_spec import MatchSpec
22 from ..models.records import EMPTY_LINK, PackageCacheRecord, PackageRecord, PrefixRecord
23
24 log = getLogger(__name__)
25
26
27 def check_whitelist(channel_urls):
28 if context.whitelist_channels:
29 whitelist_channel_urls = tuple(concat(
30 Channel(c).base_urls for c in context.whitelist_channels
31 ))
32 for url in channel_urls:
33 these_urls = Channel(url).base_urls
34 if not all(this_url in whitelist_channel_urls for this_url in these_urls):
35 raise ChannelNotAllowed(Channel(url))
36
37
38 LAST_CHANNEL_URLS = []
39
40 @time_recorder("get_index")
41 def get_index(channel_urls=(), prepend=True, platform=None,
42 use_local=False, use_cache=False, unknown=None, prefix=None,
43 repodata_fn=context.repodata_fns[-1]):
44 """
45 Return the index of packages available on the channels
46
47 If prepend=False, only the channels passed in as arguments are used.
48 If platform=None, then the current platform is used.
49 If prefix is supplied, then the packages installed in that prefix are added.
50 """
51 initialize_logging() # needed in case this function is called directly as a public API
52
53 if context.offline and unknown is None:
54 unknown = True
55
56 channel_urls = calculate_channel_urls(channel_urls, prepend, platform, use_local)
57 del LAST_CHANNEL_URLS[:]
58 LAST_CHANNEL_URLS.extend(channel_urls)
59
60 check_whitelist(channel_urls)
61
62 index = fetch_index(channel_urls, use_cache=use_cache, repodata_fn=repodata_fn)
63
64 if prefix:
65 _supplement_index_with_prefix(index, prefix)
66 if unknown:
67 _supplement_index_with_cache(index)
68 if context.track_features:
69 _supplement_index_with_features(index)
70 return index
71
72
73 def fetch_index(channel_urls, use_cache=False, index=None, repodata_fn=context.repodata_fns[-1]):
74 log.debug('channel_urls=' + repr(channel_urls))
75 index = {}
76 with ThreadLimitedThreadPoolExecutor() as executor:
77 subdir_instantiator = lambda url: SubdirData(Channel(url), repodata_fn=repodata_fn)
78 for f in executor.map(subdir_instantiator, channel_urls):
79 index.update((rec, rec) for rec in f.iter_records())
80 return index
81
82
83 def dist_str_in_index(index, dist_str):
84 match_spec = MatchSpec.from_dist_str(dist_str)
85 return any(match_spec.match(prec) for prec in itervalues(index))
86
87
88 def _supplement_index_with_prefix(index, prefix):
89 # supplement index with information from prefix/conda-meta
90 assert prefix
91 for prefix_record in PrefixData(prefix).iter_records():
92 if prefix_record in index:
93 # The downloaded repodata takes priority, so we do not overwrite.
94 # We do, however, copy the link information so that the solver (i.e. resolve)
95 # knows this package is installed.
96 current_record = index[prefix_record]
97 link = prefix_record.get('link') or EMPTY_LINK
98 index[prefix_record] = PrefixRecord.from_objects(
99 current_record, prefix_record, link=link
100 )
101 else:
102 # If the package is not in the repodata, use the local data.
103 # If the channel is known but the package is not in the index, it
104 # is because 1) the channel is unavailable offline, or 2) it no
105 # longer contains this package. Either way, we should prefer any
106 # other version of the package to this one. On the other hand, if
107 # it is in a channel we don't know about, assign it a value just
108 # above the priority of all known channels.
109 index[prefix_record] = prefix_record
110
111
112 def _supplement_index_with_cache(index):
113 # supplement index with packages from the cache
114 for pcrec in PackageCacheData.get_all_extracted_entries():
115 if pcrec in index:
116 # The downloaded repodata takes priority
117 current_record = index[pcrec]
118 index[pcrec] = PackageCacheRecord.from_objects(current_record, pcrec)
119 else:
120 index[pcrec] = pcrec
121
122
123 def _make_virtual_package(name, version=None):
124 return PackageRecord(
125 package_type=PackageType.VIRTUAL_SYSTEM,
126 name=name,
127 version=version or '0',
128 build='0',
129 channel='@',
130 subdir=context.subdir,
131 md5="12345678901234567890123456789012",
132 build_number=0,
133 fn=name,
134 )
135
136 def _supplement_index_with_features(index, features=()):
137 for feature in chain(context.track_features, features):
138 rec = make_feature_record(feature)
139 index[rec] = rec
140
141
142 def _supplement_index_with_system(index):
143 cuda_version = context.cuda_version
144 if cuda_version is not None:
145 rec = _make_virtual_package('__cuda', cuda_version)
146 index[rec] = rec
147
148
149 def calculate_channel_urls(channel_urls=(), prepend=True, platform=None, use_local=False):
150 if use_local:
151 channel_urls = ['local'] + list(channel_urls)
152 if prepend:
153 channel_urls += context.channels
154
155 subdirs = (platform, 'noarch') if platform is not None else context.subdirs
156 return all_channel_urls(channel_urls, subdirs=subdirs)
157
158
159 def get_reduced_index(prefix, channels, subdirs, specs, repodata_fn):
160 records = IndexedSet()
161 collected_names = set()
162 collected_track_features = set()
163 pending_names = set()
164 pending_track_features = set()
165
166 def push_spec(spec):
167 name = spec.get_raw_value('name')
168 if name and name not in collected_names:
169 pending_names.add(name)
170 track_features = spec.get_raw_value('track_features')
171 if track_features:
172 for ftr_name in track_features:
173 if ftr_name not in collected_track_features:
174 pending_track_features.add(ftr_name)
175
176 def push_record(record):
177 try:
178 combined_depends = record.combined_depends
179 except InvalidSpec as e:
180 log.warning("Skipping %s due to InvalidSpec: %s",
181 record.record_id(), e._kwargs["invalid_spec"])
182 return
183 push_spec(MatchSpec(record.name))
184 for _spec in combined_depends:
185 push_spec(_spec)
186 if record.track_features:
187 for ftr_name in record.track_features:
188 push_spec(MatchSpec(track_features=ftr_name))
189
190 if prefix:
191 for prefix_rec in PrefixData(prefix).iter_records():
192 push_record(prefix_rec)
193 for spec in specs:
194 push_spec(spec)
195
196 while pending_names or pending_track_features:
197 while pending_names:
198 name = pending_names.pop()
199 collected_names.add(name)
200 spec = MatchSpec(name)
201 new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs,
202 repodata_fn=repodata_fn)
203 for record in new_records:
204 push_record(record)
205 records.update(new_records)
206
207 while pending_track_features:
208 feature_name = pending_track_features.pop()
209 collected_track_features.add(feature_name)
210 spec = MatchSpec(track_features=feature_name)
211 new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs,
212 repodata_fn=repodata_fn)
213 for record in new_records:
214 push_record(record)
215 records.update(new_records)
216
217 reduced_index = {rec: rec for rec in records}
218
219 if prefix is not None:
220 _supplement_index_with_prefix(reduced_index, prefix)
221
222 if context.offline or ('unknown' in context._argparse_args
223 and context._argparse_args.unknown):
224 # This is really messed up right now. Dates all the way back to
225 # https://github.com/conda/conda/commit/f761f65a82b739562a0d997a2570e2b8a0bdc783
226 # TODO: revisit this later
227 _supplement_index_with_cache(reduced_index)
228
229 # add feature records for the solver
230 known_features = set()
231 for rec in itervalues(reduced_index):
232 known_features.update(concatv(rec.track_features, rec.features))
233 known_features.update(context.track_features)
234 for ftr_str in known_features:
235 rec = make_feature_record(ftr_str)
236 reduced_index[rec] = rec
237
238 _supplement_index_with_system(reduced_index)
239
240 return reduced_index
241
[end of conda/core/index.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conda/core/index.py b/conda/core/index.py
--- a/conda/core/index.py
+++ b/conda/core/index.py
@@ -90,14 +90,26 @@
assert prefix
for prefix_record in PrefixData(prefix).iter_records():
if prefix_record in index:
- # The downloaded repodata takes priority, so we do not overwrite.
- # We do, however, copy the link information so that the solver (i.e. resolve)
- # knows this package is installed.
current_record = index[prefix_record]
- link = prefix_record.get('link') or EMPTY_LINK
- index[prefix_record] = PrefixRecord.from_objects(
- current_record, prefix_record, link=link
- )
+ if current_record.channel == prefix_record.channel:
+ # The downloaded repodata takes priority, so we do not overwrite.
+ # We do, however, copy the link information so that the solver (i.e. resolve)
+ # knows this package is installed.
+ link = prefix_record.get('link') or EMPTY_LINK
+ index[prefix_record] = PrefixRecord.from_objects(
+ current_record, prefix_record, link=link
+ )
+ else:
+ # If the local packages channel information does not agree with
+ # the channel information in the index then they are most
+ # likely referring to different packages. This can occur if a
+ # multi-channel changes configuration, e.g. defaults with and
+ # without the free channel. In this case we need to fake the
+ # channel data for the existing package.
+ prefix_channel = prefix_record.channel
+ prefix_channel._Channel__canonical_name = prefix_channel.url()
+ del prefix_record._PackageRecord__pkey
+ index[prefix_record] = prefix_record
else:
# If the package is not in the repodata, use the local data.
# If the channel is known but the package is not in the index, it
|
{"golden_diff": "diff --git a/conda/core/index.py b/conda/core/index.py\n--- a/conda/core/index.py\n+++ b/conda/core/index.py\n@@ -90,14 +90,26 @@\n assert prefix\n for prefix_record in PrefixData(prefix).iter_records():\n if prefix_record in index:\n- # The downloaded repodata takes priority, so we do not overwrite.\n- # We do, however, copy the link information so that the solver (i.e. resolve)\n- # knows this package is installed.\n current_record = index[prefix_record]\n- link = prefix_record.get('link') or EMPTY_LINK\n- index[prefix_record] = PrefixRecord.from_objects(\n- current_record, prefix_record, link=link\n- )\n+ if current_record.channel == prefix_record.channel:\n+ # The downloaded repodata takes priority, so we do not overwrite.\n+ # We do, however, copy the link information so that the solver (i.e. resolve)\n+ # knows this package is installed.\n+ link = prefix_record.get('link') or EMPTY_LINK\n+ index[prefix_record] = PrefixRecord.from_objects(\n+ current_record, prefix_record, link=link\n+ )\n+ else:\n+ # If the local packages channel information does not agree with\n+ # the channel information in the index then they are most\n+ # likely referring to different packages. This can occur if a\n+ # multi-channel changes configuration, e.g. defaults with and\n+ # without the free channel. In this case we need to fake the\n+ # channel data for the existing package.\n+ prefix_channel = prefix_record.channel\n+ prefix_channel._Channel__canonical_name = prefix_channel.url()\n+ del prefix_record._PackageRecord__pkey\n+ index[prefix_record] = prefix_record\n else:\n # If the package is not in the repodata, use the local data.\n # If the channel is known but the package is not in the index, it\n", "issue": "Similar packages in pkgs/main and pkgs/free are considered the same package\nIf `pkgs/main` and `pkgs/free` contain a package with the same subdir, name, version, build_number and build [conda considers](https://github.com/conda/conda/blob/4.6.6/conda/models/records.py#L253-L266) these packages identical and will not respect the sub-channel priority (main > free). \r\n\r\nA good example of this is the blas=1.0=mkl packages for the linux-64 platform.\r\n\r\nAdditionally packages in `pkgs/free` and `pkg/main` with the [same name, version and build](https://github.com/conda/conda/blob/4.6.6/conda/models/records.py#L272-L273) will be considered by the [solver](https://github.com/conda/conda/blob/4.6.6/conda/resolve.py#L612-L620) to be the same even if they have a different build_number. 
This occurs with the blas=1.0=openblas packages for the linux-ppc64le platform which makes them un-installable.\r\n\r\ncf: #8301 #8236 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (C) 2012 Anaconda, Inc\n# SPDX-License-Identifier: BSD-3-Clause\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom itertools import chain\nfrom logging import getLogger\n\nfrom .package_cache_data import PackageCacheData\nfrom .prefix_data import PrefixData\nfrom .subdir_data import SubdirData, make_feature_record\nfrom .._vendor.boltons.setutils import IndexedSet\nfrom .._vendor.toolz import concat, concatv\nfrom ..base.context import context\nfrom ..common.compat import itervalues\nfrom ..common.io import ThreadLimitedThreadPoolExecutor, time_recorder\nfrom ..exceptions import ChannelNotAllowed, InvalidSpec\nfrom ..gateways.logging import initialize_logging\nfrom ..models.channel import Channel, all_channel_urls\nfrom ..models.enums import PackageType\nfrom ..models.match_spec import MatchSpec\nfrom ..models.records import EMPTY_LINK, PackageCacheRecord, PackageRecord, PrefixRecord\n\nlog = getLogger(__name__)\n\n\ndef check_whitelist(channel_urls):\n if context.whitelist_channels:\n whitelist_channel_urls = tuple(concat(\n Channel(c).base_urls for c in context.whitelist_channels\n ))\n for url in channel_urls:\n these_urls = Channel(url).base_urls\n if not all(this_url in whitelist_channel_urls for this_url in these_urls):\n raise ChannelNotAllowed(Channel(url))\n\n\nLAST_CHANNEL_URLS = []\n\n@time_recorder(\"get_index\")\ndef get_index(channel_urls=(), prepend=True, platform=None,\n use_local=False, use_cache=False, unknown=None, prefix=None,\n repodata_fn=context.repodata_fns[-1]):\n \"\"\"\n Return the index of packages available on the channels\n\n If prepend=False, only the channels passed in as arguments are used.\n If platform=None, then the current platform is used.\n If prefix is supplied, then the packages installed in that prefix are added.\n \"\"\"\n initialize_logging() # needed in case this function is called directly as a public API\n\n if context.offline and unknown is None:\n unknown = True\n\n channel_urls = calculate_channel_urls(channel_urls, prepend, platform, use_local)\n del LAST_CHANNEL_URLS[:]\n LAST_CHANNEL_URLS.extend(channel_urls)\n\n check_whitelist(channel_urls)\n\n index = fetch_index(channel_urls, use_cache=use_cache, repodata_fn=repodata_fn)\n\n if prefix:\n _supplement_index_with_prefix(index, prefix)\n if unknown:\n _supplement_index_with_cache(index)\n if context.track_features:\n _supplement_index_with_features(index)\n return index\n\n\ndef fetch_index(channel_urls, use_cache=False, index=None, repodata_fn=context.repodata_fns[-1]):\n log.debug('channel_urls=' + repr(channel_urls))\n index = {}\n with ThreadLimitedThreadPoolExecutor() as executor:\n subdir_instantiator = lambda url: SubdirData(Channel(url), repodata_fn=repodata_fn)\n for f in executor.map(subdir_instantiator, channel_urls):\n index.update((rec, rec) for rec in f.iter_records())\n return index\n\n\ndef dist_str_in_index(index, dist_str):\n match_spec = MatchSpec.from_dist_str(dist_str)\n return any(match_spec.match(prec) for prec in itervalues(index))\n\n\ndef _supplement_index_with_prefix(index, prefix):\n # supplement index with information from prefix/conda-meta\n assert prefix\n for prefix_record in PrefixData(prefix).iter_records():\n if prefix_record in index:\n # The downloaded repodata takes priority, so we do not overwrite.\n # We 
do, however, copy the link information so that the solver (i.e. resolve)\n # knows this package is installed.\n current_record = index[prefix_record]\n link = prefix_record.get('link') or EMPTY_LINK\n index[prefix_record] = PrefixRecord.from_objects(\n current_record, prefix_record, link=link\n )\n else:\n # If the package is not in the repodata, use the local data.\n # If the channel is known but the package is not in the index, it\n # is because 1) the channel is unavailable offline, or 2) it no\n # longer contains this package. Either way, we should prefer any\n # other version of the package to this one. On the other hand, if\n # it is in a channel we don't know about, assign it a value just\n # above the priority of all known channels.\n index[prefix_record] = prefix_record\n\n\ndef _supplement_index_with_cache(index):\n # supplement index with packages from the cache\n for pcrec in PackageCacheData.get_all_extracted_entries():\n if pcrec in index:\n # The downloaded repodata takes priority\n current_record = index[pcrec]\n index[pcrec] = PackageCacheRecord.from_objects(current_record, pcrec)\n else:\n index[pcrec] = pcrec\n\n\ndef _make_virtual_package(name, version=None):\n return PackageRecord(\n package_type=PackageType.VIRTUAL_SYSTEM,\n name=name,\n version=version or '0',\n build='0',\n channel='@',\n subdir=context.subdir,\n md5=\"12345678901234567890123456789012\",\n build_number=0,\n fn=name,\n )\n\ndef _supplement_index_with_features(index, features=()):\n for feature in chain(context.track_features, features):\n rec = make_feature_record(feature)\n index[rec] = rec\n\n\ndef _supplement_index_with_system(index):\n cuda_version = context.cuda_version\n if cuda_version is not None:\n rec = _make_virtual_package('__cuda', cuda_version)\n index[rec] = rec\n\n\ndef calculate_channel_urls(channel_urls=(), prepend=True, platform=None, use_local=False):\n if use_local:\n channel_urls = ['local'] + list(channel_urls)\n if prepend:\n channel_urls += context.channels\n\n subdirs = (platform, 'noarch') if platform is not None else context.subdirs\n return all_channel_urls(channel_urls, subdirs=subdirs)\n\n\ndef get_reduced_index(prefix, channels, subdirs, specs, repodata_fn):\n records = IndexedSet()\n collected_names = set()\n collected_track_features = set()\n pending_names = set()\n pending_track_features = set()\n\n def push_spec(spec):\n name = spec.get_raw_value('name')\n if name and name not in collected_names:\n pending_names.add(name)\n track_features = spec.get_raw_value('track_features')\n if track_features:\n for ftr_name in track_features:\n if ftr_name not in collected_track_features:\n pending_track_features.add(ftr_name)\n\n def push_record(record):\n try:\n combined_depends = record.combined_depends\n except InvalidSpec as e:\n log.warning(\"Skipping %s due to InvalidSpec: %s\",\n record.record_id(), e._kwargs[\"invalid_spec\"])\n return\n push_spec(MatchSpec(record.name))\n for _spec in combined_depends:\n push_spec(_spec)\n if record.track_features:\n for ftr_name in record.track_features:\n push_spec(MatchSpec(track_features=ftr_name))\n\n if prefix:\n for prefix_rec in PrefixData(prefix).iter_records():\n push_record(prefix_rec)\n for spec in specs:\n push_spec(spec)\n\n while pending_names or pending_track_features:\n while pending_names:\n name = pending_names.pop()\n collected_names.add(name)\n spec = MatchSpec(name)\n new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs,\n repodata_fn=repodata_fn)\n for record in new_records:\n 
push_record(record)\n records.update(new_records)\n\n while pending_track_features:\n feature_name = pending_track_features.pop()\n collected_track_features.add(feature_name)\n spec = MatchSpec(track_features=feature_name)\n new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs,\n repodata_fn=repodata_fn)\n for record in new_records:\n push_record(record)\n records.update(new_records)\n\n reduced_index = {rec: rec for rec in records}\n\n if prefix is not None:\n _supplement_index_with_prefix(reduced_index, prefix)\n\n if context.offline or ('unknown' in context._argparse_args\n and context._argparse_args.unknown):\n # This is really messed up right now. Dates all the way back to\n # https://github.com/conda/conda/commit/f761f65a82b739562a0d997a2570e2b8a0bdc783\n # TODO: revisit this later\n _supplement_index_with_cache(reduced_index)\n\n # add feature records for the solver\n known_features = set()\n for rec in itervalues(reduced_index):\n known_features.update(concatv(rec.track_features, rec.features))\n known_features.update(context.track_features)\n for ftr_str in known_features:\n rec = make_feature_record(ftr_str)\n reduced_index[rec] = rec\n\n _supplement_index_with_system(reduced_index)\n\n return reduced_index\n", "path": "conda/core/index.py"}]}
| 3,457 | 441 |
gh_patches_debug_13269
|
rasdani/github-patches
|
git_diff
|
mars-project__mars-558
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Cannot import new_client
**Describe the bug**
When trying to import ``new_client`` from ``mars.actors`` in Python 2.7 in Linux, a ValueError is raised:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mars/actors/core.pyx", line 147, in mars.actors.core.new_client
cpdef object new_client(object parallel=None, str backend='gevent'):
File "mars/actors/core.pyx", line 151, in mars.actors.core.new_client
from .pool.gevent_pool import ActorClient
File "mars/actors/pool/gevent_pool.pyx", line 38, in init mars.actors.pool.gevent_pool
from ...lib import gipc
File "mars/lib/gipc.pyx", line 1159, in init mars.lib.gipc
__exec("""def _reraise(tp, value, tb=None):
File "mars/lib/gipc.pyx", line 1150, in mars.lib.gipc.__exec
frame = sys._getframe(1)
ValueError: call stack is not deep enough
```
**To Reproduce**
```python
>>> from mars.actors import new_client
>>> client = new_client()
```
</issue>
<code>
[start of mars/actors/__init__.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17
18 from .core import create_actor_pool, Actor, FunctionActor, new_client, \
19 register_actor_implementation, unregister_actor_implementation
20 from .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist
21 from .distributor import Distributor
22
[end of mars/actors/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mars/actors/__init__.py b/mars/actors/__init__.py
--- a/mars/actors/__init__.py
+++ b/mars/actors/__init__.py
@@ -14,8 +14,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
from .core import create_actor_pool, Actor, FunctionActor, new_client, \
register_actor_implementation, unregister_actor_implementation
from .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist
from .distributor import Distributor
+
+# import gipc first to avoid stack issue of `call stack is not deep enough`
+try:
+ from ..lib import gipc
+ del gipc
+except ImportError: # pragma: no cover
+ pass
|
{"golden_diff": "diff --git a/mars/actors/__init__.py b/mars/actors/__init__.py\n--- a/mars/actors/__init__.py\n+++ b/mars/actors/__init__.py\n@@ -14,8 +14,14 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-\n from .core import create_actor_pool, Actor, FunctionActor, new_client, \\\n register_actor_implementation, unregister_actor_implementation\n from .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist\n from .distributor import Distributor\n+\n+# import gipc first to avoid stack issue of `call stack is not deep enough`\n+try:\n+ from ..lib import gipc\n+ del gipc\n+except ImportError: # pragma: no cover\n+ pass\n", "issue": "[BUG] Cannot import new_client\n**Describe the bug**\r\nWhen trying to import ``new_client`` from ``mars.actors`` in Python 2.7 in Linux, a ValueError is raised:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"mars/actors/core.pyx\", line 147, in mars.actors.core.new_client\r\n cpdef object new_client(object parallel=None, str backend='gevent'):\r\n File \"mars/actors/core.pyx\", line 151, in mars.actors.core.new_client\r\n from .pool.gevent_pool import ActorClient\r\n File \"mars/actors/pool/gevent_pool.pyx\", line 38, in init mars.actors.pool.gevent_pool\r\n from ...lib import gipc\r\n File \"mars/lib/gipc.pyx\", line 1159, in init mars.lib.gipc\r\n __exec(\"\"\"def _reraise(tp, value, tb=None):\r\n File \"mars/lib/gipc.pyx\", line 1150, in mars.lib.gipc.__exec\r\n frame = sys._getframe(1)\r\nValueError: call stack is not deep enough\r\n```\r\n\r\n**To Reproduce**\r\n```python\r\n>>> from mars.actors import new_client\r\n>>> client = new_client()\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom .core import create_actor_pool, Actor, FunctionActor, new_client, \\\n register_actor_implementation, unregister_actor_implementation\nfrom .errors import ActorPoolNotStarted, ActorNotExist, ActorAlreadyExist\nfrom .distributor import Distributor\n", "path": "mars/actors/__init__.py"}]}
| 1,068 | 181 |
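For the mars-project row above: the traceback bottoms out in `mars/lib/gipc.pyx` calling `sys._getframe(1)` while the module is being initialised, and the accepted fix simply imports `gipc` once from the plain-Python `mars/actors/__init__.py` so that initialisation happens with an ordinary import stack in place. The snippet below is purely illustrative (not from the repository); it only shows what `sys._getframe(1)` does and why it can raise the error seen in the traceback.

```python
import sys

def probe():
    # sys._getframe(1) returns the caller's frame. If there is no Python
    # frame that far up the stack (which can happen when module-level code
    # is reached straight from a C/Cython entry point), it raises
    # "ValueError: call stack is not deep enough".
    return sys._getframe(1)

print(probe().f_code.co_name)  # prints '<module>' when called from module level
```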
gh_patches_debug_42764
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-5756
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
We should use JSON.parse instead of a JS literal for hydration state
See https://v8.dev/blog/cost-of-javascript-2019#json
We're currently doing this...
```
<script>window._react_data = {"locale": "en-US", "stringCatalog": {}, ... </script>
```
We should be doing...
```
<script>window._react_data = JSON.parse('{"locale": "en-US", "stringCatalog": {}, ... ')</script>
```
</issue>
<code>
[start of kuma/wiki/templatetags/ssr.py]
1 from __future__ import print_function
2
3 import json
4 import os
5
6 import requests
7 import requests.exceptions
8 from django.conf import settings
9 from django.utils import lru_cache
10 from django_jinja import library
11
12
13 @lru_cache.lru_cache()
14 def get_localization_data(locale):
15 """
16 Read the frontend string catalog for the specified locale, parse
17 it as JSON, and return the resulting dict. The returned values
18 are cached so that we don't have to read files all the time.
19 """
20 path = os.path.join(settings.BASE_DIR,
21 'static', 'jsi18n',
22 locale, 'react.json')
23 with open(path, 'r') as f:
24 return json.load(f)
25
26
27 @library.global_function
28 def render_react(component_name, locale, url, document_data, ssr=True):
29 """
30 Render a script tag to define the data and any other HTML tags needed
31 to enable the display of a React-based UI. By default, this does
32 server side rendering, falling back to client-side rendering if
33 the SSR attempt fails. Pass False as the second argument to do
34 client-side rendering unconditionally.
35
36 Note that we are not defining a generic Jinja template tag here.
37 The code in this file is specific to Kuma's React-based UI.
38 """
39 localization_data = get_localization_data(locale)
40
41 data = {
42 'locale': locale,
43 'stringCatalog': localization_data['catalog'],
44 'pluralExpression': localization_data['plural'],
45 'url': url,
46 'documentData': document_data,
47 }
48
49 if ssr:
50 return server_side_render(component_name, data)
51 else:
52 return client_side_render(component_name, data)
53
54
55 def _render(component_name, html, state):
56 """A utility function used by both client side and server side rendering.
57 Returns a string that includes the specified HTML and a serialized
58 form of the state dict, in the format expected by the client-side code
59 in kuma/javascript/src/index.jsx.
60 """
61 # We're going to need this below, but we don't want to keep it around
62 pluralExpression = state['pluralExpression']
63 del state['pluralExpression']
64
65 # Serialize the state object to JSON and be sure the string
66 # "</script>" does not appear in it, since we are going to embed it
67 # within an HTML <script> tag.
68 serializedState = json.dumps(state).replace('</', '<\\/')
69
70 # In addition to the JSON-serialized data structure, we also want
71 # to pass the pluralForm() function required for the ngettext()
72 # localization function. Functions can't be included in JSON, but
73 # they are part of JavaScript, and our serializedState string is
74 # embedded in an HTML <script> tag, so it can include arbitrary
75 # JavaScript, not just JSON. The reason that we need to do this
76 # is that Django provides us with a JS expression as a string and
77 # we need to convert it into JS code. If we don't do it here with
78 # string manipulation, then we need to use eval() or `new Function()`
79 # on the client-side and that causes a CSP violation.
80 if pluralExpression:
81 # A JavaScript function expression as a Python string
82 js_function_text = (
83 'function(n){{var v=({});return(v===true)?1:((v===false)?0:v);}}'
84 .format(pluralExpression)
85 )
86 # Splice it into the JSON-formatted data string
87 serializedState = (
88 '{pluralFunction:' + js_function_text + ',' + serializedState[1:]
89 )
90
91 # Now return the HTML and the state as a single string
92 return (
93 u'<div id="react-container" data-component-name="{}">{}</div>\n'
94 u'<script>window._react_data = {};</script>\n'
95 ).format(component_name, html, serializedState)
96
97
98 def client_side_render(component_name, data):
99 """
100 Output an empty <div> and a script with complete state so that
101 the UI can be rendered on the client-side.
102 """
103 return _render(component_name, '', data)
104
105
106 def server_side_render(component_name, data):
107 """
108 Pre-render the React UI to HTML and output it in a <div>, and then
109 also pass the necessary serialized state in a <script> so that
110 React on the client side can sync itself with the pre-rendred HTML.
111
112 If any exceptions are thrown during the server-side rendering, we
113 fall back to client-side rendering instead.
114 """
115 url = '{}/{}'.format(settings.SSR_URL, component_name)
116 timeout = settings.SSR_TIMEOUT
117
118 # Try server side rendering
119 try:
120 # POST the document data as JSON to the SSR server and we
121 # should get HTML text (encoded as plain text) in the body
122 # of the response
123 response = requests.post(url,
124 headers={'Content-Type': 'application/json'},
125 data=json.dumps(data).encode('utf8'),
126 timeout=timeout)
127
128 # Even though we've got fully rendered HTML now, we still need to
129 # send the document data along with it so that React can sync its
130 # state on the client side with what is in the HTML. When rendering
131 # a document page, the data includes long strings of HTML that
132 # we can get away without duplicating. So as an optimization when
133 # component_name is "document", we're going to make a copy of the
134 # data (because the original belongs to our caller) and delete those
135 # strings from the copy.
136 #
137 # WARNING: This optimization can save 20kb in data transfer
138 # for typical pages, but it requires us to be very careful on
139 # the frontend. If any components render conditionally based on
140 # the state of bodyHTML, tocHTML or quickLinkHTML, then they will
141 # render differently on the client than during SSR, and the hydrate
142 # will not just work cleanly, and those components will re-render
143 # with empty strings. This has already caused Bug 1558308, and
144 # I've commented it out because the benefit in file size doesn't
145 # seem worth the risk of client-side bugs.
146 #
147 # As an alternative, it ought to be possible to extract the HTML
148 # strings from the SSR'ed document and rebuild the document object
149 # on the client right before we call hydrate(). So if you uncomment
150 # the lines below, you should also edit kuma/javascript/src/index.jsx
151 # to extract the HTML from the document as well.
152 #
153 # if component_name == 'document':
154 # data = data.copy()
155 # data['documentData'] = data['documentData'].copy()
156 # data['documentData'].update(bodyHTML='',
157 # tocHTML='',
158 # quickLinksHTML='')
159
160 return _render(component_name, response.text, data)
161
162 except requests.exceptions.ConnectionError:
163 print("Connection error contacting SSR server.")
164 print("Falling back to client side rendering.")
165 return client_side_render(component_name, data)
166 except requests.exceptions.ReadTimeout:
167 print("Timeout contacting SSR server.")
168 print("Falling back to client side rendering.")
169 return client_side_render(component_name, data)
170
[end of kuma/wiki/templatetags/ssr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kuma/wiki/templatetags/ssr.py b/kuma/wiki/templatetags/ssr.py
--- a/kuma/wiki/templatetags/ssr.py
+++ b/kuma/wiki/templatetags/ssr.py
@@ -52,47 +52,22 @@
return client_side_render(component_name, data)
-def _render(component_name, html, state):
+def _render(component_name, html, script, needs_serialization=False):
"""A utility function used by both client side and server side rendering.
Returns a string that includes the specified HTML and a serialized
form of the state dict, in the format expected by the client-side code
in kuma/javascript/src/index.jsx.
"""
- # We're going to need this below, but we don't want to keep it around
- pluralExpression = state['pluralExpression']
- del state['pluralExpression']
-
- # Serialize the state object to JSON and be sure the string
- # "</script>" does not appear in it, since we are going to embed it
- # within an HTML <script> tag.
- serializedState = json.dumps(state).replace('</', '<\\/')
-
- # In addition to the JSON-serialized data structure, we also want
- # to pass the pluralForm() function required for the ngettext()
- # localization function. Functions can't be included in JSON, but
- # they are part of JavaScript, and our serializedState string is
- # embedded in an HTML <script> tag, so it can include arbitrary
- # JavaScript, not just JSON. The reason that we need to do this
- # is that Django provides us with a JS expression as a string and
- # we need to convert it into JS code. If we don't do it here with
- # string manipulation, then we need to use eval() or `new Function()`
- # on the client-side and that causes a CSP violation.
- if pluralExpression:
- # A JavaScript function expression as a Python string
- js_function_text = (
- 'function(n){{var v=({});return(v===true)?1:((v===false)?0:v);}}'
- .format(pluralExpression)
- )
- # Splice it into the JSON-formatted data string
- serializedState = (
- '{pluralFunction:' + js_function_text + ',' + serializedState[1:]
- )
-
- # Now return the HTML and the state as a single string
+ if needs_serialization:
+ assert isinstance(script, dict), type(script)
+ script = json.dumps(script).replace('</', '<\\/')
+ else:
+ script = u'JSON.parse({})'.format(script)
+
return (
u'<div id="react-container" data-component-name="{}">{}</div>\n'
u'<script>window._react_data = {};</script>\n'
- ).format(component_name, html, serializedState)
+ ).format(component_name, html, script)
def client_side_render(component_name, data):
@@ -100,7 +75,7 @@
Output an empty <div> and a script with complete state so that
the UI can be rendered on the client-side.
"""
- return _render(component_name, '', data)
+ return _render(component_name, '', data, needs_serialization=True)
def server_side_render(component_name, data):
@@ -114,7 +89,6 @@
"""
url = '{}/{}'.format(settings.SSR_URL, component_name)
timeout = settings.SSR_TIMEOUT
-
# Try server side rendering
try:
# POST the document data as JSON to the SSR server and we
@@ -156,8 +130,8 @@
# data['documentData'].update(bodyHTML='',
# tocHTML='',
# quickLinksHTML='')
-
- return _render(component_name, response.text, data)
+ result = response.json()
+ return _render(component_name, result['html'], result['script'])
except requests.exceptions.ConnectionError:
print("Connection error contacting SSR server.")
|
{"golden_diff": "diff --git a/kuma/wiki/templatetags/ssr.py b/kuma/wiki/templatetags/ssr.py\n--- a/kuma/wiki/templatetags/ssr.py\n+++ b/kuma/wiki/templatetags/ssr.py\n@@ -52,47 +52,22 @@\n return client_side_render(component_name, data)\n \n \n-def _render(component_name, html, state):\n+def _render(component_name, html, script, needs_serialization=False):\n \"\"\"A utility function used by both client side and server side rendering.\n Returns a string that includes the specified HTML and a serialized\n form of the state dict, in the format expected by the client-side code\n in kuma/javascript/src/index.jsx.\n \"\"\"\n- # We're going to need this below, but we don't want to keep it around\n- pluralExpression = state['pluralExpression']\n- del state['pluralExpression']\n-\n- # Serialize the state object to JSON and be sure the string\n- # \"</script>\" does not appear in it, since we are going to embed it\n- # within an HTML <script> tag.\n- serializedState = json.dumps(state).replace('</', '<\\\\/')\n-\n- # In addition to the JSON-serialized data structure, we also want\n- # to pass the pluralForm() function required for the ngettext()\n- # localization function. Functions can't be included in JSON, but\n- # they are part of JavaScript, and our serializedState string is\n- # embedded in an HTML <script> tag, so it can include arbitrary\n- # JavaScript, not just JSON. The reason that we need to do this\n- # is that Django provides us with a JS expression as a string and\n- # we need to convert it into JS code. If we don't do it here with\n- # string manipulation, then we need to use eval() or `new Function()`\n- # on the client-side and that causes a CSP violation.\n- if pluralExpression:\n- # A JavaScript function expression as a Python string\n- js_function_text = (\n- 'function(n){{var v=({});return(v===true)?1:((v===false)?0:v);}}'\n- .format(pluralExpression)\n- )\n- # Splice it into the JSON-formatted data string\n- serializedState = (\n- '{pluralFunction:' + js_function_text + ',' + serializedState[1:]\n- )\n-\n- # Now return the HTML and the state as a single string\n+ if needs_serialization:\n+ assert isinstance(script, dict), type(script)\n+ script = json.dumps(script).replace('</', '<\\\\/')\n+ else:\n+ script = u'JSON.parse({})'.format(script)\n+\n return (\n u'<div id=\"react-container\" data-component-name=\"{}\">{}</div>\\n'\n u'<script>window._react_data = {};</script>\\n'\n- ).format(component_name, html, serializedState)\n+ ).format(component_name, html, script)\n \n \n def client_side_render(component_name, data):\n@@ -100,7 +75,7 @@\n Output an empty <div> and a script with complete state so that\n the UI can be rendered on the client-side.\n \"\"\"\n- return _render(component_name, '', data)\n+ return _render(component_name, '', data, needs_serialization=True)\n \n \n def server_side_render(component_name, data):\n@@ -114,7 +89,6 @@\n \"\"\"\n url = '{}/{}'.format(settings.SSR_URL, component_name)\n timeout = settings.SSR_TIMEOUT\n-\n # Try server side rendering\n try:\n # POST the document data as JSON to the SSR server and we\n@@ -156,8 +130,8 @@\n # data['documentData'].update(bodyHTML='',\n # tocHTML='',\n # quickLinksHTML='')\n-\n- return _render(component_name, response.text, data)\n+ result = response.json()\n+ return _render(component_name, result['html'], result['script'])\n \n except requests.exceptions.ConnectionError:\n print(\"Connection error contacting SSR server.\")\n", "issue": "We should use JSON.parse instead of a JS literal for hydration state\nSee 
https://v8.dev/blog/cost-of-javascript-2019#json\r\n\r\nWe're currently doing this...\r\n```\r\n<script>window._react_data = {\"locale\": \"en-US\", \"stringCatalog\": {}, ... </script>\r\n```\r\n\r\nWe should be doing...\r\n```\r\n<script>window._react_data = JSON.parse('{\"locale\": \"en-US\", \"stringCatalog\": {}, ... ')</script>\r\n```\r\n\n", "before_files": [{"content": "from __future__ import print_function\n\nimport json\nimport os\n\nimport requests\nimport requests.exceptions\nfrom django.conf import settings\nfrom django.utils import lru_cache\nfrom django_jinja import library\n\n\n@lru_cache.lru_cache()\ndef get_localization_data(locale):\n \"\"\"\n Read the frontend string catalog for the specified locale, parse\n it as JSON, and return the resulting dict. The returned values\n are cached so that we don't have to read files all the time.\n \"\"\"\n path = os.path.join(settings.BASE_DIR,\n 'static', 'jsi18n',\n locale, 'react.json')\n with open(path, 'r') as f:\n return json.load(f)\n\n\[email protected]_function\ndef render_react(component_name, locale, url, document_data, ssr=True):\n \"\"\"\n Render a script tag to define the data and any other HTML tags needed\n to enable the display of a React-based UI. By default, this does\n server side rendering, falling back to client-side rendering if\n the SSR attempt fails. Pass False as the second argument to do\n client-side rendering unconditionally.\n\n Note that we are not defining a generic Jinja template tag here.\n The code in this file is specific to Kuma's React-based UI.\n \"\"\"\n localization_data = get_localization_data(locale)\n\n data = {\n 'locale': locale,\n 'stringCatalog': localization_data['catalog'],\n 'pluralExpression': localization_data['plural'],\n 'url': url,\n 'documentData': document_data,\n }\n\n if ssr:\n return server_side_render(component_name, data)\n else:\n return client_side_render(component_name, data)\n\n\ndef _render(component_name, html, state):\n \"\"\"A utility function used by both client side and server side rendering.\n Returns a string that includes the specified HTML and a serialized\n form of the state dict, in the format expected by the client-side code\n in kuma/javascript/src/index.jsx.\n \"\"\"\n # We're going to need this below, but we don't want to keep it around\n pluralExpression = state['pluralExpression']\n del state['pluralExpression']\n\n # Serialize the state object to JSON and be sure the string\n # \"</script>\" does not appear in it, since we are going to embed it\n # within an HTML <script> tag.\n serializedState = json.dumps(state).replace('</', '<\\\\/')\n\n # In addition to the JSON-serialized data structure, we also want\n # to pass the pluralForm() function required for the ngettext()\n # localization function. Functions can't be included in JSON, but\n # they are part of JavaScript, and our serializedState string is\n # embedded in an HTML <script> tag, so it can include arbitrary\n # JavaScript, not just JSON. The reason that we need to do this\n # is that Django provides us with a JS expression as a string and\n # we need to convert it into JS code. 
If we don't do it here with\n # string manipulation, then we need to use eval() or `new Function()`\n # on the client-side and that causes a CSP violation.\n if pluralExpression:\n # A JavaScript function expression as a Python string\n js_function_text = (\n 'function(n){{var v=({});return(v===true)?1:((v===false)?0:v);}}'\n .format(pluralExpression)\n )\n # Splice it into the JSON-formatted data string\n serializedState = (\n '{pluralFunction:' + js_function_text + ',' + serializedState[1:]\n )\n\n # Now return the HTML and the state as a single string\n return (\n u'<div id=\"react-container\" data-component-name=\"{}\">{}</div>\\n'\n u'<script>window._react_data = {};</script>\\n'\n ).format(component_name, html, serializedState)\n\n\ndef client_side_render(component_name, data):\n \"\"\"\n Output an empty <div> and a script with complete state so that\n the UI can be rendered on the client-side.\n \"\"\"\n return _render(component_name, '', data)\n\n\ndef server_side_render(component_name, data):\n \"\"\"\n Pre-render the React UI to HTML and output it in a <div>, and then\n also pass the necessary serialized state in a <script> so that\n React on the client side can sync itself with the pre-rendred HTML.\n\n If any exceptions are thrown during the server-side rendering, we\n fall back to client-side rendering instead.\n \"\"\"\n url = '{}/{}'.format(settings.SSR_URL, component_name)\n timeout = settings.SSR_TIMEOUT\n\n # Try server side rendering\n try:\n # POST the document data as JSON to the SSR server and we\n # should get HTML text (encoded as plain text) in the body\n # of the response\n response = requests.post(url,\n headers={'Content-Type': 'application/json'},\n data=json.dumps(data).encode('utf8'),\n timeout=timeout)\n\n # Even though we've got fully rendered HTML now, we still need to\n # send the document data along with it so that React can sync its\n # state on the client side with what is in the HTML. When rendering\n # a document page, the data includes long strings of HTML that\n # we can get away without duplicating. So as an optimization when\n # component_name is \"document\", we're going to make a copy of the\n # data (because the original belongs to our caller) and delete those\n # strings from the copy.\n #\n # WARNING: This optimization can save 20kb in data transfer\n # for typical pages, but it requires us to be very careful on\n # the frontend. If any components render conditionally based on\n # the state of bodyHTML, tocHTML or quickLinkHTML, then they will\n # render differently on the client than during SSR, and the hydrate\n # will not just work cleanly, and those components will re-render\n # with empty strings. This has already caused Bug 1558308, and\n # I've commented it out because the benefit in file size doesn't\n # seem worth the risk of client-side bugs.\n #\n # As an alternative, it ought to be possible to extract the HTML\n # strings from the SSR'ed document and rebuild the document object\n # on the client right before we call hydrate(). 
So if you uncomment\n # the lines below, you should also edit kuma/javascript/src/index.jsx\n # to extract the HTML from the document as well.\n #\n # if component_name == 'document':\n # data = data.copy()\n # data['documentData'] = data['documentData'].copy()\n # data['documentData'].update(bodyHTML='',\n # tocHTML='',\n # quickLinksHTML='')\n\n return _render(component_name, response.text, data)\n\n except requests.exceptions.ConnectionError:\n print(\"Connection error contacting SSR server.\")\n print(\"Falling back to client side rendering.\")\n return client_side_render(component_name, data)\n except requests.exceptions.ReadTimeout:\n print(\"Timeout contacting SSR server.\")\n print(\"Falling back to client side rendering.\")\n return client_side_render(component_name, data)\n", "path": "kuma/wiki/templatetags/ssr.py"}]}
| 2,647 | 917 |
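For the kuma row above: the change boils down to emitting the hydration state as a JavaScript string literal handed to `JSON.parse()` instead of as an object literal, since parsing JSON is cheaper for the engine than parsing JS source. A standalone sketch of that serialisation step follows; it is illustrative only (the repository's real helper is `_render()` in `kuma/wiki/templatetags/ssr.py`, and per the golden diff the SSR server now returns the script payload separately).

```python
import json

def react_data_script_tag(state):
    """Build a <script> tag that hands `state` to the client via JSON.parse.

    json.dumps is applied twice: the inner call produces the JSON payload,
    the outer call wraps it in a valid JavaScript string literal. The '</'
    replacement prevents an embedded '</script>' from terminating the tag.
    """
    payload = json.dumps(json.dumps(state)).replace('</', '<\\/')
    return '<script>window._react_data = JSON.parse({});</script>'.format(payload)

# Example:
# react_data_script_tag({'locale': 'en-US'})
# -> <script>window._react_data = JSON.parse("{\"locale\": \"en-US\"}");</script>
```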
gh_patches_debug_37405
|
rasdani/github-patches
|
git_diff
|
TabbycatDebate__tabbycat-836
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use sockets for self checkin
Using `POST` messages for self checkin has no broadcast for the updated status, causing the change to not be reflected in the checkin views. This also has the reverse consequence that changes in status through the checkins sheet will not be reflected in the landing page (if an event happens while the participant has the page loaded). This is less important though. Don't think it is crucial but for the personal page, the events feed should be filtered to only include events relating to the subject.
</issue>
<code>
[start of tabbycat/checkins/views.py]
1 import json
2
3 from django.contrib import messages
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.views.generic.base import TemplateView
6 from django.template.response import TemplateResponse
7 from django.utils.translation import gettext as _
8
9 from actionlog.mixins import LogActionMixin
10 from actionlog.models import ActionLogEntry
11 from options.utils import use_team_code_names
12 from participants.models import Person, Speaker
13 from utils.misc import reverse_tournament
14 from utils.mixins import AdministratorMixin, AssistantMixin
15 from utils.views import PostOnlyRedirectView
16 from tournaments.mixins import PublicTournamentPageMixin, TournamentMixin
17
18 from .models import Event, PersonIdentifier, VenueIdentifier
19 from .utils import create_identifiers, get_unexpired_checkins
20
21
22 class CheckInPreScanView(TournamentMixin, TemplateView):
23 template_name = 'checkin_scan.html'
24 page_title = _('Scan Identifiers')
25 page_emoji = '📷'
26
27 def get_context_data(self, **kwargs):
28 kwargs["scan_url"] = self.tournament.slug + '/checkins/'
29 return super().get_context_data(**kwargs)
30
31
32 class AdminCheckInPreScanView(AdministratorMixin, CheckInPreScanView):
33 scan_view = 'admin-checkin-scan'
34
35
36 class AssistantCheckInPreScanView(AssistantMixin, CheckInPreScanView):
37 scan_view = 'assistant-checkin-scan'
38
39
40 class BaseCheckInStatusView(TournamentMixin, TemplateView):
41 template_name = 'checkin_status.html'
42 scan_view = False
43
44 def get_context_data(self, **kwargs):
45 events = get_unexpired_checkins(self.tournament, self.window_preference)
46 kwargs["events"] = json.dumps([e.serialize() for e in events])
47 if self.scan_view:
48 kwargs["scan_url"] = self.tournament.slug + '/checkins/'
49 return super().get_context_data(**kwargs)
50
51
52 class CheckInPeopleStatusView(BaseCheckInStatusView):
53 page_emoji = '⌚️'
54 page_title = _("People's Check-In Statuses")
55 window_preference = 'checkin_window_people'
56
57 def get_context_data(self, **kwargs):
58
59 for_admin = True
60 if hasattr(self, '_user_role') and self._user_role == 'public':
61 for_admin = False
62
63 team_codes = use_team_code_names(self.tournament, admin=for_admin)
64 kwargs["team_codes"] = json.dumps(team_codes)
65
66 adjudicators = []
67 for adj in self.tournament.relevant_adjudicators.all().select_related('institution', 'checkin_identifier'):
68 try:
69 code = adj.checkin_identifier.barcode
70 except ObjectDoesNotExist:
71 code = None
72
73 adjudicators.append({
74 'id': adj.id, 'name': adj.name, 'type': 'Adjudicator',
75 'identifier': [code], 'locked': False, 'independent': adj.independent,
76 'institution': adj.institution.serialize if adj.institution else None,
77 })
78 kwargs["adjudicators"] = json.dumps(adjudicators)
79
80 speakers = []
81 for speaker in Speaker.objects.filter(team__tournament=self.tournament).select_related('team', 'team__institution', 'checkin_identifier'):
82 try:
83 code = speaker.checkin_identifier.barcode
84 except ObjectDoesNotExist:
85 code = None
86
87 speakers.append({
88 'id': speaker.id, 'name': speaker.name, 'type': 'Speaker',
89 'identifier': [code], 'locked': False,
90 'team': speaker.team.code_name if team_codes else speaker.team.short_name,
91 'institution': speaker.team.institution.serialize if speaker.team.institution else None,
92 })
93 kwargs["speakers"] = json.dumps(speakers)
94
95 return super().get_context_data(**kwargs)
96
97
98 class AdminCheckInPeopleStatusView(AdministratorMixin, CheckInPeopleStatusView):
99 scan_view = 'admin-checkin-scan'
100
101
102 class AssistantCheckInPeopleStatusView(AssistantMixin, CheckInPeopleStatusView):
103 scan_view = 'assistant-checkin-scan'
104
105
106 class PublicCheckInPeopleStatusView(PublicTournamentPageMixin, CheckInPeopleStatusView):
107 public_page_preference = 'public_checkins'
108
109
110 class CheckInVenuesStatusView(BaseCheckInStatusView):
111 page_emoji = '👜'
112 page_title = _("Venue's Check-In Statuses")
113 window_preference = 'checkin_window_venues'
114
115 def get_context_data(self, **kwargs):
116 venues = []
117 for venue in self.tournament.relevant_venues.select_related('checkin_identifier').prefetch_related('venuecategory_set').all():
118 item = venue.serialize()
119 item['locked'] = False
120 try:
121 item['identifier'] = [venue.checkin_identifier.barcode]
122 except ObjectDoesNotExist:
123 item['identifier'] = [None]
124 venues.append(item)
125 kwargs["venues"] = json.dumps(venues)
126 kwargs["team_codes"] = json.dumps(False)
127
128 return super().get_context_data(**kwargs)
129
130
131 class AdminCheckInVenuesStatusView(AdministratorMixin, CheckInVenuesStatusView):
132 scan_view = 'admin-checkin-scan'
133
134
135 class AssistantCheckInVenuesStatusView(AssistantMixin, CheckInVenuesStatusView):
136 scan_view = 'assistant-checkin-scan'
137
138
139 class SegregatedCheckinsMixin(TournamentMixin):
140
141 def t_speakers(self):
142 return Speaker.objects.filter(
143 team__tournament=self.tournament).values_list(
144 'person_ptr_id', flat=True)
145
146 def speakers_with_barcodes(self):
147 identifiers = PersonIdentifier.objects.all()
148 return identifiers.filter(person_id__in=self.t_speakers())
149
150 def t_adjs(self):
151 return self.tournament.adjudicator_set.values_list(
152 'person_ptr_id', flat=True)
153
154 def adjs_with_barcodes(self):
155 identifiers = PersonIdentifier.objects.all()
156 return identifiers.filter(person_id__in=self.t_adjs())
157
158
159 class CheckInIdentifiersView(SegregatedCheckinsMixin, TemplateView):
160 template_name = 'checkin_ids.html'
161 page_title = _('Make Identifiers')
162 page_emoji = '📛'
163
164 def get_context_data(self, **kwargs):
165 t = self.tournament
166 kwargs["check_in_info"] = {
167 "speakers": {
168 "title": _("Speakers"),
169 "total": self.t_speakers().count(),
170 "in": self.speakers_with_barcodes().count()
171 },
172 "adjudicators": {
173 "title": _("Adjudicators"),
174 "total": self.t_adjs().count(),
175 "in": self.adjs_with_barcodes().count()
176 },
177 "venues": {
178 "title": _("Venues"),
179 "total": t.venue_set.count(),
180 "in": VenueIdentifier.objects.filter(venue__tournament=t).count(),
181 }
182 }
183 return super().get_context_data(**kwargs)
184
185
186 class AdminCheckInIdentifiersView(AdministratorMixin, CheckInIdentifiersView):
187 pass
188
189
190 class AssistantCheckInIdentifiersView(AssistantMixin, CheckInIdentifiersView):
191 pass
192
193
194 class AdminCheckInGenerateView(AdministratorMixin, LogActionMixin,
195 TournamentMixin, PostOnlyRedirectView):
196
197 def get_action_log_type(self):
198 if self.kwargs["kind"] == "speakers":
199 return ActionLogEntry.ACTION_TYPE_CHECKIN_SPEAK_GENERATE
200 elif self.kwargs["kind"] == "adjudicators":
201 return ActionLogEntry.ACTION_TYPE_CHECKIN_ADJ_GENERATE
202 elif self.kwargs["kind"] == "venues":
203 return ActionLogEntry.ACTION_TYPE_CHECKIN_VENUES_GENERATE
204
205 # Providing tournament_slug_url_kwarg isn't working for some reason; so use:
206 def get_redirect_url(self, *args, **kwargs):
207 return reverse_tournament('admin-checkin-identifiers', self.tournament)
208
209 def post(self, request, *args, **kwargs):
210 t = self.tournament
211
212 if self.kwargs["kind"] == "speakers":
213 create_identifiers(PersonIdentifier, Speaker.objects.filter(team__tournament=t))
214 elif self.kwargs["kind"] == "adjudicators":
215 create_identifiers(PersonIdentifier, t.adjudicator_set.all())
216 elif self.kwargs["kind"] == "venues":
217 create_identifiers(VenueIdentifier, t.venue_set.all())
218
219 messages.success(request, _("Generated identifiers for %s" % self.kwargs["kind"]))
220 self.log_action() # Need to call explicitly
221 return super().post(request, *args, **kwargs)
222
223
224 class CheckInPrintablesView(SegregatedCheckinsMixin, TemplateView):
225 template_name = 'checkin_printables.html'
226 page_title = _('Identifiers')
227 page_emoji = '📛'
228
229 def get_context_data(self, **kwargs):
230 if self.kwargs["kind"] == "speakers":
231 kwargs["identifiers"] = self.speakers_with_barcodes().order_by('person__name')
232 elif self.kwargs["kind"] == "adjudicators":
233 kwargs["identifiers"] = self.adjs_with_barcodes().order_by('person__name')
234 elif self.kwargs["kind"] == "venues":
235 venues = self.tournament.relevant_venues
236 kwargs["identifiers"] = VenueIdentifier.objects.filter(venue__in=venues)
237
238 return super().get_context_data(**kwargs)
239
240
241 class AdminCheckInPrintablesView(AdministratorMixin, CheckInPrintablesView):
242 pass
243
244
245 class AssistantCheckInPrintablesView(AssistantMixin, CheckInPrintablesView):
246 pass
247
248
249 class ParticipantCheckinView(PublicTournamentPageMixin, PostOnlyRedirectView):
250
251 public_page_preference = 'public_checkins_submit'
252
253 def post(self, request, *args, **kwargs):
254 t = self.tournament
255
256 action = request.POST['action']
257
258 try:
259 person = Person.objects.get(url_key=kwargs['url_key'])
260 identifier = PersonIdentifier.objects.get(person=person)
261 except ObjectDoesNotExist:
262 messages.error(self.request, _("Could not check you in as you do not have an identifying code — your tab director may need to make you an identifier."))
263 return super().post(request, *args, **kwargs)
264
265 checkins = get_unexpired_checkins(t, 'checkin_window_people')
266 existing_checkin = checkins.filter(identifier=identifier)
267 if action == 'revoke':
268 if existing_checkin.exists():
269 existing_checkin.delete()
270 messages.success(self.request, _("You have revoked your check-in."))
271 else:
272 messages.error(self.request, _("Whoops! Looks like your check-in was already revoked."))
273 elif action == 'checkin':
274 if existing_checkin.exists():
275 messages.error(self.request, _("Whoops! Looks like you're already checked in."))
276 else:
277 Event(identifier=identifier, tournament=t).save()
278 messages.success(self.request, _("You are now checked in."))
279 else:
280 return TemplateResponse(request=self.request, template='400.html', status=400)
281
282 return super().post(request, *args, **kwargs)
283
284 def get_redirect_url(self, *args, **kwargs):
285 return reverse_tournament('privateurls-person-index', self.tournament, kwargs={'url_key': kwargs['url_key']})
286
[end of tabbycat/checkins/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tabbycat/checkins/views.py b/tabbycat/checkins/views.py
--- a/tabbycat/checkins/views.py
+++ b/tabbycat/checkins/views.py
@@ -1,5 +1,7 @@
import json
+from asgiref.sync import async_to_sync
+from channels.layers import get_channel_layer
from django.contrib import messages
from django.core.exceptions import ObjectDoesNotExist
from django.views.generic.base import TemplateView
@@ -15,6 +17,7 @@
from utils.views import PostOnlyRedirectView
from tournaments.mixins import PublicTournamentPageMixin, TournamentMixin
+from .consumers import CheckInEventConsumer
from .models import Event, PersonIdentifier, VenueIdentifier
from .utils import create_identifiers, get_unexpired_checkins
@@ -266,7 +269,6 @@
existing_checkin = checkins.filter(identifier=identifier)
if action == 'revoke':
if existing_checkin.exists():
- existing_checkin.delete()
messages.success(self.request, _("You have revoked your check-in."))
else:
messages.error(self.request, _("Whoops! Looks like your check-in was already revoked."))
@@ -274,11 +276,20 @@
if existing_checkin.exists():
messages.error(self.request, _("Whoops! Looks like you're already checked in."))
else:
- Event(identifier=identifier, tournament=t).save()
messages.success(self.request, _("You are now checked in."))
else:
return TemplateResponse(request=self.request, template='400.html', status=400)
+ group_name = CheckInEventConsumer.group_prefix + "_" + t.slug
+
+ # Override permissions check - no user but authenticated through URL
+ async_to_sync(get_channel_layer().group_send)(
+ group_name, {
+ 'type': 'broadcast_checkin',
+ 'content': { 'barcodes': [identifier.barcode], 'status': action == 'checkin', 'type': 'people', 'component_id': None }
+ }
+ )
+
return super().post(request, *args, **kwargs)
def get_redirect_url(self, *args, **kwargs):
|
{"golden_diff": "diff --git a/tabbycat/checkins/views.py b/tabbycat/checkins/views.py\n--- a/tabbycat/checkins/views.py\n+++ b/tabbycat/checkins/views.py\n@@ -1,5 +1,7 @@\n import json\n \n+from asgiref.sync import async_to_sync\n+from channels.layers import get_channel_layer\n from django.contrib import messages\n from django.core.exceptions import ObjectDoesNotExist\n from django.views.generic.base import TemplateView\n@@ -15,6 +17,7 @@\n from utils.views import PostOnlyRedirectView\n from tournaments.mixins import PublicTournamentPageMixin, TournamentMixin\n \n+from .consumers import CheckInEventConsumer\n from .models import Event, PersonIdentifier, VenueIdentifier\n from .utils import create_identifiers, get_unexpired_checkins\n \n@@ -266,7 +269,6 @@\n existing_checkin = checkins.filter(identifier=identifier)\n if action == 'revoke':\n if existing_checkin.exists():\n- existing_checkin.delete()\n messages.success(self.request, _(\"You have revoked your check-in.\"))\n else:\n messages.error(self.request, _(\"Whoops! Looks like your check-in was already revoked.\"))\n@@ -274,11 +276,20 @@\n if existing_checkin.exists():\n messages.error(self.request, _(\"Whoops! Looks like you're already checked in.\"))\n else:\n- Event(identifier=identifier, tournament=t).save()\n messages.success(self.request, _(\"You are now checked in.\"))\n else:\n return TemplateResponse(request=self.request, template='400.html', status=400)\n \n+ group_name = CheckInEventConsumer.group_prefix + \"_\" + t.slug\n+\n+ # Override permissions check - no user but authenticated through URL\n+ async_to_sync(get_channel_layer().group_send)(\n+ group_name, {\n+ 'type': 'broadcast_checkin',\n+ 'content': { 'barcodes': [identifier.barcode], 'status': action == 'checkin', 'type': 'people', 'component_id': None }\n+ }\n+ )\n+\n return super().post(request, *args, **kwargs)\n \n def get_redirect_url(self, *args, **kwargs):\n", "issue": "Use sockets for self checkin\nUsing `POST` messages for self checkin has no broadcast for the updated status, causing the change to not be reflected in the checkin views. This also has the reverse consequence that changes in status through the checkins sheet will not be reflected in the landing page (if an event happens while the participant has the page loaded). This is less important though. 
Don't think it is crucial but for the personal page, the events feed should be filtered to only include events relating to the subject.\n", "before_files": [{"content": "import json\n\nfrom django.contrib import messages\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.views.generic.base import TemplateView\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import gettext as _\n\nfrom actionlog.mixins import LogActionMixin\nfrom actionlog.models import ActionLogEntry\nfrom options.utils import use_team_code_names\nfrom participants.models import Person, Speaker\nfrom utils.misc import reverse_tournament\nfrom utils.mixins import AdministratorMixin, AssistantMixin\nfrom utils.views import PostOnlyRedirectView\nfrom tournaments.mixins import PublicTournamentPageMixin, TournamentMixin\n\nfrom .models import Event, PersonIdentifier, VenueIdentifier\nfrom .utils import create_identifiers, get_unexpired_checkins\n\n\nclass CheckInPreScanView(TournamentMixin, TemplateView):\n template_name = 'checkin_scan.html'\n page_title = _('Scan Identifiers')\n page_emoji = '\ud83d\udcf7'\n\n def get_context_data(self, **kwargs):\n kwargs[\"scan_url\"] = self.tournament.slug + '/checkins/'\n return super().get_context_data(**kwargs)\n\n\nclass AdminCheckInPreScanView(AdministratorMixin, CheckInPreScanView):\n scan_view = 'admin-checkin-scan'\n\n\nclass AssistantCheckInPreScanView(AssistantMixin, CheckInPreScanView):\n scan_view = 'assistant-checkin-scan'\n\n\nclass BaseCheckInStatusView(TournamentMixin, TemplateView):\n template_name = 'checkin_status.html'\n scan_view = False\n\n def get_context_data(self, **kwargs):\n events = get_unexpired_checkins(self.tournament, self.window_preference)\n kwargs[\"events\"] = json.dumps([e.serialize() for e in events])\n if self.scan_view:\n kwargs[\"scan_url\"] = self.tournament.slug + '/checkins/'\n return super().get_context_data(**kwargs)\n\n\nclass CheckInPeopleStatusView(BaseCheckInStatusView):\n page_emoji = '\u231a\ufe0f'\n page_title = _(\"People's Check-In Statuses\")\n window_preference = 'checkin_window_people'\n\n def get_context_data(self, **kwargs):\n\n for_admin = True\n if hasattr(self, '_user_role') and self._user_role == 'public':\n for_admin = False\n\n team_codes = use_team_code_names(self.tournament, admin=for_admin)\n kwargs[\"team_codes\"] = json.dumps(team_codes)\n\n adjudicators = []\n for adj in self.tournament.relevant_adjudicators.all().select_related('institution', 'checkin_identifier'):\n try:\n code = adj.checkin_identifier.barcode\n except ObjectDoesNotExist:\n code = None\n\n adjudicators.append({\n 'id': adj.id, 'name': adj.name, 'type': 'Adjudicator',\n 'identifier': [code], 'locked': False, 'independent': adj.independent,\n 'institution': adj.institution.serialize if adj.institution else None,\n })\n kwargs[\"adjudicators\"] = json.dumps(adjudicators)\n\n speakers = []\n for speaker in Speaker.objects.filter(team__tournament=self.tournament).select_related('team', 'team__institution', 'checkin_identifier'):\n try:\n code = speaker.checkin_identifier.barcode\n except ObjectDoesNotExist:\n code = None\n\n speakers.append({\n 'id': speaker.id, 'name': speaker.name, 'type': 'Speaker',\n 'identifier': [code], 'locked': False,\n 'team': speaker.team.code_name if team_codes else speaker.team.short_name,\n 'institution': speaker.team.institution.serialize if speaker.team.institution else None,\n })\n kwargs[\"speakers\"] = json.dumps(speakers)\n\n return 
super().get_context_data(**kwargs)\n\n\nclass AdminCheckInPeopleStatusView(AdministratorMixin, CheckInPeopleStatusView):\n scan_view = 'admin-checkin-scan'\n\n\nclass AssistantCheckInPeopleStatusView(AssistantMixin, CheckInPeopleStatusView):\n scan_view = 'assistant-checkin-scan'\n\n\nclass PublicCheckInPeopleStatusView(PublicTournamentPageMixin, CheckInPeopleStatusView):\n public_page_preference = 'public_checkins'\n\n\nclass CheckInVenuesStatusView(BaseCheckInStatusView):\n page_emoji = '\ud83d\udc5c'\n page_title = _(\"Venue's Check-In Statuses\")\n window_preference = 'checkin_window_venues'\n\n def get_context_data(self, **kwargs):\n venues = []\n for venue in self.tournament.relevant_venues.select_related('checkin_identifier').prefetch_related('venuecategory_set').all():\n item = venue.serialize()\n item['locked'] = False\n try:\n item['identifier'] = [venue.checkin_identifier.barcode]\n except ObjectDoesNotExist:\n item['identifier'] = [None]\n venues.append(item)\n kwargs[\"venues\"] = json.dumps(venues)\n kwargs[\"team_codes\"] = json.dumps(False)\n\n return super().get_context_data(**kwargs)\n\n\nclass AdminCheckInVenuesStatusView(AdministratorMixin, CheckInVenuesStatusView):\n scan_view = 'admin-checkin-scan'\n\n\nclass AssistantCheckInVenuesStatusView(AssistantMixin, CheckInVenuesStatusView):\n scan_view = 'assistant-checkin-scan'\n\n\nclass SegregatedCheckinsMixin(TournamentMixin):\n\n def t_speakers(self):\n return Speaker.objects.filter(\n team__tournament=self.tournament).values_list(\n 'person_ptr_id', flat=True)\n\n def speakers_with_barcodes(self):\n identifiers = PersonIdentifier.objects.all()\n return identifiers.filter(person_id__in=self.t_speakers())\n\n def t_adjs(self):\n return self.tournament.adjudicator_set.values_list(\n 'person_ptr_id', flat=True)\n\n def adjs_with_barcodes(self):\n identifiers = PersonIdentifier.objects.all()\n return identifiers.filter(person_id__in=self.t_adjs())\n\n\nclass CheckInIdentifiersView(SegregatedCheckinsMixin, TemplateView):\n template_name = 'checkin_ids.html'\n page_title = _('Make Identifiers')\n page_emoji = '\ud83d\udcdb'\n\n def get_context_data(self, **kwargs):\n t = self.tournament\n kwargs[\"check_in_info\"] = {\n \"speakers\": {\n \"title\": _(\"Speakers\"),\n \"total\": self.t_speakers().count(),\n \"in\": self.speakers_with_barcodes().count()\n },\n \"adjudicators\": {\n \"title\": _(\"Adjudicators\"),\n \"total\": self.t_adjs().count(),\n \"in\": self.adjs_with_barcodes().count()\n },\n \"venues\": {\n \"title\": _(\"Venues\"),\n \"total\": t.venue_set.count(),\n \"in\": VenueIdentifier.objects.filter(venue__tournament=t).count(),\n }\n }\n return super().get_context_data(**kwargs)\n\n\nclass AdminCheckInIdentifiersView(AdministratorMixin, CheckInIdentifiersView):\n pass\n\n\nclass AssistantCheckInIdentifiersView(AssistantMixin, CheckInIdentifiersView):\n pass\n\n\nclass AdminCheckInGenerateView(AdministratorMixin, LogActionMixin,\n TournamentMixin, PostOnlyRedirectView):\n\n def get_action_log_type(self):\n if self.kwargs[\"kind\"] == \"speakers\":\n return ActionLogEntry.ACTION_TYPE_CHECKIN_SPEAK_GENERATE\n elif self.kwargs[\"kind\"] == \"adjudicators\":\n return ActionLogEntry.ACTION_TYPE_CHECKIN_ADJ_GENERATE\n elif self.kwargs[\"kind\"] == \"venues\":\n return ActionLogEntry.ACTION_TYPE_CHECKIN_VENUES_GENERATE\n\n # Providing tournament_slug_url_kwarg isn't working for some reason; so use:\n def get_redirect_url(self, *args, **kwargs):\n return reverse_tournament('admin-checkin-identifiers', self.tournament)\n\n 
def post(self, request, *args, **kwargs):\n t = self.tournament\n\n if self.kwargs[\"kind\"] == \"speakers\":\n create_identifiers(PersonIdentifier, Speaker.objects.filter(team__tournament=t))\n elif self.kwargs[\"kind\"] == \"adjudicators\":\n create_identifiers(PersonIdentifier, t.adjudicator_set.all())\n elif self.kwargs[\"kind\"] == \"venues\":\n create_identifiers(VenueIdentifier, t.venue_set.all())\n\n messages.success(request, _(\"Generated identifiers for %s\" % self.kwargs[\"kind\"]))\n self.log_action() # Need to call explicitly\n return super().post(request, *args, **kwargs)\n\n\nclass CheckInPrintablesView(SegregatedCheckinsMixin, TemplateView):\n template_name = 'checkin_printables.html'\n page_title = _('Identifiers')\n page_emoji = '\ud83d\udcdb'\n\n def get_context_data(self, **kwargs):\n if self.kwargs[\"kind\"] == \"speakers\":\n kwargs[\"identifiers\"] = self.speakers_with_barcodes().order_by('person__name')\n elif self.kwargs[\"kind\"] == \"adjudicators\":\n kwargs[\"identifiers\"] = self.adjs_with_barcodes().order_by('person__name')\n elif self.kwargs[\"kind\"] == \"venues\":\n venues = self.tournament.relevant_venues\n kwargs[\"identifiers\"] = VenueIdentifier.objects.filter(venue__in=venues)\n\n return super().get_context_data(**kwargs)\n\n\nclass AdminCheckInPrintablesView(AdministratorMixin, CheckInPrintablesView):\n pass\n\n\nclass AssistantCheckInPrintablesView(AssistantMixin, CheckInPrintablesView):\n pass\n\n\nclass ParticipantCheckinView(PublicTournamentPageMixin, PostOnlyRedirectView):\n\n public_page_preference = 'public_checkins_submit'\n\n def post(self, request, *args, **kwargs):\n t = self.tournament\n\n action = request.POST['action']\n\n try:\n person = Person.objects.get(url_key=kwargs['url_key'])\n identifier = PersonIdentifier.objects.get(person=person)\n except ObjectDoesNotExist:\n messages.error(self.request, _(\"Could not check you in as you do not have an identifying code \u2014 your tab director may need to make you an identifier.\"))\n return super().post(request, *args, **kwargs)\n\n checkins = get_unexpired_checkins(t, 'checkin_window_people')\n existing_checkin = checkins.filter(identifier=identifier)\n if action == 'revoke':\n if existing_checkin.exists():\n existing_checkin.delete()\n messages.success(self.request, _(\"You have revoked your check-in.\"))\n else:\n messages.error(self.request, _(\"Whoops! Looks like your check-in was already revoked.\"))\n elif action == 'checkin':\n if existing_checkin.exists():\n messages.error(self.request, _(\"Whoops! Looks like you're already checked in.\"))\n else:\n Event(identifier=identifier, tournament=t).save()\n messages.success(self.request, _(\"You are now checked in.\"))\n else:\n return TemplateResponse(request=self.request, template='400.html', status=400)\n\n return super().post(request, *args, **kwargs)\n\n def get_redirect_url(self, *args, **kwargs):\n return reverse_tournament('privateurls-person-index', self.tournament, kwargs={'url_key': kwargs['url_key']})\n", "path": "tabbycat/checkins/views.py"}]}
| 3,829 | 482 |
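For the tabbycat row above: the accepted fix stops writing `Event` rows directly in the view and instead broadcasts the check-in over the same Django Channels group that the check-in status pages listen on, so open pages update immediately. A minimal sketch of that broadcast call follows; the message shape and handler name come from the golden diff, while the wrapper function and the default group prefix are placeholders.

```python
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

def broadcast_checkin(tournament_slug, barcode, checked_in, group_prefix="checkins"):
    # Group name mirrors CheckInEventConsumer.group_prefix + "_" + tournament.slug
    # from the diff above; the prefix value used here is only a placeholder.
    group_name = "{}_{}".format(group_prefix, tournament_slug)
    async_to_sync(get_channel_layer().group_send)(group_name, {
        # 'type' selects the consumer handler (broadcast_checkin) that relays
        # the payload to every websocket connected to the group.
        "type": "broadcast_checkin",
        "content": {
            "barcodes": [barcode],
            "status": checked_in,   # True for check-in, False for revoke
            "type": "people",
            "component_id": None,
        },
    })
```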