Dataset columns:

- problem_id: stringlengths 18-22
- source: stringclasses, 1 value
- task_type: stringclasses, 1 value
- in_source_id: stringlengths 13-58
- prompt: stringlengths 1.71k-18.9k
- golden_diff: stringlengths 145-5.13k
- verification_info: stringlengths 465-23.6k
- num_tokens_prompt: int64, 556-4.1k
- num_tokens_diff: int64, 47-1.02k
problem_id: gh_patches_debug_20487 | source: rasdani/github-patches | task_type: git_diff | in_source_id: AUTOMATIC1111__stable-diffusion-webui-8118

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: GitPython breaking API change in 3.1.30, breaks extension updates
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
[Per this PR](https://github.com/gitpython-developers/GitPython/pull/1518) and [the changelog](https://github.com/gitpython-developers/GitPython/pull/1518), you can no longer pass arbitrary arguments to git commands; the restriction was added to prevent remote code execution.
Easy fix: just use the built-in kwarg that's already there for it.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blame/0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8/modules/extensions.py#LL69C28-L69C28
there may be other places in the code as well, I'll take a peek
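For illustration, a minimal sketch of the keyword-argument form that works with GitPython >= 3.1.30 (the calls mirror the ones in `modules/extensions.py`; the repository path is a placeholder):

```python
import git

# Placeholder path; any cloned extension directory would do.
repo = git.Repo("extensions/some-extension")

# GitPython >= 3.1.30 inserts `--` before positional arguments, so a positional
# "--dry-run" is treated as a refspec (see the traceback below). The keyword
# argument produces the intended flag instead.
for fetch_info in repo.remote().fetch(dry_run=True):
    print(fetch_info.name, fetch_info.flags)

# The git command wrapper takes keyword options as well:
repo.git.fetch(all=True)              # instead of repo.git.fetch('--all')
repo.git.reset("origin", hard=True)   # instead of repo.git.reset('--hard', 'origin')
```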
### Steps to reproduce the problem
1. `pip install 'gitpython>=3.1.30'`
2. run the web-ui and try to install/check for updates while watching terminal
### What should have happened?
Should have successfully run the git commands and updated the git repos
### Commit where the problem happens
0cc0ee1b
### What platforms do you use to access the UI ?
Windows
### What browsers do you use to access the UI ?
Google Chrome
### Command Line Arguments
```Shell
No
```
### List of extensions
No
### Console logs
```Shell
Traceback (most recent call last):
File "/mnt/d/stable-diffusion/stable-diffusion-webui/modules/ui_extensions.py", line 66, in check_updates
ext.check_updates()
File "/mnt/d/stable-diffusion/stable-diffusion-webui/modules/extensions.py", line 69, in check_updates
for fetch in repo.remote().fetch("--dry-run"):
File "/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/remote.py", line 1007, in fetch
res = self._get_fetch_info_from_stderr(proc, progress, kill_after_timeout=kill_after_timeout)
File "/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/remote.py", line 848, in _get_fetch_info_from_stderr
proc.wait(stderr=stderr_text)
File "/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/cmd.py", line 604, in wait
raise GitCommandError(remove_password_if_present(self.args), status, errstr)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git fetch -v -- origin --dry-run
stderr: 'fatal: couldn't find remote ref --dry-run'
```
### Additional information
_No response_
</issue>
<code>
[start of modules/extensions.py]
1 import os
2 import sys
3 import traceback
4
5 import time
6 import git
7
8 from modules import paths, shared
9
10 extensions = []
11 extensions_dir = os.path.join(paths.data_path, "extensions")
12 extensions_builtin_dir = os.path.join(paths.script_path, "extensions-builtin")
13
14 if not os.path.exists(extensions_dir):
15 os.makedirs(extensions_dir)
16
17 def active():
18 return [x for x in extensions if x.enabled]
19
20
21 class Extension:
22 def __init__(self, name, path, enabled=True, is_builtin=False):
23 self.name = name
24 self.path = path
25 self.enabled = enabled
26 self.status = ''
27 self.can_update = False
28 self.is_builtin = is_builtin
29 self.version = ''
30
31 repo = None
32 try:
33 if os.path.exists(os.path.join(path, ".git")):
34 repo = git.Repo(path)
35 except Exception:
36 print(f"Error reading github repository info from {path}:", file=sys.stderr)
37 print(traceback.format_exc(), file=sys.stderr)
38
39 if repo is None or repo.bare:
40 self.remote = None
41 else:
42 try:
43 self.remote = next(repo.remote().urls, None)
44 self.status = 'unknown'
45 head = repo.head.commit
46 ts = time.asctime(time.gmtime(repo.head.commit.committed_date))
47 self.version = f'{head.hexsha[:8]} ({ts})'
48
49 except Exception:
50 self.remote = None
51
52 def list_files(self, subdir, extension):
53 from modules import scripts
54
55 dirpath = os.path.join(self.path, subdir)
56 if not os.path.isdir(dirpath):
57 return []
58
59 res = []
60 for filename in sorted(os.listdir(dirpath)):
61 res.append(scripts.ScriptFile(self.path, filename, os.path.join(dirpath, filename)))
62
63 res = [x for x in res if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]
64
65 return res
66
67 def check_updates(self):
68 repo = git.Repo(self.path)
69 for fetch in repo.remote().fetch("--dry-run"):
70 if fetch.flags != fetch.HEAD_UPTODATE:
71 self.can_update = True
72 self.status = "behind"
73 return
74
75 self.can_update = False
76 self.status = "latest"
77
78 def fetch_and_reset_hard(self):
79 repo = git.Repo(self.path)
80 # Fix: `error: Your local changes to the following files would be overwritten by merge`,
81 # because WSL2 Docker set 755 file permissions instead of 644, this results to the error.
82 repo.git.fetch('--all')
83 repo.git.reset('--hard', 'origin')
84
85
86 def list_extensions():
87 extensions.clear()
88
89 if not os.path.isdir(extensions_dir):
90 return
91
92 paths = []
93 for dirname in [extensions_dir, extensions_builtin_dir]:
94 if not os.path.isdir(dirname):
95 return
96
97 for extension_dirname in sorted(os.listdir(dirname)):
98 path = os.path.join(dirname, extension_dirname)
99 if not os.path.isdir(path):
100 continue
101
102 paths.append((extension_dirname, path, dirname == extensions_builtin_dir))
103
104 for dirname, path, is_builtin in paths:
105 extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)
106 extensions.append(extension)
107
108
[end of modules/extensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/modules/extensions.py b/modules/extensions.py
--- a/modules/extensions.py
+++ b/modules/extensions.py
@@ -66,7 +66,7 @@
def check_updates(self):
repo = git.Repo(self.path)
- for fetch in repo.remote().fetch("--dry-run"):
+ for fetch in repo.remote().fetch(dry_run=True):
if fetch.flags != fetch.HEAD_UPTODATE:
self.can_update = True
self.status = "behind"
@@ -79,8 +79,8 @@
repo = git.Repo(self.path)
# Fix: `error: Your local changes to the following files would be overwritten by merge`,
# because WSL2 Docker set 755 file permissions instead of 644, this results to the error.
- repo.git.fetch('--all')
- repo.git.reset('--hard', 'origin')
+ repo.git.fetch(all=True)
+ repo.git.reset('origin', hard=True)
def list_extensions():
verification_info:
{"golden_diff": "diff --git a/modules/extensions.py b/modules/extensions.py\n--- a/modules/extensions.py\n+++ b/modules/extensions.py\n@@ -66,7 +66,7 @@\n \r\n def check_updates(self):\r\n repo = git.Repo(self.path)\r\n- for fetch in repo.remote().fetch(\"--dry-run\"):\r\n+ for fetch in repo.remote().fetch(dry_run=True):\r\n if fetch.flags != fetch.HEAD_UPTODATE:\r\n self.can_update = True\r\n self.status = \"behind\"\r\n@@ -79,8 +79,8 @@\n repo = git.Repo(self.path)\r\n # Fix: `error: Your local changes to the following files would be overwritten by merge`,\r\n # because WSL2 Docker set 755 file permissions instead of 644, this results to the error.\r\n- repo.git.fetch('--all')\r\n- repo.git.reset('--hard', 'origin')\r\n+ repo.git.fetch(all=True)\r\n+ repo.git.reset('origin', hard=True)\r\n \r\n \r\n def list_extensions():\n", "issue": "[Bug]: GitPython breaking API change in 3.1.30, breaks extension updates\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\n[Per this PR](https://github.com/gitpython-developers/GitPython/pull/1518) and [the changelog](https://github.com/gitpython-developers/GitPython/pull/1518) you can no longer feed arbitrary arguments to prevent remote code execution.\r\n\r\nEasy fix, just use the built kwarg that's already there for it.\r\nhttps://github.com/AUTOMATIC1111/stable-diffusion-webui/blame/0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8/modules/extensions.py#LL69C28-L69C28\r\n\r\nthere may be other places in the code as well, I'll take a peek\n\n### Steps to reproduce the problem\n\n1. `pip install 'gitpython>=3.1.30'\r\n2. run the web-ui and try to install/check for updates while watching terminal\n\n### What should have happened?\n\nShould have successfully run the git commands and updated the git repos\n\n### Commit where the problem happens\n\n0cc0ee1b\n\n### What platforms do you use to access the UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n```Shell\nNo\n```\n\n\n### List of extensions\n\nNo\n\n### Console logs\n\n```Shell\nTraceback (most recent call last):\r\n File \"/mnt/d/stable-diffusion/stable-diffusion-webui/modules/ui_extensions.py\", line 66, in check_updates\r\n ext.check_updates()\r\n File \"/mnt/d/stable-diffusion/stable-diffusion-webui/modules/extensions.py\", line 69, in check_updates\r\n for fetch in repo.remote().fetch(\"--dry-run\"):\r\n File \"/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/remote.py\", line 1007, in fetch\r\n res = self._get_fetch_info_from_stderr(proc, progress, kill_after_timeout=kill_after_timeout)\r\n File \"/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/remote.py\", line 848, in _get_fetch_info_from_stderr\r\n proc.wait(stderr=stderr_text)\r\n File \"/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/cmd.py\", line 604, in wait\r\n raise GitCommandError(remove_password_if_present(self.args), status, errstr)\r\ngit.exc.GitCommandError: Cmd('git') failed due to: exit code(128)\r\n cmdline: git fetch -v -- origin --dry-run\r\n stderr: 'fatal: couldn't find remote ref --dry-run'\n```\n\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "import os\r\nimport sys\r\nimport traceback\r\n\r\nimport time\r\nimport git\r\n\r\nfrom modules import paths, shared\r\n\r\nextensions = 
[]\r\nextensions_dir = os.path.join(paths.data_path, \"extensions\")\r\nextensions_builtin_dir = os.path.join(paths.script_path, \"extensions-builtin\")\r\n\r\nif not os.path.exists(extensions_dir):\r\n os.makedirs(extensions_dir)\r\n\r\ndef active():\r\n return [x for x in extensions if x.enabled]\r\n\r\n\r\nclass Extension:\r\n def __init__(self, name, path, enabled=True, is_builtin=False):\r\n self.name = name\r\n self.path = path\r\n self.enabled = enabled\r\n self.status = ''\r\n self.can_update = False\r\n self.is_builtin = is_builtin\r\n self.version = ''\r\n\r\n repo = None\r\n try:\r\n if os.path.exists(os.path.join(path, \".git\")):\r\n repo = git.Repo(path)\r\n except Exception:\r\n print(f\"Error reading github repository info from {path}:\", file=sys.stderr)\r\n print(traceback.format_exc(), file=sys.stderr)\r\n\r\n if repo is None or repo.bare:\r\n self.remote = None\r\n else:\r\n try:\r\n self.remote = next(repo.remote().urls, None)\r\n self.status = 'unknown'\r\n head = repo.head.commit\r\n ts = time.asctime(time.gmtime(repo.head.commit.committed_date))\r\n self.version = f'{head.hexsha[:8]} ({ts})'\r\n\r\n except Exception:\r\n self.remote = None\r\n\r\n def list_files(self, subdir, extension):\r\n from modules import scripts\r\n\r\n dirpath = os.path.join(self.path, subdir)\r\n if not os.path.isdir(dirpath):\r\n return []\r\n\r\n res = []\r\n for filename in sorted(os.listdir(dirpath)):\r\n res.append(scripts.ScriptFile(self.path, filename, os.path.join(dirpath, filename)))\r\n\r\n res = [x for x in res if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]\r\n\r\n return res\r\n\r\n def check_updates(self):\r\n repo = git.Repo(self.path)\r\n for fetch in repo.remote().fetch(\"--dry-run\"):\r\n if fetch.flags != fetch.HEAD_UPTODATE:\r\n self.can_update = True\r\n self.status = \"behind\"\r\n return\r\n\r\n self.can_update = False\r\n self.status = \"latest\"\r\n\r\n def fetch_and_reset_hard(self):\r\n repo = git.Repo(self.path)\r\n # Fix: `error: Your local changes to the following files would be overwritten by merge`,\r\n # because WSL2 Docker set 755 file permissions instead of 644, this results to the error.\r\n repo.git.fetch('--all')\r\n repo.git.reset('--hard', 'origin')\r\n\r\n\r\ndef list_extensions():\r\n extensions.clear()\r\n\r\n if not os.path.isdir(extensions_dir):\r\n return\r\n\r\n paths = []\r\n for dirname in [extensions_dir, extensions_builtin_dir]:\r\n if not os.path.isdir(dirname):\r\n return\r\n\r\n for extension_dirname in sorted(os.listdir(dirname)):\r\n path = os.path.join(dirname, extension_dirname)\r\n if not os.path.isdir(path):\r\n continue\r\n\r\n paths.append((extension_dirname, path, dirname == extensions_builtin_dir))\r\n\r\n for dirname, path, is_builtin in paths:\r\n extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)\r\n extensions.append(extension)\r\n\r\n", "path": "modules/extensions.py"}]}
| 2,175 | 219 |
problem_id: gh_patches_debug_34955 | source: rasdani/github-patches | task_type: git_diff | in_source_id: elastic__apm-agent-python-881

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'FragmentDefinition' object has no attribute 'operation'"
**Describe the bug**:
I'm using elastic APM with Django 3.1.2 and graphql.
On every GraphQL query, I'm now seeing this error: `AttributeError: 'FragmentDefinition' object has no attribute 'operation'`
The relevant file is: `elasticapm/instrumentation/packages/graphql.py in get_graphql_tx_name at line 99`
**To Reproduce**
I'm not sure yet why the error is occurring, and I'm just getting started with the service. If you can guide me in the right direction, I can create a reproducible example.
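For context, a rough sketch of why this happens and one way to avoid it, assuming graphql-core 2.x AST objects like those handled in `elasticapm/instrumentation/packages/graphql.py` (the sample query is made up):

```python
# A request that defines a fragment before the operation, e.g.
#
#   fragment userFields on User { id name }
#   query GetUser { user { ...userFields } }
#
# parses into definitions = [FragmentDefinition, OperationDefinition], so
# graphql_doc.definitions[0].operation raises the AttributeError shown above.

def get_graphql_tx_name(graphql_doc):
    # Pick the operation definition instead of blindly taking definitions[0].
    op_defs = [
        d for d in graphql_doc.definitions
        if type(d).__name__ == "OperationDefinition"
    ]
    if not op_defs:
        return "GraphQL unknown operation"
    op_def = op_defs[0]
    fields = op_def.selection_set.selections
    return "GraphQL %s %s" % (
        op_def.operation.upper(),
        "+".join(f.name.value for f in fields),
    )
```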
**Environment (please complete the following information)**
- OS: Linux docker Container
- Python version:
- Framework and version : Django 3.1.2
- APM Server version:
- Agent version: 5.9.0
**Additional context**
Add any other context about the problem here.
- Agent config options <!-- be careful not to post sensitive information -->
<details>
<summary>Click to expand</summary>
```
replace this line with your agent config options
remember to mask any sensitive fields like tokens
```
</details>
- `requirements.txt`:
<details>
<summary>Click to expand</summary>
```
replace this line with your `requirements.txt`
```
</details>
</issue>
<code>
[start of elasticapm/instrumentation/packages/graphql.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm import set_transaction_name
32 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
33 from elasticapm.traces import capture_span
34
35
36 class GraphQLExecutorInstrumentation(AbstractInstrumentedModule):
37 name = "graphql"
38
39 instrument_list = [
40 ("graphql.execution.executors.sync", "SyncExecutor.execute"),
41 ("graphql.execution.executors.gevent", "GeventExecutor.execute"),
42 ("graphql.execution.executors.asyncio", "AsyncioExecutor.execute"),
43 ("graphql.execution.executors.process", "ProcessExecutor.execute"),
44 ("graphql.execution.executors.thread", "ThreadExecutor.execute_in_thread"),
45 ("graphql.execution.executors.thread", "ThreadExecutor.execute_in_pool"),
46 ]
47
48 def call(self, module, method, wrapped, instance, args, kwargs):
49 name = "GraphQL"
50
51 info = ""
52 query = args[2]
53
54 if "ResolveInfo" == type(query).__name__:
55 if str(query.return_type) in [
56 'Boolean',
57 'Context',
58 'Date',
59 'DateTime',
60 'Decimal',
61 'Dynamic',
62 'Float',
63 'ID',
64 'Int',
65 'String',
66 'Time',
67 'UUID',
68 'Boolean',
69 'String'
70 ]:
71 return wrapped(*args, **kwargs)
72
73 op = query.operation.operation
74 field = query.field_name
75 info = "%s %s" % (op, field)
76 elif "RequestParams" == type(query).__name__:
77 info = "%s %s" % ("request", query.query)
78 else:
79 info = str(query)
80
81 with capture_span(
82 "%s.%s" % (name, info),
83 span_type="external",
84 span_subtype="graphql",
85 span_action="query"
86 ):
87 return wrapped(*args, **kwargs)
88
89
90 class GraphQLBackendInstrumentation(AbstractInstrumentedModule):
91 name = "graphql"
92
93 instrument_list = [
94 ("graphql.backend.core", "GraphQLCoreBackend.document_from_string"),
95 ("graphql.backend.cache", "GraphQLCachedBackend.document_from_string"),
96 ]
97
98 def get_graphql_tx_name(self, graphql_doc):
99 op = graphql_doc.definitions[0].operation
100 fields = graphql_doc.definitions[0].selection_set.selections
101 return "GraphQL %s %s" % (op.upper(), "+".join([f.name.value for f in fields]))
102
103 def call(self, module, method, wrapped, instance, args, kwargs):
104 graphql_document = wrapped(*args, **kwargs)
105 transaction_name = self.get_graphql_tx_name(graphql_document.document_ast)
106 set_transaction_name(transaction_name)
107 return graphql_document
108
[end of elasticapm/instrumentation/packages/graphql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/elasticapm/instrumentation/packages/graphql.py b/elasticapm/instrumentation/packages/graphql.py
--- a/elasticapm/instrumentation/packages/graphql.py
+++ b/elasticapm/instrumentation/packages/graphql.py
@@ -53,20 +53,20 @@
if "ResolveInfo" == type(query).__name__:
if str(query.return_type) in [
- 'Boolean',
- 'Context',
- 'Date',
- 'DateTime',
- 'Decimal',
- 'Dynamic',
- 'Float',
- 'ID',
- 'Int',
- 'String',
- 'Time',
- 'UUID',
- 'Boolean',
- 'String'
+ "Boolean",
+ "Context",
+ "Date",
+ "DateTime",
+ "Decimal",
+ "Dynamic",
+ "Float",
+ "ID",
+ "Int",
+ "String",
+ "Time",
+ "UUID",
+ "Boolean",
+ "String",
]:
return wrapped(*args, **kwargs)
@@ -78,12 +78,7 @@
else:
info = str(query)
- with capture_span(
- "%s.%s" % (name, info),
- span_type="external",
- span_subtype="graphql",
- span_action="query"
- ):
+ with capture_span("%s.%s" % (name, info), span_type="external", span_subtype="graphql", span_action="query"):
return wrapped(*args, **kwargs)
@@ -96,9 +91,15 @@
]
def get_graphql_tx_name(self, graphql_doc):
- op = graphql_doc.definitions[0].operation
- fields = graphql_doc.definitions[0].selection_set.selections
- return "GraphQL %s %s" % (op.upper(), "+".join([f.name.value for f in fields]))
+ try:
+ op_def = [i for i in graphql_doc.definitions if type(i).__name__ == "OperationDefinition"][0]
+ except KeyError:
+ return "GraphQL unknown operation"
+
+ op = op_def.operation
+ name = op_def.name
+ fields = op_def.selection_set.selections
+ return "GraphQL %s %s" % (op.upper(), name if name else "+".join([f.name.value for f in fields]))
def call(self, module, method, wrapped, instance, args, kwargs):
graphql_document = wrapped(*args, **kwargs)
verification_info:
{"golden_diff": "diff --git a/elasticapm/instrumentation/packages/graphql.py b/elasticapm/instrumentation/packages/graphql.py\n--- a/elasticapm/instrumentation/packages/graphql.py\n+++ b/elasticapm/instrumentation/packages/graphql.py\n@@ -53,20 +53,20 @@\n \n if \"ResolveInfo\" == type(query).__name__:\n if str(query.return_type) in [\n- 'Boolean',\n- 'Context',\n- 'Date',\n- 'DateTime',\n- 'Decimal',\n- 'Dynamic',\n- 'Float',\n- 'ID',\n- 'Int',\n- 'String',\n- 'Time',\n- 'UUID',\n- 'Boolean',\n- 'String'\n+ \"Boolean\",\n+ \"Context\",\n+ \"Date\",\n+ \"DateTime\",\n+ \"Decimal\",\n+ \"Dynamic\",\n+ \"Float\",\n+ \"ID\",\n+ \"Int\",\n+ \"String\",\n+ \"Time\",\n+ \"UUID\",\n+ \"Boolean\",\n+ \"String\",\n ]:\n return wrapped(*args, **kwargs)\n \n@@ -78,12 +78,7 @@\n else:\n info = str(query)\n \n- with capture_span(\n- \"%s.%s\" % (name, info),\n- span_type=\"external\",\n- span_subtype=\"graphql\",\n- span_action=\"query\"\n- ):\n+ with capture_span(\"%s.%s\" % (name, info), span_type=\"external\", span_subtype=\"graphql\", span_action=\"query\"):\n return wrapped(*args, **kwargs)\n \n \n@@ -96,9 +91,15 @@\n ]\n \n def get_graphql_tx_name(self, graphql_doc):\n- op = graphql_doc.definitions[0].operation\n- fields = graphql_doc.definitions[0].selection_set.selections\n- return \"GraphQL %s %s\" % (op.upper(), \"+\".join([f.name.value for f in fields]))\n+ try:\n+ op_def = [i for i in graphql_doc.definitions if type(i).__name__ == \"OperationDefinition\"][0]\n+ except KeyError:\n+ return \"GraphQL unknown operation\"\n+\n+ op = op_def.operation\n+ name = op_def.name\n+ fields = op_def.selection_set.selections\n+ return \"GraphQL %s %s\" % (op.upper(), name if name else \"+\".join([f.name.value for f in fields]))\n \n def call(self, module, method, wrapped, instance, args, kwargs):\n graphql_document = wrapped(*args, **kwargs)\n", "issue": "'FragmentDefinition' object has no attribute 'operation'\"\n**Describe the bug**: \r\nI'm using elastic APM with Django 3.1.2 and graphql.\r\nOn every GraphQL Query, I'm seeing now this error: `AttributeError: 'FragmentDefinition' object has no attribute 'operation'`\r\n\r\nThe relevant file is: `elasticapm/instrumentation/packages/graphql.py in get_graphql_tx_name at line 99`\r\n\r\n**To Reproduce**\r\nI'm not sure yet, why the error is occurring and I'm just getting started with the service. 
If you can guide me to the right direction, I can create a reproducible example.\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Linux docker Container\r\n- Python version:\r\n- Framework and version : Django 3.1.2\r\n- APM Server version: \r\n- Agent version: 5.9.0\r\n\r\n\r\n**Additional context**\r\n\r\nAdd any other context about the problem here.\r\n\r\n- Agent config options <!-- be careful not to post sensitive information -->\r\n <details>\r\n <summary>Click to expand</summary>\r\n\r\n ```\r\n replace this line with your agent config options\r\n remember to mask any sensitive fields like tokens\r\n ```\r\n </details>\r\n- `requirements.txt`:\r\n <details>\r\n <summary>Click to expand</summary>\r\n\r\n ```\r\n replace this line with your `requirements.txt`\r\n ```\r\n </details>\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm import set_transaction_name\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\n\n\nclass GraphQLExecutorInstrumentation(AbstractInstrumentedModule):\n name = \"graphql\"\n\n instrument_list = [\n (\"graphql.execution.executors.sync\", \"SyncExecutor.execute\"),\n (\"graphql.execution.executors.gevent\", \"GeventExecutor.execute\"),\n (\"graphql.execution.executors.asyncio\", \"AsyncioExecutor.execute\"),\n (\"graphql.execution.executors.process\", \"ProcessExecutor.execute\"),\n (\"graphql.execution.executors.thread\", \"ThreadExecutor.execute_in_thread\"),\n (\"graphql.execution.executors.thread\", \"ThreadExecutor.execute_in_pool\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n name = \"GraphQL\"\n\n info = \"\"\n query = args[2]\n\n if \"ResolveInfo\" == type(query).__name__:\n if str(query.return_type) in [\n 'Boolean',\n 'Context',\n 'Date',\n 'DateTime',\n 'Decimal',\n 'Dynamic',\n 'Float',\n 'ID',\n 'Int',\n 'String',\n 'Time',\n 'UUID',\n 'Boolean',\n 'String'\n ]:\n return wrapped(*args, **kwargs)\n\n op = query.operation.operation\n field = query.field_name\n info = \"%s %s\" % (op, field)\n elif \"RequestParams\" == type(query).__name__:\n info = \"%s %s\" % (\"request\", query.query)\n else:\n info = str(query)\n\n with capture_span(\n \"%s.%s\" % (name, info),\n span_type=\"external\",\n span_subtype=\"graphql\",\n span_action=\"query\"\n ):\n return wrapped(*args, **kwargs)\n\n\nclass GraphQLBackendInstrumentation(AbstractInstrumentedModule):\n name = \"graphql\"\n\n instrument_list = [\n (\"graphql.backend.core\", \"GraphQLCoreBackend.document_from_string\"),\n (\"graphql.backend.cache\", \"GraphQLCachedBackend.document_from_string\"),\n ]\n\n def get_graphql_tx_name(self, graphql_doc):\n op = graphql_doc.definitions[0].operation\n fields = graphql_doc.definitions[0].selection_set.selections\n return \"GraphQL %s %s\" % (op.upper(), \"+\".join([f.name.value for f in fields]))\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n graphql_document = wrapped(*args, **kwargs)\n transaction_name = self.get_graphql_tx_name(graphql_document.document_ast)\n set_transaction_name(transaction_name)\n return graphql_document\n", "path": "elasticapm/instrumentation/packages/graphql.py"}]}
| 1,961 | 565 |
problem_id: gh_patches_debug_37682 | source: rasdani/github-patches | task_type: git_diff | in_source_id: apluslms__a-plus-1005

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
A+ Security logs, CEF format
After security audit in spring 2021, some new security-related log events were added, using SecurityLog class. The log output format should be converted to CEF format that can be exported to Aalto ITS logging systems. Also, the current log events should be reviewed: do they contain sufficient information, and should some additional events be added. Note that security log should contain only relevant events, that can be justified from security point of view.
</issue>
<code>
[start of lib/logging.py]
1 from django.http import UnreadablePostError
2 import logging
3 from django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed
4 from django.dispatch import receiver
5 from django.http.request import HttpRequest
6
7 def skip_unreadable_post(record):
8 """Skips log records of unfinished post requests."""
9 return not record.exc_info or not issubclass(record.exc_info[0], UnreadablePostError)
10
11 class SecurityLog:
12 """
13 Static singleton class used for A+ - wide security logging.
14 Django signals are used to track login/logout events.
15 """
16
17 seclogger = logging.getLogger('aplus.security')
18
19 @staticmethod
20 def logger() -> logging.Logger:
21 return SecurityLog.seclogger
22
23 @staticmethod
24 def logevent(request: HttpRequest, type: str, message: str) -> None:
25 # Unit tests do not have user defined in request object.
26 if request and hasattr(request, 'user'):
27 user=request.user
28 ip=request.META.get('REMOTE_ADDR')
29 else:
30 user='?'
31 ip='?'
32 SecurityLog.logger().info("({}/{}): {}: {}".format(
33 user,
34 ip,
35 type,
36 message
37 ))
38
39
40 # This example was used as an inspiration for the following functions:
41 # https://stackoverflow.com/questions/37618473/how-can-i-log-both-successful-and-failed-login-and-logout-attempts-in-django
42 @receiver(user_logged_in)
43 def user_logged_in_callback(sender, request, user, **kwargs):
44 SecurityLog.logevent(request, "login", "")
45
46 @receiver(user_logged_out)
47 def user_logged_out_callback(sender, request, user, **kwargs):
48 SecurityLog.logevent(request, "logout", "")
49
50 # Unfortunately the request object is not passed with this signal,
51 # so we cannot get the IP.
52 @receiver(user_login_failed)
53 def user_login_failed_callback(sender, credentials, **kwargs):
54 SecurityLog.logevent(None, "login-failed","{credentials}".format(
55 credentials=credentials)
56 )
57
[end of lib/logging.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/lib/logging.py b/lib/logging.py
--- a/lib/logging.py
+++ b/lib/logging.py
@@ -3,6 +3,12 @@
from django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed
from django.dispatch import receiver
from django.http.request import HttpRequest
+from format_cef import format_cef
+from aplus import VERSION
+
+
+CEF_VENDOR = 'Aalto'
+CEF_PRODUCT = 'aplus'
def skip_unreadable_post(record):
"""Skips log records of unfinished post requests."""
@@ -10,7 +16,8 @@
class SecurityLog:
"""
- Static singleton class used for A+ - wide security logging.
+ Static singleton class used for A+ - wide security logging,
+ to produce ArcSight Common Event Format (CEF) log.
Django signals are used to track login/logout events.
"""
@@ -21,20 +28,32 @@
return SecurityLog.seclogger
@staticmethod
- def logevent(request: HttpRequest, type: str, message: str) -> None:
- # Unit tests do not have user defined in request object.
- if request and hasattr(request, 'user'):
- user=request.user
- ip=request.META.get('REMOTE_ADDR')
- else:
- user='?'
- ip='?'
- SecurityLog.logger().info("({}/{}): {}: {}".format(
- user,
- ip,
- type,
- message
- ))
+ def logevent(
+ request: HttpRequest,
+ type: str,
+ message: str,
+ severity: int = 5,
+ ) -> None:
+ extensions = {}
+ # Unit tests may not have user or source address defined.
+ if request:
+ if hasattr(request, 'user'):
+ extensions['sourceUserName'] = str(request.user)
+ extensions['sourceUserId'] = str(request.user.id)
+ if (addr := request.META.get('REMOTE_ADDR')):
+ extensions['sourceAddress'] = addr
+
+ SecurityLog.logger().info(
+ format_cef(
+ CEF_VENDOR,
+ CEF_PRODUCT,
+ VERSION,
+ type,
+ message,
+ severity,
+ extensions,
+ ).decode("utf-8")
+ )
# This example was used as an inspiration for the following functions:
@@ -51,6 +70,8 @@
# so we cannot get the IP.
@receiver(user_login_failed)
def user_login_failed_callback(sender, credentials, **kwargs):
- SecurityLog.logevent(None, "login-failed","{credentials}".format(
- credentials=credentials)
- )
+ try:
+ SecurityLog.logevent(None, "login-failed", f"username: {credentials['username']}")
+ except KeyError:
+ # Unit tests do not have 'username' in credentials, let's not fail them for that
+ pass
verification_info:
{"golden_diff": "diff --git a/lib/logging.py b/lib/logging.py\n--- a/lib/logging.py\n+++ b/lib/logging.py\n@@ -3,6 +3,12 @@\n from django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed\n from django.dispatch import receiver\n from django.http.request import HttpRequest\n+from format_cef import format_cef\n+from aplus import VERSION\n+\n+\n+CEF_VENDOR = 'Aalto'\n+CEF_PRODUCT = 'aplus'\n \n def skip_unreadable_post(record):\n \"\"\"Skips log records of unfinished post requests.\"\"\"\n@@ -10,7 +16,8 @@\n \n class SecurityLog:\n \"\"\"\n- Static singleton class used for A+ - wide security logging.\n+ Static singleton class used for A+ - wide security logging,\n+ to produce ArcSight Common Event Format (CEF) log.\n Django signals are used to track login/logout events.\n \"\"\"\n \n@@ -21,20 +28,32 @@\n return SecurityLog.seclogger\n \n @staticmethod\n- def logevent(request: HttpRequest, type: str, message: str) -> None:\n- # Unit tests do not have user defined in request object.\n- if request and hasattr(request, 'user'):\n- user=request.user\n- ip=request.META.get('REMOTE_ADDR')\n- else:\n- user='?'\n- ip='?'\n- SecurityLog.logger().info(\"({}/{}): {}: {}\".format(\n- user,\n- ip,\n- type,\n- message\n- ))\n+ def logevent(\n+ request: HttpRequest,\n+ type: str,\n+ message: str,\n+ severity: int = 5,\n+ ) -> None:\n+ extensions = {}\n+ # Unit tests may not have user or source address defined.\n+ if request:\n+ if hasattr(request, 'user'):\n+ extensions['sourceUserName'] = str(request.user)\n+ extensions['sourceUserId'] = str(request.user.id)\n+ if (addr := request.META.get('REMOTE_ADDR')):\n+ extensions['sourceAddress'] = addr\n+\n+ SecurityLog.logger().info(\n+ format_cef(\n+ CEF_VENDOR,\n+ CEF_PRODUCT,\n+ VERSION,\n+ type,\n+ message,\n+ severity,\n+ extensions,\n+ ).decode(\"utf-8\")\n+ )\n \n \n # This example was used as an inspiration for the following functions:\n@@ -51,6 +70,8 @@\n # so we cannot get the IP.\n @receiver(user_login_failed)\n def user_login_failed_callback(sender, credentials, **kwargs):\n- SecurityLog.logevent(None, \"login-failed\",\"{credentials}\".format(\n- credentials=credentials)\n- )\n+ try:\n+ SecurityLog.logevent(None, \"login-failed\", f\"username: {credentials['username']}\")\n+ except KeyError:\n+ # Unit tests do not have 'username' in credentials, let's not fail them for that\n+ pass\n", "issue": "A+ Security logs, CEF format\nAfter security audit in spring 2021, some new security-related log events were added, using SecurityLog class. The log output format should be converted to CEF format that can be exported to Aalto ITS logging systems. Also, the current log events should be reviewed: do they contain sufficient information, and should some additional events be added. 
Note that security log should contain only relevant events, that can be justified from security point of view.\n", "before_files": [{"content": "from django.http import UnreadablePostError\nimport logging\nfrom django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed\nfrom django.dispatch import receiver\nfrom django.http.request import HttpRequest\n\ndef skip_unreadable_post(record):\n \"\"\"Skips log records of unfinished post requests.\"\"\"\n return not record.exc_info or not issubclass(record.exc_info[0], UnreadablePostError)\n\nclass SecurityLog:\n \"\"\"\n Static singleton class used for A+ - wide security logging.\n Django signals are used to track login/logout events.\n \"\"\"\n\n seclogger = logging.getLogger('aplus.security')\n\n @staticmethod\n def logger() -> logging.Logger:\n return SecurityLog.seclogger\n\n @staticmethod\n def logevent(request: HttpRequest, type: str, message: str) -> None:\n # Unit tests do not have user defined in request object.\n if request and hasattr(request, 'user'):\n user=request.user\n ip=request.META.get('REMOTE_ADDR')\n else:\n user='?'\n ip='?'\n SecurityLog.logger().info(\"({}/{}): {}: {}\".format(\n user,\n ip,\n type,\n message\n ))\n\n\n# This example was used as an inspiration for the following functions:\n# https://stackoverflow.com/questions/37618473/how-can-i-log-both-successful-and-failed-login-and-logout-attempts-in-django\n@receiver(user_logged_in)\ndef user_logged_in_callback(sender, request, user, **kwargs):\n SecurityLog.logevent(request, \"login\", \"\")\n\n@receiver(user_logged_out)\ndef user_logged_out_callback(sender, request, user, **kwargs):\n SecurityLog.logevent(request, \"logout\", \"\")\n\n# Unfortunately the request object is not passed with this signal,\n# so we cannot get the IP.\n@receiver(user_login_failed)\ndef user_login_failed_callback(sender, credentials, **kwargs):\n SecurityLog.logevent(None, \"login-failed\",\"{credentials}\".format(\n credentials=credentials)\n )\n", "path": "lib/logging.py"}]}
| 1,171 | 654 |
problem_id: gh_patches_debug_30069 | source: rasdani/github-patches | task_type: git_diff | in_source_id: pre-commit__pre-commit-2746

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deprecate `python_venv` language
this has been an alias to `python` for a very long time but it cannot be removed without a deprecation period
this is going to need a long deprecation period since it's sorta subtle and usually not the user's fault and will need hook authors to (potentially) make updates
the plan is to do the following:
1. introduce the following in a minor release
- migrate-config will autofix `.pre-commit-config.yaml` usages of `language: python_venv` (there isn't an equivalent `migrate-manifest` -- though users outnumber hook authors by several orders of magnitude)
1. introduce the following in a minor release
- a warning is shown for configuration using the `language: python_venv`
- a warning is shown for repos using `language: python_venv` (do this at install time so it only shows once as to not be super annoying for users who have no control)
- a recommendation for hook authors to also set `minimum_pre_commit_version` to this version
1. a long time passes (typically my deprecation period has been 12-18+ months)
1. introduce the following in a major release
- removal of the `python_venv` alias
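As a rough illustration of the autofix in step 1, the rewrite can be a plain textual substitution over the config file; the YAML below is a made-up local-hook config and the regex is only a sketch of the idea:

```python
import re

config = """\
repos:
-   repo: local
    hooks:
    -   id: example-hook
        name: example hook
        entry: example
        language: python_venv
"""

# `python_venv` has always been an alias for `python`, so rewriting the
# indented `language:` lines is enough.
fixed = re.sub(r'(\n\s+)language: python_venv\b', r'\1language: python', config)
print(fixed)
```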
</issue>
<code>
[start of pre_commit/repository.py]
1 from __future__ import annotations
2
3 import json
4 import logging
5 import os
6 from typing import Any
7 from typing import Sequence
8
9 import pre_commit.constants as C
10 from pre_commit.clientlib import load_manifest
11 from pre_commit.clientlib import LOCAL
12 from pre_commit.clientlib import META
13 from pre_commit.clientlib import parse_version
14 from pre_commit.hook import Hook
15 from pre_commit.languages.all import languages
16 from pre_commit.languages.helpers import environment_dir
17 from pre_commit.prefix import Prefix
18 from pre_commit.store import Store
19 from pre_commit.util import clean_path_on_failure
20 from pre_commit.util import rmtree
21
22
23 logger = logging.getLogger('pre_commit')
24
25
26 def _state_filename_v1(venv: str) -> str:
27 return os.path.join(venv, '.install_state_v1')
28
29
30 def _state_filename_v2(venv: str) -> str:
31 return os.path.join(venv, '.install_state_v2')
32
33
34 def _state(additional_deps: Sequence[str]) -> object:
35 return {'additional_dependencies': sorted(additional_deps)}
36
37
38 def _read_state(venv: str) -> object | None:
39 filename = _state_filename_v1(venv)
40 if not os.path.exists(filename):
41 return None
42 else:
43 with open(filename) as f:
44 return json.load(f)
45
46
47 def _hook_installed(hook: Hook) -> bool:
48 lang = languages[hook.language]
49 if lang.ENVIRONMENT_DIR is None:
50 return True
51
52 venv = environment_dir(
53 hook.prefix,
54 lang.ENVIRONMENT_DIR,
55 hook.language_version,
56 )
57 return (
58 (
59 os.path.exists(_state_filename_v2(venv)) or
60 _read_state(venv) == _state(hook.additional_dependencies)
61 ) and
62 not lang.health_check(hook.prefix, hook.language_version)
63 )
64
65
66 def _hook_install(hook: Hook) -> None:
67 logger.info(f'Installing environment for {hook.src}.')
68 logger.info('Once installed this environment will be reused.')
69 logger.info('This may take a few minutes...')
70
71 lang = languages[hook.language]
72 assert lang.ENVIRONMENT_DIR is not None
73
74 venv = environment_dir(
75 hook.prefix,
76 lang.ENVIRONMENT_DIR,
77 hook.language_version,
78 )
79
80 # There's potentially incomplete cleanup from previous runs
81 # Clean it up!
82 if os.path.exists(venv):
83 rmtree(venv)
84
85 with clean_path_on_failure(venv):
86 lang.install_environment(
87 hook.prefix, hook.language_version, hook.additional_dependencies,
88 )
89 health_error = lang.health_check(hook.prefix, hook.language_version)
90 if health_error:
91 raise AssertionError(
92 f'BUG: expected environment for {hook.language} to be healthy '
93 f'immediately after install, please open an issue describing '
94 f'your environment\n\n'
95 f'more info:\n\n{health_error}',
96 )
97
98 # TODO: remove v1 state writing, no longer needed after pre-commit 3.0
99 # Write our state to indicate we're installed
100 state_filename = _state_filename_v1(venv)
101 staging = f'{state_filename}staging'
102 with open(staging, 'w') as state_file:
103 state_file.write(json.dumps(_state(hook.additional_dependencies)))
104 # Move the file into place atomically to indicate we've installed
105 os.replace(staging, state_filename)
106
107 open(_state_filename_v2(venv), 'a+').close()
108
109
110 def _hook(
111 *hook_dicts: dict[str, Any],
112 root_config: dict[str, Any],
113 ) -> dict[str, Any]:
114 ret, rest = dict(hook_dicts[0]), hook_dicts[1:]
115 for dct in rest:
116 ret.update(dct)
117
118 version = ret['minimum_pre_commit_version']
119 if parse_version(version) > parse_version(C.VERSION):
120 logger.error(
121 f'The hook `{ret["id"]}` requires pre-commit version {version} '
122 f'but version {C.VERSION} is installed. '
123 f'Perhaps run `pip install --upgrade pre-commit`.',
124 )
125 exit(1)
126
127 lang = ret['language']
128 if ret['language_version'] == C.DEFAULT:
129 ret['language_version'] = root_config['default_language_version'][lang]
130 if ret['language_version'] == C.DEFAULT:
131 ret['language_version'] = languages[lang].get_default_version()
132
133 if not ret['stages']:
134 ret['stages'] = root_config['default_stages']
135
136 if languages[lang].ENVIRONMENT_DIR is None:
137 if ret['language_version'] != C.DEFAULT:
138 logger.error(
139 f'The hook `{ret["id"]}` specifies `language_version` but is '
140 f'using language `{lang}` which does not install an '
141 f'environment. '
142 f'Perhaps you meant to use a specific language?',
143 )
144 exit(1)
145 if ret['additional_dependencies']:
146 logger.error(
147 f'The hook `{ret["id"]}` specifies `additional_dependencies` '
148 f'but is using language `{lang}` which does not install an '
149 f'environment. '
150 f'Perhaps you meant to use a specific language?',
151 )
152 exit(1)
153
154 return ret
155
156
157 def _non_cloned_repository_hooks(
158 repo_config: dict[str, Any],
159 store: Store,
160 root_config: dict[str, Any],
161 ) -> tuple[Hook, ...]:
162 def _prefix(language_name: str, deps: Sequence[str]) -> Prefix:
163 language = languages[language_name]
164 # pygrep / script / system / docker_image do not have
165 # environments so they work out of the current directory
166 if language.ENVIRONMENT_DIR is None:
167 return Prefix(os.getcwd())
168 else:
169 return Prefix(store.make_local(deps))
170
171 return tuple(
172 Hook.create(
173 repo_config['repo'],
174 _prefix(hook['language'], hook['additional_dependencies']),
175 _hook(hook, root_config=root_config),
176 )
177 for hook in repo_config['hooks']
178 )
179
180
181 def _cloned_repository_hooks(
182 repo_config: dict[str, Any],
183 store: Store,
184 root_config: dict[str, Any],
185 ) -> tuple[Hook, ...]:
186 repo, rev = repo_config['repo'], repo_config['rev']
187 manifest_path = os.path.join(store.clone(repo, rev), C.MANIFEST_FILE)
188 by_id = {hook['id']: hook for hook in load_manifest(manifest_path)}
189
190 for hook in repo_config['hooks']:
191 if hook['id'] not in by_id:
192 logger.error(
193 f'`{hook["id"]}` is not present in repository {repo}. '
194 f'Typo? Perhaps it is introduced in a newer version? '
195 f'Often `pre-commit autoupdate` fixes this.',
196 )
197 exit(1)
198
199 hook_dcts = [
200 _hook(by_id[hook['id']], hook, root_config=root_config)
201 for hook in repo_config['hooks']
202 ]
203 return tuple(
204 Hook.create(
205 repo_config['repo'],
206 Prefix(store.clone(repo, rev, hook['additional_dependencies'])),
207 hook,
208 )
209 for hook in hook_dcts
210 )
211
212
213 def _repository_hooks(
214 repo_config: dict[str, Any],
215 store: Store,
216 root_config: dict[str, Any],
217 ) -> tuple[Hook, ...]:
218 if repo_config['repo'] in {LOCAL, META}:
219 return _non_cloned_repository_hooks(repo_config, store, root_config)
220 else:
221 return _cloned_repository_hooks(repo_config, store, root_config)
222
223
224 def install_hook_envs(hooks: Sequence[Hook], store: Store) -> None:
225 def _need_installed() -> list[Hook]:
226 seen: set[tuple[Prefix, str, str, tuple[str, ...]]] = set()
227 ret = []
228 for hook in hooks:
229 if hook.install_key not in seen and not _hook_installed(hook):
230 ret.append(hook)
231 seen.add(hook.install_key)
232 return ret
233
234 if not _need_installed():
235 return
236 with store.exclusive_lock():
237 # Another process may have already completed this work
238 for hook in _need_installed():
239 _hook_install(hook)
240
241
242 def all_hooks(root_config: dict[str, Any], store: Store) -> tuple[Hook, ...]:
243 return tuple(
244 hook
245 for repo in root_config['repos']
246 for hook in _repository_hooks(repo, store, root_config)
247 )
248
[end of pre_commit/repository.py]
[start of pre_commit/commands/migrate_config.py]
1 from __future__ import annotations
2
3 import re
4 import textwrap
5
6 import cfgv
7 import yaml
8
9 from pre_commit.clientlib import InvalidConfigError
10 from pre_commit.yaml import yaml_load
11
12
13 def _is_header_line(line: str) -> bool:
14 return line.startswith(('#', '---')) or not line.strip()
15
16
17 def _migrate_map(contents: str) -> str:
18 if isinstance(yaml_load(contents), list):
19 # Find the first non-header line
20 lines = contents.splitlines(True)
21 i = 0
22 # Only loop on non empty configuration file
23 while i < len(lines) and _is_header_line(lines[i]):
24 i += 1
25
26 header = ''.join(lines[:i])
27 rest = ''.join(lines[i:])
28
29 # If they are using the "default" flow style of yaml, this operation
30 # will yield a valid configuration
31 try:
32 trial_contents = f'{header}repos:\n{rest}'
33 yaml_load(trial_contents)
34 contents = trial_contents
35 except yaml.YAMLError:
36 contents = f'{header}repos:\n{textwrap.indent(rest, " " * 4)}'
37
38 return contents
39
40
41 def _migrate_sha_to_rev(contents: str) -> str:
42 return re.sub(r'(\n\s+)sha:', r'\1rev:', contents)
43
44
45 def migrate_config(config_file: str, quiet: bool = False) -> int:
46 with open(config_file) as f:
47 orig_contents = contents = f.read()
48
49 with cfgv.reraise_as(InvalidConfigError):
50 with cfgv.validate_context(f'File {config_file}'):
51 try:
52 yaml_load(orig_contents)
53 except Exception as e:
54 raise cfgv.ValidationError(str(e))
55
56 contents = _migrate_map(contents)
57 contents = _migrate_sha_to_rev(contents)
58
59 if contents != orig_contents:
60 with open(config_file, 'w') as f:
61 f.write(contents)
62
63 print('Configuration has been migrated.')
64 elif not quiet:
65 print('Configuration is already migrated.')
66 return 0
67
[end of pre_commit/commands/migrate_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/pre_commit/commands/migrate_config.py b/pre_commit/commands/migrate_config.py
--- a/pre_commit/commands/migrate_config.py
+++ b/pre_commit/commands/migrate_config.py
@@ -42,6 +42,14 @@
return re.sub(r'(\n\s+)sha:', r'\1rev:', contents)
+def _migrate_python_venv(contents: str) -> str:
+ return re.sub(
+ r'(\n\s+)language: python_venv\b',
+ r'\1language: python',
+ contents,
+ )
+
+
def migrate_config(config_file: str, quiet: bool = False) -> int:
with open(config_file) as f:
orig_contents = contents = f.read()
@@ -55,6 +63,7 @@
contents = _migrate_map(contents)
contents = _migrate_sha_to_rev(contents)
+ contents = _migrate_python_venv(contents)
if contents != orig_contents:
with open(config_file, 'w') as f:
diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -3,6 +3,7 @@
import json
import logging
import os
+import shlex
from typing import Any
from typing import Sequence
@@ -68,6 +69,14 @@
logger.info('Once installed this environment will be reused.')
logger.info('This may take a few minutes...')
+ if hook.language == 'python_venv':
+ logger.warning(
+ f'`repo: {hook.src}` uses deprecated `language: python_venv`. '
+ f'This is an alias for `language: python`. '
+ f'Often `pre-commit autoupdate --repo {shlex.quote(hook.src)}` '
+ f'will fix this.',
+ )
+
lang = languages[hook.language]
assert lang.ENVIRONMENT_DIR is not None
verification_info:
{"golden_diff": "diff --git a/pre_commit/commands/migrate_config.py b/pre_commit/commands/migrate_config.py\n--- a/pre_commit/commands/migrate_config.py\n+++ b/pre_commit/commands/migrate_config.py\n@@ -42,6 +42,14 @@\n return re.sub(r'(\\n\\s+)sha:', r'\\1rev:', contents)\n \n \n+def _migrate_python_venv(contents: str) -> str:\n+ return re.sub(\n+ r'(\\n\\s+)language: python_venv\\b',\n+ r'\\1language: python',\n+ contents,\n+ )\n+\n+\n def migrate_config(config_file: str, quiet: bool = False) -> int:\n with open(config_file) as f:\n orig_contents = contents = f.read()\n@@ -55,6 +63,7 @@\n \n contents = _migrate_map(contents)\n contents = _migrate_sha_to_rev(contents)\n+ contents = _migrate_python_venv(contents)\n \n if contents != orig_contents:\n with open(config_file, 'w') as f:\ndiff --git a/pre_commit/repository.py b/pre_commit/repository.py\n--- a/pre_commit/repository.py\n+++ b/pre_commit/repository.py\n@@ -3,6 +3,7 @@\n import json\n import logging\n import os\n+import shlex\n from typing import Any\n from typing import Sequence\n \n@@ -68,6 +69,14 @@\n logger.info('Once installed this environment will be reused.')\n logger.info('This may take a few minutes...')\n \n+ if hook.language == 'python_venv':\n+ logger.warning(\n+ f'`repo: {hook.src}` uses deprecated `language: python_venv`. '\n+ f'This is an alias for `language: python`. '\n+ f'Often `pre-commit autoupdate --repo {shlex.quote(hook.src)}` '\n+ f'will fix this.',\n+ )\n+\n lang = languages[hook.language]\n assert lang.ENVIRONMENT_DIR is not None\n", "issue": "deprecate `python_venv` language\nthis has been an alias to `python` for a very long time but it cannot be removed without a deprecation period\r\n\r\nthis is going to need a long deprecation period since it's sorta subtle and usually not the user's fault and will need hook authors to (potentially) make updates\r\n\r\nthe plan is to do the following:\r\n\r\n1. introduce the following in a minor release\r\n - migrate-config will autofix `.pre-commit-config.yaml` usages of `language: python_venv` (there isn't an equivalent `migrate-manifest` -- though users outnumber hook authors by several orders of magnitude)\r\n1. introduce the following in a minor release\r\n - a warning is shown for configuration using the `language: python_venv`\r\n - a warning is shown for repos using `language: python_venv` (do this at install time so it only shows once as to not be super annoying for users who have no control)\r\n - a recommendation for hook authors to also set `minimum_pre_commit_version` to this version\r\n1. a long time passes (typically my deprecation period has been 12-18+ months)\r\n1. 
introduce the following in a major release\r\n - removal of the `python_venv` alias\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport json\nimport logging\nimport os\nfrom typing import Any\nfrom typing import Sequence\n\nimport pre_commit.constants as C\nfrom pre_commit.clientlib import load_manifest\nfrom pre_commit.clientlib import LOCAL\nfrom pre_commit.clientlib import META\nfrom pre_commit.clientlib import parse_version\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages.all import languages\nfrom pre_commit.languages.helpers import environment_dir\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.store import Store\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import rmtree\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _state_filename_v1(venv: str) -> str:\n return os.path.join(venv, '.install_state_v1')\n\n\ndef _state_filename_v2(venv: str) -> str:\n return os.path.join(venv, '.install_state_v2')\n\n\ndef _state(additional_deps: Sequence[str]) -> object:\n return {'additional_dependencies': sorted(additional_deps)}\n\n\ndef _read_state(venv: str) -> object | None:\n filename = _state_filename_v1(venv)\n if not os.path.exists(filename):\n return None\n else:\n with open(filename) as f:\n return json.load(f)\n\n\ndef _hook_installed(hook: Hook) -> bool:\n lang = languages[hook.language]\n if lang.ENVIRONMENT_DIR is None:\n return True\n\n venv = environment_dir(\n hook.prefix,\n lang.ENVIRONMENT_DIR,\n hook.language_version,\n )\n return (\n (\n os.path.exists(_state_filename_v2(venv)) or\n _read_state(venv) == _state(hook.additional_dependencies)\n ) and\n not lang.health_check(hook.prefix, hook.language_version)\n )\n\n\ndef _hook_install(hook: Hook) -> None:\n logger.info(f'Installing environment for {hook.src}.')\n logger.info('Once installed this environment will be reused.')\n logger.info('This may take a few minutes...')\n\n lang = languages[hook.language]\n assert lang.ENVIRONMENT_DIR is not None\n\n venv = environment_dir(\n hook.prefix,\n lang.ENVIRONMENT_DIR,\n hook.language_version,\n )\n\n # There's potentially incomplete cleanup from previous runs\n # Clean it up!\n if os.path.exists(venv):\n rmtree(venv)\n\n with clean_path_on_failure(venv):\n lang.install_environment(\n hook.prefix, hook.language_version, hook.additional_dependencies,\n )\n health_error = lang.health_check(hook.prefix, hook.language_version)\n if health_error:\n raise AssertionError(\n f'BUG: expected environment for {hook.language} to be healthy '\n f'immediately after install, please open an issue describing '\n f'your environment\\n\\n'\n f'more info:\\n\\n{health_error}',\n )\n\n # TODO: remove v1 state writing, no longer needed after pre-commit 3.0\n # Write our state to indicate we're installed\n state_filename = _state_filename_v1(venv)\n staging = f'{state_filename}staging'\n with open(staging, 'w') as state_file:\n state_file.write(json.dumps(_state(hook.additional_dependencies)))\n # Move the file into place atomically to indicate we've installed\n os.replace(staging, state_filename)\n\n open(_state_filename_v2(venv), 'a+').close()\n\n\ndef _hook(\n *hook_dicts: dict[str, Any],\n root_config: dict[str, Any],\n) -> dict[str, Any]:\n ret, rest = dict(hook_dicts[0]), hook_dicts[1:]\n for dct in rest:\n ret.update(dct)\n\n version = ret['minimum_pre_commit_version']\n if parse_version(version) > parse_version(C.VERSION):\n logger.error(\n f'The hook `{ret[\"id\"]}` requires pre-commit version {version} '\n f'but 
version {C.VERSION} is installed. '\n f'Perhaps run `pip install --upgrade pre-commit`.',\n )\n exit(1)\n\n lang = ret['language']\n if ret['language_version'] == C.DEFAULT:\n ret['language_version'] = root_config['default_language_version'][lang]\n if ret['language_version'] == C.DEFAULT:\n ret['language_version'] = languages[lang].get_default_version()\n\n if not ret['stages']:\n ret['stages'] = root_config['default_stages']\n\n if languages[lang].ENVIRONMENT_DIR is None:\n if ret['language_version'] != C.DEFAULT:\n logger.error(\n f'The hook `{ret[\"id\"]}` specifies `language_version` but is '\n f'using language `{lang}` which does not install an '\n f'environment. '\n f'Perhaps you meant to use a specific language?',\n )\n exit(1)\n if ret['additional_dependencies']:\n logger.error(\n f'The hook `{ret[\"id\"]}` specifies `additional_dependencies` '\n f'but is using language `{lang}` which does not install an '\n f'environment. '\n f'Perhaps you meant to use a specific language?',\n )\n exit(1)\n\n return ret\n\n\ndef _non_cloned_repository_hooks(\n repo_config: dict[str, Any],\n store: Store,\n root_config: dict[str, Any],\n) -> tuple[Hook, ...]:\n def _prefix(language_name: str, deps: Sequence[str]) -> Prefix:\n language = languages[language_name]\n # pygrep / script / system / docker_image do not have\n # environments so they work out of the current directory\n if language.ENVIRONMENT_DIR is None:\n return Prefix(os.getcwd())\n else:\n return Prefix(store.make_local(deps))\n\n return tuple(\n Hook.create(\n repo_config['repo'],\n _prefix(hook['language'], hook['additional_dependencies']),\n _hook(hook, root_config=root_config),\n )\n for hook in repo_config['hooks']\n )\n\n\ndef _cloned_repository_hooks(\n repo_config: dict[str, Any],\n store: Store,\n root_config: dict[str, Any],\n) -> tuple[Hook, ...]:\n repo, rev = repo_config['repo'], repo_config['rev']\n manifest_path = os.path.join(store.clone(repo, rev), C.MANIFEST_FILE)\n by_id = {hook['id']: hook for hook in load_manifest(manifest_path)}\n\n for hook in repo_config['hooks']:\n if hook['id'] not in by_id:\n logger.error(\n f'`{hook[\"id\"]}` is not present in repository {repo}. '\n f'Typo? Perhaps it is introduced in a newer version? 
'\n f'Often `pre-commit autoupdate` fixes this.',\n )\n exit(1)\n\n hook_dcts = [\n _hook(by_id[hook['id']], hook, root_config=root_config)\n for hook in repo_config['hooks']\n ]\n return tuple(\n Hook.create(\n repo_config['repo'],\n Prefix(store.clone(repo, rev, hook['additional_dependencies'])),\n hook,\n )\n for hook in hook_dcts\n )\n\n\ndef _repository_hooks(\n repo_config: dict[str, Any],\n store: Store,\n root_config: dict[str, Any],\n) -> tuple[Hook, ...]:\n if repo_config['repo'] in {LOCAL, META}:\n return _non_cloned_repository_hooks(repo_config, store, root_config)\n else:\n return _cloned_repository_hooks(repo_config, store, root_config)\n\n\ndef install_hook_envs(hooks: Sequence[Hook], store: Store) -> None:\n def _need_installed() -> list[Hook]:\n seen: set[tuple[Prefix, str, str, tuple[str, ...]]] = set()\n ret = []\n for hook in hooks:\n if hook.install_key not in seen and not _hook_installed(hook):\n ret.append(hook)\n seen.add(hook.install_key)\n return ret\n\n if not _need_installed():\n return\n with store.exclusive_lock():\n # Another process may have already completed this work\n for hook in _need_installed():\n _hook_install(hook)\n\n\ndef all_hooks(root_config: dict[str, Any], store: Store) -> tuple[Hook, ...]:\n return tuple(\n hook\n for repo in root_config['repos']\n for hook in _repository_hooks(repo, store, root_config)\n )\n", "path": "pre_commit/repository.py"}, {"content": "from __future__ import annotations\n\nimport re\nimport textwrap\n\nimport cfgv\nimport yaml\n\nfrom pre_commit.clientlib import InvalidConfigError\nfrom pre_commit.yaml import yaml_load\n\n\ndef _is_header_line(line: str) -> bool:\n return line.startswith(('#', '---')) or not line.strip()\n\n\ndef _migrate_map(contents: str) -> str:\n if isinstance(yaml_load(contents), list):\n # Find the first non-header line\n lines = contents.splitlines(True)\n i = 0\n # Only loop on non empty configuration file\n while i < len(lines) and _is_header_line(lines[i]):\n i += 1\n\n header = ''.join(lines[:i])\n rest = ''.join(lines[i:])\n\n # If they are using the \"default\" flow style of yaml, this operation\n # will yield a valid configuration\n try:\n trial_contents = f'{header}repos:\\n{rest}'\n yaml_load(trial_contents)\n contents = trial_contents\n except yaml.YAMLError:\n contents = f'{header}repos:\\n{textwrap.indent(rest, \" \" * 4)}'\n\n return contents\n\n\ndef _migrate_sha_to_rev(contents: str) -> str:\n return re.sub(r'(\\n\\s+)sha:', r'\\1rev:', contents)\n\n\ndef migrate_config(config_file: str, quiet: bool = False) -> int:\n with open(config_file) as f:\n orig_contents = contents = f.read()\n\n with cfgv.reraise_as(InvalidConfigError):\n with cfgv.validate_context(f'File {config_file}'):\n try:\n yaml_load(orig_contents)\n except Exception as e:\n raise cfgv.ValidationError(str(e))\n\n contents = _migrate_map(contents)\n contents = _migrate_sha_to_rev(contents)\n\n if contents != orig_contents:\n with open(config_file, 'w') as f:\n f.write(contents)\n\n print('Configuration has been migrated.')\n elif not quiet:\n print('Configuration is already migrated.')\n return 0\n", "path": "pre_commit/commands/migrate_config.py"}]}
| 3,904 | 443 |
gh_patches_debug_1256 | rasdani/github-patches | git_diff | conan-io__conan-127 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mark headers as "SYSTEM" headers to silence warnings
Many libraries generate tons of warnings in public headers. WebSocket++ uses auto_ptr for example and many Boost libraries truncate integers implicitly (-Wconversion). To consume these libraries you have to treat them as system headers because GCC won't emit warnings in these.
This is how Conan currently sets the include directories:
``` CMake
include_directories(${CONAN_INCLUDE_DIRS})
```
This is how you would add them as "system" headers to silence warnings:
``` CMake
include_directories(SYSTEM ${CONAN_INCLUDE_DIRS})
```
Is there a reason it is not already done this way?
This issue may apply to configurations other than CMake/GCC, too, but this is the most important one for me.
</issue>
<code>
[start of conans/client/generators/cmake.py]
1 from conans.model import Generator
2 from conans.paths import BUILD_INFO_CMAKE
3
4
5 class DepsCppCmake(object):
6 def __init__(self, deps_cpp_info):
7 self.include_paths = "\n\t\t\t".join('"%s"' % p.replace("\\", "/")
8 for p in deps_cpp_info.include_paths)
9 self.lib_paths = "\n\t\t\t".join('"%s"' % p.replace("\\", "/")
10 for p in deps_cpp_info.lib_paths)
11 self.libs = " ".join(deps_cpp_info.libs)
12 self.defines = "\n\t\t\t".join("-D%s" % d for d in deps_cpp_info.defines)
13 self.cppflags = " ".join(deps_cpp_info.cppflags)
14 self.cflags = " ".join(deps_cpp_info.cflags)
15 self.sharedlinkflags = " ".join(deps_cpp_info.sharedlinkflags)
16 self.exelinkflags = " ".join(deps_cpp_info.exelinkflags)
17 self.bin_paths = "\n\t\t\t".join('"%s"' % p.replace("\\", "/")
18 for p in deps_cpp_info.bin_paths)
19
20 self.rootpath = '"%s"' % deps_cpp_info.rootpath.replace("\\", "/")
21
22
23 class CMakeGenerator(Generator):
24 @property
25 def filename(self):
26 return BUILD_INFO_CMAKE
27
28 @property
29 def content(self):
30 sections = []
31
32 # DEPS VARIABLES
33 template_dep = ('set(CONAN_{dep}_ROOT {deps.rootpath})\n'
34 'set(CONAN_INCLUDE_DIRS_{dep} {deps.include_paths})\n'
35 'set(CONAN_LIB_DIRS_{dep} {deps.lib_paths})\n'
36 'set(CONAN_BIN_DIRS_{dep} {deps.bin_paths})\n'
37 'set(CONAN_LIBS_{dep} {deps.libs})\n'
38 'set(CONAN_DEFINES_{dep} {deps.defines})\n'
39 'set(CONAN_CXX_FLAGS_{dep} "{deps.cppflags}")\n'
40 'set(CONAN_SHARED_LINKER_FLAGS_{dep} "{deps.sharedlinkflags}")\n'
41 'set(CONAN_EXE_LINKER_FLAGS_{dep} "{deps.exelinkflags}")\n'
42 'set(CONAN_C_FLAGS_{dep} "{deps.cflags}")\n')
43
44 for dep_name, dep_cpp_info in self.deps_build_info.dependencies:
45 deps = DepsCppCmake(dep_cpp_info)
46 dep_flags = template_dep.format(dep=dep_name.upper(),
47 deps=deps)
48 sections.append(dep_flags)
49
50 # GENERAL VARIABLES
51 deps = DepsCppCmake(self.deps_build_info)
52
53 template = ('set(CONAN_INCLUDE_DIRS {deps.include_paths} ${{CONAN_INCLUDE_DIRS}})\n'
54 'set(CONAN_LIB_DIRS {deps.lib_paths} ${{CONAN_LIB_DIRS}})\n'
55 'set(CONAN_BIN_DIRS {deps.bin_paths} ${{CONAN_BIN_DIRS}})\n'
56 'set(CONAN_LIBS {deps.libs} ${{CONAN_LIBS}})\n'
57 'set(CONAN_DEFINES {deps.defines} ${{CONAN_DEFINES}})\n'
58 'set(CONAN_CXX_FLAGS "{deps.cppflags} ${{CONAN_CXX_FLAGS}}")\n'
59 'set(CONAN_SHARED_LINKER_FLAGS "{deps.sharedlinkflags} ${{CONAN_SHARED_LINKER_FLAGS}}")\n'
60 'set(CONAN_EXE_LINKER_FLAGS "{deps.exelinkflags} ${{CONAN_EXE_LINKER_FLAGS}}")\n'
61 'set(CONAN_C_FLAGS "{deps.cflags} ${{CONAN_C_FLAGS}}")\n'
62 'set(CONAN_CMAKE_MODULE_PATH {module_paths} ${{CONAN_CMAKE_MODULE_PATH}})')
63
64 rootpaths = [DepsCppCmake(dep_cpp_info).rootpath for _, dep_cpp_info
65 in self.deps_build_info.dependencies]
66 module_paths = " ".join(rootpaths)
67 all_flags = template.format(deps=deps, module_paths=module_paths)
68 sections.append(all_flags)
69
70 # MACROS
71 sections.append(self._aux_cmake_test_setup())
72
73 return "\n".join(sections)
74
75 def _aux_cmake_test_setup(self):
76 return """macro(CONAN_BASIC_SETUP)
77 conan_check_compiler()
78 conan_output_dirs_setup()
79 conan_flags_setup()
80 # CMake can find findXXX.cmake files in the root of packages
81 set(CMAKE_MODULE_PATH ${CONAN_CMAKE_MODULE_PATH} ${CMAKE_MODULE_PATH})
82 endmacro()
83
84 macro(CONAN_FLAGS_SETUP)
85 include_directories(${CONAN_INCLUDE_DIRS})
86 link_directories(${CONAN_LIB_DIRS})
87 add_definitions(${CONAN_DEFINES})
88
89 # For find_library
90 set(CMAKE_INCLUDE_PATH ${CONAN_INCLUDE_DIRS} ${CMAKE_INCLUDE_PATH})
91 set(CMAKE_LIBRARY_PATH ${CONAN_LIB_DIRS} ${CMAKE_LIBRARY_PATH})
92
93 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${CONAN_CXX_FLAGS}")
94 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${CONAN_C_FLAGS}")
95 set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${CONAN_SHARED_LINKER_FLAGS}")
96 set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${CONAN_EXE_LINKER_FLAGS}")
97
98 if(APPLE)
99 # https://cmake.org/Wiki/CMake_RPATH_handling
100 # CONAN GUIDE: All generated libraries should have the id and dependencies to other
101 # dylibs without path, just the name, EX:
102 # libMyLib1.dylib:
103 # libMyLib1.dylib (compatibility version 0.0.0, current version 0.0.0)
104 # libMyLib0.dylib (compatibility version 0.0.0, current version 0.0.0)
105 # /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 120.0.0)
106 # /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)
107 set(CMAKE_SKIP_RPATH 1) # AVOID RPATH FOR *.dylib, ALL LIBS BETWEEN THEM AND THE EXE
108 # SHOULD BE ON THE LINKER RESOLVER PATH (./ IS ONE OF THEM)
109 endif()
110 if(CONAN_LINK_RUNTIME)
111 string(REPLACE "/MD" ${CONAN_LINK_RUNTIME} CMAKE_CXX_FLAGS_RELEASE ${CMAKE_CXX_FLAGS_RELEASE})
112 string(REPLACE "/MDd" ${CONAN_LINK_RUNTIME} CMAKE_CXX_FLAGS_DEBUG ${CMAKE_CXX_FLAGS_DEBUG})
113 string(REPLACE "/MD" ${CONAN_LINK_RUNTIME} CMAKE_C_FLAGS_RELEASE ${CMAKE_C_FLAGS_RELEASE})
114 string(REPLACE "/MDd" ${CONAN_LINK_RUNTIME} CMAKE_C_FLAGS_DEBUG ${CMAKE_C_FLAGS_DEBUG})
115 endif()
116 endmacro()
117
118 macro(CONAN_OUTPUT_DIRS_SETUP)
119 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/bin)
120 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
121 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
122
123 set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/lib)
124 set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY})
125 set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_DEBUG ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY})
126 endmacro()
127
128 macro(CONAN_SPLIT_VERSION VERSION_STRING MAJOR MINOR)
129 #make a list from the version string
130 string(REPLACE "." ";" VERSION_LIST ${${VERSION_STRING}})
131
132 #write output values
133 list(GET VERSION_LIST 0 ${MAJOR})
134 list(GET VERSION_LIST 1 ${MINOR})
135 endmacro()
136
137 macro(ERROR_COMPILER_VERSION)
138 message(FATAL_ERROR "Incorrect '${CONAN_COMPILER}' version 'compiler.version=${CONAN_COMPILER_VERSION}'"
139 " is not the one detected by CMake: '${CMAKE_CXX_COMPILER_ID}="${VERSION_MAJOR}.${VERSION_MINOR}')
140 endmacro()
141
142 macro(CHECK_COMPILER_VERSION)
143
144 CONAN_SPLIT_VERSION(CMAKE_CXX_COMPILER_VERSION VERSION_MAJOR VERSION_MINOR)
145
146 if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC")
147 # https://cmake.org/cmake/help/v3.2/variable/MSVC_VERSION.html
148 if( (${CONAN_COMPILER_VERSION} STREQUAL "14" AND NOT ${VERSION_MAJOR} STREQUAL "19") OR
149 (${CONAN_COMPILER_VERSION} STREQUAL "12" AND NOT ${VERSION_MAJOR} STREQUAL "18") OR
150 (${CONAN_COMPILER_VERSION} STREQUAL "11" AND NOT ${VERSION_MAJOR} STREQUAL "17") OR
151 (${CONAN_COMPILER_VERSION} STREQUAL "10" AND NOT ${VERSION_MAJOR} STREQUAL "16") OR
152 (${CONAN_COMPILER_VERSION} STREQUAL "9" AND NOT ${VERSION_MAJOR} STREQUAL "15") OR
153 (${CONAN_COMPILER_VERSION} STREQUAL "8" AND NOT ${VERSION_MAJOR} STREQUAL "14") OR
154 (${CONAN_COMPILER_VERSION} STREQUAL "7" AND NOT ${VERSION_MAJOR} STREQUAL "13") OR
155 (${CONAN_COMPILER_VERSION} STREQUAL "6" AND NOT ${VERSION_MAJOR} STREQUAL "12") )
156 ERROR_COMPILER_VERSION()
157 endif()
158 elseif("${CONAN_COMPILER}" STREQUAL "gcc" OR "${CONAN_COMPILER}" MATCHES "Clang")
159 if(NOT ${VERSION_MAJOR}.${VERSION_MINOR} VERSION_EQUAL "${CONAN_COMPILER_VERSION}")
160 ERROR_COMPILER_VERSION()
161 endif()
162 else()
163 message("Skipping version checking of not detected compiler...")
164 endif()
165 endmacro()
166
167 macro(CONAN_CHECK_COMPILER)
168 if( ("${CONAN_COMPILER}" STREQUAL "Visual Studio" AND NOT "${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC") OR
169 ("${CONAN_COMPILER}" STREQUAL "gcc" AND NOT "${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU") OR
170 ("${CONAN_COMPILER}" STREQUAL "apple-clang" AND (NOT APPLE OR NOT ${CMAKE_CXX_COMPILER_ID} MATCHES "Clang")) OR
171 ("${CONAN_COMPILER}" STREQUAL "clang" AND NOT ${CMAKE_CXX_COMPILER_ID} MATCHES "Clang") )
172 message(FATAL_ERROR "Incorrect '${CONAN_COMPILER}', is not the one detected by CMake: '${CMAKE_CXX_COMPILER_ID}'")
173 endif()
174 CHECK_COMPILER_VERSION()
175 endmacro()
176 """
177
[end of conans/client/generators/cmake.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conans/client/generators/cmake.py b/conans/client/generators/cmake.py
--- a/conans/client/generators/cmake.py
+++ b/conans/client/generators/cmake.py
@@ -82,7 +82,7 @@
endmacro()
macro(CONAN_FLAGS_SETUP)
- include_directories(${CONAN_INCLUDE_DIRS})
+ include_directories(SYSTEM ${CONAN_INCLUDE_DIRS})
link_directories(${CONAN_LIB_DIRS})
add_definitions(${CONAN_DEFINES})
|
{"golden_diff": "diff --git a/conans/client/generators/cmake.py b/conans/client/generators/cmake.py\n--- a/conans/client/generators/cmake.py\n+++ b/conans/client/generators/cmake.py\n@@ -82,7 +82,7 @@\n endmacro()\n \n macro(CONAN_FLAGS_SETUP)\n- include_directories(${CONAN_INCLUDE_DIRS})\n+ include_directories(SYSTEM ${CONAN_INCLUDE_DIRS})\n link_directories(${CONAN_LIB_DIRS})\n add_definitions(${CONAN_DEFINES})\n", "issue": "mark headers as \"SYSTEM\" headers to silence warnings\nMany libraries generate tons of warnings in public headers. WebSocket++ uses auto_ptr for example and many Boost libraries truncate integers implicitly (-Wconversion). To consume these libraries you have to treat them as system headers because GCC won't emit warnings in these.\n\nThis is how Conan currently sets the include directories:\n\n``` CMake\ninclude_directories(${CONAN_INCLUDE_DIRS})\n```\n\nThis is how you would add them as \"system\" headers to silence warnings:\n\n``` CMake\ninclude_directories(SYSTEM ${CONAN_INCLUDE_DIRS})\n```\n\nIs there a reason it is not already done this way?\nThis issue may apply to configurations other than CMake/GCC, too, but this is the most important one for me.\n\n", "before_files": [{"content": "from conans.model import Generator\nfrom conans.paths import BUILD_INFO_CMAKE\n\n\nclass DepsCppCmake(object):\n def __init__(self, deps_cpp_info):\n self.include_paths = \"\\n\\t\\t\\t\".join('\"%s\"' % p.replace(\"\\\\\", \"/\")\n for p in deps_cpp_info.include_paths)\n self.lib_paths = \"\\n\\t\\t\\t\".join('\"%s\"' % p.replace(\"\\\\\", \"/\")\n for p in deps_cpp_info.lib_paths)\n self.libs = \" \".join(deps_cpp_info.libs)\n self.defines = \"\\n\\t\\t\\t\".join(\"-D%s\" % d for d in deps_cpp_info.defines)\n self.cppflags = \" \".join(deps_cpp_info.cppflags)\n self.cflags = \" \".join(deps_cpp_info.cflags)\n self.sharedlinkflags = \" \".join(deps_cpp_info.sharedlinkflags)\n self.exelinkflags = \" \".join(deps_cpp_info.exelinkflags)\n self.bin_paths = \"\\n\\t\\t\\t\".join('\"%s\"' % p.replace(\"\\\\\", \"/\")\n for p in deps_cpp_info.bin_paths)\n\n self.rootpath = '\"%s\"' % deps_cpp_info.rootpath.replace(\"\\\\\", \"/\")\n\n\nclass CMakeGenerator(Generator):\n @property\n def filename(self):\n return BUILD_INFO_CMAKE\n\n @property\n def content(self):\n sections = []\n\n # DEPS VARIABLES\n template_dep = ('set(CONAN_{dep}_ROOT {deps.rootpath})\\n'\n 'set(CONAN_INCLUDE_DIRS_{dep} {deps.include_paths})\\n'\n 'set(CONAN_LIB_DIRS_{dep} {deps.lib_paths})\\n'\n 'set(CONAN_BIN_DIRS_{dep} {deps.bin_paths})\\n'\n 'set(CONAN_LIBS_{dep} {deps.libs})\\n'\n 'set(CONAN_DEFINES_{dep} {deps.defines})\\n'\n 'set(CONAN_CXX_FLAGS_{dep} \"{deps.cppflags}\")\\n'\n 'set(CONAN_SHARED_LINKER_FLAGS_{dep} \"{deps.sharedlinkflags}\")\\n'\n 'set(CONAN_EXE_LINKER_FLAGS_{dep} \"{deps.exelinkflags}\")\\n'\n 'set(CONAN_C_FLAGS_{dep} \"{deps.cflags}\")\\n')\n\n for dep_name, dep_cpp_info in self.deps_build_info.dependencies:\n deps = DepsCppCmake(dep_cpp_info)\n dep_flags = template_dep.format(dep=dep_name.upper(),\n deps=deps)\n sections.append(dep_flags)\n\n # GENERAL VARIABLES\n deps = DepsCppCmake(self.deps_build_info)\n\n template = ('set(CONAN_INCLUDE_DIRS {deps.include_paths} ${{CONAN_INCLUDE_DIRS}})\\n'\n 'set(CONAN_LIB_DIRS {deps.lib_paths} ${{CONAN_LIB_DIRS}})\\n'\n 'set(CONAN_BIN_DIRS {deps.bin_paths} ${{CONAN_BIN_DIRS}})\\n'\n 'set(CONAN_LIBS {deps.libs} ${{CONAN_LIBS}})\\n'\n 'set(CONAN_DEFINES {deps.defines} ${{CONAN_DEFINES}})\\n'\n 'set(CONAN_CXX_FLAGS \"{deps.cppflags} 
${{CONAN_CXX_FLAGS}}\")\\n'\n 'set(CONAN_SHARED_LINKER_FLAGS \"{deps.sharedlinkflags} ${{CONAN_SHARED_LINKER_FLAGS}}\")\\n'\n 'set(CONAN_EXE_LINKER_FLAGS \"{deps.exelinkflags} ${{CONAN_EXE_LINKER_FLAGS}}\")\\n'\n 'set(CONAN_C_FLAGS \"{deps.cflags} ${{CONAN_C_FLAGS}}\")\\n'\n 'set(CONAN_CMAKE_MODULE_PATH {module_paths} ${{CONAN_CMAKE_MODULE_PATH}})')\n\n rootpaths = [DepsCppCmake(dep_cpp_info).rootpath for _, dep_cpp_info\n in self.deps_build_info.dependencies]\n module_paths = \" \".join(rootpaths)\n all_flags = template.format(deps=deps, module_paths=module_paths)\n sections.append(all_flags)\n\n # MACROS\n sections.append(self._aux_cmake_test_setup())\n\n return \"\\n\".join(sections)\n\n def _aux_cmake_test_setup(self):\n return \"\"\"macro(CONAN_BASIC_SETUP)\n conan_check_compiler()\n conan_output_dirs_setup()\n conan_flags_setup()\n # CMake can find findXXX.cmake files in the root of packages\n set(CMAKE_MODULE_PATH ${CONAN_CMAKE_MODULE_PATH} ${CMAKE_MODULE_PATH})\nendmacro()\n\nmacro(CONAN_FLAGS_SETUP)\n include_directories(${CONAN_INCLUDE_DIRS})\n link_directories(${CONAN_LIB_DIRS})\n add_definitions(${CONAN_DEFINES})\n\n # For find_library\n set(CMAKE_INCLUDE_PATH ${CONAN_INCLUDE_DIRS} ${CMAKE_INCLUDE_PATH})\n set(CMAKE_LIBRARY_PATH ${CONAN_LIB_DIRS} ${CMAKE_LIBRARY_PATH})\n\n set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} ${CONAN_CXX_FLAGS}\")\n set(CMAKE_C_FLAGS \"${CMAKE_C_FLAGS} ${CONAN_C_FLAGS}\")\n set(CMAKE_SHARED_LINKER_FLAGS \"${CMAKE_SHARED_LINKER_FLAGS} ${CONAN_SHARED_LINKER_FLAGS}\")\n set(CMAKE_EXE_LINKER_FLAGS \"${CMAKE_EXE_LINKER_FLAGS} ${CONAN_EXE_LINKER_FLAGS}\")\n\n if(APPLE)\n # https://cmake.org/Wiki/CMake_RPATH_handling\n # CONAN GUIDE: All generated libraries should have the id and dependencies to other\n # dylibs without path, just the name, EX:\n # libMyLib1.dylib:\n # libMyLib1.dylib (compatibility version 0.0.0, current version 0.0.0)\n # libMyLib0.dylib (compatibility version 0.0.0, current version 0.0.0)\n # /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 120.0.0)\n # /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)\n set(CMAKE_SKIP_RPATH 1) # AVOID RPATH FOR *.dylib, ALL LIBS BETWEEN THEM AND THE EXE\n # SHOULD BE ON THE LINKER RESOLVER PATH (./ IS ONE OF THEM)\n endif()\n if(CONAN_LINK_RUNTIME)\n string(REPLACE \"/MD\" ${CONAN_LINK_RUNTIME} CMAKE_CXX_FLAGS_RELEASE ${CMAKE_CXX_FLAGS_RELEASE})\n string(REPLACE \"/MDd\" ${CONAN_LINK_RUNTIME} CMAKE_CXX_FLAGS_DEBUG ${CMAKE_CXX_FLAGS_DEBUG})\n string(REPLACE \"/MD\" ${CONAN_LINK_RUNTIME} CMAKE_C_FLAGS_RELEASE ${CMAKE_C_FLAGS_RELEASE})\n string(REPLACE \"/MDd\" ${CONAN_LINK_RUNTIME} CMAKE_C_FLAGS_DEBUG ${CMAKE_C_FLAGS_DEBUG})\n endif()\nendmacro()\n\nmacro(CONAN_OUTPUT_DIRS_SETUP)\n set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/bin)\n set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})\n set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})\n\n set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/lib)\n set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY})\n set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_DEBUG ${CMAKE_ARCHIVE_OUTPUT_DIRECTORY})\nendmacro()\n\nmacro(CONAN_SPLIT_VERSION VERSION_STRING MAJOR MINOR)\n #make a list from the version string\n string(REPLACE \".\" \";\" VERSION_LIST ${${VERSION_STRING}})\n\n #write output values\n list(GET VERSION_LIST 0 ${MAJOR})\n list(GET VERSION_LIST 1 ${MINOR})\nendmacro()\n\nmacro(ERROR_COMPILER_VERSION)\n message(FATAL_ERROR 
\"Incorrect '${CONAN_COMPILER}' version 'compiler.version=${CONAN_COMPILER_VERSION}'\"\n \" is not the one detected by CMake: '${CMAKE_CXX_COMPILER_ID}=\"${VERSION_MAJOR}.${VERSION_MINOR}')\nendmacro()\n\nmacro(CHECK_COMPILER_VERSION)\n\n CONAN_SPLIT_VERSION(CMAKE_CXX_COMPILER_VERSION VERSION_MAJOR VERSION_MINOR)\n\n if(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"MSVC\")\n # https://cmake.org/cmake/help/v3.2/variable/MSVC_VERSION.html\n if( (${CONAN_COMPILER_VERSION} STREQUAL \"14\" AND NOT ${VERSION_MAJOR} STREQUAL \"19\") OR\n (${CONAN_COMPILER_VERSION} STREQUAL \"12\" AND NOT ${VERSION_MAJOR} STREQUAL \"18\") OR\n (${CONAN_COMPILER_VERSION} STREQUAL \"11\" AND NOT ${VERSION_MAJOR} STREQUAL \"17\") OR\n (${CONAN_COMPILER_VERSION} STREQUAL \"10\" AND NOT ${VERSION_MAJOR} STREQUAL \"16\") OR\n (${CONAN_COMPILER_VERSION} STREQUAL \"9\" AND NOT ${VERSION_MAJOR} STREQUAL \"15\") OR\n (${CONAN_COMPILER_VERSION} STREQUAL \"8\" AND NOT ${VERSION_MAJOR} STREQUAL \"14\") OR\n (${CONAN_COMPILER_VERSION} STREQUAL \"7\" AND NOT ${VERSION_MAJOR} STREQUAL \"13\") OR\n (${CONAN_COMPILER_VERSION} STREQUAL \"6\" AND NOT ${VERSION_MAJOR} STREQUAL \"12\") )\n ERROR_COMPILER_VERSION()\n endif()\n elseif(\"${CONAN_COMPILER}\" STREQUAL \"gcc\" OR \"${CONAN_COMPILER}\" MATCHES \"Clang\")\n if(NOT ${VERSION_MAJOR}.${VERSION_MINOR} VERSION_EQUAL \"${CONAN_COMPILER_VERSION}\")\n ERROR_COMPILER_VERSION()\n endif()\n else()\n message(\"Skipping version checking of not detected compiler...\")\n endif()\nendmacro()\n\nmacro(CONAN_CHECK_COMPILER)\n if( (\"${CONAN_COMPILER}\" STREQUAL \"Visual Studio\" AND NOT \"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"MSVC\") OR\n (\"${CONAN_COMPILER}\" STREQUAL \"gcc\" AND NOT \"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\") OR\n (\"${CONAN_COMPILER}\" STREQUAL \"apple-clang\" AND (NOT APPLE OR NOT ${CMAKE_CXX_COMPILER_ID} MATCHES \"Clang\")) OR\n (\"${CONAN_COMPILER}\" STREQUAL \"clang\" AND NOT ${CMAKE_CXX_COMPILER_ID} MATCHES \"Clang\") )\n message(FATAL_ERROR \"Incorrect '${CONAN_COMPILER}', is not the one detected by CMake: '${CMAKE_CXX_COMPILER_ID}'\")\n endif()\n CHECK_COMPILER_VERSION()\nendmacro()\n\"\"\"\n", "path": "conans/client/generators/cmake.py"}]}
| 3,416 | 113 |
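Context for the patch above: CMake's `SYSTEM` keyword makes the generated include flags use `-isystem` rather than `-I` on GCC/Clang, and those compilers suppress diagnostics coming from `-isystem` directories — which is why the one-word change silences third-party header warnings. A minimal sketch of that flag mapping (the path below is made up):

```python
# Sketch only: how a SYSTEM include directory changes the compiler invocation.
# GCC/Clang suppress warnings that originate from -isystem directories.
include_dir = "/home/user/.conan/data/websocketpp/1.0/include"  # hypothetical path

plain_flag = f"-I {include_dir}"         # include_directories(...): header warnings shown
system_flag = f"-isystem {include_dir}"  # include_directories(SYSTEM ...): header warnings suppressed
```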
gh_patches_debug_18289 | rasdani/github-patches | git_diff | ivy-llc__ivy-15430 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fold_in
</issue>
<code>
[start of ivy/functional/frontends/jax/random.py]
1 # local
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes
4 from ivy.functional.frontends.jax.func_wrapper import (
5 to_ivy_arrays_and_back,
6 handle_jax_dtype,
7 )
8
9
10 @to_ivy_arrays_and_back
11 def PRNGKey(seed):
12 return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)
13
14
15 @handle_jax_dtype
16 @to_ivy_arrays_and_back
17 def uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):
18 return ivy.random_uniform(
19 low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])
20 )
21
22
23 @handle_jax_dtype
24 @to_ivy_arrays_and_back
25 def normal(key, shape=(), dtype=None):
26 return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))
27
28
29 def _get_seed(key):
30 key1, key2 = int(key[0]), int(key[1])
31 return ivy.to_scalar(int("".join(map(str, [key1, key2]))))
32
33
34 @handle_jax_dtype
35 @to_ivy_arrays_and_back
36 @with_unsupported_dtypes(
37 {
38 "0.3.14 and below": (
39 "float16",
40 "bfloat16",
41 )
42 },
43 "jax",
44 )
45 def beta(key, a, b, shape=None, dtype=None):
46 seed = _get_seed(key)
47 return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)
48
49
50 @handle_jax_dtype
51 @to_ivy_arrays_and_back
52 @with_unsupported_dtypes(
53 {
54 "0.3.14 and below": (
55 "float16",
56 "bfloat16",
57 )
58 },
59 "jax",
60 )
61 def dirichlet(key, alpha, shape=None, dtype="float32"):
62 seed = _get_seed(key)
63 alpha = ivy.astype(alpha, dtype)
64 return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)
65
66
67 @handle_jax_dtype
68 @to_ivy_arrays_and_back
69 def cauchy(key, shape=(), dtype="float64"):
70 seed = _get_seed(key)
71 u = ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=seed)
72 return ivy.tan(ivy.pi * (u - 0.5))
73
74
75 @handle_jax_dtype
76 @to_ivy_arrays_and_back
77 @with_unsupported_dtypes(
78 {"0.3.14 and below": ("unsigned", "int8", "int16")},
79 "jax",
80 )
81 def poisson(key, lam, shape=None, dtype=None):
82 seed = _get_seed(key)
83 return ivy.poisson(lam, shape=shape, dtype=dtype, seed=seed)
84
85
86 @handle_jax_dtype
87 @to_ivy_arrays_and_back
88 @with_unsupported_dtypes(
89 {
90 "0.3.14 and below": (
91 "float16",
92 "bfloat16",
93 )
94 },
95 "jax",
96 )
97 def gamma(key, a, shape=None, dtype="float64"):
98 seed = _get_seed(key)
99 return ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed)
100
101
102 @handle_jax_dtype
103 @to_ivy_arrays_and_back
104 @with_unsupported_dtypes(
105 {
106 "0.3.14 and below": (
107 "float16",
108 "bfloat16",
109 )
110 },
111 "jax",
112 )
113 def gumbel(key, shape=(), dtype="float64"):
114 seed = _get_seed(key)
115 uniform_x = ivy.random_uniform(
116 low=0.0,
117 high=1.0,
118 shape=shape,
119 dtype=dtype,
120 seed=seed,
121 )
122 return -ivy.log(-ivy.log(uniform_x))
123
124
125 @handle_jax_dtype
126 @to_ivy_arrays_and_back
127 @with_unsupported_dtypes(
128 {"0.3.14 and below": ("unsigned", "int8", "int16")},
129 "jax",
130 )
131 def rademacher(key, shape, dtype="int64"):
132 seed = _get_seed(key)
133 b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype="float32", seed=seed)
134 b = ivy.astype(b, dtype)
135 return 2 * b - 1
136
137
138 @handle_jax_dtype
139 @to_ivy_arrays_and_back
140 @with_unsupported_dtypes(
141 {
142 "0.3.14 and below": (
143 "float16",
144 "bfloat16",
145 )
146 },
147 "jax",
148 )
149 def generalized_normal(key, p, shape=(), dtype="float64"):
150 seed = _get_seed(key)
151 g = ivy.gamma(1 / p, 1.0, shape=shape, dtype=dtype, seed=seed)
152 b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=dtype, seed=seed)
153 r = 2 * b - 1
154 return r * g ** (1 / p)
155
156
157 def t(key, df, shape=(), dtype="float64"):
158 seed = _get_seed(key)
159 n = ivy.random_normal(shape=shape, dtype=dtype, seed=seed)
160 half_df = df / 2.0
161 g = ivy.gamma(half_df, 1.0, shape=shape, dtype=dtype, seed=seed)
162 return n * ivy.sqrt(ivy.divide(half_df, g))
163
164
165 @handle_jax_dtype
166 @to_ivy_arrays_and_back
167 @with_unsupported_dtypes(
168 {"0.3.14 and below": ("unsigned", "int8", "int16")},
169 "jax",
170 )
171 def randint(key, shape, minval, maxval, dtype="int64"):
172 seed = _get_seed(key)
173 return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)
174
175
176 @to_ivy_arrays_and_back
177 def permutation(key, x, axis=0, independent=False):
178 x = ivy.array(x)
179 seed = _get_seed(key)
180 if not ivy.get_num_dims(x):
181 r = int(x)
182 return ivy.shuffle(ivy.arange(r), axis, seed=seed)
183 if independent:
184 return ivy.shuffle(x, axis, seed=seed)
185 rand = ivy.arange(x.shape[axis])
186 ind = ivy.shuffle(rand, 0, seed=seed)
187
188 return ivy.gather(x, ind, axis=axis)
189
[end of ivy/functional/frontends/jax/random.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ivy/functional/frontends/jax/random.py b/ivy/functional/frontends/jax/random.py
--- a/ivy/functional/frontends/jax/random.py
+++ b/ivy/functional/frontends/jax/random.py
@@ -172,6 +172,17 @@
seed = _get_seed(key)
return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)
+@to_ivy_arrays_and_back
+def bernoulli(key, p=0.5, shape=None):
+ seed = _get_seed(key)
+ return ivy.bernoulli(p, shape=shape, seed=seed)
+
+@to_ivy_arrays_and_back
+def fold_in(key, data):
+ s = ivy.bitwise_left_shift(
+ ivy.asarray(data, dtype=ivy.uint32), ivy.array(32, dtype=ivy.uint32)
+ )
+ return ivy.bitwise_xor(key, s)
@to_ivy_arrays_and_back
def permutation(key, x, axis=0, independent=False):
@@ -184,5 +195,4 @@
return ivy.shuffle(x, axis, seed=seed)
rand = ivy.arange(x.shape[axis])
ind = ivy.shuffle(rand, 0, seed=seed)
-
return ivy.gather(x, ind, axis=axis)
|
{"golden_diff": "diff --git a/ivy/functional/frontends/jax/random.py b/ivy/functional/frontends/jax/random.py\n--- a/ivy/functional/frontends/jax/random.py\n+++ b/ivy/functional/frontends/jax/random.py\n@@ -172,6 +172,17 @@\n seed = _get_seed(key)\n return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)\n \n+@to_ivy_arrays_and_back\n+def bernoulli(key, p=0.5, shape=None):\n+ seed = _get_seed(key)\n+ return ivy.bernoulli(p, shape=shape, seed=seed)\n+\n+@to_ivy_arrays_and_back\n+def fold_in(key, data):\n+ s = ivy.bitwise_left_shift(\n+ ivy.asarray(data, dtype=ivy.uint32), ivy.array(32, dtype=ivy.uint32)\n+ )\n+ return ivy.bitwise_xor(key, s)\n \n @to_ivy_arrays_and_back\n def permutation(key, x, axis=0, independent=False):\n@@ -184,5 +195,4 @@\n return ivy.shuffle(x, axis, seed=seed)\n rand = ivy.arange(x.shape[axis])\n ind = ivy.shuffle(rand, 0, seed=seed)\n-\n return ivy.gather(x, ind, axis=axis)\n", "issue": "fold_in\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\n\n\n@to_ivy_arrays_and_back\ndef PRNGKey(seed):\n return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):\n return ivy.random_uniform(\n low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])\n )\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef normal(key, shape=(), dtype=None):\n return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))\n\n\ndef _get_seed(key):\n key1, key2 = int(key[0]), int(key[1])\n return ivy.to_scalar(int(\"\".join(map(str, [key1, key2]))))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef beta(key, a, b, shape=None, dtype=None):\n seed = _get_seed(key)\n return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef dirichlet(key, alpha, shape=None, dtype=\"float32\"):\n seed = _get_seed(key)\n alpha = ivy.astype(alpha, dtype)\n return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef cauchy(key, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n u = ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=seed)\n return ivy.tan(ivy.pi * (u - 0.5))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef poisson(key, lam, shape=None, dtype=None):\n seed = _get_seed(key)\n return ivy.poisson(lam, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef gamma(key, a, shape=None, dtype=\"float64\"):\n seed = _get_seed(key)\n return ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef gumbel(key, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n uniform_x = ivy.random_uniform(\n low=0.0,\n 
high=1.0,\n shape=shape,\n dtype=dtype,\n seed=seed,\n )\n return -ivy.log(-ivy.log(uniform_x))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef rademacher(key, shape, dtype=\"int64\"):\n seed = _get_seed(key)\n b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=\"float32\", seed=seed)\n b = ivy.astype(b, dtype)\n return 2 * b - 1\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef generalized_normal(key, p, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n g = ivy.gamma(1 / p, 1.0, shape=shape, dtype=dtype, seed=seed)\n b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=dtype, seed=seed)\n r = 2 * b - 1\n return r * g ** (1 / p)\n\n\ndef t(key, df, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n n = ivy.random_normal(shape=shape, dtype=dtype, seed=seed)\n half_df = df / 2.0\n g = ivy.gamma(half_df, 1.0, shape=shape, dtype=dtype, seed=seed)\n return n * ivy.sqrt(ivy.divide(half_df, g))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef randint(key, shape, minval, maxval, dtype=\"int64\"):\n seed = _get_seed(key)\n return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)\n\n\n@to_ivy_arrays_and_back\ndef permutation(key, x, axis=0, independent=False):\n x = ivy.array(x)\n seed = _get_seed(key)\n if not ivy.get_num_dims(x):\n r = int(x)\n return ivy.shuffle(ivy.arange(r), axis, seed=seed)\n if independent:\n return ivy.shuffle(x, axis, seed=seed)\n rand = ivy.arange(x.shape[axis])\n ind = ivy.shuffle(rand, 0, seed=seed)\n\n return ivy.gather(x, ind, axis=axis)\n", "path": "ivy/functional/frontends/jax/random.py"}]}
| 2,534 | 313 |
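For context on the one-word issue above: in JAX, `random.fold_in` deterministically derives a new PRNG key from an existing key and an integer tag (for example, one key per loop step), and the patch above adds an ivy frontend for it. A typical usage shape, shown with the upstream `jax.random` API rather than the new frontend:

```python
import jax

key = jax.random.PRNGKey(0)
# One independent key per iteration, derived from the base key and the step index.
per_step_keys = [jax.random.fold_in(key, step) for step in range(3)]
```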
gh_patches_debug_22756 | rasdani/github-patches | git_diff | streamlit__streamlit-929 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make it easy to screencast a Streamlit app
See spec here: https://docs.google.com/presentation/d/18bNul9a6rjScGhxRmGbZbCcU3uYp_b3ckxA9DVFJlKM/edit
And see a crappy demo I wrote with some code you can steal:
https://gist.github.com/tvst/c114620cf36b77732d5d67f411c55f12
Questions:
* What browsers support this?
* Can we record as mp4 / h264?
* Can we record in a format that works in both Windows and Mac without extra installs? (Linux is not a problem -- users know how to open video in different formats)
</issue>
<code>
[start of e2e/scripts/st_latex.py]
1 import streamlit as st
2
3 st.latex(r"\LaTeX")
4
5 try:
6 import sympy
7
8 a, b = sympy.symbols("a b")
9 out = a + b
10 except:
11 out = "a + b"
12
13 st.latex(out)
14
[end of e2e/scripts/st_latex.py]
[start of e2e/scripts/st_chart_utc_time.py]
1 from datetime import date
2
3 import pandas as pd
4 import streamlit as st
5
6 df = pd.DataFrame(
7 {
8 "index": [
9 date(2019, 8, 9),
10 date(2019, 8, 10),
11 date(2019, 8, 11),
12 date(2019, 8, 12),
13 ],
14 "numbers": [10, 50, 30, 40],
15 }
16 )
17
18 df.set_index("index", inplace=True)
19
20 # st.area/bar/line_chart all use Altair/Vega-Lite under the hood.
21 # By default, Vega-Lite displays time values in the browser's local
22 # time zone. In `altair.generate_chart`, we explicitly set the time
23 # display to UTC, so that our results are consistent. This test verifies
24 # that change!
25 st.area_chart(df)
26 st.bar_chart(df)
27 st.line_chart(df)
28
[end of e2e/scripts/st_chart_utc_time.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/e2e/scripts/st_chart_utc_time.py b/e2e/scripts/st_chart_utc_time.py
--- a/e2e/scripts/st_chart_utc_time.py
+++ b/e2e/scripts/st_chart_utc_time.py
@@ -1,3 +1,18 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018-2020 Streamlit Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
from datetime import date
import pandas as pd
diff --git a/e2e/scripts/st_latex.py b/e2e/scripts/st_latex.py
--- a/e2e/scripts/st_latex.py
+++ b/e2e/scripts/st_latex.py
@@ -1,3 +1,18 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018-2020 Streamlit Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
import streamlit as st
st.latex(r"\LaTeX")
|
{"golden_diff": "diff --git a/e2e/scripts/st_chart_utc_time.py b/e2e/scripts/st_chart_utc_time.py\n--- a/e2e/scripts/st_chart_utc_time.py\n+++ b/e2e/scripts/st_chart_utc_time.py\n@@ -1,3 +1,18 @@\n+# -*- coding: utf-8 -*-\n+# Copyright 2018-2020 Streamlit Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n from datetime import date\n \n import pandas as pd\ndiff --git a/e2e/scripts/st_latex.py b/e2e/scripts/st_latex.py\n--- a/e2e/scripts/st_latex.py\n+++ b/e2e/scripts/st_latex.py\n@@ -1,3 +1,18 @@\n+# -*- coding: utf-8 -*-\n+# Copyright 2018-2020 Streamlit Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n import streamlit as st\n \n st.latex(r\"\\LaTeX\")\n", "issue": "Make it easy to screencast a Streamlit app\nSee spec here: https://docs.google.com/presentation/d/18bNul9a6rjScGhxRmGbZbCcU3uYp_b3ckxA9DVFJlKM/edit\r\n\r\nAnd see a crappy demo I wrote with some code you can steal:\r\nhttps://gist.github.com/tvst/c114620cf36b77732d5d67f411c55f12\r\n\r\nQuestions:\r\n* What browsers support this?\r\n* Can we record as mp4 / h264?\r\n* Can we record in a format that works in both Windows and Mac without extra installs? (Linux is not a problem -- users know how to open video in different formats)\n", "before_files": [{"content": "import streamlit as st\n\nst.latex(r\"\\LaTeX\")\n\ntry:\n import sympy\n\n a, b = sympy.symbols(\"a b\")\n out = a + b\nexcept:\n out = \"a + b\"\n\nst.latex(out)\n", "path": "e2e/scripts/st_latex.py"}, {"content": "from datetime import date\n\nimport pandas as pd\nimport streamlit as st\n\ndf = pd.DataFrame(\n {\n \"index\": [\n date(2019, 8, 9),\n date(2019, 8, 10),\n date(2019, 8, 11),\n date(2019, 8, 12),\n ],\n \"numbers\": [10, 50, 30, 40],\n }\n)\n\ndf.set_index(\"index\", inplace=True)\n\n# st.area/bar/line_chart all use Altair/Vega-Lite under the hood.\n# By default, Vega-Lite displays time values in the browser's local\n# time zone. In `altair.generate_chart`, we explicitly set the time\n# display to UTC, so that our results are consistent. This test verifies\n# that change!\nst.area_chart(df)\nst.bar_chart(df)\nst.line_chart(df)\n", "path": "e2e/scripts/st_chart_utc_time.py"}]}
| 1,075 | 434 |
gh_patches_debug_8419 | rasdani/github-patches | git_diff | searxng__searxng-2830 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: internetarchivescholar engine
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/searxng/searxng
Branch: master
Version: 2023.9.19+3ac7c40b6
<!-- Check if these values are correct -->
**How did you install SearXNG?**
<!-- Did you install SearXNG using the official wiki or using searxng-docker
or manually by executing the searx/webapp.py file? -->
**What happened?**
<!-- A clear and concise description of what the bug is. -->
**How To Reproduce**
<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Screenshots & Logs**
<!-- If applicable, add screenshots, logs to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. -->
**Technical report**
Error
* Error: KeyError
* Percentage: 25
* Parameters: `()`
* File name: `searx/engines/internet_archive_scholar.py:59`
* Function: `response`
* Code: `'title': result['biblio']['title'],`
</issue>
<code>
[start of searx/engines/internet_archive_scholar.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Internet Archive scholar(science)
4 """
5
6 from datetime import datetime
7 from urllib.parse import urlencode
8 from searx.utils import html_to_text
9
10 about = {
11 "website": "https://scholar.archive.org/",
12 "wikidata_id": "Q115667709",
13 "official_api_documentation": "https://scholar.archive.org/api/redoc",
14 "use_official_api": True,
15 "require_api_key": False,
16 "results": "JSON",
17 }
18 categories = ['science', 'scientific publications']
19 paging = True
20
21 base_url = "https://scholar.archive.org"
22 results_per_page = 15
23
24
25 def request(query, params):
26 args = {
27 "q": query,
28 "limit": results_per_page,
29 "offset": (params["pageno"] - 1) * results_per_page,
30 }
31 params["url"] = f"{base_url}/search?{urlencode(args)}"
32 params["headers"]["Accept"] = "application/json"
33 return params
34
35
36 def response(resp):
37 results = []
38
39 json = resp.json()
40
41 for result in json["results"]:
42 publishedDate, content, doi = None, '', None
43
44 if result['biblio'].get('release_date'):
45 publishedDate = datetime.strptime(result['biblio']['release_date'], "%Y-%m-%d")
46
47 if len(result['abstracts']) > 0:
48 content = result['abstracts'][0].get('body')
49 elif len(result['_highlights']) > 0:
50 content = result['_highlights'][0]
51
52 if len(result['releases']) > 0:
53 doi = result['releases'][0].get('doi')
54
55 results.append(
56 {
57 'template': 'paper.html',
58 'url': result['fulltext']['access_url'],
59 'title': result['biblio']['title'],
60 'content': html_to_text(content),
61 'publisher': result['biblio'].get('publisher'),
62 'doi': doi,
63 'journal': result['biblio'].get('container_name'),
64 'authors': result['biblio'].get('contrib_names'),
65 'tags': result['tags'],
66 'publishedDate': publishedDate,
67 'issns': result['biblio'].get('issns'),
68 'pdf_url': result['fulltext'].get('access_url'),
69 }
70 )
71
72 return results
73
[end of searx/engines/internet_archive_scholar.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/internet_archive_scholar.py b/searx/engines/internet_archive_scholar.py
--- a/searx/engines/internet_archive_scholar.py
+++ b/searx/engines/internet_archive_scholar.py
@@ -56,7 +56,7 @@
{
'template': 'paper.html',
'url': result['fulltext']['access_url'],
- 'title': result['biblio']['title'],
+ 'title': result['biblio'].get('title') or result['biblio'].get('container_name'),
'content': html_to_text(content),
'publisher': result['biblio'].get('publisher'),
'doi': doi,
|
{"golden_diff": "diff --git a/searx/engines/internet_archive_scholar.py b/searx/engines/internet_archive_scholar.py\n--- a/searx/engines/internet_archive_scholar.py\n+++ b/searx/engines/internet_archive_scholar.py\n@@ -56,7 +56,7 @@\n {\n 'template': 'paper.html',\n 'url': result['fulltext']['access_url'],\n- 'title': result['biblio']['title'],\n+ 'title': result['biblio'].get('title') or result['biblio'].get('container_name'),\n 'content': html_to_text(content),\n 'publisher': result['biblio'].get('publisher'),\n 'doi': doi,\n", "issue": "Bug: internetarchivescholar engine\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nRepository: https://github.com/searxng/searxng\r\nBranch: master\r\nVersion: 2023.9.19+3ac7c40b6\r\n<!-- Check if these values are correct -->\r\n\r\n**How did you install SearXNG?**\r\n<!-- Did you install SearXNG using the official wiki or using searxng-docker\r\nor manually by executing the searx/webapp.py file? -->\r\n**What happened?**\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n**How To Reproduce**\r\n<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\n\r\n**Technical report**\r\n\r\nError\r\n * Error: KeyError\r\n * Percentage: 25\r\n * Parameters: `()`\r\n * File name: `searx/engines/internet_archive_scholar.py:59`\r\n * Function: `response`\r\n * Code: `'title': result['biblio']['title'],`\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Internet Archive scholar(science)\n\"\"\"\n\nfrom datetime import datetime\nfrom urllib.parse import urlencode\nfrom searx.utils import html_to_text\n\nabout = {\n \"website\": \"https://scholar.archive.org/\",\n \"wikidata_id\": \"Q115667709\",\n \"official_api_documentation\": \"https://scholar.archive.org/api/redoc\",\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": \"JSON\",\n}\ncategories = ['science', 'scientific publications']\npaging = True\n\nbase_url = \"https://scholar.archive.org\"\nresults_per_page = 15\n\n\ndef request(query, params):\n args = {\n \"q\": query,\n \"limit\": results_per_page,\n \"offset\": (params[\"pageno\"] - 1) * results_per_page,\n }\n params[\"url\"] = f\"{base_url}/search?{urlencode(args)}\"\n params[\"headers\"][\"Accept\"] = \"application/json\"\n return params\n\n\ndef response(resp):\n results = []\n\n json = resp.json()\n\n for result in json[\"results\"]:\n publishedDate, content, doi = None, '', None\n\n if result['biblio'].get('release_date'):\n publishedDate = datetime.strptime(result['biblio']['release_date'], \"%Y-%m-%d\")\n\n if len(result['abstracts']) > 0:\n content = result['abstracts'][0].get('body')\n elif len(result['_highlights']) > 0:\n content = result['_highlights'][0]\n\n if len(result['releases']) > 0:\n doi = result['releases'][0].get('doi')\n\n results.append(\n {\n 'template': 'paper.html',\n 'url': result['fulltext']['access_url'],\n 'title': result['biblio']['title'],\n 'content': html_to_text(content),\n 'publisher': result['biblio'].get('publisher'),\n 'doi': doi,\n 'journal': result['biblio'].get('container_name'),\n 'authors': 
result['biblio'].get('contrib_names'),\n 'tags': result['tags'],\n 'publishedDate': publishedDate,\n 'issns': result['biblio'].get('issns'),\n 'pdf_url': result['fulltext'].get('access_url'),\n }\n )\n\n return results\n", "path": "searx/engines/internet_archive_scholar.py"}]}
| 1,530 | 163 |
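The fix above is the usual defensive-lookup pattern for optional JSON fields: prefer `dict.get` with a fallback over direct indexing. A small standalone illustration with a hypothetical record shaped like the failing response:

```python
# Hypothetical API record missing the "title" field, as in the reported KeyError.
biblio = {"container_name": "Example Journal", "publisher": "Example Press"}

title = biblio.get("title") or biblio.get("container_name")
assert title == "Example Journal"  # falls back instead of raising KeyError
```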
gh_patches_debug_21006 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-2983 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Docker 1.13-dev: Multi-User Setup, can't change anything with to document details
### Description
Hi, I am pretty new to paperless. I wanted to start directly with the multi-user feature for my whole family. So I decided to install the dev-docker image. It runs quote nice, there is just this one thing...
When a document has permission set to owner: Person A and read permission to Person B. I cannot change anything within the document e.g. Name, Permission, Tags, Document type etc.
When I try to do so, there is just this error:
`Error saving document: Http failure response for https://example.com/api/documents/23/: 500 OK`
The change thru the webui won't be saved. When I remove the permission (read-permission to Person B) everything is working thru the webui. When I put the permission back, the error comes back as well.
BTW I can change things without a problem within the Django admin panel.
### Steps to reproduce
1. Have a document
2. Have different users
3. Share this document as an owner to a second person
4. Error pops up when trying to change information on the document
5. remove permission, leave owner
6. Document can be changed
7. give read permission to the second person again
8. same error comes back
### Webserver logs
```bash
[2023-03-31 22:13:36,819] [ERROR] [django.request] Internal Server Error: /api/documents/23/
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/asgiref/sync.py", line 486, in thread_handler
raise exc_info[1]
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 43, in inner
response = await get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
response = await wrapped_callback(
File "/usr/local/lib/python3.9/site-packages/asgiref/sync.py", line 448, in __call__
ret = await asyncio.wait_for(future, timeout=None)
File "/usr/local/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
return await fut
File "/usr/local/lib/python3.9/site-packages/asgiref/current_thread_executor.py", line 22, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.9/site-packages/asgiref/sync.py", line 490, in thread_handler
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 55, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/src/paperless/src/documents/views.py", line 278, in update
response = super().update(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 68, in update
self.perform_update(serializer)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 78, in perform_update
serializer.save()
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 207, in save
self.instance = self.update(self.instance, validated_data)
File "/usr/src/paperless/src/documents/serialisers.py", line 419, in update
super().update(instance, validated_data)
File "/usr/src/paperless/src/documents/serialisers.py", line 210, in update
self._set_permissions(validated_data["set_permissions"], instance)
File "/usr/src/paperless/src/documents/serialisers.py", line 149, in _set_permissions
set_permissions_for_object(permissions, object)
File "/usr/src/paperless/src/documents/permissions.py", line 70, in set_permissions_for_object
if len(users_to_remove) > 0:
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 376, in __len__
self._fetch_all()
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 1867, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 87, in __iter__
results = compiler.execute_sql(
File "/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1385, in execute_sql
sql, params = self.as_sql()
File "/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 603, in as_sql
raise NotSupportedError(
django.db.utils.NotSupportedError: difference is not supported on this database backend.
```
### Browser logs
_No response_
### Paperless-ngx version
1.13.0-dev
### Host OS
Docker@Ubuntu 22.04
### Installation method
Docker - official image
### Browser
Safari
### Configuration changes
_No response_
### Other
_No response_
</issue>
<code>
[start of src/documents/permissions.py]
1 from django.contrib.auth.models import Group
2 from django.contrib.auth.models import Permission
3 from django.contrib.auth.models import User
4 from django.contrib.contenttypes.models import ContentType
5 from guardian.models import GroupObjectPermission
6 from guardian.shortcuts import assign_perm
7 from guardian.shortcuts import get_users_with_perms
8 from guardian.shortcuts import remove_perm
9 from rest_framework.permissions import BasePermission
10 from rest_framework.permissions import DjangoObjectPermissions
11
12
13 class PaperlessObjectPermissions(DjangoObjectPermissions):
14 """
15 A permissions backend that checks for object-level permissions
16 or for ownership.
17 """
18
19 perms_map = {
20 "GET": ["%(app_label)s.view_%(model_name)s"],
21 "OPTIONS": ["%(app_label)s.view_%(model_name)s"],
22 "HEAD": ["%(app_label)s.view_%(model_name)s"],
23 "POST": ["%(app_label)s.add_%(model_name)s"],
24 "PUT": ["%(app_label)s.change_%(model_name)s"],
25 "PATCH": ["%(app_label)s.change_%(model_name)s"],
26 "DELETE": ["%(app_label)s.delete_%(model_name)s"],
27 }
28
29 def has_object_permission(self, request, view, obj):
30 if hasattr(obj, "owner") and obj.owner is not None:
31 if request.user == obj.owner:
32 return True
33 else:
34 return super().has_object_permission(request, view, obj)
35 else:
36 return True # no owner
37
38
39 class PaperlessAdminPermissions(BasePermission):
40 def has_permission(self, request, view):
41 return request.user.has_perm("admin.view_logentry")
42
43
44 def get_groups_with_only_permission(obj, codename):
45 ctype = ContentType.objects.get_for_model(obj)
46 permission = Permission.objects.get(content_type=ctype, codename=codename)
47 group_object_perm_group_ids = (
48 GroupObjectPermission.objects.filter(
49 object_pk=obj.pk,
50 content_type=ctype,
51 )
52 .filter(permission=permission)
53 .values_list("group_id")
54 )
55 return Group.objects.filter(id__in=group_object_perm_group_ids).distinct()
56
57
58 def set_permissions_for_object(permissions, object):
59 for action in permissions:
60 permission = f"{action}_{object.__class__.__name__.lower()}"
61 # users
62 users_to_add = User.objects.filter(id__in=permissions[action]["users"])
63 users_to_remove = get_users_with_perms(
64 object,
65 only_with_perms_in=[permission],
66 with_group_users=False,
67 )
68 if len(users_to_add) > 0 and len(users_to_remove) > 0:
69 users_to_remove = users_to_remove.difference(users_to_add)
70 if len(users_to_remove) > 0:
71 for user in users_to_remove:
72 remove_perm(permission, user, object)
73 if len(users_to_add) > 0:
74 for user in users_to_add:
75 assign_perm(permission, user, object)
76 if action == "change":
77 # change gives view too
78 assign_perm(
79 f"view_{object.__class__.__name__.lower()}",
80 user,
81 object,
82 )
83 # groups
84 groups_to_add = Group.objects.filter(id__in=permissions[action]["groups"])
85 groups_to_remove = get_groups_with_only_permission(
86 object,
87 permission,
88 )
89 if len(groups_to_add) > 0 and len(groups_to_remove) > 0:
90 groups_to_remove = groups_to_remove.difference(groups_to_add)
91 if len(groups_to_remove) > 0:
92 for group in groups_to_remove:
93 remove_perm(permission, group, object)
94 if len(groups_to_add) > 0:
95 for group in groups_to_add:
96 assign_perm(permission, group, object)
97 if action == "change":
98 # change gives view too
99 assign_perm(
100 f"view_{object.__class__.__name__.lower()}",
101 group,
102 object,
103 )
104
[end of src/documents/permissions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/documents/permissions.py b/src/documents/permissions.py
--- a/src/documents/permissions.py
+++ b/src/documents/permissions.py
@@ -66,7 +66,7 @@
with_group_users=False,
)
if len(users_to_add) > 0 and len(users_to_remove) > 0:
- users_to_remove = users_to_remove.difference(users_to_add)
+ users_to_remove = users_to_remove.exclude(id__in=users_to_add)
if len(users_to_remove) > 0:
for user in users_to_remove:
remove_perm(permission, user, object)
@@ -87,7 +87,7 @@
permission,
)
if len(groups_to_add) > 0 and len(groups_to_remove) > 0:
- groups_to_remove = groups_to_remove.difference(groups_to_add)
+ groups_to_remove = groups_to_remove.exclude(id__in=groups_to_add)
if len(groups_to_remove) > 0:
for group in groups_to_remove:
remove_perm(permission, group, object)
|
{"golden_diff": "diff --git a/src/documents/permissions.py b/src/documents/permissions.py\n--- a/src/documents/permissions.py\n+++ b/src/documents/permissions.py\n@@ -66,7 +66,7 @@\n with_group_users=False,\n )\n if len(users_to_add) > 0 and len(users_to_remove) > 0:\n- users_to_remove = users_to_remove.difference(users_to_add)\n+ users_to_remove = users_to_remove.exclude(id__in=users_to_add)\n if len(users_to_remove) > 0:\n for user in users_to_remove:\n remove_perm(permission, user, object)\n@@ -87,7 +87,7 @@\n permission,\n )\n if len(groups_to_add) > 0 and len(groups_to_remove) > 0:\n- groups_to_remove = groups_to_remove.difference(groups_to_add)\n+ groups_to_remove = groups_to_remove.exclude(id__in=groups_to_add)\n if len(groups_to_remove) > 0:\n for group in groups_to_remove:\n remove_perm(permission, group, object)\n", "issue": "[BUG] Docker 1.13-dev: Multi-User Setup, can't change anything with to document details\n### Description\n\nHi, I am pretty new to paperless. I wanted to start directly with the multi-user feature for my whole family. So I decided to install the dev-docker image. It runs quote nice, there is just this one thing...\r\nWhen a document has permission set to owner: Person A and read permission to Person B. I cannot change anything within the document e.g. Name, Permission, Tags, Document type etc.\r\n\r\nWhen I try to do so, there is just this error:\r\n`Error saving document: Http failure response for https://example.com/api/documents/23/: 500 OK`\r\n\r\nThe change thru the webui won't be saved. When I remove the permission (read-permission to Person B) everything is working thru the webui. When I put the permission back, the error comes back as well.\r\n\r\nBTW I can change things without a problem within the Django admin panel.\n\n### Steps to reproduce\n\n1. Have a document\r\n2. Have different users\r\n3. Share this document as an owner to a second person\r\n4. Error pops up when trying to change information on the document\r\n5. remove permission, leave owner\r\n6. Document can be changed\r\n7. give read permission to the second person again\r\n8. 
same error comes back\n\n### Webserver logs\n\n```bash\n[2023-03-31 22:13:36,819] [ERROR] [django.request] Internal Server Error: /api/documents/23/\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/asgiref/sync.py\", line 486, in thread_handler\r\n raise exc_info[1]\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py\", line 43, in inner\r\n response = await get_response(request)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py\", line 253, in _get_response_async\r\n response = await wrapped_callback(\r\n File \"/usr/local/lib/python3.9/site-packages/asgiref/sync.py\", line 448, in __call__\r\n ret = await asyncio.wait_for(future, timeout=None)\r\n File \"/usr/local/lib/python3.9/asyncio/tasks.py\", line 442, in wait_for\r\n return await fut\r\n File \"/usr/local/lib/python3.9/site-packages/asgiref/current_thread_executor.py\", line 22, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/asgiref/sync.py\", line 490, in thread_handler\r\n return func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py\", line 55, in wrapped_view\r\n return view_func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py\", line 125, in view\r\n return self.dispatch(request, *args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 509, in dispatch\r\n response = self.handle_exception(exc)\r\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 469, in handle_exception\r\n self.raise_uncaught_exception(exc)\r\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 480, in raise_uncaught_exception\r\n raise exc\r\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 506, in dispatch\r\n response = handler(request, *args, **kwargs)\r\n File \"/usr/src/paperless/src/documents/views.py\", line 278, in update\r\n response = super().update(request, *args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py\", line 68, in update\r\n self.perform_update(serializer)\r\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py\", line 78, in perform_update\r\n serializer.save()\r\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py\", line 207, in save\r\n self.instance = self.update(self.instance, validated_data)\r\n File \"/usr/src/paperless/src/documents/serialisers.py\", line 419, in update\r\n super().update(instance, validated_data)\r\n File \"/usr/src/paperless/src/documents/serialisers.py\", line 210, in update\r\n self._set_permissions(validated_data[\"set_permissions\"], instance)\r\n File \"/usr/src/paperless/src/documents/serialisers.py\", line 149, in _set_permissions\r\n set_permissions_for_object(permissions, object)\r\n File \"/usr/src/paperless/src/documents/permissions.py\", line 70, in set_permissions_for_object\r\n if len(users_to_remove) > 0:\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/models/query.py\", line 376, in __len__\r\n self._fetch_all()\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/models/query.py\", line 1867, in _fetch_all\r\n self._result_cache = list(self._iterable_class(self))\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/models/query.py\", line 87, in __iter__\r\n results = compiler.execute_sql(\r\n File 
\"/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py\", line 1385, in execute_sql\r\n sql, params = self.as_sql()\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py\", line 603, in as_sql\r\n raise NotSupportedError(\r\ndjango.db.utils.NotSupportedError: difference is not supported on this database backend.\n```\n\n\n### Browser logs\n\n_No response_\n\n### Paperless-ngx version\n\n1.13.0-dev\n\n### Host OS\n\nDocker@Ubuntu 22.04\n\n### Installation method\n\nDocker - official image\n\n### Browser\n\nSafari\n\n### Configuration changes\n\n_No response_\n\n### Other\n\n_No response_\n", "before_files": [{"content": "from django.contrib.auth.models import Group\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.auth.models import User\nfrom django.contrib.contenttypes.models import ContentType\nfrom guardian.models import GroupObjectPermission\nfrom guardian.shortcuts import assign_perm\nfrom guardian.shortcuts import get_users_with_perms\nfrom guardian.shortcuts import remove_perm\nfrom rest_framework.permissions import BasePermission\nfrom rest_framework.permissions import DjangoObjectPermissions\n\n\nclass PaperlessObjectPermissions(DjangoObjectPermissions):\n \"\"\"\n A permissions backend that checks for object-level permissions\n or for ownership.\n \"\"\"\n\n perms_map = {\n \"GET\": [\"%(app_label)s.view_%(model_name)s\"],\n \"OPTIONS\": [\"%(app_label)s.view_%(model_name)s\"],\n \"HEAD\": [\"%(app_label)s.view_%(model_name)s\"],\n \"POST\": [\"%(app_label)s.add_%(model_name)s\"],\n \"PUT\": [\"%(app_label)s.change_%(model_name)s\"],\n \"PATCH\": [\"%(app_label)s.change_%(model_name)s\"],\n \"DELETE\": [\"%(app_label)s.delete_%(model_name)s\"],\n }\n\n def has_object_permission(self, request, view, obj):\n if hasattr(obj, \"owner\") and obj.owner is not None:\n if request.user == obj.owner:\n return True\n else:\n return super().has_object_permission(request, view, obj)\n else:\n return True # no owner\n\n\nclass PaperlessAdminPermissions(BasePermission):\n def has_permission(self, request, view):\n return request.user.has_perm(\"admin.view_logentry\")\n\n\ndef get_groups_with_only_permission(obj, codename):\n ctype = ContentType.objects.get_for_model(obj)\n permission = Permission.objects.get(content_type=ctype, codename=codename)\n group_object_perm_group_ids = (\n GroupObjectPermission.objects.filter(\n object_pk=obj.pk,\n content_type=ctype,\n )\n .filter(permission=permission)\n .values_list(\"group_id\")\n )\n return Group.objects.filter(id__in=group_object_perm_group_ids).distinct()\n\n\ndef set_permissions_for_object(permissions, object):\n for action in permissions:\n permission = f\"{action}_{object.__class__.__name__.lower()}\"\n # users\n users_to_add = User.objects.filter(id__in=permissions[action][\"users\"])\n users_to_remove = get_users_with_perms(\n object,\n only_with_perms_in=[permission],\n with_group_users=False,\n )\n if len(users_to_add) > 0 and len(users_to_remove) > 0:\n users_to_remove = users_to_remove.difference(users_to_add)\n if len(users_to_remove) > 0:\n for user in users_to_remove:\n remove_perm(permission, user, object)\n if len(users_to_add) > 0:\n for user in users_to_add:\n assign_perm(permission, user, object)\n if action == \"change\":\n # change gives view too\n assign_perm(\n f\"view_{object.__class__.__name__.lower()}\",\n user,\n object,\n )\n # groups\n groups_to_add = Group.objects.filter(id__in=permissions[action][\"groups\"])\n groups_to_remove = 
get_groups_with_only_permission(\n object,\n permission,\n )\n if len(groups_to_add) > 0 and len(groups_to_remove) > 0:\n groups_to_remove = groups_to_remove.difference(groups_to_add)\n if len(groups_to_remove) > 0:\n for group in groups_to_remove:\n remove_perm(permission, group, object)\n if len(groups_to_add) > 0:\n for group in groups_to_add:\n assign_perm(permission, group, object)\n if action == \"change\":\n # change gives view too\n assign_perm(\n f\"view_{object.__class__.__name__.lower()}\",\n group,\n object,\n )\n", "path": "src/documents/permissions.py"}]}
| 2,958 | 231 |
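
The root cause in the traceback above is that `QuerySet.difference()` compiles to an SQL `EXCEPT` clause, which MariaDB/MySQL rejects with `NotSupportedError`; the patch swaps it for `exclude(id__in=...)`, which compiles to a plain `NOT IN` subquery that every Django backend supports. A minimal sketch of the portable form, assuming a configured Django project and the stock `User` model (the helper name is illustrative, not part of the project):

```python
from django.contrib.auth.models import User


def portable_difference(base, minus):
    """Set difference that also works on MariaDB/MySQL.

    QuerySet.difference() emits EXCEPT and raises NotSupportedError there;
    exclude(id__in=...) emits a NOT IN subquery instead.
    """
    return base.exclude(id__in=minus.values_list("id", flat=True))


# Usage sketch, mirroring set_permissions_for_object():
# users_to_remove = portable_difference(users_to_remove, users_to_add)
```
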
gh_patches_debug_17987
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-3480
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Thumbnails 'Source' objects are not removed when deleting an album
### Describe the bug
Currently when an album is deleted, the Source objects (from django-thumbnails) of its photos aren't deleted. Then, if an album with the same name is uploaded again, we get IntegrityErrors when trying to create photos with the same filenames that already existed.
### How to reproduce
<!-- Steps to reproduce the behaviour -->
1. Upload album
2. Delete it
3. Upload the same album again
### Expected behaviour
The Source's are deleted, so there is no integrityerror.
### Additional context
https://thalia.sentry.io/issues/4543169553/events/239a90e83abd437e9d7116b45ec30b9a/
This issue would be solved by making photos filenames be uuid's (#3442), and just fixing this single issue won't fix all problems we have with reuploading an album, as there is another problem:
Reuploading different photos with the same name does not invalidate the CloudFront cache. So if we do that within 24 hours, face detection would likely analyze the wrong (stale) cached files, and they would also be shown to users. So we also need to either make the filenames unique or implement CloudFront cache invalidation. Clearly, unique filenames are the easier way to go.
</issue>
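
The cache-invalidation alternative mentioned in the issue would look roughly like the following boto3 sketch; the distribution ID, the path layout, and the idea of invalidating per album are all assumptions for illustration, not part of the project:

```python
import time

import boto3


def invalidate_album_cache(distribution_id, paths):
    # Hypothetical helper: ask CloudFront to drop cached copies of the
    # re-uploaded photo files so stale images stop being served.
    boto3.client("cloudfront").create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            "CallerReference": str(time.time()),  # must be unique per request
        },
    )


# invalidate_album_cache("E123EXAMPLE", ["/media/photos/album-slug/*"])
```
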
<code>
[start of website/photos/models.py]
1 import hashlib
2 import logging
3 import os
4 import random
5 from secrets import token_hex
6
7 from django.conf import settings
8 from django.core.exceptions import ValidationError
9 from django.db import models
10 from django.db.models import Count, IntegerField, Value
11 from django.db.models.functions import Coalesce
12 from django.urls import reverse
13 from django.utils.functional import cached_property
14 from django.utils.translation import gettext_lazy as _
15
16 from queryable_properties.managers import QueryablePropertiesManager
17 from queryable_properties.properties import AnnotationProperty
18 from thumbnails.fields import ImageField
19
20 from members.models import Member
21
22 COVER_FILENAME = "cover.jpg"
23
24
25 logger = logging.getLogger(__name__)
26
27
28 def photo_uploadto(instance, filename):
29 ext = os.path.splitext(filename)[1]
30 return f"photos/{instance.album.dirname}/{token_hex(8)}{ext}"
31
32
33 class DuplicatePhotoException(Exception):
34 """Raised when a photo with the same digest already exists in a given album."""
35
36
37 class Photo(models.Model):
38 """Model for a Photo object."""
39
40 objects = QueryablePropertiesManager()
41
42 album = models.ForeignKey(
43 "Album", on_delete=models.CASCADE, verbose_name=_("album")
44 )
45
46 file = ImageField(
47 _("file"),
48 upload_to=photo_uploadto,
49 resize_source_to="source",
50 )
51
52 rotation = models.IntegerField(
53 verbose_name=_("rotation"),
54 default=0,
55 choices=((x, x) for x in (0, 90, 180, 270)),
56 help_text=_("This does not modify the original image file."),
57 )
58
59 _digest = models.CharField(
60 "digest",
61 max_length=40,
62 blank=True,
63 editable=False,
64 )
65
66 num_likes = AnnotationProperty(
67 Coalesce(Count("likes"), Value(0), output_field=IntegerField())
68 )
69
70 def __init__(self, *args, **kwargs):
71 """Initialize Photo object and set the file if it exists."""
72 super().__init__(*args, **kwargs)
73 if self.file:
74 self.original_file = self.file.name
75 else:
76 self.original_file = ""
77
78 def __str__(self):
79 """Return the filename of a Photo object."""
80 return os.path.basename(self.file.name)
81
82 def clean(self):
83 if not self.file._committed:
84 hash_sha1 = hashlib.sha1()
85 for chunk in iter(lambda: self.file.read(4096), b""):
86 hash_sha1.update(chunk)
87 digest = hash_sha1.hexdigest()
88 self._digest = digest
89
90 if (
91 Photo.objects.filter(album=self.album, _digest=digest)
92 .exclude(pk=self.pk)
93 .exists()
94 ):
95 raise ValidationError(
96 {"file": "This photo already exists in this album."}
97 )
98
99 return super().clean()
100
101 def delete(self, using=None, keep_parents=False):
102 removed = super().delete(using, keep_parents)
103 if self.file.name:
104 self.file.delete()
105 return removed
106
107 class Meta:
108 """Meta class for Photo."""
109
110 # Photos are created in order of their filename.
111 ordering = ("pk",)
112
113
114 class Like(models.Model):
115 photo = models.ForeignKey(
116 Photo, null=False, blank=False, related_name="likes", on_delete=models.CASCADE
117 )
118 member = models.ForeignKey(
119 Member, null=True, blank=False, on_delete=models.SET_NULL
120 )
121
122 def __str__(self):
123 return str(self.member) + " " + _("likes") + " " + str(self.photo)
124
125 class Meta:
126 unique_together = ["photo", "member"]
127
128
129 class Album(models.Model):
130 """Model for Album objects."""
131
132 title = models.CharField(
133 _("title"),
134 blank=True,
135 max_length=200,
136 help_text=_("Leave empty to take over the title of the event"),
137 )
138
139 dirname = models.CharField(
140 verbose_name=_("directory name"),
141 max_length=200,
142 )
143
144 date = models.DateField(
145 verbose_name=_("date"),
146 blank=True,
147 help_text=_("Leave empty to take over the date of the event"),
148 )
149
150 slug = models.SlugField(
151 verbose_name=_("slug"),
152 unique=True,
153 )
154
155 hidden = models.BooleanField(verbose_name=_("hidden"), default=False)
156
157 event = models.ForeignKey(
158 "events.Event",
159 on_delete=models.SET_NULL,
160 blank=True,
161 null=True,
162 )
163
164 _cover = models.OneToOneField(
165 Photo,
166 on_delete=models.SET_NULL,
167 blank=True,
168 null=True,
169 related_name="covered_album",
170 verbose_name=_("cover image"),
171 )
172
173 shareable = models.BooleanField(verbose_name=_("shareable"), default=False)
174
175 photosdir = "photos"
176 photospath = os.path.join(settings.MEDIA_ROOT, photosdir)
177
178 @cached_property
179 def cover(self):
180 """Return cover of Album.
181
182 If a cover is not set, return a random photo or None if there are no photos.
183 """
184 cover = None
185 if self._cover is not None:
186 return self._cover
187
188 # Not prefetched because this should be rare and is a lot of data
189 # `exists` is faster in theory, but requires everything to be fetched later anyways
190 if self.photo_set.exists():
191 r = random.Random(self.dirname)
192 cover = r.choice(self.photo_set.all())
193 return cover
194
195 def __str__(self):
196 """Get string representation of Album."""
197 return f"{self.date:%Y-%m-%d} {self.title}"
198
199 def get_absolute_url(self):
200 """Get url of Album."""
201 return reverse("photos:album", args=[str(self.slug)])
202
203 def clean(self):
204 super().clean()
205 errors = {}
206
207 if not self.title and not self.event:
208 errors.update(
209 {"title": _("This field is required if there is no event selected.")}
210 )
211
212 if not self.date and not self.event:
213 errors.update(
214 {"date": _("This field is required if there is no event selected.")}
215 )
216
217 if errors:
218 raise ValidationError(errors)
219
220 def save(self, **kwargs):
221 """Save album and send appropriate notifications."""
222 # dirname is only set for new objects, to avoid ever changing it
223 if self.pk is None:
224 self.dirname = self.slug
225
226 if not self.title and self.event:
227 self.title = self.event.title
228
229 if not self.date:
230 self.date = self.event.start.date()
231
232 super().save(**kwargs)
233
234 @property
235 def access_token(self):
236 """Return access token for album."""
237 return hashlib.sha256(
238 f"{settings.SECRET_KEY}album{self.pk}".encode()
239 ).hexdigest()
240
241 class Meta:
242 """Meta class for Album."""
243
244 ordering = ("-date", "title")
245
[end of website/photos/models.py]
[start of website/photos/signals.py]
1 from django.db.models.signals import pre_delete
2
3 from utils.models.signals import suspendingreceiver
4
5
6 @suspendingreceiver(
7 pre_delete, sender="photos.Photo", dispatch_uid="photos_photo_delete"
8 )
9 def pre_photo_delete(sender, instance, **kwargs):
10 """Remove main photo file on deletion."""
11 instance.file.delete()
12
[end of website/photos/signals.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/photos/models.py b/website/photos/models.py
--- a/website/photos/models.py
+++ b/website/photos/models.py
@@ -98,12 +98,6 @@
return super().clean()
- def delete(self, using=None, keep_parents=False):
- removed = super().delete(using, keep_parents)
- if self.file.name:
- self.file.delete()
- return removed
-
class Meta:
"""Meta class for Photo."""
diff --git a/website/photos/signals.py b/website/photos/signals.py
--- a/website/photos/signals.py
+++ b/website/photos/signals.py
@@ -7,5 +7,9 @@
pre_delete, sender="photos.Photo", dispatch_uid="photos_photo_delete"
)
def pre_photo_delete(sender, instance, **kwargs):
- """Remove main photo file on deletion."""
- instance.file.delete()
+ """Remove main photo file and thumbnails on deletion."""
+ name = instance.file.name # First get the name, it is removed by the next line.
+ instance.file.delete() # Delete the file and its thumbnails.
+
+ # Clean up the source metadata, django-thumbnails does not do this.
+ instance.file.metadata_backend.delete_source(name)
|
{"golden_diff": "diff --git a/website/photos/models.py b/website/photos/models.py\n--- a/website/photos/models.py\n+++ b/website/photos/models.py\n@@ -98,12 +98,6 @@\n \n return super().clean()\n \n- def delete(self, using=None, keep_parents=False):\n- removed = super().delete(using, keep_parents)\n- if self.file.name:\n- self.file.delete()\n- return removed\n-\n class Meta:\n \"\"\"Meta class for Photo.\"\"\"\n \ndiff --git a/website/photos/signals.py b/website/photos/signals.py\n--- a/website/photos/signals.py\n+++ b/website/photos/signals.py\n@@ -7,5 +7,9 @@\n pre_delete, sender=\"photos.Photo\", dispatch_uid=\"photos_photo_delete\"\n )\n def pre_photo_delete(sender, instance, **kwargs):\n- \"\"\"Remove main photo file on deletion.\"\"\"\n- instance.file.delete()\n+ \"\"\"Remove main photo file and thumbnails on deletion.\"\"\"\n+ name = instance.file.name # First get the name, it is removed by the next line.\n+ instance.file.delete() # Delete the file and its thumbnails.\n+\n+ # Clean up the source metadata, django-thumbnails does not do this.\n+ instance.file.metadata_backend.delete_source(name)\n", "issue": "Thumbnails 'Source' objects are not removed when deleting an album\n### Describe the bug\r\n\r\nCurrently when an album is deleted, the Source objects (from django-thumbnails) of its photos aren't deleted. Then, if an album with the same name is uploaded again, we get IntegrityErrors when trying to create photos with the same filenames that already existed. \r\n\r\n### How to reproduce\r\n<!-- Steps to reproduce the behaviour -->\r\n1. Upload album\r\n2. Delete it\r\n3. Upload the same album again\r\n\r\n### Expected behaviour\r\nThe Source's are deleted, so there is no integrityerror.\r\n\r\n### Additional context\r\nhttps://thalia.sentry.io/issues/4543169553/events/239a90e83abd437e9d7116b45ec30b9a/\r\n\r\nThis issue would be solved by making photos filenames be uuid's (#3442), and just fixing this single issue won't fix all problems we have with reuploading an album, as there is another problem:\r\nReuploading different photos with the same name does not invalidate the cloudfront cache. So If we do that within 24 hours, facedetection would likely analyze the wrong (stale) cached files, and they would also show up to users. So we also need to either make the filenames unique, or implement cloudfront cache invalidation. 
Clearly, unique filenames are the easier way to go.\r\n\n", "before_files": [{"content": "import hashlib\nimport logging\nimport os\nimport random\nfrom secrets import token_hex\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Count, IntegerField, Value\nfrom django.db.models.functions import Coalesce\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import gettext_lazy as _\n\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AnnotationProperty\nfrom thumbnails.fields import ImageField\n\nfrom members.models import Member\n\nCOVER_FILENAME = \"cover.jpg\"\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef photo_uploadto(instance, filename):\n ext = os.path.splitext(filename)[1]\n return f\"photos/{instance.album.dirname}/{token_hex(8)}{ext}\"\n\n\nclass DuplicatePhotoException(Exception):\n \"\"\"Raised when a photo with the same digest already exists in a given album.\"\"\"\n\n\nclass Photo(models.Model):\n \"\"\"Model for a Photo object.\"\"\"\n\n objects = QueryablePropertiesManager()\n\n album = models.ForeignKey(\n \"Album\", on_delete=models.CASCADE, verbose_name=_(\"album\")\n )\n\n file = ImageField(\n _(\"file\"),\n upload_to=photo_uploadto,\n resize_source_to=\"source\",\n )\n\n rotation = models.IntegerField(\n verbose_name=_(\"rotation\"),\n default=0,\n choices=((x, x) for x in (0, 90, 180, 270)),\n help_text=_(\"This does not modify the original image file.\"),\n )\n\n _digest = models.CharField(\n \"digest\",\n max_length=40,\n blank=True,\n editable=False,\n )\n\n num_likes = AnnotationProperty(\n Coalesce(Count(\"likes\"), Value(0), output_field=IntegerField())\n )\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize Photo object and set the file if it exists.\"\"\"\n super().__init__(*args, **kwargs)\n if self.file:\n self.original_file = self.file.name\n else:\n self.original_file = \"\"\n\n def __str__(self):\n \"\"\"Return the filename of a Photo object.\"\"\"\n return os.path.basename(self.file.name)\n\n def clean(self):\n if not self.file._committed:\n hash_sha1 = hashlib.sha1()\n for chunk in iter(lambda: self.file.read(4096), b\"\"):\n hash_sha1.update(chunk)\n digest = hash_sha1.hexdigest()\n self._digest = digest\n\n if (\n Photo.objects.filter(album=self.album, _digest=digest)\n .exclude(pk=self.pk)\n .exists()\n ):\n raise ValidationError(\n {\"file\": \"This photo already exists in this album.\"}\n )\n\n return super().clean()\n\n def delete(self, using=None, keep_parents=False):\n removed = super().delete(using, keep_parents)\n if self.file.name:\n self.file.delete()\n return removed\n\n class Meta:\n \"\"\"Meta class for Photo.\"\"\"\n\n # Photos are created in order of their filename.\n ordering = (\"pk\",)\n\n\nclass Like(models.Model):\n photo = models.ForeignKey(\n Photo, null=False, blank=False, related_name=\"likes\", on_delete=models.CASCADE\n )\n member = models.ForeignKey(\n Member, null=True, blank=False, on_delete=models.SET_NULL\n )\n\n def __str__(self):\n return str(self.member) + \" \" + _(\"likes\") + \" \" + str(self.photo)\n\n class Meta:\n unique_together = [\"photo\", \"member\"]\n\n\nclass Album(models.Model):\n \"\"\"Model for Album objects.\"\"\"\n\n title = models.CharField(\n _(\"title\"),\n blank=True,\n max_length=200,\n help_text=_(\"Leave empty to take over the title of the event\"),\n )\n\n 
dirname = models.CharField(\n verbose_name=_(\"directory name\"),\n max_length=200,\n )\n\n date = models.DateField(\n verbose_name=_(\"date\"),\n blank=True,\n help_text=_(\"Leave empty to take over the date of the event\"),\n )\n\n slug = models.SlugField(\n verbose_name=_(\"slug\"),\n unique=True,\n )\n\n hidden = models.BooleanField(verbose_name=_(\"hidden\"), default=False)\n\n event = models.ForeignKey(\n \"events.Event\",\n on_delete=models.SET_NULL,\n blank=True,\n null=True,\n )\n\n _cover = models.OneToOneField(\n Photo,\n on_delete=models.SET_NULL,\n blank=True,\n null=True,\n related_name=\"covered_album\",\n verbose_name=_(\"cover image\"),\n )\n\n shareable = models.BooleanField(verbose_name=_(\"shareable\"), default=False)\n\n photosdir = \"photos\"\n photospath = os.path.join(settings.MEDIA_ROOT, photosdir)\n\n @cached_property\n def cover(self):\n \"\"\"Return cover of Album.\n\n If a cover is not set, return a random photo or None if there are no photos.\n \"\"\"\n cover = None\n if self._cover is not None:\n return self._cover\n\n # Not prefetched because this should be rare and is a lot of data\n # `exists` is faster in theory, but requires everything to be fetched later anyways\n if self.photo_set.exists():\n r = random.Random(self.dirname)\n cover = r.choice(self.photo_set.all())\n return cover\n\n def __str__(self):\n \"\"\"Get string representation of Album.\"\"\"\n return f\"{self.date:%Y-%m-%d} {self.title}\"\n\n def get_absolute_url(self):\n \"\"\"Get url of Album.\"\"\"\n return reverse(\"photos:album\", args=[str(self.slug)])\n\n def clean(self):\n super().clean()\n errors = {}\n\n if not self.title and not self.event:\n errors.update(\n {\"title\": _(\"This field is required if there is no event selected.\")}\n )\n\n if not self.date and not self.event:\n errors.update(\n {\"date\": _(\"This field is required if there is no event selected.\")}\n )\n\n if errors:\n raise ValidationError(errors)\n\n def save(self, **kwargs):\n \"\"\"Save album and send appropriate notifications.\"\"\"\n # dirname is only set for new objects, to avoid ever changing it\n if self.pk is None:\n self.dirname = self.slug\n\n if not self.title and self.event:\n self.title = self.event.title\n\n if not self.date:\n self.date = self.event.start.date()\n\n super().save(**kwargs)\n\n @property\n def access_token(self):\n \"\"\"Return access token for album.\"\"\"\n return hashlib.sha256(\n f\"{settings.SECRET_KEY}album{self.pk}\".encode()\n ).hexdigest()\n\n class Meta:\n \"\"\"Meta class for Album.\"\"\"\n\n ordering = (\"-date\", \"title\")\n", "path": "website/photos/models.py"}, {"content": "from django.db.models.signals import pre_delete\n\nfrom utils.models.signals import suspendingreceiver\n\n\n@suspendingreceiver(\n pre_delete, sender=\"photos.Photo\", dispatch_uid=\"photos_photo_delete\"\n)\ndef pre_photo_delete(sender, instance, **kwargs):\n \"\"\"Remove main photo file on deletion.\"\"\"\n instance.file.delete()\n", "path": "website/photos/signals.py"}]}
| 3,029 | 275 |
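
The patch above works because Django sends `pre_delete` for every object removed by a cascading delete (e.g. photos removed when their album is deleted), whereas an overridden `Model.delete()` is never called on that path; it then also clears the django-thumbnails source metadata, which deleting the file alone leaves behind. A standalone sketch of the receiver, using Django's plain `@receiver` instead of the project's `suspendingreceiver` helper (`metadata_backend.delete_source` is taken from the patch, not independently verified):

```python
from django.db.models.signals import pre_delete
from django.dispatch import receiver


@receiver(pre_delete, sender="photos.Photo", dispatch_uid="photos_photo_delete")
def pre_photo_delete(sender, instance, **kwargs):
    name = instance.file.name      # grab the name first; delete() clears it
    instance.file.delete()         # removes the file and its thumbnails
    # django-thumbnails keeps a Source row keyed on the original name; remove
    # it so re-uploading a file with the same name cannot hit an IntegrityError.
    instance.file.metadata_backend.delete_source(name)
```
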
gh_patches_debug_21665
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-1241
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fake screens
[This part of the documentation](https://github.com/qtile/qtile/blob/ed7198a5fb5438110f81a8c6ecc0e6289676c057/libqtile/config.py#L231-L232) mentions "fake screens", and the term is also found [in the code](https://github.com/qtile/qtile/blob/7c2a88fba68bdcf6f25dfb5494a74afc475d674e/libqtile/manager.py#L357-L373).
What are they? How to use them?
We need to document answers to those questions, and then make sure they work correctly.
See #1192 for this last point.
</issue>
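
For documentation purposes, a hypothetical `config.py` fragment is sketched below: instead of (or in addition to) `screens`, the config defines `fake_screens`, each covering a rectangle of the combined display. The `x`/`y`/`width`/`height` keyword arguments on `Screen` are an assumption to verify against the qtile source, not something confirmed by this issue:

```python
from libqtile import bar, widget
from libqtile.config import Screen

# Assumed API: Screen() accepting a position and size for fake screens,
# here splitting one 1920x1080 monitor into two side-by-side halves.
fake_screens = [
    Screen(
        top=bar.Bar([widget.GroupBox(), widget.WindowName()], 24),
        x=0, y=0, width=960, height=1080,
    ),
    Screen(
        top=bar.Bar([widget.CurrentLayout(), widget.Clock()], 24),
        x=960, y=0, width=960, height=1080,
    ),
]
```
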
<code>
[start of libqtile/confreader.py]
1 # coding: utf-8
2 #
3 # Copyright (c) 2008, Aldo Cortesi <[email protected]>
4 # Copyright (c) 2011, Andrew Grigorev <[email protected]>
5 #
6 # All rights reserved.
7 #
8 # Permission is hereby granted, free of charge, to any person obtaining a copy
9 # of this software and associated documentation files (the "Software"), to deal
10 # in the Software without restriction, including without limitation the rights
11 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
12 # copies of the Software, and to permit persons to whom the Software is
13 # furnished to do so, subject to the following conditions:
14 #
15 # The above copyright notice and this permission notice shall be included in
16 # all copies or substantial portions of the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
19 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
20 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
21 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
22 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
23 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
24 # SOFTWARE.
25 import os
26 import sys
27
28
29 class ConfigError(Exception):
30 pass
31
32
33 class Config(object):
34 settings_keys = [
35 "keys",
36 "mouse",
37 "groups",
38 "dgroups_key_binder",
39 "dgroups_app_rules",
40 "follow_mouse_focus",
41 "focus_on_window_activation",
42 "cursor_warp",
43 "layouts",
44 "floating_layout",
45 "screens",
46 "main",
47 "auto_fullscreen",
48 "widget_defaults",
49 "extension_defaults",
50 "bring_front_click",
51 "wmname",
52 ]
53
54 def __init__(self, **settings):
55 """Create a Config() object from settings
56
57 Only attributes found in Config.settings_keys will be added to object.
58 config attribute precedence is 1.) **settings 2.) self 3.) default_config
59 """
60 from .resources import default_config
61 default = vars(default_config)
62 for key in self.settings_keys:
63 try:
64 value = settings[key]
65 except KeyError:
66 value = getattr(self, key, default[key])
67 setattr(self, key, value)
68 self._init_deprecated(**settings)
69
70 def _init_deprecated(self, extensions=None, **settings):
71 "Initialize deprecated settings."
72 if extensions: # Deprecated in v0.10.7
73 import warnings
74 warnings.warn("'extentions' is deprecated, use "
75 "'extension_defaults'", DeprecationWarning)
76 self.extension_defaults.update(extensions.get('dmenu', {}))
77
78 @classmethod
79 def from_file(cls, path):
80 "Create a Config() object from the python file located at path."
81 try:
82 sys.path.insert(0, os.path.dirname(path))
83 config = __import__(os.path.basename(path)[:-3])
84 except Exception:
85 import traceback
86 from .log_utils import logger
87 logger.exception('Could not import config file %r', path)
88 tb = traceback.format_exc()
89 raise ConfigError(tb)
90 return cls(**vars(config))
91
[end of libqtile/confreader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/confreader.py b/libqtile/confreader.py
--- a/libqtile/confreader.py
+++ b/libqtile/confreader.py
@@ -66,6 +66,7 @@
value = getattr(self, key, default[key])
setattr(self, key, value)
self._init_deprecated(**settings)
+ self._init_fake_screens(**settings)
def _init_deprecated(self, extensions=None, **settings):
"Initialize deprecated settings."
@@ -75,6 +76,14 @@
"'extension_defaults'", DeprecationWarning)
self.extension_defaults.update(extensions.get('dmenu', {}))
+ def _init_fake_screens(self, **settings):
+ " Initiaize fake_screens if they are set."
+ try:
+ value = settings['fake_screens']
+ setattr(self, 'fake_screens', value)
+ except KeyError:
+ pass
+
@classmethod
def from_file(cls, path):
"Create a Config() object from the python file located at path."
|
{"golden_diff": "diff --git a/libqtile/confreader.py b/libqtile/confreader.py\n--- a/libqtile/confreader.py\n+++ b/libqtile/confreader.py\n@@ -66,6 +66,7 @@\n value = getattr(self, key, default[key])\n setattr(self, key, value)\n self._init_deprecated(**settings)\n+ self._init_fake_screens(**settings)\n \n def _init_deprecated(self, extensions=None, **settings):\n \"Initialize deprecated settings.\"\n@@ -75,6 +76,14 @@\n \"'extension_defaults'\", DeprecationWarning)\n self.extension_defaults.update(extensions.get('dmenu', {}))\n \n+ def _init_fake_screens(self, **settings):\n+ \" Initiaize fake_screens if they are set.\"\n+ try:\n+ value = settings['fake_screens']\n+ setattr(self, 'fake_screens', value)\n+ except KeyError:\n+ pass\n+\n @classmethod\n def from_file(cls, path):\n \"Create a Config() object from the python file located at path.\"\n", "issue": "Fake screens\n[This part of the documentation](https://github.com/qtile/qtile/blob/ed7198a5fb5438110f81a8c6ecc0e6289676c057/libqtile/config.py#L231-L232) mentions \"fake screens\", and the term is also found [in the code](https://github.com/qtile/qtile/blob/7c2a88fba68bdcf6f25dfb5494a74afc475d674e/libqtile/manager.py#L357-L373).\r\n\r\nWhat are they? How to use them?\r\n\r\nWe need to document answers to those questions, and then make sure they work correctly.\r\n\r\nSee #1192 for this last point.\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright (c) 2008, Aldo Cortesi <[email protected]>\n# Copyright (c) 2011, Andrew Grigorev <[email protected]>\n#\n# All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nimport os\nimport sys\n\n\nclass ConfigError(Exception):\n pass\n\n\nclass Config(object):\n settings_keys = [\n \"keys\",\n \"mouse\",\n \"groups\",\n \"dgroups_key_binder\",\n \"dgroups_app_rules\",\n \"follow_mouse_focus\",\n \"focus_on_window_activation\",\n \"cursor_warp\",\n \"layouts\",\n \"floating_layout\",\n \"screens\",\n \"main\",\n \"auto_fullscreen\",\n \"widget_defaults\",\n \"extension_defaults\",\n \"bring_front_click\",\n \"wmname\",\n ]\n\n def __init__(self, **settings):\n \"\"\"Create a Config() object from settings\n\n Only attributes found in Config.settings_keys will be added to object.\n config attribute precedence is 1.) **settings 2.) self 3.) 
default_config\n \"\"\"\n from .resources import default_config\n default = vars(default_config)\n for key in self.settings_keys:\n try:\n value = settings[key]\n except KeyError:\n value = getattr(self, key, default[key])\n setattr(self, key, value)\n self._init_deprecated(**settings)\n\n def _init_deprecated(self, extensions=None, **settings):\n \"Initialize deprecated settings.\"\n if extensions: # Deprecated in v0.10.7\n import warnings\n warnings.warn(\"'extentions' is deprecated, use \"\n \"'extension_defaults'\", DeprecationWarning)\n self.extension_defaults.update(extensions.get('dmenu', {}))\n\n @classmethod\n def from_file(cls, path):\n \"Create a Config() object from the python file located at path.\"\n try:\n sys.path.insert(0, os.path.dirname(path))\n config = __import__(os.path.basename(path)[:-3])\n except Exception:\n import traceback\n from .log_utils import logger\n logger.exception('Could not import config file %r', path)\n tb = traceback.format_exc()\n raise ConfigError(tb)\n return cls(**vars(config))\n", "path": "libqtile/confreader.py"}]}
| 1,602 | 236 |
gh_patches_debug_13966
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-7255
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[v1.21.0] "TypeError: Unicode-objects must be encoded before hashing" in `util.calc_md5` due to inadequate type check
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
After upgrading to version 1.21.0, [`util.calc_md5`](https://github.com/streamlit/streamlit/blob/316cef3426c4f3c81e99414446b0e9a131dbdb57/lib/streamlit/util.py#L180) throws an error when loading a page: `TypeError: Unicode-objects must be encoded before hashing`.
This is due to the check `if type(s) is str` in `b = cast(bytes, s.encode("utf-8") if type(s) is str else s)` not being broad enough to catch all `str`-like objects.
### Reproducible Code Example
```Python
from streamlit import util
class MyString(str):
pass
s = MyString("foobar")
util.calc_md5(s)
```
### Steps To Reproduce
Run the provided code example.
### Expected Behavior
No error.
### Current Behavior
TypeError: Unicode-objects must be encoded before hashing
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.21.0
- Python version: 3.8.14
- Operating System: Ubuntu 18.04.6
- Browser: Chrome Version 107.0.5304.87 (Official Build) (64-bit)
- Virtual environment:
### Additional Information
In my case, the [`main_script_path_str` passed to `get_pages` here](https://github.com/streamlit/streamlit/blob/316cef3426c4f3c81e99414446b0e9a131dbdb57/lib/streamlit/source_util.py#L106) ends up being a subclass of `str`.
A good enough fix for my specific case is using `isinstance` instead of `type`:
```
b = cast(bytes, s.encode("utf-8") if isinstance(s, str) else s)
```
Or something like this could maybe catch more potential issues:
```
b = cast(bytes, str(s).encode("utf-8") if type(s) is not bytes else s)
```
### Are you willing to submit a PR?
- [X] Yes, I am willing to submit a PR!
</issue>
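
A minimal standalone sketch of the proposed `isinstance` fix (not the actual Streamlit function), showing that it treats `str` subclasses and `bytes` uniformly:

```python
import hashlib


class MyString(str):
    pass


def calc_md5_sketch(s) -> str:
    # isinstance() matches str subclasses too, unlike `type(s) is str`,
    # so Unicode input is always encoded before hashing.
    b = s.encode("utf-8") if isinstance(s, str) else s
    return hashlib.md5(b).hexdigest()


assert calc_md5_sketch(MyString("foobar")) == calc_md5_sketch("foobar") == calc_md5_sketch(b"foobar")
```
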
<code>
[start of lib/streamlit/util.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """A bunch of useful utilities."""
16
17 from __future__ import annotations
18
19 import dataclasses
20 import functools
21 import hashlib
22 import os
23 import subprocess
24 from typing import Any, Dict, Iterable, List, Mapping, Set, TypeVar, Union, cast
25
26 from typing_extensions import Final
27
28 from streamlit import env_util
29
30 # URL of Streamlit's help page.
31 HELP_DOC: Final = "https://docs.streamlit.io/"
32 FLOAT_EQUALITY_EPSILON: Final[float] = 0.000000000005
33
34
35 def memoize(func):
36 """Decorator to memoize the result of a no-args func."""
37 result: List[Any] = []
38
39 @functools.wraps(func)
40 def wrapped_func():
41 if not result:
42 result.append(func())
43 return result[0]
44
45 return wrapped_func
46
47
48 def open_browser(url):
49 """Open a web browser pointing to a given URL.
50
51 We use this function instead of Python's `webbrowser` module because this
52 way we can capture stdout/stderr to avoid polluting the terminal with the
53 browser's messages. For example, Chrome always prints things like "Created
54 new window in existing browser session", and those get on the user's way.
55
56 url : str
57 The URL. Must include the protocol.
58
59 """
60 # Treat Windows separately because:
61 # 1. /dev/null doesn't exist.
62 # 2. subprocess.Popen(['start', url]) doesn't actually pop up the
63 # browser even though 'start url' works from the command prompt.
64 # Fun!
65 # Also, use webbrowser if we are on Linux and xdg-open is not installed.
66 #
67 # We don't use the webbrowser module on Linux and Mac because some browsers
68 # (ahem... Chrome) always print "Opening in existing browser session" to
69 # the terminal, which is spammy and annoying. So instead we start the
70 # browser ourselves and send all its output to /dev/null.
71
72 if env_util.IS_WINDOWS:
73 _open_browser_with_webbrowser(url)
74 return
75 if env_util.IS_LINUX_OR_BSD:
76 if env_util.is_executable_in_path("xdg-open"):
77 _open_browser_with_command("xdg-open", url)
78 return
79 _open_browser_with_webbrowser(url)
80 return
81 if env_util.IS_DARWIN:
82 _open_browser_with_command("open", url)
83 return
84
85 import platform
86
87 raise Error('Cannot open browser in platform "%s"' % platform.system())
88
89
90 def _open_browser_with_webbrowser(url):
91 import webbrowser
92
93 webbrowser.open(url)
94
95
96 def _open_browser_with_command(command, url):
97 cmd_line = [command, url]
98 with open(os.devnull, "w") as devnull:
99 subprocess.Popen(cmd_line, stdout=devnull, stderr=subprocess.STDOUT)
100
101
102 def _maybe_tuple_to_list(item: Any) -> Any:
103 """Convert a tuple to a list. Leave as is if it's not a tuple."""
104 if isinstance(item, tuple):
105 return list(item)
106 return item
107
108
109 def repr_(self: Any) -> str:
110 """A clean repr for a class, excluding both values that are likely defaults,
111 and those explicitly default for dataclasses.
112 """
113 classname = self.__class__.__name__
114 # Most of the falsey value, but excluding 0 and 0.0, since those often have
115 # semantic meaning within streamlit.
116 defaults: list[Any] = [None, "", False, [], set(), dict()]
117 if dataclasses.is_dataclass(self):
118 fields_vals = (
119 (f.name, getattr(self, f.name))
120 for f in dataclasses.fields(self)
121 if f.repr
122 and getattr(self, f.name) != f.default
123 and getattr(self, f.name) not in defaults
124 )
125 else:
126 fields_vals = ((f, v) for (f, v) in self.__dict__.items() if v not in defaults)
127
128 field_reprs = ", ".join(f"{field}={value!r}" for field, value in fields_vals)
129 return f"{classname}({field_reprs})"
130
131
132 _Value = TypeVar("_Value")
133
134
135 def index_(iterable: Iterable[_Value], x: _Value) -> int:
136 """Return zero-based index of the first item whose value is equal to x.
137 Raises a ValueError if there is no such item.
138
139 We need a custom implementation instead of the built-in list .index() to
140 be compatible with NumPy array and Pandas Series.
141
142 Parameters
143 ----------
144 iterable : list, tuple, numpy.ndarray, pandas.Series
145 x : Any
146
147 Returns
148 -------
149 int
150 """
151 for i, value in enumerate(iterable):
152 if x == value:
153 return i
154 elif isinstance(value, float) and isinstance(x, float):
155 if abs(x - value) < FLOAT_EQUALITY_EPSILON:
156 return i
157 raise ValueError("{} is not in iterable".format(str(x)))
158
159
160 _Key = TypeVar("_Key", bound=str)
161
162
163 def lower_clean_dict_keys(dict: Mapping[_Key, _Value]) -> Dict[str, _Value]:
164 return {k.lower().strip(): v for k, v in dict.items()}
165
166
167 # TODO: Move this into errors.py? Replace with StreamlitAPIException?
168 class Error(Exception):
169 pass
170
171
172 def calc_md5(s: Union[bytes, str]) -> str:
173 """Return the md5 hash of the given string."""
174 h = hashlib.new("md5")
175
176 # mypy seems to have trouble inferring that the type of the if/else expression is
177 # always bytes.
178 b = cast(bytes, s.encode("utf-8") if type(s) is str else s)
179
180 h.update(b)
181 return h.hexdigest()
182
183
184 def exclude_key_query_params(
185 query_params: Dict[str, List[str]], keys_to_exclude: List[str]
186 ) -> Dict[str, List[str]]:
187 """Returns new object query_params : Dict[str, List[str]], but without keys defined with keys_to_drop : List[str]."""
188 return {
189 key: value
190 for key, value in query_params.items()
191 if key.lower() not in keys_to_exclude
192 }
193
194
195 def extract_key_query_params(
196 query_params: Dict[str, List[str]], param_key: str
197 ) -> Set[str]:
198 """Extracts key (case-insensitive) query params from Dict, and returns them as Set of str."""
199 return set(
200 [
201 item.lower()
202 for sublist in [
203 [value.lower() for value in query_params[key]]
204 for key in query_params.keys()
205 if key.lower() == param_key and query_params.get(key)
206 ]
207 for item in sublist
208 ]
209 )
210
[end of lib/streamlit/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/streamlit/util.py b/lib/streamlit/util.py
--- a/lib/streamlit/util.py
+++ b/lib/streamlit/util.py
@@ -21,7 +21,7 @@
import hashlib
import os
import subprocess
-from typing import Any, Dict, Iterable, List, Mapping, Set, TypeVar, Union, cast
+from typing import Any, Dict, Iterable, List, Mapping, Set, TypeVar, Union
from typing_extensions import Final
@@ -173,9 +173,7 @@
"""Return the md5 hash of the given string."""
h = hashlib.new("md5")
- # mypy seems to have trouble inferring that the type of the if/else expression is
- # always bytes.
- b = cast(bytes, s.encode("utf-8") if type(s) is str else s)
+ b = s.encode("utf-8") if isinstance(s, str) else s
h.update(b)
return h.hexdigest()
|
{"golden_diff": "diff --git a/lib/streamlit/util.py b/lib/streamlit/util.py\n--- a/lib/streamlit/util.py\n+++ b/lib/streamlit/util.py\n@@ -21,7 +21,7 @@\n import hashlib\n import os\n import subprocess\n-from typing import Any, Dict, Iterable, List, Mapping, Set, TypeVar, Union, cast\n+from typing import Any, Dict, Iterable, List, Mapping, Set, TypeVar, Union\n \n from typing_extensions import Final\n \n@@ -173,9 +173,7 @@\n \"\"\"Return the md5 hash of the given string.\"\"\"\n h = hashlib.new(\"md5\")\n \n- # mypy seems to have trouble inferring that the type of the if/else expression is\n- # always bytes.\n- b = cast(bytes, s.encode(\"utf-8\") if type(s) is str else s)\n+ b = s.encode(\"utf-8\") if isinstance(s, str) else s\n \n h.update(b)\n return h.hexdigest()\n", "issue": "[v1.21.0] \"TypeError: Unicode-objects must be encoded before hashing\" in `util.calc_md5` due to inadequate type check\n### Checklist\r\n\r\n- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I have provided sufficient information below to help reproduce this issue.\r\n\r\n### Summary\r\n\r\nAfter upgrading to version 1.21.0, [`util.calc_md5`](https://github.com/streamlit/streamlit/blob/316cef3426c4f3c81e99414446b0e9a131dbdb57/lib/streamlit/util.py#L180) throws an error when loading a page: `TypeError: Unicode-objects must be encoded before hashing`.\r\n\r\nThis is due to the check `if type(s) is str` in `b = cast(bytes, s.encode(\"utf-8\") if type(s) is str else s)` not being broad enough to catch all `str`-like objects.\r\n\r\n### Reproducible Code Example\r\n\r\n```Python\r\nfrom streamlit import util\r\n\r\nclass MyString(str):\r\n pass\r\n\r\ns = MyString(\"foobar\")\r\n\r\nutil.calc_md5(s)\r\n```\r\n\r\n\r\n### Steps To Reproduce\r\n\r\nRun the provided code example.\r\n\r\n### Expected Behavior\r\n\r\nNo error.\r\n\r\n### Current Behavior\r\n\r\nTypeError: Unicode-objects must be encoded before hashing\r\n\r\n### Is this a regression?\r\n\r\n- [X] Yes, this used to work in a previous version.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 1.21.0\r\n- Python version: 3.8.14\r\n- Operating System: Ubuntu 18.04.6\r\n- Browser: Chrome Version 107.0.5304.87 (Official Build) (64-bit)\r\n- Virtual environment:\r\n\r\n\r\n### Additional Information\r\n\r\nIn my case, the [`main_script_path_str` passed to `get_pages` here](https://github.com/streamlit/streamlit/blob/316cef3426c4f3c81e99414446b0e9a131dbdb57/lib/streamlit/source_util.py#L106) ends up being a subclass of `str`.\r\n\r\nA good enough fix for my specific case is using `isinstance` instead of `type`:\r\n\r\n```\r\nb = cast(bytes, s.encode(\"utf-8\") if isinstance(s, str) else s)\r\n```\r\n\r\nOr something like this could maybe catch more potential issues:\r\n\r\n```\r\nb = cast(bytes, str(s).encode(\"utf-8\") if type(s) is not bytes else s)\r\n```\r\n\r\n### Are you willing to submit a PR?\r\n\r\n- [X] Yes, I am willing to submit a PR!\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"A bunch of useful utilities.\"\"\"\n\nfrom __future__ import annotations\n\nimport dataclasses\nimport functools\nimport hashlib\nimport os\nimport subprocess\nfrom typing import Any, Dict, Iterable, List, Mapping, Set, TypeVar, Union, cast\n\nfrom typing_extensions import Final\n\nfrom streamlit import env_util\n\n# URL of Streamlit's help page.\nHELP_DOC: Final = \"https://docs.streamlit.io/\"\nFLOAT_EQUALITY_EPSILON: Final[float] = 0.000000000005\n\n\ndef memoize(func):\n \"\"\"Decorator to memoize the result of a no-args func.\"\"\"\n result: List[Any] = []\n\n @functools.wraps(func)\n def wrapped_func():\n if not result:\n result.append(func())\n return result[0]\n\n return wrapped_func\n\n\ndef open_browser(url):\n \"\"\"Open a web browser pointing to a given URL.\n\n We use this function instead of Python's `webbrowser` module because this\n way we can capture stdout/stderr to avoid polluting the terminal with the\n browser's messages. For example, Chrome always prints things like \"Created\n new window in existing browser session\", and those get on the user's way.\n\n url : str\n The URL. Must include the protocol.\n\n \"\"\"\n # Treat Windows separately because:\n # 1. /dev/null doesn't exist.\n # 2. subprocess.Popen(['start', url]) doesn't actually pop up the\n # browser even though 'start url' works from the command prompt.\n # Fun!\n # Also, use webbrowser if we are on Linux and xdg-open is not installed.\n #\n # We don't use the webbrowser module on Linux and Mac because some browsers\n # (ahem... Chrome) always print \"Opening in existing browser session\" to\n # the terminal, which is spammy and annoying. So instead we start the\n # browser ourselves and send all its output to /dev/null.\n\n if env_util.IS_WINDOWS:\n _open_browser_with_webbrowser(url)\n return\n if env_util.IS_LINUX_OR_BSD:\n if env_util.is_executable_in_path(\"xdg-open\"):\n _open_browser_with_command(\"xdg-open\", url)\n return\n _open_browser_with_webbrowser(url)\n return\n if env_util.IS_DARWIN:\n _open_browser_with_command(\"open\", url)\n return\n\n import platform\n\n raise Error('Cannot open browser in platform \"%s\"' % platform.system())\n\n\ndef _open_browser_with_webbrowser(url):\n import webbrowser\n\n webbrowser.open(url)\n\n\ndef _open_browser_with_command(command, url):\n cmd_line = [command, url]\n with open(os.devnull, \"w\") as devnull:\n subprocess.Popen(cmd_line, stdout=devnull, stderr=subprocess.STDOUT)\n\n\ndef _maybe_tuple_to_list(item: Any) -> Any:\n \"\"\"Convert a tuple to a list. 
Leave as is if it's not a tuple.\"\"\"\n if isinstance(item, tuple):\n return list(item)\n return item\n\n\ndef repr_(self: Any) -> str:\n \"\"\"A clean repr for a class, excluding both values that are likely defaults,\n and those explicitly default for dataclasses.\n \"\"\"\n classname = self.__class__.__name__\n # Most of the falsey value, but excluding 0 and 0.0, since those often have\n # semantic meaning within streamlit.\n defaults: list[Any] = [None, \"\", False, [], set(), dict()]\n if dataclasses.is_dataclass(self):\n fields_vals = (\n (f.name, getattr(self, f.name))\n for f in dataclasses.fields(self)\n if f.repr\n and getattr(self, f.name) != f.default\n and getattr(self, f.name) not in defaults\n )\n else:\n fields_vals = ((f, v) for (f, v) in self.__dict__.items() if v not in defaults)\n\n field_reprs = \", \".join(f\"{field}={value!r}\" for field, value in fields_vals)\n return f\"{classname}({field_reprs})\"\n\n\n_Value = TypeVar(\"_Value\")\n\n\ndef index_(iterable: Iterable[_Value], x: _Value) -> int:\n \"\"\"Return zero-based index of the first item whose value is equal to x.\n Raises a ValueError if there is no such item.\n\n We need a custom implementation instead of the built-in list .index() to\n be compatible with NumPy array and Pandas Series.\n\n Parameters\n ----------\n iterable : list, tuple, numpy.ndarray, pandas.Series\n x : Any\n\n Returns\n -------\n int\n \"\"\"\n for i, value in enumerate(iterable):\n if x == value:\n return i\n elif isinstance(value, float) and isinstance(x, float):\n if abs(x - value) < FLOAT_EQUALITY_EPSILON:\n return i\n raise ValueError(\"{} is not in iterable\".format(str(x)))\n\n\n_Key = TypeVar(\"_Key\", bound=str)\n\n\ndef lower_clean_dict_keys(dict: Mapping[_Key, _Value]) -> Dict[str, _Value]:\n return {k.lower().strip(): v for k, v in dict.items()}\n\n\n# TODO: Move this into errors.py? Replace with StreamlitAPIException?\nclass Error(Exception):\n pass\n\n\ndef calc_md5(s: Union[bytes, str]) -> str:\n \"\"\"Return the md5 hash of the given string.\"\"\"\n h = hashlib.new(\"md5\")\n\n # mypy seems to have trouble inferring that the type of the if/else expression is\n # always bytes.\n b = cast(bytes, s.encode(\"utf-8\") if type(s) is str else s)\n\n h.update(b)\n return h.hexdigest()\n\n\ndef exclude_key_query_params(\n query_params: Dict[str, List[str]], keys_to_exclude: List[str]\n) -> Dict[str, List[str]]:\n \"\"\"Returns new object query_params : Dict[str, List[str]], but without keys defined with keys_to_drop : List[str].\"\"\"\n return {\n key: value\n for key, value in query_params.items()\n if key.lower() not in keys_to_exclude\n }\n\n\ndef extract_key_query_params(\n query_params: Dict[str, List[str]], param_key: str\n) -> Set[str]:\n \"\"\"Extracts key (case-insensitive) query params from Dict, and returns them as Set of str.\"\"\"\n return set(\n [\n item.lower()\n for sublist in [\n [value.lower() for value in query_params[key]]\n for key in query_params.keys()\n if key.lower() == param_key and query_params.get(key)\n ]\n for item in sublist\n ]\n )\n", "path": "lib/streamlit/util.py"}]}
| 3,302 | 220 |
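A minimal standalone reproduction of the behaviour the streamlit patch above addresses: an exact `type(s) is str` check rejects `str` subclasses, so the raw (un-encoded) object reaches the hash and raises `TypeError`, while `isinstance` accepts subclasses and encodes them first. The `PathString` subclass below is made up for illustration and is not part of the Streamlit code base.

```python
import hashlib


class PathString(str):
    """Illustrative str subclass, standing in for the str-like path object from the issue."""


def calc_md5_exact_type(s):
    # Old check: False for subclasses, so `s` reaches md5() un-encoded and raises TypeError.
    b = s.encode("utf-8") if type(s) is str else s
    return hashlib.md5(b).hexdigest()


def calc_md5_isinstance(s):
    # New check: True for subclasses as well, so they are encoded before hashing.
    b = s.encode("utf-8") if isinstance(s, str) else s
    return hashlib.md5(b).hexdigest()


s = PathString("pages/home.py")
try:
    calc_md5_exact_type(s)
except TypeError as exc:
    print("exact type check fails:", exc)
print("isinstance check:", calc_md5_isinstance(s))
```

The alternative quoted in the issue (`str(s).encode("utf-8") if type(s) is not bytes else s`) would also handle this case; the merged patch takes the narrower `isinstance` route.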
gh_patches_debug_2189
|
rasdani/github-patches
|
git_diff
|
googleapis__python-bigquery-648
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
chore: use paths for --cov arguments in noxfile
https://github.com/googleapis/python-bigquery/blob/6a48e80bc7d347f381b181f4cf81fef105d0ad0d/noxfile.py#L80-L81
To pull https://github.com/googleapis/synthtool/pull/859 from templates.
</issue>
<code>
[start of noxfile.py]
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 import pathlib
18 import os
19 import shutil
20
21 import nox
22
23
24 PYTYPE_VERSION = "pytype==2021.4.9"
25 BLACK_VERSION = "black==19.10b0"
26 BLACK_PATHS = ("docs", "google", "samples", "tests", "noxfile.py", "setup.py")
27
28 DEFAULT_PYTHON_VERSION = "3.8"
29 SYSTEM_TEST_PYTHON_VERSIONS = ["3.8"]
30 UNIT_TEST_PYTHON_VERSIONS = ["3.6", "3.7", "3.8", "3.9"]
31 CURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()
32
33 # 'docfx' is excluded since it only needs to run in 'docs-presubmit'
34 nox.options.sessions = [
35 "unit_noextras",
36 "unit",
37 "system",
38 "snippets",
39 "cover",
40 "lint",
41 "lint_setup_py",
42 "blacken",
43 "pytype",
44 "docs",
45 ]
46
47
48 def default(session, install_extras=True):
49 """Default unit test session.
50
51 This is intended to be run **without** an interpreter set, so
52 that the current ``python`` (on the ``PATH``) or the version of
53 Python corresponding to the ``nox`` binary the ``PATH`` can
54 run the tests.
55 """
56 constraints_path = str(
57 CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt"
58 )
59
60 # Install all test dependencies, then install local packages in-place.
61 session.install(
62 "mock",
63 "pytest",
64 "google-cloud-testutils",
65 "pytest-cov",
66 "freezegun",
67 "-c",
68 constraints_path,
69 )
70
71 install_target = ".[all]" if install_extras else "."
72 session.install("-e", install_target, "-c", constraints_path)
73
74 session.install("ipython", "-c", constraints_path)
75
76 # Run py.test against the unit tests.
77 session.run(
78 "py.test",
79 "--quiet",
80 "--cov=google.cloud.bigquery",
81 "--cov=tests.unit",
82 "--cov-append",
83 "--cov-config=.coveragerc",
84 "--cov-report=",
85 "--cov-fail-under=0",
86 os.path.join("tests", "unit"),
87 *session.posargs,
88 )
89
90
91 @nox.session(python=UNIT_TEST_PYTHON_VERSIONS)
92 def unit(session):
93 """Run the unit test suite."""
94 default(session)
95
96
97 @nox.session(python=UNIT_TEST_PYTHON_VERSIONS[-1])
98 def unit_noextras(session):
99 """Run the unit test suite."""
100 default(session, install_extras=False)
101
102
103 @nox.session(python=DEFAULT_PYTHON_VERSION)
104 def pytype(session):
105 """Run type checks."""
106 session.install("-e", ".[all]")
107 session.install("ipython")
108 session.install(PYTYPE_VERSION)
109 session.run("pytype")
110
111
112 @nox.session(python=SYSTEM_TEST_PYTHON_VERSIONS)
113 def system(session):
114 """Run the system test suite."""
115
116 constraints_path = str(
117 CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt"
118 )
119
120 # Check the value of `RUN_SYSTEM_TESTS` env var. It defaults to true.
121 if os.environ.get("RUN_SYSTEM_TESTS", "true") == "false":
122 session.skip("RUN_SYSTEM_TESTS is set to false, skipping")
123
124 # Sanity check: Only run system tests if the environment variable is set.
125 if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
126 session.skip("Credentials must be set via environment variable.")
127
128 # Use pre-release gRPC for system tests.
129 session.install("--pre", "grpcio", "-c", constraints_path)
130
131 # Install all test dependencies, then install local packages in place.
132 session.install(
133 "mock", "pytest", "psutil", "google-cloud-testutils", "-c", constraints_path
134 )
135 if os.environ.get("GOOGLE_API_USE_CLIENT_CERTIFICATE", "") == "true":
136 # mTLS test requires pyopenssl and latest google-cloud-storage
137 session.install("google-cloud-storage", "pyopenssl")
138 else:
139 session.install("google-cloud-storage", "-c", constraints_path)
140
141 session.install("-e", ".[all]", "-c", constraints_path)
142 session.install("ipython", "-c", constraints_path)
143
144 # Run py.test against the system tests.
145 session.run("py.test", "--quiet", os.path.join("tests", "system"), *session.posargs)
146
147
148 @nox.session(python=SYSTEM_TEST_PYTHON_VERSIONS)
149 def snippets(session):
150 """Run the snippets test suite."""
151
152 # Check the value of `RUN_SNIPPETS_TESTS` env var. It defaults to true.
153 if os.environ.get("RUN_SNIPPETS_TESTS", "true") == "false":
154 session.skip("RUN_SNIPPETS_TESTS is set to false, skipping")
155
156 # Sanity check: Only run snippets tests if the environment variable is set.
157 if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
158 session.skip("Credentials must be set via environment variable.")
159
160 constraints_path = str(
161 CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt"
162 )
163
164 # Install all test dependencies, then install local packages in place.
165 session.install("mock", "pytest", "google-cloud-testutils", "-c", constraints_path)
166 session.install("google-cloud-storage", "-c", constraints_path)
167 session.install("grpcio", "-c", constraints_path)
168
169 session.install("-e", ".[all]", "-c", constraints_path)
170
171 # Run py.test against the snippets tests.
172 # Skip tests in samples/snippets, as those are run in a different session
173 # using the nox config from that directory.
174 session.run("py.test", os.path.join("docs", "snippets.py"), *session.posargs)
175 session.run(
176 "py.test",
177 "samples",
178 "--ignore=samples/snippets",
179 "--ignore=samples/geography",
180 *session.posargs,
181 )
182
183
184 @nox.session(python=DEFAULT_PYTHON_VERSION)
185 def cover(session):
186 """Run the final coverage report.
187
188 This outputs the coverage report aggregating coverage from the unit
189 test runs (not system test runs), and then erases coverage data.
190 """
191 session.install("coverage", "pytest-cov")
192 session.run("coverage", "report", "--show-missing", "--fail-under=100")
193 session.run("coverage", "erase")
194
195
196 @nox.session(python=SYSTEM_TEST_PYTHON_VERSIONS)
197 def prerelease_deps(session):
198 """Run all tests with prerelease versions of dependencies installed.
199
200 https://github.com/googleapis/python-bigquery/issues/95
201 """
202 # PyArrow prerelease packages are published to an alternative PyPI host.
203 # https://arrow.apache.org/docs/python/install.html#installing-nightly-packages
204 session.install(
205 "--extra-index-url", "https://pypi.fury.io/arrow-nightlies/", "--pre", "pyarrow"
206 )
207 session.install("--pre", "grpcio", "pandas")
208 session.install(
209 "freezegun",
210 "google-cloud-storage",
211 "google-cloud-testutils",
212 "IPython",
213 "mock",
214 "psutil",
215 "pytest",
216 "pytest-cov",
217 )
218 session.install("-e", ".[all]")
219
220 # Print out prerelease package versions.
221 session.run("python", "-c", "import grpc; print(grpc.__version__)")
222 session.run("python", "-c", "import pandas; print(pandas.__version__)")
223 session.run("python", "-c", "import pyarrow; print(pyarrow.__version__)")
224
225 # Run all tests, except a few samples tests which require extra dependencies.
226 session.run("py.test", "tests/unit")
227 session.run("py.test", "tests/system")
228 session.run("py.test", "samples/tests")
229
230
231 @nox.session(python=DEFAULT_PYTHON_VERSION)
232 def lint(session):
233 """Run linters.
234
235 Returns a failure if the linters find linting errors or sufficiently
236 serious code quality issues.
237 """
238
239 session.install("flake8", BLACK_VERSION)
240 session.install("-e", ".")
241 session.run("flake8", os.path.join("google", "cloud", "bigquery"))
242 session.run("flake8", "tests")
243 session.run("flake8", os.path.join("docs", "samples"))
244 session.run("flake8", os.path.join("docs", "snippets.py"))
245 session.run("black", "--check", *BLACK_PATHS)
246
247
248 @nox.session(python=DEFAULT_PYTHON_VERSION)
249 def lint_setup_py(session):
250 """Verify that setup.py is valid (including RST check)."""
251
252 session.install("docutils", "Pygments")
253 session.run("python", "setup.py", "check", "--restructuredtext", "--strict")
254
255
256 @nox.session(python="3.6")
257 def blacken(session):
258 """Run black.
259 Format code to uniform standard.
260
261 This currently uses Python 3.6 due to the automated Kokoro run of synthtool.
262 That run uses an image that doesn't have 3.6 installed. Before updating this
263 check the state of the `gcp_ubuntu_config` we use for that Kokoro run.
264 """
265 session.install(BLACK_VERSION)
266 session.run("black", *BLACK_PATHS)
267
268
269 @nox.session(python=DEFAULT_PYTHON_VERSION)
270 def docs(session):
271 """Build the docs."""
272
273 session.install("ipython", "recommonmark", "sphinx", "sphinx_rtd_theme")
274 session.install("google-cloud-storage")
275 session.install("-e", ".[all]")
276
277 shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
278 session.run(
279 "sphinx-build",
280 "-W", # warnings as errors
281 "-T", # show full traceback on exception
282 "-N", # no colors
283 "-b",
284 "html",
285 "-d",
286 os.path.join("docs", "_build", "doctrees", ""),
287 os.path.join("docs", ""),
288 os.path.join("docs", "_build", "html", ""),
289 )
290
291
292 @nox.session(python=DEFAULT_PYTHON_VERSION)
293 def docfx(session):
294 """Build the docfx yaml files for this library."""
295
296 session.install("-e", ".")
297 session.install("sphinx", "alabaster", "recommonmark", "gcp-sphinx-docfx-yaml")
298
299 shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
300 session.run(
301 "sphinx-build",
302 "-T", # show full traceback on exception
303 "-N", # no colors
304 "-D",
305 (
306 "extensions=sphinx.ext.autodoc,"
307 "sphinx.ext.autosummary,"
308 "docfx_yaml.extension,"
309 "sphinx.ext.intersphinx,"
310 "sphinx.ext.coverage,"
311 "sphinx.ext.napoleon,"
312 "sphinx.ext.todo,"
313 "sphinx.ext.viewcode,"
314 "recommonmark"
315 ),
316 "-b",
317 "html",
318 "-d",
319 os.path.join("docs", "_build", "doctrees", ""),
320 os.path.join("docs", ""),
321 os.path.join("docs", "_build", "html", ""),
322 )
323
[end of noxfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -77,8 +77,8 @@
session.run(
"py.test",
"--quiet",
- "--cov=google.cloud.bigquery",
- "--cov=tests.unit",
+ "--cov=google/cloud/bigquery",
+ "--cov=tests/unit",
"--cov-append",
"--cov-config=.coveragerc",
"--cov-report=",
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -77,8 +77,8 @@\n session.run(\n \"py.test\",\n \"--quiet\",\n- \"--cov=google.cloud.bigquery\",\n- \"--cov=tests.unit\",\n+ \"--cov=google/cloud/bigquery\",\n+ \"--cov=tests/unit\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n", "issue": "chore: use paths for --cov arguments in noxfile\nhttps://github.com/googleapis/python-bigquery/blob/6a48e80bc7d347f381b181f4cf81fef105d0ad0d/noxfile.py#L80-L81\r\n\r\nTo pull https://github.com/googleapis/synthtool/pull/859 from templates.\n", "before_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport pathlib\nimport os\nimport shutil\n\nimport nox\n\n\nPYTYPE_VERSION = \"pytype==2021.4.9\"\nBLACK_VERSION = \"black==19.10b0\"\nBLACK_PATHS = (\"docs\", \"google\", \"samples\", \"tests\", \"noxfile.py\", \"setup.py\")\n\nDEFAULT_PYTHON_VERSION = \"3.8\"\nSYSTEM_TEST_PYTHON_VERSIONS = [\"3.8\"]\nUNIT_TEST_PYTHON_VERSIONS = [\"3.6\", \"3.7\", \"3.8\", \"3.9\"]\nCURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()\n\n# 'docfx' is excluded since it only needs to run in 'docs-presubmit'\nnox.options.sessions = [\n \"unit_noextras\",\n \"unit\",\n \"system\",\n \"snippets\",\n \"cover\",\n \"lint\",\n \"lint_setup_py\",\n \"blacken\",\n \"pytype\",\n \"docs\",\n]\n\n\ndef default(session, install_extras=True):\n \"\"\"Default unit test session.\n\n This is intended to be run **without** an interpreter set, so\n that the current ``python`` (on the ``PATH``) or the version of\n Python corresponding to the ``nox`` binary the ``PATH`` can\n run the tests.\n \"\"\"\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Install all test dependencies, then install local packages in-place.\n session.install(\n \"mock\",\n \"pytest\",\n \"google-cloud-testutils\",\n \"pytest-cov\",\n \"freezegun\",\n \"-c\",\n constraints_path,\n )\n\n install_target = \".[all]\" if install_extras else \".\"\n session.install(\"-e\", install_target, \"-c\", constraints_path)\n\n session.install(\"ipython\", \"-c\", constraints_path)\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=google.cloud.bigquery\",\n \"--cov=tests.unit\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=0\",\n os.path.join(\"tests\", \"unit\"),\n *session.posargs,\n )\n\n\[email protected](python=UNIT_TEST_PYTHON_VERSIONS)\ndef unit(session):\n \"\"\"Run the unit test suite.\"\"\"\n default(session)\n\n\[email protected](python=UNIT_TEST_PYTHON_VERSIONS[-1])\ndef unit_noextras(session):\n \"\"\"Run the unit test suite.\"\"\"\n default(session, install_extras=False)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef pytype(session):\n \"\"\"Run type checks.\"\"\"\n session.install(\"-e\", \".[all]\")\n session.install(\"ipython\")\n 
session.install(PYTYPE_VERSION)\n session.run(\"pytype\")\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef system(session):\n \"\"\"Run the system test suite.\"\"\"\n\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Check the value of `RUN_SYSTEM_TESTS` env var. It defaults to true.\n if os.environ.get(\"RUN_SYSTEM_TESTS\", \"true\") == \"false\":\n session.skip(\"RUN_SYSTEM_TESTS is set to false, skipping\")\n\n # Sanity check: Only run system tests if the environment variable is set.\n if not os.environ.get(\"GOOGLE_APPLICATION_CREDENTIALS\", \"\"):\n session.skip(\"Credentials must be set via environment variable.\")\n\n # Use pre-release gRPC for system tests.\n session.install(\"--pre\", \"grpcio\", \"-c\", constraints_path)\n\n # Install all test dependencies, then install local packages in place.\n session.install(\n \"mock\", \"pytest\", \"psutil\", \"google-cloud-testutils\", \"-c\", constraints_path\n )\n if os.environ.get(\"GOOGLE_API_USE_CLIENT_CERTIFICATE\", \"\") == \"true\":\n # mTLS test requires pyopenssl and latest google-cloud-storage\n session.install(\"google-cloud-storage\", \"pyopenssl\")\n else:\n session.install(\"google-cloud-storage\", \"-c\", constraints_path)\n\n session.install(\"-e\", \".[all]\", \"-c\", constraints_path)\n session.install(\"ipython\", \"-c\", constraints_path)\n\n # Run py.test against the system tests.\n session.run(\"py.test\", \"--quiet\", os.path.join(\"tests\", \"system\"), *session.posargs)\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef snippets(session):\n \"\"\"Run the snippets test suite.\"\"\"\n\n # Check the value of `RUN_SNIPPETS_TESTS` env var. It defaults to true.\n if os.environ.get(\"RUN_SNIPPETS_TESTS\", \"true\") == \"false\":\n session.skip(\"RUN_SNIPPETS_TESTS is set to false, skipping\")\n\n # Sanity check: Only run snippets tests if the environment variable is set.\n if not os.environ.get(\"GOOGLE_APPLICATION_CREDENTIALS\", \"\"):\n session.skip(\"Credentials must be set via environment variable.\")\n\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Install all test dependencies, then install local packages in place.\n session.install(\"mock\", \"pytest\", \"google-cloud-testutils\", \"-c\", constraints_path)\n session.install(\"google-cloud-storage\", \"-c\", constraints_path)\n session.install(\"grpcio\", \"-c\", constraints_path)\n\n session.install(\"-e\", \".[all]\", \"-c\", constraints_path)\n\n # Run py.test against the snippets tests.\n # Skip tests in samples/snippets, as those are run in a different session\n # using the nox config from that directory.\n session.run(\"py.test\", os.path.join(\"docs\", \"snippets.py\"), *session.posargs)\n session.run(\n \"py.test\",\n \"samples\",\n \"--ignore=samples/snippets\",\n \"--ignore=samples/geography\",\n *session.posargs,\n )\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef cover(session):\n \"\"\"Run the final coverage report.\n\n This outputs the coverage report aggregating coverage from the unit\n test runs (not system test runs), and then erases coverage data.\n \"\"\"\n session.install(\"coverage\", \"pytest-cov\")\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n session.run(\"coverage\", \"erase\")\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef prerelease_deps(session):\n \"\"\"Run all tests with prerelease versions of dependencies installed.\n\n 
https://github.com/googleapis/python-bigquery/issues/95\n \"\"\"\n # PyArrow prerelease packages are published to an alternative PyPI host.\n # https://arrow.apache.org/docs/python/install.html#installing-nightly-packages\n session.install(\n \"--extra-index-url\", \"https://pypi.fury.io/arrow-nightlies/\", \"--pre\", \"pyarrow\"\n )\n session.install(\"--pre\", \"grpcio\", \"pandas\")\n session.install(\n \"freezegun\",\n \"google-cloud-storage\",\n \"google-cloud-testutils\",\n \"IPython\",\n \"mock\",\n \"psutil\",\n \"pytest\",\n \"pytest-cov\",\n )\n session.install(\"-e\", \".[all]\")\n\n # Print out prerelease package versions.\n session.run(\"python\", \"-c\", \"import grpc; print(grpc.__version__)\")\n session.run(\"python\", \"-c\", \"import pandas; print(pandas.__version__)\")\n session.run(\"python\", \"-c\", \"import pyarrow; print(pyarrow.__version__)\")\n\n # Run all tests, except a few samples tests which require extra dependencies.\n session.run(\"py.test\", \"tests/unit\")\n session.run(\"py.test\", \"tests/system\")\n session.run(\"py.test\", \"samples/tests\")\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef lint(session):\n \"\"\"Run linters.\n\n Returns a failure if the linters find linting errors or sufficiently\n serious code quality issues.\n \"\"\"\n\n session.install(\"flake8\", BLACK_VERSION)\n session.install(\"-e\", \".\")\n session.run(\"flake8\", os.path.join(\"google\", \"cloud\", \"bigquery\"))\n session.run(\"flake8\", \"tests\")\n session.run(\"flake8\", os.path.join(\"docs\", \"samples\"))\n session.run(\"flake8\", os.path.join(\"docs\", \"snippets.py\"))\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef lint_setup_py(session):\n \"\"\"Verify that setup.py is valid (including RST check).\"\"\"\n\n session.install(\"docutils\", \"Pygments\")\n session.run(\"python\", \"setup.py\", \"check\", \"--restructuredtext\", \"--strict\")\n\n\[email protected](python=\"3.6\")\ndef blacken(session):\n \"\"\"Run black.\n Format code to uniform standard.\n\n This currently uses Python 3.6 due to the automated Kokoro run of synthtool.\n That run uses an image that doesn't have 3.6 installed. 
Before updating this\n check the state of the `gcp_ubuntu_config` we use for that Kokoro run.\n \"\"\"\n session.install(BLACK_VERSION)\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef docs(session):\n \"\"\"Build the docs.\"\"\"\n\n session.install(\"ipython\", \"recommonmark\", \"sphinx\", \"sphinx_rtd_theme\")\n session.install(\"google-cloud-storage\")\n session.install(\"-e\", \".[all]\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-W\", # warnings as errors\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef docfx(session):\n \"\"\"Build the docfx yaml files for this library.\"\"\"\n\n session.install(\"-e\", \".\")\n session.install(\"sphinx\", \"alabaster\", \"recommonmark\", \"gcp-sphinx-docfx-yaml\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-D\",\n (\n \"extensions=sphinx.ext.autodoc,\"\n \"sphinx.ext.autosummary,\"\n \"docfx_yaml.extension,\"\n \"sphinx.ext.intersphinx,\"\n \"sphinx.ext.coverage,\"\n \"sphinx.ext.napoleon,\"\n \"sphinx.ext.todo,\"\n \"sphinx.ext.viewcode,\"\n \"recommonmark\"\n ),\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n", "path": "noxfile.py"}]}
| 4,092 | 109 |
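A stripped-down nox session showing the path-style `--cov` targets the patch above switches to (pytest-cov accepts both dotted module names and filesystem paths; the synthtool template standardizes on paths). The session body is illustrative only and is not the project's actual noxfile.

```python
import os

import nox


@nox.session(python="3.8")
def unit(session):
    """Run unit tests, pointing pytest-cov at directories rather than dotted module names."""
    session.install("pytest", "pytest-cov")
    session.install("-e", ".")
    session.run(
        "py.test",
        "--quiet",
        "--cov=google/cloud/bigquery",  # path form, matching the template update
        "--cov=tests/unit",
        "--cov-report=",
        os.path.join("tests", "unit"),
        *session.posargs,
    )
```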
gh_patches_debug_34454
|
rasdani/github-patches
|
git_diff
|
mindsdb__mindsdb-1336
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't upload big files with waitress
* Mindsdb version you tried to install: latest
* Additional info if applicable: waitress==2.0.0
Waitress consumes too much memory when uploading a file: 16 GB of RAM is not enough to upload a 1.1 GB file.
</issue>
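For context, the handler below already reads the request body in 8 KB chunks and spools file parts to a temporary directory; the issue attributes the memory use to waitress itself rather than to this parsing loop. A minimal stdlib-only sketch of the spool-to-disk idea, independent of mindsdb's actual handler (the function name is made up):

```python
import shutil
import tempfile


def spool_stream_to_disk(stream, chunk_size=8192):
    """Copy a file-like request stream to a named temp file without holding it in memory."""
    tmp = tempfile.NamedTemporaryFile(prefix="upload_", delete=False)
    with tmp:
        # Reads and writes chunk_size bytes at a time, so memory stays bounded.
        shutil.copyfileobj(stream, tmp, length=chunk_size)
    return tmp.name
```

In a Flask view this would be called as `spool_stream_to_disk(request.stream)`; whether memory actually stays flat also depends on how the WSGI server (waitress here) buffers the incoming body, which is worth checking against its documentation.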
<code>
[start of mindsdb/api/http/namespaces/datasource.py]
1 import os
2 import threading
3 import tempfile
4 import re
5 import multipart
6
7 from dateutil.parser import parse
8 from flask import request, send_file
9 from flask_restx import Resource, abort # 'abort' using to return errors as json: {'message': 'error text'}
10 from flask import current_app as ca
11
12 from mindsdb.utilities.log import log
13 from mindsdb.api.http.namespaces.configs.datasources import ns_conf
14 from mindsdb.api.http.namespaces.entitites.datasources.datasource import (
15 datasource_metadata,
16 put_datasource_params
17 )
18 from mindsdb.api.http.namespaces.entitites.datasources.datasource_data import (
19 get_datasource_rows_params,
20 datasource_rows_metadata
21 )
22 from mindsdb.api.http.namespaces.entitites.datasources.datasource_files import (
23 put_datasource_file_params
24 )
25 from mindsdb.api.http.namespaces.entitites.datasources.datasource_missed_files import (
26 datasource_missed_files_metadata,
27 get_datasource_missed_files_params
28 )
29 from mindsdb.interfaces.database.integrations import get_db_integration
30
31
32 def parse_filter(key, value):
33 result = re.search(r'filter(_*.*)\[(.*)\]', key)
34 operator = result.groups()[0].strip('_') or 'like'
35 field = result.groups()[1]
36 operators_map = {
37 'like': 'like',
38 'in': 'in',
39 'nin': 'not in',
40 'gt': '>',
41 'lt': '<',
42 'gte': '>=',
43 'lte': '<=',
44 'eq': '=',
45 'neq': '!='
46 }
47 if operator not in operators_map:
48 return None
49 operator = operators_map[operator]
50 return [field, operator, value]
51
52
53 @ns_conf.route('/')
54 class DatasourcesList(Resource):
55 @ns_conf.doc('get_datasources_list')
56 @ns_conf.marshal_list_with(datasource_metadata)
57 def get(self):
58 '''List all datasources'''
59 return request.default_store.get_datasources()
60
61
62 @ns_conf.route('/<name>')
63 @ns_conf.param('name', 'Datasource name')
64 class Datasource(Resource):
65 @ns_conf.doc('get_datasource')
66 @ns_conf.marshal_with(datasource_metadata)
67 def get(self, name):
68 '''return datasource metadata'''
69 ds = request.default_store.get_datasource(name)
70 if ds is not None:
71 return ds
72 return '', 404
73
74 @ns_conf.doc('delete_datasource')
75 def delete(self, name):
76 '''delete datasource'''
77
78 try:
79 request.default_store.delete_datasource(name)
80 except Exception as e:
81 log.error(e)
82 abort(400, str(e))
83 return '', 200
84
85 @ns_conf.doc('put_datasource', params=put_datasource_params)
86 @ns_conf.marshal_with(datasource_metadata)
87 def put(self, name):
88 '''add new datasource'''
89 data = {}
90
91 def on_field(field):
92 name = field.field_name.decode()
93 value = field.value.decode()
94 data[name] = value
95
96 file_object = None
97
98 def on_file(file):
99 nonlocal file_object
100 data['file'] = file.file_name.decode()
101 file_object = file.file_object
102
103 temp_dir_path = tempfile.mkdtemp(prefix='datasource_file_')
104
105 if request.headers['Content-Type'].startswith('multipart/form-data'):
106 parser = multipart.create_form_parser(
107 headers=request.headers,
108 on_field=on_field,
109 on_file=on_file,
110 config={
111 'UPLOAD_DIR': temp_dir_path.encode(), # bytes required
112 'UPLOAD_KEEP_FILENAME': True,
113 'UPLOAD_KEEP_EXTENSIONS': True,
114 'MAX_MEMORY_FILE_SIZE': 0
115 }
116 )
117
118 while True:
119 chunk = request.stream.read(8192)
120 if not chunk:
121 break
122 parser.write(chunk)
123 parser.finalize()
124 parser.close()
125
126 if file_object is not None and not file_object.closed:
127 file_object.close()
128 else:
129 data = request.json
130
131 if 'query' in data:
132 integration_id = request.json['integration_id']
133 integration = get_db_integration(integration_id, request.company_id)
134 if integration is None:
135 abort(400, f"{integration_id} integration doesn't exist")
136
137 if integration['type'] == 'mongodb':
138 data['find'] = data['query']
139
140 request.default_store.save_datasource(name, integration_id, data)
141 os.rmdir(temp_dir_path)
142 return request.default_store.get_datasource(name)
143
144 ds_name = data['name'] if 'name' in data else name
145 source = data['source'] if 'source' in data else name
146 source_type = data['source_type']
147
148 if source_type == 'file':
149 file_path = os.path.join(temp_dir_path, data['file'])
150 else:
151 file_path = None
152
153 request.default_store.save_datasource(ds_name, source_type, source, file_path)
154 os.rmdir(temp_dir_path)
155
156 return request.default_store.get_datasource(ds_name)
157
158
159 def analyzing_thread(name, default_store):
160 try:
161 from mindsdb.interfaces.storage.db import session
162 analysis = default_store.start_analysis(name)
163 session.close()
164 except Exception as e:
165 log.error(e)
166
167
168 @ns_conf.route('/<name>/analyze')
169 @ns_conf.param('name', 'Datasource name')
170 class Analyze(Resource):
171 @ns_conf.doc('analyse_dataset')
172 def get(self, name):
173 analysis = request.default_store.get_analysis(name)
174 if analysis is not None:
175 return analysis, 200
176
177
178 ds = request.default_store.get_datasource(name)
179 if ds is None:
180 log.error('No valid datasource given')
181 abort(400, 'No valid datasource given')
182
183 x = threading.Thread(target=analyzing_thread, args=(name, request.default_store))
184 x.start()
185 return {'status': 'analyzing'}, 200
186
187
188 @ns_conf.route('/<name>/analyze_refresh')
189 @ns_conf.param('name', 'Datasource name')
190 class Analyze2(Resource):
191 @ns_conf.doc('analyze_refresh_dataset')
192 def get(self, name):
193 analysis = request.default_store.get_analysis(name)
194 if analysis is not None:
195 return analysis, 200
196
197 ds = request.default_store.get_datasource(name)
198 if ds is None:
199 log.error('No valid datasource given')
200 abort(400, 'No valid datasource given')
201
202 x = threading.Thread(target=analyzing_thread, args=(name, request.default_store))
203 x.start()
204 return {'status': 'analyzing'}, 200
205
206
207 @ns_conf.route('/<name>/data/')
208 @ns_conf.param('name', 'Datasource name')
209 class DatasourceData(Resource):
210 @ns_conf.doc('get_datasource_data', params=get_datasource_rows_params)
211 @ns_conf.marshal_with(datasource_rows_metadata)
212 def get(self, name):
213 '''return data rows'''
214 ds = request.default_store.get_datasource(name)
215 if ds is None:
216 abort(400, 'No valid datasource given')
217
218 params = {
219 'page[size]': None,
220 'page[offset]': None
221 }
222 where = []
223 for key, value in request.args.items():
224 if key == 'page[size]':
225 params['page[size]'] = int(value)
226 if key == 'page[offset]':
227 params['page[offset]'] = int(value)
228 elif key.startswith('filter'):
229 param = parse_filter(key, value)
230 if param is None:
231 abort(400, f'Not valid filter "{key}"')
232 where.append(param)
233
234 data_dict = request.default_store.get_data(name, where, params['page[size]'], params['page[offset]'])
235 return data_dict, 200
236
237
238 @ns_conf.route('/<name>/download')
239 @ns_conf.param('name', 'Datasource name')
240 class DatasourceMissedFilesDownload(Resource):
241 @ns_conf.doc('get_datasource_download')
242 def get(self, name):
243 '''download uploaded file'''
244 ds = request.default_store.get_datasource(name)
245 if not ds:
246 abort(404, "{} not found".format(name))
247 # force download from s3
248 request.default_store.get_datasource_obj(name)
249 if not os.path.exists(ds['source']):
250 abort(404, "{} not found".format(name))
251
252 return send_file(os.path.abspath(ds['source']), as_attachment=True)
253
[end of mindsdb/api/http/namespaces/datasource.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mindsdb/api/http/namespaces/datasource.py b/mindsdb/api/http/namespaces/datasource.py
--- a/mindsdb/api/http/namespaces/datasource.py
+++ b/mindsdb/api/http/namespaces/datasource.py
@@ -3,6 +3,8 @@
import tempfile
import re
import multipart
+import zipfile
+import tarfile
from dateutil.parser import parse
from flask import request, send_file
@@ -10,6 +12,7 @@
from flask import current_app as ca
from mindsdb.utilities.log import log
+from mindsdb.api.http.utils import http_error
from mindsdb.api.http.namespaces.configs.datasources import ns_conf
from mindsdb.api.http.namespaces.entitites.datasources.datasource import (
datasource_metadata,
@@ -147,6 +150,24 @@
if source_type == 'file':
file_path = os.path.join(temp_dir_path, data['file'])
+ lp = file_path.lower()
+ if lp.endswith(('.zip', '.tar.gz')):
+ if lp.endswith('.zip'):
+ with zipfile.ZipFile(file_path) as f:
+ f.extractall(temp_dir_path)
+ elif lp.endswith('.tar.gz'):
+ with tarfile.open(file_path) as f:
+ f.extractall(temp_dir_path)
+ os.remove(file_path)
+ files = os.listdir(temp_dir_path)
+ if len(files) != 1:
+ os.rmdir(temp_dir_path)
+ return http_error(400, 'Wrong content.', 'Archive must contain only one data file.')
+ file_path = os.path.join(temp_dir_path, files[0])
+ source = files[0]
+ if not os.path.isfile(file_path):
+ os.rmdir(temp_dir_path)
+ return http_error(400, 'Wrong content.', 'Archive must contain data file in root.')
else:
file_path = None
@@ -174,7 +195,6 @@
if analysis is not None:
return analysis, 200
-
ds = request.default_store.get_datasource(name)
if ds is None:
log.error('No valid datasource given')
|
{"golden_diff": "diff --git a/mindsdb/api/http/namespaces/datasource.py b/mindsdb/api/http/namespaces/datasource.py\n--- a/mindsdb/api/http/namespaces/datasource.py\n+++ b/mindsdb/api/http/namespaces/datasource.py\n@@ -3,6 +3,8 @@\n import tempfile\n import re\n import multipart\n+import zipfile\n+import tarfile\n \n from dateutil.parser import parse\n from flask import request, send_file\n@@ -10,6 +12,7 @@\n from flask import current_app as ca\n \n from mindsdb.utilities.log import log\n+from mindsdb.api.http.utils import http_error\n from mindsdb.api.http.namespaces.configs.datasources import ns_conf\n from mindsdb.api.http.namespaces.entitites.datasources.datasource import (\n datasource_metadata,\n@@ -147,6 +150,24 @@\n \n if source_type == 'file':\n file_path = os.path.join(temp_dir_path, data['file'])\n+ lp = file_path.lower()\n+ if lp.endswith(('.zip', '.tar.gz')):\n+ if lp.endswith('.zip'):\n+ with zipfile.ZipFile(file_path) as f:\n+ f.extractall(temp_dir_path)\n+ elif lp.endswith('.tar.gz'):\n+ with tarfile.open(file_path) as f:\n+ f.extractall(temp_dir_path)\n+ os.remove(file_path)\n+ files = os.listdir(temp_dir_path)\n+ if len(files) != 1:\n+ os.rmdir(temp_dir_path)\n+ return http_error(400, 'Wrong content.', 'Archive must contain only one data file.')\n+ file_path = os.path.join(temp_dir_path, files[0])\n+ source = files[0]\n+ if not os.path.isfile(file_path):\n+ os.rmdir(temp_dir_path)\n+ return http_error(400, 'Wrong content.', 'Archive must contain data file in root.')\n else:\n file_path = None\n \n@@ -174,7 +195,6 @@\n if analysis is not None:\n return analysis, 200\n \n-\n ds = request.default_store.get_datasource(name)\n if ds is None:\n log.error('No valid datasource given')\n", "issue": "Can`t upload big files with waitress\n* Mindsdb version you tried to install: latest\r\n* Additional info if applicable: waitress==2.0.0\r\n\r\nWaitress consumes too much memory when uploading a file. 
16Gb of memory is not enough to upload 1.1Gb file \n", "before_files": [{"content": "import os\nimport threading\nimport tempfile\nimport re\nimport multipart\n\nfrom dateutil.parser import parse\nfrom flask import request, send_file\nfrom flask_restx import Resource, abort # 'abort' using to return errors as json: {'message': 'error text'}\nfrom flask import current_app as ca\n\nfrom mindsdb.utilities.log import log\nfrom mindsdb.api.http.namespaces.configs.datasources import ns_conf\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource import (\n datasource_metadata,\n put_datasource_params\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_data import (\n get_datasource_rows_params,\n datasource_rows_metadata\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_files import (\n put_datasource_file_params\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_missed_files import (\n datasource_missed_files_metadata,\n get_datasource_missed_files_params\n)\nfrom mindsdb.interfaces.database.integrations import get_db_integration\n\n\ndef parse_filter(key, value):\n result = re.search(r'filter(_*.*)\\[(.*)\\]', key)\n operator = result.groups()[0].strip('_') or 'like'\n field = result.groups()[1]\n operators_map = {\n 'like': 'like',\n 'in': 'in',\n 'nin': 'not in',\n 'gt': '>',\n 'lt': '<',\n 'gte': '>=',\n 'lte': '<=',\n 'eq': '=',\n 'neq': '!='\n }\n if operator not in operators_map:\n return None\n operator = operators_map[operator]\n return [field, operator, value]\n\n\n@ns_conf.route('/')\nclass DatasourcesList(Resource):\n @ns_conf.doc('get_datasources_list')\n @ns_conf.marshal_list_with(datasource_metadata)\n def get(self):\n '''List all datasources'''\n return request.default_store.get_datasources()\n\n\n@ns_conf.route('/<name>')\n@ns_conf.param('name', 'Datasource name')\nclass Datasource(Resource):\n @ns_conf.doc('get_datasource')\n @ns_conf.marshal_with(datasource_metadata)\n def get(self, name):\n '''return datasource metadata'''\n ds = request.default_store.get_datasource(name)\n if ds is not None:\n return ds\n return '', 404\n\n @ns_conf.doc('delete_datasource')\n def delete(self, name):\n '''delete datasource'''\n\n try:\n request.default_store.delete_datasource(name)\n except Exception as e:\n log.error(e)\n abort(400, str(e))\n return '', 200\n\n @ns_conf.doc('put_datasource', params=put_datasource_params)\n @ns_conf.marshal_with(datasource_metadata)\n def put(self, name):\n '''add new datasource'''\n data = {}\n\n def on_field(field):\n name = field.field_name.decode()\n value = field.value.decode()\n data[name] = value\n\n file_object = None\n\n def on_file(file):\n nonlocal file_object\n data['file'] = file.file_name.decode()\n file_object = file.file_object\n\n temp_dir_path = tempfile.mkdtemp(prefix='datasource_file_')\n\n if request.headers['Content-Type'].startswith('multipart/form-data'):\n parser = multipart.create_form_parser(\n headers=request.headers,\n on_field=on_field,\n on_file=on_file,\n config={\n 'UPLOAD_DIR': temp_dir_path.encode(), # bytes required\n 'UPLOAD_KEEP_FILENAME': True,\n 'UPLOAD_KEEP_EXTENSIONS': True,\n 'MAX_MEMORY_FILE_SIZE': 0\n }\n )\n\n while True:\n chunk = request.stream.read(8192)\n if not chunk:\n break\n parser.write(chunk)\n parser.finalize()\n parser.close()\n\n if file_object is not None and not file_object.closed:\n file_object.close()\n else:\n data = request.json\n\n if 'query' in data:\n integration_id = request.json['integration_id']\n integration = 
get_db_integration(integration_id, request.company_id)\n if integration is None:\n abort(400, f\"{integration_id} integration doesn't exist\")\n\n if integration['type'] == 'mongodb':\n data['find'] = data['query']\n\n request.default_store.save_datasource(name, integration_id, data)\n os.rmdir(temp_dir_path)\n return request.default_store.get_datasource(name)\n\n ds_name = data['name'] if 'name' in data else name\n source = data['source'] if 'source' in data else name\n source_type = data['source_type']\n\n if source_type == 'file':\n file_path = os.path.join(temp_dir_path, data['file'])\n else:\n file_path = None\n\n request.default_store.save_datasource(ds_name, source_type, source, file_path)\n os.rmdir(temp_dir_path)\n\n return request.default_store.get_datasource(ds_name)\n\n\ndef analyzing_thread(name, default_store):\n try:\n from mindsdb.interfaces.storage.db import session\n analysis = default_store.start_analysis(name)\n session.close()\n except Exception as e:\n log.error(e)\n\n\n@ns_conf.route('/<name>/analyze')\n@ns_conf.param('name', 'Datasource name')\nclass Analyze(Resource):\n @ns_conf.doc('analyse_dataset')\n def get(self, name):\n analysis = request.default_store.get_analysis(name)\n if analysis is not None:\n return analysis, 200\n\n\n ds = request.default_store.get_datasource(name)\n if ds is None:\n log.error('No valid datasource given')\n abort(400, 'No valid datasource given')\n\n x = threading.Thread(target=analyzing_thread, args=(name, request.default_store))\n x.start()\n return {'status': 'analyzing'}, 200\n\n\n@ns_conf.route('/<name>/analyze_refresh')\n@ns_conf.param('name', 'Datasource name')\nclass Analyze2(Resource):\n @ns_conf.doc('analyze_refresh_dataset')\n def get(self, name):\n analysis = request.default_store.get_analysis(name)\n if analysis is not None:\n return analysis, 200\n\n ds = request.default_store.get_datasource(name)\n if ds is None:\n log.error('No valid datasource given')\n abort(400, 'No valid datasource given')\n\n x = threading.Thread(target=analyzing_thread, args=(name, request.default_store))\n x.start()\n return {'status': 'analyzing'}, 200\n\n\n@ns_conf.route('/<name>/data/')\n@ns_conf.param('name', 'Datasource name')\nclass DatasourceData(Resource):\n @ns_conf.doc('get_datasource_data', params=get_datasource_rows_params)\n @ns_conf.marshal_with(datasource_rows_metadata)\n def get(self, name):\n '''return data rows'''\n ds = request.default_store.get_datasource(name)\n if ds is None:\n abort(400, 'No valid datasource given')\n\n params = {\n 'page[size]': None,\n 'page[offset]': None\n }\n where = []\n for key, value in request.args.items():\n if key == 'page[size]':\n params['page[size]'] = int(value)\n if key == 'page[offset]':\n params['page[offset]'] = int(value)\n elif key.startswith('filter'):\n param = parse_filter(key, value)\n if param is None:\n abort(400, f'Not valid filter \"{key}\"')\n where.append(param)\n\n data_dict = request.default_store.get_data(name, where, params['page[size]'], params['page[offset]'])\n return data_dict, 200\n\n\n@ns_conf.route('/<name>/download')\n@ns_conf.param('name', 'Datasource name')\nclass DatasourceMissedFilesDownload(Resource):\n @ns_conf.doc('get_datasource_download')\n def get(self, name):\n '''download uploaded file'''\n ds = request.default_store.get_datasource(name)\n if not ds:\n abort(404, \"{} not found\".format(name))\n # force download from s3\n request.default_store.get_datasource_obj(name)\n if not os.path.exists(ds['source']):\n abort(404, \"{} not 
found\".format(name))\n\n return send_file(os.path.abspath(ds['source']), as_attachment=True)\n", "path": "mindsdb/api/http/namespaces/datasource.py"}]}
| 3,109 | 486 |
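The merged patch above also sidesteps part of the large-upload problem by accepting compressed uploads. A distilled version of that extraction logic, pulled out of the Flask handler into a plain helper (the function name is made up), looks roughly like this:

```python
import os
import tarfile
import zipfile


def extract_single_data_file(archive_path, dest_dir):
    """Unpack a .zip or .tar.gz upload and return the one data file it must contain."""
    lower = archive_path.lower()
    if lower.endswith(".zip"):
        with zipfile.ZipFile(archive_path) as f:
            f.extractall(dest_dir)
    elif lower.endswith(".tar.gz"):
        with tarfile.open(archive_path) as f:
            # Note: extractall() trusts member paths; production code may want to validate them.
            f.extractall(dest_dir)
    else:
        raise ValueError("unsupported archive type")
    os.remove(archive_path)
    files = os.listdir(dest_dir)
    if len(files) != 1 or not os.path.isfile(os.path.join(dest_dir, files[0])):
        raise ValueError("archive must contain exactly one data file at its root")
    return os.path.join(dest_dir, files[0])
```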
gh_patches_debug_38557
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-196
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fully connected layer before mean and covariance modules
I am trying to apply a non-linear transformation to the input before forwarding it through the mean and covariance modules. Before the master branch was merged into priors (commit 1f5491e3edcac6497d3370c8aaef9a9362048a3e), I could add a fully connected layer before passing the input through the mean and covariance modules to learn a non-linear representation. For example, the following script worked fine.
```
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
from torch import optim
from gpytorch.kernels import RBFKernel
from gpytorch.means import ConstantMean
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.random_variables import GaussianRandomVariable
train_x = torch.linspace(0, 1, 11)
train_y = torch.sin(train_x.data * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = ConstantMean(constant_bounds=(-10, 10))
self.covar_module = RBFKernel(log_lengthscale_bounds=(-5, 5))
self.fc = torch.nn.Linear(1, 2)
def forward(self, x):
x_ = self.fc(x)
mean_x = self.mean_module(x_)
covar_x = self.covar_module(x_)
return GaussianRandomVariable(mean_x, covar_x)
likelihood = GaussianLikelihood(log_noise_bounds=(-5, 5))
model = ExactGPModel(train_x.data, train_y.data, likelihood)
model.train()
likelihood.train()
optimizer = torch.optim.Adam([
{'params': model.parameters()},
], lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
training_iter = 1000
for i in range(training_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f log_lengthscale: %.3f log_noise: %.3f' % (
i + 1, training_iter, loss.data[0],
model.covar_module.log_lengthscale.data[0, 0],
model.likelihood.log_noise.data[0]
))
optimizer.step()
```
However, I now get the following error because of the linear `fc` layer:
`AttributeError: 'Linear' object has no attribute '_get_prior_for'`
How can I get this to work with the latest version of gpytorch? DKL does something similar to what I am trying to do, but it uses AdditiveGridInducingVariationalGP and a softmax likelihood; in my application I'd like to use ExactGP and a Gaussian likelihood. Is it possible to do so?
</issue>
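For reference, the pattern asked about — a small neural feature extractor feeding an ExactGP with a Gaussian likelihood — can be written against later GPyTorch releases roughly as below. This sketch uses the newer API (distributions instead of random variables, no bounds kwargs) purely to illustrate the intended model structure; it is not the fix that was merged for this issue, and the layer sizes are illustrative.

```python
import torch
import gpytorch


class DKLExactGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        # Non-linear transformation applied to the inputs before the GP.
        self.feature_extractor = torch.nn.Sequential(
            torch.nn.Linear(1, 2),
            torch.nn.ReLU(),
        )
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        z = self.feature_extractor(x)
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z)
        )


train_x = torch.linspace(0, 1, 11).unsqueeze(-1)  # trailing feature dim for Linear(1, 2)
train_y = torch.sin(train_x.squeeze(-1) * 6.283) + 0.2 * torch.randn(11)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DKLExactGP(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
```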
<code>
[start of gpytorch/module.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4 from __future__ import unicode_literals
5
6 import torch
7 from collections import OrderedDict
8 from torch import nn
9 from .random_variables import RandomVariable
10 from .lazy import LazyVariable
11 from .variational import VariationalStrategy
12
13
14 class Module(nn.Module):
15 def __init__(self):
16 super(Module, self).__init__()
17 self._priors = OrderedDict()
18 self._derived_priors = OrderedDict()
19 self._variational_strategies = OrderedDict()
20
21 def _get_module_and_name(self, parameter_name):
22 """Get module and name from full parameter name."""
23 module, name = parameter_name.split(".", 1)
24 if module in self._modules:
25 return self.__getattr__(module), name
26 else:
27 raise AttributeError(
28 "Invalid parameter name {}. {} has no module {}".format(parameter_name, type(self).__name__, module)
29 )
30
31 def _get_prior_for(self, parameter_name):
32 """
33 Get prior for parameter
34
35 parameter_name (str): parameter name
36 """
37 if "." in parameter_name:
38 module, parameter_name = self._get_module_and_name(parameter_name)
39 return module._get_prior_for(parameter_name)
40 else:
41 if parameter_name in self._parameters:
42 return self._priors.get(parameter_name)
43 else:
44 raise AttributeError(
45 "Module {module} has no parameter {name}".format(module=type(self).__name__, name=parameter_name)
46 )
47
48 def _get_derived_prior(self, prior_name):
49 """
50 Get derived prior from prior name
51
52 prior_name (str): the name of the derived prior
53 """
54 if "." in prior_name:
55 module, prior_name = self._get_module_and_name(prior_name)
56 return module._get_derived_prior(prior_name)
57 else:
58 if prior_name in self._parameters:
59 return self._derived_priors.get(prior_name)
60 else:
61 raise AttributeError(
62 "Module {module} has no derived prior {name}".format(module=type(self).__name__, name=prior_name)
63 )
64
65 def forward(self, *inputs, **kwargs):
66 raise NotImplementedError
67
68 def initialize(self, **kwargs):
69 """
70 Set a value for a parameter
71
72 kwargs: (param_name, value) - parameter to initialize
73 Value can take the form of a tensor, a float, or an int
74 """
75 for name, val in kwargs.items():
76 if name not in self._parameters:
77 raise AttributeError("Unknown parameter {p} for {c}".format(p=name, c=self.__class__.__name__))
78 if torch.is_tensor(val):
79 self.__getattr__(name).data.copy_(val)
80 elif isinstance(val, float) or isinstance(val, int):
81 self.__getattr__(name).data.fill_(val)
82 else:
83 raise AttributeError("Type {t} not valid to initialize parameter {p}".format(t=type(val), p=name))
84
85 # Ensure value is contained in support of prior (if present)
86 prior = self._priors.get(name)
87 if prior is not None:
88 param = self._parameters[name]
89 if not prior.is_in_support(param):
90 raise ValueError(
91 "Value of parameter {param} not contained in support of specified prior".format(param=param)
92 )
93 return self
94
95 def named_parameter_priors(self):
96 """
97 Returns an iterator over module parameter priors, yielding the name of
98 the parameter, the parameter itself, as well as the associated prior
99 (excludes parameters for which no prior has been registered)
100 """
101 for name, param in self.named_parameters():
102 prior = self._get_prior_for(name)
103 if prior is not None:
104 yield name, param, prior
105
106 def named_derived_priors(self, memo=None, prefix=""):
107 """Returns an iterator over module derived priors, yielding both the
108 name of the prior as well as the prior, the associated parameters, and
109 the transformation callable.
110
111 Yields:
112 (string, Prior, tuple(string), callable): Tuple containing the name
113 of the prior, the prior itself, its parameters, and the transform
114 to be called on the parameters.
115
116 """
117 if memo is None:
118 memo = set()
119 for name, (prior, pnames, tf) in self._derived_priors.items():
120 if prior is not None and prior not in memo:
121 memo.add(prior)
122 parameters = tuple(getattr(self, pname) for pname in pnames)
123 yield prefix + ("." if prefix else "") + name, prior, parameters, tf
124 for mname, module in self.named_children():
125 submodule_prefix = prefix + ("." if prefix else "") + mname
126 if hasattr(module, "_derived_priors"):
127 for name, prior, parameters, tf in module.named_derived_priors(memo, submodule_prefix):
128 yield name, prior, parameters, tf
129
130 def named_variational_strategies(self, memo=None, prefix=""):
131 """Returns an iterator over module variational strategies, yielding both
132 the name of the variational strategy as well as the strategy itself.
133
134 Yields:
135 (string, VariationalStrategy): Tuple containing the name of the
136 strategy and the strategy
137
138 """
139 if memo is None:
140 memo = set()
141 for name, strategy in self._variational_strategies.items():
142 if strategy is not None and strategy not in memo:
143 memo.add(strategy)
144 yield prefix + ("." if prefix else "") + name, strategy
145 for mname, module in self.named_children():
146 submodule_prefix = prefix + ("." if prefix else "") + mname
147 if hasattr(module, "named_variational_strategies"):
148 for name, strategy in module.named_variational_strategies(memo, submodule_prefix):
149 yield name, strategy
150
151 def register_parameter(self, name, parameter, prior=None):
152 """
153 Adds a parameter to the module.
154 The parameter can be accessed as an attribute using given name.
155
156 name (str): name of parameter
157 param (torch.nn.Parameter): parameter
158 prior (Prior): prior for parameter (default: None)
159 """
160 if "_parameters" not in self.__dict__:
161 raise AttributeError("Cannot assign parameter before Module.__init__() call")
162 super(Module, self).register_parameter(name, parameter)
163 if prior is not None:
164 self.set_parameter_priors(**{name: prior})
165
166 def register_derived_prior(self, name, prior, parameter_names, transform):
167 """
168 Adds a derived prior to the module.
169 The prior can be accessed as an attribute using the given name.
170
171 name (str): name of the derived prior
172 prior (Prior): the prior object
173 parameter_names (tuple(str)): The parameters the transform operaters on,
174 in the same order as expected by the transform callable.
175 transform (Callable): The function called on the specified parameters. The
176 log-pdf of the prior will be evaluating on the output of this transform.
177
178 A derived prior operates on a transform of one or multiple parameters.
179 This can be used, for instance, to put a prior over the ICM Kernel
180 covariance matrix generated from covar_factor and log_var parameters.
181
182 """
183 self.add_module(name, prior)
184 self._derived_priors[name] = (prior, tuple(parameter_names), transform)
185
186 def register_variational_strategy(self, name):
187 self._variational_strategies[name] = None
188
189 def set_parameter_priors(self, **kwargs):
190 """
191 Set prior for a parameter.
192 The prior can be accessed as an attribute using <PARAMETER_NAME>_prior.
193
194 kwargs: (param_name, prior) - parameter to initialize
195 prior must be a gpytorch Prior
196 """
197 for name, prior in kwargs.items():
198 if name not in self._parameters:
199 raise AttributeError(
200 "Unknown parameter {name} for {module}".format(name=name, module=self.__class__.__name__)
201 )
202 self.add_module("_".join([name, "prior"]), prior)
203 self._priors[name] = prior
204 return self
205
206 def variational_strategies(self):
207 for _, strategy in self.named_variational_strategies():
208 yield strategy
209
210 def update_variational_strategy(self, name, variational_strategy):
211 if not isinstance(variational_strategy, VariationalStrategy):
212 raise RuntimeError("variational_strategy must be a VariationalStrategy")
213 if name not in self._variational_strategies.keys():
214 raise RuntimeError("variational strategy {} not registered".format(name))
215 self._variational_strategies[name] = variational_strategy
216
217 def __call__(self, *inputs, **kwargs):
218 outputs = self.forward(*inputs, **kwargs)
219 if torch.is_tensor(outputs) or isinstance(outputs, RandomVariable) or isinstance(outputs, LazyVariable):
220 return outputs
221 for output in outputs:
222 if not (isinstance(output, RandomVariable) or torch.is_tensor(output) or isinstance(output, LazyVariable)):
223 raise RuntimeError(
224 "Output must be a RandomVariable, torch.Tensor, or LazyVariable. "
225 "Was a {}".format(input.__class__.__name__)
226 )
227 if len(outputs) == 1:
228 outputs = outputs[0]
229 return outputs
230
[end of gpytorch/module.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gpytorch/module.py b/gpytorch/module.py
--- a/gpytorch/module.py
+++ b/gpytorch/module.py
@@ -28,40 +28,6 @@
"Invalid parameter name {}. {} has no module {}".format(parameter_name, type(self).__name__, module)
)
- def _get_prior_for(self, parameter_name):
- """
- Get prior for parameter
-
- parameter_name (str): parameter name
- """
- if "." in parameter_name:
- module, parameter_name = self._get_module_and_name(parameter_name)
- return module._get_prior_for(parameter_name)
- else:
- if parameter_name in self._parameters:
- return self._priors.get(parameter_name)
- else:
- raise AttributeError(
- "Module {module} has no parameter {name}".format(module=type(self).__name__, name=parameter_name)
- )
-
- def _get_derived_prior(self, prior_name):
- """
- Get derived prior from prior name
-
- prior_name (str): the name of the derived prior
- """
- if "." in prior_name:
- module, prior_name = self._get_module_and_name(prior_name)
- return module._get_derived_prior(prior_name)
- else:
- if prior_name in self._parameters:
- return self._derived_priors.get(prior_name)
- else:
- raise AttributeError(
- "Module {module} has no derived prior {name}".format(module=type(self).__name__, name=prior_name)
- )
-
def forward(self, *inputs, **kwargs):
raise NotImplementedError
@@ -92,16 +58,24 @@
)
return self
- def named_parameter_priors(self):
+ def named_parameter_priors(self, memo=None, prefix=""):
"""
Returns an iterator over module parameter priors, yielding the name of
the parameter, the parameter itself, as well as the associated prior
(excludes parameters for which no prior has been registered)
"""
- for name, param in self.named_parameters():
- prior = self._get_prior_for(name)
- if prior is not None:
- yield name, param, prior
+ if memo is None:
+ memo = set()
+ for name, parameter in self._parameters.items():
+ if name in self._priors and self._priors[name] not in memo:
+ prior = self._priors[name]
+ memo.add(prior)
+ yield prefix + ("." if prefix else "") + name, parameter, prior
+ for mname, module in self.named_children():
+ submodule_prefix = prefix + ("." if prefix else "") + mname
+ if hasattr(module, "named_parameter_priors"):
+ for name, parameter, prior in module.named_parameter_priors(memo, submodule_prefix):
+ yield name, parameter, prior
def named_derived_priors(self, memo=None, prefix=""):
"""Returns an iterator over module derived priors, yielding both the
|
{"golden_diff": "diff --git a/gpytorch/module.py b/gpytorch/module.py\n--- a/gpytorch/module.py\n+++ b/gpytorch/module.py\n@@ -28,40 +28,6 @@\n \"Invalid parameter name {}. {} has no module {}\".format(parameter_name, type(self).__name__, module)\n )\n \n- def _get_prior_for(self, parameter_name):\n- \"\"\"\n- Get prior for parameter\n-\n- parameter_name (str): parameter name\n- \"\"\"\n- if \".\" in parameter_name:\n- module, parameter_name = self._get_module_and_name(parameter_name)\n- return module._get_prior_for(parameter_name)\n- else:\n- if parameter_name in self._parameters:\n- return self._priors.get(parameter_name)\n- else:\n- raise AttributeError(\n- \"Module {module} has no parameter {name}\".format(module=type(self).__name__, name=parameter_name)\n- )\n-\n- def _get_derived_prior(self, prior_name):\n- \"\"\"\n- Get derived prior from prior name\n-\n- prior_name (str): the name of the derived prior\n- \"\"\"\n- if \".\" in prior_name:\n- module, prior_name = self._get_module_and_name(prior_name)\n- return module._get_derived_prior(prior_name)\n- else:\n- if prior_name in self._parameters:\n- return self._derived_priors.get(prior_name)\n- else:\n- raise AttributeError(\n- \"Module {module} has no derived prior {name}\".format(module=type(self).__name__, name=prior_name)\n- )\n-\n def forward(self, *inputs, **kwargs):\n raise NotImplementedError\n \n@@ -92,16 +58,24 @@\n )\n return self\n \n- def named_parameter_priors(self):\n+ def named_parameter_priors(self, memo=None, prefix=\"\"):\n \"\"\"\n Returns an iterator over module parameter priors, yielding the name of\n the parameter, the parameter itself, as well as the associated prior\n (excludes parameters for which no prior has been registered)\n \"\"\"\n- for name, param in self.named_parameters():\n- prior = self._get_prior_for(name)\n- if prior is not None:\n- yield name, param, prior\n+ if memo is None:\n+ memo = set()\n+ for name, parameter in self._parameters.items():\n+ if name in self._priors and self._priors[name] not in memo:\n+ prior = self._priors[name]\n+ memo.add(prior)\n+ yield prefix + (\".\" if prefix else \"\") + name, parameter, prior\n+ for mname, module in self.named_children():\n+ submodule_prefix = prefix + (\".\" if prefix else \"\") + mname\n+ if hasattr(module, \"named_parameter_priors\"):\n+ for name, parameter, prior in module.named_parameter_priors(memo, submodule_prefix):\n+ yield name, parameter, prior\n \n def named_derived_priors(self, memo=None, prefix=\"\"):\n \"\"\"Returns an iterator over module derived priors, yielding both the\n", "issue": "Fully connected layer before mean and covariance modules\nI am trying to do a non-linear transformation of input before forwarding it through mean and covariance modules. Before master branch was merged to priors (commit 1f5491e3edcac6497d3370c8aaef9a9362048a3e), I can add a fully connected layer before passing the input through mean and covariance modules to learn a non-linear representation. For example, the following script worked fine. 
\r\n\r\n```\r\nimport math\r\nimport torch\r\nimport gpytorch\r\nfrom matplotlib import pyplot as plt\r\n\r\nfrom torch import optim\r\nfrom gpytorch.kernels import RBFKernel\r\nfrom gpytorch.means import ConstantMean\r\nfrom gpytorch.likelihoods import GaussianLikelihood\r\nfrom gpytorch.random_variables import GaussianRandomVariable\r\n\r\ntrain_x = torch.linspace(0, 1, 11)\r\ntrain_y = torch.sin(train_x.data * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2\r\n\r\nclass ExactGPModel(gpytorch.models.ExactGP):\r\n def __init__(self, train_x, train_y, likelihood):\r\n super(ExactGPModel, self).__init__(train_x, train_y, likelihood)\r\n self.mean_module = ConstantMean(constant_bounds=(-10, 10))\r\n self.covar_module = RBFKernel(log_lengthscale_bounds=(-5, 5))\r\n self.fc = torch.nn.Linear(1, 2)\r\n\r\n def forward(self, x):\r\n x_ = self.fc(x)\r\n mean_x = self.mean_module(x_)\r\n covar_x = self.covar_module(x_)\r\n return GaussianRandomVariable(mean_x, covar_x)\r\n\r\nlikelihood = GaussianLikelihood(log_noise_bounds=(-5, 5))\r\nmodel = ExactGPModel(train_x.data, train_y.data, likelihood)\r\n\r\nmodel.train()\r\nlikelihood.train()\r\n\r\noptimizer = torch.optim.Adam([\r\n {'params': model.parameters()}, \r\n], lr=0.1)\r\n\r\nmll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)\r\n\r\ntraining_iter = 1000\r\nfor i in range(training_iter):\r\n optimizer.zero_grad()\r\n output = model(train_x)\r\n loss = -mll(output, train_y)\r\n loss.backward()\r\n print('Iter %d/%d - Loss: %.3f log_lengthscale: %.3f log_noise: %.3f' % (\r\n i + 1, training_iter, loss.data[0],\r\n model.covar_module.log_lengthscale.data[0, 0],\r\n model.likelihood.log_noise.data[0]\r\n ))\r\n optimizer.step()\r\n```\r\n\r\nHowever, now I get this runtime error because of the linear fc layer:\r\n`AttributeError: 'Linear' object has no attribute '_get_prior_for'`\r\n\r\nHow can I get this to work with the latest version of gpytorch? DKL does something similar to what I am trying to do but implements AdditiveGridInducingVariationalGP and softmax likelihood, however, in my application I'd like to use ExactGP and gaussian likelihood. Is it possible to do so? \r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport torch\nfrom collections import OrderedDict\nfrom torch import nn\nfrom .random_variables import RandomVariable\nfrom .lazy import LazyVariable\nfrom .variational import VariationalStrategy\n\n\nclass Module(nn.Module):\n def __init__(self):\n super(Module, self).__init__()\n self._priors = OrderedDict()\n self._derived_priors = OrderedDict()\n self._variational_strategies = OrderedDict()\n\n def _get_module_and_name(self, parameter_name):\n \"\"\"Get module and name from full parameter name.\"\"\"\n module, name = parameter_name.split(\".\", 1)\n if module in self._modules:\n return self.__getattr__(module), name\n else:\n raise AttributeError(\n \"Invalid parameter name {}. 
{} has no module {}\".format(parameter_name, type(self).__name__, module)\n )\n\n def _get_prior_for(self, parameter_name):\n \"\"\"\n Get prior for parameter\n\n parameter_name (str): parameter name\n \"\"\"\n if \".\" in parameter_name:\n module, parameter_name = self._get_module_and_name(parameter_name)\n return module._get_prior_for(parameter_name)\n else:\n if parameter_name in self._parameters:\n return self._priors.get(parameter_name)\n else:\n raise AttributeError(\n \"Module {module} has no parameter {name}\".format(module=type(self).__name__, name=parameter_name)\n )\n\n def _get_derived_prior(self, prior_name):\n \"\"\"\n Get derived prior from prior name\n\n prior_name (str): the name of the derived prior\n \"\"\"\n if \".\" in prior_name:\n module, prior_name = self._get_module_and_name(prior_name)\n return module._get_derived_prior(prior_name)\n else:\n if prior_name in self._parameters:\n return self._derived_priors.get(prior_name)\n else:\n raise AttributeError(\n \"Module {module} has no derived prior {name}\".format(module=type(self).__name__, name=prior_name)\n )\n\n def forward(self, *inputs, **kwargs):\n raise NotImplementedError\n\n def initialize(self, **kwargs):\n \"\"\"\n Set a value for a parameter\n\n kwargs: (param_name, value) - parameter to initialize\n Value can take the form of a tensor, a float, or an int\n \"\"\"\n for name, val in kwargs.items():\n if name not in self._parameters:\n raise AttributeError(\"Unknown parameter {p} for {c}\".format(p=name, c=self.__class__.__name__))\n if torch.is_tensor(val):\n self.__getattr__(name).data.copy_(val)\n elif isinstance(val, float) or isinstance(val, int):\n self.__getattr__(name).data.fill_(val)\n else:\n raise AttributeError(\"Type {t} not valid to initialize parameter {p}\".format(t=type(val), p=name))\n\n # Ensure value is contained in support of prior (if present)\n prior = self._priors.get(name)\n if prior is not None:\n param = self._parameters[name]\n if not prior.is_in_support(param):\n raise ValueError(\n \"Value of parameter {param} not contained in support of specified prior\".format(param=param)\n )\n return self\n\n def named_parameter_priors(self):\n \"\"\"\n Returns an iterator over module parameter priors, yielding the name of\n the parameter, the parameter itself, as well as the associated prior\n (excludes parameters for which no prior has been registered)\n \"\"\"\n for name, param in self.named_parameters():\n prior = self._get_prior_for(name)\n if prior is not None:\n yield name, param, prior\n\n def named_derived_priors(self, memo=None, prefix=\"\"):\n \"\"\"Returns an iterator over module derived priors, yielding both the\n name of the prior as well as the prior, the associated parameters, and\n the transformation callable.\n\n Yields:\n (string, Prior, tuple(string), callable): Tuple containing the name\n of the prior, the prior itself, its parameters, and the transform\n to be called on the parameters.\n\n \"\"\"\n if memo is None:\n memo = set()\n for name, (prior, pnames, tf) in self._derived_priors.items():\n if prior is not None and prior not in memo:\n memo.add(prior)\n parameters = tuple(getattr(self, pname) for pname in pnames)\n yield prefix + (\".\" if prefix else \"\") + name, prior, parameters, tf\n for mname, module in self.named_children():\n submodule_prefix = prefix + (\".\" if prefix else \"\") + mname\n if hasattr(module, \"_derived_priors\"):\n for name, prior, parameters, tf in module.named_derived_priors(memo, submodule_prefix):\n yield name, prior, parameters, 
tf\n\n def named_variational_strategies(self, memo=None, prefix=\"\"):\n \"\"\"Returns an iterator over module variational strategies, yielding both\n the name of the variational strategy as well as the strategy itself.\n\n Yields:\n (string, VariationalStrategy): Tuple containing the name of the\n strategy and the strategy\n\n \"\"\"\n if memo is None:\n memo = set()\n for name, strategy in self._variational_strategies.items():\n if strategy is not None and strategy not in memo:\n memo.add(strategy)\n yield prefix + (\".\" if prefix else \"\") + name, strategy\n for mname, module in self.named_children():\n submodule_prefix = prefix + (\".\" if prefix else \"\") + mname\n if hasattr(module, \"named_variational_strategies\"):\n for name, strategy in module.named_variational_strategies(memo, submodule_prefix):\n yield name, strategy\n\n def register_parameter(self, name, parameter, prior=None):\n \"\"\"\n Adds a parameter to the module.\n The parameter can be accessed as an attribute using given name.\n\n name (str): name of parameter\n param (torch.nn.Parameter): parameter\n prior (Prior): prior for parameter (default: None)\n \"\"\"\n if \"_parameters\" not in self.__dict__:\n raise AttributeError(\"Cannot assign parameter before Module.__init__() call\")\n super(Module, self).register_parameter(name, parameter)\n if prior is not None:\n self.set_parameter_priors(**{name: prior})\n\n def register_derived_prior(self, name, prior, parameter_names, transform):\n \"\"\"\n Adds a derived prior to the module.\n The prior can be accessed as an attribute using the given name.\n\n name (str): name of the derived prior\n prior (Prior): the prior object\n parameter_names (tuple(str)): The parameters the transform operaters on,\n in the same order as expected by the transform callable.\n transform (Callable): The function called on the specified parameters. 
The\n log-pdf of the prior will be evaluating on the output of this transform.\n\n A derived prior operates on a transform of one or multiple parameters.\n This can be used, for instance, to put a prior over the ICM Kernel\n covariance matrix generated from covar_factor and log_var parameters.\n\n \"\"\"\n self.add_module(name, prior)\n self._derived_priors[name] = (prior, tuple(parameter_names), transform)\n\n def register_variational_strategy(self, name):\n self._variational_strategies[name] = None\n\n def set_parameter_priors(self, **kwargs):\n \"\"\"\n Set prior for a parameter.\n The prior can be accessed as an attribute using <PARAMETER_NAME>_prior.\n\n kwargs: (param_name, prior) - parameter to initialize\n prior must be a gpytorch Prior\n \"\"\"\n for name, prior in kwargs.items():\n if name not in self._parameters:\n raise AttributeError(\n \"Unknown parameter {name} for {module}\".format(name=name, module=self.__class__.__name__)\n )\n self.add_module(\"_\".join([name, \"prior\"]), prior)\n self._priors[name] = prior\n return self\n\n def variational_strategies(self):\n for _, strategy in self.named_variational_strategies():\n yield strategy\n\n def update_variational_strategy(self, name, variational_strategy):\n if not isinstance(variational_strategy, VariationalStrategy):\n raise RuntimeError(\"variational_strategy must be a VariationalStrategy\")\n if name not in self._variational_strategies.keys():\n raise RuntimeError(\"variational strategy {} not registered\".format(name))\n self._variational_strategies[name] = variational_strategy\n\n def __call__(self, *inputs, **kwargs):\n outputs = self.forward(*inputs, **kwargs)\n if torch.is_tensor(outputs) or isinstance(outputs, RandomVariable) or isinstance(outputs, LazyVariable):\n return outputs\n for output in outputs:\n if not (isinstance(output, RandomVariable) or torch.is_tensor(output) or isinstance(output, LazyVariable)):\n raise RuntimeError(\n \"Output must be a RandomVariable, torch.Tensor, or LazyVariable. \"\n \"Was a {}\".format(input.__class__.__name__)\n )\n if len(outputs) == 1:\n outputs = outputs[0]\n return outputs\n", "path": "gpytorch/module.py"}]}
| 3,781 | 678 |
gh_patches_debug_795
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-140
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Do not import `parsl` before requirements are setup
```
[annawoodard@midway001 parsl]$ python setup.py install
Traceback (most recent call last):
File "setup.py", line 2, in <module>
from parsl.version import VERSION
File "/home/annawoodard/parsl/parsl/__init__.py", line 35, in <module>
from parsl.executors.ipp import IPyParallelExecutor
File "/home/annawoodard/parsl/parsl/executors/ipp.py", line 4, in <module>
from ipyparallel import Client
ModuleNotFoundError: No module named 'ipyparallel'
```
Setuptools is supposed to take care of dependencies for us, but importing parsl in `setup.py` breaks that (because we require the dependencies by importing the parsl version from `version.py` before they can be installed). We should avoid this.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 from parsl.version import VERSION
3
4 with open('requirements.txt') as f:
5 install_requires = f.readlines()
6
7 # tests_require = parse_requirements('test-requirements.txt')
8
9 setup(
10 name='parsl',
11 version=VERSION,
12 description='Simple data dependent workflows in Python',
13 long_description='Simple and easy parallel workflows system for Python',
14 url='https://github.com/Parsl/parsl',
15 author='Yadu Nand Babuji',
16 author_email='[email protected]',
17 license='Apache 2.0',
18 download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),
19 package_data={'': ['LICENSE']},
20 packages=find_packages(),
21 install_requires=install_requires,
22 classifiers=[
23 # Maturity
24 'Development Status :: 3 - Alpha',
25 # Intended audience
26 'Intended Audience :: Developers',
27 # Licence, must match with licence above
28 'License :: OSI Approved :: Apache Software License',
29 # Python versions supported
30 'Programming Language :: Python :: 3.5',
31 'Programming Language :: Python :: 3.6',
32 ],
33 keywords=['Workflows', 'Scientific computing'],
34 )
35
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,5 +1,7 @@
from setuptools import setup, find_packages
-from parsl.version import VERSION
+
+with open('parsl/version.py') as f:
+ exec(f.read())
with open('requirements.txt') as f:
install_requires = f.readlines()
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,5 +1,7 @@\n from setuptools import setup, find_packages\n-from parsl.version import VERSION\n+\n+with open('parsl/version.py') as f:\n+ exec(f.read())\n \n with open('requirements.txt') as f:\n install_requires = f.readlines()\n", "issue": "Do not import `parsl` before requirements are setup\n```\r\n[annawoodard@midway001 parsl]$ python setup.py install\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 2, in <module>\r\n from parsl.version import VERSION\r\n File \"/home/annawoodard/parsl/parsl/__init__.py\", line 35, in <module>\r\n from parsl.executors.ipp import IPyParallelExecutor\r\n File \"/home/annawoodard/parsl/parsl/executors/ipp.py\", line 4, in <module>\r\n from ipyparallel import Client\r\nModuleNotFoundError: No module named 'ipyparallel'\r\n```\r\n\r\nSetuptools is supposed to take care of dependencies for us, but importing parsl in `setup.py` breaks that (because we require the dependencies by importing the parsl version from `version.py` before they can be installed). We should avoid this.\n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom parsl.version import VERSION\n\nwith open('requirements.txt') as f:\n install_requires = f.readlines()\n\n# tests_require = parse_requirements('test-requirements.txt')\n\nsetup(\n name='parsl',\n version=VERSION,\n description='Simple data dependent workflows in Python',\n long_description='Simple and easy parallel workflows system for Python',\n url='https://github.com/Parsl/parsl',\n author='Yadu Nand Babuji',\n author_email='[email protected]',\n license='Apache 2.0',\n download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),\n package_data={'': ['LICENSE']},\n packages=find_packages(),\n install_requires=install_requires,\n classifiers=[\n # Maturity\n 'Development Status :: 3 - Alpha',\n # Intended audience\n 'Intended Audience :: Developers',\n # Licence, must match with licence above\n 'License :: OSI Approved :: Apache Software License',\n # Python versions supported\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n keywords=['Workflows', 'Scientific computing'],\n)\n", "path": "setup.py"}]}
| 1,069 | 81 |
gh_patches_debug_29007
|
rasdani/github-patches
|
git_diff
|
vega__altair-2642
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dots aren't showing up in ranged dot plot

</issue>
<code>
[start of altair/examples/ranged_dot_plot.py]
1 """
2 Ranged Dot Plot
3 -----------------
4 This example shows a ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).
5 """
6 # category: other charts
7 import altair as alt
8 from vega_datasets import data
9
10 source = data.countries.url
11
12 chart = alt.layer(
13 data=source
14 ).transform_filter(
15 filter={"field": 'country',
16 "oneOf": ["China", "India", "United States", "Indonesia", "Brazil"]}
17 ).transform_filter(
18 filter={'field': 'year',
19 "oneOf": [1955, 2000]}
20 )
21
22 chart += alt.Chart().mark_line(color='#db646f').encode(
23 x='life_expect:Q',
24 y='country:N',
25 detail='country:N'
26 )
27 # Add points for life expectancy in 1955 & 2000
28 chart += alt.Chart().mark_point(
29 size=100,
30 opacity=1,
31 filled=True
32 ).encode(
33 x='life_expect:Q',
34 y='country:N',
35 color=alt.Color('year:O',
36 scale=alt.Scale(
37 domain=['1955', '2000'],
38 range=['#e6959c', '#911a24']
39 )
40 )
41 ).interactive()
42
43 chart
44
[end of altair/examples/ranged_dot_plot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/altair/examples/ranged_dot_plot.py b/altair/examples/ranged_dot_plot.py
--- a/altair/examples/ranged_dot_plot.py
+++ b/altair/examples/ranged_dot_plot.py
@@ -1,7 +1,7 @@
"""
Ranged Dot Plot
------------------
-This example shows a ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).
+---------------
+This example shows a ranged dot plot to convey changing life expectancy for the five most populous countries (between 1955 and 2000).
"""
# category: other charts
import altair as alt
@@ -9,7 +9,7 @@
source = data.countries.url
-chart = alt.layer(
+chart = alt.Chart(
data=source
).transform_filter(
filter={"field": 'country',
@@ -19,13 +19,13 @@
"oneOf": [1955, 2000]}
)
-chart += alt.Chart().mark_line(color='#db646f').encode(
+line = chart.mark_line(color='#db646f').encode(
x='life_expect:Q',
y='country:N',
detail='country:N'
)
# Add points for life expectancy in 1955 & 2000
-chart += alt.Chart().mark_point(
+points = chart.mark_point(
size=100,
opacity=1,
filled=True
@@ -34,10 +34,10 @@
y='country:N',
color=alt.Color('year:O',
scale=alt.Scale(
- domain=['1955', '2000'],
+ domain=[1955, 2000],
range=['#e6959c', '#911a24']
)
)
).interactive()
-chart
+(line + points)
|
{"golden_diff": "diff --git a/altair/examples/ranged_dot_plot.py b/altair/examples/ranged_dot_plot.py\n--- a/altair/examples/ranged_dot_plot.py\n+++ b/altair/examples/ranged_dot_plot.py\n@@ -1,7 +1,7 @@\n \"\"\"\n Ranged Dot Plot\n------------------\n-This example shows a ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).\n+---------------\n+This example shows a ranged dot plot to convey changing life expectancy for the five most populous countries (between 1955 and 2000).\n \"\"\"\n # category: other charts\n import altair as alt\n@@ -9,7 +9,7 @@\n \n source = data.countries.url\n \n-chart = alt.layer(\n+chart = alt.Chart(\n data=source\n ).transform_filter(\n filter={\"field\": 'country',\n@@ -19,13 +19,13 @@\n \"oneOf\": [1955, 2000]}\n )\n \n-chart += alt.Chart().mark_line(color='#db646f').encode(\n+line = chart.mark_line(color='#db646f').encode(\n x='life_expect:Q',\n y='country:N',\n detail='country:N'\n )\n # Add points for life expectancy in 1955 & 2000\n-chart += alt.Chart().mark_point(\n+points = chart.mark_point(\n size=100,\n opacity=1,\n filled=True\n@@ -34,10 +34,10 @@\n y='country:N',\n color=alt.Color('year:O',\n scale=alt.Scale(\n- domain=['1955', '2000'],\n+ domain=[1955, 2000],\n range=['#e6959c', '#911a24']\n )\n )\n ).interactive()\n \n-chart\n+(line + points)\n", "issue": "Dots aren't showing up in ranged dot plot\n\r\n\n", "before_files": [{"content": "\"\"\"\nRanged Dot Plot\n-----------------\nThis example shows a ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).\n\"\"\"\n# category: other charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.countries.url\n\nchart = alt.layer(\n data=source\n).transform_filter(\n filter={\"field\": 'country',\n \"oneOf\": [\"China\", \"India\", \"United States\", \"Indonesia\", \"Brazil\"]}\n).transform_filter(\n filter={'field': 'year',\n \"oneOf\": [1955, 2000]}\n)\n\nchart += alt.Chart().mark_line(color='#db646f').encode(\n x='life_expect:Q',\n y='country:N',\n detail='country:N'\n)\n# Add points for life expectancy in 1955 & 2000\nchart += alt.Chart().mark_point(\n size=100,\n opacity=1,\n filled=True\n).encode(\n x='life_expect:Q',\n y='country:N',\n color=alt.Color('year:O',\n scale=alt.Scale(\n domain=['1955', '2000'],\n range=['#e6959c', '#911a24']\n )\n )\n).interactive()\n\nchart\n", "path": "altair/examples/ranged_dot_plot.py"}]}
| 1,019 | 435 |
gh_patches_debug_64587
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-4118
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow output artifact store configuration (vs hard coded)
it seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`).
see: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
it would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.
i suggest making it configurable, i can do such PR if we agree its needed.
flexible pipeline service (host) path in client SDK
when creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
also note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug
if its acceptable i can submit a PR for the line change above
</issue>
<code>
[start of samples/core/iris/iris.py]
1 #!/usr/bin/env python3
2 # Copyright 2020 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Iris flowers example using TFX. Based on https://github.com/tensorflow/tfx/blob/master/tfx/examples/iris/iris_pipeline_native_keras.py"""
16
17 from __future__ import absolute_import
18 from __future__ import division
19 from __future__ import print_function
20
21 import os
22 import kfp
23 from typing import Text
24
25 import absl
26 import tensorflow_model_analysis as tfma
27
28 from tfx.components import CsvExampleGen
29 from tfx.components import Evaluator
30 from tfx.components import ExampleValidator
31 from tfx.components import Pusher
32 from tfx.components import ResolverNode
33 from tfx.components import SchemaGen
34 from tfx.components import StatisticsGen
35 from tfx.components import Trainer
36 from tfx.components import Transform
37 from tfx.components.base import executor_spec
38 from tfx.components.trainer.executor import GenericExecutor
39 from tfx.dsl.experimental import latest_blessed_model_resolver
40 from tfx.orchestration import data_types
41 from tfx.orchestration import pipeline
42 from tfx.orchestration.kubeflow import kubeflow_dag_runner
43 from tfx.proto import trainer_pb2
44 from tfx.proto import pusher_pb2
45 from tfx.types import Channel
46 from tfx.types.standard_artifacts import Model
47 from tfx.types.standard_artifacts import ModelBlessing
48 from tfx.utils.dsl_utils import external_input
49
50 _pipeline_name = 'iris_native_keras'
51
52 # This example assumes that Iris flowers data is stored in GCS and the
53 # utility function is in iris_utils.py. Feel free to customize as needed.
54 _data_root_param = data_types.RuntimeParameter(
55 name='data-root',
56 default='gs://ml-pipeline/sample-data/iris/data',
57 ptype=Text,
58 )
59
60 # Python module file to inject customized logic into the TFX components. The
61 # Transform and Trainer both require user-defined functions to run successfully.
62 # This file is fork from https://github.com/tensorflow/tfx/blob/master/tfx/examples/iris/iris_utils_native_keras.py
63 # and baked into the TFX image used in the pipeline.
64 _module_file_param = data_types.RuntimeParameter(
65 name='module-file',
66 default=
67 '/tfx-src/tfx/examples/iris/iris_utils_native_keras.py',
68 ptype=Text,
69 )
70
71 # Directory and data locations. This example assumes all of the flowers
72 # example code and metadata library is relative to a GCS path.
73 # Note: if one deployed KFP from GKE marketplace, it's possible to leverage
74 # the following magic placeholder to auto-populate the default GCS bucket
75 # associated with KFP deployment. Otherwise you'll need to replace it with your
76 # actual bucket name here or when creating a run.
77 _pipeline_root = os.path.join(
78 'gs://{{kfp-default-bucket}}', 'tfx_iris', kfp.dsl.RUN_ID_PLACEHOLDER
79 )
80
81
82 def _create_pipeline(
83 pipeline_name: Text, pipeline_root: Text
84 ) -> pipeline.Pipeline:
85 """Implements the Iris flowers pipeline with TFX."""
86 examples = external_input(_data_root_param)
87
88 # Brings data into the pipeline or otherwise joins/converts training data.
89 example_gen = CsvExampleGen(input=examples)
90
91 # Computes statistics over data for visualization and example validation.
92 statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
93
94 # Generates schema based on statistics files.
95 infer_schema = SchemaGen(
96 statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True
97 )
98
99 # Performs anomaly detection based on statistics and data schema.
100 validate_stats = ExampleValidator(
101 statistics=statistics_gen.outputs['statistics'],
102 schema=infer_schema.outputs['schema']
103 )
104
105 # Performs transformations and feature engineering in training and serving.
106 transform = Transform(
107 examples=example_gen.outputs['examples'],
108 schema=infer_schema.outputs['schema'],
109 module_file=_module_file_param
110 )
111
112 # Uses user-provided Python function that implements a model using Keras.
113 trainer = Trainer(
114 module_file=_module_file_param,
115 custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
116 examples=transform.outputs['transformed_examples'],
117 transform_graph=transform.outputs['transform_graph'],
118 schema=infer_schema.outputs['schema'],
119 train_args=trainer_pb2.TrainArgs(num_steps=100),
120 eval_args=trainer_pb2.EvalArgs(num_steps=50)
121 )
122
123 # Get the latest blessed model for model validation.
124 model_resolver = ResolverNode(
125 instance_name='latest_blessed_model_resolver',
126 resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
127 model=Channel(type=Model),
128 model_blessing=Channel(type=ModelBlessing)
129 )
130
131 # Uses TFMA to compute an evaluation statistics over features of a model and
132 # perform quality validation of a candidate model (compared to a baseline).
133 # Note: to compile this successfully you'll need TFMA at >= 0.21.5
134 eval_config = tfma.EvalConfig(
135 model_specs=[
136 tfma.ModelSpec(name='candidate', label_key='variety'),
137 tfma.ModelSpec(
138 name='baseline', label_key='variety', is_baseline=True
139 )
140 ],
141 slicing_specs=[
142 tfma.SlicingSpec(),
143 # Data can be sliced along a feature column. Required by TFMA visualization.
144 tfma.SlicingSpec(feature_keys=['sepal_length'])],
145 metrics_specs=[
146 tfma.MetricsSpec(
147 metrics=[
148 tfma.MetricConfig(
149 class_name='SparseCategoricalAccuracy',
150 threshold=tfma.config.MetricThreshold(
151 value_threshold=tfma.GenericValueThreshold(
152 lower_bound={'value': 0.9}
153 ),
154 change_threshold=tfma.GenericChangeThreshold(
155 direction=tfma.MetricDirection.HIGHER_IS_BETTER,
156 absolute={'value': -1e-10}
157 )
158 )
159 )
160 ]
161 )
162 ]
163 )
164
165 # Uses TFMA to compute a evaluation statistics over features of a model.
166 model_analyzer = Evaluator(
167 examples=example_gen.outputs['examples'],
168 model=trainer.outputs['model'],
169 baseline_model=model_resolver.outputs['model'],
170 # Change threshold will be ignored if there is no baseline (first run).
171 eval_config=eval_config
172 )
173
174 # Checks whether the model passed the validation steps and pushes the model
175 # to a file destination if check passed.
176 pusher = Pusher(
177 model=trainer.outputs['model'],
178 model_blessing=model_analyzer.outputs['blessing'],
179 push_destination=pusher_pb2.PushDestination(
180 filesystem=pusher_pb2.PushDestination.Filesystem(
181 base_directory=os.path.
182 join(str(pipeline.ROOT_PARAMETER), 'model_serving')
183 )
184 )
185 )
186
187 return pipeline.Pipeline(
188 pipeline_name=pipeline_name,
189 pipeline_root=pipeline_root,
190 components=[
191 example_gen, statistics_gen, infer_schema, validate_stats, transform,
192 trainer, model_resolver, model_analyzer, pusher
193 ],
194 enable_cache=True,
195 )
196
197
198 if __name__ == '__main__':
199 absl.logging.set_verbosity(absl.logging.INFO)
200 # Make sure the version of TFX image used is consistent with the version of
201 # TFX SDK. Here we use tfx:0.22.0 image.
202 config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
203 kubeflow_metadata_config=kubeflow_dag_runner.
204 get_default_kubeflow_metadata_config(),
205 tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',
206 )
207 kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(
208 output_filename=__file__ + '.yaml', config=config
209 )
210 kfp_runner.run(
211 _create_pipeline(
212 pipeline_name=_pipeline_name, pipeline_root=_pipeline_root
213 )
214 )
215
[end of samples/core/iris/iris.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/samples/core/iris/iris.py b/samples/core/iris/iris.py
--- a/samples/core/iris/iris.py
+++ b/samples/core/iris/iris.py
@@ -14,10 +14,6 @@
# limitations under the License.
"""Iris flowers example using TFX. Based on https://github.com/tensorflow/tfx/blob/master/tfx/examples/iris/iris_pipeline_native_keras.py"""
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
import os
import kfp
from typing import Text
|
{"golden_diff": "diff --git a/samples/core/iris/iris.py b/samples/core/iris/iris.py\n--- a/samples/core/iris/iris.py\n+++ b/samples/core/iris/iris.py\n@@ -14,10 +14,6 @@\n # limitations under the License.\n \"\"\"Iris flowers example using TFX. Based on https://github.com/tensorflow/tfx/blob/master/tfx/examples/iris/iris_pipeline_native_keras.py\"\"\"\n \n-from __future__ import absolute_import\n-from __future__ import division\n-from __future__ import print_function\n-\n import os\n import kfp\n from typing import Text\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Iris flowers example using TFX. 
Based on https://github.com/tensorflow/tfx/blob/master/tfx/examples/iris/iris_pipeline_native_keras.py\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport kfp\nfrom typing import Text\n\nimport absl\nimport tensorflow_model_analysis as tfma\n\nfrom tfx.components import CsvExampleGen\nfrom tfx.components import Evaluator\nfrom tfx.components import ExampleValidator\nfrom tfx.components import Pusher\nfrom tfx.components import ResolverNode\nfrom tfx.components import SchemaGen\nfrom tfx.components import StatisticsGen\nfrom tfx.components import Trainer\nfrom tfx.components import Transform\nfrom tfx.components.base import executor_spec\nfrom tfx.components.trainer.executor import GenericExecutor\nfrom tfx.dsl.experimental import latest_blessed_model_resolver\nfrom tfx.orchestration import data_types\nfrom tfx.orchestration import pipeline\nfrom tfx.orchestration.kubeflow import kubeflow_dag_runner\nfrom tfx.proto import trainer_pb2\nfrom tfx.proto import pusher_pb2\nfrom tfx.types import Channel\nfrom tfx.types.standard_artifacts import Model\nfrom tfx.types.standard_artifacts import ModelBlessing\nfrom tfx.utils.dsl_utils import external_input\n\n_pipeline_name = 'iris_native_keras'\n\n# This example assumes that Iris flowers data is stored in GCS and the\n# utility function is in iris_utils.py. Feel free to customize as needed.\n_data_root_param = data_types.RuntimeParameter(\n name='data-root',\n default='gs://ml-pipeline/sample-data/iris/data',\n ptype=Text,\n)\n\n# Python module file to inject customized logic into the TFX components. The\n# Transform and Trainer both require user-defined functions to run successfully.\n# This file is fork from https://github.com/tensorflow/tfx/blob/master/tfx/examples/iris/iris_utils_native_keras.py\n# and baked into the TFX image used in the pipeline.\n_module_file_param = data_types.RuntimeParameter(\n name='module-file',\n default=\n '/tfx-src/tfx/examples/iris/iris_utils_native_keras.py',\n ptype=Text,\n)\n\n# Directory and data locations. This example assumes all of the flowers\n# example code and metadata library is relative to a GCS path.\n# Note: if one deployed KFP from GKE marketplace, it's possible to leverage\n# the following magic placeholder to auto-populate the default GCS bucket\n# associated with KFP deployment. 
Otherwise you'll need to replace it with your\n# actual bucket name here or when creating a run.\n_pipeline_root = os.path.join(\n 'gs://{{kfp-default-bucket}}', 'tfx_iris', kfp.dsl.RUN_ID_PLACEHOLDER\n)\n\n\ndef _create_pipeline(\n pipeline_name: Text, pipeline_root: Text\n) -> pipeline.Pipeline:\n \"\"\"Implements the Iris flowers pipeline with TFX.\"\"\"\n examples = external_input(_data_root_param)\n\n # Brings data into the pipeline or otherwise joins/converts training data.\n example_gen = CsvExampleGen(input=examples)\n\n # Computes statistics over data for visualization and example validation.\n statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])\n\n # Generates schema based on statistics files.\n infer_schema = SchemaGen(\n statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True\n )\n\n # Performs anomaly detection based on statistics and data schema.\n validate_stats = ExampleValidator(\n statistics=statistics_gen.outputs['statistics'],\n schema=infer_schema.outputs['schema']\n )\n\n # Performs transformations and feature engineering in training and serving.\n transform = Transform(\n examples=example_gen.outputs['examples'],\n schema=infer_schema.outputs['schema'],\n module_file=_module_file_param\n )\n\n # Uses user-provided Python function that implements a model using Keras.\n trainer = Trainer(\n module_file=_module_file_param,\n custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),\n examples=transform.outputs['transformed_examples'],\n transform_graph=transform.outputs['transform_graph'],\n schema=infer_schema.outputs['schema'],\n train_args=trainer_pb2.TrainArgs(num_steps=100),\n eval_args=trainer_pb2.EvalArgs(num_steps=50)\n )\n\n # Get the latest blessed model for model validation.\n model_resolver = ResolverNode(\n instance_name='latest_blessed_model_resolver',\n resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,\n model=Channel(type=Model),\n model_blessing=Channel(type=ModelBlessing)\n )\n\n # Uses TFMA to compute an evaluation statistics over features of a model and\n # perform quality validation of a candidate model (compared to a baseline).\n # Note: to compile this successfully you'll need TFMA at >= 0.21.5\n eval_config = tfma.EvalConfig(\n model_specs=[\n tfma.ModelSpec(name='candidate', label_key='variety'),\n tfma.ModelSpec(\n name='baseline', label_key='variety', is_baseline=True\n )\n ],\n slicing_specs=[\n tfma.SlicingSpec(),\n # Data can be sliced along a feature column. 
Required by TFMA visualization.\n tfma.SlicingSpec(feature_keys=['sepal_length'])],\n metrics_specs=[\n tfma.MetricsSpec(\n metrics=[\n tfma.MetricConfig(\n class_name='SparseCategoricalAccuracy',\n threshold=tfma.config.MetricThreshold(\n value_threshold=tfma.GenericValueThreshold(\n lower_bound={'value': 0.9}\n ),\n change_threshold=tfma.GenericChangeThreshold(\n direction=tfma.MetricDirection.HIGHER_IS_BETTER,\n absolute={'value': -1e-10}\n )\n )\n )\n ]\n )\n ]\n )\n\n # Uses TFMA to compute a evaluation statistics over features of a model.\n model_analyzer = Evaluator(\n examples=example_gen.outputs['examples'],\n model=trainer.outputs['model'],\n baseline_model=model_resolver.outputs['model'],\n # Change threshold will be ignored if there is no baseline (first run).\n eval_config=eval_config\n )\n\n # Checks whether the model passed the validation steps and pushes the model\n # to a file destination if check passed.\n pusher = Pusher(\n model=trainer.outputs['model'],\n model_blessing=model_analyzer.outputs['blessing'],\n push_destination=pusher_pb2.PushDestination(\n filesystem=pusher_pb2.PushDestination.Filesystem(\n base_directory=os.path.\n join(str(pipeline.ROOT_PARAMETER), 'model_serving')\n )\n )\n )\n\n return pipeline.Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n components=[\n example_gen, statistics_gen, infer_schema, validate_stats, transform,\n trainer, model_resolver, model_analyzer, pusher\n ],\n enable_cache=True,\n )\n\n\nif __name__ == '__main__':\n absl.logging.set_verbosity(absl.logging.INFO)\n # Make sure the version of TFX image used is consistent with the version of\n # TFX SDK. Here we use tfx:0.22.0 image.\n config = kubeflow_dag_runner.KubeflowDagRunnerConfig(\n kubeflow_metadata_config=kubeflow_dag_runner.\n get_default_kubeflow_metadata_config(),\n tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',\n )\n kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(\n output_filename=__file__ + '.yaml', config=config\n )\n kfp_runner.run(\n _create_pipeline(\n pipeline_name=_pipeline_name, pipeline_root=_pipeline_root\n )\n )\n", "path": "samples/core/iris/iris.py"}]}
| 3,278 | 135 |
gh_patches_debug_7421
|
rasdani/github-patches
|
git_diff
|
safe-global__safe-config-service-65
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Format JSON keys in camel case
Endpoints consumed by the clients should have their JSON keys in camel case. Using camel case keeps the formatting consistent with what we have in other services.
</issue>
<code>
[start of src/config/settings.py]
1 """
2 Django settings for safe_client_config_service project.
3
4 Generated by 'django-admin startproject' using Django 3.2.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.2/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.2/ref/settings/
11 """
12 import os
13 from distutils.util import strtobool
14 from pathlib import Path
15
16 # Build paths inside the project like this: BASE_DIR / 'subdir'.
17 BASE_DIR = Path(__file__).resolve().parent.parent
18
19 # Quick-start development settings - unsuitable for production
20 # See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
21
22 # SECURITY WARNING: keep the secret key used in production secret!
23 SECRET_KEY = os.getenv("SECRET_KEY", None)
24
25 # SECURITY WARNING: don't run with debug turned on in production!
26 DEBUG = bool(strtobool(os.getenv("DEBUG", "false")))
27
28 # https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS
29 allowed_hosts = os.getenv("DJANGO_ALLOWED_HOSTS", ".localhost,127.0.0.1,[::1]")
30 ALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(",")]
31
32 # Application definition
33
34 default_renderer_classes = os.getenv(
35 "REST_DEFAULT_RENDERER_CLASSES", "rest_framework.renderers.JSONRenderer"
36 )
37 REST_FRAMEWORK = {
38 # https://www.django-rest-framework.org/api-guide/renderers/
39 "DEFAULT_RENDERER_CLASSES": [
40 default_renderer_class.strip()
41 for default_renderer_class in default_renderer_classes.split(",")
42 ]
43 }
44
45 INSTALLED_APPS = [
46 "safe_apps.apps.AppsConfig",
47 "django.contrib.admin",
48 "django.contrib.auth",
49 "django.contrib.contenttypes",
50 "django.contrib.sessions",
51 "django.contrib.messages",
52 "django.contrib.staticfiles",
53 "rest_framework",
54 ]
55
56 MIDDLEWARE = [
57 "config.middleware.LoggingMiddleware",
58 "django.middleware.security.SecurityMiddleware",
59 "django.contrib.sessions.middleware.SessionMiddleware",
60 "django.middleware.common.CommonMiddleware",
61 "django.middleware.csrf.CsrfViewMiddleware",
62 "django.contrib.auth.middleware.AuthenticationMiddleware",
63 "django.contrib.messages.middleware.MessageMiddleware",
64 "django.middleware.clickjacking.XFrameOptionsMiddleware",
65 ]
66
67 CACHES = {
68 "default": {
69 "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
70 },
71 "safe-apps": {
72 "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
73 },
74 }
75
76 LOGGING = {
77 "version": 1,
78 "disable_existing_loggers": False,
79 "formatters": {
80 "short": {"format": "%(asctime)s %(message)s"},
81 "verbose": {
82 "format": "%(asctime)s [%(levelname)s] [%(processName)s] %(message)s"
83 },
84 },
85 "handlers": {
86 "console": {
87 "class": "logging.StreamHandler",
88 "formatter": "verbose",
89 },
90 "console_short": {
91 "class": "logging.StreamHandler",
92 "formatter": "short",
93 },
94 },
95 "root": {
96 "handlers": ["console"],
97 "level": os.getenv("ROOT_LOG_LEVEL", "INFO"),
98 },
99 "loggers": {
100 "LoggingMiddleware": {
101 "handlers": ["console_short"],
102 "level": "INFO",
103 "propagate": False,
104 },
105 },
106 }
107
108 ROOT_URLCONF = "config.urls"
109
110 TEMPLATES = [
111 {
112 "BACKEND": "django.template.backends.django.DjangoTemplates",
113 "DIRS": [],
114 "APP_DIRS": True,
115 "OPTIONS": {
116 "context_processors": [
117 "django.template.context_processors.debug",
118 "django.template.context_processors.request",
119 "django.contrib.auth.context_processors.auth",
120 "django.contrib.messages.context_processors.messages",
121 ],
122 },
123 },
124 ]
125
126 WSGI_APPLICATION = "config.wsgi.application"
127
128 # Database
129 # https://docs.djangoproject.com/en/3.2/ref/settings/#databases
130
131 DATABASES = {
132 "default": {
133 "ENGINE": "django.db.backends.postgresql",
134 "NAME": os.getenv("POSTGRES_NAME", "postgres"),
135 "USER": os.getenv("POSTGRES_USER", "postgres"),
136 "PASSWORD": os.getenv("POSTGRES_PASSWORD", "postgres"),
137 "HOST": os.getenv("POSTGRES_HOST", "db"),
138 "PORT": os.getenv("POSTGRES_PORT", "5432"),
139 }
140 }
141
142 # Password validation
143 # https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
144
145 AUTH_PASSWORD_VALIDATORS = [
146 {
147 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
148 },
149 {
150 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
151 },
152 {
153 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
154 },
155 {
156 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
157 },
158 ]
159
160 # Internationalization
161 # https://docs.djangoproject.com/en/3.2/topics/i18n/
162
163 LANGUAGE_CODE = "en-us"
164
165 TIME_ZONE = "UTC"
166
167 USE_I18N = True
168
169 USE_L10N = True
170
171 USE_TZ = True
172
173 # Static files (CSS, JavaScript, Images)
174 # https://docs.djangoproject.com/en/3.2/howto/static-files/
175
176 STATIC_URL = "/static/"
177
178 # Default primary key field type
179 # https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
180
181 DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
182
[end of src/config/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/config/settings.py b/src/config/settings.py
--- a/src/config/settings.py
+++ b/src/config/settings.py
@@ -31,14 +31,10 @@
# Application definition
-default_renderer_classes = os.getenv(
- "REST_DEFAULT_RENDERER_CLASSES", "rest_framework.renderers.JSONRenderer"
-)
REST_FRAMEWORK = {
# https://www.django-rest-framework.org/api-guide/renderers/
"DEFAULT_RENDERER_CLASSES": [
- default_renderer_class.strip()
- for default_renderer_class in default_renderer_classes.split(",")
+ "djangorestframework_camel_case.render.CamelCaseJSONRenderer",
]
}
|
{"golden_diff": "diff --git a/src/config/settings.py b/src/config/settings.py\n--- a/src/config/settings.py\n+++ b/src/config/settings.py\n@@ -31,14 +31,10 @@\n \n # Application definition\n \n-default_renderer_classes = os.getenv(\n- \"REST_DEFAULT_RENDERER_CLASSES\", \"rest_framework.renderers.JSONRenderer\"\n-)\n REST_FRAMEWORK = {\n # https://www.django-rest-framework.org/api-guide/renderers/\n \"DEFAULT_RENDERER_CLASSES\": [\n- default_renderer_class.strip()\n- for default_renderer_class in default_renderer_classes.split(\",\")\n+ \"djangorestframework_camel_case.render.CamelCaseJSONRenderer\",\n ]\n }\n", "issue": "Format JSON keys in camel case\nEndpoints consumed by the clients should have the JSON keys in camel case. By having them camel case, it follows the formatting that we have in other services. \n", "before_files": [{"content": "\"\"\"\nDjango settings for safe_client_config_service project.\n\nGenerated by 'django-admin startproject' using Django 3.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.2/ref/settings/\n\"\"\"\nimport os\nfrom distutils.util import strtobool\nfrom pathlib import Path\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.getenv(\"SECRET_KEY\", None)\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = bool(strtobool(os.getenv(\"DEBUG\", \"false\")))\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS\nallowed_hosts = os.getenv(\"DJANGO_ALLOWED_HOSTS\", \".localhost,127.0.0.1,[::1]\")\nALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(\",\")]\n\n# Application definition\n\ndefault_renderer_classes = os.getenv(\n \"REST_DEFAULT_RENDERER_CLASSES\", \"rest_framework.renderers.JSONRenderer\"\n)\nREST_FRAMEWORK = {\n # https://www.django-rest-framework.org/api-guide/renderers/\n \"DEFAULT_RENDERER_CLASSES\": [\n default_renderer_class.strip()\n for default_renderer_class in default_renderer_classes.split(\",\")\n ]\n}\n\nINSTALLED_APPS = [\n \"safe_apps.apps.AppsConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n]\n\nMIDDLEWARE = [\n \"config.middleware.LoggingMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n \"safe-apps\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n}\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"short\": {\"format\": \"%(asctime)s %(message)s\"},\n \"verbose\": {\n \"format\": \"%(asctime)s [%(levelname)s] [%(processName)s] 
%(message)s\"\n },\n },\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n },\n \"console_short\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"short\",\n },\n },\n \"root\": {\n \"handlers\": [\"console\"],\n \"level\": os.getenv(\"ROOT_LOG_LEVEL\", \"INFO\"),\n },\n \"loggers\": {\n \"LoggingMiddleware\": {\n \"handlers\": [\"console_short\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n}\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": os.getenv(\"POSTGRES_NAME\", \"postgres\"),\n \"USER\": os.getenv(\"POSTGRES_USER\", \"postgres\"),\n \"PASSWORD\": os.getenv(\"POSTGRES_PASSWORD\", \"postgres\"),\n \"HOST\": os.getenv(\"POSTGRES_HOST\", \"db\"),\n \"PORT\": os.getenv(\"POSTGRES_PORT\", \"5432\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Default primary key field type\n# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n\nDEFAULT_AUTO_FIELD = \"django.db.models.BigAutoField\"\n", "path": "src/config/settings.py"}]}
| 2,225 | 138 |
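For context on the record above: the golden diff replaces the env-configurable renderer list with djangorestframework-camel-case's CamelCaseJSONRenderer. The sketch below only illustrates the key renaming such a renderer performs; the camelize helpers are hypothetical stand-ins written for this note, not the package's actual code, and the settings fragment mirrors the diff under the assumption that the djangorestframework-camel-case package is installed.

```python
import re

# Hypothetical helpers illustrating what a camel-case renderer does to keys.
# They are not taken from djangorestframework-camel-case; that package is the
# assumed dependency behind the renderer path used in the golden diff.
_UNDERSCORE_LETTER = re.compile(r"_([a-z0-9])")


def camelize_key(key: str) -> str:
    """Turn a snake_case key such as 'safe_app_url' into 'safeAppUrl'."""
    return _UNDERSCORE_LETTER.sub(lambda m: m.group(1).upper(), key)


def camelize(data):
    """Recursively rewrite dict keys; lists and scalars pass through unchanged."""
    if isinstance(data, dict):
        return {camelize_key(k): camelize(v) for k, v in data.items()}
    if isinstance(data, list):
        return [camelize(item) for item in data]
    return data


# Django settings fragment equivalent to the record's golden diff.
REST_FRAMEWORK = {
    "DEFAULT_RENDERER_CLASSES": [
        "djangorestframework_camel_case.render.CamelCaseJSONRenderer",
    ]
}

if __name__ == "__main__":
    payload = {"safe_app_url": "https://example.org", "chain_ids": [1, 4]}
    # Prints {'safeAppUrl': 'https://example.org', 'chainIds': [1, 4]}
    print(camelize(payload))
```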
gh_patches_debug_25504
|
rasdani/github-patches
|
git_diff
|
canonical__microk8s-4235
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
update homebrew formula to newest microk8s version (1.28) - otherwise Mac Users can't use it.
Summary
The latest formula available on Homebrew as of October 2023 points to Ubuntu version 22.04 and MicroK8s version 1.27. This makes it nearly impossible for Mac users to use it.
Why is this important?
A lot has changed since that time. The instructions do not work in the present day, leading newbies like myself to waste precious time, assuming the fault is theirs :)
Are you interested in contributing to this feature?
yep definitely.
</issue>
<code>
[start of installer/common/definitions.py]
1 MAX_CHARACTERS_WRAP: int = 120
2 command_descriptions = {
3 "add-node": "Adds a node to a cluster",
4 "ambassador": "Ambassador API Gateway and Ingress",
5 "cilium": "The cilium client",
6 "config": "Print the kubeconfig",
7 "ctr": "The containerd client",
8 "dashboard-proxy": "Enable the Kubernetes dashboard and proxy to host",
9 "dbctl": "Backup and restore the Kubernetes datastore",
10 "disable": "Disables running add-ons",
11 "enable": "Enables useful add-ons",
12 "helm": "The helm client",
13 "helm3": "The helm3 client",
14 "inspect": "Checks the cluster and gathers logs",
15 "istioctl": "The istio client",
16 "join": "Joins this instance as a node to a cluster",
17 "kubectl": "The kubernetes client",
18 "leave": "Disconnects this node from any cluster it has joined",
19 "linkerd": "The linkerd client",
20 "refresh-certs": "Refresh the CA certificates in this deployment",
21 "remove-node": "Removes a node from the cluster",
22 "reset": "Cleans the cluster from all workloads",
23 "start": "Starts the kubernetes cluster",
24 "status": "Displays the status of the cluster",
25 "stop": "Stops the kubernetes cluster",
26 }
27 DEFAULT_CORES: int = 2
28 DEFAULT_MEMORY_GB: int = 4
29 DEFAULT_DISK_GB: int = 50
30 DEFAULT_ASSUME: bool = False
31 DEFAULT_CHANNEL: str = "1.27/stable"
32 DEFAULT_IMAGE: str = "22.04"
33
34 MIN_CORES: int = 2
35 MIN_MEMORY_GB: int = 2
36 MIN_DISK_GB: int = 10
37
[end of installer/common/definitions.py]
[start of installer/vm_providers/_multipass/_windows.py]
1 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
2 #
3 # Copyright (C) 2018 Canonical Ltd
4 #
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License version 3 as
7 # published by the Free Software Foundation.
8 #
9 # This program is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with this program. If not, see <http://www.gnu.org/licenses/>.
16
17 import logging
18 import os.path
19 import requests
20 import shutil
21 import simplejson
22 import subprocess
23 import sys
24 import tempfile
25
26 from progressbar import AnimatedMarker, Bar, Percentage, ProgressBar, UnknownLength
27
28 from common.file_utils import calculate_sha3_384, is_dumb_terminal
29 from vm_providers.errors import (
30 ProviderMultipassDownloadFailed,
31 ProviderMultipassInstallationFailed,
32 )
33
34 if sys.platform == "win32":
35 import winreg
36
37
38 logger = logging.getLogger(__name__)
39
40
41 _MULTIPASS_RELEASES_API_URL = "https://api.github.com/repos/canonical/multipass/releases"
42 _MULTIPASS_DL_VERSION = "1.12.0"
43 _MULTIPASS_DL_NAME = "multipass-{version}+win-win64.exe".format(version=_MULTIPASS_DL_VERSION)
44
45 # Download multipass installer and calculate hash:
46 # python3 -c "from installer.common.file_utils import calculate_sha3_384; print(calculate_sha3_384('$HOME/Downloads/multipass-1.11.1+win-win64.exe'))" # noqa: E501
47 _MULTIPASS_DL_SHA3_384 = "ddba66059052a67fa6a363729b75aca374591bc5a2531c938dd70d63f683c22108d5c2ab77025b818b31f69103228eee" # noqa: E501
48
49
50 def windows_reload_multipass_path_env():
51 """Update PATH to include installed Multipass, if not already set."""
52
53 assert sys.platform == "win32"
54
55 key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment")
56
57 paths = os.environ["PATH"].split(";")
58
59 # Drop empty placeholder for trailing comma, if present.
60 if paths[-1] == "":
61 del paths[-1]
62
63 reg_user_path, _ = winreg.QueryValueEx(key, "Path")
64 for path in reg_user_path.split(";"):
65 if path not in paths and "Multipass" in path:
66 paths.append(path)
67
68 # Restore path with trailing comma.
69 os.environ["PATH"] = ";".join(paths) + ";"
70
71
72 def _run_installer(installer_path: str, echoer):
73 """Execute multipass installer."""
74
75 echoer.info("Installing Multipass...")
76
77 # Multipass requires administrative privileges to install, which requires
78 # the use of `runas` functionality. Some of the options included:
79 # (1) https://stackoverflow.com/a/34216774
80 # (2) ShellExecuteW and wait on installer by attempting to delete it.
81 # Windows would prevent us from deleting installer with a PermissionError:
82 # PermissionError: [WinError 32] The process cannot access the file because
83 # it is being used by another process: <path>
84 # (3) Use PowerShell's "Start-Process" with RunAs verb as shown below.
85 # None of the options are quite ideal, but #3 will do.
86 cmd = """
87 & {{
88 try {{
89 $Output = Start-Process -FilePath {path!r} -Args /S -Verb RunAs -Wait -PassThru
90 }} catch {{
91 [Environment]::Exit(1)
92 }}
93 }}
94 """.format(
95 path=installer_path
96 )
97
98 try:
99 subprocess.check_call(["powershell.exe", "-Command", cmd])
100 except subprocess.CalledProcessError:
101 raise ProviderMultipassInstallationFailed("error launching installer")
102
103 # Reload path environment to see if we can find multipass now.
104 windows_reload_multipass_path_env()
105
106 if not shutil.which("multipass.exe"):
107 # Installation failed.
108 raise ProviderMultipassInstallationFailed("installation did not complete successfully")
109
110 echoer.info("Multipass installation completed successfully.")
111
112
113 def _requests_exception_hint(e: requests.RequestException) -> str:
114 # Use the __doc__ description to give the user a hint. It seems to be a
115 # a decent option over trying to enumerate all of possible types.
116 if e.__doc__:
117 split_lines = e.__doc__.splitlines()
118 if split_lines:
119 return e.__doc__.splitlines()[0].decode().strip()
120
121 # Should never get here.
122 return "unknown download error"
123
124
125 def _fetch_installer_url() -> str:
126 """Verify version set is a valid
127 ref in GitHub and return the full
128 URL.
129 """
130
131 try:
132 resp = requests.get(_MULTIPASS_RELEASES_API_URL)
133 except requests.RequestException as e:
134 raise ProviderMultipassDownloadFailed(_requests_exception_hint(e))
135
136 try:
137 data = resp.json()
138 except simplejson.JSONDecodeError:
139 raise ProviderMultipassDownloadFailed(
140 "failed to fetch valid release data from {}".format(_MULTIPASS_RELEASES_API_URL)
141 )
142
143 for assets in data:
144 for asset in assets.get("assets", list()):
145 # Find matching name.
146 if asset.get("name") != _MULTIPASS_DL_NAME:
147 continue
148
149 return asset.get("browser_download_url")
150
151 # Something changed we don't know about - we will simply categorize
152 # all possible events as an updated version we do not yet know about.
153 raise ProviderMultipassDownloadFailed("ref specified is not a valid ref in GitHub")
154
155
156 def _download_multipass(dl_dir: str, echoer) -> str:
157 """Creates temporary Downloads installer to temp directory."""
158
159 dl_url = _fetch_installer_url()
160 dl_basename = os.path.basename(dl_url)
161 dl_path = os.path.join(dl_dir, dl_basename)
162
163 echoer.info("Downloading Multipass installer...\n{} -> {}".format(dl_url, dl_path))
164
165 try:
166 request = requests.get(dl_url, stream=True, allow_redirects=True)
167 request.raise_for_status()
168 download_requests_stream(request, dl_path)
169 except requests.RequestException as e:
170 raise ProviderMultipassDownloadFailed(_requests_exception_hint(e))
171
172 digest = calculate_sha3_384(dl_path)
173 if digest != _MULTIPASS_DL_SHA3_384:
174 raise ProviderMultipassDownloadFailed(
175 "download failed verification (expected={} but found={})".format(
176 _MULTIPASS_DL_SHA3_384, digest
177 )
178 )
179
180 echoer.info("Verified installer successfully...")
181 return dl_path
182
183
184 def windows_install_multipass(echoer) -> None:
185 """Download and install multipass."""
186
187 assert sys.platform == "win32"
188
189 dl_dir = tempfile.mkdtemp()
190 dl_path = _download_multipass(dl_dir, echoer)
191 _run_installer(dl_path, echoer)
192
193 # Cleanup.
194 shutil.rmtree(dl_dir)
195
196
197 def _init_progress_bar(total_length, destination, message=None):
198 if not message:
199 message = "Downloading {!r}".format(os.path.basename(destination))
200
201 valid_length = total_length and total_length > 0
202
203 if valid_length and is_dumb_terminal():
204 widgets = [message, " ", Percentage()]
205 maxval = total_length
206 elif valid_length and not is_dumb_terminal():
207 widgets = [message, Bar(marker="=", left="[", right="]"), " ", Percentage()]
208 maxval = total_length
209 elif not valid_length and is_dumb_terminal():
210 widgets = [message]
211 maxval = UnknownLength
212 else:
213 widgets = [message, AnimatedMarker()]
214 maxval = UnknownLength
215
216 return ProgressBar(widgets=widgets, maxval=maxval)
217
218
219 def download_requests_stream(request_stream, destination, message=None, total_read=0):
220 """This is a facility to download a request with nice progress bars."""
221
222 # Doing len(request_stream.content) may defeat the purpose of a
223 # progress bar
224 total_length = 0
225 if not request_stream.headers.get("Content-Encoding", ""):
226 total_length = int(request_stream.headers.get("Content-Length", "0"))
227 # Content-Length in the case of resuming will be
228 # Content-Length - total_read so we add back up to have the feel of
229 # resuming
230 if os.path.exists(destination):
231 total_length += total_read
232
233 progress_bar = _init_progress_bar(total_length, destination, message)
234 progress_bar.start()
235
236 if os.path.exists(destination):
237 mode = "ab"
238 else:
239 mode = "wb"
240 with open(destination, mode) as destination_file:
241 for buf in request_stream.iter_content(1024):
242 destination_file.write(buf)
243 if not is_dumb_terminal():
244 total_read += len(buf)
245 progress_bar.update(total_read)
246 progress_bar.finish()
247
[end of installer/vm_providers/_multipass/_windows.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/installer/common/definitions.py b/installer/common/definitions.py
--- a/installer/common/definitions.py
+++ b/installer/common/definitions.py
@@ -28,7 +28,7 @@
DEFAULT_MEMORY_GB: int = 4
DEFAULT_DISK_GB: int = 50
DEFAULT_ASSUME: bool = False
-DEFAULT_CHANNEL: str = "1.27/stable"
+DEFAULT_CHANNEL: str = "1.28/stable"
DEFAULT_IMAGE: str = "22.04"
MIN_CORES: int = 2
diff --git a/installer/vm_providers/_multipass/_windows.py b/installer/vm_providers/_multipass/_windows.py
--- a/installer/vm_providers/_multipass/_windows.py
+++ b/installer/vm_providers/_multipass/_windows.py
@@ -39,12 +39,12 @@
_MULTIPASS_RELEASES_API_URL = "https://api.github.com/repos/canonical/multipass/releases"
-_MULTIPASS_DL_VERSION = "1.12.0"
+_MULTIPASS_DL_VERSION = "1.12.2"
_MULTIPASS_DL_NAME = "multipass-{version}+win-win64.exe".format(version=_MULTIPASS_DL_VERSION)
# Download multipass installer and calculate hash:
# python3 -c "from installer.common.file_utils import calculate_sha3_384; print(calculate_sha3_384('$HOME/Downloads/multipass-1.11.1+win-win64.exe'))" # noqa: E501
-_MULTIPASS_DL_SHA3_384 = "ddba66059052a67fa6a363729b75aca374591bc5a2531c938dd70d63f683c22108d5c2ab77025b818b31f69103228eee" # noqa: E501
+_MULTIPASS_DL_SHA3_384 = "9031c8fc98b941df1094a832c356e12f281c70d0eb10bee15b5576c61af4c8a17ef32b833f0043c8df0e04897e69c8bc" # noqa: E501
def windows_reload_multipass_path_env():
|
{"golden_diff": "diff --git a/installer/common/definitions.py b/installer/common/definitions.py\n--- a/installer/common/definitions.py\n+++ b/installer/common/definitions.py\n@@ -28,7 +28,7 @@\n DEFAULT_MEMORY_GB: int = 4\n DEFAULT_DISK_GB: int = 50\n DEFAULT_ASSUME: bool = False\n-DEFAULT_CHANNEL: str = \"1.27/stable\"\n+DEFAULT_CHANNEL: str = \"1.28/stable\"\n DEFAULT_IMAGE: str = \"22.04\"\n \n MIN_CORES: int = 2\ndiff --git a/installer/vm_providers/_multipass/_windows.py b/installer/vm_providers/_multipass/_windows.py\n--- a/installer/vm_providers/_multipass/_windows.py\n+++ b/installer/vm_providers/_multipass/_windows.py\n@@ -39,12 +39,12 @@\n \n \n _MULTIPASS_RELEASES_API_URL = \"https://api.github.com/repos/canonical/multipass/releases\"\n-_MULTIPASS_DL_VERSION = \"1.12.0\"\n+_MULTIPASS_DL_VERSION = \"1.12.2\"\n _MULTIPASS_DL_NAME = \"multipass-{version}+win-win64.exe\".format(version=_MULTIPASS_DL_VERSION)\n \n # Download multipass installer and calculate hash:\n # python3 -c \"from installer.common.file_utils import calculate_sha3_384; print(calculate_sha3_384('$HOME/Downloads/multipass-1.11.1+win-win64.exe'))\" # noqa: E501\n-_MULTIPASS_DL_SHA3_384 = \"ddba66059052a67fa6a363729b75aca374591bc5a2531c938dd70d63f683c22108d5c2ab77025b818b31f69103228eee\" # noqa: E501\n+_MULTIPASS_DL_SHA3_384 = \"9031c8fc98b941df1094a832c356e12f281c70d0eb10bee15b5576c61af4c8a17ef32b833f0043c8df0e04897e69c8bc\" # noqa: E501\n \n \n def windows_reload_multipass_path_env():\n", "issue": "update homebrew formula to newest microk8s version (1.28) - otherwise Mac Users can't use it. \nSummary\r\nThe latest present formula on homebrew as of October 2023 point to ubuntu version 22.04 and microk8s version 1.27. This makes it near to impossible for mac users to use it.\r\n\r\nWhy is this important?\r\nLot has changed since that time. 
The instructions do not work in the present day, leading to newbies like myself wasting precious time, assuming the fault is theirs :)\r\n\r\nAre you interested in contributing to this feature?\r\nyep definitely.\n", "before_files": [{"content": "MAX_CHARACTERS_WRAP: int = 120\ncommand_descriptions = {\n \"add-node\": \"Adds a node to a cluster\",\n \"ambassador\": \"Ambassador API Gateway and Ingress\",\n \"cilium\": \"The cilium client\",\n \"config\": \"Print the kubeconfig\",\n \"ctr\": \"The containerd client\",\n \"dashboard-proxy\": \"Enable the Kubernetes dashboard and proxy to host\",\n \"dbctl\": \"Backup and restore the Kubernetes datastore\",\n \"disable\": \"Disables running add-ons\",\n \"enable\": \"Enables useful add-ons\",\n \"helm\": \"The helm client\",\n \"helm3\": \"The helm3 client\",\n \"inspect\": \"Checks the cluster and gathers logs\",\n \"istioctl\": \"The istio client\",\n \"join\": \"Joins this instance as a node to a cluster\",\n \"kubectl\": \"The kubernetes client\",\n \"leave\": \"Disconnects this node from any cluster it has joined\",\n \"linkerd\": \"The linkerd client\",\n \"refresh-certs\": \"Refresh the CA certificates in this deployment\",\n \"remove-node\": \"Removes a node from the cluster\",\n \"reset\": \"Cleans the cluster from all workloads\",\n \"start\": \"Starts the kubernetes cluster\",\n \"status\": \"Displays the status of the cluster\",\n \"stop\": \"Stops the kubernetes cluster\",\n}\nDEFAULT_CORES: int = 2\nDEFAULT_MEMORY_GB: int = 4\nDEFAULT_DISK_GB: int = 50\nDEFAULT_ASSUME: bool = False\nDEFAULT_CHANNEL: str = \"1.27/stable\"\nDEFAULT_IMAGE: str = \"22.04\"\n\nMIN_CORES: int = 2\nMIN_MEMORY_GB: int = 2\nMIN_DISK_GB: int = 10\n", "path": "installer/common/definitions.py"}, {"content": "# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-\n#\n# Copyright (C) 2018 Canonical Ltd\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n\nimport logging\nimport os.path\nimport requests\nimport shutil\nimport simplejson\nimport subprocess\nimport sys\nimport tempfile\n\nfrom progressbar import AnimatedMarker, Bar, Percentage, ProgressBar, UnknownLength\n\nfrom common.file_utils import calculate_sha3_384, is_dumb_terminal\nfrom vm_providers.errors import (\n ProviderMultipassDownloadFailed,\n ProviderMultipassInstallationFailed,\n)\n\nif sys.platform == \"win32\":\n import winreg\n\n\nlogger = logging.getLogger(__name__)\n\n\n_MULTIPASS_RELEASES_API_URL = \"https://api.github.com/repos/canonical/multipass/releases\"\n_MULTIPASS_DL_VERSION = \"1.12.0\"\n_MULTIPASS_DL_NAME = \"multipass-{version}+win-win64.exe\".format(version=_MULTIPASS_DL_VERSION)\n\n# Download multipass installer and calculate hash:\n# python3 -c \"from installer.common.file_utils import calculate_sha3_384; print(calculate_sha3_384('$HOME/Downloads/multipass-1.11.1+win-win64.exe'))\" # noqa: E501\n_MULTIPASS_DL_SHA3_384 = \"ddba66059052a67fa6a363729b75aca374591bc5a2531c938dd70d63f683c22108d5c2ab77025b818b31f69103228eee\" # noqa: E501\n\n\ndef windows_reload_multipass_path_env():\n \"\"\"Update PATH to include installed Multipass, if not already set.\"\"\"\n\n assert sys.platform == \"win32\"\n\n key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, \"Environment\")\n\n paths = os.environ[\"PATH\"].split(\";\")\n\n # Drop empty placeholder for trailing comma, if present.\n if paths[-1] == \"\":\n del paths[-1]\n\n reg_user_path, _ = winreg.QueryValueEx(key, \"Path\")\n for path in reg_user_path.split(\";\"):\n if path not in paths and \"Multipass\" in path:\n paths.append(path)\n\n # Restore path with trailing comma.\n os.environ[\"PATH\"] = \";\".join(paths) + \";\"\n\n\ndef _run_installer(installer_path: str, echoer):\n \"\"\"Execute multipass installer.\"\"\"\n\n echoer.info(\"Installing Multipass...\")\n\n # Multipass requires administrative privileges to install, which requires\n # the use of `runas` functionality. Some of the options included:\n # (1) https://stackoverflow.com/a/34216774\n # (2) ShellExecuteW and wait on installer by attempting to delete it.\n # Windows would prevent us from deleting installer with a PermissionError:\n # PermissionError: [WinError 32] The process cannot access the file because\n # it is being used by another process: <path>\n # (3) Use PowerShell's \"Start-Process\" with RunAs verb as shown below.\n # None of the options are quite ideal, but #3 will do.\n cmd = \"\"\"\n & {{\n try {{\n $Output = Start-Process -FilePath {path!r} -Args /S -Verb RunAs -Wait -PassThru\n }} catch {{\n [Environment]::Exit(1)\n }}\n }}\n \"\"\".format(\n path=installer_path\n )\n\n try:\n subprocess.check_call([\"powershell.exe\", \"-Command\", cmd])\n except subprocess.CalledProcessError:\n raise ProviderMultipassInstallationFailed(\"error launching installer\")\n\n # Reload path environment to see if we can find multipass now.\n windows_reload_multipass_path_env()\n\n if not shutil.which(\"multipass.exe\"):\n # Installation failed.\n raise ProviderMultipassInstallationFailed(\"installation did not complete successfully\")\n\n echoer.info(\"Multipass installation completed successfully.\")\n\n\ndef _requests_exception_hint(e: requests.RequestException) -> str:\n # Use the __doc__ description to give the user a hint. 
It seems to be a\n # a decent option over trying to enumerate all of possible types.\n if e.__doc__:\n split_lines = e.__doc__.splitlines()\n if split_lines:\n return e.__doc__.splitlines()[0].decode().strip()\n\n # Should never get here.\n return \"unknown download error\"\n\n\ndef _fetch_installer_url() -> str:\n \"\"\"Verify version set is a valid\n ref in GitHub and return the full\n URL.\n \"\"\"\n\n try:\n resp = requests.get(_MULTIPASS_RELEASES_API_URL)\n except requests.RequestException as e:\n raise ProviderMultipassDownloadFailed(_requests_exception_hint(e))\n\n try:\n data = resp.json()\n except simplejson.JSONDecodeError:\n raise ProviderMultipassDownloadFailed(\n \"failed to fetch valid release data from {}\".format(_MULTIPASS_RELEASES_API_URL)\n )\n\n for assets in data:\n for asset in assets.get(\"assets\", list()):\n # Find matching name.\n if asset.get(\"name\") != _MULTIPASS_DL_NAME:\n continue\n\n return asset.get(\"browser_download_url\")\n\n # Something changed we don't know about - we will simply categorize\n # all possible events as an updated version we do not yet know about.\n raise ProviderMultipassDownloadFailed(\"ref specified is not a valid ref in GitHub\")\n\n\ndef _download_multipass(dl_dir: str, echoer) -> str:\n \"\"\"Creates temporary Downloads installer to temp directory.\"\"\"\n\n dl_url = _fetch_installer_url()\n dl_basename = os.path.basename(dl_url)\n dl_path = os.path.join(dl_dir, dl_basename)\n\n echoer.info(\"Downloading Multipass installer...\\n{} -> {}\".format(dl_url, dl_path))\n\n try:\n request = requests.get(dl_url, stream=True, allow_redirects=True)\n request.raise_for_status()\n download_requests_stream(request, dl_path)\n except requests.RequestException as e:\n raise ProviderMultipassDownloadFailed(_requests_exception_hint(e))\n\n digest = calculate_sha3_384(dl_path)\n if digest != _MULTIPASS_DL_SHA3_384:\n raise ProviderMultipassDownloadFailed(\n \"download failed verification (expected={} but found={})\".format(\n _MULTIPASS_DL_SHA3_384, digest\n )\n )\n\n echoer.info(\"Verified installer successfully...\")\n return dl_path\n\n\ndef windows_install_multipass(echoer) -> None:\n \"\"\"Download and install multipass.\"\"\"\n\n assert sys.platform == \"win32\"\n\n dl_dir = tempfile.mkdtemp()\n dl_path = _download_multipass(dl_dir, echoer)\n _run_installer(dl_path, echoer)\n\n # Cleanup.\n shutil.rmtree(dl_dir)\n\n\ndef _init_progress_bar(total_length, destination, message=None):\n if not message:\n message = \"Downloading {!r}\".format(os.path.basename(destination))\n\n valid_length = total_length and total_length > 0\n\n if valid_length and is_dumb_terminal():\n widgets = [message, \" \", Percentage()]\n maxval = total_length\n elif valid_length and not is_dumb_terminal():\n widgets = [message, Bar(marker=\"=\", left=\"[\", right=\"]\"), \" \", Percentage()]\n maxval = total_length\n elif not valid_length and is_dumb_terminal():\n widgets = [message]\n maxval = UnknownLength\n else:\n widgets = [message, AnimatedMarker()]\n maxval = UnknownLength\n\n return ProgressBar(widgets=widgets, maxval=maxval)\n\n\ndef download_requests_stream(request_stream, destination, message=None, total_read=0):\n \"\"\"This is a facility to download a request with nice progress bars.\"\"\"\n\n # Doing len(request_stream.content) may defeat the purpose of a\n # progress bar\n total_length = 0\n if not request_stream.headers.get(\"Content-Encoding\", \"\"):\n total_length = int(request_stream.headers.get(\"Content-Length\", \"0\"))\n # Content-Length in the case 
of resuming will be\n # Content-Length - total_read so we add back up to have the feel of\n # resuming\n if os.path.exists(destination):\n total_length += total_read\n\n progress_bar = _init_progress_bar(total_length, destination, message)\n progress_bar.start()\n\n if os.path.exists(destination):\n mode = \"ab\"\n else:\n mode = \"wb\"\n with open(destination, mode) as destination_file:\n for buf in request_stream.iter_content(1024):\n destination_file.write(buf)\n if not is_dumb_terminal():\n total_read += len(buf)\n progress_bar.update(total_read)\n progress_bar.finish()\n", "path": "installer/vm_providers/_multipass/_windows.py"}]}
| 3,869 | 581 |
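For context on the record above: besides bumping DEFAULT_CHANNEL, the golden diff updates the pinned Multipass installer version and its SHA3-384 digest. The record's own comment computes that digest with installer.common.file_utils.calculate_sha3_384; the snippet below is a simplified stand-alone equivalent, assuming nothing beyond the Python standard library and a local file path passed on the command line.

```python
import hashlib
import sys


def sha3_384_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA3-384 and return the hex digest.

    This mirrors the intent of calculate_sha3_384 in the record above, but it
    is a simplified sketch, not the project's actual helper.
    """
    digest = hashlib.sha3_384()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    # Example: python sha3_384_check.py multipass-1.12.2+win-win64.exe
    print(sha3_384_of_file(sys.argv[1]))
```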
gh_patches_debug_491
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-22637
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Show info about organization and Zulip in gear menu
At present, it requires some digging to find a few key pieces of information about the Zulip organization one is viewing:
- Organization name
- Organization URL (if using the Desktop app)
- For Zulip Cloud, the plan that the organization on.
- For self-hosted Zulip, does the server need to be upgraded? What's the version of the server, and what's the current release version?
We should address this by adding this information at the top of the gear settings menu.
- For all users: Show organization name and URL
- For all Zulip Cloud users: Show plan name with a link to `/plans`, e.g. "Zulip Cloud Free"
- For owners of Zulip Cloud Free orgs: Show "Upgrade to Zulip Cloud Standard" link to `/upgrade`
- For all self-hosted users:
- Show Zulip server version (same as in the "About Zulip" widget); we'll need to test to make sure it looks reasonable for non-standard versions (e.g. forks, installs running off `main`).
- If the server version is old, we should show an "Upgrade to the latest release (x.y)" linking to https://zulip.readthedocs.io/en/stable/production/upgrade-or-modify.html. We should probably show this link to all users, as server admins might not be owners/admins of the organization.
## Mockups
<img width="1552" alt="popover-menu" src="https://user-images.githubusercontent.com/2090066/172440944-5dc8ee48-908f-4642-beb7-9ec141128a29.png">
<img width="1552" alt="dark-inbox-01" src="https://user-images.githubusercontent.com/2090066/172440973-12639e2a-3f42-408d-b976-27b01498ecda.png">
<img width="1608" alt="selfhosted-upgrade" src="https://user-images.githubusercontent.com/2090066/172441028-c0ce417f-e3db-4542-845f-10ba3fab98df.png">
**CZO discussion threads:**
- [Design proposal (Zulip Cloud)](https://chat.zulip.org/#narrow/stream/101-design/topic/UI.20redesign.3A.20popover.20menu/near/1388585)
- [Server upgrade notice](https://chat.zulip.org/#narrow/stream/101-design/topic/server.20upgrade.20notice)
</issue>
<code>
[start of tools/lib/capitalization.py]
1 import re
2 from typing import List, Match, Tuple
3
4 from bs4 import BeautifulSoup
5
6 # The phrases in this list will be ignored. The longest phrase is
7 # tried first; this removes the chance of smaller phrases changing
8 # the text before longer phrases are tried.
9 # The errors shown by `tools/check-capitalization` can be added to
10 # this list without any modification.
11 IGNORED_PHRASES = [
12 # Proper nouns and acronyms
13 r"API",
14 r"APNS",
15 r"Botserver",
16 r"Cookie Bot",
17 r"DevAuthBackend",
18 r"GCM",
19 r"GitHub",
20 r"Gravatar",
21 r"Help Center",
22 r"HTTP",
23 r"ID",
24 r"IDs",
25 r"IP",
26 r"JSON",
27 r"Kerberos",
28 r"LDAP",
29 r"Markdown",
30 r"OTP",
31 r"Pivotal",
32 r"PM",
33 r"PMs",
34 r"Slack",
35 r"Google",
36 r"Terms of Service",
37 r"Tuesday",
38 r"URL",
39 r"UUID",
40 r"Webathena",
41 r"WordPress",
42 r"Zephyr",
43 r"Zoom",
44 r"Zulip",
45 r"Zulip Account Security",
46 r"Zulip Security",
47 r"Zulip Cloud Standard",
48 r"BigBlueButton",
49 # Code things
50 r"\.zuliprc",
51 # BeautifulSoup will remove <z-user> which is horribly confusing,
52 # so we need more of the sentence.
53 r"<z-user></z-user> will have the same role",
54 # Things using "I"
55 r"I understand",
56 r"I'm",
57 r"I've",
58 # Specific short words
59 r"beta",
60 r"and",
61 r"bot",
62 r"e\.g\.",
63 r"enabled",
64 r"signups",
65 # Placeholders
66 r"keyword",
67 r"streamname",
68 r"user@example\.com",
69 # Fragments of larger strings
70 (r"your subscriptions on your Streams page"),
71 r"Add global time<br />Everyone sees global times in their own time zone\.",
72 r"user",
73 r"an unknown operating system",
74 r"Go to Settings",
75 # SPECIAL CASES
76 # Because topics usually are lower-case, this would look weird if it were capitalized
77 r"more topics",
78 # Used alone in a parenthetical where capitalized looks worse.
79 r"^deprecated$",
80 # Capital 'i' looks weird in reminders popover
81 r"in 1 hour",
82 r"in 20 minutes",
83 r"in 3 hours",
84 # these are used as topics
85 r"^new streams$",
86 r"^stream events$",
87 # These are used as example short names (e.g. an uncapitalized context):
88 r"^marketing$",
89 r"^cookie$",
90 # Used to refer custom time limits
91 r"\bN\b",
92 # Capital c feels obtrusive in clear status option
93 r"clear",
94 r"group private messages with \{recipient\}",
95 r"private messages with \{recipient\}",
96 r"private messages with yourself",
97 r"GIF",
98 # Emoji name placeholder
99 r"leafy green vegetable",
100 # Subdomain placeholder
101 r"your-organization-url",
102 # Used in invite modal
103 r"or",
104 # Used in GIPHY popover.
105 r"GIFs",
106 r"GIPHY",
107 # Used in our case studies
108 r"Technical University of Munich",
109 r"University of California San Diego",
110 # Used in stream creation form
111 r"email hidden",
112 # Use in compose box.
113 r"to send",
114 r"to add a new line",
115 # Used in showing Notification Bot read receipts message
116 "Notification Bot",
117 # Used in presence_enabled setting label
118 r"invisible mode off",
119 # Typeahead suggestions for "Pronouns" custom field type.
120 r"he/him",
121 r"she/her",
122 r"they/them",
123 ]
124
125 # Sort regexes in descending order of their lengths. As a result, the
126 # longer phrases will be ignored first.
127 IGNORED_PHRASES.sort(key=lambda regex: len(regex), reverse=True)
128
129 # Compile regexes to improve performance. This also extracts the
130 # text using BeautifulSoup and then removes extra whitespaces from
131 # it. This step enables us to add HTML in our regexes directly.
132 COMPILED_IGNORED_PHRASES = [
133 re.compile(" ".join(BeautifulSoup(regex, "lxml").text.split())) for regex in IGNORED_PHRASES
134 ]
135
136 SPLIT_BOUNDARY = "?.!" # Used to split string into sentences.
137 SPLIT_BOUNDARY_REGEX = re.compile(rf"[{SPLIT_BOUNDARY}]")
138
139 # Regexes which check capitalization in sentences.
140 DISALLOWED = [
141 r"^[a-z](?!\})", # Checks if the sentence starts with a lower case character.
142 r"^[A-Z][a-z]+[\sa-z0-9]+[A-Z]", # Checks if an upper case character exists
143 # after a lower case character when the first character is in upper case.
144 ]
145 DISALLOWED_REGEX = re.compile(r"|".join(DISALLOWED))
146
147 BANNED_WORDS = {
148 "realm": "The term realm should not appear in user-facing strings. Use organization instead.",
149 }
150
151
152 def get_safe_phrase(phrase: str) -> str:
153 """
154 Safe phrase is in lower case and doesn't contain characters which can
155 conflict with split boundaries. All conflicting characters are replaced
156 with low dash (_).
157 """
158 phrase = SPLIT_BOUNDARY_REGEX.sub("_", phrase)
159 return phrase.lower()
160
161
162 def replace_with_safe_phrase(matchobj: Match[str]) -> str:
163 """
164 The idea is to convert IGNORED_PHRASES into safe phrases, see
165 `get_safe_phrase()` function. The only exception is when the
166 IGNORED_PHRASE is at the start of the text or after a split
167 boundary; in this case, we change the first letter of the phrase
168 to upper case.
169 """
170 ignored_phrase = matchobj.group(0)
171 safe_string = get_safe_phrase(ignored_phrase)
172
173 start_index = matchobj.start()
174 complete_string = matchobj.string
175
176 is_string_start = start_index == 0
177 # We expect that there will be one space between split boundary
178 # and the next word.
179 punctuation = complete_string[max(start_index - 2, 0)]
180 is_after_split_boundary = punctuation in SPLIT_BOUNDARY
181 if is_string_start or is_after_split_boundary:
182 return safe_string.capitalize()
183
184 return safe_string
185
186
187 def get_safe_text(text: str) -> str:
188 """
189 This returns text which is rendered by BeautifulSoup and is in the
190 form that can be split easily and has all IGNORED_PHRASES processed.
191 """
192 soup = BeautifulSoup(text, "lxml")
193 text = " ".join(soup.text.split()) # Remove extra whitespaces.
194 for phrase_regex in COMPILED_IGNORED_PHRASES:
195 text = phrase_regex.sub(replace_with_safe_phrase, text)
196
197 return text
198
199
200 def is_capitalized(safe_text: str) -> bool:
201 sentences = SPLIT_BOUNDARY_REGEX.split(safe_text)
202 return not any(DISALLOWED_REGEX.search(sentence.strip()) for sentence in sentences)
203
204
205 def check_banned_words(text: str) -> List[str]:
206 lower_cased_text = text.lower()
207 errors = []
208 for word, reason in BANNED_WORDS.items():
209 if word in lower_cased_text:
210 # Hack: Should move this into BANNED_WORDS framework; for
211 # now, just hand-code the skips:
212 if "realm_name" in lower_cased_text:
213 continue
214 kwargs = dict(word=word, text=text, reason=reason)
215 msg = "{word} found in '{text}'. {reason}".format(**kwargs)
216 errors.append(msg)
217
218 return errors
219
220
221 def check_capitalization(strings: List[str]) -> Tuple[List[str], List[str], List[str]]:
222 errors = []
223 ignored = []
224 banned_word_errors = []
225 for text in strings:
226 text = " ".join(text.split()) # Remove extra whitespaces.
227 safe_text = get_safe_text(text)
228 has_ignored_phrase = text != safe_text
229 capitalized = is_capitalized(safe_text)
230 if not capitalized:
231 errors.append(text)
232 elif has_ignored_phrase:
233 ignored.append(text)
234
235 banned_word_errors.extend(check_banned_words(text))
236
237 return sorted(errors), sorted(ignored), sorted(banned_word_errors)
238
[end of tools/lib/capitalization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/lib/capitalization.py b/tools/lib/capitalization.py
--- a/tools/lib/capitalization.py
+++ b/tools/lib/capitalization.py
@@ -42,6 +42,7 @@
r"Zephyr",
r"Zoom",
r"Zulip",
+ r"Zulip Server",
r"Zulip Account Security",
r"Zulip Security",
r"Zulip Cloud Standard",
|
{"golden_diff": "diff --git a/tools/lib/capitalization.py b/tools/lib/capitalization.py\n--- a/tools/lib/capitalization.py\n+++ b/tools/lib/capitalization.py\n@@ -42,6 +42,7 @@\n r\"Zephyr\",\n r\"Zoom\",\n r\"Zulip\",\n+ r\"Zulip Server\",\n r\"Zulip Account Security\",\n r\"Zulip Security\",\n r\"Zulip Cloud Standard\",\n", "issue": "Show info about organization and Zulip in gear menu\nAt present, it requires some digging to find a few key pieces of information about the Zulip organization one is viewing:\r\n\r\n- Organization name\r\n- Organization URL (if using the Desktop app)\r\n- For Zulip Cloud, the plan that the organization on.\r\n- For self-hosted Zulip, does the server need to be upgraded? What's the version of the server, and what's the current release version?\r\n\r\nWe should address this by adding this information at the top of the gear settings menu.\r\n\r\n- For all users: Show organization name and URL\r\n- For all Zulip Cloud users: Show plan name with a link to `/plans`, e.g. \"Zulip Cloud Free\"\r\n- For owners of Zulip Cloud Free orgs: Show \"Upgrade to Zulip Cloud Standard\" link to `/upgrade`\r\n- For all self-hosted users:\r\n - Show Zulip server version (same as in the \"About Zulip\" widget); we'll need to test to make sure it looks reasonable for non-standard versions (e.g. forks, installs running off `main`).\r\n - If the server version is old, we should show an \"Upgrade to the latest release (x.y)\" linking to https://zulip.readthedocs.io/en/stable/production/upgrade-or-modify.html. We should probably show this link to all users, as server admins might not be owners/admins of the organization.\r\n\r\n## Mockups\r\n<img width=\"1552\" alt=\"popover-menu\" src=\"https://user-images.githubusercontent.com/2090066/172440944-5dc8ee48-908f-4642-beb7-9ec141128a29.png\">\r\n\r\n<img width=\"1552\" alt=\"dark-inbox-01\" src=\"https://user-images.githubusercontent.com/2090066/172440973-12639e2a-3f42-408d-b976-27b01498ecda.png\">\r\n\r\n<img width=\"1608\" alt=\"selfhosted-upgrade\" src=\"https://user-images.githubusercontent.com/2090066/172441028-c0ce417f-e3db-4542-845f-10ba3fab98df.png\">\r\n\r\n**CZO discussion threads:**\r\n- [Design proposal (Zulip Cloud)](https://chat.zulip.org/#narrow/stream/101-design/topic/UI.20redesign.3A.20popover.20menu/near/1388585)\r\n- [Server upgrade notice](https://chat.zulip.org/#narrow/stream/101-design/topic/server.20upgrade.20notice)\r\n\r\n\n", "before_files": [{"content": "import re\nfrom typing import List, Match, Tuple\n\nfrom bs4 import BeautifulSoup\n\n# The phrases in this list will be ignored. 
The longest phrase is\n# tried first; this removes the chance of smaller phrases changing\n# the text before longer phrases are tried.\n# The errors shown by `tools/check-capitalization` can be added to\n# this list without any modification.\nIGNORED_PHRASES = [\n # Proper nouns and acronyms\n r\"API\",\n r\"APNS\",\n r\"Botserver\",\n r\"Cookie Bot\",\n r\"DevAuthBackend\",\n r\"GCM\",\n r\"GitHub\",\n r\"Gravatar\",\n r\"Help Center\",\n r\"HTTP\",\n r\"ID\",\n r\"IDs\",\n r\"IP\",\n r\"JSON\",\n r\"Kerberos\",\n r\"LDAP\",\n r\"Markdown\",\n r\"OTP\",\n r\"Pivotal\",\n r\"PM\",\n r\"PMs\",\n r\"Slack\",\n r\"Google\",\n r\"Terms of Service\",\n r\"Tuesday\",\n r\"URL\",\n r\"UUID\",\n r\"Webathena\",\n r\"WordPress\",\n r\"Zephyr\",\n r\"Zoom\",\n r\"Zulip\",\n r\"Zulip Account Security\",\n r\"Zulip Security\",\n r\"Zulip Cloud Standard\",\n r\"BigBlueButton\",\n # Code things\n r\"\\.zuliprc\",\n # BeautifulSoup will remove <z-user> which is horribly confusing,\n # so we need more of the sentence.\n r\"<z-user></z-user> will have the same role\",\n # Things using \"I\"\n r\"I understand\",\n r\"I'm\",\n r\"I've\",\n # Specific short words\n r\"beta\",\n r\"and\",\n r\"bot\",\n r\"e\\.g\\.\",\n r\"enabled\",\n r\"signups\",\n # Placeholders\n r\"keyword\",\n r\"streamname\",\n r\"user@example\\.com\",\n # Fragments of larger strings\n (r\"your subscriptions on your Streams page\"),\n r\"Add global time<br />Everyone sees global times in their own time zone\\.\",\n r\"user\",\n r\"an unknown operating system\",\n r\"Go to Settings\",\n # SPECIAL CASES\n # Because topics usually are lower-case, this would look weird if it were capitalized\n r\"more topics\",\n # Used alone in a parenthetical where capitalized looks worse.\n r\"^deprecated$\",\n # Capital 'i' looks weird in reminders popover\n r\"in 1 hour\",\n r\"in 20 minutes\",\n r\"in 3 hours\",\n # these are used as topics\n r\"^new streams$\",\n r\"^stream events$\",\n # These are used as example short names (e.g. an uncapitalized context):\n r\"^marketing$\",\n r\"^cookie$\",\n # Used to refer custom time limits\n r\"\\bN\\b\",\n # Capital c feels obtrusive in clear status option\n r\"clear\",\n r\"group private messages with \\{recipient\\}\",\n r\"private messages with \\{recipient\\}\",\n r\"private messages with yourself\",\n r\"GIF\",\n # Emoji name placeholder\n r\"leafy green vegetable\",\n # Subdomain placeholder\n r\"your-organization-url\",\n # Used in invite modal\n r\"or\",\n # Used in GIPHY popover.\n r\"GIFs\",\n r\"GIPHY\",\n # Used in our case studies\n r\"Technical University of Munich\",\n r\"University of California San Diego\",\n # Used in stream creation form\n r\"email hidden\",\n # Use in compose box.\n r\"to send\",\n r\"to add a new line\",\n # Used in showing Notification Bot read receipts message\n \"Notification Bot\",\n # Used in presence_enabled setting label\n r\"invisible mode off\",\n # Typeahead suggestions for \"Pronouns\" custom field type.\n r\"he/him\",\n r\"she/her\",\n r\"they/them\",\n]\n\n# Sort regexes in descending order of their lengths. As a result, the\n# longer phrases will be ignored first.\nIGNORED_PHRASES.sort(key=lambda regex: len(regex), reverse=True)\n\n# Compile regexes to improve performance. This also extracts the\n# text using BeautifulSoup and then removes extra whitespaces from\n# it. 
This step enables us to add HTML in our regexes directly.\nCOMPILED_IGNORED_PHRASES = [\n re.compile(\" \".join(BeautifulSoup(regex, \"lxml\").text.split())) for regex in IGNORED_PHRASES\n]\n\nSPLIT_BOUNDARY = \"?.!\" # Used to split string into sentences.\nSPLIT_BOUNDARY_REGEX = re.compile(rf\"[{SPLIT_BOUNDARY}]\")\n\n# Regexes which check capitalization in sentences.\nDISALLOWED = [\n r\"^[a-z](?!\\})\", # Checks if the sentence starts with a lower case character.\n r\"^[A-Z][a-z]+[\\sa-z0-9]+[A-Z]\", # Checks if an upper case character exists\n # after a lower case character when the first character is in upper case.\n]\nDISALLOWED_REGEX = re.compile(r\"|\".join(DISALLOWED))\n\nBANNED_WORDS = {\n \"realm\": \"The term realm should not appear in user-facing strings. Use organization instead.\",\n}\n\n\ndef get_safe_phrase(phrase: str) -> str:\n \"\"\"\n Safe phrase is in lower case and doesn't contain characters which can\n conflict with split boundaries. All conflicting characters are replaced\n with low dash (_).\n \"\"\"\n phrase = SPLIT_BOUNDARY_REGEX.sub(\"_\", phrase)\n return phrase.lower()\n\n\ndef replace_with_safe_phrase(matchobj: Match[str]) -> str:\n \"\"\"\n The idea is to convert IGNORED_PHRASES into safe phrases, see\n `get_safe_phrase()` function. The only exception is when the\n IGNORED_PHRASE is at the start of the text or after a split\n boundary; in this case, we change the first letter of the phrase\n to upper case.\n \"\"\"\n ignored_phrase = matchobj.group(0)\n safe_string = get_safe_phrase(ignored_phrase)\n\n start_index = matchobj.start()\n complete_string = matchobj.string\n\n is_string_start = start_index == 0\n # We expect that there will be one space between split boundary\n # and the next word.\n punctuation = complete_string[max(start_index - 2, 0)]\n is_after_split_boundary = punctuation in SPLIT_BOUNDARY\n if is_string_start or is_after_split_boundary:\n return safe_string.capitalize()\n\n return safe_string\n\n\ndef get_safe_text(text: str) -> str:\n \"\"\"\n This returns text which is rendered by BeautifulSoup and is in the\n form that can be split easily and has all IGNORED_PHRASES processed.\n \"\"\"\n soup = BeautifulSoup(text, \"lxml\")\n text = \" \".join(soup.text.split()) # Remove extra whitespaces.\n for phrase_regex in COMPILED_IGNORED_PHRASES:\n text = phrase_regex.sub(replace_with_safe_phrase, text)\n\n return text\n\n\ndef is_capitalized(safe_text: str) -> bool:\n sentences = SPLIT_BOUNDARY_REGEX.split(safe_text)\n return not any(DISALLOWED_REGEX.search(sentence.strip()) for sentence in sentences)\n\n\ndef check_banned_words(text: str) -> List[str]:\n lower_cased_text = text.lower()\n errors = []\n for word, reason in BANNED_WORDS.items():\n if word in lower_cased_text:\n # Hack: Should move this into BANNED_WORDS framework; for\n # now, just hand-code the skips:\n if \"realm_name\" in lower_cased_text:\n continue\n kwargs = dict(word=word, text=text, reason=reason)\n msg = \"{word} found in '{text}'. 
{reason}\".format(**kwargs)\n errors.append(msg)\n\n return errors\n\n\ndef check_capitalization(strings: List[str]) -> Tuple[List[str], List[str], List[str]]:\n errors = []\n ignored = []\n banned_word_errors = []\n for text in strings:\n text = \" \".join(text.split()) # Remove extra whitespaces.\n safe_text = get_safe_text(text)\n has_ignored_phrase = text != safe_text\n capitalized = is_capitalized(safe_text)\n if not capitalized:\n errors.append(text)\n elif has_ignored_phrase:\n ignored.append(text)\n\n banned_word_errors.extend(check_banned_words(text))\n\n return sorted(errors), sorted(ignored), sorted(banned_word_errors)\n", "path": "tools/lib/capitalization.py"}]}
| 3,687 | 106 |
gh_patches_debug_36161
|
rasdani/github-patches
|
git_diff
|
DistrictDataLabs__yellowbrick-530
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect parameter values in K-Elbow Visualizer docstring
At the beginning of the K-Elbow Visualizer's docstring, the possible values for the `metric` parameter are given as `distortion_score`, `silhouette_score`, and `calinski_harabaz_score`. However, passing any of these values raises the following error:
`YellowbrickValueError: '{}' is not a defined metric use one of distortion, silhouette, or calinski_harabaz`
The correct names (`distortion`, `silhouette`, `calinski_harabaz`) do appear further down under `Parameters`.
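
For illustration, a minimal sketch of a call the visualizer does accept; the toy data and variable names here are assumptions, only the `metric` value matters:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from yellowbrick.cluster import KElbowVisualizer

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)  # toy data

# "distortion", "silhouette" or "calinski_harabaz" are the accepted values;
# "distortion_score" etc. raise the YellowbrickValueError quoted above.
model = KElbowVisualizer(KMeans(), k=10, metric="distortion")
model.fit(X)
model.poof()
```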

</issue>
<code>
[start of yellowbrick/cluster/elbow.py]
1 # yellowbrick.cluster.elbow
2 # Implements the elbow method for determining the optimal number of clusters.
3 #
4 # Author: Benjamin Bengfort <[email protected]>
5 # Created: Thu Mar 23 22:36:31 2017 -0400
6 #
7 # Copyright (C) 2016 District Data Labs
8 # For license information, see LICENSE.txt
9 #
10 # ID: elbow.py [5a370c8] [email protected] $
11
12 """
13 Implements the elbow method for determining the optimal number of clusters.
14 https://bl.ocks.org/rpgove/0060ff3b656618e9136b
15 """
16
17 ##########################################################################
18 ## Imports
19 ##########################################################################
20
21 import time
22 import numpy as np
23 import scipy.sparse as sp
24
25 from .base import ClusteringScoreVisualizer
26 from ..exceptions import YellowbrickValueError
27
28 from sklearn.metrics import silhouette_score
29 from sklearn.metrics import calinski_harabaz_score
30 from sklearn.metrics.pairwise import pairwise_distances
31 from sklearn.preprocessing import LabelEncoder
32
33
34 ## Packages for export
35 __all__ = [
36 "KElbowVisualizer", "distortion_score"
37 ]
38
39
40 ##########################################################################
41 ## Metrics
42 ##########################################################################
43
44 def distortion_score(X, labels, metric='euclidean'):
45 """
46 Compute the mean distortion of all samples.
47
48 The distortion is computed as the the sum of the squared distances between
49 each observation and its closest centroid. Logically, this is the metric
50 that K-Means attempts to minimize as it is fitting the model.
51
52 .. seealso:: http://kldavenport.com/the-cost-function-of-k-means/
53
54 Parameters
55 ----------
56 X : array, shape = [n_samples, n_features] or [n_samples_a, n_samples_a]
57 Array of pairwise distances between samples if metric == "precomputed"
58 or a feature array for computing distances against the labels.
59
60 labels : array, shape = [n_samples]
61 Predicted labels for each sample
62
63 metric : string
64 The metric to use when calculating distance between instances in a
65 feature array. If metric is a string, it must be one of the options
66 allowed by `sklearn.metrics.pairwise.pairwise_distances
67 <http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html#sklearn.metrics.pairwise.pairwise_distances>`_
68
69 .. todo:: add sample_size and random_state kwds similar to silhouette_score
70 """
71 # Encode labels to get unique centers and groups
72 le = LabelEncoder()
73 labels = le.fit_transform(labels)
74 unique_labels = le.classes_
75
76 # Sum of the distortions
77 distortion = 0
78
79 # Loop through each label (center) to compute the centroid
80 for current_label in unique_labels:
81 # Mask the instances that belong to the current label
82 mask = labels == current_label
83 instances = X[mask]
84
85 # Compute the center of these instances
86 center = instances.mean(axis=0)
87
88 # NOTE: csc_matrix and csr_matrix mean returns a 2D array, numpy.mean
89 # returns an array of 1 dimension less than the input. We expect
90 # instances to be a 2D array, therefore to do pairwise computation we
91 # require center to be a 2D array with a single row (the center).
92 # See #370 for more detail.
93 if not sp.issparse(instances):
94 center = np.array([center])
95
96 # Compute the square distances from the instances to the center
97 distances = pairwise_distances(instances, center, metric=metric)
98 distances = distances ** 2
99
100 # Add the mean square distance to the distortion
101 distortion += distances.mean()
102
103 return distortion
104
105
106 ##########################################################################
107 ## Elbow Method
108 ##########################################################################
109
110 KELBOW_SCOREMAP = {
111 "distortion": distortion_score,
112 "silhouette": silhouette_score,
113 "calinski_harabaz": calinski_harabaz_score,
114 }
115
116
117 class KElbowVisualizer(ClusteringScoreVisualizer):
118 """
119 The K-Elbow Visualizer implements the "elbow" method of selecting the
120 optimal number of clusters for K-means clustering. K-means is a simple
121 unsupervised machine learning algorithm that groups data into a specified
122 number (k) of clusters. Because the user must specify in advance what k to
123 choose, the algorithm is somewhat naive -- it assigns all members to k
124 clusters even if that is not the right k for the dataset.
125
126 The elbow method runs k-means clustering on the dataset for a range of
127 values for k (say from 1-10) and then for each value of k computes an
128 average score for all clusters. By default, the ``distortion_score`` is
129 computed, the sum of square distances from each point to its assigned
130 center. Other metrics can also be used such as the ``silhouette_score``,
131 the mean silhouette coefficient for all samples or the
132 ``calinski_harabaz_score``, which computes the ratio of dispersion between
133 and within clusters.
134
135 When these overall metrics for each model are plotted, it is possible to
136 visually determine the best value for K. If the line chart looks like an
137 arm, then the "elbow" (the point of inflection on the curve) is the best
138 value of k. The "arm" can be either up or down, but if there is a strong
139 inflection point, it is a good indication that the underlying model fits
140 best at that point.
141
142 Parameters
143 ----------
144
145 model : a Scikit-Learn clusterer
146 Should be an instance of a clusterer, specifically ``KMeans`` or
147 ``MiniBatchKMeans``. If it is not a clusterer, an exception is raised.
148
149 ax : matplotlib Axes, default: None
150 The axes to plot the figure on. If None is passed in the current axes
151 will be used (or generated if required).
152
153 k : integer or tuple
154 The range of k to compute silhouette scores for. If a single integer
155 is specified, then will compute the range (2,k) otherwise the
156 specified range in the tuple is used.
157
158 metric : string, default: ``"distortion"``
159 Select the scoring metric to evaluate the clusters. The default is the
160 mean distortion, defined by the sum of squared distances between each
161 observation and its closest centroid. Other metrics include:
162
163 - **distortion**: mean sum of squared distances to centers
164 - **silhouette**: mean ratio of intra-cluster and nearest-cluster distance
165 - **calinski_harabaz**: ratio of within to between cluster dispersion
166
167 timings : bool, default: True
168 Display the fitting time per k to evaluate the amount of time required
169 to train the clustering model.
170
171 kwargs : dict
172 Keyword arguments that are passed to the base class and may influence
173 the visualization as defined in other Visualizers.
174
175 Examples
176 --------
177
178 >>> from yellowbrick.cluster import KElbowVisualizer
179 >>> from sklearn.cluster import KMeans
180 >>> model = KElbowVisualizer(KMeans(), k=10)
181 >>> model.fit(X)
182 >>> model.poof()
183
184 Notes
185 -----
186
187 If you get a visualizer that doesn't have an elbow or inflection point,
188 then this method may not be working. The elbow method does not work well
189 if the data is not very clustered; in this case you might see a smooth
190 curve and the value of k is unclear. Other scoring methods such as BIC or
191 SSE also can be used to explore if clustering is a correct choice.
192
193 For a discussion on the Elbow method, read more at
194 `Robert Gove's Block <https://bl.ocks.org/rpgove/0060ff3b656618e9136b>`_.
195
196 .. todo:: add parallelization option for performance
197 .. todo:: add different metrics for scores and silhoutte
198 .. todo:: add timing information about how long its taking
199 """
200
201 def __init__(self, model, ax=None, k=10,
202 metric="distortion", timings=True, **kwargs):
203 super(KElbowVisualizer, self).__init__(model, ax=ax, **kwargs)
204
205 # Get the scoring method
206 if metric not in KELBOW_SCOREMAP:
207 raise YellowbrickValueError(
208 "'{}' is not a defined metric "
209 "use one of distortion, silhouette, or calinski_harabaz"
210 )
211
212 # Store the arguments
213 self.scoring_metric = KELBOW_SCOREMAP[metric]
214 self.timings = timings
215
216 # Convert K into a tuple argument if an integer
217 if isinstance(k, int):
218 k = (2, k+1)
219
220 # Expand k in to the values we will use, capturing exceptions
221 try:
222 k = tuple(k)
223 self.k_values_ = list(range(*k))
224 except:
225 raise YellowbrickValueError((
226 "Specify a range or maximal K value, the value '{}' "
227 "is not a valid argument for K.".format(k)
228 ))
229
230
231 # Holds the values of the silhoutte scores
232 self.k_scores_ = None
233
234 def fit(self, X, y=None, **kwargs):
235 """
236 Fits n KMeans models where n is the length of ``self.k_values_``,
237 storing the silhoutte scores in the ``self.k_scores_`` attribute.
238 This method finishes up by calling draw to create the plot.
239 """
240
241 self.k_scores_ = []
242 self.k_timers_ = []
243
244 for k in self.k_values_:
245 # Compute the start time for each model
246 start = time.time()
247
248 # Set the k value and fit the model
249 self.estimator.set_params(n_clusters=k)
250 self.estimator.fit(X)
251
252 # Append the time and score to our plottable metrics
253 self.k_timers_.append(time.time() - start)
254 self.k_scores_.append(
255 self.scoring_metric(X, self.estimator.labels_)
256 )
257
258 self.draw()
259
260 return self
261
262 def draw(self):
263 """
264 Draw the elbow curve for the specified scores and values of K.
265 """
266 # Plot the silhouette score against k
267 self.ax.plot(self.k_values_, self.k_scores_, marker="D", label="score")
268
269 # If we're going to plot the timings, create a twinx axis
270 if self.timings:
271 self.axes = [self.ax, self.ax.twinx()]
272 self.axes[1].plot(
273 self.k_values_, self.k_timers_, label="fit time",
274 c='g', marker="o", linestyle="--", alpha=0.75,
275 )
276
277 return self.ax
278
279 def finalize(self):
280 """
281 Prepare the figure for rendering by setting the title as well as the
282 X and Y axis labels and adding the legend.
283 """
284 # Get the metric name
285 metric = self.scoring_metric.__name__.replace("_", " ").title()
286
287 # Set the title
288 self.set_title(
289 '{} Elbow for {} Clustering'.format(metric, self.name)
290 )
291
292 # Set the x and y labels
293 self.ax.set_xlabel('k')
294 self.ax.set_ylabel(metric.lower())
295
296 # Set the second y axis labels
297 if self.timings:
298 self.axes[1].grid(False)
299 self.axes[1].set_ylabel("fit time (seconds)", color='g')
300 self.axes[1].tick_params('y', colors='g')
301
[end of yellowbrick/cluster/elbow.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py
--- a/yellowbrick/cluster/elbow.py
+++ b/yellowbrick/cluster/elbow.py
@@ -125,11 +125,11 @@
The elbow method runs k-means clustering on the dataset for a range of
values for k (say from 1-10) and then for each value of k computes an
- average score for all clusters. By default, the ``distortion_score`` is
+ average score for all clusters. By default, the ``distortion`` score is
computed, the sum of square distances from each point to its assigned
- center. Other metrics can also be used such as the ``silhouette_score``,
+ center. Other metrics can also be used such as the ``silhouette`` score,
the mean silhouette coefficient for all samples or the
- ``calinski_harabaz_score``, which computes the ratio of dispersion between
+ ``calinski_harabaz`` score, which computes the ratio of dispersion between
and within clusters.
When these overall metrics for each model are plotted, it is possible to
@@ -186,16 +186,21 @@
If you get a visualizer that doesn't have an elbow or inflection point,
then this method may not be working. The elbow method does not work well
- if the data is not very clustered; in this case you might see a smooth
- curve and the value of k is unclear. Other scoring methods such as BIC or
- SSE also can be used to explore if clustering is a correct choice.
+ if the data is not very clustered; in this case, you might see a smooth
+ curve and the value of k is unclear. Other scoring methods, such as BIC or
+ SSE, also can be used to explore if clustering is a correct choice.
For a discussion on the Elbow method, read more at
`Robert Gove's Block <https://bl.ocks.org/rpgove/0060ff3b656618e9136b>`_.
+
+ .. seealso:: The scikit-learn documentation for the `silhouette_score
+ <https://bit.ly/2LYWjYb>`_ and `calinski_harabaz_score
+ <https://bit.ly/2LW3Zu9>`_. The default, `distortion_score`, is
+ implemented in`yellowbrick.cluster.elbow`.
.. todo:: add parallelization option for performance
- .. todo:: add different metrics for scores and silhoutte
- .. todo:: add timing information about how long its taking
+ .. todo:: add different metrics for scores and silhouette
+ .. todo:: add timing information about how long it's taking
"""
def __init__(self, model, ax=None, k=10,
|
{"golden_diff": "diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py\n--- a/yellowbrick/cluster/elbow.py\n+++ b/yellowbrick/cluster/elbow.py\n@@ -125,11 +125,11 @@\n \n The elbow method runs k-means clustering on the dataset for a range of\n values for k (say from 1-10) and then for each value of k computes an\n- average score for all clusters. By default, the ``distortion_score`` is\n+ average score for all clusters. By default, the ``distortion`` score is\n computed, the sum of square distances from each point to its assigned\n- center. Other metrics can also be used such as the ``silhouette_score``,\n+ center. Other metrics can also be used such as the ``silhouette`` score,\n the mean silhouette coefficient for all samples or the\n- ``calinski_harabaz_score``, which computes the ratio of dispersion between\n+ ``calinski_harabaz`` score, which computes the ratio of dispersion between\n and within clusters.\n \n When these overall metrics for each model are plotted, it is possible to\n@@ -186,16 +186,21 @@\n \n If you get a visualizer that doesn't have an elbow or inflection point,\n then this method may not be working. The elbow method does not work well\n- if the data is not very clustered; in this case you might see a smooth\n- curve and the value of k is unclear. Other scoring methods such as BIC or\n- SSE also can be used to explore if clustering is a correct choice.\n+ if the data is not very clustered; in this case, you might see a smooth\n+ curve and the value of k is unclear. Other scoring methods, such as BIC or\n+ SSE, also can be used to explore if clustering is a correct choice.\n \n For a discussion on the Elbow method, read more at\n `Robert Gove's Block <https://bl.ocks.org/rpgove/0060ff3b656618e9136b>`_.\n+ \n+ .. seealso:: The scikit-learn documentation for the `silhouette_score\n+ <https://bit.ly/2LYWjYb>`_ and `calinski_harabaz_score\n+ <https://bit.ly/2LW3Zu9>`_. The default, `distortion_score`, is\n+ implemented in`yellowbrick.cluster.elbow`.\n \n .. todo:: add parallelization option for performance\n- .. todo:: add different metrics for scores and silhoutte\n- .. todo:: add timing information about how long its taking\n+ .. todo:: add different metrics for scores and silhouette\n+ .. todo:: add timing information about how long it's taking\n \"\"\"\n \n def __init__(self, model, ax=None, k=10,\n", "issue": "Incorrect parameter values in K-Elbow Visualizer docstring\nInitially in the K-Elbow Visualizer's docstring, the possible values for the parameter `metric` are named as `distortion_score`, `silhouette_score`, and `calinski_harabaz_score`. 
However, using these values returns the following error:\r\n\r\n`YellowbrickValueError: '{}' is not a defined metric use one of distortion, silhouette, or calinski_harabaz`\r\n\r\nHowever, the correct names—`distortion`, `silhouette`, `calinski_harabaz`—are listed corrected further down under `Parameters`.\r\n\r\n\r\n\n", "before_files": [{"content": "# yellowbrick.cluster.elbow\n# Implements the elbow method for determining the optimal number of clusters.\n#\n# Author: Benjamin Bengfort <[email protected]>\n# Created: Thu Mar 23 22:36:31 2017 -0400\n#\n# Copyright (C) 2016 District Data Labs\n# For license information, see LICENSE.txt\n#\n# ID: elbow.py [5a370c8] [email protected] $\n\n\"\"\"\nImplements the elbow method for determining the optimal number of clusters.\nhttps://bl.ocks.org/rpgove/0060ff3b656618e9136b\n\"\"\"\n\n##########################################################################\n## Imports\n##########################################################################\n\nimport time\nimport numpy as np\nimport scipy.sparse as sp\n\nfrom .base import ClusteringScoreVisualizer\nfrom ..exceptions import YellowbrickValueError\n\nfrom sklearn.metrics import silhouette_score\nfrom sklearn.metrics import calinski_harabaz_score\nfrom sklearn.metrics.pairwise import pairwise_distances\nfrom sklearn.preprocessing import LabelEncoder\n\n\n## Packages for export\n__all__ = [\n \"KElbowVisualizer\", \"distortion_score\"\n]\n\n\n##########################################################################\n## Metrics\n##########################################################################\n\ndef distortion_score(X, labels, metric='euclidean'):\n \"\"\"\n Compute the mean distortion of all samples.\n\n The distortion is computed as the the sum of the squared distances between\n each observation and its closest centroid. Logically, this is the metric\n that K-Means attempts to minimize as it is fitting the model.\n\n .. seealso:: http://kldavenport.com/the-cost-function-of-k-means/\n\n Parameters\n ----------\n X : array, shape = [n_samples, n_features] or [n_samples_a, n_samples_a]\n Array of pairwise distances between samples if metric == \"precomputed\"\n or a feature array for computing distances against the labels.\n\n labels : array, shape = [n_samples]\n Predicted labels for each sample\n\n metric : string\n The metric to use when calculating distance between instances in a\n feature array. If metric is a string, it must be one of the options\n allowed by `sklearn.metrics.pairwise.pairwise_distances\n <http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html#sklearn.metrics.pairwise.pairwise_distances>`_\n\n .. todo:: add sample_size and random_state kwds similar to silhouette_score\n \"\"\"\n # Encode labels to get unique centers and groups\n le = LabelEncoder()\n labels = le.fit_transform(labels)\n unique_labels = le.classes_\n\n # Sum of the distortions\n distortion = 0\n\n # Loop through each label (center) to compute the centroid\n for current_label in unique_labels:\n # Mask the instances that belong to the current label\n mask = labels == current_label\n instances = X[mask]\n\n # Compute the center of these instances\n center = instances.mean(axis=0)\n\n # NOTE: csc_matrix and csr_matrix mean returns a 2D array, numpy.mean\n # returns an array of 1 dimension less than the input. 
We expect\n # instances to be a 2D array, therefore to do pairwise computation we\n # require center to be a 2D array with a single row (the center).\n # See #370 for more detail.\n if not sp.issparse(instances):\n center = np.array([center])\n\n # Compute the square distances from the instances to the center\n distances = pairwise_distances(instances, center, metric=metric)\n distances = distances ** 2\n\n # Add the mean square distance to the distortion\n distortion += distances.mean()\n\n return distortion\n\n\n##########################################################################\n## Elbow Method\n##########################################################################\n\nKELBOW_SCOREMAP = {\n \"distortion\": distortion_score,\n \"silhouette\": silhouette_score,\n \"calinski_harabaz\": calinski_harabaz_score,\n}\n\n\nclass KElbowVisualizer(ClusteringScoreVisualizer):\n \"\"\"\n The K-Elbow Visualizer implements the \"elbow\" method of selecting the\n optimal number of clusters for K-means clustering. K-means is a simple\n unsupervised machine learning algorithm that groups data into a specified\n number (k) of clusters. Because the user must specify in advance what k to\n choose, the algorithm is somewhat naive -- it assigns all members to k\n clusters even if that is not the right k for the dataset.\n\n The elbow method runs k-means clustering on the dataset for a range of\n values for k (say from 1-10) and then for each value of k computes an\n average score for all clusters. By default, the ``distortion_score`` is\n computed, the sum of square distances from each point to its assigned\n center. Other metrics can also be used such as the ``silhouette_score``,\n the mean silhouette coefficient for all samples or the\n ``calinski_harabaz_score``, which computes the ratio of dispersion between\n and within clusters.\n\n When these overall metrics for each model are plotted, it is possible to\n visually determine the best value for K. If the line chart looks like an\n arm, then the \"elbow\" (the point of inflection on the curve) is the best\n value of k. The \"arm\" can be either up or down, but if there is a strong\n inflection point, it is a good indication that the underlying model fits\n best at that point.\n\n Parameters\n ----------\n\n model : a Scikit-Learn clusterer\n Should be an instance of a clusterer, specifically ``KMeans`` or\n ``MiniBatchKMeans``. If it is not a clusterer, an exception is raised.\n\n ax : matplotlib Axes, default: None\n The axes to plot the figure on. If None is passed in the current axes\n will be used (or generated if required).\n\n k : integer or tuple\n The range of k to compute silhouette scores for. If a single integer\n is specified, then will compute the range (2,k) otherwise the\n specified range in the tuple is used.\n\n metric : string, default: ``\"distortion\"``\n Select the scoring metric to evaluate the clusters. The default is the\n mean distortion, defined by the sum of squared distances between each\n observation and its closest centroid. 
Other metrics include:\n\n - **distortion**: mean sum of squared distances to centers\n - **silhouette**: mean ratio of intra-cluster and nearest-cluster distance\n - **calinski_harabaz**: ratio of within to between cluster dispersion\n\n timings : bool, default: True\n Display the fitting time per k to evaluate the amount of time required\n to train the clustering model.\n\n kwargs : dict\n Keyword arguments that are passed to the base class and may influence\n the visualization as defined in other Visualizers.\n\n Examples\n --------\n\n >>> from yellowbrick.cluster import KElbowVisualizer\n >>> from sklearn.cluster import KMeans\n >>> model = KElbowVisualizer(KMeans(), k=10)\n >>> model.fit(X)\n >>> model.poof()\n\n Notes\n -----\n\n If you get a visualizer that doesn't have an elbow or inflection point,\n then this method may not be working. The elbow method does not work well\n if the data is not very clustered; in this case you might see a smooth\n curve and the value of k is unclear. Other scoring methods such as BIC or\n SSE also can be used to explore if clustering is a correct choice.\n\n For a discussion on the Elbow method, read more at\n `Robert Gove's Block <https://bl.ocks.org/rpgove/0060ff3b656618e9136b>`_.\n\n .. todo:: add parallelization option for performance\n .. todo:: add different metrics for scores and silhoutte\n .. todo:: add timing information about how long its taking\n \"\"\"\n\n def __init__(self, model, ax=None, k=10,\n metric=\"distortion\", timings=True, **kwargs):\n super(KElbowVisualizer, self).__init__(model, ax=ax, **kwargs)\n\n # Get the scoring method\n if metric not in KELBOW_SCOREMAP:\n raise YellowbrickValueError(\n \"'{}' is not a defined metric \"\n \"use one of distortion, silhouette, or calinski_harabaz\"\n )\n\n # Store the arguments\n self.scoring_metric = KELBOW_SCOREMAP[metric]\n self.timings = timings\n\n # Convert K into a tuple argument if an integer\n if isinstance(k, int):\n k = (2, k+1)\n\n # Expand k in to the values we will use, capturing exceptions\n try:\n k = tuple(k)\n self.k_values_ = list(range(*k))\n except:\n raise YellowbrickValueError((\n \"Specify a range or maximal K value, the value '{}' \"\n \"is not a valid argument for K.\".format(k)\n ))\n\n\n # Holds the values of the silhoutte scores\n self.k_scores_ = None\n\n def fit(self, X, y=None, **kwargs):\n \"\"\"\n Fits n KMeans models where n is the length of ``self.k_values_``,\n storing the silhoutte scores in the ``self.k_scores_`` attribute.\n This method finishes up by calling draw to create the plot.\n \"\"\"\n\n self.k_scores_ = []\n self.k_timers_ = []\n\n for k in self.k_values_:\n # Compute the start time for each model\n start = time.time()\n\n # Set the k value and fit the model\n self.estimator.set_params(n_clusters=k)\n self.estimator.fit(X)\n\n # Append the time and score to our plottable metrics\n self.k_timers_.append(time.time() - start)\n self.k_scores_.append(\n self.scoring_metric(X, self.estimator.labels_)\n )\n\n self.draw()\n\n return self\n\n def draw(self):\n \"\"\"\n Draw the elbow curve for the specified scores and values of K.\n \"\"\"\n # Plot the silhouette score against k\n self.ax.plot(self.k_values_, self.k_scores_, marker=\"D\", label=\"score\")\n\n # If we're going to plot the timings, create a twinx axis\n if self.timings:\n self.axes = [self.ax, self.ax.twinx()]\n self.axes[1].plot(\n self.k_values_, self.k_timers_, label=\"fit time\",\n c='g', marker=\"o\", linestyle=\"--\", alpha=0.75,\n )\n\n return self.ax\n\n def 
finalize(self):\n \"\"\"\n Prepare the figure for rendering by setting the title as well as the\n X and Y axis labels and adding the legend.\n \"\"\"\n # Get the metric name\n metric = self.scoring_metric.__name__.replace(\"_\", \" \").title()\n\n # Set the title\n self.set_title(\n '{} Elbow for {} Clustering'.format(metric, self.name)\n )\n\n # Set the x and y labels\n self.ax.set_xlabel('k')\n self.ax.set_ylabel(metric.lower())\n\n # Set the second y axis labels\n if self.timings:\n self.axes[1].grid(False)\n self.axes[1].set_ylabel(\"fit time (seconds)\", color='g')\n self.axes[1].tick_params('y', colors='g')\n", "path": "yellowbrick/cluster/elbow.py"}]}
| 4,094 | 660 |
gh_patches_debug_61516
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmpose-267
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pylint: R1719
```bash
mmpose/models/backbones/shufflenet_v1.py:238:26: R1719: The if expression can be replaced with 'test' (simplifiable-if-expression)
```
</issue>
<code>
[start of mmpose/models/backbones/shufflenet_v1.py]
1 import logging
2
3 import torch
4 import torch.nn as nn
5 import torch.utils.checkpoint as cp
6 from mmcv.cnn import (ConvModule, build_activation_layer, constant_init,
7 normal_init)
8 from torch.nn.modules.batchnorm import _BatchNorm
9
10 from ..registry import BACKBONES
11 from .base_backbone import BaseBackbone
12 from .utils import channel_shuffle, load_checkpoint, make_divisible
13
14
15 class ShuffleUnit(nn.Module):
16 """ShuffleUnit block.
17
18 ShuffleNet unit with pointwise group convolution (GConv) and channel
19 shuffle.
20
21 Args:
22 in_channels (int): The input channels of the ShuffleUnit.
23 out_channels (int): The output channels of the ShuffleUnit.
24 groups (int, optional): The number of groups to be used in grouped 1x1
25 convolutions in each ShuffleUnit. Default: 3
26 first_block (bool, optional): Whether it is the first ShuffleUnit of a
27 sequential ShuffleUnits. Default: False, which means not using the
28 grouped 1x1 convolution.
29 combine (str, optional): The ways to combine the input and output
30 branches. Default: 'add'.
31 conv_cfg (dict): Config dict for convolution layer. Default: None,
32 which means using conv2d.
33 norm_cfg (dict): Config dict for normalization layer.
34 Default: dict(type='BN').
35 act_cfg (dict): Config dict for activation layer.
36 Default: dict(type='ReLU').
37 with_cp (bool, optional): Use checkpoint or not. Using checkpoint
38 will save some memory while slowing down the training speed.
39 Default: False.
40
41 Returns:
42 Tensor: The output tensor.
43 """
44
45 def __init__(self,
46 in_channels,
47 out_channels,
48 groups=3,
49 first_block=True,
50 combine='add',
51 conv_cfg=None,
52 norm_cfg=dict(type='BN'),
53 act_cfg=dict(type='ReLU'),
54 with_cp=False):
55 super().__init__()
56 self.in_channels = in_channels
57 self.out_channels = out_channels
58 self.first_block = first_block
59 self.combine = combine
60 self.groups = groups
61 self.bottleneck_channels = self.out_channels // 4
62 self.with_cp = with_cp
63
64 if self.combine == 'add':
65 self.depthwise_stride = 1
66 self._combine_func = self._add
67 assert in_channels == out_channels, (
68 'in_channels must be equal to out_channels when combine '
69 'is add')
70 elif self.combine == 'concat':
71 self.depthwise_stride = 2
72 self._combine_func = self._concat
73 self.out_channels -= self.in_channels
74 self.avgpool = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)
75 else:
76 raise ValueError(f'Cannot combine tensors with {self.combine}. '
77 'Only "add" and "concat" are supported')
78
79 self.first_1x1_groups = 1 if first_block else self.groups
80 self.g_conv_1x1_compress = ConvModule(
81 in_channels=self.in_channels,
82 out_channels=self.bottleneck_channels,
83 kernel_size=1,
84 groups=self.first_1x1_groups,
85 conv_cfg=conv_cfg,
86 norm_cfg=norm_cfg,
87 act_cfg=act_cfg)
88
89 self.depthwise_conv3x3_bn = ConvModule(
90 in_channels=self.bottleneck_channels,
91 out_channels=self.bottleneck_channels,
92 kernel_size=3,
93 stride=self.depthwise_stride,
94 padding=1,
95 groups=self.bottleneck_channels,
96 conv_cfg=conv_cfg,
97 norm_cfg=norm_cfg,
98 act_cfg=None)
99
100 self.g_conv_1x1_expand = ConvModule(
101 in_channels=self.bottleneck_channels,
102 out_channels=self.out_channels,
103 kernel_size=1,
104 groups=self.groups,
105 conv_cfg=conv_cfg,
106 norm_cfg=norm_cfg,
107 act_cfg=None)
108
109 self.act = build_activation_layer(act_cfg)
110
111 @staticmethod
112 def _add(x, out):
113 # residual connection
114 return x + out
115
116 @staticmethod
117 def _concat(x, out):
118 # concatenate along channel axis
119 return torch.cat((x, out), 1)
120
121 def forward(self, x):
122
123 def _inner_forward(x):
124 residual = x
125
126 out = self.g_conv_1x1_compress(x)
127 out = self.depthwise_conv3x3_bn(out)
128
129 if self.groups > 1:
130 out = channel_shuffle(out, self.groups)
131
132 out = self.g_conv_1x1_expand(out)
133
134 if self.combine == 'concat':
135 residual = self.avgpool(residual)
136 out = self.act(out)
137 out = self._combine_func(residual, out)
138 else:
139 out = self._combine_func(residual, out)
140 out = self.act(out)
141 return out
142
143 if self.with_cp and x.requires_grad:
144 out = cp.checkpoint(_inner_forward, x)
145 else:
146 out = _inner_forward(x)
147
148 return out
149
150
151 @BACKBONES.register_module()
152 class ShuffleNetV1(BaseBackbone):
153 """ShuffleNetV1 backbone.
154
155 Args:
156 groups (int, optional): The number of groups to be used in grouped 1x1
157 convolutions in each ShuffleUnit. Default: 3.
158 widen_factor (float, optional): Width multiplier - adjusts the number
159 of channels in each layer by this amount. Default: 1.0.
160 out_indices (Sequence[int]): Output from which stages.
161 Default: (2, )
162 frozen_stages (int): Stages to be frozen (all param fixed).
163 Default: -1, which means not freezing any parameters.
164 conv_cfg (dict): Config dict for convolution layer. Default: None,
165 which means using conv2d.
166 norm_cfg (dict): Config dict for normalization layer.
167 Default: dict(type='BN').
168 act_cfg (dict): Config dict for activation layer.
169 Default: dict(type='ReLU').
170 norm_eval (bool): Whether to set norm layers to eval mode, namely,
171 freeze running stats (mean and var). Note: Effect on Batch Norm
172 and its variants only. Default: False.
173 with_cp (bool): Use checkpoint or not. Using checkpoint will save some
174 memory while slowing down the training speed. Default: False.
175 """
176
177 def __init__(self,
178 groups=3,
179 widen_factor=1.0,
180 out_indices=(2, ),
181 frozen_stages=-1,
182 conv_cfg=None,
183 norm_cfg=dict(type='BN'),
184 act_cfg=dict(type='ReLU'),
185 norm_eval=False,
186 with_cp=False):
187 super().__init__()
188 self.stage_blocks = [4, 8, 4]
189 self.groups = groups
190
191 for index in out_indices:
192 if index not in range(0, 3):
193 raise ValueError('the item in out_indices must in '
194 f'range(0, 3). But received {index}')
195
196 if frozen_stages not in range(-1, 3):
197 raise ValueError('frozen_stages must be in range(-1, 3). '
198 f'But received {frozen_stages}')
199 self.out_indices = out_indices
200 self.frozen_stages = frozen_stages
201 self.conv_cfg = conv_cfg
202 self.norm_cfg = norm_cfg
203 self.act_cfg = act_cfg
204 self.norm_eval = norm_eval
205 self.with_cp = with_cp
206
207 if groups == 1:
208 channels = (144, 288, 576)
209 elif groups == 2:
210 channels = (200, 400, 800)
211 elif groups == 3:
212 channels = (240, 480, 960)
213 elif groups == 4:
214 channels = (272, 544, 1088)
215 elif groups == 8:
216 channels = (384, 768, 1536)
217 else:
218 raise ValueError(f'{groups} groups is not supported for 1x1 '
219 'Grouped Convolutions')
220
221 channels = [make_divisible(ch * widen_factor, 8) for ch in channels]
222
223 self.in_channels = int(24 * widen_factor)
224
225 self.conv1 = ConvModule(
226 in_channels=3,
227 out_channels=self.in_channels,
228 kernel_size=3,
229 stride=2,
230 padding=1,
231 conv_cfg=conv_cfg,
232 norm_cfg=norm_cfg,
233 act_cfg=act_cfg)
234 self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
235
236 self.layers = nn.ModuleList()
237 for i, num_blocks in enumerate(self.stage_blocks):
238 first_block = True if i == 0 else False
239 layer = self.make_layer(channels[i], num_blocks, first_block)
240 self.layers.append(layer)
241
242 def _freeze_stages(self):
243 if self.frozen_stages >= 0:
244 for param in self.conv1.parameters():
245 param.requires_grad = False
246 for i in range(self.frozen_stages):
247 layer = self.layers[i]
248 layer.eval()
249 for param in layer.parameters():
250 param.requires_grad = False
251
252 def init_weights(self, pretrained=None):
253 if isinstance(pretrained, str):
254 logger = logging.getLogger()
255 load_checkpoint(self, pretrained, strict=False, logger=logger)
256 elif pretrained is None:
257 for name, m in self.named_modules():
258 if isinstance(m, nn.Conv2d):
259 if 'conv1' in name:
260 normal_init(m, mean=0, std=0.01)
261 else:
262 normal_init(m, mean=0, std=1.0 / m.weight.shape[1])
263 elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
264 constant_init(m.weight, val=1, bias=0.0001)
265 if isinstance(m, _BatchNorm):
266 if m.running_mean is not None:
267 nn.init.constant_(m.running_mean, 0)
268 else:
269 raise TypeError('pretrained must be a str or None. But received '
270 f'{type(pretrained)}')
271
272 def make_layer(self, out_channels, num_blocks, first_block=False):
273 """Stack ShuffleUnit blocks to make a layer.
274
275 Args:
276 out_channels (int): out_channels of the block.
277 num_blocks (int): Number of blocks.
278 first_block (bool, optional): Whether is the first ShuffleUnit of a
279 sequential ShuffleUnits. Default: False, which means not using
280 the grouped 1x1 convolution.
281 """
282 layers = []
283 for i in range(num_blocks):
284 first_block = first_block if i == 0 else False
285 combine_mode = 'concat' if i == 0 else 'add'
286 layers.append(
287 ShuffleUnit(
288 self.in_channels,
289 out_channels,
290 groups=self.groups,
291 first_block=first_block,
292 combine=combine_mode,
293 conv_cfg=self.conv_cfg,
294 norm_cfg=self.norm_cfg,
295 act_cfg=self.act_cfg,
296 with_cp=self.with_cp))
297 self.in_channels = out_channels
298
299 return nn.Sequential(*layers)
300
301 def forward(self, x):
302 x = self.conv1(x)
303 x = self.maxpool(x)
304
305 outs = []
306 for i, layer in enumerate(self.layers):
307 x = layer(x)
308 if i in self.out_indices:
309 outs.append(x)
310
311 if len(outs) == 1:
312 return outs[0]
313 else:
314 return tuple(outs)
315
316 def train(self, mode=True):
317 super().train(mode)
318 self._freeze_stages()
319 if mode and self.norm_eval:
320 for m in self.modules():
321 if isinstance(m, _BatchNorm):
322 m.eval()
323
[end of mmpose/models/backbones/shufflenet_v1.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mmpose/models/backbones/shufflenet_v1.py b/mmpose/models/backbones/shufflenet_v1.py
--- a/mmpose/models/backbones/shufflenet_v1.py
+++ b/mmpose/models/backbones/shufflenet_v1.py
@@ -235,7 +235,7 @@
self.layers = nn.ModuleList()
for i, num_blocks in enumerate(self.stage_blocks):
- first_block = True if i == 0 else False
+ first_block = (i == 0)
layer = self.make_layer(channels[i], num_blocks, first_block)
self.layers.append(layer)
|
{"golden_diff": "diff --git a/mmpose/models/backbones/shufflenet_v1.py b/mmpose/models/backbones/shufflenet_v1.py\n--- a/mmpose/models/backbones/shufflenet_v1.py\n+++ b/mmpose/models/backbones/shufflenet_v1.py\n@@ -235,7 +235,7 @@\n \n self.layers = nn.ModuleList()\n for i, num_blocks in enumerate(self.stage_blocks):\n- first_block = True if i == 0 else False\n+ first_block = (i == 0)\n layer = self.make_layer(channels[i], num_blocks, first_block)\n self.layers.append(layer)\n", "issue": "Pylint: R1719\n```bash\r\nmmpose/models/backbones/shufflenet_v1.py:238:26: R1719: The if expression can be replaced with 'test' (simplifiable-if-expression)\r\n```\n", "before_files": [{"content": "import logging\n\nimport torch\nimport torch.nn as nn\nimport torch.utils.checkpoint as cp\nfrom mmcv.cnn import (ConvModule, build_activation_layer, constant_init,\n normal_init)\nfrom torch.nn.modules.batchnorm import _BatchNorm\n\nfrom ..registry import BACKBONES\nfrom .base_backbone import BaseBackbone\nfrom .utils import channel_shuffle, load_checkpoint, make_divisible\n\n\nclass ShuffleUnit(nn.Module):\n \"\"\"ShuffleUnit block.\n\n ShuffleNet unit with pointwise group convolution (GConv) and channel\n shuffle.\n\n Args:\n in_channels (int): The input channels of the ShuffleUnit.\n out_channels (int): The output channels of the ShuffleUnit.\n groups (int, optional): The number of groups to be used in grouped 1x1\n convolutions in each ShuffleUnit. Default: 3\n first_block (bool, optional): Whether it is the first ShuffleUnit of a\n sequential ShuffleUnits. Default: False, which means not using the\n grouped 1x1 convolution.\n combine (str, optional): The ways to combine the input and output\n branches. Default: 'add'.\n conv_cfg (dict): Config dict for convolution layer. Default: None,\n which means using conv2d.\n norm_cfg (dict): Config dict for normalization layer.\n Default: dict(type='BN').\n act_cfg (dict): Config dict for activation layer.\n Default: dict(type='ReLU').\n with_cp (bool, optional): Use checkpoint or not. Using checkpoint\n will save some memory while slowing down the training speed.\n Default: False.\n\n Returns:\n Tensor: The output tensor.\n \"\"\"\n\n def __init__(self,\n in_channels,\n out_channels,\n groups=3,\n first_block=True,\n combine='add',\n conv_cfg=None,\n norm_cfg=dict(type='BN'),\n act_cfg=dict(type='ReLU'),\n with_cp=False):\n super().__init__()\n self.in_channels = in_channels\n self.out_channels = out_channels\n self.first_block = first_block\n self.combine = combine\n self.groups = groups\n self.bottleneck_channels = self.out_channels // 4\n self.with_cp = with_cp\n\n if self.combine == 'add':\n self.depthwise_stride = 1\n self._combine_func = self._add\n assert in_channels == out_channels, (\n 'in_channels must be equal to out_channels when combine '\n 'is add')\n elif self.combine == 'concat':\n self.depthwise_stride = 2\n self._combine_func = self._concat\n self.out_channels -= self.in_channels\n self.avgpool = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)\n else:\n raise ValueError(f'Cannot combine tensors with {self.combine}. 
'\n 'Only \"add\" and \"concat\" are supported')\n\n self.first_1x1_groups = 1 if first_block else self.groups\n self.g_conv_1x1_compress = ConvModule(\n in_channels=self.in_channels,\n out_channels=self.bottleneck_channels,\n kernel_size=1,\n groups=self.first_1x1_groups,\n conv_cfg=conv_cfg,\n norm_cfg=norm_cfg,\n act_cfg=act_cfg)\n\n self.depthwise_conv3x3_bn = ConvModule(\n in_channels=self.bottleneck_channels,\n out_channels=self.bottleneck_channels,\n kernel_size=3,\n stride=self.depthwise_stride,\n padding=1,\n groups=self.bottleneck_channels,\n conv_cfg=conv_cfg,\n norm_cfg=norm_cfg,\n act_cfg=None)\n\n self.g_conv_1x1_expand = ConvModule(\n in_channels=self.bottleneck_channels,\n out_channels=self.out_channels,\n kernel_size=1,\n groups=self.groups,\n conv_cfg=conv_cfg,\n norm_cfg=norm_cfg,\n act_cfg=None)\n\n self.act = build_activation_layer(act_cfg)\n\n @staticmethod\n def _add(x, out):\n # residual connection\n return x + out\n\n @staticmethod\n def _concat(x, out):\n # concatenate along channel axis\n return torch.cat((x, out), 1)\n\n def forward(self, x):\n\n def _inner_forward(x):\n residual = x\n\n out = self.g_conv_1x1_compress(x)\n out = self.depthwise_conv3x3_bn(out)\n\n if self.groups > 1:\n out = channel_shuffle(out, self.groups)\n\n out = self.g_conv_1x1_expand(out)\n\n if self.combine == 'concat':\n residual = self.avgpool(residual)\n out = self.act(out)\n out = self._combine_func(residual, out)\n else:\n out = self._combine_func(residual, out)\n out = self.act(out)\n return out\n\n if self.with_cp and x.requires_grad:\n out = cp.checkpoint(_inner_forward, x)\n else:\n out = _inner_forward(x)\n\n return out\n\n\[email protected]_module()\nclass ShuffleNetV1(BaseBackbone):\n \"\"\"ShuffleNetV1 backbone.\n\n Args:\n groups (int, optional): The number of groups to be used in grouped 1x1\n convolutions in each ShuffleUnit. Default: 3.\n widen_factor (float, optional): Width multiplier - adjusts the number\n of channels in each layer by this amount. Default: 1.0.\n out_indices (Sequence[int]): Output from which stages.\n Default: (2, )\n frozen_stages (int): Stages to be frozen (all param fixed).\n Default: -1, which means not freezing any parameters.\n conv_cfg (dict): Config dict for convolution layer. Default: None,\n which means using conv2d.\n norm_cfg (dict): Config dict for normalization layer.\n Default: dict(type='BN').\n act_cfg (dict): Config dict for activation layer.\n Default: dict(type='ReLU').\n norm_eval (bool): Whether to set norm layers to eval mode, namely,\n freeze running stats (mean and var). Note: Effect on Batch Norm\n and its variants only. Default: False.\n with_cp (bool): Use checkpoint or not. Using checkpoint will save some\n memory while slowing down the training speed. Default: False.\n \"\"\"\n\n def __init__(self,\n groups=3,\n widen_factor=1.0,\n out_indices=(2, ),\n frozen_stages=-1,\n conv_cfg=None,\n norm_cfg=dict(type='BN'),\n act_cfg=dict(type='ReLU'),\n norm_eval=False,\n with_cp=False):\n super().__init__()\n self.stage_blocks = [4, 8, 4]\n self.groups = groups\n\n for index in out_indices:\n if index not in range(0, 3):\n raise ValueError('the item in out_indices must in '\n f'range(0, 3). But received {index}')\n\n if frozen_stages not in range(-1, 3):\n raise ValueError('frozen_stages must be in range(-1, 3). 
'\n f'But received {frozen_stages}')\n self.out_indices = out_indices\n self.frozen_stages = frozen_stages\n self.conv_cfg = conv_cfg\n self.norm_cfg = norm_cfg\n self.act_cfg = act_cfg\n self.norm_eval = norm_eval\n self.with_cp = with_cp\n\n if groups == 1:\n channels = (144, 288, 576)\n elif groups == 2:\n channels = (200, 400, 800)\n elif groups == 3:\n channels = (240, 480, 960)\n elif groups == 4:\n channels = (272, 544, 1088)\n elif groups == 8:\n channels = (384, 768, 1536)\n else:\n raise ValueError(f'{groups} groups is not supported for 1x1 '\n 'Grouped Convolutions')\n\n channels = [make_divisible(ch * widen_factor, 8) for ch in channels]\n\n self.in_channels = int(24 * widen_factor)\n\n self.conv1 = ConvModule(\n in_channels=3,\n out_channels=self.in_channels,\n kernel_size=3,\n stride=2,\n padding=1,\n conv_cfg=conv_cfg,\n norm_cfg=norm_cfg,\n act_cfg=act_cfg)\n self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n\n self.layers = nn.ModuleList()\n for i, num_blocks in enumerate(self.stage_blocks):\n first_block = True if i == 0 else False\n layer = self.make_layer(channels[i], num_blocks, first_block)\n self.layers.append(layer)\n\n def _freeze_stages(self):\n if self.frozen_stages >= 0:\n for param in self.conv1.parameters():\n param.requires_grad = False\n for i in range(self.frozen_stages):\n layer = self.layers[i]\n layer.eval()\n for param in layer.parameters():\n param.requires_grad = False\n\n def init_weights(self, pretrained=None):\n if isinstance(pretrained, str):\n logger = logging.getLogger()\n load_checkpoint(self, pretrained, strict=False, logger=logger)\n elif pretrained is None:\n for name, m in self.named_modules():\n if isinstance(m, nn.Conv2d):\n if 'conv1' in name:\n normal_init(m, mean=0, std=0.01)\n else:\n normal_init(m, mean=0, std=1.0 / m.weight.shape[1])\n elif isinstance(m, (_BatchNorm, nn.GroupNorm)):\n constant_init(m.weight, val=1, bias=0.0001)\n if isinstance(m, _BatchNorm):\n if m.running_mean is not None:\n nn.init.constant_(m.running_mean, 0)\n else:\n raise TypeError('pretrained must be a str or None. But received '\n f'{type(pretrained)}')\n\n def make_layer(self, out_channels, num_blocks, first_block=False):\n \"\"\"Stack ShuffleUnit blocks to make a layer.\n\n Args:\n out_channels (int): out_channels of the block.\n num_blocks (int): Number of blocks.\n first_block (bool, optional): Whether is the first ShuffleUnit of a\n sequential ShuffleUnits. Default: False, which means not using\n the grouped 1x1 convolution.\n \"\"\"\n layers = []\n for i in range(num_blocks):\n first_block = first_block if i == 0 else False\n combine_mode = 'concat' if i == 0 else 'add'\n layers.append(\n ShuffleUnit(\n self.in_channels,\n out_channels,\n groups=self.groups,\n first_block=first_block,\n combine=combine_mode,\n conv_cfg=self.conv_cfg,\n norm_cfg=self.norm_cfg,\n act_cfg=self.act_cfg,\n with_cp=self.with_cp))\n self.in_channels = out_channels\n\n return nn.Sequential(*layers)\n\n def forward(self, x):\n x = self.conv1(x)\n x = self.maxpool(x)\n\n outs = []\n for i, layer in enumerate(self.layers):\n x = layer(x)\n if i in self.out_indices:\n outs.append(x)\n\n if len(outs) == 1:\n return outs[0]\n else:\n return tuple(outs)\n\n def train(self, mode=True):\n super().train(mode)\n self._freeze_stages()\n if mode and self.norm_eval:\n for m in self.modules():\n if isinstance(m, _BatchNorm):\n m.eval()\n", "path": "mmpose/models/backbones/shufflenet_v1.py"}]}
| 4,051 | 146 |
gh_patches_debug_19361
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1248
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
create a sans-io docker client class and implement Client with blocking requests
use https://github.com/mikeal/deferred to create a sans-io version of Client (SansIOClient) that requires something like:
``` python
class SimpleStream(object):
def next(self) -> Deferred:
...
class IOAdapter(object):
def request(self, **kwargs) -> Deferred:
...
def stream(self, **kwargs) -> SimpleStream:
...
def unwrap_deferred(self, deferred: Deferred) -> Any:
...
```
and then implement it with something like:
``` python
class BlockingSimpleStream(SimpleStream):
def __init__(self, stream):
self.generator = _stream_helper(stream):
def next(self) -> Deferred:
return deferred.succeeded(next(self.generator))
class BlockingIOAdapter(IOAdapter):
def __init__(session: requests.Session):
self.session = session
def request(self, **kwargs) -> Deferred:
return deferred.execute(self.session.request, **kwargs)
def stream(self, **kwargs) -> BlockingSimpleStream:
return BlockingSimpleStream(self.session.request(**kwargs, stream=True))
def unwrap_deferred(self, d: Deferred):
return deferred.waitForDeferred(d).getResult()
```
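
A rough sketch of how a sans-io `Client` method could consume such an adapter (hypothetical names; the `request`/`unwrap_deferred` calls are taken from the interface sketched above, not from an existing API):

```python
class SansIOClient(object):
    """Sketch only: every HTTP call goes through an IOAdapter."""

    def __init__(self, io, base_url):
        self.io = io              # an IOAdapter implementation
        self.base_url = base_url  # e.g. the Docker daemon endpoint

    def version(self):
        # No sockets or sessions are touched here; the adapter decides
        # whether the request blocks, runs on an event loop, etc.
        d = self.io.request(method='GET', url=self.base_url + '/version')
        return self.io.unwrap_deferred(d)


# Blocking flavour, reusing the names from the snippets above:
# client = SansIOClient(BlockingIOAdapter(requests.Session()), base_url)
# print(client.version())
```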
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup
6
7
8 ROOT_DIR = os.path.dirname(__file__)
9 SOURCE_DIR = os.path.join(ROOT_DIR)
10
11 requirements = [
12 'requests >= 2.5.2, < 2.11',
13 'six >= 1.4.0',
14 'websocket-client >= 0.32.0',
15 'docker-pycreds >= 0.2.1'
16 ]
17
18 if sys.platform == 'win32':
19 requirements.append('pypiwin32 >= 219')
20
21 extras_require = {
22 ':python_version < "3.5"': 'backports.ssl_match_hostname >= 3.5',
23 # While not imported explicitly, the ipaddress module is required for
24 # ssl_match_hostname to verify hosts match with certificates via
25 # ServerAltname: https://pypi.python.org/pypi/backports.ssl_match_hostname
26 ':python_version < "3.3"': 'ipaddress >= 1.0.16',
27 }
28
29 version = None
30 exec(open('docker/version.py').read())
31
32 with open('./test-requirements.txt') as test_reqs_txt:
33 test_requirements = [line for line in test_reqs_txt]
34
35
36 setup(
37 name="docker-py",
38 version=version,
39 description="Python client for Docker.",
40 url='https://github.com/docker/docker-py/',
41 packages=[
42 'docker', 'docker.api', 'docker.auth', 'docker.transport',
43 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',
44 'docker.types',
45 ],
46 install_requires=requirements,
47 tests_require=test_requirements,
48 extras_require=extras_require,
49 zip_safe=False,
50 test_suite='tests',
51 classifiers=[
52 'Development Status :: 4 - Beta',
53 'Environment :: Other Environment',
54 'Intended Audience :: Developers',
55 'Operating System :: OS Independent',
56 'Programming Language :: Python',
57 'Programming Language :: Python :: 2',
58 'Programming Language :: Python :: 2.6',
59 'Programming Language :: Python :: 2.7',
60 'Programming Language :: Python :: 3',
61 'Programming Language :: Python :: 3.3',
62 'Programming Language :: Python :: 3.4',
63 'Programming Language :: Python :: 3.5',
64 'Topic :: Utilities',
65 'License :: OSI Approved :: Apache Software License',
66 ],
67 )
68
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,10 +33,20 @@
test_requirements = [line for line in test_reqs_txt]
+long_description = ''
+try:
+ with open('./README.rst') as readme_rst:
+ long_description = readme_rst.read()
+except IOError:
+ # README.rst is only generated on release. Its absence should not prevent
+ # setup.py from working properly.
+ pass
+
setup(
name="docker-py",
version=version,
description="Python client for Docker.",
+ long_description=long_description,
url='https://github.com/docker/docker-py/',
packages=[
'docker', 'docker.api', 'docker.auth', 'docker.transport',
@@ -64,4 +74,6 @@
'Topic :: Utilities',
'License :: OSI Approved :: Apache Software License',
],
+ maintainer='Joffrey F',
+ maintainer_email='[email protected]',
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,10 +33,20 @@\n test_requirements = [line for line in test_reqs_txt]\n \n \n+long_description = ''\n+try:\n+ with open('./README.rst') as readme_rst:\n+ long_description = readme_rst.read()\n+except IOError:\n+ # README.rst is only generated on release. Its absence should not prevent\n+ # setup.py from working properly.\n+ pass\n+\n setup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n+ long_description=long_description,\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n@@ -64,4 +74,6 @@\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n+ maintainer='Joffrey F',\n+ maintainer_email='[email protected]',\n )\n", "issue": "create a sans-io docker client class and impliment Client with blocking requests\nuse https://github.com/mikeal/deferred to create a sans-io version of Client (SansIOClient) that requires something like:\n\n``` python\nclass SimpleStream(object):\n def next(self) -> Deferred:\n ...\n\nclass IOAdapter(object):\n def request(self, **kwargs) -> Deferred:\n ...\n\n def stream(self, **kwargs) -> SimpleStream:\n ...\n\n def unwrap_deferred(self, deferred: Deferred) -> Any:\n ...\n```\n\nand then implement it with something like:\n\n``` python\nclass BlockingSimpleStream(SimpleStream):\n def __init__(self, stream):\n self.generator = _stream_helper(stream):\n def next(self) -> Deferred:\n return deferred.succeeded(next(self.generator))\n\nclass BlockingIOAdapter(IOAdapter):\n def __init__(session: requests.Session):\n self.session = session\n\n def request(self, **kwargs) -> Deferred:\n return deferred.execute(self.session.request, **kwargs)\n\n def stream(self, **kwargs) -> BlockingSimpleStream:\n return BlockingSimpleStream(self.session.request(**kwargs, stream=True))\n\n def unwrap_deferred(self, d: Deferred):\n return deferred.waitForDeferred(d).getResult()\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2, < 2.11',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n # While not imported explicitly, the ipaddress module is required for\n # ssl_match_hostname to verify hosts match with certificates via\n # ServerAltname: https://pypi.python.org/pypi/backports.ssl_match_hostname\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',\n 'docker.types',\n ],\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 
'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}]}
| 1,437 | 233 |
gh_patches_debug_12074
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-1946
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pytest fails but travis says green
https://travis-ci.com/quantumlib/Cirq/jobs/225040090
```
4 failed, 7087 passed, 26 skipped, 3 warnings in 96.07 seconds
The command "check/pytest --ignore=cirq/contrib --benchmark-skip --actually-quiet" exited with 0.
```
</issue>
<code>
[start of cirq/ops/phased_x_gate.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """An `XPowGate` conjugated by `ZPowGate`s."""
16 from typing import Union, Sequence, Tuple, Optional, cast
17
18 import math
19 import numpy as np
20 import sympy
21
22 import cirq
23 from cirq import value, protocols
24 from cirq._compat import proper_repr
25 from cirq.ops import gate_features, raw_types, op_tree, common_gates
26 from cirq.type_workarounds import NotImplementedType
27
28
29 @value.value_equality(manual_cls=True)
30 class PhasedXPowGate(gate_features.SingleQubitGate):
31 """A gate equivalent to the circuit ───Z^-p───X^t───Z^p───."""
32
33 def __init__(self,
34 *,
35 phase_exponent: Union[float, sympy.Symbol],
36 exponent: Union[float, sympy.Symbol] = 1.0,
37 global_shift: float = 0.0) -> None:
38 """
39 Args:
40 phase_exponent: The exponent on the Z gates conjugating the X gate.
41 exponent: The exponent on the X gate conjugated by Zs.
42 global_shift: How much to shift the operation's eigenvalues at
43 exponent=1.
44 """
45 self._phase_exponent = value.canonicalize_half_turns(phase_exponent)
46 self._exponent = exponent
47 self._global_shift = global_shift
48
49 def _qasm_(self,
50 args: protocols.QasmArgs,
51 qubits: Tuple[raw_types.Qid, ...]) -> Optional[str]:
52 if cirq.is_parameterized(self):
53 return None
54
55 args.validate_version('2.0')
56
57 e = cast(float, value.canonicalize_half_turns(self._exponent))
58 p = cast(float, self.phase_exponent)
59 epsilon = 10**-args.precision
60
61 if abs(e + 0.5) <= epsilon:
62 return args.format('u2({0:half_turns}, {1:half_turns}) {2};\n',
63 p + 0.5, -p - 0.5, qubits[0])
64
65 if abs(e - 0.5) <= epsilon:
66 return args.format('u2({0:half_turns}, {1:half_turns}) {2};\n',
67 p - 0.5, -p + 0.5, qubits[0])
68
69 return args.format(
70 'u3({0:half_turns}, {1:half_turns}, {2:half_turns}) {3};\n',
71 -e, p + 0.5, -p - 0.5, qubits[0])
72
73 def _decompose_(self, qubits: Sequence[raw_types.Qid]
74 ) -> op_tree.OP_TREE:
75 assert len(qubits) == 1
76 q = qubits[0]
77 z = cirq.Z(q)**self._phase_exponent
78 x = cirq.X(q)**self._exponent
79 if protocols.is_parameterized(z):
80 return NotImplemented
81 return z**-1, x, z
82
83 @property
84 def exponent(self) -> Union[float, sympy.Symbol]:
85 """The exponent on the central X gate conjugated by the Z gates."""
86 return self._exponent
87
88 @property
89 def phase_exponent(self) -> Union[float, sympy.Symbol]:
90 """The exponent on the Z gates conjugating the X gate."""
91 return self._phase_exponent
92
93 def __pow__(self, exponent: Union[float, sympy.Symbol]) -> 'PhasedXPowGate':
94 new_exponent = protocols.mul(self._exponent, exponent, NotImplemented)
95 if new_exponent is NotImplemented:
96 return NotImplemented
97 return PhasedXPowGate(phase_exponent=self._phase_exponent,
98 exponent=new_exponent,
99 global_shift=self._global_shift)
100
101 def _trace_distance_bound_(self) -> Optional[float]:
102 if self._is_parameterized_():
103 return None
104 return abs(np.sin(self._exponent * 0.5 * np.pi))
105
106 def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:
107 """See `cirq.SupportsUnitary`."""
108 if self._is_parameterized_():
109 return NotImplemented
110 z = protocols.unitary(cirq.Z**self._phase_exponent)
111 x = protocols.unitary(cirq.X**self._exponent)
112 p = np.exp(1j * np.pi * self._global_shift * self._exponent)
113 return np.dot(np.dot(z, x), np.conj(z)) * p
114
115 def _pauli_expansion_(self) -> value.LinearDict[str]:
116 if self._is_parameterized_():
117 return NotImplemented
118 phase_angle = np.pi * self._phase_exponent / 2
119 angle = np.pi * self._exponent / 2
120 phase = 1j**(2 * self._exponent * (self._global_shift + 0.5))
121 return value.LinearDict({
122 'I': phase * np.cos(angle),
123 'X': -1j * phase * np.sin(angle) * np.cos(2 * phase_angle),
124 'Y': -1j * phase * np.sin(angle) * np.sin(2 * phase_angle),
125 })
126
127 def _is_parameterized_(self) -> bool:
128 """See `cirq.SupportsParameterization`."""
129 return (protocols.is_parameterized(self._exponent) or
130 protocols.is_parameterized(self._phase_exponent))
131
132 def _resolve_parameters_(self, param_resolver) -> 'PhasedXPowGate':
133 """See `cirq.SupportsParameterization`."""
134 return PhasedXPowGate(
135 phase_exponent=param_resolver.value_of(self._phase_exponent),
136 exponent=param_resolver.value_of(self._exponent),
137 global_shift=self._global_shift)
138
139 def _phase_by_(self, phase_turns, qubit_index):
140 """See `cirq.SupportsPhase`."""
141 assert qubit_index == 0
142 return PhasedXPowGate(
143 exponent=self._exponent,
144 phase_exponent=self._phase_exponent + phase_turns * 2,
145 global_shift=self._global_shift)
146
147 def _circuit_diagram_info_(self, args: protocols.CircuitDiagramInfoArgs
148 ) -> protocols.CircuitDiagramInfo:
149 """See `cirq.SupportsCircuitDiagramInfo`."""
150
151 if (isinstance(self.phase_exponent, sympy.Symbol) or
152 args.precision is None):
153 s = 'PhasedX({})'.format(self.phase_exponent)
154 else:
155 s = 'PhasedX({{:.{}f}})'.format(args.precision).format(
156 self.phase_exponent)
157 return protocols.CircuitDiagramInfo(
158 wire_symbols=(s,),
159 exponent=value.canonicalize_half_turns(self._exponent))
160
161 def __str__(self):
162 info = protocols.circuit_diagram_info(self)
163 if info.exponent == 1:
164 return info.wire_symbols[0]
165 return '{}^{}'.format(info.wire_symbols[0], info.exponent)
166
167 def __repr__(self):
168 args = ['phase_exponent={}'.format(proper_repr(self.phase_exponent))]
169 if self.exponent != 1:
170 args.append('exponent={}'.format(proper_repr(self.exponent)))
171 if self._global_shift != 0:
172 args.append('global_shift={!r}'.format(self._global_shift))
173 return 'cirq.PhasedXPowGate({})'.format(', '.join(args))
174
175 def _period(self):
176 exponents = [self._global_shift, 1 + self._global_shift]
177 real_periods = [abs(2/e) for e in exponents if e != 0]
178 int_periods = [int(np.round(e)) for e in real_periods]
179 if any(i != r for i, r in zip(real_periods, int_periods)):
180 return None
181 if len(int_periods) == 1:
182 return int_periods[0]
183 return int_periods[0] * int_periods[1] / math.gcd(*int_periods)
184
185 @property
186 def _canonical_exponent(self):
187 period = self._period()
188 if not period or isinstance(self._exponent, sympy.Symbol):
189 return self._exponent
190
191 return self._exponent % period
192
193 def _value_equality_values_cls_(self):
194 if self.phase_exponent == 0:
195 return common_gates.XPowGate
196 if self.phase_exponent == 0.5:
197 return common_gates.YPowGate
198 return PhasedXPowGate
199
200 def _value_equality_values_(self):
201 if self.phase_exponent == 0:
202 return common_gates.XPowGate(
203 exponent=self._exponent,
204 global_shift=self._global_shift)._value_equality_values_()
205 if self.phase_exponent == 0.5:
206 return common_gates.YPowGate(
207 exponent=self._exponent,
208 global_shift=self._global_shift)._value_equality_values_()
209 return self.phase_exponent, self._canonical_exponent, self._global_shift
210
[end of cirq/ops/phased_x_gate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cirq/ops/phased_x_gate.py b/cirq/ops/phased_x_gate.py
--- a/cirq/ops/phased_x_gate.py
+++ b/cirq/ops/phased_x_gate.py
@@ -151,8 +151,10 @@
if (isinstance(self.phase_exponent, sympy.Symbol) or
args.precision is None):
s = 'PhasedX({})'.format(self.phase_exponent)
+ elif isinstance(self.phase_exponent, int):
+ s = 'PhasedX({})'.format(self.phase_exponent)
else:
- s = 'PhasedX({{:.{}f}})'.format(args.precision).format(
+ s = 'PhasedX({{:.{}}})'.format(args.precision).format(
self.phase_exponent)
return protocols.CircuitDiagramInfo(
wire_symbols=(s,),
|
{"golden_diff": "diff --git a/cirq/ops/phased_x_gate.py b/cirq/ops/phased_x_gate.py\n--- a/cirq/ops/phased_x_gate.py\n+++ b/cirq/ops/phased_x_gate.py\n@@ -151,8 +151,10 @@\n if (isinstance(self.phase_exponent, sympy.Symbol) or\n args.precision is None):\n s = 'PhasedX({})'.format(self.phase_exponent)\n+ elif isinstance(self.phase_exponent, int):\n+ s = 'PhasedX({})'.format(self.phase_exponent)\n else:\n- s = 'PhasedX({{:.{}f}})'.format(args.precision).format(\n+ s = 'PhasedX({{:.{}}})'.format(args.precision).format(\n self.phase_exponent)\n return protocols.CircuitDiagramInfo(\n wire_symbols=(s,),\n", "issue": "pytest fails but travis says green\nhttps://travis-ci.com/quantumlib/Cirq/jobs/225040090\r\n\r\n```\r\n4 failed, 7087 passed, 26 skipped, 3 warnings in 96.07 seconds\r\nThe command \"check/pytest --ignore=cirq/contrib --benchmark-skip --actually-quiet\" exited with 0.\r\n```\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"An `XPowGate` conjugated by `ZPowGate`s.\"\"\"\nfrom typing import Union, Sequence, Tuple, Optional, cast\n\nimport math\nimport numpy as np\nimport sympy\n\nimport cirq\nfrom cirq import value, protocols\nfrom cirq._compat import proper_repr\nfrom cirq.ops import gate_features, raw_types, op_tree, common_gates\nfrom cirq.type_workarounds import NotImplementedType\n\n\[email protected]_equality(manual_cls=True)\nclass PhasedXPowGate(gate_features.SingleQubitGate):\n \"\"\"A gate equivalent to the circuit \u2500\u2500\u2500Z^-p\u2500\u2500\u2500X^t\u2500\u2500\u2500Z^p\u2500\u2500\u2500.\"\"\"\n\n def __init__(self,\n *,\n phase_exponent: Union[float, sympy.Symbol],\n exponent: Union[float, sympy.Symbol] = 1.0,\n global_shift: float = 0.0) -> None:\n \"\"\"\n Args:\n phase_exponent: The exponent on the Z gates conjugating the X gate.\n exponent: The exponent on the X gate conjugated by Zs.\n global_shift: How much to shift the operation's eigenvalues at\n exponent=1.\n \"\"\"\n self._phase_exponent = value.canonicalize_half_turns(phase_exponent)\n self._exponent = exponent\n self._global_shift = global_shift\n\n def _qasm_(self,\n args: protocols.QasmArgs,\n qubits: Tuple[raw_types.Qid, ...]) -> Optional[str]:\n if cirq.is_parameterized(self):\n return None\n\n args.validate_version('2.0')\n\n e = cast(float, value.canonicalize_half_turns(self._exponent))\n p = cast(float, self.phase_exponent)\n epsilon = 10**-args.precision\n\n if abs(e + 0.5) <= epsilon:\n return args.format('u2({0:half_turns}, {1:half_turns}) {2};\\n',\n p + 0.5, -p - 0.5, qubits[0])\n\n if abs(e - 0.5) <= epsilon:\n return args.format('u2({0:half_turns}, {1:half_turns}) {2};\\n',\n p - 0.5, -p + 0.5, qubits[0])\n\n return args.format(\n 'u3({0:half_turns}, {1:half_turns}, {2:half_turns}) {3};\\n',\n -e, p + 0.5, -p - 0.5, qubits[0])\n\n def _decompose_(self, qubits: Sequence[raw_types.Qid]\n ) -> op_tree.OP_TREE:\n assert len(qubits) == 1\n q = qubits[0]\n z = cirq.Z(q)**self._phase_exponent\n x = 
cirq.X(q)**self._exponent\n if protocols.is_parameterized(z):\n return NotImplemented\n return z**-1, x, z\n\n @property\n def exponent(self) -> Union[float, sympy.Symbol]:\n \"\"\"The exponent on the central X gate conjugated by the Z gates.\"\"\"\n return self._exponent\n\n @property\n def phase_exponent(self) -> Union[float, sympy.Symbol]:\n \"\"\"The exponent on the Z gates conjugating the X gate.\"\"\"\n return self._phase_exponent\n\n def __pow__(self, exponent: Union[float, sympy.Symbol]) -> 'PhasedXPowGate':\n new_exponent = protocols.mul(self._exponent, exponent, NotImplemented)\n if new_exponent is NotImplemented:\n return NotImplemented\n return PhasedXPowGate(phase_exponent=self._phase_exponent,\n exponent=new_exponent,\n global_shift=self._global_shift)\n\n def _trace_distance_bound_(self) -> Optional[float]:\n if self._is_parameterized_():\n return None\n return abs(np.sin(self._exponent * 0.5 * np.pi))\n\n def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:\n \"\"\"See `cirq.SupportsUnitary`.\"\"\"\n if self._is_parameterized_():\n return NotImplemented\n z = protocols.unitary(cirq.Z**self._phase_exponent)\n x = protocols.unitary(cirq.X**self._exponent)\n p = np.exp(1j * np.pi * self._global_shift * self._exponent)\n return np.dot(np.dot(z, x), np.conj(z)) * p\n\n def _pauli_expansion_(self) -> value.LinearDict[str]:\n if self._is_parameterized_():\n return NotImplemented\n phase_angle = np.pi * self._phase_exponent / 2\n angle = np.pi * self._exponent / 2\n phase = 1j**(2 * self._exponent * (self._global_shift + 0.5))\n return value.LinearDict({\n 'I': phase * np.cos(angle),\n 'X': -1j * phase * np.sin(angle) * np.cos(2 * phase_angle),\n 'Y': -1j * phase * np.sin(angle) * np.sin(2 * phase_angle),\n })\n\n def _is_parameterized_(self) -> bool:\n \"\"\"See `cirq.SupportsParameterization`.\"\"\"\n return (protocols.is_parameterized(self._exponent) or\n protocols.is_parameterized(self._phase_exponent))\n\n def _resolve_parameters_(self, param_resolver) -> 'PhasedXPowGate':\n \"\"\"See `cirq.SupportsParameterization`.\"\"\"\n return PhasedXPowGate(\n phase_exponent=param_resolver.value_of(self._phase_exponent),\n exponent=param_resolver.value_of(self._exponent),\n global_shift=self._global_shift)\n\n def _phase_by_(self, phase_turns, qubit_index):\n \"\"\"See `cirq.SupportsPhase`.\"\"\"\n assert qubit_index == 0\n return PhasedXPowGate(\n exponent=self._exponent,\n phase_exponent=self._phase_exponent + phase_turns * 2,\n global_shift=self._global_shift)\n\n def _circuit_diagram_info_(self, args: protocols.CircuitDiagramInfoArgs\n ) -> protocols.CircuitDiagramInfo:\n \"\"\"See `cirq.SupportsCircuitDiagramInfo`.\"\"\"\n\n if (isinstance(self.phase_exponent, sympy.Symbol) or\n args.precision is None):\n s = 'PhasedX({})'.format(self.phase_exponent)\n else:\n s = 'PhasedX({{:.{}f}})'.format(args.precision).format(\n self.phase_exponent)\n return protocols.CircuitDiagramInfo(\n wire_symbols=(s,),\n exponent=value.canonicalize_half_turns(self._exponent))\n\n def __str__(self):\n info = protocols.circuit_diagram_info(self)\n if info.exponent == 1:\n return info.wire_symbols[0]\n return '{}^{}'.format(info.wire_symbols[0], info.exponent)\n\n def __repr__(self):\n args = ['phase_exponent={}'.format(proper_repr(self.phase_exponent))]\n if self.exponent != 1:\n args.append('exponent={}'.format(proper_repr(self.exponent)))\n if self._global_shift != 0:\n args.append('global_shift={!r}'.format(self._global_shift))\n return 'cirq.PhasedXPowGate({})'.format(', '.join(args))\n\n def 
_period(self):\n exponents = [self._global_shift, 1 + self._global_shift]\n real_periods = [abs(2/e) for e in exponents if e != 0]\n int_periods = [int(np.round(e)) for e in real_periods]\n if any(i != r for i, r in zip(real_periods, int_periods)):\n return None\n if len(int_periods) == 1:\n return int_periods[0]\n return int_periods[0] * int_periods[1] / math.gcd(*int_periods)\n\n @property\n def _canonical_exponent(self):\n period = self._period()\n if not period or isinstance(self._exponent, sympy.Symbol):\n return self._exponent\n\n return self._exponent % period\n\n def _value_equality_values_cls_(self):\n if self.phase_exponent == 0:\n return common_gates.XPowGate\n if self.phase_exponent == 0.5:\n return common_gates.YPowGate\n return PhasedXPowGate\n\n def _value_equality_values_(self):\n if self.phase_exponent == 0:\n return common_gates.XPowGate(\n exponent=self._exponent,\n global_shift=self._global_shift)._value_equality_values_()\n if self.phase_exponent == 0.5:\n return common_gates.YPowGate(\n exponent=self._exponent,\n global_shift=self._global_shift)._value_equality_values_()\n return self.phase_exponent, self._canonical_exponent, self._global_shift\n", "path": "cirq/ops/phased_x_gate.py"}]}
| 3,267 | 195 |
gh_patches_debug_39188
|
rasdani/github-patches
|
git_diff
|
Cog-Creators__Red-DiscordBot-5902
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add file output for code evaluation commands & Better long output handling
### What component of Red (cog, command, API) would you like to see improvements on?
Dev
### Describe the enhancement you're suggesting.
Currently when evaluating code with a large response, it would be shown in multiple messages. (when typing more)
Dropping a file would be way more convenient.
Furthermore instead of typing more, a button will be nice. When clicking the button, the button disappears and a new message with a new button appears. (if there is further output)
### Anything else?
_No response_
</issue>
<code>
[start of redbot/core/commands/context.py]
1 from __future__ import annotations
2
3 import asyncio
4 import contextlib
5 import os
6 import re
7 from typing import Iterable, List, Union, Optional, TYPE_CHECKING
8 import discord
9 from discord.ext.commands import Context as DPYContext
10
11 from .requires import PermState
12 from ..utils.chat_formatting import box
13 from ..utils.predicates import MessagePredicate
14 from ..utils import can_user_react_in, common_filters
15
16 if TYPE_CHECKING:
17 from .commands import Command
18 from ..bot import Red
19
20 TICK = "\N{WHITE HEAVY CHECK MARK}"
21
22 __all__ = ["Context", "GuildContext", "DMContext"]
23
24
25 class Context(DPYContext):
26 """Command invocation context for Red.
27
28 All context passed into commands will be of this type.
29
30 This class inherits from `discord.ext.commands.Context`.
31
32 Attributes
33 ----------
34 assume_yes: bool
35 Whether or not interactive checks should
36 be skipped and assumed to be confirmed.
37
38 This is intended for allowing automation of tasks.
39
40 An example of this would be scheduled commands
41 not requiring interaction if the cog developer
42 checks this value prior to confirming something interactively.
43
44 Depending on the potential impact of a command,
45 it may still be appropriate not to use this setting.
46 permission_state: PermState
47 The permission state the current context is in.
48 """
49
50 command: "Command"
51 invoked_subcommand: "Optional[Command]"
52 bot: "Red"
53
54 def __init__(self, **attrs):
55 self.assume_yes = attrs.pop("assume_yes", False)
56 super().__init__(**attrs)
57 self.permission_state: PermState = PermState.NORMAL
58
59 async def send(self, content=None, **kwargs):
60 """Sends a message to the destination with the content given.
61
62 This acts the same as `discord.ext.commands.Context.send`, with
63 one added keyword argument as detailed below in *Other Parameters*.
64
65 Parameters
66 ----------
67 content : str
68 The content of the message to send.
69
70 Other Parameters
71 ----------------
72 filter : callable (`str`) -> `str`, optional
73 A function which is used to filter the ``content`` before
74 it is sent.
75 This must take a single `str` as an argument, and return
76 the processed `str`. When `None` is passed, ``content`` won't be touched.
77 Defaults to `None`.
78 **kwargs
79 See `discord.ext.commands.Context.send`.
80
81 Returns
82 -------
83 discord.Message
84 The message that was sent.
85
86 """
87
88 _filter = kwargs.pop("filter", None)
89
90 if _filter and content:
91 content = _filter(str(content))
92
93 return await super().send(content=content, **kwargs)
94
95 async def send_help(self, command=None):
96 """Send the command help message."""
97 # This allows people to manually use this similarly
98 # to the upstream d.py version, while retaining our use.
99 command = command or self.command
100 await self.bot.send_help_for(self, command)
101
102 async def tick(self, *, message: Optional[str] = None) -> bool:
103 """Add a tick reaction to the command message.
104
105 Keyword Arguments
106 -----------------
107 message : str, optional
108 The message to send if adding the reaction doesn't succeed.
109
110 Returns
111 -------
112 bool
113 :code:`True` if adding the reaction succeeded.
114
115 """
116 return await self.react_quietly(TICK, message=message)
117
118 async def react_quietly(
119 self,
120 reaction: Union[discord.Emoji, discord.Reaction, discord.PartialEmoji, str],
121 *,
122 message: Optional[str] = None,
123 ) -> bool:
124 """Adds a reaction to the command message.
125
126 Parameters
127 ----------
128 reaction : Union[discord.Emoji, discord.Reaction, discord.PartialEmoji, str]
129 The emoji to react with.
130
131 Keyword Arguments
132 -----------------
133 message : str, optional
134 The message to send if adding the reaction doesn't succeed.
135
136 Returns
137 -------
138 bool
139 :code:`True` if adding the reaction succeeded.
140 """
141 try:
142 if not can_user_react_in(self.me, self.channel):
143 raise RuntimeError
144 await self.message.add_reaction(reaction)
145 except (RuntimeError, discord.HTTPException):
146 if message is not None:
147 await self.send(message)
148 return False
149 else:
150 return True
151
152 async def send_interactive(
153 self, messages: Iterable[str], box_lang: str = None, timeout: int = 15
154 ) -> List[discord.Message]:
155 """Send multiple messages interactively.
156
157 The user will be prompted for whether or not they would like to view
158 the next message, one at a time. They will also be notified of how
159 many messages are remaining on each prompt.
160
161 Parameters
162 ----------
163 messages : `iterable` of `str`
164 The messages to send.
165 box_lang : str
166 If specified, each message will be contained within a codeblock of
167 this language.
168 timeout : int
169 How long the user has to respond to the prompt before it times out.
170 After timing out, the bot deletes its prompt message.
171
172 """
173 messages = tuple(messages)
174 ret = []
175
176 for idx, page in enumerate(messages, 1):
177 if box_lang is None:
178 msg = await self.send(page)
179 else:
180 msg = await self.send(box(page, lang=box_lang))
181 ret.append(msg)
182 n_remaining = len(messages) - idx
183 if n_remaining > 0:
184 if n_remaining == 1:
185 plural = ""
186 is_are = "is"
187 else:
188 plural = "s"
189 is_are = "are"
190 query = await self.send(
191 "There {} still {} message{} remaining. "
192 "Type `more` to continue."
193 "".format(is_are, n_remaining, plural)
194 )
195 try:
196 resp = await self.bot.wait_for(
197 "message",
198 check=MessagePredicate.lower_equal_to("more", self),
199 timeout=timeout,
200 )
201 except asyncio.TimeoutError:
202 with contextlib.suppress(discord.HTTPException):
203 await query.delete()
204 break
205 else:
206 try:
207 await self.channel.delete_messages((query, resp))
208 except (discord.HTTPException, AttributeError):
209 # In case the bot can't delete other users' messages,
210 # or is not a bot account
211 # or channel is a DM
212 with contextlib.suppress(discord.HTTPException):
213 await query.delete()
214 return ret
215
216 async def embed_colour(self):
217 """
218 Helper function to get the colour for an embed.
219
220 Returns
221 -------
222 discord.Colour:
223 The colour to be used
224 """
225 return await self.bot.get_embed_color(self)
226
227 @property
228 def embed_color(self):
229 # Rather than double awaiting.
230 return self.embed_colour
231
232 async def embed_requested(self):
233 """
234 Short-hand for calling bot.embed_requested with permission checks.
235
236 Equivalent to:
237
238 .. code:: python
239
240 await ctx.bot.embed_requested(ctx)
241
242 Returns
243 -------
244 bool:
245 :code:`True` if an embed is requested
246 """
247 return await self.bot.embed_requested(self)
248
249 async def maybe_send_embed(self, message: str) -> discord.Message:
250 """
251 Simple helper to send a simple message to context
252 without manually checking ctx.embed_requested
253 This should only be used for simple messages.
254
255 Parameters
256 ----------
257 message: `str`
258 The string to send
259
260 Returns
261 -------
262 discord.Message:
263 the message which was sent
264
265 Raises
266 ------
267 discord.Forbidden
268 see `discord.abc.Messageable.send`
269 discord.HTTPException
270 see `discord.abc.Messageable.send`
271 ValueError
272 when the message's length is not between 1 and 2000 characters.
273 """
274 if not message or len(message) > 2000:
275 raise ValueError("Message length must be between 1 and 2000")
276 if await self.embed_requested():
277 return await self.send(
278 embed=discord.Embed(description=message, color=(await self.embed_colour()))
279 )
280 else:
281 return await self.send(
282 message,
283 allowed_mentions=discord.AllowedMentions(everyone=False, roles=False, users=False),
284 )
285
286 @property
287 def me(self) -> Union[discord.ClientUser, discord.Member]:
288 """
289 discord.abc.User: The bot member or user object.
290
291 If the context is DM, this will be a `discord.User` object.
292 """
293 if self.guild is not None:
294 return self.guild.me
295 else:
296 return self.bot.user
297
298
299 if TYPE_CHECKING or os.getenv("BUILDING_DOCS", False):
300
301 class DMContext(Context):
302 """
303 At runtime, this will still be a normal context object.
304
305 This lies about some type narrowing for type analysis in commands
306 using a dm_only decorator.
307
308 It is only correct to use when those types are already narrowed
309 """
310
311 @property
312 def author(self) -> discord.User:
313 ...
314
315 @property
316 def channel(self) -> discord.DMChannel:
317 ...
318
319 @property
320 def guild(self) -> None:
321 ...
322
323 @property
324 def me(self) -> discord.ClientUser:
325 ...
326
327 class GuildContext(Context):
328 """
329 At runtime, this will still be a normal context object.
330
331 This lies about some type narrowing for type analysis in commands
332 using a guild_only decorator.
333
334 It is only correct to use when those types are already narrowed
335 """
336
337 @property
338 def author(self) -> discord.Member:
339 ...
340
341 @property
342 def channel(self) -> Union[discord.TextChannel, discord.VoiceChannel, discord.Thread]:
343 ...
344
345 @property
346 def guild(self) -> discord.Guild:
347 ...
348
349 @property
350 def me(self) -> discord.Member:
351 ...
352
353 else:
354 GuildContext = Context
355 DMContext = Context
356
[end of redbot/core/commands/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/redbot/core/commands/context.py b/redbot/core/commands/context.py
--- a/redbot/core/commands/context.py
+++ b/redbot/core/commands/context.py
@@ -9,7 +9,7 @@
from discord.ext.commands import Context as DPYContext
from .requires import PermState
-from ..utils.chat_formatting import box
+from ..utils.chat_formatting import box, text_to_file
from ..utils.predicates import MessagePredicate
from ..utils import can_user_react_in, common_filters
@@ -150,7 +150,11 @@
return True
async def send_interactive(
- self, messages: Iterable[str], box_lang: str = None, timeout: int = 15
+ self,
+ messages: Iterable[str],
+ box_lang: str = None,
+ timeout: int = 15,
+ join_character: str = "",
) -> List[discord.Message]:
"""Send multiple messages interactively.
@@ -168,6 +172,9 @@
timeout : int
How long the user has to respond to the prompt before it times out.
After timing out, the bot deletes its prompt message.
+ join_character : str
+ The character used to join all the messages when the file output
+ is selected.
"""
messages = tuple(messages)
@@ -189,13 +196,14 @@
is_are = "are"
query = await self.send(
"There {} still {} message{} remaining. "
- "Type `more` to continue."
+ "Type `more` to continue or `file` to upload all contents as a file."
"".format(is_are, n_remaining, plural)
)
+ pred = MessagePredicate.lower_contained_in(("more", "file"), self)
try:
resp = await self.bot.wait_for(
"message",
- check=MessagePredicate.lower_equal_to("more", self),
+ check=pred,
timeout=timeout,
)
except asyncio.TimeoutError:
@@ -211,6 +219,9 @@
# or channel is a DM
with contextlib.suppress(discord.HTTPException):
await query.delete()
+ if pred.result == 1:
+ await self.send(file=text_to_file(join_character.join(messages)))
+ break
return ret
async def embed_colour(self):
|
{"golden_diff": "diff --git a/redbot/core/commands/context.py b/redbot/core/commands/context.py\n--- a/redbot/core/commands/context.py\n+++ b/redbot/core/commands/context.py\n@@ -9,7 +9,7 @@\n from discord.ext.commands import Context as DPYContext\n \n from .requires import PermState\n-from ..utils.chat_formatting import box\n+from ..utils.chat_formatting import box, text_to_file\n from ..utils.predicates import MessagePredicate\n from ..utils import can_user_react_in, common_filters\n \n@@ -150,7 +150,11 @@\n return True\n \n async def send_interactive(\n- self, messages: Iterable[str], box_lang: str = None, timeout: int = 15\n+ self,\n+ messages: Iterable[str],\n+ box_lang: str = None,\n+ timeout: int = 15,\n+ join_character: str = \"\",\n ) -> List[discord.Message]:\n \"\"\"Send multiple messages interactively.\n \n@@ -168,6 +172,9 @@\n timeout : int\n How long the user has to respond to the prompt before it times out.\n After timing out, the bot deletes its prompt message.\n+ join_character : str\n+ The character used to join all the messages when the file output\n+ is selected.\n \n \"\"\"\n messages = tuple(messages)\n@@ -189,13 +196,14 @@\n is_are = \"are\"\n query = await self.send(\n \"There {} still {} message{} remaining. \"\n- \"Type `more` to continue.\"\n+ \"Type `more` to continue or `file` to upload all contents as a file.\"\n \"\".format(is_are, n_remaining, plural)\n )\n+ pred = MessagePredicate.lower_contained_in((\"more\", \"file\"), self)\n try:\n resp = await self.bot.wait_for(\n \"message\",\n- check=MessagePredicate.lower_equal_to(\"more\", self),\n+ check=pred,\n timeout=timeout,\n )\n except asyncio.TimeoutError:\n@@ -211,6 +219,9 @@\n # or channel is a DM\n with contextlib.suppress(discord.HTTPException):\n await query.delete()\n+ if pred.result == 1:\n+ await self.send(file=text_to_file(join_character.join(messages)))\n+ break\n return ret\n \n async def embed_colour(self):\n", "issue": "Add file output for code evaluation commands & Better long output handling\n### What component of Red (cog, command, API) would you like to see improvements on?\n\nDev\n\n### Describe the enhancement you're suggesting.\n\nCurrently when evaluating code with a large response, it would be shown in multiple messages. (when typing more)\r\n\r\nDropping a file would be way more convenient. \r\n\r\nFurthermore instead of typing more, a button will be nice. When clicking the button, the button disappears and a new message with a new button appears. 
(if there is further output)\n\n### Anything else?\n\n_No response_\n", "before_files": [{"content": "from __future__ import annotations\n\nimport asyncio\nimport contextlib\nimport os\nimport re\nfrom typing import Iterable, List, Union, Optional, TYPE_CHECKING\nimport discord\nfrom discord.ext.commands import Context as DPYContext\n\nfrom .requires import PermState\nfrom ..utils.chat_formatting import box\nfrom ..utils.predicates import MessagePredicate\nfrom ..utils import can_user_react_in, common_filters\n\nif TYPE_CHECKING:\n from .commands import Command\n from ..bot import Red\n\nTICK = \"\\N{WHITE HEAVY CHECK MARK}\"\n\n__all__ = [\"Context\", \"GuildContext\", \"DMContext\"]\n\n\nclass Context(DPYContext):\n \"\"\"Command invocation context for Red.\n\n All context passed into commands will be of this type.\n\n This class inherits from `discord.ext.commands.Context`.\n\n Attributes\n ----------\n assume_yes: bool\n Whether or not interactive checks should\n be skipped and assumed to be confirmed.\n\n This is intended for allowing automation of tasks.\n\n An example of this would be scheduled commands\n not requiring interaction if the cog developer\n checks this value prior to confirming something interactively.\n\n Depending on the potential impact of a command,\n it may still be appropriate not to use this setting.\n permission_state: PermState\n The permission state the current context is in.\n \"\"\"\n\n command: \"Command\"\n invoked_subcommand: \"Optional[Command]\"\n bot: \"Red\"\n\n def __init__(self, **attrs):\n self.assume_yes = attrs.pop(\"assume_yes\", False)\n super().__init__(**attrs)\n self.permission_state: PermState = PermState.NORMAL\n\n async def send(self, content=None, **kwargs):\n \"\"\"Sends a message to the destination with the content given.\n\n This acts the same as `discord.ext.commands.Context.send`, with\n one added keyword argument as detailed below in *Other Parameters*.\n\n Parameters\n ----------\n content : str\n The content of the message to send.\n\n Other Parameters\n ----------------\n filter : callable (`str`) -> `str`, optional\n A function which is used to filter the ``content`` before\n it is sent.\n This must take a single `str` as an argument, and return\n the processed `str`. 
When `None` is passed, ``content`` won't be touched.\n Defaults to `None`.\n **kwargs\n See `discord.ext.commands.Context.send`.\n\n Returns\n -------\n discord.Message\n The message that was sent.\n\n \"\"\"\n\n _filter = kwargs.pop(\"filter\", None)\n\n if _filter and content:\n content = _filter(str(content))\n\n return await super().send(content=content, **kwargs)\n\n async def send_help(self, command=None):\n \"\"\"Send the command help message.\"\"\"\n # This allows people to manually use this similarly\n # to the upstream d.py version, while retaining our use.\n command = command or self.command\n await self.bot.send_help_for(self, command)\n\n async def tick(self, *, message: Optional[str] = None) -> bool:\n \"\"\"Add a tick reaction to the command message.\n\n Keyword Arguments\n -----------------\n message : str, optional\n The message to send if adding the reaction doesn't succeed.\n\n Returns\n -------\n bool\n :code:`True` if adding the reaction succeeded.\n\n \"\"\"\n return await self.react_quietly(TICK, message=message)\n\n async def react_quietly(\n self,\n reaction: Union[discord.Emoji, discord.Reaction, discord.PartialEmoji, str],\n *,\n message: Optional[str] = None,\n ) -> bool:\n \"\"\"Adds a reaction to the command message.\n\n Parameters\n ----------\n reaction : Union[discord.Emoji, discord.Reaction, discord.PartialEmoji, str]\n The emoji to react with.\n\n Keyword Arguments\n -----------------\n message : str, optional\n The message to send if adding the reaction doesn't succeed.\n\n Returns\n -------\n bool\n :code:`True` if adding the reaction succeeded.\n \"\"\"\n try:\n if not can_user_react_in(self.me, self.channel):\n raise RuntimeError\n await self.message.add_reaction(reaction)\n except (RuntimeError, discord.HTTPException):\n if message is not None:\n await self.send(message)\n return False\n else:\n return True\n\n async def send_interactive(\n self, messages: Iterable[str], box_lang: str = None, timeout: int = 15\n ) -> List[discord.Message]:\n \"\"\"Send multiple messages interactively.\n\n The user will be prompted for whether or not they would like to view\n the next message, one at a time. They will also be notified of how\n many messages are remaining on each prompt.\n\n Parameters\n ----------\n messages : `iterable` of `str`\n The messages to send.\n box_lang : str\n If specified, each message will be contained within a codeblock of\n this language.\n timeout : int\n How long the user has to respond to the prompt before it times out.\n After timing out, the bot deletes its prompt message.\n\n \"\"\"\n messages = tuple(messages)\n ret = []\n\n for idx, page in enumerate(messages, 1):\n if box_lang is None:\n msg = await self.send(page)\n else:\n msg = await self.send(box(page, lang=box_lang))\n ret.append(msg)\n n_remaining = len(messages) - idx\n if n_remaining > 0:\n if n_remaining == 1:\n plural = \"\"\n is_are = \"is\"\n else:\n plural = \"s\"\n is_are = \"are\"\n query = await self.send(\n \"There {} still {} message{} remaining. 
\"\n \"Type `more` to continue.\"\n \"\".format(is_are, n_remaining, plural)\n )\n try:\n resp = await self.bot.wait_for(\n \"message\",\n check=MessagePredicate.lower_equal_to(\"more\", self),\n timeout=timeout,\n )\n except asyncio.TimeoutError:\n with contextlib.suppress(discord.HTTPException):\n await query.delete()\n break\n else:\n try:\n await self.channel.delete_messages((query, resp))\n except (discord.HTTPException, AttributeError):\n # In case the bot can't delete other users' messages,\n # or is not a bot account\n # or channel is a DM\n with contextlib.suppress(discord.HTTPException):\n await query.delete()\n return ret\n\n async def embed_colour(self):\n \"\"\"\n Helper function to get the colour for an embed.\n\n Returns\n -------\n discord.Colour:\n The colour to be used\n \"\"\"\n return await self.bot.get_embed_color(self)\n\n @property\n def embed_color(self):\n # Rather than double awaiting.\n return self.embed_colour\n\n async def embed_requested(self):\n \"\"\"\n Short-hand for calling bot.embed_requested with permission checks.\n\n Equivalent to:\n\n .. code:: python\n\n await ctx.bot.embed_requested(ctx)\n\n Returns\n -------\n bool:\n :code:`True` if an embed is requested\n \"\"\"\n return await self.bot.embed_requested(self)\n\n async def maybe_send_embed(self, message: str) -> discord.Message:\n \"\"\"\n Simple helper to send a simple message to context\n without manually checking ctx.embed_requested\n This should only be used for simple messages.\n\n Parameters\n ----------\n message: `str`\n The string to send\n\n Returns\n -------\n discord.Message:\n the message which was sent\n\n Raises\n ------\n discord.Forbidden\n see `discord.abc.Messageable.send`\n discord.HTTPException\n see `discord.abc.Messageable.send`\n ValueError\n when the message's length is not between 1 and 2000 characters.\n \"\"\"\n if not message or len(message) > 2000:\n raise ValueError(\"Message length must be between 1 and 2000\")\n if await self.embed_requested():\n return await self.send(\n embed=discord.Embed(description=message, color=(await self.embed_colour()))\n )\n else:\n return await self.send(\n message,\n allowed_mentions=discord.AllowedMentions(everyone=False, roles=False, users=False),\n )\n\n @property\n def me(self) -> Union[discord.ClientUser, discord.Member]:\n \"\"\"\n discord.abc.User: The bot member or user object.\n\n If the context is DM, this will be a `discord.User` object.\n \"\"\"\n if self.guild is not None:\n return self.guild.me\n else:\n return self.bot.user\n\n\nif TYPE_CHECKING or os.getenv(\"BUILDING_DOCS\", False):\n\n class DMContext(Context):\n \"\"\"\n At runtime, this will still be a normal context object.\n\n This lies about some type narrowing for type analysis in commands\n using a dm_only decorator.\n\n It is only correct to use when those types are already narrowed\n \"\"\"\n\n @property\n def author(self) -> discord.User:\n ...\n\n @property\n def channel(self) -> discord.DMChannel:\n ...\n\n @property\n def guild(self) -> None:\n ...\n\n @property\n def me(self) -> discord.ClientUser:\n ...\n\n class GuildContext(Context):\n \"\"\"\n At runtime, this will still be a normal context object.\n\n This lies about some type narrowing for type analysis in commands\n using a guild_only decorator.\n\n It is only correct to use when those types are already narrowed\n \"\"\"\n\n @property\n def author(self) -> discord.Member:\n ...\n\n @property\n def channel(self) -> Union[discord.TextChannel, discord.VoiceChannel, discord.Thread]:\n ...\n\n 
@property\n def guild(self) -> discord.Guild:\n ...\n\n @property\n def me(self) -> discord.Member:\n ...\n\nelse:\n GuildContext = Context\n DMContext = Context\n", "path": "redbot/core/commands/context.py"}]}
| 3,813 | 530 |
gh_patches_debug_12517
|
rasdani/github-patches
|
git_diff
|
avocado-framework__avocado-4253
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
distutils depreaction
https://www.python.org/dev/peps/pep-0632/
We have one module using this: `avocado.utils.kernel`.
</issue>
<code>
[start of avocado/utils/kernel.py]
1 # This program is free software; you can redistribute it and/or modify
2 # it under the terms of the GNU General Public License as published by
3 # the Free Software Foundation; either version 2 of the License, or
4 # (at your option) any later version.
5 #
6 # This program is distributed in the hope that it will be useful,
7 # but WITHOUT ANY WARRANTY; without even the implied warranty of
8 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
9 #
10 # See LICENSE for more details.
11 #
12 # Copyright: Red Hat Inc. 2014
13 # Author: Ruda Moura <[email protected]>
14 # Author: Santhosh G <[email protected]>
15
16 """
17 Provides utilities for the Linux kernel.
18 """
19
20 import logging
21 import multiprocessing
22 import os
23 import shutil
24 import tempfile
25 from distutils.version import LooseVersion # pylint: disable=E0611
26
27 from . import archive, asset, build, distro, process
28
29 LOG = logging.getLogger('avocado.test')
30
31
32 class KernelBuild:
33
34 """
35 Build the Linux Kernel from official tarballs.
36 """
37
38 URL = 'https://www.kernel.org/pub/linux/kernel/v{major}.x/'
39 SOURCE = 'linux-{version}.tar.gz'
40
41 def __init__(self, version, config_path=None, work_dir=None,
42 data_dirs=None):
43 """
44 Creates an instance of :class:`KernelBuild`.
45
46 :param version: kernel version ("3.19.8").
47 :param config_path: path to config file.
48 :param work_dir: work directory.
49 :param data_dirs: list of directories to keep the downloaded kernel
50 :return: None.
51 """
52 self.asset_path = None
53 self.version = version
54 self.config_path = config_path
55 self.distro = distro.detect()
56 if work_dir is None:
57 work_dir = tempfile.mkdtemp(prefix='avocado_' + __name__)
58 self.work_dir = work_dir
59 if data_dirs is not None:
60 self.data_dirs = data_dirs
61 else:
62 self.data_dirs = [self.work_dir]
63 self._build_dir = os.path.join(self.work_dir, 'linux-%s' % self.version)
64
65 def __repr__(self):
66 return "KernelBuild('%s, %s, %s')" % (self.version,
67 self.config_path,
68 self.work_dir)
69
70 @property
71 def vmlinux(self):
72 """
73 Return the vmlinux path if the file exists
74 """
75 if not self.build_dir:
76 return None
77 vmlinux_path = os.path.join(self.build_dir, 'vmlinux')
78 if os.path.isfile(vmlinux_path):
79 return vmlinux_path
80 return None
81
82 @property
83 def build_dir(self):
84 """
85 Return the build path if the directory exists
86 """
87 if os.path.isdir(self._build_dir):
88 return self._build_dir
89 return None
90
91 def _build_kernel_url(self, base_url=None):
92 kernel_file = self.SOURCE.format(version=self.version)
93 if base_url is None:
94 base_url = self.URL.format(major=self.version.split('.', 1)[0])
95 return base_url + kernel_file
96
97 def download(self, url=None):
98 """
99 Download kernel source.
100
101 :param url: override the url from where to fetch the kernel
102 source tarball
103 :type url: str or None
104 """
105 full_url = self._build_kernel_url(base_url=url)
106 self.asset_path = asset.Asset(full_url, asset_hash=None,
107 algorithm=None, locations=None,
108 cache_dirs=self.data_dirs).fetch()
109
110 def uncompress(self):
111 """
112 Uncompress kernel source.
113
114 :raises: Exception in case the tarball is not downloaded
115 """
116 if self.asset_path:
117 LOG.info("Uncompressing tarball")
118 archive.extract(self.asset_path, self.work_dir)
119 else:
120 raise Exception("Unable to find the tarball")
121
122 def configure(self, targets=('defconfig'), extra_configs=None):
123 """
124 Configure/prepare kernel source to build.
125
126 :param targets: configuration targets. Default is 'defconfig'.
127 :type targets: list of str
128 :param extra_configs: additional configurations in the form of
129 CONFIG_NAME=VALUE.
130 :type extra_configs: list of str
131 """
132 build.make(self._build_dir, extra_args='-C %s mrproper' %
133 self._build_dir)
134 if self.config_path is not None:
135 dotconfig = os.path.join(self._build_dir, '.config')
136 shutil.copy(self.config_path, dotconfig)
137 build.make(self._build_dir, extra_args='-C %s olddefconfig' %
138 self._build_dir)
139 else:
140 if isinstance(targets, list):
141 _targets = " ".join(targets)
142 else:
143 _targets = targets
144 build.make(self.build_dir,
145 extra_args='-C %s %s' % (self.build_dir, _targets))
146 if extra_configs:
147 with tempfile.NamedTemporaryFile(mode='w+t',
148 prefix='avocado_') as config_file:
149 config_file.write('\n'.join(extra_configs))
150 config_file.flush()
151 cmd = ['cd', self._build_dir, '&&',
152 './scripts/kconfig/merge_config.sh', '.config',
153 config_file.name]
154 process.run(" ".join(cmd), shell=True)
155
156 def build(self, binary_package=False, njobs=multiprocessing.cpu_count()):
157 """
158 Build kernel from source.
159
160 :param binary_package: when True, the appropriate
161 platform package is built
162 for install() to use
163 :type binary_pacakge: bool
164 :param njobs: number of jobs. It is mapped to the -j option from make.
165 If njobs is None then do not limit the number of jobs
166 (e.g. uses -j without value). The -j is omitted if a
167 value equal or less than zero is passed. Default value
168 is set to `multiprocessing.cpu_count()`.
169 :type njobs: int or None
170 """
171 make_args = []
172 LOG.info("Starting build the kernel")
173
174 if njobs is None:
175 make_args.append('-j')
176 elif njobs > 0:
177 make_args.extend(['-j', str(njobs)])
178 make_args.extend(['-C', self._build_dir])
179
180 if binary_package is True:
181 if self.distro.name == "Ubuntu":
182 make_args.append("deb-pkg")
183
184 build.make(self._build_dir, extra_args=" ".join(make_args))
185
186 def install(self):
187 """
188 Install built kernel.
189 """
190 LOG.info("Starting kernel install")
191 if self.distro.name == "Ubuntu":
192 process.run('dpkg -i %s/*.deb' %
193 self.work_dir, shell=True, sudo=True)
194 else:
195 LOG.info("Skipping kernel install")
196
197 def __del__(self):
198 shutil.rmtree(self.work_dir)
199
200
201 def check_version(version):
202 """
203 This utility function compares the current kernel version with
204 the version parameter and gives assertion error if the version
205 parameter is greater.
206
207 :type version: string
208 :param version: version to be compared with current kernel version
209 """
210 assert LooseVersion(os.uname()[2]) > LooseVersion(version), "Old kernel"
211
[end of avocado/utils/kernel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/avocado/utils/kernel.py b/avocado/utils/kernel.py
--- a/avocado/utils/kernel.py
+++ b/avocado/utils/kernel.py
@@ -22,7 +22,8 @@
import os
import shutil
import tempfile
-from distutils.version import LooseVersion # pylint: disable=E0611
+
+from pkg_resources import packaging
from . import archive, asset, build, distro, process
@@ -207,4 +208,6 @@
:type version: string
:param version: version to be compared with current kernel version
"""
- assert LooseVersion(os.uname()[2]) > LooseVersion(version), "Old kernel"
+ os_version = packaging.version.parse(os.uname()[2])
+ version = packaging.version.parse(version)
+ assert os_version > version, "Old kernel"
\ No newline at end of file
|
{"golden_diff": "diff --git a/avocado/utils/kernel.py b/avocado/utils/kernel.py\n--- a/avocado/utils/kernel.py\n+++ b/avocado/utils/kernel.py\n@@ -22,7 +22,8 @@\n import os\n import shutil\n import tempfile\n-from distutils.version import LooseVersion # pylint: disable=E0611\n+\n+from pkg_resources import packaging\n \n from . import archive, asset, build, distro, process\n \n@@ -207,4 +208,6 @@\n :type version: string\n :param version: version to be compared with current kernel version\n \"\"\"\n- assert LooseVersion(os.uname()[2]) > LooseVersion(version), \"Old kernel\"\n+ os_version = packaging.version.parse(os.uname()[2])\n+ version = packaging.version.parse(version)\n+ assert os_version > version, \"Old kernel\"\n\\ No newline at end of file\n", "issue": "distutils depreaction\nhttps://www.python.org/dev/peps/pep-0632/\r\n\r\nWe have one module using this: `avocado.utils.kernel`.\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 2014\n# Author: Ruda Moura <[email protected]>\n# Author: Santhosh G <[email protected]>\n\n\"\"\"\nProvides utilities for the Linux kernel.\n\"\"\"\n\nimport logging\nimport multiprocessing\nimport os\nimport shutil\nimport tempfile\nfrom distutils.version import LooseVersion # pylint: disable=E0611\n\nfrom . import archive, asset, build, distro, process\n\nLOG = logging.getLogger('avocado.test')\n\n\nclass KernelBuild:\n\n \"\"\"\n Build the Linux Kernel from official tarballs.\n \"\"\"\n\n URL = 'https://www.kernel.org/pub/linux/kernel/v{major}.x/'\n SOURCE = 'linux-{version}.tar.gz'\n\n def __init__(self, version, config_path=None, work_dir=None,\n data_dirs=None):\n \"\"\"\n Creates an instance of :class:`KernelBuild`.\n\n :param version: kernel version (\"3.19.8\").\n :param config_path: path to config file.\n :param work_dir: work directory.\n :param data_dirs: list of directories to keep the downloaded kernel\n :return: None.\n \"\"\"\n self.asset_path = None\n self.version = version\n self.config_path = config_path\n self.distro = distro.detect()\n if work_dir is None:\n work_dir = tempfile.mkdtemp(prefix='avocado_' + __name__)\n self.work_dir = work_dir\n if data_dirs is not None:\n self.data_dirs = data_dirs\n else:\n self.data_dirs = [self.work_dir]\n self._build_dir = os.path.join(self.work_dir, 'linux-%s' % self.version)\n\n def __repr__(self):\n return \"KernelBuild('%s, %s, %s')\" % (self.version,\n self.config_path,\n self.work_dir)\n\n @property\n def vmlinux(self):\n \"\"\"\n Return the vmlinux path if the file exists\n \"\"\"\n if not self.build_dir:\n return None\n vmlinux_path = os.path.join(self.build_dir, 'vmlinux')\n if os.path.isfile(vmlinux_path):\n return vmlinux_path\n return None\n\n @property\n def build_dir(self):\n \"\"\"\n Return the build path if the directory exists\n \"\"\"\n if os.path.isdir(self._build_dir):\n return self._build_dir\n return None\n\n def _build_kernel_url(self, base_url=None):\n kernel_file = self.SOURCE.format(version=self.version)\n if base_url is None:\n base_url = self.URL.format(major=self.version.split('.', 1)[0])\n 
return base_url + kernel_file\n\n def download(self, url=None):\n \"\"\"\n Download kernel source.\n\n :param url: override the url from where to fetch the kernel\n source tarball\n :type url: str or None\n \"\"\"\n full_url = self._build_kernel_url(base_url=url)\n self.asset_path = asset.Asset(full_url, asset_hash=None,\n algorithm=None, locations=None,\n cache_dirs=self.data_dirs).fetch()\n\n def uncompress(self):\n \"\"\"\n Uncompress kernel source.\n\n :raises: Exception in case the tarball is not downloaded\n \"\"\"\n if self.asset_path:\n LOG.info(\"Uncompressing tarball\")\n archive.extract(self.asset_path, self.work_dir)\n else:\n raise Exception(\"Unable to find the tarball\")\n\n def configure(self, targets=('defconfig'), extra_configs=None):\n \"\"\"\n Configure/prepare kernel source to build.\n\n :param targets: configuration targets. Default is 'defconfig'.\n :type targets: list of str\n :param extra_configs: additional configurations in the form of\n CONFIG_NAME=VALUE.\n :type extra_configs: list of str\n \"\"\"\n build.make(self._build_dir, extra_args='-C %s mrproper' %\n self._build_dir)\n if self.config_path is not None:\n dotconfig = os.path.join(self._build_dir, '.config')\n shutil.copy(self.config_path, dotconfig)\n build.make(self._build_dir, extra_args='-C %s olddefconfig' %\n self._build_dir)\n else:\n if isinstance(targets, list):\n _targets = \" \".join(targets)\n else:\n _targets = targets\n build.make(self.build_dir,\n extra_args='-C %s %s' % (self.build_dir, _targets))\n if extra_configs:\n with tempfile.NamedTemporaryFile(mode='w+t',\n prefix='avocado_') as config_file:\n config_file.write('\\n'.join(extra_configs))\n config_file.flush()\n cmd = ['cd', self._build_dir, '&&',\n './scripts/kconfig/merge_config.sh', '.config',\n config_file.name]\n process.run(\" \".join(cmd), shell=True)\n\n def build(self, binary_package=False, njobs=multiprocessing.cpu_count()):\n \"\"\"\n Build kernel from source.\n\n :param binary_package: when True, the appropriate\n platform package is built\n for install() to use\n :type binary_pacakge: bool\n :param njobs: number of jobs. It is mapped to the -j option from make.\n If njobs is None then do not limit the number of jobs\n (e.g. uses -j without value). The -j is omitted if a\n value equal or less than zero is passed. Default value\n is set to `multiprocessing.cpu_count()`.\n :type njobs: int or None\n \"\"\"\n make_args = []\n LOG.info(\"Starting build the kernel\")\n\n if njobs is None:\n make_args.append('-j')\n elif njobs > 0:\n make_args.extend(['-j', str(njobs)])\n make_args.extend(['-C', self._build_dir])\n\n if binary_package is True:\n if self.distro.name == \"Ubuntu\":\n make_args.append(\"deb-pkg\")\n\n build.make(self._build_dir, extra_args=\" \".join(make_args))\n\n def install(self):\n \"\"\"\n Install built kernel.\n \"\"\"\n LOG.info(\"Starting kernel install\")\n if self.distro.name == \"Ubuntu\":\n process.run('dpkg -i %s/*.deb' %\n self.work_dir, shell=True, sudo=True)\n else:\n LOG.info(\"Skipping kernel install\")\n\n def __del__(self):\n shutil.rmtree(self.work_dir)\n\n\ndef check_version(version):\n \"\"\"\n This utility function compares the current kernel version with\n the version parameter and gives assertion error if the version\n parameter is greater.\n\n :type version: string\n :param version: version to be compared with current kernel version\n \"\"\"\n assert LooseVersion(os.uname()[2]) > LooseVersion(version), \"Old kernel\"\n", "path": "avocado/utils/kernel.py"}]}
| 2,683 | 197 |
gh_patches_debug_49088
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-5232
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
is_stripe_linked property does not set on connecting stripe account
**Describe the bug**
is_stripe_linked property does not set on connecting stripe account. It throws error.
**Expected behavior**
Property should set appropriately.
**Error**
```
Class 'sqlalchemy.orm.query.Query' is not mapped Traceback (most recent call last):
File "/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 1722, in add
state = attributes.instance_state(instance)
AttributeError: 'Query' object has no attribute '_sa_instance_state'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rs/Pradeep/github/open-event-server/app/api/helpers/db.py", line 22, in save_to_db
db.session.add(item)
File "/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/scoping.py", line 157, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 1724, in add
raise exc.UnmappedInstanceError(instance)
sqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.query.Query' is not mapped
ERROR:root:DB Exception! Class 'sqlalchemy.orm.query.Query' is not mapped
```
</issue>
<code>
[start of app/api/stripe_authorization.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList
2 from sqlalchemy.orm.exc import NoResultFound
3
4 from app.api.bootstrap import api
5 from app.api.helpers.db import safe_query, get_count, save_to_db
6 from app.api.helpers.exceptions import ForbiddenException, ConflictException, UnprocessableEntity
7 from app.api.helpers.payment import StripePaymentsManager
8 from app.api.helpers.permission_manager import has_access
9 from app.api.helpers.permissions import jwt_required
10 from app.api.helpers.utilities import require_relationship
11 from app.api.schema.stripe_authorization import StripeAuthorizationSchema
12 from app.models import db
13 from app.models.event import Event
14 from app.models.stripe_authorization import StripeAuthorization
15
16
17 class StripeAuthorizationListPost(ResourceList):
18 """
19 List and Create Stripe Authorization
20 """
21 def before_post(self, args, kwargs, data):
22 """
23 before post method to check for required relationship and proper permission
24 :param args:
25 :param kwargs:
26 :param data:
27 :return:
28 """
29 require_relationship(['event'], data)
30 if not has_access('is_organizer', event_id=data['event']):
31 raise ForbiddenException({'source': ''}, "Minimum Organizer access required")
32 if get_count(db.session.query(Event).filter_by(id=int(data['event']), can_pay_by_stripe=False)) > 0:
33 raise ForbiddenException({'pointer': ''}, "Stripe payment is disabled for this Event")
34
35 def before_create_object(self, data, view_kwargs):
36 """
37 method to check if stripe authorization object already exists for an event.
38 Raises ConflictException if it already exists.
39 If it doesn't, then uses the StripePaymentManager to get the other credentials from Stripe.
40 :param data:
41 :param view_kwargs:
42 :return:
43 """
44 try:
45 self.session.query(StripeAuthorization).filter_by(event_id=data['event'], deleted_at=None).one()
46 except NoResultFound:
47 credentials = StripePaymentsManager\
48 .get_event_organizer_credentials_from_stripe(data['stripe_auth_code'])
49 if 'error' in credentials:
50 raise UnprocessableEntity({'pointer': '/data/stripe_auth_code'}, credentials['error_description'])
51 data['stripe_secret_key'] = credentials['access_token']
52 data['stripe_refresh_token'] = credentials['refresh_token']
53 data['stripe_publishable_key'] = credentials['stripe_publishable_key']
54 data['stripe_user_id'] = credentials['stripe_user_id']
55 else:
56 raise ConflictException({'pointer': '/data/relationships/event'},
57 "Stripe Authorization already exists for this event")
58
59 def after_create_object(self, stripe_authorization, data, view_kwargs):
60 """
61 after create object method for StripeAuthorizationListPost Class
62 :param stripe_authorization: Stripe authorization created from mashmallow_jsonapi
63 :param data:
64 :param view_kwargs:
65 :return:
66 """
67 event = db.session.query(Event).filter_by(id=int(data['event']))
68 event.is_stripe_linked = True
69 save_to_db(event)
70
71 schema = StripeAuthorizationSchema
72 decorators = (jwt_required, )
73 methods = ['POST']
74 data_layer = {'session': db.session,
75 'model': StripeAuthorization,
76 'methods': {
77 'before_create_object': before_create_object,
78 'after_create_object': after_create_object
79 }}
80
81
82 class StripeAuthorizationDetail(ResourceDetail):
83 """
84 Stripe Authorization Detail Resource by ID
85 """
86 def before_get_object(self, view_kwargs):
87 """
88 method to get id of stripe authorization related to an event
89 :param view_kwargs:
90 :return:
91 """
92 if view_kwargs.get('event_identifier'):
93 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
94 view_kwargs['event_id'] = event.id
95
96 if view_kwargs.get('event_id'):
97 stripe_authorization = \
98 safe_query(self, StripeAuthorization, 'event_id', view_kwargs['event_id'], 'event_id')
99 view_kwargs['id'] = stripe_authorization.id
100
101 def after_delete_object(self, stripe_authorization, view_kwargs):
102 """Make work after delete object
103 :param stripe_authorization: stripe authorization.
104 :param dict view_kwargs: kwargs from the resource view
105 """
106 event = stripe_authorization.event
107 event.is_stripe_linked = False
108 save_to_db(event)
109
110 decorators = (api.has_permission('is_coorganizer', fetch="event_id",
111 fetch_as="event_id", model=StripeAuthorization),)
112 schema = StripeAuthorizationSchema
113 data_layer = {'session': db.session,
114 'model': StripeAuthorization,
115 'methods': {
116 'before_get_object': before_get_object,
117 'after_delete_object': after_delete_object
118 }}
119
120
121 class StripeAuthorizationRelationship(ResourceDetail):
122 """
123 Stripe Authorization Relationship
124 """
125
126 decorators = (api.has_permission('is_coorganizer', fetch="event_id",
127 fetch_as="event_id", model=StripeAuthorization),)
128 schema = StripeAuthorizationSchema
129 data_layer = {'session': db.session,
130 'model': StripeAuthorization}
131
[end of app/api/stripe_authorization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/api/stripe_authorization.py b/app/api/stripe_authorization.py
--- a/app/api/stripe_authorization.py
+++ b/app/api/stripe_authorization.py
@@ -64,7 +64,7 @@
:param view_kwargs:
:return:
"""
- event = db.session.query(Event).filter_by(id=int(data['event']))
+ event = db.session.query(Event).filter_by(id=int(data['event'])).one()
event.is_stripe_linked = True
save_to_db(event)
|
{"golden_diff": "diff --git a/app/api/stripe_authorization.py b/app/api/stripe_authorization.py\n--- a/app/api/stripe_authorization.py\n+++ b/app/api/stripe_authorization.py\n@@ -64,7 +64,7 @@\n :param view_kwargs:\n :return:\n \"\"\"\n- event = db.session.query(Event).filter_by(id=int(data['event']))\n+ event = db.session.query(Event).filter_by(id=int(data['event'])).one()\n event.is_stripe_linked = True\n save_to_db(event)\n", "issue": "is_stripe_linked property does not set on connecting stripe account\n**Describe the bug**\r\nis_stripe_linked property does not set on connecting stripe account. It throws error.\r\n\r\n**Expected behavior**\r\nProperty should set appropriately.\r\n\r\n**Error**\r\n```\r\nClass 'sqlalchemy.orm.query.Query' is not mapped Traceback (most recent call last):\r\n File \"/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/session.py\", line 1722, in add\r\n state = attributes.instance_state(instance)\r\nAttributeError: 'Query' object has no attribute '_sa_instance_state'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/rs/Pradeep/github/open-event-server/app/api/helpers/db.py\", line 22, in save_to_db\r\n db.session.add(item)\r\n File \"/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/scoping.py\", line 157, in do\r\n return getattr(self.registry(), name)(*args, **kwargs)\r\n File \"/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/session.py\", line 1724, in add\r\n raise exc.UnmappedInstanceError(instance)\r\nsqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.query.Query' is not mapped\r\nERROR:root:DB Exception! 
Class 'sqlalchemy.orm.query.Query' is not mapped\r\n```\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count, save_to_db\nfrom app.api.helpers.exceptions import ForbiddenException, ConflictException, UnprocessableEntity\nfrom app.api.helpers.payment import StripePaymentsManager\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.stripe_authorization import StripeAuthorizationSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.stripe_authorization import StripeAuthorization\n\n\nclass StripeAuthorizationListPost(ResourceList):\n \"\"\"\n List and Create Stripe Authorization\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationship and proper permission\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_organizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, \"Minimum Organizer access required\")\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), can_pay_by_stripe=False)) > 0:\n raise ForbiddenException({'pointer': ''}, \"Stripe payment is disabled for this Event\")\n\n def before_create_object(self, data, view_kwargs):\n \"\"\"\n method to check if stripe authorization object already exists for an event.\n Raises ConflictException if it already exists.\n If it doesn't, then uses the StripePaymentManager to get the other credentials from Stripe.\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n try:\n self.session.query(StripeAuthorization).filter_by(event_id=data['event'], deleted_at=None).one()\n except NoResultFound:\n credentials = StripePaymentsManager\\\n .get_event_organizer_credentials_from_stripe(data['stripe_auth_code'])\n if 'error' in credentials:\n raise UnprocessableEntity({'pointer': '/data/stripe_auth_code'}, credentials['error_description'])\n data['stripe_secret_key'] = credentials['access_token']\n data['stripe_refresh_token'] = credentials['refresh_token']\n data['stripe_publishable_key'] = credentials['stripe_publishable_key']\n data['stripe_user_id'] = credentials['stripe_user_id']\n else:\n raise ConflictException({'pointer': '/data/relationships/event'},\n \"Stripe Authorization already exists for this event\")\n\n def after_create_object(self, stripe_authorization, data, view_kwargs):\n \"\"\"\n after create object method for StripeAuthorizationListPost Class\n :param stripe_authorization: Stripe authorization created from mashmallow_jsonapi\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n event = db.session.query(Event).filter_by(id=int(data['event']))\n event.is_stripe_linked = True\n save_to_db(event)\n\n schema = StripeAuthorizationSchema\n decorators = (jwt_required, )\n methods = ['POST']\n data_layer = {'session': db.session,\n 'model': StripeAuthorization,\n 'methods': {\n 'before_create_object': before_create_object,\n 'after_create_object': after_create_object\n }}\n\n\nclass StripeAuthorizationDetail(ResourceDetail):\n \"\"\"\n Stripe Authorization Detail Resource by ID\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n method to get id of stripe authorization related to an event\n :param 
view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n view_kwargs['event_id'] = event.id\n\n if view_kwargs.get('event_id'):\n stripe_authorization = \\\n safe_query(self, StripeAuthorization, 'event_id', view_kwargs['event_id'], 'event_id')\n view_kwargs['id'] = stripe_authorization.id\n\n def after_delete_object(self, stripe_authorization, view_kwargs):\n \"\"\"Make work after delete object\n :param stripe_authorization: stripe authorization.\n :param dict view_kwargs: kwargs from the resource view\n \"\"\"\n event = stripe_authorization.event\n event.is_stripe_linked = False\n save_to_db(event)\n\n decorators = (api.has_permission('is_coorganizer', fetch=\"event_id\",\n fetch_as=\"event_id\", model=StripeAuthorization),)\n schema = StripeAuthorizationSchema\n data_layer = {'session': db.session,\n 'model': StripeAuthorization,\n 'methods': {\n 'before_get_object': before_get_object,\n 'after_delete_object': after_delete_object\n }}\n\n\nclass StripeAuthorizationRelationship(ResourceDetail):\n \"\"\"\n Stripe Authorization Relationship\n \"\"\"\n\n decorators = (api.has_permission('is_coorganizer', fetch=\"event_id\",\n fetch_as=\"event_id\", model=StripeAuthorization),)\n schema = StripeAuthorizationSchema\n data_layer = {'session': db.session,\n 'model': StripeAuthorization}\n", "path": "app/api/stripe_authorization.py"}]}
| 2,217 | 116 |
gh_patches_debug_1408
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-3570
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cupy.percentile only calculates integer percentiles when the input data is an integer.
This seems to be caused by a cast of the percentiles array `q` to the same type as the input array `a` in the cupy.percentile source :
https://github.com/cupy/cupy/blob/adfcc44bc9a17886a340cd85b7c9ebadd94b38a1/cupy/statistics/order.py#L189
Example code to reproduce the issue:
`cupy.percentile(cupy.arange(1001).astype(cupy.int16),[98, 99, 99.9, 100]).get()`
`array([ 980., 990., 990., 1000.])`
`cupy.percentile(cupy.arange(1001).astype(cupy.float16),[98, 99, 99.9, 100]).get()`
`array([ 980., 990., 999., 1000.])`
For comparison the numpy version always calculates correctly:
`numpy.percentile(numpy.arange(1001).astype(numpy.int16),[98, 99, 99.9, 100])`
`array([ 980., 990., 999., 1000.])`
Cupy configuration:
CuPy Version : 7.6.0
CUDA Root : C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2
CUDA Build Version : 10020
CUDA Driver Version : 10020
CUDA Runtime Version : 10020
</issue>
<code>
[start of cupy/statistics/order.py]
1 import warnings
2
3 import cupy
4 from cupy import core
5 from cupy.core import _routines_statistics as _statistics
6 from cupy.core import _fusion_thread_local
7 from cupy.logic import content
8
9
10 def amin(a, axis=None, out=None, keepdims=False):
11 """Returns the minimum of an array or the minimum along an axis.
12
13 .. note::
14
15 When at least one element is NaN, the corresponding min value will be
16 NaN.
17
18 Args:
19 a (cupy.ndarray): Array to take the minimum.
20 axis (int): Along which axis to take the minimum. The flattened array
21 is used by default.
22 out (cupy.ndarray): Output array.
23 keepdims (bool): If ``True``, the axis is remained as an axis of
24 size one.
25
26 Returns:
27 cupy.ndarray: The minimum of ``a``, along the axis if specified.
28
29 .. seealso:: :func:`numpy.amin`
30
31 """
32 if _fusion_thread_local.is_fusing():
33 if keepdims:
34 raise NotImplementedError(
35 'cupy.amin does not support `keepdims` in fusion yet.')
36 return _fusion_thread_local.call_reduction(
37 _statistics.amin, a, axis=axis, out=out)
38
39 # TODO(okuta): check type
40 return a.min(axis=axis, out=out, keepdims=keepdims)
41
42
43 def amax(a, axis=None, out=None, keepdims=False):
44 """Returns the maximum of an array or the maximum along an axis.
45
46 .. note::
47
48 When at least one element is NaN, the corresponding min value will be
49 NaN.
50
51 Args:
52 a (cupy.ndarray): Array to take the maximum.
53 axis (int): Along which axis to take the maximum. The flattened array
54 is used by default.
55 out (cupy.ndarray): Output array.
56 keepdims (bool): If ``True``, the axis is remained as an axis of
57 size one.
58
59 Returns:
60 cupy.ndarray: The maximum of ``a``, along the axis if specified.
61
62 .. seealso:: :func:`numpy.amax`
63
64 """
65 if _fusion_thread_local.is_fusing():
66 if keepdims:
67 raise NotImplementedError(
68 'cupy.amax does not support `keepdims` in fusion yet.')
69 return _fusion_thread_local.call_reduction(
70 _statistics.amax, a, axis=axis, out=out)
71
72 # TODO(okuta): check type
73 return a.max(axis=axis, out=out, keepdims=keepdims)
74
75
76 def nanmin(a, axis=None, out=None, keepdims=False):
77 """Returns the minimum of an array along an axis ignoring NaN.
78
79 When there is a slice whose elements are all NaN, a :class:`RuntimeWarning`
80 is raised and NaN is returned.
81
82 Args:
83 a (cupy.ndarray): Array to take the minimum.
84 axis (int): Along which axis to take the minimum. The flattened array
85 is used by default.
86 out (cupy.ndarray): Output array.
87 keepdims (bool): If ``True``, the axis is remained as an axis of
88 size one.
89
90 Returns:
91 cupy.ndarray: The minimum of ``a``, along the axis if specified.
92
93 .. warning::
94
95 This function may synchronize the device.
96
97 .. seealso:: :func:`numpy.nanmin`
98
99 """
100 # TODO(niboshi): Avoid synchronization.
101 res = core.nanmin(a, axis=axis, out=out, keepdims=keepdims)
102 if content.isnan(res).any(): # synchronize!
103 warnings.warn('All-NaN slice encountered', RuntimeWarning)
104 return res
105
106
107 def nanmax(a, axis=None, out=None, keepdims=False):
108 """Returns the maximum of an array along an axis ignoring NaN.
109
110 When there is a slice whose elements are all NaN, a :class:`RuntimeWarning`
111 is raised and NaN is returned.
112
113 Args:
114 a (cupy.ndarray): Array to take the maximum.
115 axis (int): Along which axis to take the maximum. The flattened array
116 is used by default.
117 out (cupy.ndarray): Output array.
118 keepdims (bool): If ``True``, the axis is remained as an axis of
119 size one.
120
121 Returns:
122 cupy.ndarray: The maximum of ``a``, along the axis if specified.
123
124 .. warning::
125
126 This function may synchronize the device.
127
128 .. seealso:: :func:`numpy.nanmax`
129
130 """
131 # TODO(niboshi): Avoid synchronization.
132 res = core.nanmax(a, axis=axis, out=out, keepdims=keepdims)
133 if content.isnan(res).any(): # synchronize!
134 warnings.warn('All-NaN slice encountered', RuntimeWarning)
135 return res
136
137
138 def ptp(a, axis=None, out=None, keepdims=False):
139 """Returns the range of values (maximum - minimum) along an axis.
140
141 .. note::
142
143 The name of the function comes from the acronym for 'peak to peak'.
144
145 When at least one element is NaN, the corresponding ptp value will be
146 NaN.
147
148 Args:
149 a (cupy.ndarray): Array over which to take the range.
150 axis (int): Axis along which to take the minimum. The flattened
151 array is used by default.
152 out (cupy.ndarray): Output array.
153 keepdims (bool): If ``True``, the axis is retained as an axis of
154 size one.
155
156 Returns:
157 cupy.ndarray: The minimum of ``a``, along the axis if specified.
158
159 .. seealso:: :func:`numpy.amin`
160
161 """
162 return a.ptp(axis=axis, out=out, keepdims=keepdims)
163
164
165 def percentile(a, q, axis=None, out=None, interpolation='linear',
166 keepdims=False):
167 """Computes the q-th percentile of the data along the specified axis.
168
169 Args:
170 a (cupy.ndarray): Array for which to compute percentiles.
171 q (float, tuple of floats or cupy.ndarray): Percentiles to compute
172 in the range between 0 and 100 inclusive.
173 axis (int or tuple of ints): Along which axis or axes to compute the
174 percentiles. The flattened array is used by default.
175 out (cupy.ndarray): Output array.
176 interpolation (str): Interpolation method when a quantile lies between
177 two data points. ``linear`` interpolation is used by default.
178 Supported interpolations are``lower``, ``higher``, ``midpoint``,
179 ``nearest`` and ``linear``.
180 keepdims (bool): If ``True``, the axis is remained as an axis of
181 size one.
182
183 Returns:
184 cupy.ndarray: The percentiles of ``a``, along the axis if specified.
185
186 .. seealso:: :func:`numpy.percentile`
187
188 """
189 q = cupy.asarray(q, dtype=a.dtype)
190 if q.ndim == 0:
191 q = q[None]
192 zerod = True
193 else:
194 zerod = False
195 if q.ndim > 1:
196 raise ValueError('Expected q to have a dimension of 1.\n'
197 'Actual: {0} != 1'.format(q.ndim))
198
199 if keepdims:
200 if axis is None:
201 keepdim = (1,) * a.ndim
202 else:
203 keepdim = list(a.shape)
204 for ax in axis:
205 keepdim[ax % a.ndim] = 1
206 keepdim = tuple(keepdim)
207
208 # Copy a since we need it sorted but without modifying the original array
209 if isinstance(axis, int):
210 axis = axis,
211 if axis is None:
212 ap = a.flatten()
213 nkeep = 0
214 else:
215 # Reduce axes from a and put them last
216 axis = tuple(ax % a.ndim for ax in axis)
217 keep = set(range(a.ndim)) - set(axis)
218 nkeep = len(keep)
219 for i, s in enumerate(sorted(keep)):
220 a = a.swapaxes(i, s)
221 ap = a.reshape(a.shape[:nkeep] + (-1,)).copy()
222
223 axis = -1
224 ap.sort(axis=axis)
225 Nx = ap.shape[axis]
226 indices = q * 0.01 * (Nx - 1.) # percents to decimals
227
228 if interpolation == 'lower':
229 indices = cupy.floor(indices).astype(cupy.int32)
230 elif interpolation == 'higher':
231 indices = cupy.ceil(indices).astype(cupy.int32)
232 elif interpolation == 'midpoint':
233 indices = 0.5 * (cupy.floor(indices) + cupy.ceil(indices))
234 elif interpolation == 'nearest':
235 # TODO(hvy): Implement nearest using around
236 raise ValueError('\'nearest\' interpolation is not yet supported. '
237 'Please use any other interpolation method.')
238 elif interpolation == 'linear':
239 pass
240 else:
241 raise ValueError('Unexpected interpolation method.\n'
242 'Actual: \'{0}\' not in (\'linear\', \'lower\', '
243 '\'higher\', \'midpoint\')'.format(interpolation))
244
245 if indices.dtype == cupy.int32:
246 ret = cupy.rollaxis(ap, axis)
247 ret = ret.take(indices, axis=0, out=out)
248 else:
249 if out is None:
250 ret = cupy.empty(ap.shape[:-1] + q.shape, dtype=cupy.float64)
251 else:
252 ret = cupy.rollaxis(out, 0, out.ndim)
253
254 cupy.ElementwiseKernel(
255 'S idx, raw T a, raw int32 offset', 'U ret',
256 '''
257 ptrdiff_t idx_below = floor(idx);
258 U weight_above = idx - idx_below;
259
260 ptrdiff_t offset_i = _ind.get()[0] * offset;
261 ret = a[offset_i + idx_below] * (1.0 - weight_above)
262 + a[offset_i + idx_below + 1] * weight_above;
263 ''',
264 'percentile_weightnening'
265 )(indices, ap, ap.shape[-1] if ap.ndim > 1 else 0, ret)
266 ret = cupy.rollaxis(ret, -1) # Roll q dimension back to first axis
267
268 if zerod:
269 ret = ret.squeeze(0)
270 if keepdims:
271 if q.size > 1:
272 keepdim = (-1,) + keepdim
273 ret = ret.reshape(keepdim)
274
275 return core._internal_ascontiguousarray(ret)
276
[end of cupy/statistics/order.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupy/statistics/order.py b/cupy/statistics/order.py
--- a/cupy/statistics/order.py
+++ b/cupy/statistics/order.py
@@ -186,7 +186,8 @@
.. seealso:: :func:`numpy.percentile`
"""
- q = cupy.asarray(q, dtype=a.dtype)
+ if not isinstance(q, cupy.ndarray):
+ q = cupy.asarray(q, dtype='d')
if q.ndim == 0:
q = q[None]
zerod = True
|
{"golden_diff": "diff --git a/cupy/statistics/order.py b/cupy/statistics/order.py\n--- a/cupy/statistics/order.py\n+++ b/cupy/statistics/order.py\n@@ -186,7 +186,8 @@\n .. seealso:: :func:`numpy.percentile`\n \n \"\"\"\n- q = cupy.asarray(q, dtype=a.dtype)\n+ if not isinstance(q, cupy.ndarray):\n+ q = cupy.asarray(q, dtype='d')\n if q.ndim == 0:\n q = q[None]\n zerod = True\n", "issue": "cupy.percentile only calculates integer percentiles when the input data is an integer.\nThis seems to be caused by a cast of the percentiles array `q` to the same type as the input array `a` in the cupy.percentile source :\r\n\r\nhttps://github.com/cupy/cupy/blob/adfcc44bc9a17886a340cd85b7c9ebadd94b38a1/cupy/statistics/order.py#L189\r\n\r\nExample code to reproduce the issue:\r\n\r\n`cupy.percentile(cupy.arange(1001).astype(cupy.int16),[98, 99, 99.9, 100]).get()`\r\n`array([ 980., 990., 990., 1000.])`\r\n\r\n`cupy.percentile(cupy.arange(1001).astype(cupy.float16),[98, 99, 99.9, 100]).get()`\r\n`array([ 980., 990., 999., 1000.])`\r\n\r\nFor comparison the numpy version always calculates correctly:\r\n\r\n`numpy.percentile(numpy.arange(1001).astype(numpy.int16),[98, 99, 99.9, 100])`\r\n`array([ 980., 990., 999., 1000.])`\r\n\r\nCupy configuration:\r\nCuPy Version : 7.6.0\r\nCUDA Root : C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\r\nCUDA Build Version : 10020\r\nCUDA Driver Version : 10020\r\nCUDA Runtime Version : 10020\r\n\n", "before_files": [{"content": "import warnings\n\nimport cupy\nfrom cupy import core\nfrom cupy.core import _routines_statistics as _statistics\nfrom cupy.core import _fusion_thread_local\nfrom cupy.logic import content\n\n\ndef amin(a, axis=None, out=None, keepdims=False):\n \"\"\"Returns the minimum of an array or the minimum along an axis.\n\n .. note::\n\n When at least one element is NaN, the corresponding min value will be\n NaN.\n\n Args:\n a (cupy.ndarray): Array to take the minimum.\n axis (int): Along which axis to take the minimum. The flattened array\n is used by default.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis is remained as an axis of\n size one.\n\n Returns:\n cupy.ndarray: The minimum of ``a``, along the axis if specified.\n\n .. seealso:: :func:`numpy.amin`\n\n \"\"\"\n if _fusion_thread_local.is_fusing():\n if keepdims:\n raise NotImplementedError(\n 'cupy.amin does not support `keepdims` in fusion yet.')\n return _fusion_thread_local.call_reduction(\n _statistics.amin, a, axis=axis, out=out)\n\n # TODO(okuta): check type\n return a.min(axis=axis, out=out, keepdims=keepdims)\n\n\ndef amax(a, axis=None, out=None, keepdims=False):\n \"\"\"Returns the maximum of an array or the maximum along an axis.\n\n .. note::\n\n When at least one element is NaN, the corresponding min value will be\n NaN.\n\n Args:\n a (cupy.ndarray): Array to take the maximum.\n axis (int): Along which axis to take the maximum. The flattened array\n is used by default.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis is remained as an axis of\n size one.\n\n Returns:\n cupy.ndarray: The maximum of ``a``, along the axis if specified.\n\n .. 
seealso:: :func:`numpy.amax`\n\n \"\"\"\n if _fusion_thread_local.is_fusing():\n if keepdims:\n raise NotImplementedError(\n 'cupy.amax does not support `keepdims` in fusion yet.')\n return _fusion_thread_local.call_reduction(\n _statistics.amax, a, axis=axis, out=out)\n\n # TODO(okuta): check type\n return a.max(axis=axis, out=out, keepdims=keepdims)\n\n\ndef nanmin(a, axis=None, out=None, keepdims=False):\n \"\"\"Returns the minimum of an array along an axis ignoring NaN.\n\n When there is a slice whose elements are all NaN, a :class:`RuntimeWarning`\n is raised and NaN is returned.\n\n Args:\n a (cupy.ndarray): Array to take the minimum.\n axis (int): Along which axis to take the minimum. The flattened array\n is used by default.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis is remained as an axis of\n size one.\n\n Returns:\n cupy.ndarray: The minimum of ``a``, along the axis if specified.\n\n .. warning::\n\n This function may synchronize the device.\n\n .. seealso:: :func:`numpy.nanmin`\n\n \"\"\"\n # TODO(niboshi): Avoid synchronization.\n res = core.nanmin(a, axis=axis, out=out, keepdims=keepdims)\n if content.isnan(res).any(): # synchronize!\n warnings.warn('All-NaN slice encountered', RuntimeWarning)\n return res\n\n\ndef nanmax(a, axis=None, out=None, keepdims=False):\n \"\"\"Returns the maximum of an array along an axis ignoring NaN.\n\n When there is a slice whose elements are all NaN, a :class:`RuntimeWarning`\n is raised and NaN is returned.\n\n Args:\n a (cupy.ndarray): Array to take the maximum.\n axis (int): Along which axis to take the maximum. The flattened array\n is used by default.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis is remained as an axis of\n size one.\n\n Returns:\n cupy.ndarray: The maximum of ``a``, along the axis if specified.\n\n .. warning::\n\n This function may synchronize the device.\n\n .. seealso:: :func:`numpy.nanmax`\n\n \"\"\"\n # TODO(niboshi): Avoid synchronization.\n res = core.nanmax(a, axis=axis, out=out, keepdims=keepdims)\n if content.isnan(res).any(): # synchronize!\n warnings.warn('All-NaN slice encountered', RuntimeWarning)\n return res\n\n\ndef ptp(a, axis=None, out=None, keepdims=False):\n \"\"\"Returns the range of values (maximum - minimum) along an axis.\n\n .. note::\n\n The name of the function comes from the acronym for 'peak to peak'.\n\n When at least one element is NaN, the corresponding ptp value will be\n NaN.\n\n Args:\n a (cupy.ndarray): Array over which to take the range.\n axis (int): Axis along which to take the minimum. The flattened\n array is used by default.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis is retained as an axis of\n size one.\n\n Returns:\n cupy.ndarray: The minimum of ``a``, along the axis if specified.\n\n .. seealso:: :func:`numpy.amin`\n\n \"\"\"\n return a.ptp(axis=axis, out=out, keepdims=keepdims)\n\n\ndef percentile(a, q, axis=None, out=None, interpolation='linear',\n keepdims=False):\n \"\"\"Computes the q-th percentile of the data along the specified axis.\n\n Args:\n a (cupy.ndarray): Array for which to compute percentiles.\n q (float, tuple of floats or cupy.ndarray): Percentiles to compute\n in the range between 0 and 100 inclusive.\n axis (int or tuple of ints): Along which axis or axes to compute the\n percentiles. The flattened array is used by default.\n out (cupy.ndarray): Output array.\n interpolation (str): Interpolation method when a quantile lies between\n two data points. 
``linear`` interpolation is used by default.\n Supported interpolations are``lower``, ``higher``, ``midpoint``,\n ``nearest`` and ``linear``.\n keepdims (bool): If ``True``, the axis is remained as an axis of\n size one.\n\n Returns:\n cupy.ndarray: The percentiles of ``a``, along the axis if specified.\n\n .. seealso:: :func:`numpy.percentile`\n\n \"\"\"\n q = cupy.asarray(q, dtype=a.dtype)\n if q.ndim == 0:\n q = q[None]\n zerod = True\n else:\n zerod = False\n if q.ndim > 1:\n raise ValueError('Expected q to have a dimension of 1.\\n'\n 'Actual: {0} != 1'.format(q.ndim))\n\n if keepdims:\n if axis is None:\n keepdim = (1,) * a.ndim\n else:\n keepdim = list(a.shape)\n for ax in axis:\n keepdim[ax % a.ndim] = 1\n keepdim = tuple(keepdim)\n\n # Copy a since we need it sorted but without modifying the original array\n if isinstance(axis, int):\n axis = axis,\n if axis is None:\n ap = a.flatten()\n nkeep = 0\n else:\n # Reduce axes from a and put them last\n axis = tuple(ax % a.ndim for ax in axis)\n keep = set(range(a.ndim)) - set(axis)\n nkeep = len(keep)\n for i, s in enumerate(sorted(keep)):\n a = a.swapaxes(i, s)\n ap = a.reshape(a.shape[:nkeep] + (-1,)).copy()\n\n axis = -1\n ap.sort(axis=axis)\n Nx = ap.shape[axis]\n indices = q * 0.01 * (Nx - 1.) # percents to decimals\n\n if interpolation == 'lower':\n indices = cupy.floor(indices).astype(cupy.int32)\n elif interpolation == 'higher':\n indices = cupy.ceil(indices).astype(cupy.int32)\n elif interpolation == 'midpoint':\n indices = 0.5 * (cupy.floor(indices) + cupy.ceil(indices))\n elif interpolation == 'nearest':\n # TODO(hvy): Implement nearest using around\n raise ValueError('\\'nearest\\' interpolation is not yet supported. '\n 'Please use any other interpolation method.')\n elif interpolation == 'linear':\n pass\n else:\n raise ValueError('Unexpected interpolation method.\\n'\n 'Actual: \\'{0}\\' not in (\\'linear\\', \\'lower\\', '\n '\\'higher\\', \\'midpoint\\')'.format(interpolation))\n\n if indices.dtype == cupy.int32:\n ret = cupy.rollaxis(ap, axis)\n ret = ret.take(indices, axis=0, out=out)\n else:\n if out is None:\n ret = cupy.empty(ap.shape[:-1] + q.shape, dtype=cupy.float64)\n else:\n ret = cupy.rollaxis(out, 0, out.ndim)\n\n cupy.ElementwiseKernel(\n 'S idx, raw T a, raw int32 offset', 'U ret',\n '''\n ptrdiff_t idx_below = floor(idx);\n U weight_above = idx - idx_below;\n\n ptrdiff_t offset_i = _ind.get()[0] * offset;\n ret = a[offset_i + idx_below] * (1.0 - weight_above)\n + a[offset_i + idx_below + 1] * weight_above;\n ''',\n 'percentile_weightnening'\n )(indices, ap, ap.shape[-1] if ap.ndim > 1 else 0, ret)\n ret = cupy.rollaxis(ret, -1) # Roll q dimension back to first axis\n\n if zerod:\n ret = ret.squeeze(0)\n if keepdims:\n if q.size > 1:\n keepdim = (-1,) + keepdim\n ret = ret.reshape(keepdim)\n\n return core._internal_ascontiguousarray(ret)\n", "path": "cupy/statistics/order.py"}]}
| 3,948 | 123 |
gh_patches_debug_13801
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-19732
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] sail/0.9.0-rc2: PR broke the package
### Description
I've just noticed that the V2 version of sail suddenly disappeared from the center. This was caused by https://github.com/conan-io/conan-center-index/pull/18454
Please please please don't merge pull requests that break packages! Could you please also revert the PR? I have no access to the computer right now.
### Package and Environment Details
All envs
### Conan profile
All profiles
### Steps to reproduce
No steps
### Logs
No logs
</issue>
<code>
[start of recipes/sail/all/conanfile.py]
1 from conan import ConanFile
2 from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout
3 from conan.tools.files import apply_conandata_patches, export_conandata_patches, copy, get, rename, rmdir
4 from conan.tools.microsoft import is_msvc
5 import os
6
7 required_conan_version = ">=1.53.0"
8
9 class SAILConan(ConanFile):
10 name = "sail"
11 description = "The missing small and fast image decoding library for humans (not for machines)"
12 url = "https://github.com/conan-io/conan-center-index"
13 homepage = "https://sail.software"
14 topics = ( "image", "encoding", "decoding", "graphics" )
15 license = "MIT"
16 settings = "os", "arch", "compiler", "build_type"
17 options = {
18 "shared": [True, False],
19 "fPIC": [True, False],
20 "thread_safe": [True, False],
21 "with_avif": [True, False],
22 "with_gif": [True, False],
23 "with_jpeg2000": [True, False],
24 "with_jpeg": ["libjpeg", "libjpeg-turbo", False],
25 "with_png": [True, False],
26 "with_tiff": [True, False],
27 "with_webp": [True, False],
28 }
29 default_options = {
30 "shared": False,
31 "fPIC": True,
32 "thread_safe": True,
33 "with_avif": True,
34 "with_gif": True,
35 "with_jpeg2000": True,
36 "with_jpeg": "libjpeg",
37 "with_png": True,
38 "with_tiff": True,
39 "with_webp": True,
40 }
41
42 def export_sources(self):
43 export_conandata_patches(self)
44
45 def config_options(self):
46 if self.settings.os == "Windows":
47 self.options.rm_safe("fPIC")
48
49 def configure(self):
50 if self.options.shared:
51 self.options.rm_safe("fPIC")
52
53 def requirements(self):
54 if self.options.with_avif:
55 self.requires("libavif/0.11.1")
56 if self.options.with_gif:
57 self.requires("giflib/5.2.1")
58 if self.options.with_jpeg2000:
59 self.requires("jasper/4.0.0")
60 if self.options.with_jpeg == "libjpeg-turbo":
61 self.requires("libjpeg-turbo/2.1.5")
62 elif self.options.with_jpeg == "libjpeg":
63 self.requires("libjpeg/9e")
64 if self.options.with_png:
65 self.requires("libpng/1.6.40")
66 if self.options.with_tiff:
67 self.requires("libtiff/4.5.1")
68 if self.options.with_webp:
69 self.requires("libwebp/1.3.1")
70
71 def layout(self):
72 cmake_layout(self, src_folder="src")
73
74 def source(self):
75 get(self, **self.conan_data["sources"][self.version],
76 strip_root=True, destination=self.source_folder)
77
78 def generate(self):
79 enable_codecs = []
80
81 if self.options.with_avif:
82 enable_codecs.append("avif")
83 if self.options.with_gif:
84 enable_codecs.append("gif")
85 if self.options.with_jpeg2000:
86 enable_codecs.append("jpeg2000")
87 if self.options.with_jpeg:
88 enable_codecs.append("jpeg")
89 if self.options.with_png:
90 enable_codecs.append("png")
91 if self.options.with_tiff:
92 enable_codecs.append("tiff")
93 if self.options.with_webp:
94 enable_codecs.append("webp")
95
96 tc = CMakeToolchain(self)
97 tc.variables["SAIL_BUILD_APPS"] = False
98 tc.variables["SAIL_BUILD_EXAMPLES"] = False
99 tc.variables["SAIL_BUILD_TESTS"] = False
100 tc.variables["SAIL_COMBINE_CODECS"] = True
101 tc.variables["SAIL_ENABLE_CODECS"] = ";".join(enable_codecs)
102 tc.variables["SAIL_INSTALL_PDB"] = False
103 tc.variables["SAIL_THREAD_SAFE"] = self.options.thread_safe
104 # TODO: Remove after fixing https://github.com/conan-io/conan/issues/12012
105 if is_msvc(self):
106 tc.cache_variables["CMAKE_TRY_COMPILE_CONFIGURATION"] = str(self.settings.build_type)
107 # TODO: Remove after fixing https://github.com/conan-io/conan-center-index/issues/13159
108 # C3I workaround to force CMake to choose the highest version of
109 # the windows SDK available in the system
110 if is_msvc(self) and not self.conf.get("tools.cmake.cmaketoolchain:system_version"):
111 tc.variables["CMAKE_SYSTEM_VERSION"] = "10.0"
112 tc.generate()
113
114 deps = CMakeDeps(self)
115 deps.generate()
116
117 def build(self):
118 apply_conandata_patches(self)
119
120 cmake = CMake(self)
121 cmake.configure()
122 cmake.build()
123
124 def package(self):
125 copy(self, "LICENSE.txt", self.source_folder, os.path.join(self.package_folder, "licenses"))
126 copy(self, "LICENSE.INIH.txt", self.source_folder, os.path.join(self.package_folder, "licenses"))
127 copy(self, "LICENSE.MUNIT.txt", self.source_folder, os.path.join(self.package_folder, "licenses"))
128
129 cmake = CMake(self)
130 cmake.install()
131
132 # Remove CMake and pkg-config rules
133 rmdir(self, os.path.join(self.package_folder, "lib", "cmake"))
134 rmdir(self, os.path.join(self.package_folder, "lib", "pkgconfig"))
135 # Move icons
136 rename(self, os.path.join(self.package_folder, "share"),
137 os.path.join(self.package_folder, "res"))
138
139 def package_info(self):
140 self.cpp_info.set_property("cmake_file_name", "Sail")
141
142 self.cpp_info.filenames["cmake_find_package"] = "Sail"
143 self.cpp_info.filenames["cmake_find_package_multi"] = "Sail"
144 self.cpp_info.names["cmake_find_package"] = "SAIL"
145 self.cpp_info.names["cmake_find_package_multi"] = "SAIL"
146
147 self.cpp_info.components["sail-common"].set_property("cmake_target_name", "SAIL::SailCommon")
148 self.cpp_info.components["sail-common"].set_property("pkg_config_name", "libsail-common")
149 self.cpp_info.components["sail-common"].names["cmake_find_package"] = "SailCommon"
150 self.cpp_info.components["sail-common"].names["cmake_find_package_multi"] = "SailCommon"
151 self.cpp_info.components["sail-common"].includedirs = ["include/sail"]
152 self.cpp_info.components["sail-common"].libs = ["sail-common"]
153
154 self.cpp_info.components["sail-codecs"].set_property("cmake_target_name", "SAIL::SailCodecs")
155 self.cpp_info.components["sail-codecs"].names["cmake_find_package"] = "SailCodecs"
156 self.cpp_info.components["sail-codecs"].names["cmake_find_package_multi"] = "SailCodecs"
157 self.cpp_info.components["sail-codecs"].libs = ["sail-codecs"]
158 self.cpp_info.components["sail-codecs"].requires = ["sail-common"]
159 if self.options.with_avif:
160 self.cpp_info.components["sail-codecs"].requires.append("libavif::libavif")
161 if self.options.with_gif:
162 self.cpp_info.components["sail-codecs"].requires.append("giflib::giflib")
163 if self.options.with_jpeg2000:
164 self.cpp_info.components["sail-codecs"].requires.append("jasper::jasper")
165 if self.options.with_jpeg:
166 self.cpp_info.components["sail-codecs"].requires.append("{0}::{0}".format(self.options.with_jpeg))
167 if self.options.with_png:
168 self.cpp_info.components["sail-codecs"].requires.append("libpng::libpng")
169 if self.options.with_tiff:
170 self.cpp_info.components["sail-codecs"].requires.append("libtiff::libtiff")
171 if self.options.with_webp:
172 self.cpp_info.components["sail-codecs"].requires.append("libwebp::libwebp")
173
174 self.cpp_info.components["libsail"].set_property("cmake_target_name", "SAIL::Sail")
175 self.cpp_info.components["libsail"].set_property("pkg_config_name", "libsail")
176 self.cpp_info.components["libsail"].names["cmake_find_package"] = "Sail"
177 self.cpp_info.components["libsail"].names["cmake_find_package_multi"] = "Sail"
178 self.cpp_info.components["libsail"].libs = ["sail"]
179 if self.settings.os in ["Linux", "FreeBSD"]:
180 self.cpp_info.components["libsail"].system_libs.append("dl")
181 if self.options.thread_safe:
182 self.cpp_info.components["libsail"].system_libs.append("pthread")
183 self.cpp_info.components["libsail"].requires = ["sail-common", "sail-codecs"]
184
185 self.cpp_info.components["sail-manip"].set_property("cmake_target_name", "SAIL::SailManip")
186 self.cpp_info.components["sail-manip"].set_property("pkg_config_name", "libsail-manip")
187 self.cpp_info.components["sail-manip"].names["cmake_find_package"] = "SailManip"
188 self.cpp_info.components["sail-manip"].names["cmake_find_package_multi"] = "SailManip"
189 self.cpp_info.components["sail-manip"].libs = ["sail-manip"]
190 self.cpp_info.components["sail-manip"].requires = ["sail-common"]
191
192 self.cpp_info.components["sail-c++"].set_property("cmake_target_name", "SAIL::SailC++")
193 self.cpp_info.components["sail-c++"].set_property("pkg_config_name", "libsail-c++")
194 self.cpp_info.components["sail-c++"].names["cmake_find_package"] = "SailC++"
195 self.cpp_info.components["sail-c++"].names["cmake_find_package_multi"] = "SailC++"
196 self.cpp_info.components["sail-c++"].libs = ["sail-c++"]
197 self.cpp_info.components["sail-c++"].requires = ["libsail", "sail-manip"]
198
[end of recipes/sail/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/recipes/sail/all/conanfile.py b/recipes/sail/all/conanfile.py
--- a/recipes/sail/all/conanfile.py
+++ b/recipes/sail/all/conanfile.py
@@ -104,11 +104,6 @@
# TODO: Remove after fixing https://github.com/conan-io/conan/issues/12012
if is_msvc(self):
tc.cache_variables["CMAKE_TRY_COMPILE_CONFIGURATION"] = str(self.settings.build_type)
- # TODO: Remove after fixing https://github.com/conan-io/conan-center-index/issues/13159
- # C3I workaround to force CMake to choose the highest version of
- # the windows SDK available in the system
- if is_msvc(self) and not self.conf.get("tools.cmake.cmaketoolchain:system_version"):
- tc.variables["CMAKE_SYSTEM_VERSION"] = "10.0"
tc.generate()
deps = CMakeDeps(self)
|
{"golden_diff": "diff --git a/recipes/sail/all/conanfile.py b/recipes/sail/all/conanfile.py\n--- a/recipes/sail/all/conanfile.py\n+++ b/recipes/sail/all/conanfile.py\n@@ -104,11 +104,6 @@\n # TODO: Remove after fixing https://github.com/conan-io/conan/issues/12012\n if is_msvc(self):\n tc.cache_variables[\"CMAKE_TRY_COMPILE_CONFIGURATION\"] = str(self.settings.build_type)\n- # TODO: Remove after fixing https://github.com/conan-io/conan-center-index/issues/13159\n- # C3I workaround to force CMake to choose the highest version of\n- # the windows SDK available in the system\n- if is_msvc(self) and not self.conf.get(\"tools.cmake.cmaketoolchain:system_version\"):\n- tc.variables[\"CMAKE_SYSTEM_VERSION\"] = \"10.0\"\n tc.generate()\n \n deps = CMakeDeps(self)\n", "issue": "[package] sail/0.9.0-rc2: PR broke the package\n### Description\n\nI've just noticed that the V2 version of sail suddenly disappeared from the center. This was caused by https://github.com/conan-io/conan-center-index/pull/18454\r\n\r\nPlease please please don't merge pull requests that break packages! Could you please also revert the PR? I have no access to the computer right now.\n\n### Package and Environment Details\n\nAll envs\n\n### Conan profile\n\nAll profiles\n\n### Steps to reproduce\n\nNo steps\n\n### Logs\n\nNo logs\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain, cmake_layout\nfrom conan.tools.files import apply_conandata_patches, export_conandata_patches, copy, get, rename, rmdir\nfrom conan.tools.microsoft import is_msvc\nimport os\n\nrequired_conan_version = \">=1.53.0\"\n\nclass SAILConan(ConanFile):\n name = \"sail\"\n description = \"The missing small and fast image decoding library for humans (not for machines)\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://sail.software\"\n topics = ( \"image\", \"encoding\", \"decoding\", \"graphics\" )\n license = \"MIT\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"thread_safe\": [True, False],\n \"with_avif\": [True, False],\n \"with_gif\": [True, False],\n \"with_jpeg2000\": [True, False],\n \"with_jpeg\": [\"libjpeg\", \"libjpeg-turbo\", False],\n \"with_png\": [True, False],\n \"with_tiff\": [True, False],\n \"with_webp\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"thread_safe\": True,\n \"with_avif\": True,\n \"with_gif\": True,\n \"with_jpeg2000\": True,\n \"with_jpeg\": \"libjpeg\",\n \"with_png\": True,\n \"with_tiff\": True,\n \"with_webp\": True,\n }\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n self.options.rm_safe(\"fPIC\")\n\n def configure(self):\n if self.options.shared:\n self.options.rm_safe(\"fPIC\")\n\n def requirements(self):\n if self.options.with_avif:\n self.requires(\"libavif/0.11.1\")\n if self.options.with_gif:\n self.requires(\"giflib/5.2.1\")\n if self.options.with_jpeg2000:\n self.requires(\"jasper/4.0.0\")\n if self.options.with_jpeg == \"libjpeg-turbo\":\n self.requires(\"libjpeg-turbo/2.1.5\")\n elif self.options.with_jpeg == \"libjpeg\":\n self.requires(\"libjpeg/9e\")\n if self.options.with_png:\n self.requires(\"libpng/1.6.40\")\n if self.options.with_tiff:\n self.requires(\"libtiff/4.5.1\")\n if self.options.with_webp:\n self.requires(\"libwebp/1.3.1\")\n\n def layout(self):\n cmake_layout(self, 
src_folder=\"src\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version],\n strip_root=True, destination=self.source_folder)\n\n def generate(self):\n enable_codecs = []\n\n if self.options.with_avif:\n enable_codecs.append(\"avif\")\n if self.options.with_gif:\n enable_codecs.append(\"gif\")\n if self.options.with_jpeg2000:\n enable_codecs.append(\"jpeg2000\")\n if self.options.with_jpeg:\n enable_codecs.append(\"jpeg\")\n if self.options.with_png:\n enable_codecs.append(\"png\")\n if self.options.with_tiff:\n enable_codecs.append(\"tiff\")\n if self.options.with_webp:\n enable_codecs.append(\"webp\")\n\n tc = CMakeToolchain(self)\n tc.variables[\"SAIL_BUILD_APPS\"] = False\n tc.variables[\"SAIL_BUILD_EXAMPLES\"] = False\n tc.variables[\"SAIL_BUILD_TESTS\"] = False\n tc.variables[\"SAIL_COMBINE_CODECS\"] = True\n tc.variables[\"SAIL_ENABLE_CODECS\"] = \";\".join(enable_codecs)\n tc.variables[\"SAIL_INSTALL_PDB\"] = False\n tc.variables[\"SAIL_THREAD_SAFE\"] = self.options.thread_safe\n # TODO: Remove after fixing https://github.com/conan-io/conan/issues/12012\n if is_msvc(self):\n tc.cache_variables[\"CMAKE_TRY_COMPILE_CONFIGURATION\"] = str(self.settings.build_type)\n # TODO: Remove after fixing https://github.com/conan-io/conan-center-index/issues/13159\n # C3I workaround to force CMake to choose the highest version of\n # the windows SDK available in the system\n if is_msvc(self) and not self.conf.get(\"tools.cmake.cmaketoolchain:system_version\"):\n tc.variables[\"CMAKE_SYSTEM_VERSION\"] = \"10.0\"\n tc.generate()\n\n deps = CMakeDeps(self)\n deps.generate()\n\n def build(self):\n apply_conandata_patches(self)\n\n cmake = CMake(self)\n cmake.configure()\n cmake.build()\n\n def package(self):\n copy(self, \"LICENSE.txt\", self.source_folder, os.path.join(self.package_folder, \"licenses\"))\n copy(self, \"LICENSE.INIH.txt\", self.source_folder, os.path.join(self.package_folder, \"licenses\"))\n copy(self, \"LICENSE.MUNIT.txt\", self.source_folder, os.path.join(self.package_folder, \"licenses\"))\n\n cmake = CMake(self)\n cmake.install()\n\n # Remove CMake and pkg-config rules\n rmdir(self, os.path.join(self.package_folder, \"lib\", \"cmake\"))\n rmdir(self, os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n # Move icons\n rename(self, os.path.join(self.package_folder, \"share\"),\n os.path.join(self.package_folder, \"res\"))\n\n def package_info(self):\n self.cpp_info.set_property(\"cmake_file_name\", \"Sail\")\n\n self.cpp_info.filenames[\"cmake_find_package\"] = \"Sail\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"Sail\"\n self.cpp_info.names[\"cmake_find_package\"] = \"SAIL\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"SAIL\"\n\n self.cpp_info.components[\"sail-common\"].set_property(\"cmake_target_name\", \"SAIL::SailCommon\")\n self.cpp_info.components[\"sail-common\"].set_property(\"pkg_config_name\", \"libsail-common\")\n self.cpp_info.components[\"sail-common\"].names[\"cmake_find_package\"] = \"SailCommon\"\n self.cpp_info.components[\"sail-common\"].names[\"cmake_find_package_multi\"] = \"SailCommon\"\n self.cpp_info.components[\"sail-common\"].includedirs = [\"include/sail\"]\n self.cpp_info.components[\"sail-common\"].libs = [\"sail-common\"]\n\n self.cpp_info.components[\"sail-codecs\"].set_property(\"cmake_target_name\", \"SAIL::SailCodecs\")\n self.cpp_info.components[\"sail-codecs\"].names[\"cmake_find_package\"] = \"SailCodecs\"\n self.cpp_info.components[\"sail-codecs\"].names[\"cmake_find_package_multi\"] 
= \"SailCodecs\"\n self.cpp_info.components[\"sail-codecs\"].libs = [\"sail-codecs\"]\n self.cpp_info.components[\"sail-codecs\"].requires = [\"sail-common\"]\n if self.options.with_avif:\n self.cpp_info.components[\"sail-codecs\"].requires.append(\"libavif::libavif\")\n if self.options.with_gif:\n self.cpp_info.components[\"sail-codecs\"].requires.append(\"giflib::giflib\")\n if self.options.with_jpeg2000:\n self.cpp_info.components[\"sail-codecs\"].requires.append(\"jasper::jasper\")\n if self.options.with_jpeg:\n self.cpp_info.components[\"sail-codecs\"].requires.append(\"{0}::{0}\".format(self.options.with_jpeg))\n if self.options.with_png:\n self.cpp_info.components[\"sail-codecs\"].requires.append(\"libpng::libpng\")\n if self.options.with_tiff:\n self.cpp_info.components[\"sail-codecs\"].requires.append(\"libtiff::libtiff\")\n if self.options.with_webp:\n self.cpp_info.components[\"sail-codecs\"].requires.append(\"libwebp::libwebp\")\n\n self.cpp_info.components[\"libsail\"].set_property(\"cmake_target_name\", \"SAIL::Sail\")\n self.cpp_info.components[\"libsail\"].set_property(\"pkg_config_name\", \"libsail\")\n self.cpp_info.components[\"libsail\"].names[\"cmake_find_package\"] = \"Sail\"\n self.cpp_info.components[\"libsail\"].names[\"cmake_find_package_multi\"] = \"Sail\"\n self.cpp_info.components[\"libsail\"].libs = [\"sail\"]\n if self.settings.os in [\"Linux\", \"FreeBSD\"]:\n self.cpp_info.components[\"libsail\"].system_libs.append(\"dl\")\n if self.options.thread_safe:\n self.cpp_info.components[\"libsail\"].system_libs.append(\"pthread\")\n self.cpp_info.components[\"libsail\"].requires = [\"sail-common\", \"sail-codecs\"]\n\n self.cpp_info.components[\"sail-manip\"].set_property(\"cmake_target_name\", \"SAIL::SailManip\")\n self.cpp_info.components[\"sail-manip\"].set_property(\"pkg_config_name\", \"libsail-manip\")\n self.cpp_info.components[\"sail-manip\"].names[\"cmake_find_package\"] = \"SailManip\"\n self.cpp_info.components[\"sail-manip\"].names[\"cmake_find_package_multi\"] = \"SailManip\"\n self.cpp_info.components[\"sail-manip\"].libs = [\"sail-manip\"]\n self.cpp_info.components[\"sail-manip\"].requires = [\"sail-common\"]\n\n self.cpp_info.components[\"sail-c++\"].set_property(\"cmake_target_name\", \"SAIL::SailC++\")\n self.cpp_info.components[\"sail-c++\"].set_property(\"pkg_config_name\", \"libsail-c++\")\n self.cpp_info.components[\"sail-c++\"].names[\"cmake_find_package\"] = \"SailC++\"\n self.cpp_info.components[\"sail-c++\"].names[\"cmake_find_package_multi\"] = \"SailC++\"\n self.cpp_info.components[\"sail-c++\"].libs = [\"sail-c++\"]\n self.cpp_info.components[\"sail-c++\"].requires = [\"libsail\", \"sail-manip\"]\n", "path": "recipes/sail/all/conanfile.py"}]}
| 3,475 | 225 |
gh_patches_debug_32168
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-4544
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
azure - event hub resources
Add event hub resource & implement firewall filter
</issue>
<code>
[start of tools/c7n_azure/c7n_azure/resources/event_hub.py]
1 # Copyright 2019 Microsoft Corporation
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from c7n_azure.provider import resources
16 from c7n_azure.resources.arm import ArmResourceManager
17
18
19 @resources.register('eventhub')
20 class EventHub(ArmResourceManager):
21 """Event Hub Resource
22
23 :example:
24
25 Finds all Event Hub resources in the subscription.
26
27 .. code-block:: yaml
28
29 policies:
30 - name: find-all-eventhubs
31 resource: azure.eventhub
32
33 """
34
35 class resource_type(ArmResourceManager.resource_type):
36 doc_groups = ['Events']
37
38 service = 'azure.mgmt.eventhub'
39 client = 'EventHubManagementClient'
40 enum_spec = ('namespaces', 'list', None)
41 default_report_fields = (
42 'name',
43 'location',
44 'resourceGroup',
45 'sku.name',
46 'properties.isAutoInflateEnabled'
47 )
48 resource_type = 'Microsoft.EventHub/namespaces'
49
[end of tools/c7n_azure/c7n_azure/resources/event_hub.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/c7n_azure/c7n_azure/resources/event_hub.py b/tools/c7n_azure/c7n_azure/resources/event_hub.py
--- a/tools/c7n_azure/c7n_azure/resources/event_hub.py
+++ b/tools/c7n_azure/c7n_azure/resources/event_hub.py
@@ -12,8 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import logging
+
+from c7n_azure.filters import FirewallRulesFilter
from c7n_azure.provider import resources
from c7n_azure.resources.arm import ArmResourceManager
+from netaddr import IPSet
@resources.register('eventhub')
@@ -22,13 +26,17 @@
:example:
- Finds all Event Hub resources in the subscription.
+ This policy will find all Event Hubs allowing traffic from 1.2.2.128/25 CIDR.
.. code-block:: yaml
policies:
- - name: find-all-eventhubs
- resource: azure.eventhub
+ - name: find-event-hub-allowing-subnet
+ resource: azure.eventhub
+ filters:
+ - type: firewall-rules
+ include:
+ - '1.2.2.128/25'
"""
@@ -46,3 +54,29 @@
'properties.isAutoInflateEnabled'
)
resource_type = 'Microsoft.EventHub/namespaces'
+
+
[email protected]_registry.register('firewall-rules')
+class EventHubFirewallRulesFilter(FirewallRulesFilter):
+
+ def __init__(self, data, manager=None):
+ super(EventHubFirewallRulesFilter, self).__init__(data, manager)
+ self._log = logging.getLogger('custodian.azure.eventhub')
+ self.client = None
+
+ @property
+ def log(self):
+ return self._log
+
+ def process(self, resources, event=None):
+ self.client = self.manager.get_client()
+ return super(EventHubFirewallRulesFilter, self).process(resources, event)
+
+ def _query_rules(self, resource):
+ query = self.client.namespaces.get_network_rule_set(
+ resource['resourceGroup'],
+ resource['name'])
+
+ resource_rules = IPSet([r.ip_mask for r in query.ip_rules])
+
+ return resource_rules
|
{"golden_diff": "diff --git a/tools/c7n_azure/c7n_azure/resources/event_hub.py b/tools/c7n_azure/c7n_azure/resources/event_hub.py\n--- a/tools/c7n_azure/c7n_azure/resources/event_hub.py\n+++ b/tools/c7n_azure/c7n_azure/resources/event_hub.py\n@@ -12,8 +12,12 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import logging\n+\n+from c7n_azure.filters import FirewallRulesFilter\n from c7n_azure.provider import resources\n from c7n_azure.resources.arm import ArmResourceManager\n+from netaddr import IPSet\n \n \n @resources.register('eventhub')\n@@ -22,13 +26,17 @@\n \n :example:\n \n- Finds all Event Hub resources in the subscription.\n+ This policy will find all Event Hubs allowing traffic from 1.2.2.128/25 CIDR.\n \n .. code-block:: yaml\n \n policies:\n- - name: find-all-eventhubs\n- resource: azure.eventhub\n+ - name: find-event-hub-allowing-subnet\n+ resource: azure.eventhub\n+ filters:\n+ - type: firewall-rules\n+ include:\n+ - '1.2.2.128/25'\n \n \"\"\"\n \n@@ -46,3 +54,29 @@\n 'properties.isAutoInflateEnabled'\n )\n resource_type = 'Microsoft.EventHub/namespaces'\n+\n+\[email protected]_registry.register('firewall-rules')\n+class EventHubFirewallRulesFilter(FirewallRulesFilter):\n+\n+ def __init__(self, data, manager=None):\n+ super(EventHubFirewallRulesFilter, self).__init__(data, manager)\n+ self._log = logging.getLogger('custodian.azure.eventhub')\n+ self.client = None\n+\n+ @property\n+ def log(self):\n+ return self._log\n+\n+ def process(self, resources, event=None):\n+ self.client = self.manager.get_client()\n+ return super(EventHubFirewallRulesFilter, self).process(resources, event)\n+\n+ def _query_rules(self, resource):\n+ query = self.client.namespaces.get_network_rule_set(\n+ resource['resourceGroup'],\n+ resource['name'])\n+\n+ resource_rules = IPSet([r.ip_mask for r in query.ip_rules])\n+\n+ return resource_rules\n", "issue": "azure - event hub resources\nAdd event hub resource & implement firewall filter\n", "before_files": [{"content": "# Copyright 2019 Microsoft Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom c7n_azure.provider import resources\nfrom c7n_azure.resources.arm import ArmResourceManager\n\n\[email protected]('eventhub')\nclass EventHub(ArmResourceManager):\n \"\"\"Event Hub Resource\n\n :example:\n\n Finds all Event Hub resources in the subscription.\n\n .. code-block:: yaml\n\n policies:\n - name: find-all-eventhubs\n resource: azure.eventhub\n\n \"\"\"\n\n class resource_type(ArmResourceManager.resource_type):\n doc_groups = ['Events']\n\n service = 'azure.mgmt.eventhub'\n client = 'EventHubManagementClient'\n enum_spec = ('namespaces', 'list', None)\n default_report_fields = (\n 'name',\n 'location',\n 'resourceGroup',\n 'sku.name',\n 'properties.isAutoInflateEnabled'\n )\n resource_type = 'Microsoft.EventHub/namespaces'\n", "path": "tools/c7n_azure/c7n_azure/resources/event_hub.py"}]}
| 979 | 552 |
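
The core of the added `firewall-rules` filter above is the `_query_rules` method, which collapses a namespace's IP rules into a `netaddr.IPSet` and lets the base filter compare it against the CIDRs named in the policy. A minimal standalone sketch of that comparison follows; it assumes only that `netaddr` is installed, the `FakeIpRule` class is a hypothetical stand-in for the objects the Azure SDK returns, and the `issubset` check is just one plausible matching strategy, not necessarily the exact semantics of `FirewallRulesFilter`.

```python
from netaddr import IPSet


class FakeIpRule:
    """Hypothetical stand-in for an Azure SDK IP rule entry."""

    def __init__(self, ip_mask):
        self.ip_mask = ip_mask


def allows_all(ip_rules, include_cidrs):
    # Collapse the namespace's rules into one set of allowed addresses,
    # mirroring IPSet([r.ip_mask for r in query.ip_rules]) in the patch.
    resource_rules = IPSet([r.ip_mask for r in ip_rules])
    # Treat the resource as a match when every CIDR the policy asks about
    # is already covered by its firewall rules.
    return IPSet(include_cidrs).issubset(resource_rules)


rules = [FakeIpRule("1.2.2.128/25"), FakeIpRule("10.0.0.0/16")]
print(allows_all(rules, ["1.2.2.128/25"]))  # True
print(allows_all(rules, ["8.8.8.0/24"]))    # False
```
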
gh_patches_debug_20098
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-3495
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Project list widget fails due to a date vs. datetime formatting error
</issue>
<code>
[start of akvo/rsr/templatetags/rsr_filters.py]
1 # -*- coding: utf-8 -*-
2 """
3 Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the Akvo RSR module.
6 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 import datetime
10 import time
11
12 from django import template
13 from django.conf import settings
14 from decimal import Decimal, ROUND_HALF_UP
15
16 register = template.Library()
17
18 DECIMAL_PLACES = getattr(settings, 'DECIMALS_DECIMAL_PLACES', 2)
19
20
21 @register.filter
22 def get_item(dictionary, key):
23 """Enable lookup in dicts."""
24 return dictionary.get(key)
25
26
27 @register.filter
28 def string_to_date(value):
29 try:
30 time_format = "%Y-%m-%d %H:%M:%S"
31 fmt_time = time.strptime(value, time_format)
32 return datetime.datetime(*fmt_time[:6])
33 except:
34 return value
35
36 # http://stackoverflow.com/questions/250357/smart-truncate-in-python
37
38
39 @register.filter("smart_truncate")
40 def smart_truncate(content, length=100, suffix='...'):
41 if len(content) <= length:
42 return content
43 else:
44 return content[:length].rsplit(' ', 1)[0] + suffix
45
46
47 @register.filter
48 def round(value, decimal_places=DECIMAL_PLACES):
49 try:
50 value = Decimal(str(value))
51 except:
52 return u''
53 if settings.DECIMALS_DEBUG:
54 decimal_result = value.quantize(Decimal(10) ** -decimal_places)
55 return decimal_result
56 else:
57 decimal_result = value.quantize(Decimal(10), ROUND_HALF_UP)
58 return 0 if decimal_result <= 0 else decimal_result
59 round.is_safe = True
60
61
62 @register.filter
63 def countries_list(obj):
64 """ return a list of the countries of all locations of an object.
65 currently works for Project and Organisation """
66 return obj.locations.values_list('country__name', flat=True)
67
68
69 @register.filter
70 def continents_list(obj):
71 """return a list of the continents of all locations of an object"
72 currently works for Project and Organisation """
73 return obj.locations.values_list('country__continent', flat=True)
74
75
76 @register.filter
77 def rsr_sorted_set(iterable):
78 """ create a set of the iterable to eliminate duplicates
79 then make a list of the set and sort it
80 used with countries_list and continents_list
81 """
82 set_list = list(frozenset(iterable))
83 set_list.sort()
84 return set_list
85
86
87 @register.filter
88 def load_partnerships_and_orgs(project):
89 return project.partnerships.prefetch_related('organisation').all()
90
[end of akvo/rsr/templatetags/rsr_filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rsr/templatetags/rsr_filters.py b/akvo/rsr/templatetags/rsr_filters.py
--- a/akvo/rsr/templatetags/rsr_filters.py
+++ b/akvo/rsr/templatetags/rsr_filters.py
@@ -6,9 +6,6 @@
For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
"""
-import datetime
-import time
-
from django import template
from django.conf import settings
from decimal import Decimal, ROUND_HALF_UP
@@ -24,18 +21,7 @@
return dictionary.get(key)
[email protected]
-def string_to_date(value):
- try:
- time_format = "%Y-%m-%d %H:%M:%S"
- fmt_time = time.strptime(value, time_format)
- return datetime.datetime(*fmt_time[:6])
- except:
- return value
-
# http://stackoverflow.com/questions/250357/smart-truncate-in-python
-
-
@register.filter("smart_truncate")
def smart_truncate(content, length=100, suffix='...'):
if len(content) <= length:
|
{"golden_diff": "diff --git a/akvo/rsr/templatetags/rsr_filters.py b/akvo/rsr/templatetags/rsr_filters.py\n--- a/akvo/rsr/templatetags/rsr_filters.py\n+++ b/akvo/rsr/templatetags/rsr_filters.py\n@@ -6,9 +6,6 @@\n For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n \"\"\"\n \n-import datetime\n-import time\n-\n from django import template\n from django.conf import settings\n from decimal import Decimal, ROUND_HALF_UP\n@@ -24,18 +21,7 @@\n return dictionary.get(key)\n \n \[email protected]\n-def string_to_date(value):\n- try:\n- time_format = \"%Y-%m-%d %H:%M:%S\"\n- fmt_time = time.strptime(value, time_format)\n- return datetime.datetime(*fmt_time[:6])\n- except:\n- return value\n-\n # http://stackoverflow.com/questions/250357/smart-truncate-in-python\n-\n-\n @register.filter(\"smart_truncate\")\n def smart_truncate(content, length=100, suffix='...'):\n if len(content) <= length:\n", "issue": "Project list widget fails due to a date vs. datetime formatting error\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nAkvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nimport datetime\nimport time\n\nfrom django import template\nfrom django.conf import settings\nfrom decimal import Decimal, ROUND_HALF_UP\n\nregister = template.Library()\n\nDECIMAL_PLACES = getattr(settings, 'DECIMALS_DECIMAL_PLACES', 2)\n\n\[email protected]\ndef get_item(dictionary, key):\n \"\"\"Enable lookup in dicts.\"\"\"\n return dictionary.get(key)\n\n\[email protected]\ndef string_to_date(value):\n try:\n time_format = \"%Y-%m-%d %H:%M:%S\"\n fmt_time = time.strptime(value, time_format)\n return datetime.datetime(*fmt_time[:6])\n except:\n return value\n\n# http://stackoverflow.com/questions/250357/smart-truncate-in-python\n\n\[email protected](\"smart_truncate\")\ndef smart_truncate(content, length=100, suffix='...'):\n if len(content) <= length:\n return content\n else:\n return content[:length].rsplit(' ', 1)[0] + suffix\n\n\[email protected]\ndef round(value, decimal_places=DECIMAL_PLACES):\n try:\n value = Decimal(str(value))\n except:\n return u''\n if settings.DECIMALS_DEBUG:\n decimal_result = value.quantize(Decimal(10) ** -decimal_places)\n return decimal_result\n else:\n decimal_result = value.quantize(Decimal(10), ROUND_HALF_UP)\n return 0 if decimal_result <= 0 else decimal_result\nround.is_safe = True\n\n\[email protected]\ndef countries_list(obj):\n \"\"\" return a list of the countries of all locations of an object.\n currently works for Project and Organisation \"\"\"\n return obj.locations.values_list('country__name', flat=True)\n\n\[email protected]\ndef continents_list(obj):\n \"\"\"return a list of the continents of all locations of an object\"\n currently works for Project and Organisation \"\"\"\n return obj.locations.values_list('country__continent', flat=True)\n\n\[email protected]\ndef rsr_sorted_set(iterable):\n \"\"\" create a set of the iterable to eliminate duplicates\n then make a list of the set and sort it\n used with countries_list and continents_list\n \"\"\"\n set_list = list(frozenset(iterable))\n set_list.sort()\n return set_list\n\n\[email protected]\ndef load_partnerships_and_orgs(project):\n return project.partnerships.prefetch_related('organisation').all()\n", "path": "akvo/rsr/templatetags/rsr_filters.py"}]}
| 1,328 | 268 |
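
The fix above simply deletes the `string_to_date` template filter. That filter only understood strings of the form `"%Y-%m-%d %H:%M:%S"` and silently returned its input on any failure, which is one plausible way a date-only value could slip through unconverted and break the widget's formatting later. A standalone copy of the removed code (with the bare `except` narrowed, purely for clarity) shows the behaviour:

```python
import datetime
import time


def string_to_date(value):
    """Copy of the removed filter: only full datetime strings parse."""
    try:
        time_format = "%Y-%m-%d %H:%M:%S"
        fmt_time = time.strptime(value, time_format)
        return datetime.datetime(*fmt_time[:6])
    except (ValueError, TypeError):
        return value  # any parse failure hands back the input untouched


print(string_to_date("2018-11-02 10:30:00"))  # a datetime.datetime instance
print(string_to_date("2018-11-02"))           # the original str, unconverted
```
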
gh_patches_debug_3430
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-1781
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot reproduce the results of SSD-300 on WIDER_FACE
Hi @sovrasov,
Recently, I have been reproducing the results of SSD-300 on WIDER_FACE.
I use the provided config based on mmdetection and get Recall 36.9, Precision 0.023 and AP 27.3.
The results are worse than Recall 43.4, Precision 0.029 and AP 34.7 (shown in https://github.com/open-mmlab/mmdetection/pull/765#issuecomment-502579220).
Generally, the models provided by mmdetection are trained on 8 gpus. So, is the SSD-300 on WIDER_FACE also trained on 8 gpus? If so, maybe I will change the learning rate because I trained the model on 4 gpus.
Moreover, any other advice?
Thanks a lot.
</issue>
<code>
[start of configs/wider_face/ssd300_wider_face.py]
1 # model settings
2 input_size = 300
3 model = dict(
4 type='SingleStageDetector',
5 pretrained='open-mmlab://vgg16_caffe',
6 backbone=dict(
7 type='SSDVGG',
8 input_size=input_size,
9 depth=16,
10 with_last_pool=False,
11 ceil_mode=True,
12 out_indices=(3, 4),
13 out_feature_indices=(22, 34),
14 l2_norm_scale=20),
15 neck=None,
16 bbox_head=dict(
17 type='SSDHead',
18 input_size=input_size,
19 in_channels=(512, 1024, 512, 256, 256, 256),
20 num_classes=2,
21 anchor_strides=(8, 16, 32, 64, 100, 300),
22 basesize_ratio_range=(0.15, 0.9),
23 anchor_ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]),
24 target_means=(.0, .0, .0, .0),
25 target_stds=(0.1, 0.1, 0.2, 0.2)))
26 # model training and testing settings
27 cudnn_benchmark = True
28 train_cfg = dict(
29 assigner=dict(
30 type='MaxIoUAssigner',
31 pos_iou_thr=0.5,
32 neg_iou_thr=0.5,
33 min_pos_iou=0.,
34 ignore_iof_thr=-1,
35 gt_max_assign_all=False),
36 smoothl1_beta=1.,
37 allowed_border=-1,
38 pos_weight=-1,
39 neg_pos_ratio=3,
40 debug=False)
41 test_cfg = dict(
42 nms=dict(type='nms', iou_thr=0.45),
43 min_bbox_size=0,
44 score_thr=0.02,
45 max_per_img=200)
46 # dataset settings
47 dataset_type = 'WIDERFaceDataset'
48 data_root = 'data/WIDERFace/'
49 img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)
50 train_pipeline = [
51 dict(type='LoadImageFromFile', to_float32=True),
52 dict(type='LoadAnnotations', with_bbox=True),
53 dict(
54 type='PhotoMetricDistortion',
55 brightness_delta=32,
56 contrast_range=(0.5, 1.5),
57 saturation_range=(0.5, 1.5),
58 hue_delta=18),
59 dict(
60 type='Expand',
61 mean=img_norm_cfg['mean'],
62 to_rgb=img_norm_cfg['to_rgb'],
63 ratio_range=(1, 4)),
64 dict(
65 type='MinIoURandomCrop',
66 min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
67 min_crop_size=0.3),
68 dict(type='Resize', img_scale=(300, 300), keep_ratio=False),
69 dict(type='Normalize', **img_norm_cfg),
70 dict(type='RandomFlip', flip_ratio=0.5),
71 dict(type='DefaultFormatBundle'),
72 dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
73 ]
74 test_pipeline = [
75 dict(type='LoadImageFromFile'),
76 dict(
77 type='MultiScaleFlipAug',
78 img_scale=(300, 300),
79 flip=False,
80 transforms=[
81 dict(type='Resize', keep_ratio=False),
82 dict(type='Normalize', **img_norm_cfg),
83 dict(type='ImageToTensor', keys=['img']),
84 dict(type='Collect', keys=['img']),
85 ])
86 ]
87 data = dict(
88 imgs_per_gpu=60,
89 workers_per_gpu=2,
90 train=dict(
91 type='RepeatDataset',
92 times=2,
93 dataset=dict(
94 type=dataset_type,
95 ann_file=data_root + 'train.txt',
96 img_prefix=data_root + 'WIDER_train/',
97 min_size=17,
98 pipeline=train_pipeline)),
99 val=dict(
100 type=dataset_type,
101 ann_file=data_root + 'val.txt',
102 img_prefix=data_root + 'WIDER_val/',
103 pipeline=test_pipeline),
104 test=dict(
105 type=dataset_type,
106 ann_file=data_root + 'val.txt',
107 img_prefix=data_root + 'WIDER_val/',
108 pipeline=test_pipeline))
109 # optimizer
110 optimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)
111 optimizer_config = dict()
112 # learning policy
113 lr_config = dict(
114 policy='step',
115 warmup='linear',
116 warmup_iters=1000,
117 warmup_ratio=1.0 / 3,
118 step=[16, 20])
119 checkpoint_config = dict(interval=1)
120 # yapf:disable
121 log_config = dict(
122 interval=1,
123 hooks=[
124 dict(type='TextLoggerHook'),
125 # dict(type='TensorboardLoggerHook')
126 ])
127 # yapf:enable
128 # runtime settings
129 total_epochs = 24
130 dist_params = dict(backend='nccl')
131 log_level = 'INFO'
132 work_dir = './work_dirs/ssd300_wider'
133 load_from = None
134 resume_from = None
135 workflow = [('train', 1)]
136
[end of configs/wider_face/ssd300_wider_face.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/configs/wider_face/ssd300_wider_face.py b/configs/wider_face/ssd300_wider_face.py
--- a/configs/wider_face/ssd300_wider_face.py
+++ b/configs/wider_face/ssd300_wider_face.py
@@ -107,7 +107,7 @@
img_prefix=data_root + 'WIDER_val/',
pipeline=test_pipeline))
# optimizer
-optimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)
+optimizer = dict(type='SGD', lr=0.012, momentum=0.9, weight_decay=5e-4)
optimizer_config = dict()
# learning policy
lr_config = dict(
|
{"golden_diff": "diff --git a/configs/wider_face/ssd300_wider_face.py b/configs/wider_face/ssd300_wider_face.py\n--- a/configs/wider_face/ssd300_wider_face.py\n+++ b/configs/wider_face/ssd300_wider_face.py\n@@ -107,7 +107,7 @@\n img_prefix=data_root + 'WIDER_val/',\n pipeline=test_pipeline))\n # optimizer\n-optimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)\n+optimizer = dict(type='SGD', lr=0.012, momentum=0.9, weight_decay=5e-4)\n optimizer_config = dict()\n # learning policy\n lr_config = dict(\n", "issue": "Cannot reproduce the results of SSD-300 on WIDER_FACE\nHi @sovrasov,\r\nRecently, I'm reproducing the results of SSD-300 on WIDER_FACE.\r\nI use the provided config based on mmdetection and get Recall 36.9, Precision 0.023 and AP 27.3.\r\nThe results are worse than Recall 43.4, Precision 0.029 and AP 34.7 (shown in https://github.com/open-mmlab/mmdetection/pull/765#issuecomment-502579220).\r\n\r\nGenerally, the models provided by mmdetection are trained on 8 gpus. So, is the SSD-300 on WIDER_FACE also trained on 8 gpus? If so, maybe I will change the learning rate because I trained the model on 4 gpus.\r\n\r\nMoreover, any other advice?\r\n\r\nThanks a lot.\r\n\n", "before_files": [{"content": "# model settings\ninput_size = 300\nmodel = dict(\n type='SingleStageDetector',\n pretrained='open-mmlab://vgg16_caffe',\n backbone=dict(\n type='SSDVGG',\n input_size=input_size,\n depth=16,\n with_last_pool=False,\n ceil_mode=True,\n out_indices=(3, 4),\n out_feature_indices=(22, 34),\n l2_norm_scale=20),\n neck=None,\n bbox_head=dict(\n type='SSDHead',\n input_size=input_size,\n in_channels=(512, 1024, 512, 256, 256, 256),\n num_classes=2,\n anchor_strides=(8, 16, 32, 64, 100, 300),\n basesize_ratio_range=(0.15, 0.9),\n anchor_ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]),\n target_means=(.0, .0, .0, .0),\n target_stds=(0.1, 0.1, 0.2, 0.2)))\n# model training and testing settings\ncudnn_benchmark = True\ntrain_cfg = dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.5,\n neg_iou_thr=0.5,\n min_pos_iou=0.,\n ignore_iof_thr=-1,\n gt_max_assign_all=False),\n smoothl1_beta=1.,\n allowed_border=-1,\n pos_weight=-1,\n neg_pos_ratio=3,\n debug=False)\ntest_cfg = dict(\n nms=dict(type='nms', iou_thr=0.45),\n min_bbox_size=0,\n score_thr=0.02,\n max_per_img=200)\n# dataset settings\ndataset_type = 'WIDERFaceDataset'\ndata_root = 'data/WIDERFace/'\nimg_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)\ntrain_pipeline = [\n dict(type='LoadImageFromFile', to_float32=True),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='PhotoMetricDistortion',\n brightness_delta=32,\n contrast_range=(0.5, 1.5),\n saturation_range=(0.5, 1.5),\n hue_delta=18),\n dict(\n type='Expand',\n mean=img_norm_cfg['mean'],\n to_rgb=img_norm_cfg['to_rgb'],\n ratio_range=(1, 4)),\n dict(\n type='MinIoURandomCrop',\n min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),\n min_crop_size=0.3),\n dict(type='Resize', img_scale=(300, 300), keep_ratio=False),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(300, 300),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=False),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\ndata = dict(\n 
imgs_per_gpu=60,\n workers_per_gpu=2,\n train=dict(\n type='RepeatDataset',\n times=2,\n dataset=dict(\n type=dataset_type,\n ann_file=data_root + 'train.txt',\n img_prefix=data_root + 'WIDER_train/',\n min_size=17,\n pipeline=train_pipeline)),\n val=dict(\n type=dataset_type,\n ann_file=data_root + 'val.txt',\n img_prefix=data_root + 'WIDER_val/',\n pipeline=test_pipeline),\n test=dict(\n type=dataset_type,\n ann_file=data_root + 'val.txt',\n img_prefix=data_root + 'WIDER_val/',\n pipeline=test_pipeline))\n# optimizer\noptimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)\noptimizer_config = dict()\n# learning policy\nlr_config = dict(\n policy='step',\n warmup='linear',\n warmup_iters=1000,\n warmup_ratio=1.0 / 3,\n step=[16, 20])\ncheckpoint_config = dict(interval=1)\n# yapf:disable\nlog_config = dict(\n interval=1,\n hooks=[\n dict(type='TextLoggerHook'),\n # dict(type='TensorboardLoggerHook')\n ])\n# yapf:enable\n# runtime settings\ntotal_epochs = 24\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nwork_dir = './work_dirs/ssd300_wider'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\n", "path": "configs/wider_face/ssd300_wider_face.py"}]}
| 2,239 | 176 |
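
The golden diff changes only the learning rate, from `1e-3` to `0.012`. The question raised in the issue, whether to retune the rate when training on 4 GPUs instead of 8, is usually handled with the linear scaling rule: keep the ratio of learning rate to total batch size constant. The helper below makes that arithmetic explicit; the `imgs_per_gpu=60` value comes from the config in this record, while the assumption that `0.012` is the 8-GPU setting is exactly that, an assumption.

```python
def scale_lr(base_lr, base_num_gpus, num_gpus, imgs_per_gpu):
    """Linear scaling rule: learning rate proportional to total batch size."""
    base_batch = base_num_gpus * imgs_per_gpu
    new_batch = num_gpus * imgs_per_gpu
    return base_lr * new_batch / base_batch


# If 0.012 is the 8-GPU setting with imgs_per_gpu=60, a 4-GPU run halves it.
print(scale_lr(base_lr=0.012, base_num_gpus=8, num_gpus=4, imgs_per_gpu=60))  # 0.006
```
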
gh_patches_debug_60370
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-flash-597
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Confusing KeyError message for flash registry
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
Steps to reproduce the behavior:
```
from flash.image import ImageClassificationData, ImageClassifier
print(ImageClassifier.backbones.get('abcd'))
```
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the described issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behavior
It should throw a KeyError.
### Environment
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
### Additional context
Sending in PR.
</issue>
<code>
[start of flash/core/registry.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from functools import partial
15 from types import FunctionType
16 from typing import Any, Callable, Dict, List, Optional, Union
17
18 from pytorch_lightning.utilities import rank_zero_info
19 from pytorch_lightning.utilities.exceptions import MisconfigurationException
20
21 _REGISTERED_FUNCTION = Dict[str, Any]
22
23
24 class FlashRegistry:
25 """This class is used to register function or :class:`functools.partial` class to a registry."""
26
27 def __init__(self, name: str, verbose: bool = False) -> None:
28 self.name = name
29 self.functions: List[_REGISTERED_FUNCTION] = []
30 self._verbose = verbose
31
32 def __len__(self) -> int:
33 return len(self.functions)
34
35 def __contains__(self, key) -> bool:
36 return any(key == e["name"] for e in self.functions)
37
38 def __repr__(self) -> str:
39 return f'{self.__class__.__name__}(name={self.name}, functions={self.functions})'
40
41 def get(
42 self,
43 key: str,
44 with_metadata: bool = False,
45 strict: bool = True,
46 **metadata,
47 ) -> Union[Callable, _REGISTERED_FUNCTION, List[_REGISTERED_FUNCTION], List[Callable]]:
48 """
49 This function is used to gather matches from the registry:
50
51 Args:
52 key: Name of the registered function.
53 with_metadata: Whether to include the associated metadata in the return value.
54 strict: Whether to return all matches or just one.
55 metadata: Metadata used to filter against existing registry item's metadata.
56 """
57 matches = [e for e in self.functions if key == e["name"]]
58 if not matches:
59 raise KeyError(f"Key: {key} is not in {repr(self)}")
60
61 if metadata:
62 matches = [m for m in matches if metadata.items() <= m["metadata"].items()]
63 if not matches:
64 raise KeyError("Found no matches that fit your metadata criteria. Try removing some metadata")
65
66 matches = [e if with_metadata else e["fn"] for e in matches]
67 return matches[0] if strict else matches
68
69 def remove(self, key: str) -> None:
70 self.functions = [f for f in self.functions if f["name"] != key]
71
72 def _register_function(
73 self,
74 fn: Callable,
75 name: Optional[str] = None,
76 override: bool = False,
77 metadata: Optional[Dict[str, Any]] = None
78 ):
79 if not isinstance(fn, FunctionType) and not isinstance(fn, partial):
80 raise MisconfigurationException(f"You can only register a function, found: {fn}")
81
82 name = name or fn.__name__
83
84 if self._verbose:
85 rank_zero_info(f"Registering: {fn.__name__} function with name: {name} and metadata: {metadata}")
86
87 item = {"fn": fn, "name": name, "metadata": metadata or {}}
88
89 matching_index = self._find_matching_index(item)
90 if override and matching_index is not None:
91 self.functions[matching_index] = item
92 else:
93 if matching_index is not None:
94 raise MisconfigurationException(
95 f"Function with name: {name} and metadata: {metadata} is already present within {self}."
96 " HINT: Use `override=True`."
97 )
98 self.functions.append(item)
99
100 def _find_matching_index(self, item: _REGISTERED_FUNCTION) -> Optional[int]:
101 for idx, fn in enumerate(self.functions):
102 if all(fn[k] == item[k] for k in ("fn", "name", "metadata")):
103 return idx
104
105 def __call__(
106 self,
107 fn: Optional[Callable[..., Any]] = None,
108 name: Optional[str] = None,
109 override: bool = False,
110 **metadata
111 ) -> Callable:
112 """
113 This function is used to register new functions to the registry along their metadata.
114
115 Functions can be filtered using metadata using the ``get`` function.
116
117 """
118 if fn is not None:
119 self._register_function(fn=fn, name=name, override=override, metadata=metadata)
120 return fn
121
122 # raise the error ahead of time
123 if not (name is None or isinstance(name, str)):
124 raise TypeError(f'`name` must be a str, found {name}')
125
126 def _register(cls):
127 self._register_function(fn=cls, name=name, override=override, metadata=metadata)
128 return cls
129
130 return _register
131
132 def available_keys(self) -> List[str]:
133 return sorted(v["name"] for v in self.functions)
134
[end of flash/core/registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flash/core/registry.py b/flash/core/registry.py
--- a/flash/core/registry.py
+++ b/flash/core/registry.py
@@ -56,7 +56,7 @@
"""
matches = [e for e in self.functions if key == e["name"]]
if not matches:
- raise KeyError(f"Key: {key} is not in {repr(self)}")
+ raise KeyError(f"Key: {key} is not in {type(self).__name__}")
if metadata:
matches = [m for m in matches if metadata.items() <= m["metadata"].items()]
|
{"golden_diff": "diff --git a/flash/core/registry.py b/flash/core/registry.py\n--- a/flash/core/registry.py\n+++ b/flash/core/registry.py\n@@ -56,7 +56,7 @@\n \"\"\"\n matches = [e for e in self.functions if key == e[\"name\"]]\n if not matches:\n- raise KeyError(f\"Key: {key} is not in {repr(self)}\")\n+ raise KeyError(f\"Key: {key} is not in {type(self).__name__}\")\n \n if metadata:\n matches = [m for m in matches if metadata.items() <= m[\"metadata\"].items()]\n", "issue": "Confusing KerError message for flash registry\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n### To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```\r\nfrom flash.image import ImageClassificationData, ImageClassifier\r\n\r\nprint(ImageClassifier.backbones.get('abcd'))\r\n```\r\n\r\n#### Code sample\r\n<!-- Ideally attach a minimal code sample to reproduce the decried issue.\r\nMinimal means having the shortest code but still preserving the bug. -->\r\n\r\n### Expected behavior\r\n\r\nIt should throw a keyerror.\r\n\r\n### Environment\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n### Additional context\r\n\r\nSending in PR.\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom functools import partial\nfrom types import FunctionType\nfrom typing import Any, Callable, Dict, List, Optional, Union\n\nfrom pytorch_lightning.utilities import rank_zero_info\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\n_REGISTERED_FUNCTION = Dict[str, Any]\n\n\nclass FlashRegistry:\n \"\"\"This class is used to register function or :class:`functools.partial` class to a registry.\"\"\"\n\n def __init__(self, name: str, verbose: bool = False) -> None:\n self.name = name\n self.functions: List[_REGISTERED_FUNCTION] = []\n self._verbose = verbose\n\n def __len__(self) -> int:\n return len(self.functions)\n\n def __contains__(self, key) -> bool:\n return any(key == e[\"name\"] for e in self.functions)\n\n def __repr__(self) -> str:\n return f'{self.__class__.__name__}(name={self.name}, functions={self.functions})'\n\n def get(\n self,\n key: str,\n with_metadata: bool = False,\n strict: bool = True,\n **metadata,\n ) -> Union[Callable, _REGISTERED_FUNCTION, List[_REGISTERED_FUNCTION], List[Callable]]:\n \"\"\"\n This function is used to gather matches from the registry:\n\n Args:\n key: Name of the registered function.\n with_metadata: Whether to include the associated metadata in the return value.\n strict: Whether to return all matches or just one.\n metadata: Metadata used to filter against existing registry item's metadata.\n \"\"\"\n matches = [e for e in self.functions if key == e[\"name\"]]\n if not matches:\n 
raise KeyError(f\"Key: {key} is not in {repr(self)}\")\n\n if metadata:\n matches = [m for m in matches if metadata.items() <= m[\"metadata\"].items()]\n if not matches:\n raise KeyError(\"Found no matches that fit your metadata criteria. Try removing some metadata\")\n\n matches = [e if with_metadata else e[\"fn\"] for e in matches]\n return matches[0] if strict else matches\n\n def remove(self, key: str) -> None:\n self.functions = [f for f in self.functions if f[\"name\"] != key]\n\n def _register_function(\n self,\n fn: Callable,\n name: Optional[str] = None,\n override: bool = False,\n metadata: Optional[Dict[str, Any]] = None\n ):\n if not isinstance(fn, FunctionType) and not isinstance(fn, partial):\n raise MisconfigurationException(f\"You can only register a function, found: {fn}\")\n\n name = name or fn.__name__\n\n if self._verbose:\n rank_zero_info(f\"Registering: {fn.__name__} function with name: {name} and metadata: {metadata}\")\n\n item = {\"fn\": fn, \"name\": name, \"metadata\": metadata or {}}\n\n matching_index = self._find_matching_index(item)\n if override and matching_index is not None:\n self.functions[matching_index] = item\n else:\n if matching_index is not None:\n raise MisconfigurationException(\n f\"Function with name: {name} and metadata: {metadata} is already present within {self}.\"\n \" HINT: Use `override=True`.\"\n )\n self.functions.append(item)\n\n def _find_matching_index(self, item: _REGISTERED_FUNCTION) -> Optional[int]:\n for idx, fn in enumerate(self.functions):\n if all(fn[k] == item[k] for k in (\"fn\", \"name\", \"metadata\")):\n return idx\n\n def __call__(\n self,\n fn: Optional[Callable[..., Any]] = None,\n name: Optional[str] = None,\n override: bool = False,\n **metadata\n ) -> Callable:\n \"\"\"\n This function is used to register new functions to the registry along their metadata.\n\n Functions can be filtered using metadata using the ``get`` function.\n\n \"\"\"\n if fn is not None:\n self._register_function(fn=fn, name=name, override=override, metadata=metadata)\n return fn\n\n # raise the error ahead of time\n if not (name is None or isinstance(name, str)):\n raise TypeError(f'`name` must be a str, found {name}')\n\n def _register(cls):\n self._register_function(fn=cls, name=name, override=override, metadata=metadata)\n return cls\n\n return _register\n\n def available_keys(self) -> List[str]:\n return sorted(v[\"name\"] for v in self.functions)\n", "path": "flash/core/registry.py"}]}
| 2,145 | 137 |
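
The one-line fix replaces `repr(self)` with `type(self).__name__` in the `KeyError` message, so a failed lookup no longer dumps the registry's entire function list into the exception text. A stripped-down registry, a sketch rather than Flash's real class, shows what the caller now sees:

```python
class MiniRegistry:
    """Toy stand-in for FlashRegistry; only the lookup path is modelled."""

    def __init__(self, name):
        self.name = name
        self.functions = []  # entries look like {"name": ..., "fn": ...}

    def __repr__(self):
        return f"{self.__class__.__name__}(name={self.name}, functions={self.functions})"

    def get(self, key):
        matches = [e["fn"] for e in self.functions if e["name"] == key]
        if not matches:
            # Before the fix the message embedded repr(self), which grows with
            # every registered backbone; after the fix only the class name appears.
            raise KeyError(f"Key: {key} is not in {type(self).__name__}")
        return matches[0]


backbones = MiniRegistry("backbones")
try:
    backbones.get("abcd")
except KeyError as err:
    print(err)  # 'Key: abcd is not in MiniRegistry'
```
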
gh_patches_debug_40410
|
rasdani/github-patches
|
git_diff
|
fidals__shopelectro-909
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test update_pack command
Created in #864
</issue>
<code>
[start of shopelectro/models.py]
1 import enum
2 import random
3 import string
4 import typing
5 from uuid import uuid4
6
7 from django.conf import settings
8 from django.db import models
9 from django.urls import reverse
10 from django.utils.translation import ugettext_lazy as _
11
12 from catalog import models as catalog_models
13 from ecommerce import models as ecommerce_models
14 from pages import models as pages_models
15
16
17 def randomize_slug(slug: str) -> str:
18 slug_hash = ''.join(
19 random.choices(string.ascii_lowercase, k=settings.SLUG_HASH_SIZE)
20 )
21 return f'{slug}_{slug_hash}'
22
23
24 class SECategoryQuerySet(catalog_models.CategoryQuerySet):
25 def get_categories_tree_with_pictures(self) -> 'SECategoryQuerySet':
26 categories_with_pictures = (
27 self
28 .filter(products__page__images__isnull=False)
29 .distinct()
30 )
31
32 return categories_with_pictures.get_ancestors(include_self=True)
33
34
35 class SECategoryManager(
36 catalog_models.CategoryManager.from_queryset(SECategoryQuerySet)
37 ):
38 pass
39
40
41 class Category(catalog_models.AbstractCategory, pages_models.SyncPageMixin):
42
43 objects = SECategoryManager()
44 uuid = models.UUIDField(default=uuid4, editable=False)
45
46 @classmethod
47 def get_default_parent(cls):
48 return pages_models.CustomPage.objects.filter(slug='catalog').first()
49
50 @property
51 def image(self):
52 products = self.products.all()
53 return products[0].image if products else None
54
55 def get_absolute_url(self):
56 return reverse('category', args=(self.page.slug,))
57
58
59 class Product(
60 catalog_models.AbstractProduct,
61 catalog_models.AbstractPosition,
62 pages_models.SyncPageMixin
63 ):
64
65 # That's why we are needed to explicitly add objects manager here
66 # because of Django special managers behaviour.
67 # Se se#480 for details.
68 objects = catalog_models.ProductManager()
69
70 category = models.ForeignKey(
71 Category,
72 on_delete=models.CASCADE,
73 null=True,
74 related_name='products',
75 verbose_name=_('category'),
76 )
77
78 tags = models.ManyToManyField(
79 'Tag',
80 related_name='products',
81 blank=True,
82 verbose_name=_('tags'),
83 )
84
85 vendor_code = models.SmallIntegerField(verbose_name=_('vendor_code'))
86 uuid = models.UUIDField(default=uuid4, editable=False)
87 purchase_price = models.FloatField(
88 default=0, verbose_name=_('purchase_price'))
89 wholesale_small = models.FloatField(
90 default=0, verbose_name=_('wholesale_small'))
91 wholesale_medium = models.FloatField(
92 default=0, verbose_name=_('wholesale_medium'))
93 wholesale_large = models.FloatField(
94 default=0, verbose_name=_('wholesale_large'))
95
96 in_pack = models.PositiveSmallIntegerField(
97 default=1,
98 verbose_name=_('in pack'),
99 )
100
101 def get_absolute_url(self):
102 return reverse('product', args=(self.vendor_code,))
103
104 @property
105 def average_rate(self):
106 """Return rounded to first decimal averaged rating."""
107 rating = self.product_feedbacks.aggregate(
108 avg=models.Avg('rating')).get('avg', 0)
109 return round(rating, 1)
110
111 @property
112 def feedback_count(self):
113 return self.product_feedbacks.count()
114
115 @property
116 def feedback(self):
117 return self.product_feedbacks.all().order_by('-date')
118
119 def get_params(self):
120 return Tag.objects.filter_by_products([self]).group_tags()
121
122 def get_brand_name(self) -> str:
123 brand: typing.Optional['Tag'] = Tag.objects.get_brands([self]).get(self)
124 return brand.name if brand else ''
125
126
127 class ProductFeedback(models.Model):
128 product = models.ForeignKey(
129 Product, on_delete=models.CASCADE, null=True,
130 related_name='product_feedbacks'
131 )
132
133 date = models.DateTimeField(
134 auto_now=True, db_index=True, verbose_name=_('date'))
135 name = models.CharField(
136 max_length=255, db_index=True, verbose_name=_('name'))
137 rating = models.PositiveSmallIntegerField(
138 default=1, db_index=True, verbose_name=_('rating'))
139 dignities = models.TextField(
140 default='', blank=True, verbose_name=_('dignities'))
141 limitations = models.TextField(
142 default='', blank=True, verbose_name=_('limitations'))
143 general = models.TextField(
144 default='', blank=True, verbose_name=_('limitations'))
145
146
147 class ItemsEnum(enum.EnumMeta):
148 """
149 Provide dict-like `items` method.
150
151 https://docs.python.org/3/library/enum.html#enum-classes
152 """
153
154 def items(self):
155 return [(i.name, i.value) for i in self]
156
157 def __repr__(self):
158 fields = ', '.join(i.name for i in self)
159 return f"<enum '{self.__name__}: {fields}'>"
160
161
162 class PaymentOptions(enum.Enum, metaclass=ItemsEnum):
163 cash = 'Наличные'
164 cashless = 'Безналичные и денежные переводы'
165 AC = 'Банковская карта'
166 PC = 'Яндекс.Деньги'
167 GP = 'Связной (терминал)'
168 AB = 'Альфа-Клик'
169
170 @staticmethod
171 def default():
172 return PaymentOptions.cash
173
174
175 class Order(ecommerce_models.Order):
176 address = models.TextField(blank=True, default='')
177 payment_type = models.CharField(
178 max_length=255,
179 choices=PaymentOptions.items(),
180 default=PaymentOptions.default().name,
181 )
182 comment = models.TextField(blank=True, default='')
183 # total price - total purchase price
184 revenue = models.FloatField(default=0, null=True, verbose_name=_('revenue'))
185
186 @property
187 def payment_type_label(self):
188 """Return label for an order's payment option."""
189 return PaymentOptions[self.payment_type].value
190
191 def set_positions(self, cart):
192 """
193 Save cart's state into Order instance.
194
195 @todo #589:60m Create Cart model.
196 See details here: https://github.com/fidals/shopelectro/pull/590#discussion_r222544672
197 """
198 self.revenue = cart.total_revenue()
199 self.save()
200 for id_, position in cart:
201 self.positions.create(
202 order=self,
203 product_id=id_,
204 vendor_code=position['vendor_code'],
205 name=position['name'],
206 price=position['price'],
207 quantity=position['quantity'],
208 )
209 return self
210
211
212 class CategoryPage(pages_models.ModelPage):
213 """Create proxy model for Admin."""
214
215 class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)
216 proxy = True
217
218 # noinspection PyTypeChecker
219 objects = pages_models.ModelPage.create_model_page_managers(Category)
220
221
222 class ProductPage(pages_models.ModelPage):
223 """Create proxy model for Admin."""
224
225 class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)
226 proxy = True
227
228 # noinspection PyTypeChecker
229 objects = (
230 pages_models.ModelPage
231 .create_model_page_managers(Product)
232 )
233
234
235 class TagGroup(catalog_models.TagGroup):
236 pass
237
238
239 class TagQuerySet(catalog_models.TagQuerySet):
240
241 def products(self):
242 ids = self.values_list('products__id', flat=True)
243 return Product.objects.filter(id__in=ids).distinct()
244
245
246 class TagManager(catalog_models.TagManager.from_queryset(TagQuerySet)):
247 pass
248
249
250 class Tag(catalog_models.Tag):
251 group = models.ForeignKey(
252 TagGroup, on_delete=models.CASCADE, null=True, related_name='tags',
253 )
254
255 objects = TagManager()
256
[end of shopelectro/models.py]
[start of shopelectro/management/commands/_update_catalog/update_pack.py]
1 """
2 Update Product.in_pack and prices.
3
4 The update_catalog command always resets product prices to per unit format, so:
5 1. Parse in pack quantity from Tag.name and save it to Product.in_pack
6 2. Multiply product prices by in_pack value and save.
7 """
8 import logging
9 import typing
10
11 from django.conf import settings
12 from django.db import models, transaction
13
14 from catalog.models_expressions import Substring
15
16 from shopelectro.models import TagQuerySet, TagGroup
17
18 logger = logging.getLogger(__name__)
19 PRICES = ['price', 'purchase_price', 'wholesale_small', 'wholesale_medium', 'wholesale_large']
20
21
22 def find_pack_group() -> typing.Optional[TagGroup]:
23 pack_group = TagGroup.objects.filter(uuid=settings.PACK_GROUP_UUID).first()
24
25 # @todo #864:60m Raise errors in find_pack_group.
26 # Remove Optional type as returning value and test find_pack_group.
27 if not pack_group:
28 logger.error(
29 f'Couldn\'t find "{settings.PACK_GROUP_NAME}" tag group by'
30 f'UUID="{settings.PACK_GROUP_UUID}".\n'
31 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID.'
32 )
33 pack_group = None
34 if not settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():
35 logger.error(
36 'The pack group name isn\'t matched with the set name:'
37 f' Pack group name: {pack_group.name}\n'
38 f' Set name: {settings.PACK_GROUP_NAME}\n'
39 'Update the PACK_GROUP_NAME django settings variable to set the new relevant name.'
40 )
41 pack_group = None
42
43 return pack_group
44
45
46 def update_in_packs(packs: TagQuerySet):
47 """Parse and save in pack quantity values."""
48 packs = (
49 packs
50 .annotate(
51 in_pack_str=Substring(
52 models.F('name'),
53 models.Value('[0-9]+\+?[0-9]*')))
54 .exclude(in_pack_str__exact='')
55 )
56
57 for pack in packs:
58 in_pack = sum(map(int, pack.in_pack_str.split('+')))
59 pack.products.all().update(in_pack=max(in_pack, 1))
60
61
62 def update_prices(packs: TagQuerySet):
63 """Multiply product prices on in pack quantity."""
64 fields_to_update = {}
65 for price in PRICES:
66 fields_to_update[price] = models.F(price) * models.F('in_pack')
67
68 with transaction.atomic():
69 packs.products().update(**fields_to_update)
70
71
72 def main(*args, **kwargs):
73 pack_group = find_pack_group()
74 if not pack_group:
75 return
76
77 return
78
79 packs = pack_group.tags.all().prefetch_related('products')
80 update_in_packs(packs)
81 update_prices(packs)
82
[end of shopelectro/management/commands/_update_catalog/update_pack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/shopelectro/management/commands/_update_catalog/update_pack.py b/shopelectro/management/commands/_update_catalog/update_pack.py
--- a/shopelectro/management/commands/_update_catalog/update_pack.py
+++ b/shopelectro/management/commands/_update_catalog/update_pack.py
@@ -6,39 +6,34 @@
2. Multiply product prices by in_pack value and save.
"""
import logging
-import typing
from django.conf import settings
from django.db import models, transaction
from catalog.models_expressions import Substring
+from shopelectro.exception import UpdateCatalogException
from shopelectro.models import TagQuerySet, TagGroup
logger = logging.getLogger(__name__)
PRICES = ['price', 'purchase_price', 'wholesale_small', 'wholesale_medium', 'wholesale_large']
-def find_pack_group() -> typing.Optional[TagGroup]:
- pack_group = TagGroup.objects.filter(uuid=settings.PACK_GROUP_UUID).first()
-
- # @todo #864:60m Raise errors in find_pack_group.
- # Remove Optional type as returning value and test find_pack_group.
- if not pack_group:
- logger.error(
- f'Couldn\'t find "{settings.PACK_GROUP_NAME}" tag group by'
- f'UUID="{settings.PACK_GROUP_UUID}".\n'
- 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID.'
+def find_pack_group() -> TagGroup:
+ try:
+ pack_group = TagGroup.objects.get_pack()
+ except TagGroup.DoesNotExist as error:
+ raise UpdateCatalogException(
+ 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID. '
+ + str(error)
)
- pack_group = None
- if not settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():
- logger.error(
+ if settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():
+ raise UpdateCatalogException(
'The pack group name isn\'t matched with the set name:'
f' Pack group name: {pack_group.name}\n'
f' Set name: {settings.PACK_GROUP_NAME}\n'
'Update the PACK_GROUP_NAME django settings variable to set the new relevant name.'
)
- pack_group = None
return pack_group
@@ -70,12 +65,6 @@
def main(*args, **kwargs):
- pack_group = find_pack_group()
- if not pack_group:
- return
-
- return
-
- packs = pack_group.tags.all().prefetch_related('products')
+ packs = find_pack_group().tags.all().prefetch_related('products')
update_in_packs(packs)
update_prices(packs)
diff --git a/shopelectro/models.py b/shopelectro/models.py
--- a/shopelectro/models.py
+++ b/shopelectro/models.py
@@ -232,8 +232,15 @@
)
+class TagGroupManager(models.Manager):
+
+ def get_pack(self):
+ return self.get_queryset().get(uuid=settings.PACK_GROUP_UUID)
+
+
class TagGroup(catalog_models.TagGroup):
- pass
+
+ objects = TagGroupManager()
class TagQuerySet(catalog_models.TagQuerySet):
@@ -244,7 +251,9 @@
class TagManager(catalog_models.TagManager.from_queryset(TagQuerySet)):
- pass
+
+ def get_packs(self):
+ return TagGroup.objects.get_pack().tags.all()
class Tag(catalog_models.Tag):
|
{"golden_diff": "diff --git a/shopelectro/management/commands/_update_catalog/update_pack.py b/shopelectro/management/commands/_update_catalog/update_pack.py\n--- a/shopelectro/management/commands/_update_catalog/update_pack.py\n+++ b/shopelectro/management/commands/_update_catalog/update_pack.py\n@@ -6,39 +6,34 @@\n 2. Multiply product prices by in_pack value and save.\n \"\"\"\n import logging\n-import typing\n \n from django.conf import settings\n from django.db import models, transaction\n \n from catalog.models_expressions import Substring\n \n+from shopelectro.exception import UpdateCatalogException\n from shopelectro.models import TagQuerySet, TagGroup\n \n logger = logging.getLogger(__name__)\n PRICES = ['price', 'purchase_price', 'wholesale_small', 'wholesale_medium', 'wholesale_large']\n \n \n-def find_pack_group() -> typing.Optional[TagGroup]:\n- pack_group = TagGroup.objects.filter(uuid=settings.PACK_GROUP_UUID).first()\n-\n- # @todo #864:60m Raise errors in find_pack_group.\n- # Remove Optional type as returning value and test find_pack_group.\n- if not pack_group:\n- logger.error(\n- f'Couldn\\'t find \"{settings.PACK_GROUP_NAME}\" tag group by'\n- f'UUID=\"{settings.PACK_GROUP_UUID}\".\\n'\n- 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID.'\n+def find_pack_group() -> TagGroup:\n+ try:\n+ pack_group = TagGroup.objects.get_pack()\n+ except TagGroup.DoesNotExist as error:\n+ raise UpdateCatalogException(\n+ 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID. '\n+ + str(error)\n )\n- pack_group = None\n- if not settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():\n- logger.error(\n+ if settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():\n+ raise UpdateCatalogException(\n 'The pack group name isn\\'t matched with the set name:'\n f' Pack group name: {pack_group.name}\\n'\n f' Set name: {settings.PACK_GROUP_NAME}\\n'\n 'Update the PACK_GROUP_NAME django settings variable to set the new relevant name.'\n )\n- pack_group = None\n \n return pack_group\n \n@@ -70,12 +65,6 @@\n \n \n def main(*args, **kwargs):\n- pack_group = find_pack_group()\n- if not pack_group:\n- return\n-\n- return\n-\n- packs = pack_group.tags.all().prefetch_related('products')\n+ packs = find_pack_group().tags.all().prefetch_related('products')\n update_in_packs(packs)\n update_prices(packs)\ndiff --git a/shopelectro/models.py b/shopelectro/models.py\n--- a/shopelectro/models.py\n+++ b/shopelectro/models.py\n@@ -232,8 +232,15 @@\n )\n \n \n+class TagGroupManager(models.Manager):\n+\n+ def get_pack(self):\n+ return self.get_queryset().get(uuid=settings.PACK_GROUP_UUID)\n+\n+\n class TagGroup(catalog_models.TagGroup):\n- pass\n+\n+ objects = TagGroupManager()\n \n \n class TagQuerySet(catalog_models.TagQuerySet):\n@@ -244,7 +251,9 @@\n \n \n class TagManager(catalog_models.TagManager.from_queryset(TagQuerySet)):\n- pass\n+\n+ def get_packs(self):\n+ return TagGroup.objects.get_pack().tags.all()\n \n \n class Tag(catalog_models.Tag):\n", "issue": "Test update_pack command\nCreated in #864\n", "before_files": [{"content": "import enum\nimport random\nimport string\nimport typing\nfrom uuid import uuid4\n\nfrom django.conf import settings\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom catalog import models as catalog_models\nfrom ecommerce import models as ecommerce_models\nfrom pages import models as pages_models\n\n\ndef randomize_slug(slug: 
str) -> str:\n slug_hash = ''.join(\n random.choices(string.ascii_lowercase, k=settings.SLUG_HASH_SIZE)\n )\n return f'{slug}_{slug_hash}'\n\n\nclass SECategoryQuerySet(catalog_models.CategoryQuerySet):\n def get_categories_tree_with_pictures(self) -> 'SECategoryQuerySet':\n categories_with_pictures = (\n self\n .filter(products__page__images__isnull=False)\n .distinct()\n )\n\n return categories_with_pictures.get_ancestors(include_self=True)\n\n\nclass SECategoryManager(\n catalog_models.CategoryManager.from_queryset(SECategoryQuerySet)\n):\n pass\n\n\nclass Category(catalog_models.AbstractCategory, pages_models.SyncPageMixin):\n\n objects = SECategoryManager()\n uuid = models.UUIDField(default=uuid4, editable=False)\n\n @classmethod\n def get_default_parent(cls):\n return pages_models.CustomPage.objects.filter(slug='catalog').first()\n\n @property\n def image(self):\n products = self.products.all()\n return products[0].image if products else None\n\n def get_absolute_url(self):\n return reverse('category', args=(self.page.slug,))\n\n\nclass Product(\n catalog_models.AbstractProduct,\n catalog_models.AbstractPosition,\n pages_models.SyncPageMixin\n):\n\n # That's why we are needed to explicitly add objects manager here\n # because of Django special managers behaviour.\n # Se se#480 for details.\n objects = catalog_models.ProductManager()\n\n category = models.ForeignKey(\n Category,\n on_delete=models.CASCADE,\n null=True,\n related_name='products',\n verbose_name=_('category'),\n )\n\n tags = models.ManyToManyField(\n 'Tag',\n related_name='products',\n blank=True,\n verbose_name=_('tags'),\n )\n\n vendor_code = models.SmallIntegerField(verbose_name=_('vendor_code'))\n uuid = models.UUIDField(default=uuid4, editable=False)\n purchase_price = models.FloatField(\n default=0, verbose_name=_('purchase_price'))\n wholesale_small = models.FloatField(\n default=0, verbose_name=_('wholesale_small'))\n wholesale_medium = models.FloatField(\n default=0, verbose_name=_('wholesale_medium'))\n wholesale_large = models.FloatField(\n default=0, verbose_name=_('wholesale_large'))\n\n in_pack = models.PositiveSmallIntegerField(\n default=1,\n verbose_name=_('in pack'),\n )\n\n def get_absolute_url(self):\n return reverse('product', args=(self.vendor_code,))\n\n @property\n def average_rate(self):\n \"\"\"Return rounded to first decimal averaged rating.\"\"\"\n rating = self.product_feedbacks.aggregate(\n avg=models.Avg('rating')).get('avg', 0)\n return round(rating, 1)\n\n @property\n def feedback_count(self):\n return self.product_feedbacks.count()\n\n @property\n def feedback(self):\n return self.product_feedbacks.all().order_by('-date')\n\n def get_params(self):\n return Tag.objects.filter_by_products([self]).group_tags()\n\n def get_brand_name(self) -> str:\n brand: typing.Optional['Tag'] = Tag.objects.get_brands([self]).get(self)\n return brand.name if brand else ''\n\n\nclass ProductFeedback(models.Model):\n product = models.ForeignKey(\n Product, on_delete=models.CASCADE, null=True,\n related_name='product_feedbacks'\n )\n\n date = models.DateTimeField(\n auto_now=True, db_index=True, verbose_name=_('date'))\n name = models.CharField(\n max_length=255, db_index=True, verbose_name=_('name'))\n rating = models.PositiveSmallIntegerField(\n default=1, db_index=True, verbose_name=_('rating'))\n dignities = models.TextField(\n default='', blank=True, verbose_name=_('dignities'))\n limitations = models.TextField(\n default='', blank=True, verbose_name=_('limitations'))\n general = models.TextField(\n 
default='', blank=True, verbose_name=_('limitations'))\n\n\nclass ItemsEnum(enum.EnumMeta):\n \"\"\"\n Provide dict-like `items` method.\n\n https://docs.python.org/3/library/enum.html#enum-classes\n \"\"\"\n\n def items(self):\n return [(i.name, i.value) for i in self]\n\n def __repr__(self):\n fields = ', '.join(i.name for i in self)\n return f\"<enum '{self.__name__}: {fields}'>\"\n\n\nclass PaymentOptions(enum.Enum, metaclass=ItemsEnum):\n cash = '\u041d\u0430\u043b\u0438\u0447\u043d\u044b\u0435'\n cashless = '\u0411\u0435\u0437\u043d\u0430\u043b\u0438\u0447\u043d\u044b\u0435 \u0438 \u0434\u0435\u043d\u0435\u0436\u043d\u044b\u0435 \u043f\u0435\u0440\u0435\u0432\u043e\u0434\u044b'\n AC = '\u0411\u0430\u043d\u043a\u043e\u0432\u0441\u043a\u0430\u044f \u043a\u0430\u0440\u0442\u0430'\n PC = '\u042f\u043d\u0434\u0435\u043a\u0441.\u0414\u0435\u043d\u044c\u0433\u0438'\n GP = '\u0421\u0432\u044f\u0437\u043d\u043e\u0439 (\u0442\u0435\u0440\u043c\u0438\u043d\u0430\u043b)'\n AB = '\u0410\u043b\u044c\u0444\u0430-\u041a\u043b\u0438\u043a'\n\n @staticmethod\n def default():\n return PaymentOptions.cash\n\n\nclass Order(ecommerce_models.Order):\n address = models.TextField(blank=True, default='')\n payment_type = models.CharField(\n max_length=255,\n choices=PaymentOptions.items(),\n default=PaymentOptions.default().name,\n )\n comment = models.TextField(blank=True, default='')\n # total price - total purchase price\n revenue = models.FloatField(default=0, null=True, verbose_name=_('revenue'))\n\n @property\n def payment_type_label(self):\n \"\"\"Return label for an order's payment option.\"\"\"\n return PaymentOptions[self.payment_type].value\n\n def set_positions(self, cart):\n \"\"\"\n Save cart's state into Order instance.\n\n @todo #589:60m Create Cart model.\n See details here: https://github.com/fidals/shopelectro/pull/590#discussion_r222544672\n \"\"\"\n self.revenue = cart.total_revenue()\n self.save()\n for id_, position in cart:\n self.positions.create(\n order=self,\n product_id=id_,\n vendor_code=position['vendor_code'],\n name=position['name'],\n price=position['price'],\n quantity=position['quantity'],\n )\n return self\n\n\nclass CategoryPage(pages_models.ModelPage):\n \"\"\"Create proxy model for Admin.\"\"\"\n\n class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)\n proxy = True\n\n # noinspection PyTypeChecker\n objects = pages_models.ModelPage.create_model_page_managers(Category)\n\n\nclass ProductPage(pages_models.ModelPage):\n \"\"\"Create proxy model for Admin.\"\"\"\n\n class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)\n proxy = True\n\n # noinspection PyTypeChecker\n objects = (\n pages_models.ModelPage\n .create_model_page_managers(Product)\n )\n\n\nclass TagGroup(catalog_models.TagGroup):\n pass\n\n\nclass TagQuerySet(catalog_models.TagQuerySet):\n\n def products(self):\n ids = self.values_list('products__id', flat=True)\n return Product.objects.filter(id__in=ids).distinct()\n\n\nclass TagManager(catalog_models.TagManager.from_queryset(TagQuerySet)):\n pass\n\n\nclass Tag(catalog_models.Tag):\n group = models.ForeignKey(\n TagGroup, on_delete=models.CASCADE, null=True, related_name='tags',\n )\n\n objects = TagManager()\n", "path": "shopelectro/models.py"}, {"content": "\"\"\"\nUpdate Product.in_pack and prices.\n\nThe update_catalog command always resets product prices to per unit format, so:\n1. Parse in pack quantity from Tag.name and save it to Product.in_pack\n2. 
Multiply product prices by in_pack value and save.\n\"\"\"\nimport logging\nimport typing\n\nfrom django.conf import settings\nfrom django.db import models, transaction\n\nfrom catalog.models_expressions import Substring\n\nfrom shopelectro.models import TagQuerySet, TagGroup\n\nlogger = logging.getLogger(__name__)\nPRICES = ['price', 'purchase_price', 'wholesale_small', 'wholesale_medium', 'wholesale_large']\n\n\ndef find_pack_group() -> typing.Optional[TagGroup]:\n pack_group = TagGroup.objects.filter(uuid=settings.PACK_GROUP_UUID).first()\n\n # @todo #864:60m Raise errors in find_pack_group.\n # Remove Optional type as returning value and test find_pack_group.\n if not pack_group:\n logger.error(\n f'Couldn\\'t find \"{settings.PACK_GROUP_NAME}\" tag group by'\n f'UUID=\"{settings.PACK_GROUP_UUID}\".\\n'\n 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID.'\n )\n pack_group = None\n if not settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():\n logger.error(\n 'The pack group name isn\\'t matched with the set name:'\n f' Pack group name: {pack_group.name}\\n'\n f' Set name: {settings.PACK_GROUP_NAME}\\n'\n 'Update the PACK_GROUP_NAME django settings variable to set the new relevant name.'\n )\n pack_group = None\n\n return pack_group\n\n\ndef update_in_packs(packs: TagQuerySet):\n \"\"\"Parse and save in pack quantity values.\"\"\"\n packs = (\n packs\n .annotate(\n in_pack_str=Substring(\n models.F('name'),\n models.Value('[0-9]+\\+?[0-9]*')))\n .exclude(in_pack_str__exact='')\n )\n\n for pack in packs:\n in_pack = sum(map(int, pack.in_pack_str.split('+')))\n pack.products.all().update(in_pack=max(in_pack, 1))\n\n\ndef update_prices(packs: TagQuerySet):\n \"\"\"Multiply product prices on in pack quantity.\"\"\"\n fields_to_update = {}\n for price in PRICES:\n fields_to_update[price] = models.F(price) * models.F('in_pack')\n\n with transaction.atomic():\n packs.products().update(**fields_to_update)\n\n\ndef main(*args, **kwargs):\n pack_group = find_pack_group()\n if not pack_group:\n return\n\n return\n\n packs = pack_group.tags.all().prefetch_related('products')\n update_in_packs(packs)\n update_prices(packs)\n", "path": "shopelectro/management/commands/_update_catalog/update_pack.py"}]}
| 3,655 | 787 |
gh_patches_debug_38334
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1461
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pydecimal left_digits ignored if min_value or max_value is specified
`max_value` should be set to `10 ** (left_digits -1) - epsilon` before this:
https://github.com/joke2k/faker/blob/d9f4b00b9134e6dfbb09cc1caa81c912b79c3c7c/faker/providers/python/__init__.py#L92-L102
Use cases for using both include:
- `min_value=0`, since `positive=True` disallows `0` (a bug in itself IMO, but that's an age old debate!)
- `min_value` at all actually? As in, 4 left digits, but no less than 42 in value, for example.
- a `max_value` that has a semantically different reason for existing, so it's convenient to specify in addition to `left_digits` [^]
Work around is to specify a `max_value` (per above) instead of `left_digits` if `min_value` or `max_value` are needed too.
(I will have a PR for this shortly.)
[^] - e.g. `left_digits` could be a database requirement (`NUMERIC(left + right, right)`), but `max_value` something to do with the logic the fake is for.
</issue>
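A minimal sketch of the reported behaviour, assuming a default `Faker()` instance and the provider implementation shown below (the printed value is illustrative):

```
from faker import Faker

fake = Faker()

# left_digits=4 should cap the integer part at 4 digits, but because
# min_value is set, the provider builds the integer part from
# min_value/max_value alone and never consults left_digits.
value = fake.pydecimal(left_digits=4, right_digits=2, min_value=42)
print(value)  # can print e.g. Decimal('10197.42'), i.e. five integer digits
```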
<code>
[start of faker/providers/python/__init__.py]
1 import string
2 import sys
3 import warnings
4
5 from decimal import Decimal
6
7 from .. import BaseProvider
8
9
10 class Provider(BaseProvider):
11 default_value_types = (
12 'str', 'str', 'str', 'str', 'float', 'int', 'int', 'decimal',
13 'date_time', 'uri', 'email',
14 )
15
16 def _check_signature(self, value_types, allowed_types):
17 if value_types is not None and not isinstance(value_types, (list, tuple)):
18 value_types = [value_types]
19 warnings.warn(
20 'Passing value types as positional arguments is going to be '
21 'deprecated. Pass them as a list or tuple instead.',
22 PendingDeprecationWarning,
23 )
24 if value_types is None:
25 value_types = ()
26 return tuple(value_types) + allowed_types
27
28 def pybool(self):
29 return self.random_int(0, 1) == 1
30
31 def pystr(self, min_chars=None, max_chars=20):
32 """
33 Generates a random string of upper and lowercase letters.
34 :type min_chars: int
35 :type max_chars: int
36 :return: String. Random of random length between min and max characters.
37 """
38 if min_chars is None:
39 return "".join(self.random_letters(length=max_chars))
40 else:
41 assert (
42 max_chars >= min_chars), "Maximum length must be greater than or equal to minimum length"
43 return "".join(
44 self.random_letters(
45 length=self.generator.random.randint(min_chars, max_chars),
46 ),
47 )
48
49 def pystr_format(self, string_format='?#-###{{random_int}}{{random_letter}}', letters=string.ascii_letters):
50 return self.bothify(self.generator.parse(string_format), letters=letters)
51
52 def pyfloat(self, left_digits=None, right_digits=None, positive=False,
53 min_value=None, max_value=None):
54 if left_digits is not None and left_digits < 0:
55 raise ValueError(
56 'A float number cannot have less than 0 digits in its '
57 'integer part')
58 if right_digits is not None and right_digits < 0:
59 raise ValueError(
60 'A float number cannot have less than 0 digits in its '
61 'fractional part')
62 if left_digits == 0 and right_digits == 0:
63 raise ValueError(
64 'A float number cannot have less than 0 digits in total')
65 if None not in (min_value, max_value) and min_value > max_value:
66 raise ValueError('Min value cannot be greater than max value')
67 if None not in (min_value, max_value) and min_value == max_value:
68 raise ValueError('Min and max value cannot be the same')
69 if positive and min_value is not None and min_value <= 0:
70 raise ValueError(
71 'Cannot combine positive=True with negative or zero min_value')
72
73 # Make sure at least either left or right is set
74 if left_digits is None and right_digits is None:
75 left_digits = self.random_int(1, sys.float_info.dig - 1)
76
77 # If only one side is set, choose #digits for other side
78 if (left_digits is None) ^ (right_digits is None):
79 if left_digits is None:
80 left_digits = max(1, sys.float_info.dig - right_digits)
81 else:
82 right_digits = max(1, sys.float_info.dig - left_digits)
83
84 # Make sure we don't ask for too many digits!
85 if left_digits + right_digits > sys.float_info.dig:
86 raise ValueError(
87 f'Asking for too many digits ({left_digits} + {right_digits} == {left_digits + right_digits} > '
88 f'{sys.float_info.dig})',
89 )
90
91 sign = ''
92 if (min_value is not None) or (max_value is not None):
93 if max_value is not None and max_value < 0:
94 max_value += 1 # as the random_int will be generated up to max_value - 1
95 if min_value is not None and min_value < 0:
96 min_value += 1 # as we then append digits after the left_number
97 left_number = self._safe_random_int(
98 min_value, max_value, positive,
99 )
100 else:
101 sign = '+' if positive else self.random_element(('+', '-'))
102 left_number = self.random_number(left_digits)
103
104 result = float(f'{sign}{left_number}.{self.random_number(right_digits)}')
105 if positive and result == 0:
106 if right_digits:
107 result = float('0.' + '0' * (right_digits - 1) + '1')
108 else:
109 result += sys.float_info.epsilon
110 return result
111
112 def _safe_random_int(self, min_value, max_value, positive):
113 orig_min_value = min_value
114 orig_max_value = max_value
115
116 if min_value is None:
117 min_value = max_value - self.random_int()
118 if max_value is None:
119 max_value = min_value + self.random_int()
120 if positive:
121 min_value = max(min_value, 0)
122
123 if min_value == max_value:
124 return self._safe_random_int(orig_min_value, orig_max_value, positive)
125 else:
126 return self.random_int(min_value, max_value - 1)
127
128 def pyint(self, min_value=0, max_value=9999, step=1):
129 return self.generator.random_int(min_value, max_value, step=step)
130
131 def pydecimal(self, left_digits=None, right_digits=None, positive=False,
132 min_value=None, max_value=None):
133
134 float_ = self.pyfloat(
135 left_digits, right_digits, positive, min_value, max_value)
136 return Decimal(str(float_))
137
138 def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
139 return tuple(
140 self._pyiterable(
141 nb_elements,
142 variable_nb_elements,
143 value_types,
144 *allowed_types))
145
146 def pyset(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
147 return set(
148 self._pyiterable(
149 nb_elements,
150 variable_nb_elements,
151 value_types,
152 *allowed_types))
153
154 def pylist(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
155 return list(
156 self._pyiterable(
157 nb_elements,
158 variable_nb_elements,
159 value_types,
160 *allowed_types))
161
162 def pyiterable(
163 self,
164 nb_elements=10,
165 variable_nb_elements=True,
166 value_types=None,
167 *allowed_types):
168 value_types = self._check_signature(value_types, allowed_types)
169 return self.random_element([self.pylist, self.pytuple, self.pyset])(
170 nb_elements, variable_nb_elements, value_types, *allowed_types)
171
172 def _random_type(self, type_list):
173 value_type = self.random_element(type_list)
174
175 method_name = f'py{value_type}'
176 if hasattr(self, method_name):
177 value_type = method_name
178
179 return self.generator.format(value_type)
180
181 def _pyiterable(
182 self,
183 nb_elements=10,
184 variable_nb_elements=True,
185 value_types=None,
186 *allowed_types):
187
188 value_types = self._check_signature(value_types, allowed_types)
189
190 value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()
191 for t in value_types
192 # avoid recursion
193 if t not in ['iterable', 'list', 'tuple', 'dict', 'set']]
194 if not value_types:
195 value_types = self.default_value_types
196
197 if variable_nb_elements:
198 nb_elements = self.randomize_nb_elements(nb_elements, min=1)
199
200 for _ in range(nb_elements):
201 yield self._random_type(value_types)
202
203 def pydict(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
204 """
205 Returns a dictionary.
206
207 :nb_elements: number of elements for dictionary
208 :variable_nb_elements: is use variable number of elements for dictionary
209 :value_types: type of dictionary values
210 """
211 if variable_nb_elements:
212 nb_elements = self.randomize_nb_elements(nb_elements, min=1)
213
214 return dict(zip(
215 self.generator.words(nb_elements, unique=True),
216 self._pyiterable(nb_elements, False, value_types, *allowed_types),
217 ))
218
219 def pystruct(self, count=10, value_types=None, *allowed_types):
220 value_types = self._check_signature(value_types, allowed_types)
221
222 value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()
223 for t in value_types
224 # avoid recursion
225 if t != 'struct']
226 if not value_types:
227 value_types = self.default_value_types
228
229 types = []
230 d = {}
231 nd = {}
232 for i in range(count):
233 d[self.generator.word()] = self._random_type(value_types)
234 types.append(self._random_type(value_types))
235 nd[self.generator.word()] = {i: self._random_type(value_types),
236 i + 1: [self._random_type(value_types),
237 self._random_type(value_types),
238 self._random_type(value_types)],
239 i + 2: {i: self._random_type(value_types),
240 i + 1: self._random_type(value_types),
241 i + 2: [self._random_type(value_types),
242 self._random_type(value_types)]}}
243 return types, d, nd
244
[end of faker/providers/python/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py
--- a/faker/providers/python/__init__.py
+++ b/faker/providers/python/__init__.py
@@ -1,3 +1,4 @@
+import math
import string
import sys
import warnings
@@ -69,10 +70,15 @@
if positive and min_value is not None and min_value <= 0:
raise ValueError(
'Cannot combine positive=True with negative or zero min_value')
+ if left_digits is not None and max_value and math.ceil(math.log10(abs(max_value))) > left_digits:
+ raise ValueError('Max value must fit within left digits')
+ if left_digits is not None and min_value and math.ceil(math.log10(abs(min_value))) > left_digits:
+ raise ValueError('Min value must fit within left digits')
# Make sure at least either left or right is set
if left_digits is None and right_digits is None:
- left_digits = self.random_int(1, sys.float_info.dig - 1)
+ needed_left_digits = max(1, math.ceil(math.log10(max(abs(max_value or 1), abs(min_value or 1)))))
+ right_digits = self.random_int(1, sys.float_info.dig - needed_left_digits)
# If only one side is set, choose #digits for other side
if (left_digits is None) ^ (right_digits is None):
@@ -90,6 +96,13 @@
sign = ''
if (min_value is not None) or (max_value is not None):
+ # Make sure left_digits still respected
+ if left_digits is not None:
+ if max_value is None:
+ max_value = 10 ** left_digits # minus smallest representable, adjusted later
+ if min_value is None:
+ min_value = -(10 ** left_digits) # plus smallest representable, adjusted later
+
if max_value is not None and max_value < 0:
max_value += 1 # as the random_int will be generated up to max_value - 1
if min_value is not None and min_value < 0:
@@ -107,6 +120,14 @@
result = float('0.' + '0' * (right_digits - 1) + '1')
else:
result += sys.float_info.epsilon
+
+ if right_digits:
+ result = min(result, 10 ** left_digits - float(f'0.{"0" * (right_digits - 1)}1'))
+ result = max(result, -(10 ** left_digits + float(f'0.{"0" * (right_digits - 1)}1')))
+ else:
+ result = min(result, 10 ** left_digits - 1)
+ result = max(result, -(10 ** left_digits + 1))
+
return result
def _safe_random_int(self, min_value, max_value, positive):
|
{"golden_diff": "diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py\n--- a/faker/providers/python/__init__.py\n+++ b/faker/providers/python/__init__.py\n@@ -1,3 +1,4 @@\n+import math\n import string\n import sys\n import warnings\n@@ -69,10 +70,15 @@\n if positive and min_value is not None and min_value <= 0:\n raise ValueError(\n 'Cannot combine positive=True with negative or zero min_value')\n+ if left_digits is not None and max_value and math.ceil(math.log10(abs(max_value))) > left_digits:\n+ raise ValueError('Max value must fit within left digits')\n+ if left_digits is not None and min_value and math.ceil(math.log10(abs(min_value))) > left_digits:\n+ raise ValueError('Min value must fit within left digits')\n \n # Make sure at least either left or right is set\n if left_digits is None and right_digits is None:\n- left_digits = self.random_int(1, sys.float_info.dig - 1)\n+ needed_left_digits = max(1, math.ceil(math.log10(max(abs(max_value or 1), abs(min_value or 1)))))\n+ right_digits = self.random_int(1, sys.float_info.dig - needed_left_digits)\n \n # If only one side is set, choose #digits for other side\n if (left_digits is None) ^ (right_digits is None):\n@@ -90,6 +96,13 @@\n \n sign = ''\n if (min_value is not None) or (max_value is not None):\n+ # Make sure left_digits still respected\n+ if left_digits is not None:\n+ if max_value is None:\n+ max_value = 10 ** left_digits # minus smallest representable, adjusted later\n+ if min_value is None:\n+ min_value = -(10 ** left_digits) # plus smallest representable, adjusted later\n+\n if max_value is not None and max_value < 0:\n max_value += 1 # as the random_int will be generated up to max_value - 1\n if min_value is not None and min_value < 0:\n@@ -107,6 +120,14 @@\n result = float('0.' + '0' * (right_digits - 1) + '1')\n else:\n result += sys.float_info.epsilon\n+\n+ if right_digits:\n+ result = min(result, 10 ** left_digits - float(f'0.{\"0\" * (right_digits - 1)}1'))\n+ result = max(result, -(10 ** left_digits + float(f'0.{\"0\" * (right_digits - 1)}1')))\n+ else:\n+ result = min(result, 10 ** left_digits - 1)\n+ result = max(result, -(10 ** left_digits + 1))\n+\n return result\n \n def _safe_random_int(self, min_value, max_value, positive):\n", "issue": "pydecimal left_digits ignored if min_value or max_value is specified\n`max_value` should be set to `10 ** (left_digits -1) - epsilon` before this:\r\n\r\nhttps://github.com/joke2k/faker/blob/d9f4b00b9134e6dfbb09cc1caa81c912b79c3c7c/faker/providers/python/__init__.py#L92-L102\r\n\r\nUse cases for using both include:\r\n\r\n- `min_value=0`, since `positive=True` disallows `0` (a bug in itself IMO, but that's an age old debate!)\r\n- `min_value` at all actually? As in, 4 left digits, but no less than 42 in value, for example.\r\n- a `max_value` that has a semantically different reason for existing, so it's convenient to specify in addition to `left_digits` [^]\r\n\r\nWork around is to specify a `max_value` (per above) instead of `left_digits` if `min_value` or `max_value` are needed too.\r\n\r\n(I will have a PR for this shortly.)\r\n\r\n[^] - e.g. `left_digits` could be a database requirement (`NUMERIC(left + right, right)`), but `max_value` something to do with the logic the fake is for.\n", "before_files": [{"content": "import string\nimport sys\nimport warnings\n\nfrom decimal import Decimal\n\nfrom .. 
import BaseProvider\n\n\nclass Provider(BaseProvider):\n default_value_types = (\n 'str', 'str', 'str', 'str', 'float', 'int', 'int', 'decimal',\n 'date_time', 'uri', 'email',\n )\n\n def _check_signature(self, value_types, allowed_types):\n if value_types is not None and not isinstance(value_types, (list, tuple)):\n value_types = [value_types]\n warnings.warn(\n 'Passing value types as positional arguments is going to be '\n 'deprecated. Pass them as a list or tuple instead.',\n PendingDeprecationWarning,\n )\n if value_types is None:\n value_types = ()\n return tuple(value_types) + allowed_types\n\n def pybool(self):\n return self.random_int(0, 1) == 1\n\n def pystr(self, min_chars=None, max_chars=20):\n \"\"\"\n Generates a random string of upper and lowercase letters.\n :type min_chars: int\n :type max_chars: int\n :return: String. Random of random length between min and max characters.\n \"\"\"\n if min_chars is None:\n return \"\".join(self.random_letters(length=max_chars))\n else:\n assert (\n max_chars >= min_chars), \"Maximum length must be greater than or equal to minimum length\"\n return \"\".join(\n self.random_letters(\n length=self.generator.random.randint(min_chars, max_chars),\n ),\n )\n\n def pystr_format(self, string_format='?#-###{{random_int}}{{random_letter}}', letters=string.ascii_letters):\n return self.bothify(self.generator.parse(string_format), letters=letters)\n\n def pyfloat(self, left_digits=None, right_digits=None, positive=False,\n min_value=None, max_value=None):\n if left_digits is not None and left_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n 'integer part')\n if right_digits is not None and right_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n 'fractional part')\n if left_digits == 0 and right_digits == 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in total')\n if None not in (min_value, max_value) and min_value > max_value:\n raise ValueError('Min value cannot be greater than max value')\n if None not in (min_value, max_value) and min_value == max_value:\n raise ValueError('Min and max value cannot be the same')\n if positive and min_value is not None and min_value <= 0:\n raise ValueError(\n 'Cannot combine positive=True with negative or zero min_value')\n\n # Make sure at least either left or right is set\n if left_digits is None and right_digits is None:\n left_digits = self.random_int(1, sys.float_info.dig - 1)\n\n # If only one side is set, choose #digits for other side\n if (left_digits is None) ^ (right_digits is None):\n if left_digits is None:\n left_digits = max(1, sys.float_info.dig - right_digits)\n else:\n right_digits = max(1, sys.float_info.dig - left_digits)\n\n # Make sure we don't ask for too many digits!\n if left_digits + right_digits > sys.float_info.dig:\n raise ValueError(\n f'Asking for too many digits ({left_digits} + {right_digits} == {left_digits + right_digits} > '\n f'{sys.float_info.dig})',\n )\n\n sign = ''\n if (min_value is not None) or (max_value is not None):\n if max_value is not None and max_value < 0:\n max_value += 1 # as the random_int will be generated up to max_value - 1\n if min_value is not None and min_value < 0:\n min_value += 1 # as we then append digits after the left_number\n left_number = self._safe_random_int(\n min_value, max_value, positive,\n )\n else:\n sign = '+' if positive else self.random_element(('+', '-'))\n left_number = self.random_number(left_digits)\n\n result = 
float(f'{sign}{left_number}.{self.random_number(right_digits)}')\n if positive and result == 0:\n if right_digits:\n result = float('0.' + '0' * (right_digits - 1) + '1')\n else:\n result += sys.float_info.epsilon\n return result\n\n def _safe_random_int(self, min_value, max_value, positive):\n orig_min_value = min_value\n orig_max_value = max_value\n\n if min_value is None:\n min_value = max_value - self.random_int()\n if max_value is None:\n max_value = min_value + self.random_int()\n if positive:\n min_value = max(min_value, 0)\n\n if min_value == max_value:\n return self._safe_random_int(orig_min_value, orig_max_value, positive)\n else:\n return self.random_int(min_value, max_value - 1)\n\n def pyint(self, min_value=0, max_value=9999, step=1):\n return self.generator.random_int(min_value, max_value, step=step)\n\n def pydecimal(self, left_digits=None, right_digits=None, positive=False,\n min_value=None, max_value=None):\n\n float_ = self.pyfloat(\n left_digits, right_digits, positive, min_value, max_value)\n return Decimal(str(float_))\n\n def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return tuple(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pyset(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return set(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pylist(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return list(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pyiterable(\n self,\n nb_elements=10,\n variable_nb_elements=True,\n value_types=None,\n *allowed_types):\n value_types = self._check_signature(value_types, allowed_types)\n return self.random_element([self.pylist, self.pytuple, self.pyset])(\n nb_elements, variable_nb_elements, value_types, *allowed_types)\n\n def _random_type(self, type_list):\n value_type = self.random_element(type_list)\n\n method_name = f'py{value_type}'\n if hasattr(self, method_name):\n value_type = method_name\n\n return self.generator.format(value_type)\n\n def _pyiterable(\n self,\n nb_elements=10,\n variable_nb_elements=True,\n value_types=None,\n *allowed_types):\n\n value_types = self._check_signature(value_types, allowed_types)\n\n value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()\n for t in value_types\n # avoid recursion\n if t not in ['iterable', 'list', 'tuple', 'dict', 'set']]\n if not value_types:\n value_types = self.default_value_types\n\n if variable_nb_elements:\n nb_elements = self.randomize_nb_elements(nb_elements, min=1)\n\n for _ in range(nb_elements):\n yield self._random_type(value_types)\n\n def pydict(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n \"\"\"\n Returns a dictionary.\n\n :nb_elements: number of elements for dictionary\n :variable_nb_elements: is use variable number of elements for dictionary\n :value_types: type of dictionary values\n \"\"\"\n if variable_nb_elements:\n nb_elements = self.randomize_nb_elements(nb_elements, min=1)\n\n return dict(zip(\n self.generator.words(nb_elements, unique=True),\n self._pyiterable(nb_elements, False, value_types, *allowed_types),\n ))\n\n def pystruct(self, count=10, value_types=None, *allowed_types):\n value_types = self._check_signature(value_types, allowed_types)\n\n value_types = [t if isinstance(t, str) 
else getattr(t, '__name__', type(t).__name__).lower()\n for t in value_types\n # avoid recursion\n if t != 'struct']\n if not value_types:\n value_types = self.default_value_types\n\n types = []\n d = {}\n nd = {}\n for i in range(count):\n d[self.generator.word()] = self._random_type(value_types)\n types.append(self._random_type(value_types))\n nd[self.generator.word()] = {i: self._random_type(value_types),\n i + 1: [self._random_type(value_types),\n self._random_type(value_types),\n self._random_type(value_types)],\n i + 2: {i: self._random_type(value_types),\n i + 1: self._random_type(value_types),\n i + 2: [self._random_type(value_types),\n self._random_type(value_types)]}}\n return types, d, nd\n", "path": "faker/providers/python/__init__.py"}]}
| 3,535 | 666 |
gh_patches_debug_21635
|
rasdani/github-patches
|
git_diff
|
google__osv.dev-1082
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't suggest using -X POST
On https://osv.dev/#use-the-api is the instruction
```
Query by commit hash
curl -X POST -d \
'{"commit": "6879efc2c1596d11a6a6ad296f80063b558d5e0f"}' \
"https://api.osv.dev/v1/query"
```
Using `-X POST` here is unnecessary, redundant and potentially dangerous as people cut and paste this into more places. curl will actually tell you this if you add `-v` to this command:
`Note: Unnecessary use of -X or --request, POST is already inferred.`
See also https://daniel.haxx.se/blog/2015/09/11/unnecessary-use-of-curl-x/
</issue>
<code>
[start of docs/build.py]
1 # Copyright 2021 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Documentation builder."""
15
16 import json
17 import os
18 import shutil
19 import subprocess
20
21 _ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
22 _GENERATED_FILENAME = 'v1/osv_service_v1.swagger.json'
23
24
25 def property_description_workaround(definition):
26 """Work around an OpenAPI limitation with a field descriptions getting
27 replaced by the object descriptions."""
28 # Workaround described in https://github.com/Redocly/redoc/issues/835.
29 for value in definition['properties'].values():
30 if '$ref' in value:
31 value['allOf'] = [{'$ref': value['$ref']}]
32 del value['$ref']
33
34
35 def replace_property_name(definition, key, replacement):
36 """Replace property name."""
37 definition['properties'][replacement] = definition['properties'][key]
38 del definition['properties'][key]
39
40
41 def main():
42 api_dir = os.path.join(_ROOT_DIR, 'gcp', 'api')
43 v1_api_dir = os.path.join(api_dir, 'v1')
44 googleapis_dir = os.path.join(api_dir, 'googleapis')
45 service_proto_path = os.path.join(v1_api_dir, 'osv_service_v1.proto')
46
47 # Add OSV dependencies.
48 osv_path = os.path.join(api_dir, 'osv')
49 if os.path.exists(osv_path):
50 shutil.rmtree(osv_path)
51
52 shutil.copytree(os.path.join(_ROOT_DIR, 'osv'), osv_path)
53
54 subprocess.run([
55 'protoc',
56 '-I',
57 api_dir,
58 '-I',
59 v1_api_dir,
60 '-I',
61 googleapis_dir,
62 '--openapiv2_out',
63 '.',
64 '--openapiv2_opt',
65 'logtostderr=true',
66 service_proto_path,
67 ],
68 check=True)
69
70 with open(_GENERATED_FILENAME) as f:
71 spec = json.load(f)
72
73 spec['host'] = 'api.osv.dev'
74 spec['info']['title'] = 'OSV'
75 spec['info']['version'] = '1.0'
76 spec['tags'] = [{
77 'name': 'api',
78 'x-displayName': 'API',
79 'description': 'The API has 3 methods:'
80 }, {
81 'name': 'vulnerability_schema',
82 'x-displayName': 'Vulnerability schema',
83 'description': 'Please see the [OpenSSF Open Source Vulnerability spec]'
84 '(https://ossf.github.io/osv-schema/).',
85 }]
86
87 spec['x-tagGroups'] = [{
88 'name': 'API',
89 'tags': ['api']
90 }, {
91 'name': 'Schema',
92 'tags': ['vulnerability_schema']
93 }]
94
95 spec['paths']['/v1/query']['post']['tags'] = ['api']
96 spec['paths']['/v1/querybatch']['post']['tags'] = ['api']
97 spec['paths']['/v1/vulns/{id}']['get']['tags'] = ['api']
98
99 spec['paths']['/v1/query']['post']['x-code-samples'] = [{
100 'lang':
101 'Curl example',
102 'source':
103 ('curl -X POST -d \\\n'
104 ' \'{"commit": "6879efc2c1596d11a6a6ad296f80063b558d5e0f"}\' \\\n'
105 ' "https://api.osv.dev/v1/query"\n\n'
106 'curl -X POST -d \\\n'
107 ' \'{"package": {"name": "mruby"}, "version": "2.1.2rc"}\' \\\n'
108 ' "https://api.osv.dev/v1/query"')
109 }]
110
111 spec['paths']['/v1/querybatch']['post']['x-code-samples'] = [{
112 'lang':
113 'Curl example',
114 'source':
115 ("""cat <<EOF | curl -X POST -d @- "https://api.osv.dev/v1/querybatch"
116 {
117 "queries": [
118 {
119 "package": {
120 "purl": "pkg:pypi/[email protected]"
121 }
122 },
123 {
124 "commit": "6879efc2c1596d11a6a6ad296f80063b558d5e0f"
125 },
126 {
127 "package": {
128 "ecosystem": "PyPI",
129 "name": "jinja2"
130 },
131 "version": "2.4.1"
132 }
133 ]
134 }
135 EOF""")
136 }]
137
138 spec['paths']['/v1/vulns/{id}']['get']['x-code-samples'] = [{
139 'lang': 'Curl example',
140 'source': 'curl "https://api.osv.dev/v1/vulns/OSV-2020-111"'
141 }]
142
143 property_description_workaround(spec['definitions']['v1Query'])
144 property_description_workaround(spec['definitions']['osvVulnerability'])
145
146 replace_property_name(spec['definitions']['osvVulnerability'],
147 'databaseSpecific', 'database_specific')
148
149 with open('sections.md') as f:
150 spec['info']['description'] = f.read()
151
152 with open(_GENERATED_FILENAME, 'w') as f:
153 f.write(json.dumps(spec, indent=2))
154
155 shutil.move(_GENERATED_FILENAME, os.path.basename(_GENERATED_FILENAME))
156
157
158 if __name__ == '__main__':
159 main()
160
[end of docs/build.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/build.py b/docs/build.py
--- a/docs/build.py
+++ b/docs/build.py
@@ -100,10 +100,10 @@
'lang':
'Curl example',
'source':
- ('curl -X POST -d \\\n'
+ ('curl -d \\\n'
' \'{"commit": "6879efc2c1596d11a6a6ad296f80063b558d5e0f"}\' \\\n'
' "https://api.osv.dev/v1/query"\n\n'
- 'curl -X POST -d \\\n'
+ 'curl -d \\\n'
' \'{"package": {"name": "mruby"}, "version": "2.1.2rc"}\' \\\n'
' "https://api.osv.dev/v1/query"')
}]
@@ -111,8 +111,7 @@
spec['paths']['/v1/querybatch']['post']['x-code-samples'] = [{
'lang':
'Curl example',
- 'source':
- ("""cat <<EOF | curl -X POST -d @- "https://api.osv.dev/v1/querybatch"
+ 'source': ("""cat <<EOF | curl -d @- "https://api.osv.dev/v1/querybatch"
{
"queries": [
{
|
{"golden_diff": "diff --git a/docs/build.py b/docs/build.py\n--- a/docs/build.py\n+++ b/docs/build.py\n@@ -100,10 +100,10 @@\n 'lang':\n 'Curl example',\n 'source':\n- ('curl -X POST -d \\\\\\n'\n+ ('curl -d \\\\\\n'\n ' \\'{\"commit\": \"6879efc2c1596d11a6a6ad296f80063b558d5e0f\"}\\' \\\\\\n'\n ' \"https://api.osv.dev/v1/query\"\\n\\n'\n- 'curl -X POST -d \\\\\\n'\n+ 'curl -d \\\\\\n'\n ' \\'{\"package\": {\"name\": \"mruby\"}, \"version\": \"2.1.2rc\"}\\' \\\\\\n'\n ' \"https://api.osv.dev/v1/query\"')\n }]\n@@ -111,8 +111,7 @@\n spec['paths']['/v1/querybatch']['post']['x-code-samples'] = [{\n 'lang':\n 'Curl example',\n- 'source':\n- (\"\"\"cat <<EOF | curl -X POST -d @- \"https://api.osv.dev/v1/querybatch\"\n+ 'source': (\"\"\"cat <<EOF | curl -d @- \"https://api.osv.dev/v1/querybatch\"\n {\n \"queries\": [\n {\n", "issue": "Don't suggest using -X POST\nOn https://osv.dev/#use-the-api is the instruction\r\n```\r\nQuery by commit hash\r\n\r\ncurl -X POST -d \\\r\n '{\"commit\": \"6879efc2c1596d11a6a6ad296f80063b558d5e0f\"}' \\\r\n \"https://api.osv.dev/v1/query\"\r\n```\r\n\r\nUsing `-X POST` here is unnecessary, redundant and potentially dangerous as people cut and paste this into more places. curl will actually tell you this if you add `-v` to this command:\r\n\r\n`Note: Unnecessary use of -X or --request, POST is already inferred.`\r\n\r\nSee also https://daniel.haxx.se/blog/2015/09/11/unnecessary-use-of-curl-x/\n", "before_files": [{"content": "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Documentation builder.\"\"\"\n\nimport json\nimport os\nimport shutil\nimport subprocess\n\n_ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n_GENERATED_FILENAME = 'v1/osv_service_v1.swagger.json'\n\n\ndef property_description_workaround(definition):\n \"\"\"Work around an OpenAPI limitation with a field descriptions getting\n replaced by the object descriptions.\"\"\"\n # Workaround described in https://github.com/Redocly/redoc/issues/835.\n for value in definition['properties'].values():\n if '$ref' in value:\n value['allOf'] = [{'$ref': value['$ref']}]\n del value['$ref']\n\n\ndef replace_property_name(definition, key, replacement):\n \"\"\"Replace property name.\"\"\"\n definition['properties'][replacement] = definition['properties'][key]\n del definition['properties'][key]\n\n\ndef main():\n api_dir = os.path.join(_ROOT_DIR, 'gcp', 'api')\n v1_api_dir = os.path.join(api_dir, 'v1')\n googleapis_dir = os.path.join(api_dir, 'googleapis')\n service_proto_path = os.path.join(v1_api_dir, 'osv_service_v1.proto')\n\n # Add OSV dependencies.\n osv_path = os.path.join(api_dir, 'osv')\n if os.path.exists(osv_path):\n shutil.rmtree(osv_path)\n\n shutil.copytree(os.path.join(_ROOT_DIR, 'osv'), osv_path)\n\n subprocess.run([\n 'protoc',\n '-I',\n api_dir,\n '-I',\n v1_api_dir,\n '-I',\n googleapis_dir,\n '--openapiv2_out',\n '.',\n '--openapiv2_opt',\n 'logtostderr=true',\n service_proto_path,\n ],\n check=True)\n\n with open(_GENERATED_FILENAME) 
as f:\n spec = json.load(f)\n\n spec['host'] = 'api.osv.dev'\n spec['info']['title'] = 'OSV'\n spec['info']['version'] = '1.0'\n spec['tags'] = [{\n 'name': 'api',\n 'x-displayName': 'API',\n 'description': 'The API has 3 methods:'\n }, {\n 'name': 'vulnerability_schema',\n 'x-displayName': 'Vulnerability schema',\n 'description': 'Please see the [OpenSSF Open Source Vulnerability spec]'\n '(https://ossf.github.io/osv-schema/).',\n }]\n\n spec['x-tagGroups'] = [{\n 'name': 'API',\n 'tags': ['api']\n }, {\n 'name': 'Schema',\n 'tags': ['vulnerability_schema']\n }]\n\n spec['paths']['/v1/query']['post']['tags'] = ['api']\n spec['paths']['/v1/querybatch']['post']['tags'] = ['api']\n spec['paths']['/v1/vulns/{id}']['get']['tags'] = ['api']\n\n spec['paths']['/v1/query']['post']['x-code-samples'] = [{\n 'lang':\n 'Curl example',\n 'source':\n ('curl -X POST -d \\\\\\n'\n ' \\'{\"commit\": \"6879efc2c1596d11a6a6ad296f80063b558d5e0f\"}\\' \\\\\\n'\n ' \"https://api.osv.dev/v1/query\"\\n\\n'\n 'curl -X POST -d \\\\\\n'\n ' \\'{\"package\": {\"name\": \"mruby\"}, \"version\": \"2.1.2rc\"}\\' \\\\\\n'\n ' \"https://api.osv.dev/v1/query\"')\n }]\n\n spec['paths']['/v1/querybatch']['post']['x-code-samples'] = [{\n 'lang':\n 'Curl example',\n 'source':\n (\"\"\"cat <<EOF | curl -X POST -d @- \"https://api.osv.dev/v1/querybatch\"\n{\n \"queries\": [\n {\n \"package\": {\n \"purl\": \"pkg:pypi/[email protected]\"\n }\n },\n {\n \"commit\": \"6879efc2c1596d11a6a6ad296f80063b558d5e0f\"\n },\n {\n \"package\": {\n \"ecosystem\": \"PyPI\",\n \"name\": \"jinja2\"\n },\n \"version\": \"2.4.1\"\n }\n ]\n}\nEOF\"\"\")\n }]\n\n spec['paths']['/v1/vulns/{id}']['get']['x-code-samples'] = [{\n 'lang': 'Curl example',\n 'source': 'curl \"https://api.osv.dev/v1/vulns/OSV-2020-111\"'\n }]\n\n property_description_workaround(spec['definitions']['v1Query'])\n property_description_workaround(spec['definitions']['osvVulnerability'])\n\n replace_property_name(spec['definitions']['osvVulnerability'],\n 'databaseSpecific', 'database_specific')\n\n with open('sections.md') as f:\n spec['info']['description'] = f.read()\n\n with open(_GENERATED_FILENAME, 'w') as f:\n f.write(json.dumps(spec, indent=2))\n\n shutil.move(_GENERATED_FILENAME, os.path.basename(_GENERATED_FILENAME))\n\n\nif __name__ == '__main__':\n main()\n", "path": "docs/build.py"}]}
| 2,440 | 329 |
gh_patches_debug_16275
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1256
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Accounts post API crashes with unset id (if basicauth is enabled too)
```
gsurita-30820:~ gsurita$ echo '{"data": {"password": "me"}}' | http post localhost:8888/v1/accounts -a foo:bar
HTTP/1.1 500 Internal Server Error
(...)
```
```
Traceback (most recent call last):
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/tweens.py", line 22, in excview_tween
response = handler(request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py", line 119, in tm_tween
reraise(*exc_info)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/compat.py", line 15, in reraise
raise value
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py", line 98, in tm_tween
response = handler(request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/router.py", line 155, in handle_request
view_name
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/view.py", line 612, in _call_view
response = view_callable(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/config/views.py", line 181, in __call__
return view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 389, in attr_view
return view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 367, in predicate_wrapper
return view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 300, in secured_view
return view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 438, in rendered_view
result = view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 147, in _requestonly_view
response = view(request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/cornice/service.py", line 491, in wrapper
response = view_()
File "/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py", line 81, in collection_post
result = super(Account, self).collection_post()
File "/Users/gsurita/kinto/kinto/kinto/core/resource/__init__.py", line 341, in collection_post
new_record = self.process_record(new_record)
File "/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py", line 102, in process_record
if new[self.model.id_field] != self.request.selected_userid:
KeyError: 'id'
```
</issue>
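A minimal sketch of the kind of startup-time guard that avoids this crash, assuming `config.get_settings()['multiauth.policies']` lists the configured policies in order (the helper name is illustrative):

```
from pyramid.exceptions import ConfigurationError


def check_account_policy_order(config):
    # Listing 'basicauth' before 'account' in multiauth.policies leads to the
    # crash reported above, so refuse to start with that ordering instead.
    auth_policies = config.get_settings()['multiauth.policies']
    if 'basicauth' in auth_policies and 'account' in auth_policies:
        if auth_policies.index('basicauth') < auth_policies.index('account'):
            raise ConfigurationError(
                "'basicauth' should not be mentioned before 'account' "
                "in 'multiauth.policies' setting.")
```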
<code>
[start of kinto/plugins/accounts/__init__.py]
1 from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
2
3
4 def includeme(config):
5 config.add_api_capability(
6 'accounts',
7 description='Manage user accounts.',
8 url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')
9
10 config.scan('kinto.plugins.accounts.views')
11
12 PERMISSIONS_INHERITANCE_TREE[''].update({
13 'account:create': {}
14 })
15 PERMISSIONS_INHERITANCE_TREE['account'] = {
16 'write': {'account': ['write']},
17 'read': {'account': ['write', 'read']}
18 }
19
[end of kinto/plugins/accounts/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py
--- a/kinto/plugins/accounts/__init__.py
+++ b/kinto/plugins/accounts/__init__.py
@@ -1,4 +1,5 @@
from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
+from pyramid.exceptions import ConfigurationError
def includeme(config):
@@ -16,3 +17,12 @@
'write': {'account': ['write']},
'read': {'account': ['write', 'read']}
}
+
+ # Add some safety to avoid weird behaviour with basicauth default policy.
+ settings = config.get_settings()
+ auth_policies = settings['multiauth.policies']
+ if 'basicauth' in auth_policies and 'account' in auth_policies:
+ if auth_policies.index('basicauth') < auth_policies.index('account'):
+ error_msg = ("'basicauth' should not be mentioned before 'account' "
+ "in 'multiauth.policies' setting.")
+ raise ConfigurationError(error_msg)
|
{"golden_diff": "diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py\n--- a/kinto/plugins/accounts/__init__.py\n+++ b/kinto/plugins/accounts/__init__.py\n@@ -1,4 +1,5 @@\n from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\n+from pyramid.exceptions import ConfigurationError\n \n \n def includeme(config):\n@@ -16,3 +17,12 @@\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n }\n+\n+ # Add some safety to avoid weird behaviour with basicauth default policy.\n+ settings = config.get_settings()\n+ auth_policies = settings['multiauth.policies']\n+ if 'basicauth' in auth_policies and 'account' in auth_policies:\n+ if auth_policies.index('basicauth') < auth_policies.index('account'):\n+ error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n+ \"in 'multiauth.policies' setting.\")\n+ raise ConfigurationError(error_msg)\n", "issue": "Accounts post API crashes with unset id (if basicauth is enabled too)\n```\r\ngsurita-30820:~ gsurita$ echo '{\"data\": {\"password\": \"me\"}}' | http post localhost:8888/v1/accounts -a foo:bar\r\nHTTP/1.1 500 Internal Server Error\r\n(...)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/tweens.py\", line 22, in excview_tween\r\n response = handler(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py\", line 119, in tm_tween\r\n reraise(*exc_info)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/compat.py\", line 15, in reraise\r\n raise value\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py\", line 98, in tm_tween\r\n response = handler(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/router.py\", line 155, in handle_request\r\n view_name\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/view.py\", line 612, in _call_view\r\n response = view_callable(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/config/views.py\", line 181, in __call__\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 389, in attr_view\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 367, in predicate_wrapper\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 300, in secured_view\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 438, in rendered_view\r\n result = view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 147, in _requestonly_view\r\n response = view(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/cornice/service.py\", line 491, in wrapper\r\n response = view_()\r\n File \"/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py\", line 81, in collection_post\r\n result = super(Account, self).collection_post()\r\n File \"/Users/gsurita/kinto/kinto/kinto/core/resource/__init__.py\", line 341, in collection_post\r\n new_record = self.process_record(new_record)\r\n File 
\"/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py\", line 102, in process_record\r\n if new[self.model.id_field] != self.request.selected_userid:\r\nKeyError: 'id'\r\n```\nAccounts post API crashes with unset id (if basicauth is enabled too)\n```\r\ngsurita-30820:~ gsurita$ echo '{\"data\": {\"password\": \"me\"}}' | http post localhost:8888/v1/accounts -a foo:bar\r\nHTTP/1.1 500 Internal Server Error\r\n(...)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/tweens.py\", line 22, in excview_tween\r\n response = handler(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py\", line 119, in tm_tween\r\n reraise(*exc_info)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/compat.py\", line 15, in reraise\r\n raise value\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py\", line 98, in tm_tween\r\n response = handler(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/router.py\", line 155, in handle_request\r\n view_name\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/view.py\", line 612, in _call_view\r\n response = view_callable(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/config/views.py\", line 181, in __call__\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 389, in attr_view\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 367, in predicate_wrapper\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 300, in secured_view\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 438, in rendered_view\r\n result = view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 147, in _requestonly_view\r\n response = view(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/cornice/service.py\", line 491, in wrapper\r\n response = view_()\r\n File \"/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py\", line 81, in collection_post\r\n result = super(Account, self).collection_post()\r\n File \"/Users/gsurita/kinto/kinto/kinto/core/resource/__init__.py\", line 341, in collection_post\r\n new_record = self.process_record(new_record)\r\n File \"/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py\", line 102, in process_record\r\n if new[self.model.id_field] != self.request.selected_userid:\r\nKeyError: 'id'\r\n```\n", "before_files": [{"content": "from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\n\n\ndef includeme(config):\n config.add_api_capability(\n 'accounts',\n description='Manage user accounts.',\n url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')\n\n config.scan('kinto.plugins.accounts.views')\n\n PERMISSIONS_INHERITANCE_TREE[''].update({\n 'account:create': {}\n })\n PERMISSIONS_INHERITANCE_TREE['account'] = {\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n }\n", "path": "kinto/plugins/accounts/__init__.py"}]}
| 2,289 | 241 |
gh_patches_debug_26382
|
rasdani/github-patches
|
git_diff
|
beetbox__beets-1675
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
EmbedArt Plugin: remove_art_file doesn't seem to work
I'm running beets version 1.15. The EmbedArt plugin isn't removing the art file from the file system.
Logfile: http://pastebin.com/n10bbdpS
Config: http://pastebin.com/ztrjd16C
</issue>
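For orientation, the behaviour the reporter expects from `remove_art_file` amounts to roughly the sketch below; it is a simplification based on the plugin code later in this record, not the actual code path beets executes:

```python
# Simplified sketch of what enabling remove_art_file is expected to trigger
# once art has been embedded (attribute names follow the beets Album model):
import os

def remove_art_file(album, log):
    if album.artpath and os.path.isfile(album.artpath):
        log.debug(u'Removing album art file for {0}', album)
        os.remove(album.artpath)
        album.artpath = None
        album.store()
```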
<code>
[start of beetsplug/embedart.py]
1 # This file is part of beets.
2 # Copyright 2015, Adrian Sampson.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Allows beets to embed album art into file metadata."""
16 from __future__ import (division, absolute_import, print_function,
17 unicode_literals)
18
19 import os.path
20
21 from beets.plugins import BeetsPlugin
22 from beets import ui
23 from beets.ui import decargs
24 from beets.util import syspath, normpath, displayable_path, bytestring_path
25 from beets.util.artresizer import ArtResizer
26 from beets import config
27 from beets import art
28
29
30 class EmbedCoverArtPlugin(BeetsPlugin):
31 """Allows albumart to be embedded into the actual files.
32 """
33 def __init__(self):
34 super(EmbedCoverArtPlugin, self).__init__()
35 self.config.add({
36 'maxwidth': 0,
37 'auto': True,
38 'compare_threshold': 0,
39 'ifempty': False,
40 'remove_art_file': False
41 })
42
43 if self.config['maxwidth'].get(int) and not ArtResizer.shared.local:
44 self.config['maxwidth'] = 0
45 self._log.warning(u"ImageMagick or PIL not found; "
46 u"'maxwidth' option ignored")
47 if self.config['compare_threshold'].get(int) and not \
48 ArtResizer.shared.can_compare:
49 self.config['compare_threshold'] = 0
50 self._log.warning(u"ImageMagick 6.8.7 or higher not installed; "
51 u"'compare_threshold' option ignored")
52
53 self.register_listener('art_set', self.process_album)
54
55 def commands(self):
56 # Embed command.
57 embed_cmd = ui.Subcommand(
58 'embedart', help='embed image files into file metadata'
59 )
60 embed_cmd.parser.add_option(
61 '-f', '--file', metavar='PATH', help='the image file to embed'
62 )
63 maxwidth = self.config['maxwidth'].get(int)
64 compare_threshold = self.config['compare_threshold'].get(int)
65 ifempty = self.config['ifempty'].get(bool)
66 remove_art_file = self.config['remove_art_file'].get(bool)
67
68 def embed_func(lib, opts, args):
69 if opts.file:
70 imagepath = normpath(opts.file)
71 if not os.path.isfile(syspath(imagepath)):
72 raise ui.UserError(u'image file {0} not found'.format(
73 displayable_path(imagepath)
74 ))
75 for item in lib.items(decargs(args)):
76 art.embed_item(self._log, item, imagepath, maxwidth, None,
77 compare_threshold, ifempty)
78 else:
79 for album in lib.albums(decargs(args)):
80 art.embed_album(self._log, album, maxwidth, False,
81 compare_threshold, ifempty)
82
83 if remove_art_file and album.artpath is not None:
84 if os.path.isfile(album.artpath):
85 self._log.debug(u'Removing album art file '
86 u'for {0}', album)
87 os.remove(album.artpath)
88 album.artpath = None
89 album.store()
90
91 embed_cmd.func = embed_func
92
93 # Extract command.
94 extract_cmd = ui.Subcommand('extractart',
95 help='extract an image from file metadata')
96 extract_cmd.parser.add_option('-o', dest='outpath',
97 help='image output file')
98 extract_cmd.parser.add_option('-n', dest='filename',
99 help='image filename to create for all '
100 'matched albums')
101 extract_cmd.parser.add_option('-a', dest='associate',
102 action='store_true',
103 help='associate the extracted images '
104 'with the album')
105
106 def extract_func(lib, opts, args):
107 if opts.outpath:
108 art.extract_first(self._log, normpath(opts.outpath),
109 lib.items(decargs(args)))
110 else:
111 filename = bytestring_path(opts.filename or
112 config['art_filename'].get())
113 if os.path.dirname(filename) != '':
114 self._log.error(u"Only specify a name rather than a path "
115 u"for -n")
116 return
117 for album in lib.albums(decargs(args)):
118 artpath = normpath(os.path.join(album.path, filename))
119 artpath = art.extract_first(self._log, artpath,
120 album.items())
121 if artpath and opts.associate:
122 album.set_art(artpath)
123 album.store()
124 extract_cmd.func = extract_func
125
126 # Clear command.
127 clear_cmd = ui.Subcommand('clearart',
128 help='remove images from file metadata')
129
130 def clear_func(lib, opts, args):
131 art.clear(self._log, lib, decargs(args))
132 clear_cmd.func = clear_func
133
134 return [embed_cmd, extract_cmd, clear_cmd]
135
136 def process_album(self, album):
137 """Automatically embed art after art has been set
138 """
139 if self.config['auto'] and config['import']['write']:
140 max_width = self.config['maxwidth'].get(int)
141 art.embed_album(self._log, album, max_width, True,
142 self.config['compare_threshold'].get(int),
143 self.config['ifempty'].get(bool))
144
[end of beetsplug/embedart.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/beetsplug/embedart.py b/beetsplug/embedart.py
--- a/beetsplug/embedart.py
+++ b/beetsplug/embedart.py
@@ -79,14 +79,7 @@
for album in lib.albums(decargs(args)):
art.embed_album(self._log, album, maxwidth, False,
compare_threshold, ifempty)
-
- if remove_art_file and album.artpath is not None:
- if os.path.isfile(album.artpath):
- self._log.debug(u'Removing album art file '
- u'for {0}', album)
- os.remove(album.artpath)
- album.artpath = None
- album.store()
+ self.remove_artfile(album)
embed_cmd.func = embed_func
@@ -141,3 +134,12 @@
art.embed_album(self._log, album, max_width, True,
self.config['compare_threshold'].get(int),
self.config['ifempty'].get(bool))
+ self.remove_artfile(album)
+ def remove_artfile(self, album):
+ if self.config['remove_art_file'] and album.artpath:
+ if os.path.isfile(album.artpath):
+ self._log.debug(u'Removing album art file '
+ u'for {0}', album)
+ os.remove(album.artpath)
+ album.artpath = None
+ album.store()
|
{"golden_diff": "diff --git a/beetsplug/embedart.py b/beetsplug/embedart.py\n--- a/beetsplug/embedart.py\n+++ b/beetsplug/embedart.py\n@@ -79,14 +79,7 @@\n for album in lib.albums(decargs(args)):\n art.embed_album(self._log, album, maxwidth, False,\n compare_threshold, ifempty)\n-\n- if remove_art_file and album.artpath is not None:\n- if os.path.isfile(album.artpath):\n- self._log.debug(u'Removing album art file '\n- u'for {0}', album)\n- os.remove(album.artpath)\n- album.artpath = None\n- album.store()\n+ self.remove_artfile(album)\n \n embed_cmd.func = embed_func\n \n@@ -141,3 +134,12 @@\n art.embed_album(self._log, album, max_width, True,\n self.config['compare_threshold'].get(int),\n self.config['ifempty'].get(bool))\n+ self.remove_artfile(album)\n+ def remove_artfile(self, album)\n+ if self.config['remove_art_file'] and album.artpath:\n+ if os.path.isfile(album.artpath):\n+ self._log.debug(u'Removing album art file '\n+ u'for {0}', album)\n+ os.remove(album.artpath)\n+ album.artpath = None\n+ album.store()\n", "issue": "EmbedArt Plugin: remove_art_file doesn't seem to work\nI'm running beets version 1.15. The EmbedArt plugin isn't removing the art file from the file system. \nLogfile: http://pastebin.com/n10bbdpS\nConfig: http://pastebin.com/ztrjd16C\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Allows beets to embed album art into file metadata.\"\"\"\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\nimport os.path\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import ui\nfrom beets.ui import decargs\nfrom beets.util import syspath, normpath, displayable_path, bytestring_path\nfrom beets.util.artresizer import ArtResizer\nfrom beets import config\nfrom beets import art\n\n\nclass EmbedCoverArtPlugin(BeetsPlugin):\n \"\"\"Allows albumart to be embedded into the actual files.\n \"\"\"\n def __init__(self):\n super(EmbedCoverArtPlugin, self).__init__()\n self.config.add({\n 'maxwidth': 0,\n 'auto': True,\n 'compare_threshold': 0,\n 'ifempty': False,\n 'remove_art_file': False\n })\n\n if self.config['maxwidth'].get(int) and not ArtResizer.shared.local:\n self.config['maxwidth'] = 0\n self._log.warning(u\"ImageMagick or PIL not found; \"\n u\"'maxwidth' option ignored\")\n if self.config['compare_threshold'].get(int) and not \\\n ArtResizer.shared.can_compare:\n self.config['compare_threshold'] = 0\n self._log.warning(u\"ImageMagick 6.8.7 or higher not installed; \"\n u\"'compare_threshold' option ignored\")\n\n self.register_listener('art_set', self.process_album)\n\n def commands(self):\n # Embed command.\n embed_cmd = ui.Subcommand(\n 'embedart', help='embed image files into file metadata'\n )\n embed_cmd.parser.add_option(\n '-f', '--file', metavar='PATH', help='the image file to embed'\n )\n maxwidth = self.config['maxwidth'].get(int)\n compare_threshold = 
self.config['compare_threshold'].get(int)\n ifempty = self.config['ifempty'].get(bool)\n remove_art_file = self.config['remove_art_file'].get(bool)\n\n def embed_func(lib, opts, args):\n if opts.file:\n imagepath = normpath(opts.file)\n if not os.path.isfile(syspath(imagepath)):\n raise ui.UserError(u'image file {0} not found'.format(\n displayable_path(imagepath)\n ))\n for item in lib.items(decargs(args)):\n art.embed_item(self._log, item, imagepath, maxwidth, None,\n compare_threshold, ifempty)\n else:\n for album in lib.albums(decargs(args)):\n art.embed_album(self._log, album, maxwidth, False,\n compare_threshold, ifempty)\n\n if remove_art_file and album.artpath is not None:\n if os.path.isfile(album.artpath):\n self._log.debug(u'Removing album art file '\n u'for {0}', album)\n os.remove(album.artpath)\n album.artpath = None\n album.store()\n\n embed_cmd.func = embed_func\n\n # Extract command.\n extract_cmd = ui.Subcommand('extractart',\n help='extract an image from file metadata')\n extract_cmd.parser.add_option('-o', dest='outpath',\n help='image output file')\n extract_cmd.parser.add_option('-n', dest='filename',\n help='image filename to create for all '\n 'matched albums')\n extract_cmd.parser.add_option('-a', dest='associate',\n action='store_true',\n help='associate the extracted images '\n 'with the album')\n\n def extract_func(lib, opts, args):\n if opts.outpath:\n art.extract_first(self._log, normpath(opts.outpath),\n lib.items(decargs(args)))\n else:\n filename = bytestring_path(opts.filename or\n config['art_filename'].get())\n if os.path.dirname(filename) != '':\n self._log.error(u\"Only specify a name rather than a path \"\n u\"for -n\")\n return\n for album in lib.albums(decargs(args)):\n artpath = normpath(os.path.join(album.path, filename))\n artpath = art.extract_first(self._log, artpath,\n album.items())\n if artpath and opts.associate:\n album.set_art(artpath)\n album.store()\n extract_cmd.func = extract_func\n\n # Clear command.\n clear_cmd = ui.Subcommand('clearart',\n help='remove images from file metadata')\n\n def clear_func(lib, opts, args):\n art.clear(self._log, lib, decargs(args))\n clear_cmd.func = clear_func\n\n return [embed_cmd, extract_cmd, clear_cmd]\n\n def process_album(self, album):\n \"\"\"Automatically embed art after art has been set\n \"\"\"\n if self.config['auto'] and config['import']['write']:\n max_width = self.config['maxwidth'].get(int)\n art.embed_album(self._log, album, max_width, True,\n self.config['compare_threshold'].get(int),\n self.config['ifempty'].get(bool))\n", "path": "beetsplug/embedart.py"}]}
| 2,155 | 305 |
gh_patches_debug_13392
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-4311
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Import-check may return an error saying "directory does not exist" when it actually just lacks permissions
Endpoint /pulp/api/v3/importers/core/pulp/import-check/ returns an error saying "Directory does not exist" when the pulp user lacks permissions to read said directory.
**To Reproduce**
Try importing content from a directory where the pulp user doesn't have read access.
**Expected behavior**
The error returned should indicate the permission problem.
**Additional context**
Pulp uses the os.path.exists() method to verify that the directory exists: https://github.com/pulp/pulpcore/blob/main/pulpcore/app/views/importer.py#L44-L45
However, the method can return False if permission is not granted to access the directory, even if the directory exists:
~~~
os.path.exists(path)
Return True if path refers to an existing path or an open file descriptor. Returns False for broken symbolic links. On some platforms, this function may return False if permission is not granted to execute os.stat() on the requested file, even if the path physically exists.
~~~
os.path method documentation -> https://docs.python.org/3/library/os.path.html
</issue>
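The difference described above is easy to demonstrate in isolation; the directory path in this sketch is made up, and the exact exception depends on the filesystem permissions involved:

```python
# Sketch: os.path.exists() silently maps a permission problem to False,
# while os.stat() raises an OSError that names the real cause.
import os

path = "/srv/exports/unreadable"   # hypothetical dir the pulp user cannot stat

print(os.path.exists(path))        # False -> reported as "does not exist"

try:
    os.stat(path)                  # PermissionError / FileNotFoundError, etc.
except OSError as exc:
    print(exc)                     # e.g. "[Errno 13] Permission denied: ..."
```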
<code>
[start of pulpcore/app/views/importer.py]
1 from gettext import gettext as _
2 import json
3 import os
4 from drf_spectacular.utils import extend_schema
5 from rest_framework.views import APIView
6 from rest_framework.response import Response
7
8 from pulpcore.app import settings
9 from pulpcore.app.serializers import PulpImportCheckResponseSerializer, PulpImportCheckSerializer
10
11
12 def _check_allowed_import_path(a_path):
13 user_provided_realpath = os.path.realpath(a_path)
14 for allowed_path in settings.ALLOWED_IMPORT_PATHS:
15 if user_provided_realpath.startswith(allowed_path):
16 return True, None
17 return False, _(
18 "{} is not an allowed import path".format(os.path.dirname(os.path.realpath(a_path)))
19 )
20
21
22 def _validate_file(in_param, data):
23 """
24 Returns a (is-valid, msgs[]) tuple describing all problems found with data[in_param]
25
26 We check for a number of things, attempting to return all the errors we can find. We don't want
27 to give out information for files in arbitrary locations on the filesystem; if the check
28 for ALLOWED_IMPORT_PATHS fails, we report that and ignore any other problems.
29
30 If the directory containing the base-file doesn't exist, or isn't readable, or the specified
31 file doesn't exist, report and return.
32
33 Error-messages for all other checks are additive.
34 """
35 # check allowed, leave if failed
36 file = data[in_param]
37 real_file = os.path.realpath(file)
38 rc, msg = _check_allowed_import_path(real_file)
39 if not rc:
40 return rc, [msg]
41
42 # check directory-sanity, leave if failed
43 owning_dir = os.path.dirname(real_file)
44 if not os.path.exists(owning_dir):
45 return False, [_("directory {} does not exist").format(owning_dir)]
46 if not os.access(owning_dir, os.R_OK):
47 return False, [_("directory {} does not allow read-access").format(owning_dir)]
48
49 # check file-exists, leave if failed
50 if not os.path.exists(real_file):
51 return False, [_("file {} does not exist").format(real_file)]
52
53 # check file-sanity
54 msgs = []
55 isfile = os.path.isfile(real_file)
56 readable = os.access(real_file, os.R_OK)
57
58 rc = isfile and readable
59 if not isfile:
60 msgs.append(_("{} is not a file".format(real_file)))
61 if not readable:
62 msgs.append(_("{} exists but cannot be read".format(real_file)))
63
64 # extra check for toc-dir-write
65 if in_param == "toc":
66 if not os.access(owning_dir, os.W_OK):
67 rc = False
68 msgs.append(_("directory {} must allow pulp write-access".format(owning_dir)))
69
70 return rc, msgs
71
72
73 class PulpImporterImportCheckView(APIView):
74 """
75 Returns validity of proposed parameters for a PulpImport call.
76 """
77
78 @extend_schema(
79 summary="Validate the parameters to be used for a PulpImport call",
80 operation_id="pulp_import_check_post",
81 request=PulpImportCheckSerializer,
82 responses={200: PulpImportCheckResponseSerializer},
83 )
84 def post(self, request, format=None):
85 """
86 Evaluates validity of proposed PulpImport parameters 'toc', 'path', and 'repo_mapping'.
87
88 * Checks that toc, path are in ALLOWED_IMPORT_PATHS
89 * if ALLOWED:
90 * Checks that toc, path exist and are readable
91 * If toc specified, checks that containing dir is writeable
92 * Checks that repo_mapping is valid JSON
93 """
94 serializer = PulpImportCheckSerializer(data=request.data)
95 if serializer.is_valid():
96 data = {}
97 if "toc" in serializer.data:
98 data["toc"] = {}
99 data["toc"]["context"] = serializer.data["toc"]
100 data["toc"]["is_valid"], data["toc"]["messages"] = _validate_file(
101 "toc", serializer.data
102 )
103
104 if "path" in serializer.data:
105 data["path"] = {}
106 data["path"]["context"] = serializer.data["path"]
107 data["path"]["is_valid"], data["path"]["messages"] = _validate_file(
108 "path", serializer.data
109 )
110
111 if "repo_mapping" in serializer.data:
112 data["repo_mapping"] = {}
113 data["repo_mapping"]["context"] = serializer.data["repo_mapping"]
114 try:
115 json.loads(serializer.data["repo_mapping"])
116 data["repo_mapping"]["is_valid"] = True
117 data["repo_mapping"]["messages"] = []
118 except json.JSONDecodeError:
119 data["repo_mapping"]["is_valid"] = False
120 data["repo_mapping"]["messages"] = [_("invalid JSON")]
121
122 crs = PulpImportCheckResponseSerializer(data, context={"request": request})
123 return Response(crs.data)
124 return Response(serializer.errors, status=400)
125
[end of pulpcore/app/views/importer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pulpcore/app/views/importer.py b/pulpcore/app/views/importer.py
--- a/pulpcore/app/views/importer.py
+++ b/pulpcore/app/views/importer.py
@@ -40,11 +40,14 @@
return rc, [msg]
# check directory-sanity, leave if failed
+ # use os.stat to ensure directory exists and pulp has read-access
+ # return any errors received from os.stat to the user
+
owning_dir = os.path.dirname(real_file)
- if not os.path.exists(owning_dir):
- return False, [_("directory {} does not exist").format(owning_dir)]
- if not os.access(owning_dir, os.R_OK):
- return False, [_("directory {} does not allow read-access").format(owning_dir)]
+ try:
+ os.stat(owning_dir)
+ except OSError as e:
+ return False, [_("{}").format(e)]
# check file-exists, leave if failed
if not os.path.exists(real_file):
|
{"golden_diff": "diff --git a/pulpcore/app/views/importer.py b/pulpcore/app/views/importer.py\n--- a/pulpcore/app/views/importer.py\n+++ b/pulpcore/app/views/importer.py\n@@ -40,11 +40,14 @@\n return rc, [msg]\n \n # check directory-sanity, leave if failed\n+ # use os.stat to ensure directory exists and pulp has read-access\n+ # return any errors received from os.stat to the user\n+\n owning_dir = os.path.dirname(real_file)\n- if not os.path.exists(owning_dir):\n- return False, [_(\"directory {} does not exist\").format(owning_dir)]\n- if not os.access(owning_dir, os.R_OK):\n- return False, [_(\"directory {} does not allow read-access\").format(owning_dir)]\n+ try:\n+ os.stat(owning_dir)\n+ except OSError as e:\n+ return False, [_(\"{}\").format(e)]\n \n # check file-exists, leave if failed\n if not os.path.exists(real_file):\n", "issue": "Import-check may return error saying \"director does not exist\" when it actually just lack permissions\nEndpoint /pulp/api/v3/importers/core/pulp/import-check/ returns error saying \"Directory does not exist\" when pulp user lack permissions to read said directory.\r\n\r\n**To Reproduce**\r\n\r\nTry importing content from a directory where pulp user doesn't have read access.\r\n\r\n**Expected behavior**\r\nError returned should indicate the permission error.\r\n\r\n**Additional context**\r\n\r\nPulp is using os.path.exists() method to verify if the directory exists: https://github.com/pulp/pulpcore/blob/main/pulpcore/app/views/importer.py#L44-L45\r\n\r\nHowever, the method can return false if permission is not granted to access the directory even if the directory exists\r\n\r\n~~~\r\nos.path.exists(path)\r\nReturn True if path refers to an existing path or an open file descriptor. Returns False for broken symbolic links. On some platforms, this function may return False if permission is not granted to execute os.stat() on the requested file, even if the path physically exists.\r\n~~~\r\n\r\nos.path method documentation -> https://docs.python.org/3/library/os.path.html\r\n\n", "before_files": [{"content": "from gettext import gettext as _\nimport json\nimport os\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework.views import APIView\nfrom rest_framework.response import Response\n\nfrom pulpcore.app import settings\nfrom pulpcore.app.serializers import PulpImportCheckResponseSerializer, PulpImportCheckSerializer\n\n\ndef _check_allowed_import_path(a_path):\n user_provided_realpath = os.path.realpath(a_path)\n for allowed_path in settings.ALLOWED_IMPORT_PATHS:\n if user_provided_realpath.startswith(allowed_path):\n return True, None\n return False, _(\n \"{} is not an allowed import path\".format(os.path.dirname(os.path.realpath(a_path)))\n )\n\n\ndef _validate_file(in_param, data):\n \"\"\"\n Returns a (is-valid, msgs[]) tuple describing all problems found with data[in_param]\n\n We check for a number of things, attempting to return all the errors we can find. 
We don't want\n to give out information for files in arbitrary locations on the filesystem; if the check\n for ALLOWED_IMPORT_PATHS fails, we report that and ignore any other problems.\n\n If the directory containing the base-file doesn't exist, or isn't readable, or the specified\n file doesn't exist, report and return.\n\n Error-messages for all other checks are additive.\n \"\"\"\n # check allowed, leave if failed\n file = data[in_param]\n real_file = os.path.realpath(file)\n rc, msg = _check_allowed_import_path(real_file)\n if not rc:\n return rc, [msg]\n\n # check directory-sanity, leave if failed\n owning_dir = os.path.dirname(real_file)\n if not os.path.exists(owning_dir):\n return False, [_(\"directory {} does not exist\").format(owning_dir)]\n if not os.access(owning_dir, os.R_OK):\n return False, [_(\"directory {} does not allow read-access\").format(owning_dir)]\n\n # check file-exists, leave if failed\n if not os.path.exists(real_file):\n return False, [_(\"file {} does not exist\").format(real_file)]\n\n # check file-sanity\n msgs = []\n isfile = os.path.isfile(real_file)\n readable = os.access(real_file, os.R_OK)\n\n rc = isfile and readable\n if not isfile:\n msgs.append(_(\"{} is not a file\".format(real_file)))\n if not readable:\n msgs.append(_(\"{} exists but cannot be read\".format(real_file)))\n\n # extra check for toc-dir-write\n if in_param == \"toc\":\n if not os.access(owning_dir, os.W_OK):\n rc = False\n msgs.append(_(\"directory {} must allow pulp write-access\".format(owning_dir)))\n\n return rc, msgs\n\n\nclass PulpImporterImportCheckView(APIView):\n \"\"\"\n Returns validity of proposed parameters for a PulpImport call.\n \"\"\"\n\n @extend_schema(\n summary=\"Validate the parameters to be used for a PulpImport call\",\n operation_id=\"pulp_import_check_post\",\n request=PulpImportCheckSerializer,\n responses={200: PulpImportCheckResponseSerializer},\n )\n def post(self, request, format=None):\n \"\"\"\n Evaluates validity of proposed PulpImport parameters 'toc', 'path', and 'repo_mapping'.\n\n * Checks that toc, path are in ALLOWED_IMPORT_PATHS\n * if ALLOWED:\n * Checks that toc, path exist and are readable\n * If toc specified, checks that containing dir is writeable\n * Checks that repo_mapping is valid JSON\n \"\"\"\n serializer = PulpImportCheckSerializer(data=request.data)\n if serializer.is_valid():\n data = {}\n if \"toc\" in serializer.data:\n data[\"toc\"] = {}\n data[\"toc\"][\"context\"] = serializer.data[\"toc\"]\n data[\"toc\"][\"is_valid\"], data[\"toc\"][\"messages\"] = _validate_file(\n \"toc\", serializer.data\n )\n\n if \"path\" in serializer.data:\n data[\"path\"] = {}\n data[\"path\"][\"context\"] = serializer.data[\"path\"]\n data[\"path\"][\"is_valid\"], data[\"path\"][\"messages\"] = _validate_file(\n \"path\", serializer.data\n )\n\n if \"repo_mapping\" in serializer.data:\n data[\"repo_mapping\"] = {}\n data[\"repo_mapping\"][\"context\"] = serializer.data[\"repo_mapping\"]\n try:\n json.loads(serializer.data[\"repo_mapping\"])\n data[\"repo_mapping\"][\"is_valid\"] = True\n data[\"repo_mapping\"][\"messages\"] = []\n except json.JSONDecodeError:\n data[\"repo_mapping\"][\"is_valid\"] = False\n data[\"repo_mapping\"][\"messages\"] = [_(\"invalid JSON\")]\n\n crs = PulpImportCheckResponseSerializer(data, context={\"request\": request})\n return Response(crs.data)\n return Response(serializer.errors, status=400)\n", "path": "pulpcore/app/views/importer.py"}]}
| 2,101 | 237 |
gh_patches_debug_14994
|
rasdani/github-patches
|
git_diff
|
rootpy__rootpy-773
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception on import when not forwarding X11
Dear developers,
I believe I'm experiencing a bug when trying to use rootpy over SSH. Simply importing
```Python
from rootpy.plotting import Hist
```
results in an exception:
```Python
WARNING:ROOT.TUnixSystem.SetDisplay] DISPLAY not set, setting it to :pts/0:S.8
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/__init__.py", line 12, in <module>
from .legend import Legend
File "/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py", line 318, in _importhook
return _orig_ihook( name, *args, **kwds )
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/legend.py", line 8, in <module>
from .box import _Positionable
File "/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py", line 318, in _importhook
return _orig_ihook( name, *args, **kwds )
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/box.py", line 5, in <module>
from .utils import canvases_with
File "/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py", line 318, in _importhook
return _orig_ihook( name, *args, **kwds )
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/utils.py", line 7, in <module>
from .canvas import _PadBase
File "/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py", line 318, in _importhook
return _orig_ihook( name, *args, **kwds )
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/canvas.py", line 186, in <module>
class Pad(_PadBase, QROOT.TPad):
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/utils/module_facade.py", line 84, in __getattr__
result = sup.__getattr__(key)
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/utils/quickroot.py", line 71, in __getattr__
libname, symbol))
RuntimeError: Unable to load libGui (required by TPad)
```
The problem does not occur if I connect with `ssh -Y`, but I would expect rootpy to be usable without a GUI as well.
I'm using rootpy 1.0.0 installed with pip, Python 3.5.3, ROOT 6.10.04 with Scientific Linux 6.5.
</issue>
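A common workaround in headless sessions, and the same direction the eventual fix takes, is to put ROOT into batch mode before the plotting modules are imported; this is a user-side sketch, not the rootpy patch itself:

```python
# Sketch: force ROOT into batch mode when no X11 display is available,
# so that importing rootpy.plotting does not need libGui/X11.
import os
import ROOT

if not os.environ.get("DISPLAY"):
    ROOT.gROOT.SetBatch(True)

from rootpy.plotting import Hist  # expected to work without X forwarding
```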
<code>
[start of rootpy/utils/quickroot.py]
1 """
2 Quickly load ROOT symbols without triggering PyROOT's finalSetup().
3 The main principle is that appropriate dictionaries first need to be loaded.
4 """
5 from __future__ import absolute_import
6
7 import ROOT
8
9 from .. import log; log = log[__name__]
10 from .module_facade import Facade
11
12 __all__ = []
13
14
15 root_module = ROOT.module._root
16 if hasattr(root_module, 'LookupCppEntity'): # pragma: no cover
17 lookup_func = 'LookupCppEntity'
18 else: # pragma: no cover
19 lookup_func = 'LookupRootEntity'
20
21 # Quick's __name__ needs to be the ROOT module for this to be transparent.
22 # The below is one way of obtaining such a function
23 # First determine the ROOT version without triggering PyROOT's finalSetup()
24 Quick = eval('lambda symbol: module._root.{0}(symbol)'.format(lookup_func),
25 ROOT.__dict__)
26
27 _gSystem = Quick("gSystem")
28 Load = _gSystem.Load
29
30 # It is not vital to list _all_ symbols in here, just enough that a library
31 # will be loaded by the time it is needed.
32 SYMBOLS = dict(
33 Hist='TH1 TGraph TGraphAsymmErrors',
34 Tree='TCut TTree',
35 Gui='TPad TCanvas',
36 Graf='TLegend TLine TEllipse',
37 Physics='TVector2 TVector3 TLorentzVector TRotation TLorentzRotation',
38 Matrix='TMatrixT',
39 RooStats='RooStats RooMsgService',
40 RooFit='RooFit RooWorkspace',
41 )
42
43 # Mapping of symbols to libraries which need to be loaded
44 SYMBOLS_TO_LIB = dict(
45 (sym, lib) for lib, syms in SYMBOLS.items() for sym in syms.split())
46
47 # If you encounter problems with particular symbols, add them to this set.
48 SLOW = set("".split())
49
50
51 @Facade(__name__, expose_internal=False)
52 class QuickROOT(object):
53 def __getattr__(self, symbol):
54 if symbol in SLOW: # pragma: no cover
55 log.warning(
56 "Tried to quickly load {0} which is always slow".format(symbol))
57
58 lib = SYMBOLS_TO_LIB.get(symbol, None)
59 if lib:
60 # Load() doesn't cost anything if the library is already loaded
61 libname = "lib{0}".format(lib)
62 if libname not in _gSystem.GetLibraries():
63 regex = "^duplicate entry .* for level 0; ignored$"
64 with log["/ROOT.TEnvRec.ChangeValue"].ignore(regex):
65 if Load(libname) == 0:
66 log.debug("Loaded {0} (required by {1})".format(
67 libname, symbol))
68 else: # pragma: no cover
69 raise RuntimeError(
70 "Unable to load {0} (required by {1})".format(
71 libname, symbol))
72
73 try:
74 thing = Quick(symbol)
75 except NameError: # pragma: no cover
76 # NameError: global name 'module' is not defined
77 # Python must be exiting...
78 return None
79 if isinstance(thing, root_module.PropertyProxy): # descriptor
80 setattr(self.__class__, symbol, thing)
81 return getattr(self, symbol)
82 # normal member
83 return thing
84
[end of rootpy/utils/quickroot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rootpy/utils/quickroot.py b/rootpy/utils/quickroot.py
--- a/rootpy/utils/quickroot.py
+++ b/rootpy/utils/quickroot.py
@@ -65,6 +65,12 @@
if Load(libname) == 0:
log.debug("Loaded {0} (required by {1})".format(
libname, symbol))
+ elif lib == 'Gui':
+ # Possibly no X11 forwarding
+ log.debug("Unable to load {0} (required by {1}). "
+ "Putting ROOT in batch mode.".format(
+ libname, symbol))
+ ROOT.gROOT.SetBatch(True)
else: # pragma: no cover
raise RuntimeError(
"Unable to load {0} (required by {1})".format(
|
{"golden_diff": "diff --git a/rootpy/utils/quickroot.py b/rootpy/utils/quickroot.py\n--- a/rootpy/utils/quickroot.py\n+++ b/rootpy/utils/quickroot.py\n@@ -65,6 +65,12 @@\n if Load(libname) == 0:\n log.debug(\"Loaded {0} (required by {1})\".format(\n libname, symbol))\n+ elif lib == 'Gui':\n+ # Possibly no X11 forwarding\n+ log.debug(\"Unable to load {0} (required by {1}). \"\n+ \"Putting ROOT in batch mode.\".format(\n+ libname, symbol))\n+ ROOT.gROOT.SetBatch(True)\n else: # pragma: no cover\n raise RuntimeError(\n \"Unable to load {0} (required by {1})\".format(\n", "issue": "Exception on import when not forwarding X11\nDear developers,\r\n\r\nI believe I'm experiencing a bug when trying to use rootpy over SSH. Simply importing\r\n```Python\r\nfrom rootpy.plotting import Hist\r\n```\r\nresults in an exception:\r\n```Python\r\nWARNING:ROOT.TUnixSystem.SetDisplay] DISPLAY not set, setting it to :pts/0:S.8\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/__init__.py\", line 12, in <module>\r\n from .legend import Legend\r\n File \"/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py\", line 318, in _importhook\r\n return _orig_ihook( name, *args, **kwds )\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/legend.py\", line 8, in <module>\r\n from .box import _Positionable\r\n File \"/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py\", line 318, in _importhook\r\n return _orig_ihook( name, *args, **kwds )\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/box.py\", line 5, in <module>\r\n from .utils import canvases_with\r\n File \"/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py\", line 318, in _importhook\r\n return _orig_ihook( name, *args, **kwds )\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/utils.py\", line 7, in <module>\r\n from .canvas import _PadBase\r\n File \"/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py\", line 318, in _importhook\r\n return _orig_ihook( name, *args, **kwds )\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/canvas.py\", line 186, in <module>\r\n class Pad(_PadBase, QROOT.TPad):\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/utils/module_facade.py\", line 84, in __getattr__\r\n result = sup.__getattr__(key)\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/utils/quickroot.py\", line 71, in __getattr__\r\n libname, symbol))\r\nRuntimeError: Unable to load libGui (required by TPad)\r\n```\r\nThe problem does not occur if I connect with `ssh -Y`, but I would expect rootpy be usable also without GUI.\r\n\r\nI'm using rootpy 1.0.0 installed with pip, Python 3.5.3, ROOT 6.10.04 with Scientific Linux 6.5.\n", "before_files": [{"content": "\"\"\"\nQuickly load ROOT symbols without triggering PyROOT's finalSetup().\nThe main principle is that appropriate dictionaries first need to be loaded.\n\"\"\"\nfrom __future__ import absolute_import\n\nimport ROOT\n\nfrom .. 
import log; log = log[__name__]\nfrom .module_facade import Facade\n\n__all__ = []\n\n\nroot_module = ROOT.module._root\nif hasattr(root_module, 'LookupCppEntity'): # pragma: no cover\n lookup_func = 'LookupCppEntity'\nelse: # pragma: no cover\n lookup_func = 'LookupRootEntity'\n\n# Quick's __name__ needs to be the ROOT module for this to be transparent.\n# The below is one way of obtaining such a function\n# First determine the ROOT version without triggering PyROOT's finalSetup()\nQuick = eval('lambda symbol: module._root.{0}(symbol)'.format(lookup_func),\n ROOT.__dict__)\n\n_gSystem = Quick(\"gSystem\")\nLoad = _gSystem.Load\n\n# It is not vital to list _all_ symbols in here, just enough that a library\n# will be loaded by the time it is needed.\nSYMBOLS = dict(\n Hist='TH1 TGraph TGraphAsymmErrors',\n Tree='TCut TTree',\n Gui='TPad TCanvas',\n Graf='TLegend TLine TEllipse',\n Physics='TVector2 TVector3 TLorentzVector TRotation TLorentzRotation',\n Matrix='TMatrixT',\n RooStats='RooStats RooMsgService',\n RooFit='RooFit RooWorkspace',\n)\n\n# Mapping of symbols to libraries which need to be loaded\nSYMBOLS_TO_LIB = dict(\n (sym, lib) for lib, syms in SYMBOLS.items() for sym in syms.split())\n\n# If you encounter problems with particular symbols, add them to this set.\nSLOW = set(\"\".split())\n\n\n@Facade(__name__, expose_internal=False)\nclass QuickROOT(object):\n def __getattr__(self, symbol):\n if symbol in SLOW: # pragma: no cover\n log.warning(\n \"Tried to quickly load {0} which is always slow\".format(symbol))\n\n lib = SYMBOLS_TO_LIB.get(symbol, None)\n if lib:\n # Load() doesn't cost anything if the library is already loaded\n libname = \"lib{0}\".format(lib)\n if libname not in _gSystem.GetLibraries():\n regex = \"^duplicate entry .* for level 0; ignored$\"\n with log[\"/ROOT.TEnvRec.ChangeValue\"].ignore(regex):\n if Load(libname) == 0:\n log.debug(\"Loaded {0} (required by {1})\".format(\n libname, symbol))\n else: # pragma: no cover\n raise RuntimeError(\n \"Unable to load {0} (required by {1})\".format(\n libname, symbol))\n\n try:\n thing = Quick(symbol)\n except NameError: # pragma: no cover\n # NameError: global name 'module' is not defined\n # Python must be exiting...\n return None\n if isinstance(thing, root_module.PropertyProxy): # descriptor\n setattr(self.__class__, symbol, thing)\n return getattr(self, symbol)\n # normal member\n return thing\n", "path": "rootpy/utils/quickroot.py"}]}
| 2,079 | 179 |
gh_patches_debug_37733
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1520
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pydecimal unnecessarily limited by float's max digits
* Faker version: master at time of writing https://github.com/joke2k/faker/commit/d9f4b00b9134e6dfbb09cc1caa81c912b79c3c7c
* OS: Linux
Python's `Decimal` can have arbitrarily many digits, with a default precision of 28. Faker's `pydecimal` uses `pyfloat`, and so it is limited to `sys.float_info.dig`, which is appropriate for `pyfloat` but not really relevant for `pydecimal`. (The Decimal context could even be smaller than that.)
### Steps to reproduce
1. `pydecimal(left_digits=16)`
### Expected behavior
Get a 16 digit Decimal
### Actual behavior
> ValueError: Asking for too many digits (16 + 0 == 16 > 15)
</issue>
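The mismatch the report describes, float's digit limit versus Decimal's precision, can be checked directly with the standard library; this is an independent illustration, not Faker code:

```python
# Sketch: Decimal is not bound by sys.float_info.dig (typically 15 digits),
# so a 16-digit integer part is perfectly representable as a Decimal.
import sys
from decimal import Decimal, getcontext

print(sys.float_info.dig)                 # usually 15 -> the limit pyfloat respects
print(getcontext().prec)                  # 28 by default for Decimal arithmetic
print(Decimal("1234567890123456.78"))     # 16 integer digits, no precision loss
```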
<code>
[start of faker/providers/python/__init__.py]
1 import math
2 import string
3 import sys
4 import warnings
5
6 from decimal import Decimal
7
8 from .. import BaseProvider
9
10
11 class Provider(BaseProvider):
12 default_value_types = (
13 'str', 'str', 'str', 'str', 'float', 'int', 'int', 'decimal',
14 'date_time', 'uri', 'email',
15 )
16
17 def _check_signature(self, value_types, allowed_types):
18 if value_types is not None and not isinstance(value_types, (list, tuple)):
19 value_types = [value_types]
20 warnings.warn(
21 'Passing value types as positional arguments is going to be '
22 'deprecated. Pass them as a list or tuple instead.',
23 PendingDeprecationWarning,
24 )
25 if value_types is None:
26 value_types = ()
27 return tuple(value_types) + allowed_types
28
29 def pybool(self):
30 return self.random_int(0, 1) == 1
31
32 def pystr(self, min_chars=None, max_chars=20):
33 """
34 Generates a random string of upper and lowercase letters.
35 :type min_chars: int
36 :type max_chars: int
37 :return: String. Random of random length between min and max characters.
38 """
39 if min_chars is None:
40 return "".join(self.random_letters(length=max_chars))
41 else:
42 assert (
43 max_chars >= min_chars), "Maximum length must be greater than or equal to minimum length"
44 return "".join(
45 self.random_letters(
46 length=self.generator.random.randint(min_chars, max_chars),
47 ),
48 )
49
50 def pystr_format(self, string_format='?#-###{{random_int}}{{random_letter}}', letters=string.ascii_letters):
51 return self.bothify(self.generator.parse(string_format), letters=letters)
52
53 def pyfloat(self, left_digits=None, right_digits=None, positive=False,
54 min_value=None, max_value=None):
55 if left_digits is not None and left_digits < 0:
56 raise ValueError(
57 'A float number cannot have less than 0 digits in its '
58 'integer part')
59 if right_digits is not None and right_digits < 0:
60 raise ValueError(
61 'A float number cannot have less than 0 digits in its '
62 'fractional part')
63 if left_digits == 0 and right_digits == 0:
64 raise ValueError(
65 'A float number cannot have less than 0 digits in total')
66 if None not in (min_value, max_value) and min_value > max_value:
67 raise ValueError('Min value cannot be greater than max value')
68 if None not in (min_value, max_value) and min_value == max_value:
69 raise ValueError('Min and max value cannot be the same')
70 if positive and min_value is not None and min_value <= 0:
71 raise ValueError(
72 'Cannot combine positive=True with negative or zero min_value')
73 if left_digits is not None and max_value and math.ceil(math.log10(abs(max_value))) > left_digits:
74 raise ValueError('Max value must fit within left digits')
75 if left_digits is not None and min_value and math.ceil(math.log10(abs(min_value))) > left_digits:
76 raise ValueError('Min value must fit within left digits')
77
78 # Make sure at least either left or right is set
79 if left_digits is None and right_digits is None:
80 needed_left_digits = max(1, math.ceil(math.log10(max(abs(max_value or 1), abs(min_value or 1)))))
81 right_digits = self.random_int(1, sys.float_info.dig - needed_left_digits)
82
83 # If only one side is set, choose #digits for other side
84 if (left_digits is None) ^ (right_digits is None):
85 if left_digits is None:
86 left_digits = max(1, sys.float_info.dig - right_digits)
87 else:
88 right_digits = max(1, sys.float_info.dig - left_digits)
89
90 # Make sure we don't ask for too many digits!
91 if left_digits + right_digits > sys.float_info.dig:
92 raise ValueError(
93 f'Asking for too many digits ({left_digits} + {right_digits} == {left_digits + right_digits} > '
94 f'{sys.float_info.dig})',
95 )
96
97 sign = ''
98 if (min_value is not None) or (max_value is not None):
99 # Make sure left_digits still respected
100 if left_digits is not None:
101 if max_value is None:
102 max_value = 10 ** left_digits # minus smallest representable, adjusted later
103 if min_value is None:
104 min_value = -(10 ** left_digits) # plus smallest representable, adjusted later
105
106 if max_value is not None and max_value < 0:
107 max_value += 1 # as the random_int will be generated up to max_value - 1
108 if min_value is not None and min_value < 0:
109 min_value += 1 # as we then append digits after the left_number
110 left_number = self._safe_random_int(
111 min_value, max_value, positive,
112 )
113 else:
114 sign = '+' if positive else self.random_element(('+', '-'))
115 left_number = self.random_number(left_digits)
116
117 result = float(f'{sign}{left_number}.{self.random_number(right_digits)}')
118 if positive and result == 0:
119 if right_digits:
120 result = float('0.' + '0' * (right_digits - 1) + '1')
121 else:
122 result += sys.float_info.epsilon
123
124 if right_digits:
125 result = min(result, 10 ** left_digits - float(f'0.{"0" * (right_digits - 1)}1'))
126 result = max(result, -(10 ** left_digits + float(f'0.{"0" * (right_digits - 1)}1')))
127 else:
128 result = min(result, 10 ** left_digits - 1)
129 result = max(result, -(10 ** left_digits + 1))
130
131 return result
132
133 def _safe_random_int(self, min_value, max_value, positive):
134 orig_min_value = min_value
135 orig_max_value = max_value
136
137 if min_value is None:
138 min_value = max_value - self.random_int()
139 if max_value is None:
140 max_value = min_value + self.random_int()
141 if positive:
142 min_value = max(min_value, 0)
143
144 if min_value == max_value:
145 return self._safe_random_int(orig_min_value, orig_max_value, positive)
146 else:
147 return self.random_int(min_value, max_value - 1)
148
149 def pyint(self, min_value=0, max_value=9999, step=1):
150 return self.generator.random_int(min_value, max_value, step=step)
151
152 def pydecimal(self, left_digits=None, right_digits=None, positive=False,
153 min_value=None, max_value=None):
154
155 float_ = self.pyfloat(
156 left_digits, right_digits, positive, min_value, max_value)
157 return Decimal(str(float_))
158
159 def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
160 return tuple(
161 self._pyiterable(
162 nb_elements,
163 variable_nb_elements,
164 value_types,
165 *allowed_types))
166
167 def pyset(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
168 return set(
169 self._pyiterable(
170 nb_elements,
171 variable_nb_elements,
172 value_types,
173 *allowed_types))
174
175 def pylist(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
176 return list(
177 self._pyiterable(
178 nb_elements,
179 variable_nb_elements,
180 value_types,
181 *allowed_types))
182
183 def pyiterable(
184 self,
185 nb_elements=10,
186 variable_nb_elements=True,
187 value_types=None,
188 *allowed_types):
189 value_types = self._check_signature(value_types, allowed_types)
190 return self.random_element([self.pylist, self.pytuple, self.pyset])(
191 nb_elements, variable_nb_elements, value_types, *allowed_types)
192
193 def _random_type(self, type_list):
194 value_type = self.random_element(type_list)
195
196 method_name = f'py{value_type}'
197 if hasattr(self, method_name):
198 value_type = method_name
199
200 return self.generator.format(value_type)
201
202 def _pyiterable(
203 self,
204 nb_elements=10,
205 variable_nb_elements=True,
206 value_types=None,
207 *allowed_types):
208
209 value_types = self._check_signature(value_types, allowed_types)
210
211 value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()
212 for t in value_types
213 # avoid recursion
214 if t not in ['iterable', 'list', 'tuple', 'dict', 'set']]
215 if not value_types:
216 value_types = self.default_value_types
217
218 if variable_nb_elements:
219 nb_elements = self.randomize_nb_elements(nb_elements, min=1)
220
221 for _ in range(nb_elements):
222 yield self._random_type(value_types)
223
224 def pydict(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
225 """
226 Returns a dictionary.
227
228 :nb_elements: number of elements for dictionary
229 :variable_nb_elements: is use variable number of elements for dictionary
230 :value_types: type of dictionary values
231 """
232 if variable_nb_elements:
233 nb_elements = self.randomize_nb_elements(nb_elements, min=1)
234
235 return dict(zip(
236 self.generator.words(nb_elements, unique=True),
237 self._pyiterable(nb_elements, False, value_types, *allowed_types),
238 ))
239
240 def pystruct(self, count=10, value_types=None, *allowed_types):
241 value_types = self._check_signature(value_types, allowed_types)
242
243 value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()
244 for t in value_types
245 # avoid recursion
246 if t != 'struct']
247 if not value_types:
248 value_types = self.default_value_types
249
250 types = []
251 d = {}
252 nd = {}
253 for i in range(count):
254 d[self.generator.word()] = self._random_type(value_types)
255 types.append(self._random_type(value_types))
256 nd[self.generator.word()] = {i: self._random_type(value_types),
257 i + 1: [self._random_type(value_types),
258 self._random_type(value_types),
259 self._random_type(value_types)],
260 i + 2: {i: self._random_type(value_types),
261 i + 1: self._random_type(value_types),
262 i + 2: [self._random_type(value_types),
263 self._random_type(value_types)]}}
264 return types, d, nd
265
[end of faker/providers/python/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py
--- a/faker/providers/python/__init__.py
+++ b/faker/providers/python/__init__.py
@@ -151,10 +151,58 @@
def pydecimal(self, left_digits=None, right_digits=None, positive=False,
min_value=None, max_value=None):
+ if left_digits is not None and left_digits < 0:
+ raise ValueError(
+ 'A decimal number cannot have less than 0 digits in its '
+ 'integer part')
+ if right_digits is not None and right_digits < 0:
+ raise ValueError(
+ 'A decimal number cannot have less than 0 digits in its '
+ 'fractional part')
+ if (left_digits is not None and left_digits == 0) and (right_digits is not None and right_digits == 0):
+ raise ValueError(
+ 'A decimal number cannot have 0 digits in total')
+ if None not in (min_value, max_value) and min_value > max_value:
+ raise ValueError('Min value cannot be greater than max value')
+ if None not in (min_value, max_value) and min_value == max_value:
+ raise ValueError('Min and max value cannot be the same')
+ if positive and min_value is not None and min_value <= 0:
+ raise ValueError(
+ 'Cannot combine positive=True with negative or zero min_value')
+ if left_digits is not None and max_value and math.ceil(math.log10(abs(max_value))) > left_digits:
+ raise ValueError('Max value must fit within left digits')
+ if left_digits is not None and min_value and math.ceil(math.log10(abs(min_value))) > left_digits:
+ raise ValueError('Min value must fit within left digits')
+
+ # if either left or right digits are not specified we randomly choose a length
+ max_random_digits = 100
+ minimum_left_digits = len(str(min_value)) if min_value is not None else 1
+ if left_digits is None and right_digits is None:
+ right_digits = self.random_int(1, max_random_digits)
+ left_digits = self.random_int(minimum_left_digits, max_random_digits)
+ if left_digits is not None and right_digits is None:
+ right_digits = self.random_int(1, max_random_digits)
+ if left_digits is None and right_digits is not None:
+ left_digits = self.random_int(minimum_left_digits, max_random_digits)
- float_ = self.pyfloat(
- left_digits, right_digits, positive, min_value, max_value)
- return Decimal(str(float_))
+ sign = ''
+ left_number = ''.join([str(self.random_digit()) for i in range(0, left_digits)]) or '0'
+ if right_digits is not None:
+ right_number = ''.join([str(self.random_digit()) for i in range(0, right_digits)])
+ else:
+ right_number = ''
+ sign = '+' if positive else self.random_element(('+', '-'))
+
+ result = Decimal(f'{sign}{left_number}.{right_number}')
+
+ # Because the random result might have the same number of decimals as max_value the random number
+ # might be above max_value or below min_value
+ if max_value is not None and result > max_value:
+ result = max_value
+ if min_value is not None and result < min_value:
+ result = min_value
+
+ return result
def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
return tuple(
|
{"golden_diff": "diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py\n--- a/faker/providers/python/__init__.py\n+++ b/faker/providers/python/__init__.py\n@@ -151,10 +151,58 @@\n \n def pydecimal(self, left_digits=None, right_digits=None, positive=False,\n min_value=None, max_value=None):\n+ if left_digits is not None and left_digits < 0:\n+ raise ValueError(\n+ 'A decimal number cannot have less than 0 digits in its '\n+ 'integer part')\n+ if right_digits is not None and right_digits < 0:\n+ raise ValueError(\n+ 'A decimal number cannot have less than 0 digits in its '\n+ 'fractional part')\n+ if (left_digits is not None and left_digits == 0) and (right_digits is not None and right_digits == 0):\n+ raise ValueError(\n+ 'A decimal number cannot have 0 digits in total')\n+ if None not in (min_value, max_value) and min_value > max_value:\n+ raise ValueError('Min value cannot be greater than max value')\n+ if None not in (min_value, max_value) and min_value == max_value:\n+ raise ValueError('Min and max value cannot be the same')\n+ if positive and min_value is not None and min_value <= 0:\n+ raise ValueError(\n+ 'Cannot combine positive=True with negative or zero min_value')\n+ if left_digits is not None and max_value and math.ceil(math.log10(abs(max_value))) > left_digits:\n+ raise ValueError('Max value must fit within left digits')\n+ if left_digits is not None and min_value and math.ceil(math.log10(abs(min_value))) > left_digits:\n+ raise ValueError('Min value must fit within left digits')\n+\n+ # if either left or right digits are not specified we randomly choose a length\n+ max_random_digits = 100\n+ minimum_left_digits = len(str(min_value)) if min_value is not None else 1\n+ if left_digits is None and right_digits is None:\n+ right_digits = self.random_int(1, max_random_digits)\n+ left_digits = self.random_int(minimum_left_digits, max_random_digits)\n+ if left_digits is not None and right_digits is None:\n+ right_digits = self.random_int(1, max_random_digits)\n+ if left_digits is None and right_digits is not None:\n+ left_digits = self.random_int(minimum_left_digits, max_random_digits)\n \n- float_ = self.pyfloat(\n- left_digits, right_digits, positive, min_value, max_value)\n- return Decimal(str(float_))\n+ sign = ''\n+ left_number = ''.join([str(self.random_digit()) for i in range(0, left_digits)]) or '0'\n+ if right_digits is not None:\n+ right_number = ''.join([str(self.random_digit()) for i in range(0, right_digits)])\n+ else:\n+ right_number = ''\n+ sign = '+' if positive else self.random_element(('+', '-'))\n+\n+ result = Decimal(f'{sign}{left_number}.{right_number}')\n+\n+ # Because the random result might have the same number of decimals as max_value the random number\n+ # might be above max_value or below min_value\n+ if max_value is not None and result > max_value:\n+ result = max_value\n+ if min_value is not None and result < min_value:\n+ result = min_value\n+\n+ return result\n \n def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return tuple(\n", "issue": "pydecimal unnecessarily limited by float's max digits\n* Faker version: master at time of writing https://github.com/joke2k/faker/commit/d9f4b00b9134e6dfbb09cc1caa81c912b79c3c7c\r\n* OS: Linux\r\n\r\nPython's `Decimal` can be arbitrarily many digits; with default precision 28. Faker's `pydecimal` uses `pyfloat`, and so gets limited to `sys.float_info.dig`, which is appropriate for `pyfloat` but not really relevant for `pydecimal`. 
(The Decimal context could even be less than that.)\r\n\r\n### Steps to reproduce\r\n\r\n1. `pydecimal(left_digits=16)`\r\n\r\n### Expected behavior\r\n\r\nGet a 16 digit Decimal\r\n\r\n### Actual behavior\r\n\r\n> ValueError: Asking for too many digits (16 + 0 == 16 > 15)\r\n\npydecimal unnecessarily limited by float's max digits\n* Faker version: master at time of writing https://github.com/joke2k/faker/commit/d9f4b00b9134e6dfbb09cc1caa81c912b79c3c7c\r\n* OS: Linux\r\n\r\nPython's `Decimal` can be arbitrarily many digits; with default precision 28. Faker's `pydecimal` uses `pyfloat`, and so gets limited to `sys.float_info.dig`, which is appropriate for `pyfloat` but not really relevant for `pydecimal`. (The Decimal context could even be less than that.)\r\n\r\n### Steps to reproduce\r\n\r\n1. `pydecimal(left_digits=16)`\r\n\r\n### Expected behavior\r\n\r\nGet a 16 digit Decimal\r\n\r\n### Actual behavior\r\n\r\n> ValueError: Asking for too many digits (16 + 0 == 16 > 15)\r\n\n", "before_files": [{"content": "import math\nimport string\nimport sys\nimport warnings\n\nfrom decimal import Decimal\n\nfrom .. import BaseProvider\n\n\nclass Provider(BaseProvider):\n default_value_types = (\n 'str', 'str', 'str', 'str', 'float', 'int', 'int', 'decimal',\n 'date_time', 'uri', 'email',\n )\n\n def _check_signature(self, value_types, allowed_types):\n if value_types is not None and not isinstance(value_types, (list, tuple)):\n value_types = [value_types]\n warnings.warn(\n 'Passing value types as positional arguments is going to be '\n 'deprecated. Pass them as a list or tuple instead.',\n PendingDeprecationWarning,\n )\n if value_types is None:\n value_types = ()\n return tuple(value_types) + allowed_types\n\n def pybool(self):\n return self.random_int(0, 1) == 1\n\n def pystr(self, min_chars=None, max_chars=20):\n \"\"\"\n Generates a random string of upper and lowercase letters.\n :type min_chars: int\n :type max_chars: int\n :return: String. 
Random of random length between min and max characters.\n \"\"\"\n if min_chars is None:\n return \"\".join(self.random_letters(length=max_chars))\n else:\n assert (\n max_chars >= min_chars), \"Maximum length must be greater than or equal to minimum length\"\n return \"\".join(\n self.random_letters(\n length=self.generator.random.randint(min_chars, max_chars),\n ),\n )\n\n def pystr_format(self, string_format='?#-###{{random_int}}{{random_letter}}', letters=string.ascii_letters):\n return self.bothify(self.generator.parse(string_format), letters=letters)\n\n def pyfloat(self, left_digits=None, right_digits=None, positive=False,\n min_value=None, max_value=None):\n if left_digits is not None and left_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n 'integer part')\n if right_digits is not None and right_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n 'fractional part')\n if left_digits == 0 and right_digits == 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in total')\n if None not in (min_value, max_value) and min_value > max_value:\n raise ValueError('Min value cannot be greater than max value')\n if None not in (min_value, max_value) and min_value == max_value:\n raise ValueError('Min and max value cannot be the same')\n if positive and min_value is not None and min_value <= 0:\n raise ValueError(\n 'Cannot combine positive=True with negative or zero min_value')\n if left_digits is not None and max_value and math.ceil(math.log10(abs(max_value))) > left_digits:\n raise ValueError('Max value must fit within left digits')\n if left_digits is not None and min_value and math.ceil(math.log10(abs(min_value))) > left_digits:\n raise ValueError('Min value must fit within left digits')\n\n # Make sure at least either left or right is set\n if left_digits is None and right_digits is None:\n needed_left_digits = max(1, math.ceil(math.log10(max(abs(max_value or 1), abs(min_value or 1)))))\n right_digits = self.random_int(1, sys.float_info.dig - needed_left_digits)\n\n # If only one side is set, choose #digits for other side\n if (left_digits is None) ^ (right_digits is None):\n if left_digits is None:\n left_digits = max(1, sys.float_info.dig - right_digits)\n else:\n right_digits = max(1, sys.float_info.dig - left_digits)\n\n # Make sure we don't ask for too many digits!\n if left_digits + right_digits > sys.float_info.dig:\n raise ValueError(\n f'Asking for too many digits ({left_digits} + {right_digits} == {left_digits + right_digits} > '\n f'{sys.float_info.dig})',\n )\n\n sign = ''\n if (min_value is not None) or (max_value is not None):\n # Make sure left_digits still respected\n if left_digits is not None:\n if max_value is None:\n max_value = 10 ** left_digits # minus smallest representable, adjusted later\n if min_value is None:\n min_value = -(10 ** left_digits) # plus smallest representable, adjusted later\n\n if max_value is not None and max_value < 0:\n max_value += 1 # as the random_int will be generated up to max_value - 1\n if min_value is not None and min_value < 0:\n min_value += 1 # as we then append digits after the left_number\n left_number = self._safe_random_int(\n min_value, max_value, positive,\n )\n else:\n sign = '+' if positive else self.random_element(('+', '-'))\n left_number = self.random_number(left_digits)\n\n result = float(f'{sign}{left_number}.{self.random_number(right_digits)}')\n if positive and result == 0:\n if right_digits:\n result = float('0.' 
+ '0' * (right_digits - 1) + '1')\n else:\n result += sys.float_info.epsilon\n\n if right_digits:\n result = min(result, 10 ** left_digits - float(f'0.{\"0\" * (right_digits - 1)}1'))\n result = max(result, -(10 ** left_digits + float(f'0.{\"0\" * (right_digits - 1)}1')))\n else:\n result = min(result, 10 ** left_digits - 1)\n result = max(result, -(10 ** left_digits + 1))\n\n return result\n\n def _safe_random_int(self, min_value, max_value, positive):\n orig_min_value = min_value\n orig_max_value = max_value\n\n if min_value is None:\n min_value = max_value - self.random_int()\n if max_value is None:\n max_value = min_value + self.random_int()\n if positive:\n min_value = max(min_value, 0)\n\n if min_value == max_value:\n return self._safe_random_int(orig_min_value, orig_max_value, positive)\n else:\n return self.random_int(min_value, max_value - 1)\n\n def pyint(self, min_value=0, max_value=9999, step=1):\n return self.generator.random_int(min_value, max_value, step=step)\n\n def pydecimal(self, left_digits=None, right_digits=None, positive=False,\n min_value=None, max_value=None):\n\n float_ = self.pyfloat(\n left_digits, right_digits, positive, min_value, max_value)\n return Decimal(str(float_))\n\n def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return tuple(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pyset(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return set(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pylist(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return list(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pyiterable(\n self,\n nb_elements=10,\n variable_nb_elements=True,\n value_types=None,\n *allowed_types):\n value_types = self._check_signature(value_types, allowed_types)\n return self.random_element([self.pylist, self.pytuple, self.pyset])(\n nb_elements, variable_nb_elements, value_types, *allowed_types)\n\n def _random_type(self, type_list):\n value_type = self.random_element(type_list)\n\n method_name = f'py{value_type}'\n if hasattr(self, method_name):\n value_type = method_name\n\n return self.generator.format(value_type)\n\n def _pyiterable(\n self,\n nb_elements=10,\n variable_nb_elements=True,\n value_types=None,\n *allowed_types):\n\n value_types = self._check_signature(value_types, allowed_types)\n\n value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()\n for t in value_types\n # avoid recursion\n if t not in ['iterable', 'list', 'tuple', 'dict', 'set']]\n if not value_types:\n value_types = self.default_value_types\n\n if variable_nb_elements:\n nb_elements = self.randomize_nb_elements(nb_elements, min=1)\n\n for _ in range(nb_elements):\n yield self._random_type(value_types)\n\n def pydict(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n \"\"\"\n Returns a dictionary.\n\n :nb_elements: number of elements for dictionary\n :variable_nb_elements: is use variable number of elements for dictionary\n :value_types: type of dictionary values\n \"\"\"\n if variable_nb_elements:\n nb_elements = self.randomize_nb_elements(nb_elements, min=1)\n\n return dict(zip(\n self.generator.words(nb_elements, unique=True),\n self._pyiterable(nb_elements, False, value_types, *allowed_types),\n ))\n\n def 
pystruct(self, count=10, value_types=None, *allowed_types):\n value_types = self._check_signature(value_types, allowed_types)\n\n value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()\n for t in value_types\n # avoid recursion\n if t != 'struct']\n if not value_types:\n value_types = self.default_value_types\n\n types = []\n d = {}\n nd = {}\n for i in range(count):\n d[self.generator.word()] = self._random_type(value_types)\n types.append(self._random_type(value_types))\n nd[self.generator.word()] = {i: self._random_type(value_types),\n i + 1: [self._random_type(value_types),\n self._random_type(value_types),\n self._random_type(value_types)],\n i + 2: {i: self._random_type(value_types),\n i + 1: self._random_type(value_types),\n i + 2: [self._random_type(value_types),\n self._random_type(value_types)]}}\n return types, d, nd\n", "path": "faker/providers/python/__init__.py"}]}
| 3,994 | 810 |
gh_patches_debug_38068
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-4589
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation and code in `floyd_warshall_numpy` are inconsistent
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
Using `floyd_warshall_numpy` with a specified set of nodes will only find paths that are confined to that subset of nodes. I'm not sure I agree with that choice, and certainly the documentation does not make it clear.
### Expected Behavior
<!--- Tell us what should happen -->
Based on the documentation, I would expect it to find a path that starts at one node and ends at another, even if that path must go through additional nodes not in the provided list.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
https://stackoverflow.com/q/65771537/2966723
### Environment
<!--- Please provide details about your local environment -->
Python version: 3.9
NetworkX version: 2.5
### Additional context
<!--- Add any other context about the problem here, screenshots, etc. -->
</issue>
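
A minimal sketch of the reported behaviour, using the NetworkX 2.5-era API shown in the listing below; the three-node graph is invented for illustration, and the only `a`-to-`c` path runs through `b`, which is left out of `nodelist`.

```python
import networkx as nx

G = nx.Graph()
G.add_edge("a", "b", weight=1)
G.add_edge("b", "c", weight=1)

# Whole-graph call: the a->c distance is 2, routed through b.
print(nx.floyd_warshall_numpy(G))

# Restricting nodelist silently drops b, so the a->c entry comes back as inf,
# because the search is confined to the induced sub-matrix.
print(nx.floyd_warshall_numpy(G, nodelist=["a", "c"]))
```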
<code>
[start of networkx/algorithms/shortest_paths/dense.py]
1 """Floyd-Warshall algorithm for shortest paths.
2 """
3 import networkx as nx
4
5 __all__ = [
6 "floyd_warshall",
7 "floyd_warshall_predecessor_and_distance",
8 "reconstruct_path",
9 "floyd_warshall_numpy",
10 ]
11
12
13 def floyd_warshall_numpy(G, nodelist=None, weight="weight"):
14 """Find all-pairs shortest path lengths using Floyd's algorithm.
15
16 Parameters
17 ----------
18 G : NetworkX graph
19
20 nodelist : list, optional
21 The rows and columns are ordered by the nodes in nodelist.
22 If nodelist is None then the ordering is produced by G.nodes().
23
24 weight: string, optional (default= 'weight')
25 Edge data key corresponding to the edge weight.
26
27 Returns
28 -------
29 distance : NumPy matrix
30 A matrix of shortest path distances between nodes.
31 If there is no path between to nodes the corresponding matrix entry
32 will be Inf.
33
34 Notes
35 -----
36 Floyd's algorithm is appropriate for finding shortest paths in
37 dense graphs or graphs with negative weights when Dijkstra's
38 algorithm fails. This algorithm can still fail if there are negative
39 cycles. It has running time $O(n^3)$ with running space of $O(n^2)$.
40 """
41 import numpy as np
42
43 # To handle cases when an edge has weight=0, we must make sure that
44 # nonedges are not given the value 0 as well.
45 A = nx.to_numpy_array(
46 G, nodelist=nodelist, multigraph_weight=min, weight=weight, nonedge=np.inf
47 )
48 n, m = A.shape
49 np.fill_diagonal(A, 0) # diagonal elements should be zero
50 for i in range(n):
51 # The second term has the same shape as A due to broadcasting
52 A = np.minimum(A, A[i, :][np.newaxis, :] + A[:, i][:, np.newaxis])
53 return A
54
55
56 def floyd_warshall_predecessor_and_distance(G, weight="weight"):
57 """Find all-pairs shortest path lengths using Floyd's algorithm.
58
59 Parameters
60 ----------
61 G : NetworkX graph
62
63 weight: string, optional (default= 'weight')
64 Edge data key corresponding to the edge weight.
65
66 Returns
67 -------
68 predecessor,distance : dictionaries
69 Dictionaries, keyed by source and target, of predecessors and distances
70 in the shortest path.
71
72 Examples
73 --------
74 >>> G = nx.DiGraph()
75 >>> G.add_weighted_edges_from(
76 ... [
77 ... ("s", "u", 10),
78 ... ("s", "x", 5),
79 ... ("u", "v", 1),
80 ... ("u", "x", 2),
81 ... ("v", "y", 1),
82 ... ("x", "u", 3),
83 ... ("x", "v", 5),
84 ... ("x", "y", 2),
85 ... ("y", "s", 7),
86 ... ("y", "v", 6),
87 ... ]
88 ... )
89 >>> predecessors, _ = nx.floyd_warshall_predecessor_and_distance(G)
90 >>> print(nx.reconstruct_path("s", "v", predecessors))
91 ['s', 'x', 'u', 'v']
92
93 Notes
94 -----
95 Floyd's algorithm is appropriate for finding shortest paths
96 in dense graphs or graphs with negative weights when Dijkstra's algorithm
97 fails. This algorithm can still fail if there are negative cycles.
98 It has running time $O(n^3)$ with running space of $O(n^2)$.
99
100 See Also
101 --------
102 floyd_warshall
103 floyd_warshall_numpy
104 all_pairs_shortest_path
105 all_pairs_shortest_path_length
106 """
107 from collections import defaultdict
108
109 # dictionary-of-dictionaries representation for dist and pred
110 # use some defaultdict magick here
111 # for dist the default is the floating point inf value
112 dist = defaultdict(lambda: defaultdict(lambda: float("inf")))
113 for u in G:
114 dist[u][u] = 0
115 pred = defaultdict(dict)
116 # initialize path distance dictionary to be the adjacency matrix
117 # also set the distance to self to 0 (zero diagonal)
118 undirected = not G.is_directed()
119 for u, v, d in G.edges(data=True):
120 e_weight = d.get(weight, 1.0)
121 dist[u][v] = min(e_weight, dist[u][v])
122 pred[u][v] = u
123 if undirected:
124 dist[v][u] = min(e_weight, dist[v][u])
125 pred[v][u] = v
126 for w in G:
127 dist_w = dist[w] # save recomputation
128 for u in G:
129 dist_u = dist[u] # save recomputation
130 for v in G:
131 d = dist_u[w] + dist_w[v]
132 if dist_u[v] > d:
133 dist_u[v] = d
134 pred[u][v] = pred[w][v]
135 return dict(pred), dict(dist)
136
137
138 def reconstruct_path(source, target, predecessors):
139 """Reconstruct a path from source to target using the predecessors
140 dict as returned by floyd_warshall_predecessor_and_distance
141
142 Parameters
143 ----------
144 source : node
145 Starting node for path
146
147 target : node
148 Ending node for path
149
150 predecessors: dictionary
151 Dictionary, keyed by source and target, of predecessors in the
152 shortest path, as returned by floyd_warshall_predecessor_and_distance
153
154 Returns
155 -------
156 path : list
157 A list of nodes containing the shortest path from source to target
158
159 If source and target are the same, an empty list is returned
160
161 Notes
162 -----
163 This function is meant to give more applicability to the
164 floyd_warshall_predecessor_and_distance function
165
166 See Also
167 --------
168 floyd_warshall_predecessor_and_distance
169 """
170 if source == target:
171 return []
172 prev = predecessors[source]
173 curr = prev[target]
174 path = [target, curr]
175 while curr != source:
176 curr = prev[curr]
177 path.append(curr)
178 return list(reversed(path))
179
180
181 def floyd_warshall(G, weight="weight"):
182 """Find all-pairs shortest path lengths using Floyd's algorithm.
183
184 Parameters
185 ----------
186 G : NetworkX graph
187
188 weight: string, optional (default= 'weight')
189 Edge data key corresponding to the edge weight.
190
191
192 Returns
193 -------
194 distance : dict
195 A dictionary, keyed by source and target, of shortest paths distances
196 between nodes.
197
198 Notes
199 -----
200 Floyd's algorithm is appropriate for finding shortest paths
201 in dense graphs or graphs with negative weights when Dijkstra's algorithm
202 fails. This algorithm can still fail if there are negative cycles.
203 It has running time $O(n^3)$ with running space of $O(n^2)$.
204
205 See Also
206 --------
207 floyd_warshall_predecessor_and_distance
208 floyd_warshall_numpy
209 all_pairs_shortest_path
210 all_pairs_shortest_path_length
211 """
212 # could make this its own function to reduce memory costs
213 return floyd_warshall_predecessor_and_distance(G, weight=weight)[1]
214
[end of networkx/algorithms/shortest_paths/dense.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/networkx/algorithms/shortest_paths/dense.py b/networkx/algorithms/shortest_paths/dense.py
--- a/networkx/algorithms/shortest_paths/dense.py
+++ b/networkx/algorithms/shortest_paths/dense.py
@@ -13,37 +13,57 @@
def floyd_warshall_numpy(G, nodelist=None, weight="weight"):
"""Find all-pairs shortest path lengths using Floyd's algorithm.
+ This algorithm for finding shortest paths takes advantage of
+ matrix representations of a graph and works well for dense
+ graphs where all-pairs shortest path lengths are desired.
+ The results are returned as a NumPy array, distance[i, j],
+ where i and j are the indexes of two nodes in nodelist.
+ The entry distance[i, j] is the distance along a shortest
+ path from i to j. If no path exists the distance is Inf.
+
Parameters
----------
G : NetworkX graph
- nodelist : list, optional
+ nodelist : list, optional (default=G.nodes)
The rows and columns are ordered by the nodes in nodelist.
- If nodelist is None then the ordering is produced by G.nodes().
+ If nodelist is None then the ordering is produced by G.nodes.
+ Nodelist should include all nodes in G.
- weight: string, optional (default= 'weight')
+ weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
Returns
-------
distance : NumPy matrix
A matrix of shortest path distances between nodes.
- If there is no path between to nodes the corresponding matrix entry
- will be Inf.
+ If there is no path between two nodes the value is Inf.
Notes
-----
Floyd's algorithm is appropriate for finding shortest paths in
dense graphs or graphs with negative weights when Dijkstra's
algorithm fails. This algorithm can still fail if there are negative
- cycles. It has running time $O(n^3)$ with running space of $O(n^2)$.
+ cycles. It has running time $O(n^3)$ with running space of $O(n^2)$.
+
+ Raises
+ ------
+ NetworkXError
+ If nodelist is not a list of the nodes in G.
"""
import numpy as np
+ if nodelist is not None:
+ if not (len(nodelist) == len(G) == len(set(nodelist))):
+ raise nx.NetworkXError(
+ "nodelist must contain every node in G with no repeats."
+ "If you wanted a subgraph of G use G.subgraph(nodelist)"
+ )
+
# To handle cases when an edge has weight=0, we must make sure that
# nonedges are not given the value 0 as well.
A = nx.to_numpy_array(
- G, nodelist=nodelist, multigraph_weight=min, weight=weight, nonedge=np.inf
+ G, nodelist, multigraph_weight=min, weight=weight, nonedge=np.inf
)
n, m = A.shape
np.fill_diagonal(A, 0) # diagonal elements should be zero
|
{"golden_diff": "diff --git a/networkx/algorithms/shortest_paths/dense.py b/networkx/algorithms/shortest_paths/dense.py\n--- a/networkx/algorithms/shortest_paths/dense.py\n+++ b/networkx/algorithms/shortest_paths/dense.py\n@@ -13,37 +13,57 @@\n def floyd_warshall_numpy(G, nodelist=None, weight=\"weight\"):\n \"\"\"Find all-pairs shortest path lengths using Floyd's algorithm.\n \n+ This algorithm for finding shortest paths takes advantage of\n+ matrix representations of a graph and works well for dense\n+ graphs where all-pairs shortest path lengths are desired.\n+ The results are returned as a NumPy array, distance[i, j],\n+ where i and j are the indexes of two nodes in nodelist.\n+ The entry distance[i, j] is the distance along a shortest\n+ path from i to j. If no path exists the distance is Inf.\n+\n Parameters\n ----------\n G : NetworkX graph\n \n- nodelist : list, optional\n+ nodelist : list, optional (default=G.nodes)\n The rows and columns are ordered by the nodes in nodelist.\n- If nodelist is None then the ordering is produced by G.nodes().\n+ If nodelist is None then the ordering is produced by G.nodes.\n+ Nodelist should include all nodes in G.\n \n- weight: string, optional (default= 'weight')\n+ weight: string, optional (default='weight')\n Edge data key corresponding to the edge weight.\n \n Returns\n -------\n distance : NumPy matrix\n A matrix of shortest path distances between nodes.\n- If there is no path between to nodes the corresponding matrix entry\n- will be Inf.\n+ If there is no path between two nodes the value is Inf.\n \n Notes\n -----\n Floyd's algorithm is appropriate for finding shortest paths in\n dense graphs or graphs with negative weights when Dijkstra's\n algorithm fails. This algorithm can still fail if there are negative\n- cycles. It has running time $O(n^3)$ with running space of $O(n^2)$.\n+ cycles. It has running time $O(n^3)$ with running space of $O(n^2)$.\n+\n+ Raises\n+ ------\n+ NetworkXError\n+ If nodelist is not a list of the nodes in G.\n \"\"\"\n import numpy as np\n \n+ if nodelist is not None:\n+ if not (len(nodelist) == len(G) == len(set(nodelist))):\n+ raise nx.NetworkXError(\n+ \"nodelist must contain every node in G with no repeats.\"\n+ \"If you wanted a subgraph of G use G.subgraph(nodelist)\"\n+ )\n+\n # To handle cases when an edge has weight=0, we must make sure that\n # nonedges are not given the value 0 as well.\n A = nx.to_numpy_array(\n- G, nodelist=nodelist, multigraph_weight=min, weight=weight, nonedge=np.inf\n+ G, nodelist, multigraph_weight=min, weight=weight, nonedge=np.inf\n )\n n, m = A.shape\n np.fill_diagonal(A, 0) # diagonal elements should be zero\n", "issue": "Documentation and code in `floyd_warshall_numpy` are inconsistent\n### Current Behavior\r\n<!--- Tell us what happens instead of the expected behavior -->\r\n\r\nUsing `floyd_warshall_numpy` with a specified set of nodes will only find paths that are confined to that subset of nodes. 
I'm not sure I agree with that choice, and certainly the documentation does not make it clear.\r\n\r\n### Expected Behavior\r\n<!--- Tell us what should happen -->\r\n\r\nBased on the documentation, I would expect it to find a path that starts at one node and ends at another, even if that path must go through additional nodes not in the provided list.\r\n\r\n### Steps to Reproduce\r\n<!--- Provide a minimal example that reproduces the bug -->\r\n\r\nhttps://stackoverflow.com/q/65771537/2966723\r\n\r\n### Environment\r\n<!--- Please provide details about your local environment -->\r\nPython version: 3.9\r\nNetworkX version: 2.5\r\n\r\n\r\n### Additional context\r\n<!--- Add any other context about the problem here, screenshots, etc. -->\r\n\n", "before_files": [{"content": "\"\"\"Floyd-Warshall algorithm for shortest paths.\n\"\"\"\nimport networkx as nx\n\n__all__ = [\n \"floyd_warshall\",\n \"floyd_warshall_predecessor_and_distance\",\n \"reconstruct_path\",\n \"floyd_warshall_numpy\",\n]\n\n\ndef floyd_warshall_numpy(G, nodelist=None, weight=\"weight\"):\n \"\"\"Find all-pairs shortest path lengths using Floyd's algorithm.\n\n Parameters\n ----------\n G : NetworkX graph\n\n nodelist : list, optional\n The rows and columns are ordered by the nodes in nodelist.\n If nodelist is None then the ordering is produced by G.nodes().\n\n weight: string, optional (default= 'weight')\n Edge data key corresponding to the edge weight.\n\n Returns\n -------\n distance : NumPy matrix\n A matrix of shortest path distances between nodes.\n If there is no path between to nodes the corresponding matrix entry\n will be Inf.\n\n Notes\n -----\n Floyd's algorithm is appropriate for finding shortest paths in\n dense graphs or graphs with negative weights when Dijkstra's\n algorithm fails. This algorithm can still fail if there are negative\n cycles. It has running time $O(n^3)$ with running space of $O(n^2)$.\n \"\"\"\n import numpy as np\n\n # To handle cases when an edge has weight=0, we must make sure that\n # nonedges are not given the value 0 as well.\n A = nx.to_numpy_array(\n G, nodelist=nodelist, multigraph_weight=min, weight=weight, nonedge=np.inf\n )\n n, m = A.shape\n np.fill_diagonal(A, 0) # diagonal elements should be zero\n for i in range(n):\n # The second term has the same shape as A due to broadcasting\n A = np.minimum(A, A[i, :][np.newaxis, :] + A[:, i][:, np.newaxis])\n return A\n\n\ndef floyd_warshall_predecessor_and_distance(G, weight=\"weight\"):\n \"\"\"Find all-pairs shortest path lengths using Floyd's algorithm.\n\n Parameters\n ----------\n G : NetworkX graph\n\n weight: string, optional (default= 'weight')\n Edge data key corresponding to the edge weight.\n\n Returns\n -------\n predecessor,distance : dictionaries\n Dictionaries, keyed by source and target, of predecessors and distances\n in the shortest path.\n\n Examples\n --------\n >>> G = nx.DiGraph()\n >>> G.add_weighted_edges_from(\n ... [\n ... (\"s\", \"u\", 10),\n ... (\"s\", \"x\", 5),\n ... (\"u\", \"v\", 1),\n ... (\"u\", \"x\", 2),\n ... (\"v\", \"y\", 1),\n ... (\"x\", \"u\", 3),\n ... (\"x\", \"v\", 5),\n ... (\"x\", \"y\", 2),\n ... (\"y\", \"s\", 7),\n ... (\"y\", \"v\", 6),\n ... ]\n ... )\n >>> predecessors, _ = nx.floyd_warshall_predecessor_and_distance(G)\n >>> print(nx.reconstruct_path(\"s\", \"v\", predecessors))\n ['s', 'x', 'u', 'v']\n\n Notes\n -----\n Floyd's algorithm is appropriate for finding shortest paths\n in dense graphs or graphs with negative weights when Dijkstra's algorithm\n fails. 
This algorithm can still fail if there are negative cycles.\n It has running time $O(n^3)$ with running space of $O(n^2)$.\n\n See Also\n --------\n floyd_warshall\n floyd_warshall_numpy\n all_pairs_shortest_path\n all_pairs_shortest_path_length\n \"\"\"\n from collections import defaultdict\n\n # dictionary-of-dictionaries representation for dist and pred\n # use some defaultdict magick here\n # for dist the default is the floating point inf value\n dist = defaultdict(lambda: defaultdict(lambda: float(\"inf\")))\n for u in G:\n dist[u][u] = 0\n pred = defaultdict(dict)\n # initialize path distance dictionary to be the adjacency matrix\n # also set the distance to self to 0 (zero diagonal)\n undirected = not G.is_directed()\n for u, v, d in G.edges(data=True):\n e_weight = d.get(weight, 1.0)\n dist[u][v] = min(e_weight, dist[u][v])\n pred[u][v] = u\n if undirected:\n dist[v][u] = min(e_weight, dist[v][u])\n pred[v][u] = v\n for w in G:\n dist_w = dist[w] # save recomputation\n for u in G:\n dist_u = dist[u] # save recomputation\n for v in G:\n d = dist_u[w] + dist_w[v]\n if dist_u[v] > d:\n dist_u[v] = d\n pred[u][v] = pred[w][v]\n return dict(pred), dict(dist)\n\n\ndef reconstruct_path(source, target, predecessors):\n \"\"\"Reconstruct a path from source to target using the predecessors\n dict as returned by floyd_warshall_predecessor_and_distance\n\n Parameters\n ----------\n source : node\n Starting node for path\n\n target : node\n Ending node for path\n\n predecessors: dictionary\n Dictionary, keyed by source and target, of predecessors in the\n shortest path, as returned by floyd_warshall_predecessor_and_distance\n\n Returns\n -------\n path : list\n A list of nodes containing the shortest path from source to target\n\n If source and target are the same, an empty list is returned\n\n Notes\n -----\n This function is meant to give more applicability to the\n floyd_warshall_predecessor_and_distance function\n\n See Also\n --------\n floyd_warshall_predecessor_and_distance\n \"\"\"\n if source == target:\n return []\n prev = predecessors[source]\n curr = prev[target]\n path = [target, curr]\n while curr != source:\n curr = prev[curr]\n path.append(curr)\n return list(reversed(path))\n\n\ndef floyd_warshall(G, weight=\"weight\"):\n \"\"\"Find all-pairs shortest path lengths using Floyd's algorithm.\n\n Parameters\n ----------\n G : NetworkX graph\n\n weight: string, optional (default= 'weight')\n Edge data key corresponding to the edge weight.\n\n\n Returns\n -------\n distance : dict\n A dictionary, keyed by source and target, of shortest paths distances\n between nodes.\n\n Notes\n -----\n Floyd's algorithm is appropriate for finding shortest paths\n in dense graphs or graphs with negative weights when Dijkstra's algorithm\n fails. This algorithm can still fail if there are negative cycles.\n It has running time $O(n^3)$ with running space of $O(n^2)$.\n\n See Also\n --------\n floyd_warshall_predecessor_and_distance\n floyd_warshall_numpy\n all_pairs_shortest_path\n all_pairs_shortest_path_length\n \"\"\"\n # could make this its own function to reduce memory costs\n return floyd_warshall_predecessor_and_distance(G, weight=weight)[1]\n", "path": "networkx/algorithms/shortest_paths/dense.py"}]}
| 2,916 | 712 |
gh_patches_debug_21555
|
rasdani/github-patches
|
git_diff
|
getpelican__pelican-845
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Conflicts rendering Category pages when category is not defined in consistent case
I was testing a jinja macro that dealt with creating links for categories.
I noted that if you define a category in one article as `Category: Something` and in another article as `Category: something`, these are treated as separate categories; however, when your category page is rendered, there is only the lowercase url, e.g. `category/something.html`. This will only associate with the articles with meta data defined as `Category: something` and not anywhere where it is defined with uppercase, since there is no `category/Something.html`.
I am not sure if making this case insensitive would break code. Certainly, it would be unclear when printing the category name which case to use. From an intelligent template process, you would set your case using a CSS style attribute to be sure it was the way you want, and it could always render categories in lower case.
Otherwise, it might just be sufficient to put this into the documentation. I always tend to capitalize by categories, but some people might not notice and wonder why some articles are missing. I have not yet tested this, but I would imagine the same issue exists for tags.
</issue>
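
A small sketch of the collision described above, written against the `pelican.urlwrappers` listing that follows; the settings keys come from the `CATEGORY_URL`/`CATEGORY_SAVE_AS` naming in `_from_settings`, and it assumes `slugify()` lower-cases the name, which is what the report implies.

```python
from pelican.urlwrappers import Category

settings = {"CATEGORY_URL": "category/{slug}.html",
            "CATEGORY_SAVE_AS": "category/{slug}.html"}

a = Category("Something", settings)
b = Category("something", settings)

print(a == b)                 # False: equality and hashing key off .name
print(a.save_as, b.save_as)   # both format to category/something.html
# Two "different" categories therefore compete for a single output page,
# and only the articles matching one spelling end up listed on it.
```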
<code>
[start of pelican/urlwrappers.py]
1 import os
2 import functools
3 import logging
4
5 import six
6
7 from pelican.utils import (slugify, python_2_unicode_compatible)
8
9 logger = logging.getLogger(__name__)
10
11
12 @python_2_unicode_compatible
13 @functools.total_ordering
14 class URLWrapper(object):
15 def __init__(self, name, settings):
16 self.name = name
17 self.slug = slugify(self.name)
18 self.settings = settings
19
20 def as_dict(self):
21 return self.__dict__
22
23 def __hash__(self):
24 return hash(self.name)
25
26 def _key(self):
27 return self.name
28
29 def _normalize_key(self, key):
30 return six.text_type(key)
31
32 def __eq__(self, other):
33 return self._key() == self._normalize_key(other)
34
35 def __ne__(self, other):
36 return self._key() != self._normalize_key(other)
37
38 def __lt__(self, other):
39 return self._key() < self._normalize_key(other)
40
41 def __str__(self):
42 return self.name
43
44 def __repr__(self):
45 return '<{} {}>'.format(type(self).__name__, str(self))
46
47 def _from_settings(self, key, get_page_name=False):
48 """Returns URL information as defined in settings.
49
50 When get_page_name=True returns URL without anything after {slug} e.g.
51 if in settings: CATEGORY_URL="cat/{slug}.html" this returns
52 "cat/{slug}" Useful for pagination.
53
54 """
55 setting = "%s_%s" % (self.__class__.__name__.upper(), key)
56 value = self.settings[setting]
57 if not isinstance(value, six.string_types):
58 logger.warning('%s is set to %s' % (setting, value))
59 return value
60 else:
61 if get_page_name:
62 return os.path.splitext(value)[0].format(**self.as_dict())
63 else:
64 return value.format(**self.as_dict())
65
66 page_name = property(functools.partial(_from_settings, key='URL',
67 get_page_name=True))
68 url = property(functools.partial(_from_settings, key='URL'))
69 save_as = property(functools.partial(_from_settings, key='SAVE_AS'))
70
71
72 class Category(URLWrapper):
73 pass
74
75
76 class Tag(URLWrapper):
77 def __init__(self, name, *args, **kwargs):
78 super(Tag, self).__init__(name.strip(), *args, **kwargs)
79
80
81 class Author(URLWrapper):
82 pass
83
[end of pelican/urlwrappers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pelican/urlwrappers.py b/pelican/urlwrappers.py
--- a/pelican/urlwrappers.py
+++ b/pelican/urlwrappers.py
@@ -13,21 +13,35 @@
@functools.total_ordering
class URLWrapper(object):
def __init__(self, name, settings):
+ # next 2 lines are redundant with the setter of the name property
+ # but are here for clarity
+ self._name = name
+ self.slug = slugify(name)
self.name = name
- self.slug = slugify(self.name)
self.settings = settings
+ @property
+ def name(self):
+ return self._name
+
+ @name.setter
+ def name(self, name):
+ self._name = name
+ self.slug = slugify(name)
+
def as_dict(self):
- return self.__dict__
+ d = self.__dict__
+ d['name'] = self.name
+ return d
def __hash__(self):
- return hash(self.name)
+ return hash(self.slug)
def _key(self):
- return self.name
+ return self.slug
def _normalize_key(self, key):
- return six.text_type(key)
+ return six.text_type(slugify(key))
def __eq__(self, other):
return self._key() == self._normalize_key(other)
|
{"golden_diff": "diff --git a/pelican/urlwrappers.py b/pelican/urlwrappers.py\n--- a/pelican/urlwrappers.py\n+++ b/pelican/urlwrappers.py\n@@ -13,21 +13,35 @@\n @functools.total_ordering\n class URLWrapper(object):\n def __init__(self, name, settings):\n+ # next 2 lines are redundant with the setter of the name property\n+ # but are here for clarity\n+ self._name = name\n+ self.slug = slugify(name)\n self.name = name\n- self.slug = slugify(self.name)\n self.settings = settings\n \n+ @property\n+ def name(self):\n+ return self._name\n+\n+ @name.setter\n+ def name(self, name):\n+ self._name = name\n+ self.slug = slugify(name)\n+\n def as_dict(self):\n- return self.__dict__\n+ d = self.__dict__\n+ d['name'] = self.name\n+ return d\n \n def __hash__(self):\n- return hash(self.name)\n+ return hash(self.slug)\n \n def _key(self):\n- return self.name\n+ return self.slug\n \n def _normalize_key(self, key):\n- return six.text_type(key)\n+ return six.text_type(slugify(key))\n \n def __eq__(self, other):\n return self._key() == self._normalize_key(other)\n", "issue": "Conflicts rendering Category pages when category is not defined in consistent case\nI was testing a jinja macro that dealt with creating links for categories.\n\nI noted that if you define a category in one article as `Category: Something` and in another article as `Category: something` that these are treated as separate categories, however, when your category page is rendered, there is only the lowecase url, e.g. `category/something.html`. This will only associate with the articles with meta data defined as `Category: something` and not anywhere where it is defined with uppercase since there is no `category/Something.html`.\n\nI am not sure if making this case insensitive would break code. Certainly, it would be unclear when printing the category name which case to use. From an intelligent template process, you would set you case using CSS style attribute to be sure it was the way you want, and it could always render categories in lower case.\n\nOtherwise, it might just be sufficient to put this into the documentation. I always tend to capitalize by categories, but some people might not notice and wonder why some articles are missing. 
I have not yet tested this, but I would imagine the same issue exists for tags.\n\n", "before_files": [{"content": "import os\nimport functools\nimport logging\n\nimport six\n\nfrom pelican.utils import (slugify, python_2_unicode_compatible)\n\nlogger = logging.getLogger(__name__)\n\n\n@python_2_unicode_compatible\[email protected]_ordering\nclass URLWrapper(object):\n def __init__(self, name, settings):\n self.name = name\n self.slug = slugify(self.name)\n self.settings = settings\n\n def as_dict(self):\n return self.__dict__\n\n def __hash__(self):\n return hash(self.name)\n\n def _key(self):\n return self.name\n\n def _normalize_key(self, key):\n return six.text_type(key)\n\n def __eq__(self, other):\n return self._key() == self._normalize_key(other)\n\n def __ne__(self, other):\n return self._key() != self._normalize_key(other)\n\n def __lt__(self, other):\n return self._key() < self._normalize_key(other)\n\n def __str__(self):\n return self.name\n\n def __repr__(self):\n return '<{} {}>'.format(type(self).__name__, str(self))\n\n def _from_settings(self, key, get_page_name=False):\n \"\"\"Returns URL information as defined in settings.\n\n When get_page_name=True returns URL without anything after {slug} e.g.\n if in settings: CATEGORY_URL=\"cat/{slug}.html\" this returns\n \"cat/{slug}\" Useful for pagination.\n\n \"\"\"\n setting = \"%s_%s\" % (self.__class__.__name__.upper(), key)\n value = self.settings[setting]\n if not isinstance(value, six.string_types):\n logger.warning('%s is set to %s' % (setting, value))\n return value\n else:\n if get_page_name:\n return os.path.splitext(value)[0].format(**self.as_dict())\n else:\n return value.format(**self.as_dict())\n\n page_name = property(functools.partial(_from_settings, key='URL',\n get_page_name=True))\n url = property(functools.partial(_from_settings, key='URL'))\n save_as = property(functools.partial(_from_settings, key='SAVE_AS'))\n\n\nclass Category(URLWrapper):\n pass\n\n\nclass Tag(URLWrapper):\n def __init__(self, name, *args, **kwargs):\n super(Tag, self).__init__(name.strip(), *args, **kwargs)\n\n\nclass Author(URLWrapper):\n pass\n", "path": "pelican/urlwrappers.py"}]}
| 1,478 | 320 |
gh_patches_debug_3713
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-948
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Context fails when a particular value is present in any register
### Description
If a particular value is present in any register ctx throws
```
Cannot access memory at address 0x7ffffffff000
```
It seems to happen at the 128TB split, but curiously only with unaligned addresses that would cause a qword read to cross 128TB,
so 128TB-{1..7} throws but neither 128TB-8 nor 128TB does.
Full backtrace
```
Traceback (most recent call last):
File "/opt/pwndbg/pwndbg/commands/__init__.py", line 130, in __call__
return self.function(*args, **kwargs)
File "/opt/pwndbg/pwndbg/commands/__init__.py", line 221, in _OnlyWhenRunning
return function(*a, **kw)
File "/opt/pwndbg/pwndbg/commands/context.py", line 269, in context
result[target].extend(func(target=out,
File "/opt/pwndbg/pwndbg/commands/context.py", line 350, in context_regs
regs = get_regs()
File "/opt/pwndbg/pwndbg/commands/context.py", line 405, in get_regs
desc = pwndbg.chain.format(value)
File "/opt/pwndbg/pwndbg/chain.py", line 112, in format
enhanced = pwndbg.enhance.enhance(chain[-1], code=code)
File "/opt/pwndbg/pwndbg/enhance.py", line 109, in enhance
intval = int(pwndbg.memory.poi(pwndbg.typeinfo.pvoid, value))
gdb.MemoryError: Cannot access memory at address 0x7ffffffff000
```
### Steps to reproduce
```asm
.globl _start
_start:
mov $0x7fffffffeff8, %rdx # no err
mov $0x7fffffffeffa, %rdx # err
mov $0x7fffffffefff, %rdx # err
mov $0x7ffffffff000, %rdx # no err
int3
```
```sh
as test.s -o test.o ; ld -e _start test.o -o test
```
### My setup
<!--
Show us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).
NOTE: We are currently supporting only Ubuntu installations.
It is known that pwndbg is not fully working e.g. on Arch Linux (the heap stuff is not working there).
If you would like to change this situation - help us improving pwndbg and supporting other distros!
This can be displayed in pwndbg through `version` command.
If it is somehow unavailable, use:
* `show version` - for gdb
* `py import sys; print(sys.version)` - for python
* pwndbg version/git commit id
-->
Platform: Linux-5.13.9_1-x86_64-with-glibc2.32
Gdb: 10.2
Python: 3.9.6 (default, Jul 6 2021, 18:29:50) [GCC 10.2.1 20201203]
Pwndbg: 1.1.0 build: b9e7bf1
Capstone: 4.0.1024
Unicorn: 1.0.3
This GDB was configured as follows:
configure --host=x86_64-unknown-linux-gnu --target=x86_64-unknown-linux-gnu
--with-auto-load-dir=$debugdir:$datadir/auto-load
--with-auto-load-safe-path=$debugdir:$datadir/auto-load
--with-expat
--with-gdb-datadir=/usr/share/gdb (relocatable)
--with-jit-reader-dir=/usr/lib64/gdb (relocatable)
--without-libunwind-ia64
--with-lzma
--without-babeltrace
--without-intel-pt
--without-mpfr
--without-xxhash
--with-python=/usr (relocatable)
--with-python-libdir=/usr/lib (relocatable)
--with-debuginfod
--without-guile
--disable-source-highlight
--with-separate-debug-dir=/usr/lib64/debug (relocatable)
--with-system-gdbinit=/etc/gdb/gdbinit
</issue>
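
Before the code listing, a back-of-the-envelope check of the boundary in the traceback; the page end 0x7ffffffff000 is read off the error message and the 8-byte pointer size is the x86-64 assumption, so treat the numbers as illustrative.

```python
page_end = 0x7ffffffff000   # end of the stack mapping implied by the traceback
ptrsize = 8                 # pwndbg.arch.ptrsize on x86-64

for value in (0x7fffffffeff8, 0x7fffffffeffa, 0x7fffffffefff, 0x7ffffffff000):
    in_page = value < page_end                   # 0x7ffffffff000 is already unmapped
    crosses = in_page and value + ptrsize > page_end
    print(hex(value), "err" if crosses else "no err")

# Only starts in the last 7 bytes of the page make the pointer-sized read in
# enhance() cross into unmapped memory, matching the `mov` comments above.
```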
<code>
[start of pwndbg/enhance.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Given an address in memory which does not contain a pointer elsewhere
5 into memory, attempt to describe the data as best as possible.
6
7 Currently prints out code, integers, or strings, in a best-effort manner
8 dependent on page permissions, the contents of the data, and any
9 supplemental information sources (e.g. active IDA Pro connection).
10 """
11
12 import string
13
14 import gdb
15
16 import pwndbg.arch
17 import pwndbg.color as color
18 import pwndbg.color.enhance as E
19 import pwndbg.config
20 import pwndbg.disasm
21 import pwndbg.memoize
22 import pwndbg.memory
23 import pwndbg.strings
24 import pwndbg.symbol
25 import pwndbg.typeinfo
26 from pwndbg.color.syntax_highlight import syntax_highlight
27
28 bad_instrs = [
29 '.byte',
30 '.long',
31 'rex.R',
32 'rex.XB',
33 '.inst',
34 '(bad)'
35 ]
36
37 def good_instr(i):
38 return not any(bad in i for bad in bad_instrs)
39
40 def int_str(value):
41 retval = '%#x' % int(value & pwndbg.arch.ptrmask)
42
43 # Try to unpack the value as a string
44 packed = pwndbg.arch.pack(int(value))
45 if all(c in string.printable.encode('utf-8') for c in packed):
46 if len(retval) > 4:
47 retval = '%s (%r)' % (retval, str(packed.decode('ascii', 'ignore')))
48
49 return retval
50
51
52 # @pwndbg.memoize.reset_on_stop
53 def enhance(value, code = True):
54 """
55 Given the last pointer in a chain, attempt to characterize
56
57 Note that 'the last pointer in a chain' may not at all actually be a pointer.
58
59 Additionally, optimizations are made based on various sources of data for
60 'value'. For example, if it is set to RWX, we try to get information on whether
61 it resides on the stack, or in a RW section that *happens* to be RWX, to
62 determine which order to print the fields.
63
64 Arguments:
65 value(obj): Value to enhance
66 code(bool): Hint that indicates the value may be an instruction
67 """
68 value = int(value)
69
70 name = pwndbg.symbol.get(value) or None
71 page = pwndbg.vmmap.find(value)
72
73 # If it's not in a page we know about, try to dereference
74 # it anyway just to test.
75 can_read = True
76 if not page or None == pwndbg.memory.peek(value):
77 can_read = False
78
79 if not can_read:
80 return E.integer(int_str(value))
81
82 # It's mapped memory, or we can at least read it.
83 # Try to find out if it's a string.
84 instr = None
85 exe = page and page.execute
86 rwx = page and page.rwx
87
88 # For the purpose of following pointers, don't display
89 # anything on the stack or heap as 'code'
90 if '[stack' in page.objfile or '[heap' in page.objfile:
91 rwx = exe = False
92
93 # If IDA doesn't think it's in a function, don't display it as code.
94 if pwndbg.ida.available() and not pwndbg.ida.GetFunctionName(value):
95 rwx = exe = False
96
97 if exe:
98 instr = pwndbg.disasm.one(value)
99 if instr:
100 instr = "%-6s %s" % (instr.mnemonic, instr.op_str)
101 if pwndbg.config.syntax_highlight:
102 instr = syntax_highlight(instr)
103
104 szval = pwndbg.strings.get(value) or None
105 szval0 = szval
106 if szval:
107 szval = E.string(repr(szval))
108
109 intval = int(pwndbg.memory.poi(pwndbg.typeinfo.pvoid, value))
110 intval0 = intval
111 if 0 <= intval < 10:
112 intval = E.integer(str(intval))
113 else:
114 intval = E.integer('%#x' % int(intval & pwndbg.arch.ptrmask))
115
116 retval = []
117
118 # print([instr,intval0,szval])
119 if not code:
120 instr = None
121
122 # If it's on the stack, don't display it as code in a chain.
123 if instr and 'stack' in page.objfile:
124 retval = [intval, szval]
125
126
127
128 # If it's RWX but a small value, don't display it as code in a chain.
129 elif instr and rwx and intval0 < 0x1000:
130 retval = [intval, szval]
131
132 # If it's an instruction and *not* RWX, display it unconditionally
133 elif instr and exe:
134 if not rwx:
135 if szval:
136 retval = [instr, szval]
137 else:
138 retval = [instr]
139 else:
140 retval = [instr, intval, szval]
141
142 # Otherwise strings have preference
143 elif szval:
144 if len(szval0) < pwndbg.arch.ptrsize:
145 retval = [intval, szval]
146 else:
147 retval = [szval]
148
149 # And then integer
150 else:
151 return E.integer(int_str(intval0))
152
153 retval = tuple(filter(lambda x: x is not None, retval))
154
155 if len(retval) == 0:
156 return E.unknown("???")
157
158 if len(retval) == 1:
159 return retval[0]
160
161 return retval[0] + E.comment(color.strip(' /* {} */'.format('; '.join(retval[1:]))))
162
[end of pwndbg/enhance.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwndbg/enhance.py b/pwndbg/enhance.py
--- a/pwndbg/enhance.py
+++ b/pwndbg/enhance.py
@@ -106,6 +106,10 @@
if szval:
szval = E.string(repr(szval))
+ # Fix for case we can't read the end address anyway (#946)
+ if value + pwndbg.arch.ptrsize > page.end:
+ return E.integer(int_str(value))
+
intval = int(pwndbg.memory.poi(pwndbg.typeinfo.pvoid, value))
intval0 = intval
if 0 <= intval < 10:
|
{"golden_diff": "diff --git a/pwndbg/enhance.py b/pwndbg/enhance.py\n--- a/pwndbg/enhance.py\n+++ b/pwndbg/enhance.py\n@@ -106,6 +106,10 @@\n if szval:\n szval = E.string(repr(szval))\n \n+ # Fix for case we can't read the end address anyway (#946)\n+ if value + pwndbg.arch.ptrsize > page.end:\n+ return E.integer(int_str(value))\n+\n intval = int(pwndbg.memory.poi(pwndbg.typeinfo.pvoid, value))\n intval0 = intval\n if 0 <= intval < 10:\n", "issue": "Context fails when a particular value is present in any register\n### Description\r\nIf a particular value is present in any register ctx throws\r\n```\r\nCannot access memory at address 0x7ffffffff000\r\n```\r\nIt seems to happen at 128TB split but curiously only with unaligned addresses that would cause a qword read to cross 128TB,\r\nso 128TB-{1..7} throws but neither does 128TB-8 or 128TB\r\n\r\nFull backtrace\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/pwndbg/pwndbg/commands/__init__.py\", line 130, in __call__\r\n return self.function(*args, **kwargs)\r\n File \"/opt/pwndbg/pwndbg/commands/__init__.py\", line 221, in _OnlyWhenRunning\r\n return function(*a, **kw)\r\n File \"/opt/pwndbg/pwndbg/commands/context.py\", line 269, in context\r\n result[target].extend(func(target=out,\r\n File \"/opt/pwndbg/pwndbg/commands/context.py\", line 350, in context_regs\r\n regs = get_regs()\r\n File \"/opt/pwndbg/pwndbg/commands/context.py\", line 405, in get_regs\r\n desc = pwndbg.chain.format(value)\r\n File \"/opt/pwndbg/pwndbg/chain.py\", line 112, in format\r\n enhanced = pwndbg.enhance.enhance(chain[-1], code=code)\r\n File \"/opt/pwndbg/pwndbg/enhance.py\", line 109, in enhance\r\n intval = int(pwndbg.memory.poi(pwndbg.typeinfo.pvoid, value))\r\ngdb.MemoryError: Cannot access memory at address 0x7ffffffff000\r\n```\r\n### Steps to reproduce\r\n\r\n```asm\r\n.globl _start\r\n_start:\r\n mov $0x7fffffffeff8, %rdx # no err\r\n mov $0x7fffffffeffa, %rdx # err\r\n mov $0x7fffffffefff, %rdx # err\r\n mov $0x7ffffffff000, %rdx # no err\r\n int3\r\n```\r\n```sh\r\nas test.s -o test.o ; ld -e _start test.o -o test\r\n```\r\n\r\n### My setup\r\n\r\n<!--\r\nShow us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).\r\n\r\nNOTE: We are currently supporting only Ubuntu installations.\r\nIt is known that pwndbg is not fully working e.g. 
on Arch Linux (the heap stuff is not working there).\r\nIf you would like to change this situation - help us improving pwndbg and supporting other distros!\r\n\r\nThis can be displayed in pwndbg through `version` command.\r\n\r\nIf it is somehow unavailable, use:\r\n* `show version` - for gdb\r\n* `py import sys; print(sys.version)` - for python\r\n* pwndbg version/git commit id\r\n-->\r\n\r\nPlatform: Linux-5.13.9_1-x86_64-with-glibc2.32\r\nGdb: 10.2\r\nPython: 3.9.6 (default, Jul 6 2021, 18:29:50) [GCC 10.2.1 20201203]\r\nPwndbg: 1.1.0 build: b9e7bf1\r\nCapstone: 4.0.1024\r\nUnicorn: 1.0.3\r\nThis GDB was configured as follows:\r\n configure --host=x86_64-unknown-linux-gnu --target=x86_64-unknown-linux-gnu\r\n --with-auto-load-dir=$debugdir:$datadir/auto-load\r\n --with-auto-load-safe-path=$debugdir:$datadir/auto-load\r\n --with-expat\r\n --with-gdb-datadir=/usr/share/gdb (relocatable)\r\n --with-jit-reader-dir=/usr/lib64/gdb (relocatable)\r\n --without-libunwind-ia64\r\n --with-lzma\r\n --without-babeltrace\r\n --without-intel-pt\r\n --without-mpfr\r\n --without-xxhash\r\n --with-python=/usr (relocatable)\r\n --with-python-libdir=/usr/lib (relocatable)\r\n --with-debuginfod\r\n --without-guile\r\n --disable-source-highlight\r\n --with-separate-debug-dir=/usr/lib64/debug (relocatable)\r\n --with-system-gdbinit=/etc/gdb/gdbinit\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nGiven an address in memory which does not contain a pointer elsewhere\ninto memory, attempt to describe the data as best as possible.\n\nCurrently prints out code, integers, or strings, in a best-effort manner\ndependent on page permissions, the contents of the data, and any\nsupplemental information sources (e.g. active IDA Pro connection).\n\"\"\"\n\nimport string\n\nimport gdb\n\nimport pwndbg.arch\nimport pwndbg.color as color\nimport pwndbg.color.enhance as E\nimport pwndbg.config\nimport pwndbg.disasm\nimport pwndbg.memoize\nimport pwndbg.memory\nimport pwndbg.strings\nimport pwndbg.symbol\nimport pwndbg.typeinfo\nfrom pwndbg.color.syntax_highlight import syntax_highlight\n\nbad_instrs = [\n'.byte',\n'.long',\n'rex.R',\n'rex.XB',\n'.inst',\n'(bad)'\n]\n\ndef good_instr(i):\n return not any(bad in i for bad in bad_instrs)\n\ndef int_str(value):\n retval = '%#x' % int(value & pwndbg.arch.ptrmask)\n\n # Try to unpack the value as a string\n packed = pwndbg.arch.pack(int(value))\n if all(c in string.printable.encode('utf-8') for c in packed):\n if len(retval) > 4:\n retval = '%s (%r)' % (retval, str(packed.decode('ascii', 'ignore')))\n\n return retval\n\n\n# @pwndbg.memoize.reset_on_stop\ndef enhance(value, code = True):\n \"\"\"\n Given the last pointer in a chain, attempt to characterize\n\n Note that 'the last pointer in a chain' may not at all actually be a pointer.\n\n Additionally, optimizations are made based on various sources of data for\n 'value'. 
For example, if it is set to RWX, we try to get information on whether\n it resides on the stack, or in a RW section that *happens* to be RWX, to\n determine which order to print the fields.\n\n Arguments:\n value(obj): Value to enhance\n code(bool): Hint that indicates the value may be an instruction\n \"\"\"\n value = int(value)\n\n name = pwndbg.symbol.get(value) or None\n page = pwndbg.vmmap.find(value)\n\n # If it's not in a page we know about, try to dereference\n # it anyway just to test.\n can_read = True\n if not page or None == pwndbg.memory.peek(value):\n can_read = False\n\n if not can_read:\n return E.integer(int_str(value))\n\n # It's mapped memory, or we can at least read it.\n # Try to find out if it's a string.\n instr = None\n exe = page and page.execute\n rwx = page and page.rwx\n\n # For the purpose of following pointers, don't display\n # anything on the stack or heap as 'code'\n if '[stack' in page.objfile or '[heap' in page.objfile:\n rwx = exe = False\n\n # If IDA doesn't think it's in a function, don't display it as code.\n if pwndbg.ida.available() and not pwndbg.ida.GetFunctionName(value):\n rwx = exe = False\n\n if exe:\n instr = pwndbg.disasm.one(value)\n if instr:\n instr = \"%-6s %s\" % (instr.mnemonic, instr.op_str)\n if pwndbg.config.syntax_highlight:\n instr = syntax_highlight(instr)\n\n szval = pwndbg.strings.get(value) or None\n szval0 = szval\n if szval:\n szval = E.string(repr(szval))\n\n intval = int(pwndbg.memory.poi(pwndbg.typeinfo.pvoid, value))\n intval0 = intval\n if 0 <= intval < 10:\n intval = E.integer(str(intval))\n else:\n intval = E.integer('%#x' % int(intval & pwndbg.arch.ptrmask))\n\n retval = []\n\n # print([instr,intval0,szval])\n if not code:\n instr = None\n\n # If it's on the stack, don't display it as code in a chain.\n if instr and 'stack' in page.objfile:\n retval = [intval, szval]\n\n\n\n # If it's RWX but a small value, don't display it as code in a chain.\n elif instr and rwx and intval0 < 0x1000:\n retval = [intval, szval]\n\n # If it's an instruction and *not* RWX, display it unconditionally\n elif instr and exe:\n if not rwx:\n if szval:\n retval = [instr, szval]\n else:\n retval = [instr]\n else:\n retval = [instr, intval, szval]\n\n # Otherwise strings have preference\n elif szval:\n if len(szval0) < pwndbg.arch.ptrsize:\n retval = [intval, szval]\n else:\n retval = [szval]\n\n # And then integer\n else:\n return E.integer(int_str(intval0))\n\n retval = tuple(filter(lambda x: x is not None, retval))\n\n if len(retval) == 0:\n return E.unknown(\"???\")\n\n if len(retval) == 1:\n return retval[0]\n\n return retval[0] + E.comment(color.strip(' /* {} */'.format('; '.join(retval[1:]))))\n", "path": "pwndbg/enhance.py"}]}
| 3,182 | 153 |
gh_patches_debug_33365
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-370
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Duplicate log lines when using IPP
The following code fragment emits log lines to STDERR with the default format style even though no logging is requested. I suspect that `ipyparallel` is doing something dirty.
from parsl import DataFlowKernel
from parsl.configs.local import localIPP as config
dfk = DataFlowKernel(config=config)
dfk.cleanup()
The above code with the minor change of using threads will not emit the log lines.
from parsl import DataFlowKernel
from parsl.configs.local import localThreads as config
dfk = DataFlowKernel(config=config)
dfk.cleanup()
Please help test by running this with the latest parsl code.
</issue>
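For illustration, here is a minimal sketch, independent of Parsl and ipyparallel, of the behaviour described above and of the snapshot-and-demote approach that the golden diff for this record takes; the `demo` logger name and the simulated second handler are assumptions for the example, not code from the repository.

```python
import logging

logging.basicConfig(level=logging.DEBUG)            # installs one root StreamHandler
prior_handlers = set(logging.getLogger().handlers)  # snapshot before third-party code runs

# Simulate a library attaching a second handler to the root logger;
# every record that propagates to the root is now emitted twice.
logging.getLogger().addHandler(logging.StreamHandler())
logging.getLogger("demo").debug("this line is printed twice")

# Demote any handler that was not in the snapshot so it only emits errors,
# which is the same idea the patch applies right after executor.status().
for handler in logging.getLogger().handlers:
    if handler not in prior_handlers:
        handler.setLevel(logging.ERROR)

logging.getLogger("demo").debug("this line is printed once")
```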
<code>
[start of parsl/dataflow/strategy.py]
1 import logging
2 import time
3 import math
4
5 logger = logging.getLogger(__name__)
6
7
8 class Strategy(object):
9 """FlowControl strategy.
10
11 As a workflow dag is processed by Parsl, new tasks are added and completed
12 asynchronously. Parsl interfaces executors with execution providers to construct
13 scalable executors to handle the variable work-load generated by the
14 workflow. This component is responsible for periodically checking outstanding
15 tasks and available compute capacity and trigger scaling events to match
16 workflow needs.
17
18 Here's a diagram of an executor. An executor consists of blocks, which are usually
19 created by single requests to a Local Resource Manager (LRM) such as slurm,
20 condor, torque, or even AWS API. The blocks could contain several task blocks
21 which are separate instances on workers.
22
23
24 .. code:: python
25
26 |<--min_blocks |<-init_blocks max_blocks-->|
27 +----------------------------------------------------------+
28 | +--------block----------+ +--------block--------+ |
29 executor = | | task task | ... | task task | |
30 | +-----------------------+ +---------------------+ |
31 +----------------------------------------------------------+
32
33 The relevant specification options are:
34 1. min_blocks: Minimum number of blocks to maintain
35 2. init_blocks: number of blocks to provision at initialization of workflow
36 3. max_blocks: Maximum number of blocks that can be active due to one workflow
37
38
39 .. code:: python
40
41 slots = current_capacity * tasks_per_node * nodes_per_block
42
43 active_tasks = pending_tasks + running_tasks
44
45 Parallelism = slots / tasks
46 = [0, 1] (i.e, 0 <= p <= 1)
47
48 For example:
49
50 When p = 0,
51 => compute with the least resources possible.
52 infinite tasks are stacked per slot.
53
54 .. code:: python
55
56 blocks = min_blocks { if active_tasks = 0
57 max(min_blocks, 1) { else
58
59 When p = 1,
60 => compute with the most resources.
61 one task is stacked per slot.
62
63 .. code:: python
64
65 blocks = min ( max_blocks,
66 ceil( active_tasks / slots ) )
67
68
69 When p = 1/2,
70 => We stack upto 2 tasks per slot before we overflow
71 and request a new block
72
73
74 let's say min:init:max = 0:0:4 and task_blocks=2
75 Consider the following example:
76 min_blocks = 0
77 init_blocks = 0
78 max_blocks = 4
79 tasks_per_node = 2
80 nodes_per_block = 1
81
82 In the diagram, X <- task
83
84 at 2 tasks:
85
86 .. code:: python
87
88 +---Block---|
89 | |
90 | X X |
91 |slot slot|
92 +-----------+
93
94 at 5 tasks, we overflow as the capacity of a single block is fully used.
95
96 .. code:: python
97
98 +---Block---| +---Block---|
99 | X X | ----> | |
100 | X X | | X |
101 |slot slot| |slot slot|
102 +-----------+ +-----------+
103
104 """
105
106 def __init__(self, dfk):
107 """Initialize strategy."""
108 self.dfk = dfk
109 self.config = dfk.config
110 self.executors = {}
111 self.max_idletime = 60 * 2 # 2 minutes
112
113 for e in self.dfk.config.executors:
114 self.executors[e.label] = {'idle_since': None, 'config': e.label}
115
116 self.strategies = {None: self._strategy_noop, 'simple': self._strategy_simple}
117
118 self.strategize = self.strategies[self.config.strategy]
119
120 logger.debug("Scaling strategy: {0}".format(self.config.strategy))
121
122 def _strategy_noop(self, tasks, *args, kind=None, **kwargs):
123 """Do nothing.
124
125 Args:
126 - tasks (task_ids): Not used here.
127
128 KWargs:
129 - kind (Not used)
130 """
131
132 def _strategy_simple(self, tasks, *args, kind=None, **kwargs):
133 """Peek at the DFK and the executors specified.
134
135 We assume here that tasks are not held in a runnable
136 state, and that all tasks from an app would be sent to
137 a single specific executor, i.e tasks cannot be specified
138 to go to one of more executors.
139
140 Args:
141 - tasks (task_ids): Not used here.
142
143 KWargs:
144 - kind (Not used)
145 """
146 # Add logic to check executors
147 # for task in tasks :
148 # if self.dfk.tasks[task]:
149
150 for label, executor in self.dfk.executors.items():
151 if not executor.scaling_enabled:
152 continue
153
154 # Tasks that are either pending completion
155 active_tasks = executor.executor.outstanding
156
157 status = executor.status()
158
159 # FIXME we need to handle case where provider does not define these
160 # FIXME probably more of this logic should be moved to the provider
161 min_blocks = executor.provider.min_blocks
162 max_blocks = executor.provider.max_blocks
163 tasks_per_node = executor.provider.tasks_per_node
164 nodes_per_block = executor.provider.nodes_per_block
165 parallelism = executor.provider.parallelism
166
167 active_blocks = sum([1 for x in status if x in ('RUNNING',
168 'SUBMITTING',
169 'PENDING')])
170 active_slots = active_blocks * tasks_per_node * nodes_per_block
171
172 # import pdb; pdb.set_trace()
173 logger.debug("Tasks:{} Slots:{} Parallelism:{}".format(len(active_tasks),
174 active_slots,
175 parallelism))
176
177 # Case 1
178 # No tasks.
179 if len(active_tasks) == 0:
180 # Case 1a
181 # Fewer blocks that min_blocks
182 if active_blocks <= min_blocks:
183 # Ignore
184 # logger.debug("Strategy: Case.1a")
185 pass
186
187 # Case 1b
188 # More blocks than min_blocks. Scale down
189 else:
190 # We want to make sure that max_idletime is reached
191 # before killing off resources
192 if not self.executors[executor.label]['idle_since']:
193 logger.debug("Strategy: Scale_in, tasks=0 starting kill timer")
194 self.executors[executor.label]['idle_since'] = time.time()
195
196 idle_since = self.executors[executor.label]['idle_since']
197 if (time.time() - idle_since) > self.max_idletime:
198 # We have resources idle for the max duration,
199 # we have to scale_in now.
200 logger.debug("Strategy: Scale_in, tasks=0")
201 executor.scale_in(active_blocks - min_blocks)
202
203 else:
204 pass
205 # logger.debug("Strategy: Case.1b. Waiting for timer : {0}".format(idle_since))
206
207 # Case 2
208 # More tasks than the available slots.
209 elif (float(active_slots) / len(active_tasks)) < parallelism:
210 # Case 2a
211 # We have the max blocks possible
212 if active_blocks >= max_blocks:
213 # Ignore since we already have the max nodes
214 # logger.debug("Strategy: Case.2a")
215 pass
216
217 # Case 2b
218 else:
219 # logger.debug("Strategy: Case.2b")
220 excess = math.ceil((len(active_tasks) * parallelism) - active_slots)
221 excess_blocks = math.ceil(float(excess) / (tasks_per_node * nodes_per_block))
222 logger.debug("Requesting {} more blocks".format(excess_blocks))
223 executor.scale_out(excess_blocks)
224
225 elif active_slots == 0 and len(active_tasks) > 0:
226 # Case 4
227 # Check if slots are being lost quickly ?
228 logger.debug("Requesting single slot")
229 executor.scale_out(1)
230 # Case 3
231 # tasks ~ slots
232 else:
233 # logger.debug("Strategy: Case 3")
234 pass
235
236
237 if __name__ == '__main__':
238
239 pass
240
[end of parsl/dataflow/strategy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/dataflow/strategy.py b/parsl/dataflow/strategy.py
--- a/parsl/dataflow/strategy.py
+++ b/parsl/dataflow/strategy.py
@@ -116,6 +116,8 @@
self.strategies = {None: self._strategy_noop, 'simple': self._strategy_simple}
self.strategize = self.strategies[self.config.strategy]
+ self.logger_flag = False
+ self.prior_loghandlers = set(logging.getLogger().handlers)
logger.debug("Scaling strategy: {0}".format(self.config.strategy))
@@ -129,6 +131,20 @@
- kind (Not used)
"""
+ def unset_logging(self):
+ """ Mute newly added handlers to the root level, right after calling executor.status
+ """
+ if self.logger_flag is True:
+ return
+
+ root_logger = logging.getLogger()
+
+ for hndlr in root_logger.handlers:
+ if hndlr not in self.prior_loghandlers:
+ hndlr.setLevel(logging.ERROR)
+
+ self.logger_flag = True
+
def _strategy_simple(self, tasks, *args, kind=None, **kwargs):
"""Peek at the DFK and the executors specified.
@@ -143,9 +159,6 @@
KWargs:
- kind (Not used)
"""
- # Add logic to check executors
- # for task in tasks :
- # if self.dfk.tasks[task]:
for label, executor in self.dfk.executors.items():
if not executor.scaling_enabled:
@@ -155,6 +168,7 @@
active_tasks = executor.executor.outstanding
status = executor.status()
+ self.unset_logging()
# FIXME we need to handle case where provider does not define these
# FIXME probably more of this logic should be moved to the provider
|
{"golden_diff": "diff --git a/parsl/dataflow/strategy.py b/parsl/dataflow/strategy.py\n--- a/parsl/dataflow/strategy.py\n+++ b/parsl/dataflow/strategy.py\n@@ -116,6 +116,8 @@\n self.strategies = {None: self._strategy_noop, 'simple': self._strategy_simple}\n \n self.strategize = self.strategies[self.config.strategy]\n+ self.logger_flag = False\n+ self.prior_loghandlers = set(logging.getLogger().handlers)\n \n logger.debug(\"Scaling strategy: {0}\".format(self.config.strategy))\n \n@@ -129,6 +131,20 @@\n - kind (Not used)\n \"\"\"\n \n+ def unset_logging(self):\n+ \"\"\" Mute newly added handlers to the root level, right after calling executor.status\n+ \"\"\"\n+ if self.logger_flag is True:\n+ return\n+\n+ root_logger = logging.getLogger()\n+\n+ for hndlr in root_logger.handlers:\n+ if hndlr not in self.prior_loghandlers:\n+ hndlr.setLevel(logging.ERROR)\n+\n+ self.logger_flag = True\n+\n def _strategy_simple(self, tasks, *args, kind=None, **kwargs):\n \"\"\"Peek at the DFK and the executors specified.\n \n@@ -143,9 +159,6 @@\n KWargs:\n - kind (Not used)\n \"\"\"\n- # Add logic to check executors\n- # for task in tasks :\n- # if self.dfk.tasks[task]:\n \n for label, executor in self.dfk.executors.items():\n if not executor.scaling_enabled:\n@@ -155,6 +168,7 @@\n active_tasks = executor.executor.outstanding\n \n status = executor.status()\n+ self.unset_logging()\n \n # FIXME we need to handle case where provider does not define these\n # FIXME probably more of this logic should be moved to the provider\n", "issue": "Duplicate log lines when using IPP\nThe following code fragment emits log lines to STDERR with the default format style even though no logging is requested. I suspect that `ipyparallel` is doing something dirty.\r\n\r\n from parsl import DataFlowKernel\r\n from parsl.configs.local import localIPP as config\r\n\r\n dfk = DataFlowKernel(config=config)\r\n dfk.cleanup()\r\n\r\nThe above code with the minor change of using threads will not emit the log lines.\r\n\r\n\r\n from parsl import DataFlowKernel\r\n from parsl.configs.local import localThreads as config\r\n\r\n dfk = DataFlowKernel(config=config)\r\n dfk.cleanup()\r\n\r\n\r\nPlease help test by running this with the latest parsl code. \n", "before_files": [{"content": "import logging\nimport time\nimport math\n\nlogger = logging.getLogger(__name__)\n\n\nclass Strategy(object):\n \"\"\"FlowControl strategy.\n\n As a workflow dag is processed by Parsl, new tasks are added and completed\n asynchronously. Parsl interfaces executors with execution providers to construct\n scalable executors to handle the variable work-load generated by the\n workflow. This component is responsible for periodically checking outstanding\n tasks and available compute capacity and trigger scaling events to match\n workflow needs.\n\n Here's a diagram of an executor. An executor consists of blocks, which are usually\n created by single requests to a Local Resource Manager (LRM) such as slurm,\n condor, torque, or even AWS API. The blocks could contain several task blocks\n which are separate instances on workers.\n\n\n .. code:: python\n\n |<--min_blocks |<-init_blocks max_blocks-->|\n +----------------------------------------------------------+\n | +--------block----------+ +--------block--------+ |\n executor = | | task task | ... | task task | |\n | +-----------------------+ +---------------------+ |\n +----------------------------------------------------------+\n\n The relevant specification options are:\n 1. 
min_blocks: Minimum number of blocks to maintain\n 2. init_blocks: number of blocks to provision at initialization of workflow\n 3. max_blocks: Maximum number of blocks that can be active due to one workflow\n\n\n .. code:: python\n\n slots = current_capacity * tasks_per_node * nodes_per_block\n\n active_tasks = pending_tasks + running_tasks\n\n Parallelism = slots / tasks\n = [0, 1] (i.e, 0 <= p <= 1)\n\n For example:\n\n When p = 0,\n => compute with the least resources possible.\n infinite tasks are stacked per slot.\n\n .. code:: python\n\n blocks = min_blocks { if active_tasks = 0\n max(min_blocks, 1) { else\n\n When p = 1,\n => compute with the most resources.\n one task is stacked per slot.\n\n .. code:: python\n\n blocks = min ( max_blocks,\n ceil( active_tasks / slots ) )\n\n\n When p = 1/2,\n => We stack upto 2 tasks per slot before we overflow\n and request a new block\n\n\n let's say min:init:max = 0:0:4 and task_blocks=2\n Consider the following example:\n min_blocks = 0\n init_blocks = 0\n max_blocks = 4\n tasks_per_node = 2\n nodes_per_block = 1\n\n In the diagram, X <- task\n\n at 2 tasks:\n\n .. code:: python\n\n +---Block---|\n | |\n | X X |\n |slot slot|\n +-----------+\n\n at 5 tasks, we overflow as the capacity of a single block is fully used.\n\n .. code:: python\n\n +---Block---| +---Block---|\n | X X | ----> | |\n | X X | | X |\n |slot slot| |slot slot|\n +-----------+ +-----------+\n\n \"\"\"\n\n def __init__(self, dfk):\n \"\"\"Initialize strategy.\"\"\"\n self.dfk = dfk\n self.config = dfk.config\n self.executors = {}\n self.max_idletime = 60 * 2 # 2 minutes\n\n for e in self.dfk.config.executors:\n self.executors[e.label] = {'idle_since': None, 'config': e.label}\n\n self.strategies = {None: self._strategy_noop, 'simple': self._strategy_simple}\n\n self.strategize = self.strategies[self.config.strategy]\n\n logger.debug(\"Scaling strategy: {0}\".format(self.config.strategy))\n\n def _strategy_noop(self, tasks, *args, kind=None, **kwargs):\n \"\"\"Do nothing.\n\n Args:\n - tasks (task_ids): Not used here.\n\n KWargs:\n - kind (Not used)\n \"\"\"\n\n def _strategy_simple(self, tasks, *args, kind=None, **kwargs):\n \"\"\"Peek at the DFK and the executors specified.\n\n We assume here that tasks are not held in a runnable\n state, and that all tasks from an app would be sent to\n a single specific executor, i.e tasks cannot be specified\n to go to one of more executors.\n\n Args:\n - tasks (task_ids): Not used here.\n\n KWargs:\n - kind (Not used)\n \"\"\"\n # Add logic to check executors\n # for task in tasks :\n # if self.dfk.tasks[task]:\n\n for label, executor in self.dfk.executors.items():\n if not executor.scaling_enabled:\n continue\n\n # Tasks that are either pending completion\n active_tasks = executor.executor.outstanding\n\n status = executor.status()\n\n # FIXME we need to handle case where provider does not define these\n # FIXME probably more of this logic should be moved to the provider\n min_blocks = executor.provider.min_blocks\n max_blocks = executor.provider.max_blocks\n tasks_per_node = executor.provider.tasks_per_node\n nodes_per_block = executor.provider.nodes_per_block\n parallelism = executor.provider.parallelism\n\n active_blocks = sum([1 for x in status if x in ('RUNNING',\n 'SUBMITTING',\n 'PENDING')])\n active_slots = active_blocks * tasks_per_node * nodes_per_block\n\n # import pdb; pdb.set_trace()\n logger.debug(\"Tasks:{} Slots:{} Parallelism:{}\".format(len(active_tasks),\n active_slots,\n parallelism))\n\n # Case 1\n # No tasks.\n if 
len(active_tasks) == 0:\n # Case 1a\n # Fewer blocks that min_blocks\n if active_blocks <= min_blocks:\n # Ignore\n # logger.debug(\"Strategy: Case.1a\")\n pass\n\n # Case 1b\n # More blocks than min_blocks. Scale down\n else:\n # We want to make sure that max_idletime is reached\n # before killing off resources\n if not self.executors[executor.label]['idle_since']:\n logger.debug(\"Strategy: Scale_in, tasks=0 starting kill timer\")\n self.executors[executor.label]['idle_since'] = time.time()\n\n idle_since = self.executors[executor.label]['idle_since']\n if (time.time() - idle_since) > self.max_idletime:\n # We have resources idle for the max duration,\n # we have to scale_in now.\n logger.debug(\"Strategy: Scale_in, tasks=0\")\n executor.scale_in(active_blocks - min_blocks)\n\n else:\n pass\n # logger.debug(\"Strategy: Case.1b. Waiting for timer : {0}\".format(idle_since))\n\n # Case 2\n # More tasks than the available slots.\n elif (float(active_slots) / len(active_tasks)) < parallelism:\n # Case 2a\n # We have the max blocks possible\n if active_blocks >= max_blocks:\n # Ignore since we already have the max nodes\n # logger.debug(\"Strategy: Case.2a\")\n pass\n\n # Case 2b\n else:\n # logger.debug(\"Strategy: Case.2b\")\n excess = math.ceil((len(active_tasks) * parallelism) - active_slots)\n excess_blocks = math.ceil(float(excess) / (tasks_per_node * nodes_per_block))\n logger.debug(\"Requesting {} more blocks\".format(excess_blocks))\n executor.scale_out(excess_blocks)\n\n elif active_slots == 0 and len(active_tasks) > 0:\n # Case 4\n # Check if slots are being lost quickly ?\n logger.debug(\"Requesting single slot\")\n executor.scale_out(1)\n # Case 3\n # tasks ~ slots\n else:\n # logger.debug(\"Strategy: Case 3\")\n pass\n\n\nif __name__ == '__main__':\n\n pass\n", "path": "parsl/dataflow/strategy.py"}]}
| 3,093 | 433 |
gh_patches_debug_38335
|
rasdani/github-patches
|
git_diff
|
ethereum__consensus-specs-863
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rename `Transactions` back to `Operations`
A few of us implementers have been talking about the naming of `Transactions` and believe it is best renamed back to `Operations`, to reduce confusion and to avoid mistaking `Transactions` for transactions in the classical sense. The only thing that should be known as a `Transaction` is a `Transfer`.
If not, it would be great to know what the reason behind the rename was.
</issue>
<code>
[start of utils/phase0/state_transition.py]
1 from . import spec
2
3
4 from typing import ( # noqa: F401
5 Any,
6 Callable,
7 List,
8 NewType,
9 Tuple,
10 )
11
12 from .spec import (
13 BeaconState,
14 BeaconBlock,
15 )
16
17
18 def expected_deposit_count(state: BeaconState) -> int:
19 return min(
20 spec.MAX_DEPOSITS,
21 state.latest_eth1_data.deposit_count - state.deposit_index
22 )
23
24
25 def process_transaction_type(state: BeaconState,
26 transactions: List[Any],
27 max_transactions: int,
28 tx_fn: Callable[[BeaconState, Any], None]) -> None:
29 assert len(transactions) <= max_transactions
30 for transaction in transactions:
31 tx_fn(state, transaction)
32
33
34 def process_transactions(state: BeaconState, block: BeaconBlock) -> None:
35 process_transaction_type(
36 state,
37 block.body.proposer_slashings,
38 spec.MAX_PROPOSER_SLASHINGS,
39 spec.process_proposer_slashing,
40 )
41
42 process_transaction_type(
43 state,
44 block.body.attester_slashings,
45 spec.MAX_ATTESTER_SLASHINGS,
46 spec.process_attester_slashing,
47 )
48
49 process_transaction_type(
50 state,
51 block.body.attestations,
52 spec.MAX_ATTESTATIONS,
53 spec.process_attestation,
54 )
55
56 assert len(block.body.deposits) == expected_deposit_count(state)
57 process_transaction_type(
58 state,
59 block.body.deposits,
60 spec.MAX_DEPOSITS,
61 spec.process_deposit,
62 )
63
64 process_transaction_type(
65 state,
66 block.body.voluntary_exits,
67 spec.MAX_VOLUNTARY_EXITS,
68 spec.process_voluntary_exit,
69 )
70
71 assert len(block.body.transfers) == len(set(block.body.transfers))
72 process_transaction_type(
73 state,
74 block.body.transfers,
75 spec.MAX_TRANSFERS,
76 spec.process_transfer,
77 )
78
79
80 def process_block(state: BeaconState,
81 block: BeaconBlock,
82 verify_state_root: bool=False) -> None:
83 spec.process_block_header(state, block)
84 spec.process_randao(state, block)
85 spec.process_eth1_data(state, block)
86
87 process_transactions(state, block)
88 if verify_state_root:
89 spec.verify_block_state_root(state, block)
90
91
92 def process_epoch_transition(state: BeaconState) -> None:
93 spec.update_justification_and_finalization(state)
94 spec.process_crosslinks(state)
95 spec.maybe_reset_eth1_period(state)
96 spec.apply_rewards(state)
97 spec.process_ejections(state)
98 spec.update_registry(state)
99 spec.process_slashings(state)
100 spec.process_exit_queue(state)
101 spec.finish_epoch_update(state)
102
103
104 def state_transition(state: BeaconState,
105 block: BeaconBlock,
106 verify_state_root: bool=False) -> BeaconState:
107 while state.slot < block.slot:
108 spec.cache_state(state)
109 if (state.slot + 1) % spec.SLOTS_PER_EPOCH == 0:
110 process_epoch_transition(state)
111 spec.advance_slot(state)
112 if block.slot == state.slot:
113 process_block(state, block, verify_state_root)
114
[end of utils/phase0/state_transition.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/utils/phase0/state_transition.py b/utils/phase0/state_transition.py
--- a/utils/phase0/state_transition.py
+++ b/utils/phase0/state_transition.py
@@ -22,31 +22,31 @@
)
-def process_transaction_type(state: BeaconState,
- transactions: List[Any],
- max_transactions: int,
- tx_fn: Callable[[BeaconState, Any], None]) -> None:
- assert len(transactions) <= max_transactions
- for transaction in transactions:
- tx_fn(state, transaction)
+def process_operation_type(state: BeaconState,
+ operations: List[Any],
+ max_operations: int,
+ tx_fn: Callable[[BeaconState, Any], None]) -> None:
+ assert len(operations) <= max_operations
+ for operation in operations:
+ tx_fn(state, operation)
-def process_transactions(state: BeaconState, block: BeaconBlock) -> None:
- process_transaction_type(
+def process_operations(state: BeaconState, block: BeaconBlock) -> None:
+ process_operation_type(
state,
block.body.proposer_slashings,
spec.MAX_PROPOSER_SLASHINGS,
spec.process_proposer_slashing,
)
- process_transaction_type(
+ process_operation_type(
state,
block.body.attester_slashings,
spec.MAX_ATTESTER_SLASHINGS,
spec.process_attester_slashing,
)
- process_transaction_type(
+ process_operation_type(
state,
block.body.attestations,
spec.MAX_ATTESTATIONS,
@@ -54,14 +54,14 @@
)
assert len(block.body.deposits) == expected_deposit_count(state)
- process_transaction_type(
+ process_operation_type(
state,
block.body.deposits,
spec.MAX_DEPOSITS,
spec.process_deposit,
)
- process_transaction_type(
+ process_operation_type(
state,
block.body.voluntary_exits,
spec.MAX_VOLUNTARY_EXITS,
@@ -69,7 +69,7 @@
)
assert len(block.body.transfers) == len(set(block.body.transfers))
- process_transaction_type(
+ process_operation_type(
state,
block.body.transfers,
spec.MAX_TRANSFERS,
@@ -84,7 +84,7 @@
spec.process_randao(state, block)
spec.process_eth1_data(state, block)
- process_transactions(state, block)
+ process_operations(state, block)
if verify_state_root:
spec.verify_block_state_root(state, block)
|
{"golden_diff": "diff --git a/utils/phase0/state_transition.py b/utils/phase0/state_transition.py\n--- a/utils/phase0/state_transition.py\n+++ b/utils/phase0/state_transition.py\n@@ -22,31 +22,31 @@\n )\n \n \n-def process_transaction_type(state: BeaconState,\n- transactions: List[Any],\n- max_transactions: int,\n- tx_fn: Callable[[BeaconState, Any], None]) -> None:\n- assert len(transactions) <= max_transactions\n- for transaction in transactions:\n- tx_fn(state, transaction)\n+def process_operation_type(state: BeaconState,\n+ operations: List[Any],\n+ max_operations: int,\n+ tx_fn: Callable[[BeaconState, Any], None]) -> None:\n+ assert len(operations) <= max_operations\n+ for operation in operations:\n+ tx_fn(state, operation)\n \n \n-def process_transactions(state: BeaconState, block: BeaconBlock) -> None:\n- process_transaction_type(\n+def process_operations(state: BeaconState, block: BeaconBlock) -> None:\n+ process_operation_type(\n state,\n block.body.proposer_slashings,\n spec.MAX_PROPOSER_SLASHINGS,\n spec.process_proposer_slashing,\n )\n \n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.attester_slashings,\n spec.MAX_ATTESTER_SLASHINGS,\n spec.process_attester_slashing,\n )\n \n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.attestations,\n spec.MAX_ATTESTATIONS,\n@@ -54,14 +54,14 @@\n )\n \n assert len(block.body.deposits) == expected_deposit_count(state)\n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.deposits,\n spec.MAX_DEPOSITS,\n spec.process_deposit,\n )\n \n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.voluntary_exits,\n spec.MAX_VOLUNTARY_EXITS,\n@@ -69,7 +69,7 @@\n )\n \n assert len(block.body.transfers) == len(set(block.body.transfers))\n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.transfers,\n spec.MAX_TRANSFERS,\n@@ -84,7 +84,7 @@\n spec.process_randao(state, block)\n spec.process_eth1_data(state, block)\n \n- process_transactions(state, block)\n+ process_operations(state, block)\n if verify_state_root:\n spec.verify_block_state_root(state, block)\n", "issue": "Rename `Transactions` back to `Operations`\nA few of us implementers have been talking about the naming of `Transactions` and believe it is best renamed back to `Operations` to lower confusion and potentially mistaking `Transactions` with transactions in the classical sense. The only thing that should be known as a `Transaction` is a `Transfer`.\r\n\r\nIf not, it would be great to know what the reason behind the rename was.\r\n\n", "before_files": [{"content": "from . 
import spec\n\n\nfrom typing import ( # noqa: F401\n Any,\n Callable,\n List,\n NewType,\n Tuple,\n)\n\nfrom .spec import (\n BeaconState,\n BeaconBlock,\n)\n\n\ndef expected_deposit_count(state: BeaconState) -> int:\n return min(\n spec.MAX_DEPOSITS,\n state.latest_eth1_data.deposit_count - state.deposit_index\n )\n\n\ndef process_transaction_type(state: BeaconState,\n transactions: List[Any],\n max_transactions: int,\n tx_fn: Callable[[BeaconState, Any], None]) -> None:\n assert len(transactions) <= max_transactions\n for transaction in transactions:\n tx_fn(state, transaction)\n\n\ndef process_transactions(state: BeaconState, block: BeaconBlock) -> None:\n process_transaction_type(\n state,\n block.body.proposer_slashings,\n spec.MAX_PROPOSER_SLASHINGS,\n spec.process_proposer_slashing,\n )\n\n process_transaction_type(\n state,\n block.body.attester_slashings,\n spec.MAX_ATTESTER_SLASHINGS,\n spec.process_attester_slashing,\n )\n\n process_transaction_type(\n state,\n block.body.attestations,\n spec.MAX_ATTESTATIONS,\n spec.process_attestation,\n )\n\n assert len(block.body.deposits) == expected_deposit_count(state)\n process_transaction_type(\n state,\n block.body.deposits,\n spec.MAX_DEPOSITS,\n spec.process_deposit,\n )\n\n process_transaction_type(\n state,\n block.body.voluntary_exits,\n spec.MAX_VOLUNTARY_EXITS,\n spec.process_voluntary_exit,\n )\n\n assert len(block.body.transfers) == len(set(block.body.transfers))\n process_transaction_type(\n state,\n block.body.transfers,\n spec.MAX_TRANSFERS,\n spec.process_transfer,\n )\n\n\ndef process_block(state: BeaconState,\n block: BeaconBlock,\n verify_state_root: bool=False) -> None:\n spec.process_block_header(state, block)\n spec.process_randao(state, block)\n spec.process_eth1_data(state, block)\n\n process_transactions(state, block)\n if verify_state_root:\n spec.verify_block_state_root(state, block)\n\n\ndef process_epoch_transition(state: BeaconState) -> None:\n spec.update_justification_and_finalization(state)\n spec.process_crosslinks(state)\n spec.maybe_reset_eth1_period(state)\n spec.apply_rewards(state)\n spec.process_ejections(state)\n spec.update_registry(state)\n spec.process_slashings(state)\n spec.process_exit_queue(state)\n spec.finish_epoch_update(state)\n\n\ndef state_transition(state: BeaconState,\n block: BeaconBlock,\n verify_state_root: bool=False) -> BeaconState:\n while state.slot < block.slot:\n spec.cache_state(state)\n if (state.slot + 1) % spec.SLOTS_PER_EPOCH == 0:\n process_epoch_transition(state)\n spec.advance_slot(state)\n if block.slot == state.slot:\n process_block(state, block, verify_state_root)\n", "path": "utils/phase0/state_transition.py"}]}
| 1,524 | 569 |
gh_patches_debug_30729
|
rasdani/github-patches
|
git_diff
|
wearepal__EthicML-337
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SVM Kernel name
Clearly [this](https://github.com/predictive-analytics-lab/EthicML/blob/f7fcf435b5807ef9931f3ff3b259fc7cc4b38da8/ethicml/algorithms/inprocess/svm.py#L20) is not right: the f-string is missing braces around `kernel`, so the reported name always contains the literal text `(kernel)` instead of the chosen kernel.
</issue>
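A two-line demonstration of the problem, not taken from the repository: without braces the f-string keeps the literal word instead of interpolating the variable.

```python
kernel = "rbf"
print(f" (kernel)")    # ' (kernel)'  <- the literal text, as on the linked line
print(f" ({kernel})")  # ' (rbf)'     <- the intended kernel name
```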
<code>
[start of ethicml/algorithms/inprocess/svm.py]
1 """Wrapper for SKLearn implementation of SVM."""
2 from typing import Optional, Union
3
4 import pandas as pd
5 from sklearn.svm import SVC, LinearSVC
6
7 from ethicml.common import implements
8 from ethicml.utility import DataTuple, Prediction, TestTuple
9
10 from .in_algorithm import InAlgorithm
11
12 __all__ = ["SVM"]
13
14
15 class SVM(InAlgorithm):
16 """Support Vector Machine."""
17
18 def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):
19 """Init SVM."""
20 kernel_name = f" (kernel)" if kernel is not None else ""
21 super().__init__(name="SVM" + kernel_name, is_fairness_algo=False)
22 self.C = SVC().C if C is None else C
23 self.kernel = SVC().kernel if kernel is None else kernel
24
25 @implements(InAlgorithm)
26 def run(self, train: DataTuple, test: Union[DataTuple, TestTuple]) -> Prediction:
27 clf = select_svm(self.C, self.kernel)
28 clf.fit(train.x, train.y.to_numpy().ravel())
29 return Prediction(hard=pd.Series(clf.predict(test.x)))
30
31
32 def select_svm(C: float, kernel: str) -> SVC:
33 """Select the appropriate SVM model for the given parameters."""
34 if kernel == "linear":
35 return LinearSVC(C=C, dual=False, tol=1e-12, random_state=888)
36 return SVC(C=C, kernel=kernel, gamma="auto", random_state=888)
37
[end of ethicml/algorithms/inprocess/svm.py]
[start of ethicml/algorithms/inprocess/logistic_regression.py]
1 """Wrapper around Sci-Kit Learn Logistic Regression."""
2 from typing import Optional
3
4 import pandas as pd
5 from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
6 from sklearn.model_selection import KFold
7
8 from ethicml.common import implements
9 from ethicml.utility import DataTuple, Prediction, SoftPrediction, TestTuple
10
11 from .in_algorithm import InAlgorithm
12
13 __all__ = ["LR", "LRCV", "LRProb"]
14
15
16 class LR(InAlgorithm):
17 """Logistic regression with hard predictions."""
18
19 def __init__(self, C: Optional[float] = None):
20 """Init LR."""
21 self.C = LogisticRegression().C if C is None else C
22 super().__init__(name=f"Logistic Regression, C={self.C}", is_fairness_algo=False)
23
24 @implements(InAlgorithm)
25 def run(self, train: DataTuple, test: TestTuple) -> Prediction:
26 clf = LogisticRegression(solver="liblinear", random_state=888, C=self.C, multi_class="auto")
27 clf.fit(train.x, train.y.to_numpy().ravel())
28 return Prediction(hard=pd.Series(clf.predict(test.x)))
29
30
31 class LRProb(InAlgorithm):
32 """Logistic regression with soft output."""
33
34 def __init__(self, C: Optional[int] = None):
35 """Init LRProb."""
36 self.C = LogisticRegression().C if C is None else C
37 super().__init__(name=f"Logistic Regression Prob, C={self.C}", is_fairness_algo=False)
38
39 @implements(InAlgorithm)
40 def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:
41 clf = LogisticRegression(solver="liblinear", random_state=888, C=self.C, multi_class="auto")
42 clf.fit(train.x, train.y.to_numpy().ravel())
43 return SoftPrediction(soft=pd.Series(clf.predict_proba(test.x)[:, 1]))
44
45
46 class LRCV(InAlgorithm):
47 """Kind of a cheap hack for now, but gives a proper cross-valudeted LR."""
48
49 def __init__(self) -> None:
50 """Init LRCV."""
51 super().__init__(name="LRCV", is_fairness_algo=False)
52
53 @implements(InAlgorithm)
54 def run(self, train: DataTuple, test: TestTuple) -> Prediction:
55 folder = KFold(n_splits=3, shuffle=False)
56 clf = LogisticRegressionCV(
57 cv=folder, n_jobs=-1, random_state=888, solver="liblinear", multi_class="auto"
58 )
59 clf.fit(train.x, train.y.to_numpy().ravel())
60 return Prediction(hard=pd.Series(clf.predict(test.x)), info=dict(C=clf.C_[0]))
61
[end of ethicml/algorithms/inprocess/logistic_regression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ethicml/algorithms/inprocess/logistic_regression.py b/ethicml/algorithms/inprocess/logistic_regression.py
--- a/ethicml/algorithms/inprocess/logistic_regression.py
+++ b/ethicml/algorithms/inprocess/logistic_regression.py
@@ -19,7 +19,7 @@
def __init__(self, C: Optional[float] = None):
"""Init LR."""
self.C = LogisticRegression().C if C is None else C
- super().__init__(name=f"Logistic Regression, C={self.C}", is_fairness_algo=False)
+ super().__init__(name=f"Logistic Regression (C={self.C})", is_fairness_algo=False)
@implements(InAlgorithm)
def run(self, train: DataTuple, test: TestTuple) -> Prediction:
@@ -34,7 +34,7 @@
def __init__(self, C: Optional[int] = None):
"""Init LRProb."""
self.C = LogisticRegression().C if C is None else C
- super().__init__(name=f"Logistic Regression Prob, C={self.C}", is_fairness_algo=False)
+ super().__init__(name=f"Logistic Regression Prob (C={self.C})", is_fairness_algo=False)
@implements(InAlgorithm)
def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:
diff --git a/ethicml/algorithms/inprocess/svm.py b/ethicml/algorithms/inprocess/svm.py
--- a/ethicml/algorithms/inprocess/svm.py
+++ b/ethicml/algorithms/inprocess/svm.py
@@ -17,7 +17,7 @@
def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):
"""Init SVM."""
- kernel_name = f" (kernel)" if kernel is not None else ""
+ kernel_name = f" ({kernel})" if kernel is not None else ""
super().__init__(name="SVM" + kernel_name, is_fairness_algo=False)
self.C = SVC().C if C is None else C
self.kernel = SVC().kernel if kernel is None else kernel
|
{"golden_diff": "diff --git a/ethicml/algorithms/inprocess/logistic_regression.py b/ethicml/algorithms/inprocess/logistic_regression.py\n--- a/ethicml/algorithms/inprocess/logistic_regression.py\n+++ b/ethicml/algorithms/inprocess/logistic_regression.py\n@@ -19,7 +19,7 @@\n def __init__(self, C: Optional[float] = None):\n \"\"\"Init LR.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n- super().__init__(name=f\"Logistic Regression, C={self.C}\", is_fairness_algo=False)\n+ super().__init__(name=f\"Logistic Regression (C={self.C})\", is_fairness_algo=False)\n \n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> Prediction:\n@@ -34,7 +34,7 @@\n def __init__(self, C: Optional[int] = None):\n \"\"\"Init LRProb.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n- super().__init__(name=f\"Logistic Regression Prob, C={self.C}\", is_fairness_algo=False)\n+ super().__init__(name=f\"Logistic Regression Prob (C={self.C})\", is_fairness_algo=False)\n \n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:\ndiff --git a/ethicml/algorithms/inprocess/svm.py b/ethicml/algorithms/inprocess/svm.py\n--- a/ethicml/algorithms/inprocess/svm.py\n+++ b/ethicml/algorithms/inprocess/svm.py\n@@ -17,7 +17,7 @@\n \n def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):\n \"\"\"Init SVM.\"\"\"\n- kernel_name = f\" (kernel)\" if kernel is not None else \"\"\n+ kernel_name = f\" ({kernel})\" if kernel is not None else \"\"\n super().__init__(name=\"SVM\" + kernel_name, is_fairness_algo=False)\n self.C = SVC().C if C is None else C\n self.kernel = SVC().kernel if kernel is None else kernel\n", "issue": "SVM Kernel name\nClearly [this](https://github.com/predictive-analytics-lab/EthicML/blob/f7fcf435b5807ef9931f3ff3b259fc7cc4b38da8/ethicml/algorithms/inprocess/svm.py#L20) is not right \n", "before_files": [{"content": "\"\"\"Wrapper for SKLearn implementation of SVM.\"\"\"\nfrom typing import Optional, Union\n\nimport pandas as pd\nfrom sklearn.svm import SVC, LinearSVC\n\nfrom ethicml.common import implements\nfrom ethicml.utility import DataTuple, Prediction, TestTuple\n\nfrom .in_algorithm import InAlgorithm\n\n__all__ = [\"SVM\"]\n\n\nclass SVM(InAlgorithm):\n \"\"\"Support Vector Machine.\"\"\"\n\n def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):\n \"\"\"Init SVM.\"\"\"\n kernel_name = f\" (kernel)\" if kernel is not None else \"\"\n super().__init__(name=\"SVM\" + kernel_name, is_fairness_algo=False)\n self.C = SVC().C if C is None else C\n self.kernel = SVC().kernel if kernel is None else kernel\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: Union[DataTuple, TestTuple]) -> Prediction:\n clf = select_svm(self.C, self.kernel)\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)))\n\n\ndef select_svm(C: float, kernel: str) -> SVC:\n \"\"\"Select the appropriate SVM model for the given parameters.\"\"\"\n if kernel == \"linear\":\n return LinearSVC(C=C, dual=False, tol=1e-12, random_state=888)\n return SVC(C=C, kernel=kernel, gamma=\"auto\", random_state=888)\n", "path": "ethicml/algorithms/inprocess/svm.py"}, {"content": "\"\"\"Wrapper around Sci-Kit Learn Logistic Regression.\"\"\"\nfrom typing import Optional\n\nimport pandas as pd\nfrom sklearn.linear_model import LogisticRegression, LogisticRegressionCV\nfrom sklearn.model_selection import KFold\n\nfrom ethicml.common import implements\nfrom 
ethicml.utility import DataTuple, Prediction, SoftPrediction, TestTuple\n\nfrom .in_algorithm import InAlgorithm\n\n__all__ = [\"LR\", \"LRCV\", \"LRProb\"]\n\n\nclass LR(InAlgorithm):\n \"\"\"Logistic regression with hard predictions.\"\"\"\n\n def __init__(self, C: Optional[float] = None):\n \"\"\"Init LR.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n super().__init__(name=f\"Logistic Regression, C={self.C}\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> Prediction:\n clf = LogisticRegression(solver=\"liblinear\", random_state=888, C=self.C, multi_class=\"auto\")\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)))\n\n\nclass LRProb(InAlgorithm):\n \"\"\"Logistic regression with soft output.\"\"\"\n\n def __init__(self, C: Optional[int] = None):\n \"\"\"Init LRProb.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n super().__init__(name=f\"Logistic Regression Prob, C={self.C}\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:\n clf = LogisticRegression(solver=\"liblinear\", random_state=888, C=self.C, multi_class=\"auto\")\n clf.fit(train.x, train.y.to_numpy().ravel())\n return SoftPrediction(soft=pd.Series(clf.predict_proba(test.x)[:, 1]))\n\n\nclass LRCV(InAlgorithm):\n \"\"\"Kind of a cheap hack for now, but gives a proper cross-valudeted LR.\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Init LRCV.\"\"\"\n super().__init__(name=\"LRCV\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> Prediction:\n folder = KFold(n_splits=3, shuffle=False)\n clf = LogisticRegressionCV(\n cv=folder, n_jobs=-1, random_state=888, solver=\"liblinear\", multi_class=\"auto\"\n )\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)), info=dict(C=clf.C_[0]))\n", "path": "ethicml/algorithms/inprocess/logistic_regression.py"}]}
| 1,742 | 489 |
gh_patches_debug_35533
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-3739
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set environment variable when running scrapy check
Sometimes it is nice to be able to enable/disable functionality, e.g. skipping heavy calculations in settings.py when just checking spider contracts instead of running a crawl. I therefore propose setting an environment variable like `SCRAPY_CHECK` when using the check command.
</issue>
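To make the use case concrete, here is a hypothetical project `settings.py` that branches on the proposed variable; the setting names and the throwaway dictionary are assumptions for illustration, and the value `"true"` matches what the golden diff for this record exports.

```python
# settings.py of a hypothetical Scrapy project: skip heavy start-up work
# when spider contracts are being checked rather than crawled.
import os

CHECK_MODE = os.environ.get("SCRAPY_CHECK") == "true"

if CHECK_MODE:
    EXPENSIVE_LOOKUP = {}  # cheap placeholder while `scrapy check` runs
else:
    EXPENSIVE_LOOKUP = {n: n * n for n in range(1_000_000)}  # stand-in for real work
```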
<code>
[start of scrapy/utils/misc.py]
1 """Helper functions which don't fit anywhere else"""
2 import re
3 import hashlib
4 from importlib import import_module
5 from pkgutil import iter_modules
6
7 import six
8 from w3lib.html import replace_entities
9
10 from scrapy.utils.python import flatten, to_unicode
11 from scrapy.item import BaseItem
12
13
14 _ITERABLE_SINGLE_VALUES = dict, BaseItem, six.text_type, bytes
15
16
17 def arg_to_iter(arg):
18 """Convert an argument to an iterable. The argument can be a None, single
19 value, or an iterable.
20
21 Exception: if arg is a dict, [arg] will be returned
22 """
23 if arg is None:
24 return []
25 elif not isinstance(arg, _ITERABLE_SINGLE_VALUES) and hasattr(arg, '__iter__'):
26 return arg
27 else:
28 return [arg]
29
30
31 def load_object(path):
32 """Load an object given its absolute object path, and return it.
33
34 object can be a class, function, variable or an instance.
35 path ie: 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'
36 """
37
38 try:
39 dot = path.rindex('.')
40 except ValueError:
41 raise ValueError("Error loading object '%s': not a full path" % path)
42
43 module, name = path[:dot], path[dot+1:]
44 mod = import_module(module)
45
46 try:
47 obj = getattr(mod, name)
48 except AttributeError:
49 raise NameError("Module '%s' doesn't define any object named '%s'" % (module, name))
50
51 return obj
52
53
54 def walk_modules(path):
55 """Loads a module and all its submodules from the given module path and
56 returns them. If *any* module throws an exception while importing, that
57 exception is thrown back.
58
59 For example: walk_modules('scrapy.utils')
60 """
61
62 mods = []
63 mod = import_module(path)
64 mods.append(mod)
65 if hasattr(mod, '__path__'):
66 for _, subpath, ispkg in iter_modules(mod.__path__):
67 fullpath = path + '.' + subpath
68 if ispkg:
69 mods += walk_modules(fullpath)
70 else:
71 submod = import_module(fullpath)
72 mods.append(submod)
73 return mods
74
75
76 def extract_regex(regex, text, encoding='utf-8'):
77 """Extract a list of unicode strings from the given text/encoding using the following policies:
78
79 * if the regex contains a named group called "extract" that will be returned
80 * if the regex contains multiple numbered groups, all those will be returned (flattened)
81 * if the regex doesn't contain any group the entire regex matching is returned
82 """
83
84 if isinstance(regex, six.string_types):
85 regex = re.compile(regex, re.UNICODE)
86
87 try:
88 strings = [regex.search(text).group('extract')] # named group
89 except Exception:
90 strings = regex.findall(text) # full regex or numbered groups
91 strings = flatten(strings)
92
93 if isinstance(text, six.text_type):
94 return [replace_entities(s, keep=['lt', 'amp']) for s in strings]
95 else:
96 return [replace_entities(to_unicode(s, encoding), keep=['lt', 'amp'])
97 for s in strings]
98
99
100 def md5sum(file):
101 """Calculate the md5 checksum of a file-like object without reading its
102 whole content in memory.
103
104 >>> from io import BytesIO
105 >>> md5sum(BytesIO(b'file content to hash'))
106 '784406af91dd5a54fbb9c84c2236595a'
107 """
108 m = hashlib.md5()
109 while True:
110 d = file.read(8096)
111 if not d:
112 break
113 m.update(d)
114 return m.hexdigest()
115
116
117 def rel_has_nofollow(rel):
118 """Return True if link rel attribute has nofollow type"""
119 return rel is not None and 'nofollow' in rel.split()
120
121
122 def create_instance(objcls, settings, crawler, *args, **kwargs):
123 """Construct a class instance using its ``from_crawler`` or
124 ``from_settings`` constructors, if available.
125
126 At least one of ``settings`` and ``crawler`` needs to be different from
127 ``None``. If ``settings `` is ``None``, ``crawler.settings`` will be used.
128 If ``crawler`` is ``None``, only the ``from_settings`` constructor will be
129 tried.
130
131 ``*args`` and ``**kwargs`` are forwarded to the constructors.
132
133 Raises ``ValueError`` if both ``settings`` and ``crawler`` are ``None``.
134 """
135 if settings is None:
136 if crawler is None:
137 raise ValueError("Specifiy at least one of settings and crawler.")
138 settings = crawler.settings
139 if crawler and hasattr(objcls, 'from_crawler'):
140 return objcls.from_crawler(crawler, *args, **kwargs)
141 elif hasattr(objcls, 'from_settings'):
142 return objcls.from_settings(settings, *args, **kwargs)
143 else:
144 return objcls(*args, **kwargs)
145
[end of scrapy/utils/misc.py]
[start of scrapy/commands/check.py]
1 from __future__ import print_function
2 import time
3 import sys
4 from collections import defaultdict
5 from unittest import TextTestRunner, TextTestResult as _TextTestResult
6
7 from scrapy.commands import ScrapyCommand
8 from scrapy.contracts import ContractsManager
9 from scrapy.utils.misc import load_object
10 from scrapy.utils.conf import build_component_list
11
12
13 class TextTestResult(_TextTestResult):
14 def printSummary(self, start, stop):
15 write = self.stream.write
16 writeln = self.stream.writeln
17
18 run = self.testsRun
19 plural = "s" if run != 1 else ""
20
21 writeln(self.separator2)
22 writeln("Ran %d contract%s in %.3fs" % (run, plural, stop - start))
23 writeln()
24
25 infos = []
26 if not self.wasSuccessful():
27 write("FAILED")
28 failed, errored = map(len, (self.failures, self.errors))
29 if failed:
30 infos.append("failures=%d" % failed)
31 if errored:
32 infos.append("errors=%d" % errored)
33 else:
34 write("OK")
35
36 if infos:
37 writeln(" (%s)" % (", ".join(infos),))
38 else:
39 write("\n")
40
41
42 class Command(ScrapyCommand):
43 requires_project = True
44 default_settings = {'LOG_ENABLED': False}
45
46 def syntax(self):
47 return "[options] <spider>"
48
49 def short_desc(self):
50 return "Check spider contracts"
51
52 def add_options(self, parser):
53 ScrapyCommand.add_options(self, parser)
54 parser.add_option("-l", "--list", dest="list", action="store_true",
55 help="only list contracts, without checking them")
56 parser.add_option("-v", "--verbose", dest="verbose", default=False, action='store_true',
57 help="print contract tests for all spiders")
58
59 def run(self, args, opts):
60 # load contracts
61 contracts = build_component_list(self.settings.getwithbase('SPIDER_CONTRACTS'))
62 conman = ContractsManager(load_object(c) for c in contracts)
63 runner = TextTestRunner(verbosity=2 if opts.verbose else 1)
64 result = TextTestResult(runner.stream, runner.descriptions, runner.verbosity)
65
66 # contract requests
67 contract_reqs = defaultdict(list)
68
69 spider_loader = self.crawler_process.spider_loader
70
71 for spidername in args or spider_loader.list():
72 spidercls = spider_loader.load(spidername)
73 spidercls.start_requests = lambda s: conman.from_spider(s, result)
74
75 tested_methods = conman.tested_methods_from_spidercls(spidercls)
76 if opts.list:
77 for method in tested_methods:
78 contract_reqs[spidercls.name].append(method)
79 elif tested_methods:
80 self.crawler_process.crawl(spidercls)
81
82 # start checks
83 if opts.list:
84 for spider, methods in sorted(contract_reqs.items()):
85 if not methods and not opts.verbose:
86 continue
87 print(spider)
88 for method in sorted(methods):
89 print(' * %s' % method)
90 else:
91 start = time.time()
92 self.crawler_process.start()
93 stop = time.time()
94
95 result.printErrors()
96 result.printSummary(start, stop)
97 self.exitcode = int(not result.wasSuccessful())
98
99
[end of scrapy/commands/check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/commands/check.py b/scrapy/commands/check.py
--- a/scrapy/commands/check.py
+++ b/scrapy/commands/check.py
@@ -6,7 +6,7 @@
from scrapy.commands import ScrapyCommand
from scrapy.contracts import ContractsManager
-from scrapy.utils.misc import load_object
+from scrapy.utils.misc import load_object, set_environ
from scrapy.utils.conf import build_component_list
@@ -68,16 +68,17 @@
spider_loader = self.crawler_process.spider_loader
- for spidername in args or spider_loader.list():
- spidercls = spider_loader.load(spidername)
- spidercls.start_requests = lambda s: conman.from_spider(s, result)
-
- tested_methods = conman.tested_methods_from_spidercls(spidercls)
- if opts.list:
- for method in tested_methods:
- contract_reqs[spidercls.name].append(method)
- elif tested_methods:
- self.crawler_process.crawl(spidercls)
+ with set_environ(SCRAPY_CHECK='true'):
+ for spidername in args or spider_loader.list():
+ spidercls = spider_loader.load(spidername)
+ spidercls.start_requests = lambda s: conman.from_spider(s, result)
+
+ tested_methods = conman.tested_methods_from_spidercls(spidercls)
+ if opts.list:
+ for method in tested_methods:
+ contract_reqs[spidercls.name].append(method)
+ elif tested_methods:
+ self.crawler_process.crawl(spidercls)
# start checks
if opts.list:
diff --git a/scrapy/utils/misc.py b/scrapy/utils/misc.py
--- a/scrapy/utils/misc.py
+++ b/scrapy/utils/misc.py
@@ -1,6 +1,8 @@
"""Helper functions which don't fit anywhere else"""
+import os
import re
import hashlib
+from contextlib import contextmanager
from importlib import import_module
from pkgutil import iter_modules
@@ -142,3 +144,21 @@
return objcls.from_settings(settings, *args, **kwargs)
else:
return objcls(*args, **kwargs)
+
+
+@contextmanager
+def set_environ(**kwargs):
+ """Temporarily set environment variables inside the context manager and
+ fully restore previous environment afterwards
+ """
+
+ original_env = {k: os.environ.get(k) for k in kwargs}
+ os.environ.update(kwargs)
+ try:
+ yield
+ finally:
+ for k, v in original_env.items():
+ if v is None:
+ del os.environ[k]
+ else:
+ os.environ[k] = v
|
{"golden_diff": "diff --git a/scrapy/commands/check.py b/scrapy/commands/check.py\n--- a/scrapy/commands/check.py\n+++ b/scrapy/commands/check.py\n@@ -6,7 +6,7 @@\n \n from scrapy.commands import ScrapyCommand\n from scrapy.contracts import ContractsManager\n-from scrapy.utils.misc import load_object\n+from scrapy.utils.misc import load_object, set_environ\n from scrapy.utils.conf import build_component_list\n \n \n@@ -68,16 +68,17 @@\n \n spider_loader = self.crawler_process.spider_loader\n \n- for spidername in args or spider_loader.list():\n- spidercls = spider_loader.load(spidername)\n- spidercls.start_requests = lambda s: conman.from_spider(s, result)\n-\n- tested_methods = conman.tested_methods_from_spidercls(spidercls)\n- if opts.list:\n- for method in tested_methods:\n- contract_reqs[spidercls.name].append(method)\n- elif tested_methods:\n- self.crawler_process.crawl(spidercls)\n+ with set_environ(SCRAPY_CHECK='true'):\n+ for spidername in args or spider_loader.list():\n+ spidercls = spider_loader.load(spidername)\n+ spidercls.start_requests = lambda s: conman.from_spider(s, result)\n+\n+ tested_methods = conman.tested_methods_from_spidercls(spidercls)\n+ if opts.list:\n+ for method in tested_methods:\n+ contract_reqs[spidercls.name].append(method)\n+ elif tested_methods:\n+ self.crawler_process.crawl(spidercls)\n \n # start checks\n if opts.list:\ndiff --git a/scrapy/utils/misc.py b/scrapy/utils/misc.py\n--- a/scrapy/utils/misc.py\n+++ b/scrapy/utils/misc.py\n@@ -1,6 +1,8 @@\n \"\"\"Helper functions which don't fit anywhere else\"\"\"\n+import os\n import re\n import hashlib\n+from contextlib import contextmanager\n from importlib import import_module\n from pkgutil import iter_modules\n \n@@ -142,3 +144,21 @@\n return objcls.from_settings(settings, *args, **kwargs)\n else:\n return objcls(*args, **kwargs)\n+\n+\n+@contextmanager\n+def set_environ(**kwargs):\n+ \"\"\"Temporarily set environment variables inside the context manager and\n+ fully restore previous environment afterwards\n+ \"\"\"\n+\n+ original_env = {k: os.environ.get(k) for k in kwargs}\n+ os.environ.update(kwargs)\n+ try:\n+ yield\n+ finally:\n+ for k, v in original_env.items():\n+ if v is None:\n+ del os.environ[k]\n+ else:\n+ os.environ[k] = v\n", "issue": "Set environment variable when running scrapy check\nSometimes it is nice to be able to enable/disable functionality, e.g. calculating things in settings.py when just checking spider contracts instead of running a crawl. I therefor propose setting an environment variable like `SCRAPY_CHECK` when using the check command.\n", "before_files": [{"content": "\"\"\"Helper functions which don't fit anywhere else\"\"\"\nimport re\nimport hashlib\nfrom importlib import import_module\nfrom pkgutil import iter_modules\n\nimport six\nfrom w3lib.html import replace_entities\n\nfrom scrapy.utils.python import flatten, to_unicode\nfrom scrapy.item import BaseItem\n\n\n_ITERABLE_SINGLE_VALUES = dict, BaseItem, six.text_type, bytes\n\n\ndef arg_to_iter(arg):\n \"\"\"Convert an argument to an iterable. 
The argument can be a None, single\n value, or an iterable.\n\n Exception: if arg is a dict, [arg] will be returned\n \"\"\"\n if arg is None:\n return []\n elif not isinstance(arg, _ITERABLE_SINGLE_VALUES) and hasattr(arg, '__iter__'):\n return arg\n else:\n return [arg]\n\n\ndef load_object(path):\n \"\"\"Load an object given its absolute object path, and return it.\n\n object can be a class, function, variable or an instance.\n path ie: 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware'\n \"\"\"\n\n try:\n dot = path.rindex('.')\n except ValueError:\n raise ValueError(\"Error loading object '%s': not a full path\" % path)\n\n module, name = path[:dot], path[dot+1:]\n mod = import_module(module)\n\n try:\n obj = getattr(mod, name)\n except AttributeError:\n raise NameError(\"Module '%s' doesn't define any object named '%s'\" % (module, name))\n\n return obj\n\n\ndef walk_modules(path):\n \"\"\"Loads a module and all its submodules from the given module path and\n returns them. If *any* module throws an exception while importing, that\n exception is thrown back.\n\n For example: walk_modules('scrapy.utils')\n \"\"\"\n\n mods = []\n mod = import_module(path)\n mods.append(mod)\n if hasattr(mod, '__path__'):\n for _, subpath, ispkg in iter_modules(mod.__path__):\n fullpath = path + '.' + subpath\n if ispkg:\n mods += walk_modules(fullpath)\n else:\n submod = import_module(fullpath)\n mods.append(submod)\n return mods\n\n\ndef extract_regex(regex, text, encoding='utf-8'):\n \"\"\"Extract a list of unicode strings from the given text/encoding using the following policies:\n\n * if the regex contains a named group called \"extract\" that will be returned\n * if the regex contains multiple numbered groups, all those will be returned (flattened)\n * if the regex doesn't contain any group the entire regex matching is returned\n \"\"\"\n\n if isinstance(regex, six.string_types):\n regex = re.compile(regex, re.UNICODE)\n\n try:\n strings = [regex.search(text).group('extract')] # named group\n except Exception:\n strings = regex.findall(text) # full regex or numbered groups\n strings = flatten(strings)\n\n if isinstance(text, six.text_type):\n return [replace_entities(s, keep=['lt', 'amp']) for s in strings]\n else:\n return [replace_entities(to_unicode(s, encoding), keep=['lt', 'amp'])\n for s in strings]\n\n\ndef md5sum(file):\n \"\"\"Calculate the md5 checksum of a file-like object without reading its\n whole content in memory.\n\n >>> from io import BytesIO\n >>> md5sum(BytesIO(b'file content to hash'))\n '784406af91dd5a54fbb9c84c2236595a'\n \"\"\"\n m = hashlib.md5()\n while True:\n d = file.read(8096)\n if not d:\n break\n m.update(d)\n return m.hexdigest()\n\n\ndef rel_has_nofollow(rel):\n \"\"\"Return True if link rel attribute has nofollow type\"\"\"\n return rel is not None and 'nofollow' in rel.split()\n\n\ndef create_instance(objcls, settings, crawler, *args, **kwargs):\n \"\"\"Construct a class instance using its ``from_crawler`` or\n ``from_settings`` constructors, if available.\n\n At least one of ``settings`` and ``crawler`` needs to be different from\n ``None``. 
If ``settings `` is ``None``, ``crawler.settings`` will be used.\n If ``crawler`` is ``None``, only the ``from_settings`` constructor will be\n tried.\n\n ``*args`` and ``**kwargs`` are forwarded to the constructors.\n\n Raises ``ValueError`` if both ``settings`` and ``crawler`` are ``None``.\n \"\"\"\n if settings is None:\n if crawler is None:\n raise ValueError(\"Specifiy at least one of settings and crawler.\")\n settings = crawler.settings\n if crawler and hasattr(objcls, 'from_crawler'):\n return objcls.from_crawler(crawler, *args, **kwargs)\n elif hasattr(objcls, 'from_settings'):\n return objcls.from_settings(settings, *args, **kwargs)\n else:\n return objcls(*args, **kwargs)\n", "path": "scrapy/utils/misc.py"}, {"content": "from __future__ import print_function\nimport time\nimport sys\nfrom collections import defaultdict\nfrom unittest import TextTestRunner, TextTestResult as _TextTestResult\n\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.contracts import ContractsManager\nfrom scrapy.utils.misc import load_object\nfrom scrapy.utils.conf import build_component_list\n\n\nclass TextTestResult(_TextTestResult):\n def printSummary(self, start, stop):\n write = self.stream.write\n writeln = self.stream.writeln\n\n run = self.testsRun\n plural = \"s\" if run != 1 else \"\"\n\n writeln(self.separator2)\n writeln(\"Ran %d contract%s in %.3fs\" % (run, plural, stop - start))\n writeln()\n\n infos = []\n if not self.wasSuccessful():\n write(\"FAILED\")\n failed, errored = map(len, (self.failures, self.errors))\n if failed:\n infos.append(\"failures=%d\" % failed)\n if errored:\n infos.append(\"errors=%d\" % errored)\n else:\n write(\"OK\")\n\n if infos:\n writeln(\" (%s)\" % (\", \".join(infos),))\n else:\n write(\"\\n\")\n\n\nclass Command(ScrapyCommand):\n requires_project = True\n default_settings = {'LOG_ENABLED': False}\n\n def syntax(self):\n return \"[options] <spider>\"\n\n def short_desc(self):\n return \"Check spider contracts\"\n\n def add_options(self, parser):\n ScrapyCommand.add_options(self, parser)\n parser.add_option(\"-l\", \"--list\", dest=\"list\", action=\"store_true\",\n help=\"only list contracts, without checking them\")\n parser.add_option(\"-v\", \"--verbose\", dest=\"verbose\", default=False, action='store_true',\n help=\"print contract tests for all spiders\")\n\n def run(self, args, opts):\n # load contracts\n contracts = build_component_list(self.settings.getwithbase('SPIDER_CONTRACTS'))\n conman = ContractsManager(load_object(c) for c in contracts)\n runner = TextTestRunner(verbosity=2 if opts.verbose else 1)\n result = TextTestResult(runner.stream, runner.descriptions, runner.verbosity)\n\n # contract requests\n contract_reqs = defaultdict(list)\n\n spider_loader = self.crawler_process.spider_loader\n\n for spidername in args or spider_loader.list():\n spidercls = spider_loader.load(spidername)\n spidercls.start_requests = lambda s: conman.from_spider(s, result)\n\n tested_methods = conman.tested_methods_from_spidercls(spidercls)\n if opts.list:\n for method in tested_methods:\n contract_reqs[spidercls.name].append(method)\n elif tested_methods:\n self.crawler_process.crawl(spidercls)\n\n # start checks\n if opts.list:\n for spider, methods in sorted(contract_reqs.items()):\n if not methods and not opts.verbose:\n continue\n print(spider)\n for method in sorted(methods):\n print(' * %s' % method)\n else:\n start = time.time()\n self.crawler_process.start()\n stop = time.time()\n\n result.printErrors()\n result.printSummary(start, stop)\n 
self.exitcode = int(not result.wasSuccessful())\n\n", "path": "scrapy/commands/check.py"}]}
| 2,942 | 604 |
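The `set_environ` helper added by the patch above is the heart of the change: contract checks run with `SCRAPY_CHECK=true` in the environment, and the previous environment is fully restored afterwards. The snippet below mirrors that logic so the behaviour can be exercised on its own; the final assertion assumes the variable was not already set before entering the block.

```python
import os
from contextlib import contextmanager

@contextmanager
def set_environ(**kwargs):
    # Same idea as the helper in the patch: set variables temporarily,
    # then fully restore the previous environment.
    original = {k: os.environ.get(k) for k in kwargs}
    os.environ.update(kwargs)
    try:
        yield
    finally:
        for k, v in original.items():
            if v is None:
                del os.environ[k]
            else:
                os.environ[k] = v

with set_environ(SCRAPY_CHECK="true"):
    assert os.environ["SCRAPY_CHECK"] == "true"
assert "SCRAPY_CHECK" not in os.environ  # assumes it was unset beforehand
```

A project's settings module or spider can then branch on `os.environ.get("SCRAPY_CHECK")` to skip work that only matters for real crawls.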
gh_patches_debug_28181
|
rasdani/github-patches
|
git_diff
|
carpentries__amy-622
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
List of people who taught at events of specific type
Use case: Tracy wants to grab a list of people who taught at DC workshops, so that she knows who is an experienced DC instructor.
</issue>
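A rough sketch of the query this use case asks for, assuming the `Person`, `Task`, `Event` and `Tag` models referenced in the code below, an `instructor` role name, and that `Tag` exposes a `name` field; the tag name `"DC"` is a placeholder.

```python
from workshops.models import Person

# People who held the "instructor" role at any event carrying the given tag.
dc_instructors = (
    Person.objects
    .filter(task__role__name="instructor",
            task__event__tags__name="DC")
    .distinct()
)
```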
<code>
[start of workshops/filters.py]
1 from distutils.util import strtobool
2
3 import django.forms.widgets
4
5 import django_filters
6 from django_countries import Countries
7
8 from workshops.models import Event, Host, Person, Task, Airport, EventRequest
9
10 EMPTY_SELECTION = (None, '---------')
11
12
13 class AllCountriesFilter(django_filters.ChoiceFilter):
14 @property
15 def field(self):
16 qs = self.model._default_manager.distinct()
17 qs = qs.order_by(self.name).values_list(self.name, flat=True)
18
19 choices = [o for o in qs if o]
20 countries = Countries()
21 countries.only = choices
22
23 self.extra['choices'] = list(countries)
24 self.extra['choices'].insert(0, EMPTY_SELECTION)
25 return super().field
26
27
28 class ForeignKeyAllValuesFilter(django_filters.ChoiceFilter):
29 def __init__(self, model, *args, **kwargs):
30 self.lookup_model = model
31 super().__init__(*args, **kwargs)
32
33 @property
34 def field(self):
35 name = self.name
36 model = self.lookup_model
37
38 qs1 = self.model._default_manager.distinct()
39 qs1 = qs1.order_by(name).values_list(name, flat=True)
40 qs2 = model.objects.filter(pk__in=qs1)
41 self.extra['choices'] = [(o.pk, str(o)) for o in qs2]
42 self.extra['choices'].insert(0, EMPTY_SELECTION)
43 return super().field
44
45
46 class EventStateFilter(django_filters.ChoiceFilter):
47 def filter(self, qs, value):
48 if isinstance(value, django_filters.fields.Lookup):
49 value = value.value
50
51 # no filtering
52 if value in ([], (), {}, None, '', 'all'):
53 return qs
54
55 # no need to check if value exists in self.extra['choices'] because
56 # validation is done by django_filters
57 try:
58 return getattr(qs, "{}_events".format(value))()
59 except AttributeError:
60 return qs
61
62
63 class EventFilter(django_filters.FilterSet):
64 assigned_to = ForeignKeyAllValuesFilter(Person)
65 host = ForeignKeyAllValuesFilter(Host)
66 administrator = ForeignKeyAllValuesFilter(Host)
67
68 STATUS_CHOICES = [
69 ('', 'All'),
70 ('past', 'Past'),
71 ('ongoing', 'Ongoing'),
72 ('upcoming', 'Upcoming'),
73 ('unpublished', 'Unpublished'),
74 ('uninvoiced', 'Uninvoiced'),
75 ]
76 status = EventStateFilter(choices=STATUS_CHOICES)
77
78 invoice_status = django_filters.ChoiceFilter(
79 choices=(EMPTY_SELECTION, ) + Event.INVOICED_CHOICES,
80 )
81
82 class Meta:
83 model = Event
84 fields = [
85 'assigned_to',
86 'tags',
87 'host',
88 'administrator',
89 'invoice_status',
90 'completed',
91 ]
92 order_by = ['-slug', 'slug', 'start', '-start', 'end', '-end']
93
94
95 class EventRequestFilter(django_filters.FilterSet):
96 assigned_to = ForeignKeyAllValuesFilter(Person)
97 country = AllCountriesFilter()
98 active = django_filters.TypedChoiceFilter(
99 choices=(('true', 'Open'), ('false', 'Closed')),
100 coerce=strtobool,
101 label='Status',
102 widget=django.forms.widgets.RadioSelect,
103 )
104
105 class Meta:
106 model = EventRequest
107 fields = [
108 'assigned_to',
109 'workshop_type',
110 'active',
111 'country',
112 ]
113 order_by = ['-created_at', 'created_at']
114
115
116 class HostFilter(django_filters.FilterSet):
117 country = AllCountriesFilter()
118
119 class Meta:
120 model = Host
121 fields = [
122 'country',
123 ]
124 order_by = ['fullname', '-fullname', 'domain', '-domain', ]
125
126
127 class PersonFilter(django_filters.FilterSet):
128 class Meta:
129 model = Person
130 fields = [
131 'badges',
132 ]
133 order_by = ["lastname", "-lastname", "firstname", "-firstname",
134 "email", "-email"]
135
136 def get_order_by(self, order_value):
137 if order_value == 'firstname':
138 return ['personal', 'middle', 'family']
139 elif order_value == '-firstname':
140 return ['-personal', '-middle', '-family']
141 elif order_value == 'lastname':
142 return ['family', 'middle', 'personal']
143 elif order_value == '-lastname':
144 return ['-family', '-middle', '-personal']
145 return super().get_order_by(order_value)
146
147
148 class TaskFilter(django_filters.FilterSet):
149 class Meta:
150 model = Task
151 fields = [
152 'event',
153 # can't filter on person because person's name contains 3 fields:
154 # person.personal, person.middle, person.family
155 # 'person',
156 'role',
157 ]
158 order_by = [
159 ['event__slug', 'Event'],
160 ['-event__slug', 'Event (descending)'],
161 ['person__family', 'Person'],
162 ['-person__family', 'Person (descending)'],
163 ['role', 'Role'],
164 ['-role', 'Role (descending)'],
165 ]
166
167
168 class AirportFilter(django_filters.FilterSet):
169 fullname = django_filters.CharFilter(lookup_type='icontains')
170
171 class Meta:
172 model = Airport
173 fields = [
174 'fullname',
175 ]
176 order_by = ["iata", "-iata", "fullname", "-fullname"]
177
[end of workshops/filters.py]
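For context on how these FilterSet classes are consumed, a minimal view sketch follows; the view and template names are assumptions rather than code from this repository.

```python
from django.shortcuts import render

from workshops.filters import EventFilter
from workshops.models import Event

def all_events(request):
    # Query-string parameters such as ?status=upcoming&host=3 are validated
    # and applied by the FilterSet; `.qs` is the resulting queryset.
    event_filter = EventFilter(request.GET, queryset=Event.objects.all())
    return render(request, "workshops/all_events.html",
                  {"filter": event_filter, "events": event_filter.qs})
```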
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/workshops/filters.py b/workshops/filters.py
--- a/workshops/filters.py
+++ b/workshops/filters.py
@@ -5,7 +5,17 @@
import django_filters
from django_countries import Countries
-from workshops.models import Event, Host, Person, Task, Airport, EventRequest
+from workshops.models import (
+ Event,
+ Host,
+ Person,
+ Task,
+ Airport,
+ EventRequest,
+ Tag,
+ Role,
+ Task,
+)
EMPTY_SELECTION = (None, '---------')
@@ -124,11 +134,31 @@
order_by = ['fullname', '-fullname', 'domain', '-domain', ]
+def filter_taught_workshops(queryset, values):
+ """Limit Persons to only instructors from events with specific tags.
+
+ This needs to be in a separate function because django-filters doesn't
+ support `action` parameter as supposed, ie. with
+ `action='filter_taught_workshops'` it doesn't call the method; instead it
+ tries calling a string, which results in error."""
+ if not values:
+ return queryset
+
+ return queryset.filter(task__role__name='instructor') \
+ .filter(task__event__tags__in=values) \
+ .distinct()
+
+
class PersonFilter(django_filters.FilterSet):
+ taught_workshops = django_filters.ModelMultipleChoiceFilter(
+ queryset=Tag.objects.all(), label='Taught at workshops of type',
+ action=filter_taught_workshops,
+ )
+
class Meta:
model = Person
fields = [
- 'badges',
+ 'badges', 'taught_workshops',
]
order_by = ["lastname", "-lastname", "firstname", "-firstname",
"email", "-email"]
|
{"golden_diff": "diff --git a/workshops/filters.py b/workshops/filters.py\n--- a/workshops/filters.py\n+++ b/workshops/filters.py\n@@ -5,7 +5,17 @@\n import django_filters\n from django_countries import Countries\n \n-from workshops.models import Event, Host, Person, Task, Airport, EventRequest\n+from workshops.models import (\n+ Event,\n+ Host,\n+ Person,\n+ Task,\n+ Airport,\n+ EventRequest,\n+ Tag,\n+ Role,\n+ Task,\n+)\n \n EMPTY_SELECTION = (None, '---------')\n \n@@ -124,11 +134,31 @@\n order_by = ['fullname', '-fullname', 'domain', '-domain', ]\n \n \n+def filter_taught_workshops(queryset, values):\n+ \"\"\"Limit Persons to only instructors from events with specific tags.\n+\n+ This needs to be in a separate function because django-filters doesn't\n+ support `action` parameter as supposed, ie. with\n+ `action='filter_taught_workshops'` it doesn't call the method; instead it\n+ tries calling a string, which results in error.\"\"\"\n+ if not values:\n+ return queryset\n+\n+ return queryset.filter(task__role__name='instructor') \\\n+ .filter(task__event__tags__in=values) \\\n+ .distinct()\n+\n+\n class PersonFilter(django_filters.FilterSet):\n+ taught_workshops = django_filters.ModelMultipleChoiceFilter(\n+ queryset=Tag.objects.all(), label='Taught at workshops of type',\n+ action=filter_taught_workshops,\n+ )\n+\n class Meta:\n model = Person\n fields = [\n- 'badges',\n+ 'badges', 'taught_workshops',\n ]\n order_by = [\"lastname\", \"-lastname\", \"firstname\", \"-firstname\",\n \"email\", \"-email\"]\n", "issue": "List of people who taught at events of specific type\nUsecase: Tracy wants to grab list of people who taught at DC workshops, so that she knows who is experienced DC instructor.\n\n", "before_files": [{"content": "from distutils.util import strtobool\n\nimport django.forms.widgets\n\nimport django_filters\nfrom django_countries import Countries\n\nfrom workshops.models import Event, Host, Person, Task, Airport, EventRequest\n\nEMPTY_SELECTION = (None, '---------')\n\n\nclass AllCountriesFilter(django_filters.ChoiceFilter):\n @property\n def field(self):\n qs = self.model._default_manager.distinct()\n qs = qs.order_by(self.name).values_list(self.name, flat=True)\n\n choices = [o for o in qs if o]\n countries = Countries()\n countries.only = choices\n\n self.extra['choices'] = list(countries)\n self.extra['choices'].insert(0, EMPTY_SELECTION)\n return super().field\n\n\nclass ForeignKeyAllValuesFilter(django_filters.ChoiceFilter):\n def __init__(self, model, *args, **kwargs):\n self.lookup_model = model\n super().__init__(*args, **kwargs)\n\n @property\n def field(self):\n name = self.name\n model = self.lookup_model\n\n qs1 = self.model._default_manager.distinct()\n qs1 = qs1.order_by(name).values_list(name, flat=True)\n qs2 = model.objects.filter(pk__in=qs1)\n self.extra['choices'] = [(o.pk, str(o)) for o in qs2]\n self.extra['choices'].insert(0, EMPTY_SELECTION)\n return super().field\n\n\nclass EventStateFilter(django_filters.ChoiceFilter):\n def filter(self, qs, value):\n if isinstance(value, django_filters.fields.Lookup):\n value = value.value\n\n # no filtering\n if value in ([], (), {}, None, '', 'all'):\n return qs\n\n # no need to check if value exists in self.extra['choices'] because\n # validation is done by django_filters\n try:\n return getattr(qs, \"{}_events\".format(value))()\n except AttributeError:\n return qs\n\n\nclass EventFilter(django_filters.FilterSet):\n assigned_to = ForeignKeyAllValuesFilter(Person)\n host = ForeignKeyAllValuesFilter(Host)\n 
administrator = ForeignKeyAllValuesFilter(Host)\n\n STATUS_CHOICES = [\n ('', 'All'),\n ('past', 'Past'),\n ('ongoing', 'Ongoing'),\n ('upcoming', 'Upcoming'),\n ('unpublished', 'Unpublished'),\n ('uninvoiced', 'Uninvoiced'),\n ]\n status = EventStateFilter(choices=STATUS_CHOICES)\n\n invoice_status = django_filters.ChoiceFilter(\n choices=(EMPTY_SELECTION, ) + Event.INVOICED_CHOICES,\n )\n\n class Meta:\n model = Event\n fields = [\n 'assigned_to',\n 'tags',\n 'host',\n 'administrator',\n 'invoice_status',\n 'completed',\n ]\n order_by = ['-slug', 'slug', 'start', '-start', 'end', '-end']\n\n\nclass EventRequestFilter(django_filters.FilterSet):\n assigned_to = ForeignKeyAllValuesFilter(Person)\n country = AllCountriesFilter()\n active = django_filters.TypedChoiceFilter(\n choices=(('true', 'Open'), ('false', 'Closed')),\n coerce=strtobool,\n label='Status',\n widget=django.forms.widgets.RadioSelect,\n )\n\n class Meta:\n model = EventRequest\n fields = [\n 'assigned_to',\n 'workshop_type',\n 'active',\n 'country',\n ]\n order_by = ['-created_at', 'created_at']\n\n\nclass HostFilter(django_filters.FilterSet):\n country = AllCountriesFilter()\n\n class Meta:\n model = Host\n fields = [\n 'country',\n ]\n order_by = ['fullname', '-fullname', 'domain', '-domain', ]\n\n\nclass PersonFilter(django_filters.FilterSet):\n class Meta:\n model = Person\n fields = [\n 'badges',\n ]\n order_by = [\"lastname\", \"-lastname\", \"firstname\", \"-firstname\",\n \"email\", \"-email\"]\n\n def get_order_by(self, order_value):\n if order_value == 'firstname':\n return ['personal', 'middle', 'family']\n elif order_value == '-firstname':\n return ['-personal', '-middle', '-family']\n elif order_value == 'lastname':\n return ['family', 'middle', 'personal']\n elif order_value == '-lastname':\n return ['-family', '-middle', '-personal']\n return super().get_order_by(order_value)\n\n\nclass TaskFilter(django_filters.FilterSet):\n class Meta:\n model = Task\n fields = [\n 'event',\n # can't filter on person because person's name contains 3 fields:\n # person.personal, person.middle, person.family\n # 'person',\n 'role',\n ]\n order_by = [\n ['event__slug', 'Event'],\n ['-event__slug', 'Event (descending)'],\n ['person__family', 'Person'],\n ['-person__family', 'Person (descending)'],\n ['role', 'Role'],\n ['-role', 'Role (descending)'],\n ]\n\n\nclass AirportFilter(django_filters.FilterSet):\n fullname = django_filters.CharFilter(lookup_type='icontains')\n\n class Meta:\n model = Airport\n fields = [\n 'fullname',\n ]\n order_by = [\"iata\", \"-iata\", \"fullname\", \"-fullname\"]\n", "path": "workshops/filters.py"}]}
| 2,147 | 410 |
gh_patches_debug_8168
|
rasdani/github-patches
|
git_diff
|
pyg-team__pytorch_geometric-7387
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fail to import Nell dataset
### 🐛 Describe the bug
I tried to import the NELL dataset using the NELL class:
from torch_geometric.datasets import NELL
dataset = NELL(root='data/Nell')
data = dataset[0]
But I got the following error message:
Traceback (most recent call last):
File "c:\Users\13466\Desktop\USTLab\LabDoc\HPCA23\Nell.py", line 10, in <module>
dataset = NELL(root='data/Nell')
File "C:\Users\13466\anaconda3\lib\site-packages\torch_geometric\datasets\nell.py", line 62, in __init__
super().__init__(root, transform, pre_transform)
File "C:\Users\13466\anaconda3\lib\site-packages\torch_geometric\data\in_memory_dataset.py", line 57, in __init__
super().__init__(root, transform, pre_transform, pre_filter, log)
File "C:\Users\13466\anaconda3\lib\site-packages\torch_geometric\data\dataset.py", line 97, in __init__
self._process()
File "C:\Users\13466\anaconda3\lib\site-packages\torch_geometric\data\dataset.py", line 230, in _process
self.process()
File "C:\Users\13466\anaconda3\lib\site-packages\torch_geometric\datasets\nell.py", line 82, in process
data = read_planetoid_data(self.raw_dir, 'nell.0.001')
File "C:\Users\13466\anaconda3\lib\site-packages\torch_geometric\io\planetoid.py", line 53, in read_planetoid_data
row, col, value = SparseTensor.from_dense(x).coo()
AttributeError: type object 'SparseTensor' has no attribute 'from_dense'
### Environment
* PyG version:2.3.1
* PyTorch version:2.0.1
* OS:Windows 11
* Python version:3.10
* CUDA/cuDNN version:
* How you installed PyTorch and PyG (`conda`, `pip`, source):pip
* Any other relevant information (*e.g.*, version of `torch-scatter`):
</issue>
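The AttributeError in the traceback is a symptom of the import fallback in `torch_geometric/typing.py`, shown below: when `torch-sparse` cannot be imported, `SparseTensor` is a placeholder class that, before the fix, simply does not define `from_dense`. A small diagnostic sketch follows; installing a matching `torch-sparse` wheel is the practical workaround until the stub is extended.

```python
import torch_geometric.typing as pyg_typing

# False here means the placeholder class is in use instead of the real
# torch_sparse.SparseTensor.
print(pyg_typing.WITH_TORCH_SPARSE)
print(hasattr(pyg_typing.SparseTensor, "from_dense"))  # False before the fix
```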
<code>
[start of torch_geometric/typing.py]
1 import warnings
2 from typing import Dict, List, Optional, Tuple, Union
3
4 import numpy as np
5 import torch
6 from torch import Tensor
7
8 WITH_PT2 = int(torch.__version__.split('.')[0]) >= 2
9
10 try:
11 import pyg_lib # noqa
12 WITH_PYG_LIB = True
13 WITH_GMM = WITH_PT2 and hasattr(pyg_lib.ops, 'grouped_matmul')
14 WITH_SAMPLED_OP = hasattr(pyg_lib.ops, 'sampled_add')
15 WITH_INDEX_SORT = hasattr(pyg_lib.ops, 'index_sort')
16 except (ImportError, OSError) as e:
17 if isinstance(e, OSError):
18 warnings.warn(f"An issue occurred while importing 'pyg-lib'. "
19 f"Disabling its usage. Stacktrace: {e}")
20 pyg_lib = object
21 WITH_PYG_LIB = False
22 WITH_GMM = False
23 WITH_SAMPLED_OP = False
24 WITH_INDEX_SORT = False
25
26 try:
27 import torch_scatter # noqa
28 WITH_TORCH_SCATTER = True
29 except (ImportError, OSError) as e:
30 if isinstance(e, OSError):
31 warnings.warn(f"An issue occurred while importing 'torch-scatter'. "
32 f"Disabling its usage. Stacktrace: {e}")
33 torch_scatter = object
34 WITH_TORCH_SCATTER = False
35
36 try:
37 import torch_cluster # noqa
38 WITH_TORCH_CLUSTER = True
39 WITH_TORCH_CLUSTER_BATCH_SIZE = 'batch_size' in torch_cluster.knn.__doc__
40 except (ImportError, OSError) as e:
41 if isinstance(e, OSError):
42 warnings.warn(f"An issue occurred while importing 'torch-cluster'. "
43 f"Disabling its usage. Stacktrace: {e}")
44 WITH_TORCH_CLUSTER = False
45
46 try:
47 import torch_spline_conv # noqa
48 WITH_TORCH_SPLINE_CONV = True
49 except (ImportError, OSError) as e:
50 if isinstance(e, OSError):
51 warnings.warn(
52 f"An issue occurred while importing 'torch-spline-conv'. "
53 f"Disabling its usage. Stacktrace: {e}")
54 WITH_TORCH_SPLINE_CONV = False
55
56 try:
57 import torch_sparse # noqa
58 from torch_sparse import SparseStorage, SparseTensor
59 WITH_TORCH_SPARSE = True
60 except (ImportError, OSError) as e:
61 if isinstance(e, OSError):
62 warnings.warn(f"An issue occurred while importing 'torch-sparse'. "
63 f"Disabling its usage. Stacktrace: {e}")
64 WITH_TORCH_SPARSE = False
65
66 class SparseStorage:
67 def __init__(
68 self,
69 row: Optional[Tensor] = None,
70 rowptr: Optional[Tensor] = None,
71 col: Optional[Tensor] = None,
72 value: Optional[Tensor] = None,
73 sparse_sizes: Optional[Tuple[Optional[int], Optional[int]]] = None,
74 rowcount: Optional[Tensor] = None,
75 colptr: Optional[Tensor] = None,
76 colcount: Optional[Tensor] = None,
77 csr2csc: Optional[Tensor] = None,
78 csc2csr: Optional[Tensor] = None,
79 is_sorted: bool = False,
80 trust_data: bool = False,
81 ):
82 raise ImportError("'SparseStorage' requires 'torch-sparse'")
83
84 class SparseTensor:
85 def __init__(
86 self,
87 row: Optional[Tensor] = None,
88 rowptr: Optional[Tensor] = None,
89 col: Optional[Tensor] = None,
90 value: Optional[Tensor] = None,
91 sparse_sizes: Optional[Tuple[Optional[int], Optional[int]]] = None,
92 is_sorted: bool = False,
93 trust_data: bool = False,
94 ):
95 raise ImportError("'SparseTensor' requires 'torch-sparse'")
96
97 @classmethod
98 def from_edge_index(
99 self,
100 edge_index: Tensor,
101 edge_attr: Optional[Tensor] = None,
102 sparse_sizes: Optional[Tuple[Optional[int], Optional[int]]] = None,
103 is_sorted: bool = False,
104 trust_data: bool = False,
105 ) -> 'SparseTensor':
106 raise ImportError("'SparseTensor' requires 'torch-sparse'")
107
108 def size(self, dim: int) -> int:
109 raise ImportError("'SparseTensor' requires 'torch-sparse'")
110
111 def is_cuda(self) -> bool:
112 raise ImportError("'SparseTensor' requires 'torch-sparse'")
113
114 def has_value(self) -> bool:
115 raise ImportError("'SparseTensor' requires 'torch-sparse'")
116
117 def set_value(self, value: Optional[Tensor],
118 layout: Optional[str] = None) -> 'SparseTensor':
119 raise ImportError("'SparseTensor' requires 'torch-sparse'")
120
121 def fill_value(self, fill_value: float,
122 dtype: Optional[torch.dtype] = None) -> 'SparseTensor':
123 raise ImportError("'SparseTensor' requires 'torch-sparse'")
124
125 def coo(self) -> Tuple[Tensor, Tensor, Optional[Tensor]]:
126 raise ImportError("'SparseTensor' requires 'torch-sparse'")
127
128 def csr(self) -> Tuple[Tensor, Tensor, Optional[Tensor]]:
129 raise ImportError("'SparseTensor' requires 'torch-sparse'")
130
131 def to_torch_sparse_csr_tensor(
132 self,
133 dtype: Optional[torch.dtype] = None,
134 ) -> Tensor:
135 raise ImportError("'SparseTensor' requires 'torch-sparse'")
136
137 class torch_sparse:
138 @staticmethod
139 def matmul(src: SparseTensor, other: Tensor,
140 reduce: str = "sum") -> Tensor:
141 raise ImportError("'matmul' requires 'torch-sparse'")
142
143 @staticmethod
144 def sum(src: SparseTensor, dim: Optional[int] = None) -> Tensor:
145 raise ImportError("'sum' requires 'torch-sparse'")
146
147 @staticmethod
148 def mul(src: SparseTensor, other: Tensor) -> SparseTensor:
149 raise ImportError("'mul' requires 'torch-sparse'")
150
151 @staticmethod
152 def set_diag(src: SparseTensor, values: Optional[Tensor] = None,
153 k: int = 0) -> SparseTensor:
154 raise ImportError("'set_diag' requires 'torch-sparse'")
155
156 @staticmethod
157 def fill_diag(src: SparseTensor, fill_value: float,
158 k: int = 0) -> SparseTensor:
159 raise ImportError("'fill_diag' requires 'torch-sparse'")
160
161 @staticmethod
162 def masked_select_nnz(src: SparseTensor, mask: Tensor,
163 layout: Optional[str] = None) -> SparseTensor:
164 raise ImportError("'masked_select_nnz' requires 'torch-sparse'")
165
166
167 # Types for accessing data ####################################################
168
169 # Node-types are denoted by a single string, e.g.: `data['paper']`:
170 NodeType = str
171
172 # Edge-types are denotes by a triplet of strings, e.g.:
173 # `data[('author', 'writes', 'paper')]
174 EdgeType = Tuple[str, str, str]
175
176 DEFAULT_REL = 'to'
177 EDGE_TYPE_STR_SPLIT = '__'
178
179
180 class EdgeTypeStr(str):
181 r"""A helper class to construct serializable edge types by merging an edge
182 type tuple into a single string."""
183 def __new__(cls, *args):
184 if isinstance(args[0], (list, tuple)):
185 # Unwrap `EdgeType((src, rel, dst))` and `EdgeTypeStr((src, dst))`:
186 args = tuple(args[0])
187
188 if len(args) == 1 and isinstance(args[0], str):
189 args = args[0] # An edge type string was passed.
190
191 elif len(args) == 2 and all(isinstance(arg, str) for arg in args):
192 # A `(src, dst)` edge type was passed - add `DEFAULT_REL`:
193 args = (args[0], DEFAULT_REL, args[1])
194 args = EDGE_TYPE_STR_SPLIT.join(args)
195
196 elif len(args) == 3 and all(isinstance(arg, str) for arg in args):
197 # A `(src, rel, dst)` edge type was passed:
198 args = EDGE_TYPE_STR_SPLIT.join(args)
199
200 else:
201 raise ValueError(f"Encountered invalid edge type '{args}'")
202
203 return str.__new__(cls, args)
204
205 def to_tuple(self) -> EdgeType:
206 r"""Returns the original edge type."""
207 out = tuple(self.split(EDGE_TYPE_STR_SPLIT))
208 if len(out) != 3:
209 raise ValueError(f"Cannot convert the edge type '{self}' to a "
210 f"tuple since it holds invalid characters")
211 return out
212
213
214 # There exist some short-cuts to query edge-types (given that the full triplet
215 # can be uniquely reconstructed, e.g.:
216 # * via str: `data['writes']`
217 # * via Tuple[str, str]: `data[('author', 'paper')]`
218 QueryType = Union[NodeType, EdgeType, str, Tuple[str, str]]
219
220 Metadata = Tuple[List[NodeType], List[EdgeType]]
221
222 # A representation of a feature tensor
223 FeatureTensorType = Union[Tensor, np.ndarray]
224
225 # A representation of an edge index, following the possible formats:
226 # * COO: (row, col)
227 # * CSC: (row, colptr)
228 # * CSR: (rowptr, col)
229 EdgeTensorType = Tuple[Tensor, Tensor]
230
231 # Types for message passing ###################################################
232
233 Adj = Union[Tensor, SparseTensor]
234 OptTensor = Optional[Tensor]
235 PairTensor = Tuple[Tensor, Tensor]
236 OptPairTensor = Tuple[Tensor, Optional[Tensor]]
237 PairOptTensor = Tuple[Optional[Tensor], Optional[Tensor]]
238 Size = Optional[Tuple[int, int]]
239 NoneType = Optional[Tensor]
240
241 MaybeHeteroNodeTensor = Union[Tensor, Dict[NodeType, Tensor]]
242 MaybeHeteroEdgeTensor = Union[Tensor, Dict[EdgeType, Tensor]]
243
244 # Types for sampling ##########################################################
245
246 InputNodes = Union[OptTensor, NodeType, Tuple[NodeType, OptTensor]]
247 InputEdges = Union[OptTensor, EdgeType, Tuple[EdgeType, OptTensor]]
248
[end of torch_geometric/typing.py]
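Each method the fallback `SparseTensor` above does define raises a clear ImportError, but anything it omits surfaces as a bare AttributeError, which is exactly what the NELL loader hit. A minimal sketch of the missing piece, written in the style of the existing `from_edge_index` stub (illustration only, placed on a throwaway class here):

```python
from torch import Tensor

class _SparseTensorStub:
    @classmethod
    def from_dense(cls, mat: Tensor,
                   has_value: bool = True) -> "_SparseTensorStub":
        raise ImportError("'SparseTensor' requires 'torch-sparse'")
```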
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torch_geometric/typing.py b/torch_geometric/typing.py
--- a/torch_geometric/typing.py
+++ b/torch_geometric/typing.py
@@ -105,6 +105,11 @@
) -> 'SparseTensor':
raise ImportError("'SparseTensor' requires 'torch-sparse'")
+ @classmethod
+ def from_dense(self, mat: Tensor,
+ has_value: bool = True) -> 'SparseTensor':
+ raise ImportError("'SparseTensor' requires 'torch-sparse'")
+
def size(self, dim: int) -> int:
raise ImportError("'SparseTensor' requires 'torch-sparse'")
|
{"golden_diff": "diff --git a/torch_geometric/typing.py b/torch_geometric/typing.py\n--- a/torch_geometric/typing.py\n+++ b/torch_geometric/typing.py\n@@ -105,6 +105,11 @@\n ) -> 'SparseTensor':\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n \n+ @classmethod\n+ def from_dense(self, mat: Tensor,\n+ has_value: bool = True) -> 'SparseTensor':\n+ raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n+\n def size(self, dim: int) -> int:\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n", "issue": "Fail to import Nell dataset\n### \ud83d\udc1b Describe the bug\n\nI tried to import Nell data set using NELL class:\r\nfrom torch_geometric.datasets import NELL\r\n\r\ndataset = NELL(root='data/Nell')\r\ndata = dataset[0]\r\n\r\nBut I got the following error message:\r\nTraceback (most recent call last):\r\n File \"c:\\Users\\13466\\Desktop\\USTLab\\LabDoc\\HPCA23\\Nell.py\", line 10, in <module>\r\n dataset = NELL(root='data/Nell')\r\n File \"C:\\Users\\13466\\anaconda3\\lib\\site-packages\\torch_geometric\\datasets\\nell.py\", line 62, in __init__\r\n super().__init__(root, transform, pre_transform)\r\n File \"C:\\Users\\13466\\anaconda3\\lib\\site-packages\\torch_geometric\\data\\in_memory_dataset.py\", line 57, in __init__\r\n super().__init__(root, transform, pre_transform, pre_filter, log)\r\n File \"C:\\Users\\13466\\anaconda3\\lib\\site-packages\\torch_geometric\\data\\dataset.py\", line 97, in __init__\r\n self._process()\r\n File \"C:\\Users\\13466\\anaconda3\\lib\\site-packages\\torch_geometric\\data\\dataset.py\", line 230, in _process\r\n self.process()\r\n File \"C:\\Users\\13466\\anaconda3\\lib\\site-packages\\torch_geometric\\datasets\\nell.py\", line 82, in process\r\n data = read_planetoid_data(self.raw_dir, 'nell.0.001')\r\n File \"C:\\Users\\13466\\anaconda3\\lib\\site-packages\\torch_geometric\\io\\planetoid.py\", line 53, in read_planetoid_data\r\n row, col, value = SparseTensor.from_dense(x).coo()\r\nAttributeError: type object 'SparseTensor' has no attribute 'from_dense'\r\n\r\n\n\n### Environment\n\n* PyG version:2.3.1\r\n* PyTorch version:2.0.1\r\n* OS:Windows 11\r\n* Python version:3.10\r\n* CUDA/cuDNN version:\r\n* How you installed PyTorch and PyG (`conda`, `pip`, source):pip\r\n* Any other relevant information (*e.g.*, version of `torch-scatter`):\r\n\n", "before_files": [{"content": "import warnings\nfrom typing import Dict, List, Optional, Tuple, Union\n\nimport numpy as np\nimport torch\nfrom torch import Tensor\n\nWITH_PT2 = int(torch.__version__.split('.')[0]) >= 2\n\ntry:\n import pyg_lib # noqa\n WITH_PYG_LIB = True\n WITH_GMM = WITH_PT2 and hasattr(pyg_lib.ops, 'grouped_matmul')\n WITH_SAMPLED_OP = hasattr(pyg_lib.ops, 'sampled_add')\n WITH_INDEX_SORT = hasattr(pyg_lib.ops, 'index_sort')\nexcept (ImportError, OSError) as e:\n if isinstance(e, OSError):\n warnings.warn(f\"An issue occurred while importing 'pyg-lib'. \"\n f\"Disabling its usage. Stacktrace: {e}\")\n pyg_lib = object\n WITH_PYG_LIB = False\n WITH_GMM = False\n WITH_SAMPLED_OP = False\n WITH_INDEX_SORT = False\n\ntry:\n import torch_scatter # noqa\n WITH_TORCH_SCATTER = True\nexcept (ImportError, OSError) as e:\n if isinstance(e, OSError):\n warnings.warn(f\"An issue occurred while importing 'torch-scatter'. \"\n f\"Disabling its usage. 
Stacktrace: {e}\")\n torch_scatter = object\n WITH_TORCH_SCATTER = False\n\ntry:\n import torch_cluster # noqa\n WITH_TORCH_CLUSTER = True\n WITH_TORCH_CLUSTER_BATCH_SIZE = 'batch_size' in torch_cluster.knn.__doc__\nexcept (ImportError, OSError) as e:\n if isinstance(e, OSError):\n warnings.warn(f\"An issue occurred while importing 'torch-cluster'. \"\n f\"Disabling its usage. Stacktrace: {e}\")\n WITH_TORCH_CLUSTER = False\n\ntry:\n import torch_spline_conv # noqa\n WITH_TORCH_SPLINE_CONV = True\nexcept (ImportError, OSError) as e:\n if isinstance(e, OSError):\n warnings.warn(\n f\"An issue occurred while importing 'torch-spline-conv'. \"\n f\"Disabling its usage. Stacktrace: {e}\")\n WITH_TORCH_SPLINE_CONV = False\n\ntry:\n import torch_sparse # noqa\n from torch_sparse import SparseStorage, SparseTensor\n WITH_TORCH_SPARSE = True\nexcept (ImportError, OSError) as e:\n if isinstance(e, OSError):\n warnings.warn(f\"An issue occurred while importing 'torch-sparse'. \"\n f\"Disabling its usage. Stacktrace: {e}\")\n WITH_TORCH_SPARSE = False\n\n class SparseStorage:\n def __init__(\n self,\n row: Optional[Tensor] = None,\n rowptr: Optional[Tensor] = None,\n col: Optional[Tensor] = None,\n value: Optional[Tensor] = None,\n sparse_sizes: Optional[Tuple[Optional[int], Optional[int]]] = None,\n rowcount: Optional[Tensor] = None,\n colptr: Optional[Tensor] = None,\n colcount: Optional[Tensor] = None,\n csr2csc: Optional[Tensor] = None,\n csc2csr: Optional[Tensor] = None,\n is_sorted: bool = False,\n trust_data: bool = False,\n ):\n raise ImportError(\"'SparseStorage' requires 'torch-sparse'\")\n\n class SparseTensor:\n def __init__(\n self,\n row: Optional[Tensor] = None,\n rowptr: Optional[Tensor] = None,\n col: Optional[Tensor] = None,\n value: Optional[Tensor] = None,\n sparse_sizes: Optional[Tuple[Optional[int], Optional[int]]] = None,\n is_sorted: bool = False,\n trust_data: bool = False,\n ):\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n @classmethod\n def from_edge_index(\n self,\n edge_index: Tensor,\n edge_attr: Optional[Tensor] = None,\n sparse_sizes: Optional[Tuple[Optional[int], Optional[int]]] = None,\n is_sorted: bool = False,\n trust_data: bool = False,\n ) -> 'SparseTensor':\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n def size(self, dim: int) -> int:\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n def is_cuda(self) -> bool:\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n def has_value(self) -> bool:\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n def set_value(self, value: Optional[Tensor],\n layout: Optional[str] = None) -> 'SparseTensor':\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n def fill_value(self, fill_value: float,\n dtype: Optional[torch.dtype] = None) -> 'SparseTensor':\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n def coo(self) -> Tuple[Tensor, Tensor, Optional[Tensor]]:\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n def csr(self) -> Tuple[Tensor, Tensor, Optional[Tensor]]:\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n def to_torch_sparse_csr_tensor(\n self,\n dtype: Optional[torch.dtype] = None,\n ) -> Tensor:\n raise ImportError(\"'SparseTensor' requires 'torch-sparse'\")\n\n class torch_sparse:\n @staticmethod\n def matmul(src: SparseTensor, other: Tensor,\n reduce: str = \"sum\") -> Tensor:\n raise ImportError(\"'matmul' requires 'torch-sparse'\")\n\n @staticmethod\n def 
sum(src: SparseTensor, dim: Optional[int] = None) -> Tensor:\n raise ImportError(\"'sum' requires 'torch-sparse'\")\n\n @staticmethod\n def mul(src: SparseTensor, other: Tensor) -> SparseTensor:\n raise ImportError(\"'mul' requires 'torch-sparse'\")\n\n @staticmethod\n def set_diag(src: SparseTensor, values: Optional[Tensor] = None,\n k: int = 0) -> SparseTensor:\n raise ImportError(\"'set_diag' requires 'torch-sparse'\")\n\n @staticmethod\n def fill_diag(src: SparseTensor, fill_value: float,\n k: int = 0) -> SparseTensor:\n raise ImportError(\"'fill_diag' requires 'torch-sparse'\")\n\n @staticmethod\n def masked_select_nnz(src: SparseTensor, mask: Tensor,\n layout: Optional[str] = None) -> SparseTensor:\n raise ImportError(\"'masked_select_nnz' requires 'torch-sparse'\")\n\n\n# Types for accessing data ####################################################\n\n# Node-types are denoted by a single string, e.g.: `data['paper']`:\nNodeType = str\n\n# Edge-types are denotes by a triplet of strings, e.g.:\n# `data[('author', 'writes', 'paper')]\nEdgeType = Tuple[str, str, str]\n\nDEFAULT_REL = 'to'\nEDGE_TYPE_STR_SPLIT = '__'\n\n\nclass EdgeTypeStr(str):\n r\"\"\"A helper class to construct serializable edge types by merging an edge\n type tuple into a single string.\"\"\"\n def __new__(cls, *args):\n if isinstance(args[0], (list, tuple)):\n # Unwrap `EdgeType((src, rel, dst))` and `EdgeTypeStr((src, dst))`:\n args = tuple(args[0])\n\n if len(args) == 1 and isinstance(args[0], str):\n args = args[0] # An edge type string was passed.\n\n elif len(args) == 2 and all(isinstance(arg, str) for arg in args):\n # A `(src, dst)` edge type was passed - add `DEFAULT_REL`:\n args = (args[0], DEFAULT_REL, args[1])\n args = EDGE_TYPE_STR_SPLIT.join(args)\n\n elif len(args) == 3 and all(isinstance(arg, str) for arg in args):\n # A `(src, rel, dst)` edge type was passed:\n args = EDGE_TYPE_STR_SPLIT.join(args)\n\n else:\n raise ValueError(f\"Encountered invalid edge type '{args}'\")\n\n return str.__new__(cls, args)\n\n def to_tuple(self) -> EdgeType:\n r\"\"\"Returns the original edge type.\"\"\"\n out = tuple(self.split(EDGE_TYPE_STR_SPLIT))\n if len(out) != 3:\n raise ValueError(f\"Cannot convert the edge type '{self}' to a \"\n f\"tuple since it holds invalid characters\")\n return out\n\n\n# There exist some short-cuts to query edge-types (given that the full triplet\n# can be uniquely reconstructed, e.g.:\n# * via str: `data['writes']`\n# * via Tuple[str, str]: `data[('author', 'paper')]`\nQueryType = Union[NodeType, EdgeType, str, Tuple[str, str]]\n\nMetadata = Tuple[List[NodeType], List[EdgeType]]\n\n# A representation of a feature tensor\nFeatureTensorType = Union[Tensor, np.ndarray]\n\n# A representation of an edge index, following the possible formats:\n# * COO: (row, col)\n# * CSC: (row, colptr)\n# * CSR: (rowptr, col)\nEdgeTensorType = Tuple[Tensor, Tensor]\n\n# Types for message passing ###################################################\n\nAdj = Union[Tensor, SparseTensor]\nOptTensor = Optional[Tensor]\nPairTensor = Tuple[Tensor, Tensor]\nOptPairTensor = Tuple[Tensor, Optional[Tensor]]\nPairOptTensor = Tuple[Optional[Tensor], Optional[Tensor]]\nSize = Optional[Tuple[int, int]]\nNoneType = Optional[Tensor]\n\nMaybeHeteroNodeTensor = Union[Tensor, Dict[NodeType, Tensor]]\nMaybeHeteroEdgeTensor = Union[Tensor, Dict[EdgeType, Tensor]]\n\n# Types for sampling ##########################################################\n\nInputNodes = Union[OptTensor, NodeType, Tuple[NodeType, OptTensor]]\nInputEdges = 
Union[OptTensor, EdgeType, Tuple[EdgeType, OptTensor]]\n", "path": "torch_geometric/typing.py"}]}
| 3,891 | 146 |
gh_patches_debug_35133
|
rasdani/github-patches
|
git_diff
|
cowrie__cowrie-1472
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MalShare uploader not working
**Describe the bug**
In my config I have
```
[output_malshare]
enabled = true
```
and in my logs I have
```
[stdout#info] Sending file to MalShare
[stdout#info] Submited to MalShare
```
but when I check on MalShare I can't find any of the binaries that have been caught in my honeypot.
**To Reproduce**
Steps to reproduce the behavior:
1. Enable MalShare submission in your config
2. Wait for a bot to drop a binary in your honeypot
3. Try to find the binary on malshare (search by md5)
4. Observe that the binary is not there
**Expected behavior**
The binary should be uploaded successfully to MalShare
**Server (please complete the following information):**
- OS: [e.g. RedHat Linux 7.1, output of uname -a] Ubuntu 20.04, Linux 5.4.0
- Python: 3.8.5
**Additional context**
Based on [MalShare's API docs](https://malshare.com/doc.php) it seems that uploading files now requires an API key and a slightly different POST path than the one [defined in cowrie](https://github.com/cowrie/cowrie/blob/b848ec261554ee9128640601eb9a6734b2bffefe/src/cowrie/output/malshare.py#L90). Probably adding an API key option to the config and updating the uploader with the new path and to use the API key will solve this.
</issue>
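The direction the report suggests, authenticating the upload with an API key, looks roughly like the sketch below. The `[output_malshare] api_key` option, the endpoint query string and the `upload` form field are assumptions based on MalShare's documented API; the project's actual patch appears later in this record.

```python
import requests

from cowrie.core.config import CowrieConfig

def post_to_malshare(artifact_path: str) -> None:
    # Hedged sketch: upload one captured file using an API key read from
    # cowrie.cfg ([output_malshare] api_key = ...).
    api_key = CowrieConfig().get("output_malshare", "api_key")
    with open(artifact_path, "rb") as f:
        res = requests.post(
            "https://malshare.com/api.php?api_key=" + api_key + "&action=upload",
            files={"upload": f},
        )
    res.raise_for_status()
```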
<code>
[start of src/cowrie/output/malshare.py]
1 # Copyright (c) 2015 Michel Oosterhof <[email protected]>
2 # All rights reserved.
3 #
4 # Redistribution and use in source and binary forms, with or without
5 # modification, are permitted provided that the following conditions
6 # are met:
7 #
8 # 1. Redistributions of source code must retain the above copyright
9 # notice, this list of conditions and the following disclaimer.
10 # 2. Redistributions in binary form must reproduce the above copyright
11 # notice, this list of conditions and the following disclaimer in the
12 # documentation and/or other materials provided with the distribution.
13 # 3. The names of the author(s) may not be used to endorse or promote
14 # products derived from this software without specific prior written
15 # permission.
16 #
17 # THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS`` AND ANY EXPRESS OR
18 # IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
19 # OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
20 # IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY DIRECT, INDIRECT,
21 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
22 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
24 # AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
25 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
26 # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
27 # SUCH DAMAGE.
28
29 """
30 Send files to https://malshare.com/
31 More info https://malshare.com/doc.php
32 """
33
34 from __future__ import absolute_import, division
35
36 import os
37
38 try:
39 from urllib.parse import urlparse
40 except ImportError:
41 from urlparse import urlparse
42 import requests
43
44 import cowrie.core.output
45
46
47 class Output(cowrie.core.output.Output):
48 """
49 malshare output
50
51 TODO: use `treq`
52 """
53 def start(self):
54 """
55 Start output plugin
56 """
57 pass
58
59 def stop(self):
60 """
61 Stop output plugin
62 """
63 pass
64
65 def write(self, entry):
66 if entry["eventid"] == "cowrie.session.file_download":
67 print("Sending file to MalShare")
68 p = urlparse(entry["url"]).path
69 if p == "":
70 fileName = entry["shasum"]
71 else:
72 b = os.path.basename(p)
73 if b == "":
74 fileName = entry["shasum"]
75 else:
76 fileName = b
77
78 self.postfile(entry["outfile"], fileName)
79
80 elif entry["eventid"] == "cowrie.session.file_upload":
81 print("Sending file to MalShare")
82 self.postfile(entry["outfile"], entry["filename"])
83
84 def postfile(self, artifact, fileName):
85 """
86 Send a file to MalShare
87 """
88 try:
89 res = requests.post(
90 "https://malshare.com/api.php?mode=cli",
91 files={fileName: open(artifact, "rb")}
92 )
93 if res and res.ok:
94 print("Submited to MalShare")
95 else:
96 print("MalShare Request failed: {}".format(res.status_code))
97 except Exception as e:
98 print("MalShare Request failed: {}".format(e))
99
[end of src/cowrie/output/malshare.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cowrie/output/malshare.py b/src/cowrie/output/malshare.py
--- a/src/cowrie/output/malshare.py
+++ b/src/cowrie/output/malshare.py
@@ -41,7 +41,10 @@
from urlparse import urlparse
import requests
+from twisted.python import log
+
import cowrie.core.output
+from cowrie.core.config import CowrieConfig
class Output(cowrie.core.output.Output):
@@ -54,7 +57,7 @@
"""
Start output plugin
"""
- pass
+ self.apiKey = CowrieConfig().get('output_malshare', 'api_key')
def stop(self):
"""
@@ -64,7 +67,6 @@
def write(self, entry):
if entry["eventid"] == "cowrie.session.file_download":
- print("Sending file to MalShare")
p = urlparse(entry["url"]).path
if p == "":
fileName = entry["shasum"]
@@ -78,7 +80,6 @@
self.postfile(entry["outfile"], fileName)
elif entry["eventid"] == "cowrie.session.file_upload":
- print("Sending file to MalShare")
self.postfile(entry["outfile"], entry["filename"])
def postfile(self, artifact, fileName):
@@ -87,12 +88,12 @@
"""
try:
res = requests.post(
- "https://malshare.com/api.php?mode=cli",
- files={fileName: open(artifact, "rb")}
+ "https://malshare.com/api.php?api_key="+self.apiKey+"&action=upload",
+ files={"upload": open(artifact, "rb")}
)
if res and res.ok:
- print("Submited to MalShare")
+ log.msg("Submitted to MalShare")
else:
- print("MalShare Request failed: {}".format(res.status_code))
+ log.msg("MalShare Request failed: {}".format(res.status_code))
except Exception as e:
- print("MalShare Request failed: {}".format(e))
+ log.msg("MalShare Request failed: {}".format(e))
|
{"golden_diff": "diff --git a/src/cowrie/output/malshare.py b/src/cowrie/output/malshare.py\n--- a/src/cowrie/output/malshare.py\n+++ b/src/cowrie/output/malshare.py\n@@ -41,7 +41,10 @@\n from urlparse import urlparse\n import requests\n \n+from twisted.python import log\n+\n import cowrie.core.output\n+from cowrie.core.config import CowrieConfig\n \n \n class Output(cowrie.core.output.Output):\n@@ -54,7 +57,7 @@\n \"\"\"\n Start output plugin\n \"\"\"\n- pass\n+ self.apiKey = CowrieConfig().get('output_malshare', 'api_key')\n \n def stop(self):\n \"\"\"\n@@ -64,7 +67,6 @@\n \n def write(self, entry):\n if entry[\"eventid\"] == \"cowrie.session.file_download\":\n- print(\"Sending file to MalShare\")\n p = urlparse(entry[\"url\"]).path\n if p == \"\":\n fileName = entry[\"shasum\"]\n@@ -78,7 +80,6 @@\n self.postfile(entry[\"outfile\"], fileName)\n \n elif entry[\"eventid\"] == \"cowrie.session.file_upload\":\n- print(\"Sending file to MalShare\")\n self.postfile(entry[\"outfile\"], entry[\"filename\"])\n \n def postfile(self, artifact, fileName):\n@@ -87,12 +88,12 @@\n \"\"\"\n try:\n res = requests.post(\n- \"https://malshare.com/api.php?mode=cli\",\n- files={fileName: open(artifact, \"rb\")}\n+ \"https://malshare.com/api.php?api_key=\"+self.apiKey+\"&action=upload\",\n+ files={\"upload\": open(artifact, \"rb\")}\n )\n if res and res.ok:\n- print(\"Submited to MalShare\")\n+ log.msg(\"Submitted to MalShare\")\n else:\n- print(\"MalShare Request failed: {}\".format(res.status_code))\n+ log.msg(\"MalShare Request failed: {}\".format(res.status_code))\n except Exception as e:\n- print(\"MalShare Request failed: {}\".format(e))\n+ log.msg(\"MalShare Request failed: {}\".format(e))\n", "issue": "MalShare uploader not working\n**Describe the bug**\r\nIn my config I have\r\n```\r\n[output_malshare]\r\nenabled = true\r\n```\r\n\r\nand in my logs I have\r\n```\r\n[stdout#info] Sending file to MalShare\r\n[stdout#info] Submited to MalShare\r\n```\r\n\r\nbut when I check on MalShare I can't find any the binaries that have been caught in my honeypot.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Enable MalShare submission in your config\r\n2. Wait for a bot to drop a binary in your honeypot\r\n3. Try to find the binary on malshare (search by md5)\r\n4. Observe that the binary is not there\r\n\r\n**Expected behavior**\r\nThe binary should be uploaded successfully to MalShare\r\n\r\n**Server (please complete the following information):**\r\n - OS: [e.g. RedHat Linux 7.1, output of uname -a] Ubuntu 20.04, Linux 5.4.0\r\n - Python: 3.8.5\r\n\r\n**Additional context**\r\nBased on [MalShare's API docs](https://malshare.com/doc.php) it seems that uploading files now requires an API key and a slightly different POST path than the one [defined in cowrie](https://github.com/cowrie/cowrie/blob/b848ec261554ee9128640601eb9a6734b2bffefe/src/cowrie/output/malshare.py#L90). Probably adding an API key option to the config and updating the uploader with the new path and to use the API key will solve this.\r\n\n", "before_files": [{"content": "# Copyright (c) 2015 Michel Oosterhof <[email protected]>\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# 1. Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# 2. 
Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# 3. The names of the author(s) may not be used to endorse or promote\n# products derived from this software without specific prior written\n# permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS`` AND ANY EXPRESS OR\n# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\n# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.\n# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED\n# AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n# SUCH DAMAGE.\n\n\"\"\"\nSend files to https://malshare.com/\nMore info https://malshare.com/doc.php\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport os\n\ntry:\n from urllib.parse import urlparse\nexcept ImportError:\n from urlparse import urlparse\nimport requests\n\nimport cowrie.core.output\n\n\nclass Output(cowrie.core.output.Output):\n \"\"\"\n malshare output\n\n TODO: use `treq`\n \"\"\"\n def start(self):\n \"\"\"\n Start output plugin\n \"\"\"\n pass\n\n def stop(self):\n \"\"\"\n Stop output plugin\n \"\"\"\n pass\n\n def write(self, entry):\n if entry[\"eventid\"] == \"cowrie.session.file_download\":\n print(\"Sending file to MalShare\")\n p = urlparse(entry[\"url\"]).path\n if p == \"\":\n fileName = entry[\"shasum\"]\n else:\n b = os.path.basename(p)\n if b == \"\":\n fileName = entry[\"shasum\"]\n else:\n fileName = b\n\n self.postfile(entry[\"outfile\"], fileName)\n\n elif entry[\"eventid\"] == \"cowrie.session.file_upload\":\n print(\"Sending file to MalShare\")\n self.postfile(entry[\"outfile\"], entry[\"filename\"])\n\n def postfile(self, artifact, fileName):\n \"\"\"\n Send a file to MalShare\n \"\"\"\n try:\n res = requests.post(\n \"https://malshare.com/api.php?mode=cli\",\n files={fileName: open(artifact, \"rb\")}\n )\n if res and res.ok:\n print(\"Submited to MalShare\")\n else:\n print(\"MalShare Request failed: {}\".format(res.status_code))\n except Exception as e:\n print(\"MalShare Request failed: {}\".format(e))\n", "path": "src/cowrie/output/malshare.py"}]}
| 1,784 | 484 |
gh_patches_debug_7192
|
rasdani/github-patches
|
git_diff
|
aio-libs__aiohttp-649
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
aiohttp filtering out "Authorization" header
Apparently aiohttp is filtering out the "Authorization" header in aiohttp.wsgi:69 in create_wsgi_environ.
This bug was found while using aiopyramid + jwtauth, you can find more details (and an example project) on https://github.com/housleyjk/aiopyramid/issues/14
</issue>
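For illustration only (not part of the original report): a minimal, self-contained sketch of the header-to-environ mapping this issue is about. Under WSGI/CGI conventions a request header surfaces as an `HTTP_*` environ key, so once the `AUTHORIZATION` filter is dropped, auth middleware such as jwtauth should see `HTTP_AUTHORIZATION`. The helper name below is hypothetical and only mimics the header loop in `create_wsgi_environ`.
```python
# Hypothetical stand-in for the header loop in create_wsgi_environ.
def headers_to_environ(headers):
    environ = {}
    for name, value in headers:
        upper = name.upper()
        if upper == 'CONTENT-TYPE':
            environ['CONTENT_TYPE'] = value
        elif upper == 'CONTENT-LENGTH':
            environ['CONTENT_LENGTH'] = value
        else:
            key = 'HTTP_%s' % upper.replace('-', '_')
            # Repeated headers are folded into a comma-separated value.
            environ[key] = '%s,%s' % (environ[key], value) if key in environ else value
    return environ

print(headers_to_environ([('Authorization', 'Bearer abc123')]))
# expected: {'HTTP_AUTHORIZATION': 'Bearer abc123'}
```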
<code>
[start of aiohttp/wsgi.py]
1 """wsgi server.
2
3 TODO:
4 * proxy protocol
5 * x-forward security
6 * wsgi file support (os.sendfile)
7 """
8
9 import asyncio
10 import inspect
11 import io
12 import os
13 import sys
14 from urllib.parse import urlsplit
15
16 import aiohttp
17 from aiohttp import server, hdrs
18
19 __all__ = ('WSGIServerHttpProtocol',)
20
21
22 class WSGIServerHttpProtocol(server.ServerHttpProtocol):
23 """HTTP Server that implements the Python WSGI protocol.
24
25 It uses 'wsgi.async' of 'True'. 'wsgi.input' can behave differently
26 depends on 'readpayload' constructor parameter. If readpayload is set to
27 True, wsgi server reads all incoming data into BytesIO object and
28 sends it as 'wsgi.input' environ var. If readpayload is set to false
29 'wsgi.input' is a StreamReader and application should read incoming
30 data with "yield from environ['wsgi.input'].read()". It defaults to False.
31 """
32
33 SCRIPT_NAME = os.environ.get('SCRIPT_NAME', '')
34
35 def __init__(self, app, readpayload=False, is_ssl=False, *args, **kw):
36 super().__init__(*args, **kw)
37
38 self.wsgi = app
39 self.is_ssl = is_ssl
40 self.readpayload = readpayload
41
42 def create_wsgi_response(self, message):
43 return WsgiResponse(self.writer, message)
44
45 def create_wsgi_environ(self, message, payload):
46 uri_parts = urlsplit(message.path)
47 url_scheme = 'https' if self.is_ssl else 'http'
48
49 environ = {
50 'wsgi.input': payload,
51 'wsgi.errors': sys.stderr,
52 'wsgi.version': (1, 0),
53 'wsgi.async': True,
54 'wsgi.multithread': False,
55 'wsgi.multiprocess': False,
56 'wsgi.run_once': False,
57 'wsgi.file_wrapper': FileWrapper,
58 'wsgi.url_scheme': url_scheme,
59 'SERVER_SOFTWARE': aiohttp.HttpMessage.SERVER_SOFTWARE,
60 'REQUEST_METHOD': message.method,
61 'QUERY_STRING': uri_parts.query or '',
62 'RAW_URI': message.path,
63 'SERVER_PROTOCOL': 'HTTP/%s.%s' % message.version
64 }
65
66 script_name = self.SCRIPT_NAME
67
68 for hdr_name, hdr_value in message.headers.items():
69 if hdr_name == 'AUTHORIZATION':
70 continue
71 elif hdr_name == 'SCRIPT_NAME':
72 script_name = hdr_value
73 elif hdr_name == 'CONTENT-TYPE':
74 environ['CONTENT_TYPE'] = hdr_value
75 continue
76 elif hdr_name == 'CONTENT-LENGTH':
77 environ['CONTENT_LENGTH'] = hdr_value
78 continue
79
80 key = 'HTTP_%s' % hdr_name.replace('-', '_')
81 if key in environ:
82 hdr_value = '%s,%s' % (environ[key], hdr_value)
83
84 environ[key] = hdr_value
85
86 # authors should be aware that REMOTE_HOST and REMOTE_ADDR
87 # may not qualify the remote addr
88 # also SERVER_PORT variable MUST be set to the TCP/IP port number on
89 # which this request is received from the client.
90 # http://www.ietf.org/rfc/rfc3875
91
92 remote = self.transport.get_extra_info('peername')
93 environ['REMOTE_ADDR'] = remote[0]
94 environ['REMOTE_PORT'] = remote[1]
95
96 sockname = self.transport.get_extra_info('sockname')
97 environ['SERVER_PORT'] = str(sockname[1])
98 host = message.headers.get("HOST", None)
99 if host:
100 environ['SERVER_NAME'] = host.split(":")[0]
101 else:
102 environ['SERVER_NAME'] = sockname[0]
103
104 path_info = uri_parts.path
105 if script_name:
106 path_info = path_info.split(script_name, 1)[-1]
107
108 environ['PATH_INFO'] = path_info
109 environ['SCRIPT_NAME'] = script_name
110
111 environ['async.reader'] = self.reader
112 environ['async.writer'] = self.writer
113
114 return environ
115
116 @asyncio.coroutine
117 def handle_request(self, message, payload):
118 """Handle a single HTTP request"""
119 now = self._loop.time()
120
121 if self.readpayload:
122 wsgiinput = io.BytesIO()
123 wsgiinput.write((yield from payload.read()))
124 wsgiinput.seek(0)
125 payload = wsgiinput
126
127 environ = self.create_wsgi_environ(message, payload)
128 response = self.create_wsgi_response(message)
129
130 riter = self.wsgi(environ, response.start_response)
131 if isinstance(riter, asyncio.Future) or inspect.isgenerator(riter):
132 riter = yield from riter
133
134 resp = response.response
135 try:
136 for item in riter:
137 if isinstance(item, asyncio.Future):
138 item = yield from item
139 yield from resp.write(item)
140
141 yield from resp.write_eof()
142 finally:
143 if hasattr(riter, 'close'):
144 riter.close()
145
146 if resp.keep_alive():
147 self.keep_alive(True)
148
149 self.log_access(
150 message, environ, response.response, self._loop.time() - now)
151
152
153 class FileWrapper:
154 """Custom file wrapper."""
155
156 def __init__(self, fobj, chunk_size=8192):
157 self.fobj = fobj
158 self.chunk_size = chunk_size
159 if hasattr(fobj, 'close'):
160 self.close = fobj.close
161
162 def __iter__(self):
163 return self
164
165 def __next__(self):
166 data = self.fobj.read(self.chunk_size)
167 if data:
168 return data
169 raise StopIteration
170
171
172 class WsgiResponse:
173 """Implementation of start_response() callable as specified by PEP 3333"""
174
175 status = None
176
177 HOP_HEADERS = {
178 hdrs.CONNECTION,
179 hdrs.KEEP_ALIVE,
180 hdrs.PROXY_AUTHENTICATE,
181 hdrs.PROXY_AUTHORIZATION,
182 hdrs.TE,
183 hdrs.TRAILER,
184 hdrs.TRANSFER_ENCODING,
185 hdrs.UPGRADE,
186 }
187
188 def __init__(self, writer, message):
189 self.writer = writer
190 self.message = message
191
192 def start_response(self, status, headers, exc_info=None):
193 if exc_info:
194 try:
195 if self.status:
196 raise exc_info[1]
197 finally:
198 exc_info = None
199
200 status_code = int(status.split(' ', 1)[0])
201
202 self.status = status
203 resp = self.response = aiohttp.Response(
204 self.writer, status_code,
205 self.message.version, self.message.should_close)
206 resp.HOP_HEADERS = self.HOP_HEADERS
207 resp.add_headers(*headers)
208
209 if resp.has_chunked_hdr:
210 resp.enable_chunked_encoding()
211
212 # send headers immediately for websocket connection
213 if status_code == 101 and resp.upgrade and resp.websocket:
214 resp.send_headers()
215 else:
216 resp._send_headers = True
217 return self.response.write
218
[end of aiohttp/wsgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/aiohttp/wsgi.py b/aiohttp/wsgi.py
--- a/aiohttp/wsgi.py
+++ b/aiohttp/wsgi.py
@@ -66,9 +66,7 @@
script_name = self.SCRIPT_NAME
for hdr_name, hdr_value in message.headers.items():
- if hdr_name == 'AUTHORIZATION':
- continue
- elif hdr_name == 'SCRIPT_NAME':
+ if hdr_name == 'SCRIPT_NAME':
script_name = hdr_value
elif hdr_name == 'CONTENT-TYPE':
environ['CONTENT_TYPE'] = hdr_value
|
{"golden_diff": "diff --git a/aiohttp/wsgi.py b/aiohttp/wsgi.py\n--- a/aiohttp/wsgi.py\n+++ b/aiohttp/wsgi.py\n@@ -66,9 +66,7 @@\n script_name = self.SCRIPT_NAME\n \n for hdr_name, hdr_value in message.headers.items():\n- if hdr_name == 'AUTHORIZATION':\n- continue\n- elif hdr_name == 'SCRIPT_NAME':\n+ if hdr_name == 'SCRIPT_NAME':\n script_name = hdr_value\n elif hdr_name == 'CONTENT-TYPE':\n environ['CONTENT_TYPE'] = hdr_value\n", "issue": "aiohttp filtering out \"Authorization\" header\nApparently aiohttp is filtering out the \"Authorization\" header in aiohttp.wsgi:69 in create_wsgi_environ.\n\nThis bug was found while using aiopyramid + jwtauth, you can find more details (and an example project) on https://github.com/housleyjk/aiopyramid/issues/14\n\n", "before_files": [{"content": "\"\"\"wsgi server.\n\nTODO:\n * proxy protocol\n * x-forward security\n * wsgi file support (os.sendfile)\n\"\"\"\n\nimport asyncio\nimport inspect\nimport io\nimport os\nimport sys\nfrom urllib.parse import urlsplit\n\nimport aiohttp\nfrom aiohttp import server, hdrs\n\n__all__ = ('WSGIServerHttpProtocol',)\n\n\nclass WSGIServerHttpProtocol(server.ServerHttpProtocol):\n \"\"\"HTTP Server that implements the Python WSGI protocol.\n\n It uses 'wsgi.async' of 'True'. 'wsgi.input' can behave differently\n depends on 'readpayload' constructor parameter. If readpayload is set to\n True, wsgi server reads all incoming data into BytesIO object and\n sends it as 'wsgi.input' environ var. If readpayload is set to false\n 'wsgi.input' is a StreamReader and application should read incoming\n data with \"yield from environ['wsgi.input'].read()\". It defaults to False.\n \"\"\"\n\n SCRIPT_NAME = os.environ.get('SCRIPT_NAME', '')\n\n def __init__(self, app, readpayload=False, is_ssl=False, *args, **kw):\n super().__init__(*args, **kw)\n\n self.wsgi = app\n self.is_ssl = is_ssl\n self.readpayload = readpayload\n\n def create_wsgi_response(self, message):\n return WsgiResponse(self.writer, message)\n\n def create_wsgi_environ(self, message, payload):\n uri_parts = urlsplit(message.path)\n url_scheme = 'https' if self.is_ssl else 'http'\n\n environ = {\n 'wsgi.input': payload,\n 'wsgi.errors': sys.stderr,\n 'wsgi.version': (1, 0),\n 'wsgi.async': True,\n 'wsgi.multithread': False,\n 'wsgi.multiprocess': False,\n 'wsgi.run_once': False,\n 'wsgi.file_wrapper': FileWrapper,\n 'wsgi.url_scheme': url_scheme,\n 'SERVER_SOFTWARE': aiohttp.HttpMessage.SERVER_SOFTWARE,\n 'REQUEST_METHOD': message.method,\n 'QUERY_STRING': uri_parts.query or '',\n 'RAW_URI': message.path,\n 'SERVER_PROTOCOL': 'HTTP/%s.%s' % message.version\n }\n\n script_name = self.SCRIPT_NAME\n\n for hdr_name, hdr_value in message.headers.items():\n if hdr_name == 'AUTHORIZATION':\n continue\n elif hdr_name == 'SCRIPT_NAME':\n script_name = hdr_value\n elif hdr_name == 'CONTENT-TYPE':\n environ['CONTENT_TYPE'] = hdr_value\n continue\n elif hdr_name == 'CONTENT-LENGTH':\n environ['CONTENT_LENGTH'] = hdr_value\n continue\n\n key = 'HTTP_%s' % hdr_name.replace('-', '_')\n if key in environ:\n hdr_value = '%s,%s' % (environ[key], hdr_value)\n\n environ[key] = hdr_value\n\n # authors should be aware that REMOTE_HOST and REMOTE_ADDR\n # may not qualify the remote addr\n # also SERVER_PORT variable MUST be set to the TCP/IP port number on\n # which this request is received from the client.\n # http://www.ietf.org/rfc/rfc3875\n\n remote = self.transport.get_extra_info('peername')\n environ['REMOTE_ADDR'] = remote[0]\n environ['REMOTE_PORT'] = remote[1]\n\n sockname = 
self.transport.get_extra_info('sockname')\n environ['SERVER_PORT'] = str(sockname[1])\n host = message.headers.get(\"HOST\", None)\n if host:\n environ['SERVER_NAME'] = host.split(\":\")[0]\n else:\n environ['SERVER_NAME'] = sockname[0]\n\n path_info = uri_parts.path\n if script_name:\n path_info = path_info.split(script_name, 1)[-1]\n\n environ['PATH_INFO'] = path_info\n environ['SCRIPT_NAME'] = script_name\n\n environ['async.reader'] = self.reader\n environ['async.writer'] = self.writer\n\n return environ\n\n @asyncio.coroutine\n def handle_request(self, message, payload):\n \"\"\"Handle a single HTTP request\"\"\"\n now = self._loop.time()\n\n if self.readpayload:\n wsgiinput = io.BytesIO()\n wsgiinput.write((yield from payload.read()))\n wsgiinput.seek(0)\n payload = wsgiinput\n\n environ = self.create_wsgi_environ(message, payload)\n response = self.create_wsgi_response(message)\n\n riter = self.wsgi(environ, response.start_response)\n if isinstance(riter, asyncio.Future) or inspect.isgenerator(riter):\n riter = yield from riter\n\n resp = response.response\n try:\n for item in riter:\n if isinstance(item, asyncio.Future):\n item = yield from item\n yield from resp.write(item)\n\n yield from resp.write_eof()\n finally:\n if hasattr(riter, 'close'):\n riter.close()\n\n if resp.keep_alive():\n self.keep_alive(True)\n\n self.log_access(\n message, environ, response.response, self._loop.time() - now)\n\n\nclass FileWrapper:\n \"\"\"Custom file wrapper.\"\"\"\n\n def __init__(self, fobj, chunk_size=8192):\n self.fobj = fobj\n self.chunk_size = chunk_size\n if hasattr(fobj, 'close'):\n self.close = fobj.close\n\n def __iter__(self):\n return self\n\n def __next__(self):\n data = self.fobj.read(self.chunk_size)\n if data:\n return data\n raise StopIteration\n\n\nclass WsgiResponse:\n \"\"\"Implementation of start_response() callable as specified by PEP 3333\"\"\"\n\n status = None\n\n HOP_HEADERS = {\n hdrs.CONNECTION,\n hdrs.KEEP_ALIVE,\n hdrs.PROXY_AUTHENTICATE,\n hdrs.PROXY_AUTHORIZATION,\n hdrs.TE,\n hdrs.TRAILER,\n hdrs.TRANSFER_ENCODING,\n hdrs.UPGRADE,\n }\n\n def __init__(self, writer, message):\n self.writer = writer\n self.message = message\n\n def start_response(self, status, headers, exc_info=None):\n if exc_info:\n try:\n if self.status:\n raise exc_info[1]\n finally:\n exc_info = None\n\n status_code = int(status.split(' ', 1)[0])\n\n self.status = status\n resp = self.response = aiohttp.Response(\n self.writer, status_code,\n self.message.version, self.message.should_close)\n resp.HOP_HEADERS = self.HOP_HEADERS\n resp.add_headers(*headers)\n\n if resp.has_chunked_hdr:\n resp.enable_chunked_encoding()\n\n # send headers immediately for websocket connection\n if status_code == 101 and resp.upgrade and resp.websocket:\n resp.send_headers()\n else:\n resp._send_headers = True\n return self.response.write\n", "path": "aiohttp/wsgi.py"}]}
| 2,699 | 130 |
gh_patches_debug_5923
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-488
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Featured careeropprotunities are not featured
The featured opportunities are not prioritized over other opportunities.
</issue>
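For illustration only (not part of the original report): the reported behaviour is an ordering problem. An ascending sort on a boolean flag puts `True` last, so Django's `order_by('featured', ...)` buries featured rows, while the descending form `order_by('-featured', ...)` floats them to the top. A plain-Python sketch of the same effect, with hypothetical rows standing in for `CareerOpportunity` objects:
```python
rows = [
    {'title': 'Plain ad', 'featured': False},
    {'title': 'Featured ad', 'featured': True},
]

# Ascending on the flag (what order_by('featured') does): featured sinks to the bottom.
print([r['title'] for r in sorted(rows, key=lambda r: r['featured'])])
# ['Plain ad', 'Featured ad']

# Descending on the flag (order_by('-featured')): featured rows come first.
print([r['title'] for r in sorted(rows, key=lambda r: r['featured'], reverse=True)])
# ['Featured ad', 'Plain ad']
```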
<code>
[start of apps/careeropportunity/views.py]
1 #-*- coding: utf-8 -*-
2 from django.utils import timezone
3
4 from datetime import datetime
5
6 from django.conf import settings
7 from django.shortcuts import render_to_response
8 from django.shortcuts import get_object_or_404
9 from django.template import RequestContext
10
11 from apps.careeropportunity.models import CareerOpportunity
12
13
14 def index(request):
15 opportunities = CareerOpportunity.objects.filter(
16 start__lte=timezone.now(), end__gte=timezone.now()).order_by('featured', '-start')
17
18 return render_to_response('careeropportunity/index.html', \
19 {'opportunities': opportunities}, \
20 context_instance=RequestContext(request))
21
22
23 def details(request, opportunity_id):
24 opportunity = get_object_or_404(CareerOpportunity, pk=opportunity_id)
25
26 return render_to_response('careeropportunity/details.html', \
27 {'opportunity': opportunity}, \
28 context_instance=RequestContext(request))
29
[end of apps/careeropportunity/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/careeropportunity/views.py b/apps/careeropportunity/views.py
--- a/apps/careeropportunity/views.py
+++ b/apps/careeropportunity/views.py
@@ -13,7 +13,7 @@
def index(request):
opportunities = CareerOpportunity.objects.filter(
- start__lte=timezone.now(), end__gte=timezone.now()).order_by('featured', '-start')
+ start__lte=timezone.now(), end__gte=timezone.now()).order_by('-featured', '-start')
return render_to_response('careeropportunity/index.html', \
{'opportunities': opportunities}, \
|
{"golden_diff": "diff --git a/apps/careeropportunity/views.py b/apps/careeropportunity/views.py\n--- a/apps/careeropportunity/views.py\n+++ b/apps/careeropportunity/views.py\n@@ -13,7 +13,7 @@\n \n def index(request):\n opportunities = CareerOpportunity.objects.filter(\n- \tstart__lte=timezone.now(), end__gte=timezone.now()).order_by('featured', '-start')\n+ \tstart__lte=timezone.now(), end__gte=timezone.now()).order_by('-featured', '-start')\n \n return render_to_response('careeropportunity/index.html', \\\n {'opportunities': opportunities}, \\\n", "issue": "Featured careeropprotunities are not featured\nThe featured opportunities are not prioritized over other opportunities. \n\n", "before_files": [{"content": "#-*- coding: utf-8 -*-\nfrom django.utils import timezone\n\nfrom datetime import datetime\n\nfrom django.conf import settings\nfrom django.shortcuts import render_to_response\nfrom django.shortcuts import get_object_or_404\nfrom django.template import RequestContext\n\nfrom apps.careeropportunity.models import CareerOpportunity\n\n\ndef index(request):\n opportunities = CareerOpportunity.objects.filter(\n \tstart__lte=timezone.now(), end__gte=timezone.now()).order_by('featured', '-start')\n \n return render_to_response('careeropportunity/index.html', \\\n {'opportunities': opportunities}, \\\n context_instance=RequestContext(request))\n\n\ndef details(request, opportunity_id):\n opportunity = get_object_or_404(CareerOpportunity, pk=opportunity_id)\n\n return render_to_response('careeropportunity/details.html', \\\n {'opportunity': opportunity}, \\\n context_instance=RequestContext(request))\n", "path": "apps/careeropportunity/views.py"}]}
| 807 | 140 |
gh_patches_debug_36979
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-1389
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rest plugins should know which page is being compiled
I want to access some of the metadata of the current page from a rest plugin and I can't find a way to get the current page in self.state.
I found out that by adding a reference to the "source" path when calling the rest compiler, I can retrieve it as self.state.document.settings._source, then find a matching page. Is it a good solution? Could this (or something similar) be integrated in the default rest compiler?
</issue>
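For illustration only (not part of the original report): a minimal, hypothetical docutils directive sketching the lookup described above — reading the source path off the document settings from inside `run()`. The `_source` attribute is the reporter's workaround; `_nikola_source_path` matches what the accompanying patch stores. Neither is a documented docutils API, so both lookups are hedged with `getattr`.
```python
from docutils.parsers.rst import Directive, directives


class CurrentSourceDirective(Directive):
    """Hypothetical directive that reports which file is being compiled."""

    def run(self):
        settings = self.state.document.settings
        # Attribute names are assumptions: whichever one the compiler sets.
        source_path = getattr(settings, '_nikola_source_path', None) \
            or getattr(settings, '_source', None)
        # A real plugin would now match source_path against the site's posts
        # to reach their metadata.
        self.state.document.reporter.info(
            'currently compiling: %s' % source_path)
        return []


directives.register_directive('current-source', CurrentSourceDirective)
```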
<code>
[start of nikola/plugins/compile/rest/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2014 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import unicode_literals
28 import io
29 import os
30 import re
31
32 try:
33 import docutils.core
34 import docutils.nodes
35 import docutils.utils
36 import docutils.io
37 import docutils.readers.standalone
38 import docutils.writers.html4css1
39 has_docutils = True
40 except ImportError:
41 has_docutils = False
42
43 from nikola.plugin_categories import PageCompiler
44 from nikola.utils import get_logger, makedirs, req_missing, write_metadata
45
46
47 class CompileRest(PageCompiler):
48 """Compile reSt into HTML."""
49
50 name = "rest"
51 demote_headers = True
52 logger = None
53
54 def compile_html(self, source, dest, is_two_file=True):
55 """Compile reSt into HTML."""
56
57 if not has_docutils:
58 req_missing(['docutils'], 'build this site (compile reStructuredText)')
59 makedirs(os.path.dirname(dest))
60 error_level = 100
61 with io.open(dest, "w+", encoding="utf8") as out_file:
62 with io.open(source, "r", encoding="utf8") as in_file:
63 data = in_file.read()
64 add_ln = 0
65 if not is_two_file:
66 spl = re.split('(\n\n|\r\n\r\n)', data, maxsplit=1)
67 data = spl[-1]
68 if len(spl) != 1:
69 # If errors occur, this will be added to the line
70 # number reported by docutils so the line number
71 # matches the actual line number (off by 7 with default
72 # metadata, could be more or less depending on the post
73 # author).
74 add_ln = len(spl[0].splitlines()) + 1
75
76 default_template_path = os.path.join(os.path.dirname(__file__), 'template.txt')
77 output, error_level, deps = rst2html(
78 data, settings_overrides={
79 'initial_header_level': 1,
80 'record_dependencies': True,
81 'stylesheet_path': None,
82 'link_stylesheet': True,
83 'syntax_highlight': 'short',
84 'math_output': 'mathjax',
85 'template': default_template_path,
86 }, logger=self.logger, l_source=source, l_add_ln=add_ln)
87 out_file.write(output)
88 deps_path = dest + '.dep'
89 if deps.list:
90 with io.open(deps_path, "w+", encoding="utf8") as deps_file:
91 deps_file.write('\n'.join(deps.list))
92 else:
93 if os.path.isfile(deps_path):
94 os.unlink(deps_path)
95 if error_level < 3:
96 return True
97 else:
98 return False
99
100 def create_post(self, path, **kw):
101 content = kw.pop('content', None)
102 onefile = kw.pop('onefile', False)
103 # is_page is not used by create_post as of now.
104 kw.pop('is_page', False)
105 metadata = {}
106 metadata.update(self.default_metadata)
107 metadata.update(kw)
108 makedirs(os.path.dirname(path))
109 if not content.endswith('\n'):
110 content += '\n'
111 with io.open(path, "w+", encoding="utf8") as fd:
112 if onefile:
113 fd.write(write_metadata(metadata))
114 fd.write('\n' + content)
115
116 def set_site(self, site):
117 for plugin_info in site.plugin_manager.getPluginsOfCategory("RestExtension"):
118 if plugin_info.name in site.config['DISABLED_PLUGINS']:
119 site.plugin_manager.removePluginFromCategory(plugin_info, "RestExtension")
120 continue
121
122 site.plugin_manager.activatePluginByName(plugin_info.name)
123 plugin_info.plugin_object.set_site(site)
124 plugin_info.plugin_object.short_help = plugin_info.description
125
126 self.logger = get_logger('compile_rest', site.loghandlers)
127 if not site.debug:
128 self.logger.level = 4
129
130 return super(CompileRest, self).set_site(site)
131
132
133 def get_observer(settings):
134 """Return an observer for the docutils Reporter."""
135 def observer(msg):
136 """Report docutils/rest messages to a Nikola user.
137
138 Error code mapping:
139
140 +------+---------+------+----------+
141 | dNUM | dNAME | lNUM | lNAME | d = docutils, l = logbook
142 +------+---------+------+----------+
143 | 0 | DEBUG | 1 | DEBUG |
144 | 1 | INFO | 2 | INFO |
145 | 2 | WARNING | 4 | WARNING |
146 | 3 | ERROR | 5 | ERROR |
147 | 4 | SEVERE | 6 | CRITICAL |
148 +------+---------+------+----------+
149 """
150 errormap = {0: 1, 1: 2, 2: 4, 3: 5, 4: 6}
151 text = docutils.nodes.Element.astext(msg)
152 line = msg['line'] + settings['add_ln'] if 'line' in msg else 0
153 out = '[{source}{colon}{line}] {text}'.format(
154 source=settings['source'], colon=(':' if line else ''),
155 line=line, text=text)
156 settings['logger'].log(errormap[msg['level']], out)
157
158 return observer
159
160
161 class NikolaReader(docutils.readers.standalone.Reader):
162
163 def new_document(self):
164 """Create and return a new empty document tree (root node)."""
165 document = docutils.utils.new_document(self.source.source_path, self.settings)
166 document.reporter.stream = False
167 document.reporter.attach_observer(get_observer(self.l_settings))
168 return document
169
170
171 def add_node(node, visit_function=None, depart_function=None):
172 """
173 Register a Docutils node class.
174 This function is completely optional. It is a same concept as
175 `Sphinx add_node function <http://sphinx-doc.org/ext/appapi.html#sphinx.application.Sphinx.add_node>`_.
176
177 For example::
178
179 class Plugin(RestExtension):
180
181 name = "rest_math"
182
183 def set_site(self, site):
184 self.site = site
185 directives.register_directive('math', MathDirective)
186 add_node(MathBlock, visit_Math, depart_Math)
187 return super(Plugin, self).set_site(site)
188
189 class MathDirective(Directive):
190 def run(self):
191 node = MathBlock()
192 return [node]
193
194 class Math(docutils.nodes.Element): pass
195
196 def visit_Math(self, node):
197 self.body.append(self.starttag(node, 'math'))
198
199 def depart_Math(self, node):
200 self.body.append('</math>')
201
202 For full example, you can refer to `Microdata plugin <http://plugins.getnikola.com/#microdata>`_
203 """
204 docutils.nodes._add_node_class_names([node.__name__])
205 if visit_function:
206 setattr(docutils.writers.html4css1.HTMLTranslator, 'visit_' + node.__name__, visit_function)
207 if depart_function:
208 setattr(docutils.writers.html4css1.HTMLTranslator, 'depart_' + node.__name__, depart_function)
209
210
211 def rst2html(source, source_path=None, source_class=docutils.io.StringInput,
212 destination_path=None, reader=None,
213 parser=None, parser_name='restructuredtext', writer=None,
214 writer_name='html', settings=None, settings_spec=None,
215 settings_overrides=None, config_section=None,
216 enable_exit_status=None, logger=None, l_source='', l_add_ln=0):
217 """
218 Set up & run a `Publisher`, and return a dictionary of document parts.
219 Dictionary keys are the names of parts, and values are Unicode strings;
220 encoding is up to the client. For programmatic use with string I/O.
221
222 For encoded string input, be sure to set the 'input_encoding' setting to
223 the desired encoding. Set it to 'unicode' for unencoded Unicode string
224 input. Here's how::
225
226 publish_parts(..., settings_overrides={'input_encoding': 'unicode'})
227
228 Parameters: see `publish_programmatically`.
229
230 WARNING: `reader` should be None (or NikolaReader()) if you want Nikola to report
231 reStructuredText syntax errors.
232 """
233 if reader is None:
234 reader = NikolaReader()
235 # For our custom logging, we have special needs and special settings we
236 # specify here.
237 # logger a logger from Nikola
238 # source source filename (docutils gets a string)
239 # add_ln amount of metadata lines (see comment in compile_html above)
240 reader.l_settings = {'logger': logger, 'source': l_source,
241 'add_ln': l_add_ln}
242
243 pub = docutils.core.Publisher(reader, parser, writer, settings=settings,
244 source_class=source_class,
245 destination_class=docutils.io.StringOutput)
246 pub.set_components(None, parser_name, writer_name)
247 pub.process_programmatic_settings(
248 settings_spec, settings_overrides, config_section)
249 pub.set_source(source, source_path)
250 pub.set_destination(None, destination_path)
251 pub.publish(enable_exit_status=enable_exit_status)
252
253 return pub.writer.parts['docinfo'] + pub.writer.parts['fragment'], pub.document.reporter.max_level, pub.settings.record_dependencies
254
[end of nikola/plugins/compile/rest/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nikola/plugins/compile/rest/__init__.py b/nikola/plugins/compile/rest/__init__.py
--- a/nikola/plugins/compile/rest/__init__.py
+++ b/nikola/plugins/compile/rest/__init__.py
@@ -83,7 +83,7 @@
'syntax_highlight': 'short',
'math_output': 'mathjax',
'template': default_template_path,
- }, logger=self.logger, l_source=source, l_add_ln=add_ln)
+ }, logger=self.logger, source_path=source, l_add_ln=add_ln)
out_file.write(output)
deps_path = dest + '.dep'
if deps.list:
@@ -213,7 +213,7 @@
parser=None, parser_name='restructuredtext', writer=None,
writer_name='html', settings=None, settings_spec=None,
settings_overrides=None, config_section=None,
- enable_exit_status=None, logger=None, l_source='', l_add_ln=0):
+ enable_exit_status=None, logger=None, l_add_ln=0):
"""
Set up & run a `Publisher`, and return a dictionary of document parts.
Dictionary keys are the names of parts, and values are Unicode strings;
@@ -237,7 +237,7 @@
# logger a logger from Nikola
# source source filename (docutils gets a string)
# add_ln amount of metadata lines (see comment in compile_html above)
- reader.l_settings = {'logger': logger, 'source': l_source,
+ reader.l_settings = {'logger': logger, 'source': source_path,
'add_ln': l_add_ln}
pub = docutils.core.Publisher(reader, parser, writer, settings=settings,
@@ -246,7 +246,8 @@
pub.set_components(None, parser_name, writer_name)
pub.process_programmatic_settings(
settings_spec, settings_overrides, config_section)
- pub.set_source(source, source_path)
+ pub.set_source(source, None)
+ pub.settings._nikola_source_path = source_path
pub.set_destination(None, destination_path)
pub.publish(enable_exit_status=enable_exit_status)
|
{"golden_diff": "diff --git a/nikola/plugins/compile/rest/__init__.py b/nikola/plugins/compile/rest/__init__.py\n--- a/nikola/plugins/compile/rest/__init__.py\n+++ b/nikola/plugins/compile/rest/__init__.py\n@@ -83,7 +83,7 @@\n 'syntax_highlight': 'short',\n 'math_output': 'mathjax',\n 'template': default_template_path,\n- }, logger=self.logger, l_source=source, l_add_ln=add_ln)\n+ }, logger=self.logger, source_path=source, l_add_ln=add_ln)\n out_file.write(output)\n deps_path = dest + '.dep'\n if deps.list:\n@@ -213,7 +213,7 @@\n parser=None, parser_name='restructuredtext', writer=None,\n writer_name='html', settings=None, settings_spec=None,\n settings_overrides=None, config_section=None,\n- enable_exit_status=None, logger=None, l_source='', l_add_ln=0):\n+ enable_exit_status=None, logger=None, l_add_ln=0):\n \"\"\"\n Set up & run a `Publisher`, and return a dictionary of document parts.\n Dictionary keys are the names of parts, and values are Unicode strings;\n@@ -237,7 +237,7 @@\n # logger a logger from Nikola\n # source source filename (docutils gets a string)\n # add_ln amount of metadata lines (see comment in compile_html above)\n- reader.l_settings = {'logger': logger, 'source': l_source,\n+ reader.l_settings = {'logger': logger, 'source': source_path,\n 'add_ln': l_add_ln}\n \n pub = docutils.core.Publisher(reader, parser, writer, settings=settings,\n@@ -246,7 +246,8 @@\n pub.set_components(None, parser_name, writer_name)\n pub.process_programmatic_settings(\n settings_spec, settings_overrides, config_section)\n- pub.set_source(source, source_path)\n+ pub.set_source(source, None)\n+ pub.settings._nikola_source_path = source_path\n pub.set_destination(None, destination_path)\n pub.publish(enable_exit_status=enable_exit_status)\n", "issue": "rest plugins should know which page is being compiled\nI want to access some of the metadata of the current page from a rest plugin and I can't find a way to get the current page in self.state.\n\nI found out that by adding a reference to the \"source\" path when calling the rest compiler, I can retrieve it as self.state.document.settings._source, then find a matching page. Is it a good solution? Could this (or something similar) be integrated in the default rest compiler?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2014 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import unicode_literals\nimport io\nimport os\nimport re\n\ntry:\n import docutils.core\n import docutils.nodes\n import docutils.utils\n import docutils.io\n import docutils.readers.standalone\n import docutils.writers.html4css1\n has_docutils = True\nexcept ImportError:\n has_docutils = False\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import get_logger, makedirs, req_missing, write_metadata\n\n\nclass CompileRest(PageCompiler):\n \"\"\"Compile reSt into HTML.\"\"\"\n\n name = \"rest\"\n demote_headers = True\n logger = None\n\n def compile_html(self, source, dest, is_two_file=True):\n \"\"\"Compile reSt into HTML.\"\"\"\n\n if not has_docutils:\n req_missing(['docutils'], 'build this site (compile reStructuredText)')\n makedirs(os.path.dirname(dest))\n error_level = 100\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n data = in_file.read()\n add_ln = 0\n if not is_two_file:\n spl = re.split('(\\n\\n|\\r\\n\\r\\n)', data, maxsplit=1)\n data = spl[-1]\n if len(spl) != 1:\n # If errors occur, this will be added to the line\n # number reported by docutils so the line number\n # matches the actual line number (off by 7 with default\n # metadata, could be more or less depending on the post\n # author).\n add_ln = len(spl[0].splitlines()) + 1\n\n default_template_path = os.path.join(os.path.dirname(__file__), 'template.txt')\n output, error_level, deps = rst2html(\n data, settings_overrides={\n 'initial_header_level': 1,\n 'record_dependencies': True,\n 'stylesheet_path': None,\n 'link_stylesheet': True,\n 'syntax_highlight': 'short',\n 'math_output': 'mathjax',\n 'template': default_template_path,\n }, logger=self.logger, l_source=source, l_add_ln=add_ln)\n out_file.write(output)\n deps_path = dest + '.dep'\n if deps.list:\n with io.open(deps_path, \"w+\", encoding=\"utf8\") as deps_file:\n deps_file.write('\\n'.join(deps.list))\n else:\n if os.path.isfile(deps_path):\n os.unlink(deps_path)\n if error_level < 3:\n return True\n else:\n return False\n\n def create_post(self, path, **kw):\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not used by create_post as of now.\n kw.pop('is_page', False)\n metadata = {}\n metadata.update(self.default_metadata)\n metadata.update(kw)\n makedirs(os.path.dirname(path))\n if not content.endswith('\\n'):\n content += '\\n'\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if onefile:\n fd.write(write_metadata(metadata))\n fd.write('\\n' + content)\n\n def set_site(self, site):\n for plugin_info in site.plugin_manager.getPluginsOfCategory(\"RestExtension\"):\n if plugin_info.name in site.config['DISABLED_PLUGINS']:\n site.plugin_manager.removePluginFromCategory(plugin_info, \"RestExtension\")\n continue\n\n site.plugin_manager.activatePluginByName(plugin_info.name)\n plugin_info.plugin_object.set_site(site)\n plugin_info.plugin_object.short_help = plugin_info.description\n\n self.logger = get_logger('compile_rest', site.loghandlers)\n if not site.debug:\n self.logger.level = 4\n\n return super(CompileRest, self).set_site(site)\n\n\ndef get_observer(settings):\n \"\"\"Return an observer for the docutils Reporter.\"\"\"\n def 
observer(msg):\n \"\"\"Report docutils/rest messages to a Nikola user.\n\n Error code mapping:\n\n +------+---------+------+----------+\n | dNUM | dNAME | lNUM | lNAME | d = docutils, l = logbook\n +------+---------+------+----------+\n | 0 | DEBUG | 1 | DEBUG |\n | 1 | INFO | 2 | INFO |\n | 2 | WARNING | 4 | WARNING |\n | 3 | ERROR | 5 | ERROR |\n | 4 | SEVERE | 6 | CRITICAL |\n +------+---------+------+----------+\n \"\"\"\n errormap = {0: 1, 1: 2, 2: 4, 3: 5, 4: 6}\n text = docutils.nodes.Element.astext(msg)\n line = msg['line'] + settings['add_ln'] if 'line' in msg else 0\n out = '[{source}{colon}{line}] {text}'.format(\n source=settings['source'], colon=(':' if line else ''),\n line=line, text=text)\n settings['logger'].log(errormap[msg['level']], out)\n\n return observer\n\n\nclass NikolaReader(docutils.readers.standalone.Reader):\n\n def new_document(self):\n \"\"\"Create and return a new empty document tree (root node).\"\"\"\n document = docutils.utils.new_document(self.source.source_path, self.settings)\n document.reporter.stream = False\n document.reporter.attach_observer(get_observer(self.l_settings))\n return document\n\n\ndef add_node(node, visit_function=None, depart_function=None):\n \"\"\"\n Register a Docutils node class.\n This function is completely optional. It is a same concept as\n `Sphinx add_node function <http://sphinx-doc.org/ext/appapi.html#sphinx.application.Sphinx.add_node>`_.\n\n For example::\n\n class Plugin(RestExtension):\n\n name = \"rest_math\"\n\n def set_site(self, site):\n self.site = site\n directives.register_directive('math', MathDirective)\n add_node(MathBlock, visit_Math, depart_Math)\n return super(Plugin, self).set_site(site)\n\n class MathDirective(Directive):\n def run(self):\n node = MathBlock()\n return [node]\n\n class Math(docutils.nodes.Element): pass\n\n def visit_Math(self, node):\n self.body.append(self.starttag(node, 'math'))\n\n def depart_Math(self, node):\n self.body.append('</math>')\n\n For full example, you can refer to `Microdata plugin <http://plugins.getnikola.com/#microdata>`_\n \"\"\"\n docutils.nodes._add_node_class_names([node.__name__])\n if visit_function:\n setattr(docutils.writers.html4css1.HTMLTranslator, 'visit_' + node.__name__, visit_function)\n if depart_function:\n setattr(docutils.writers.html4css1.HTMLTranslator, 'depart_' + node.__name__, depart_function)\n\n\ndef rst2html(source, source_path=None, source_class=docutils.io.StringInput,\n destination_path=None, reader=None,\n parser=None, parser_name='restructuredtext', writer=None,\n writer_name='html', settings=None, settings_spec=None,\n settings_overrides=None, config_section=None,\n enable_exit_status=None, logger=None, l_source='', l_add_ln=0):\n \"\"\"\n Set up & run a `Publisher`, and return a dictionary of document parts.\n Dictionary keys are the names of parts, and values are Unicode strings;\n encoding is up to the client. For programmatic use with string I/O.\n\n For encoded string input, be sure to set the 'input_encoding' setting to\n the desired encoding. Set it to 'unicode' for unencoded Unicode string\n input. 
Here's how::\n\n publish_parts(..., settings_overrides={'input_encoding': 'unicode'})\n\n Parameters: see `publish_programmatically`.\n\n WARNING: `reader` should be None (or NikolaReader()) if you want Nikola to report\n reStructuredText syntax errors.\n \"\"\"\n if reader is None:\n reader = NikolaReader()\n # For our custom logging, we have special needs and special settings we\n # specify here.\n # logger a logger from Nikola\n # source source filename (docutils gets a string)\n # add_ln amount of metadata lines (see comment in compile_html above)\n reader.l_settings = {'logger': logger, 'source': l_source,\n 'add_ln': l_add_ln}\n\n pub = docutils.core.Publisher(reader, parser, writer, settings=settings,\n source_class=source_class,\n destination_class=docutils.io.StringOutput)\n pub.set_components(None, parser_name, writer_name)\n pub.process_programmatic_settings(\n settings_spec, settings_overrides, config_section)\n pub.set_source(source, source_path)\n pub.set_destination(None, destination_path)\n pub.publish(enable_exit_status=enable_exit_status)\n\n return pub.writer.parts['docinfo'] + pub.writer.parts['fragment'], pub.document.reporter.max_level, pub.settings.record_dependencies\n", "path": "nikola/plugins/compile/rest/__init__.py"}]}
| 3,559 | 479 |
gh_patches_debug_16266
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-1149
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] [CV] typo in context prediction validation
**Describe the bug**
it says batch_to_images instead of infer_on_batch

</issue>
<code>
[start of deepchecks/vision/context.py]
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """Module for base vision context."""
12 import logging
13 from typing import Mapping, Union
14
15 import torch
16 from torch import nn
17 from ignite.metrics import Metric
18
19 from deepchecks.core import DatasetKind
20 from deepchecks.vision.vision_data import VisionData, TaskType
21 from deepchecks.core.errors import (
22 DatasetValidationError, DeepchecksNotImplementedError, ModelValidationError,
23 DeepchecksNotSupportedError, DeepchecksValueError, ValidationError
24 )
25
26
27 __all__ = ['Context']
28
29
30 logger = logging.getLogger('deepchecks')
31
32
33 class Context:
34 """Contains all the data + properties the user has passed to a check/suite, and validates it seamlessly.
35
36 Parameters
37 ----------
38 train : VisionData , default: None
39 Dataset or DataFrame object, representing data an estimator was fitted on
40 test : VisionData , default: None
41 Dataset or DataFrame object, representing data an estimator predicts on
42 model : BasicModel , default: None
43 A scikit-learn-compatible fitted estimator instance
44 model_name: str , default: ''
45 The name of the model
46 scorers : Mapping[str, Metric] , default: None
47 dict of scorers names to a Metric
48 scorers_per_class : Mapping[str, Metric] , default: None
49 dict of scorers for classification without averaging of the classes.
50 See <a href=
51 "https://scikit-learn.org/stable/modules/model_evaluation.html#from-binary-to-multiclass-and-multilabel">
52 scikit-learn docs</a>
53 device : Union[str, torch.device], default: 'cpu'
54 processing unit for use
55 random_state : int
56 A seed to set for pseudo-random functions
57 n_samples : int, default: None
58 """
59
60 def __init__(self,
61 train: VisionData = None,
62 test: VisionData = None,
63 model: nn.Module = None,
64 model_name: str = '',
65 scorers: Mapping[str, Metric] = None,
66 scorers_per_class: Mapping[str, Metric] = None,
67 device: Union[str, torch.device, None] = 'cpu',
68 random_state: int = 42,
69 n_samples: int = None
70 ):
71 # Validations
72 if train is None and test is None and model is None:
73 raise DeepchecksValueError('At least one dataset (or model) must be passed to the method!')
74 if test and not train:
75 raise DatasetValidationError('Can\'t initialize context with only test. if you have single dataset, '
76 'initialize it as train')
77 if train and test:
78 train.validate_shared_label(test)
79
80 self._device = torch.device(device) if isinstance(device, str) else (device if device else torch.device('cpu'))
81 self._prediction_formatter_error = {}
82
83 if model is not None:
84 if not isinstance(model, nn.Module):
85 logger.warning('Model is not a torch.nn.Module. Deepchecks can\'t validate that model is in '
86 'evaluation state.')
87 elif model.training:
88 raise DatasetValidationError('Model is not in evaluation state. Please set model training '
89 'parameter to False or run model.eval() before passing it.')
90
91 for dataset, dataset_type in zip([train, test], [DatasetKind.TRAIN, DatasetKind.TEST]):
92 if dataset is not None:
93 try:
94 dataset.validate_prediction(next(iter(dataset.data_loader)), model, self._device)
95 msg = None
96 except DeepchecksNotImplementedError:
97 msg = f'infer_on_batch() was not implemented in {dataset_type} ' \
98 f'dataset, some checks will not run'
99 except ValidationError as ex:
100 msg = f'batch_to_images() was not implemented correctly in {dataset_type}, the ' \
101 f'validation has failed with the error: {ex}. To test your prediction formatting use the ' \
102 f'function `vision_data.validate_prediction(batch, model, device)`'
103
104 if msg:
105 self._prediction_formatter_error[dataset_type] = msg
106 logger.warning(msg)
107
108 # The copy does 2 things: Sample n_samples if parameter exists, and shuffle the data.
109 # we shuffle because the data in VisionData is set to be sampled in a fixed order (in the init), so if the user
110 # wants to run without random_state we need to forcefully shuffle (to have different results on different runs
111 # from the same VisionData object), and if there is a random_state the shuffle will always have same result
112 if train:
113 train = train.copy(shuffle=True, n_samples=n_samples, random_state=random_state)
114 if test:
115 test = test.copy(shuffle=True, n_samples=n_samples, random_state=random_state)
116
117 self._train = train
118 self._test = test
119 self._model = model
120 self._user_scorers = scorers
121 self._user_scorers_per_class = scorers_per_class
122 self._model_name = model_name
123 self.random_state = random_state
124
125 # Properties
126 # Validations note: We know train & test fit each other so all validations can be run only on train
127
128 @property
129 def train(self) -> VisionData:
130 """Return train if exists, otherwise raise error."""
131 if self._train is None:
132 raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without train dataset')
133 return self._train
134
135 @property
136 def test(self) -> VisionData:
137 """Return test if exists, otherwise raise error."""
138 if self._test is None:
139 raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without test dataset')
140 return self._test
141
142 @property
143 def model(self) -> nn.Module:
144 """Return & validate model if model exists, otherwise raise error."""
145 if self._model is None:
146 raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without model')
147 return self._model
148
149 @property
150 def model_name(self):
151 """Return model name."""
152 return self._model_name
153
154 @property
155 def device(self) -> torch.device:
156 """Return device specified by the user."""
157 return self._device
158
159 def have_test(self):
160 """Return whether there is test dataset defined."""
161 return self._test is not None
162
163 def assert_task_type(self, *expected_types: TaskType):
164 """Assert task_type matching given types."""
165 if self.train.task_type not in expected_types:
166 raise ModelValidationError(
167 f'Check is irrelevant for task of type {self.train.task_type}')
168 return True
169
170 def assert_predictions_valid(self, kind: DatasetKind = None):
171 """Assert that for given DatasetKind the model & dataset infer_on_batch return predictions in right format."""
172 error = self._prediction_formatter_error.get(kind)
173 if error:
174 raise DeepchecksValueError(error)
175
176 def get_data_by_kind(self, kind: DatasetKind):
177 """Return the relevant VisionData by given kind."""
178 if kind == DatasetKind.TRAIN:
179 return self.train
180 elif kind == DatasetKind.TEST:
181 return self.test
182 else:
183 raise DeepchecksValueError(f'Unexpected dataset kind {kind}')
184
[end of deepchecks/vision/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/deepchecks/vision/context.py b/deepchecks/vision/context.py
--- a/deepchecks/vision/context.py
+++ b/deepchecks/vision/context.py
@@ -97,7 +97,7 @@
msg = f'infer_on_batch() was not implemented in {dataset_type} ' \
f'dataset, some checks will not run'
except ValidationError as ex:
- msg = f'batch_to_images() was not implemented correctly in {dataset_type}, the ' \
+ msg = f'infer_on_batch() was not implemented correctly in {dataset_type}, the ' \
f'validation has failed with the error: {ex}. To test your prediction formatting use the ' \
f'function `vision_data.validate_prediction(batch, model, device)`'
|
{"golden_diff": "diff --git a/deepchecks/vision/context.py b/deepchecks/vision/context.py\n--- a/deepchecks/vision/context.py\n+++ b/deepchecks/vision/context.py\n@@ -97,7 +97,7 @@\n msg = f'infer_on_batch() was not implemented in {dataset_type} ' \\\n f'dataset, some checks will not run'\n except ValidationError as ex:\n- msg = f'batch_to_images() was not implemented correctly in {dataset_type}, the ' \\\n+ msg = f'infer_on_batch() was not implemented correctly in {dataset_type}, the ' \\\n f'validation has failed with the error: {ex}. To test your prediction formatting use the ' \\\n f'function `vision_data.validate_prediction(batch, model, device)`'\n", "issue": "[BUG] [CV] typo in context prediction validation\n**Describe the bug**\r\nit says batch_to_images instead of infer_on_batch\r\n\r\n\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Module for base vision context.\"\"\"\nimport logging\nfrom typing import Mapping, Union\n\nimport torch\nfrom torch import nn\nfrom ignite.metrics import Metric\n\nfrom deepchecks.core import DatasetKind\nfrom deepchecks.vision.vision_data import VisionData, TaskType\nfrom deepchecks.core.errors import (\n DatasetValidationError, DeepchecksNotImplementedError, ModelValidationError,\n DeepchecksNotSupportedError, DeepchecksValueError, ValidationError\n)\n\n\n__all__ = ['Context']\n\n\nlogger = logging.getLogger('deepchecks')\n\n\nclass Context:\n \"\"\"Contains all the data + properties the user has passed to a check/suite, and validates it seamlessly.\n\n Parameters\n ----------\n train : VisionData , default: None\n Dataset or DataFrame object, representing data an estimator was fitted on\n test : VisionData , default: None\n Dataset or DataFrame object, representing data an estimator predicts on\n model : BasicModel , default: None\n A scikit-learn-compatible fitted estimator instance\n model_name: str , default: ''\n The name of the model\n scorers : Mapping[str, Metric] , default: None\n dict of scorers names to a Metric\n scorers_per_class : Mapping[str, Metric] , default: None\n dict of scorers for classification without averaging of the classes.\n See <a href=\n \"https://scikit-learn.org/stable/modules/model_evaluation.html#from-binary-to-multiclass-and-multilabel\">\n scikit-learn docs</a>\n device : Union[str, torch.device], default: 'cpu'\n processing unit for use\n random_state : int\n A seed to set for pseudo-random functions\n n_samples : int, default: None\n \"\"\"\n\n def __init__(self,\n train: VisionData = None,\n test: VisionData = None,\n model: nn.Module = None,\n model_name: str = '',\n scorers: Mapping[str, Metric] = None,\n scorers_per_class: Mapping[str, Metric] = None,\n device: Union[str, torch.device, None] = 'cpu',\n random_state: int = 42,\n n_samples: int = None\n ):\n # Validations\n if train is None and test is None and model is None:\n raise DeepchecksValueError('At least one dataset (or model) must be passed to the method!')\n if test and not train:\n raise DatasetValidationError('Can\\'t initialize 
context with only test. if you have single dataset, '\n 'initialize it as train')\n if train and test:\n train.validate_shared_label(test)\n\n self._device = torch.device(device) if isinstance(device, str) else (device if device else torch.device('cpu'))\n self._prediction_formatter_error = {}\n\n if model is not None:\n if not isinstance(model, nn.Module):\n logger.warning('Model is not a torch.nn.Module. Deepchecks can\\'t validate that model is in '\n 'evaluation state.')\n elif model.training:\n raise DatasetValidationError('Model is not in evaluation state. Please set model training '\n 'parameter to False or run model.eval() before passing it.')\n\n for dataset, dataset_type in zip([train, test], [DatasetKind.TRAIN, DatasetKind.TEST]):\n if dataset is not None:\n try:\n dataset.validate_prediction(next(iter(dataset.data_loader)), model, self._device)\n msg = None\n except DeepchecksNotImplementedError:\n msg = f'infer_on_batch() was not implemented in {dataset_type} ' \\\n f'dataset, some checks will not run'\n except ValidationError as ex:\n msg = f'batch_to_images() was not implemented correctly in {dataset_type}, the ' \\\n f'validation has failed with the error: {ex}. To test your prediction formatting use the ' \\\n f'function `vision_data.validate_prediction(batch, model, device)`'\n\n if msg:\n self._prediction_formatter_error[dataset_type] = msg\n logger.warning(msg)\n\n # The copy does 2 things: Sample n_samples if parameter exists, and shuffle the data.\n # we shuffle because the data in VisionData is set to be sampled in a fixed order (in the init), so if the user\n # wants to run without random_state we need to forcefully shuffle (to have different results on different runs\n # from the same VisionData object), and if there is a random_state the shuffle will always have same result\n if train:\n train = train.copy(shuffle=True, n_samples=n_samples, random_state=random_state)\n if test:\n test = test.copy(shuffle=True, n_samples=n_samples, random_state=random_state)\n\n self._train = train\n self._test = test\n self._model = model\n self._user_scorers = scorers\n self._user_scorers_per_class = scorers_per_class\n self._model_name = model_name\n self.random_state = random_state\n\n # Properties\n # Validations note: We know train & test fit each other so all validations can be run only on train\n\n @property\n def train(self) -> VisionData:\n \"\"\"Return train if exists, otherwise raise error.\"\"\"\n if self._train is None:\n raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without train dataset')\n return self._train\n\n @property\n def test(self) -> VisionData:\n \"\"\"Return test if exists, otherwise raise error.\"\"\"\n if self._test is None:\n raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without test dataset')\n return self._test\n\n @property\n def model(self) -> nn.Module:\n \"\"\"Return & validate model if model exists, otherwise raise error.\"\"\"\n if self._model is None:\n raise DeepchecksNotSupportedError('Check is irrelevant for Datasets without model')\n return self._model\n\n @property\n def model_name(self):\n \"\"\"Return model name.\"\"\"\n return self._model_name\n\n @property\n def device(self) -> torch.device:\n \"\"\"Return device specified by the user.\"\"\"\n return self._device\n\n def have_test(self):\n \"\"\"Return whether there is test dataset defined.\"\"\"\n return self._test is not None\n\n def assert_task_type(self, *expected_types: TaskType):\n \"\"\"Assert task_type matching given types.\"\"\"\n 
if self.train.task_type not in expected_types:\n raise ModelValidationError(\n f'Check is irrelevant for task of type {self.train.task_type}')\n return True\n\n def assert_predictions_valid(self, kind: DatasetKind = None):\n \"\"\"Assert that for given DatasetKind the model & dataset infer_on_batch return predictions in right format.\"\"\"\n error = self._prediction_formatter_error.get(kind)\n if error:\n raise DeepchecksValueError(error)\n\n def get_data_by_kind(self, kind: DatasetKind):\n \"\"\"Return the relevant VisionData by given kind.\"\"\"\n if kind == DatasetKind.TRAIN:\n return self.train\n elif kind == DatasetKind.TEST:\n return self.test\n else:\n raise DeepchecksValueError(f'Unexpected dataset kind {kind}')\n", "path": "deepchecks/vision/context.py"}]}
| 2,727 | 170 |
gh_patches_debug_11224
|
rasdani/github-patches
|
git_diff
|
ethereum__web3.py-3248
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Web3.isAddress doesn't work for non prefixed checksumed values
* Version: 4.0.0b11
* Python: 3.6
* OS: linux
### What was wrong?
As stated in the docs http://web3py.readthedocs.io/en/latest/overview.html#Web3.isAddress the function Web3.isAddress(value) should **allow both 0x prefixed and non prefixed values**.
If the address is not checksumed, it's ok not to have the **0x**:
```
>>> Web3.isAddress('d3cda913deb6f67967b99d67acdfa1712c293601')
>>> True
```
But if it's checksumed
```
>>> Web3.isAddress('d3CdA913deB6f67967B99D67aCDFa1712C293601')
>>> False
```
No problem if we add the **0x**:
```
>>> Web3.isAddress('0xd3CdA913deB6f67967B99D67aCDFa1712C293601')
>>> True
```
### How can it be fixed?
Changing the documentation to state that checksumed addresses must have 0x or changing the function to accept checksumed addresses with 0x. I would just remove 0x at the beginning of the function (if found) and work with the address as that.
</issue>
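A small illustration of the behaviour described above and of the workaround the reporter suggests (prepending the missing `0x` before validating). The helper is hypothetical and is not part of web3.py; the camel-case `Web3.isAddress` name matches the 4.x API quoted in the report, while current releases expose the same check as `Web3.is_address`.

```python
from web3 import Web3


def is_address_lenient(value: str) -> bool:
    """Hypothetical wrapper: normalize a missing '0x' prefix before checking.

    This mirrors the workaround proposed in the report, not the library's
    actual fix.
    """
    if not value.startswith("0x"):
        value = "0x" + value
    return Web3.isAddress(value)


# Behaviour from the report (web3 4.0.0b11):
#   Web3.isAddress('d3cda913deb6f67967b99d67acdfa1712c293601')     -> True
#   Web3.isAddress('d3CdA913deB6f67967B99D67aCDFa1712C293601')     -> False (the bug)
#   is_address_lenient('d3CdA913deB6f67967B99D67aCDFa1712C293601') -> True
```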
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 from setuptools import (
3 find_packages,
4 setup,
5 )
6
7 extras_require = {
8 "tester": [
9 "eth-tester[py-evm]==v0.10.0-b.1",
10 "py-geth>=4.1.0",
11 ],
12 "linter": [
13 "black>=22.1.0",
14 "flake8==3.8.3",
15 "isort>=5.11.0",
16 "mypy==1.4.1",
17 "types-setuptools>=57.4.4",
18 "types-requests>=2.26.1",
19 "types-protobuf==3.19.13",
20 ],
21 "docs": [
22 "sphinx>=5.3.0",
23 "sphinx_rtd_theme>=1.0.0",
24 "towncrier>=21,<22",
25 ],
26 "dev": [
27 "bumpversion",
28 "flaky>=3.7.0",
29 "hypothesis>=3.31.2",
30 "importlib-metadata<5.0;python_version<'3.8'",
31 "pytest>=7.0.0",
32 "pytest-asyncio>=0.18.1,<0.23",
33 "pytest-mock>=1.10",
34 "pytest-watch>=4.2",
35 "pytest-xdist>=1.29",
36 "setuptools>=38.6.0",
37 "tox>=3.18.0",
38 "tqdm>4.32",
39 "twine>=1.13",
40 "when-changed>=0.3.0",
41 "build>=0.9.0",
42 ],
43 "ipfs": [
44 "ipfshttpclient==0.8.0a2",
45 ],
46 }
47
48 extras_require["dev"] = (
49 extras_require["tester"]
50 + extras_require["linter"]
51 + extras_require["docs"]
52 + extras_require["ipfs"]
53 + extras_require["dev"]
54 )
55
56 with open("./README.md") as readme:
57 long_description = readme.read()
58
59 setup(
60 name="web3",
61 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
62 version="6.14.0",
63 description="""web3.py""",
64 long_description_content_type="text/markdown",
65 long_description=long_description,
66 author="The Ethereum Foundation",
67 author_email="[email protected]",
68 url="https://github.com/ethereum/web3.py",
69 include_package_data=True,
70 install_requires=[
71 "aiohttp>=3.7.4.post0",
72 "eth-abi>=4.0.0",
73 "eth-account>=0.8.0",
74 "eth-hash[pycryptodome]>=0.5.1",
75 "eth-typing>=3.0.0",
76 "eth-utils>=2.1.0",
77 "hexbytes>=0.1.0,<0.4.0",
78 "jsonschema>=4.0.0",
79 "protobuf>=4.21.6",
80 "pydantic>=2.4.0",
81 "pywin32>=223;platform_system=='Windows'",
82 "requests>=2.16.0",
83 "typing-extensions>=4.0.1",
84 "websockets>=10.0.0",
85 "pyunormalize>=15.0.0",
86 ],
87 python_requires=">=3.8",
88 extras_require=extras_require,
89 py_modules=["web3", "ens", "ethpm"],
90 entry_points={"pytest11": ["pytest_ethereum = web3.tools.pytest_ethereum.plugins"]},
91 license="MIT",
92 zip_safe=False,
93 keywords="ethereum",
94 packages=find_packages(exclude=["tests", "tests.*"]),
95 package_data={"web3": ["py.typed"]},
96 classifiers=[
97 "Development Status :: 5 - Production/Stable",
98 "Intended Audience :: Developers",
99 "License :: OSI Approved :: MIT License",
100 "Natural Language :: English",
101 "Programming Language :: Python :: 3",
102 "Programming Language :: Python :: 3.8",
103 "Programming Language :: Python :: 3.9",
104 "Programming Language :: Python :: 3.10",
105 "Programming Language :: Python :: 3.11",
106 ],
107 )
108
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,7 +6,7 @@
extras_require = {
"tester": [
- "eth-tester[py-evm]==v0.10.0-b.1",
+ "eth-tester[py-evm]==v0.10.0-b.3",
"py-geth>=4.1.0",
],
"linter": [
@@ -73,7 +73,7 @@
"eth-account>=0.8.0",
"eth-hash[pycryptodome]>=0.5.1",
"eth-typing>=3.0.0",
- "eth-utils>=2.1.0",
+ "eth-utils>=4.0.0",
"hexbytes>=0.1.0,<0.4.0",
"jsonschema>=4.0.0",
"protobuf>=4.21.6",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,7 +6,7 @@\n \n extras_require = {\n \"tester\": [\n- \"eth-tester[py-evm]==v0.10.0-b.1\",\n+ \"eth-tester[py-evm]==v0.10.0-b.3\",\n \"py-geth>=4.1.0\",\n ],\n \"linter\": [\n@@ -73,7 +73,7 @@\n \"eth-account>=0.8.0\",\n \"eth-hash[pycryptodome]>=0.5.1\",\n \"eth-typing>=3.0.0\",\n- \"eth-utils>=2.1.0\",\n+ \"eth-utils>=4.0.0\",\n \"hexbytes>=0.1.0,<0.4.0\",\n \"jsonschema>=4.0.0\",\n \"protobuf>=4.21.6\",\n", "issue": "Web3.isAddress doesn't work for non prefixed checksumed values\n* Version: 4.0.0b11\r\n* Python: 3.6\r\n* OS: linux\r\n\r\n### What was wrong?\r\n\r\nAs stated in the docs http://web3py.readthedocs.io/en/latest/overview.html#Web3.isAddress the function Web3.isAddress(value) should **allow both 0x prefixed and non prefixed values**.\r\n\r\nIf the address is not checksumed, it's ok not to have the **0x**:\r\n\r\n```\r\n>>> Web3.isAddress('d3cda913deb6f67967b99d67acdfa1712c293601')\r\n>>> True\r\n```\r\n\r\nBut if it's checksumed\r\n\r\n```\r\n>>> Web3.isAddress('d3CdA913deB6f67967B99D67aCDFa1712C293601')\r\n>>> False\r\n```\r\n\r\nNo problem if we add the **0x**:\r\n\r\n```\r\n>>> Web3.isAddress('0xd3CdA913deB6f67967B99D67aCDFa1712C293601')\r\n>>> True\r\n```\r\n\r\n### How can it be fixed?\r\n\r\nChanging the documentation to state that checksumed addresses must have 0x or changing the function to accept checksumed addresses with 0x. I would just remove 0x at the beginning of the function (if found) and work with the address as that. \r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n \"tester\": [\n \"eth-tester[py-evm]==v0.10.0-b.1\",\n \"py-geth>=4.1.0\",\n ],\n \"linter\": [\n \"black>=22.1.0\",\n \"flake8==3.8.3\",\n \"isort>=5.11.0\",\n \"mypy==1.4.1\",\n \"types-setuptools>=57.4.4\",\n \"types-requests>=2.26.1\",\n \"types-protobuf==3.19.13\",\n ],\n \"docs\": [\n \"sphinx>=5.3.0\",\n \"sphinx_rtd_theme>=1.0.0\",\n \"towncrier>=21,<22\",\n ],\n \"dev\": [\n \"bumpversion\",\n \"flaky>=3.7.0\",\n \"hypothesis>=3.31.2\",\n \"importlib-metadata<5.0;python_version<'3.8'\",\n \"pytest>=7.0.0\",\n \"pytest-asyncio>=0.18.1,<0.23\",\n \"pytest-mock>=1.10\",\n \"pytest-watch>=4.2\",\n \"pytest-xdist>=1.29\",\n \"setuptools>=38.6.0\",\n \"tox>=3.18.0\",\n \"tqdm>4.32\",\n \"twine>=1.13\",\n \"when-changed>=0.3.0\",\n \"build>=0.9.0\",\n ],\n \"ipfs\": [\n \"ipfshttpclient==0.8.0a2\",\n ],\n}\n\nextras_require[\"dev\"] = (\n extras_require[\"tester\"]\n + extras_require[\"linter\"]\n + extras_require[\"docs\"]\n + extras_require[\"ipfs\"]\n + extras_require[\"dev\"]\n)\n\nwith open(\"./README.md\") as readme:\n long_description = readme.read()\n\nsetup(\n name=\"web3\",\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version=\"6.14.0\",\n description=\"\"\"web3.py\"\"\",\n long_description_content_type=\"text/markdown\",\n long_description=long_description,\n author=\"The Ethereum Foundation\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ethereum/web3.py\",\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0\",\n \"eth-abi>=4.0.0\",\n \"eth-account>=0.8.0\",\n \"eth-hash[pycryptodome]>=0.5.1\",\n \"eth-typing>=3.0.0\",\n \"eth-utils>=2.1.0\",\n \"hexbytes>=0.1.0,<0.4.0\",\n \"jsonschema>=4.0.0\",\n \"protobuf>=4.21.6\",\n \"pydantic>=2.4.0\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0\",\n \"typing-extensions>=4.0.1\",\n \"websockets>=10.0.0\",\n \"pyunormalize>=15.0.0\",\n ],\n python_requires=\">=3.8\",\n extras_require=extras_require,\n py_modules=[\"web3\", \"ens\", \"ethpm\"],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords=\"ethereum\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n)\n", "path": "setup.py"}]}
| 2,054 | 219 |
gh_patches_debug_17300
|
rasdani/github-patches
|
git_diff
|
techmatters__terraso-backend-889
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RuntimeWarning: DateTimeField Log.client_timestamp received a naive datetime
## Description
When running `make test`, many warnings of this form are observed:
```
/home/terraso/.local/lib/python3.11/site-packages/django/db/models/fields/__init__.py:1595: RuntimeWarning: DateTimeField Log.client_timestamp received a naive datetime (2023-07-11 22:39:48.700825) while time zone support is active.
warnings.warn(
```
</issue>
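The warning is raised because a naive `datetime.now()` is assigned to `Log.client_timestamp` while Django's time zone support (`USE_TZ`) is active. A minimal sketch of the usual remedy, converting the timestamp with `django.utils.timezone.make_aware()` when time zone support is on, which is also what the accepted patch further down in this record does:

```python
from datetime import datetime
from typing import Optional

from django.conf import settings
from django.utils.timezone import make_aware


def aware_client_time(client_time: Optional[datetime] = None) -> datetime:
    """Return a timestamp that is safe to assign to a DateTimeField.

    With settings.USE_TZ enabled, a naive datetime is converted via
    make_aware(); otherwise it is returned unchanged.
    """
    if client_time is None:
        client_time = datetime.now()
    if settings.USE_TZ and client_time.tzinfo is None:
        client_time = make_aware(client_time)
    return client_time
```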
<code>
[start of terraso_backend/apps/audit_logs/services.py]
1 import typing
2 from datetime import datetime
3 from enum import Enum
4
5 from django.contrib.contenttypes.models import ContentType
6 from django.core.paginator import Paginator
7 from django.db import transaction
8 from django.db.models.query import QuerySet
9
10 from apps.core.models import User
11
12 from . import api, models
13
14 TEMPLATE = "{client_time} - {user} {action} {resource}"
15
16
17 class _AuditLogService:
18 """
19 AuditLogService implements the AuditLog protocol
20 """
21
22 def log(
23 self,
24 user: User,
25 action: api.ACTIONS,
26 resource: object,
27 metadata: typing.Optional[dict[str, any]] = None,
28 client_time: typing.Optional[datetime] = None,
29 ) -> None:
30 """
31 log logs an action performed by a user on a resource
32 example:
33 log(user, "create", resource, client_time=1234567890)
34 :param client_time:
35 :param metadata:
36 :param action:
37 :param user:
38 :type resource: object
39
40 """
41 if not hasattr(user, "id"):
42 raise ValueError("Invalid user")
43
44 get_user_readable = getattr(user, "human_readable", None)
45 user_readable = get_user_readable() if callable(get_user_readable) else user.full_name()
46
47 if not isinstance(action, Enum) or not hasattr(models.Events, action.value):
48 raise ValueError("Invalid action")
49
50 resource_id = resource.id if hasattr(resource, "id") else None
51 if resource_id is None:
52 raise ValueError("Invalid resource")
53
54 get_resource_human_readable = getattr(resource, "human_readable", None)
55 if callable(get_resource_human_readable):
56 resource_human_readable = get_resource_human_readable()
57 else:
58 resource_human_readable = resource_id
59
60 content_type = ContentType.objects.get_for_model(resource)
61 resource_obj = resource
62
63 resource_repr = resource.__dict__.__str__()
64
65 if metadata is None:
66 metadata = {}
67
68 with transaction.atomic():
69 log = models.Log(
70 user=user,
71 event=action.value,
72 resource_id=resource_id,
73 resource_content_type=content_type,
74 resource_object=resource_obj,
75 resource_json_repr=resource_repr,
76 resource_human_readable=str(resource_human_readable),
77 user_human_readable=str(user_readable),
78 )
79
80 if client_time is None:
81 client_time = datetime.now()
82 log.client_timestamp = client_time
83
84 log.metadata = metadata
85 log.save()
86
87
88 class LogData:
89 """
90 LazyPaginator implements the Paginator protocol
91 """
92
93 def __init__(self, data: QuerySet):
94 self.data = data
95
96 def get_paginator(self, page_size: int = 10):
97 return Paginator(self.data, page_size)
98
99 def __len__(self):
100 return len(self.data)
101
102 def __iter__(self):
103 return iter(self.data)
104
105
106 def new_audit_logger() -> api.AuditLog:
107 """
108 new_audit_logger creates a new audit log
109 """
110 return _AuditLogService()
111
[end of terraso_backend/apps/audit_logs/services.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/terraso_backend/apps/audit_logs/services.py b/terraso_backend/apps/audit_logs/services.py
--- a/terraso_backend/apps/audit_logs/services.py
+++ b/terraso_backend/apps/audit_logs/services.py
@@ -2,6 +2,7 @@
from datetime import datetime
from enum import Enum
+from django.conf import settings
from django.contrib.contenttypes.models import ContentType
from django.core.paginator import Paginator
from django.db import transaction
@@ -79,7 +80,12 @@
if client_time is None:
client_time = datetime.now()
- log.client_timestamp = client_time
+ if settings.USE_TZ:
+ from django.utils.timezone import make_aware
+
+ log.client_timestamp = make_aware(client_time)
+ else:
+ log.client_timestamp = client_time
log.metadata = metadata
log.save()
|
{"golden_diff": "diff --git a/terraso_backend/apps/audit_logs/services.py b/terraso_backend/apps/audit_logs/services.py\n--- a/terraso_backend/apps/audit_logs/services.py\n+++ b/terraso_backend/apps/audit_logs/services.py\n@@ -2,6 +2,7 @@\n from datetime import datetime\n from enum import Enum\n \n+from django.conf import settings\n from django.contrib.contenttypes.models import ContentType\n from django.core.paginator import Paginator\n from django.db import transaction\n@@ -79,7 +80,12 @@\n \n if client_time is None:\n client_time = datetime.now()\n- log.client_timestamp = client_time\n+ if settings.USE_TZ:\n+ from django.utils.timezone import make_aware\n+\n+ log.client_timestamp = make_aware(client_time)\n+ else:\n+ log.client_timestamp = client_time\n \n log.metadata = metadata\n log.save()\n", "issue": "RuntimeWarning: DateTimeField Log.client_timestamp received a naive datetime\n## Description\r\nWhen running `make test`, many warnings of this form are observed:\r\n```\r\n /home/terraso/.local/lib/python3.11/site-packages/django/db/models/fields/__init__.py:1595: RuntimeWarning: DateTimeField Log.client_timestamp received a naive datetime (2023-07-11 22:39:48.700825) while time zone support is active.\r\n warnings.warn(\r\n```\n", "before_files": [{"content": "import typing\nfrom datetime import datetime\nfrom enum import Enum\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.paginator import Paginator\nfrom django.db import transaction\nfrom django.db.models.query import QuerySet\n\nfrom apps.core.models import User\n\nfrom . import api, models\n\nTEMPLATE = \"{client_time} - {user} {action} {resource}\"\n\n\nclass _AuditLogService:\n \"\"\"\n AuditLogService implements the AuditLog protocol\n \"\"\"\n\n def log(\n self,\n user: User,\n action: api.ACTIONS,\n resource: object,\n metadata: typing.Optional[dict[str, any]] = None,\n client_time: typing.Optional[datetime] = None,\n ) -> None:\n \"\"\"\n log logs an action performed by a user on a resource\n example:\n log(user, \"create\", resource, client_time=1234567890)\n :param client_time:\n :param metadata:\n :param action:\n :param user:\n :type resource: object\n\n \"\"\"\n if not hasattr(user, \"id\"):\n raise ValueError(\"Invalid user\")\n\n get_user_readable = getattr(user, \"human_readable\", None)\n user_readable = get_user_readable() if callable(get_user_readable) else user.full_name()\n\n if not isinstance(action, Enum) or not hasattr(models.Events, action.value):\n raise ValueError(\"Invalid action\")\n\n resource_id = resource.id if hasattr(resource, \"id\") else None\n if resource_id is None:\n raise ValueError(\"Invalid resource\")\n\n get_resource_human_readable = getattr(resource, \"human_readable\", None)\n if callable(get_resource_human_readable):\n resource_human_readable = get_resource_human_readable()\n else:\n resource_human_readable = resource_id\n\n content_type = ContentType.objects.get_for_model(resource)\n resource_obj = resource\n\n resource_repr = resource.__dict__.__str__()\n\n if metadata is None:\n metadata = {}\n\n with transaction.atomic():\n log = models.Log(\n user=user,\n event=action.value,\n resource_id=resource_id,\n resource_content_type=content_type,\n resource_object=resource_obj,\n resource_json_repr=resource_repr,\n resource_human_readable=str(resource_human_readable),\n user_human_readable=str(user_readable),\n )\n\n if client_time is None:\n client_time = datetime.now()\n log.client_timestamp = client_time\n\n log.metadata = metadata\n 
log.save()\n\n\nclass LogData:\n \"\"\"\n LazyPaginator implements the Paginator protocol\n \"\"\"\n\n def __init__(self, data: QuerySet):\n self.data = data\n\n def get_paginator(self, page_size: int = 10):\n return Paginator(self.data, page_size)\n\n def __len__(self):\n return len(self.data)\n\n def __iter__(self):\n return iter(self.data)\n\n\ndef new_audit_logger() -> api.AuditLog:\n \"\"\"\n new_audit_logger creates a new audit log\n \"\"\"\n return _AuditLogService()\n", "path": "terraso_backend/apps/audit_logs/services.py"}]}
| 1,535 | 198 |
gh_patches_debug_10615
|
rasdani/github-patches
|
git_diff
|
pandas-dev__pandas-14007
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DEPR: deprecate SparseList
</issue>
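pandas normally deprecates a public class by emitting a `FutureWarning` from its constructor, so existing code keeps working while users are pointed away from it; the accepted change (shown in the patch at the end of this record) does exactly that in `SparseList.__init__`, referencing gh-13784. A stripped-down sketch of the pattern, not the real pandas class:

```python
import warnings


class SparseList:
    """Sketch of the deprecation pattern only; not the actual pandas class."""

    def __init__(self, data=None, fill_value=None):
        # see gh-13784: warn at construction time, attributing the warning
        # to the caller via stacklevel=2
        warnings.warn(
            "SparseList is deprecated and will be removed in a future version",
            FutureWarning,
            stacklevel=2,
        )
        self.fill_value = fill_value
        self._chunks = []
        if data is not None:
            self._chunks.append(data)
```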
<code>
[start of pandas/sparse/list.py]
1 import numpy as np
2 from pandas.core.base import PandasObject
3 from pandas.formats.printing import pprint_thing
4
5 from pandas.types.common import is_scalar
6 from pandas.sparse.array import SparseArray
7 import pandas._sparse as splib
8
9
10 class SparseList(PandasObject):
11
12 """
13 Data structure for accumulating data to be converted into a
14 SparseArray. Has similar API to the standard Python list
15
16 Parameters
17 ----------
18 data : scalar or array-like
19 fill_value : scalar, default NaN
20 """
21
22 def __init__(self, data=None, fill_value=np.nan):
23 self.fill_value = fill_value
24 self._chunks = []
25
26 if data is not None:
27 self.append(data)
28
29 def __unicode__(self):
30 contents = '\n'.join(repr(c) for c in self._chunks)
31 return '%s\n%s' % (object.__repr__(self), pprint_thing(contents))
32
33 def __len__(self):
34 return sum(len(c) for c in self._chunks)
35
36 def __getitem__(self, i):
37 if i < 0:
38 if i + len(self) < 0: # pragma: no cover
39 raise ValueError('%d out of range' % i)
40 i += len(self)
41
42 passed = 0
43 j = 0
44 while i >= passed + len(self._chunks[j]):
45 passed += len(self._chunks[j])
46 j += 1
47 return self._chunks[j][i - passed]
48
49 def __setitem__(self, i, value):
50 raise NotImplementedError
51
52 @property
53 def nchunks(self):
54 return len(self._chunks)
55
56 @property
57 def is_consolidated(self):
58 return self.nchunks == 1
59
60 def consolidate(self, inplace=True):
61 """
62 Internally consolidate chunks of data
63
64 Parameters
65 ----------
66 inplace : boolean, default True
67 Modify the calling object instead of constructing a new one
68
69 Returns
70 -------
71 splist : SparseList
72 If inplace=False, new object, otherwise reference to existing
73 object
74 """
75 if not inplace:
76 result = self.copy()
77 else:
78 result = self
79
80 if result.is_consolidated:
81 return result
82
83 result._consolidate_inplace()
84 return result
85
86 def _consolidate_inplace(self):
87 new_values = np.concatenate([c.sp_values for c in self._chunks])
88 new_index = _concat_sparse_indexes([c.sp_index for c in self._chunks])
89 new_arr = SparseArray(new_values, sparse_index=new_index,
90 fill_value=self.fill_value)
91 self._chunks = [new_arr]
92
93 def copy(self):
94 """
95 Return copy of the list
96
97 Returns
98 -------
99 new_list : SparseList
100 """
101 new_splist = SparseList(fill_value=self.fill_value)
102 new_splist._chunks = list(self._chunks)
103 return new_splist
104
105 def to_array(self):
106 """
107 Return SparseArray from data stored in the SparseList
108
109 Returns
110 -------
111 sparr : SparseArray
112 """
113 self.consolidate(inplace=True)
114 return self._chunks[0]
115
116 def append(self, value):
117 """
118 Append element or array-like chunk of data to the SparseList
119
120 Parameters
121 ----------
122 value: scalar or array-like
123 """
124 if is_scalar(value):
125 value = [value]
126
127 sparr = SparseArray(value, fill_value=self.fill_value)
128 self._chunks.append(sparr)
129 self._consolidated = False
130
131
132 def _concat_sparse_indexes(indexes):
133 all_indices = []
134 total_length = 0
135
136 for index in indexes:
137 # increment by offset
138 inds = index.to_int_index().indices + total_length
139
140 all_indices.append(inds)
141 total_length += index.length
142
143 return splib.IntIndex(total_length, np.concatenate(all_indices))
144
[end of pandas/sparse/list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pandas/sparse/list.py b/pandas/sparse/list.py
--- a/pandas/sparse/list.py
+++ b/pandas/sparse/list.py
@@ -1,3 +1,4 @@
+import warnings
import numpy as np
from pandas.core.base import PandasObject
from pandas.formats.printing import pprint_thing
@@ -20,6 +21,11 @@
"""
def __init__(self, data=None, fill_value=np.nan):
+
+ # see gh-13784
+ warnings.warn("SparseList is deprecated and will be removed "
+ "in a future version", FutureWarning, stacklevel=2)
+
self.fill_value = fill_value
self._chunks = []
|
{"golden_diff": "diff --git a/pandas/sparse/list.py b/pandas/sparse/list.py\n--- a/pandas/sparse/list.py\n+++ b/pandas/sparse/list.py\n@@ -1,3 +1,4 @@\n+import warnings\n import numpy as np\n from pandas.core.base import PandasObject\n from pandas.formats.printing import pprint_thing\n@@ -20,6 +21,11 @@\n \"\"\"\n \n def __init__(self, data=None, fill_value=np.nan):\n+\n+ # see gh-13784\n+ warnings.warn(\"SparseList is deprecated and will be removed \"\n+ \"in a future version\", FutureWarning, stacklevel=2)\n+\n self.fill_value = fill_value\n self._chunks = []\n", "issue": "DEPR: deprecate SparseList\n\n", "before_files": [{"content": "import numpy as np\nfrom pandas.core.base import PandasObject\nfrom pandas.formats.printing import pprint_thing\n\nfrom pandas.types.common import is_scalar\nfrom pandas.sparse.array import SparseArray\nimport pandas._sparse as splib\n\n\nclass SparseList(PandasObject):\n\n \"\"\"\n Data structure for accumulating data to be converted into a\n SparseArray. Has similar API to the standard Python list\n\n Parameters\n ----------\n data : scalar or array-like\n fill_value : scalar, default NaN\n \"\"\"\n\n def __init__(self, data=None, fill_value=np.nan):\n self.fill_value = fill_value\n self._chunks = []\n\n if data is not None:\n self.append(data)\n\n def __unicode__(self):\n contents = '\\n'.join(repr(c) for c in self._chunks)\n return '%s\\n%s' % (object.__repr__(self), pprint_thing(contents))\n\n def __len__(self):\n return sum(len(c) for c in self._chunks)\n\n def __getitem__(self, i):\n if i < 0:\n if i + len(self) < 0: # pragma: no cover\n raise ValueError('%d out of range' % i)\n i += len(self)\n\n passed = 0\n j = 0\n while i >= passed + len(self._chunks[j]):\n passed += len(self._chunks[j])\n j += 1\n return self._chunks[j][i - passed]\n\n def __setitem__(self, i, value):\n raise NotImplementedError\n\n @property\n def nchunks(self):\n return len(self._chunks)\n\n @property\n def is_consolidated(self):\n return self.nchunks == 1\n\n def consolidate(self, inplace=True):\n \"\"\"\n Internally consolidate chunks of data\n\n Parameters\n ----------\n inplace : boolean, default True\n Modify the calling object instead of constructing a new one\n\n Returns\n -------\n splist : SparseList\n If inplace=False, new object, otherwise reference to existing\n object\n \"\"\"\n if not inplace:\n result = self.copy()\n else:\n result = self\n\n if result.is_consolidated:\n return result\n\n result._consolidate_inplace()\n return result\n\n def _consolidate_inplace(self):\n new_values = np.concatenate([c.sp_values for c in self._chunks])\n new_index = _concat_sparse_indexes([c.sp_index for c in self._chunks])\n new_arr = SparseArray(new_values, sparse_index=new_index,\n fill_value=self.fill_value)\n self._chunks = [new_arr]\n\n def copy(self):\n \"\"\"\n Return copy of the list\n\n Returns\n -------\n new_list : SparseList\n \"\"\"\n new_splist = SparseList(fill_value=self.fill_value)\n new_splist._chunks = list(self._chunks)\n return new_splist\n\n def to_array(self):\n \"\"\"\n Return SparseArray from data stored in the SparseList\n\n Returns\n -------\n sparr : SparseArray\n \"\"\"\n self.consolidate(inplace=True)\n return self._chunks[0]\n\n def append(self, value):\n \"\"\"\n Append element or array-like chunk of data to the SparseList\n\n Parameters\n ----------\n value: scalar or array-like\n \"\"\"\n if is_scalar(value):\n value = [value]\n\n sparr = SparseArray(value, fill_value=self.fill_value)\n self._chunks.append(sparr)\n self._consolidated = 
False\n\n\ndef _concat_sparse_indexes(indexes):\n all_indices = []\n total_length = 0\n\n for index in indexes:\n # increment by offset\n inds = index.to_int_index().indices + total_length\n\n all_indices.append(inds)\n total_length += index.length\n\n return splib.IntIndex(total_length, np.concatenate(all_indices))\n", "path": "pandas/sparse/list.py"}]}
| 1,725 | 164 |
gh_patches_debug_8799
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-9612
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Last Active Organization not set in API
Currently, a user's last active organization is set only in the Django code: https://github.com/getsentry/sentry/blob/master/src/sentry/web/frontend/base.py#L34
This means that last active organization is not set when a user navigates to a view via a front-end route.
As more of Sentry's views are converted to React, we will lose accurate functionality around a user's last active organization.
</issue>
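The accepted fix (see the patch at the end of this record) makes the API layer remember the organization for cookie-based requests, mirroring what the server-rendered views already do: when `request.auth` is `None` and a user is present, the organization slug is written to `request.session['activeorg']`. A reduced sketch of that idea:

```python
def track_active_organization(request, organization):
    """Sketch: record the last active organization for session-based requests.

    Mirrors the accepted patch, which performs this inside
    OrganizationEndpoint.convert_args; API-key requests (request.auth set)
    are deliberately skipped.
    """
    if request.auth is None and request.user:
        request.session["activeorg"] = organization.slug
```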
<code>
[start of src/sentry/api/bases/organization.py]
1 from __future__ import absolute_import
2
3 from sentry.api.base import Endpoint, logger
4 from sentry.api.exceptions import ResourceDoesNotExist, SsoRequired, TwoFactorRequired
5 from sentry.api.permissions import ScopedPermission
6 from sentry.app import raven
7 from sentry.auth import access
8 from sentry.auth.superuser import is_active_superuser
9 from sentry.models import (
10 ApiKey, Authenticator, Organization, OrganizationMemberTeam, Project, ProjectTeam, ReleaseProject, Team
11 )
12 from sentry.utils import auth
13
14
15 class OrganizationPermission(ScopedPermission):
16 scope_map = {
17 'GET': ['org:read', 'org:write', 'org:admin'],
18 'POST': ['org:write', 'org:admin'],
19 'PUT': ['org:write', 'org:admin'],
20 'DELETE': ['org:admin'],
21 }
22
23 def is_not_2fa_compliant(self, user, organization):
24 return organization.flags.require_2fa and not Authenticator.objects.user_has_2fa(user)
25
26 def needs_sso(self, request, organization):
27 # XXX(dcramer): this is very similar to the server-rendered views
28 # logic for checking valid SSO
29 if not request.access.requires_sso:
30 return False
31 if not auth.has_completed_sso(request, organization.id):
32 return True
33 if not request.access.sso_is_valid:
34 return True
35 return False
36
37 def has_object_permission(self, request, view, organization):
38 if request.user and request.user.is_authenticated() and request.auth:
39 request.access = access.from_request(
40 request,
41 organization,
42 scopes=request.auth.get_scopes(),
43 )
44
45 elif request.auth:
46 if request.auth.organization_id == organization.id:
47 request.access = access.from_auth(request.auth)
48 else:
49 request.access = access.DEFAULT
50
51 else:
52 request.access = access.from_request(request, organization)
53
54 if auth.is_user_signed_request(request):
55 # if the user comes from a signed request
56 # we let them pass if sso is enabled
57 logger.info(
58 'access.signed-sso-passthrough',
59 extra={
60 'organization_id': organization.id,
61 'user_id': request.user.id,
62 }
63 )
64 elif request.user.is_authenticated():
65 # session auth needs to confirm various permissions
66 if self.needs_sso(request, organization):
67
68 logger.info(
69 'access.must-sso',
70 extra={
71 'organization_id': organization.id,
72 'user_id': request.user.id,
73 }
74 )
75
76 raise SsoRequired(organization)
77
78 if self.is_not_2fa_compliant(
79 request.user, organization):
80 logger.info(
81 'access.not-2fa-compliant',
82 extra={
83 'organization_id': organization.id,
84 'user_id': request.user.id,
85 }
86 )
87 raise TwoFactorRequired()
88
89 allowed_scopes = set(self.scope_map.get(request.method, []))
90 return any(request.access.has_scope(s) for s in allowed_scopes)
91
92
93 # These are based on ProjectReleasePermission
94 # additional checks to limit actions to releases
95 # associated with projects people have access to
96 class OrganizationReleasePermission(OrganizationPermission):
97 scope_map = {
98 'GET': ['project:read', 'project:write', 'project:admin', 'project:releases'],
99 'POST': ['project:write', 'project:admin', 'project:releases'],
100 'PUT': ['project:write', 'project:admin', 'project:releases'],
101 'DELETE': ['project:admin', 'project:releases'],
102 }
103
104
105 class OrganizationIntegrationsPermission(OrganizationPermission):
106 scope_map = {
107 'GET': ['org:read', 'org:write', 'org:admin', 'org:integrations'],
108 'POST': ['org:write', 'org:admin', 'org:integrations'],
109 'PUT': ['org:write', 'org:admin', 'org:integrations'],
110 'DELETE': ['org:admin', 'org:integrations'],
111 }
112
113
114 class OrganizationAdminPermission(OrganizationPermission):
115 scope_map = {
116 'GET': ['org:admin'],
117 'POST': ['org:admin'],
118 'PUT': ['org:admin'],
119 'DELETE': ['org:admin'],
120 }
121
122
123 class OrganizationAuthProviderPermission(OrganizationPermission):
124 scope_map = {
125 'GET': ['org:read'],
126 'POST': ['org:admin'],
127 'PUT': ['org:admin'],
128 'DELETE': ['org:admin'],
129 }
130
131
132 class OrganizationEndpoint(Endpoint):
133 permission_classes = (OrganizationPermission, )
134
135 def convert_args(self, request, organization_slug, *args, **kwargs):
136 try:
137 organization = Organization.objects.get_from_cache(
138 slug=organization_slug,
139 )
140 except Organization.DoesNotExist:
141 raise ResourceDoesNotExist
142
143 self.check_object_permissions(request, organization)
144
145 raven.tags_context({
146 'organization': organization.id,
147 })
148
149 request._request.organization = organization
150
151 kwargs['organization'] = organization
152 return (args, kwargs)
153
154
155 class OrganizationReleasesBaseEndpoint(OrganizationEndpoint):
156 permission_classes = (OrganizationReleasePermission, )
157
158 def get_allowed_projects(self, request, organization):
159 has_valid_api_key = False
160 if isinstance(request.auth, ApiKey):
161 if request.auth.organization_id != organization.id:
162 return []
163 has_valid_api_key = request.auth.has_scope('project:releases') or \
164 request.auth.has_scope('project:write')
165
166 if not (has_valid_api_key or request.user.is_authenticated()):
167 return []
168
169 if has_valid_api_key or is_active_superuser(request) or organization.flags.allow_joinleave:
170 allowed_teams = Team.objects.filter(organization=organization).values_list(
171 'id', flat=True
172 )
173 else:
174 allowed_teams = OrganizationMemberTeam.objects.filter(
175 organizationmember__user=request.user,
176 team__organization_id=organization.id,
177 ).values_list(
178 'team_id', flat=True
179 )
180
181 return Project.objects.filter(
182 id__in=ProjectTeam.objects.filter(
183 team_id__in=allowed_teams,
184 ).values_list('project_id', flat=True)
185 )
186
187 def has_release_permission(self, request, organization, release):
188 return ReleaseProject.objects.filter(
189 release=release,
190 project__in=self.get_allowed_projects(request, organization),
191 ).exists()
192
[end of src/sentry/api/bases/organization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sentry/api/bases/organization.py b/src/sentry/api/bases/organization.py
--- a/src/sentry/api/bases/organization.py
+++ b/src/sentry/api/bases/organization.py
@@ -145,9 +145,13 @@
raven.tags_context({
'organization': organization.id,
})
-
request._request.organization = organization
+ # Track the 'active' organization when the request came from
+ # a cookie based agent (react app)
+ if request.auth is None and request.user:
+ request.session['activeorg'] = organization.slug
+
kwargs['organization'] = organization
return (args, kwargs)
|
{"golden_diff": "diff --git a/src/sentry/api/bases/organization.py b/src/sentry/api/bases/organization.py\n--- a/src/sentry/api/bases/organization.py\n+++ b/src/sentry/api/bases/organization.py\n@@ -145,9 +145,13 @@\n raven.tags_context({\n 'organization': organization.id,\n })\n-\n request._request.organization = organization\n \n+ # Track the 'active' organization when the request came from\n+ # a cookie based agent (react app)\n+ if request.auth is None and request.user:\n+ request.session['activeorg'] = organization.slug\n+\n kwargs['organization'] = organization\n return (args, kwargs)\n", "issue": "Last Active Organization not set in API\nCurrently, a user's last active organization is set only in the Django code: https://github.com/getsentry/sentry/blob/master/src/sentry/web/frontend/base.py#L34\r\n\r\nThis means that last active organization is not set when a user navigates to a view via a front-end route.\r\n\r\nAs more of Sentry's views are converted to React, we will lose accurate functionality around a user's last active organization.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom sentry.api.base import Endpoint, logger\nfrom sentry.api.exceptions import ResourceDoesNotExist, SsoRequired, TwoFactorRequired\nfrom sentry.api.permissions import ScopedPermission\nfrom sentry.app import raven\nfrom sentry.auth import access\nfrom sentry.auth.superuser import is_active_superuser\nfrom sentry.models import (\n ApiKey, Authenticator, Organization, OrganizationMemberTeam, Project, ProjectTeam, ReleaseProject, Team\n)\nfrom sentry.utils import auth\n\n\nclass OrganizationPermission(ScopedPermission):\n scope_map = {\n 'GET': ['org:read', 'org:write', 'org:admin'],\n 'POST': ['org:write', 'org:admin'],\n 'PUT': ['org:write', 'org:admin'],\n 'DELETE': ['org:admin'],\n }\n\n def is_not_2fa_compliant(self, user, organization):\n return organization.flags.require_2fa and not Authenticator.objects.user_has_2fa(user)\n\n def needs_sso(self, request, organization):\n # XXX(dcramer): this is very similar to the server-rendered views\n # logic for checking valid SSO\n if not request.access.requires_sso:\n return False\n if not auth.has_completed_sso(request, organization.id):\n return True\n if not request.access.sso_is_valid:\n return True\n return False\n\n def has_object_permission(self, request, view, organization):\n if request.user and request.user.is_authenticated() and request.auth:\n request.access = access.from_request(\n request,\n organization,\n scopes=request.auth.get_scopes(),\n )\n\n elif request.auth:\n if request.auth.organization_id == organization.id:\n request.access = access.from_auth(request.auth)\n else:\n request.access = access.DEFAULT\n\n else:\n request.access = access.from_request(request, organization)\n\n if auth.is_user_signed_request(request):\n # if the user comes from a signed request\n # we let them pass if sso is enabled\n logger.info(\n 'access.signed-sso-passthrough',\n extra={\n 'organization_id': organization.id,\n 'user_id': request.user.id,\n }\n )\n elif request.user.is_authenticated():\n # session auth needs to confirm various permissions\n if self.needs_sso(request, organization):\n\n logger.info(\n 'access.must-sso',\n extra={\n 'organization_id': organization.id,\n 'user_id': request.user.id,\n }\n )\n\n raise SsoRequired(organization)\n\n if self.is_not_2fa_compliant(\n request.user, organization):\n logger.info(\n 'access.not-2fa-compliant',\n extra={\n 'organization_id': organization.id,\n 'user_id': 
request.user.id,\n }\n )\n raise TwoFactorRequired()\n\n allowed_scopes = set(self.scope_map.get(request.method, []))\n return any(request.access.has_scope(s) for s in allowed_scopes)\n\n\n# These are based on ProjectReleasePermission\n# additional checks to limit actions to releases\n# associated with projects people have access to\nclass OrganizationReleasePermission(OrganizationPermission):\n scope_map = {\n 'GET': ['project:read', 'project:write', 'project:admin', 'project:releases'],\n 'POST': ['project:write', 'project:admin', 'project:releases'],\n 'PUT': ['project:write', 'project:admin', 'project:releases'],\n 'DELETE': ['project:admin', 'project:releases'],\n }\n\n\nclass OrganizationIntegrationsPermission(OrganizationPermission):\n scope_map = {\n 'GET': ['org:read', 'org:write', 'org:admin', 'org:integrations'],\n 'POST': ['org:write', 'org:admin', 'org:integrations'],\n 'PUT': ['org:write', 'org:admin', 'org:integrations'],\n 'DELETE': ['org:admin', 'org:integrations'],\n }\n\n\nclass OrganizationAdminPermission(OrganizationPermission):\n scope_map = {\n 'GET': ['org:admin'],\n 'POST': ['org:admin'],\n 'PUT': ['org:admin'],\n 'DELETE': ['org:admin'],\n }\n\n\nclass OrganizationAuthProviderPermission(OrganizationPermission):\n scope_map = {\n 'GET': ['org:read'],\n 'POST': ['org:admin'],\n 'PUT': ['org:admin'],\n 'DELETE': ['org:admin'],\n }\n\n\nclass OrganizationEndpoint(Endpoint):\n permission_classes = (OrganizationPermission, )\n\n def convert_args(self, request, organization_slug, *args, **kwargs):\n try:\n organization = Organization.objects.get_from_cache(\n slug=organization_slug,\n )\n except Organization.DoesNotExist:\n raise ResourceDoesNotExist\n\n self.check_object_permissions(request, organization)\n\n raven.tags_context({\n 'organization': organization.id,\n })\n\n request._request.organization = organization\n\n kwargs['organization'] = organization\n return (args, kwargs)\n\n\nclass OrganizationReleasesBaseEndpoint(OrganizationEndpoint):\n permission_classes = (OrganizationReleasePermission, )\n\n def get_allowed_projects(self, request, organization):\n has_valid_api_key = False\n if isinstance(request.auth, ApiKey):\n if request.auth.organization_id != organization.id:\n return []\n has_valid_api_key = request.auth.has_scope('project:releases') or \\\n request.auth.has_scope('project:write')\n\n if not (has_valid_api_key or request.user.is_authenticated()):\n return []\n\n if has_valid_api_key or is_active_superuser(request) or organization.flags.allow_joinleave:\n allowed_teams = Team.objects.filter(organization=organization).values_list(\n 'id', flat=True\n )\n else:\n allowed_teams = OrganizationMemberTeam.objects.filter(\n organizationmember__user=request.user,\n team__organization_id=organization.id,\n ).values_list(\n 'team_id', flat=True\n )\n\n return Project.objects.filter(\n id__in=ProjectTeam.objects.filter(\n team_id__in=allowed_teams,\n ).values_list('project_id', flat=True)\n )\n\n def has_release_permission(self, request, organization, release):\n return ReleaseProject.objects.filter(\n release=release,\n project__in=self.get_allowed_projects(request, organization),\n ).exists()\n", "path": "src/sentry/api/bases/organization.py"}]}
| 2,478 | 152 |
gh_patches_debug_17785
|
rasdani/github-patches
|
git_diff
|
cisagov__manage.get.gov-1717
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clean up test noise (that includes EPP and migration scripts)
### Issue description
Right now if you run the test suite locally or see the output from github, there is a lot of added prints and logs that make it hard to troubleshoot where your particular error is coming from. This ticket is clean up test noise in general including EPP and migration scripts.
### Acceptance criteria
- [ ] unnecessary prints/logs on tests are removed
### Additional context
_No response_
### Links to other issues
_No response_
</issue>
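One concrete offender is geventconnpool's `_keepalive_periodic`, which lets `PoolError`s raised during keepalive surface as printed output during tests; the accepted patch (after the code listing) overrides it so failures go through the module logger instead. A rough sketch of that override, assuming the `keepalive`, `size`, `get()`, `_keepalive()` and `exc_classes` attributes of the pool class shown below:

```python
import logging

import gevent

from epplibwrapper.utility.pool_error import PoolError

logger = logging.getLogger(__name__)


class QuietKeepaliveMixin:
    """Sketch: route keepalive failures through logging instead of stdout."""

    def _keepalive_periodic(self):
        delay = float(self.keepalive) / self.size
        while True:
            try:
                # geventconnpool's get() is a context manager yielding a connection
                with self.get() as conn:
                    self._keepalive(conn)
            except PoolError as err:
                logger.error(err.message, exc_info=True)
            except self.exc_classes:
                # nothing to do; the pool will open a replacement later
                pass
            gevent.sleep(delay)
```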
<code>
[start of src/epplibwrapper/utility/pool.py]
1 import logging
2 from typing import List
3 import gevent
4 from geventconnpool import ConnectionPool
5 from epplibwrapper.socket import Socket
6 from epplibwrapper.utility.pool_error import PoolError, PoolErrorCodes
7
8 try:
9 from epplib.commands import Hello
10 from epplib.exceptions import TransportError
11 except ImportError:
12 pass
13
14 from gevent.lock import BoundedSemaphore
15 from collections import deque
16
17 logger = logging.getLogger(__name__)
18
19
20 class EPPConnectionPool(ConnectionPool):
21 """A connection pool for EPPLib.
22
23 Args:
24 client (Client): The client
25 login (commands.Login): Login creds
26 options (dict): Options for the ConnectionPool
27 base class
28 """
29
30 def __init__(self, client, login, options: dict):
31 # For storing shared credentials
32 self._client = client
33 self._login = login
34
35 # Keep track of each greenlet
36 self.greenlets: List[gevent.Greenlet] = []
37
38 # Define optional pool settings.
39 # Kept in a dict so that the parent class,
40 # client.py, can maintain seperation/expandability
41 self.size = 1
42 if "size" in options:
43 self.size = options["size"]
44
45 self.exc_classes = tuple((TransportError,))
46 if "exc_classes" in options:
47 self.exc_classes = options["exc_classes"]
48
49 self.keepalive = None
50 if "keepalive" in options:
51 self.keepalive = options["keepalive"]
52
53 # Determines the period in which new
54 # gevent threads are spun up.
55 # This time period is in seconds. So for instance, .1 would be .1 seconds.
56 self.spawn_frequency = 0.1
57 if "spawn_frequency" in options:
58 self.spawn_frequency = options["spawn_frequency"]
59
60 self.conn: deque = deque()
61 self.lock = BoundedSemaphore(self.size)
62
63 self.populate_all_connections()
64
65 def _new_connection(self):
66 socket = self._create_socket(self._client, self._login)
67 try:
68 connection = socket.connect()
69 return connection
70 except Exception as err:
71 message = f"Failed to execute due to a registry error: {err}"
72 logger.error(message, exc_info=True)
73 # We want to raise a pool error rather than a LoginError here
74 # because if this occurs internally, we should handle this
75 # differently than we otherwise would for LoginError.
76 raise PoolError(code=PoolErrorCodes.NEW_CONNECTION_FAILED) from err
77
78 def _keepalive(self, c):
79 """Sends a command to the server to keep the connection alive."""
80 try:
81 # Sends a ping to the registry via EPPLib
82 c.send(Hello())
83 except Exception as err:
84 message = "Failed to keep the connection alive."
85 logger.error(message, exc_info=True)
86 raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err
87
88 def _create_socket(self, client, login) -> Socket:
89 """Creates and returns a socket instance"""
90 socket = Socket(client, login)
91 return socket
92
93 def get_connections(self):
94 """Returns the connection queue"""
95 return self.conn
96
97 def kill_all_connections(self):
98 """Kills all active connections in the pool."""
99 try:
100 if len(self.conn) > 0 or len(self.greenlets) > 0:
101 logger.info("Attempting to kill connections")
102 gevent.killall(self.greenlets)
103
104 self.greenlets.clear()
105 for connection in self.conn:
106 connection.disconnect()
107 self.conn.clear()
108
109 # Clear the semaphore
110 self.lock = BoundedSemaphore(self.size)
111 logger.info("Finished killing connections")
112 else:
113 logger.info("No connections to kill.")
114 except Exception as err:
115 logger.error("Could not kill all connections.")
116 raise PoolError(code=PoolErrorCodes.KILL_ALL_FAILED) from err
117
118 def populate_all_connections(self):
119 """Generates the connection pool.
120 If any connections exist, kill them first.
121 Based off of the __init__ definition for geventconnpool.
122 """
123 if len(self.conn) > 0 or len(self.greenlets) > 0:
124 self.kill_all_connections()
125
126 # Setup the lock
127 for i in range(self.size):
128 self.lock.acquire()
129
130 # Open multiple connections
131 for i in range(self.size):
132 self.greenlets.append(gevent.spawn_later(self.spawn_frequency * i, self._addOne))
133
134 # Open a "keepalive" thread if we want to ping open connections
135 if self.keepalive:
136 self.greenlets.append(gevent.spawn(self._keepalive_periodic))
137
[end of src/epplibwrapper/utility/pool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/epplibwrapper/utility/pool.py b/src/epplibwrapper/utility/pool.py
--- a/src/epplibwrapper/utility/pool.py
+++ b/src/epplibwrapper/utility/pool.py
@@ -85,6 +85,21 @@
logger.error(message, exc_info=True)
raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err
+ def _keepalive_periodic(self):
+ """Overriding _keepalive_periodic from geventconnpool so that PoolErrors
+ are properly handled, as opposed to printing to stdout"""
+ delay = float(self.keepalive) / self.size
+ while 1:
+ try:
+ with self.get() as c:
+ self._keepalive(c)
+ except PoolError as err:
+ logger.error(err.message, exc_info=True)
+ except self.exc_classes:
+ # Nothing to do, the pool will generate a new connection later
+ pass
+ gevent.sleep(delay)
+
def _create_socket(self, client, login) -> Socket:
"""Creates and returns a socket instance"""
socket = Socket(client, login)
|
{"golden_diff": "diff --git a/src/epplibwrapper/utility/pool.py b/src/epplibwrapper/utility/pool.py\n--- a/src/epplibwrapper/utility/pool.py\n+++ b/src/epplibwrapper/utility/pool.py\n@@ -85,6 +85,21 @@\n logger.error(message, exc_info=True)\n raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err\n \n+ def _keepalive_periodic(self):\n+ \"\"\"Overriding _keepalive_periodic from geventconnpool so that PoolErrors\n+ are properly handled, as opposed to printing to stdout\"\"\"\n+ delay = float(self.keepalive) / self.size\n+ while 1:\n+ try:\n+ with self.get() as c:\n+ self._keepalive(c)\n+ except PoolError as err:\n+ logger.error(err.message, exc_info=True)\n+ except self.exc_classes:\n+ # Nothing to do, the pool will generate a new connection later\n+ pass\n+ gevent.sleep(delay)\n+\n def _create_socket(self, client, login) -> Socket:\n \"\"\"Creates and returns a socket instance\"\"\"\n socket = Socket(client, login)\n", "issue": "Clean up test noise (that includes EPP and migration scripts)\n### Issue description\r\n\r\nRight now if you run the test suite locally or see the output from github, there is a lot of added prints and logs that make it hard to troubleshoot where your particular error is coming from. This ticket is clean up test noise in general including EPP and migration scripts. \r\n\r\n\r\n\r\n### Acceptance criteria\r\n\r\n- [ ] unnecessary prints/logs on tests are removed\r\n\r\n### Additional context\r\n\r\n_No response_\r\n\r\n### Links to other issues\r\n\r\n_No response_\n", "before_files": [{"content": "import logging\nfrom typing import List\nimport gevent\nfrom geventconnpool import ConnectionPool\nfrom epplibwrapper.socket import Socket\nfrom epplibwrapper.utility.pool_error import PoolError, PoolErrorCodes\n\ntry:\n from epplib.commands import Hello\n from epplib.exceptions import TransportError\nexcept ImportError:\n pass\n\nfrom gevent.lock import BoundedSemaphore\nfrom collections import deque\n\nlogger = logging.getLogger(__name__)\n\n\nclass EPPConnectionPool(ConnectionPool):\n \"\"\"A connection pool for EPPLib.\n\n Args:\n client (Client): The client\n login (commands.Login): Login creds\n options (dict): Options for the ConnectionPool\n base class\n \"\"\"\n\n def __init__(self, client, login, options: dict):\n # For storing shared credentials\n self._client = client\n self._login = login\n\n # Keep track of each greenlet\n self.greenlets: List[gevent.Greenlet] = []\n\n # Define optional pool settings.\n # Kept in a dict so that the parent class,\n # client.py, can maintain seperation/expandability\n self.size = 1\n if \"size\" in options:\n self.size = options[\"size\"]\n\n self.exc_classes = tuple((TransportError,))\n if \"exc_classes\" in options:\n self.exc_classes = options[\"exc_classes\"]\n\n self.keepalive = None\n if \"keepalive\" in options:\n self.keepalive = options[\"keepalive\"]\n\n # Determines the period in which new\n # gevent threads are spun up.\n # This time period is in seconds. 
So for instance, .1 would be .1 seconds.\n self.spawn_frequency = 0.1\n if \"spawn_frequency\" in options:\n self.spawn_frequency = options[\"spawn_frequency\"]\n\n self.conn: deque = deque()\n self.lock = BoundedSemaphore(self.size)\n\n self.populate_all_connections()\n\n def _new_connection(self):\n socket = self._create_socket(self._client, self._login)\n try:\n connection = socket.connect()\n return connection\n except Exception as err:\n message = f\"Failed to execute due to a registry error: {err}\"\n logger.error(message, exc_info=True)\n # We want to raise a pool error rather than a LoginError here\n # because if this occurs internally, we should handle this\n # differently than we otherwise would for LoginError.\n raise PoolError(code=PoolErrorCodes.NEW_CONNECTION_FAILED) from err\n\n def _keepalive(self, c):\n \"\"\"Sends a command to the server to keep the connection alive.\"\"\"\n try:\n # Sends a ping to the registry via EPPLib\n c.send(Hello())\n except Exception as err:\n message = \"Failed to keep the connection alive.\"\n logger.error(message, exc_info=True)\n raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err\n\n def _create_socket(self, client, login) -> Socket:\n \"\"\"Creates and returns a socket instance\"\"\"\n socket = Socket(client, login)\n return socket\n\n def get_connections(self):\n \"\"\"Returns the connection queue\"\"\"\n return self.conn\n\n def kill_all_connections(self):\n \"\"\"Kills all active connections in the pool.\"\"\"\n try:\n if len(self.conn) > 0 or len(self.greenlets) > 0:\n logger.info(\"Attempting to kill connections\")\n gevent.killall(self.greenlets)\n\n self.greenlets.clear()\n for connection in self.conn:\n connection.disconnect()\n self.conn.clear()\n\n # Clear the semaphore\n self.lock = BoundedSemaphore(self.size)\n logger.info(\"Finished killing connections\")\n else:\n logger.info(\"No connections to kill.\")\n except Exception as err:\n logger.error(\"Could not kill all connections.\")\n raise PoolError(code=PoolErrorCodes.KILL_ALL_FAILED) from err\n\n def populate_all_connections(self):\n \"\"\"Generates the connection pool.\n If any connections exist, kill them first.\n Based off of the __init__ definition for geventconnpool.\n \"\"\"\n if len(self.conn) > 0 or len(self.greenlets) > 0:\n self.kill_all_connections()\n\n # Setup the lock\n for i in range(self.size):\n self.lock.acquire()\n\n # Open multiple connections\n for i in range(self.size):\n self.greenlets.append(gevent.spawn_later(self.spawn_frequency * i, self._addOne))\n\n # Open a \"keepalive\" thread if we want to ping open connections\n if self.keepalive:\n self.greenlets.append(gevent.spawn(self._keepalive_periodic))\n", "path": "src/epplibwrapper/utility/pool.py"}]}
| 1,961 | 256 |
gh_patches_debug_22309
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-5494
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update docs for `cupy.linalg.eigh` and `cupy.linalg.eigvalsh`
https://docs.cupy.dev/en/stable/reference/generated/cupy.linalg.eigvalsh.html
> Calculates eigenvalues of a symmetric matrix.
https://numpy.org/doc/stable/reference/generated/numpy.linalg.eigvalsh.html
> Compute the eigenvalues of a complex Hermitian or real symmetric matrix.
Documentation needs to be updated, as we already support Hermitian matrices in https://github.com/cupy/cupy/pull/1518.
</issue>
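As a quick illustration of the behaviour the updated documentation should describe (assuming a CuPy-capable GPU is available), both routines accept a complex Hermitian input and return real eigenvalues:

```python
import cupy as cp

# A 2x2 complex Hermitian matrix (equal to its own conjugate transpose).
a = cp.array([[2.0 + 0.0j, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0 + 0.0j]])

w = cp.linalg.eigvalsh(a)   # real eigenvalues only
w2, v = cp.linalg.eigh(a)   # eigenvalues and eigenvectors
```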
<code>
[start of cupy/linalg/_eigenvalue.py]
1 import numpy
2
3 import cupy
4 from cupy_backends.cuda.libs import cublas
5 from cupy_backends.cuda.libs import cusolver
6 from cupy.cuda import device
7 from cupy.linalg import _util
8
9
10 def _syevd(a, UPLO, with_eigen_vector):
11 if UPLO not in ('L', 'U'):
12 raise ValueError('UPLO argument must be \'L\' or \'U\'')
13
14 # reject_float16=False for backward compatibility
15 dtype, v_dtype = _util.linalg_common_type(a, reject_float16=False)
16 real_dtype = dtype.char.lower()
17 w_dtype = v_dtype.char.lower()
18
19 # Note that cuSolver assumes fortran array
20 v = a.astype(dtype, order='F', copy=True)
21
22 m, lda = a.shape
23 w = cupy.empty(m, real_dtype)
24 dev_info = cupy.empty((), numpy.int32)
25 handle = device.Device().cusolver_handle
26
27 if with_eigen_vector:
28 jobz = cusolver.CUSOLVER_EIG_MODE_VECTOR
29 else:
30 jobz = cusolver.CUSOLVER_EIG_MODE_NOVECTOR
31
32 if UPLO == 'L':
33 uplo = cublas.CUBLAS_FILL_MODE_LOWER
34 else: # UPLO == 'U'
35 uplo = cublas.CUBLAS_FILL_MODE_UPPER
36
37 if dtype == 'f':
38 buffer_size = cupy.cuda.cusolver.ssyevd_bufferSize
39 syevd = cupy.cuda.cusolver.ssyevd
40 elif dtype == 'd':
41 buffer_size = cupy.cuda.cusolver.dsyevd_bufferSize
42 syevd = cupy.cuda.cusolver.dsyevd
43 elif dtype == 'F':
44 buffer_size = cupy.cuda.cusolver.cheevd_bufferSize
45 syevd = cupy.cuda.cusolver.cheevd
46 elif dtype == 'D':
47 buffer_size = cupy.cuda.cusolver.zheevd_bufferSize
48 syevd = cupy.cuda.cusolver.zheevd
49 else:
50 raise RuntimeError('Only float and double and cuComplex and '
51 + 'cuDoubleComplex are supported')
52
53 work_size = buffer_size(
54 handle, jobz, uplo, m, v.data.ptr, lda, w.data.ptr)
55 work = cupy.empty(work_size, dtype)
56 syevd(
57 handle, jobz, uplo, m, v.data.ptr, lda,
58 w.data.ptr, work.data.ptr, work_size, dev_info.data.ptr)
59 cupy.linalg._util._check_cusolver_dev_info_if_synchronization_allowed(
60 syevd, dev_info)
61
62 return w.astype(w_dtype, copy=False), v.astype(v_dtype, copy=False)
63
64
65 # TODO(okuta): Implement eig
66
67
68 def eigh(a, UPLO='L'):
69 """Eigenvalues and eigenvectors of a symmetric matrix.
70
71 This method calculates eigenvalues and eigenvectors of a given
72 symmetric matrix.
73
74 Args:
75 a (cupy.ndarray): A symmetric 2-D square matrix ``(M, M)`` or a batch
76 of symmetric 2-D square matrices ``(..., M, M)``.
77 UPLO (str): Select from ``'L'`` or ``'U'``. It specifies which
78 part of ``a`` is used. ``'L'`` uses the lower triangular part of
79 ``a``, and ``'U'`` uses the upper triangular part of ``a``.
80 Returns:
81 tuple of :class:`~cupy.ndarray`:
82 Returns a tuple ``(w, v)``. ``w`` contains eigenvalues and
83 ``v`` contains eigenvectors. ``v[:, i]`` is an eigenvector
84 corresponding to an eigenvalue ``w[i]``. For batch input,
85 ``v[k, :, i]`` is an eigenvector corresponding to an eigenvalue
86 ``w[k, i]`` of ``a[k]``.
87
88 .. warning::
89 This function calls one or more cuSOLVER routine(s) which may yield
90 invalid results if input conditions are not met.
91 To detect these invalid results, you can set the `linalg`
92 configuration to a value that is not `ignore` in
93 :func:`cupyx.errstate` or :func:`cupyx.seterr`.
94
95 .. seealso:: :func:`numpy.linalg.eigh`
96 """
97 if a.ndim < 2:
98 raise ValueError('Array must be at least two-dimensional')
99
100 m, n = a.shape[-2:]
101 if m != n:
102 raise ValueError('Last 2 dimensions of the array must be square')
103
104 if a.ndim > 2:
105 return cupy.cusolver.syevj(a, UPLO, True)
106 else:
107 return _syevd(a, UPLO, True)
108
109
110 # TODO(okuta): Implement eigvals
111
112
113 def eigvalsh(a, UPLO='L'):
114 """Calculates eigenvalues of a symmetric matrix.
115
116 This method calculates eigenvalues a given symmetric matrix.
117 Note that :func:`cupy.linalg.eigh` calculates both eigenvalues and
118 eigenvectors.
119
120 Args:
121 a (cupy.ndarray): A symmetric 2-D square matrix ``(M, M)`` or a batch
122 of symmetric 2-D square matrices ``(..., M, M)``.
123 UPLO (str): Select from ``'L'`` or ``'U'``. It specifies which
124 part of ``a`` is used. ``'L'`` uses the lower triangular part of
125 ``a``, and ``'U'`` uses the upper triangular part of ``a``.
126 Returns:
127 cupy.ndarray:
128 Returns eigenvalues as a vector ``w``. For batch input,
129 ``w[k]`` is a vector of eigenvalues of matrix ``a[k]``.
130
131 .. warning::
132 This function calls one or more cuSOLVER routine(s) which may yield
133 invalid results if input conditions are not met.
134 To detect these invalid results, you can set the `linalg`
135 configuration to a value that is not `ignore` in
136 :func:`cupyx.errstate` or :func:`cupyx.seterr`.
137
138 .. seealso:: :func:`numpy.linalg.eigvalsh`
139 """
140 if a.ndim < 2:
141 raise ValueError('Array must be at least two-dimensional')
142
143 _util._assert_nd_squareness(a)
144
145 if a.ndim > 2:
146 return cupy.cusolver.syevj(a, UPLO, False)
147 else:
148 return _syevd(a, UPLO, False)[0]
149
[end of cupy/linalg/_eigenvalue.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupy/linalg/_eigenvalue.py b/cupy/linalg/_eigenvalue.py
--- a/cupy/linalg/_eigenvalue.py
+++ b/cupy/linalg/_eigenvalue.py
@@ -66,10 +66,13 @@
def eigh(a, UPLO='L'):
- """Eigenvalues and eigenvectors of a symmetric matrix.
+ """
+ Return the eigenvalues and eigenvectors of a complex Hermitian
+ (conjugate symmetric) or a real symmetric matrix.
- This method calculates eigenvalues and eigenvectors of a given
- symmetric matrix.
+ Returns two objects, a 1-D array containing the eigenvalues of `a`, and
+ a 2-D square array or matrix (depending on the input type) of the
+ corresponding eigenvectors (in columns).
Args:
a (cupy.ndarray): A symmetric 2-D square matrix ``(M, M)`` or a batch
@@ -111,11 +114,10 @@
def eigvalsh(a, UPLO='L'):
- """Calculates eigenvalues of a symmetric matrix.
+ """
+ Compute the eigenvalues of a complex Hermitian or real symmetric matrix.
- This method calculates eigenvalues a given symmetric matrix.
- Note that :func:`cupy.linalg.eigh` calculates both eigenvalues and
- eigenvectors.
+ Main difference from eigh: the eigenvectors are not computed.
Args:
a (cupy.ndarray): A symmetric 2-D square matrix ``(M, M)`` or a batch
|
{"golden_diff": "diff --git a/cupy/linalg/_eigenvalue.py b/cupy/linalg/_eigenvalue.py\n--- a/cupy/linalg/_eigenvalue.py\n+++ b/cupy/linalg/_eigenvalue.py\n@@ -66,10 +66,13 @@\n \n \n def eigh(a, UPLO='L'):\n- \"\"\"Eigenvalues and eigenvectors of a symmetric matrix.\n+ \"\"\"\n+ Return the eigenvalues and eigenvectors of a complex Hermitian\n+ (conjugate symmetric) or a real symmetric matrix.\n \n- This method calculates eigenvalues and eigenvectors of a given\n- symmetric matrix.\n+ Returns two objects, a 1-D array containing the eigenvalues of `a`, and\n+ a 2-D square array or matrix (depending on the input type) of the\n+ corresponding eigenvectors (in columns).\n \n Args:\n a (cupy.ndarray): A symmetric 2-D square matrix ``(M, M)`` or a batch\n@@ -111,11 +114,10 @@\n \n \n def eigvalsh(a, UPLO='L'):\n- \"\"\"Calculates eigenvalues of a symmetric matrix.\n+ \"\"\"\n+ Compute the eigenvalues of a complex Hermitian or real symmetric matrix.\n \n- This method calculates eigenvalues a given symmetric matrix.\n- Note that :func:`cupy.linalg.eigh` calculates both eigenvalues and\n- eigenvectors.\n+ Main difference from eigh: the eigenvectors are not computed.\n \n Args:\n a (cupy.ndarray): A symmetric 2-D square matrix ``(M, M)`` or a batch\n", "issue": "Update docs for `cupy.linalg.eigh` and `cupy.linalg.eigvalsh`\nhttps://docs.cupy.dev/en/stable/reference/generated/cupy.linalg.eigvalsh.html\r\n> Calculates eigenvalues of a symmetric matrix.\r\n\r\nhttps://numpy.org/doc/stable/reference/generated/numpy.linalg.eigvalsh.html\r\n> Compute the eigenvalues of a complex Hermitian or real symmetric matrix.\r\n\r\nDocumentation needs to be updated as we already support Hermitian matrix in https://github.com/cupy/cupy/pull/1518.\n", "before_files": [{"content": "import numpy\n\nimport cupy\nfrom cupy_backends.cuda.libs import cublas\nfrom cupy_backends.cuda.libs import cusolver\nfrom cupy.cuda import device\nfrom cupy.linalg import _util\n\n\ndef _syevd(a, UPLO, with_eigen_vector):\n if UPLO not in ('L', 'U'):\n raise ValueError('UPLO argument must be \\'L\\' or \\'U\\'')\n\n # reject_float16=False for backward compatibility\n dtype, v_dtype = _util.linalg_common_type(a, reject_float16=False)\n real_dtype = dtype.char.lower()\n w_dtype = v_dtype.char.lower()\n\n # Note that cuSolver assumes fortran array\n v = a.astype(dtype, order='F', copy=True)\n\n m, lda = a.shape\n w = cupy.empty(m, real_dtype)\n dev_info = cupy.empty((), numpy.int32)\n handle = device.Device().cusolver_handle\n\n if with_eigen_vector:\n jobz = cusolver.CUSOLVER_EIG_MODE_VECTOR\n else:\n jobz = cusolver.CUSOLVER_EIG_MODE_NOVECTOR\n\n if UPLO == 'L':\n uplo = cublas.CUBLAS_FILL_MODE_LOWER\n else: # UPLO == 'U'\n uplo = cublas.CUBLAS_FILL_MODE_UPPER\n\n if dtype == 'f':\n buffer_size = cupy.cuda.cusolver.ssyevd_bufferSize\n syevd = cupy.cuda.cusolver.ssyevd\n elif dtype == 'd':\n buffer_size = cupy.cuda.cusolver.dsyevd_bufferSize\n syevd = cupy.cuda.cusolver.dsyevd\n elif dtype == 'F':\n buffer_size = cupy.cuda.cusolver.cheevd_bufferSize\n syevd = cupy.cuda.cusolver.cheevd\n elif dtype == 'D':\n buffer_size = cupy.cuda.cusolver.zheevd_bufferSize\n syevd = cupy.cuda.cusolver.zheevd\n else:\n raise RuntimeError('Only float and double and cuComplex and '\n + 'cuDoubleComplex are supported')\n\n work_size = buffer_size(\n handle, jobz, uplo, m, v.data.ptr, lda, w.data.ptr)\n work = cupy.empty(work_size, dtype)\n syevd(\n handle, jobz, uplo, m, v.data.ptr, lda,\n w.data.ptr, work.data.ptr, work_size, dev_info.data.ptr)\n 
cupy.linalg._util._check_cusolver_dev_info_if_synchronization_allowed(\n syevd, dev_info)\n\n return w.astype(w_dtype, copy=False), v.astype(v_dtype, copy=False)\n\n\n# TODO(okuta): Implement eig\n\n\ndef eigh(a, UPLO='L'):\n \"\"\"Eigenvalues and eigenvectors of a symmetric matrix.\n\n This method calculates eigenvalues and eigenvectors of a given\n symmetric matrix.\n\n Args:\n a (cupy.ndarray): A symmetric 2-D square matrix ``(M, M)`` or a batch\n of symmetric 2-D square matrices ``(..., M, M)``.\n UPLO (str): Select from ``'L'`` or ``'U'``. It specifies which\n part of ``a`` is used. ``'L'`` uses the lower triangular part of\n ``a``, and ``'U'`` uses the upper triangular part of ``a``.\n Returns:\n tuple of :class:`~cupy.ndarray`:\n Returns a tuple ``(w, v)``. ``w`` contains eigenvalues and\n ``v`` contains eigenvectors. ``v[:, i]`` is an eigenvector\n corresponding to an eigenvalue ``w[i]``. For batch input,\n ``v[k, :, i]`` is an eigenvector corresponding to an eigenvalue\n ``w[k, i]`` of ``a[k]``.\n\n .. warning::\n This function calls one or more cuSOLVER routine(s) which may yield\n invalid results if input conditions are not met.\n To detect these invalid results, you can set the `linalg`\n configuration to a value that is not `ignore` in\n :func:`cupyx.errstate` or :func:`cupyx.seterr`.\n\n .. seealso:: :func:`numpy.linalg.eigh`\n \"\"\"\n if a.ndim < 2:\n raise ValueError('Array must be at least two-dimensional')\n\n m, n = a.shape[-2:]\n if m != n:\n raise ValueError('Last 2 dimensions of the array must be square')\n\n if a.ndim > 2:\n return cupy.cusolver.syevj(a, UPLO, True)\n else:\n return _syevd(a, UPLO, True)\n\n\n# TODO(okuta): Implement eigvals\n\n\ndef eigvalsh(a, UPLO='L'):\n \"\"\"Calculates eigenvalues of a symmetric matrix.\n\n This method calculates eigenvalues a given symmetric matrix.\n Note that :func:`cupy.linalg.eigh` calculates both eigenvalues and\n eigenvectors.\n\n Args:\n a (cupy.ndarray): A symmetric 2-D square matrix ``(M, M)`` or a batch\n of symmetric 2-D square matrices ``(..., M, M)``.\n UPLO (str): Select from ``'L'`` or ``'U'``. It specifies which\n part of ``a`` is used. ``'L'`` uses the lower triangular part of\n ``a``, and ``'U'`` uses the upper triangular part of ``a``.\n Returns:\n cupy.ndarray:\n Returns eigenvalues as a vector ``w``. For batch input,\n ``w[k]`` is a vector of eigenvalues of matrix ``a[k]``.\n\n .. warning::\n This function calls one or more cuSOLVER routine(s) which may yield\n invalid results if input conditions are not met.\n To detect these invalid results, you can set the `linalg`\n configuration to a value that is not `ignore` in\n :func:`cupyx.errstate` or :func:`cupyx.seterr`.\n\n .. seealso:: :func:`numpy.linalg.eigvalsh`\n \"\"\"\n if a.ndim < 2:\n raise ValueError('Array must be at least two-dimensional')\n\n _util._assert_nd_squareness(a)\n\n if a.ndim > 2:\n return cupy.cusolver.syevj(a, UPLO, False)\n else:\n return _syevd(a, UPLO, False)[0]\n", "path": "cupy/linalg/_eigenvalue.py"}]}
| 2,455 | 363 |
gh_patches_debug_10114
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-982
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mockserver server.py does not work in Python 3
```mockserver_1 | Traceback (most recent call last):
mockserver_1 | File "../server.py", line 5, in <module>
mockserver_1 | from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer
mockserver_1 | ModuleNotFoundError: No module named 'SimpleHTTPServer'
```
Looks like some modules have been reorganized in Python 3. Hopefully this is just a matter of updating the imports.
</issue>
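For reference, the Python 2 modules named in the traceback were merged into `http.server` in Python 3; a minimal sketch of the renamed imports is below (the port number is arbitrary):

```python
# Python 2: from SimpleHTTPServer import SimpleHTTPRequestHandler
#           from BaseHTTPServer import HTTPServer
# Python 3: both now live in http.server.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == '__main__':
    HTTPServer(('', 8000), SimpleHTTPRequestHandler).serve_forever()
```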
<code>
[start of mockserver/server.py]
1 #! /usr/bin/env python
2
3 # Usage: python __file__.py <port>
4
5 from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer
6
7 class CORSRequestHandler(SimpleHTTPRequestHandler):
8 def do_OPTIONS(self):
9 self.send_response(200, 'OK')
10 self.end_headers()
11
12 def end_headers(self):
13 self.send_header('Access-Control-Allow-Origin', '*')
14 self.send_header('Access-Control-Allow-Headers', 'x-request-timestamp, x-signature, electricitymap-token')
15 SimpleHTTPRequestHandler.end_headers(self)
16
17 if __name__ == '__main__':
18 BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)
19
[end of mockserver/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mockserver/server.py b/mockserver/server.py
--- a/mockserver/server.py
+++ b/mockserver/server.py
@@ -2,7 +2,7 @@
# Usage: python __file__.py <port>
-from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer
+from http.server import SimpleHTTPRequestHandler, HTTPServer, test
class CORSRequestHandler(SimpleHTTPRequestHandler):
def do_OPTIONS(self):
@@ -15,4 +15,4 @@
SimpleHTTPRequestHandler.end_headers(self)
if __name__ == '__main__':
- BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)
+ test(CORSRequestHandler, HTTPServer)
|
{"golden_diff": "diff --git a/mockserver/server.py b/mockserver/server.py\n--- a/mockserver/server.py\n+++ b/mockserver/server.py\n@@ -2,7 +2,7 @@\n \n # Usage: python __file__.py <port>\n \n-from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer\n+from http.server import SimpleHTTPRequestHandler, HTTPServer, test\n \n class CORSRequestHandler(SimpleHTTPRequestHandler):\n def do_OPTIONS(self):\n@@ -15,4 +15,4 @@\n SimpleHTTPRequestHandler.end_headers(self)\n \n if __name__ == '__main__':\n- BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)\n+ test(CORSRequestHandler, HTTPServer)\n", "issue": "Mockserver server.py does not work in Python 3\n```mockserver_1 | Traceback (most recent call last):\r\nmockserver_1 | File \"../server.py\", line 5, in <module>\r\nmockserver_1 | from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer\r\nmockserver_1 | ModuleNotFoundError: No module named 'SimpleHTTPServer'\r\n```\r\nLooks like some modules have been reorganized in Python 3. Hopefully this is just a matter of updating the imports.\n", "before_files": [{"content": "#! /usr/bin/env python\n\n# Usage: python __file__.py <port>\n\nfrom SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer\n\nclass CORSRequestHandler(SimpleHTTPRequestHandler):\n def do_OPTIONS(self):\n self.send_response(200, 'OK')\n self.end_headers()\n\n def end_headers(self):\n self.send_header('Access-Control-Allow-Origin', '*')\n self.send_header('Access-Control-Allow-Headers', 'x-request-timestamp, x-signature, electricitymap-token')\n SimpleHTTPRequestHandler.end_headers(self)\n\nif __name__ == '__main__':\n BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)\n", "path": "mockserver/server.py"}]}
| 816 | 150 |
gh_patches_debug_148
|
rasdani/github-patches
|
git_diff
|
AUTOMATIC1111__stable-diffusion-webui-7583
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: vae does not appear when clicking refresh button in models/VAE
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Pressing the button to update the VAE list does not update the VAE list.
### Steps to reproduce the problem
1. Insert a new VAE file into models/VAE
2. Press the Refresh VAE list button
### What should have happened?
The new VAE file should appear in the list.
### Commit where the problem happens
Latest
### What platforms do you use to access the UI ?
_No response_
### What browsers do you use to access the UI ?
_No response_
### Command Line Arguments
```Shell
No
```
### List of extensions
No
### Console logs
```Shell
Nothing
```
### Additional information
_No response_
</issue>
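The distinction at the heart of this report — returning a function object instead of calling it — can be illustrated with a small stand-alone example (the names here are made up):

```python
def refresh():
    return ["vae-a.pt", "vae-b.pt"]  # pretend this re-scans models/VAE

broken = refresh     # binds the function object; nothing is re-scanned
working = refresh()  # actually invokes it and returns the fresh list
```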
<code>
[start of modules/shared_items.py]
1
2
3 def realesrgan_models_names():
4 import modules.realesrgan_model
5 return [x.name for x in modules.realesrgan_model.get_realesrgan_models(None)]
6
7
8 def postprocessing_scripts():
9 import modules.scripts
10
11 return modules.scripts.scripts_postproc.scripts
12
13
14 def sd_vae_items():
15 import modules.sd_vae
16
17 return ["Automatic", "None"] + list(modules.sd_vae.vae_dict)
18
19
20 def refresh_vae_list():
21 import modules.sd_vae
22
23 return modules.sd_vae.refresh_vae_list
24
[end of modules/shared_items.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modules/shared_items.py b/modules/shared_items.py
--- a/modules/shared_items.py
+++ b/modules/shared_items.py
@@ -20,4 +20,4 @@
def refresh_vae_list():
import modules.sd_vae
- return modules.sd_vae.refresh_vae_list
+ return modules.sd_vae.refresh_vae_list()
|
{"golden_diff": "diff --git a/modules/shared_items.py b/modules/shared_items.py\n--- a/modules/shared_items.py\n+++ b/modules/shared_items.py\n@@ -20,4 +20,4 @@\n def refresh_vae_list():\r\n import modules.sd_vae\r\n \r\n- return modules.sd_vae.refresh_vae_list\r\n+ return modules.sd_vae.refresh_vae_list()\n", "issue": "[Bug]: vae does not appear when clicking refresh button in models/VAE\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nPressing the button to update the VAE list does not update the VAE list.\n\n### Steps to reproduce the problem\n\n1. Insert new VAE file to models/VAE\r\n2. Press buttion Refresh VAE list \n\n### What should have happened?\n\nApprear new VAE file in list\n\n### Commit where the problem happens\n\nLastest\n\n### What platforms do you use to access the UI ?\n\n_No response_\n\n### What browsers do you use to access the UI ?\n\n_No response_\n\n### Command Line Arguments\n\n```Shell\nNo\n```\n\n\n### List of extensions\n\nNo\n\n### Console logs\n\n```Shell\nNothing\n```\n\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "\r\n\r\ndef realesrgan_models_names():\r\n import modules.realesrgan_model\r\n return [x.name for x in modules.realesrgan_model.get_realesrgan_models(None)]\r\n\r\n\r\ndef postprocessing_scripts():\r\n import modules.scripts\r\n\r\n return modules.scripts.scripts_postproc.scripts\r\n\r\n\r\ndef sd_vae_items():\r\n import modules.sd_vae\r\n\r\n return [\"Automatic\", \"None\"] + list(modules.sd_vae.vae_dict)\r\n\r\n\r\ndef refresh_vae_list():\r\n import modules.sd_vae\r\n\r\n return modules.sd_vae.refresh_vae_list\r\n", "path": "modules/shared_items.py"}]}
| 885 | 78 |
gh_patches_debug_42142
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-87
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add way to temporarily/permanently disable hooks.
[overcommit](https://github.com/causes/overcommit) uses environment variables to do temporary skipping...
For instance:
`SKIP=foo git commit` will skip the `foo` hook
Whereas I've used a more-permanent switching with `git config hooks.foo false` in the past.
Considering both approaches, I think overcommit does this quite elegantly while focusing on only _temporarily_ disabling hooks.
</issue>
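A rough sketch of how a comma-separated `SKIP` environment variable could be parsed into a set of hook ids is shown below; the helper name is illustrative and not an existing pre-commit API.

```python
import os


def get_skips(environ=os.environ):
    # SKIP="flake8, trailing-whitespace" -> {"flake8", "trailing-whitespace"}
    skips = environ.get('SKIP', '')
    return {part.strip() for part in skips.split(',') if part.strip()}
```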
<code>
[start of pre_commit/commands.py]
1 from __future__ import print_function
2
3 import logging
4 import os
5 import pkg_resources
6 import shutil
7 import stat
8 import subprocess
9 import sys
10 from asottile.ordereddict import OrderedDict
11 from asottile.yaml import ordered_dump
12 from asottile.yaml import ordered_load
13 from plumbum import local
14
15 import pre_commit.constants as C
16 from pre_commit import git
17 from pre_commit import color
18 from pre_commit.clientlib.validate_config import CONFIG_JSON_SCHEMA
19 from pre_commit.clientlib.validate_config import load_config
20 from pre_commit.jsonschema_extensions import remove_defaults
21 from pre_commit.logging_handler import LoggingHandler
22 from pre_commit.repository import Repository
23 from pre_commit.staged_files_only import staged_files_only
24 from pre_commit.util import noop_context
25
26
27 logger = logging.getLogger('pre_commit')
28
29 COLS = int(subprocess.Popen(['tput', 'cols'], stdout=subprocess.PIPE).communicate()[0])
30
31 PASS_FAIL_LENGTH = 6
32
33
34 def install(runner):
35 """Install the pre-commit hooks."""
36 pre_commit_file = pkg_resources.resource_filename('pre_commit', 'resources/pre-commit.sh')
37 with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:
38 pre_commit_file_obj.write(open(pre_commit_file).read())
39
40 original_mode = os.stat(runner.pre_commit_path).st_mode
41 os.chmod(
42 runner.pre_commit_path,
43 original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,
44 )
45
46 print('pre-commit installed at {0}'.format(runner.pre_commit_path))
47
48 return 0
49
50
51 def uninstall(runner):
52 """Uninstall the pre-commit hooks."""
53 if os.path.exists(runner.pre_commit_path):
54 os.remove(runner.pre_commit_path)
55 print('pre-commit uninstalled')
56 return 0
57
58
59 class RepositoryCannotBeUpdatedError(RuntimeError):
60 pass
61
62
63 def _update_repository(repo_config):
64 """Updates a repository to the tip of `master`. If the repository cannot
65 be updated because a hook that is configured does not exist in `master`,
66 this raises a RepositoryCannotBeUpdatedError
67
68 Args:
69 repo_config - A config for a repository
70 """
71 repo = Repository(repo_config)
72
73 with repo.in_checkout():
74 local['git']['fetch']()
75 head_sha = local['git']['rev-parse', 'origin/master']().strip()
76
77 # Don't bother trying to update if our sha is the same
78 if head_sha == repo_config['sha']:
79 return repo_config
80
81 # Construct a new config with the head sha
82 new_config = OrderedDict(repo_config)
83 new_config['sha'] = head_sha
84 new_repo = Repository(new_config)
85
86 # See if any of our hooks were deleted with the new commits
87 hooks = set(repo.hooks.keys())
88 hooks_missing = hooks - (hooks & set(new_repo.manifest.keys()))
89 if hooks_missing:
90 raise RepositoryCannotBeUpdatedError(
91 'Cannot update because the tip of master is missing these hooks:\n'
92 '{0}'.format(', '.join(sorted(hooks_missing)))
93 )
94
95 return remove_defaults([new_config], CONFIG_JSON_SCHEMA)[0]
96
97
98 def autoupdate(runner):
99 """Auto-update the pre-commit config to the latest versions of repos."""
100 retv = 0
101 output_configs = []
102 changed = False
103
104 input_configs = load_config(
105 runner.config_file_path,
106 load_strategy=ordered_load,
107 )
108
109 for repo_config in input_configs:
110 print('Updating {0}...'.format(repo_config['repo']), end='')
111 try:
112 new_repo_config = _update_repository(repo_config)
113 except RepositoryCannotBeUpdatedError as error:
114 print(error.args[0])
115 output_configs.append(repo_config)
116 retv = 1
117 continue
118
119 if new_repo_config['sha'] != repo_config['sha']:
120 changed = True
121 print(
122 'updating {0} -> {1}.'.format(
123 repo_config['sha'], new_repo_config['sha'],
124 )
125 )
126 output_configs.append(new_repo_config)
127 else:
128 print('already up to date.')
129 output_configs.append(repo_config)
130
131 if changed:
132 with open(runner.config_file_path, 'w') as config_file:
133 config_file.write(
134 ordered_dump(output_configs, **C.YAML_DUMP_KWARGS)
135 )
136
137 return retv
138
139
140 def clean(runner):
141 if os.path.exists(runner.hooks_workspace_path):
142 shutil.rmtree(runner.hooks_workspace_path)
143 print('Cleaned {0}.'.format(runner.hooks_workspace_path))
144 return 0
145
146
147 def _run_single_hook(runner, repository, hook_id, args, write):
148 if args.all_files:
149 get_filenames = git.get_all_files_matching
150 elif git.is_in_merge_conflict():
151 get_filenames = git.get_conflicted_files_matching
152 else:
153 get_filenames = git.get_staged_files_matching
154
155 hook = repository.hooks[hook_id]
156
157 filenames = get_filenames(hook['files'], hook['exclude'])
158 if not filenames:
159 no_files_msg = '(no files to check) '
160 skipped_msg = 'Skipped'
161 write(
162 '{0}{1}{2}{3}\n'.format(
163 hook['name'],
164 '.' * (
165 COLS -
166 len(hook['name']) -
167 len(no_files_msg) -
168 len(skipped_msg) -
169 6
170 ),
171 no_files_msg,
172 color.format_color(skipped_msg, color.TURQUOISE, args.color),
173 )
174 )
175 return 0
176
177 # Print the hook and the dots first in case the hook takes hella long to
178 # run.
179 write(
180 '{0}{1}'.format(
181 hook['name'],
182 '.' * (COLS - len(hook['name']) - PASS_FAIL_LENGTH - 6),
183 ),
184 )
185 sys.stdout.flush()
186
187 retcode, stdout, stderr = repository.run_hook(
188 runner.cmd_runner,
189 hook_id,
190 filenames,
191 )
192
193 if retcode != repository.hooks[hook_id]['expected_return_value']:
194 retcode = 1
195 print_color = color.RED
196 pass_fail = 'Failed'
197 else:
198 retcode = 0
199 print_color = color.GREEN
200 pass_fail = 'Passed'
201
202 write(color.format_color(pass_fail, print_color, args.color) + '\n')
203
204 if (stdout or stderr) and (retcode or args.verbose):
205 write('\n')
206 for output in (stdout, stderr):
207 if output.strip():
208 write(output.strip() + '\n')
209 write('\n')
210
211 return retcode
212
213
214 def _run_hooks(runner, args, write):
215 """Actually run the hooks."""
216 retval = 0
217
218 for repo in runner.repositories:
219 for hook_id in repo.hooks:
220 retval |= _run_single_hook(runner, repo, hook_id, args, write=write)
221
222 return retval
223
224
225 def _run_hook(runner, hook_id, args, write):
226 for repo in runner.repositories:
227 if hook_id in repo.hooks:
228 return _run_single_hook(runner, repo, hook_id, args, write=write)
229 else:
230 write('No hook with id `{0}`\n'.format(hook_id))
231 return 1
232
233
234 def _has_unmerged_paths(runner):
235 _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])
236 return bool(stdout.strip())
237
238
239 def run(runner, args, write=sys.stdout.write):
240 # Set up our logging handler
241 logger.addHandler(LoggingHandler(args.color, write=write))
242 logger.setLevel(logging.INFO)
243
244 # Check if we have unresolved merge conflict files and fail fast.
245 if _has_unmerged_paths(runner):
246 logger.error('Unmerged files. Resolve before committing.')
247 return 1
248
249 if args.no_stash or args.all_files:
250 ctx = noop_context()
251 else:
252 ctx = staged_files_only(runner.cmd_runner)
253
254 with ctx:
255 if args.hook:
256 return _run_hook(runner, args.hook, args, write=write)
257 else:
258 return _run_hooks(runner, args, write=write)
259
[end of pre_commit/commands.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/commands.py b/pre_commit/commands.py
--- a/pre_commit/commands.py
+++ b/pre_commit/commands.py
@@ -144,7 +144,42 @@
return 0
-def _run_single_hook(runner, repository, hook_id, args, write):
+def _get_skips(environ):
+ skips = environ.get('SKIP', '')
+ return set(skip.strip() for skip in skips.split(',') if skip.strip())
+
+
+def _print_no_files_skipped(hook, write, args):
+ no_files_msg = '(no files to check) '
+ skipped_msg = 'Skipped'
+ write(
+ '{0}{1}{2}{3}\n'.format(
+ hook['name'],
+ '.' * (
+ COLS -
+ len(hook['name']) -
+ len(no_files_msg) -
+ len(skipped_msg) -
+ 6
+ ),
+ no_files_msg,
+ color.format_color(skipped_msg, color.TURQUOISE, args.color),
+ )
+ )
+
+
+def _print_user_skipped(hook, write, args):
+ skipped_msg = 'Skipped'
+ write(
+ '{0}{1}{2}\n'.format(
+ hook['name'],
+ '.' * (COLS - len(hook['name']) - len(skipped_msg) - 6),
+ color.format_color(skipped_msg, color.YELLOW, args.color),
+ ),
+ )
+
+
+def _run_single_hook(runner, repository, hook_id, args, write, skips=set()):
if args.all_files:
get_filenames = git.get_all_files_matching
elif git.is_in_merge_conflict():
@@ -155,23 +190,11 @@
hook = repository.hooks[hook_id]
filenames = get_filenames(hook['files'], hook['exclude'])
- if not filenames:
- no_files_msg = '(no files to check) '
- skipped_msg = 'Skipped'
- write(
- '{0}{1}{2}{3}\n'.format(
- hook['name'],
- '.' * (
- COLS -
- len(hook['name']) -
- len(no_files_msg) -
- len(skipped_msg) -
- 6
- ),
- no_files_msg,
- color.format_color(skipped_msg, color.TURQUOISE, args.color),
- )
- )
+ if hook_id in skips:
+ _print_user_skipped(hook, write, args)
+ return 0
+ elif not filenames:
+ _print_no_files_skipped(hook, write, args)
return 0
# Print the hook and the dots first in case the hook takes hella long to
@@ -211,18 +234,23 @@
return retcode
-def _run_hooks(runner, args, write):
+def _run_hooks(runner, args, write, environ):
"""Actually run the hooks."""
retval = 0
+ skips = _get_skips(environ)
+
for repo in runner.repositories:
for hook_id in repo.hooks:
- retval |= _run_single_hook(runner, repo, hook_id, args, write=write)
+ retval |= _run_single_hook(
+ runner, repo, hook_id, args, write, skips=skips,
+ )
return retval
-def _run_hook(runner, hook_id, args, write):
+def _run_hook(runner, args, write):
+ hook_id = args.hook
for repo in runner.repositories:
if hook_id in repo.hooks:
return _run_single_hook(runner, repo, hook_id, args, write=write)
@@ -236,7 +264,7 @@
return bool(stdout.strip())
-def run(runner, args, write=sys.stdout.write):
+def run(runner, args, write=sys.stdout.write, environ=os.environ):
# Set up our logging handler
logger.addHandler(LoggingHandler(args.color, write=write))
logger.setLevel(logging.INFO)
@@ -253,6 +281,6 @@
with ctx:
if args.hook:
- return _run_hook(runner, args.hook, args, write=write)
+ return _run_hook(runner, args, write=write)
else:
- return _run_hooks(runner, args, write=write)
+ return _run_hooks(runner, args, write=write, environ=environ)
|
{"golden_diff": "diff --git a/pre_commit/commands.py b/pre_commit/commands.py\n--- a/pre_commit/commands.py\n+++ b/pre_commit/commands.py\n@@ -144,7 +144,42 @@\n return 0\n \n \n-def _run_single_hook(runner, repository, hook_id, args, write):\n+def _get_skips(environ):\n+ skips = environ.get('SKIP', '')\n+ return set(skip.strip() for skip in skips.split(',') if skip.strip())\n+\n+\n+def _print_no_files_skipped(hook, write, args):\n+ no_files_msg = '(no files to check) '\n+ skipped_msg = 'Skipped'\n+ write(\n+ '{0}{1}{2}{3}\\n'.format(\n+ hook['name'],\n+ '.' * (\n+ COLS -\n+ len(hook['name']) -\n+ len(no_files_msg) -\n+ len(skipped_msg) -\n+ 6\n+ ),\n+ no_files_msg,\n+ color.format_color(skipped_msg, color.TURQUOISE, args.color),\n+ )\n+ )\n+\n+\n+def _print_user_skipped(hook, write, args):\n+ skipped_msg = 'Skipped'\n+ write(\n+ '{0}{1}{2}\\n'.format(\n+ hook['name'],\n+ '.' * (COLS - len(hook['name']) - len(skipped_msg) - 6),\n+ color.format_color(skipped_msg, color.YELLOW, args.color),\n+ ),\n+ )\n+\n+\n+def _run_single_hook(runner, repository, hook_id, args, write, skips=set()):\n if args.all_files:\n get_filenames = git.get_all_files_matching\n elif git.is_in_merge_conflict():\n@@ -155,23 +190,11 @@\n hook = repository.hooks[hook_id]\n \n filenames = get_filenames(hook['files'], hook['exclude'])\n- if not filenames:\n- no_files_msg = '(no files to check) '\n- skipped_msg = 'Skipped'\n- write(\n- '{0}{1}{2}{3}\\n'.format(\n- hook['name'],\n- '.' * (\n- COLS -\n- len(hook['name']) -\n- len(no_files_msg) -\n- len(skipped_msg) -\n- 6\n- ),\n- no_files_msg,\n- color.format_color(skipped_msg, color.TURQUOISE, args.color),\n- )\n- )\n+ if hook_id in skips:\n+ _print_user_skipped(hook, write, args)\n+ return 0\n+ elif not filenames:\n+ _print_no_files_skipped(hook, write, args)\n return 0\n \n # Print the hook and the dots first in case the hook takes hella long to\n@@ -211,18 +234,23 @@\n return retcode\n \n \n-def _run_hooks(runner, args, write):\n+def _run_hooks(runner, args, write, environ):\n \"\"\"Actually run the hooks.\"\"\"\n retval = 0\n \n+ skips = _get_skips(environ)\n+\n for repo in runner.repositories:\n for hook_id in repo.hooks:\n- retval |= _run_single_hook(runner, repo, hook_id, args, write=write)\n+ retval |= _run_single_hook(\n+ runner, repo, hook_id, args, write, skips=skips,\n+ )\n \n return retval\n \n \n-def _run_hook(runner, hook_id, args, write):\n+def _run_hook(runner, args, write):\n+ hook_id = args.hook\n for repo in runner.repositories:\n if hook_id in repo.hooks:\n return _run_single_hook(runner, repo, hook_id, args, write=write)\n@@ -236,7 +264,7 @@\n return bool(stdout.strip())\n \n \n-def run(runner, args, write=sys.stdout.write):\n+def run(runner, args, write=sys.stdout.write, environ=os.environ):\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color, write=write))\n logger.setLevel(logging.INFO)\n@@ -253,6 +281,6 @@\n \n with ctx:\n if args.hook:\n- return _run_hook(runner, args.hook, args, write=write)\n+ return _run_hook(runner, args, write=write)\n else:\n- return _run_hooks(runner, args, write=write)\n+ return _run_hooks(runner, args, write=write, environ=environ)\n", "issue": "Add way to temporarily/permanently disable hooks.\n[overcommit](https://github.com/causes/overcommit) uses environment variables to do temporary skipping...\n\nFor instance:\n\n`SKIP=foo git commit` will skip the `foo` hook\n\nWhereas I've used a more-permanent switching with `git config hooks.foo false` in the past.\n\nConsidering both approaches, I think 
overcommit does this quite elegantly while focusing on only _temporarily_ disabling hooks.\n\n", "before_files": [{"content": "from __future__ import print_function\n\nimport logging\nimport os\nimport pkg_resources\nimport shutil\nimport stat\nimport subprocess\nimport sys\nfrom asottile.ordereddict import OrderedDict\nfrom asottile.yaml import ordered_dump\nfrom asottile.yaml import ordered_load\nfrom plumbum import local\n\nimport pre_commit.constants as C\nfrom pre_commit import git\nfrom pre_commit import color\nfrom pre_commit.clientlib.validate_config import CONFIG_JSON_SCHEMA\nfrom pre_commit.clientlib.validate_config import load_config\nfrom pre_commit.jsonschema_extensions import remove_defaults\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.repository import Repository\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\nCOLS = int(subprocess.Popen(['tput', 'cols'], stdout=subprocess.PIPE).communicate()[0])\n\nPASS_FAIL_LENGTH = 6\n\n\ndef install(runner):\n \"\"\"Install the pre-commit hooks.\"\"\"\n pre_commit_file = pkg_resources.resource_filename('pre_commit', 'resources/pre-commit.sh')\n with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:\n pre_commit_file_obj.write(open(pre_commit_file).read())\n\n original_mode = os.stat(runner.pre_commit_path).st_mode\n os.chmod(\n runner.pre_commit_path,\n original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,\n )\n\n print('pre-commit installed at {0}'.format(runner.pre_commit_path))\n\n return 0\n\n\ndef uninstall(runner):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n if os.path.exists(runner.pre_commit_path):\n os.remove(runner.pre_commit_path)\n print('pre-commit uninstalled')\n return 0\n\n\nclass RepositoryCannotBeUpdatedError(RuntimeError):\n pass\n\n\ndef _update_repository(repo_config):\n \"\"\"Updates a repository to the tip of `master`. 
If the repository cannot\n be updated because a hook that is configured does not exist in `master`,\n this raises a RepositoryCannotBeUpdatedError\n\n Args:\n repo_config - A config for a repository\n \"\"\"\n repo = Repository(repo_config)\n\n with repo.in_checkout():\n local['git']['fetch']()\n head_sha = local['git']['rev-parse', 'origin/master']().strip()\n\n # Don't bother trying to update if our sha is the same\n if head_sha == repo_config['sha']:\n return repo_config\n\n # Construct a new config with the head sha\n new_config = OrderedDict(repo_config)\n new_config['sha'] = head_sha\n new_repo = Repository(new_config)\n\n # See if any of our hooks were deleted with the new commits\n hooks = set(repo.hooks.keys())\n hooks_missing = hooks - (hooks & set(new_repo.manifest.keys()))\n if hooks_missing:\n raise RepositoryCannotBeUpdatedError(\n 'Cannot update because the tip of master is missing these hooks:\\n'\n '{0}'.format(', '.join(sorted(hooks_missing)))\n )\n\n return remove_defaults([new_config], CONFIG_JSON_SCHEMA)[0]\n\n\ndef autoupdate(runner):\n \"\"\"Auto-update the pre-commit config to the latest versions of repos.\"\"\"\n retv = 0\n output_configs = []\n changed = False\n\n input_configs = load_config(\n runner.config_file_path,\n load_strategy=ordered_load,\n )\n\n for repo_config in input_configs:\n print('Updating {0}...'.format(repo_config['repo']), end='')\n try:\n new_repo_config = _update_repository(repo_config)\n except RepositoryCannotBeUpdatedError as error:\n print(error.args[0])\n output_configs.append(repo_config)\n retv = 1\n continue\n\n if new_repo_config['sha'] != repo_config['sha']:\n changed = True\n print(\n 'updating {0} -> {1}.'.format(\n repo_config['sha'], new_repo_config['sha'],\n )\n )\n output_configs.append(new_repo_config)\n else:\n print('already up to date.')\n output_configs.append(repo_config)\n\n if changed:\n with open(runner.config_file_path, 'w') as config_file:\n config_file.write(\n ordered_dump(output_configs, **C.YAML_DUMP_KWARGS)\n )\n\n return retv\n\n\ndef clean(runner):\n if os.path.exists(runner.hooks_workspace_path):\n shutil.rmtree(runner.hooks_workspace_path)\n print('Cleaned {0}.'.format(runner.hooks_workspace_path))\n return 0\n\n\ndef _run_single_hook(runner, repository, hook_id, args, write):\n if args.all_files:\n get_filenames = git.get_all_files_matching\n elif git.is_in_merge_conflict():\n get_filenames = git.get_conflicted_files_matching\n else:\n get_filenames = git.get_staged_files_matching\n\n hook = repository.hooks[hook_id]\n\n filenames = get_filenames(hook['files'], hook['exclude'])\n if not filenames:\n no_files_msg = '(no files to check) '\n skipped_msg = 'Skipped'\n write(\n '{0}{1}{2}{3}\\n'.format(\n hook['name'],\n '.' * (\n COLS -\n len(hook['name']) -\n len(no_files_msg) -\n len(skipped_msg) -\n 6\n ),\n no_files_msg,\n color.format_color(skipped_msg, color.TURQUOISE, args.color),\n )\n )\n return 0\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n write(\n '{0}{1}'.format(\n hook['name'],\n '.' 
* (COLS - len(hook['name']) - PASS_FAIL_LENGTH - 6),\n ),\n )\n sys.stdout.flush()\n\n retcode, stdout, stderr = repository.run_hook(\n runner.cmd_runner,\n hook_id,\n filenames,\n )\n\n if retcode != repository.hooks[hook_id]['expected_return_value']:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n write(color.format_color(pass_fail, print_color, args.color) + '\\n')\n\n if (stdout or stderr) and (retcode or args.verbose):\n write('\\n')\n for output in (stdout, stderr):\n if output.strip():\n write(output.strip() + '\\n')\n write('\\n')\n\n return retcode\n\n\ndef _run_hooks(runner, args, write):\n \"\"\"Actually run the hooks.\"\"\"\n retval = 0\n\n for repo in runner.repositories:\n for hook_id in repo.hooks:\n retval |= _run_single_hook(runner, repo, hook_id, args, write=write)\n\n return retval\n\n\ndef _run_hook(runner, hook_id, args, write):\n for repo in runner.repositories:\n if hook_id in repo.hooks:\n return _run_single_hook(runner, repo, hook_id, args, write=write)\n else:\n write('No hook with id `{0}`\\n'.format(hook_id))\n return 1\n\n\ndef _has_unmerged_paths(runner):\n _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])\n return bool(stdout.strip())\n\n\ndef run(runner, args, write=sys.stdout.write):\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color, write=write))\n logger.setLevel(logging.INFO)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths(runner):\n logger.error('Unmerged files. Resolve before committing.')\n return 1\n\n if args.no_stash or args.all_files:\n ctx = noop_context()\n else:\n ctx = staged_files_only(runner.cmd_runner)\n\n with ctx:\n if args.hook:\n return _run_hook(runner, args.hook, args, write=write)\n else:\n return _run_hooks(runner, args, write=write)\n", "path": "pre_commit/commands.py"}]}
| 3,084 | 1,016 |
gh_patches_debug_29553
|
rasdani/github-patches
|
git_diff
|
borgbackup__borg-2980
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
netbsd: no readline
Python build missing readline, likely:
```
def import_paperkey(self, args):
# imported here because it has global side effects
> import readline
E ImportError: No module named 'readline'
.tox/py34/lib/python3.4/site-packages/borg/crypto/keymanager.py:146: ImportError
```
</issue>
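One conventional way to tolerate interpreters built without the readline extension is to guard the import, as sketched below; this is an illustration, not necessarily the fix borg adopted.

```python
try:
    import readline  # noqa: F401  (only improves interactive line editing)
except ImportError:
    readline = None  # fall back to plain input() behaviour
```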
<code>
[start of src/borg/crypto/keymanager.py]
1 import binascii
2 import pkgutil
3 import textwrap
4 from binascii import unhexlify, a2b_base64, b2a_base64
5 from hashlib import sha256
6
7 from ..helpers import Manifest, NoManifestError, Error, yes, bin_to_hex, dash_open
8 from ..repository import Repository
9
10 from .key import KeyfileKey, KeyfileNotFoundError, KeyBlobStorage, identify_key
11
12
13 class UnencryptedRepo(Error):
14 """Keymanagement not available for unencrypted repositories."""
15
16
17 class UnknownKeyType(Error):
18 """Keytype {0} is unknown."""
19
20
21 class RepoIdMismatch(Error):
22 """This key backup seems to be for a different backup repository, aborting."""
23
24
25 class NotABorgKeyFile(Error):
26 """This file is not a borg key backup, aborting."""
27
28
29 def sha256_truncated(data, num):
30 h = sha256()
31 h.update(data)
32 return h.hexdigest()[:num]
33
34
35 class KeyManager:
36 def __init__(self, repository):
37 self.repository = repository
38 self.keyblob = None
39 self.keyblob_storage = None
40
41 try:
42 manifest_data = self.repository.get(Manifest.MANIFEST_ID)
43 except Repository.ObjectNotFound:
44 raise NoManifestError
45
46 key = identify_key(manifest_data)
47 self.keyblob_storage = key.STORAGE
48 if self.keyblob_storage == KeyBlobStorage.NO_STORAGE:
49 raise UnencryptedRepo()
50
51 def load_keyblob(self):
52 if self.keyblob_storage == KeyBlobStorage.KEYFILE:
53 k = KeyfileKey(self.repository)
54 target = k.find_key()
55 with open(target, 'r') as fd:
56 self.keyblob = ''.join(fd.readlines()[1:])
57
58 elif self.keyblob_storage == KeyBlobStorage.REPO:
59 self.keyblob = self.repository.load_key().decode()
60
61 def store_keyblob(self, args):
62 if self.keyblob_storage == KeyBlobStorage.KEYFILE:
63 k = KeyfileKey(self.repository)
64 try:
65 target = k.find_key()
66 except KeyfileNotFoundError:
67 target = k.get_new_target(args)
68
69 self.store_keyfile(target)
70 elif self.keyblob_storage == KeyBlobStorage.REPO:
71 self.repository.save_key(self.keyblob.encode('utf-8'))
72
73 def get_keyfile_data(self):
74 data = '%s %s\n' % (KeyfileKey.FILE_ID, bin_to_hex(self.repository.id))
75 data += self.keyblob
76 if not self.keyblob.endswith('\n'):
77 data += '\n'
78 return data
79
80 def store_keyfile(self, target):
81 with open(target, 'w') as fd:
82 fd.write(self.get_keyfile_data())
83
84 def export(self, path):
85 self.store_keyfile(path)
86
87 def export_qr(self, path):
88 with open(path, 'wb') as fd:
89 key_data = self.get_keyfile_data()
90 html = pkgutil.get_data('borg', 'paperkey.html')
91 html = html.replace(b'</textarea>', key_data.encode() + b'</textarea>')
92 fd.write(html)
93
94 def export_paperkey(self, path):
95 def grouped(s):
96 ret = ''
97 i = 0
98 for ch in s:
99 if i and i % 6 == 0:
100 ret += ' '
101 ret += ch
102 i += 1
103 return ret
104
105 export = 'To restore key use borg key import --paper /path/to/repo\n\n'
106
107 binary = a2b_base64(self.keyblob)
108 export += 'BORG PAPER KEY v1\n'
109 lines = (len(binary) + 17) // 18
110 repoid = bin_to_hex(self.repository.id)[:18]
111 complete_checksum = sha256_truncated(binary, 12)
112 export += 'id: {0:d} / {1} / {2} - {3}\n'.format(lines,
113 grouped(repoid),
114 grouped(complete_checksum),
115 sha256_truncated((str(lines) + '/' + repoid + '/' + complete_checksum).encode('ascii'), 2))
116 idx = 0
117 while len(binary):
118 idx += 1
119 binline = binary[:18]
120 checksum = sha256_truncated(idx.to_bytes(2, byteorder='big') + binline, 2)
121 export += '{0:2d}: {1} - {2}\n'.format(idx, grouped(bin_to_hex(binline)), checksum)
122 binary = binary[18:]
123
124 if path:
125 with open(path, 'w') as fd:
126 fd.write(export)
127 else:
128 print(export)
129
130 def import_keyfile(self, args):
131 file_id = KeyfileKey.FILE_ID
132 first_line = file_id + ' ' + bin_to_hex(self.repository.id) + '\n'
133 with dash_open(args.path, 'r') as fd:
134 file_first_line = fd.read(len(first_line))
135 if file_first_line != first_line:
136 if not file_first_line.startswith(file_id):
137 raise NotABorgKeyFile()
138 else:
139 raise RepoIdMismatch()
140 self.keyblob = fd.read()
141
142 self.store_keyblob(args)
143
144 def import_paperkey(self, args):
145 # imported here because it has global side effects
146 import readline
147
148 repoid = bin_to_hex(self.repository.id)[:18]
149 try:
150 while True: # used for repeating on overall checksum mismatch
151 # id line input
152 while True:
153 idline = input('id: ').replace(' ', '')
154 if idline == "":
155 if yes("Abort import? [yN]:"):
156 raise EOFError()
157
158 try:
159 (data, checksum) = idline.split('-')
160 except ValueError:
161 print("each line must contain exactly one '-', try again")
162 continue
163 try:
164 (id_lines, id_repoid, id_complete_checksum) = data.split('/')
165 except ValueError:
166 print("the id line must contain exactly three '/', try again")
167 continue
168 if sha256_truncated(data.lower().encode('ascii'), 2) != checksum:
169 print('line checksum did not match, try same line again')
170 continue
171 try:
172 lines = int(id_lines)
173 except ValueError:
174 print('internal error while parsing length')
175
176 break
177
178 if repoid != id_repoid:
179 raise RepoIdMismatch()
180
181 result = b''
182 idx = 1
183 # body line input
184 while True:
185 inline = input('{0:2d}: '.format(idx))
186 inline = inline.replace(' ', '')
187 if inline == "":
188 if yes("Abort import? [yN]:"):
189 raise EOFError()
190 try:
191 (data, checksum) = inline.split('-')
192 except ValueError:
193 print("each line must contain exactly one '-', try again")
194 continue
195 try:
196 part = unhexlify(data)
197 except binascii.Error:
198 print("only characters 0-9 and a-f and '-' are valid, try again")
199 continue
200 if sha256_truncated(idx.to_bytes(2, byteorder='big') + part, 2) != checksum:
201 print('line checksum did not match, try line {0} again'.format(idx))
202 continue
203 result += part
204 if idx == lines:
205 break
206 idx += 1
207
208 if sha256_truncated(result, 12) != id_complete_checksum:
209 print('The overall checksum did not match, retry or enter a blank line to abort.')
210 continue
211
212 self.keyblob = '\n'.join(textwrap.wrap(b2a_base64(result).decode('ascii'))) + '\n'
213 self.store_keyblob(args)
214 break
215
216 except EOFError:
217 print('\n - aborted')
218 return
219
[end of src/borg/crypto/keymanager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/borg/crypto/keymanager.py b/src/borg/crypto/keymanager.py
--- a/src/borg/crypto/keymanager.py
+++ b/src/borg/crypto/keymanager.py
@@ -142,8 +142,11 @@
self.store_keyblob(args)
def import_paperkey(self, args):
- # imported here because it has global side effects
- import readline
+ try:
+ # imported here because it has global side effects
+ import readline
+ except ImportError:
+ print('Note: No line editing available due to missing readline support')
repoid = bin_to_hex(self.repository.id)[:18]
try:
@@ -151,8 +154,8 @@
# id line input
while True:
idline = input('id: ').replace(' ', '')
- if idline == "":
- if yes("Abort import? [yN]:"):
+ if idline == '':
+ if yes('Abort import? [yN]:'):
raise EOFError()
try:
@@ -184,8 +187,8 @@
while True:
inline = input('{0:2d}: '.format(idx))
inline = inline.replace(' ', '')
- if inline == "":
- if yes("Abort import? [yN]:"):
+ if inline == '':
+ if yes('Abort import? [yN]:'):
raise EOFError()
try:
(data, checksum) = inline.split('-')
|
{"golden_diff": "diff --git a/src/borg/crypto/keymanager.py b/src/borg/crypto/keymanager.py\n--- a/src/borg/crypto/keymanager.py\n+++ b/src/borg/crypto/keymanager.py\n@@ -142,8 +142,11 @@\n self.store_keyblob(args)\n \n def import_paperkey(self, args):\n- # imported here because it has global side effects\n- import readline\n+ try:\n+ # imported here because it has global side effects\n+ import readline\n+ except ImportError:\n+ print('Note: No line editing available due to missing readline support')\n \n repoid = bin_to_hex(self.repository.id)[:18]\n try:\n@@ -151,8 +154,8 @@\n # id line input\n while True:\n idline = input('id: ').replace(' ', '')\n- if idline == \"\":\n- if yes(\"Abort import? [yN]:\"):\n+ if idline == '':\n+ if yes('Abort import? [yN]:'):\n raise EOFError()\n \n try:\n@@ -184,8 +187,8 @@\n while True:\n inline = input('{0:2d}: '.format(idx))\n inline = inline.replace(' ', '')\n- if inline == \"\":\n- if yes(\"Abort import? [yN]:\"):\n+ if inline == '':\n+ if yes('Abort import? [yN]:'):\n raise EOFError()\n try:\n (data, checksum) = inline.split('-')\n", "issue": "netbsd: no readline\nPython build missing readline, likely:\r\n```\r\n def import_paperkey(self, args):\r\n # imported here because it has global side effects\r\n> import readline\r\nE ImportError: No module named 'readline'\r\n\r\n.tox/py34/lib/python3.4/site-packages/borg/crypto/keymanager.py:146: ImportError\r\n```\n", "before_files": [{"content": "import binascii\nimport pkgutil\nimport textwrap\nfrom binascii import unhexlify, a2b_base64, b2a_base64\nfrom hashlib import sha256\n\nfrom ..helpers import Manifest, NoManifestError, Error, yes, bin_to_hex, dash_open\nfrom ..repository import Repository\n\nfrom .key import KeyfileKey, KeyfileNotFoundError, KeyBlobStorage, identify_key\n\n\nclass UnencryptedRepo(Error):\n \"\"\"Keymanagement not available for unencrypted repositories.\"\"\"\n\n\nclass UnknownKeyType(Error):\n \"\"\"Keytype {0} is unknown.\"\"\"\n\n\nclass RepoIdMismatch(Error):\n \"\"\"This key backup seems to be for a different backup repository, aborting.\"\"\"\n\n\nclass NotABorgKeyFile(Error):\n \"\"\"This file is not a borg key backup, aborting.\"\"\"\n\n\ndef sha256_truncated(data, num):\n h = sha256()\n h.update(data)\n return h.hexdigest()[:num]\n\n\nclass KeyManager:\n def __init__(self, repository):\n self.repository = repository\n self.keyblob = None\n self.keyblob_storage = None\n\n try:\n manifest_data = self.repository.get(Manifest.MANIFEST_ID)\n except Repository.ObjectNotFound:\n raise NoManifestError\n\n key = identify_key(manifest_data)\n self.keyblob_storage = key.STORAGE\n if self.keyblob_storage == KeyBlobStorage.NO_STORAGE:\n raise UnencryptedRepo()\n\n def load_keyblob(self):\n if self.keyblob_storage == KeyBlobStorage.KEYFILE:\n k = KeyfileKey(self.repository)\n target = k.find_key()\n with open(target, 'r') as fd:\n self.keyblob = ''.join(fd.readlines()[1:])\n\n elif self.keyblob_storage == KeyBlobStorage.REPO:\n self.keyblob = self.repository.load_key().decode()\n\n def store_keyblob(self, args):\n if self.keyblob_storage == KeyBlobStorage.KEYFILE:\n k = KeyfileKey(self.repository)\n try:\n target = k.find_key()\n except KeyfileNotFoundError:\n target = k.get_new_target(args)\n\n self.store_keyfile(target)\n elif self.keyblob_storage == KeyBlobStorage.REPO:\n self.repository.save_key(self.keyblob.encode('utf-8'))\n\n def get_keyfile_data(self):\n data = '%s %s\\n' % (KeyfileKey.FILE_ID, bin_to_hex(self.repository.id))\n data += self.keyblob\n if not 
self.keyblob.endswith('\\n'):\n data += '\\n'\n return data\n\n def store_keyfile(self, target):\n with open(target, 'w') as fd:\n fd.write(self.get_keyfile_data())\n\n def export(self, path):\n self.store_keyfile(path)\n\n def export_qr(self, path):\n with open(path, 'wb') as fd:\n key_data = self.get_keyfile_data()\n html = pkgutil.get_data('borg', 'paperkey.html')\n html = html.replace(b'</textarea>', key_data.encode() + b'</textarea>')\n fd.write(html)\n\n def export_paperkey(self, path):\n def grouped(s):\n ret = ''\n i = 0\n for ch in s:\n if i and i % 6 == 0:\n ret += ' '\n ret += ch\n i += 1\n return ret\n\n export = 'To restore key use borg key import --paper /path/to/repo\\n\\n'\n\n binary = a2b_base64(self.keyblob)\n export += 'BORG PAPER KEY v1\\n'\n lines = (len(binary) + 17) // 18\n repoid = bin_to_hex(self.repository.id)[:18]\n complete_checksum = sha256_truncated(binary, 12)\n export += 'id: {0:d} / {1} / {2} - {3}\\n'.format(lines,\n grouped(repoid),\n grouped(complete_checksum),\n sha256_truncated((str(lines) + '/' + repoid + '/' + complete_checksum).encode('ascii'), 2))\n idx = 0\n while len(binary):\n idx += 1\n binline = binary[:18]\n checksum = sha256_truncated(idx.to_bytes(2, byteorder='big') + binline, 2)\n export += '{0:2d}: {1} - {2}\\n'.format(idx, grouped(bin_to_hex(binline)), checksum)\n binary = binary[18:]\n\n if path:\n with open(path, 'w') as fd:\n fd.write(export)\n else:\n print(export)\n\n def import_keyfile(self, args):\n file_id = KeyfileKey.FILE_ID\n first_line = file_id + ' ' + bin_to_hex(self.repository.id) + '\\n'\n with dash_open(args.path, 'r') as fd:\n file_first_line = fd.read(len(first_line))\n if file_first_line != first_line:\n if not file_first_line.startswith(file_id):\n raise NotABorgKeyFile()\n else:\n raise RepoIdMismatch()\n self.keyblob = fd.read()\n\n self.store_keyblob(args)\n\n def import_paperkey(self, args):\n # imported here because it has global side effects\n import readline\n\n repoid = bin_to_hex(self.repository.id)[:18]\n try:\n while True: # used for repeating on overall checksum mismatch\n # id line input\n while True:\n idline = input('id: ').replace(' ', '')\n if idline == \"\":\n if yes(\"Abort import? [yN]:\"):\n raise EOFError()\n\n try:\n (data, checksum) = idline.split('-')\n except ValueError:\n print(\"each line must contain exactly one '-', try again\")\n continue\n try:\n (id_lines, id_repoid, id_complete_checksum) = data.split('/')\n except ValueError:\n print(\"the id line must contain exactly three '/', try again\")\n continue\n if sha256_truncated(data.lower().encode('ascii'), 2) != checksum:\n print('line checksum did not match, try same line again')\n continue\n try:\n lines = int(id_lines)\n except ValueError:\n print('internal error while parsing length')\n\n break\n\n if repoid != id_repoid:\n raise RepoIdMismatch()\n\n result = b''\n idx = 1\n # body line input\n while True:\n inline = input('{0:2d}: '.format(idx))\n inline = inline.replace(' ', '')\n if inline == \"\":\n if yes(\"Abort import? 
[yN]:\"):\n raise EOFError()\n try:\n (data, checksum) = inline.split('-')\n except ValueError:\n print(\"each line must contain exactly one '-', try again\")\n continue\n try:\n part = unhexlify(data)\n except binascii.Error:\n print(\"only characters 0-9 and a-f and '-' are valid, try again\")\n continue\n if sha256_truncated(idx.to_bytes(2, byteorder='big') + part, 2) != checksum:\n print('line checksum did not match, try line {0} again'.format(idx))\n continue\n result += part\n if idx == lines:\n break\n idx += 1\n\n if sha256_truncated(result, 12) != id_complete_checksum:\n print('The overall checksum did not match, retry or enter a blank line to abort.')\n continue\n\n self.keyblob = '\\n'.join(textwrap.wrap(b2a_base64(result).decode('ascii'))) + '\\n'\n self.store_keyblob(args)\n break\n\n except EOFError:\n print('\\n - aborted')\n return\n", "path": "src/borg/crypto/keymanager.py"}]}
| 2,867 | 332 |
gh_patches_debug_23129
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-1567
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove or block impersonate's "list" and "search" urls
Saleor uses the [django-impersonate](https://bitbucket.org/petersanchez/django-impersonate/overview) library for its client impersonation feature. While working on #1549 I found out that, in addition to the two views we actually use (start and stop impersonating a user), the library brings two additional views that we don't really want to support:
https://demo.getsaleor.com/impersonate/list/
https://demo.getsaleor.com/impersonate/search/?q=admin (note: this one returns a 500 at that link)
Ideally, the library would have provided a setting to disable those views, but this isn't the case.
So it's worth asking ourselves what harm there is in keeping those views around, and, if we really want to get rid of them, how we would go about it.
Looking at [impersonate.urls](https://bitbucket.org/petersanchez/django-impersonate/src/f898c697b2bd9945187f8667d680e6d10d06dc33/impersonate/urls.py?at=default&fileviewer=file-view-default), it may be as simple as updating our `urls.py` to explicitly define `impersonate-start` and `impersonate-stop`, or perhaps we should open an issue upstream and see what the library's author thinks about it.
</issue>
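For a concrete picture of the workaround discussed above, the idea is to stop `include()`-ing the library's whole urlconf and register only the two views Saleor actually needs — which is what the accepted patch at the end of this record does. A minimal sketch, using the Django 1.x-style `url()` routes the project's `urls.py` already uses:
```
from django.conf.urls import url
from impersonate.views import impersonate, stop_impersonate

urlpatterns = [
    # only start/stop are exposed; the library's /list/ and /search/ views
    # are simply never routed, so they cannot be reached
    url(r'^impersonate/stop/$', stop_impersonate, name='impersonate-stop'),
    url(r'^impersonate/(?P<uid>\d+)/$', impersonate, name='impersonate-start'),
]
```
Keeping the reversed names `impersonate-start`/`impersonate-stop` identical to the library's own url names means existing `{% url %}` lookups should keep working.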
<code>
[start of saleor/urls.py]
1 from django.conf import settings
2 from django.conf.urls import url, include
3 from django.conf.urls.static import static
4 from django.contrib.sitemaps.views import sitemap
5 from django.contrib.staticfiles.views import serve
6 from django.views.i18n import JavaScriptCatalog
7 from graphene_django.views import GraphQLView
8
9 from .cart.urls import urlpatterns as cart_urls
10 from .checkout.urls import urlpatterns as checkout_urls
11 from .core.sitemaps import sitemaps
12 from .core.urls import urlpatterns as core_urls
13 from .dashboard.urls import urlpatterns as dashboard_urls
14 from .data_feeds.urls import urlpatterns as feed_urls
15 from .order.urls import urlpatterns as order_urls
16 from .product.urls import urlpatterns as product_urls
17 from .registration.urls import urlpatterns as registration_urls
18 from .search.urls import urlpatterns as search_urls
19 from .userprofile.urls import urlpatterns as userprofile_urls
20
21 urlpatterns = [
22 url(r'^', include(core_urls)),
23 url(r'^account/', include(registration_urls)),
24 url(r'^cart/', include((cart_urls, 'cart'), namespace='cart')),
25 url(r'^checkout/',
26 include((checkout_urls, 'checkout'), namespace='checkout')),
27 url(r'^dashboard/',
28 include((dashboard_urls, 'dashboard'), namespace='dashboard')),
29 url(r'^graphql', GraphQLView.as_view(graphiql=settings.DEBUG)),
30 url(r'^impersonate/', include('impersonate.urls')),
31 url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),
32 url(r'^order/', include((order_urls, 'order'), namespace='order')),
33 url(r'^products/',
34 include((product_urls, 'product'), namespace='product')),
35 url(r'^profile/',
36 include((userprofile_urls, 'profile'), namespace='profile')),
37 url(r'^feeds/',
38 include((feed_urls, 'data_feeds'), namespace='data_feeds')),
39 url(r'^search/', include((search_urls, 'search'), namespace='search')),
40 url(r'^sitemap\.xml$', sitemap, {'sitemaps': sitemaps},
41 name='django.contrib.sitemaps.views.sitemap'),
42 url(r'', include('payments.urls')),
43 url('', include('social_django.urls', namespace='social')),
44 ]
45
46 if settings.DEBUG:
47 # static files (images, css, javascript, etc.)
48 urlpatterns += [
49 url(r'^static/(?P<path>.*)$', serve)
50 ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
51
[end of saleor/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/saleor/urls.py b/saleor/urls.py
--- a/saleor/urls.py
+++ b/saleor/urls.py
@@ -5,6 +5,7 @@
from django.contrib.staticfiles.views import serve
from django.views.i18n import JavaScriptCatalog
from graphene_django.views import GraphQLView
+from impersonate.views import impersonate, stop_impersonate
from .cart.urls import urlpatterns as cart_urls
from .checkout.urls import urlpatterns as checkout_urls
@@ -27,7 +28,8 @@
url(r'^dashboard/',
include((dashboard_urls, 'dashboard'), namespace='dashboard')),
url(r'^graphql', GraphQLView.as_view(graphiql=settings.DEBUG)),
- url(r'^impersonate/', include('impersonate.urls')),
+ url(r'^impersonate/stop/$', stop_impersonate, name='impersonate-stop'),
+ url(r'^impersonate/(?P<uid>\d+)/$', impersonate, name='impersonate-start'),
url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),
url(r'^order/', include((order_urls, 'order'), namespace='order')),
url(r'^products/',
|
{"golden_diff": "diff --git a/saleor/urls.py b/saleor/urls.py\n--- a/saleor/urls.py\n+++ b/saleor/urls.py\n@@ -5,6 +5,7 @@\n from django.contrib.staticfiles.views import serve\n from django.views.i18n import JavaScriptCatalog\n from graphene_django.views import GraphQLView\n+from impersonate.views import impersonate, stop_impersonate\n \n from .cart.urls import urlpatterns as cart_urls\n from .checkout.urls import urlpatterns as checkout_urls\n@@ -27,7 +28,8 @@\n url(r'^dashboard/',\n include((dashboard_urls, 'dashboard'), namespace='dashboard')),\n url(r'^graphql', GraphQLView.as_view(graphiql=settings.DEBUG)),\n- url(r'^impersonate/', include('impersonate.urls')),\n+ url(r'^impersonate/stop/$', stop_impersonate, name='impersonate-stop'),\n+ url(r'^impersonate/(?P<uid>\\d+)/$', impersonate, name='impersonate-start'),\n url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),\n url(r'^order/', include((order_urls, 'order'), namespace='order')),\n url(r'^products/',\n", "issue": "Remove or block impersonate's \"list\" and \"search\" urls\nSaleor uses the [django-impersonate](https://bitbucket.org/petersanchez/django-impersonate/overview) for client impersonation feature. While working on #1549 I've found out that in addition to two views that we are using (start and stop impersonating the user), the library brings additional two views that we don't really want to support:\r\n\r\nhttps://demo.getsaleor.com/impersonate/list/\r\nhttps://demo.getsaleor.com/impersonate/search/?q=admin (note: this one 500's on link)\r\n\r\nIdeally, library would've provided us with a settings to disable those views, but this isn't the case.\r\n\r\nSo its worth asking ourselves what harm is there in keeping those views around, and if we really want to get rid of those two views, how would we go about it?\r\n\r\nLooking at the [imersonate.urls](https://bitbucket.org/petersanchez/django-impersonate/src/f898c697b2bd9945187f8667d680e6d10d06dc33/impersonate/urls.py?at=default&fileviewer=file-view-default), it may be as simple as updating our `urls.py` to explictly define `impersonate-start` and `impersonate-stop`, or perhaps we should open the issue upstream and see what library's author thinks about it?\r\n \n", "before_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import url, include\nfrom django.conf.urls.static import static\nfrom django.contrib.sitemaps.views import sitemap\nfrom django.contrib.staticfiles.views import serve\nfrom django.views.i18n import JavaScriptCatalog\nfrom graphene_django.views import GraphQLView\n\nfrom .cart.urls import urlpatterns as cart_urls\nfrom .checkout.urls import urlpatterns as checkout_urls\nfrom .core.sitemaps import sitemaps\nfrom .core.urls import urlpatterns as core_urls\nfrom .dashboard.urls import urlpatterns as dashboard_urls\nfrom .data_feeds.urls import urlpatterns as feed_urls\nfrom .order.urls import urlpatterns as order_urls\nfrom .product.urls import urlpatterns as product_urls\nfrom .registration.urls import urlpatterns as registration_urls\nfrom .search.urls import urlpatterns as search_urls\nfrom .userprofile.urls import urlpatterns as userprofile_urls\n\nurlpatterns = [\n url(r'^', include(core_urls)),\n url(r'^account/', include(registration_urls)),\n url(r'^cart/', include((cart_urls, 'cart'), namespace='cart')),\n url(r'^checkout/',\n include((checkout_urls, 'checkout'), namespace='checkout')),\n url(r'^dashboard/',\n include((dashboard_urls, 'dashboard'), namespace='dashboard')),\n url(r'^graphql', 
GraphQLView.as_view(graphiql=settings.DEBUG)),\n url(r'^impersonate/', include('impersonate.urls')),\n url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),\n url(r'^order/', include((order_urls, 'order'), namespace='order')),\n url(r'^products/',\n include((product_urls, 'product'), namespace='product')),\n url(r'^profile/',\n include((userprofile_urls, 'profile'), namespace='profile')),\n url(r'^feeds/',\n include((feed_urls, 'data_feeds'), namespace='data_feeds')),\n url(r'^search/', include((search_urls, 'search'), namespace='search')),\n url(r'^sitemap\\.xml$', sitemap, {'sitemaps': sitemaps},\n name='django.contrib.sitemaps.views.sitemap'),\n url(r'', include('payments.urls')),\n url('', include('social_django.urls', namespace='social')),\n]\n\nif settings.DEBUG:\n # static files (images, css, javascript, etc.)\n urlpatterns += [\n url(r'^static/(?P<path>.*)$', serve)\n ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n", "path": "saleor/urls.py"}]}
| 1,467 | 266 |
gh_patches_debug_16174
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-6912
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Firestore] WriteBatch doesn't return instance so you cannot chain.
The WriteBatch methods don’t return the WriteBatch instance, so calls can’t be chained.
</issue>
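The complaint is about fluent-style usage: `batch.create(...).delete(...).commit()` only works if every mutator hands the batch back. A toy sketch of that pattern (a hypothetical stand-alone class, not the library's real code — the real `WriteBatch` methods would simply end with `return self`):
```
class ChainableBatch:
    """Toy illustration: each mutator returns self so calls can be chained."""

    def __init__(self):
        self._ops = []

    def create(self, path, data):
        self._ops.append(('create', path, data))
        return self  # returning the instance is what enables chaining

    def delete(self, path):
        self._ops.append(('delete', path))
        return self

    def commit(self):
        ops, self._ops = self._ops, []
        return ops


batch = ChainableBatch()
results = batch.create('users/alice', {'age': 30}).delete('users/bob').commit()
```
Worth noting: the patch that was actually merged (shown after the code listing) takes a different route — it records `write_results` and `commit_time` on the batch and adds `__enter__`/`__exit__` so the batch can be used as a context manager — rather than returning `self` from each mutator.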
<code>
[start of firestore/google/cloud/firestore_v1beta1/batch.py]
1 # Copyright 2017 Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helpers for batch requests to the Google Cloud Firestore API."""
16
17
18 from google.cloud.firestore_v1beta1 import _helpers
19
20
21 class WriteBatch(object):
22 """Accumulate write operations to be sent in a batch.
23
24 This has the same set of methods for write operations that
25 :class:`~.firestore_v1beta1.document.DocumentReference` does,
26 e.g. :meth:`~.firestore_v1beta1.document.DocumentReference.create`.
27
28 Args:
29 client (~.firestore_v1beta1.client.Client): The client that
30 created this batch.
31 """
32
33 def __init__(self, client):
34 self._client = client
35 self._write_pbs = []
36
37 def _add_write_pbs(self, write_pbs):
38 """Add `Write`` protobufs to this transaction.
39
40 This method intended to be over-ridden by subclasses.
41
42 Args:
43 write_pbs (List[google.cloud.proto.firestore.v1beta1.\
44 write_pb2.Write]): A list of write protobufs to be added.
45 """
46 self._write_pbs.extend(write_pbs)
47
48 def create(self, reference, document_data):
49 """Add a "change" to this batch to create a document.
50
51 If the document given by ``reference`` already exists, then this
52 batch will fail when :meth:`commit`-ed.
53
54 Args:
55 reference (~.firestore_v1beta1.document.DocumentReference): A
56 document reference to be created in this batch.
57 document_data (dict): Property names and values to use for
58 creating a document.
59 """
60 write_pbs = _helpers.pbs_for_create(reference._document_path, document_data)
61 self._add_write_pbs(write_pbs)
62
63 def set(self, reference, document_data, merge=False):
64 """Add a "change" to replace a document.
65
66 See
67 :meth:`~.firestore_v1beta1.document.DocumentReference.set` for
68 more information on how ``option`` determines how the change is
69 applied.
70
71 Args:
72 reference (~.firestore_v1beta1.document.DocumentReference):
73 A document reference that will have values set in this batch.
74 document_data (dict):
75 Property names and values to use for replacing a document.
76 merge (Optional[bool] or Optional[List<apispec>]):
77 If True, apply merging instead of overwriting the state
78 of the document.
79 """
80 if merge is not False:
81 write_pbs = _helpers.pbs_for_set_with_merge(
82 reference._document_path, document_data, merge
83 )
84 else:
85 write_pbs = _helpers.pbs_for_set_no_merge(
86 reference._document_path, document_data
87 )
88
89 self._add_write_pbs(write_pbs)
90
91 def update(self, reference, field_updates, option=None):
92 """Add a "change" to update a document.
93
94 See
95 :meth:`~.firestore_v1beta1.document.DocumentReference.update` for
96 more information on ``field_updates`` and ``option``.
97
98 Args:
99 reference (~.firestore_v1beta1.document.DocumentReference): A
100 document reference that will be deleted in this batch.
101 field_updates (dict): Field names or paths to update and values
102 to update with.
103 option (Optional[~.firestore_v1beta1.client.WriteOption]): A
104 write option to make assertions / preconditions on the server
105 state of the document before applying changes.
106 """
107 if option.__class__.__name__ == "ExistsOption":
108 raise ValueError("you must not pass an explicit write option to " "update.")
109 write_pbs = _helpers.pbs_for_update(
110 reference._document_path, field_updates, option
111 )
112 self._add_write_pbs(write_pbs)
113
114 def delete(self, reference, option=None):
115 """Add a "change" to delete a document.
116
117 See
118 :meth:`~.firestore_v1beta1.document.DocumentReference.delete` for
119 more information on how ``option`` determines how the change is
120 applied.
121
122 Args:
123 reference (~.firestore_v1beta1.document.DocumentReference): A
124 document reference that will be deleted in this batch.
125 option (Optional[~.firestore_v1beta1.client.WriteOption]): A
126 write option to make assertions / preconditions on the server
127 state of the document before applying changes.
128 """
129 write_pb = _helpers.pb_for_delete(reference._document_path, option)
130 self._add_write_pbs([write_pb])
131
132 def commit(self):
133 """Commit the changes accumulated in this batch.
134
135 Returns:
136 List[google.cloud.proto.firestore.v1beta1.\
137 write_pb2.WriteResult, ...]: The write results corresponding
138 to the changes committed, returned in the same order as the
139 changes were applied to this batch. A write result contains an
140 ``update_time`` field.
141 """
142 commit_response = self._client._firestore_api.commit(
143 self._client._database_string,
144 self._write_pbs,
145 transaction=None,
146 metadata=self._client._rpc_metadata,
147 )
148
149 self._write_pbs = []
150 return list(commit_response.write_results)
151
[end of firestore/google/cloud/firestore_v1beta1/batch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/firestore/google/cloud/firestore_v1beta1/batch.py b/firestore/google/cloud/firestore_v1beta1/batch.py
--- a/firestore/google/cloud/firestore_v1beta1/batch.py
+++ b/firestore/google/cloud/firestore_v1beta1/batch.py
@@ -33,6 +33,8 @@
def __init__(self, client):
self._client = client
self._write_pbs = []
+ self.write_results = None
+ self.commit_time = None
def _add_write_pbs(self, write_pbs):
"""Add `Write`` protobufs to this transaction.
@@ -147,4 +149,13 @@
)
self._write_pbs = []
- return list(commit_response.write_results)
+ self.write_results = results = list(commit_response.write_results)
+ self.commit_time = commit_response.commit_time
+ return results
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_value, traceback):
+ if exc_type is None:
+ self.commit()
|
{"golden_diff": "diff --git a/firestore/google/cloud/firestore_v1beta1/batch.py b/firestore/google/cloud/firestore_v1beta1/batch.py\n--- a/firestore/google/cloud/firestore_v1beta1/batch.py\n+++ b/firestore/google/cloud/firestore_v1beta1/batch.py\n@@ -33,6 +33,8 @@\n def __init__(self, client):\n self._client = client\n self._write_pbs = []\n+ self.write_results = None\n+ self.commit_time = None\n \n def _add_write_pbs(self, write_pbs):\n \"\"\"Add `Write`` protobufs to this transaction.\n@@ -147,4 +149,13 @@\n )\n \n self._write_pbs = []\n- return list(commit_response.write_results)\n+ self.write_results = results = list(commit_response.write_results)\n+ self.commit_time = commit_response.commit_time\n+ return results\n+\n+ def __enter__(self):\n+ return self\n+\n+ def __exit__(self, exc_type, exc_value, traceback):\n+ if exc_type is None:\n+ self.commit()\n", "issue": "[Firestore] WriteBatch doesn't return instance so you cannot chain.\nThe WriteBatch methods don\u2019t return the WriteBatch instances for chaining.\r\n\n", "before_files": [{"content": "# Copyright 2017 Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for batch requests to the Google Cloud Firestore API.\"\"\"\n\n\nfrom google.cloud.firestore_v1beta1 import _helpers\n\n\nclass WriteBatch(object):\n \"\"\"Accumulate write operations to be sent in a batch.\n\n This has the same set of methods for write operations that\n :class:`~.firestore_v1beta1.document.DocumentReference` does,\n e.g. 
:meth:`~.firestore_v1beta1.document.DocumentReference.create`.\n\n Args:\n client (~.firestore_v1beta1.client.Client): The client that\n created this batch.\n \"\"\"\n\n def __init__(self, client):\n self._client = client\n self._write_pbs = []\n\n def _add_write_pbs(self, write_pbs):\n \"\"\"Add `Write`` protobufs to this transaction.\n\n This method intended to be over-ridden by subclasses.\n\n Args:\n write_pbs (List[google.cloud.proto.firestore.v1beta1.\\\n write_pb2.Write]): A list of write protobufs to be added.\n \"\"\"\n self._write_pbs.extend(write_pbs)\n\n def create(self, reference, document_data):\n \"\"\"Add a \"change\" to this batch to create a document.\n\n If the document given by ``reference`` already exists, then this\n batch will fail when :meth:`commit`-ed.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference to be created in this batch.\n document_data (dict): Property names and values to use for\n creating a document.\n \"\"\"\n write_pbs = _helpers.pbs_for_create(reference._document_path, document_data)\n self._add_write_pbs(write_pbs)\n\n def set(self, reference, document_data, merge=False):\n \"\"\"Add a \"change\" to replace a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.set` for\n more information on how ``option`` determines how the change is\n applied.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference):\n A document reference that will have values set in this batch.\n document_data (dict):\n Property names and values to use for replacing a document.\n merge (Optional[bool] or Optional[List<apispec>]):\n If True, apply merging instead of overwriting the state\n of the document.\n \"\"\"\n if merge is not False:\n write_pbs = _helpers.pbs_for_set_with_merge(\n reference._document_path, document_data, merge\n )\n else:\n write_pbs = _helpers.pbs_for_set_no_merge(\n reference._document_path, document_data\n )\n\n self._add_write_pbs(write_pbs)\n\n def update(self, reference, field_updates, option=None):\n \"\"\"Add a \"change\" to update a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.update` for\n more information on ``field_updates`` and ``option``.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference that will be deleted in this batch.\n field_updates (dict): Field names or paths to update and values\n to update with.\n option (Optional[~.firestore_v1beta1.client.WriteOption]): A\n write option to make assertions / preconditions on the server\n state of the document before applying changes.\n \"\"\"\n if option.__class__.__name__ == \"ExistsOption\":\n raise ValueError(\"you must not pass an explicit write option to \" \"update.\")\n write_pbs = _helpers.pbs_for_update(\n reference._document_path, field_updates, option\n )\n self._add_write_pbs(write_pbs)\n\n def delete(self, reference, option=None):\n \"\"\"Add a \"change\" to delete a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.delete` for\n more information on how ``option`` determines how the change is\n applied.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference that will be deleted in this batch.\n option (Optional[~.firestore_v1beta1.client.WriteOption]): A\n write option to make assertions / preconditions on the server\n state of the document before applying changes.\n \"\"\"\n write_pb = _helpers.pb_for_delete(reference._document_path, option)\n 
self._add_write_pbs([write_pb])\n\n def commit(self):\n \"\"\"Commit the changes accumulated in this batch.\n\n Returns:\n List[google.cloud.proto.firestore.v1beta1.\\\n write_pb2.WriteResult, ...]: The write results corresponding\n to the changes committed, returned in the same order as the\n changes were applied to this batch. A write result contains an\n ``update_time`` field.\n \"\"\"\n commit_response = self._client._firestore_api.commit(\n self._client._database_string,\n self._write_pbs,\n transaction=None,\n metadata=self._client._rpc_metadata,\n )\n\n self._write_pbs = []\n return list(commit_response.write_results)\n", "path": "firestore/google/cloud/firestore_v1beta1/batch.py"}]}
| 2,184 | 250 |
gh_patches_debug_16952
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-easyblocks-3223
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OpenMPI: Unknown UCC configure option
Trying to build OpenMPI 4.1.1 I get the error/warning that `--with-ucc` is not a known configure option.
It was added in https://github.com/easybuilders/easybuild-easyblocks/pull/2847
@SebastianAchilles Do you remember which version has this for sure, i.e. where you found that to be missing/required/supported?
We might need to add a version check there.
</issue>
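The usual pattern for this class of problem is to gate the configure flag on the first release whose `configure` script actually understands it; the accepted patch at the end of this record does exactly that, appending `UCC` to the known-dependency list only for Open MPI >= 4.1.4. A stripped-down sketch of the same check (the function name is illustrative; the real code mutates `known_dependencies` inside `configure_step`):
```
from easybuild.tools import LooseVersion  # same helper the easyblock already imports


def known_dependencies(openmpi_version):
    """Dependencies for which --with-<dep> is passed to OMPI's configure (sketch)."""
    deps = ['CUDA', 'hwloc', 'libevent', 'libfabric', 'PMIx', 'UCX']
    # per the accepted patch, --with-ucc is only recognised from Open MPI 4.1.4 onwards
    if LooseVersion(openmpi_version) >= LooseVersion('4.1.4'):
        deps.append('UCC')
    return deps
```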
<code>
[start of easybuild/easyblocks/o/openmpi.py]
1 ##
2 # Copyright 2019-2023 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for OpenMPI, implemented as an easyblock
27
28 @author: Kenneth Hoste (Ghent University)
29 @author: Robert Mijakovic (LuxProvide)
30 """
31 import os
32 import re
33 from easybuild.tools import LooseVersion
34
35 import easybuild.tools.toolchain as toolchain
36 from easybuild.easyblocks.generic.configuremake import ConfigureMake
37 from easybuild.framework.easyconfig.constants import EASYCONFIG_CONSTANTS
38 from easybuild.tools.build_log import EasyBuildError
39 from easybuild.tools.config import build_option
40 from easybuild.tools.modules import get_software_root
41 from easybuild.tools.systemtools import check_os_dependency, get_shared_lib_ext
42 from easybuild.tools.toolchain.mpi import get_mpi_cmd_template
43
44
45 class EB_OpenMPI(ConfigureMake):
46 """OpenMPI easyblock."""
47
48 def configure_step(self):
49 """Custom configuration step for OpenMPI."""
50
51 def config_opt_used(key, enable_opt=False):
52 """Helper function to check whether a configure option is already specified in 'configopts'."""
53 if enable_opt:
54 regex = '--(disable|enable)-%s' % key
55 else:
56 regex = '--(with|without)-%s' % key
57
58 return bool(re.search(regex, self.cfg['configopts']))
59
60 config_opt_names = [
61 # suppress failure modes in relation to mpirun path
62 'mpirun-prefix-by-default',
63 # build shared libraries
64 'shared',
65 ]
66
67 for key in config_opt_names:
68 if not config_opt_used(key, enable_opt=True):
69 self.cfg.update('configopts', '--enable-%s' % key)
70
71 # List of EasyBuild dependencies for which OMPI has known options
72 known_dependencies = ('CUDA', 'hwloc', 'libevent', 'libfabric', 'PMIx', 'UCX', 'UCC')
73 # Value to use for `--with-<dep>=<value>` if the dependency is not specified in the easyconfig
74 # No entry is interpreted as no option added at all
75 # This is to make builds reproducible even when the system libraries are changed and avoids failures
76 # due to e.g. finding only PMIx but not libevent on the system
77 unused_dep_value = dict()
78 # Known options since version 3.0 (no earlier ones checked)
79 if LooseVersion(self.version) >= LooseVersion('3.0'):
80 # Default to disable the option with "no"
81 unused_dep_value = {dep: 'no' for dep in known_dependencies}
82 # For these the default is to use an internal copy and not using any is not supported
83 for dep in ('hwloc', 'libevent', 'PMIx'):
84 unused_dep_value[dep] = 'internal'
85
86 # handle dependencies
87 for dep in known_dependencies:
88 opt_name = dep.lower()
89 # If the option is already used, don't add it
90 if config_opt_used(opt_name):
91 continue
92
93 # libfabric option renamed in OpenMPI 3.1.0 to ofi
94 if dep == 'libfabric' and LooseVersion(self.version) >= LooseVersion('3.1'):
95 opt_name = 'ofi'
96 # Check new option name. They are synonyms since 3.1.0 for backward compatibility
97 if config_opt_used(opt_name):
98 continue
99
100 dep_root = get_software_root(dep)
101 # If the dependency is loaded, specify its path, else use the "unused" value, if any
102 if dep_root:
103 opt_value = dep_root
104 else:
105 opt_value = unused_dep_value.get(dep)
106 if opt_value is not None:
107 self.cfg.update('configopts', '--with-%s=%s' % (opt_name, opt_value))
108
109 if bool(get_software_root('PMIx')) != bool(get_software_root('libevent')):
110 raise EasyBuildError('You must either use both PMIx and libevent as dependencies or none of them. '
111 'This is to enforce the same libevent is used for OpenMPI as for PMIx or '
112 'the behavior may be unpredictable.')
113
114 # check whether VERBS support should be enabled
115 if not config_opt_used('verbs'):
116
117 # for OpenMPI v4.x, the openib BTL should be disabled when UCX is used;
118 # this is required to avoid "error initializing an OpenFabrics device" warnings,
119 # see also https://www.open-mpi.org/faq/?category=all#ofa-device-error
120 is_ucx_enabled = ('--with-ucx' in self.cfg['configopts'] and
121 '--with-ucx=no' not in self.cfg['configopts'])
122 if LooseVersion(self.version) >= LooseVersion('4.0.0') and is_ucx_enabled:
123 verbs = False
124 else:
125 # auto-detect based on available OS packages
126 os_packages = EASYCONFIG_CONSTANTS['OS_PKG_IBVERBS_DEV'][0]
127 verbs = any(check_os_dependency(osdep) for osdep in os_packages)
128 # for OpenMPI v5.x, the verbs support is removed, only UCX is available
129 # see https://github.com/open-mpi/ompi/pull/6270
130 if LooseVersion(self.version) <= LooseVersion('5.0.0'):
131 if verbs:
132 self.cfg.update('configopts', '--with-verbs')
133 else:
134 self.cfg.update('configopts', '--without-verbs')
135
136 super(EB_OpenMPI, self).configure_step()
137
138 def test_step(self):
139 """Test step for OpenMPI"""
140 # Default to `make check` if nothing is set. Disable with "runtest = False" in the EC
141 if self.cfg['runtest'] is None:
142 self.cfg['runtest'] = 'check'
143
144 super(EB_OpenMPI, self).test_step()
145
146 def load_module(self, *args, **kwargs):
147 """
148 Load (temporary) module file, after resetting to initial environment.
149
150 Also put RPATH wrappers back in place if needed, to ensure that sanity check commands work as expected.
151 """
152 super(EB_OpenMPI, self).load_module(*args, **kwargs)
153
154 # ensure RPATH wrappers are in place, otherwise compiling minimal test programs will fail
155 if build_option('rpath'):
156 if self.toolchain.options.get('rpath', True):
157 self.toolchain.prepare_rpath_wrappers(rpath_filter_dirs=self.rpath_filter_dirs,
158 rpath_include_dirs=self.rpath_include_dirs)
159
160 def sanity_check_step(self):
161 """Custom sanity check for OpenMPI."""
162
163 bin_names = ['mpicc', 'mpicxx', 'mpif90', 'mpifort', 'mpirun', 'ompi_info', 'opal_wrapper']
164 if LooseVersion(self.version) >= LooseVersion('5.0.0'):
165 bin_names.append('prterun')
166 else:
167 bin_names.append('orterun')
168 bin_files = [os.path.join('bin', x) for x in bin_names]
169
170 shlib_ext = get_shared_lib_ext()
171 lib_names = ['mpi_mpifh', 'mpi', 'open-pal']
172 if LooseVersion(self.version) >= LooseVersion('5.0.0'):
173 lib_names.append('prrte')
174 else:
175 lib_names.extend(['ompitrace', 'open-rte'])
176 lib_files = [os.path.join('lib', 'lib%s.%s' % (x, shlib_ext)) for x in lib_names]
177
178 inc_names = ['mpi-ext', 'mpif-config', 'mpif', 'mpi', 'mpi_portable_platform']
179 if LooseVersion(self.version) >= LooseVersion('5.0.0'):
180 inc_names.append('prte')
181 inc_files = [os.path.join('include', x + '.h') for x in inc_names]
182
183 custom_paths = {
184 'files': bin_files + inc_files + lib_files,
185 'dirs': [],
186 }
187
188 # make sure MPI compiler wrappers pick up correct compilers
189 expected = {
190 'mpicc': os.getenv('CC', 'gcc'),
191 'mpicxx': os.getenv('CXX', 'g++'),
192 'mpifort': os.getenv('FC', 'gfortran'),
193 'mpif90': os.getenv('F90', 'gfortran'),
194 }
195 # actual pattern for gfortran is "GNU Fortran"
196 for key in ['mpifort', 'mpif90']:
197 if expected[key] == 'gfortran':
198 expected[key] = "GNU Fortran"
199 # for PGI, correct pattern is "pgfortran" with mpif90
200 if expected['mpif90'] == 'pgf90':
201 expected['mpif90'] = 'pgfortran'
202 # for Clang the pattern is always clang
203 for key in ['mpicxx', 'mpifort', 'mpif90']:
204 if expected[key] in ['clang++', 'flang']:
205 expected[key] = 'clang'
206
207 custom_commands = ["%s --version | grep '%s'" % (key, expected[key]) for key in sorted(expected.keys())]
208
209 # Add minimal test program to sanity checks
210 # Run with correct MPI launcher
211 mpi_cmd_tmpl, params = get_mpi_cmd_template(toolchain.OPENMPI, dict(), mpi_version=self.version)
212 # Limit number of ranks to 8 to avoid it failing due to hyperthreading
213 ranks = min(8, self.cfg['parallel'])
214 for srcdir, src, compiler in (
215 ('examples', 'hello_c.c', 'mpicc'),
216 ('examples', 'hello_mpifh.f', 'mpifort'),
217 ('examples', 'hello_usempi.f90', 'mpif90'),
218 ('examples', 'ring_c.c', 'mpicc'),
219 ('examples', 'ring_mpifh.f', 'mpifort'),
220 ('examples', 'ring_usempi.f90', 'mpif90'),
221 ('test/simple', 'thread_init.c', 'mpicc'),
222 ('test/simple', 'intercomm1.c', 'mpicc'),
223 ('test/simple', 'mpi_barrier.c', 'mpicc'),
224 ):
225 src_path = os.path.join(self.cfg['start_dir'], srcdir, src)
226 if os.path.exists(src_path):
227 test_exe = os.path.join(self.builddir, 'mpi_test_' + os.path.splitext(src)[0])
228 self.log.info("Adding minimal MPI test program to sanity checks: %s", test_exe)
229
230 # Build test binary
231 custom_commands.append("%s %s -o %s" % (compiler, src_path, test_exe))
232
233 # Run the test if chosen
234 if build_option('mpi_tests'):
235 params.update({'nr_ranks': ranks, 'cmd': test_exe})
236 # Allow oversubscription for this test (in case of hyperthreading)
237 custom_commands.append("OMPI_MCA_rmaps_base_oversubscribe=1 " + mpi_cmd_tmpl % params)
238 # Run with 1 process which may trigger other bugs
239 # See https://github.com/easybuilders/easybuild-easyconfigs/issues/12978
240 params['nr_ranks'] = 1
241 custom_commands.append(mpi_cmd_tmpl % params)
242
243 super(EB_OpenMPI, self).sanity_check_step(custom_paths=custom_paths, custom_commands=custom_commands)
244
[end of easybuild/easyblocks/o/openmpi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/easybuild/easyblocks/o/openmpi.py b/easybuild/easyblocks/o/openmpi.py
--- a/easybuild/easyblocks/o/openmpi.py
+++ b/easybuild/easyblocks/o/openmpi.py
@@ -69,7 +69,10 @@
self.cfg.update('configopts', '--enable-%s' % key)
# List of EasyBuild dependencies for which OMPI has known options
- known_dependencies = ('CUDA', 'hwloc', 'libevent', 'libfabric', 'PMIx', 'UCX', 'UCC')
+ known_dependencies = ['CUDA', 'hwloc', 'libevent', 'libfabric', 'PMIx', 'UCX']
+ if LooseVersion(self.version) >= '4.1.4':
+ known_dependencies.append('UCC')
+
# Value to use for `--with-<dep>=<value>` if the dependency is not specified in the easyconfig
# No entry is interpreted as no option added at all
# This is to make builds reproducible even when the system libraries are changed and avoids failures
|
{"golden_diff": "diff --git a/easybuild/easyblocks/o/openmpi.py b/easybuild/easyblocks/o/openmpi.py\n--- a/easybuild/easyblocks/o/openmpi.py\n+++ b/easybuild/easyblocks/o/openmpi.py\n@@ -69,7 +69,10 @@\n self.cfg.update('configopts', '--enable-%s' % key)\n \n # List of EasyBuild dependencies for which OMPI has known options\n- known_dependencies = ('CUDA', 'hwloc', 'libevent', 'libfabric', 'PMIx', 'UCX', 'UCC')\n+ known_dependencies = ['CUDA', 'hwloc', 'libevent', 'libfabric', 'PMIx', 'UCX']\n+ if LooseVersion(self.version) >= '4.1.4':\n+ known_dependencies.append('UCC')\n+\n # Value to use for `--with-<dep>=<value>` if the dependency is not specified in the easyconfig\n # No entry is interpreted as no option added at all\n # This is to make builds reproducible even when the system libraries are changed and avoids failures\n", "issue": "OpenMPI: Unknown UCC configure option\nTrying to build OpenMPI 4.1.1 I get the error/warning that `--with-ucc` is not a known configure option.\r\n\r\nIt was added in https://github.com/easybuilders/easybuild-easyblocks/pull/2847 \r\n\r\n@SebastianAchilles Do you remember which version has this for sure, i.e. where you found that to be missing/required/supported?\r\n\r\nWe might need to add a version check there.\n", "before_files": [{"content": "##\n# Copyright 2019-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for OpenMPI, implemented as an easyblock\n\n@author: Kenneth Hoste (Ghent University)\n@author: Robert Mijakovic (LuxProvide)\n\"\"\"\nimport os\nimport re\nfrom easybuild.tools import LooseVersion\n\nimport easybuild.tools.toolchain as toolchain\nfrom easybuild.easyblocks.generic.configuremake import ConfigureMake\nfrom easybuild.framework.easyconfig.constants import EASYCONFIG_CONSTANTS\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.config import build_option\nfrom easybuild.tools.modules import get_software_root\nfrom easybuild.tools.systemtools import check_os_dependency, get_shared_lib_ext\nfrom easybuild.tools.toolchain.mpi import get_mpi_cmd_template\n\n\nclass EB_OpenMPI(ConfigureMake):\n \"\"\"OpenMPI easyblock.\"\"\"\n\n def configure_step(self):\n \"\"\"Custom configuration step for OpenMPI.\"\"\"\n\n def config_opt_used(key, enable_opt=False):\n \"\"\"Helper function to check whether a configure option is already specified in 'configopts'.\"\"\"\n if enable_opt:\n regex = '--(disable|enable)-%s' % key\n else:\n regex = '--(with|without)-%s' % key\n\n return bool(re.search(regex, self.cfg['configopts']))\n\n config_opt_names = [\n # suppress failure modes in relation to mpirun path\n 'mpirun-prefix-by-default',\n # build shared libraries\n 'shared',\n ]\n\n for key in config_opt_names:\n if not config_opt_used(key, enable_opt=True):\n self.cfg.update('configopts', '--enable-%s' % key)\n\n # List of EasyBuild dependencies for which OMPI has known options\n known_dependencies = ('CUDA', 'hwloc', 'libevent', 'libfabric', 'PMIx', 'UCX', 'UCC')\n # Value to use for `--with-<dep>=<value>` if the dependency is not specified in the easyconfig\n # No entry is interpreted as no option added at all\n # This is to make builds reproducible even when the system libraries are changed and avoids failures\n # due to e.g. finding only PMIx but not libevent on the system\n unused_dep_value = dict()\n # Known options since version 3.0 (no earlier ones checked)\n if LooseVersion(self.version) >= LooseVersion('3.0'):\n # Default to disable the option with \"no\"\n unused_dep_value = {dep: 'no' for dep in known_dependencies}\n # For these the default is to use an internal copy and not using any is not supported\n for dep in ('hwloc', 'libevent', 'PMIx'):\n unused_dep_value[dep] = 'internal'\n\n # handle dependencies\n for dep in known_dependencies:\n opt_name = dep.lower()\n # If the option is already used, don't add it\n if config_opt_used(opt_name):\n continue\n\n # libfabric option renamed in OpenMPI 3.1.0 to ofi\n if dep == 'libfabric' and LooseVersion(self.version) >= LooseVersion('3.1'):\n opt_name = 'ofi'\n # Check new option name. They are synonyms since 3.1.0 for backward compatibility\n if config_opt_used(opt_name):\n continue\n\n dep_root = get_software_root(dep)\n # If the dependency is loaded, specify its path, else use the \"unused\" value, if any\n if dep_root:\n opt_value = dep_root\n else:\n opt_value = unused_dep_value.get(dep)\n if opt_value is not None:\n self.cfg.update('configopts', '--with-%s=%s' % (opt_name, opt_value))\n\n if bool(get_software_root('PMIx')) != bool(get_software_root('libevent')):\n raise EasyBuildError('You must either use both PMIx and libevent as dependencies or none of them. 
'\n 'This is to enforce the same libevent is used for OpenMPI as for PMIx or '\n 'the behavior may be unpredictable.')\n\n # check whether VERBS support should be enabled\n if not config_opt_used('verbs'):\n\n # for OpenMPI v4.x, the openib BTL should be disabled when UCX is used;\n # this is required to avoid \"error initializing an OpenFabrics device\" warnings,\n # see also https://www.open-mpi.org/faq/?category=all#ofa-device-error\n is_ucx_enabled = ('--with-ucx' in self.cfg['configopts'] and\n '--with-ucx=no' not in self.cfg['configopts'])\n if LooseVersion(self.version) >= LooseVersion('4.0.0') and is_ucx_enabled:\n verbs = False\n else:\n # auto-detect based on available OS packages\n os_packages = EASYCONFIG_CONSTANTS['OS_PKG_IBVERBS_DEV'][0]\n verbs = any(check_os_dependency(osdep) for osdep in os_packages)\n # for OpenMPI v5.x, the verbs support is removed, only UCX is available\n # see https://github.com/open-mpi/ompi/pull/6270\n if LooseVersion(self.version) <= LooseVersion('5.0.0'):\n if verbs:\n self.cfg.update('configopts', '--with-verbs')\n else:\n self.cfg.update('configopts', '--without-verbs')\n\n super(EB_OpenMPI, self).configure_step()\n\n def test_step(self):\n \"\"\"Test step for OpenMPI\"\"\"\n # Default to `make check` if nothing is set. Disable with \"runtest = False\" in the EC\n if self.cfg['runtest'] is None:\n self.cfg['runtest'] = 'check'\n\n super(EB_OpenMPI, self).test_step()\n\n def load_module(self, *args, **kwargs):\n \"\"\"\n Load (temporary) module file, after resetting to initial environment.\n\n Also put RPATH wrappers back in place if needed, to ensure that sanity check commands work as expected.\n \"\"\"\n super(EB_OpenMPI, self).load_module(*args, **kwargs)\n\n # ensure RPATH wrappers are in place, otherwise compiling minimal test programs will fail\n if build_option('rpath'):\n if self.toolchain.options.get('rpath', True):\n self.toolchain.prepare_rpath_wrappers(rpath_filter_dirs=self.rpath_filter_dirs,\n rpath_include_dirs=self.rpath_include_dirs)\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for OpenMPI.\"\"\"\n\n bin_names = ['mpicc', 'mpicxx', 'mpif90', 'mpifort', 'mpirun', 'ompi_info', 'opal_wrapper']\n if LooseVersion(self.version) >= LooseVersion('5.0.0'):\n bin_names.append('prterun')\n else:\n bin_names.append('orterun')\n bin_files = [os.path.join('bin', x) for x in bin_names]\n\n shlib_ext = get_shared_lib_ext()\n lib_names = ['mpi_mpifh', 'mpi', 'open-pal']\n if LooseVersion(self.version) >= LooseVersion('5.0.0'):\n lib_names.append('prrte')\n else:\n lib_names.extend(['ompitrace', 'open-rte'])\n lib_files = [os.path.join('lib', 'lib%s.%s' % (x, shlib_ext)) for x in lib_names]\n\n inc_names = ['mpi-ext', 'mpif-config', 'mpif', 'mpi', 'mpi_portable_platform']\n if LooseVersion(self.version) >= LooseVersion('5.0.0'):\n inc_names.append('prte')\n inc_files = [os.path.join('include', x + '.h') for x in inc_names]\n\n custom_paths = {\n 'files': bin_files + inc_files + lib_files,\n 'dirs': [],\n }\n\n # make sure MPI compiler wrappers pick up correct compilers\n expected = {\n 'mpicc': os.getenv('CC', 'gcc'),\n 'mpicxx': os.getenv('CXX', 'g++'),\n 'mpifort': os.getenv('FC', 'gfortran'),\n 'mpif90': os.getenv('F90', 'gfortran'),\n }\n # actual pattern for gfortran is \"GNU Fortran\"\n for key in ['mpifort', 'mpif90']:\n if expected[key] == 'gfortran':\n expected[key] = \"GNU Fortran\"\n # for PGI, correct pattern is \"pgfortran\" with mpif90\n if expected['mpif90'] == 'pgf90':\n expected['mpif90'] = 'pgfortran'\n # for Clang 
the pattern is always clang\n for key in ['mpicxx', 'mpifort', 'mpif90']:\n if expected[key] in ['clang++', 'flang']:\n expected[key] = 'clang'\n\n custom_commands = [\"%s --version | grep '%s'\" % (key, expected[key]) for key in sorted(expected.keys())]\n\n # Add minimal test program to sanity checks\n # Run with correct MPI launcher\n mpi_cmd_tmpl, params = get_mpi_cmd_template(toolchain.OPENMPI, dict(), mpi_version=self.version)\n # Limit number of ranks to 8 to avoid it failing due to hyperthreading\n ranks = min(8, self.cfg['parallel'])\n for srcdir, src, compiler in (\n ('examples', 'hello_c.c', 'mpicc'),\n ('examples', 'hello_mpifh.f', 'mpifort'),\n ('examples', 'hello_usempi.f90', 'mpif90'),\n ('examples', 'ring_c.c', 'mpicc'),\n ('examples', 'ring_mpifh.f', 'mpifort'),\n ('examples', 'ring_usempi.f90', 'mpif90'),\n ('test/simple', 'thread_init.c', 'mpicc'),\n ('test/simple', 'intercomm1.c', 'mpicc'),\n ('test/simple', 'mpi_barrier.c', 'mpicc'),\n ):\n src_path = os.path.join(self.cfg['start_dir'], srcdir, src)\n if os.path.exists(src_path):\n test_exe = os.path.join(self.builddir, 'mpi_test_' + os.path.splitext(src)[0])\n self.log.info(\"Adding minimal MPI test program to sanity checks: %s\", test_exe)\n\n # Build test binary\n custom_commands.append(\"%s %s -o %s\" % (compiler, src_path, test_exe))\n\n # Run the test if chosen\n if build_option('mpi_tests'):\n params.update({'nr_ranks': ranks, 'cmd': test_exe})\n # Allow oversubscription for this test (in case of hyperthreading)\n custom_commands.append(\"OMPI_MCA_rmaps_base_oversubscribe=1 \" + mpi_cmd_tmpl % params)\n # Run with 1 process which may trigger other bugs\n # See https://github.com/easybuilders/easybuild-easyconfigs/issues/12978\n params['nr_ranks'] = 1\n custom_commands.append(mpi_cmd_tmpl % params)\n\n super(EB_OpenMPI, self).sanity_check_step(custom_paths=custom_paths, custom_commands=custom_commands)\n", "path": "easybuild/easyblocks/o/openmpi.py"}]}
| 4,008 | 243 |
gh_patches_debug_16929
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-306
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix content app not showing file size for 0 byte files
fixes: #5100
</issue>
<code>
[start of setup.py]
1 from setuptools import find_packages, setup
2
3 with open('README.md') as f:
4 long_description = f.read()
5
6 requirements = [
7 'coreapi~=2.3.3',
8 'Django~=2.2.3', # LTS version, switch only if we have a compelling reason to
9 'django-filter~=2.2.0',
10 'djangorestframework~=3.10.2',
11 'djangorestframework-queryfields~=1.0.0',
12 'drf-nested-routers~=0.91.0',
13 'drf-yasg~=1.16.1',
14 'gunicorn~=19.9.0',
15 'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412
16 'PyYAML~=5.1.1',
17 'rq~=1.1.0',
18 'redis~=3.1.0',
19 'setuptools>=41.0.1,<41.3.0',
20 'dynaconf~=2.1.0',
21 'whitenoise~=4.1.3',
22 ]
23
24 setup(
25 name='pulpcore',
26 version='3.0.0rc6.dev',
27 description='Pulp Django Application and Related Modules',
28 long_description=long_description,
29 long_description_content_type="text/markdown",
30 license='GPLv2+',
31 packages=find_packages(exclude=['test']),
32 author='Pulp Team',
33 author_email='[email protected]',
34 url='http://www.pulpproject.org',
35 python_requires='>=3.6',
36 install_requires=requirements,
37 extras_require={
38 'postgres': ['psycopg2-binary'],
39 'mysql': ['mysqlclient']
40 },
41 include_package_data=True,
42 classifiers=(
43 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
44 'Operating System :: POSIX :: Linux',
45 'Development Status :: 4 - Beta',
46 'Framework :: Django',
47 'Programming Language :: Python',
48 'Programming Language :: Python :: 3',
49 'Programming Language :: Python :: 3.6',
50 'Programming Language :: Python :: 3.7',
51 ),
52 scripts=['bin/pulp-content'],
53 )
54
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -13,6 +13,7 @@
'drf-yasg~=1.16.1',
'gunicorn~=19.9.0',
'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412
+ 'psycopg2-binary',
'PyYAML~=5.1.1',
'rq~=1.1.0',
'redis~=3.1.0',
@@ -34,10 +35,6 @@
url='http://www.pulpproject.org',
python_requires='>=3.6',
install_requires=requirements,
- extras_require={
- 'postgres': ['psycopg2-binary'],
- 'mysql': ['mysqlclient']
- },
include_package_data=True,
classifiers=(
'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -13,6 +13,7 @@\n 'drf-yasg~=1.16.1',\n 'gunicorn~=19.9.0',\n 'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412\n+ 'psycopg2-binary',\n 'PyYAML~=5.1.1',\n 'rq~=1.1.0',\n 'redis~=3.1.0',\n@@ -34,10 +35,6 @@\n url='http://www.pulpproject.org',\n python_requires='>=3.6',\n install_requires=requirements,\n- extras_require={\n- 'postgres': ['psycopg2-binary'],\n- 'mysql': ['mysqlclient']\n- },\n include_package_data=True,\n classifiers=(\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n", "issue": "Fix content app not showing file size for 0 byte files\nfixes: #5100\n", "before_files": [{"content": "from setuptools import find_packages, setup\n\nwith open('README.md') as f:\n long_description = f.read()\n\nrequirements = [\n 'coreapi~=2.3.3',\n 'Django~=2.2.3', # LTS version, switch only if we have a compelling reason to\n 'django-filter~=2.2.0',\n 'djangorestframework~=3.10.2',\n 'djangorestframework-queryfields~=1.0.0',\n 'drf-nested-routers~=0.91.0',\n 'drf-yasg~=1.16.1',\n 'gunicorn~=19.9.0',\n 'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412\n 'PyYAML~=5.1.1',\n 'rq~=1.1.0',\n 'redis~=3.1.0',\n 'setuptools>=41.0.1,<41.3.0',\n 'dynaconf~=2.1.0',\n 'whitenoise~=4.1.3',\n]\n\nsetup(\n name='pulpcore',\n version='3.0.0rc6.dev',\n description='Pulp Django Application and Related Modules',\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license='GPLv2+',\n packages=find_packages(exclude=['test']),\n author='Pulp Team',\n author_email='[email protected]',\n url='http://www.pulpproject.org',\n python_requires='>=3.6',\n install_requires=requirements,\n extras_require={\n 'postgres': ['psycopg2-binary'],\n 'mysql': ['mysqlclient']\n },\n include_package_data=True,\n classifiers=(\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Operating System :: POSIX :: Linux',\n 'Development Status :: 4 - Beta',\n 'Framework :: Django',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ),\n scripts=['bin/pulp-content'],\n)\n", "path": "setup.py"}]}
| 1,164 | 241 |
gh_patches_debug_23526
|
rasdani/github-patches
|
git_diff
|
OpenMined__PySyft-3589
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sy.grid.register() should print useful information
**Is your feature request related to a problem? Please describe.**
When registering a node on OpenGrid, we want to convey some information to the user using sys.stdout.write()
A few things we thought to add.
- Information: connecting to opengrid...etc.
- Information: Can I connect to the main grid node... graceful error message if you can't.
- Disclaimer: OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.
- Where to get Help:
- Join our slack (slack.openmined.org) and ask for help in the #lib_syft channel.
- File a Github Issue: https://github.com/OpenMined/PySyft and add the string "#opengrid" in the issue title.
</issue>
<code>
[start of syft/grid/__init__.py]
1 from .network import Network
2 import uuid
3
4 DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
5
6
7 def register(**kwargs):
8 """ Add this process as a new peer registering it in the grid network.
9
10 Returns:
11 peer: Peer Network instance.
12 """
13 if not kwargs:
14 args = args = {"max_size": None, "timeout": 444, "url": DEFAULT_NETWORK_URL}
15 else:
16 args = kwargs
17
18 peer_id = str(uuid.uuid4())
19 peer = Network(peer_id, **args)
20 peer.start()
21
22 return peer
23
[end of syft/grid/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py
--- a/syft/grid/__init__.py
+++ b/syft/grid/__init__.py
@@ -1,4 +1,5 @@
from .network import Network
+import sys
import uuid
DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
@@ -16,7 +17,32 @@
args = kwargs
peer_id = str(uuid.uuid4())
+ sys.stdout.write(
+ "Connecting to OpenGrid (" + "\033[94m" + DEFAULT_NETWORK_URL + "\033[0m" + ") ... "
+ )
peer = Network(peer_id, **args)
+
+ sys.stdout.write("\033[92m" + "OK" + "\033[0m" + "\n")
+ sys.stdout.write("Peer ID: " + peer_id + "\n")
+
+ sys.stdout.write(
+ "\033[93m" + "DISCLAIMER" + "\033[0m"
+ ":"
+ + "\033[1m"
+ + " OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\n"
+ + "\033[0m"
+ )
+
+ sys.stdout.write("Where to get help: \n")
+ sys.stdout.write(
+ " - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\n"
+ )
+ sys.stdout.write(
+ " - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\n"
+ )
+ sys.stdout.write(
+ " - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\n"
+ )
peer.start()
return peer
|
{"golden_diff": "diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py\n--- a/syft/grid/__init__.py\n+++ b/syft/grid/__init__.py\n@@ -1,4 +1,5 @@\n from .network import Network\n+import sys\n import uuid\n \n DEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n@@ -16,7 +17,32 @@\n args = kwargs\n \n peer_id = str(uuid.uuid4())\n+ sys.stdout.write(\n+ \"Connecting to OpenGrid (\" + \"\\033[94m\" + DEFAULT_NETWORK_URL + \"\\033[0m\" + \") ... \"\n+ )\n peer = Network(peer_id, **args)\n+\n+ sys.stdout.write(\"\\033[92m\" + \"OK\" + \"\\033[0m\" + \"\\n\")\n+ sys.stdout.write(\"Peer ID: \" + peer_id + \"\\n\")\n+\n+ sys.stdout.write(\n+ \"\\033[93m\" + \"DISCLAIMER\" + \"\\033[0m\"\n+ \":\"\n+ + \"\\033[1m\"\n+ + \" OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\\n\"\n+ + \"\\033[0m\"\n+ )\n+\n+ sys.stdout.write(\"Where to get help: \\n\")\n+ sys.stdout.write(\n+ \" - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\\n\"\n+ )\n+ sys.stdout.write(\n+ \" - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\\n\"\n+ )\n+ sys.stdout.write(\n+ \" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\\n\"\n+ )\n peer.start()\n \n return peer\n", "issue": "sy.grid.register() should print useful information\n**Is your feature request related to a problem? Please describe.**\r\nWhen registering a node on OpenGrid, we want to convey some information to the user using sys.stdout.write()\r\n\r\nA few things we thought to add.\r\n\r\n- Information: connecting to opengrid...etc.\r\n - Information: Can I connect to the main grid node... graceful error message if you can't.\r\n- Disclaimer: OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\r\n- Where to get Help:\r\n - Join our slack (slack.openmined.org) and ask for help in the #lib_syft channel.\r\n - File a Github Issue: https://github.com/OpenMined/PySyft and add the string \"#opengrid\" in the issue title.\r\n \r\n\n", "before_files": [{"content": "from .network import Network\nimport uuid\n\nDEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n\n\ndef register(**kwargs):\n \"\"\" Add this process as a new peer registering it in the grid network.\n \n Returns:\n peer: Peer Network instance.\n \"\"\"\n if not kwargs:\n args = args = {\"max_size\": None, \"timeout\": 444, \"url\": DEFAULT_NETWORK_URL}\n else:\n args = kwargs\n\n peer_id = str(uuid.uuid4())\n peer = Network(peer_id, **args)\n peer.start()\n\n return peer\n", "path": "syft/grid/__init__.py"}]}
| 893 | 468 |
gh_patches_debug_13740
|
rasdani/github-patches
|
git_diff
|
TOMToolkit__tom_base-580
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
not having a LASAIR token in Settings.py breaks
Fail more gracefully if no LASAIR token in settings.py
</issue>
<code>
[start of tom_alerts/brokers/lasair.py]
1 import requests
2 from urllib.parse import urlencode
3
4 from crispy_forms.layout import HTML, Layout, Div, Fieldset, Row, Column
5 from django import forms
6 from django.conf import settings
7
8 from tom_alerts.alerts import GenericQueryForm, GenericAlert, GenericBroker
9 from tom_targets.models import Target
10
11 LASAIR_URL = 'https://lasair-ztf.lsst.ac.uk'
12
13
14 class LasairBrokerForm(GenericQueryForm):
15 cone_ra = forms.CharField(required=False, label='RA', help_text='Object RA (Decimal Degrees)',
16 widget=forms.TextInput(attrs={'placeholder': '1.2345'}))
17 cone_dec = forms.CharField(required=False, label='Dec', help_text='Object Dec (Decimal Degrees)',
18 widget=forms.TextInput(attrs={'placeholder': '1.2345'}))
19 cone_radius = forms.CharField(required=False, label='Radius', help_text='Search Radius (Arcsec)', initial='10',
20 widget=forms.TextInput(attrs={'placeholder': '10'}))
21 sqlquery = forms.CharField(required=False, label='SQL Query Conditions',
22 help_text='The "WHERE" criteria to restrict which objects are returned. '
23 '(i.e. gmag < 12.0)')
24
25 def __init__(self, *args, **kwargs):
26 super().__init__(*args, **kwargs)
27 self.helper.layout = Layout(
28 HTML('''
29 <p>
30 Please see the <a href="https://lasair-ztf.lsst.ac.uk/api">Lasair website</a> for more detailed
31 instructions on querying the broker.
32 '''),
33 self.common_layout,
34 Fieldset(
35 'Cone Search',
36 Row(
37 Column('cone_ra', css_class='form-group col-md-4 mb-0'),
38 Column('cone_dec', css_class='form-group col-md-4 mb-0'),
39 Column('cone_radius', css_class='form-group col-md-4 mb-0'),
40 css_class='form-row'
41 ),
42 HTML("""<br>
43 <h4>SQL Query Search</h4>
44 """),
45
46 Div('sqlquery')
47 )
48 )
49
50 def clean(self):
51 cleaned_data = super().clean()
52
53 # Ensure that either cone search or sqlquery are populated
54 if not ((cleaned_data['cone_ra'] and cleaned_data['cone_dec']) or cleaned_data['sqlquery']):
55 raise forms.ValidationError('Either RA/Dec or Freeform SQL Query must be populated.')
56
57 return cleaned_data
58
59
60 def get_lasair_object(obj):
61 """Parse lasair object table"""
62 objectid = obj['objectId']
63 jdmax = obj['candidates'][0]['mjd']
64 ra = obj['objectData']['ramean']
65 dec = obj['objectData']['decmean']
66 glon = obj['objectData']['glonmean']
67 glat = obj['objectData']['glatmean']
68 magpsf = obj['candidates'][0]['magpsf']
69 return {
70 'alert_id': objectid,
71 'timestamp': jdmax,
72 'ra': ra,
73 'dec': dec,
74 'galactic_lng': glon,
75 'galactic_lat': glat,
76 'mag': magpsf
77 }
78
79
80 class LasairBroker(GenericBroker):
81 """
82 The ``LasairBroker`` is the interface to the Lasair alert broker. For information regarding the query format for
83 Lasair, please see https://lasair-ztf.lsst.ac.uk/.
84
85 Requires a LASAIR_TOKEN in settings.py.
86 See https://lasair-ztf.lsst.ac.uk/api for details about how to acquire an authorization token.
87 """
88
89 name = 'Lasair'
90 form = LasairBrokerForm
91
92 def fetch_alerts(self, parameters):
93 token = settings.LASAIR_TOKEN
94 alerts = []
95 broker_feedback = ''
96 object_ids = ''
97
98 # Check for Cone Search
99 if 'cone_ra' in parameters and len(parameters['cone_ra'].strip()) > 0 and\
100 'cone_dec' in parameters and len(parameters['cone_dec'].strip()) > 0:
101
102 cone_query = {'ra': parameters['cone_ra'].strip(),
103 'dec': parameters['cone_dec'].strip(),
104 'radius': parameters['cone_radius'].strip(), # defaults to 10"
105 'requestType': 'all' # Return all objects within radius
106 }
107 parsed_cone_query = urlencode(cone_query)
108
109 # Query LASAIR Cone Search API
110 cone_response = requests.get(
111 LASAIR_URL + '/api/cone/?' + parsed_cone_query + f'&token={token}&format=json'
112 )
113 search_results = cone_response.json()
114 # Successful Search ~ [{'object': 'ZTF19abuaekk', 'separation': 205.06135003141878},...]
115 # Unsuccessful Search ~ {'error': 'No object found ...'}
116 try:
117 # Provide comma separated string of Object IDs matching search criteria
118 object_ids = ','.join([result['object'] for result in search_results])
119 except TypeError:
120 for key in search_results:
121 broker_feedback += f'{key}:{search_results[key]}'
122
123 # Check for SQL Condition Query
124 elif 'sqlquery' in parameters and len(parameters['sqlquery'].strip()) > 0:
125 sql_query = {'selected': 'objectId', # The only parameter we need returned is the objectId
126 'tables': 'objects', # The only table we need to search is the objects table
127 'conditions': parameters['sqlquery'].strip(),
128 'limit': '1000' # limit number of returned objects to 1000
129 }
130 parsed_sql_query = urlencode(sql_query)
131
132 # Query LASAIR SQLQuery API
133 query_response = requests.get(
134 LASAIR_URL + '/api/query/?' + parsed_sql_query + f'&token={token}&format=json'
135 )
136
137 search_results = query_response.json()
138 # Successful Search ~ [{'objectId': 'ZTF18aagzzzz'},...]
139 # Unsuccessful Search ~ []
140 try:
141 # Provide comma separated string of Object IDs matching search criteria
142 object_ids = ','.join([result['objectId'] for result in search_results])
143 except TypeError:
144 for key in search_results:
145 broker_feedback += f'{key}:{search_results[key]}'
146
147 # Supply feedback for empty results
148 if not object_ids and not broker_feedback:
149 broker_feedback += f"No objects found with conditions: {sql_query['conditions']}"
150 else:
151 return iter(alerts), broker_feedback
152
153 if object_ids:
154 # Query LASAIR object API
155 obj_response = requests.get(
156 LASAIR_URL + '/api/objects/' + f'?objectIds={object_ids}&token={token}&format=json'
157 )
158 obj_results = obj_response.json()
159 # Successful Search ~ [{'objectId': 'ZTF19abuaekk', 'objectData': {...}},...]
160
161 for obj in obj_results:
162 alerts.append(get_lasair_object(obj))
163 return iter(alerts), broker_feedback
164
165 def fetch_alert(self, alert_id):
166 url = LASAIR_URL + '/object/' + alert_id + '/json/'
167 response = requests.get(url)
168 response.raise_for_status()
169 parsed = response.json()
170 return parsed
171
172 def process_reduced_data(self, target, alert=None):
173 pass
174
175 def to_generic_alert(self, alert):
176 return GenericAlert(
177 url=LASAIR_URL + '/object/' + alert['alert_id'],
178 id=alert['alert_id'],
179 name=alert['alert_id'],
180 ra=alert['ra'],
181 dec=alert['dec'],
182 timestamp=alert['timestamp'],
183 mag=alert['mag'],
184 score=1, # dunno what this means ..?
185 )
186
187 def to_target(self, alert):
188 for c in alert['candidates']:
189 if 'candid' in c:
190 break
191 return Target.objects.create(
192 name=alert.get('objectId'),
193 type='SIDEREAL',
194 ra=alert['objectData']['ramean'],
195 dec=alert['objectData']['decmean'],
196 galactic_lng=alert['objectData']['glonmean'],
197 galactic_lat=alert['objectData']['glatmean'],
198 )
199
[end of tom_alerts/brokers/lasair.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tom_alerts/brokers/lasair.py b/tom_alerts/brokers/lasair.py
--- a/tom_alerts/brokers/lasair.py
+++ b/tom_alerts/brokers/lasair.py
@@ -90,10 +90,15 @@
form = LasairBrokerForm
def fetch_alerts(self, parameters):
- token = settings.LASAIR_TOKEN
alerts = []
broker_feedback = ''
object_ids = ''
+ try:
+ token = settings.LASAIR_TOKEN
+ except AttributeError:
+ broker_feedback += 'Requires a LASAIR_TOKEN in settings.py. See https://lasair-ztf.lsst.ac.uk/api' \
+ ' for details about how to acquire an authorization token.'
+ return iter(alerts), broker_feedback
# Check for Cone Search
if 'cone_ra' in parameters and len(parameters['cone_ra'].strip()) > 0 and\
|
{"golden_diff": "diff --git a/tom_alerts/brokers/lasair.py b/tom_alerts/brokers/lasair.py\n--- a/tom_alerts/brokers/lasair.py\n+++ b/tom_alerts/brokers/lasair.py\n@@ -90,10 +90,15 @@\n form = LasairBrokerForm\n \n def fetch_alerts(self, parameters):\n- token = settings.LASAIR_TOKEN\n alerts = []\n broker_feedback = ''\n object_ids = ''\n+ try:\n+ token = settings.LASAIR_TOKEN\n+ except AttributeError:\n+ broker_feedback += 'Requires a LASAIR_TOKEN in settings.py. See https://lasair-ztf.lsst.ac.uk/api' \\\n+ ' for details about how to acquire an authorization token.'\n+ return iter(alerts), broker_feedback\n \n # Check for Cone Search\n if 'cone_ra' in parameters and len(parameters['cone_ra'].strip()) > 0 and\\\n", "issue": "not having a LASAIR token in Settings.py breaks\nFail more gracefully if no LASAIR token in settings.py\n", "before_files": [{"content": "import requests\nfrom urllib.parse import urlencode\n\nfrom crispy_forms.layout import HTML, Layout, Div, Fieldset, Row, Column\nfrom django import forms\nfrom django.conf import settings\n\nfrom tom_alerts.alerts import GenericQueryForm, GenericAlert, GenericBroker\nfrom tom_targets.models import Target\n\nLASAIR_URL = 'https://lasair-ztf.lsst.ac.uk'\n\n\nclass LasairBrokerForm(GenericQueryForm):\n cone_ra = forms.CharField(required=False, label='RA', help_text='Object RA (Decimal Degrees)',\n widget=forms.TextInput(attrs={'placeholder': '1.2345'}))\n cone_dec = forms.CharField(required=False, label='Dec', help_text='Object Dec (Decimal Degrees)',\n widget=forms.TextInput(attrs={'placeholder': '1.2345'}))\n cone_radius = forms.CharField(required=False, label='Radius', help_text='Search Radius (Arcsec)', initial='10',\n widget=forms.TextInput(attrs={'placeholder': '10'}))\n sqlquery = forms.CharField(required=False, label='SQL Query Conditions',\n help_text='The \"WHERE\" criteria to restrict which objects are returned. '\n '(i.e. gmag < 12.0)')\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper.layout = Layout(\n HTML('''\n <p>\n Please see the <a href=\"https://lasair-ztf.lsst.ac.uk/api\">Lasair website</a> for more detailed\n instructions on querying the broker.\n '''),\n self.common_layout,\n Fieldset(\n 'Cone Search',\n Row(\n Column('cone_ra', css_class='form-group col-md-4 mb-0'),\n Column('cone_dec', css_class='form-group col-md-4 mb-0'),\n Column('cone_radius', css_class='form-group col-md-4 mb-0'),\n css_class='form-row'\n ),\n HTML(\"\"\"<br>\n <h4>SQL Query Search</h4>\n \"\"\"),\n\n Div('sqlquery')\n )\n )\n\n def clean(self):\n cleaned_data = super().clean()\n\n # Ensure that either cone search or sqlquery are populated\n if not ((cleaned_data['cone_ra'] and cleaned_data['cone_dec']) or cleaned_data['sqlquery']):\n raise forms.ValidationError('Either RA/Dec or Freeform SQL Query must be populated.')\n\n return cleaned_data\n\n\ndef get_lasair_object(obj):\n \"\"\"Parse lasair object table\"\"\"\n objectid = obj['objectId']\n jdmax = obj['candidates'][0]['mjd']\n ra = obj['objectData']['ramean']\n dec = obj['objectData']['decmean']\n glon = obj['objectData']['glonmean']\n glat = obj['objectData']['glatmean']\n magpsf = obj['candidates'][0]['magpsf']\n return {\n 'alert_id': objectid,\n 'timestamp': jdmax,\n 'ra': ra,\n 'dec': dec,\n 'galactic_lng': glon,\n 'galactic_lat': glat,\n 'mag': magpsf\n }\n\n\nclass LasairBroker(GenericBroker):\n \"\"\"\n The ``LasairBroker`` is the interface to the Lasair alert broker. 
For information regarding the query format for\n Lasair, please see https://lasair-ztf.lsst.ac.uk/.\n\n Requires a LASAIR_TOKEN in settings.py.\n See https://lasair-ztf.lsst.ac.uk/api for details about how to acquire an authorization token.\n \"\"\"\n\n name = 'Lasair'\n form = LasairBrokerForm\n\n def fetch_alerts(self, parameters):\n token = settings.LASAIR_TOKEN\n alerts = []\n broker_feedback = ''\n object_ids = ''\n\n # Check for Cone Search\n if 'cone_ra' in parameters and len(parameters['cone_ra'].strip()) > 0 and\\\n 'cone_dec' in parameters and len(parameters['cone_dec'].strip()) > 0:\n\n cone_query = {'ra': parameters['cone_ra'].strip(),\n 'dec': parameters['cone_dec'].strip(),\n 'radius': parameters['cone_radius'].strip(), # defaults to 10\"\n 'requestType': 'all' # Return all objects within radius\n }\n parsed_cone_query = urlencode(cone_query)\n\n # Query LASAIR Cone Search API\n cone_response = requests.get(\n LASAIR_URL + '/api/cone/?' + parsed_cone_query + f'&token={token}&format=json'\n )\n search_results = cone_response.json()\n # Successful Search ~ [{'object': 'ZTF19abuaekk', 'separation': 205.06135003141878},...]\n # Unsuccessful Search ~ {'error': 'No object found ...'}\n try:\n # Provide comma separated string of Object IDs matching search criteria\n object_ids = ','.join([result['object'] for result in search_results])\n except TypeError:\n for key in search_results:\n broker_feedback += f'{key}:{search_results[key]}'\n\n # Check for SQL Condition Query\n elif 'sqlquery' in parameters and len(parameters['sqlquery'].strip()) > 0:\n sql_query = {'selected': 'objectId', # The only parameter we need returned is the objectId\n 'tables': 'objects', # The only table we need to search is the objects table\n 'conditions': parameters['sqlquery'].strip(),\n 'limit': '1000' # limit number of returned objects to 1000\n }\n parsed_sql_query = urlencode(sql_query)\n\n # Query LASAIR SQLQuery API\n query_response = requests.get(\n LASAIR_URL + '/api/query/?' 
+ parsed_sql_query + f'&token={token}&format=json'\n )\n\n search_results = query_response.json()\n # Successful Search ~ [{'objectId': 'ZTF18aagzzzz'},...]\n # Unsuccessful Search ~ []\n try:\n # Provide comma separated string of Object IDs matching search criteria\n object_ids = ','.join([result['objectId'] for result in search_results])\n except TypeError:\n for key in search_results:\n broker_feedback += f'{key}:{search_results[key]}'\n\n # Supply feedback for empty results\n if not object_ids and not broker_feedback:\n broker_feedback += f\"No objects found with conditions: {sql_query['conditions']}\"\n else:\n return iter(alerts), broker_feedback\n\n if object_ids:\n # Query LASAIR object API\n obj_response = requests.get(\n LASAIR_URL + '/api/objects/' + f'?objectIds={object_ids}&token={token}&format=json'\n )\n obj_results = obj_response.json()\n # Successful Search ~ [{'objectId': 'ZTF19abuaekk', 'objectData': {...}},...]\n\n for obj in obj_results:\n alerts.append(get_lasair_object(obj))\n return iter(alerts), broker_feedback\n\n def fetch_alert(self, alert_id):\n url = LASAIR_URL + '/object/' + alert_id + '/json/'\n response = requests.get(url)\n response.raise_for_status()\n parsed = response.json()\n return parsed\n\n def process_reduced_data(self, target, alert=None):\n pass\n\n def to_generic_alert(self, alert):\n return GenericAlert(\n url=LASAIR_URL + '/object/' + alert['alert_id'],\n id=alert['alert_id'],\n name=alert['alert_id'],\n ra=alert['ra'],\n dec=alert['dec'],\n timestamp=alert['timestamp'],\n mag=alert['mag'],\n score=1, # dunno what this means ..?\n )\n\n def to_target(self, alert):\n for c in alert['candidates']:\n if 'candid' in c:\n break\n return Target.objects.create(\n name=alert.get('objectId'),\n type='SIDEREAL',\n ra=alert['objectData']['ramean'],\n dec=alert['objectData']['decmean'],\n galactic_lng=alert['objectData']['glonmean'],\n galactic_lat=alert['objectData']['glatmean'],\n )\n", "path": "tom_alerts/brokers/lasair.py"}]}
| 2,854 | 211 |
gh_patches_debug_23054
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-862
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update PyPI keywords and classifiers in setup.py
# Description
As JAX is now a supported backend, it should additionally be added to the [list of keywords in `setup.py`](https://github.com/scikit-hep/pyhf/blob/917bd5127c1da023b279c076bb41614fbb859487/setup.py#L85). Additionally, the [classifiers](https://packaging.python.org/guides/distributing-packages-using-setuptools/#classifiers) should be updated as well to include a `Development Status`, `License`, `Intended Audience`, and `Topic`.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 from pathlib import Path
3
4 this_directory = Path(__file__).parent.resolve()
5 with open(Path(this_directory).joinpath('README.rst'), encoding='utf-8') as readme_rst:
6 long_description = readme_rst.read()
7
8 extras_require = {
9 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],
10 'torch': ['torch~=1.2'],
11 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
12 'xmlio': ['uproot'],
13 'minuit': ['iminuit'],
14 }
15 extras_require['backends'] = sorted(
16 set(
17 extras_require['tensorflow']
18 + extras_require['torch']
19 + extras_require['jax']
20 + extras_require['minuit']
21 )
22 )
23 extras_require['contrib'] = sorted(set(['matplotlib']))
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + [
31 'pyflakes',
32 'pytest~=3.5',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'check-manifest',
43 'jupyter',
44 'uproot~=3.3',
45 'graphviz',
46 'jsonpatch',
47 'black',
48 ]
49 )
50 )
51 extras_require['docs'] = sorted(
52 set(
53 [
54 'sphinx',
55 'sphinxcontrib-bibtex',
56 'sphinx-click',
57 'sphinx_rtd_theme',
58 'nbsphinx',
59 'ipywidgets',
60 'sphinx-issues',
61 'sphinx-copybutton>0.2.9',
62 ]
63 )
64 )
65 extras_require['develop'] = sorted(
66 set(
67 extras_require['docs']
68 + extras_require['test']
69 + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'twine']
70 )
71 )
72 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
73
74
75 setup(
76 name='pyhf',
77 version='0.4.1',
78 description='(partial) pure python histfactory implementation',
79 long_description=long_description,
80 long_description_content_type='text/x-rst',
81 url='https://github.com/scikit-hep/pyhf',
82 author='Lukas Heinrich, Matthew Feickert, Giordon Stark',
83 author_email='[email protected], [email protected], [email protected]',
84 license='Apache',
85 keywords='physics fitting numpy scipy tensorflow pytorch',
86 classifiers=[
87 "Programming Language :: Python :: 3",
88 "Programming Language :: Python :: 3.6",
89 "Programming Language :: Python :: 3.7",
90 "Programming Language :: Python :: 3.8",
91 ],
92 package_dir={'': 'src'},
93 packages=find_packages(where='src'),
94 include_package_data=True,
95 python_requires=">=3.6",
96 install_requires=[
97 'scipy', # requires numpy, which is required by pyhf and tensorflow
98 'click>=6.0', # for console scripts,
99 'tqdm', # for readxml
100 'jsonschema>=3.2.0', # for utils
101 'jsonpatch',
102 'pyyaml', # for parsing CLI equal-delimited options
103 ],
104 extras_require=extras_require,
105 entry_points={'console_scripts': ['pyhf=pyhf.cli:cli']},
106 dependency_links=[],
107 use_scm_version=lambda: {'local_scheme': lambda version: ''},
108 )
109
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -79,11 +79,21 @@
long_description=long_description,
long_description_content_type='text/x-rst',
url='https://github.com/scikit-hep/pyhf',
+ project_urls={
+ "Documentation": "https://scikit-hep.org/pyhf/",
+ "Source": "https://github.com/scikit-hep/pyhf",
+ "Tracker": "https://github.com/scikit-hep/pyhf/issues",
+ },
author='Lukas Heinrich, Matthew Feickert, Giordon Stark',
author_email='[email protected], [email protected], [email protected]',
license='Apache',
- keywords='physics fitting numpy scipy tensorflow pytorch',
+ keywords='physics fitting numpy scipy tensorflow pytorch jax',
classifiers=[
+ "Development Status :: 4 - Beta",
+ "License :: OSI Approved :: Apache Software License",
+ "Intended Audience :: Science/Research",
+ "Topic :: Scientific/Engineering",
+ "Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -79,11 +79,21 @@\n long_description=long_description,\n long_description_content_type='text/x-rst',\n url='https://github.com/scikit-hep/pyhf',\n+ project_urls={\n+ \"Documentation\": \"https://scikit-hep.org/pyhf/\",\n+ \"Source\": \"https://github.com/scikit-hep/pyhf\",\n+ \"Tracker\": \"https://github.com/scikit-hep/pyhf/issues\",\n+ },\n author='Lukas Heinrich, Matthew Feickert, Giordon Stark',\n author_email='[email protected], [email protected], [email protected]',\n license='Apache',\n- keywords='physics fitting numpy scipy tensorflow pytorch',\n+ keywords='physics fitting numpy scipy tensorflow pytorch jax',\n classifiers=[\n+ \"Development Status :: 4 - Beta\",\n+ \"License :: OSI Approved :: Apache Software License\",\n+ \"Intended Audience :: Science/Research\",\n+ \"Topic :: Scientific/Engineering\",\n+ \"Topic :: Scientific/Engineering :: Physics\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n", "issue": "Update PyPI keywords and classifies in setup.py\n# Description\r\n\r\nAs JAX is now a supported backend then it should additionally be added to the [list of keywords in `setup.py`](https://github.com/scikit-hep/pyhf/blob/917bd5127c1da023b279c076bb41614fbb859487/setup.py#L85). Additionally, the [classifies](https://packaging.python.org/guides/distributing-packages-using-setuptools/#classifiers) should be updated as well to include a `Development Status`, `License`, `Intended Audience`, and `Topic`.\n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom pathlib import Path\n\nthis_directory = Path(__file__).parent.resolve()\nwith open(Path(this_directory).joinpath('README.rst'), encoding='utf-8') as readme_rst:\n long_description = readme_rst.read()\n\nextras_require = {\n 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pyflakes',\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'check-manifest',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n 'black',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n name='pyhf',\n version='0.4.1',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n url='https://github.com/scikit-hep/pyhf',\n author='Lukas Heinrich, Matthew 
Feickert, Giordon Stark',\n author_email='[email protected], [email protected], [email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch',\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n package_dir={'': 'src'},\n packages=find_packages(where='src'),\n include_package_data=True,\n python_requires=\">=3.6\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf and tensorflow\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'jsonschema>=3.2.0', # for utils\n 'jsonpatch',\n 'pyyaml', # for parsing CLI equal-delimited options\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.cli:cli']},\n dependency_links=[],\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 1,750 | 294 |
gh_patches_debug_2538
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-328
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fatal: Not a git repository: '/homes/vvikraman/anaconda3/lib/python3.6/site-packages/.git
Hi
When I try to run parsl I am getting the following issue:
fatal: Not a git repository: '/homes/vvikraman/anaconda3/lib/python3.6/site-packages/.git
Is it a real issue?
I am using python3 and jupyter but run parsl in a terminal.
Issue in parsl.log
I tried to run a simple script given in the parsl documentation
```
import parsl
from parsl import *
import time
workers = ThreadPoolExecutor(max_workers=4)
dfk = DataFlowKernel(executors=[workers])
print(1)
@App('python', dfk)
def hello ():
import time
time.sleep(5)
return 'Hello World!'
print(2)
app_future = hello()
print ('Done: %s' % app_future.done())
print ('Result: %s' % app_future.result())
print ('Done: %s' % app_future.done())
```
However, in the parsl.log shows this issue
2018-06-07 21:45:37 parsl.utils:24 [ERROR] Unable to determine code state
Traceback (most recent call last):
File "/homes/vvikraman/anaconda3/lib/python3.6/site-packages/parsl/utils.py", line 19, in get_version
head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')
File "/homes/vvikraman/anaconda3/lib/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/homes/vvikraman/anaconda3/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'rev-parse', '--short', 'HEAD']' returned non-zero exit status 128.
</issue>
<code>
[start of parsl/utils.py]
1 import logging
2 import os
3 import shlex
4 import subprocess
5 import threading
6 import time
7 from contextlib import contextmanager
8 from functools import wraps
9
10 import parsl
11 from parsl.version import VERSION
12
13 logger = logging.getLogger(__name__)
14
15
16 def get_version():
17 version = parsl.__version__
18 work_tree = os.path.dirname(os.path.dirname(__file__))
19 git_dir = os.path.join(work_tree, '.git')
20 env = {'GIT_WORK_TREE': work_tree, 'GIT_DIR': git_dir}
21 try:
22 cmd = shlex.split('git rev-parse --short HEAD')
23 head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')
24 diff = subprocess.check_output(shlex.split('git diff HEAD'), env=env)
25 status = 'dirty' if diff else 'clean'
26 version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)
27 except Exception as e:
28 logger.exception("Unable to determine code state")
29
30 return version
31
32
33 def get_all_checkpoints(rundir="runinfo"):
34 """Finds the checkpoints from all last runs.
35
36 Note that checkpoints are incremental, and this helper will not find
37 previous checkpoints from earlier than the most recent run. It probably
38 should be made to do so.
39
40 Kwargs:
41 - rundir(str) : Path to the runinfo directory
42
43 Returns:
44 - a list suitable for the checkpointFiles parameter of DataFlowKernel
45 constructor
46
47 """
48
49 if(not(os.path.isdir(rundir))):
50 return []
51
52 dirs = sorted(os.listdir(rundir))
53
54 checkpoints = []
55
56 for runid in dirs:
57
58 checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, runid))
59
60 if(os.path.isdir(checkpoint)):
61 checkpoints.append(checkpoint)
62
63 return checkpoints
64
65
66 def get_last_checkpoint(rundir="runinfo"):
67 """Finds the checkpoint from the last run, if one exists.
68
69 Note that checkpoints are incremental, and this helper will not find
70 previous checkpoints from earlier than the most recent run. It probably
71 should be made to do so.
72
73 Kwargs:
74 - rundir(str) : Path to the runinfo directory
75
76 Returns:
77 - a list suitable for checkpointFiles parameter of DataFlowKernel
78 constructor, with 0 or 1 elements
79
80 """
81
82 if(not(os.path.isdir(rundir))):
83 return []
84
85 dirs = sorted(os.listdir(rundir))
86
87 if(len(dirs) == 0):
88 return []
89
90 last_runid = dirs[-1]
91 last_checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, last_runid))
92
93 if(not(os.path.isdir(last_checkpoint))):
94 return []
95
96 return [last_checkpoint]
97
98
99 def timeout(seconds=None):
100 def decorator(func, *args, **kwargs):
101 @wraps(func)
102 def wrapper(*args, **kwargs):
103 t = threading.Thread(target=func, args=args, kwargs=kwargs)
104 t.start()
105 result = t.join(seconds)
106 if t.is_alive():
107 raise RuntimeError('timed out in {}'.format(func))
108 return result
109 return wrapper
110 return decorator
111
112
113 @contextmanager
114 def time_limited_open(path, mode, seconds=1):
115 @timeout(seconds)
116 def check_path(path):
117 while not os.path.exists(path):
118 time.sleep(0.1)
119 check_path(path)
120 f = open(path, mode)
121 yield f
122 f.close()
123
[end of parsl/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/utils.py b/parsl/utils.py
--- a/parsl/utils.py
+++ b/parsl/utils.py
@@ -25,7 +25,7 @@
status = 'dirty' if diff else 'clean'
version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)
except Exception as e:
- logger.exception("Unable to determine code state")
+ pass
return version
|
{"golden_diff": "diff --git a/parsl/utils.py b/parsl/utils.py\n--- a/parsl/utils.py\n+++ b/parsl/utils.py\n@@ -25,7 +25,7 @@\n status = 'dirty' if diff else 'clean'\n version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)\n except Exception as e:\n- logger.exception(\"Unable to determine code state\")\n+ pass\n \n return version\n", "issue": "fatal: Not a git repository: '/homes/vvikraman/anaconda3/lib/python3.6/site-packages/.git\nHi \r\nWhen I try to run parsl I am getting the following issue:\r\n\r\nfatal: Not a git repository: '/homes/vvikraman/anaconda3/lib/python3.6/site-packages/.git\r\n\r\nIs it a real issue?\r\n\r\nI am using python3 and jupyter but run parsl in a terminal. \nIssue in parsl.log\nI tried to run a simple script given in the parsl documentation \r\n\r\n```\r\nimport parsl\r\nfrom parsl import *\r\nimport time\r\n\r\nworkers = ThreadPoolExecutor(max_workers=4)\r\ndfk = DataFlowKernel(executors=[workers])\r\nprint(1)\r\n@App('python', dfk)\r\ndef hello ():\r\n import time\r\n time.sleep(5)\r\n return 'Hello World!'\r\nprint(2)\r\napp_future = hello()\r\nprint ('Done: %s' % app_future.done())\r\nprint ('Result: %s' % app_future.result())\r\nprint ('Done: %s' % app_future.done())\r\n```\r\nHowever, in the parsl.log shows this issue\r\n\r\n2018-06-07 21:45:37 parsl.utils:24 [ERROR] Unable to determine code state\r\nTraceback (most recent call last):\r\n File \"/homes/vvikraman/anaconda3/lib/python3.6/site-packages/parsl/utils.py\", line 19, in get_version\r\n head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')\r\n File \"/homes/vvikraman/anaconda3/lib/python3.6/subprocess.py\", line 336, in check_output\r\n **kwargs).stdout\r\n File \"/homes/vvikraman/anaconda3/lib/python3.6/subprocess.py\", line 418, in run\r\n output=stdout, stderr=stderr)\r\nsubprocess.CalledProcessError: Command '['git', 'rev-parse', '--short', 'HEAD']' returned non-zero exit status 128.\r\n\n", "before_files": [{"content": "import logging\nimport os\nimport shlex\nimport subprocess\nimport threading\nimport time\nfrom contextlib import contextmanager\nfrom functools import wraps\n\nimport parsl\nfrom parsl.version import VERSION\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_version():\n version = parsl.__version__\n work_tree = os.path.dirname(os.path.dirname(__file__))\n git_dir = os.path.join(work_tree, '.git')\n env = {'GIT_WORK_TREE': work_tree, 'GIT_DIR': git_dir}\n try:\n cmd = shlex.split('git rev-parse --short HEAD')\n head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')\n diff = subprocess.check_output(shlex.split('git diff HEAD'), env=env)\n status = 'dirty' if diff else 'clean'\n version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)\n except Exception as e:\n logger.exception(\"Unable to determine code state\")\n\n return version\n\n\ndef get_all_checkpoints(rundir=\"runinfo\"):\n \"\"\"Finds the checkpoints from all last runs.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. 
It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for the checkpointFiles parameter of DataFlowKernel\n constructor\n\n \"\"\"\n\n if(not(os.path.isdir(rundir))):\n return []\n\n dirs = sorted(os.listdir(rundir))\n\n checkpoints = []\n\n for runid in dirs:\n\n checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, runid))\n\n if(os.path.isdir(checkpoint)):\n checkpoints.append(checkpoint)\n\n return checkpoints\n\n\ndef get_last_checkpoint(rundir=\"runinfo\"):\n \"\"\"Finds the checkpoint from the last run, if one exists.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for checkpointFiles parameter of DataFlowKernel\n constructor, with 0 or 1 elements\n\n \"\"\"\n\n if(not(os.path.isdir(rundir))):\n return []\n\n dirs = sorted(os.listdir(rundir))\n\n if(len(dirs) == 0):\n return []\n\n last_runid = dirs[-1]\n last_checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, last_runid))\n\n if(not(os.path.isdir(last_checkpoint))):\n return []\n\n return [last_checkpoint]\n\n\ndef timeout(seconds=None):\n def decorator(func, *args, **kwargs):\n @wraps(func)\n def wrapper(*args, **kwargs):\n t = threading.Thread(target=func, args=args, kwargs=kwargs)\n t.start()\n result = t.join(seconds)\n if t.is_alive():\n raise RuntimeError('timed out in {}'.format(func))\n return result\n return wrapper\n return decorator\n\n\n@contextmanager\ndef time_limited_open(path, mode, seconds=1):\n @timeout(seconds)\n def check_path(path):\n while not os.path.exists(path):\n time.sleep(0.1)\n check_path(path)\n f = open(path, mode)\n yield f\n f.close()\n", "path": "parsl/utils.py"}]}
| 2,005 | 102 |
gh_patches_debug_20440
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-10310
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[feature] Tool: is_msvc helper
Our friend and great contributor SpaceIm is using a new detection which I believe could be part of the mainstream:
```python
@property
def _is_msvc(self):
return str(self.settings.compiler) in ["Visual Studio", "msvc"]
```
This property can be widely re-used when checking in `validate()` or any other condition.
- [ ] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
</issue>
<code>
[start of conan/tools/microsoft/__init__.py]
1 from conan.tools.microsoft.toolchain import MSBuildToolchain
2 from conan.tools.microsoft.msbuild import MSBuild
3 from conan.tools.microsoft.msbuilddeps import MSBuildDeps
4 from conan.tools.microsoft.visual import msvc_runtime_flag, VCVars
5 from conan.tools.microsoft.subsystems import subsystem_path
6
[end of conan/tools/microsoft/__init__.py]
[start of conan/tools/microsoft/visual.py]
1 import os
2 import textwrap
3
4 from conans.client.tools import vs_installation_path
5 from conans.errors import ConanException
6
7 CONAN_VCVARS_FILE = "conanvcvars.bat"
8
9
10 def msvc_version_to_vs_ide_version(version):
11 _visuals = {'190': '14',
12 '191': '15',
13 '192': '16',
14 '193': '17'}
15 return _visuals[str(version)]
16
17
18 class VCVars:
19 def __init__(self, conanfile):
20 self._conanfile = conanfile
21
22 def generate(self, scope="build"):
23 """
24 write a conanvcvars.bat file with the good args from settings
25 """
26 conanfile = self._conanfile
27 os_ = conanfile.settings.get_safe("os")
28 if os_ != "Windows":
29 return
30
31 compiler = conanfile.settings.get_safe("compiler")
32 if compiler != "Visual Studio" and compiler != "msvc":
33 return
34
35 vs_version = vs_ide_version(conanfile)
36 vcvarsarch = vcvars_arch(conanfile)
37 vcvars_ver = _vcvars_vers(conanfile, compiler, vs_version)
38
39 vs_install_path = conanfile.conf["tools.microsoft.msbuild:installation_path"]
40 # The vs_install_path is like
41 # C:\Program Files (x86)\Microsoft Visual Studio\2019\Community
42 # C:\Program Files (x86)\Microsoft Visual Studio\2017\Community
43 # C:\Program Files (x86)\Microsoft Visual Studio 14.0
44 vcvars = vcvars_command(vs_version, architecture=vcvarsarch, platform_type=None,
45 winsdk_version=None, vcvars_ver=vcvars_ver,
46 vs_install_path=vs_install_path)
47
48 content = textwrap.dedent("""\
49 @echo off
50 {}
51 """.format(vcvars))
52 from conan.tools.env.environment import create_env_script
53 create_env_script(conanfile, content, CONAN_VCVARS_FILE, scope)
54
55
56 def vs_ide_version(conanfile):
57 compiler = conanfile.settings.get_safe("compiler")
58 compiler_version = (conanfile.settings.get_safe("compiler.base.version") or
59 conanfile.settings.get_safe("compiler.version"))
60 if compiler == "msvc":
61 toolset_override = conanfile.conf["tools.microsoft.msbuild:vs_version"]
62 if toolset_override:
63 visual_version = toolset_override
64 else:
65 visual_version = msvc_version_to_vs_ide_version(compiler_version)
66 else:
67 visual_version = compiler_version
68 return visual_version
69
70
71 def msvc_runtime_flag(conanfile):
72 settings = conanfile.settings
73 compiler = settings.get_safe("compiler")
74 runtime = settings.get_safe("compiler.runtime")
75 if compiler == "Visual Studio":
76 return runtime
77 if compiler == "msvc" or compiler == "intel-cc":
78 runtime_type = settings.get_safe("compiler.runtime_type")
79 runtime = "MT" if runtime == "static" else "MD"
80 if runtime_type == "Debug":
81 runtime = "{}d".format(runtime)
82 return runtime
83
84
85 def vcvars_command(version, architecture=None, platform_type=None, winsdk_version=None,
86 vcvars_ver=None, start_dir_cd=True, vs_install_path=None):
87 """ conan-agnostic construction of vcvars command
88 https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line
89 """
90 # TODO: This comes from conans/client/tools/win.py vcvars_command()
91 cmd = []
92 if start_dir_cd:
93 cmd.append('set "VSCMD_START_DIR=%CD%" &&')
94
95 # The "call" is useful in case it is called from another .bat script
96 cmd.append('call "%s" ' % _vcvars_path(version, vs_install_path))
97 if architecture:
98 cmd.append(architecture)
99 if platform_type:
100 cmd.append(platform_type)
101 if winsdk_version:
102 cmd.append(winsdk_version)
103 if vcvars_ver:
104 cmd.append("-vcvars_ver=%s" % vcvars_ver)
105 return " ".join(cmd)
106
107
108 def _vcvars_path(version, vs_install_path):
109 # TODO: This comes from conans/client/tools/win.py vcvars_command()
110 vs_path = vs_install_path or vs_installation_path(version)
111 if not vs_path or not os.path.isdir(vs_path):
112 raise ConanException("VS non-existing installation: Visual Studio %s" % version)
113
114 if int(version) > 14:
115 vcpath = os.path.join(vs_path, "VC/Auxiliary/Build/vcvarsall.bat")
116 else:
117 vcpath = os.path.join(vs_path, "VC/vcvarsall.bat")
118 return vcpath
119
120
121 def vcvars_arch(conanfile):
122 """
123 computes the vcvars command line architecture based on conanfile settings (host) and
124 settings_build
125 :param conanfile:
126 :return:
127 """
128 # TODO: This comes from conans/client/tools/win.py vcvars_command()
129 settings_host = conanfile.settings
130 try:
131 settings_build = conanfile.settings_build
132 except AttributeError:
133 settings_build = settings_host
134
135 arch_host = str(settings_host.arch)
136 arch_build = str(settings_build.arch)
137
138 arch = None
139 if arch_build == 'x86_64':
140 arch = {'x86': "amd64_x86",
141 'x86_64': 'amd64',
142 'armv7': 'amd64_arm',
143 'armv8': 'amd64_arm64'}.get(arch_host)
144 elif arch_build == 'x86':
145 arch = {'x86': 'x86',
146 'x86_64': 'x86_amd64',
147 'armv7': 'x86_arm',
148 'armv8': 'x86_arm64'}.get(arch_host)
149
150 if not arch:
151 raise ConanException('vcvars unsupported architectures %s-%s' % (arch_build, arch_host))
152
153 return arch
154
155
156 def _vcvars_vers(conanfile, compiler, vs_version):
157 if int(vs_version) <= 14:
158 return None
159
160 vcvars_ver = None
161 if compiler == "Visual Studio":
162 toolset = conanfile.settings.get_safe("compiler.toolset")
163 if toolset is not None:
164 vcvars_ver = {"v140": "14.0",
165 "v141": "14.1",
166 "v142": "14.2",
167 "v143": "14.3"}.get(toolset)
168 else:
169 assert compiler == "msvc"
170 # Code similar to CMakeToolchain toolset one
171 compiler_version = str(conanfile.settings.compiler.version)
172 # The equivalent of compiler 192 is toolset 14.2
173 vcvars_ver = "14.{}".format(compiler_version[-1])
174 return vcvars_ver
175
[end of conan/tools/microsoft/visual.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conan/tools/microsoft/__init__.py b/conan/tools/microsoft/__init__.py
--- a/conan/tools/microsoft/__init__.py
+++ b/conan/tools/microsoft/__init__.py
@@ -1,5 +1,5 @@
from conan.tools.microsoft.toolchain import MSBuildToolchain
from conan.tools.microsoft.msbuild import MSBuild
from conan.tools.microsoft.msbuilddeps import MSBuildDeps
-from conan.tools.microsoft.visual import msvc_runtime_flag, VCVars
+from conan.tools.microsoft.visual import msvc_runtime_flag, VCVars, is_msvc
from conan.tools.microsoft.subsystems import subsystem_path
diff --git a/conan/tools/microsoft/visual.py b/conan/tools/microsoft/visual.py
--- a/conan/tools/microsoft/visual.py
+++ b/conan/tools/microsoft/visual.py
@@ -172,3 +172,12 @@
# The equivalent of compiler 192 is toolset 14.2
vcvars_ver = "14.{}".format(compiler_version[-1])
return vcvars_ver
+
+
+def is_msvc(conanfile):
+ """ Validate if current compiler in setttings is 'Visual Studio' or 'msvc'
+ :param conanfile: ConanFile instance
+ :return: True, if the host compiler is related to Visual Studio, otherwise, False.
+ """
+ settings = conanfile.settings
+ return settings.get_safe("compiler") in ["Visual Studio", "msvc"]
|
{"golden_diff": "diff --git a/conan/tools/microsoft/__init__.py b/conan/tools/microsoft/__init__.py\n--- a/conan/tools/microsoft/__init__.py\n+++ b/conan/tools/microsoft/__init__.py\n@@ -1,5 +1,5 @@\n from conan.tools.microsoft.toolchain import MSBuildToolchain\n from conan.tools.microsoft.msbuild import MSBuild\n from conan.tools.microsoft.msbuilddeps import MSBuildDeps\n-from conan.tools.microsoft.visual import msvc_runtime_flag, VCVars\n+from conan.tools.microsoft.visual import msvc_runtime_flag, VCVars, is_msvc\n from conan.tools.microsoft.subsystems import subsystem_path\ndiff --git a/conan/tools/microsoft/visual.py b/conan/tools/microsoft/visual.py\n--- a/conan/tools/microsoft/visual.py\n+++ b/conan/tools/microsoft/visual.py\n@@ -172,3 +172,12 @@\n # The equivalent of compiler 192 is toolset 14.2\n vcvars_ver = \"14.{}\".format(compiler_version[-1])\n return vcvars_ver\n+\n+\n+def is_msvc(conanfile):\n+ \"\"\" Validate if current compiler in setttings is 'Visual Studio' or 'msvc'\n+ :param conanfile: ConanFile instance\n+ :return: True, if the host compiler is related to Visual Studio, otherwise, False.\n+ \"\"\"\n+ settings = conanfile.settings\n+ return settings.get_safe(\"compiler\") in [\"Visual Studio\", \"msvc\"]\n", "issue": "[feature] Tool: is_msvc helper\nOur friend and great contributor SpaceIm is using a new detection which I believe could be part of the mainstream:\r\n\r\n```python\r\n@property\r\ndef _is_msvc(self):\r\n return str(self.settings.compiler) in [\"Visual Studio\", \"msvc\"]\r\n```\r\n\r\nThis property can be largely re-used, when checking on `validate()` or any other condition.\r\n\r\n\r\n- [ ] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n\n", "before_files": [{"content": "from conan.tools.microsoft.toolchain import MSBuildToolchain\nfrom conan.tools.microsoft.msbuild import MSBuild\nfrom conan.tools.microsoft.msbuilddeps import MSBuildDeps\nfrom conan.tools.microsoft.visual import msvc_runtime_flag, VCVars\nfrom conan.tools.microsoft.subsystems import subsystem_path\n", "path": "conan/tools/microsoft/__init__.py"}, {"content": "import os\nimport textwrap\n\nfrom conans.client.tools import vs_installation_path\nfrom conans.errors import ConanException\n\nCONAN_VCVARS_FILE = \"conanvcvars.bat\"\n\n\ndef msvc_version_to_vs_ide_version(version):\n _visuals = {'190': '14',\n '191': '15',\n '192': '16',\n '193': '17'}\n return _visuals[str(version)]\n\n\nclass VCVars:\n def __init__(self, conanfile):\n self._conanfile = conanfile\n\n def generate(self, scope=\"build\"):\n \"\"\"\n write a conanvcvars.bat file with the good args from settings\n \"\"\"\n conanfile = self._conanfile\n os_ = conanfile.settings.get_safe(\"os\")\n if os_ != \"Windows\":\n return\n\n compiler = conanfile.settings.get_safe(\"compiler\")\n if compiler != \"Visual Studio\" and compiler != \"msvc\":\n return\n\n vs_version = vs_ide_version(conanfile)\n vcvarsarch = vcvars_arch(conanfile)\n vcvars_ver = _vcvars_vers(conanfile, compiler, vs_version)\n\n vs_install_path = conanfile.conf[\"tools.microsoft.msbuild:installation_path\"]\n # The vs_install_path is like\n # C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\n # C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\n # C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\n vcvars = vcvars_command(vs_version, architecture=vcvarsarch, platform_type=None,\n winsdk_version=None, vcvars_ver=vcvars_ver,\n vs_install_path=vs_install_path)\n\n 
content = textwrap.dedent(\"\"\"\\\n @echo off\n {}\n \"\"\".format(vcvars))\n from conan.tools.env.environment import create_env_script\n create_env_script(conanfile, content, CONAN_VCVARS_FILE, scope)\n\n\ndef vs_ide_version(conanfile):\n compiler = conanfile.settings.get_safe(\"compiler\")\n compiler_version = (conanfile.settings.get_safe(\"compiler.base.version\") or\n conanfile.settings.get_safe(\"compiler.version\"))\n if compiler == \"msvc\":\n toolset_override = conanfile.conf[\"tools.microsoft.msbuild:vs_version\"]\n if toolset_override:\n visual_version = toolset_override\n else:\n visual_version = msvc_version_to_vs_ide_version(compiler_version)\n else:\n visual_version = compiler_version\n return visual_version\n\n\ndef msvc_runtime_flag(conanfile):\n settings = conanfile.settings\n compiler = settings.get_safe(\"compiler\")\n runtime = settings.get_safe(\"compiler.runtime\")\n if compiler == \"Visual Studio\":\n return runtime\n if compiler == \"msvc\" or compiler == \"intel-cc\":\n runtime_type = settings.get_safe(\"compiler.runtime_type\")\n runtime = \"MT\" if runtime == \"static\" else \"MD\"\n if runtime_type == \"Debug\":\n runtime = \"{}d\".format(runtime)\n return runtime\n\n\ndef vcvars_command(version, architecture=None, platform_type=None, winsdk_version=None,\n vcvars_ver=None, start_dir_cd=True, vs_install_path=None):\n \"\"\" conan-agnostic construction of vcvars command\n https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line\n \"\"\"\n # TODO: This comes from conans/client/tools/win.py vcvars_command()\n cmd = []\n if start_dir_cd:\n cmd.append('set \"VSCMD_START_DIR=%CD%\" &&')\n\n # The \"call\" is useful in case it is called from another .bat script\n cmd.append('call \"%s\" ' % _vcvars_path(version, vs_install_path))\n if architecture:\n cmd.append(architecture)\n if platform_type:\n cmd.append(platform_type)\n if winsdk_version:\n cmd.append(winsdk_version)\n if vcvars_ver:\n cmd.append(\"-vcvars_ver=%s\" % vcvars_ver)\n return \" \".join(cmd)\n\n\ndef _vcvars_path(version, vs_install_path):\n # TODO: This comes from conans/client/tools/win.py vcvars_command()\n vs_path = vs_install_path or vs_installation_path(version)\n if not vs_path or not os.path.isdir(vs_path):\n raise ConanException(\"VS non-existing installation: Visual Studio %s\" % version)\n\n if int(version) > 14:\n vcpath = os.path.join(vs_path, \"VC/Auxiliary/Build/vcvarsall.bat\")\n else:\n vcpath = os.path.join(vs_path, \"VC/vcvarsall.bat\")\n return vcpath\n\n\ndef vcvars_arch(conanfile):\n \"\"\"\n computes the vcvars command line architecture based on conanfile settings (host) and\n settings_build\n :param conanfile:\n :return:\n \"\"\"\n # TODO: This comes from conans/client/tools/win.py vcvars_command()\n settings_host = conanfile.settings\n try:\n settings_build = conanfile.settings_build\n except AttributeError:\n settings_build = settings_host\n\n arch_host = str(settings_host.arch)\n arch_build = str(settings_build.arch)\n\n arch = None\n if arch_build == 'x86_64':\n arch = {'x86': \"amd64_x86\",\n 'x86_64': 'amd64',\n 'armv7': 'amd64_arm',\n 'armv8': 'amd64_arm64'}.get(arch_host)\n elif arch_build == 'x86':\n arch = {'x86': 'x86',\n 'x86_64': 'x86_amd64',\n 'armv7': 'x86_arm',\n 'armv8': 'x86_arm64'}.get(arch_host)\n\n if not arch:\n raise ConanException('vcvars unsupported architectures %s-%s' % (arch_build, arch_host))\n\n return arch\n\n\ndef _vcvars_vers(conanfile, compiler, vs_version):\n if int(vs_version) <= 14:\n return None\n\n vcvars_ver = None\n if 
compiler == \"Visual Studio\":\n toolset = conanfile.settings.get_safe(\"compiler.toolset\")\n if toolset is not None:\n vcvars_ver = {\"v140\": \"14.0\",\n \"v141\": \"14.1\",\n \"v142\": \"14.2\",\n \"v143\": \"14.3\"}.get(toolset)\n else:\n assert compiler == \"msvc\"\n # Code similar to CMakeToolchain toolset one\n compiler_version = str(conanfile.settings.compiler.version)\n # The equivalent of compiler 192 is toolset 14.2\n vcvars_ver = \"14.{}\".format(compiler_version[-1])\n return vcvars_ver\n", "path": "conan/tools/microsoft/visual.py"}]}
| 2,742 | 339 |
gh_patches_debug_28236
|
rasdani/github-patches
|
git_diff
|
talonhub__community-763
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Do we need _capitalize_defaults now that Talon lexicon includes capitalization?
</issue>
<code>
[start of code/vocabulary.py]
1 import logging
2 from typing import Dict, Sequence
3
4 from talon import Context, Module, actions
5 from .user_settings import get_list_from_csv
6
7 mod = Module()
8 ctx = Context()
9
10 mod.list("vocabulary", desc="additional vocabulary words")
11
12
13 # Default words that will need to be capitalized (particularly under w2l).
14 # NB. These defaults and those later in this file are ONLY used when
15 # auto-creating the corresponding settings/*.csv files. Those csv files
16 # determine the contents of user.vocabulary and dictate.word_map. Once they
17 # exist, the contents of the lists/dictionaries below are irrelevant.
18 _capitalize_defaults = [
19 "I",
20 "I'm",
21 "I've",
22 "I'll",
23 "I'd",
24 "Monday",
25 "Mondays",
26 "Tuesday",
27 "Tuesdays",
28 "Wednesday",
29 "Wednesdays",
30 "Thursday",
31 "Thursdays",
32 "Friday",
33 "Fridays",
34 "Saturday",
35 "Saturdays",
36 "Sunday",
37 "Sundays",
38 "January",
39 "February",
40 # March omitted because it's a regular word too
41 "April",
42 # May omitted because it's a regular word too
43 "June",
44 "July",
45 "August",
46 "September",
47 "October",
48 "November",
49 "December",
50 ]
51
52 # Default words that need to be remapped.
53 _word_map_defaults = {
54 # E.g:
55 # "cash": "cache",
56 # This is the opposite ordering to words_to_replace.csv (the latter has the target word first)
57 }
58 _word_map_defaults.update({word.lower(): word for word in _capitalize_defaults})
59
60
61 # phrases_to_replace is a spoken form -> written form map, used by
62 # `user.replace_phrases` to rewrite words and phrases Talon recognized.
63 # This does not change the priority with which Talon recognizes
64 # particular phrases over others.
65 phrases_to_replace = get_list_from_csv(
66 "words_to_replace.csv",
67 headers=("Replacement", "Original"),
68 default=_word_map_defaults
69 )
70
71 # "dictate.word_map" is used by `actions.dictate.replace_words`;
72 # a built-in Talon action similar to `replace_phrases`, but supporting
73 # only single-word replacements. Multi-word phrases are ignored.
74 ctx.settings["dictate.word_map"] = phrases_to_replace
75
76
77 # Default words that should be added to Talon's vocabulary.
78 # Don't edit this. Edit 'additional_vocabulary.csv' instead
79 _simple_vocab_default = ["nmap", "admin", "Cisco", "Citrix", "VPN", "DNS", "Minecraft"]
80
81 # Defaults for different pronounciations of words that need to be added to
82 # Talon's vocabulary.
83 _default_vocabulary = {
84 "N map": "nmap",
85 "under documented": "under-documented",
86 }
87 _default_vocabulary.update({word: word for word in _simple_vocab_default})
88
89 # "user.vocabulary" is used to explicitly add words/phrases that Talon doesn't
90 # recognize. Words in user.vocabulary (or other lists and captures) are
91 # "command-like" and their recognition is prioritized over ordinary words.
92 ctx.lists["user.vocabulary"] = get_list_from_csv(
93 "additional_words.csv",
94 headers=("Word(s)", "Spoken Form (If Different)"),
95 default=_default_vocabulary,
96 )
97
98 # for quick verification of the reload
99 # print(str(ctx.settings["dictate.word_map"]))
100 # print(str(ctx.lists["user.vocabulary"]))
101
102 class PhraseReplacer:
103 """Utility for replacing phrases by other phrases inside text or word lists.
104
105 Replacing longer phrases has priority.
106
107 Args:
108 - phrase_dict: dictionary mapping recognized/spoken forms to written forms
109 """
110
111 def __init__(self, phrase_dict: Dict[str, str]):
112 # Index phrases by first word, then number of subsequent words n_next
113 phrase_index = dict()
114 for spoken_form, written_form in phrase_dict.items():
115 words = spoken_form.split()
116 if not words:
117 logging.warning("Found empty spoken form for written form"
118 f"{written_form}, ignored")
119 continue
120 first_word, n_next = words[0], len(words) - 1
121 phrase_index.setdefault(first_word, {}) \
122 .setdefault(n_next, {})[tuple(words[1:])] = written_form
123
124 # Sort n_next index so longer phrases have priority
125 self.phrase_index = {
126 first_word: list(sorted(same_first_word.items(), key=lambda x: -x[0]))
127 for first_word, same_first_word in phrase_index.items()
128 }
129
130 def replace(self, input_words: Sequence[str]) -> Sequence[str]:
131 input_words = tuple(input_words) # tuple to ensure hashability of slices
132 output_words = []
133 first_word_i = 0
134 while first_word_i < len(input_words):
135 first_word = input_words[first_word_i]
136 next_word_i = first_word_i + 1
137 # Could this word be the first of a phrase we should replace?
138 for n_next, phrases_n_next in self.phrase_index.get(first_word, []):
139 # Yes. Perhaps a phrase with n_next subsequent words?
140 continuation = input_words[next_word_i : next_word_i + n_next]
141 if continuation in phrases_n_next:
142 # Found a match!
143 output_words.append(phrases_n_next[continuation])
144 first_word_i += 1 + n_next
145 break
146 else:
147 # No match, just add the word to the result
148 output_words.append(first_word)
149 first_word_i += 1
150 return output_words
151
152 # Wrapper used for testing.
153 def replace_string(self, text: str) -> str:
154 return ' '.join(self.replace(text.split()))
155
156 # Unit tests for PhraseReplacer
157 rep = PhraseReplacer({
158 'this': 'foo',
159 'that': 'bar',
160 'this is': 'stopping early',
161 'this is a test': 'it worked!',
162 })
163 assert rep.replace_string('gnork') == 'gnork'
164 assert rep.replace_string('this') == 'foo'
165 assert rep.replace_string('this that this') == 'foo bar foo'
166 assert rep.replace_string('this is a test') == 'it worked!'
167 assert rep.replace_string('well this is a test really') == 'well it worked! really'
168 assert rep.replace_string('try this is too') == 'try stopping early too'
169 assert rep.replace_string('this is a tricky one') == 'stopping early a tricky one'
170
171 phrase_replacer = PhraseReplacer(phrases_to_replace)
172
173 @mod.action_class
174 class Actions:
175 def replace_phrases(words: Sequence[str]) -> Sequence[str]:
176 """Replace phrases according to words_to_replace.csv"""
177 try:
178 return phrase_replacer.replace(words)
179 except:
180 # fall back to dictate.replace_words for error-robustness
181 logging.error("phrase replacer failed!")
182 return actions.dictate.replace_words(words)
183
[end of code/vocabulary.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/code/vocabulary.py b/code/vocabulary.py
--- a/code/vocabulary.py
+++ b/code/vocabulary.py
@@ -10,31 +10,15 @@
mod.list("vocabulary", desc="additional vocabulary words")
-# Default words that will need to be capitalized (particularly under w2l).
-# NB. These defaults and those later in this file are ONLY used when
+# Default words that will need to be capitalized.
+# DON'T EDIT THIS. Edit settings/words_to_replace.csv instead.
+# These defaults and those later in this file are ONLY used when
# auto-creating the corresponding settings/*.csv files. Those csv files
# determine the contents of user.vocabulary and dictate.word_map. Once they
# exist, the contents of the lists/dictionaries below are irrelevant.
_capitalize_defaults = [
- "I",
- "I'm",
- "I've",
- "I'll",
- "I'd",
- "Monday",
- "Mondays",
- "Tuesday",
- "Tuesdays",
- "Wednesday",
- "Wednesdays",
- "Thursday",
- "Thursdays",
- "Friday",
- "Fridays",
- "Saturday",
- "Saturdays",
- "Sunday",
- "Sundays",
+ # NB. the lexicon now capitalizes January/February by default, but not the
+ # others below. Not sure why.
"January",
"February",
# March omitted because it's a regular word too
@@ -42,7 +26,7 @@
# May omitted because it's a regular word too
"June",
"July",
- "August",
+ "August", # technically also an adjective but the month is far more common
"September",
"October",
"November",
|
{"golden_diff": "diff --git a/code/vocabulary.py b/code/vocabulary.py\n--- a/code/vocabulary.py\n+++ b/code/vocabulary.py\n@@ -10,31 +10,15 @@\n mod.list(\"vocabulary\", desc=\"additional vocabulary words\")\n \n \n-# Default words that will need to be capitalized (particularly under w2l).\n-# NB. These defaults and those later in this file are ONLY used when\n+# Default words that will need to be capitalized.\n+# DON'T EDIT THIS. Edit settings/words_to_replace.csv instead.\n+# These defaults and those later in this file are ONLY used when\n # auto-creating the corresponding settings/*.csv files. Those csv files\n # determine the contents of user.vocabulary and dictate.word_map. Once they\n # exist, the contents of the lists/dictionaries below are irrelevant.\n _capitalize_defaults = [\n- \"I\",\n- \"I'm\",\n- \"I've\",\n- \"I'll\",\n- \"I'd\",\n- \"Monday\",\n- \"Mondays\",\n- \"Tuesday\",\n- \"Tuesdays\",\n- \"Wednesday\",\n- \"Wednesdays\",\n- \"Thursday\",\n- \"Thursdays\",\n- \"Friday\",\n- \"Fridays\",\n- \"Saturday\",\n- \"Saturdays\",\n- \"Sunday\",\n- \"Sundays\",\n+ # NB. the lexicon now capitalizes January/February by default, but not the\n+ # others below. Not sure why.\n \"January\",\n \"February\",\n # March omitted because it's a regular word too\n@@ -42,7 +26,7 @@\n # May omitted because it's a regular word too\n \"June\",\n \"July\",\n- \"August\",\n+ \"August\", # technically also an adjective but the month is far more common\n \"September\",\n \"October\",\n \"November\",\n", "issue": "Do we need _capitalize_defaults now that Talon lexicon includes capitalization?\n\n", "before_files": [{"content": "import logging\nfrom typing import Dict, Sequence\n\nfrom talon import Context, Module, actions\nfrom .user_settings import get_list_from_csv\n\nmod = Module()\nctx = Context()\n\nmod.list(\"vocabulary\", desc=\"additional vocabulary words\")\n\n\n# Default words that will need to be capitalized (particularly under w2l).\n# NB. These defaults and those later in this file are ONLY used when\n# auto-creating the corresponding settings/*.csv files. Those csv files\n# determine the contents of user.vocabulary and dictate.word_map. 
Once they\n# exist, the contents of the lists/dictionaries below are irrelevant.\n_capitalize_defaults = [\n \"I\",\n \"I'm\",\n \"I've\",\n \"I'll\",\n \"I'd\",\n \"Monday\",\n \"Mondays\",\n \"Tuesday\",\n \"Tuesdays\",\n \"Wednesday\",\n \"Wednesdays\",\n \"Thursday\",\n \"Thursdays\",\n \"Friday\",\n \"Fridays\",\n \"Saturday\",\n \"Saturdays\",\n \"Sunday\",\n \"Sundays\",\n \"January\",\n \"February\",\n # March omitted because it's a regular word too\n \"April\",\n # May omitted because it's a regular word too\n \"June\",\n \"July\",\n \"August\",\n \"September\",\n \"October\",\n \"November\",\n \"December\",\n]\n\n# Default words that need to be remapped.\n_word_map_defaults = {\n # E.g:\n # \"cash\": \"cache\",\n # This is the opposite ordering to words_to_replace.csv (the latter has the target word first)\n}\n_word_map_defaults.update({word.lower(): word for word in _capitalize_defaults})\n\n\n# phrases_to_replace is a spoken form -> written form map, used by\n# `user.replace_phrases` to rewrite words and phrases Talon recognized.\n# This does not change the priority with which Talon recognizes\n# particular phrases over others.\nphrases_to_replace = get_list_from_csv(\n \"words_to_replace.csv\",\n headers=(\"Replacement\", \"Original\"),\n default=_word_map_defaults\n)\n\n# \"dictate.word_map\" is used by `actions.dictate.replace_words`;\n# a built-in Talon action similar to `replace_phrases`, but supporting\n# only single-word replacements. Multi-word phrases are ignored.\nctx.settings[\"dictate.word_map\"] = phrases_to_replace\n\n\n# Default words that should be added to Talon's vocabulary.\n# Don't edit this. Edit 'additional_vocabulary.csv' instead\n_simple_vocab_default = [\"nmap\", \"admin\", \"Cisco\", \"Citrix\", \"VPN\", \"DNS\", \"Minecraft\"]\n\n# Defaults for different pronounciations of words that need to be added to\n# Talon's vocabulary.\n_default_vocabulary = {\n \"N map\": \"nmap\",\n \"under documented\": \"under-documented\",\n}\n_default_vocabulary.update({word: word for word in _simple_vocab_default})\n\n# \"user.vocabulary\" is used to explicitly add words/phrases that Talon doesn't\n# recognize. 
Words in user.vocabulary (or other lists and captures) are\n# \"command-like\" and their recognition is prioritized over ordinary words.\nctx.lists[\"user.vocabulary\"] = get_list_from_csv(\n \"additional_words.csv\",\n headers=(\"Word(s)\", \"Spoken Form (If Different)\"),\n default=_default_vocabulary,\n)\n\n# for quick verification of the reload\n# print(str(ctx.settings[\"dictate.word_map\"]))\n# print(str(ctx.lists[\"user.vocabulary\"]))\n\nclass PhraseReplacer:\n \"\"\"Utility for replacing phrases by other phrases inside text or word lists.\n\n Replacing longer phrases has priority.\n\n Args:\n - phrase_dict: dictionary mapping recognized/spoken forms to written forms\n \"\"\"\n\n def __init__(self, phrase_dict: Dict[str, str]):\n # Index phrases by first word, then number of subsequent words n_next\n phrase_index = dict()\n for spoken_form, written_form in phrase_dict.items():\n words = spoken_form.split()\n if not words:\n logging.warning(\"Found empty spoken form for written form\"\n f\"{written_form}, ignored\")\n continue\n first_word, n_next = words[0], len(words) - 1\n phrase_index.setdefault(first_word, {}) \\\n .setdefault(n_next, {})[tuple(words[1:])] = written_form\n\n # Sort n_next index so longer phrases have priority\n self.phrase_index = {\n first_word: list(sorted(same_first_word.items(), key=lambda x: -x[0]))\n for first_word, same_first_word in phrase_index.items()\n }\n\n def replace(self, input_words: Sequence[str]) -> Sequence[str]:\n input_words = tuple(input_words) # tuple to ensure hashability of slices\n output_words = []\n first_word_i = 0\n while first_word_i < len(input_words):\n first_word = input_words[first_word_i]\n next_word_i = first_word_i + 1\n # Could this word be the first of a phrase we should replace?\n for n_next, phrases_n_next in self.phrase_index.get(first_word, []):\n # Yes. Perhaps a phrase with n_next subsequent words?\n continuation = input_words[next_word_i : next_word_i + n_next]\n if continuation in phrases_n_next:\n # Found a match!\n output_words.append(phrases_n_next[continuation])\n first_word_i += 1 + n_next\n break\n else:\n # No match, just add the word to the result\n output_words.append(first_word)\n first_word_i += 1\n return output_words\n\n # Wrapper used for testing.\n def replace_string(self, text: str) -> str:\n return ' '.join(self.replace(text.split()))\n\n# Unit tests for PhraseReplacer\nrep = PhraseReplacer({\n 'this': 'foo',\n 'that': 'bar',\n 'this is': 'stopping early',\n 'this is a test': 'it worked!',\n})\nassert rep.replace_string('gnork') == 'gnork'\nassert rep.replace_string('this') == 'foo'\nassert rep.replace_string('this that this') == 'foo bar foo'\nassert rep.replace_string('this is a test') == 'it worked!'\nassert rep.replace_string('well this is a test really') == 'well it worked! really'\nassert rep.replace_string('try this is too') == 'try stopping early too'\nassert rep.replace_string('this is a tricky one') == 'stopping early a tricky one'\n\nphrase_replacer = PhraseReplacer(phrases_to_replace)\n\[email protected]_class\nclass Actions:\n def replace_phrases(words: Sequence[str]) -> Sequence[str]:\n \"\"\"Replace phrases according to words_to_replace.csv\"\"\"\n try:\n return phrase_replacer.replace(words)\n except:\n # fall back to dictate.replace_words for error-robustness\n logging.error(\"phrase replacer failed!\")\n return actions.dictate.replace_words(words)\n", "path": "code/vocabulary.py"}]}
| 2,491 | 408 |
gh_patches_debug_42376
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-2750
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `wemake-plain` output formatter
It should be the same as `wemake`, but without colors. Why?
When I try to save the output to file, I get this:
```
106:29 WPS220 Found too deep nesting: 28 > 20
[34mtry[39;49;00m:
^
```
And I want to have this:
```
106:29 WPS220 Found too deep nesting: 28 > 20
try:
^
```
</issue>
<code>
[start of wemake_python_styleguide/formatter.py]
1 """
2 Our very own ``flake8`` formatter for better error messages.
3
4 That's how all ``flake8`` formatters work:
5
6 .. mermaid::
7 :caption: ``flake8`` formatting API calls order.
8
9 graph LR
10 F2[start] --> F3[after_init]
11 F3 --> F4[start]
12 F4 --> F5[beginning]
13 F5 --> F6[handle]
14 F6 --> F7[format]
15 F6 --> F8[show_source]
16 F6 --> F9[show_statistic]
17 F7 --> F10[finished]
18 F8 --> F10[finished]
19 F9 --> F10[finished]
20 F10 -.-> F5
21 F10 --> F11[stop]
22
23 .. autoclass:: WemakeFormatter
24 :no-undoc-members:
25
26 """
27
28 from collections import defaultdict
29 from typing import ClassVar, DefaultDict, List
30
31 from flake8.formatting.base import BaseFormatter
32 from flake8.statistics import Statistics
33 from flake8.style_guide import Violation
34 from pygments import highlight
35 from pygments.formatters import TerminalFormatter
36 from pygments.lexers import PythonLexer
37 from typing_extensions import Final
38
39 from wemake_python_styleguide.version import pkg_version
40
41 #: That url is generated and hosted by Sphinx.
42 DOCS_URL_TEMPLATE: Final = (
43 'https://wemake-python-styleguide.rtfd.io/en/{0}/pages/usage/violations/'
44 )
45
46 #: This url points to the specific violation page.
47 SHORTLINK_TEMPLATE: Final = (
48 'https://pyflak.es/{0}'
49 )
50
51
52 class WemakeFormatter(BaseFormatter): # noqa: WPS214
53 """
54 We need to format our style :term:`violations <violation>` beatifully.
55
56 The default formatter does not allow us to do that.
57 What things do we miss?
58
59 1. Spacing, everything is just mixed up and glued together
60 2. Colors and decoration, some information is easier
61 to gather just with colors or underlined text
62 3. Grouping, we need explicit grouping by filename
63 4. Incomplete and non-informative statistics
64
65 """
66
67 _doc_url: ClassVar[str] = DOCS_URL_TEMPLATE.format(pkg_version)
68
69 # API:
70
71 def after_init(self):
72 """Called after the original ``init`` is used to set extra fields."""
73 self._lexer = PythonLexer()
74 self._formatter = TerminalFormatter()
75
76 # Logic:
77 self._processed_filenames: List[str] = []
78 self._error_count = 0
79
80 def handle(self, error: Violation) -> None: # noqa: WPS110
81 """Processes each :term:`violation` to print it and all related."""
82 if error.filename not in self._processed_filenames:
83 self._print_header(error.filename)
84 self._processed_filenames.append(error.filename)
85
86 line = self.format(error)
87 source = self.show_source(error)
88 link = self._show_link(error)
89
90 self._write(line)
91 if link:
92 self._write(link)
93 if source:
94 self._write(source)
95
96 self._error_count += 1
97
98 def format(self, error: Violation) -> str: # noqa: WPS125
99 """Called to format each individual :term:`violation`."""
100 return '{newline} {row_col:<8} {code:<5} {text}'.format(
101 newline=self.newline if self._should_show_source(error) else '',
102 code=error.code,
103 text=error.text,
104 row_col='{0}:{1}'.format(error.line_number, error.column_number),
105 )
106
107 def show_source(self, error: Violation) -> str:
108 """Called when ``--show-source`` option is provided."""
109 if not self._should_show_source(error):
110 return ''
111
112 formatted_line = error.physical_line.lstrip()
113 adjust = len(error.physical_line) - len(formatted_line)
114
115 code = _highlight(
116 formatted_line,
117 self._lexer,
118 self._formatter,
119 )
120
121 return ' {code} {spacing}^'.format(
122 code=code,
123 spacing=' ' * (error.column_number - 1 - adjust),
124 )
125
126 def show_statistics(self, statistics: Statistics) -> None: # noqa: WPS210
127 """Called when ``--statistic`` option is passed."""
128 all_errors = 0
129 for error_code in statistics.error_codes():
130 stats_for_error_code = statistics.statistics_for(error_code)
131 statistic = next(stats_for_error_code)
132
133 count = statistic.count
134 count += sum(stat.count for stat in stats_for_error_code)
135 all_errors += count
136 error_by_file = _count_per_filename(statistics, error_code)
137
138 self._print_violation_per_file(
139 statistic,
140 error_code,
141 count,
142 error_by_file,
143 )
144
145 self._write(self.newline)
146 self._write(_underline(_bold('All errors: {0}'.format(all_errors))))
147
148 def stop(self) -> None:
149 """Runs once per app when the formatting ends."""
150 if self._error_count:
151 message = '{0}Full list of violations and explanations:{0}{1}'
152 self._write(message.format(self.newline, self._doc_url))
153
154 # Our own methods:
155
156 def _show_link(self, error: Violation) -> str:
157 """Called when ``--show-violation-links`` option is provided."""
158 if not self.options.show_violation_links:
159 return ''
160
161 return ' {spacing}-> {link}'.format(
162 spacing=' ' * 9,
163 link=SHORTLINK_TEMPLATE.format(error.code),
164 )
165
166 def _print_header(self, filename: str) -> None:
167 self._write(
168 '{newline}{filename}'.format(
169 filename=_underline(_bold(filename)),
170 newline=self.newline,
171 ),
172 )
173
174 def _print_violation_per_file(
175 self,
176 statistic: Statistics,
177 error_code: str,
178 count: int,
179 error_by_file: DefaultDict[str, int],
180 ):
181 self._write(
182 '{newline}{error_code}: {message}'.format(
183 newline=self.newline,
184 error_code=_bold(error_code),
185 message=statistic.message,
186 ),
187 )
188 for filename, error_count in error_by_file.items():
189 self._write(
190 ' {error_count:<5} {filename}'.format(
191 error_count=error_count,
192 filename=filename,
193 ),
194 )
195 self._write(_underline('Total: {0}'.format(count)))
196
197 def _should_show_source(self, error: Violation) -> bool:
198 return self.options.show_source and error.physical_line is not None
199
200
201 # Formatting text:
202
203 def _bold(text: str) -> str:
204 r"""
205 Returns bold formatted text.
206
207 >>> _bold('Hello!')
208 '\x1b[1mHello!\x1b[0m'
209
210 """
211 return '\033[1m{0}\033[0m'.format(text)
212
213
214 def _underline(text: str) -> str:
215 r"""
216 Returns underlined formatted text.
217
218 >>> _underline('Hello!')
219 '\x1b[4mHello!\x1b[0m'
220
221 """
222 return '\033[4m{0}\033[0m'.format(text)
223
224
225 def _highlight(source: str, lexer, formatter) -> str:
226 """
227 Highlights source code. Might fail.
228
229 See also:
230 https://github.com/wemake-services/wemake-python-styleguide/issues/794
231
232 """
233 try:
234 return highlight(source, lexer, formatter)
235 except Exception: # pragma: no cover
236 # Might fail on some systems, when colors are set incorrectly,
237 # or not available at all. In this case code will be just text.
238 return source
239
240
241 # Helpers:
242
243 def _count_per_filename(
244 statistics: Statistics,
245 error_code: str,
246 ) -> DefaultDict[str, int]:
247 filenames: DefaultDict[str, int] = defaultdict(int)
248 stats_for_error_code = statistics.statistics_for(error_code)
249
250 for stat in stats_for_error_code:
251 filenames[stat.filename] += stat.count
252
253 return filenames
254
[end of wemake_python_styleguide/formatter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wemake_python_styleguide/formatter.py b/wemake_python_styleguide/formatter.py
--- a/wemake_python_styleguide/formatter.py
+++ b/wemake_python_styleguide/formatter.py
@@ -26,7 +26,8 @@
"""
from collections import defaultdict
-from typing import ClassVar, DefaultDict, List
+from os import environ
+from typing import ClassVar, DefaultDict, Final, List
from flake8.formatting.base import BaseFormatter
from flake8.statistics import Statistics
@@ -34,19 +35,20 @@
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import PythonLexer
-from typing_extensions import Final
from wemake_python_styleguide.version import pkg_version
#: That url is generated and hosted by Sphinx.
-DOCS_URL_TEMPLATE: Final = (
+_DOCS_URL_TEMPLATE: Final = (
'https://wemake-python-styleguide.rtfd.io/en/{0}/pages/usage/violations/'
)
#: This url points to the specific violation page.
-SHORTLINK_TEMPLATE: Final = (
- 'https://pyflak.es/{0}'
-)
+_SHORTLINK_TEMPLATE: Final = 'https://pyflak.es/{0}'
+
+#: Option to disable any code highlight and text output format.
+#: See https://no-color.org
+_NO_COLOR: Final = environ.get('NO_COLOR', '0') == '1'
class WemakeFormatter(BaseFormatter): # noqa: WPS214
@@ -64,7 +66,7 @@
"""
- _doc_url: ClassVar[str] = DOCS_URL_TEMPLATE.format(pkg_version)
+ _doc_url: ClassVar[str] = _DOCS_URL_TEMPLATE.format(pkg_version)
# API:
@@ -160,7 +162,7 @@
return ' {spacing}-> {link}'.format(
spacing=' ' * 9,
- link=SHORTLINK_TEMPLATE.format(error.code),
+ link=_SHORTLINK_TEMPLATE.format(error.code),
)
def _print_header(self, filename: str) -> None:
@@ -200,36 +202,61 @@
# Formatting text:
-def _bold(text: str) -> str:
+def _bold(text: str, *, no_color: bool = _NO_COLOR) -> str:
r"""
Returns bold formatted text.
>>> _bold('Hello!')
'\x1b[1mHello!\x1b[0m'
+ Returns non-formatted text if environment variable ``NO_COLOR=1``.
+
+ >>> _bold('Hello!', no_color=True)
+ 'Hello!'
+
"""
+ if no_color:
+ return text
return '\033[1m{0}\033[0m'.format(text)
-def _underline(text: str) -> str:
+def _underline(text: str, *, no_color: bool = _NO_COLOR) -> str:
r"""
Returns underlined formatted text.
>>> _underline('Hello!')
'\x1b[4mHello!\x1b[0m'
+ Returns non-formatted text if environment variable ``NO_COLOR=1``.
+
+ >>> _underline('Hello!', no_color=True)
+ 'Hello!'
+
"""
+ if no_color:
+ return text
return '\033[4m{0}\033[0m'.format(text)
-def _highlight(source: str, lexer, formatter) -> str:
+def _highlight(
+ source: str,
+ lexer: PythonLexer,
+ formatter: TerminalFormatter,
+ *,
+ no_color: bool = _NO_COLOR,
+) -> str:
"""
Highlights source code. Might fail.
+ Returns non-formatted text if environment variable ``NO_COLOR=1``.
+
See also:
https://github.com/wemake-services/wemake-python-styleguide/issues/794
+ https://no-color.org
"""
+ if no_color:
+ return source
try:
return highlight(source, lexer, formatter)
except Exception: # pragma: no cover
|
{"golden_diff": "diff --git a/wemake_python_styleguide/formatter.py b/wemake_python_styleguide/formatter.py\n--- a/wemake_python_styleguide/formatter.py\n+++ b/wemake_python_styleguide/formatter.py\n@@ -26,7 +26,8 @@\n \"\"\"\n \n from collections import defaultdict\n-from typing import ClassVar, DefaultDict, List\n+from os import environ\n+from typing import ClassVar, DefaultDict, Final, List\n \n from flake8.formatting.base import BaseFormatter\n from flake8.statistics import Statistics\n@@ -34,19 +35,20 @@\n from pygments import highlight\n from pygments.formatters import TerminalFormatter\n from pygments.lexers import PythonLexer\n-from typing_extensions import Final\n \n from wemake_python_styleguide.version import pkg_version\n \n #: That url is generated and hosted by Sphinx.\n-DOCS_URL_TEMPLATE: Final = (\n+_DOCS_URL_TEMPLATE: Final = (\n 'https://wemake-python-styleguide.rtfd.io/en/{0}/pages/usage/violations/'\n )\n \n #: This url points to the specific violation page.\n-SHORTLINK_TEMPLATE: Final = (\n- 'https://pyflak.es/{0}'\n-)\n+_SHORTLINK_TEMPLATE: Final = 'https://pyflak.es/{0}'\n+\n+#: Option to disable any code highlight and text output format.\n+#: See https://no-color.org\n+_NO_COLOR: Final = environ.get('NO_COLOR', '0') == '1'\n \n \n class WemakeFormatter(BaseFormatter): # noqa: WPS214\n@@ -64,7 +66,7 @@\n \n \"\"\"\n \n- _doc_url: ClassVar[str] = DOCS_URL_TEMPLATE.format(pkg_version)\n+ _doc_url: ClassVar[str] = _DOCS_URL_TEMPLATE.format(pkg_version)\n \n # API:\n \n@@ -160,7 +162,7 @@\n \n return ' {spacing}-> {link}'.format(\n spacing=' ' * 9,\n- link=SHORTLINK_TEMPLATE.format(error.code),\n+ link=_SHORTLINK_TEMPLATE.format(error.code),\n )\n \n def _print_header(self, filename: str) -> None:\n@@ -200,36 +202,61 @@\n \n # Formatting text:\n \n-def _bold(text: str) -> str:\n+def _bold(text: str, *, no_color: bool = _NO_COLOR) -> str:\n r\"\"\"\n Returns bold formatted text.\n \n >>> _bold('Hello!')\n '\\x1b[1mHello!\\x1b[0m'\n \n+ Returns non-formatted text if environment variable ``NO_COLOR=1``.\n+\n+ >>> _bold('Hello!', no_color=True)\n+ 'Hello!'\n+\n \"\"\"\n+ if no_color:\n+ return text\n return '\\033[1m{0}\\033[0m'.format(text)\n \n \n-def _underline(text: str) -> str:\n+def _underline(text: str, *, no_color: bool = _NO_COLOR) -> str:\n r\"\"\"\n Returns underlined formatted text.\n \n >>> _underline('Hello!')\n '\\x1b[4mHello!\\x1b[0m'\n \n+ Returns non-formatted text if environment variable ``NO_COLOR=1``.\n+\n+ >>> _underline('Hello!', no_color=True)\n+ 'Hello!'\n+\n \"\"\"\n+ if no_color:\n+ return text\n return '\\033[4m{0}\\033[0m'.format(text)\n \n \n-def _highlight(source: str, lexer, formatter) -> str:\n+def _highlight(\n+ source: str,\n+ lexer: PythonLexer,\n+ formatter: TerminalFormatter,\n+ *,\n+ no_color: bool = _NO_COLOR,\n+) -> str:\n \"\"\"\n Highlights source code. Might fail.\n \n+ Returns non-formatted text if environment variable ``NO_COLOR=1``.\n+\n See also:\n https://github.com/wemake-services/wemake-python-styleguide/issues/794\n+ https://no-color.org\n \n \"\"\"\n+ if no_color:\n+ return source\n try:\n return highlight(source, lexer, formatter)\n except Exception: # pragma: no cover\n", "issue": "Add `wemake-plain` output formatter\nIt should be the same as `wemake`, but without colors. 
Why?\r\n\r\nWhen I try to save the output to file, I get this:\r\n\r\n```\r\n 106:29 WPS220 Found too deep nesting: 28 > 20\r\n \u001b[34mtry\u001b[39;49;00m:\r\n ^\r\n```\r\n\r\nAnd I want to have this:\r\n\r\n```\r\n 106:29 WPS220 Found too deep nesting: 28 > 20\r\n try:\r\n ^\r\n```\n", "before_files": [{"content": "\"\"\"\nOur very own ``flake8`` formatter for better error messages.\n\nThat's how all ``flake8`` formatters work:\n\n.. mermaid::\n :caption: ``flake8`` formatting API calls order.\n\n graph LR\n F2[start] --> F3[after_init]\n F3 --> F4[start]\n F4 --> F5[beginning]\n F5 --> F6[handle]\n F6 --> F7[format]\n F6\t --> F8[show_source]\n F6\t --> F9[show_statistic]\n F7 --> F10[finished]\n F8 --> F10[finished]\n F9 --> F10[finished]\n F10 -.-> F5\n F10 --> F11[stop]\n\n.. autoclass:: WemakeFormatter\n :no-undoc-members:\n\n\"\"\"\n\nfrom collections import defaultdict\nfrom typing import ClassVar, DefaultDict, List\n\nfrom flake8.formatting.base import BaseFormatter\nfrom flake8.statistics import Statistics\nfrom flake8.style_guide import Violation\nfrom pygments import highlight\nfrom pygments.formatters import TerminalFormatter\nfrom pygments.lexers import PythonLexer\nfrom typing_extensions import Final\n\nfrom wemake_python_styleguide.version import pkg_version\n\n#: That url is generated and hosted by Sphinx.\nDOCS_URL_TEMPLATE: Final = (\n 'https://wemake-python-styleguide.rtfd.io/en/{0}/pages/usage/violations/'\n)\n\n#: This url points to the specific violation page.\nSHORTLINK_TEMPLATE: Final = (\n 'https://pyflak.es/{0}'\n)\n\n\nclass WemakeFormatter(BaseFormatter): # noqa: WPS214\n \"\"\"\n We need to format our style :term:`violations <violation>` beatifully.\n\n The default formatter does not allow us to do that.\n What things do we miss?\n\n 1. Spacing, everything is just mixed up and glued together\n 2. Colors and decoration, some information is easier\n to gather just with colors or underlined text\n 3. Grouping, we need explicit grouping by filename\n 4. 
Incomplete and non-informative statistics\n\n \"\"\"\n\n _doc_url: ClassVar[str] = DOCS_URL_TEMPLATE.format(pkg_version)\n\n # API:\n\n def after_init(self):\n \"\"\"Called after the original ``init`` is used to set extra fields.\"\"\"\n self._lexer = PythonLexer()\n self._formatter = TerminalFormatter()\n\n # Logic:\n self._processed_filenames: List[str] = []\n self._error_count = 0\n\n def handle(self, error: Violation) -> None: # noqa: WPS110\n \"\"\"Processes each :term:`violation` to print it and all related.\"\"\"\n if error.filename not in self._processed_filenames:\n self._print_header(error.filename)\n self._processed_filenames.append(error.filename)\n\n line = self.format(error)\n source = self.show_source(error)\n link = self._show_link(error)\n\n self._write(line)\n if link:\n self._write(link)\n if source:\n self._write(source)\n\n self._error_count += 1\n\n def format(self, error: Violation) -> str: # noqa: WPS125\n \"\"\"Called to format each individual :term:`violation`.\"\"\"\n return '{newline} {row_col:<8} {code:<5} {text}'.format(\n newline=self.newline if self._should_show_source(error) else '',\n code=error.code,\n text=error.text,\n row_col='{0}:{1}'.format(error.line_number, error.column_number),\n )\n\n def show_source(self, error: Violation) -> str:\n \"\"\"Called when ``--show-source`` option is provided.\"\"\"\n if not self._should_show_source(error):\n return ''\n\n formatted_line = error.physical_line.lstrip()\n adjust = len(error.physical_line) - len(formatted_line)\n\n code = _highlight(\n formatted_line,\n self._lexer,\n self._formatter,\n )\n\n return ' {code} {spacing}^'.format(\n code=code,\n spacing=' ' * (error.column_number - 1 - adjust),\n )\n\n def show_statistics(self, statistics: Statistics) -> None: # noqa: WPS210\n \"\"\"Called when ``--statistic`` option is passed.\"\"\"\n all_errors = 0\n for error_code in statistics.error_codes():\n stats_for_error_code = statistics.statistics_for(error_code)\n statistic = next(stats_for_error_code)\n\n count = statistic.count\n count += sum(stat.count for stat in stats_for_error_code)\n all_errors += count\n error_by_file = _count_per_filename(statistics, error_code)\n\n self._print_violation_per_file(\n statistic,\n error_code,\n count,\n error_by_file,\n )\n\n self._write(self.newline)\n self._write(_underline(_bold('All errors: {0}'.format(all_errors))))\n\n def stop(self) -> None:\n \"\"\"Runs once per app when the formatting ends.\"\"\"\n if self._error_count:\n message = '{0}Full list of violations and explanations:{0}{1}'\n self._write(message.format(self.newline, self._doc_url))\n\n # Our own methods:\n\n def _show_link(self, error: Violation) -> str:\n \"\"\"Called when ``--show-violation-links`` option is provided.\"\"\"\n if not self.options.show_violation_links:\n return ''\n\n return ' {spacing}-> {link}'.format(\n spacing=' ' * 9,\n link=SHORTLINK_TEMPLATE.format(error.code),\n )\n\n def _print_header(self, filename: str) -> None:\n self._write(\n '{newline}{filename}'.format(\n filename=_underline(_bold(filename)),\n newline=self.newline,\n ),\n )\n\n def _print_violation_per_file(\n self,\n statistic: Statistics,\n error_code: str,\n count: int,\n error_by_file: DefaultDict[str, int],\n ):\n self._write(\n '{newline}{error_code}: {message}'.format(\n newline=self.newline,\n error_code=_bold(error_code),\n message=statistic.message,\n ),\n )\n for filename, error_count in error_by_file.items():\n self._write(\n ' {error_count:<5} {filename}'.format(\n error_count=error_count,\n 
filename=filename,\n ),\n )\n self._write(_underline('Total: {0}'.format(count)))\n\n def _should_show_source(self, error: Violation) -> bool:\n return self.options.show_source and error.physical_line is not None\n\n\n# Formatting text:\n\ndef _bold(text: str) -> str:\n r\"\"\"\n Returns bold formatted text.\n\n >>> _bold('Hello!')\n '\\x1b[1mHello!\\x1b[0m'\n\n \"\"\"\n return '\\033[1m{0}\\033[0m'.format(text)\n\n\ndef _underline(text: str) -> str:\n r\"\"\"\n Returns underlined formatted text.\n\n >>> _underline('Hello!')\n '\\x1b[4mHello!\\x1b[0m'\n\n \"\"\"\n return '\\033[4m{0}\\033[0m'.format(text)\n\n\ndef _highlight(source: str, lexer, formatter) -> str:\n \"\"\"\n Highlights source code. Might fail.\n\n See also:\n https://github.com/wemake-services/wemake-python-styleguide/issues/794\n\n \"\"\"\n try:\n return highlight(source, lexer, formatter)\n except Exception: # pragma: no cover\n # Might fail on some systems, when colors are set incorrectly,\n # or not available at all. In this case code will be just text.\n return source\n\n\n# Helpers:\n\ndef _count_per_filename(\n statistics: Statistics,\n error_code: str,\n) -> DefaultDict[str, int]:\n filenames: DefaultDict[str, int] = defaultdict(int)\n stats_for_error_code = statistics.statistics_for(error_code)\n\n for stat in stats_for_error_code:\n filenames[stat.filename] += stat.count\n\n return filenames\n", "path": "wemake_python_styleguide/formatter.py"}]}
| 3,171 | 926 |
gh_patches_debug_9546
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-5266
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
User order_expiry_time as the parameter to expire orders
**Describe the bug**
Currently we are expiring orders after 10 minutes. We should change it to order_expiry_time parameter.
</issue>
<code>
[start of app/api/helpers/order.py]
1 import logging
2 from datetime import timedelta, datetime, timezone
3
4 from flask import render_template
5
6 from app.api.helpers import ticketing
7 from app.api.helpers.db import save_to_db, safe_query_without_soft_deleted_entries, get_count
8 from app.api.helpers.exceptions import UnprocessableEntity, ConflictException
9 from app.api.helpers.files import create_save_pdf
10 from app.api.helpers.storage import UPLOAD_PATHS
11 from app.models import db
12 from app.models.ticket import Ticket
13 from app.models.ticket_holder import TicketHolder
14
15
16 def delete_related_attendees_for_order(order):
17 """
18 Delete the associated attendees of an order when it is cancelled/deleted/expired
19 :param order: Order whose attendees have to be deleted.
20 :return:
21 """
22 for ticket_holder in order.ticket_holders:
23 db.session.delete(ticket_holder)
24 try:
25 db.session.commit()
26 except Exception as e:
27 logging.error('DB Exception! %s' % e)
28 db.session.rollback()
29
30
31 def set_expiry_for_order(order, override=False):
32 """
33 Expire the order after the time slot(10 minutes) if the order is pending.
34 Also expires the order if we want to expire an order regardless of the state and time.
35 :param order: Order to be expired.
36 :param override: flag to force expiry.
37 :return:
38 """
39 if order and not order.paid_via and (override or (order.status == 'pending' and (
40 order.created_at +
41 timedelta(minutes=ticketing.TicketingManager.get_order_expiry())) < datetime.now(timezone.utc))):
42 order.status = 'expired'
43 delete_related_attendees_for_order(order)
44 save_to_db(order)
45 return order
46
47
48 def create_pdf_tickets_for_holder(order):
49 """
50 Create tickets for the holders of an order.
51 :param order: The order for which to create tickets for.
52 """
53 if order.status == 'completed':
54 pdf = create_save_pdf(render_template('pdf/ticket_purchaser.html', order=order),
55 UPLOAD_PATHS['pdf']['ticket_attendee'],
56 dir_path='/static/uploads/pdf/tickets/')
57 order.tickets_pdf_url = pdf
58
59 for holder in order.ticket_holders:
60 if (not holder.user) or holder.user.id != order.user_id:
61 # holder is not the order buyer.
62 pdf = create_save_pdf(render_template('pdf/ticket_attendee.html', order=order, holder=holder),
63 UPLOAD_PATHS['pdf']['ticket_attendee'],
64 dir_path='/static/uploads/pdf/tickets/')
65 else:
66 # holder is the order buyer.
67 pdf = order.tickets_pdf_url
68 holder.pdf_url = pdf
69 save_to_db(holder)
70
71 save_to_db(order)
72
73
74 def create_onsite_attendees_for_order(data):
75 """
76 Creates on site ticket holders for an order and adds it into the request data.
77 :param data: data initially passed in the POST request for order.
78 :return:
79 """
80 on_site_tickets = data.get('on_site_tickets')
81
82 if not on_site_tickets:
83 raise UnprocessableEntity({'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing')
84
85 data['ticket_holders'] = []
86
87 for on_site_ticket in on_site_tickets:
88 ticket_id = on_site_ticket['id']
89 quantity = int(on_site_ticket['quantity'])
90
91 ticket = safe_query_without_soft_deleted_entries(db, Ticket, 'id', ticket_id, 'ticket_id')
92
93 ticket_sold_count = get_count(db.session.query(TicketHolder.id).
94 filter_by(ticket_id=int(ticket.id), deleted_at=None))
95
96 # Check if the ticket is already sold out or not.
97 if ticket_sold_count + quantity > ticket.quantity:
98 # delete the already created attendees.
99 for holder in data['ticket_holders']:
100 ticket_holder = db.session.query(TicketHolder).filter(id == int(holder)).one()
101 db.session.delete(ticket_holder)
102 try:
103 db.session.commit()
104 except Exception as e:
105 logging.error('DB Exception! %s' % e)
106 db.session.rollback()
107
108 raise ConflictException(
109 {'pointer': '/data/attributes/on_site_tickets'},
110 "Ticket with id: {} already sold out. You can buy at most {} tickets".format(ticket_id,
111 ticket.quantity -
112 ticket_sold_count)
113 )
114
115 for _ in range(1, quantity):
116 ticket_holder = TicketHolder(firstname='onsite', lastname='attendee', email='[email protected]',
117 ticket_id=ticket.id, event_id=data.get('event'))
118 save_to_db(ticket_holder)
119 data['ticket_holders'].append(ticket_holder.id)
120
121 # delete from the data.
122 del data['on_site_tickets']
123
[end of app/api/helpers/order.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/api/helpers/order.py b/app/api/helpers/order.py
--- a/app/api/helpers/order.py
+++ b/app/api/helpers/order.py
@@ -38,7 +38,7 @@
"""
if order and not order.paid_via and (override or (order.status == 'pending' and (
order.created_at +
- timedelta(minutes=ticketing.TicketingManager.get_order_expiry())) < datetime.now(timezone.utc))):
+ timedelta(minutes=order.event.order_expiry_time)) < datetime.now(timezone.utc))):
order.status = 'expired'
delete_related_attendees_for_order(order)
save_to_db(order)
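
The diff above swaps the hard-coded ticketing-manager window for the event's own `order_expiry_time`. A rough sketch of the resulting expiry check, using a `SimpleNamespace` stand-in for the real SQLAlchemy order/event models (the stub values are illustrative only):

```python
# Illustrative stand-in for the patched check in set_expiry_for_order(); the
# real `order` is a SQLAlchemy model, this stub only mimics the fields used.
from datetime import datetime, timedelta, timezone
from types import SimpleNamespace

order = SimpleNamespace(
    paid_via=None,
    status="pending",
    created_at=datetime.now(timezone.utc) - timedelta(minutes=30),
    event=SimpleNamespace(order_expiry_time=10),  # per-event setting, minutes
)
override = False

should_expire = not order.paid_via and (
    override
    or (
        order.status == "pending"
        and order.created_at + timedelta(minutes=order.event.order_expiry_time)
        < datetime.now(timezone.utc)
    )
)
print(should_expire)  # True: the order is 30 minutes old, the event allows 10
```
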
|
{"golden_diff": "diff --git a/app/api/helpers/order.py b/app/api/helpers/order.py\n--- a/app/api/helpers/order.py\n+++ b/app/api/helpers/order.py\n@@ -38,7 +38,7 @@\n \"\"\"\n if order and not order.paid_via and (override or (order.status == 'pending' and (\n order.created_at +\n- timedelta(minutes=ticketing.TicketingManager.get_order_expiry())) < datetime.now(timezone.utc))):\n+ timedelta(minutes=order.event.order_expiry_time)) < datetime.now(timezone.utc))):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n", "issue": "User order_expiry_time as the parameter to expire orders\n**Describe the bug**\r\nCurrently we are expiring orders after 10 minutes. We should change it to order_expiry_time parameter. \n", "before_files": [{"content": "import logging\nfrom datetime import timedelta, datetime, timezone\n\nfrom flask import render_template\n\nfrom app.api.helpers import ticketing\nfrom app.api.helpers.db import save_to_db, safe_query_without_soft_deleted_entries, get_count\nfrom app.api.helpers.exceptions import UnprocessableEntity, ConflictException\nfrom app.api.helpers.files import create_save_pdf\nfrom app.api.helpers.storage import UPLOAD_PATHS\nfrom app.models import db\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_holder import TicketHolder\n\n\ndef delete_related_attendees_for_order(order):\n \"\"\"\n Delete the associated attendees of an order when it is cancelled/deleted/expired\n :param order: Order whose attendees have to be deleted.\n :return:\n \"\"\"\n for ticket_holder in order.ticket_holders:\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception as e:\n logging.error('DB Exception! %s' % e)\n db.session.rollback()\n\n\ndef set_expiry_for_order(order, override=False):\n \"\"\"\n Expire the order after the time slot(10 minutes) if the order is pending.\n Also expires the order if we want to expire an order regardless of the state and time.\n :param order: Order to be expired.\n :param override: flag to force expiry.\n :return:\n \"\"\"\n if order and not order.paid_via and (override or (order.status == 'pending' and (\n order.created_at +\n timedelta(minutes=ticketing.TicketingManager.get_order_expiry())) < datetime.now(timezone.utc))):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n return order\n\n\ndef create_pdf_tickets_for_holder(order):\n \"\"\"\n Create tickets for the holders of an order.\n :param order: The order for which to create tickets for.\n \"\"\"\n if order.status == 'completed':\n pdf = create_save_pdf(render_template('pdf/ticket_purchaser.html', order=order),\n UPLOAD_PATHS['pdf']['ticket_attendee'],\n dir_path='/static/uploads/pdf/tickets/')\n order.tickets_pdf_url = pdf\n\n for holder in order.ticket_holders:\n if (not holder.user) or holder.user.id != order.user_id:\n # holder is not the order buyer.\n pdf = create_save_pdf(render_template('pdf/ticket_attendee.html', order=order, holder=holder),\n UPLOAD_PATHS['pdf']['ticket_attendee'],\n dir_path='/static/uploads/pdf/tickets/')\n else:\n # holder is the order buyer.\n pdf = order.tickets_pdf_url\n holder.pdf_url = pdf\n save_to_db(holder)\n\n save_to_db(order)\n\n\ndef create_onsite_attendees_for_order(data):\n \"\"\"\n Creates on site ticket holders for an order and adds it into the request data.\n :param data: data initially passed in the POST request for order.\n :return:\n \"\"\"\n on_site_tickets = data.get('on_site_tickets')\n\n if not on_site_tickets:\n raise 
UnprocessableEntity({'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing')\n\n data['ticket_holders'] = []\n\n for on_site_ticket in on_site_tickets:\n ticket_id = on_site_ticket['id']\n quantity = int(on_site_ticket['quantity'])\n\n ticket = safe_query_without_soft_deleted_entries(db, Ticket, 'id', ticket_id, 'ticket_id')\n\n ticket_sold_count = get_count(db.session.query(TicketHolder.id).\n filter_by(ticket_id=int(ticket.id), deleted_at=None))\n\n # Check if the ticket is already sold out or not.\n if ticket_sold_count + quantity > ticket.quantity:\n # delete the already created attendees.\n for holder in data['ticket_holders']:\n ticket_holder = db.session.query(TicketHolder).filter(id == int(holder)).one()\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception as e:\n logging.error('DB Exception! %s' % e)\n db.session.rollback()\n\n raise ConflictException(\n {'pointer': '/data/attributes/on_site_tickets'},\n \"Ticket with id: {} already sold out. You can buy at most {} tickets\".format(ticket_id,\n ticket.quantity -\n ticket_sold_count)\n )\n\n for _ in range(1, quantity):\n ticket_holder = TicketHolder(firstname='onsite', lastname='attendee', email='[email protected]',\n ticket_id=ticket.id, event_id=data.get('event'))\n save_to_db(ticket_holder)\n data['ticket_holders'].append(ticket_holder.id)\n\n # delete from the data.\n del data['on_site_tickets']\n", "path": "app/api/helpers/order.py"}]}
| 1,840 | 136 |
gh_patches_debug_26569
|
rasdani/github-patches
|
git_diff
|
pymedusa__Medusa-4239
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Recommended show IMDB Popular error
Medusa Info: | Branch: master Commit: 212cd1c8a350f2d5ca40f172ed5a227d9a5cb80f Version: v0.2.3 Database: 44.9
-- | --
Python Version: | 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:25:58) [MSC v.1500 64 bit (AMD64)]
SSL Version: | OpenSSL 1.0.2k 26 Jan 2017
OS: | Windows-10-10.0.14393
Locale: | nl_NL.cp1252

```
2018-05-21 10:48:00 WARNING Thread_24 :: [212cd1c] Could not parse show tt6845390 with error: u'year'
```
</issue>
<code>
[start of medusa/show/recommendations/imdb.py]
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4
5 import logging
6 import os
7 import posixpath
8 import re
9 from builtins import object
10
11 from imdbpie import imdbpie
12
13 from medusa import helpers
14 from medusa.cache import recommended_series_cache
15 from medusa.indexers.indexer_config import INDEXER_TVDBV2
16 from medusa.logger.adapters.style import BraceAdapter
17 from medusa.session.core import MedusaSession
18 from medusa.show.recommendations.recommended import (
19 RecommendedShow, cached_get_imdb_series_details, create_key_from_series,
20 update_recommended_series_cache_index
21 )
22
23 from requests import RequestException
24
25 from six import binary_type
26
27 log = BraceAdapter(logging.getLogger(__name__))
28 log.logger.addHandler(logging.NullHandler())
29
30 imdb_api = imdbpie.Imdb()
31
32
33 class ImdbPopular(object):
34 """Gets a list of most popular TV series from imdb."""
35
36 def __init__(self):
37 """Initialize class."""
38 self.cache_subfolder = __name__.split('.')[-1] if '.' in __name__ else __name__
39 self.session = MedusaSession()
40 self.recommender = 'IMDB Popular'
41 self.default_img_src = 'poster.png'
42
43 @recommended_series_cache.cache_on_arguments(namespace='imdb', function_key_generator=create_key_from_series)
44 def _create_recommended_show(self, series, storage_key=None):
45 """Create the RecommendedShow object from the returned showobj."""
46 tvdb_id = helpers.get_tvdb_from_id(series.get('imdb_tt'), 'IMDB')
47
48 if not tvdb_id:
49 return None
50
51 rec_show = RecommendedShow(
52 self,
53 series.get('imdb_tt'),
54 series.get('name'),
55 INDEXER_TVDBV2,
56 int(tvdb_id),
57 **{'rating': series.get('rating'),
58 'votes': series.get('votes'),
59 'image_href': series.get('imdb_url')}
60 )
61
62 if series.get('image_url'):
63 rec_show.cache_image(series.get('image_url'))
64
65 return rec_show
66
67 def fetch_popular_shows(self):
68 """Get popular show information from IMDB."""
69 popular_shows = []
70
71 imdb_result = imdb_api.get_popular_shows()
72
73 for imdb_show in imdb_result['ranks']:
74 series = {}
75 imdb_id = series['imdb_tt'] = imdb_show['id'].strip('/').split('/')[-1]
76
77 if imdb_id:
78 show_details = cached_get_imdb_series_details(imdb_id)
79 if show_details:
80 try:
81 series['year'] = imdb_show['year']
82 series['name'] = imdb_show['title']
83 series['image_url_large'] = imdb_show['image']['url']
84 series['image_path'] = posixpath.join('images', 'imdb_popular',
85 os.path.basename(series['image_url_large']))
86 series['image_url'] = '{0}{1}'.format(imdb_show['image']['url'].split('V1')[0], '_SY600_AL_.jpg')
87 series['imdb_url'] = 'http://www.imdb.com{imdb_id}'.format(imdb_id=imdb_show['id'])
88 series['votes'] = show_details['ratings'].get('ratingCount', 0)
89 series['outline'] = show_details['plot'].get('outline', {}).get('text')
90 series['rating'] = show_details['ratings'].get('rating', 0)
91 except Exception as error:
92 log.warning('Could not parse show {imdb_id} with error: {error}',
93 {'imdb_id': imdb_id, 'error': error})
94 else:
95 continue
96
97 if all([series['year'], series['name'], series['imdb_tt']]):
98 popular_shows.append(series)
99
100 result = []
101 for series in popular_shows:
102 try:
103 recommended_show = self._create_recommended_show(series, storage_key=b'imdb_{0}'.format(series['imdb_tt']))
104 if recommended_show:
105 result.append(recommended_show)
106 except RequestException:
107 log.warning(
108 u'Could not connect to indexers to check if you already have'
109 u' this show in your library: {show} ({year})',
110 {'show': series['name'], 'year': series['name']}
111 )
112
113 # Update the dogpile index. This will allow us to retrieve all stored dogpile shows from the dbm.
114 update_recommended_series_cache_index('imdb', [binary_type(s.series_id) for s in result])
115
116 return result
117
118 @staticmethod
119 def change_size(image_url, factor=3):
120 """Change the size of the image we get from IMDB.
121
122 :param: image_url: Image source URL
123 :param: factor: Multiplier for the image size
124 """
125 match = re.search(r'(.+[X|Y])(\d+)(_CR\d+,\d+,)(\d+),(\d+)', image_url)
126
127 if match:
128 matches = list(match.groups())
129 matches[1] = int(matches[1]) * factor
130 matches[3] = int(matches[3]) * factor
131 matches[4] = int(matches[4]) * factor
132
133 return '{0}{1}{2}{3},{4}_AL_.jpg'.format(matches[0], matches[1], matches[2],
134 matches[3], matches[4])
135 else:
136 return image_url
137
[end of medusa/show/recommendations/imdb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/medusa/show/recommendations/imdb.py b/medusa/show/recommendations/imdb.py
--- a/medusa/show/recommendations/imdb.py
+++ b/medusa/show/recommendations/imdb.py
@@ -78,7 +78,7 @@
show_details = cached_get_imdb_series_details(imdb_id)
if show_details:
try:
- series['year'] = imdb_show['year']
+ series['year'] = imdb_show.get('year')
series['name'] = imdb_show['title']
series['image_url_large'] = imdb_show['image']['url']
series['image_path'] = posixpath.join('images', 'imdb_popular',
@@ -89,7 +89,7 @@
series['outline'] = show_details['plot'].get('outline', {}).get('text')
series['rating'] = show_details['ratings'].get('rating', 0)
except Exception as error:
- log.warning('Could not parse show {imdb_id} with error: {error}',
+ log.warning('Could not parse show {imdb_id} with error: {error!r}',
{'imdb_id': imdb_id, 'error': error})
else:
continue
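
The key change above is `imdb_show.get('year')`: a show with no year now yields `None` instead of raising the `KeyError: u'year'` from the report, and the `{error!r}` placeholder makes the logged message show the exception type rather than just the bare key. A small sketch of the difference (the payload dict is hypothetical):

```python
# Hypothetical IMDB payload missing the 'year' field, as in the bug report.
imdb_show = {"id": "/title/tt6845390/", "title": "Some Show"}

try:
    year = imdb_show["year"]                      # old lookup: raises KeyError
except KeyError as error:
    print("logged as {error}".format(error=error))    # -> logged as 'year'
    print("logged as {error!r}".format(error=error))  # -> logged as KeyError('year')

year = imdb_show.get("year")                      # patched lookup: returns None
print("year:", year)
```
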
|
{"golden_diff": "diff --git a/medusa/show/recommendations/imdb.py b/medusa/show/recommendations/imdb.py\n--- a/medusa/show/recommendations/imdb.py\n+++ b/medusa/show/recommendations/imdb.py\n@@ -78,7 +78,7 @@\n show_details = cached_get_imdb_series_details(imdb_id)\n if show_details:\n try:\n- series['year'] = imdb_show['year']\n+ series['year'] = imdb_show.get('year')\n series['name'] = imdb_show['title']\n series['image_url_large'] = imdb_show['image']['url']\n series['image_path'] = posixpath.join('images', 'imdb_popular',\n@@ -89,7 +89,7 @@\n series['outline'] = show_details['plot'].get('outline', {}).get('text')\n series['rating'] = show_details['ratings'].get('rating', 0)\n except Exception as error:\n- log.warning('Could not parse show {imdb_id} with error: {error}',\n+ log.warning('Could not parse show {imdb_id} with error: {error!r}',\n {'imdb_id': imdb_id, 'error': error})\n else:\n continue\n", "issue": "Add Recomended show IMDB Popular error\nMedusa Info: | Branch: master Commit: 212cd1c8a350f2d5ca40f172ed5a227d9a5cb80f Version: v0.2.3 Database: 44.9\r\n-- | --\r\nPython Version: | 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:25:58) [MSC v.1500 64 bit (AMD64)]\r\nSSL Version: | OpenSSL 1.0.2k 26 Jan 2017\r\nOS: | Windows-10-10.0.14393\r\nLocale: | nl_NL.cp1252\r\n\r\n\r\n\r\n```\r\n2018-05-21 10:48:00 WARNING Thread_24 :: [212cd1c] Could not parse show tt6845390 with error: u'year'\r\n```\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport posixpath\nimport re\nfrom builtins import object\n\nfrom imdbpie import imdbpie\n\nfrom medusa import helpers\nfrom medusa.cache import recommended_series_cache\nfrom medusa.indexers.indexer_config import INDEXER_TVDBV2\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.session.core import MedusaSession\nfrom medusa.show.recommendations.recommended import (\n RecommendedShow, cached_get_imdb_series_details, create_key_from_series,\n update_recommended_series_cache_index\n)\n\nfrom requests import RequestException\n\nfrom six import binary_type\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\nimdb_api = imdbpie.Imdb()\n\n\nclass ImdbPopular(object):\n \"\"\"Gets a list of most popular TV series from imdb.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize class.\"\"\"\n self.cache_subfolder = __name__.split('.')[-1] if '.' 
in __name__ else __name__\n self.session = MedusaSession()\n self.recommender = 'IMDB Popular'\n self.default_img_src = 'poster.png'\n\n @recommended_series_cache.cache_on_arguments(namespace='imdb', function_key_generator=create_key_from_series)\n def _create_recommended_show(self, series, storage_key=None):\n \"\"\"Create the RecommendedShow object from the returned showobj.\"\"\"\n tvdb_id = helpers.get_tvdb_from_id(series.get('imdb_tt'), 'IMDB')\n\n if not tvdb_id:\n return None\n\n rec_show = RecommendedShow(\n self,\n series.get('imdb_tt'),\n series.get('name'),\n INDEXER_TVDBV2,\n int(tvdb_id),\n **{'rating': series.get('rating'),\n 'votes': series.get('votes'),\n 'image_href': series.get('imdb_url')}\n )\n\n if series.get('image_url'):\n rec_show.cache_image(series.get('image_url'))\n\n return rec_show\n\n def fetch_popular_shows(self):\n \"\"\"Get popular show information from IMDB.\"\"\"\n popular_shows = []\n\n imdb_result = imdb_api.get_popular_shows()\n\n for imdb_show in imdb_result['ranks']:\n series = {}\n imdb_id = series['imdb_tt'] = imdb_show['id'].strip('/').split('/')[-1]\n\n if imdb_id:\n show_details = cached_get_imdb_series_details(imdb_id)\n if show_details:\n try:\n series['year'] = imdb_show['year']\n series['name'] = imdb_show['title']\n series['image_url_large'] = imdb_show['image']['url']\n series['image_path'] = posixpath.join('images', 'imdb_popular',\n os.path.basename(series['image_url_large']))\n series['image_url'] = '{0}{1}'.format(imdb_show['image']['url'].split('V1')[0], '_SY600_AL_.jpg')\n series['imdb_url'] = 'http://www.imdb.com{imdb_id}'.format(imdb_id=imdb_show['id'])\n series['votes'] = show_details['ratings'].get('ratingCount', 0)\n series['outline'] = show_details['plot'].get('outline', {}).get('text')\n series['rating'] = show_details['ratings'].get('rating', 0)\n except Exception as error:\n log.warning('Could not parse show {imdb_id} with error: {error}',\n {'imdb_id': imdb_id, 'error': error})\n else:\n continue\n\n if all([series['year'], series['name'], series['imdb_tt']]):\n popular_shows.append(series)\n\n result = []\n for series in popular_shows:\n try:\n recommended_show = self._create_recommended_show(series, storage_key=b'imdb_{0}'.format(series['imdb_tt']))\n if recommended_show:\n result.append(recommended_show)\n except RequestException:\n log.warning(\n u'Could not connect to indexers to check if you already have'\n u' this show in your library: {show} ({year})',\n {'show': series['name'], 'year': series['name']}\n )\n\n # Update the dogpile index. This will allow us to retrieve all stored dogpile shows from the dbm.\n update_recommended_series_cache_index('imdb', [binary_type(s.series_id) for s in result])\n\n return result\n\n @staticmethod\n def change_size(image_url, factor=3):\n \"\"\"Change the size of the image we get from IMDB.\n\n :param: image_url: Image source URL\n :param: factor: Multiplier for the image size\n \"\"\"\n match = re.search(r'(.+[X|Y])(\\d+)(_CR\\d+,\\d+,)(\\d+),(\\d+)', image_url)\n\n if match:\n matches = list(match.groups())\n matches[1] = int(matches[1]) * factor\n matches[3] = int(matches[3]) * factor\n matches[4] = int(matches[4]) * factor\n\n return '{0}{1}{2}{3},{4}_AL_.jpg'.format(matches[0], matches[1], matches[2],\n matches[3], matches[4])\n else:\n return image_url\n", "path": "medusa/show/recommendations/imdb.py"}]}
| 2,334 | 274 |
gh_patches_debug_18443
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-3398
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add missing ASDF schemas for new coordinate frames in 1.1
Whoops
</issue>
<code>
[start of sunpy/io/special/asdf/tags/coordinates/frames.py]
1 import os
2 import glob
3
4 from astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType
5
6 import sunpy.coordinates
7
8 from ...types import SunPyType
9
10 __all__ = ['SunPyCoordType']
11
12
13 SCHEMA_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__),
14 '..', '..',
15 'schemas',
16 'sunpy.org',
17 'sunpy'))
18
19
20 def _get_frames():
21 """
22 By reading the schema files, get the list of all the frames we can
23 save/load.
24 """
25 search = os.path.join(SCHEMA_PATH, 'coordinates', 'frames', '*.yaml')
26 files = glob.glob(search)
27
28 names = []
29 for fpath in files:
30 path, fname = os.path.split(fpath)
31 frame, _ = fname.split('-')
32 exclude_schemas = []
33 if frame not in exclude_schemas:
34 names.append(frame)
35
36 return names
37
38
39 class SunPyCoordType(BaseCoordType, SunPyType):
40 _tag_prefix = "coordinates/frames/"
41 name = ["coordinates/frames/" + f for f in _get_frames()]
42 types = [
43 sunpy.coordinates.HeliographicCarrington,
44 sunpy.coordinates.HeliographicStonyhurst,
45 sunpy.coordinates.Heliocentric,
46 sunpy.coordinates.Helioprojective,
47 ]
48 requires = ['sunpy', 'astropy>=3.1']
49 version = "1.0.0"
50
51 @classmethod
52 def assert_equal(cls, old, new):
53 assert isinstance(new, type(old))
54
[end of sunpy/io/special/asdf/tags/coordinates/frames.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sunpy/io/special/asdf/tags/coordinates/frames.py b/sunpy/io/special/asdf/tags/coordinates/frames.py
--- a/sunpy/io/special/asdf/tags/coordinates/frames.py
+++ b/sunpy/io/special/asdf/tags/coordinates/frames.py
@@ -3,7 +3,9 @@
from astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType
-import sunpy.coordinates
+from sunpy.coordinates import frames
+
+sunpy_frames = list(map(lambda name: getattr(frames, name), frames.__all__))
from ...types import SunPyType
@@ -39,12 +41,7 @@
class SunPyCoordType(BaseCoordType, SunPyType):
_tag_prefix = "coordinates/frames/"
name = ["coordinates/frames/" + f for f in _get_frames()]
- types = [
- sunpy.coordinates.HeliographicCarrington,
- sunpy.coordinates.HeliographicStonyhurst,
- sunpy.coordinates.Heliocentric,
- sunpy.coordinates.Helioprojective,
- ]
+ types = sunpy_frames
requires = ['sunpy', 'astropy>=3.1']
version = "1.0.0"
|
{"golden_diff": "diff --git a/sunpy/io/special/asdf/tags/coordinates/frames.py b/sunpy/io/special/asdf/tags/coordinates/frames.py\n--- a/sunpy/io/special/asdf/tags/coordinates/frames.py\n+++ b/sunpy/io/special/asdf/tags/coordinates/frames.py\n@@ -3,7 +3,9 @@\n \n from astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType\n \n-import sunpy.coordinates\n+from sunpy.coordinates import frames\n+\n+sunpy_frames = list(map(lambda name: getattr(frames, name), frames.__all__))\n \n from ...types import SunPyType\n \n@@ -39,12 +41,7 @@\n class SunPyCoordType(BaseCoordType, SunPyType):\n _tag_prefix = \"coordinates/frames/\"\n name = [\"coordinates/frames/\" + f for f in _get_frames()]\n- types = [\n- sunpy.coordinates.HeliographicCarrington,\n- sunpy.coordinates.HeliographicStonyhurst,\n- sunpy.coordinates.Heliocentric,\n- sunpy.coordinates.Helioprojective,\n- ]\n+ types = sunpy_frames\n requires = ['sunpy', 'astropy>=3.1']\n version = \"1.0.0\"\n", "issue": "Add missing ASDF schemas for new coordinate frames in 1.1\nWhoops\n", "before_files": [{"content": "import os\nimport glob\n\nfrom astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType\n\nimport sunpy.coordinates\n\nfrom ...types import SunPyType\n\n__all__ = ['SunPyCoordType']\n\n\nSCHEMA_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__),\n '..', '..',\n 'schemas',\n 'sunpy.org',\n 'sunpy'))\n\n\ndef _get_frames():\n \"\"\"\n By reading the schema files, get the list of all the frames we can\n save/load.\n \"\"\"\n search = os.path.join(SCHEMA_PATH, 'coordinates', 'frames', '*.yaml')\n files = glob.glob(search)\n\n names = []\n for fpath in files:\n path, fname = os.path.split(fpath)\n frame, _ = fname.split('-')\n exclude_schemas = []\n if frame not in exclude_schemas:\n names.append(frame)\n\n return names\n\n\nclass SunPyCoordType(BaseCoordType, SunPyType):\n _tag_prefix = \"coordinates/frames/\"\n name = [\"coordinates/frames/\" + f for f in _get_frames()]\n types = [\n sunpy.coordinates.HeliographicCarrington,\n sunpy.coordinates.HeliographicStonyhurst,\n sunpy.coordinates.Heliocentric,\n sunpy.coordinates.Helioprojective,\n ]\n requires = ['sunpy', 'astropy>=3.1']\n version = \"1.0.0\"\n\n @classmethod\n def assert_equal(cls, old, new):\n assert isinstance(new, type(old))\n", "path": "sunpy/io/special/asdf/tags/coordinates/frames.py"}]}
| 1,005 | 274 |
gh_patches_debug_16356
|
rasdani/github-patches
|
git_diff
|
PlasmaPy__PlasmaPy-2300
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ExcessStatistics class throws an error for `time_step` with an astropy unit
### Bug description
The current implementation of the `ExcessStatistics` class doesn't allow astropy units for the parameter `time_step`. See example below.
### Expected outcome
Ideally, `total_time_above_threshold`, `average_times` and `rms_times` should have the same units as `time_step`.
### Minimal complete verifiable example
```Python
from plasmapy.analysis.time_series.excess_statistics import ExcessStatistics
import astropy.units as u
signal = [0, 0, 2, 2, 0, 4]
thresholds = 1
time_step = 1 * u.s
excess_statistics = ExcessStatistics(signal, thresholds, time_step)
```
### Package versions
v2023.5.1
### Additional context
This is also relevant for PR #2275. One could also add a check whether time units are used for `time_step`, or would this be too cumbersome?
Since I implemented the `ExcessStatistics` class I would be happy to be assigned to this issue.
</issue>
<code>
[start of plasmapy/analysis/time_series/excess_statistics.py]
1 """
2 Functionality to calculate excess statistics of time series.
3
4 .. attention::
5
6 |expect-api-changes|
7 """
8
9 __all__ = ["ExcessStatistics"]
10
11
12 import numbers
13 import numpy as np
14
15 from collections.abc import Iterable
16
17
18 class ExcessStatistics:
19 """
20 Calculate total time, number of upwards crossings, average time and
21 root-mean-square time above given thresholds of a sequence.
22
23 Parameters
24 ----------
25 signal : 1D |array_like|
26 Signal to be analyzed.
27
28 thresholds : 1D |array_like|
29 Threshold values.
30
31 time_step : int
32 Time step of ``signal``.
33
34 Raises
35 ------
36 `ValueError`
37 If ``time_step`` ≤ 0.
38
39 Example
40 -------
41 >>> from plasmapy.analysis.time_series.excess_statistics import ExcessStatistics
42 >>> signal = [0, 0, 2, 2, 0, 4]
43 >>> thresholds = [1, 3, 5]
44 >>> time_step = 1
45 >>> excess_statistics = ExcessStatistics(signal, thresholds, time_step)
46 >>> excess_statistics.total_time_above_threshold
47 [3, 1, 0]
48 >>> excess_statistics.number_of_crossings
49 [2, 1, 0]
50 >>> excess_statistics.average_times
51 [1.5, 1.0, 0]
52 >>> excess_statistics.rms_times
53 [0.5, 0.0, 0]
54 """
55
56 def __init__(self, signal, thresholds, time_step):
57 if time_step <= 0:
58 raise ValueError("time_step must be positive")
59
60 # make sure thresholds is an iterable
61 if not isinstance(thresholds, Iterable):
62 thresholds = [thresholds]
63
64 self._total_time_above_threshold = []
65 self._number_of_crossings = []
66 self._average_times = []
67 self._rms_times = []
68 self.events_per_threshold = {}
69
70 self._calculate_excess_statistics(signal, thresholds, time_step)
71
72 def _calculate_excess_statistics(self, signal, thresholds, time_step):
73 for threshold in thresholds:
74 indices_above_threshold = np.where(np.array(signal) > threshold)[0]
75
76 if len(indices_above_threshold) == 0:
77 self._times_above_threshold = []
78 self._total_time_above_threshold.append(0)
79 self._number_of_crossings.append(0)
80 self._average_times.append(0)
81 self._rms_times.append(0)
82
83 else:
84 self._total_time_above_threshold.append(
85 time_step * len(indices_above_threshold)
86 )
87
88 distances_to_next_index = (
89 indices_above_threshold[1:] - indices_above_threshold[:-1]
90 )
91 split_indices = np.where(distances_to_next_index != 1)[0]
92 event_lengths = np.split(distances_to_next_index, split_indices)
93
94 # set correct length for first event
95 event_lengths[0] = np.append(event_lengths[0], 1)
96
97 self._times_above_threshold = [
98 time_step * len(event_lengths[i]) for i in range(len(event_lengths))
99 ]
100
101 self._number_of_crossings.append(len(event_lengths))
102 if indices_above_threshold[0] == 0:
103 # Don't count the first event if there is no crossing.
104 self._number_of_crossings[-1] -= 1
105
106 self._average_times.append(np.mean(self._times_above_threshold))
107 self._rms_times.append(np.std(self._times_above_threshold))
108
109 self.events_per_threshold.update({threshold: self._times_above_threshold})
110
111 def hist(self, bins=32):
112 """
113 Computes the probability density function of the time above each value
114 in ``thresholds``.
115
116 Parameters
117 ----------
118 bins : int, default: 32
119 The number of bins in the estimation of the PDF above ``thresholds``.
120
121 Returns
122 -------
123 hist: 2D `~numpy.ndarray`, shape (``thresholds.size``, ``bins`` )
124 For each value in ``thresholds``, returns the estimated PDF of time
125 above threshold.
126
127 bin_centers: 2D `~numpy.ndarray`, shape (``thresholds.size``, ``bins`` )
128 Bin centers for ``hist``.
129
130 Raises
131 ------
132 `TypeError`
133 If ``bins`` is not a positive integer.
134
135 Examples
136 --------
137 >>> from plasmapy.analysis.time_series.excess_statistics import ExcessStatistics
138 >>> signal = [0, 0, 2, 0, 4]
139 >>> thresholds = [1, 3, 5]
140 >>> time_step = 1
141 >>> excess_statistics = ExcessStatistics(signal, thresholds, time_step)
142 >>> excess_statistics.hist(2)
143 (array([[0., 2.],
144 [0., 2.],
145 [0., 0.]]), array([[0.75, 1.25],
146 [0.75, 1.25],
147 [0. , 0. ]]))
148 """
149
150 if not isinstance(bins, numbers.Integral):
151 raise TypeError("bins must be an integer")
152
153 hist = np.zeros((len(self.events_per_threshold), bins))
154 bin_centers = np.zeros((len(self.events_per_threshold), bins))
155
156 for i, threshold in enumerate(self.events_per_threshold.keys()):
157 if len(self.events_per_threshold[threshold]) >= 1:
158 hist[i, :], bin_edges = np.histogram(
159 self.events_per_threshold[threshold], bins=bins, density=True
160 )
161 bin_centers[i, :] = (bin_edges[1:] + bin_edges[:-1]) / 2
162 return hist, bin_centers
163
164 @property
165 def total_time_above_threshold(self):
166 """
167 Total time above threshold(s).
168
169 Returns
170 -------
171 total_time_above_threshold: 1D |array_like|
172 Total time above threshold for each value in ``thresholds``.
173 """
174
175 return self._total_time_above_threshold
176
177 @property
178 def number_of_crossings(self):
179 """
180 Total number of upwards crossings for threshold(s).
181
182 Returns
183 -------
184 number_of_crossings: 1D |array_like|
185 Total number of upwards crossings for each value in ``thresholds``.
186 """
187
188 return self._number_of_crossings
189
190 @property
191 def average_times(self):
192 """
193 Average time above threshold(s).
194
195 Returns
196 -------
197 average_times: 1D |array_like|
198 Average time above each value in ``thresholds``.
199 """
200
201 return self._average_times
202
203 @property
204 def rms_times(self):
205 """
206 Root-mean-square values of time above threshold(s).
207
208 Returns
209 -------
210 rms_times: 1D |array_like|
211 Root-mean-square values of time above each value in ``thresholds``.
212 """
213
214 return self._rms_times
215
[end of plasmapy/analysis/time_series/excess_statistics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plasmapy/analysis/time_series/excess_statistics.py b/plasmapy/analysis/time_series/excess_statistics.py
--- a/plasmapy/analysis/time_series/excess_statistics.py
+++ b/plasmapy/analysis/time_series/excess_statistics.py
@@ -9,6 +9,7 @@
__all__ = ["ExcessStatistics"]
+import astropy.units as u
import numbers
import numpy as np
@@ -98,6 +99,9 @@
time_step * len(event_lengths[i]) for i in range(len(event_lengths))
]
+ if isinstance(time_step, u.Quantity):
+ self._times_above_threshold *= time_step.unit
+
self._number_of_crossings.append(len(event_lengths))
if indices_above_threshold[0] == 0:
# Don't count the first event if there is no crossing.
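
The added check re-attaches the unit of `time_step` to the per-event durations, so the derived statistics (total, average, and RMS time above threshold) carry that unit as well. A minimal sketch of the unit handling outside the class (the sample numbers are made up):

```python
# Sketch of the unit handling the patch adds: plain-number event durations are
# promoted to an astropy Quantity whenever time_step carries a unit.
import astropy.units as u
import numpy as np

time_step = 1 * u.s
event_lengths = [2, 1]            # samples above threshold for each event

times_above_threshold = [time_step.value * n for n in event_lengths]
if isinstance(time_step, u.Quantity):
    times_above_threshold *= time_step.unit   # list * unit -> Quantity array

print(times_above_threshold)             # [2. 1.] s
print(np.mean(times_above_threshold))    # 1.5 s
```
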
|
{"golden_diff": "diff --git a/plasmapy/analysis/time_series/excess_statistics.py b/plasmapy/analysis/time_series/excess_statistics.py\n--- a/plasmapy/analysis/time_series/excess_statistics.py\n+++ b/plasmapy/analysis/time_series/excess_statistics.py\n@@ -9,6 +9,7 @@\n __all__ = [\"ExcessStatistics\"]\n \n \n+import astropy.units as u\n import numbers\n import numpy as np\n \n@@ -98,6 +99,9 @@\n time_step * len(event_lengths[i]) for i in range(len(event_lengths))\n ]\n \n+ if isinstance(time_step, u.Quantity):\n+ self._times_above_threshold *= time_step.unit\n+\n self._number_of_crossings.append(len(event_lengths))\n if indices_above_threshold[0] == 0:\n # Don't count the first event if there is no crossing.\n", "issue": "ExcessStatistics class throws an error for `time_step` with an astropy unit \n### Bug description\n\nThe current implementation of the `ExcessStatistics` class doesn't allow astropy units for the parameter `time_step`. See example below.\n\n### Expected outcome\n\nIdeally, `total_time_above_threshold`, `average_times` and `rms_times` should have the same units as `time_step`.\n\n### Minimal complete verifiable example\n\n```Python\nfrom plasmapy.analysis.time_series.excess_statistics import ExcessStatistics\r\nimport astropy.units as u\r\n\r\nsignal = [0, 0, 2, 2, 0, 4]\r\nthresholds = 1\r\ntime_step = 1 * u.s\r\n\r\nexcess_statistics = ExcessStatistics(signal, thresholds, time_step)\n```\n\n\n### Package versions\n\nv2023.5.1\n\n### Additional context\n\nThis is also relevant for PR #2275. One could also add a check whether time units are used for `time_step` or would this be to cumbersome? \r\n\r\nSince I implemented the `ExcessStatistics` class I would be happy to be assigned to this issue.\r\n\n", "before_files": [{"content": "\"\"\"\nFunctionality to calculate excess statistics of time series.\n\n.. 
attention::\n\n |expect-api-changes|\n\"\"\"\n\n__all__ = [\"ExcessStatistics\"]\n\n\nimport numbers\nimport numpy as np\n\nfrom collections.abc import Iterable\n\n\nclass ExcessStatistics:\n \"\"\"\n Calculate total time, number of upwards crossings, average time and\n root-mean-square time above given thresholds of a sequence.\n\n Parameters\n ----------\n signal : 1D |array_like|\n Signal to be analyzed.\n\n thresholds : 1D |array_like|\n Threshold values.\n\n time_step : int\n Time step of ``signal``.\n\n Raises\n ------\n `ValueError`\n If ``time_step`` \u2264 0.\n\n Example\n -------\n >>> from plasmapy.analysis.time_series.excess_statistics import ExcessStatistics\n >>> signal = [0, 0, 2, 2, 0, 4]\n >>> thresholds = [1, 3, 5]\n >>> time_step = 1\n >>> excess_statistics = ExcessStatistics(signal, thresholds, time_step)\n >>> excess_statistics.total_time_above_threshold\n [3, 1, 0]\n >>> excess_statistics.number_of_crossings\n [2, 1, 0]\n >>> excess_statistics.average_times\n [1.5, 1.0, 0]\n >>> excess_statistics.rms_times\n [0.5, 0.0, 0]\n \"\"\"\n\n def __init__(self, signal, thresholds, time_step):\n if time_step <= 0:\n raise ValueError(\"time_step must be positive\")\n\n # make sure thresholds is an iterable\n if not isinstance(thresholds, Iterable):\n thresholds = [thresholds]\n\n self._total_time_above_threshold = []\n self._number_of_crossings = []\n self._average_times = []\n self._rms_times = []\n self.events_per_threshold = {}\n\n self._calculate_excess_statistics(signal, thresholds, time_step)\n\n def _calculate_excess_statistics(self, signal, thresholds, time_step):\n for threshold in thresholds:\n indices_above_threshold = np.where(np.array(signal) > threshold)[0]\n\n if len(indices_above_threshold) == 0:\n self._times_above_threshold = []\n self._total_time_above_threshold.append(0)\n self._number_of_crossings.append(0)\n self._average_times.append(0)\n self._rms_times.append(0)\n\n else:\n self._total_time_above_threshold.append(\n time_step * len(indices_above_threshold)\n )\n\n distances_to_next_index = (\n indices_above_threshold[1:] - indices_above_threshold[:-1]\n )\n split_indices = np.where(distances_to_next_index != 1)[0]\n event_lengths = np.split(distances_to_next_index, split_indices)\n\n # set correct length for first event\n event_lengths[0] = np.append(event_lengths[0], 1)\n\n self._times_above_threshold = [\n time_step * len(event_lengths[i]) for i in range(len(event_lengths))\n ]\n\n self._number_of_crossings.append(len(event_lengths))\n if indices_above_threshold[0] == 0:\n # Don't count the first event if there is no crossing.\n self._number_of_crossings[-1] -= 1\n\n self._average_times.append(np.mean(self._times_above_threshold))\n self._rms_times.append(np.std(self._times_above_threshold))\n\n self.events_per_threshold.update({threshold: self._times_above_threshold})\n\n def hist(self, bins=32):\n \"\"\"\n Computes the probability density function of the time above each value\n in ``thresholds``.\n\n Parameters\n ----------\n bins : int, default: 32\n The number of bins in the estimation of the PDF above ``thresholds``.\n\n Returns\n -------\n hist: 2D `~numpy.ndarray`, shape (``thresholds.size``, ``bins`` )\n For each value in ``thresholds``, returns the estimated PDF of time\n above threshold.\n\n bin_centers: 2D `~numpy.ndarray`, shape (``thresholds.size``, ``bins`` )\n Bin centers for ``hist``.\n\n Raises\n ------\n `TypeError`\n If ``bins`` is not a positive integer.\n\n Examples\n --------\n >>> from 
plasmapy.analysis.time_series.excess_statistics import ExcessStatistics\n >>> signal = [0, 0, 2, 0, 4]\n >>> thresholds = [1, 3, 5]\n >>> time_step = 1\n >>> excess_statistics = ExcessStatistics(signal, thresholds, time_step)\n >>> excess_statistics.hist(2)\n (array([[0., 2.],\n [0., 2.],\n [0., 0.]]), array([[0.75, 1.25],\n [0.75, 1.25],\n [0. , 0. ]]))\n \"\"\"\n\n if not isinstance(bins, numbers.Integral):\n raise TypeError(\"bins must be an integer\")\n\n hist = np.zeros((len(self.events_per_threshold), bins))\n bin_centers = np.zeros((len(self.events_per_threshold), bins))\n\n for i, threshold in enumerate(self.events_per_threshold.keys()):\n if len(self.events_per_threshold[threshold]) >= 1:\n hist[i, :], bin_edges = np.histogram(\n self.events_per_threshold[threshold], bins=bins, density=True\n )\n bin_centers[i, :] = (bin_edges[1:] + bin_edges[:-1]) / 2\n return hist, bin_centers\n\n @property\n def total_time_above_threshold(self):\n \"\"\"\n Total time above threshold(s).\n\n Returns\n -------\n total_time_above_threshold: 1D |array_like|\n Total time above threshold for each value in ``thresholds``.\n \"\"\"\n\n return self._total_time_above_threshold\n\n @property\n def number_of_crossings(self):\n \"\"\"\n Total number of upwards crossings for threshold(s).\n\n Returns\n -------\n number_of_crossings: 1D |array_like|\n Total number of upwards crossings for each value in ``thresholds``.\n \"\"\"\n\n return self._number_of_crossings\n\n @property\n def average_times(self):\n \"\"\"\n Average time above threshold(s).\n\n Returns\n -------\n average_times: 1D |array_like|\n Average time above each value in ``thresholds``.\n \"\"\"\n\n return self._average_times\n\n @property\n def rms_times(self):\n \"\"\"\n Root-mean-square values of time above threshold(s).\n\n Returns\n -------\n rms_times: 1D |array_like|\n Root-mean-square values of time above each value in ``thresholds``.\n \"\"\"\n\n return self._rms_times\n", "path": "plasmapy/analysis/time_series/excess_statistics.py"}]}
| 2,827 | 189 |
gh_patches_debug_4207
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-2164
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow using tuple (and/or iterable) as an alias to the GraphQL type
## Feature Request Type
Alteration (enhancement/optimization) of existing feature(s)
## Description
Returning a `tuple` when the attribute is hinted as `list` [works](https://play.strawberry.rocks/?gist=0815d1a1f0c58a613bd356cbeb45c8a1).
But hinting the return type _correctly as a tuple_ causes an error:
```python
@strawberry.type
class Dictionary:
entries: tuple[Entry]
sources: tuple[Source]
```
```
TypeError: Unexpected type tuple[Entry]
```
Our code uses tuples and iterables whenever appropriate for robustness, efficiency, and documentation. We also use strict type hinting. It'd be great for these read-only sequences to be explicitly supported.
</issue>
<code>
[start of strawberry/annotation.py]
1 import sys
2 import typing
3 from collections import abc
4 from enum import Enum
5 from typing import ( # type: ignore[attr-defined]
6 TYPE_CHECKING,
7 Any,
8 Dict,
9 List,
10 Optional,
11 TypeVar,
12 Union,
13 _eval_type,
14 )
15
16 from typing_extensions import Annotated, get_args, get_origin
17
18 from strawberry.private import is_private
19
20
21 try:
22 from typing import ForwardRef
23 except ImportError: # pragma: no cover
24 # ForwardRef is private in python 3.6 and 3.7
25 from typing import _ForwardRef as ForwardRef # type: ignore
26
27 from strawberry.custom_scalar import ScalarDefinition
28 from strawberry.enum import EnumDefinition
29 from strawberry.lazy_type import LazyType, StrawberryLazyReference
30 from strawberry.type import (
31 StrawberryList,
32 StrawberryOptional,
33 StrawberryType,
34 StrawberryTypeVar,
35 )
36 from strawberry.types.types import TypeDefinition
37 from strawberry.unset import UNSET
38 from strawberry.utils.typing import is_generic, is_list, is_type_var, is_union
39
40
41 if TYPE_CHECKING:
42 from strawberry.union import StrawberryUnion
43
44
45 ASYNC_TYPES = (
46 abc.AsyncGenerator,
47 abc.AsyncIterable,
48 abc.AsyncIterator,
49 typing.AsyncContextManager,
50 typing.AsyncGenerator,
51 typing.AsyncIterable,
52 typing.AsyncIterator,
53 )
54
55
56 class StrawberryAnnotation:
57 def __init__(
58 self, annotation: Union[object, str], *, namespace: Optional[Dict] = None
59 ):
60 self.annotation = annotation
61 self.namespace = namespace
62
63 def __eq__(self, other: object) -> bool:
64 if not isinstance(other, StrawberryAnnotation):
65 return NotImplemented
66
67 return self.resolve() == other.resolve()
68
69 @staticmethod
70 def parse_annotated(annotation: object) -> object:
71 from strawberry.auto import StrawberryAuto
72
73 if get_origin(annotation) is Annotated:
74 annotated_args = get_args(annotation)
75 annotation_type = annotated_args[0]
76
77 for arg in annotated_args[1:]:
78 if isinstance(arg, StrawberryLazyReference):
79 assert isinstance(annotation_type, ForwardRef)
80
81 return arg.resolve_forward_ref(annotation_type)
82
83 if isinstance(arg, StrawberryAuto):
84 return arg
85
86 return StrawberryAnnotation.parse_annotated(annotation_type)
87
88 if is_union(annotation):
89 return Union[
90 tuple(
91 StrawberryAnnotation.parse_annotated(arg)
92 for arg in get_args(annotation)
93 ) # pyright: ignore
94 ] # pyright: ignore
95
96 if is_list(annotation):
97 return List[StrawberryAnnotation.parse_annotated(get_args(annotation)[0])] # type: ignore # noqa: E501
98
99 return annotation
100
101 def resolve(self) -> Union[StrawberryType, type]:
102 annotation = self.parse_annotated(self.annotation)
103
104 if isinstance(self.annotation, str):
105 annotation = ForwardRef(self.annotation)
106
107 evaled_type = _eval_type(annotation, self.namespace, None)
108
109 if is_private(evaled_type):
110 return evaled_type
111 if self._is_async_type(evaled_type):
112 evaled_type = self._strip_async_type(evaled_type)
113 if self._is_lazy_type(evaled_type):
114 return evaled_type
115
116 if self._is_generic(evaled_type):
117 if any(is_type_var(type_) for type_ in evaled_type.__args__):
118 return evaled_type
119 return self.create_concrete_type(evaled_type)
120
121 # Simply return objects that are already StrawberryTypes
122 if self._is_strawberry_type(evaled_type):
123 return evaled_type
124
125 # Everything remaining should be a raw annotation that needs to be turned into
126 # a StrawberryType
127 if self._is_enum(evaled_type):
128 return self.create_enum(evaled_type)
129 if self._is_list(evaled_type):
130 return self.create_list(evaled_type)
131 elif self._is_optional(evaled_type):
132 return self.create_optional(evaled_type)
133 elif self._is_union(evaled_type):
134 return self.create_union(evaled_type)
135 elif is_type_var(evaled_type):
136 return self.create_type_var(evaled_type)
137
138 # TODO: Raise exception now, or later?
139 # ... raise NotImplementedError(f"Unknown type {evaled_type}")
140 return evaled_type
141
142 def create_concrete_type(self, evaled_type: type) -> type:
143 if _is_object_type(evaled_type):
144 type_definition: TypeDefinition
145 type_definition = evaled_type._type_definition # type: ignore
146 return type_definition.resolve_generic(evaled_type)
147
148 raise ValueError(f"Not supported {evaled_type}")
149
150 def create_enum(self, evaled_type: Any) -> EnumDefinition:
151 return evaled_type._enum_definition
152
153 def create_list(self, evaled_type: Any) -> StrawberryList:
154 of_type = StrawberryAnnotation(
155 annotation=evaled_type.__args__[0],
156 namespace=self.namespace,
157 ).resolve()
158
159 return StrawberryList(of_type)
160
161 def create_optional(self, evaled_type: Any) -> StrawberryOptional:
162 types = evaled_type.__args__
163 non_optional_types = tuple(
164 filter(
165 lambda x: x is not type(None) and x is not type(UNSET), # noqa: E721
166 types,
167 )
168 )
169
170 # Note that passing a single type to `Union` is equivalent to not using `Union`
171 # at all. This allows us to not di any checks for how many types have been
172 # passed as we can safely use `Union` for both optional types
173 # (e.g. `Optional[str]`) and optional unions (e.g.
174 # `Optional[Union[TypeA, TypeB]]`)
175 child_type = Union[non_optional_types] # type: ignore
176
177 of_type = StrawberryAnnotation(
178 annotation=child_type,
179 namespace=self.namespace,
180 ).resolve()
181
182 return StrawberryOptional(of_type)
183
184 def create_type_var(self, evaled_type: TypeVar) -> StrawberryTypeVar:
185 return StrawberryTypeVar(evaled_type)
186
187 def create_union(self, evaled_type) -> "StrawberryUnion":
188 # Prevent import cycles
189 from strawberry.union import StrawberryUnion
190
191 # TODO: Deal with Forward References/origin
192 if isinstance(evaled_type, StrawberryUnion):
193 return evaled_type
194
195 types = evaled_type.__args__
196 union = StrawberryUnion(
197 type_annotations=tuple(StrawberryAnnotation(type_) for type_ in types),
198 )
199 return union
200
201 @classmethod
202 def _is_async_type(cls, annotation: type) -> bool:
203 origin = getattr(annotation, "__origin__", None)
204 return origin in ASYNC_TYPES
205
206 @classmethod
207 def _is_enum(cls, annotation: Any) -> bool:
208 # Type aliases are not types so we need to make sure annotation can go into
209 # issubclass
210 if not isinstance(annotation, type):
211 return False
212 return issubclass(annotation, Enum)
213
214 @classmethod
215 def _is_generic(cls, annotation: Any) -> bool:
216 if hasattr(annotation, "__origin__"):
217 return is_generic(annotation.__origin__)
218
219 return False
220
221 @classmethod
222 def _is_lazy_type(cls, annotation: Any) -> bool:
223 return isinstance(annotation, LazyType)
224
225 @classmethod
226 def _is_optional(cls, annotation: Any) -> bool:
227 """Returns True if the annotation is Optional[SomeType]"""
228
229 # Optionals are represented as unions
230 if not cls._is_union(annotation):
231 return False
232
233 types = annotation.__args__
234
235 # A Union to be optional needs to have at least one None type
236 return any(x is type(None) for x in types) # noqa: E721
237
238 @classmethod
239 def _is_list(cls, annotation: Any) -> bool:
240 """Returns True if annotation is a List"""
241
242 annotation_origin = getattr(annotation, "__origin__", None)
243
244 return annotation_origin == list
245
246 @classmethod
247 def _is_strawberry_type(cls, evaled_type: Any) -> bool:
248 # Prevent import cycles
249 from strawberry.union import StrawberryUnion
250
251 if isinstance(evaled_type, EnumDefinition):
252 return True
253 elif _is_input_type(evaled_type): # TODO: Replace with StrawberryInputObject
254 return True
255 # TODO: add support for StrawberryInterface when implemented
256 elif isinstance(evaled_type, StrawberryList):
257 return True
258 elif _is_object_type(evaled_type): # TODO: Replace with StrawberryObject
259 return True
260 elif isinstance(evaled_type, TypeDefinition):
261 return True
262 elif isinstance(evaled_type, StrawberryOptional):
263 return True
264 elif isinstance(
265 evaled_type, ScalarDefinition
266 ): # TODO: Replace with StrawberryScalar
267 return True
268 elif isinstance(evaled_type, StrawberryUnion):
269 return True
270
271 return False
272
273 @classmethod
274 def _is_union(cls, annotation: Any) -> bool:
275 """Returns True if annotation is a Union"""
276
277 # this check is needed because unions declared with the new syntax `A | B`
278 # don't have a `__origin__` property on them, but they are instances of
279 # `UnionType`, which is only available in Python 3.10+
280 if sys.version_info >= (3, 10):
281 from types import UnionType
282
283 if isinstance(annotation, UnionType):
284 return True
285
286 # unions declared as Union[A, B] fall through to this check, even on python 3.10+
287
288 annotation_origin = getattr(annotation, "__origin__", None)
289
290 return annotation_origin is typing.Union
291
292 @classmethod
293 def _strip_async_type(cls, annotation) -> type:
294 return annotation.__args__[0]
295
296 @classmethod
297 def _strip_lazy_type(cls, annotation: LazyType) -> type:
298 return annotation.resolve_type()
299
300
301 ################################################################################
302 # Temporary functions to be removed with new types
303 ################################################################################
304
305
306 def _is_input_type(type_: Any) -> bool:
307 if not _is_object_type(type_):
308 return False
309
310 return type_._type_definition.is_input
311
312
313 def _is_object_type(type_: Any) -> bool:
314 # isinstance(type_, StrawberryObjectType) # noqa: E800
315 return hasattr(type_, "_type_definition")
316
[end of strawberry/annotation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/strawberry/annotation.py b/strawberry/annotation.py
--- a/strawberry/annotation.py
+++ b/strawberry/annotation.py
@@ -241,7 +241,11 @@
annotation_origin = getattr(annotation, "__origin__", None)
- return annotation_origin == list
+ return (
+ annotation_origin == list
+ or annotation_origin == tuple
+ or annotation_origin is abc.Sequence
+ )
@classmethod
def _is_strawberry_type(cls, evaled_type: Any) -> bool:
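
The change above broadens `_is_list` so that `tuple[...]` and `Sequence[...]` annotations resolve to GraphQL list types. A minimal usage sketch of what this enables — the `Entry`/`Dictionary` names follow the issue report, while the default value and the schema call are illustrative and not taken from the project:

```python
import strawberry


@strawberry.type
class Entry:
    word: str


@strawberry.type
class Dictionary:
    # With the patched _is_list, a tuple annotation is treated like a list,
    # so this no longer raises "TypeError: Unexpected type tuple[Entry]".
    entries: tuple[Entry, ...] = ()


# Schema construction is what forces annotation resolution.
schema = strawberry.Schema(query=Dictionary)
```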
|
{"golden_diff": "diff --git a/strawberry/annotation.py b/strawberry/annotation.py\n--- a/strawberry/annotation.py\n+++ b/strawberry/annotation.py\n@@ -241,7 +241,11 @@\n \n annotation_origin = getattr(annotation, \"__origin__\", None)\n \n- return annotation_origin == list\n+ return (\n+ annotation_origin == list\n+ or annotation_origin == tuple\n+ or annotation_origin is abc.Sequence\n+ )\n \n @classmethod\n def _is_strawberry_type(cls, evaled_type: Any) -> bool:\n", "issue": "Allow using tuple (and/or iterable) as an alias to the GraphQL type\n## Feature Request Type\r\n\r\nAlteration (enhancement/optimization) of existing feature(s)\r\n\r\n## Description\r\n\r\nReturning a `tuple` when the attribute is hinted as `list` [works](https://play.strawberry.rocks/?gist=0815d1a1f0c58a613bd356cbeb45c8a1).\r\n\r\nBut hinting the return type _correctly as a tuple_ causes an error:\r\n\r\n```python\r\[email protected]\r\nclass Dictionary:\r\n entries: tuple[Entry]\r\n sources: tuple[Source]\r\n```\r\n\r\n```\r\nTypeError: Unexpected type tuple[Entry]\r\n```\r\n\r\nOur code uses tuples and iterables whenever appropriate for robustness, efficiency, and documentation. We also use strict type hinting. It'd be great for these read-only sequences to be explicitly supported.\n", "before_files": [{"content": "import sys\nimport typing\nfrom collections import abc\nfrom enum import Enum\nfrom typing import ( # type: ignore[attr-defined]\n TYPE_CHECKING,\n Any,\n Dict,\n List,\n Optional,\n TypeVar,\n Union,\n _eval_type,\n)\n\nfrom typing_extensions import Annotated, get_args, get_origin\n\nfrom strawberry.private import is_private\n\n\ntry:\n from typing import ForwardRef\nexcept ImportError: # pragma: no cover\n # ForwardRef is private in python 3.6 and 3.7\n from typing import _ForwardRef as ForwardRef # type: ignore\n\nfrom strawberry.custom_scalar import ScalarDefinition\nfrom strawberry.enum import EnumDefinition\nfrom strawberry.lazy_type import LazyType, StrawberryLazyReference\nfrom strawberry.type import (\n StrawberryList,\n StrawberryOptional,\n StrawberryType,\n StrawberryTypeVar,\n)\nfrom strawberry.types.types import TypeDefinition\nfrom strawberry.unset import UNSET\nfrom strawberry.utils.typing import is_generic, is_list, is_type_var, is_union\n\n\nif TYPE_CHECKING:\n from strawberry.union import StrawberryUnion\n\n\nASYNC_TYPES = (\n abc.AsyncGenerator,\n abc.AsyncIterable,\n abc.AsyncIterator,\n typing.AsyncContextManager,\n typing.AsyncGenerator,\n typing.AsyncIterable,\n typing.AsyncIterator,\n)\n\n\nclass StrawberryAnnotation:\n def __init__(\n self, annotation: Union[object, str], *, namespace: Optional[Dict] = None\n ):\n self.annotation = annotation\n self.namespace = namespace\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, StrawberryAnnotation):\n return NotImplemented\n\n return self.resolve() == other.resolve()\n\n @staticmethod\n def parse_annotated(annotation: object) -> object:\n from strawberry.auto import StrawberryAuto\n\n if get_origin(annotation) is Annotated:\n annotated_args = get_args(annotation)\n annotation_type = annotated_args[0]\n\n for arg in annotated_args[1:]:\n if isinstance(arg, StrawberryLazyReference):\n assert isinstance(annotation_type, ForwardRef)\n\n return arg.resolve_forward_ref(annotation_type)\n\n if isinstance(arg, StrawberryAuto):\n return arg\n\n return StrawberryAnnotation.parse_annotated(annotation_type)\n\n if is_union(annotation):\n return Union[\n tuple(\n StrawberryAnnotation.parse_annotated(arg)\n for arg in 
get_args(annotation)\n ) # pyright: ignore\n ] # pyright: ignore\n\n if is_list(annotation):\n return List[StrawberryAnnotation.parse_annotated(get_args(annotation)[0])] # type: ignore # noqa: E501\n\n return annotation\n\n def resolve(self) -> Union[StrawberryType, type]:\n annotation = self.parse_annotated(self.annotation)\n\n if isinstance(self.annotation, str):\n annotation = ForwardRef(self.annotation)\n\n evaled_type = _eval_type(annotation, self.namespace, None)\n\n if is_private(evaled_type):\n return evaled_type\n if self._is_async_type(evaled_type):\n evaled_type = self._strip_async_type(evaled_type)\n if self._is_lazy_type(evaled_type):\n return evaled_type\n\n if self._is_generic(evaled_type):\n if any(is_type_var(type_) for type_ in evaled_type.__args__):\n return evaled_type\n return self.create_concrete_type(evaled_type)\n\n # Simply return objects that are already StrawberryTypes\n if self._is_strawberry_type(evaled_type):\n return evaled_type\n\n # Everything remaining should be a raw annotation that needs to be turned into\n # a StrawberryType\n if self._is_enum(evaled_type):\n return self.create_enum(evaled_type)\n if self._is_list(evaled_type):\n return self.create_list(evaled_type)\n elif self._is_optional(evaled_type):\n return self.create_optional(evaled_type)\n elif self._is_union(evaled_type):\n return self.create_union(evaled_type)\n elif is_type_var(evaled_type):\n return self.create_type_var(evaled_type)\n\n # TODO: Raise exception now, or later?\n # ... raise NotImplementedError(f\"Unknown type {evaled_type}\")\n return evaled_type\n\n def create_concrete_type(self, evaled_type: type) -> type:\n if _is_object_type(evaled_type):\n type_definition: TypeDefinition\n type_definition = evaled_type._type_definition # type: ignore\n return type_definition.resolve_generic(evaled_type)\n\n raise ValueError(f\"Not supported {evaled_type}\")\n\n def create_enum(self, evaled_type: Any) -> EnumDefinition:\n return evaled_type._enum_definition\n\n def create_list(self, evaled_type: Any) -> StrawberryList:\n of_type = StrawberryAnnotation(\n annotation=evaled_type.__args__[0],\n namespace=self.namespace,\n ).resolve()\n\n return StrawberryList(of_type)\n\n def create_optional(self, evaled_type: Any) -> StrawberryOptional:\n types = evaled_type.__args__\n non_optional_types = tuple(\n filter(\n lambda x: x is not type(None) and x is not type(UNSET), # noqa: E721\n types,\n )\n )\n\n # Note that passing a single type to `Union` is equivalent to not using `Union`\n # at all. This allows us to not di any checks for how many types have been\n # passed as we can safely use `Union` for both optional types\n # (e.g. 
`Optional[str]`) and optional unions (e.g.\n # `Optional[Union[TypeA, TypeB]]`)\n child_type = Union[non_optional_types] # type: ignore\n\n of_type = StrawberryAnnotation(\n annotation=child_type,\n namespace=self.namespace,\n ).resolve()\n\n return StrawberryOptional(of_type)\n\n def create_type_var(self, evaled_type: TypeVar) -> StrawberryTypeVar:\n return StrawberryTypeVar(evaled_type)\n\n def create_union(self, evaled_type) -> \"StrawberryUnion\":\n # Prevent import cycles\n from strawberry.union import StrawberryUnion\n\n # TODO: Deal with Forward References/origin\n if isinstance(evaled_type, StrawberryUnion):\n return evaled_type\n\n types = evaled_type.__args__\n union = StrawberryUnion(\n type_annotations=tuple(StrawberryAnnotation(type_) for type_ in types),\n )\n return union\n\n @classmethod\n def _is_async_type(cls, annotation: type) -> bool:\n origin = getattr(annotation, \"__origin__\", None)\n return origin in ASYNC_TYPES\n\n @classmethod\n def _is_enum(cls, annotation: Any) -> bool:\n # Type aliases are not types so we need to make sure annotation can go into\n # issubclass\n if not isinstance(annotation, type):\n return False\n return issubclass(annotation, Enum)\n\n @classmethod\n def _is_generic(cls, annotation: Any) -> bool:\n if hasattr(annotation, \"__origin__\"):\n return is_generic(annotation.__origin__)\n\n return False\n\n @classmethod\n def _is_lazy_type(cls, annotation: Any) -> bool:\n return isinstance(annotation, LazyType)\n\n @classmethod\n def _is_optional(cls, annotation: Any) -> bool:\n \"\"\"Returns True if the annotation is Optional[SomeType]\"\"\"\n\n # Optionals are represented as unions\n if not cls._is_union(annotation):\n return False\n\n types = annotation.__args__\n\n # A Union to be optional needs to have at least one None type\n return any(x is type(None) for x in types) # noqa: E721\n\n @classmethod\n def _is_list(cls, annotation: Any) -> bool:\n \"\"\"Returns True if annotation is a List\"\"\"\n\n annotation_origin = getattr(annotation, \"__origin__\", None)\n\n return annotation_origin == list\n\n @classmethod\n def _is_strawberry_type(cls, evaled_type: Any) -> bool:\n # Prevent import cycles\n from strawberry.union import StrawberryUnion\n\n if isinstance(evaled_type, EnumDefinition):\n return True\n elif _is_input_type(evaled_type): # TODO: Replace with StrawberryInputObject\n return True\n # TODO: add support for StrawberryInterface when implemented\n elif isinstance(evaled_type, StrawberryList):\n return True\n elif _is_object_type(evaled_type): # TODO: Replace with StrawberryObject\n return True\n elif isinstance(evaled_type, TypeDefinition):\n return True\n elif isinstance(evaled_type, StrawberryOptional):\n return True\n elif isinstance(\n evaled_type, ScalarDefinition\n ): # TODO: Replace with StrawberryScalar\n return True\n elif isinstance(evaled_type, StrawberryUnion):\n return True\n\n return False\n\n @classmethod\n def _is_union(cls, annotation: Any) -> bool:\n \"\"\"Returns True if annotation is a Union\"\"\"\n\n # this check is needed because unions declared with the new syntax `A | B`\n # don't have a `__origin__` property on them, but they are instances of\n # `UnionType`, which is only available in Python 3.10+\n if sys.version_info >= (3, 10):\n from types import UnionType\n\n if isinstance(annotation, UnionType):\n return True\n\n # unions declared as Union[A, B] fall through to this check, even on python 3.10+\n\n annotation_origin = getattr(annotation, \"__origin__\", None)\n\n return annotation_origin is 
typing.Union\n\n @classmethod\n def _strip_async_type(cls, annotation) -> type:\n return annotation.__args__[0]\n\n @classmethod\n def _strip_lazy_type(cls, annotation: LazyType) -> type:\n return annotation.resolve_type()\n\n\n################################################################################\n# Temporary functions to be removed with new types\n################################################################################\n\n\ndef _is_input_type(type_: Any) -> bool:\n if not _is_object_type(type_):\n return False\n\n return type_._type_definition.is_input\n\n\ndef _is_object_type(type_: Any) -> bool:\n # isinstance(type_, StrawberryObjectType) # noqa: E800\n return hasattr(type_, \"_type_definition\")\n", "path": "strawberry/annotation.py"}]}
| 3,815 | 133 |
gh_patches_debug_789
|
rasdani/github-patches
|
git_diff
|
geopandas__geopandas-372
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bbox filter from read_file doesn't take advantage of fiona filtering
In line: https://github.com/geopandas/geopandas/blob/master/geopandas/io/file.py#L28
The function goes to the trouble of checking whether `bbox` is not null, but then passes the unfiltered `f` to `from_features` all the same.

Line 28 just needs to use the intended `f_filt`, so that filtered results are returned when a bbox is passed in and unfiltered results otherwise.
</issue>
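A minimal sketch of the intended flow, assuming fiona's `Collection.filter(bbox=...)` API — the filtered iterator `f_filt`, not the raw handle `f`, is what should reach `from_features` (this is exactly the one-line change in the golden diff further below):

```python
import fiona
from geopandas import GeoDataFrame


def read_file_sketch(filename, bbox=None, **kwargs):
    # Illustrative only: bbox is a (minx, miny, maxx, maxy) tuple.
    with fiona.open(filename, **kwargs) as f:
        crs = f.crs
        # fiona performs the spatial filtering at the driver level.
        f_filt = f.filter(bbox=bbox) if bbox is not None else f
        # Passing f here (as the current code does) silently drops the filter.
        return GeoDataFrame.from_features(f_filt, crs=crs)
```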
<code>
[start of geopandas/io/file.py]
1 import os
2
3 import fiona
4 import numpy as np
5 from shapely.geometry import mapping
6
7 from six import iteritems
8 from geopandas import GeoDataFrame
9
10
11 def read_file(filename, **kwargs):
12 """
13 Returns a GeoDataFrame from a file.
14
15 *filename* is either the absolute or relative path to the file to be
16 opened and *kwargs* are keyword args to be passed to the `open` method
17 in the fiona library when opening the file. For more information on
18 possible keywords, type: ``import fiona; help(fiona.open)``
19 """
20 bbox = kwargs.pop('bbox', None)
21 with fiona.open(filename, **kwargs) as f:
22 crs = f.crs
23 if bbox is not None:
24 assert len(bbox)==4
25 f_filt = f.filter(bbox=bbox)
26 else:
27 f_filt = f
28 gdf = GeoDataFrame.from_features(f, crs=crs)
29
30 return gdf
31
32
33 def to_file(df, filename, driver="ESRI Shapefile", schema=None,
34 **kwargs):
35 """
36 Write this GeoDataFrame to an OGR data source
37
38 A dictionary of supported OGR providers is available via:
39 >>> import fiona
40 >>> fiona.supported_drivers
41
42 Parameters
43 ----------
44 df : GeoDataFrame to be written
45 filename : string
46 File path or file handle to write to.
47 driver : string, default 'ESRI Shapefile'
48 The OGR format driver used to write the vector file.
49 schema : dict, default None
50 If specified, the schema dictionary is passed to Fiona to
51 better control how the file is written. If None, GeoPandas
52 will determine the schema based on each column's dtype
53
54 The *kwargs* are passed to fiona.open and can be used to write
55 to multi-layer data, store data within archives (zip files), etc.
56 """
57 if schema is None:
58 schema = infer_schema(df)
59 filename = os.path.abspath(os.path.expanduser(filename))
60 with fiona.open(filename, 'w', driver=driver, crs=df.crs,
61 schema=schema, **kwargs) as c:
62 for feature in df.iterfeatures():
63 c.write(feature)
64
65
66 def infer_schema(df):
67 try:
68 from collections import OrderedDict
69 except ImportError:
70 from ordereddict import OrderedDict
71
72 def convert_type(in_type):
73 if in_type == object:
74 return 'str'
75 out_type = type(np.asscalar(np.zeros(1, in_type))).__name__
76 if out_type == 'long':
77 out_type = 'int'
78 return out_type
79
80 properties = OrderedDict([
81 (col, convert_type(_type)) for col, _type in
82 zip(df.columns, df.dtypes) if col != df._geometry_column_name
83 ])
84
85 geom_type = _common_geom_type(df)
86 if not geom_type:
87 raise ValueError("Geometry column cannot contain mutiple "
88 "geometry types when writing to file.")
89
90 schema = {'geometry': geom_type, 'properties': properties}
91
92 return schema
93
94
95 def _common_geom_type(df):
96 # Need to check geom_types before we write to file...
97 # Some (most?) providers expect a single geometry type:
98 # Point, LineString, or Polygon
99 geom_types = df.geometry.geom_type.unique()
100
101 from os.path import commonprefix # To find longest common prefix
102 geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse
103 if not geom_type:
104 geom_type = None
105
106 return geom_type
107
[end of geopandas/io/file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/geopandas/io/file.py b/geopandas/io/file.py
--- a/geopandas/io/file.py
+++ b/geopandas/io/file.py
@@ -25,7 +25,7 @@
f_filt = f.filter(bbox=bbox)
else:
f_filt = f
- gdf = GeoDataFrame.from_features(f, crs=crs)
+ gdf = GeoDataFrame.from_features(f_filt, crs=crs)
return gdf
|
{"golden_diff": "diff --git a/geopandas/io/file.py b/geopandas/io/file.py\n--- a/geopandas/io/file.py\n+++ b/geopandas/io/file.py\n@@ -25,7 +25,7 @@\n f_filt = f.filter(bbox=bbox)\n else:\n f_filt = f\n- gdf = GeoDataFrame.from_features(f, crs=crs)\n+ gdf = GeoDataFrame.from_features(f_filt, crs=crs)\n \n return gdf\n", "issue": "bbox filter from read_file doesn't take advantage of fiona filtering\nIn line: https://github.com/geopandas/geopandas/blob/master/geopandas/io/file.py#L28\n\nThe function goes through the trouble of checking if `bbox` is not null, but just calls `f` in `from_features` just the same.\n\nLine 28 just needs to be changed to the intended `f_filt` to return filtered results or non-filtered if no bbox is passed in.\n\n", "before_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nfrom shapely.geometry import mapping\n\nfrom six import iteritems\nfrom geopandas import GeoDataFrame\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file.\n\n *filename* is either the absolute or relative path to the file to be\n opened and *kwargs* are keyword args to be passed to the `open` method\n in the fiona library when opening the file. For more information on \n possible keywords, type: ``import fiona; help(fiona.open)``\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n with fiona.open(filename, **kwargs) as f:\n crs = f.crs\n if bbox is not None:\n assert len(bbox)==4\n f_filt = f.filter(bbox=bbox)\n else:\n f_filt = f\n gdf = GeoDataFrame.from_features(f, crs=crs)\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as c:\n for feature in df.iterfeatures():\n c.write(feature)\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(_type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n geom_type = _common_geom_type(df)\n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) 
providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix # To find longest common prefix\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse\n if not geom_type:\n geom_type = None\n\n return geom_type\n", "path": "geopandas/io/file.py"}]}
| 1,643 | 108 |
gh_patches_debug_17391
|
rasdani/github-patches
|
git_diff
|
google__jax-4999
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error caused by shutil.rmtree
```
Traceback (most recent call last):
File "\\?\C:\Users\cloud\AppData\Local\Temp\Bazel.runfiles_vfpgffuf\runfiles\__main__\build\install_xla_in_source_tree.py", line 83, in <module>
shutil.rmtree(jaxlib_dir)
File "C:\Users\cloud\miniconda3\lib\shutil.py", line 516, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\cloud\miniconda3\lib\shutil.py", line 400, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\cloud\miniconda3\lib\shutil.py", line 398, in _rmtree_unsafe
os.unlink(fullname)
WindowsError: [Error 5] Access is denied.: 'D:\\jax\\build\\jaxlib\\cublas_kernels.pyd'
```
This only happens on rebuild.
The reason is that `shutil.rmtree` will not delete read-only files on Windows.
</issue>
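For background, the usual Windows-side workaround is an `onerror` handler that clears the read-only bit and retries the failing call; the fix actually adopted in the golden diff at the end of this record takes a different route and uses `shutil.copyfile` instead of `shutil.copy`, so copied artifacts do not inherit a read-only mode in the first place. A sketch of the handler approach (the path is illustrative):

```python
import os
import shutil
import stat


def _remove_readonly(func, path, _exc_info):
    # shutil.rmtree cannot unlink read-only files on Windows; clear the
    # read-only attribute and retry the operation that failed (unlink/rmdir).
    os.chmod(path, stat.S_IWRITE)
    func(path)


shutil.rmtree("build/jaxlib", onerror=_remove_readonly)
```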
<code>
[start of build/build_wheel.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Script that builds a jaxlib wheel, intended to be run via bazel run as part
16 # of the jaxlib build process.
17
18 # Most users should not run this script directly; use build.py instead.
19
20 import argparse
21 import functools
22 import glob
23 import os
24 import platform
25 import shutil
26 import subprocess
27 import sys
28 import tempfile
29
30 from bazel_tools.tools.python.runfiles import runfiles
31
32 parser = argparse.ArgumentParser()
33 parser.add_argument(
34 "--sources_path",
35 default=None,
36 help="Path in which the wheel's sources should be prepared. Optional. If "
37 "omitted, a temporary directory will be used.")
38 parser.add_argument(
39 "--output_path",
40 default=None,
41 required=True,
42 help="Path to which the output wheel should be written. Required.")
43 args = parser.parse_args()
44
45 r = runfiles.Create()
46
47
48 def _is_windows():
49 return sys.platform.startswith("win32")
50
51
52 def _copy_so(src_file, dst_dir, dst_filename=None):
53 src_filename = os.path.basename(src_file)
54 if not dst_filename:
55 if _is_windows() and src_filename.endswith(".so"):
56 dst_filename = src_filename[:-3] + ".pyd"
57 else:
58 dst_filename = src_filename
59 dst_file = os.path.join(dst_dir, dst_filename)
60 shutil.copy(src_file, dst_file)
61
62
63 def _copy_normal(src_file, dst_dir, dst_filename=None):
64 src_filename = os.path.basename(src_file)
65 dst_file = os.path.join(dst_dir, dst_filename or src_filename)
66 shutil.copy(src_file, dst_file)
67
68
69 def copy_file(src_file, dst_dir, dst_filename=None):
70 if src_file.endswith(".so"):
71 _copy_so(src_file, dst_dir, dst_filename=dst_filename)
72 else:
73 _copy_normal(src_file, dst_dir, dst_filename=dst_filename)
74
75 def patch_copy_xla_client_py(dst_dir):
76 with open(r.Rlocation("org_tensorflow/tensorflow/compiler/xla/python/xla_client.py")) as f:
77 src = f.read()
78 src = src.replace("from tensorflow.compiler.xla.python import xla_extension as _xla",
79 "from . import xla_extension as _xla")
80 with open(os.path.join(dst_dir, "xla_client.py"), "w") as f:
81 f.write(src)
82
83
84 def patch_copy_tpu_client_py(dst_dir):
85 with open(r.Rlocation("org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.py")) as f:
86 src = f.read()
87 src = src.replace("from tensorflow.compiler.xla.python import xla_extension as _xla",
88 "from . import xla_extension as _xla")
89 src = src.replace("from tensorflow.compiler.xla.python import xla_client",
90 "from . import xla_client")
91 src = src.replace(
92 "from tensorflow.compiler.xla.python.tpu_driver.client import tpu_client_extension as _tpu_client",
93 "from . import tpu_client_extension as _tpu_client")
94 with open(os.path.join(dst_dir, "tpu_client.py"), "w") as f:
95 f.write(src)
96
97
98 def prepare_wheel(sources_path):
99 """Assembles a source tree for the wheel in `sources_path`."""
100 jaxlib_dir = os.path.join(sources_path, "jaxlib")
101 os.makedirs(jaxlib_dir)
102 copy_to_jaxlib = functools.partial(copy_file, dst_dir=jaxlib_dir)
103
104 copy_file(r.Rlocation("__main__/jaxlib/setup.py"), dst_dir=sources_path)
105 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/init.py"), dst_filename="__init__.py")
106 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/lapack.so"))
107 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/_pocketfft.so"))
108 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/pocketfft_flatbuffers_py_generated.py"))
109 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/pocketfft.py"))
110 if r.Rlocation("__main__/jaxlib/cusolver_kernels.so") is not None:
111 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cusolver_kernels.so"))
112 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cublas_kernels.so"))
113 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cusolver_kernels.so"))
114 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cuda_prng_kernels.so"))
115 if r.Rlocation("__main__/jaxlib/cusolver_kernels.pyd") is not None:
116 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cusolver_kernels.pyd"))
117 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cublas_kernels.pyd"))
118 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cusolver_kernels.pyd"))
119 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cuda_prng_kernels.pyd"))
120 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/version.py"))
121 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cusolver.py"))
122 copy_to_jaxlib(r.Rlocation("__main__/jaxlib/cuda_prng.py"))
123
124 if _is_windows():
125 copy_to_jaxlib(r.Rlocation("org_tensorflow/tensorflow/compiler/xla/python/xla_extension.pyd"))
126 else:
127 copy_to_jaxlib(r.Rlocation("org_tensorflow/tensorflow/compiler/xla/python/xla_extension.so"))
128 patch_copy_xla_client_py(jaxlib_dir)
129
130 if not _is_windows():
131 copy_to_jaxlib(r.Rlocation("org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client_extension.so"))
132 patch_copy_tpu_client_py(jaxlib_dir)
133
134
135 def build_wheel(sources_path, output_path):
136 """Builds a wheel in `output_path` using the source tree in `sources_path`."""
137 platform_name = {
138 "Linux": "manylinux2010",
139 "Darwin": "macosx_10_9",
140 "Windows": "win",
141 }[platform.system()]
142 cpu_name = "amd64" if platform.system() == "Windows" else "x86_64"
143 python_tag_arg = (f"--python-tag=cp{sys.version_info.major}"
144 f"{sys.version_info.minor}")
145 platform_tag_arg = f"--plat-name={platform_name}_{cpu_name}"
146 cwd = os.getcwd()
147 os.chdir(sources_path)
148 subprocess.run([sys.executable, "setup.py", "bdist_wheel",
149 python_tag_arg, platform_tag_arg])
150 os.chdir(cwd)
151 for wheel in glob.glob(os.path.join(sources_path, "dist", "*.whl")):
152 output_file = os.path.join(output_path, os.path.basename(wheel))
153 sys.stderr.write(f"Output wheel: {output_file}\n\n")
154 sys.stderr.write(f"To install the newly-built jaxlib wheel, run:\n")
155 sys.stderr.write(f" pip install {output_file}\n\n")
156 shutil.copy(wheel, output_path)
157
158
159 tmpdir = None
160 sources_path = args.sources_path
161 if sources_path is None:
162 tmpdir = tempfile.TemporaryDirectory(prefix="jaxlib")
163 sources_path = tmpdir.name
164
165 try:
166 os.makedirs(args.output_path, exist_ok=True)
167 prepare_wheel(sources_path)
168 build_wheel(sources_path, args.output_path)
169 finally:
170 if tmpdir:
171 tmpdir.cleanup()
172
173
[end of build/build_wheel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/build/build_wheel.py b/build/build_wheel.py
--- a/build/build_wheel.py
+++ b/build/build_wheel.py
@@ -57,13 +57,19 @@
else:
dst_filename = src_filename
dst_file = os.path.join(dst_dir, dst_filename)
- shutil.copy(src_file, dst_file)
+ if _is_windows():
+ shutil.copyfile(src_file, dst_file)
+ else:
+ shutil.copy(src_file, dst_file)
def _copy_normal(src_file, dst_dir, dst_filename=None):
src_filename = os.path.basename(src_file)
dst_file = os.path.join(dst_dir, dst_filename or src_filename)
- shutil.copy(src_file, dst_file)
+ if _is_windows():
+ shutil.copyfile(src_file, dst_file)
+ else:
+ shutil.copy(src_file, dst_file)
def copy_file(src_file, dst_dir, dst_filename=None):
@@ -169,4 +175,3 @@
finally:
if tmpdir:
tmpdir.cleanup()
-
|
{"golden_diff": "diff --git a/build/build_wheel.py b/build/build_wheel.py\n--- a/build/build_wheel.py\n+++ b/build/build_wheel.py\n@@ -57,13 +57,19 @@\n else:\n dst_filename = src_filename\n dst_file = os.path.join(dst_dir, dst_filename)\n- shutil.copy(src_file, dst_file)\n+ if _is_windows():\n+ shutil.copyfile(src_file, dst_file)\n+ else:\n+ shutil.copy(src_file, dst_file)\n \n \n def _copy_normal(src_file, dst_dir, dst_filename=None):\n src_filename = os.path.basename(src_file)\n dst_file = os.path.join(dst_dir, dst_filename or src_filename)\n- shutil.copy(src_file, dst_file)\n+ if _is_windows():\n+ shutil.copyfile(src_file, dst_file)\n+ else:\n+ shutil.copy(src_file, dst_file)\n \n \n def copy_file(src_file, dst_dir, dst_filename=None):\n@@ -169,4 +175,3 @@\n finally:\n if tmpdir:\n tmpdir.cleanup()\n-\n", "issue": "Error caused by shutil.rmtree \n```\r\nTraceback (most recent call last):\r\n File \"\\\\?\\C:\\Users\\cloud\\AppData\\Local\\Temp\\Bazel.runfiles_vfpgffuf\\runfiles\\__main__\\build\\install_xla_in_source_tree.py\", line 83, in <module>\r\n shutil.rmtree(jaxlib_dir)\r\n File \"C:\\Users\\cloud\\miniconda3\\lib\\shutil.py\", line 516, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\cloud\\miniconda3\\lib\\shutil.py\", line 400, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"C:\\Users\\cloud\\miniconda3\\lib\\shutil.py\", line 398, in _rmtree_unsafe\r\n os.unlink(fullname)\r\nWindowsError: [Error 5] Access is denied.: 'D:\\\\jax\\\\build\\\\jaxlib\\\\cublas_kernels.pyd'\r\n```\r\n\r\nThis only happens on rebuild.\r\n\r\nThe reason is `shutil.rmtree` will not delete readonly file on Windows.\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Script that builds a jaxlib wheel, intended to be run via bazel run as part\n# of the jaxlib build process.\n\n# Most users should not run this script directly; use build.py instead.\n\nimport argparse\nimport functools\nimport glob\nimport os\nimport platform\nimport shutil\nimport subprocess\nimport sys\nimport tempfile\n\nfrom bazel_tools.tools.python.runfiles import runfiles\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\n \"--sources_path\",\n default=None,\n help=\"Path in which the wheel's sources should be prepared. Optional. If \"\n \"omitted, a temporary directory will be used.\")\nparser.add_argument(\n \"--output_path\",\n default=None,\n required=True,\n help=\"Path to which the output wheel should be written. 
Required.\")\nargs = parser.parse_args()\n\nr = runfiles.Create()\n\n\ndef _is_windows():\n return sys.platform.startswith(\"win32\")\n\n\ndef _copy_so(src_file, dst_dir, dst_filename=None):\n src_filename = os.path.basename(src_file)\n if not dst_filename:\n if _is_windows() and src_filename.endswith(\".so\"):\n dst_filename = src_filename[:-3] + \".pyd\"\n else:\n dst_filename = src_filename\n dst_file = os.path.join(dst_dir, dst_filename)\n shutil.copy(src_file, dst_file)\n\n\ndef _copy_normal(src_file, dst_dir, dst_filename=None):\n src_filename = os.path.basename(src_file)\n dst_file = os.path.join(dst_dir, dst_filename or src_filename)\n shutil.copy(src_file, dst_file)\n\n\ndef copy_file(src_file, dst_dir, dst_filename=None):\n if src_file.endswith(\".so\"):\n _copy_so(src_file, dst_dir, dst_filename=dst_filename)\n else:\n _copy_normal(src_file, dst_dir, dst_filename=dst_filename)\n\ndef patch_copy_xla_client_py(dst_dir):\n with open(r.Rlocation(\"org_tensorflow/tensorflow/compiler/xla/python/xla_client.py\")) as f:\n src = f.read()\n src = src.replace(\"from tensorflow.compiler.xla.python import xla_extension as _xla\",\n \"from . import xla_extension as _xla\")\n with open(os.path.join(dst_dir, \"xla_client.py\"), \"w\") as f:\n f.write(src)\n\n\ndef patch_copy_tpu_client_py(dst_dir):\n with open(r.Rlocation(\"org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.py\")) as f:\n src = f.read()\n src = src.replace(\"from tensorflow.compiler.xla.python import xla_extension as _xla\",\n \"from . import xla_extension as _xla\")\n src = src.replace(\"from tensorflow.compiler.xla.python import xla_client\",\n \"from . import xla_client\")\n src = src.replace(\n \"from tensorflow.compiler.xla.python.tpu_driver.client import tpu_client_extension as _tpu_client\",\n \"from . 
import tpu_client_extension as _tpu_client\")\n with open(os.path.join(dst_dir, \"tpu_client.py\"), \"w\") as f:\n f.write(src)\n\n\ndef prepare_wheel(sources_path):\n \"\"\"Assembles a source tree for the wheel in `sources_path`.\"\"\"\n jaxlib_dir = os.path.join(sources_path, \"jaxlib\")\n os.makedirs(jaxlib_dir)\n copy_to_jaxlib = functools.partial(copy_file, dst_dir=jaxlib_dir)\n\n copy_file(r.Rlocation(\"__main__/jaxlib/setup.py\"), dst_dir=sources_path)\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/init.py\"), dst_filename=\"__init__.py\")\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/lapack.so\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/_pocketfft.so\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/pocketfft_flatbuffers_py_generated.py\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/pocketfft.py\"))\n if r.Rlocation(\"__main__/jaxlib/cusolver_kernels.so\") is not None:\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cusolver_kernels.so\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cublas_kernels.so\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cusolver_kernels.so\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cuda_prng_kernels.so\"))\n if r.Rlocation(\"__main__/jaxlib/cusolver_kernels.pyd\") is not None:\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cusolver_kernels.pyd\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cublas_kernels.pyd\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cusolver_kernels.pyd\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cuda_prng_kernels.pyd\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/version.py\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cusolver.py\"))\n copy_to_jaxlib(r.Rlocation(\"__main__/jaxlib/cuda_prng.py\"))\n\n if _is_windows():\n copy_to_jaxlib(r.Rlocation(\"org_tensorflow/tensorflow/compiler/xla/python/xla_extension.pyd\"))\n else:\n copy_to_jaxlib(r.Rlocation(\"org_tensorflow/tensorflow/compiler/xla/python/xla_extension.so\"))\n patch_copy_xla_client_py(jaxlib_dir)\n\n if not _is_windows():\n copy_to_jaxlib(r.Rlocation(\"org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client_extension.so\"))\n patch_copy_tpu_client_py(jaxlib_dir)\n\n\ndef build_wheel(sources_path, output_path):\n \"\"\"Builds a wheel in `output_path` using the source tree in `sources_path`.\"\"\"\n platform_name = {\n \"Linux\": \"manylinux2010\",\n \"Darwin\": \"macosx_10_9\",\n \"Windows\": \"win\",\n }[platform.system()]\n cpu_name = \"amd64\" if platform.system() == \"Windows\" else \"x86_64\"\n python_tag_arg = (f\"--python-tag=cp{sys.version_info.major}\"\n f\"{sys.version_info.minor}\")\n platform_tag_arg = f\"--plat-name={platform_name}_{cpu_name}\"\n cwd = os.getcwd()\n os.chdir(sources_path)\n subprocess.run([sys.executable, \"setup.py\", \"bdist_wheel\",\n python_tag_arg, platform_tag_arg])\n os.chdir(cwd)\n for wheel in glob.glob(os.path.join(sources_path, \"dist\", \"*.whl\")):\n output_file = os.path.join(output_path, os.path.basename(wheel))\n sys.stderr.write(f\"Output wheel: {output_file}\\n\\n\")\n sys.stderr.write(f\"To install the newly-built jaxlib wheel, run:\\n\")\n sys.stderr.write(f\" pip install {output_file}\\n\\n\")\n shutil.copy(wheel, output_path)\n\n\ntmpdir = None\nsources_path = args.sources_path\nif sources_path is None:\n tmpdir = tempfile.TemporaryDirectory(prefix=\"jaxlib\")\n sources_path = tmpdir.name\n\ntry:\n os.makedirs(args.output_path, exist_ok=True)\n prepare_wheel(sources_path)\n build_wheel(sources_path, args.output_path)\nfinally:\n if tmpdir:\n 
tmpdir.cleanup()\n\n", "path": "build/build_wheel.py"}]}
| 2,977 | 231 |
gh_patches_debug_21853
|
rasdani/github-patches
|
git_diff
|
aws__aws-sam-cli-935
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sam package of template with SAR metadata fails when using sam build
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description
`sam package` fails when trying to package artifacts built by `sam build` if the template contains SAR metadata and references local files for `LicenseUrl` or `ReadmeUrl` that should be uploaded by `sam package`. Without `sam build` this works properly, as the files are present in the template directory.
### Steps to reproduce
```
/tmp $ sam init
2019-01-14 13:44:20 Generating grammar tables from /usr/lib/python3.7/lib2to3/Grammar.txt
2019-01-14 13:44:20 Generating grammar tables from /usr/lib/python3.7/lib2to3/PatternGrammar.txt
[+] Initializing project structure...
[SUCCESS] - Read sam-app/README.md for further instructions on how to proceed
[*] Project initialization is now complete
/tmp $ cd sam-app/
```
* Insert minimal SAR metadata into the template:
```
Metadata:
AWS::ServerlessRepo::Application:
Name: hello-world
Description: hello world
Author: John
SpdxLicenseId: MIT
LicenseUrl: ./LICENSE
SemanticVersion: 0.0.1
```
```
/tmp/sam-app $ echo "dummy license text" > LICENSE
/tmp/sam-app $ sam build --use-container
2019-01-14 13:45:23 Starting Build inside a container
2019-01-14 13:45:23 Found credentials in shared credentials file: ~/.aws/credentials
2019-01-14 13:45:23 Building resource 'HelloWorldFunction'
Fetching lambci/lambda:build-nodejs8.10 Docker container image......
2019-01-14 13:45:32 Mounting /tmp/sam-app/hello-world as /tmp/samcli/source:ro inside runtime container
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Package: sam package --s3-bucket <yourbucket>
'nodejs' runtime has not been validated!
Running NodejsNpmBuilder:NpmPack
Running NodejsNpmBuilder:CopySource
Running NodejsNpmBuilder:NpmInstall
/tmp/sam-app $ sam package --s3-bucket dummy
Unable to upload artifact ./LICENSE referenced by LicenseUrl parameter of AWS::ServerlessRepo::Application resource.
Parameter LicenseUrl of resource AWS::ServerlessRepo::Application refers to a file or folder that does not exist /tmp/sam-app/.aws-sam/build/LICENSE
```
### Observed result
`sam package` fails, because the `LICENSE` file isn't present in the build directory.
### Expected result
`sam package` succeeds.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Debian/unstable
2. `sam --version`: `SAM CLI, version 0.10.0`
</issue>
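The root cause is that `_update_relative_paths` in the module below only rewrites paths found under `Resources`, so `Metadata` entries such as `AWS::ServerlessRepo::Application.LicenseUrl` still point at files relative to the original template after `sam build` moves the template into `.aws-sam/build`. A sketch of the extra pass that addresses this, mirroring the golden diff at the end of this record; it is written as a standalone snippet here, whereas in the real fix it lives inside the same module and reuses the `_resolve_relative_to` helper shown in the listing:

```python
# Reuse the helper from the module shown below (real import path per the listing).
from samcli.commands._utils.template import _resolve_relative_to

# Metadata keys whose values are local paths that must be re-resolved when
# the template is written to a new directory.
_METADATA_WITH_LOCAL_PATHS = {
    "AWS::ServerlessRepo::Application": ["LicenseUrl", "ReadmeUrl"],
}


def _update_metadata_paths(template_dict, original_root, new_root):
    for resource_type, properties in template_dict.get("Metadata", {}).items():
        for path_prop_name in _METADATA_WITH_LOCAL_PATHS.get(resource_type, []):
            updated_path = _resolve_relative_to(
                properties.get(path_prop_name), original_root, new_root)
            if updated_path:
                properties[path_prop_name] = updated_path
    return template_dict
```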
<code>
[start of samcli/commands/_utils/template.py]
1 """
2 Utilities to manipulate template
3 """
4
5 import os
6 import six
7 import yaml
8
9 try:
10 import pathlib
11 except ImportError:
12 import pathlib2 as pathlib
13
14 from samcli.yamlhelper import yaml_parse, yaml_dump
15
16
17 _RESOURCES_WITH_LOCAL_PATHS = {
18 "AWS::Serverless::Function": ["CodeUri"],
19 "AWS::Serverless::Api": ["DefinitionUri"],
20 "AWS::AppSync::GraphQLSchema": ["DefinitionS3Location"],
21 "AWS::AppSync::Resolver": ["RequestMappingTemplateS3Location", "ResponseMappingTemplateS3Location"],
22 "AWS::Lambda::Function": ["Code"],
23 "AWS::ApiGateway::RestApi": ["BodyS3Location"],
24 "AWS::ElasticBeanstalk::ApplicationVersion": ["SourceBundle"],
25 "AWS::CloudFormation::Stack": ["TemplateURL"],
26 "AWS::Serverless::Application": ["Location"],
27 "AWS::Lambda::LayerVersion": ["Content"],
28 "AWS::Serverless::LayerVersion": ["ContentUri"]
29 }
30
31
32 def get_template_data(template_file):
33 """
34 Read the template file, parse it as JSON/YAML and return the template as a dictionary.
35
36 Parameters
37 ----------
38 template_file : string
39 Path to the template to read
40
41 Returns
42 -------
43 Template data as a dictionary
44 """
45
46 if not pathlib.Path(template_file).exists():
47 raise ValueError("Template file not found at {}".format(template_file))
48
49 with open(template_file, 'r') as fp:
50 try:
51 return yaml_parse(fp.read())
52 except (ValueError, yaml.YAMLError) as ex:
53 raise ValueError("Failed to parse template: {}".format(str(ex)))
54
55
56 def move_template(src_template_path,
57 dest_template_path,
58 template_dict):
59 """
60 Move the SAM/CloudFormation template from ``src_template_path`` to ``dest_template_path``. For convenience, this
61 method accepts a dictionary of template data ``template_dict`` that will be written to the destination instead of
62 reading from the source file.
63
64 SAM/CloudFormation template can contain certain properties whose value is a relative path to a local file/folder.
65 This path is always relative to the template's location. Before writing the template to ``dest_template_path`,
66 we will update these paths to be relative to the new location.
67
68 This methods updates resource properties supported by ``aws cloudformation package`` command:
69 https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html
70
71 You must use this method if you are reading a template from one location, modifying it, and writing it back to a
72 different location.
73
74 Parameters
75 ----------
76 src_template_path : str
77 Path to the original location of the template
78
79 dest_template_path : str
80 Path to the destination location where updated template should be written to
81
82 template_dict : dict
83 Dictionary containing template contents. This dictionary will be updated & written to ``dest`` location.
84 """
85
86 original_root = os.path.dirname(src_template_path)
87 new_root = os.path.dirname(dest_template_path)
88
89 # Next up, we will be writing the template to a different location. Before doing so, we should
90 # update any relative paths in the template to be relative to the new location.
91 modified_template = _update_relative_paths(template_dict,
92 original_root,
93 new_root)
94
95 with open(dest_template_path, "w") as fp:
96 fp.write(yaml_dump(modified_template))
97
98
99 def _update_relative_paths(template_dict,
100 original_root,
101 new_root):
102 """
103 SAM/CloudFormation template can contain certain properties whose value is a relative path to a local file/folder.
104 This path is usually relative to the template's location. If the template is being moved from original location
105 ``original_root`` to new location ``new_root``, use this method to update these paths to be
106 relative to ``new_root``.
107
108 After this method is complete, it is safe to write the template to ``new_root`` without
109 breaking any relative paths.
110
111 This methods updates resource properties supported by ``aws cloudformation package`` command:
112 https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html
113
114 If a property is either an absolute path or a S3 URI, this method will not update them.
115
116
117 Parameters
118 ----------
119 template_dict : dict
120 Dictionary containing template contents. This dictionary will be updated & written to ``dest`` location.
121
122 original_root : str
123 Path to the directory where all paths were originally set relative to. This is usually the directory
124 containing the template originally
125
126 new_root : str
127 Path to the new directory that all paths set relative to after this method completes.
128
129 Returns
130 -------
131 Updated dictionary
132
133 """
134
135 for _, resource in template_dict.get("Resources", {}).items():
136 resource_type = resource.get("Type")
137
138 if resource_type not in _RESOURCES_WITH_LOCAL_PATHS:
139 # Unknown resource. Skipping
140 continue
141
142 for path_prop_name in _RESOURCES_WITH_LOCAL_PATHS[resource_type]:
143 properties = resource.get("Properties", {})
144 path = properties.get(path_prop_name)
145
146 updated_path = _resolve_relative_to(path, original_root, new_root)
147 if not updated_path:
148 # This path does not need to get updated
149 continue
150
151 properties[path_prop_name] = updated_path
152
153 # AWS::Includes can be anywhere within the template dictionary. Hence we need to recurse through the
154 # dictionary in a separate method to find and update relative paths in there
155 template_dict = _update_aws_include_relative_path(template_dict, original_root, new_root)
156
157 return template_dict
158
159
160 def _update_aws_include_relative_path(template_dict, original_root, new_root):
161 """
162 Update relative paths in "AWS::Include" directive. This directive can be present at any part of the template,
163 and not just within resources.
164 """
165
166 for key, val in template_dict.items():
167 if key == "Fn::Transform":
168 if isinstance(val, dict) and val.get("Name") == "AWS::Include":
169 path = val.get("Parameters", {}).get("Location", {})
170 updated_path = _resolve_relative_to(path, original_root, new_root)
171 if not updated_path:
172 # This path does not need to get updated
173 continue
174
175 val["Parameters"]["Location"] = updated_path
176
177 # Recurse through all dictionary values
178 elif isinstance(val, dict):
179 _update_aws_include_relative_path(val, original_root, new_root)
180 elif isinstance(val, list):
181 for item in val:
182 if isinstance(item, dict):
183 _update_aws_include_relative_path(item, original_root, new_root)
184
185 return template_dict
186
187
188 def _resolve_relative_to(path, original_root, new_root):
189 """
190 If the given ``path`` is a relative path, then assume it is relative to ``original_root``. This method will
191 update the path to be resolve it relative to ``new_root`` and return.
192
193 Examples
194 -------
195 # Assume a file called template.txt at location /tmp/original/root/template.txt expressed as relative path
196 # We are trying to update it to be relative to /tmp/new/root instead of the /tmp/original/root
197 >>> result = _resolve_relative_to("template.txt", \
198 "/tmp/original/root", \
199 "/tmp/new/root")
200 >>> result
201 ../../original/root/template.txt
202
203 Returns
204 -------
205 Updated path if the given path is a relative path. None, if the path is not a relative path.
206 """
207
208 if not isinstance(path, six.string_types) \
209 or path.startswith("s3://") \
210 or os.path.isabs(path):
211 # Value is definitely NOT a relative path. It is either a S3 URi or Absolute path or not a string at all
212 return None
213
214 # Value is definitely a relative path. Change it relative to the destination directory
215 return os.path.relpath(
216 os.path.normpath(os.path.join(original_root, path)), # Absolute original path w.r.t ``original_root``
217 new_root) # Resolve the original path with respect to ``new_root``
218
[end of samcli/commands/_utils/template.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/samcli/commands/_utils/template.py b/samcli/commands/_utils/template.py
--- a/samcli/commands/_utils/template.py
+++ b/samcli/commands/_utils/template.py
@@ -14,6 +14,10 @@
from samcli.yamlhelper import yaml_parse, yaml_dump
+_METADATA_WITH_LOCAL_PATHS = {
+ "AWS::ServerlessRepo::Application": ["LicenseUrl", "ReadmeUrl"]
+}
+
_RESOURCES_WITH_LOCAL_PATHS = {
"AWS::Serverless::Function": ["CodeUri"],
"AWS::Serverless::Api": ["DefinitionUri"],
@@ -132,6 +136,22 @@
"""
+ for resource_type, properties in template_dict.get("Metadata", {}).items():
+
+ if resource_type not in _METADATA_WITH_LOCAL_PATHS:
+ # Unknown resource. Skipping
+ continue
+
+ for path_prop_name in _METADATA_WITH_LOCAL_PATHS[resource_type]:
+ path = properties.get(path_prop_name)
+
+ updated_path = _resolve_relative_to(path, original_root, new_root)
+ if not updated_path:
+ # This path does not need to get updated
+ continue
+
+ properties[path_prop_name] = updated_path
+
for _, resource in template_dict.get("Resources", {}).items():
resource_type = resource.get("Type")
|
{"golden_diff": "diff --git a/samcli/commands/_utils/template.py b/samcli/commands/_utils/template.py\n--- a/samcli/commands/_utils/template.py\n+++ b/samcli/commands/_utils/template.py\n@@ -14,6 +14,10 @@\n from samcli.yamlhelper import yaml_parse, yaml_dump\n \n \n+_METADATA_WITH_LOCAL_PATHS = {\n+ \"AWS::ServerlessRepo::Application\": [\"LicenseUrl\", \"ReadmeUrl\"]\n+}\n+\n _RESOURCES_WITH_LOCAL_PATHS = {\n \"AWS::Serverless::Function\": [\"CodeUri\"],\n \"AWS::Serverless::Api\": [\"DefinitionUri\"],\n@@ -132,6 +136,22 @@\n \n \"\"\"\n \n+ for resource_type, properties in template_dict.get(\"Metadata\", {}).items():\n+\n+ if resource_type not in _METADATA_WITH_LOCAL_PATHS:\n+ # Unknown resource. Skipping\n+ continue\n+\n+ for path_prop_name in _METADATA_WITH_LOCAL_PATHS[resource_type]:\n+ path = properties.get(path_prop_name)\n+\n+ updated_path = _resolve_relative_to(path, original_root, new_root)\n+ if not updated_path:\n+ # This path does not need to get updated\n+ continue\n+\n+ properties[path_prop_name] = updated_path\n+\n for _, resource in template_dict.get(\"Resources\", {}).items():\n resource_type = resource.get(\"Type\")\n", "issue": "sam package of template with SAR metadata fails when using sam build\n<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). \r\nIf you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->\r\n\r\n### Description\r\n\r\n`sam package` fails, when trying to package artifacts built by `sam build`, if the template contains SAR metadata and references local files for `LicenseUrl` or `ReadmeUrl` which should get uploaded by `sam package`. Without using `sam build` that works properly, as the files are present in the template directory.\r\n\r\n### Steps to reproduce\r\n\r\n```\r\n/tmp $ sam init\r\n2019-01-14 13:44:20 Generating grammar tables from /usr/lib/python3.7/lib2to3/Grammar.txt\r\n2019-01-14 13:44:20 Generating grammar tables from /usr/lib/python3.7/lib2to3/PatternGrammar.txt\r\n[+] Initializing project structure...\r\n[SUCCESS] - Read sam-app/README.md for further instructions on how to proceed\r\n[*] Project initialization is now complete\r\n/tmp $ cd sam-app/\r\n```\r\n* Insert minimal SAR-meta data into the template:\r\n```\r\nMetadata:\r\n AWS::ServerlessRepo::Application:\r\n Name: hello-world \r\n Description: hello world\r\n Author: John\r\n SpdxLicenseId: MIT \r\n LicenseUrl: ./LICENSE \r\n SemanticVersion: 0.0.1\r\n```\r\n```\r\n/tmp/sam-app $ echo \"dummy license text\" > LICENSE\r\n/tmp/sam-app $ sam build --use-container\r\n2019-01-14 13:45:23 Starting Build inside a container\r\n2019-01-14 13:45:23 Found credentials in shared credentials file: ~/.aws/credentials\r\n2019-01-14 13:45:23 Building resource 'HelloWorldFunction'\r\n\r\nFetching lambci/lambda:build-nodejs8.10 Docker container image......\r\n2019-01-14 13:45:32 Mounting /tmp/sam-app/hello-world as /tmp/samcli/source:ro inside runtime container\r\n\r\nBuild Succeeded\r\n\r\nBuilt Artifacts : .aws-sam/build\r\nBuilt Template : .aws-sam/build/template.yaml\r\n\r\nCommands you can use next\r\n=========================\r\n[*] Invoke Function: sam local invoke\r\n[*] Package: sam package --s3-bucket <yourbucket>\r\n \r\n'nodejs' runtime has not been validated!\r\nRunning NodejsNpmBuilder:NpmPack\r\nRunning NodejsNpmBuilder:CopySource\r\nRunning NodejsNpmBuilder:NpmInstall\r\n/tmp/sam-app $ sam package --s3-bucket dummy\r\n\r\nUnable to upload artifact ./LICENSE referenced by 
LicenseUrl parameter of AWS::ServerlessRepo::Application resource.\r\nParameter LicenseUrl of resource AWS::ServerlessRepo::Application refers to a file or folder that does not exist /tmp/sam-app/.aws-sam/build/LICENSE\r\n```\r\n### Observed result\r\n\r\n`sam package` fails, because the `LICENSE` file isn't present in the build directory.\r\n\r\n### Expected result\r\n\r\n`sam package` succeeds.\r\n\r\n### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)\r\n\r\n1. OS: Debian/unstable\r\n2. `sam --version`: `SAM CLI, version 0.10.0`\n", "before_files": [{"content": "\"\"\"\nUtilities to manipulate template\n\"\"\"\n\nimport os\nimport six\nimport yaml\n\ntry:\n import pathlib\nexcept ImportError:\n import pathlib2 as pathlib\n\nfrom samcli.yamlhelper import yaml_parse, yaml_dump\n\n\n_RESOURCES_WITH_LOCAL_PATHS = {\n \"AWS::Serverless::Function\": [\"CodeUri\"],\n \"AWS::Serverless::Api\": [\"DefinitionUri\"],\n \"AWS::AppSync::GraphQLSchema\": [\"DefinitionS3Location\"],\n \"AWS::AppSync::Resolver\": [\"RequestMappingTemplateS3Location\", \"ResponseMappingTemplateS3Location\"],\n \"AWS::Lambda::Function\": [\"Code\"],\n \"AWS::ApiGateway::RestApi\": [\"BodyS3Location\"],\n \"AWS::ElasticBeanstalk::ApplicationVersion\": [\"SourceBundle\"],\n \"AWS::CloudFormation::Stack\": [\"TemplateURL\"],\n \"AWS::Serverless::Application\": [\"Location\"],\n \"AWS::Lambda::LayerVersion\": [\"Content\"],\n \"AWS::Serverless::LayerVersion\": [\"ContentUri\"]\n}\n\n\ndef get_template_data(template_file):\n \"\"\"\n Read the template file, parse it as JSON/YAML and return the template as a dictionary.\n\n Parameters\n ----------\n template_file : string\n Path to the template to read\n\n Returns\n -------\n Template data as a dictionary\n \"\"\"\n\n if not pathlib.Path(template_file).exists():\n raise ValueError(\"Template file not found at {}\".format(template_file))\n\n with open(template_file, 'r') as fp:\n try:\n return yaml_parse(fp.read())\n except (ValueError, yaml.YAMLError) as ex:\n raise ValueError(\"Failed to parse template: {}\".format(str(ex)))\n\n\ndef move_template(src_template_path,\n dest_template_path,\n template_dict):\n \"\"\"\n Move the SAM/CloudFormation template from ``src_template_path`` to ``dest_template_path``. For convenience, this\n method accepts a dictionary of template data ``template_dict`` that will be written to the destination instead of\n reading from the source file.\n\n SAM/CloudFormation template can contain certain properties whose value is a relative path to a local file/folder.\n This path is always relative to the template's location. Before writing the template to ``dest_template_path`,\n we will update these paths to be relative to the new location.\n\n This methods updates resource properties supported by ``aws cloudformation package`` command:\n https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html\n\n You must use this method if you are reading a template from one location, modifying it, and writing it back to a\n different location.\n\n Parameters\n ----------\n src_template_path : str\n Path to the original location of the template\n\n dest_template_path : str\n Path to the destination location where updated template should be written to\n\n template_dict : dict\n Dictionary containing template contents. 
This dictionary will be updated & written to ``dest`` location.\n \"\"\"\n\n original_root = os.path.dirname(src_template_path)\n new_root = os.path.dirname(dest_template_path)\n\n # Next up, we will be writing the template to a different location. Before doing so, we should\n # update any relative paths in the template to be relative to the new location.\n modified_template = _update_relative_paths(template_dict,\n original_root,\n new_root)\n\n with open(dest_template_path, \"w\") as fp:\n fp.write(yaml_dump(modified_template))\n\n\ndef _update_relative_paths(template_dict,\n original_root,\n new_root):\n \"\"\"\n SAM/CloudFormation template can contain certain properties whose value is a relative path to a local file/folder.\n This path is usually relative to the template's location. If the template is being moved from original location\n ``original_root`` to new location ``new_root``, use this method to update these paths to be\n relative to ``new_root``.\n\n After this method is complete, it is safe to write the template to ``new_root`` without\n breaking any relative paths.\n\n This methods updates resource properties supported by ``aws cloudformation package`` command:\n https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html\n\n If a property is either an absolute path or a S3 URI, this method will not update them.\n\n\n Parameters\n ----------\n template_dict : dict\n Dictionary containing template contents. This dictionary will be updated & written to ``dest`` location.\n\n original_root : str\n Path to the directory where all paths were originally set relative to. This is usually the directory\n containing the template originally\n\n new_root : str\n Path to the new directory that all paths set relative to after this method completes.\n\n Returns\n -------\n Updated dictionary\n\n \"\"\"\n\n for _, resource in template_dict.get(\"Resources\", {}).items():\n resource_type = resource.get(\"Type\")\n\n if resource_type not in _RESOURCES_WITH_LOCAL_PATHS:\n # Unknown resource. Skipping\n continue\n\n for path_prop_name in _RESOURCES_WITH_LOCAL_PATHS[resource_type]:\n properties = resource.get(\"Properties\", {})\n path = properties.get(path_prop_name)\n\n updated_path = _resolve_relative_to(path, original_root, new_root)\n if not updated_path:\n # This path does not need to get updated\n continue\n\n properties[path_prop_name] = updated_path\n\n # AWS::Includes can be anywhere within the template dictionary. Hence we need to recurse through the\n # dictionary in a separate method to find and update relative paths in there\n template_dict = _update_aws_include_relative_path(template_dict, original_root, new_root)\n\n return template_dict\n\n\ndef _update_aws_include_relative_path(template_dict, original_root, new_root):\n \"\"\"\n Update relative paths in \"AWS::Include\" directive. 
This directive can be present at any part of the template,\n and not just within resources.\n \"\"\"\n\n for key, val in template_dict.items():\n if key == \"Fn::Transform\":\n if isinstance(val, dict) and val.get(\"Name\") == \"AWS::Include\":\n path = val.get(\"Parameters\", {}).get(\"Location\", {})\n updated_path = _resolve_relative_to(path, original_root, new_root)\n if not updated_path:\n # This path does not need to get updated\n continue\n\n val[\"Parameters\"][\"Location\"] = updated_path\n\n # Recurse through all dictionary values\n elif isinstance(val, dict):\n _update_aws_include_relative_path(val, original_root, new_root)\n elif isinstance(val, list):\n for item in val:\n if isinstance(item, dict):\n _update_aws_include_relative_path(item, original_root, new_root)\n\n return template_dict\n\n\ndef _resolve_relative_to(path, original_root, new_root):\n \"\"\"\n If the given ``path`` is a relative path, then assume it is relative to ``original_root``. This method will\n update the path to be resolve it relative to ``new_root`` and return.\n\n Examples\n -------\n # Assume a file called template.txt at location /tmp/original/root/template.txt expressed as relative path\n # We are trying to update it to be relative to /tmp/new/root instead of the /tmp/original/root\n >>> result = _resolve_relative_to(\"template.txt\", \\\n \"/tmp/original/root\", \\\n \"/tmp/new/root\")\n >>> result\n ../../original/root/template.txt\n\n Returns\n -------\n Updated path if the given path is a relative path. None, if the path is not a relative path.\n \"\"\"\n\n if not isinstance(path, six.string_types) \\\n or path.startswith(\"s3://\") \\\n or os.path.isabs(path):\n # Value is definitely NOT a relative path. It is either a S3 URi or Absolute path or not a string at all\n return None\n\n # Value is definitely a relative path. Change it relative to the destination directory\n return os.path.relpath(\n os.path.normpath(os.path.join(original_root, path)), # Absolute original path w.r.t ``original_root``\n new_root) # Resolve the original path with respect to ``new_root``\n", "path": "samcli/commands/_utils/template.py"}]}
| 3,593 | 311 |
gh_patches_debug_32931
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-flash-109
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make categorical_input/numerical_input optional for TabularData.from_df
## 🐛 Bug
If I have only numerical features then I still have to pass an empty list for the categorical_input. It can be optional.
### To Reproduce
Steps to reproduce the behavior:
1. df = pd.DataFrame({'digit': [1,2,3], 'odd_even':[0,1,0]})
2. datamodule = TabularData.from_df(df, 'odd_even',
numerical_input=['digit'],
)
```
TypeError Traceback (most recent call last)
<ipython-input-122-405a8bb49976> in <module>
1 datamodule = TabularData.from_df(final_data, 'target',
----> 2 numerical_input=train_x.columns.tolist(),
3 # categorical_input=[],
4 )
TypeError: from_df() missing 1 required positional argument: 'categorical_input'
```
#### Code sample
```
df = pd.DataFrame({'digit': [1,2,3], 'odd_even':[0,1,0]})
datamodule = TabularData.from_df(df, 'odd_even',
numerical_input=['digit'],
)
```
### Expected behaviour
If only one of categorical or numerical input is passed then users should not be forced to enter an empty list.
### Environment
- PyTorch Version (e.g., 1.0): '1.7.1'
- OS (e.g., Linux): MacOS
- How you installed PyTorch (`conda`, `pip`, source): pip
 - Python version: Python 3.7.9
> I would love to start my contribution to Flash by fixing this issue.
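A minimal sketch of the direction a fix could take, shown for illustration only (the helper name `_normalize_inputs` is hypothetical and not part of the Flash API; the actual change lands inside `TabularData` itself, as the patch further down in this record shows): both column lists become optional keyword arguments, passing neither raises an error, and a missing one falls back to an empty list.

```python
from typing import List, Optional


def _normalize_inputs(
    categorical_input: Optional[List] = None,
    numerical_input: Optional[List] = None,
):
    # Hypothetical helper mirroring the validation added to TabularData:
    # require at least one of the two column lists, default the other to [].
    if categorical_input is None and numerical_input is None:
        raise RuntimeError("Both `categorical_input` and `numerical_input` are None!")
    categorical_input = categorical_input if categorical_input is not None else []
    numerical_input = numerical_input if numerical_input is not None else []
    return categorical_input, numerical_input


# With this behaviour in place the reporter's call works without an empty list:
# TabularData.from_df(df, 'odd_even', numerical_input=['digit'])
print(_normalize_inputs(numerical_input=["digit"]))  # ([], ['digit'])
```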
</issue>
<code>
[start of flash/tabular/classification/data/data.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, Dict, List, Optional
15
16 import numpy as np
17 import pandas as pd
18 from pandas.core.frame import DataFrame
19 from sklearn.model_selection import train_test_split
20 from torch import Tensor
21
22 from flash.core.classification import ClassificationDataPipeline
23 from flash.core.data import DataPipeline
24 from flash.core.data.datamodule import DataModule
25 from flash.core.data.utils import _contains_any_tensor
26 from flash.tabular.classification.data.dataset import (
27 _compute_normalization,
28 _dfs_to_samples,
29 _generate_codes,
30 _impute,
31 _pre_transform,
32 PandasDataset,
33 )
34
35
36 class TabularDataPipeline(ClassificationDataPipeline):
37
38 def __init__(
39 self,
40 categorical_input: List,
41 numerical_input: List,
42 target: str,
43 mean: DataFrame,
44 std: DataFrame,
45 codes: Dict,
46 ):
47 self._categorical_input = categorical_input
48 self._numerical_input = numerical_input
49 self._target = target
50 self._mean = mean
51 self._std = std
52 self._codes = codes
53
54 def before_collate(self, samples: Any) -> Any:
55 """Override to apply transformations to samples"""
56 if _contains_any_tensor(samples, dtype=(Tensor, np.ndarray)):
57 return samples
58 if isinstance(samples, str):
59 samples = pd.read_csv(samples)
60 if isinstance(samples, DataFrame):
61 samples = [samples]
62 dfs = _pre_transform(
63 samples, self._numerical_input, self._categorical_input, self._codes, self._mean, self._std
64 )
65 return _dfs_to_samples(dfs, self._categorical_input, self._numerical_input)
66
67
68 class TabularData(DataModule):
69 """Data module for tabular tasks"""
70
71 def __init__(
72 self,
73 train_df: DataFrame,
74 categorical_input: List,
75 numerical_input: List,
76 target: str,
77 valid_df: Optional[DataFrame] = None,
78 test_df: Optional[DataFrame] = None,
79 batch_size: int = 2,
80 num_workers: Optional[int] = None,
81 ):
82 dfs = [train_df]
83 self._test_df = None
84
85 if valid_df is not None:
86 dfs.append(valid_df)
87
88 if test_df is not None:
89 # save for predict function
90 self._test_df = test_df.copy()
91 self._test_df.drop(target, axis=1)
92 dfs.append(test_df)
93
94 # impute missing values
95 dfs = _impute(dfs, numerical_input)
96
97 # compute train dataset stats
98 self.mean, self.std = _compute_normalization(dfs[0], numerical_input)
99
100 if dfs[0][target].dtype == object:
101 # if the target is a category, not an int
102 self.target_codes = _generate_codes(dfs, [target])
103 else:
104 self.target_codes = None
105
106 self.codes = _generate_codes(dfs, categorical_input)
107
108 dfs = _pre_transform(
109 dfs, numerical_input, categorical_input, self.codes, self.mean, self.std, target, self.target_codes
110 )
111
112 # normalize
113 self.cat_cols = categorical_input
114 self.num_cols = numerical_input
115
116 self._num_classes = len(train_df[target].unique())
117
118 train_ds = PandasDataset(dfs[0], categorical_input, numerical_input, target)
119 valid_ds = PandasDataset(dfs[1], categorical_input, numerical_input, target) if valid_df is not None else None
120 test_ds = PandasDataset(dfs[-1], categorical_input, numerical_input, target) if test_df is not None else None
121 super().__init__(train_ds, valid_ds, test_ds, batch_size=batch_size, num_workers=num_workers)
122
123 @property
124 def num_classes(self) -> int:
125 return self._num_classes
126
127 @property
128 def num_features(self) -> int:
129 return len(self.cat_cols) + len(self.num_cols)
130
131 @classmethod
132 def from_df(
133 cls,
134 train_df: DataFrame,
135 target: str,
136 categorical_input: List,
137 numerical_input: List,
138 valid_df: Optional[DataFrame] = None,
139 test_df: Optional[DataFrame] = None,
140 batch_size: int = 8,
141 num_workers: Optional[int] = None,
142 val_size: float = None,
143 test_size: float = None,
144 ):
145 """Creates a TabularData object from pandas DataFrames.
146
147 Args:
148 train_df: train data DataFrame
149 target: The column containing the class id.
150 categorical_input: The list of categorical columns.
151 numerical_input: The list of numerical columns.
152 valid_df: validation data DataFrame
153 test_df: test data DataFrame
154 batch_size: the batchsize to use for parallel loading. Defaults to 64.
155 num_workers: The number of workers to use for parallelized loading.
156 Defaults to None which equals the number of available CPU threads.
157 val_size: float between 0 and 1 to create a validation dataset from train dataset
158 test_size: float between 0 and 1 to create a test dataset from train validation
159
160 Returns:
161 TabularData: The constructed data module.
162
163 Examples::
164
165 text_data = TextClassificationData.from_files("train.csv", label_field="class", text_field="sentence")
166 """
167 if valid_df is None and isinstance(val_size, float) and isinstance(test_size, float):
168 assert 0 < val_size and val_size < 1
169 assert 0 < test_size and test_size < 1
170 train_df, valid_df = train_test_split(train_df, test_size=(val_size + test_size))
171
172 if test_df is None and isinstance(test_size, float):
173 assert 0 < test_size and test_size < 1
174 valid_df, test_df = train_test_split(valid_df, test_size=test_size)
175
176 datamodule = cls(
177 train_df=train_df,
178 target=target,
179 categorical_input=categorical_input,
180 numerical_input=numerical_input,
181 valid_df=valid_df,
182 test_df=test_df,
183 batch_size=batch_size,
184 num_workers=num_workers,
185 )
186 datamodule.data_pipeline = TabularDataPipeline(
187 categorical_input, numerical_input, target, datamodule.mean, datamodule.std, datamodule.codes
188 )
189
190 return datamodule
191
192 @classmethod
193 def from_csv(
194 cls,
195 train_csv: str,
196 target: str,
197 categorical_input: List,
198 numerical_input: List,
199 valid_csv: Optional[str] = None,
200 test_csv: Optional[str] = None,
201 batch_size: int = 8,
202 num_workers: Optional[int] = None,
203 val_size: Optional[float] = None,
204 test_size: Optional[float] = None,
205 **pandas_kwargs,
206 ):
207 """Creates a TextClassificationData object from pandas DataFrames.
208
209 Args:
210 train_csv: train data csv file.
211 target: The column containing the class id.
212 categorical_input: The list of categorical columns.
213 numerical_input: The list of numerical columns.
214 valid_csv: validation data csv file.
215 test_csv: test data csv file.
216 batch_size: the batchsize to use for parallel loading. Defaults to 64.
217 num_workers: The number of workers to use for parallelized loading.
218 Defaults to None which equals the number of available CPU threads.
219 val_size: float between 0 and 1 to create a validation dataset from train dataset
220 test_size: float between 0 and 1 to create a test dataset from train validation
221
222 Returns:
223 TabularData: The constructed data module.
224
225 Examples::
226
227 text_data = TabularData.from_files("train.csv", label_field="class", text_field="sentence")
228 """
229 train_df = pd.read_csv(train_csv, **pandas_kwargs)
230 valid_df = pd.read_csv(valid_csv, **pandas_kwargs) if valid_csv is not None else None
231 test_df = pd.read_csv(test_csv, **pandas_kwargs) if test_csv is not None else None
232 datamodule = cls.from_df(
233 train_df, target, categorical_input, numerical_input, valid_df, test_df, batch_size, num_workers, val_size,
234 test_size
235 )
236 return datamodule
237
238 @property
239 def emb_sizes(self) -> list:
240 """Recommended embedding sizes."""
241
242 # https://developers.googleblog.com/2017/11/introducing-tensorflow-feature-columns.html
243 # The following "formula" provides a general rule of thumb about the number of embedding dimensions:
244 # embedding_dimensions = number_of_categories**0.25
245
246 num_classes = [len(self.codes[cat]) for cat in self.cat_cols]
247 emb_dims = [max(int(n**0.25), 16) for n in num_classes]
248 return list(zip(num_classes, emb_dims))
249
250 @staticmethod
251 def default_pipeline() -> DataPipeline():
252 # TabularDataPipeline depends on the data
253 return DataPipeline()
254
[end of flash/tabular/classification/data/data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flash/tabular/classification/data/data.py b/flash/tabular/classification/data/data.py
--- a/flash/tabular/classification/data/data.py
+++ b/flash/tabular/classification/data/data.py
@@ -71,9 +71,9 @@
def __init__(
self,
train_df: DataFrame,
- categorical_input: List,
- numerical_input: List,
target: str,
+ categorical_input: Optional[List] = None,
+ numerical_input: Optional[List] = None,
valid_df: Optional[DataFrame] = None,
test_df: Optional[DataFrame] = None,
batch_size: int = 2,
@@ -82,6 +82,12 @@
dfs = [train_df]
self._test_df = None
+ if categorical_input is None and numerical_input is None:
+ raise RuntimeError('Both `categorical_input` and `numerical_input` are None!')
+
+ categorical_input = categorical_input if categorical_input is not None else []
+ numerical_input = numerical_input if numerical_input is not None else []
+
if valid_df is not None:
dfs.append(valid_df)
@@ -133,8 +139,8 @@
cls,
train_df: DataFrame,
target: str,
- categorical_input: List,
- numerical_input: List,
+ categorical_input: Optional[List] = None,
+ numerical_input: Optional[List] = None,
valid_df: Optional[DataFrame] = None,
test_df: Optional[DataFrame] = None,
batch_size: int = 8,
@@ -194,8 +200,8 @@
cls,
train_csv: str,
target: str,
- categorical_input: List,
- numerical_input: List,
+ categorical_input: Optional[List] = None,
+ numerical_input: Optional[List] = None,
valid_csv: Optional[str] = None,
test_csv: Optional[str] = None,
batch_size: int = 8,
|
{"golden_diff": "diff --git a/flash/tabular/classification/data/data.py b/flash/tabular/classification/data/data.py\n--- a/flash/tabular/classification/data/data.py\n+++ b/flash/tabular/classification/data/data.py\n@@ -71,9 +71,9 @@\n def __init__(\n self,\n train_df: DataFrame,\n- categorical_input: List,\n- numerical_input: List,\n target: str,\n+ categorical_input: Optional[List] = None,\n+ numerical_input: Optional[List] = None,\n valid_df: Optional[DataFrame] = None,\n test_df: Optional[DataFrame] = None,\n batch_size: int = 2,\n@@ -82,6 +82,12 @@\n dfs = [train_df]\n self._test_df = None\n \n+ if categorical_input is None and numerical_input is None:\n+ raise RuntimeError('Both `categorical_input` and `numerical_input` are None!')\n+\n+ categorical_input = categorical_input if categorical_input is not None else []\n+ numerical_input = numerical_input if numerical_input is not None else []\n+\n if valid_df is not None:\n dfs.append(valid_df)\n \n@@ -133,8 +139,8 @@\n cls,\n train_df: DataFrame,\n target: str,\n- categorical_input: List,\n- numerical_input: List,\n+ categorical_input: Optional[List] = None,\n+ numerical_input: Optional[List] = None,\n valid_df: Optional[DataFrame] = None,\n test_df: Optional[DataFrame] = None,\n batch_size: int = 8,\n@@ -194,8 +200,8 @@\n cls,\n train_csv: str,\n target: str,\n- categorical_input: List,\n- numerical_input: List,\n+ categorical_input: Optional[List] = None,\n+ numerical_input: Optional[List] = None,\n valid_csv: Optional[str] = None,\n test_csv: Optional[str] = None,\n batch_size: int = 8,\n", "issue": "Make categorical_input/numerical_input optional for TabularData.from_df\n## \ud83d\udc1b Bug\r\nIf I have only numerical features then I still have to pass an empty list for the categorical_input. It can be optional.\r\n\r\n\r\n### To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. df = pd.DataFrame({'digit': [1,2,3], 'odd_even':[0,1,0]})\r\n2. 
datamodule = TabularData.from_df(df, 'odd_even', \r\n numerical_input=['digit'],\r\n )\r\n\r\n```\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-122-405a8bb49976> in <module>\r\n 1 datamodule = TabularData.from_df(final_data, 'target', \r\n----> 2 numerical_input=train_x.columns.tolist(),\r\n 3 # categorical_input=[],\r\n 4 )\r\n\r\nTypeError: from_df() missing 1 required positional argument: 'categorical_input'\r\n```\r\n\r\n#### Code sample\r\n```\r\ndf = pd.DataFrame({'digit': [1,2,3], 'odd_even':[0,1,0]})\r\ndatamodule = TabularData.from_df(df, 'odd_even', \r\n numerical_input=['digit'],\r\n )\r\n```\r\n\r\n### Expected behaviour\r\nIf only one of categorical or numerical input is passed then users should not be forced to enter an empty list.\r\n\r\n### Environment\r\n\r\n - PyTorch Version (e.g., 1.0): '1.7.1'\r\n - OS (e.g., Linux): MacOS\r\n - How you installed PyTorch (`conda`, `pip`, source): pip\r\n - Python version:Python 3.7.9\r\n\r\n\r\n> I would love to start my contribution to Flash by fixing this issue.\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, List, Optional\n\nimport numpy as np\nimport pandas as pd\nfrom pandas.core.frame import DataFrame\nfrom sklearn.model_selection import train_test_split\nfrom torch import Tensor\n\nfrom flash.core.classification import ClassificationDataPipeline\nfrom flash.core.data import DataPipeline\nfrom flash.core.data.datamodule import DataModule\nfrom flash.core.data.utils import _contains_any_tensor\nfrom flash.tabular.classification.data.dataset import (\n _compute_normalization,\n _dfs_to_samples,\n _generate_codes,\n _impute,\n _pre_transform,\n PandasDataset,\n)\n\n\nclass TabularDataPipeline(ClassificationDataPipeline):\n\n def __init__(\n self,\n categorical_input: List,\n numerical_input: List,\n target: str,\n mean: DataFrame,\n std: DataFrame,\n codes: Dict,\n ):\n self._categorical_input = categorical_input\n self._numerical_input = numerical_input\n self._target = target\n self._mean = mean\n self._std = std\n self._codes = codes\n\n def before_collate(self, samples: Any) -> Any:\n \"\"\"Override to apply transformations to samples\"\"\"\n if _contains_any_tensor(samples, dtype=(Tensor, np.ndarray)):\n return samples\n if isinstance(samples, str):\n samples = pd.read_csv(samples)\n if isinstance(samples, DataFrame):\n samples = [samples]\n dfs = _pre_transform(\n samples, self._numerical_input, self._categorical_input, self._codes, self._mean, self._std\n )\n return _dfs_to_samples(dfs, self._categorical_input, self._numerical_input)\n\n\nclass TabularData(DataModule):\n \"\"\"Data module for tabular tasks\"\"\"\n\n def __init__(\n self,\n train_df: DataFrame,\n categorical_input: List,\n numerical_input: List,\n target: str,\n valid_df: Optional[DataFrame] = None,\n test_df: Optional[DataFrame] = None,\n batch_size: int = 2,\n num_workers: Optional[int] = None,\n ):\n dfs = [train_df]\n self._test_df = 
None\n\n if valid_df is not None:\n dfs.append(valid_df)\n\n if test_df is not None:\n # save for predict function\n self._test_df = test_df.copy()\n self._test_df.drop(target, axis=1)\n dfs.append(test_df)\n\n # impute missing values\n dfs = _impute(dfs, numerical_input)\n\n # compute train dataset stats\n self.mean, self.std = _compute_normalization(dfs[0], numerical_input)\n\n if dfs[0][target].dtype == object:\n # if the target is a category, not an int\n self.target_codes = _generate_codes(dfs, [target])\n else:\n self.target_codes = None\n\n self.codes = _generate_codes(dfs, categorical_input)\n\n dfs = _pre_transform(\n dfs, numerical_input, categorical_input, self.codes, self.mean, self.std, target, self.target_codes\n )\n\n # normalize\n self.cat_cols = categorical_input\n self.num_cols = numerical_input\n\n self._num_classes = len(train_df[target].unique())\n\n train_ds = PandasDataset(dfs[0], categorical_input, numerical_input, target)\n valid_ds = PandasDataset(dfs[1], categorical_input, numerical_input, target) if valid_df is not None else None\n test_ds = PandasDataset(dfs[-1], categorical_input, numerical_input, target) if test_df is not None else None\n super().__init__(train_ds, valid_ds, test_ds, batch_size=batch_size, num_workers=num_workers)\n\n @property\n def num_classes(self) -> int:\n return self._num_classes\n\n @property\n def num_features(self) -> int:\n return len(self.cat_cols) + len(self.num_cols)\n\n @classmethod\n def from_df(\n cls,\n train_df: DataFrame,\n target: str,\n categorical_input: List,\n numerical_input: List,\n valid_df: Optional[DataFrame] = None,\n test_df: Optional[DataFrame] = None,\n batch_size: int = 8,\n num_workers: Optional[int] = None,\n val_size: float = None,\n test_size: float = None,\n ):\n \"\"\"Creates a TabularData object from pandas DataFrames.\n\n Args:\n train_df: train data DataFrame\n target: The column containing the class id.\n categorical_input: The list of categorical columns.\n numerical_input: The list of numerical columns.\n valid_df: validation data DataFrame\n test_df: test data DataFrame\n batch_size: the batchsize to use for parallel loading. 
Defaults to 64.\n num_workers: The number of workers to use for parallelized loading.\n Defaults to None which equals the number of available CPU threads.\n val_size: float between 0 and 1 to create a validation dataset from train dataset\n test_size: float between 0 and 1 to create a test dataset from train validation\n\n Returns:\n TabularData: The constructed data module.\n\n Examples::\n\n text_data = TextClassificationData.from_files(\"train.csv\", label_field=\"class\", text_field=\"sentence\")\n \"\"\"\n if valid_df is None and isinstance(val_size, float) and isinstance(test_size, float):\n assert 0 < val_size and val_size < 1\n assert 0 < test_size and test_size < 1\n train_df, valid_df = train_test_split(train_df, test_size=(val_size + test_size))\n\n if test_df is None and isinstance(test_size, float):\n assert 0 < test_size and test_size < 1\n valid_df, test_df = train_test_split(valid_df, test_size=test_size)\n\n datamodule = cls(\n train_df=train_df,\n target=target,\n categorical_input=categorical_input,\n numerical_input=numerical_input,\n valid_df=valid_df,\n test_df=test_df,\n batch_size=batch_size,\n num_workers=num_workers,\n )\n datamodule.data_pipeline = TabularDataPipeline(\n categorical_input, numerical_input, target, datamodule.mean, datamodule.std, datamodule.codes\n )\n\n return datamodule\n\n @classmethod\n def from_csv(\n cls,\n train_csv: str,\n target: str,\n categorical_input: List,\n numerical_input: List,\n valid_csv: Optional[str] = None,\n test_csv: Optional[str] = None,\n batch_size: int = 8,\n num_workers: Optional[int] = None,\n val_size: Optional[float] = None,\n test_size: Optional[float] = None,\n **pandas_kwargs,\n ):\n \"\"\"Creates a TextClassificationData object from pandas DataFrames.\n\n Args:\n train_csv: train data csv file.\n target: The column containing the class id.\n categorical_input: The list of categorical columns.\n numerical_input: The list of numerical columns.\n valid_csv: validation data csv file.\n test_csv: test data csv file.\n batch_size: the batchsize to use for parallel loading. 
Defaults to 64.\n num_workers: The number of workers to use for parallelized loading.\n Defaults to None which equals the number of available CPU threads.\n val_size: float between 0 and 1 to create a validation dataset from train dataset\n test_size: float between 0 and 1 to create a test dataset from train validation\n\n Returns:\n TabularData: The constructed data module.\n\n Examples::\n\n text_data = TabularData.from_files(\"train.csv\", label_field=\"class\", text_field=\"sentence\")\n \"\"\"\n train_df = pd.read_csv(train_csv, **pandas_kwargs)\n valid_df = pd.read_csv(valid_csv, **pandas_kwargs) if valid_csv is not None else None\n test_df = pd.read_csv(test_csv, **pandas_kwargs) if test_csv is not None else None\n datamodule = cls.from_df(\n train_df, target, categorical_input, numerical_input, valid_df, test_df, batch_size, num_workers, val_size,\n test_size\n )\n return datamodule\n\n @property\n def emb_sizes(self) -> list:\n \"\"\"Recommended embedding sizes.\"\"\"\n\n # https://developers.googleblog.com/2017/11/introducing-tensorflow-feature-columns.html\n # The following \"formula\" provides a general rule of thumb about the number of embedding dimensions:\n # embedding_dimensions = number_of_categories**0.25\n\n num_classes = [len(self.codes[cat]) for cat in self.cat_cols]\n emb_dims = [max(int(n**0.25), 16) for n in num_classes]\n return list(zip(num_classes, emb_dims))\n\n @staticmethod\n def default_pipeline() -> DataPipeline():\n # TabularDataPipeline depends on the data\n return DataPipeline()\n", "path": "flash/tabular/classification/data/data.py"}]}
| 3,717 | 442 |
gh_patches_debug_13699
|
rasdani/github-patches
|
git_diff
|
nautobot__nautobot-176
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Editing an existing user token shows "create" buttons instead of "update"
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version:
* Nautobot version:
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Navigate to user "Profile"
2. Navigate to "API Tokens"
3. Click "Add token"
4. Click "Create"
5. From the token list view, click "Edit" on the token you just created
<!-- What did you expect to happen? -->
### Expected Behavior
There should be an "Update" button.
<!-- What happened instead? -->
### Observed Behavior
There are "Create" and "Create and Add Another" buttons.
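A hedged sketch of where the fix presumably lands, grounded in the patch further down in this record: the shared `generic/object_edit.html` template appears to choose its submit buttons from an `editing` flag, so the token view needs to add that key to its render context. The excerpt below is not standalone code, just the context dict built inside `TokenEditView`:

```python
# Excerpt from TokenEditView (not runnable on its own); the added `editing`
# key tells the shared edit template whether the token already exists.
context = {
    "obj": token,
    "obj_type": token._meta.verbose_name,
    "form": form,
    "return_url": reverse("user:token_list"),
    "editing": token.present_in_database,  # False for a new Token, True when editing
}
```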
</issue>
<code>
[start of nautobot/users/views.py]
1 import logging
2
3 from django.conf import settings
4 from django.contrib import messages
5 from django.contrib.auth import (
6 login as auth_login,
7 logout as auth_logout,
8 update_session_auth_hash,
9 )
10 from django.contrib.auth.mixins import LoginRequiredMixin
11 from django.contrib.auth.models import update_last_login
12 from django.contrib.auth.signals import user_logged_in
13 from django.http import HttpResponseForbidden, HttpResponseRedirect
14 from django.shortcuts import get_object_or_404, redirect, render
15 from django.urls import reverse
16 from django.utils.decorators import method_decorator
17 from django.utils.http import is_safe_url
18 from django.views.decorators.debug import sensitive_post_parameters
19 from django.views.generic import View
20
21 from nautobot.utilities.forms import ConfirmationForm
22 from .forms import LoginForm, PasswordChangeForm, TokenForm
23 from .models import Token
24
25
26 #
27 # Login/logout
28 #
29
30
31 class LoginView(View):
32 """
33 Perform user authentication via the web UI.
34 """
35
36 template_name = "login.html"
37
38 @method_decorator(sensitive_post_parameters("password"))
39 def dispatch(self, *args, **kwargs):
40 return super().dispatch(*args, **kwargs)
41
42 def get(self, request):
43 form = LoginForm(request)
44
45 if request.user.is_authenticated:
46 logger = logging.getLogger("nautobot.auth.login")
47 return self.redirect_to_next(request, logger)
48
49 return render(
50 request,
51 self.template_name,
52 {
53 "form": form,
54 },
55 )
56
57 def post(self, request):
58 logger = logging.getLogger("nautobot.auth.login")
59 form = LoginForm(request, data=request.POST)
60
61 if form.is_valid():
62 logger.debug("Login form validation was successful")
63
64 # If maintenance mode is enabled, assume the database is read-only, and disable updating the user's
65 # last_login time upon authentication.
66 if settings.MAINTENANCE_MODE:
67 logger.warning("Maintenance mode enabled: disabling update of most recent login time")
68 user_logged_in.disconnect(update_last_login, dispatch_uid="update_last_login")
69
70 # Authenticate user
71 auth_login(request, form.get_user())
72 logger.info(f"User {request.user} successfully authenticated")
73 messages.info(request, "Logged in as {}.".format(request.user))
74
75 return self.redirect_to_next(request, logger)
76
77 else:
78 logger.debug("Login form validation failed")
79
80 return render(
81 request,
82 self.template_name,
83 {
84 "form": form,
85 },
86 )
87
88 def redirect_to_next(self, request, logger):
89 if request.method == "POST":
90 redirect_to = request.POST.get("next", reverse("home"))
91 else:
92 redirect_to = request.GET.get("next", reverse("home"))
93
94 if redirect_to and not is_safe_url(url=redirect_to, allowed_hosts=request.get_host()):
95 logger.warning(f"Ignoring unsafe 'next' URL passed to login form: {redirect_to}")
96 redirect_to = reverse("home")
97
98 logger.debug(f"Redirecting user to {redirect_to}")
99 return HttpResponseRedirect(redirect_to)
100
101
102 class LogoutView(View):
103 """
104 Deauthenticate a web user.
105 """
106
107 def get(self, request):
108 logger = logging.getLogger("nautobot.auth.logout")
109
110 # Log out the user
111 username = request.user
112 auth_logout(request)
113 logger.info(f"User {username} has logged out")
114 messages.info(request, "You have logged out.")
115
116 # Delete session key cookie (if set) upon logout
117 response = HttpResponseRedirect(reverse("home"))
118 response.delete_cookie("session_key")
119
120 return response
121
122
123 #
124 # User profiles
125 #
126
127
128 class ProfileView(LoginRequiredMixin, View):
129 template_name = "users/profile.html"
130
131 def get(self, request):
132
133 return render(
134 request,
135 self.template_name,
136 {
137 "active_tab": "profile",
138 },
139 )
140
141
142 class UserConfigView(LoginRequiredMixin, View):
143 template_name = "users/preferences.html"
144
145 def get(self, request):
146
147 return render(
148 request,
149 self.template_name,
150 {
151 "preferences": request.user.all_config(),
152 "active_tab": "preferences",
153 },
154 )
155
156 def post(self, request):
157 user = request.user
158 data = user.all_config()
159
160 # Delete selected preferences
161 for key in request.POST.getlist("pk"):
162 if key in data:
163 user.clear_config(key)
164 user.save()
165 messages.success(request, "Your preferences have been updated.")
166
167 return redirect("user:preferences")
168
169
170 class ChangePasswordView(LoginRequiredMixin, View):
171 template_name = "users/change_password.html"
172
173 def get(self, request):
174 # LDAP users cannot change their password here
175 if getattr(request.user, "ldap_username", None):
176 messages.warning(
177 request,
178 "LDAP-authenticated user credentials cannot be changed within Nautobot.",
179 )
180 return redirect("user:profile")
181
182 form = PasswordChangeForm(user=request.user)
183
184 return render(
185 request,
186 self.template_name,
187 {
188 "form": form,
189 "active_tab": "change_password",
190 },
191 )
192
193 def post(self, request):
194 form = PasswordChangeForm(user=request.user, data=request.POST)
195 if form.is_valid():
196 form.save()
197 update_session_auth_hash(request, form.user)
198 messages.success(request, "Your password has been changed successfully.")
199 return redirect("user:profile")
200
201 return render(
202 request,
203 self.template_name,
204 {
205 "form": form,
206 "active_tab": "change_password",
207 },
208 )
209
210
211 #
212 # API tokens
213 #
214
215
216 class TokenListView(LoginRequiredMixin, View):
217 def get(self, request):
218
219 tokens = Token.objects.filter(user=request.user)
220
221 return render(
222 request,
223 "users/api_tokens.html",
224 {
225 "tokens": tokens,
226 "active_tab": "api_tokens",
227 },
228 )
229
230
231 class TokenEditView(LoginRequiredMixin, View):
232 def get(self, request, pk=None):
233
234 if pk is not None:
235 if not request.user.has_perm("users.change_token"):
236 return HttpResponseForbidden()
237 token = get_object_or_404(Token.objects.filter(user=request.user), pk=pk)
238 else:
239 if not request.user.has_perm("users.add_token"):
240 return HttpResponseForbidden()
241 token = Token(user=request.user)
242
243 form = TokenForm(instance=token)
244
245 return render(
246 request,
247 "generic/object_edit.html",
248 {
249 "obj": token,
250 "obj_type": token._meta.verbose_name,
251 "form": form,
252 "return_url": reverse("user:token_list"),
253 },
254 )
255
256 def post(self, request, pk=None):
257
258 if pk is not None:
259 token = get_object_or_404(Token.objects.filter(user=request.user), pk=pk)
260 form = TokenForm(request.POST, instance=token)
261 else:
262 token = Token()
263 form = TokenForm(request.POST)
264
265 if form.is_valid():
266 token = form.save(commit=False)
267 token.user = request.user
268 token.save()
269
270 msg = "Modified token {}".format(token) if pk else "Created token {}".format(token)
271 messages.success(request, msg)
272
273 if "_addanother" in request.POST:
274 return redirect(request.path)
275 else:
276 return redirect("user:token_list")
277
278 return render(
279 request,
280 "generic/object_edit.html",
281 {
282 "obj": token,
283 "obj_type": token._meta.verbose_name,
284 "form": form,
285 "return_url": reverse("user:token_list"),
286 },
287 )
288
289
290 class TokenDeleteView(LoginRequiredMixin, View):
291 def get(self, request, pk):
292
293 token = get_object_or_404(Token.objects.filter(user=request.user), pk=pk)
294 initial_data = {
295 "return_url": reverse("user:token_list"),
296 }
297 form = ConfirmationForm(initial=initial_data)
298
299 return render(
300 request,
301 "generic/object_delete.html",
302 {
303 "obj": token,
304 "obj_type": token._meta.verbose_name,
305 "form": form,
306 "return_url": reverse("user:token_list"),
307 },
308 )
309
310 def post(self, request, pk):
311
312 token = get_object_or_404(Token.objects.filter(user=request.user), pk=pk)
313 form = ConfirmationForm(request.POST)
314 if form.is_valid():
315 token.delete()
316 messages.success(request, "Token deleted")
317 return redirect("user:token_list")
318
319 return render(
320 request,
321 "generic/object_delete.html",
322 {
323 "obj": token,
324 "obj_type": token._meta.verbose_name,
325 "form": form,
326 "return_url": reverse("user:token_list"),
327 },
328 )
329
[end of nautobot/users/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nautobot/users/views.py b/nautobot/users/views.py
--- a/nautobot/users/views.py
+++ b/nautobot/users/views.py
@@ -250,6 +250,7 @@
"obj_type": token._meta.verbose_name,
"form": form,
"return_url": reverse("user:token_list"),
+ "editing": token.present_in_database,
},
)
@@ -283,6 +284,7 @@
"obj_type": token._meta.verbose_name,
"form": form,
"return_url": reverse("user:token_list"),
+ "editing": token.present_in_database,
},
)
|
{"golden_diff": "diff --git a/nautobot/users/views.py b/nautobot/users/views.py\n--- a/nautobot/users/views.py\n+++ b/nautobot/users/views.py\n@@ -250,6 +250,7 @@\n \"obj_type\": token._meta.verbose_name,\n \"form\": form,\n \"return_url\": reverse(\"user:token_list\"),\n+ \"editing\": token.present_in_database,\n },\n )\n \n@@ -283,6 +284,7 @@\n \"obj_type\": token._meta.verbose_name,\n \"form\": form,\n \"return_url\": reverse(\"user:token_list\"),\n+ \"editing\": token.present_in_database,\n },\n )\n", "issue": "Editing an existing user token shows \"create\" buttons instead of \"update\"\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version:\r\n* Nautobot version:\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Navigate to user \"Profile\"\r\n2. Navigate to \"API Tokens\"\r\n3. Click \"Add token\"\r\n4. Click \"Create\"\r\n5. From the token list view, click \"Edit\" on the token you just created\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\n\r\nThere should be an \"Update\" button.\r\n\r\n<!-- What happened instead? -->\r\n### Observed Behavior\r\n\r\nThere are \"Create\" and \"Create and Add Another\" buttons. 
\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth import (\n login as auth_login,\n logout as auth_logout,\n update_session_auth_hash,\n)\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.auth.models import update_last_login\nfrom django.contrib.auth.signals import user_logged_in\nfrom django.http import HttpResponseForbidden, HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.urls import reverse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.http import is_safe_url\nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.generic import View\n\nfrom nautobot.utilities.forms import ConfirmationForm\nfrom .forms import LoginForm, PasswordChangeForm, TokenForm\nfrom .models import Token\n\n\n#\n# Login/logout\n#\n\n\nclass LoginView(View):\n \"\"\"\n Perform user authentication via the web UI.\n \"\"\"\n\n template_name = \"login.html\"\n\n @method_decorator(sensitive_post_parameters(\"password\"))\n def dispatch(self, *args, **kwargs):\n return super().dispatch(*args, **kwargs)\n\n def get(self, request):\n form = LoginForm(request)\n\n if request.user.is_authenticated:\n logger = logging.getLogger(\"nautobot.auth.login\")\n return self.redirect_to_next(request, logger)\n\n return render(\n request,\n self.template_name,\n {\n \"form\": form,\n },\n )\n\n def post(self, request):\n logger = logging.getLogger(\"nautobot.auth.login\")\n form = LoginForm(request, data=request.POST)\n\n if form.is_valid():\n logger.debug(\"Login form validation was successful\")\n\n # If maintenance mode is enabled, assume the database is read-only, and disable updating the user's\n # last_login time upon authentication.\n if settings.MAINTENANCE_MODE:\n logger.warning(\"Maintenance mode enabled: disabling update of most recent login time\")\n user_logged_in.disconnect(update_last_login, dispatch_uid=\"update_last_login\")\n\n # Authenticate user\n auth_login(request, form.get_user())\n logger.info(f\"User {request.user} successfully authenticated\")\n messages.info(request, \"Logged in as {}.\".format(request.user))\n\n return self.redirect_to_next(request, logger)\n\n else:\n logger.debug(\"Login form validation failed\")\n\n return render(\n request,\n self.template_name,\n {\n \"form\": form,\n },\n )\n\n def redirect_to_next(self, request, logger):\n if request.method == \"POST\":\n redirect_to = request.POST.get(\"next\", reverse(\"home\"))\n else:\n redirect_to = request.GET.get(\"next\", reverse(\"home\"))\n\n if redirect_to and not is_safe_url(url=redirect_to, allowed_hosts=request.get_host()):\n logger.warning(f\"Ignoring unsafe 'next' URL passed to login form: {redirect_to}\")\n redirect_to = reverse(\"home\")\n\n logger.debug(f\"Redirecting user to {redirect_to}\")\n return HttpResponseRedirect(redirect_to)\n\n\nclass LogoutView(View):\n \"\"\"\n Deauthenticate a web user.\n \"\"\"\n\n def get(self, request):\n logger = logging.getLogger(\"nautobot.auth.logout\")\n\n # Log out the user\n username = request.user\n auth_logout(request)\n logger.info(f\"User {username} has logged out\")\n messages.info(request, \"You have logged out.\")\n\n # Delete session key cookie (if set) upon logout\n response = HttpResponseRedirect(reverse(\"home\"))\n response.delete_cookie(\"session_key\")\n\n return response\n\n\n#\n# User profiles\n#\n\n\nclass 
ProfileView(LoginRequiredMixin, View):\n template_name = \"users/profile.html\"\n\n def get(self, request):\n\n return render(\n request,\n self.template_name,\n {\n \"active_tab\": \"profile\",\n },\n )\n\n\nclass UserConfigView(LoginRequiredMixin, View):\n template_name = \"users/preferences.html\"\n\n def get(self, request):\n\n return render(\n request,\n self.template_name,\n {\n \"preferences\": request.user.all_config(),\n \"active_tab\": \"preferences\",\n },\n )\n\n def post(self, request):\n user = request.user\n data = user.all_config()\n\n # Delete selected preferences\n for key in request.POST.getlist(\"pk\"):\n if key in data:\n user.clear_config(key)\n user.save()\n messages.success(request, \"Your preferences have been updated.\")\n\n return redirect(\"user:preferences\")\n\n\nclass ChangePasswordView(LoginRequiredMixin, View):\n template_name = \"users/change_password.html\"\n\n def get(self, request):\n # LDAP users cannot change their password here\n if getattr(request.user, \"ldap_username\", None):\n messages.warning(\n request,\n \"LDAP-authenticated user credentials cannot be changed within Nautobot.\",\n )\n return redirect(\"user:profile\")\n\n form = PasswordChangeForm(user=request.user)\n\n return render(\n request,\n self.template_name,\n {\n \"form\": form,\n \"active_tab\": \"change_password\",\n },\n )\n\n def post(self, request):\n form = PasswordChangeForm(user=request.user, data=request.POST)\n if form.is_valid():\n form.save()\n update_session_auth_hash(request, form.user)\n messages.success(request, \"Your password has been changed successfully.\")\n return redirect(\"user:profile\")\n\n return render(\n request,\n self.template_name,\n {\n \"form\": form,\n \"active_tab\": \"change_password\",\n },\n )\n\n\n#\n# API tokens\n#\n\n\nclass TokenListView(LoginRequiredMixin, View):\n def get(self, request):\n\n tokens = Token.objects.filter(user=request.user)\n\n return render(\n request,\n \"users/api_tokens.html\",\n {\n \"tokens\": tokens,\n \"active_tab\": \"api_tokens\",\n },\n )\n\n\nclass TokenEditView(LoginRequiredMixin, View):\n def get(self, request, pk=None):\n\n if pk is not None:\n if not request.user.has_perm(\"users.change_token\"):\n return HttpResponseForbidden()\n token = get_object_or_404(Token.objects.filter(user=request.user), pk=pk)\n else:\n if not request.user.has_perm(\"users.add_token\"):\n return HttpResponseForbidden()\n token = Token(user=request.user)\n\n form = TokenForm(instance=token)\n\n return render(\n request,\n \"generic/object_edit.html\",\n {\n \"obj\": token,\n \"obj_type\": token._meta.verbose_name,\n \"form\": form,\n \"return_url\": reverse(\"user:token_list\"),\n },\n )\n\n def post(self, request, pk=None):\n\n if pk is not None:\n token = get_object_or_404(Token.objects.filter(user=request.user), pk=pk)\n form = TokenForm(request.POST, instance=token)\n else:\n token = Token()\n form = TokenForm(request.POST)\n\n if form.is_valid():\n token = form.save(commit=False)\n token.user = request.user\n token.save()\n\n msg = \"Modified token {}\".format(token) if pk else \"Created token {}\".format(token)\n messages.success(request, msg)\n\n if \"_addanother\" in request.POST:\n return redirect(request.path)\n else:\n return redirect(\"user:token_list\")\n\n return render(\n request,\n \"generic/object_edit.html\",\n {\n \"obj\": token,\n \"obj_type\": token._meta.verbose_name,\n \"form\": form,\n \"return_url\": reverse(\"user:token_list\"),\n },\n )\n\n\nclass TokenDeleteView(LoginRequiredMixin, View):\n def get(self, 
request, pk):\n\n token = get_object_or_404(Token.objects.filter(user=request.user), pk=pk)\n initial_data = {\n \"return_url\": reverse(\"user:token_list\"),\n }\n form = ConfirmationForm(initial=initial_data)\n\n return render(\n request,\n \"generic/object_delete.html\",\n {\n \"obj\": token,\n \"obj_type\": token._meta.verbose_name,\n \"form\": form,\n \"return_url\": reverse(\"user:token_list\"),\n },\n )\n\n def post(self, request, pk):\n\n token = get_object_or_404(Token.objects.filter(user=request.user), pk=pk)\n form = ConfirmationForm(request.POST)\n if form.is_valid():\n token.delete()\n messages.success(request, \"Token deleted\")\n return redirect(\"user:token_list\")\n\n return render(\n request,\n \"generic/object_delete.html\",\n {\n \"obj\": token,\n \"obj_type\": token._meta.verbose_name,\n \"form\": form,\n \"return_url\": reverse(\"user:token_list\"),\n },\n )\n", "path": "nautobot/users/views.py"}]}
| 3,699 | 151 |
gh_patches_debug_24316
|
rasdani/github-patches
|
git_diff
|
gratipay__gratipay.com-2757
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't update my take
I tried to increase my take from one of the teams that I belong to, but was unable to. I got the following error.
*(screenshot of the error message omitted)*
I was increasing to less than double the amount, and my history shows that the new amount is less than double the amount I was attempting to take.
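For context on the limit mentioned above: the team mixin shown in the code section below caps a member's take at twice what they actually received last week, with a $1.00 floor (`compute_max_this_week`), and "last week" is read from the transfers recorded since the last payday (`get_take_last_week_for`). A standalone sketch of that rule, copied from the listing for illustration rather than as a proposed fix:

```python
from decimal import Decimal


def compute_max_this_week(last_week: Decimal) -> Decimal:
    # 2x last week's actual take, but at least a dollar (as in MixinTeam)
    return max(last_week * Decimal("2"), Decimal("1.00"))


print(compute_max_this_week(Decimal("0.00")))   # 1.00 -> a brand-new member can take at most $1
print(compute_max_this_week(Decimal("16.00")))  # 32.00 -> a $16 take last week caps at $32
```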
</issue>
<code>
[start of gratipay/models/_mixin_team.py]
1 """Teams on Gratipay are plural participants with members.
2 """
3 from collections import OrderedDict
4 from decimal import Decimal
5
6 from aspen.utils import typecheck
7
8
9 class MemberLimitReached(Exception): pass
10
11 class StubParticipantAdded(Exception): pass
12
13 class MixinTeam(object):
14 """This class provides methods for working with a Participant as a Team.
15
16 :param Participant participant: the underlying :py:class:`~gratipay.participant.Participant` object for this team
17
18 """
19
20 # XXX These were all written with the ORM and need to be converted.
21
22 def __init__(self, participant):
23 self.participant = participant
24
25 def show_as_team(self, user):
26 """Return a boolean, whether to show this participant as a team.
27 """
28 if not self.IS_PLURAL:
29 return False
30 if user.ADMIN:
31 return True
32 if not self.get_current_takes():
33 if self == user.participant:
34 return True
35 return False
36 return True
37
38 def add_member(self, member):
39 """Add a member to this team.
40 """
41 assert self.IS_PLURAL
42 if len(self.get_current_takes()) == 149:
43 raise MemberLimitReached
44 if not member.is_claimed:
45 raise StubParticipantAdded
46 self.__set_take_for(member, Decimal('0.01'), self)
47
48 def remove_member(self, member):
49 """Remove a member from this team.
50 """
51 assert self.IS_PLURAL
52 self.__set_take_for(member, Decimal('0.00'), self)
53
54 def remove_all_members(self, cursor=None):
55 (cursor or self.db).run("""
56 INSERT INTO takes (ctime, member, team, amount, recorder) (
57 SELECT ctime, member, %(username)s, 0.00, %(username)s
58 FROM current_takes
59 WHERE team=%(username)s
60 AND amount > 0
61 );
62 """, dict(username=self.username))
63
64 def member_of(self, team):
65 """Given a Participant object, return a boolean.
66 """
67 assert team.IS_PLURAL
68 for take in team.get_current_takes():
69 if take['member'] == self.username:
70 return True
71 return False
72
73 def get_take_last_week_for(self, member):
74 """What did the user actually take most recently? Used in throttling.
75 """
76 assert self.IS_PLURAL
77 membername = member.username if hasattr(member, 'username') \
78 else member['username']
79 return self.db.one("""
80
81 SELECT amount
82 FROM transfers
83 WHERE tipper=%s AND tippee=%s AND context='take'
84 AND timestamp > (
85 SELECT ts_start
86 FROM paydays
87 WHERE ts_end > ts_start
88 ORDER BY ts_start DESC LIMIT 1
89 )
90 ORDER BY timestamp ASC LIMIT 1
91
92 """, (self.username, membername), default=Decimal('0.00'))
93
94 def get_take_for(self, member):
95 """Return a Decimal representation of the take for this member, or 0.
96 """
97 assert self.IS_PLURAL
98 return self.db.one( "SELECT amount FROM current_takes "
99 "WHERE member=%s AND team=%s"
100 , (member.username, self.username)
101 , default=Decimal('0.00')
102 )
103
104 def compute_max_this_week(self, last_week):
105 """2x last week's take, but at least a dollar.
106 """
107 return max(last_week * Decimal('2'), Decimal('1.00'))
108
109 def set_take_for(self, member, take, recorder):
110 """Sets member's take from the team pool.
111 """
112 assert self.IS_PLURAL
113
114 # lazy import to avoid circular import
115 from gratipay.security.user import User
116 from gratipay.models.participant import Participant
117
118 typecheck( member, Participant
119 , take, Decimal
120 , recorder, (Participant, User)
121 )
122
123 last_week = self.get_take_last_week_for(member)
124 max_this_week = self.compute_max_this_week(last_week)
125 if take > max_this_week:
126 take = max_this_week
127
128 self.__set_take_for(member, take, recorder)
129 return take
130
131 def __set_take_for(self, member, amount, recorder):
132 assert self.IS_PLURAL
133 # XXX Factored out for testing purposes only! :O Use .set_take_for.
134 with self.db.get_cursor() as cursor:
135 # Lock to avoid race conditions
136 cursor.run("LOCK TABLE takes IN EXCLUSIVE MODE")
137 # Compute the current takes
138 old_takes = self.compute_actual_takes(cursor)
139 # Insert the new take
140 cursor.run("""
141
142 INSERT INTO takes (ctime, member, team, amount, recorder)
143 VALUES ( COALESCE (( SELECT ctime
144 FROM takes
145 WHERE member=%(member)s
146 AND team=%(team)s
147 LIMIT 1
148 ), CURRENT_TIMESTAMP)
149 , %(member)s
150 , %(team)s
151 , %(amount)s
152 , %(recorder)s
153 )
154
155 """, dict(member=member.username, team=self.username, amount=amount,
156 recorder=recorder.username))
157 # Compute the new takes
158 new_takes = self.compute_actual_takes(cursor)
159 # Update receiving amounts in the participants table
160 self.update_taking(old_takes, new_takes, cursor, member)
161
162 def update_taking(self, old_takes, new_takes, cursor=None, member=None):
163 """Update `taking` amounts based on the difference between `old_takes`
164 and `new_takes`.
165 """
166 for username in set(old_takes.keys()).union(new_takes.keys()):
167 if username == self.username:
168 continue
169 old = old_takes.get(username, {}).get('actual_amount', Decimal(0))
170 new = new_takes.get(username, {}).get('actual_amount', Decimal(0))
171 diff = new - old
172 if diff != 0:
173 r = (self.db or cursor).one("""
174 UPDATE participants
175 SET taking = (taking + %(diff)s)
176 , receiving = (receiving + %(diff)s)
177 WHERE username=%(username)s
178 RETURNING taking, receiving
179 """, dict(username=username, diff=diff))
180 if member and username == member.username:
181 member.set_attributes(**r._asdict())
182
183 def get_current_takes(self, cursor=None):
184 """Return a list of member takes for a team.
185 """
186 assert self.IS_PLURAL
187 TAKES = """
188 SELECT member, amount, ctime, mtime
189 FROM current_takes
190 WHERE team=%(team)s
191 ORDER BY ctime DESC
192 """
193 records = (cursor or self.db).all(TAKES, dict(team=self.username))
194 return [r._asdict() for r in records]
195
196 def get_team_take(self, cursor=None):
197 """Return a single take for a team, the team itself's take.
198 """
199 assert self.IS_PLURAL
200 TAKE = "SELECT sum(amount) FROM current_takes WHERE team=%s"
201 total_take = (cursor or self.db).one(TAKE, (self.username,), default=0)
202 team_take = max(self.receiving - total_take, 0)
203 membership = { "ctime": None
204 , "mtime": None
205 , "member": self.username
206 , "amount": team_take
207 }
208 return membership
209
210 def compute_actual_takes(self, cursor=None):
211 """Get the takes, compute the actual amounts, and return an OrderedDict.
212 """
213 actual_takes = OrderedDict()
214 nominal_takes = self.get_current_takes(cursor=cursor)
215 nominal_takes.append(self.get_team_take(cursor=cursor))
216 budget = balance = self.balance + self.receiving
217 for take in nominal_takes:
218 nominal_amount = take['nominal_amount'] = take.pop('amount')
219 actual_amount = take['actual_amount'] = min(nominal_amount, balance)
220 if take['member'] != self.username:
221 balance -= actual_amount
222 take['balance'] = balance
223 take['percentage'] = (actual_amount / budget) if budget > 0 else 0
224 actual_takes[take['member']] = take
225 return actual_takes
226
227 def get_members(self, current_participant):
228 """Return a list of member dicts.
229 """
230 assert self.IS_PLURAL
231 takes = self.compute_actual_takes()
232 members = []
233 for take in takes.values():
234 member = {}
235 member['username'] = take['member']
236 member['take'] = take['nominal_amount']
237 member['balance'] = take['balance']
238 member['percentage'] = take['percentage']
239
240 member['removal_allowed'] = current_participant == self
241 member['editing_allowed'] = False
242 member['is_current_user'] = False
243 if current_participant is not None:
244 if member['username'] == current_participant.username:
245 member['is_current_user'] = True
246 if take['ctime'] is not None:
247 # current user, but not the team itself
248 member['editing_allowed']= True
249
250 member['last_week'] = last_week = self.get_take_last_week_for(member)
251 member['max_this_week'] = self.compute_max_this_week(last_week)
252 members.append(member)
253 return members
254
[end of gratipay/models/_mixin_team.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gratipay/models/_mixin_team.py b/gratipay/models/_mixin_team.py
--- a/gratipay/models/_mixin_team.py
+++ b/gratipay/models/_mixin_team.py
@@ -71,7 +71,7 @@
return False
def get_take_last_week_for(self, member):
- """What did the user actually take most recently? Used in throttling.
+ """Get the user's nominal take last week. Used in throttling.
"""
assert self.IS_PLURAL
membername = member.username if hasattr(member, 'username') \
@@ -79,15 +79,15 @@
return self.db.one("""
SELECT amount
- FROM transfers
- WHERE tipper=%s AND tippee=%s AND context='take'
- AND timestamp > (
+ FROM takes
+ WHERE team=%s AND member=%s
+ AND mtime < (
SELECT ts_start
FROM paydays
WHERE ts_end > ts_start
ORDER BY ts_start DESC LIMIT 1
)
- ORDER BY timestamp ASC LIMIT 1
+ ORDER BY mtime DESC LIMIT 1
""", (self.username, membername), default=Decimal('0.00'))
|
{"golden_diff": "diff --git a/gratipay/models/_mixin_team.py b/gratipay/models/_mixin_team.py\n--- a/gratipay/models/_mixin_team.py\n+++ b/gratipay/models/_mixin_team.py\n@@ -71,7 +71,7 @@\n return False\n \n def get_take_last_week_for(self, member):\n- \"\"\"What did the user actually take most recently? Used in throttling.\n+ \"\"\"Get the user's nominal take last week. Used in throttling.\n \"\"\"\n assert self.IS_PLURAL\n membername = member.username if hasattr(member, 'username') \\\n@@ -79,15 +79,15 @@\n return self.db.one(\"\"\"\n \n SELECT amount\n- FROM transfers\n- WHERE tipper=%s AND tippee=%s AND context='take'\n- AND timestamp > (\n+ FROM takes\n+ WHERE team=%s AND member=%s\n+ AND mtime < (\n SELECT ts_start\n FROM paydays\n WHERE ts_end > ts_start\n ORDER BY ts_start DESC LIMIT 1\n )\n- ORDER BY timestamp ASC LIMIT 1\n+ ORDER BY mtime DESC LIMIT 1\n \n \"\"\", (self.username, membername), default=Decimal('0.00'))\n", "issue": "Can't update my take\nI tried to increase my take from one of the teams that I belong but was unable to. I got the following error.\n\n\n\nI was increasing to less that double the amount and my history shows that the new amount is less than double the the amount I was attempting to take. \n\n", "before_files": [{"content": "\"\"\"Teams on Gratipay are plural participants with members.\n\"\"\"\nfrom collections import OrderedDict\nfrom decimal import Decimal\n\nfrom aspen.utils import typecheck\n\n\nclass MemberLimitReached(Exception): pass\n\nclass StubParticipantAdded(Exception): pass\n\nclass MixinTeam(object):\n \"\"\"This class provides methods for working with a Participant as a Team.\n\n :param Participant participant: the underlying :py:class:`~gratipay.participant.Participant` object for this team\n\n \"\"\"\n\n # XXX These were all written with the ORM and need to be converted.\n\n def __init__(self, participant):\n self.participant = participant\n\n def show_as_team(self, user):\n \"\"\"Return a boolean, whether to show this participant as a team.\n \"\"\"\n if not self.IS_PLURAL:\n return False\n if user.ADMIN:\n return True\n if not self.get_current_takes():\n if self == user.participant:\n return True\n return False\n return True\n\n def add_member(self, member):\n \"\"\"Add a member to this team.\n \"\"\"\n assert self.IS_PLURAL\n if len(self.get_current_takes()) == 149:\n raise MemberLimitReached\n if not member.is_claimed:\n raise StubParticipantAdded\n self.__set_take_for(member, Decimal('0.01'), self)\n\n def remove_member(self, member):\n \"\"\"Remove a member from this team.\n \"\"\"\n assert self.IS_PLURAL\n self.__set_take_for(member, Decimal('0.00'), self)\n\n def remove_all_members(self, cursor=None):\n (cursor or self.db).run(\"\"\"\n INSERT INTO takes (ctime, member, team, amount, recorder) (\n SELECT ctime, member, %(username)s, 0.00, %(username)s\n FROM current_takes\n WHERE team=%(username)s\n AND amount > 0\n );\n \"\"\", dict(username=self.username))\n\n def member_of(self, team):\n \"\"\"Given a Participant object, return a boolean.\n \"\"\"\n assert team.IS_PLURAL\n for take in team.get_current_takes():\n if take['member'] == self.username:\n return True\n return False\n\n def get_take_last_week_for(self, member):\n \"\"\"What did the user actually take most recently? 
Used in throttling.\n \"\"\"\n assert self.IS_PLURAL\n membername = member.username if hasattr(member, 'username') \\\n else member['username']\n return self.db.one(\"\"\"\n\n SELECT amount\n FROM transfers\n WHERE tipper=%s AND tippee=%s AND context='take'\n AND timestamp > (\n SELECT ts_start\n FROM paydays\n WHERE ts_end > ts_start\n ORDER BY ts_start DESC LIMIT 1\n )\n ORDER BY timestamp ASC LIMIT 1\n\n \"\"\", (self.username, membername), default=Decimal('0.00'))\n\n def get_take_for(self, member):\n \"\"\"Return a Decimal representation of the take for this member, or 0.\n \"\"\"\n assert self.IS_PLURAL\n return self.db.one( \"SELECT amount FROM current_takes \"\n \"WHERE member=%s AND team=%s\"\n , (member.username, self.username)\n , default=Decimal('0.00')\n )\n\n def compute_max_this_week(self, last_week):\n \"\"\"2x last week's take, but at least a dollar.\n \"\"\"\n return max(last_week * Decimal('2'), Decimal('1.00'))\n\n def set_take_for(self, member, take, recorder):\n \"\"\"Sets member's take from the team pool.\n \"\"\"\n assert self.IS_PLURAL\n\n # lazy import to avoid circular import\n from gratipay.security.user import User\n from gratipay.models.participant import Participant\n\n typecheck( member, Participant\n , take, Decimal\n , recorder, (Participant, User)\n )\n\n last_week = self.get_take_last_week_for(member)\n max_this_week = self.compute_max_this_week(last_week)\n if take > max_this_week:\n take = max_this_week\n\n self.__set_take_for(member, take, recorder)\n return take\n\n def __set_take_for(self, member, amount, recorder):\n assert self.IS_PLURAL\n # XXX Factored out for testing purposes only! :O Use .set_take_for.\n with self.db.get_cursor() as cursor:\n # Lock to avoid race conditions\n cursor.run(\"LOCK TABLE takes IN EXCLUSIVE MODE\")\n # Compute the current takes\n old_takes = self.compute_actual_takes(cursor)\n # Insert the new take\n cursor.run(\"\"\"\n\n INSERT INTO takes (ctime, member, team, amount, recorder)\n VALUES ( COALESCE (( SELECT ctime\n FROM takes\n WHERE member=%(member)s\n AND team=%(team)s\n LIMIT 1\n ), CURRENT_TIMESTAMP)\n , %(member)s\n , %(team)s\n , %(amount)s\n , %(recorder)s\n )\n\n \"\"\", dict(member=member.username, team=self.username, amount=amount,\n recorder=recorder.username))\n # Compute the new takes\n new_takes = self.compute_actual_takes(cursor)\n # Update receiving amounts in the participants table\n self.update_taking(old_takes, new_takes, cursor, member)\n\n def update_taking(self, old_takes, new_takes, cursor=None, member=None):\n \"\"\"Update `taking` amounts based on the difference between `old_takes`\n and `new_takes`.\n \"\"\"\n for username in set(old_takes.keys()).union(new_takes.keys()):\n if username == self.username:\n continue\n old = old_takes.get(username, {}).get('actual_amount', Decimal(0))\n new = new_takes.get(username, {}).get('actual_amount', Decimal(0))\n diff = new - old\n if diff != 0:\n r = (self.db or cursor).one(\"\"\"\n UPDATE participants\n SET taking = (taking + %(diff)s)\n , receiving = (receiving + %(diff)s)\n WHERE username=%(username)s\n RETURNING taking, receiving\n \"\"\", dict(username=username, diff=diff))\n if member and username == member.username:\n member.set_attributes(**r._asdict())\n\n def get_current_takes(self, cursor=None):\n \"\"\"Return a list of member takes for a team.\n \"\"\"\n assert self.IS_PLURAL\n TAKES = \"\"\"\n SELECT member, amount, ctime, mtime\n FROM current_takes\n WHERE team=%(team)s\n ORDER BY ctime DESC\n \"\"\"\n records = (cursor or 
self.db).all(TAKES, dict(team=self.username))\n return [r._asdict() for r in records]\n\n def get_team_take(self, cursor=None):\n \"\"\"Return a single take for a team, the team itself's take.\n \"\"\"\n assert self.IS_PLURAL\n TAKE = \"SELECT sum(amount) FROM current_takes WHERE team=%s\"\n total_take = (cursor or self.db).one(TAKE, (self.username,), default=0)\n team_take = max(self.receiving - total_take, 0)\n membership = { \"ctime\": None\n , \"mtime\": None\n , \"member\": self.username\n , \"amount\": team_take\n }\n return membership\n\n def compute_actual_takes(self, cursor=None):\n \"\"\"Get the takes, compute the actual amounts, and return an OrderedDict.\n \"\"\"\n actual_takes = OrderedDict()\n nominal_takes = self.get_current_takes(cursor=cursor)\n nominal_takes.append(self.get_team_take(cursor=cursor))\n budget = balance = self.balance + self.receiving\n for take in nominal_takes:\n nominal_amount = take['nominal_amount'] = take.pop('amount')\n actual_amount = take['actual_amount'] = min(nominal_amount, balance)\n if take['member'] != self.username:\n balance -= actual_amount\n take['balance'] = balance\n take['percentage'] = (actual_amount / budget) if budget > 0 else 0\n actual_takes[take['member']] = take\n return actual_takes\n\n def get_members(self, current_participant):\n \"\"\"Return a list of member dicts.\n \"\"\"\n assert self.IS_PLURAL\n takes = self.compute_actual_takes()\n members = []\n for take in takes.values():\n member = {}\n member['username'] = take['member']\n member['take'] = take['nominal_amount']\n member['balance'] = take['balance']\n member['percentage'] = take['percentage']\n\n member['removal_allowed'] = current_participant == self\n member['editing_allowed'] = False\n member['is_current_user'] = False\n if current_participant is not None:\n if member['username'] == current_participant.username:\n member['is_current_user'] = True\n if take['ctime'] is not None:\n # current user, but not the team itself\n member['editing_allowed']= True\n\n member['last_week'] = last_week = self.get_take_last_week_for(member)\n member['max_this_week'] = self.compute_max_this_week(last_week)\n members.append(member)\n return members\n", "path": "gratipay/models/_mixin_team.py"}]}
| 3,340 | 278 |
gh_patches_debug_18008 | rasdani/github-patches | git_diff | comic__grand-challenge.org-3330 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Google logins broken with django-allauth 0.62+
# Recipe
- Open incognito window (just in case it matters)
- Navigate to grand-challenge.org
- Click Third party auth -> Google to login

- Acknowledge that you are sent to a "third party" by clicking continue on the next page.

# Result

> Unexpected Error
No login possible.
@amickan reported that no sentry errors are being recorded. I cannot login, presumably many other people cannot login either.
</issue>
<code>
[start of app/grandchallenge/profiles/providers/gmail/views.py]
1 from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter
2 from allauth.socialaccount.providers.oauth2.views import (
3 OAuth2CallbackView,
4 OAuth2LoginView,
5 )
6
7 from grandchallenge.profiles.providers.gmail.provider import GmailProvider
8
9
10 class GmailOAuth2Adapter(GoogleOAuth2Adapter):
11 provider_id = GmailProvider.id
12
13
14 oauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)
15 oauth2_callback = OAuth2CallbackView.adapter_view(GmailOAuth2Adapter)
16
[end of app/grandchallenge/profiles/providers/gmail/views.py]
[start of app/grandchallenge/profiles/providers/gmail/provider.py]
1 from allauth.socialaccount.providers.google.provider import GoogleProvider
2
3
4 class GmailProvider(GoogleProvider):
5 id = "gmail"
6 name = "Google"
7
8 def extract_uid(self, data):
9 return str(data["email"])
10
11
12 provider_classes = [GmailProvider]
13
[end of app/grandchallenge/profiles/providers/gmail/provider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/grandchallenge/profiles/providers/gmail/provider.py b/app/grandchallenge/profiles/providers/gmail/provider.py
--- a/app/grandchallenge/profiles/providers/gmail/provider.py
+++ b/app/grandchallenge/profiles/providers/gmail/provider.py
@@ -1,9 +1,12 @@
from allauth.socialaccount.providers.google.provider import GoogleProvider
+from grandchallenge.profiles.providers.gmail.views import GmailOAuth2Adapter
+
class GmailProvider(GoogleProvider):
id = "gmail"
name = "Google"
+ oauth2_adapter_class = GmailOAuth2Adapter
def extract_uid(self, data):
return str(data["email"])
diff --git a/app/grandchallenge/profiles/providers/gmail/views.py b/app/grandchallenge/profiles/providers/gmail/views.py
--- a/app/grandchallenge/profiles/providers/gmail/views.py
+++ b/app/grandchallenge/profiles/providers/gmail/views.py
@@ -4,11 +4,9 @@
OAuth2LoginView,
)
-from grandchallenge.profiles.providers.gmail.provider import GmailProvider
-
class GmailOAuth2Adapter(GoogleOAuth2Adapter):
- provider_id = GmailProvider.id
+ provider_id = "gmail"
oauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)
|
{"golden_diff": "diff --git a/app/grandchallenge/profiles/providers/gmail/provider.py b/app/grandchallenge/profiles/providers/gmail/provider.py\n--- a/app/grandchallenge/profiles/providers/gmail/provider.py\n+++ b/app/grandchallenge/profiles/providers/gmail/provider.py\n@@ -1,9 +1,12 @@\n from allauth.socialaccount.providers.google.provider import GoogleProvider\n \n+from grandchallenge.profiles.providers.gmail.views import GmailOAuth2Adapter\n+\n \n class GmailProvider(GoogleProvider):\n id = \"gmail\"\n name = \"Google\"\n+ oauth2_adapter_class = GmailOAuth2Adapter\n \n def extract_uid(self, data):\n return str(data[\"email\"])\ndiff --git a/app/grandchallenge/profiles/providers/gmail/views.py b/app/grandchallenge/profiles/providers/gmail/views.py\n--- a/app/grandchallenge/profiles/providers/gmail/views.py\n+++ b/app/grandchallenge/profiles/providers/gmail/views.py\n@@ -4,11 +4,9 @@\n OAuth2LoginView,\n )\n \n-from grandchallenge.profiles.providers.gmail.provider import GmailProvider\n-\n \n class GmailOAuth2Adapter(GoogleOAuth2Adapter):\n- provider_id = GmailProvider.id\n+ provider_id = \"gmail\"\n \n \n oauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)\n", "issue": "Google logins broken with django-allauth 0.62+\n# Recipe\r\n\r\n- Open incognito window (just in case it matters)\r\n- Navigate to grand-challenge.org\r\n- Click Third party auth -> Google to login\r\n \r\n\r\n\r\n- Acknowledge that you are sent to a \"third party\" by clicking continue on the next page.\r\n\r\n\r\n\r\n# Result\r\n\r\n\r\n\r\n> Unexpected Error\r\n\r\nNo login possible.\r\n\r\n@amickan reported that no sentry errors are being recorded. I cannot login, presumably many other people cannot login either.\r\n\n", "before_files": [{"content": "from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter\nfrom allauth.socialaccount.providers.oauth2.views import (\n OAuth2CallbackView,\n OAuth2LoginView,\n)\n\nfrom grandchallenge.profiles.providers.gmail.provider import GmailProvider\n\n\nclass GmailOAuth2Adapter(GoogleOAuth2Adapter):\n provider_id = GmailProvider.id\n\n\noauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)\noauth2_callback = OAuth2CallbackView.adapter_view(GmailOAuth2Adapter)\n", "path": "app/grandchallenge/profiles/providers/gmail/views.py"}, {"content": "from allauth.socialaccount.providers.google.provider import GoogleProvider\n\n\nclass GmailProvider(GoogleProvider):\n id = \"gmail\"\n name = \"Google\"\n\n def extract_uid(self, data):\n return str(data[\"email\"])\n\n\nprovider_classes = [GmailProvider]\n", "path": "app/grandchallenge/profiles/providers/gmail/provider.py"}]}
| 1,061 | 276 |
gh_patches_debug_25268 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2543 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'NoneType' object is not subscriptable
Running `checkov -d .` now emits an exception
```
2022-02-25 17:45:59,050 [MainThread ] [ERROR] Failed to run check: Ensure no NACL allow ingress from 0.0.0.0:0 to port 21 for configuration: {'cidr_block': ['0.0.0.0/0'], 'egress': [False], 'network_acl_id': ['aws_default_network_acl.public.id'], 'protocol': ['-1'], 'rule_action': ['allow'], 'rule_number': [100]} at file: /modules/network/regional/main.tf
Process ForkProcess-1:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/homebrew/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/parallelizer/parallel_runner.py", line 29, in func_wrapper
result = original_func(item)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/runners/runner_registry.py", line 66, in <lambda>
lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py", line 119, in run
self.check_tf_definition(report, root_folder, runner_filter, collect_skip_comments)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py", line 215, in check_tf_definition
self.run_all_blocks(definition, self.context, full_file_path, root_folder, report,
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py", line 225, in run_all_blocks
self.run_block(definition[block_type], definitions_context,
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py", line 297, in run_block
results = registry.scan(scanned_file, entity, skipped_checks, runner_filter)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check_registry.py", line 121, in scan
result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check_registry.py", line 135, in run_check
result = check.run(
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check.py", line 86, in run
raise e
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check.py", line 73, in run
check_result["result"] = self.scan_entity_conf(entity_configuration, entity_type)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/base_resource_check.py", line 70, in scan_entity_conf
return self.scan_resource_conf(conf)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py", line 41, in scan_resource_conf
if not self.check_rule(conf):
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py", line 51, in check_rule
if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
TypeError: 'NoneType' object is not subscriptable
```
- OS: MacOS 12.2
- Checkov Version 2.0.902
Relevant resource maybe as follows:
```
resource "aws_network_acl_rule" "public_ingress" {
network_acl_id = aws_default_network_acl.public.id
rule_number = 100
egress = false
protocol = "-1"
rule_action = "allow"
cidr_block = "0.0.0.0/0"
}
```
</issue>
<code>
[start of checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py]
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3 from checkov.common.util.type_forcers import force_list
4 from checkov.common.util.type_forcers import force_int
5
6
7 class AbsNACLUnrestrictedIngress(BaseResourceCheck):
8 def __init__(self, check_id, port):
9 name = "Ensure no NACL allow ingress from 0.0.0.0:0 to port %d" % port
10 supported_resources = ['aws_network_acl', 'aws_network_acl_rule']
11 categories = [CheckCategories.NETWORKING]
12 super().__init__(name=name, id=check_id, categories=categories, supported_resources=supported_resources)
13 self.port = port
14
15 def scan_resource_conf(self, conf):
16 """
17
18 Return PASS if:
19 - The NACL doesnt allows restricted ingress access to the port
20 - The resource is an aws_network_acl of type 'ingress' that does not violate the check.
21
22 Return FAIL if:
23 - The the NACL allows unrestricted access to the port
24
25 Return UNKNOWN if:
26 - the resource is an NACL of type 'egress', OR
27
28 :param conf: aws_network_acl configuration
29 :return: <CheckResult>
30 """
31
32 if conf.get("ingress"):
33 ingress = conf.get("ingress")
34 for rule in ingress:
35 if not self.check_rule(rule):
36 return CheckResult.FAILED
37 return CheckResult.PASSED
38 # maybe its an network_acl_rule
39 if conf.get("network_acl_id"):
40 if not conf.get("egress")[0]:
41 if not self.check_rule(conf):
42 return CheckResult.FAILED
43 return CheckResult.PASSED
44
45 return CheckResult.UNKNOWN
46
47 def check_rule(self, rule):
48 if rule.get('cidr_block'):
49 if rule.get('cidr_block') == ["0.0.0.0/0"]:
50 if rule.get('action') == ["allow"] or rule.get('rule_action') == ["allow"]:
51 if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
52 return False
53 if rule.get('ipv6_cidr_block'):
54 if rule.get('ipv6_cidr_block') == ["::/0"]:
55 if rule.get('action') == ["allow"] or rule.get('rule_action') == ["allow"]:
56 if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
57 return False
58 return True
59
[end of checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py b/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py
--- a/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py
+++ b/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py
@@ -48,11 +48,17 @@
if rule.get('cidr_block'):
if rule.get('cidr_block') == ["0.0.0.0/0"]:
if rule.get('action') == ["allow"] or rule.get('rule_action') == ["allow"]:
+ protocol = rule.get('protocol')
+ if protocol and str(protocol[0]) == "-1":
+ return False
if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
return False
if rule.get('ipv6_cidr_block'):
if rule.get('ipv6_cidr_block') == ["::/0"]:
if rule.get('action') == ["allow"] or rule.get('rule_action') == ["allow"]:
+ protocol = rule.get('protocol')
+ if protocol and str(protocol[0]) == "-1":
+ return False
if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
return False
return True
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py b/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\n--- a/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\n+++ b/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\n@@ -48,11 +48,17 @@\n if rule.get('cidr_block'):\n if rule.get('cidr_block') == [\"0.0.0.0/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n+ protocol = rule.get('protocol')\n+ if protocol and str(protocol[0]) == \"-1\":\n+ return False\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n if rule.get('ipv6_cidr_block'):\n if rule.get('ipv6_cidr_block') == [\"::/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n+ protocol = rule.get('protocol')\n+ if protocol and str(protocol[0]) == \"-1\":\n+ return False\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n return True\n", "issue": "'NoneType' object is not subscriptable\nRunning `checkov -d .` now emits an exception\r\n\r\n```\r\n2022-02-25 17:45:59,050 [MainThread ] [ERROR] Failed to run check: Ensure no NACL allow ingress from 0.0.0.0:0 to port 21 for configuration: {'cidr_block': ['0.0.0.0/0'], 'egress': [False], 'network_acl_id': ['aws_default_network_acl.public.id'], 'protocol': ['-1'], 'rule_action': ['allow'], 'rule_number': [100]} at file: /modules/network/regional/main.tf\r\nProcess ForkProcess-1:\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/opt/homebrew/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/parallelizer/parallel_runner.py\", line 29, in func_wrapper\r\n result = original_func(item)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/runners/runner_registry.py\", line 66, in <lambda>\r\n lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py\", line 119, in run\r\n self.check_tf_definition(report, root_folder, runner_filter, collect_skip_comments)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py\", line 215, in check_tf_definition\r\n self.run_all_blocks(definition, self.context, full_file_path, root_folder, report,\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py\", line 225, in run_all_blocks\r\n self.run_block(definition[block_type], definitions_context,\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py\", line 297, in run_block\r\n results = registry.scan(scanned_file, entity, skipped_checks, runner_filter)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check_registry.py\", line 121, in scan\r\n result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check_registry.py\", line 135, in run_check\r\n result = check.run(\r\n File 
\"/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check.py\", line 86, in run\r\n raise e\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check.py\", line 73, in run\r\n check_result[\"result\"] = self.scan_entity_conf(entity_configuration, entity_type)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/base_resource_check.py\", line 70, in scan_entity_conf\r\n return self.scan_resource_conf(conf)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\", line 41, in scan_resource_conf\r\n if not self.check_rule(conf):\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\", line 51, in check_rule\r\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\r\nTypeError: 'NoneType' object is not subscriptable\r\n```\r\n\r\n - OS: MacOS 12.2\r\n - Checkov Version 2.0.902\r\n\r\nRelevant resource maybe as follows:\r\n```\r\nresource \"aws_network_acl_rule\" \"public_ingress\" {\r\n network_acl_id = aws_default_network_acl.public.id\r\n rule_number = 100\r\n egress = false\r\n protocol = \"-1\"\r\n rule_action = \"allow\"\r\n cidr_block = \"0.0.0.0/0\"\r\n}\r\n```\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.common.util.type_forcers import force_int\n\n\nclass AbsNACLUnrestrictedIngress(BaseResourceCheck):\n def __init__(self, check_id, port):\n name = \"Ensure no NACL allow ingress from 0.0.0.0:0 to port %d\" % port\n supported_resources = ['aws_network_acl', 'aws_network_acl_rule']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=check_id, categories=categories, supported_resources=supported_resources)\n self.port = port\n\n def scan_resource_conf(self, conf):\n \"\"\"\n\n Return PASS if:\n - The NACL doesnt allows restricted ingress access to the port\n - The resource is an aws_network_acl of type 'ingress' that does not violate the check.\n\n Return FAIL if:\n - The the NACL allows unrestricted access to the port\n\n Return UNKNOWN if:\n - the resource is an NACL of type 'egress', OR\n\n :param conf: aws_network_acl configuration\n :return: <CheckResult>\n \"\"\"\n\n if conf.get(\"ingress\"):\n ingress = conf.get(\"ingress\")\n for rule in ingress:\n if not self.check_rule(rule):\n return CheckResult.FAILED\n return CheckResult.PASSED\n # maybe its an network_acl_rule\n if conf.get(\"network_acl_id\"):\n if not conf.get(\"egress\")[0]:\n if not self.check_rule(conf):\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n return CheckResult.UNKNOWN\n\n def check_rule(self, rule):\n if rule.get('cidr_block'):\n if rule.get('cidr_block') == [\"0.0.0.0/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n if rule.get('ipv6_cidr_block'):\n if rule.get('ipv6_cidr_block') == [\"::/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n return True\n", "path": "checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py"}]}
| 2,275 | 312 |
gh_patches_debug_987 | rasdani/github-patches | git_diff | DataBiosphere__toil-3070 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Progress bar is cool but...
It requires the terminal to be `reset` when run in a screen session. Also, for cactus anyway, it spends the vast majority of the runtime at 99%/100%.
┆Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-558)
┆Issue Number: TOIL-558
</issue>
<code>
[start of setup.py]
1 # Copyright (C) 2015-2016 Regents of the University of California
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from setuptools import find_packages, setup
15 import os
16
17
18 def runSetup():
19 """
20 Calls setup(). This function exists so the setup() invocation preceded more internal
21 functionality. The `version` module is imported dynamically by importVersion() below.
22 """
23 boto = 'boto==2.48.0'
24 boto3 = 'boto3>=1.7.50, <2.0'
25 futures = 'futures==3.1.1'
26 pycryptodome = 'pycryptodome==3.5.1'
27 pymesos = 'pymesos==0.3.15'
28 psutil = 'psutil >= 3.0.1, <6'
29 pynacl = 'pynacl==1.3.0'
30 gcs = 'google-cloud-storage==1.6.0'
31 gcs_oauth2_boto_plugin = 'gcs_oauth2_boto_plugin==1.14'
32 apacheLibcloud = 'apache-libcloud==2.2.1'
33 cwltool = 'cwltool==3.0.20200324120055'
34 galaxyToolUtil = 'galaxy-tool-util'
35 htcondor = 'htcondor>=8.6.0'
36 kubernetes = 'kubernetes>=10, <11'
37 idna = 'idna>=2'
38 pytz = 'pytz>=2012'
39 dill = 'dill==0.3.1.1'
40 six = 'six>=1.10.0'
41 future = 'future'
42 requests = 'requests>=2, <3'
43 docker = 'docker==2.5.1'
44 dateutil = 'python-dateutil'
45 addict = 'addict<=2.2.0'
46 pathlib2 = 'pathlib2==2.3.2'
47 enlighten = 'enlighten>=1.5.1, <2'
48
49 core_reqs = [
50 dill,
51 six,
52 future,
53 requests,
54 docker,
55 dateutil,
56 psutil,
57 addict,
58 pathlib2,
59 pytz,
60 enlighten]
61
62 aws_reqs = [
63 boto,
64 boto3,
65 futures,
66 pycryptodome]
67 cwl_reqs = [
68 cwltool,
69 galaxyToolUtil]
70 encryption_reqs = [
71 pynacl]
72 google_reqs = [
73 gcs_oauth2_boto_plugin, # is this being used??
74 apacheLibcloud,
75 gcs]
76 htcondor_reqs = [
77 htcondor]
78 kubernetes_reqs = [
79 kubernetes,
80 idna] # Kubernetes's urllib3 can mange to use idna without really depending on it.
81 mesos_reqs = [
82 pymesos,
83 psutil]
84 wdl_reqs = []
85
86
87 # htcondor is not supported by apple
88 # this is tricky to conditionally support in 'all' due
89 # to how wheels work, so it is not included in all and
90 # must be explicitly installed as an extra
91 all_reqs = \
92 aws_reqs + \
93 cwl_reqs + \
94 encryption_reqs + \
95 google_reqs + \
96 kubernetes_reqs + \
97 mesos_reqs
98
99
100 setup(
101 name='toil',
102 version=version.distVersion,
103 description='Pipeline management software for clusters.',
104 author='Benedict Paten',
105 author_email='[email protected]',
106 url="https://github.com/DataBiosphere/toil",
107 classifiers=[
108 'Development Status :: 5 - Production/Stable',
109 'Environment :: Console',
110 'Intended Audience :: Developers',
111 'Intended Audience :: Science/Research',
112 'Intended Audience :: Healthcare Industry',
113 'License :: OSI Approved :: Apache Software License',
114 'Natural Language :: English',
115 'Operating System :: MacOS :: MacOS X',
116 'Operating System :: POSIX',
117 'Operating System :: POSIX :: Linux',
118 'Programming Language :: Python :: 3.6',
119 'Topic :: Scientific/Engineering',
120 'Topic :: Scientific/Engineering :: Bio-Informatics',
121 'Topic :: Scientific/Engineering :: Astronomy',
122 'Topic :: Scientific/Engineering :: Atmospheric Science',
123 'Topic :: Scientific/Engineering :: Information Analysis',
124 'Topic :: Scientific/Engineering :: Medical Science Apps.',
125 'Topic :: System :: Distributed Computing',
126 'Topic :: Utilities'],
127 license="Apache License v2.0",
128 python_requires=">=3.6",
129 install_requires=core_reqs,
130 extras_require={
131 'aws': aws_reqs,
132 'cwl': cwl_reqs,
133 'encryption': encryption_reqs,
134 'google': google_reqs,
135 'htcondor:sys_platform!="darwin"': htcondor_reqs,
136 'kubernetes': kubernetes_reqs,
137 'mesos': mesos_reqs,
138 'wdl': wdl_reqs,
139 'all': all_reqs},
140 package_dir={'': 'src'},
141 packages=find_packages(where='src',
142 # Note that we intentionally include the top-level `test` package for
143 # functionality like the @experimental and @integrative decoratorss:
144 exclude=['*.test.*']),
145 package_data = {
146 '': ['*.yml', 'cloud-config'],
147 },
148 # Unfortunately, the names of the entry points are hard-coded elsewhere in the code base so
149 # you can't just change them here. Luckily, most of them are pretty unique strings, and thus
150 # easy to search for.
151 entry_points={
152 'console_scripts': [
153 'toil = toil.utils.toilMain:main',
154 '_toil_worker = toil.worker:main',
155 'cwltoil = toil.cwl.cwltoil:cwltoil_was_removed [cwl]',
156 'toil-cwl-runner = toil.cwl.cwltoil:main [cwl]',
157 'toil-wdl-runner = toil.wdl.toilwdl:main',
158 '_toil_mesos_executor = toil.batchSystems.mesos.executor:main [mesos]',
159 '_toil_kubernetes_executor = toil.batchSystems.kubernetes:executor [kubernetes]']})
160
161
162 def importVersion():
163 """
164 Load and return the module object for src/toil/version.py, generating it from the template if
165 required.
166 """
167 import imp
168 try:
169 # Attempt to load the template first. It only exists in a working copy cloned via git.
170 import version_template
171 except ImportError:
172 # If loading the template fails we must be in a unpacked source distribution and
173 # src/toil/version.py will already exist.
174 pass
175 else:
176 # Use the template to generate src/toil/version.py
177 import os
178 import errno
179 from tempfile import NamedTemporaryFile
180
181 new = version_template.expand_()
182 try:
183 with open('src/toil/version.py') as f:
184 old = f.read()
185 except IOError as e:
186 if e.errno == errno.ENOENT:
187 old = None
188 else:
189 raise
190
191 if old != new:
192 with NamedTemporaryFile(mode='w', dir='src/toil', prefix='version.py.', delete=False) as f:
193 f.write(new)
194 os.rename(f.name, 'src/toil/version.py')
195 # Unfortunately, we can't use a straight import here because that would also load the stuff
196 # defined in src/toil/__init__.py which imports modules from external dependencies that may
197 # yet to be installed when setup.py is invoked.
198 return imp.load_source('toil.version', 'src/toil/version.py')
199
200
201 version = importVersion()
202 runSetup()
203
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -44,7 +44,7 @@
dateutil = 'python-dateutil'
addict = 'addict<=2.2.0'
pathlib2 = 'pathlib2==2.3.2'
- enlighten = 'enlighten>=1.5.1, <2'
+ enlighten = 'enlighten>=1.5.2, <2'
core_reqs = [
dill,
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -44,7 +44,7 @@\n dateutil = 'python-dateutil'\n addict = 'addict<=2.2.0'\n pathlib2 = 'pathlib2==2.3.2'\n- enlighten = 'enlighten>=1.5.1, <2'\n+ enlighten = 'enlighten>=1.5.2, <2'\n \n core_reqs = [\n dill,\n", "issue": "Progress bar is cool but...\nIt requires the terminal to be `reset` when run in a screen session. Also, for cactus anyway, it spends the vast majority of the runtime at 99%/100%.\n\n\u2506Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-558)\n\u2506Issue Number: TOIL-558\n\n", "before_files": [{"content": "# Copyright (C) 2015-2016 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom setuptools import find_packages, setup\nimport os\n\n\ndef runSetup():\n \"\"\"\n Calls setup(). This function exists so the setup() invocation preceded more internal\n functionality. The `version` module is imported dynamically by importVersion() below.\n \"\"\"\n boto = 'boto==2.48.0'\n boto3 = 'boto3>=1.7.50, <2.0'\n futures = 'futures==3.1.1'\n pycryptodome = 'pycryptodome==3.5.1'\n pymesos = 'pymesos==0.3.15'\n psutil = 'psutil >= 3.0.1, <6'\n pynacl = 'pynacl==1.3.0'\n gcs = 'google-cloud-storage==1.6.0'\n gcs_oauth2_boto_plugin = 'gcs_oauth2_boto_plugin==1.14'\n apacheLibcloud = 'apache-libcloud==2.2.1'\n cwltool = 'cwltool==3.0.20200324120055'\n galaxyToolUtil = 'galaxy-tool-util'\n htcondor = 'htcondor>=8.6.0'\n kubernetes = 'kubernetes>=10, <11'\n idna = 'idna>=2'\n pytz = 'pytz>=2012'\n dill = 'dill==0.3.1.1'\n six = 'six>=1.10.0'\n future = 'future'\n requests = 'requests>=2, <3'\n docker = 'docker==2.5.1'\n dateutil = 'python-dateutil'\n addict = 'addict<=2.2.0'\n pathlib2 = 'pathlib2==2.3.2'\n enlighten = 'enlighten>=1.5.1, <2'\n\n core_reqs = [\n dill,\n six,\n future,\n requests,\n docker,\n dateutil,\n psutil,\n addict,\n pathlib2,\n pytz,\n enlighten]\n\n aws_reqs = [\n boto,\n boto3,\n futures,\n pycryptodome]\n cwl_reqs = [\n cwltool,\n galaxyToolUtil]\n encryption_reqs = [\n pynacl]\n google_reqs = [\n gcs_oauth2_boto_plugin, # is this being used??\n apacheLibcloud,\n gcs]\n htcondor_reqs = [\n htcondor]\n kubernetes_reqs = [\n kubernetes,\n idna] # Kubernetes's urllib3 can mange to use idna without really depending on it.\n mesos_reqs = [\n pymesos,\n psutil]\n wdl_reqs = []\n \n\n # htcondor is not supported by apple\n # this is tricky to conditionally support in 'all' due\n # to how wheels work, so it is not included in all and\n # must be explicitly installed as an extra\n all_reqs = \\\n aws_reqs + \\\n cwl_reqs + \\\n encryption_reqs + \\\n google_reqs + \\\n kubernetes_reqs + \\\n mesos_reqs\n\n\n setup(\n name='toil',\n version=version.distVersion,\n description='Pipeline management software for clusters.',\n author='Benedict Paten',\n author_email='[email protected]',\n url=\"https://github.com/DataBiosphere/toil\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: 
Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Healthcare Industry',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Topic :: Scientific/Engineering :: Atmospheric Science',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Scientific/Engineering :: Medical Science Apps.',\n 'Topic :: System :: Distributed Computing',\n 'Topic :: Utilities'],\n license=\"Apache License v2.0\",\n python_requires=\">=3.6\",\n install_requires=core_reqs,\n extras_require={\n 'aws': aws_reqs,\n 'cwl': cwl_reqs,\n 'encryption': encryption_reqs,\n 'google': google_reqs,\n 'htcondor:sys_platform!=\"darwin\"': htcondor_reqs,\n 'kubernetes': kubernetes_reqs,\n 'mesos': mesos_reqs,\n 'wdl': wdl_reqs,\n 'all': all_reqs},\n package_dir={'': 'src'},\n packages=find_packages(where='src',\n # Note that we intentionally include the top-level `test` package for\n # functionality like the @experimental and @integrative decoratorss:\n exclude=['*.test.*']),\n package_data = {\n '': ['*.yml', 'cloud-config'],\n },\n # Unfortunately, the names of the entry points are hard-coded elsewhere in the code base so\n # you can't just change them here. Luckily, most of them are pretty unique strings, and thus\n # easy to search for.\n entry_points={\n 'console_scripts': [\n 'toil = toil.utils.toilMain:main',\n '_toil_worker = toil.worker:main',\n 'cwltoil = toil.cwl.cwltoil:cwltoil_was_removed [cwl]',\n 'toil-cwl-runner = toil.cwl.cwltoil:main [cwl]',\n 'toil-wdl-runner = toil.wdl.toilwdl:main',\n '_toil_mesos_executor = toil.batchSystems.mesos.executor:main [mesos]',\n '_toil_kubernetes_executor = toil.batchSystems.kubernetes:executor [kubernetes]']})\n\n\ndef importVersion():\n \"\"\"\n Load and return the module object for src/toil/version.py, generating it from the template if\n required.\n \"\"\"\n import imp\n try:\n # Attempt to load the template first. It only exists in a working copy cloned via git.\n import version_template\n except ImportError:\n # If loading the template fails we must be in a unpacked source distribution and\n # src/toil/version.py will already exist.\n pass\n else:\n # Use the template to generate src/toil/version.py\n import os\n import errno\n from tempfile import NamedTemporaryFile\n\n new = version_template.expand_()\n try:\n with open('src/toil/version.py') as f:\n old = f.read()\n except IOError as e:\n if e.errno == errno.ENOENT:\n old = None\n else:\n raise\n\n if old != new:\n with NamedTemporaryFile(mode='w', dir='src/toil', prefix='version.py.', delete=False) as f:\n f.write(new)\n os.rename(f.name, 'src/toil/version.py')\n # Unfortunately, we can't use a straight import here because that would also load the stuff\n # defined in src/toil/__init__.py which imports modules from external dependencies that may\n # yet to be installed when setup.py is invoked.\n return imp.load_source('toil.version', 'src/toil/version.py')\n\n\nversion = importVersion()\nrunSetup()\n", "path": "setup.py"}]}
| 2,949 | 118 |
gh_patches_debug_32497 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1600 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: abfallwirtschaft_pforzheim_de has change the URL.
### I Have A Problem With:
A specific source
### What's Your Problem
The URL changes from "https://www.abfallwirtschaft-pforzheim.de/kundenportal/abfallkalender" to "https://www.abfallwirtschaft-pforzheim.de/abfallkalender". On the new Site you need to select a checkbox for the year. I think this option would disappear on the beginning of the next year. But the addon doesnt show me the calendar for 2023 anymore. Its complete empty.
### Source (if relevant)
abfallwirtschaft_pforzheim_de
### Logs
```Shell
no relevant logs
```
### Relevant Configuration
```YAML
abfallwirtschaft_pforzheim_de
```
### Checklist Source Error
- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [ ] Checked that the website of your service provider is still working
- [ ] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py]
1 from html.parser import HTMLParser
2
3 import requests
4 from waste_collection_schedule import Collection # type: ignore[attr-defined]
5 from waste_collection_schedule.service.ICS import ICS
6
7 # Source code based on rh_entsorgung_de.md
8 TITLE = "Abfallwirtschaft Pforzheim"
9 DESCRIPTION = "Source for Abfallwirtschaft Pforzheim."
10 URL = "https://www.abfallwirtschaft-pforzheim.de"
11 TEST_CASES = {
12 "Abnobstraße": {
13 "street": "Abnobastraße",
14 "house_number": 3,
15 "address_suffix": "",
16 },
17 "Im Buchbusch": {
18 "street": "Im Buchbusch",
19 "house_number": 12,
20 },
21 "Eisenbahnstraße": {
22 "street": "Eisenbahnstraße",
23 "house_number": 29,
24 "address_suffix": "-33",
25 },
26 }
27
28 ICON_MAP = {
29 "Restmuell": "mdi:trash-can",
30 "Biobehaelter": "mdi:leaf",
31 "Papierbehaelter": "mdi:package-variant",
32 "Gelbe": "mdi:recycle",
33 "Grossmuellbehaelter": "mdi:delete-circle",
34 }
35
36
37 API_URL = "https://onlineservices.abfallwirtschaft-pforzheim.de/WasteManagementPforzheim/WasteManagementServlet"
38
39 # Parser for HTML input (hidden) text
40
41
42 class HiddenInputParser(HTMLParser):
43 def __init__(self):
44 super().__init__()
45 self._args = {}
46
47 @property
48 def args(self):
49 return self._args
50
51 def handle_starttag(self, tag, attrs):
52 if tag == "input":
53 d = dict(attrs)
54 if str(d["type"]).lower() == "hidden":
55 self._args[d["name"]] = d["value"] if "value" in d else ""
56
57
58 class Source:
59 def __init__(self, street: str, house_number: int, address_suffix: str = ""):
60 self._street = street
61 self._hnr = house_number
62 self._suffix = address_suffix
63 self._ics = ICS()
64
65 def fetch(self):
66 session = requests.session()
67
68 r = session.get(
69 API_URL,
70 params={"SubmitAction": "wasteDisposalServices",
71 "InFrameMode": "TRUE"},
72 )
73 r.raise_for_status()
74 r.encoding = "utf-8"
75
76 parser = HiddenInputParser()
77 parser.feed(r.text)
78
79 args = parser.args
80 args["Ort"] = self._street[0].upper()
81 args["Strasse"] = self._street
82 args["Hausnummer"] = str(self._hnr)
83 args["Hausnummerzusatz"] = self._suffix
84 args["SubmitAction"] = "CITYCHANGED"
85 r = session.post(
86 API_URL,
87 data=args,
88 )
89 r.raise_for_status()
90
91 args["SubmitAction"] = "forward"
92 args["ContainerGewaehltRM"] = "on"
93 args["ContainerGewaehltBM"] = "on"
94 args["ContainerGewaehltLVP"] = "on"
95 args["ContainerGewaehltPA"] = "on"
96 args["ContainerGewaehltPrMuell"] = "on"
97 r = session.post(
98 API_URL,
99 data=args,
100 )
101 r.raise_for_status()
102
103 args["ApplicationName"] = "com.athos.nl.mvc.abfterm.AbfuhrTerminModel"
104 args["SubmitAction"] = "filedownload_ICAL"
105
106 r = session.post(
107 API_URL,
108 data=args,
109 )
110 r.raise_for_status()
111
112 dates = self._ics.convert(r.text)
113
114 entries = []
115 for d in dates:
116 entries.append(
117 Collection(
118 d[0], d[1], ICON_MAP.get(d[1].split(" ")[0])
119 )
120 )
121 return entries
122
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py
@@ -1,3 +1,4 @@
+from datetime import datetime
from html.parser import HTMLParser
import requests
@@ -63,12 +64,21 @@
self._ics = ICS()
def fetch(self):
+ now = datetime.now()
+ entries = self.get_data(now.year)
+ if now.month == 12:
+ try:
+ entries += self.get_data(now.year + 1)
+ except Exception:
+ pass
+ return entries
+
+ def get_data(self, year):
session = requests.session()
r = session.get(
API_URL,
- params={"SubmitAction": "wasteDisposalServices",
- "InFrameMode": "TRUE"},
+ params={"SubmitAction": "wasteDisposalServices", "InFrameMode": "TRUE"},
)
r.raise_for_status()
r.encoding = "utf-8"
@@ -82,6 +92,7 @@
args["Hausnummer"] = str(self._hnr)
args["Hausnummerzusatz"] = self._suffix
args["SubmitAction"] = "CITYCHANGED"
+ args["Zeitraum"] = f"Jahresübersicht {year}"
r = session.post(
API_URL,
data=args,
@@ -113,9 +124,5 @@
entries = []
for d in dates:
- entries.append(
- Collection(
- d[0], d[1], ICON_MAP.get(d[1].split(" ")[0])
- )
- )
+ entries.append(Collection(d[0], d[1], ICON_MAP.get(d[1].split(" ")[0])))
return entries
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py\n@@ -1,3 +1,4 @@\n+from datetime import datetime\n from html.parser import HTMLParser\n \n import requests\n@@ -63,12 +64,21 @@\n self._ics = ICS()\n \n def fetch(self):\n+ now = datetime.now()\n+ entries = self.get_data(now.year)\n+ if now.month == 12:\n+ try:\n+ entries += self.get_data(now.year + 1)\n+ except Exception:\n+ pass\n+ return entries\n+\n+ def get_data(self, year):\n session = requests.session()\n \n r = session.get(\n API_URL,\n- params={\"SubmitAction\": \"wasteDisposalServices\",\n- \"InFrameMode\": \"TRUE\"},\n+ params={\"SubmitAction\": \"wasteDisposalServices\", \"InFrameMode\": \"TRUE\"},\n )\n r.raise_for_status()\n r.encoding = \"utf-8\"\n@@ -82,6 +92,7 @@\n args[\"Hausnummer\"] = str(self._hnr)\n args[\"Hausnummerzusatz\"] = self._suffix\n args[\"SubmitAction\"] = \"CITYCHANGED\"\n+ args[\"Zeitraum\"] = f\"Jahres\u00fcbersicht {year}\"\n r = session.post(\n API_URL,\n data=args,\n@@ -113,9 +124,5 @@\n \n entries = []\n for d in dates:\n- entries.append(\n- Collection(\n- d[0], d[1], ICON_MAP.get(d[1].split(\" \")[0])\n- )\n- )\n+ entries.append(Collection(d[0], d[1], ICON_MAP.get(d[1].split(\" \")[0])))\n return entries\n", "issue": "[Bug]: abfallwirtschaft_pforzheim_de has change the URL.\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nThe URL changes from \"https://www.abfallwirtschaft-pforzheim.de/kundenportal/abfallkalender\" to \"https://www.abfallwirtschaft-pforzheim.de/abfallkalender\". On the new Site you need to select a checkbox for the year. I think this option would disappear on the beginning of the next year. But the addon doesnt show me the calendar for 2023 anymore. 
Its complete empty.\n\n### Source (if relevant)\n\nabfallwirtschaft_pforzheim_de\n\n### Logs\n\n```Shell\nno relevant logs\n```\n\n\n### Relevant Configuration\n\n```YAML\nabfallwirtschaft_pforzheim_de\n```\n\n\n### Checklist Source Error\n\n- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [ ] Checked that the website of your service provider is still working\n- [ ] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from html.parser import HTMLParser\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\n# Source code based on rh_entsorgung_de.md\nTITLE = \"Abfallwirtschaft Pforzheim\"\nDESCRIPTION = \"Source for Abfallwirtschaft Pforzheim.\"\nURL = \"https://www.abfallwirtschaft-pforzheim.de\"\nTEST_CASES = {\n \"Abnobstra\u00dfe\": {\n \"street\": \"Abnobastra\u00dfe\",\n \"house_number\": 3,\n \"address_suffix\": \"\",\n },\n \"Im Buchbusch\": {\n \"street\": \"Im Buchbusch\",\n \"house_number\": 12,\n },\n \"Eisenbahnstra\u00dfe\": {\n \"street\": \"Eisenbahnstra\u00dfe\",\n \"house_number\": 29,\n \"address_suffix\": \"-33\",\n },\n}\n\nICON_MAP = {\n \"Restmuell\": \"mdi:trash-can\",\n \"Biobehaelter\": \"mdi:leaf\",\n \"Papierbehaelter\": \"mdi:package-variant\",\n \"Gelbe\": \"mdi:recycle\",\n \"Grossmuellbehaelter\": \"mdi:delete-circle\",\n}\n\n\nAPI_URL = \"https://onlineservices.abfallwirtschaft-pforzheim.de/WasteManagementPforzheim/WasteManagementServlet\"\n\n# Parser for HTML input (hidden) text\n\n\nclass HiddenInputParser(HTMLParser):\n def __init__(self):\n super().__init__()\n self._args = {}\n\n @property\n def args(self):\n return self._args\n\n def handle_starttag(self, tag, attrs):\n if tag == \"input\":\n d = dict(attrs)\n if str(d[\"type\"]).lower() == \"hidden\":\n self._args[d[\"name\"]] = d[\"value\"] if \"value\" in d else \"\"\n\n\nclass Source:\n def __init__(self, street: str, house_number: int, address_suffix: str = \"\"):\n self._street = street\n self._hnr = house_number\n self._suffix = address_suffix\n self._ics = ICS()\n\n def fetch(self):\n session = requests.session()\n\n r = session.get(\n API_URL,\n params={\"SubmitAction\": \"wasteDisposalServices\",\n \"InFrameMode\": \"TRUE\"},\n )\n r.raise_for_status()\n r.encoding = \"utf-8\"\n\n parser = HiddenInputParser()\n parser.feed(r.text)\n\n args = parser.args\n args[\"Ort\"] = self._street[0].upper()\n args[\"Strasse\"] = self._street\n args[\"Hausnummer\"] = str(self._hnr)\n args[\"Hausnummerzusatz\"] = self._suffix\n args[\"SubmitAction\"] = \"CITYCHANGED\"\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n args[\"SubmitAction\"] = \"forward\"\n args[\"ContainerGewaehltRM\"] = \"on\"\n 
args[\"ContainerGewaehltBM\"] = \"on\"\n args[\"ContainerGewaehltLVP\"] = \"on\"\n args[\"ContainerGewaehltPA\"] = \"on\"\n args[\"ContainerGewaehltPrMuell\"] = \"on\"\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n args[\"ApplicationName\"] = \"com.athos.nl.mvc.abfterm.AbfuhrTerminModel\"\n args[\"SubmitAction\"] = \"filedownload_ICAL\"\n\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(\n Collection(\n d[0], d[1], ICON_MAP.get(d[1].split(\" \")[0])\n )\n )\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py"}]}
| 2,092 | 481 |
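The patch in this record deals with the portal's new mandatory year selection by pinning the `Zeitraum` field to a specific year and, from December onward, also requesting the following year. The sketch below isolates that rollover pattern; `get_data` is a stand-in for the provider-specific HTTP requests and is an assumption made for illustration, not part of the upstream source.

```python
from datetime import datetime


def fetch_with_year_rollover(get_data):
    """Return this year's entries, plus next year's once December starts.

    `get_data(year)` is a hypothetical callable standing in for the
    provider-specific requests performed in the real source.
    """
    now = datetime.now()
    entries = get_data(now.year)
    if now.month == 12:
        # The next year's calendar may not be published yet, so a failure
        # here should not discard the current year's data.
        try:
            entries += get_data(now.year + 1)
        except Exception:
            pass
    return entries


# Example with a stub that returns one fake entry per year.
print(fetch_with_year_rollover(lambda year: [f"entry for {year}"]))
```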
gh_patches_debug_16795
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-3042
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[chatgpt] change critic input as state
> ## 📌 Checklist before creating the PR
> * [x] I have created an issue for this PR for traceability
> * [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
> * [ ] I have added relevant tags if possible for us to better distinguish different PRs
>
> ## 🚨 Issue number
> > Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
> > e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
> > fixed #3042
>
> ## 📝 What does this PR do?
> > Summarize your work here.
> > if you have any plots/diagrams/screenshots/tables, please attach them here.
>
> This commit fix chatgpt critic input as state according to A2C RL algorithm.
>
> ## 💥 Checklist before requesting a review
> * [x] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
> * [x] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
> * [x] I have performed a self-review of my code
> * [ ] I have added thorough tests.
> * [ ] I have added docstrings for all the functions/methods I implemented
>
> ## ⭐️ Do you enjoy contributing to Colossal-AI?
> * [x] 🌝 Yes, I do.
> * [ ] 🌚 No, I don't.
>
> Tell us more if you don't enjoy contributing to Colossal-AI.
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of applications/ChatGPT/chatgpt/models/base/critic.py]
1 from typing import Optional
2
3 import torch
4 import torch.nn as nn
5
6 from ..lora import LoRAModule
7 from ..utils import masked_mean
8
9
10 class Critic(LoRAModule):
11 """
12 Critic model base class.
13
14 Args:
15 model (nn.Module): Critic model.
16 value_head (nn.Module): Value head to get value.
17 lora_rank (int): LoRA rank.
18 lora_train_bias (str): LoRA bias training mode.
19 """
20
21 def __init__(self,
22 model: nn.Module,
23 value_head: nn.Module,
24 lora_rank: int = 0,
25 lora_train_bias: str = 'none') -> None:
26
27 super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)
28 self.model = model
29 self.value_head = value_head
30 self.convert_to_lora()
31
32 def forward(self,
33 sequences: torch.LongTensor,
34 action_mask: Optional[torch.Tensor] = None,
35 attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
36 outputs = self.model(sequences, attention_mask=attention_mask)
37 last_hidden_states = outputs['last_hidden_state']
38
39 values = self.value_head(last_hidden_states).squeeze(-1)[:, :-1]
40
41 if action_mask is not None:
42 num_actions = action_mask.size(1)
43 values = values[:, -num_actions:]
44 value = masked_mean(values, action_mask, dim=1)
45 return value
46 value = values.mean(dim=1).squeeze(1)
47 return value
48
[end of applications/ChatGPT/chatgpt/models/base/critic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/applications/ChatGPT/chatgpt/models/base/critic.py b/applications/ChatGPT/chatgpt/models/base/critic.py
--- a/applications/ChatGPT/chatgpt/models/base/critic.py
+++ b/applications/ChatGPT/chatgpt/models/base/critic.py
@@ -36,12 +36,15 @@
outputs = self.model(sequences, attention_mask=attention_mask)
last_hidden_states = outputs['last_hidden_state']
- values = self.value_head(last_hidden_states).squeeze(-1)[:, :-1]
+ values = self.value_head(last_hidden_states).squeeze(-1)
if action_mask is not None:
num_actions = action_mask.size(1)
- values = values[:, -num_actions:]
- value = masked_mean(values, action_mask, dim=1)
+ prompt_mask = attention_mask[:, :-num_actions]
+ values = values[:, :-num_actions]
+ value = masked_mean(values, prompt_mask, dim=1)
return value
+
+ values = values[:, :-1]
value = values.mean(dim=1).squeeze(1)
return value
|
{"golden_diff": "diff --git a/applications/ChatGPT/chatgpt/models/base/critic.py b/applications/ChatGPT/chatgpt/models/base/critic.py\n--- a/applications/ChatGPT/chatgpt/models/base/critic.py\n+++ b/applications/ChatGPT/chatgpt/models/base/critic.py\n@@ -36,12 +36,15 @@\n outputs = self.model(sequences, attention_mask=attention_mask)\n last_hidden_states = outputs['last_hidden_state']\n \n- values = self.value_head(last_hidden_states).squeeze(-1)[:, :-1]\n+ values = self.value_head(last_hidden_states).squeeze(-1)\n \n if action_mask is not None:\n num_actions = action_mask.size(1)\n- values = values[:, -num_actions:]\n- value = masked_mean(values, action_mask, dim=1)\n+ prompt_mask = attention_mask[:, :-num_actions]\n+ values = values[:, :-num_actions]\n+ value = masked_mean(values, prompt_mask, dim=1)\n return value\n+\n+ values = values[:, :-1]\n value = values.mean(dim=1).squeeze(1)\n return value\n", "issue": "[chatgpt] change critic input as state\n> ## \ud83d\udccc Checklist before creating the PR\r\n> * [x] I have created an issue for this PR for traceability\r\n> * [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`\r\n> * [ ] I have added relevant tags if possible for us to better distinguish different PRs\r\n> \r\n> ## \ud83d\udea8 Issue number\r\n> > Link this PR to your issue with words like fixed to automatically close the linked issue upon merge\r\n> > e.g. `fixed #1234`, `closed #1234`, `resolved #1234`\r\n> > fixed #3042\r\n> \r\n> ## \ud83d\udcdd What does this PR do?\r\n> > Summarize your work here.\r\n> > if you have any plots/diagrams/screenshots/tables, please attach them here.\r\n> \r\n> This commit fix chatgpt critic input as state according to A2C RL algorithm.\r\n> \r\n> ## \ud83d\udca5 Checklist before requesting a review\r\n> * [x] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))\r\n> * [x] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible\r\n> * [x] I have performed a self-review of my code\r\n> * [ ] I have added thorough tests.\r\n> * [ ] I have added docstrings for all the functions/methods I implemented\r\n> \r\n> ## \u2b50\ufe0f Do you enjoy contributing to Colossal-AI?\r\n> * [x] \ud83c\udf1d Yes, I do.\r\n> * [ ] \ud83c\udf1a No, I don't.\r\n> \r\n> Tell us more if you don't enjoy contributing to Colossal-AI.\r\n\r\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from typing import Optional\n\nimport torch\nimport torch.nn as nn\n\nfrom ..lora import LoRAModule\nfrom ..utils import masked_mean\n\n\nclass Critic(LoRAModule):\n \"\"\"\n Critic model base class.\n\n Args:\n model (nn.Module): Critic model.\n value_head (nn.Module): Value head to get value.\n lora_rank (int): LoRA rank.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n model: nn.Module,\n value_head: nn.Module,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n\n super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)\n self.model = model\n self.value_head = value_head\n self.convert_to_lora()\n\n def forward(self,\n sequences: torch.LongTensor,\n action_mask: Optional[torch.Tensor] = None,\n attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:\n outputs = self.model(sequences, attention_mask=attention_mask)\n last_hidden_states = outputs['last_hidden_state']\n\n values = 
self.value_head(last_hidden_states).squeeze(-1)[:, :-1]\n\n if action_mask is not None:\n num_actions = action_mask.size(1)\n values = values[:, -num_actions:]\n value = masked_mean(values, action_mask, dim=1)\n return value\n value = values.mean(dim=1).squeeze(1)\n return value\n", "path": "applications/ChatGPT/chatgpt/models/base/critic.py"}]}
| 1,404 | 254 |
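The essential change in this record is that the critic's value is averaged over the prompt (state) positions rather than the generated action positions, in line with A2C. The snippet below is a minimal illustration of that masked averaging step; the tensor shapes and the `masked_mean` implementation shown here are assumptions chosen for the example, not the project's exact code.

```python
import torch


def masked_mean(tensor: torch.Tensor, mask: torch.Tensor, dim: int = 1) -> torch.Tensor:
    # Average only over positions where mask == 1.
    masked = tensor * mask
    return masked.sum(dim=dim) / mask.sum(dim=dim).clamp(min=1)


# Hypothetical shapes: a batch of 2 sequences, 6 tokens each, last 2 tokens generated.
values = torch.randn(2, 6)         # per-token value estimates from the value head
attention_mask = torch.ones(2, 6)  # all tokens are real (no padding) in this example
num_actions = 2

prompt_mask = attention_mask[:, :-num_actions]  # mask covering the state/prompt tokens
state_values = values[:, :-num_actions]         # drop the action positions
value = masked_mean(state_values, prompt_mask, dim=1)  # one value per sequence
print(value.shape)  # torch.Size([2])
```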
gh_patches_debug_10448
|
rasdani/github-patches
|
git_diff
|
biolab__orange3-text-176
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Chardet fails on Slovenian characters
Preprocess Text fails with Slovenian stopword list. Seems like a chardet issue.
</issue>
<code>
[start of orangecontrib/text/preprocess/filter.py]
1 import os
2
3 import re
4 from gensim import corpora
5 from nltk.corpus import stopwords
6
7 __all__ = ['BaseTokenFilter', 'StopwordsFilter', 'LexiconFilter', 'RegexpFilter', 'FrequencyFilter']
8
9
10 class BaseTokenFilter:
11 name = NotImplemented
12
13 def __call__(self, corpus):
14 if len(corpus) == 0:
15 return corpus
16 if isinstance(corpus[0], str):
17 return self.filter(corpus)
18 return [self.filter(tokens) for tokens in corpus]
19
20 def filter(self, tokens):
21 return list(filter(self.check, tokens))
22
23 def check(self, token):
24 raise NotImplementedError
25
26 def __str__(self):
27 return self.name
28
29 def set_up(self):
30 """ A method for setting filters up before every __call__. """
31 pass
32
33 def tear_down(self):
34 """ A method for cleaning up after every __call__. """
35 pass
36
37
38 class WordListMixin:
39 def __init__(self, word_list=None):
40 self.file_path = None
41 self.word_list = word_list or []
42
43 def from_file(self, path):
44 self.file_path = path
45 if not path:
46 self.word_list = []
47 else:
48 with open(path) as f:
49 self.word_list = set([line.strip() for line in f])
50
51
52 class StopwordsFilter(BaseTokenFilter, WordListMixin):
53 """ Remove tokens present in NLTK's language specific lists or a file. """
54 name = 'Stopwords'
55
56 supported_languages = [file.capitalize() for file in os.listdir(stopwords._get_root())
57 if file.islower()]
58
59 def __init__(self, language='English', word_list=None):
60 WordListMixin.__init__(self, word_list)
61 super().__init__()
62 self.language = language
63
64 @property
65 def language(self):
66 return self._language
67
68 @language.setter
69 def language(self, value):
70 self._language = value
71 if not self._language:
72 self.stopwords = []
73 else:
74 self.stopwords = set(stopwords.words(self.language.lower()))
75
76 def __str__(self):
77 config = ''
78 config += 'Language: {}, '.format(self.language.capitalize()) if self.language else ''
79 config += 'File: {}, '.format(self.file_path) if self.file_path else ''
80 return '{} ({})'.format(self.name, config.strip(', '))
81
82 def check(self, token):
83 return token not in self.stopwords and token not in self.word_list
84
85
86 class LexiconFilter(BaseTokenFilter, WordListMixin):
87 """ Keep only tokens present in a file. """
88 name = 'Lexicon'
89
90 def __init__(self, lexicon=None):
91 WordListMixin.__init__(self, word_list=lexicon)
92
93 @property
94 def lexicon(self):
95 return self.word_list
96
97 @lexicon.setter
98 def lexicon(self, value):
99 self.word_list = set(value)
100
101 def check(self, token):
102 return not self.lexicon or token in self.lexicon
103
104 def __str__(self):
105 return '{} ({})'.format(self.name, 'File: {}'.format(self.file_path))
106
107
108 class RegexpFilter(BaseTokenFilter):
109 """ Remove tokens matching this regular expressions. """
110 name = 'Regexp'
111
112 def __init__(self, pattern=r'\.|,|:|!|\?'):
113 self._pattern = pattern
114 # Compiled Regexes are NOT deepcopy-able and hence to make Corpus deepcopy-able
115 # we cannot store then (due to Corpus also storing used_preprocessor for BoW compute values).
116 # To bypass the problem regex is compiled before every __call__ and discarded right after.
117 self.regex = None
118 self.set_up()
119
120 @property
121 def pattern(self):
122 return self._pattern
123
124 @pattern.setter
125 def pattern(self, value):
126 self._pattern = value
127 self.set_up()
128
129 @staticmethod
130 def validate_regexp(regexp):
131 try:
132 re.compile(regexp)
133 return True
134 except re.error:
135 return False
136
137 def check(self, token):
138 return not self.regex.match(token)
139
140 def __str__(self):
141 return '{} ({})'.format(self.name, self.pattern)
142
143 def set_up(self):
144 """ Compile Regex before the __call__. """
145 self.regex = re.compile(self.pattern)
146
147 def tear_down(self):
148 """ Delete Regex after every __call__. """
149 self.regex = None
150
151
152 class FrequencyFilter(LexiconFilter):
153 """Remove tokens with document frequency outside this range;
154 use either absolute or relative frequency. """
155 name = 'Document frequency'
156
157 def __init__(self, min_df=0., max_df=1., keep_n=None):
158 super().__init__()
159 self._corpus_len = 0
160 self.keep_n = keep_n
161 self._max_df = max_df
162 self._min_df = min_df
163
164 def fit_filter(self, corpus):
165 self._corpus_len = len(corpus)
166 tokens = getattr(corpus, 'tokens', corpus)
167 dictionary = corpora.Dictionary(tokens)
168 dictionary.filter_extremes(self.min_df, self.max_df, self.keep_n)
169 self.lexicon = dictionary.token2id.keys()
170 return self(tokens), dictionary
171
172 @property
173 def max_df(self):
174 if isinstance(self._max_df, int):
175 return self._max_df / self._corpus_len if self._corpus_len else 1.
176 else:
177 return self._max_df
178
179 @max_df.setter
180 def max_df(self, value):
181 self._max_df = value
182
183 @property
184 def min_df(self):
185 if isinstance(self._min_df, float):
186 return int(self._corpus_len * self._min_df) or 1
187 else:
188 return self._min_df
189
190 @min_df.setter
191 def min_df(self, value):
192 self._min_df = value
193
194 def __str__(self):
195 keep = ', keep {}'.format(self.keep_n) if self.keep_n else ''
196 return "{} (range [{}, {}]{})".format(self.name, self._min_df,
197 self._max_df, keep)
198
199 def check(self, token):
200 return token in self.lexicon
201
[end of orangecontrib/text/preprocess/filter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/orangecontrib/text/preprocess/filter.py b/orangecontrib/text/preprocess/filter.py
--- a/orangecontrib/text/preprocess/filter.py
+++ b/orangecontrib/text/preprocess/filter.py
@@ -1,6 +1,7 @@
import os
-
import re
+
+from Orange.data.io import detect_encoding
from gensim import corpora
from nltk.corpus import stopwords
@@ -45,7 +46,8 @@
if not path:
self.word_list = []
else:
- with open(path) as f:
+ enc = detect_encoding(path)
+ with open(path, encoding=enc) as f:
self.word_list = set([line.strip() for line in f])
|
{"golden_diff": "diff --git a/orangecontrib/text/preprocess/filter.py b/orangecontrib/text/preprocess/filter.py\n--- a/orangecontrib/text/preprocess/filter.py\n+++ b/orangecontrib/text/preprocess/filter.py\n@@ -1,6 +1,7 @@\n import os\n-\n import re\n+\n+from Orange.data.io import detect_encoding\n from gensim import corpora\n from nltk.corpus import stopwords\n \n@@ -45,7 +46,8 @@\n if not path:\n self.word_list = []\n else:\n- with open(path) as f:\n+ enc = detect_encoding(path)\n+ with open(path, encoding=enc) as f:\n self.word_list = set([line.strip() for line in f])\n", "issue": "Chardet fails on Slovenian characters\nPreprocess Text fails with Slovenian stopword list. Seems like a chardet issue.\n", "before_files": [{"content": "import os\n\nimport re\nfrom gensim import corpora\nfrom nltk.corpus import stopwords\n\n__all__ = ['BaseTokenFilter', 'StopwordsFilter', 'LexiconFilter', 'RegexpFilter', 'FrequencyFilter']\n\n\nclass BaseTokenFilter:\n name = NotImplemented\n\n def __call__(self, corpus):\n if len(corpus) == 0:\n return corpus\n if isinstance(corpus[0], str):\n return self.filter(corpus)\n return [self.filter(tokens) for tokens in corpus]\n\n def filter(self, tokens):\n return list(filter(self.check, tokens))\n\n def check(self, token):\n raise NotImplementedError\n\n def __str__(self):\n return self.name\n\n def set_up(self):\n \"\"\" A method for setting filters up before every __call__. \"\"\"\n pass\n\n def tear_down(self):\n \"\"\" A method for cleaning up after every __call__. \"\"\"\n pass\n\n\nclass WordListMixin:\n def __init__(self, word_list=None):\n self.file_path = None\n self.word_list = word_list or []\n\n def from_file(self, path):\n self.file_path = path\n if not path:\n self.word_list = []\n else:\n with open(path) as f:\n self.word_list = set([line.strip() for line in f])\n\n\nclass StopwordsFilter(BaseTokenFilter, WordListMixin):\n \"\"\" Remove tokens present in NLTK's language specific lists or a file. \"\"\"\n name = 'Stopwords'\n\n supported_languages = [file.capitalize() for file in os.listdir(stopwords._get_root())\n if file.islower()]\n\n def __init__(self, language='English', word_list=None):\n WordListMixin.__init__(self, word_list)\n super().__init__()\n self.language = language\n\n @property\n def language(self):\n return self._language\n\n @language.setter\n def language(self, value):\n self._language = value\n if not self._language:\n self.stopwords = []\n else:\n self.stopwords = set(stopwords.words(self.language.lower()))\n\n def __str__(self):\n config = ''\n config += 'Language: {}, '.format(self.language.capitalize()) if self.language else ''\n config += 'File: {}, '.format(self.file_path) if self.file_path else ''\n return '{} ({})'.format(self.name, config.strip(', '))\n\n def check(self, token):\n return token not in self.stopwords and token not in self.word_list\n\n\nclass LexiconFilter(BaseTokenFilter, WordListMixin):\n \"\"\" Keep only tokens present in a file. \"\"\"\n name = 'Lexicon'\n\n def __init__(self, lexicon=None):\n WordListMixin.__init__(self, word_list=lexicon)\n\n @property\n def lexicon(self):\n return self.word_list\n\n @lexicon.setter\n def lexicon(self, value):\n self.word_list = set(value)\n\n def check(self, token):\n return not self.lexicon or token in self.lexicon\n\n def __str__(self):\n return '{} ({})'.format(self.name, 'File: {}'.format(self.file_path))\n\n\nclass RegexpFilter(BaseTokenFilter):\n \"\"\" Remove tokens matching this regular expressions. 
\"\"\"\n name = 'Regexp'\n\n def __init__(self, pattern=r'\\.|,|:|!|\\?'):\n self._pattern = pattern\n # Compiled Regexes are NOT deepcopy-able and hence to make Corpus deepcopy-able\n # we cannot store then (due to Corpus also storing used_preprocessor for BoW compute values).\n # To bypass the problem regex is compiled before every __call__ and discarded right after.\n self.regex = None\n self.set_up()\n\n @property\n def pattern(self):\n return self._pattern\n\n @pattern.setter\n def pattern(self, value):\n self._pattern = value\n self.set_up()\n\n @staticmethod\n def validate_regexp(regexp):\n try:\n re.compile(regexp)\n return True\n except re.error:\n return False\n\n def check(self, token):\n return not self.regex.match(token)\n\n def __str__(self):\n return '{} ({})'.format(self.name, self.pattern)\n\n def set_up(self):\n \"\"\" Compile Regex before the __call__. \"\"\"\n self.regex = re.compile(self.pattern)\n\n def tear_down(self):\n \"\"\" Delete Regex after every __call__. \"\"\"\n self.regex = None\n\n\nclass FrequencyFilter(LexiconFilter):\n \"\"\"Remove tokens with document frequency outside this range;\n use either absolute or relative frequency. \"\"\"\n name = 'Document frequency'\n\n def __init__(self, min_df=0., max_df=1., keep_n=None):\n super().__init__()\n self._corpus_len = 0\n self.keep_n = keep_n\n self._max_df = max_df\n self._min_df = min_df\n\n def fit_filter(self, corpus):\n self._corpus_len = len(corpus)\n tokens = getattr(corpus, 'tokens', corpus)\n dictionary = corpora.Dictionary(tokens)\n dictionary.filter_extremes(self.min_df, self.max_df, self.keep_n)\n self.lexicon = dictionary.token2id.keys()\n return self(tokens), dictionary\n\n @property\n def max_df(self):\n if isinstance(self._max_df, int):\n return self._max_df / self._corpus_len if self._corpus_len else 1.\n else:\n return self._max_df\n\n @max_df.setter\n def max_df(self, value):\n self._max_df = value\n\n @property\n def min_df(self):\n if isinstance(self._min_df, float):\n return int(self._corpus_len * self._min_df) or 1\n else:\n return self._min_df\n\n @min_df.setter\n def min_df(self, value):\n self._min_df = value\n\n def __str__(self):\n keep = ', keep {}'.format(self.keep_n) if self.keep_n else ''\n return \"{} (range [{}, {}]{})\".format(self.name, self._min_df,\n self._max_df, keep)\n\n def check(self, token):\n return token in self.lexicon\n", "path": "orangecontrib/text/preprocess/filter.py"}]}
| 2,434 | 156 |
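The fix in this record reads the custom stopword file with a detected encoding instead of the locale default, which is what made Slovenian characters fail. The sketch below shows the same idea using the `chardet` package directly; the upstream patch uses Orange's `detect_encoding` helper instead, so treat this as an illustration of the approach rather than the merged code.

```python
import chardet


def read_word_list(path: str) -> set:
    # Sniff the encoding from the raw bytes first, then decode with it so
    # non-ASCII stopwords (e.g. Slovenian č, š, ž) survive intact.
    with open(path, "rb") as f:
        raw = f.read()
    encoding = chardet.detect(raw)["encoding"] or "utf-8"
    return {line.strip() for line in raw.decode(encoding).splitlines() if line.strip()}
```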
gh_patches_debug_36284
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2966
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider albert_heijn is broken
During the global build at 2021-06-02-14-42-40, spider **albert_heijn** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/albert_heijn.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/albert_heijn.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/albert_heijn.geojson))
</issue>
<code>
[start of locations/spiders/albert_heijn.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 from locations.items import GeojsonPointItem
4 import json
5
6 class AlbertHeijnSpider(scrapy.Spider):
7 name = 'albert_heijn'
8 item_attributes = {'brand': "Albert Heijn"}
9 allowed_domains = ['www.ah.nl']
10
11 def start_requests(self):
12 url = 'https://www.ah.nl/data/winkelinformatie/winkels/json'
13 yield scrapy.Request(url, callback=self.parse)
14
15 def parse(self, response):
16 stores = json.loads(response.body_as_unicode())
17 for store in stores['stores']:
18 try:
19 phone_number = store['phoneNumber']
20 except:
21 phone_number = ""
22 yield GeojsonPointItem(
23 lat=store['lat'],
24 lon=store['lng'],
25 addr_full="%s %s" % (store['street'], store["housenr"]),
26 city=store['city'],
27 phone=phone_number,
28 state="",
29 postcode=store['zip'],
30 ref=store['no'],
31 country="Netherlands",
32 website="https://www.ah.nl/winkel/albert-heijn/%s/%s/%s" % (store['city'], store['street'], store['no'])
33 )
34
[end of locations/spiders/albert_heijn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/albert_heijn.py b/locations/spiders/albert_heijn.py
--- a/locations/spiders/albert_heijn.py
+++ b/locations/spiders/albert_heijn.py
@@ -1,33 +1,53 @@
# -*- coding: utf-8 -*-
+import json
+import re
+
import scrapy
+
+from locations.hours import OpeningHours
from locations.items import GeojsonPointItem
-import json
-class AlbertHeijnSpider(scrapy.Spider):
- name = 'albert_heijn'
- item_attributes = {'brand': "Albert Heijn"}
- allowed_domains = ['www.ah.nl']
- def start_requests(self):
- url = 'https://www.ah.nl/data/winkelinformatie/winkels/json'
- yield scrapy.Request(url, callback=self.parse)
+class AlbertHeijnSpider(scrapy.Spider):
+ name = "albert_heijn"
+ item_attributes = {"brand": "Albert Heijn", "brand_wikidata": "Q1653985"}
+ allowed_domains = ["www.ah.nl", "www.ah.be"]
+ start_urls = (
+ "https://www.ah.nl/sitemaps/entities/stores/stores.xml",
+ "https://www.ah.be/sitemaps/entities/stores/stores.xml",
+ )
def parse(self, response):
- stores = json.loads(response.body_as_unicode())
- for store in stores['stores']:
- try:
- phone_number = store['phoneNumber']
- except:
- phone_number = ""
- yield GeojsonPointItem(
- lat=store['lat'],
- lon=store['lng'],
- addr_full="%s %s" % (store['street'], store["housenr"]),
- city=store['city'],
- phone=phone_number,
- state="",
- postcode=store['zip'],
- ref=store['no'],
- country="Netherlands",
- website="https://www.ah.nl/winkel/albert-heijn/%s/%s/%s" % (store['city'], store['street'], store['no'])
- )
+ response.selector.remove_namespaces()
+ for url in response.xpath("//loc/text()").extract():
+ if re.search("/winkel/albert-heijn/", url):
+ yield scrapy.Request(url, callback=self.parse_store)
+
+ def parse_store(self, response):
+ for ldjson in response.xpath(
+ '//script[@type="application/ld+json"]/text()'
+ ).extract():
+ data = json.loads(ldjson)
+ if data["@type"] != "GroceryStore":
+ continue
+
+ opening_hours = OpeningHours()
+ for spec in data["openingHoursSpecification"]:
+ opening_hours.add_range(
+ spec["dayOfWeek"][:2], spec["opens"], spec["closes"]
+ )
+
+ properties = {
+ "ref": response.url,
+ "website": response.url,
+ "name": data["name"],
+ "phone": data["telephone"],
+ "lat": data["geo"]["latitude"],
+ "lon": data["geo"]["longitude"],
+ "addr_full": data["address"]["streetAddress"],
+ "city": data["address"]["addressLocality"],
+ "postcode": data["address"]["postalCode"],
+ "country": data["address"]["addressCountry"],
+ "opening_hours": opening_hours.as_opening_hours(),
+ }
+ yield GeojsonPointItem(**properties)
|
{"golden_diff": "diff --git a/locations/spiders/albert_heijn.py b/locations/spiders/albert_heijn.py\n--- a/locations/spiders/albert_heijn.py\n+++ b/locations/spiders/albert_heijn.py\n@@ -1,33 +1,53 @@\n # -*- coding: utf-8 -*-\n+import json\n+import re\n+\n import scrapy\n+\n+from locations.hours import OpeningHours\n from locations.items import GeojsonPointItem\n-import json\n \n-class AlbertHeijnSpider(scrapy.Spider):\n- name = 'albert_heijn'\n- item_attributes = {'brand': \"Albert Heijn\"}\n- allowed_domains = ['www.ah.nl']\n \n- def start_requests(self):\n- url = 'https://www.ah.nl/data/winkelinformatie/winkels/json'\n- yield scrapy.Request(url, callback=self.parse)\n+class AlbertHeijnSpider(scrapy.Spider):\n+ name = \"albert_heijn\"\n+ item_attributes = {\"brand\": \"Albert Heijn\", \"brand_wikidata\": \"Q1653985\"}\n+ allowed_domains = [\"www.ah.nl\", \"www.ah.be\"]\n+ start_urls = (\n+ \"https://www.ah.nl/sitemaps/entities/stores/stores.xml\",\n+ \"https://www.ah.be/sitemaps/entities/stores/stores.xml\",\n+ )\n \n def parse(self, response):\n- stores = json.loads(response.body_as_unicode())\n- for store in stores['stores']:\n- try:\n- phone_number = store['phoneNumber']\n- except:\n- phone_number = \"\"\n- yield GeojsonPointItem(\n- lat=store['lat'],\n- lon=store['lng'],\n- addr_full=\"%s %s\" % (store['street'], store[\"housenr\"]),\n- city=store['city'],\n- phone=phone_number,\n- state=\"\",\n- postcode=store['zip'],\n- ref=store['no'],\n- country=\"Netherlands\",\n- website=\"https://www.ah.nl/winkel/albert-heijn/%s/%s/%s\" % (store['city'], store['street'], store['no'])\n- )\n+ response.selector.remove_namespaces()\n+ for url in response.xpath(\"//loc/text()\").extract():\n+ if re.search(\"/winkel/albert-heijn/\", url):\n+ yield scrapy.Request(url, callback=self.parse_store)\n+\n+ def parse_store(self, response):\n+ for ldjson in response.xpath(\n+ '//script[@type=\"application/ld+json\"]/text()'\n+ ).extract():\n+ data = json.loads(ldjson)\n+ if data[\"@type\"] != \"GroceryStore\":\n+ continue\n+\n+ opening_hours = OpeningHours()\n+ for spec in data[\"openingHoursSpecification\"]:\n+ opening_hours.add_range(\n+ spec[\"dayOfWeek\"][:2], spec[\"opens\"], spec[\"closes\"]\n+ )\n+\n+ properties = {\n+ \"ref\": response.url,\n+ \"website\": response.url,\n+ \"name\": data[\"name\"],\n+ \"phone\": data[\"telephone\"],\n+ \"lat\": data[\"geo\"][\"latitude\"],\n+ \"lon\": data[\"geo\"][\"longitude\"],\n+ \"addr_full\": data[\"address\"][\"streetAddress\"],\n+ \"city\": data[\"address\"][\"addressLocality\"],\n+ \"postcode\": data[\"address\"][\"postalCode\"],\n+ \"country\": data[\"address\"][\"addressCountry\"],\n+ \"opening_hours\": opening_hours.as_opening_hours(),\n+ }\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider albert_heijn is broken\nDuring the global build at 2021-06-02-14-42-40, spider **albert_heijn** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/albert_heijn.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/albert_heijn.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/albert_heijn.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom locations.items import GeojsonPointItem\nimport json\n\nclass AlbertHeijnSpider(scrapy.Spider):\n name = 'albert_heijn'\n item_attributes = {'brand': \"Albert Heijn\"}\n allowed_domains = ['www.ah.nl']\n\n 
def start_requests(self):\n url = 'https://www.ah.nl/data/winkelinformatie/winkels/json'\n yield scrapy.Request(url, callback=self.parse)\n\n def parse(self, response):\n stores = json.loads(response.body_as_unicode())\n for store in stores['stores']:\n try:\n phone_number = store['phoneNumber']\n except:\n phone_number = \"\"\n yield GeojsonPointItem(\n lat=store['lat'],\n lon=store['lng'],\n addr_full=\"%s %s\" % (store['street'], store[\"housenr\"]),\n city=store['city'],\n phone=phone_number,\n state=\"\",\n postcode=store['zip'],\n ref=store['no'],\n country=\"Netherlands\",\n website=\"https://www.ah.nl/winkel/albert-heijn/%s/%s/%s\" % (store['city'], store['street'], store['no'])\n )\n", "path": "locations/spiders/albert_heijn.py"}]}
| 1,055 | 782 |
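The rewritten spider in this record switches from the retired JSON endpoint to the store sitemap plus each page's `application/ld+json` block. The snippet below sketches just the structured-data extraction step outside of Scrapy; the embedded HTML is fabricated sample data used only to make the example self-contained.

```python
import json
import re

# Hypothetical fragment of a store page; real pages embed a similar block.
html = '''
<script type="application/ld+json">
{"@type": "GroceryStore", "name": "Albert Heijn Example",
 "geo": {"latitude": 52.37, "longitude": 4.89},
 "address": {"streetAddress": "Example 1", "addressLocality": "Amsterdam",
             "postalCode": "1011 AB", "addressCountry": "NL"}}
</script>
'''

for block in re.findall(r'<script type="application/ld\+json">(.*?)</script>', html, re.S):
    data = json.loads(block)
    if data.get("@type") != "GroceryStore":
        continue  # skip other structured-data blocks on the page
    print(data["name"], data["geo"]["latitude"], data["geo"]["longitude"])
```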
gh_patches_debug_25605
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-1878
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider tesla is broken
During the global build at 2021-05-26-14-42-23, spider **tesla** failed with **486 features** and **5 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tesla.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tesla.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tesla.geojson))
</issue>
<code>
[start of locations/spiders/tesla.py]
1 # -*- coding: utf-8 -*-
2 import re
3 import scrapy
4 import urllib.parse
5 from locations.items import GeojsonPointItem
6
7
8 class TeslaSpider(scrapy.Spider):
9 name = "tesla"
10 item_attributes = { 'brand': "Tesla" }
11 allowed_domains = ['www.tesla.com']
12 start_urls = [
13 'https://www.tesla.com/findus/list',
14 ]
15 download_delay = 0.5
16 custom_settings = {
17 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
18 }
19
20 def parse(self, response):
21 # Only scrape stores and service centers
22 country_urls = response.xpath('//a[contains(@href,"stores") or contains(@href,"services")]/@href').extract()
23 for country_url in country_urls:
24 yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)
25
26 def parse_store_list(self, response):
27 store_urls = response.xpath('//a[@class="fn org url"]/@href').extract()
28 for store_url in store_urls:
29 yield scrapy.Request(response.urljoin(store_url), callback=self.parse_store)
30
31 def parse_store(self, response):
32 # Skip if "Coming Soon" - no content to capture yet
33 if response.xpath('//span[@class="coming-soon"]/text()').extract_first() == "Coming Soon":
34 pass
35 else:
36 ref = re.search(r'.+/(.+?)/?(?:\.html|$)', response.url).group(1)
37
38 # city, state, and zip do not have separate classes - contained together in locality class as text
39 name = response.xpath('normalize-space(//header/h1/text())').extract_first()
40 common_name = response.xpath('normalize-space(//span[@class="common-name"]//text())').extract_first()
41 street_address = response.xpath('normalize-space(//span[@class="street-address"]//text())').extract_first()
42 city_state_zip = response.xpath('normalize-space(//span[@class="locality"]//text())').extract_first()
43
44 if common_name and street_address and city_state_zip:
45 addr_full = common_name + ' ' + street_address + ', ' + city_state_zip
46 elif street_address and not city_state_zip:
47 addr_full = street_address
48 elif city_state_zip and not street_address:
49 addr_full = city_state_zip
50 elif street_address and city_state_zip:
51 addr_full = street_address + ', ' + city_state_zip
52
53 country_url = response.xpath('//header[@class="findus-list-header"]/a/@href').extract_first()
54 country = urllib.parse.unquote_plus(re.search(r'.+/(.+?)/?(?:\.html|$)', country_url).group(1))
55 phone = response.xpath('normalize-space(//span[@class="tel"]/span[2]/text())').extract_first()
56 location_type = re.search(r".+/(.+?)/(.+?)/?(?:\.html|$)", response.url).group(1)
57
58 # map link varies across store pages
59 if response.xpath('normalize-space(//a[contains(@href,"maps.google")]/@href)').extract_first():
60 map_link = response.xpath('normalize-space(//a[contains(@href,"maps.google")]/@href)').extract_first()
61 else:
62 map_link = response.xpath('normalize-space(//img[contains(@src,"maps.google")]/@src)').extract_first()
63
64 # extract coordinates from map link
65 if re.search(r'.+=([0-9.-]+),\s?([0-9.-]+)', map_link):
66 lat = re.search(r'.+=([0-9.-]+),\s?([0-9.-]+)', map_link).group(1)
67 lon = re.search(r'.+=([0-9.-]+),\s?([0-9.-]+)', map_link).group(2)
68 elif re.search(r'.+@([0-9.-]+),\s?([0-9.-]+)', map_link):
69 lat = re.search(r'.+@([0-9.-]+),\s?([0-9.-]+)', map_link).group(1)
70 lon = re.search(r'.+@([0-9.-]+),\s?([0-9.-]+)', map_link).group(2)
71 else:
72 lat = None
73 lon = None
74
75 properties = {
76 'ref': ref,
77 'name': name,
78 'addr_full': addr_full,
79 'country': country,
80 'phone': phone,
81 'website': response.url,
82 'lat': lat,
83 'lon': lon,
84 'extras':
85 {
86 'location_type': location_type # Is this a service center or store/gallery
87 }
88 }
89
90 yield GeojsonPointItem(**properties)
91
[end of locations/spiders/tesla.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/tesla.py b/locations/spiders/tesla.py
--- a/locations/spiders/tesla.py
+++ b/locations/spiders/tesla.py
@@ -19,7 +19,7 @@
def parse(self, response):
# Only scrape stores and service centers
- country_urls = response.xpath('//a[contains(@href,"stores") or contains(@href,"services")]/@href').extract()
+ country_urls = response.xpath('//a[contains(@href,"stores") or contains(@href,"services") or contains(@href,"superchargers")]/@href').extract()
for country_url in country_urls:
yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)
@@ -41,6 +41,7 @@
street_address = response.xpath('normalize-space(//span[@class="street-address"]//text())').extract_first()
city_state_zip = response.xpath('normalize-space(//span[@class="locality"]//text())').extract_first()
+ addr_full = ""
if common_name and street_address and city_state_zip:
addr_full = common_name + ' ' + street_address + ', ' + city_state_zip
elif street_address and not city_state_zip:
|
{"golden_diff": "diff --git a/locations/spiders/tesla.py b/locations/spiders/tesla.py\n--- a/locations/spiders/tesla.py\n+++ b/locations/spiders/tesla.py\n@@ -19,7 +19,7 @@\n \n def parse(self, response):\n # Only scrape stores and service centers\n- country_urls = response.xpath('//a[contains(@href,\"stores\") or contains(@href,\"services\")]/@href').extract()\n+ country_urls = response.xpath('//a[contains(@href,\"stores\") or contains(@href,\"services\") or contains(@href,\"superchargers\")]/@href').extract()\n for country_url in country_urls:\n yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)\n \n@@ -41,6 +41,7 @@\n street_address = response.xpath('normalize-space(//span[@class=\"street-address\"]//text())').extract_first()\n city_state_zip = response.xpath('normalize-space(//span[@class=\"locality\"]//text())').extract_first()\n \n+ addr_full = \"\"\n if common_name and street_address and city_state_zip:\n addr_full = common_name + ' ' + street_address + ', ' + city_state_zip\n elif street_address and not city_state_zip:\n", "issue": "Spider tesla is broken\nDuring the global build at 2021-05-26-14-42-23, spider **tesla** failed with **486 features** and **5 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tesla.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tesla.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tesla.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport re\nimport scrapy\nimport urllib.parse\nfrom locations.items import GeojsonPointItem\n\n\nclass TeslaSpider(scrapy.Spider):\n name = \"tesla\"\n item_attributes = { 'brand': \"Tesla\" }\n allowed_domains = ['www.tesla.com']\n start_urls = [\n 'https://www.tesla.com/findus/list',\n ]\n download_delay = 0.5\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n\n def parse(self, response):\n # Only scrape stores and service centers\n country_urls = response.xpath('//a[contains(@href,\"stores\") or contains(@href,\"services\")]/@href').extract()\n for country_url in country_urls:\n yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)\n\n def parse_store_list(self, response):\n store_urls = response.xpath('//a[@class=\"fn org url\"]/@href').extract()\n for store_url in store_urls:\n yield scrapy.Request(response.urljoin(store_url), callback=self.parse_store)\n\n def parse_store(self, response):\n # Skip if \"Coming Soon\" - no content to capture yet\n if response.xpath('//span[@class=\"coming-soon\"]/text()').extract_first() == \"Coming Soon\":\n pass\n else:\n ref = re.search(r'.+/(.+?)/?(?:\\.html|$)', response.url).group(1)\n\n # city, state, and zip do not have separate classes - contained together in locality class as text\n name = response.xpath('normalize-space(//header/h1/text())').extract_first()\n common_name = response.xpath('normalize-space(//span[@class=\"common-name\"]//text())').extract_first()\n street_address = response.xpath('normalize-space(//span[@class=\"street-address\"]//text())').extract_first()\n city_state_zip = response.xpath('normalize-space(//span[@class=\"locality\"]//text())').extract_first()\n\n if common_name and street_address and city_state_zip:\n addr_full = common_name + ' ' + street_address + ', ' + city_state_zip\n elif street_address and not 
city_state_zip:\n addr_full = street_address\n elif city_state_zip and not street_address:\n addr_full = city_state_zip\n elif street_address and city_state_zip:\n addr_full = street_address + ', ' + city_state_zip\n\n country_url = response.xpath('//header[@class=\"findus-list-header\"]/a/@href').extract_first()\n country = urllib.parse.unquote_plus(re.search(r'.+/(.+?)/?(?:\\.html|$)', country_url).group(1))\n phone = response.xpath('normalize-space(//span[@class=\"tel\"]/span[2]/text())').extract_first()\n location_type = re.search(r\".+/(.+?)/(.+?)/?(?:\\.html|$)\", response.url).group(1)\n\n # map link varies across store pages\n if response.xpath('normalize-space(//a[contains(@href,\"maps.google\")]/@href)').extract_first():\n map_link = response.xpath('normalize-space(//a[contains(@href,\"maps.google\")]/@href)').extract_first()\n else:\n map_link = response.xpath('normalize-space(//img[contains(@src,\"maps.google\")]/@src)').extract_first()\n\n # extract coordinates from map link\n if re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link):\n lat = re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link).group(1)\n lon = re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link).group(2)\n elif re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link):\n lat = re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link).group(1)\n lon = re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link).group(2)\n else:\n lat = None\n lon = None\n\n properties = {\n 'ref': ref,\n 'name': name,\n 'addr_full': addr_full,\n 'country': country,\n 'phone': phone,\n 'website': response.url,\n 'lat': lat,\n 'lon': lon,\n 'extras':\n {\n 'location_type': location_type # Is this a service center or store/gallery\n }\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/tesla.py"}]}
| 1,970 | 275 |
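Apart from widening the URL filter to include supercharger pages, the patch in this record pre-initialises `addr_full` so store pages without any address fields no longer hit an unbound variable. A small self-contained illustration of that guard, with made-up inputs:

```python
def build_address(common_name, street_address, city_state_zip):
    addr_full = ""  # default so the "no address data at all" case is covered
    if common_name and street_address and city_state_zip:
        addr_full = f"{common_name} {street_address}, {city_state_zip}"
    elif street_address and not city_state_zip:
        addr_full = street_address
    elif city_state_zip and not street_address:
        addr_full = city_state_zip
    elif street_address and city_state_zip:
        addr_full = f"{street_address}, {city_state_zip}"
    return addr_full


print(build_address(None, None, None))  # "" instead of an UnboundLocalError
```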
gh_patches_debug_1658
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-4628
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.twitcasting: Writes JSON into video files when it shouldn't
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest build from the master branch
### Description
https://github.com/streamlink/streamlink/pull/4608 introduced a bug where JSON is written to the output file.
- When running Streamlink on a channel that is live but only available to members, using the `-o out.mp4` flag to output to a file, it creates a video file containing just a single JSON line:
```
$ cat out.mp4
{"type":"status","code":403,"text":"Access Forbidden"}
```
The expected behavior is that it doesn't create the file in such a situation, as it did before the https://github.com/streamlink/streamlink/pull/4608 fixes were made.
- It also adds `{"type":"status","code":504,"text":"End of Live"}` at the end of video files when the stream ends:
```
$ xxd -s -128 -c 16 out.ts
24b5bee9: 5c75 7cc6 7e38 e099 55d9 6257 59d8 eb6e \u|.~8..U.bWY..n
24b5bef9: b7aa 49bb ef3a dd18 7767 8c77 7dc6 6ade ..I..:..wg.w}.j.
24b5bf09: 6d54 2175 2acf 0926 400f 0449 2bc6 a816 mT!u*..&@..I+...
24b5bf19: 3523 72e9 db4d 6c5a 5aba ec75 3c0a ad72 5#r..MlZZ..u<..r
24b5bf29: 2258 0b2f ebc2 b50a 7ed3 bbbd 8d30 c77b "X./....~....0.{
24b5bf39: 2274 7970 6522 3a22 7374 6174 7573 222c "type":"status",
24b5bf49: 2263 6f64 6522 3a35 3034 2c22 7465 7874 "code":504,"text
24b5bf59: 223a 2245 6e64 206f 6620 4c69 7665 227d ":"End of Live"}
```

- Perhaps it shouldn't be writing any `response['type'] == 'status'` to the file?
- While at it, maybe there is something else that it's writing to a video file that it shouldn't? As mentioned in https://github.com/streamlink/streamlink/issues/4604#issuecomment-1166177130, Twitcasting also sends `{"type":"event","code":100,"text":""}` sometimes. Would that get written into the video file too? Is that something that should be written into it?
### Debug log
```text
[cli][debug] OS: Linux-5.10.0-14-amd64-x86_64-with-glibc2.31
[cli][debug] Python: 3.9.2
[cli][debug] Streamlink: 4.1.0+37.g2c564dbe
[cli][debug] Dependencies:
[cli][debug] isodate: 0.6.0
[cli][debug] lxml: 4.7.1
[cli][debug] pycountry: 20.7.3
[cli][debug] pycryptodome: 3.10.1
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.0
[cli][debug] websocket-client: 1.2.3
[cli][debug] Arguments:
[cli][debug] url=https://twitcasting.tv/[REDACTED]
[cli][debug] stream=['best']
[cli][debug] --config=['../config']
[cli][debug] --loglevel=debug
[cli][debug] --output=[REDACTED]
[cli][debug] --retry-streams=1.0
[cli][debug] --retry-max=300
[cli][info] Found matching plugin twitcasting for URL https://twitcasting.tv/[REDACTED]
[plugins.twitcasting][debug] Live stream info: {'movie': {'id': [REDACTED], 'live': True}, 'fmp4': {'host': '202-218-171-197.twitcasting.tv', 'proto': 'wss', 'source': False, 'mobilesource': False}}
[plugins.twitcasting][debug] Real stream url: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base
[cli][info] Available streams: base (worst, best)
[cli][info] Opening stream: base (stream)
[cli][info] Writing output to
[REDACTED]
[cli][debug] Checking file output
[plugin.api.websocket][debug] Connecting to: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base
[cli][debug] Pre-buffering 8192 bytes
[plugin.api.websocket][debug] Connected: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base
[cli][debug] Writing stream to output
[plugin.api.websocket][error] Connection to remote host was lost.
[plugin.api.websocket][debug] Closed: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base
[cli][info] Stream ended
[cli][info] Closing currently open stream...
```
</issue>
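The behaviour described in the issue comes from the websocket client writing every frame it receives, including text frames carrying JSON status messages such as `{"type":"status","code":403,"text":"Access Forbidden"}`, straight into the stream buffer. The sketch below illustrates the kind of filtering the reporter suggests, dropping text/control frames and passing only binary media data through; it is an illustration of the idea, not necessarily the fix that was eventually merged.

```python
import json


def handle_frame(buffer, data, is_text: bool) -> None:
    """Write only binary media frames; log and drop text control messages.

    `buffer` is assumed to expose a write() method, mirroring the plugin's
    ring buffer; `is_text` mirrors the websocket TEXT-opcode check.
    """
    if is_text:
        try:
            message = json.loads(data)
        except (TypeError, ValueError):
            return
        # Status/event frames ({"type": "status", ...}) are metadata, not video.
        print(f"twitcasting control message: {message}")
        return
    buffer.write(data)
```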
<code>
[start of src/streamlink/plugins/twitcasting.py]
1 """
2 $description Global live broadcasting and live broadcast archiving social platform.
3 $url twitcasting.tv
4 $type live
5 """
6
7 import hashlib
8 import logging
9 import re
10
11 from streamlink.buffers import RingBuffer
12 from streamlink.plugin import Plugin, PluginArgument, PluginArguments, PluginError, pluginmatcher
13 from streamlink.plugin.api import validate
14 from streamlink.plugin.api.websocket import WebsocketClient
15 from streamlink.stream.stream import Stream, StreamIO
16 from streamlink.utils.url import update_qsd
17
18
19 log = logging.getLogger(__name__)
20
21
22 @pluginmatcher(re.compile(
23 r"https?://twitcasting\.tv/(?P<channel>[^/]+)"
24 ))
25 class TwitCasting(Plugin):
26 arguments = PluginArguments(
27 PluginArgument(
28 "password",
29 sensitive=True,
30 metavar="PASSWORD",
31 help="Password for private Twitcasting streams."
32 )
33 )
34 _STREAM_INFO_URL = "https://twitcasting.tv/streamserver.php?target={channel}&mode=client"
35 _STREAM_REAL_URL = "{proto}://{host}/ws.app/stream/{movie_id}/fmp4/bd/1/1500?mode={mode}"
36
37 _STREAM_INFO_SCHEMA = validate.Schema({
38 validate.optional("movie"): {
39 "id": int,
40 "live": bool
41 },
42 validate.optional("fmp4"): {
43 "host": str,
44 "proto": str,
45 "source": bool,
46 "mobilesource": bool
47 }
48 })
49
50 def __init__(self, url):
51 super().__init__(url)
52 self.channel = self.match.group("channel")
53
54 def _get_streams(self):
55 stream_info = self._get_stream_info()
56 log.debug(f"Live stream info: {stream_info}")
57
58 if not stream_info.get("movie") or not stream_info["movie"]["live"]:
59 raise PluginError("The live stream is offline")
60
61 if not stream_info.get("fmp4"):
62 raise PluginError("Login required")
63
64 # Keys are already validated by schema above
65 proto = stream_info["fmp4"]["proto"]
66 host = stream_info["fmp4"]["host"]
67 movie_id = stream_info["movie"]["id"]
68
69 if stream_info["fmp4"]["source"]:
70 mode = "main" # High quality
71 elif stream_info["fmp4"]["mobilesource"]:
72 mode = "mobilesource" # Medium quality
73 else:
74 mode = "base" # Low quality
75
76 if (proto == '') or (host == '') or (not movie_id):
77 raise PluginError(f"No stream available for user {self.channel}")
78
79 real_stream_url = self._STREAM_REAL_URL.format(proto=proto, host=host, movie_id=movie_id, mode=mode)
80
81 password = self.options.get("password")
82 if password is not None:
83 password_hash = hashlib.md5(password.encode()).hexdigest()
84 real_stream_url = update_qsd(real_stream_url, {"word": password_hash})
85
86 log.debug(f"Real stream url: {real_stream_url}")
87
88 return {mode: TwitCastingStream(session=self.session, url=real_stream_url)}
89
90 def _get_stream_info(self):
91 url = self._STREAM_INFO_URL.format(channel=self.channel)
92 res = self.session.http.get(url)
93 return self.session.http.json(res, schema=self._STREAM_INFO_SCHEMA)
94
95
96 class TwitCastingWsClient(WebsocketClient):
97 def __init__(self, buffer: RingBuffer, *args, **kwargs):
98 self.buffer = buffer
99 super().__init__(*args, **kwargs)
100
101 def on_close(self, *args, **kwargs):
102 super().on_close(*args, **kwargs)
103 self.buffer.close()
104
105 def on_data(self, wsapp, data, data_type, cont):
106 if data_type == self.OPCODE_TEXT:
107 data = bytes(data, "utf-8")
108
109 try:
110 self.buffer.write(data)
111 except Exception as err:
112 log.error(err)
113 self.close()
114
115
116 class TwitCastingReader(StreamIO):
117 def __init__(self, stream: "TwitCastingStream", timeout=None):
118 super().__init__()
119 self.session = stream.session
120 self.stream = stream
121 self.timeout = timeout or self.session.options.get("stream-timeout")
122
123 buffer_size = self.session.get_option("ringbuffer-size")
124 self.buffer = RingBuffer(buffer_size)
125
126 self.wsclient = TwitCastingWsClient(
127 self.buffer,
128 stream.session,
129 stream.url,
130 origin="https://twitcasting.tv/"
131 )
132
133 def open(self):
134 self.wsclient.start()
135
136 def close(self):
137 self.wsclient.close()
138 self.buffer.close()
139
140 def read(self, size):
141 return self.buffer.read(
142 size,
143 block=self.wsclient.is_alive(),
144 timeout=self.timeout
145 )
146
147
148 class TwitCastingStream(Stream):
149 def __init__(self, session, url):
150 super().__init__(session)
151 self.url = url
152
153 def to_url(self):
154 return self.url
155
156 def open(self):
157 reader = TwitCastingReader(self)
158 reader.open()
159 return reader
160
161
162 __plugin__ = TwitCasting
163
[end of src/streamlink/plugins/twitcasting.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/twitcasting.py b/src/streamlink/plugins/twitcasting.py
--- a/src/streamlink/plugins/twitcasting.py
+++ b/src/streamlink/plugins/twitcasting.py
@@ -104,7 +104,7 @@
def on_data(self, wsapp, data, data_type, cont):
if data_type == self.OPCODE_TEXT:
- data = bytes(data, "utf-8")
+ return
try:
self.buffer.write(data)
|
{"golden_diff": "diff --git a/src/streamlink/plugins/twitcasting.py b/src/streamlink/plugins/twitcasting.py\n--- a/src/streamlink/plugins/twitcasting.py\n+++ b/src/streamlink/plugins/twitcasting.py\n@@ -104,7 +104,7 @@\n \n def on_data(self, wsapp, data, data_type, cont):\n if data_type == self.OPCODE_TEXT:\n- data = bytes(data, \"utf-8\")\n+ return\n \n try:\n self.buffer.write(data)\n", "issue": "plugins.twitcasting: Writes JSON into video files when it shouldn't\n### Checklist\r\n\r\n- [X] This is a plugin issue and not a different kind of issue\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\nLatest build from the master branch\r\n\r\n### Description\r\n\r\nhttps://github.com/streamlink/streamlink/pull/4608 introduced a bug of JSON being written to the output file.\r\n\r\n- When running streamlink on a channel that is live but only for members, using `-o out.mp4` flag to output to a file, it creates a video file containing just a single JSON line in it:\r\n\r\n ```\r\n $ cat out.mp4\r\n {\"type\":\"status\",\"code\":403,\"text\":\"Access Forbidden\"}\r\n ```\r\n\r\n The expected behavior is that it doesn't create the file in such situation, like it used to behave before https://github.com/streamlink/streamlink/pull/4608 fixes were made.\r\n\r\n- It also adds `{\"type\":\"status\",\"code\":504,\"text\":\"End of Live\"}` at the end of video files when the stream ends:\r\n\r\n ```\r\n $ xxd -s -128 -c 16 out.ts\r\n 24b5bee9: 5c75 7cc6 7e38 e099 55d9 6257 59d8 eb6e \\u|.~8..U.bWY..n\r\n 24b5bef9: b7aa 49bb ef3a dd18 7767 8c77 7dc6 6ade ..I..:..wg.w}.j.\r\n 24b5bf09: 6d54 2175 2acf 0926 400f 0449 2bc6 a816 mT!u*..&@..I+...\r\n 24b5bf19: 3523 72e9 db4d 6c5a 5aba ec75 3c0a ad72 5#r..MlZZ..u<..r\r\n 24b5bf29: 2258 0b2f ebc2 b50a 7ed3 bbbd 8d30 c77b \"X./....~....0.{\r\n 24b5bf39: 2274 7970 6522 3a22 7374 6174 7573 222c \"type\":\"status\",\r\n 24b5bf49: 2263 6f64 6522 3a35 3034 2c22 7465 7874 \"code\":504,\"text\r\n 24b5bf59: 223a 2245 6e64 206f 6620 4c69 7665 227d \":\"End of Live\"}\r\n ```\r\n \r\n\r\n\r\n- Perhaps it shouldn't be writing any `response['type'] == 'status'` to the file?\r\n\r\n- While at it, maybe there is something else that it's writing to a video file that it shouldn't? As mentioned in https://github.com/streamlink/streamlink/issues/4604#issuecomment-1166177130, Twitcasting also sends `{\"type\":\"event\",\"code\":100,\"text\":\"\"}` sometimes. Would that get written into the video file too? 
Is that something that should be written into it?\r\n\r\n### Debug log\r\n\r\n```text\r\n[cli][debug] OS: Linux-5.10.0-14-amd64-x86_64-with-glibc2.31\r\n[cli][debug] Python: 3.9.2\r\n[cli][debug] Streamlink: 4.1.0+37.g2c564dbe\r\n[cli][debug] Dependencies:\r\n[cli][debug] isodate: 0.6.0\r\n[cli][debug] lxml: 4.7.1\r\n[cli][debug] pycountry: 20.7.3\r\n[cli][debug] pycryptodome: 3.10.1\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.0\r\n[cli][debug] websocket-client: 1.2.3\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://twitcasting.tv/[REDACTED]\r\n[cli][debug] stream=['best']\r\n[cli][debug] --config=['../config']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --output=[REDACTED]\r\n[cli][debug] --retry-streams=1.0\r\n[cli][debug] --retry-max=300\r\n[cli][info] Found matching plugin twitcasting for URL https://twitcasting.tv/[REDACTED]\r\n[plugins.twitcasting][debug] Live stream info: {'movie': {'id': [REDACTED], 'live': True}, 'fmp4': {'host': '202-218-171-197.twitcasting.tv', 'proto': 'wss', 'source': False, 'mobilesource': False}}\r\n[plugins.twitcasting][debug] Real stream url: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base\r\n[cli][info] Available streams: base (worst, best)\r\n[cli][info] Opening stream: base (stream)\r\n[cli][info] Writing output to\r\n[REDACTED]\r\n[cli][debug] Checking file output\r\n[plugin.api.websocket][debug] Connecting to: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base\r\n[cli][debug] Pre-buffering 8192 bytes\r\n[plugin.api.websocket][debug] Connected: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base\r\n[cli][debug] Writing stream to output\r\n[plugin.api.websocket][error] Connection to remote host was lost.\r\n[plugin.api.websocket][debug] Closed: wss://202-218-171-197.twitcasting.tv/ws.app/stream/[REDACTED]/fmp4/bd/1/1500?mode=base\r\n[cli][info] Stream ended\r\n[cli][info] Closing currently open stream...\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\n$description Global live broadcasting and live broadcast archiving social platform.\n$url twitcasting.tv\n$type live\n\"\"\"\n\nimport hashlib\nimport logging\nimport re\n\nfrom streamlink.buffers import RingBuffer\nfrom streamlink.plugin import Plugin, PluginArgument, PluginArguments, PluginError, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.plugin.api.websocket import WebsocketClient\nfrom streamlink.stream.stream import Stream, StreamIO\nfrom streamlink.utils.url import update_qsd\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://twitcasting\\.tv/(?P<channel>[^/]+)\"\n))\nclass TwitCasting(Plugin):\n arguments = PluginArguments(\n PluginArgument(\n \"password\",\n sensitive=True,\n metavar=\"PASSWORD\",\n help=\"Password for private Twitcasting streams.\"\n )\n )\n _STREAM_INFO_URL = \"https://twitcasting.tv/streamserver.php?target={channel}&mode=client\"\n _STREAM_REAL_URL = \"{proto}://{host}/ws.app/stream/{movie_id}/fmp4/bd/1/1500?mode={mode}\"\n\n _STREAM_INFO_SCHEMA = validate.Schema({\n validate.optional(\"movie\"): {\n \"id\": int,\n \"live\": bool\n },\n validate.optional(\"fmp4\"): {\n \"host\": str,\n \"proto\": str,\n \"source\": bool,\n \"mobilesource\": bool\n }\n })\n\n def __init__(self, url):\n super().__init__(url)\n self.channel = self.match.group(\"channel\")\n\n def _get_streams(self):\n stream_info = self._get_stream_info()\n log.debug(f\"Live stream info: {stream_info}\")\n\n 
if not stream_info.get(\"movie\") or not stream_info[\"movie\"][\"live\"]:\n raise PluginError(\"The live stream is offline\")\n\n if not stream_info.get(\"fmp4\"):\n raise PluginError(\"Login required\")\n\n # Keys are already validated by schema above\n proto = stream_info[\"fmp4\"][\"proto\"]\n host = stream_info[\"fmp4\"][\"host\"]\n movie_id = stream_info[\"movie\"][\"id\"]\n\n if stream_info[\"fmp4\"][\"source\"]:\n mode = \"main\" # High quality\n elif stream_info[\"fmp4\"][\"mobilesource\"]:\n mode = \"mobilesource\" # Medium quality\n else:\n mode = \"base\" # Low quality\n\n if (proto == '') or (host == '') or (not movie_id):\n raise PluginError(f\"No stream available for user {self.channel}\")\n\n real_stream_url = self._STREAM_REAL_URL.format(proto=proto, host=host, movie_id=movie_id, mode=mode)\n\n password = self.options.get(\"password\")\n if password is not None:\n password_hash = hashlib.md5(password.encode()).hexdigest()\n real_stream_url = update_qsd(real_stream_url, {\"word\": password_hash})\n\n log.debug(f\"Real stream url: {real_stream_url}\")\n\n return {mode: TwitCastingStream(session=self.session, url=real_stream_url)}\n\n def _get_stream_info(self):\n url = self._STREAM_INFO_URL.format(channel=self.channel)\n res = self.session.http.get(url)\n return self.session.http.json(res, schema=self._STREAM_INFO_SCHEMA)\n\n\nclass TwitCastingWsClient(WebsocketClient):\n def __init__(self, buffer: RingBuffer, *args, **kwargs):\n self.buffer = buffer\n super().__init__(*args, **kwargs)\n\n def on_close(self, *args, **kwargs):\n super().on_close(*args, **kwargs)\n self.buffer.close()\n\n def on_data(self, wsapp, data, data_type, cont):\n if data_type == self.OPCODE_TEXT:\n data = bytes(data, \"utf-8\")\n\n try:\n self.buffer.write(data)\n except Exception as err:\n log.error(err)\n self.close()\n\n\nclass TwitCastingReader(StreamIO):\n def __init__(self, stream: \"TwitCastingStream\", timeout=None):\n super().__init__()\n self.session = stream.session\n self.stream = stream\n self.timeout = timeout or self.session.options.get(\"stream-timeout\")\n\n buffer_size = self.session.get_option(\"ringbuffer-size\")\n self.buffer = RingBuffer(buffer_size)\n\n self.wsclient = TwitCastingWsClient(\n self.buffer,\n stream.session,\n stream.url,\n origin=\"https://twitcasting.tv/\"\n )\n\n def open(self):\n self.wsclient.start()\n\n def close(self):\n self.wsclient.close()\n self.buffer.close()\n\n def read(self, size):\n return self.buffer.read(\n size,\n block=self.wsclient.is_alive(),\n timeout=self.timeout\n )\n\n\nclass TwitCastingStream(Stream):\n def __init__(self, session, url):\n super().__init__(session)\n self.url = url\n\n def to_url(self):\n return self.url\n\n def open(self):\n reader = TwitCastingReader(self)\n reader.open()\n return reader\n\n\n__plugin__ = TwitCasting\n", "path": "src/streamlink/plugins/twitcasting.py"}]}
| 3,806 | 110 |
gh_patches_debug_19241
|
rasdani/github-patches
|
git_diff
|
Gallopsled__pwntools-2240
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Do not overwrite global `bytes` in code or examples
It looks like there's a few places we overwrite `bytes` (the type identifier) with a local variable.
```
$ git grep -E -e '^ +bytes *=' -- '*.py'
pwnlib/commandline/disasm.py:81: bytes = disasm(dat, vma=safeeval.const(args.address), instructions=False, offset=False)
pwnlib/commandline/elfpatch.py:29: bytes = unhex(a.bytes)
pwnlib/elf/elf.py:195: bytes = 4
```
And a few cases we do it in tests, which could have cross-test impact if the global state isn't reset (hint: it isn't).
```
~/pwntools $ git grep -E -e '^ +>>> bytes *=' -- '*.py'
pwnlib/runner.py:42: >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
pwnlib/runner.py:48: >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')
pwnlib/runner.py:87: >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
```
</issue>
<code>
[start of pwnlib/runner.py]
1 from __future__ import absolute_import
2 from __future__ import division
3
4 import os
5 import tempfile
6
7 from pwnlib.context import LocalContext
8 from pwnlib.elf import ELF
9 from pwnlib.tubes.process import process
10
11 __all__ = ['run_assembly', 'run_shellcode', 'run_assembly_exitcode', 'run_shellcode_exitcode']
12
13 @LocalContext
14 def run_assembly(assembly):
15 """
16 Given an assembly listing, assemble and execute it.
17
18 Returns:
19
20 A :class:`pwnlib.tubes.process.process` tube to interact with the process.
21
22 Example:
23
24 >>> p = run_assembly('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
25 >>> p.wait_for_close()
26 >>> p.poll()
27 3
28
29 >>> p = run_assembly('mov r0, #12; mov r7, #1; svc #0', arch='arm')
30 >>> p.wait_for_close()
31 >>> p.poll()
32 12
33 """
34 return ELF.from_assembly(assembly).process()
35
36 @LocalContext
37 def run_shellcode(bytes, **kw):
38 """Given assembled machine code bytes, execute them.
39
40 Example:
41
42 >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
43 >>> p = run_shellcode(bytes)
44 >>> p.wait_for_close()
45 >>> p.poll()
46 3
47
48 >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')
49 >>> p = run_shellcode(bytes, arch='arm')
50 >>> p.wait_for_close()
51 >>> p.poll()
52 12
53 """
54 return ELF.from_bytes(bytes, **kw).process()
55
56 @LocalContext
57 def run_assembly_exitcode(assembly):
58 """
59 Given an assembly listing, assemble and execute it, and wait for
60 the process to die.
61
62 Returns:
63
64 The exit code of the process.
65
66 Example:
67
68 >>> run_assembly_exitcode('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
69 3
70 """
71 p = run_assembly(assembly)
72 p.wait_for_close()
73 return p.poll()
74
75 @LocalContext
76 def run_shellcode_exitcode(bytes):
77 """
78 Given assembled machine code bytes, execute them, and wait for
79 the process to die.
80
81 Returns:
82
83 The exit code of the process.
84
85 Example:
86
87 >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
88 >>> run_shellcode_exitcode(bytes)
89 3
90 """
91 p = run_shellcode(bytes)
92 p.wait_for_close()
93 return p.poll()
94
[end of pwnlib/runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwnlib/runner.py b/pwnlib/runner.py
--- a/pwnlib/runner.py
+++ b/pwnlib/runner.py
@@ -39,14 +39,14 @@
Example:
- >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
- >>> p = run_shellcode(bytes)
+ >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
+ >>> p = run_shellcode(insn_bytes)
>>> p.wait_for_close()
>>> p.poll()
3
- >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')
- >>> p = run_shellcode(bytes, arch='arm')
+ >>> insn_bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')
+ >>> p = run_shellcode(insn_bytes, arch='arm')
>>> p.wait_for_close()
>>> p.poll()
12
@@ -84,8 +84,8 @@
Example:
- >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
- >>> run_shellcode_exitcode(bytes)
+ >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
+ >>> run_shellcode_exitcode(insn_bytes)
3
"""
p = run_shellcode(bytes)
|
{"golden_diff": "diff --git a/pwnlib/runner.py b/pwnlib/runner.py\n--- a/pwnlib/runner.py\n+++ b/pwnlib/runner.py\n@@ -39,14 +39,14 @@\n \n Example:\n \n- >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n- >>> p = run_shellcode(bytes)\n+ >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n+ >>> p = run_shellcode(insn_bytes)\n >>> p.wait_for_close()\n >>> p.poll()\n 3\n \n- >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n- >>> p = run_shellcode(bytes, arch='arm')\n+ >>> insn_bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n+ >>> p = run_shellcode(insn_bytes, arch='arm')\n >>> p.wait_for_close()\n >>> p.poll()\n 12\n@@ -84,8 +84,8 @@\n \n Example:\n \n- >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n- >>> run_shellcode_exitcode(bytes)\n+ >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n+ >>> run_shellcode_exitcode(insn_bytes)\n 3\n \"\"\"\n p = run_shellcode(bytes)\n", "issue": "Do not overwrite global `bytes` in code or examples\nIt looks like there's a few places we overwrite `bytes` (the type identifier) with a local variable.\r\n\r\n```\r\n$ git grep -E -e '^ +bytes *=' -- '*.py'\r\npwnlib/commandline/disasm.py:81: bytes = disasm(dat, vma=safeeval.const(args.address), instructions=False, offset=False)\r\npwnlib/commandline/elfpatch.py:29: bytes = unhex(a.bytes)\r\npwnlib/elf/elf.py:195: bytes = 4\r\n```\r\n\r\nAnd a few cases we do it in tests, which could have cross-test impact if the global state isn't reset (hint: it isn't).\r\n\r\n```\r\n~/pwntools $ git grep -E -e '^ +>>> bytes *=' -- '*.py'\r\npwnlib/runner.py:42: >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\r\npwnlib/runner.py:48: >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\r\npwnlib/runner.py:87: >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport os\nimport tempfile\n\nfrom pwnlib.context import LocalContext\nfrom pwnlib.elf import ELF\nfrom pwnlib.tubes.process import process\n\n__all__ = ['run_assembly', 'run_shellcode', 'run_assembly_exitcode', 'run_shellcode_exitcode']\n\n@LocalContext\ndef run_assembly(assembly):\n \"\"\"\n Given an assembly listing, assemble and execute it.\n\n Returns:\n\n A :class:`pwnlib.tubes.process.process` tube to interact with the process.\n\n Example:\n\n >>> p = run_assembly('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> p.wait_for_close()\n >>> p.poll()\n 3\n\n >>> p = run_assembly('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n >>> p.wait_for_close()\n >>> p.poll()\n 12\n \"\"\"\n return ELF.from_assembly(assembly).process()\n\n@LocalContext\ndef run_shellcode(bytes, **kw):\n \"\"\"Given assembled machine code bytes, execute them.\n\n Example:\n\n >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> p = run_shellcode(bytes)\n >>> p.wait_for_close()\n >>> p.poll()\n 3\n\n >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n >>> p = run_shellcode(bytes, arch='arm')\n >>> p.wait_for_close()\n >>> p.poll()\n 12\n \"\"\"\n return ELF.from_bytes(bytes, **kw).process()\n\n@LocalContext\ndef run_assembly_exitcode(assembly):\n \"\"\"\n Given an assembly listing, assemble and execute it, and wait for\n the process to die.\n\n Returns:\n\n The exit code of the process.\n\n Example:\n\n >>> run_assembly_exitcode('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n 3\n \"\"\"\n p = run_assembly(assembly)\n p.wait_for_close()\n return 
p.poll()\n\n@LocalContext\ndef run_shellcode_exitcode(bytes):\n \"\"\"\n Given assembled machine code bytes, execute them, and wait for\n the process to die.\n\n Returns:\n\n The exit code of the process.\n\n Example:\n\n >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> run_shellcode_exitcode(bytes)\n 3\n \"\"\"\n p = run_shellcode(bytes)\n p.wait_for_close()\n return p.poll()\n", "path": "pwnlib/runner.py"}]}
| 1,632 | 364 |
gh_patches_debug_35111
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-2854
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
King Island: battery never seems to discharge
I've been keeping an eye on AUS-TAS-KI since it was added to the map. Charging works fine, discharging doesn't show up.
</issue>
<code>
[start of parsers/AUS_TAS_KI.py]
1 # Initial PR https://github.com/tmrowco/electricitymap-contrib/pull/2456
2 # Discussion thread https://github.com/tmrowco/electricitymap-contrib/issues/636
3 # A promotion webpage for King's Island energy production is here : https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island
4 # As of 09/2020, it embeds with <iframe> the URI https://data.ajenti.com.au/KIREIP/index.html
5 # About the data, the feed we get seems to be counters with a 2 seconds interval.
6 # That means that if we fetch these counters every 15 minutes, we only are reading "instantaneous" metters that could differ from the total quantity of energies at play. To get the very exact data, we would need to have a parser running constanty to collect those 2-sec interval counters.
7
8 import asyncio
9 import json
10 import logging
11 import arrow
12 from signalr import Connection
13 from requests import Session
14
15 class SignalR:
16 def __init__(self, url):
17 self.url = url
18
19 def update_res(self, msg):
20 if (msg != {}):
21 self.res = msg
22
23 def get_value(self, hub, method):
24 self.res = {}
25 with Session() as session:
26 #create a connection
27 connection = Connection(self.url, session)
28 chat = connection.register_hub(hub)
29 chat.client.on(method, self.update_res)
30 connection.start()
31 connection.wait(3)
32 connection.close()
33 return self.res
34
35 def parse_payload(logger, payload):
36 technologies_parsed = {}
37 if not 'technologies' in payload:
38 raise KeyError(
39 f"No 'technologies' in payload\n"
40 f"serie : {json.dumps(payload)}"
41 )
42 else:
43 logger.debug(f"serie : {json.dumps(payload)}")
44 for technology in payload['technologies']:
45 assert technology['unit'] == 'kW'
46 # The upstream API gives us kW, we need MW
47 technologies_parsed[technology['id']] = int(technology['value'])/1000
48 logger.debug(f"production : {json.dumps(technologies_parsed)}")
49
50 biodiesel_percent = payload['biodiesel']['percent']
51
52 return technologies_parsed, biodiesel_percent
53
54 # Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid
55 def format_storage_techs(technologies_parsed):
56 storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']
57 battery_production = storage_techs if storage_techs > 0 else 0
58 battery_storage = storage_techs if storage_techs < 0 else 0
59
60 return battery_production, battery_storage
61
62 def fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):
63
64 if target_datetime is not None:
65 raise NotImplementedError('The datasource currently implemented is only real time')
66
67 payload = SignalR("https://data.ajenti.com.au/live/signalr").get_value("TagHub", "Dashboard")
68 technologies_parsed, biodiesel_percent = parse_payload(logger, payload)
69 battery_production, battery_storage = format_storage_techs(technologies_parsed)
70 return {
71 'zoneKey': zone_key,
72 'datetime': arrow.now(tz='Australia/Currie').datetime,
73 'production': {
74 'battery discharge': battery_production,
75 'biomass': technologies_parsed['diesel']*biodiesel_percent/100,
76 'coal': 0,
77 'gas': 0,
78 'hydro': 0,
79 'nuclear': 0,
80 'oil': technologies_parsed['diesel']*(100-biodiesel_percent)/100,
81 'solar': technologies_parsed['solar'],
82 'wind': 0 if technologies_parsed['wind'] < 0 and technologies_parsed['wind'] > -0.1 else technologies_parsed['wind'], #If wind between 0 and -0.1 set to 0 to ignore self-consumption
83 'geothermal': 0,
84 'unknown': 0
85 },
86 'storage': {
87 'battery': battery_storage*-1
88 },
89 'source': 'https://data.ajenti.com.au/KIREIP/index.html'
90 }
91
92 if __name__ == '__main__':
93 print(fetch_production())
94
[end of parsers/AUS_TAS_KI.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsers/AUS_TAS_KI.py b/parsers/AUS_TAS_KI.py
--- a/parsers/AUS_TAS_KI.py
+++ b/parsers/AUS_TAS_KI.py
@@ -52,12 +52,10 @@
return technologies_parsed, biodiesel_percent
# Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid
-def format_storage_techs(technologies_parsed):
+def sum_storage_techs(technologies_parsed):
storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']
- battery_production = storage_techs if storage_techs > 0 else 0
- battery_storage = storage_techs if storage_techs < 0 else 0
- return battery_production, battery_storage
+ return storage_techs
def fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):
@@ -66,12 +64,11 @@
payload = SignalR("https://data.ajenti.com.au/live/signalr").get_value("TagHub", "Dashboard")
technologies_parsed, biodiesel_percent = parse_payload(logger, payload)
- battery_production, battery_storage = format_storage_techs(technologies_parsed)
+ storage_techs = sum_storage_techs(technologies_parsed)
return {
'zoneKey': zone_key,
'datetime': arrow.now(tz='Australia/Currie').datetime,
'production': {
- 'battery discharge': battery_production,
'biomass': technologies_parsed['diesel']*biodiesel_percent/100,
'coal': 0,
'gas': 0,
@@ -84,9 +81,9 @@
'unknown': 0
},
'storage': {
- 'battery': battery_storage*-1
+ 'battery': storage_techs*-1 #Somewhat counterintuitively,to ElectricityMap positive means charging and negative means discharging
},
- 'source': 'https://data.ajenti.com.au/KIREIP/index.html'
+ 'source': 'https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island' #Iframe: https://data.ajenti.com.au/KIREIP/index.html
}
if __name__ == '__main__':
|
{"golden_diff": "diff --git a/parsers/AUS_TAS_KI.py b/parsers/AUS_TAS_KI.py\n--- a/parsers/AUS_TAS_KI.py\n+++ b/parsers/AUS_TAS_KI.py\n@@ -52,12 +52,10 @@\n return technologies_parsed, biodiesel_percent\n \n # Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid\n-def format_storage_techs(technologies_parsed):\n+def sum_storage_techs(technologies_parsed):\n storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']\n- battery_production = storage_techs if storage_techs > 0 else 0\n- battery_storage = storage_techs if storage_techs < 0 else 0\n \n- return battery_production, battery_storage\n+ return storage_techs\n \n def fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):\n \n@@ -66,12 +64,11 @@\n \n payload = SignalR(\"https://data.ajenti.com.au/live/signalr\").get_value(\"TagHub\", \"Dashboard\")\n technologies_parsed, biodiesel_percent = parse_payload(logger, payload)\n- battery_production, battery_storage = format_storage_techs(technologies_parsed)\n+ storage_techs = sum_storage_techs(technologies_parsed)\n return {\n 'zoneKey': zone_key,\n 'datetime': arrow.now(tz='Australia/Currie').datetime,\n 'production': {\n- 'battery discharge': battery_production,\n 'biomass': technologies_parsed['diesel']*biodiesel_percent/100,\n 'coal': 0,\n 'gas': 0,\n@@ -84,9 +81,9 @@\n 'unknown': 0\n },\n 'storage': {\n- 'battery': battery_storage*-1\n+ 'battery': storage_techs*-1 #Somewhat counterintuitively,to ElectricityMap positive means charging and negative means discharging\n },\n- 'source': 'https://data.ajenti.com.au/KIREIP/index.html'\n+ 'source': 'https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island' #Iframe: https://data.ajenti.com.au/KIREIP/index.html\n }\n \n if __name__ == '__main__':\n", "issue": "King Island: battery never seems to discharge \nI've been keeping an eye on AUS-TAS-KI since it was added to the map. Charging works fine, discharging doesn't show up.\n", "before_files": [{"content": "# Initial PR https://github.com/tmrowco/electricitymap-contrib/pull/2456\n# Discussion thread https://github.com/tmrowco/electricitymap-contrib/issues/636\n# A promotion webpage for King's Island energy production is here : https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island\n# As of 09/2020, it embeds with <iframe> the URI https://data.ajenti.com.au/KIREIP/index.html\n# About the data, the feed we get seems to be counters with a 2 seconds interval.\n# That means that if we fetch these counters every 15 minutes, we only are reading \"instantaneous\" metters that could differ from the total quantity of energies at play. 
To get the very exact data, we would need to have a parser running constanty to collect those 2-sec interval counters.\n\nimport asyncio\nimport json\nimport logging\nimport arrow\nfrom signalr import Connection\nfrom requests import Session\n\nclass SignalR:\n def __init__(self, url):\n self.url = url\n \n def update_res(self, msg):\n if (msg != {}):\n self.res = msg\n\n def get_value(self, hub, method):\n self.res = {}\n with Session() as session:\n #create a connection\n connection = Connection(self.url, session)\n chat = connection.register_hub(hub)\n chat.client.on(method, self.update_res)\n connection.start()\n connection.wait(3)\n connection.close()\n return self.res\n \ndef parse_payload(logger, payload):\n technologies_parsed = {}\n if not 'technologies' in payload:\n raise KeyError(\n f\"No 'technologies' in payload\\n\"\n f\"serie : {json.dumps(payload)}\"\n )\n else:\n logger.debug(f\"serie : {json.dumps(payload)}\")\n for technology in payload['technologies']:\n assert technology['unit'] == 'kW'\n # The upstream API gives us kW, we need MW\n technologies_parsed[technology['id']] = int(technology['value'])/1000\n logger.debug(f\"production : {json.dumps(technologies_parsed)}\")\n\n biodiesel_percent = payload['biodiesel']['percent']\n\n return technologies_parsed, biodiesel_percent\n\n# Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid\ndef format_storage_techs(technologies_parsed):\n storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']\n battery_production = storage_techs if storage_techs > 0 else 0\n battery_storage = storage_techs if storage_techs < 0 else 0\n\n return battery_production, battery_storage\n\ndef fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):\n\n if target_datetime is not None:\n raise NotImplementedError('The datasource currently implemented is only real time')\n \n payload = SignalR(\"https://data.ajenti.com.au/live/signalr\").get_value(\"TagHub\", \"Dashboard\")\n technologies_parsed, biodiesel_percent = parse_payload(logger, payload)\n battery_production, battery_storage = format_storage_techs(technologies_parsed)\n return {\n 'zoneKey': zone_key,\n 'datetime': arrow.now(tz='Australia/Currie').datetime,\n 'production': {\n 'battery discharge': battery_production,\n 'biomass': technologies_parsed['diesel']*biodiesel_percent/100,\n 'coal': 0,\n 'gas': 0,\n 'hydro': 0,\n 'nuclear': 0,\n 'oil': technologies_parsed['diesel']*(100-biodiesel_percent)/100,\n 'solar': technologies_parsed['solar'],\n 'wind': 0 if technologies_parsed['wind'] < 0 and technologies_parsed['wind'] > -0.1 else technologies_parsed['wind'], #If wind between 0 and -0.1 set to 0 to ignore self-consumption\n 'geothermal': 0,\n 'unknown': 0\n },\n 'storage': {\n 'battery': battery_storage*-1\n },\n 'source': 'https://data.ajenti.com.au/KIREIP/index.html'\n }\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/AUS_TAS_KI.py"}]}
| 1,726 | 540 |
gh_patches_debug_3443
|
rasdani/github-patches
|
git_diff
|
crytic__slither-1971
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Suggestion to make the recommendation in the `msgvalue-inside-a-loop` detector wiki clearer
### Describe the desired feature
Reference: https://github.com/crytic/slither/wiki/Detector-Documentation#msgvalue-inside-a-loop
This is the current recommendation for the `msgvalue-inside-a-loop` detector:
```solidity
Track msg.value through a local variable and decrease its amount on every iteration/usage.
```
This is a vague recommendation - it does not address the issue head-on, i.e., what mathematical technique the developer should use to remedy the bug.
My suggestions:
1. Recommend dividing by the number of `receivers`
2. Recommend providing an explicit array of amounts alongside the `receivers` array, and check that the sum of each element in that array matches `msg.value`
</issue>
<code>
[start of slither/detectors/statements/msg_value_in_loop.py]
1 from typing import List, Optional
2 from slither.core.cfg.node import NodeType, Node
3 from slither.detectors.abstract_detector import (
4 AbstractDetector,
5 DetectorClassification,
6 DETECTOR_INFO,
7 )
8 from slither.slithir.operations import InternalCall
9 from slither.core.declarations import SolidityVariableComposed, Contract
10 from slither.utils.output import Output
11
12
13 def detect_msg_value_in_loop(contract: Contract) -> List[Node]:
14 results: List[Node] = []
15 for f in contract.functions_entry_points:
16 if f.is_implemented and f.payable:
17 msg_value_in_loop(f.entry_point, 0, [], results)
18 return results
19
20
21 def msg_value_in_loop(
22 node: Optional[Node], in_loop_counter: int, visited: List[Node], results: List[Node]
23 ) -> None:
24
25 if node is None:
26 return
27
28 if node in visited:
29 return
30 # shared visited
31 visited.append(node)
32
33 if node.type == NodeType.STARTLOOP:
34 in_loop_counter += 1
35 elif node.type == NodeType.ENDLOOP:
36 in_loop_counter -= 1
37
38 for ir in node.all_slithir_operations():
39 if in_loop_counter > 0 and SolidityVariableComposed("msg.value") in ir.read:
40 results.append(ir.node)
41 if isinstance(ir, (InternalCall)):
42 msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)
43
44 for son in node.sons:
45 msg_value_in_loop(son, in_loop_counter, visited, results)
46
47
48 class MsgValueInLoop(AbstractDetector):
49 """
50 Detect the use of msg.value inside a loop
51 """
52
53 ARGUMENT = "msg-value-loop"
54 HELP = "msg.value inside a loop"
55 IMPACT = DetectorClassification.HIGH
56 CONFIDENCE = DetectorClassification.MEDIUM
57
58 WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation/#msgvalue-inside-a-loop"
59
60 WIKI_TITLE = "`msg.value` inside a loop"
61 WIKI_DESCRIPTION = "Detect the use of `msg.value` inside a loop."
62
63 # region wiki_exploit_scenario
64 WIKI_EXPLOIT_SCENARIO = """
65 ```solidity
66 contract MsgValueInLoop{
67
68 mapping (address => uint256) balances;
69
70 function bad(address[] memory receivers) public payable {
71 for (uint256 i=0; i < receivers.length; i++) {
72 balances[receivers[i]] += msg.value;
73 }
74 }
75
76 }
77 ```
78 """
79 # endregion wiki_exploit_scenario
80
81 WIKI_RECOMMENDATION = """
82 Track msg.value through a local variable and decrease its amount on every iteration/usage.
83 """
84
85 def _detect(self) -> List[Output]:
86 """"""
87 results: List[Output] = []
88 for c in self.compilation_unit.contracts_derived:
89 values = detect_msg_value_in_loop(c)
90 for node in values:
91 func = node.function
92
93 info: DETECTOR_INFO = [func, " use msg.value in a loop: ", node, "\n"]
94 res = self.generate_result(info)
95 results.append(res)
96
97 return results
98
[end of slither/detectors/statements/msg_value_in_loop.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/slither/detectors/statements/msg_value_in_loop.py b/slither/detectors/statements/msg_value_in_loop.py
--- a/slither/detectors/statements/msg_value_in_loop.py
+++ b/slither/detectors/statements/msg_value_in_loop.py
@@ -79,7 +79,7 @@
# endregion wiki_exploit_scenario
WIKI_RECOMMENDATION = """
-Track msg.value through a local variable and decrease its amount on every iteration/usage.
+Provide an explicit array of amounts alongside the receivers array, and check that the sum of all amounts matches `msg.value`.
"""
def _detect(self) -> List[Output]:
|
{"golden_diff": "diff --git a/slither/detectors/statements/msg_value_in_loop.py b/slither/detectors/statements/msg_value_in_loop.py\n--- a/slither/detectors/statements/msg_value_in_loop.py\n+++ b/slither/detectors/statements/msg_value_in_loop.py\n@@ -79,7 +79,7 @@\n # endregion wiki_exploit_scenario\n \n WIKI_RECOMMENDATION = \"\"\"\n-Track msg.value through a local variable and decrease its amount on every iteration/usage.\n+Provide an explicit array of amounts alongside the receivers array, and check that the sum of all amounts matches `msg.value`.\n \"\"\"\n \n def _detect(self) -> List[Output]:\n", "issue": "Suggestion to make the recommendation in the `msgvalue-inside-a-loop` detector wiki clearer\n### Describe the desired feature\n\nReference: https://github.com/crytic/slither/wiki/Detector-Documentation#msgvalue-inside-a-loop\r\n\r\nThis is the current recommendation for the `msgvalue-inside-a-loop` detector:\r\n\r\n```solidity\r\nTrack msg.value through a local variable and decrease its amount on every iteration/usage.\r\n```\r\n\r\nThis is a vague recommendation - it does not address the issue head-on, i.e., what mathematical technique the developer should use to remedy the bug.\r\n\r\nMy suggestions:\r\n\r\n1. Recommend dividing by the number of `receivers`\r\n2. Recommend providing an explicit array of amounts alongside the `receivers` array, and check that the sum of each element in that array matches `msg.value`\n", "before_files": [{"content": "from typing import List, Optional\nfrom slither.core.cfg.node import NodeType, Node\nfrom slither.detectors.abstract_detector import (\n AbstractDetector,\n DetectorClassification,\n DETECTOR_INFO,\n)\nfrom slither.slithir.operations import InternalCall\nfrom slither.core.declarations import SolidityVariableComposed, Contract\nfrom slither.utils.output import Output\n\n\ndef detect_msg_value_in_loop(contract: Contract) -> List[Node]:\n results: List[Node] = []\n for f in contract.functions_entry_points:\n if f.is_implemented and f.payable:\n msg_value_in_loop(f.entry_point, 0, [], results)\n return results\n\n\ndef msg_value_in_loop(\n node: Optional[Node], in_loop_counter: int, visited: List[Node], results: List[Node]\n) -> None:\n\n if node is None:\n return\n\n if node in visited:\n return\n # shared visited\n visited.append(node)\n\n if node.type == NodeType.STARTLOOP:\n in_loop_counter += 1\n elif node.type == NodeType.ENDLOOP:\n in_loop_counter -= 1\n\n for ir in node.all_slithir_operations():\n if in_loop_counter > 0 and SolidityVariableComposed(\"msg.value\") in ir.read:\n results.append(ir.node)\n if isinstance(ir, (InternalCall)):\n msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)\n\n for son in node.sons:\n msg_value_in_loop(son, in_loop_counter, visited, results)\n\n\nclass MsgValueInLoop(AbstractDetector):\n \"\"\"\n Detect the use of msg.value inside a loop\n \"\"\"\n\n ARGUMENT = \"msg-value-loop\"\n HELP = \"msg.value inside a loop\"\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation/#msgvalue-inside-a-loop\"\n\n WIKI_TITLE = \"`msg.value` inside a loop\"\n WIKI_DESCRIPTION = \"Detect the use of `msg.value` inside a loop.\"\n\n # region wiki_exploit_scenario\n WIKI_EXPLOIT_SCENARIO = \"\"\"\n```solidity\ncontract MsgValueInLoop{\n\n mapping (address => uint256) balances;\n\n function bad(address[] memory receivers) public payable {\n for (uint256 i=0; i < receivers.length; i++) {\n 
balances[receivers[i]] += msg.value;\n }\n }\n\n}\n```\n\"\"\"\n # endregion wiki_exploit_scenario\n\n WIKI_RECOMMENDATION = \"\"\"\nTrack msg.value through a local variable and decrease its amount on every iteration/usage.\n\"\"\"\n\n def _detect(self) -> List[Output]:\n \"\"\"\"\"\"\n results: List[Output] = []\n for c in self.compilation_unit.contracts_derived:\n values = detect_msg_value_in_loop(c)\n for node in values:\n func = node.function\n\n info: DETECTOR_INFO = [func, \" use msg.value in a loop: \", node, \"\\n\"]\n res = self.generate_result(info)\n results.append(res)\n\n return results\n", "path": "slither/detectors/statements/msg_value_in_loop.py"}]}
| 1,600 | 146 |